Antonino Ingargiola wrote much of the high level Model code and has provided
many bug fixes and improvements. Daniel B. Allan wrote much of the original
version of the high level Model code, and many improvements to the testing and
documentation. Austen Fox fixed many of the built-in model functions and
improved the testing and documentation of these. Michal Rawlik added plotting
capabilities for Models. The method used for placing bounds on parameters was
derived from the clear description in the MINUIT documentation, and adapted
from J. J. Helmus’s Python implementation in leastsqbounds.py. E. O. Le Bigot
wrote the uncertainties package, a version of which was used by lmfit for many
years, and is now an external dependency. The original AMPGO code came from
Andrea Gavana and was adopted for lmfit. The propagation of parameter
uncertainties to uncertainties in a Model was adapted from the excellent
description at
https://www.astro.rug.nl/software/kapteyn/kmpfittutorial.html#confidence-and-
prediction-intervals, which references the original work of: J. Wolberg, Data
Analysis Using the Method of Least Squares, 2006, Springer. Additional
patches, bug fixes, and suggestions have come from Faustin Carter, Christoph
Deil, Francois Boulogne, Thomas Caswell, Colin Brosseau, nmearl, Gustavo
Pasquevich, Clemens Prescher, LiCode, Ben Gamari, Yoav Roam, Alexander Stark,
Alexandre Beelen, Andrey Aristov, Nicholas Zobrist, Ethan Welty, Julius
Zimmermann, Mark Dean, Arun Persaud, Ray Osborn, @lneuhaus, Marcel Stimberg,
Yoshiera Huang, Leon Foks, Sebastian Weigand, Florian LB, Michael Hudson-
Doyle, Ruben Verweij, @jedzill4, @spalato, Jens Hedegaard Nielsen, Martin
Majli, Kristian Meyer, @azelcer, Ivan Usov, and many others. The lmfit code
obviously depends on, and owes a very large debt to the code in
scipy.optimize. Several discussions on the SciPy-user and lmfit mailing lists
have also led to improvements in this code. Other software: numpy (Harris et
al., 2020), pandas (pandas development team, 2020; Wes McKinney, 2010),
matplotlib (Hunter, 2007), sncosmo (Barbary et al., 2021), simsurvey (Feindt
et al., 2019a)
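The bounds method credited above works by mapping each bounded parameter onto an unbounded internal variable. A minimal sketch of that MINUIT-style transform follows; this is an illustration of the idea only, not lmfit's actual implementation (which also handles one-sided bounds):

```python
import math

def to_internal(val, vmin, vmax):
    # Map a bounded external value into an unbounded internal variable
    # (MINUIT-style transform): the optimizer can vary the internal
    # variable freely over the whole real line.
    return math.asin(2.0 * (val - vmin) / (vmax - vmin) - 1.0)

def to_external(x, vmin, vmax):
    # Inverse map: any real x yields a value strictly inside [vmin, vmax],
    # so bound constraints are enforced by construction.
    return vmin + (math.sin(x) + 1.0) * (vmax - vmin) / 2.0
```

Because `sin` is bounded, the external value can never leave its interval, which is exactly why this transform lets an unconstrained optimizer handle bounded fits.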
## Data Availability
The ZTF DR2 photometry is available on GitHub. The binning program is
available at https://github.com/JTerwel/late-time_lc_binner. snap is available
at https://github.com/JTerwel/SuperNova_Animation_Program. The ePESSTO+
photometry is available on the ESO archive. The late-time spectrum of SN
2020alm will be available upon request to the author.
## References
* Adelman-McCarthy et al. (2006) Adelman-McCarthy, J. K., Agüeros, M. A., Allam, S. S., et al. 2006, ApJS, 162, 38
* Ailawadhi et al. (2023) Ailawadhi, B., Dastidar, R., Misra, K., et al. 2023, MNRAS, 519, 248
* Aldering et al. (2006) Aldering, G., Antilogus, P., Bailey, S., et al. 2006, ApJ, 650, 510
* Andrews et al. (2019) Andrews, J. E., Sand, D. J., Valenti, S., et al. 2019, ApJ, 885, 43
* Astropy Collaboration et al. (2022) Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, ApJ, 935, 167
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33
* Barbary et al. (2021) Barbary, K., Bailey, S., Barentsen, G., et al. 2021, SNCosmo
* Becker (2015) Becker, A. 2015, HOTPANTS: High Order Transform of PSF ANd Template Subtraction, Astrophysics Source Code Library, record ascl:1504.004
* Bellm et al. (2019) Bellm, E. C., Kulkarni, S. R., Barlow, T., et al. 2019, PASP, 131, 068003
* Biswas et al. (2022) Biswas, R., Goobar, A., Dhawan, S., et al. 2022, MNRAS, 509, 5340
* Buzzoni et al. (1984) Buzzoni, B., Delabre, B., Dekker, H., et al. 1984, The Messenger, 38, 9
* Camacho-Neves et al. (2023) Camacho-Neves, Y., Jha, S. W., Barna, B., et al. 2023, arXiv e-prints, arXiv:2302.03105
* Cardelli et al. (1989) Cardelli, J. A., Clayton, G. C., & Mathis, J. S. 1989, ApJ, 345, 245
* Chandrasekhar (1931) Chandrasekhar, S. 1931, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, 11, 592
* Coughlin et al. (2023) Coughlin, M. W., Bloom, J. S., Nir, G., et al. 2023, ApJS, 267, 31
* De et al. (2019) De, K., Kasliwal, M. M., Polin, A., et al. 2019, ApJ, 873, L18
* Dekany et al. (2020) Dekany, R., Smith, R. M., Riddle, R., et al. 2020, PASP, 132, 038001
* Dey et al. (2019) Dey, A., Schlegel, D. J., Lang, D., et al. 2019, AJ, 157, 168
* Dilday et al. (2012) Dilday, B., Howell, D. A., Cenko, S. B., et al. 2012, Science, 337, 942
* Dimitriadis et al. (2017) Dimitriadis, G., Sullivan, M., Kerzendorf, W., et al. 2017, MNRAS, 468, 3798
* Dubay et al. (2022) Dubay, L. O., Tucker, M. A., Do, A., Shappee, B. J., & Anand, G. S. 2022, ApJ, 926, 98
* Duev et al. (2019) Duev, D. A., Mahabal, A., Masci, F. J., et al. 2019, MNRAS, 489, 3582
* Dutta et al. (2022) Dutta, A., Sahu, D. K., Anupama, G. C., et al. 2022, ApJ, 925, 217
* Feindt et al. (2019a) Feindt, U., Rigault, M., Brinnel, V., & Nordin, J. 2019a, ufeindt/simsurvey: 0.6.0
* Feindt et al. (2019b) Feindt, U., Nordin, J., Rigault, M., et al. 2019b, J. Cosmology Astropart. Phys., 2019, 005
* Filippenko et al. (1992) Filippenko, A. V., Richmond, M. W., Matheson, T., et al. 1992, ApJ, 384, L15
* Fink et al. (2010) Fink, M., Röpke, F. K., Hillebrandt, W., et al. 2010, A&A, 514, A53
* Fremling et al. (2020) Fremling, C., Miller, A. A., Sharma, Y., et al. 2020, ApJ, 895, 32
* Friesen et al. (2017) Friesen, B., Baron, E., Parrent, J. T., et al. 2017, MNRAS, 467, 2392
* Frohmaier et al. (2018) Frohmaier, C., Sullivan, M., Maguire, K., & Nugent, P. 2018, ApJ, 858, 50
* Frohmaier et al. (2019) Frohmaier, C., Sullivan, M., Nugent, P. E., et al. 2019, MNRAS, 486, 2308
* Graham et al. (2019a) Graham, M. J., Kulkarni, S. R., Bellm, E. C., et al. 2019a, PASP, 131, 078001
* Graham et al. (2022) Graham, M. L., Fremling, C., Perley, D. A., et al. 2022, MNRAS, 511, 241
* Graham et al. (2019b) Graham, M. L., Harris, C. E., Nugent, P. E., et al. 2019b, ApJ, 871, 62
* Graham et al. (2015) Graham, M. L., Nugent, P. E., Sullivan, M., et al. 2015, MNRAS, 454, 1948
* Graur et al. (2016) Graur, O., Zurek, D., Shara, M. M., et al. 2016, ApJ, 819, 31
* Gunn et al. (2006) Gunn, J. E., Siegmund, W. A., Mannery, E. J., et al. 2006, AJ, 131, 2332
* Guy et al. (2007) Guy, J., Astier, P., Baumont, S., et al. 2007, A&A, 466, 11
* Hammerstein et al. (2023) Hammerstein, E., van Velzen, S., Gezari, S., et al. 2023, ApJ, 942, 9
* Hamuy et al. (2003a) Hamuy, M., Phillips, M., Suntzeff, N., & Maza, J. 2003a, IAU Circ., 8151, 2
* Hamuy et al. (2003b) Hamuy, M., Phillips, M. M., Suntzeff, N. B., et al. 2003b, Nature, 424, 651
* Harris et al. (2018) Harris, C. E., Nugent, P. E., Horesh, A., et al. 2018, ApJ, 868, 21
* Harris et al. (2020) Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357
* Hill et al. (2006) Hill, J. M., Green, R. F., & Slagle, J. H. 2006, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 6267, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, ed. L. M. Stepp, 62670Y
* Holtzman et al. (2008) Holtzman, J. A., Marriner, J., Kessler, R., et al. 2008, AJ, 136, 2306
* Hunter (2007) Hunter, J. D. 2007, Computing in Science & Engineering, 9, 90
* Hviding et al. (2022) Hviding, R. E., Hainline, K. N., Rieke, M., et al. 2022, AJ, 163, 224
* Iben & Tutukov (1984) Iben, I., Jr. & Tutukov, A. V. 1984, ApJS, 54, 335
* Irani et al. (2022) Irani, I., Prentice, S. J., Schulze, S., et al. 2022, ApJ, 927, 10
* Ivezić et al. (2019) Ivezić, Ž., Kahn, S. M., Tyson, J. A., et al. 2019, ApJ, 873, 111
* Jordan et al. (2012) Jordan, George C., I., Perets, H. B., Fisher, R. T., & van Rossum, D. R. 2012, ApJ, 761, L23
* Kashi & Soker (2011) Kashi, A. & Soker, N. 2011, MNRAS, 417, 1466
* Kasliwal et al. (2019) Kasliwal, M. M., Cannella, C., Bagdasaryan, A., et al. 2019, PASP, 131, 038003
* Katsuda et al. (2015) Katsuda, S., Mori, K., Maeda, K., et al. 2015, ApJ, 808, 49
* Kawabata et al. (2018) Kawabata, M., Kawabata, K. S., Maeda, K., et al. 2018, PASJ, 70, 111
* Khokhlov (1991) Khokhlov, A. M. 1991, A&A, 245, L25
* Kilpatrick et al. (2021) Kilpatrick, C. D., Drout, M. R., Auchettl, K., et al. 2021, MNRAS, 504, 2073
* Kool et al. (2023) Kool, E. C., Johansson, J., Sollerman, J., et al. 2023, Nature, 617, 477
* Kromer et al. (2013) Kromer, M., Fink, M., Stanishev, V., et al. 2013, MNRAS, 429, 2287
* Law et al. (2009) Law, N. M., Kulkarni, S. R., Dekany, R. G., et al. 2009, PASP, 121, 1395
* Leloudas et al. (2015) Leloudas, G., Hsiao, E. Y., Johansson, J., et al. 2015, A&A, 574, A61
* Lintott et al. (2011) Lintott, C., Schawinski, K., Bamford, S., et al. 2011, MNRAS, 410, 166
* Livne & Arnett (1995) Livne, E. & Arnett, D. 1995, ApJ, 452, 62
* Masci et al. (2019) Masci, F. J., Laher, R. R., Rusholme, B., et al. 2019, PASP, 131, 018003
* Mazzali et al. (2007) Mazzali, P. A., Röpke, F. K., Benetti, S., & Hillebrandt, W. 2007, Science, 315, 825
* Mazzali et al. (2015) Mazzali, P. A., Sullivan, M., Filippenko, A. V., et al. 2015, MNRAS, 450, 2631
* Mazzali et al. (2014) Mazzali, P. A., Sullivan, M., Hachinger, S., et al. 2014, MNRAS, 439, 1959
* McCully et al. (2022) McCully, C., Jha, S. W., Scalzo, R. A., et al. 2022, ApJ, 925, 138
* Miller et al. (2020) Miller, A. A., Yao, Y., Bulla, M., et al. 2020, ApJ, 902, 47
* Nomoto (1982) Nomoto, K. 1982, ApJ, 253, 798
* Nomoto et al. (2005) Nomoto, K., Suzuki, T., Deng, J., Uenishi, T., & Hachisu, I. 2005, in Astronomical Society of the Pacific Conference Series, Vol. 342, 1604-2004: Supernovae as Cosmological Lighthouses, ed. M. Turatto, S. Benetti, L. Zampieri, & W. Shea, 105
* Nugent et al. (2011) Nugent, P. E., Sullivan, M., Cenko, S. B., et al. 2011, Nature, 480, 344
* Pakmor et al. (2010) Pakmor, R., Kromer, M., Röpke, F. K., et al. 2010, Nature, 463, 61
* Pakmor et al. (2012) Pakmor, R., Kromer, M., Taubenberger, S., et al. 2012, ApJ, 747, L10
* pandas development team (2020) The pandas development team. 2020, pandas-dev/pandas: Pandas
* Parrent et al. (2012) Parrent, J. T., Howell, D. A., Friesen, B., et al. 2012, ApJ, 752, L26
* Patat (2005) Patat, F. 2005, MNRAS, 357, 1161
* Patat et al. (2013) Patat, F., Cordiner, M. A., Cox, N. L. J., et al. 2013, A&A, 549, A62
* Patnaude et al. (2012) Patnaude, D. J., Badenes, C., Park, S., & Laming, J. M. 2012, ApJ, 756, 6
* Pereira et al. (2013) Pereira, R., Thomas, R. C., Aldering, G., et al. 2013, A&A, 554, A27
* Perley et al. (2020) Perley, D. A., Fremling, C., Sollerman, J., et al. 2020, ApJ, 904, 35
* Phillips (1993) Phillips, M. M. 1993, ApJ, 413, L105
* Phillips et al. (1999) Phillips, M. M., Lira, P., Suntzeff, N. B., et al. 1999, AJ, 118, 1766
* Planck Collaboration et al. (2020) Planck Collaboration, Aghanim, N., Akrami, Y., et al. 2020, A&A, 641, A6
* Prajs et al. (2017) Prajs, S., Sullivan, M., Smith, M., et al. 2017, MNRAS, 464, 3568
* Raskin & Kasen (2013) Raskin, C. & Kasen, D. 2013, ApJ, 772, 1
* Rau et al. (2009) Rau, A., Kulkarni, S. R., Law, N. M., et al. 2009, PASP, 121, 1334
* Ravi et al. (2023) Ravi, A. P., Rho, J., Park, S., et al. 2023, ApJ, 950, 14
* Reusch (2020) Reusch, S. 2020, ztffps
* Rigault (2018) Rigault, M. 2018, ztfquery, a python tool to access ZTF data
* Rosswog et al. (2009) Rosswog, S., Kasen, D., Guillochon, J., & Ramirez-Ruiz, E. 2009, ApJ, 705, L128
* Ruiz-Lapuente et al. (1992) Ruiz-Lapuente, P., Cappellaro, E., Turatto, M., et al. 1992, ApJ, 387, L33
* Schlegel et al. (1998) Schlegel, D. J., Finkbeiner, D. P., & Davis, M. 1998, ApJ, 500, 525
* Shappee & Stanek (2011) Shappee, B. J. & Stanek, K. Z. 2011, ApJ, 733, 124
* Shappee et al. (2017) Shappee, B. J., Stanek, K. Z., Kochanek, C. S., & Garnavich, P. M. 2017, ApJ, 841, 48
* Sharma et al. (2023) Sharma, Y., Sollerman, J., Fremling, C., et al. 2023, ApJ, 948, 52
* Shen & Bildsten (2009) Shen, K. J. & Bildsten, L. 2009, ApJ, 699, 1365
* Silverman et al. (2013) Silverman, J. M., Nugent, P. E., Gal-Yam, A., et al. 2013, ApJS, 207, 3
* Smartt et al. (2015) Smartt, S. J., Valenti, S., Fraser, M., et al. 2015, A&A, 579, A40
* Smee et al. (2013) Smee, S. A., Gunn, J. E., Uomoto, A., et al. 2013, AJ, 146, 32
* Smith et al. (2015) Smith, N., Mauerhan, J. C., Cenko, S. B., et al. 2015, MNRAS, 449, 1876
* Stahl et al. (2020) Stahl, B. E., Zheng, W., de Jaeger, T., et al. 2020, MNRAS, 492, 4325
* Stanishev et al. (2018) Stanishev, V., Goobar, A., Amanullah, R., et al. 2018, A&A, 615, A45
* Strotjohann et al. (2021) Strotjohann, N. L., Ofek, E. O., Gal-Yam, A., et al. 2021, ApJ, 907, 99
* Taam (1980) Taam, R. E. 1980, ApJ, 242, 749
* Taubenberger et al. (2015) Taubenberger, S., Elias-Rosa, N., Kerzendorf, W. E., et al. 2015, MNRAS, 448, L48
* Tonry et al. (2018) Tonry, J. L., Denneau, L., Heinze, A. N., et al. 2018, PASP, 130, 064505
* van der Walt et al. (2019) van der Walt, S. J., Crellin-Quick, A., & Bloom, J. S. 2019, Journal of Open Source Software, 4
* van Velzen et al. (2021) van Velzen, S., Gezari, S., Hammerstein, E., et al. 2021, ApJ, 908, 4
* Webbink (1984) Webbink, R. F. 1984, ApJ, 277, 355
* Wes McKinney (2010) Wes McKinney. 2010, in Proceedings of the 9th Python in Science Conference, ed. Stéfan van der Walt & Jarrod Millman, 56 – 61
* Whelan & Iben (1973) Whelan, J. & Iben, I., Jr. 1973, ApJ, 186, 1007
* Wood-Vasey et al. (2004) Wood-Vasey, W. M., Wang, L., & Aldering, G. 2004, ApJ, 616, 339
* Wright et al. (2010) Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, AJ, 140, 1868
* Yao et al. (2019) Yao, Y., Miller, A. A., Kulkarni, S. R., et al. 2019, ApJ, 886, 152
* Yaron & Gal-Yam (2012) Yaron, O. & Gal-Yam, A. 2012, PASP, 124, 668
* York et al. (2000) York, D. G., Adelman, J., Anderson, J. E., Jr., et al. 2000, AJ, 120, 1579
* Zackay et al. (2016) Zackay, B., Ofek, E. O., & Gal-Yam, A. 2016, ApJ, 830, 27
* Zhang et al. (2016) Zhang, K., Wang, X., Zhang, J., et al. 2016, ApJ, 820, 67
## Appendix A Tables
Table 7: Spectra used to make the SN 2011fe model. All spectra were taken from WISeREP (Yaron & Gal-Yam 2012).
MJD | Phase (d) | Telescope | Instrument | Wavelength coverage (Å) | Reference
---|---|---|---|---|---
55798.0 | $-$16.0 | Lijiang-2.4m | YFOSC | 3461 – 8956 | Zhang et al. (2016)
55798.2 | $-$15.8 | Lick-3m | KAST | 3416 – 10278 | Nugent et al. (2011)
55799.0 | $-$15.0 | Lijiang-2.4m | YFOSC | 3502 – 8958 | Zhang et al. (2016)
55799.3 | $-$14.7 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55800.2 | $-$13.8 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55801.2 | $-$12.8 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55802.3 | $-$11.7 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55803.2 | $-$10.8 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55804.2 | $-$9.8 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55805.2 | $-$8.8 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55806.2 | $-$7.8 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55807.3 | $-$6.7 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55808.2 | $-$5.8 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55809.2 | $-$4.8 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55811.4 | $-$2.6 | HST | STIS | 1779 – 24965 | Mazzali et al. (2014)
55812.0 | $-$2.0 | Gemini-N | GMOS | 3497 – 9648 | Parrent et al. (2012)
55813.2 | $-$0.8 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55814.2 | 0.2 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55815.2 | 1.2 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55816.2 | 2.2 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55817.2 | 3.2 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55817.7 | 3.7 | HST | STIS | 1265 – 24965 | Mazzali et al. (2014)
55818.2 | 4.2 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55821.2 | 7.2 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55823.2 | 9.2 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55826.2 | 12.2 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55828.2 | 14.2 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55829.0 | 15.0 | Gemini-N | GMOS | 3497 – 9643 | Parrent et al. (2012)
55830.2 | 16.2 | Keck1 | LRIS | 3227 – 10242 | Stahl et al. (2020)
55831.2 | 17.2 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55832.0 | 18.0 | Lijiang-2.4m | YFOSC | 3577 – 8957 | Zhang et al. (2016)
55833.2 | 19.2 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55835.3 | 21.3 | HST | STIS | 1731 – 10221 | Mazzali et al. (2014)
55836.2 | 22.2 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55838.2 | 24.2 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55841.3 | 27.3 | HST | STIS | 1738 – 10221 | Mazzali et al. (2014)
55855.2 | 41.2 | HST | STIS | 1738 – 10216 | Mazzali et al. (2014)
55888.6 | 74.7 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55891.7 | 77.7 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55893.6 | 79.7 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55896.6 | 82.6 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55897.7 | 83.7 | Keck1 | LRIS | 3164 – 10126 | Stahl et al. (2020)
55901.6 | 87.6 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55903.6 | 89.6 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55911.0 | 97.0 | XLT | BFOSC | 3296 – 9693 | Zhang et al. (2016)
55911.6 | 97.6 | UH88 | SNIFS | 3296 – 9693 | Pereira et al. (2013)
55913.5 | 99.5 | Lick-3m | KAST | 3427 – 10332 | Stahl et al. (2020)
55914.0 | 100.0 | WHT-4.2m | ISIS | 3499 – 9491 | Friesen et al. (2017)
55916.0 | 102.0 | WHT-4.2m | ISIS | 3498 – 9491 | Law et al. (2009); Rau et al. (2009)
55917.0 | 103.0 | WHT-4.2m | ISIS | 3499 – 9492 | Law et al. (2009); Rau et al. (2009)
55926.0 | 112.0 | Lijiang-2.4m | YFOSC | 3366 – 9069 | Zhang et al. (2016)
55929.5 | 115.5 | Lick-3m | KAST | 3426 – 10170 | Stahl et al. (2020)
55944.5 | 130.5 | Lick-3m | KAST | 3453 – 10088 | Stahl et al. (2020)
55959.0 | 145.0 | Lick-3m | KAST | 3497 – 10000 | Law et al. (2009); Rau et al. (2009)
55980.4 | 166.4 | Lick-3m | KAST | 3441 – 10250 | Stahl et al. (2020)
55988.0 | 174.0 | WHT-4.2m | ISIS | 3495 – 9982 | Mazzali et al. (2015)
56019.4 | 205.4 | Lick-3m | KAST | 3438 – 10324 | Mazzali et al. (2015)
56040.4 | 226.4 | Lick-3m | KAST | 3437 – 10178 | Mazzali et al. (2015)
56047.0 | 233.0 | Lijiang-2.4m | YFOSC | 3392 – 9053 | Zhang et al. (2016)
56073.0 | 259.0 | WHT-4.2m | ISIS | 3495 – 9483 | Mazzali et al. (2015)
56103.0 | 289.0 | WHT-4.2m | ISIS | 3423 – 10268 | Mazzali et al. (2015)
56127.0 | 313.0 | P200 | DBSP | 3197 – 10991 | Law et al. (2009); Rau et al. (2009)
56162.2 | 348.2 | Lick-3m | KAST | 3487 – 10240 | Mazzali et al. (2015)
56194.2 | 380.2 | Keck1 | LRIS | 3232 – 10268 | Stahl et al. (2020)
56277.0 | 463.0 | Lijiang-2.4m | YFOSC | 3379 – 9337 | Zhang et al. (2016)
56778.5 | 964.5 | Keck1 | LRIS | 3074 – 10320 | Graham et al. (2015)
56831.2 | 1017.2 | LBT | MODS1 | 3098 – 10487 | Taubenberger et al. (2015)
Table 8: Redshift values at which 50 per cent of the simulated SNe were found to have CSM interaction. Strength is the strength of the H $\alpha$ line relative to that detected in SN 2015cp. Start is the number of days after peak at which the interaction begins; duration is also given in days. We fitted sigmoid functions to the results of each simulation to find the redshift at which 50 per cent of the interactions were recovered, assuming reference images of the same depth as those used in ZTF, or 0.5 or 1 mag deeper. These values are given as $z_{50}\pm\sigma_{z_{50}}$, and $\chi^{2}_{\text{red}}$ indicates the quality of the fit.
 | | | | ZTF references | | 0.5 mag deeper | | 1 mag deeper
---|---|---|---|---|---|---|---|---
strength | start | duration | | $z_{50}$ | $\sigma_{z_{50}}$ | $\chi^{2}_{\text{red}}$ | | $z_{50}$ | $\sigma_{z_{50}}$ | $\chi^{2}_{\text{red}}$ | | $z_{50}$ | $\sigma_{z_{50}}$ | $\chi^{2}_{\text{red}}$
0.0 | – | – | | 0.0038 | 0.0009 | 0.72 | | 0.0042 | 0.0019 | 1.83 | | 0.0046 | 0.0032 | 3.47
0.1 | 100 | 100 | | 0.0041 | 0.0008 | 0.83 | | 0.0049 | 0.0011 | 1.82 | | 0.0049 | 0.0033 | 3.41
0.1 | 100 | 300 | | 0.0050 | 0.0007 | 0.79 | | 0.0055 | 0.0014 | 1.79 | | 0.0059 | 0.0025 | 3.42
0.1 | 100 | 500 | | 0.0053 | 0.0006 | 0.77 | | 0.0062 | 0.0011 | 1.76 | | 0.0068 | 0.0018 | 3.44
0.1 | 200 | 100 | | 0.0045 | 0.0007 | 0.78 | | 0.0050 | 0.0016 | 1.80 | | 0.0054 | 0.0028 | 3.38
0.1 | 200 | 300 | | 0.0053 | 0.0006 | 0.78 | | 0.0058 | 0.0014 | 1.79 | | 0.0063 | 0.0022 | 3.44
0.1 | 200 | 500 | | 0.0056 | 0.0006 | 0.77 | | 0.0066 | 0.0010 | 1.79 | | 0.0073 | 0.0016 | 3.47
0.1 | 300 | 100 | | 0.0050 | 0.0006 | 0.80 | | 0.0056 | 0.0012 | 1.80 | | 0.0059 | 0.0023 | 3.43
0.1 | 300 | 300 | | 0.0052 | 0.0006 | 0.80 | | 0.0061 | 0.0044 | 1.78 | | 0.0067 | 0.0017 | 3.46
0.1 | 300 | 500 | | 0.0053 | 0.0005 | 0.82 | | 0.0060 | 0.0010 | 2.03 | | 0.0069 | 0.0016 | 3.53
0.1 | 500 | 100 | | 0.0045 | 0.0007 | 0.82 | | 0.0048 | 0.0014 | 2.08 | | 0.0054 | 0.0021 | 3.52
0.1 | 500 | 300 | | 0.0050 | 0.0005 | 0.83 | | 0.0056 | 0.0010 | 2.09 | | 0.0062 | 0.0017 | 3.53
0.1 | 500 | 500 | | 0.0050 | 0.0005 | 0.82 | | 0.0057 | 0.0009 | 2.09 | | 0.0063 | 0.0017 | 3.52
1.0 | 100 | 100 | | 0.0041 | 0.0007 | 0.86 | | 0.0045 | 0.0013 | 2.10 | | 0.0046 | 0.0027 | 3.58
1.0 | 100 | 300 | | 0.0091 | 0.0005 | 0.85 | | 0.0096 | 0.0012 | 1.98 | | 0.0108 | 0.0017 | 3.40
1.0 | 100 | 500 | | 0.0111 | 0.0004 | 0.84 | | 0.0129 | 0.0008 | 2.08 | | 0.0139 | 0.0013 | 3.50
1.0 | 200 | 100 | | 0.0067 | 0.0007 | 0.80 | | 0.0077 | 0.0012 | 2.05 | | 0.0083 | 0.0021 | 3.51
1.0 | 200 | 300 | | 0.0104 | 0.0005 | 0.93 | | 0.0122 | 0.0008 | 2.06 | | 0.0130 | 0.0014 | 3.31
1.0 | 200 | 500 | | 0.0116 | 0.0004 | 0.84 | | 0.0129 | 0.0008 | 2.11 | | 0.0140 | 0.0018 | 3.63
1.0 | 300 | 100 | | 0.0086 | 0.0004 | 0.78 | | 0.0094 | 0.0008 | 1.95 | | 0.0100 | 0.0013 | 3.62
1.0 | 300 | 300 | | 0.0105 | 0.0004 | 0.75 | | 0.0118 | 0.0007 | 2.02 | | 0.0128 | 0.0015 | 3.61
1.0 | 300 | 500 | | 0.0109 | 0.0004 | 0.78 | | 0.0125 | 0.0008 | 2.11 | | 0.0139 | 0.0013 | 3.76
1.0 | 500 | 100 | | 0.0081 | 0.0003 | 0.83 | | 0.0090 | 0.0008 | 1.94 | | 0.0094 | 0.0016 | 3.69
1.0 | 500 | 300 | | 0.0104 | 0.0003 | 0.78 | | 0.0121 | 0.0007 | 2.10 | | 0.0133 | 0.0012 | 3.79
1.0 | 500 | 500 | | 0.0105 | 0.0003 | 0.80 | | 0.0122 | 0.0007 | 2.11 | | 0.0135 | 0.0012 | 3.80
10.0 | 100 | 100 | | 0.0091 | 0.0014 | 0.93 | | 0.0097 | 0.0028 | 2.13 | | 0.0087 | 0.0058 | 3.45
10.0 | 100 | 300 | | 0.0259 | 0.0007 | 1.36 | | 0.0299 | 0.0009 | 1.79 | | 0.0328 | 0.0012 | 1.83
10.0 | 100 | 500 | | 0.0297 | 0.0005 | 1.21 | | 0.0347 | 0.0007 | 1.83 | | 0.0383 | 0.0010 | 1.84
10.0 | 200 | 100 | | 0.0204 | 0.0009 | 1.69 | | 0.0223 | 0.0012 | 2.40 | | 0.0217 | 0.0019 | 2.54
10.0 | 200 | 300 | | 0.0314 | 0.0004 | 1.45 | | 0.0351 | 0.0007 | 1.98 | | 0.0377 | 0.0010 | 2.40
10.0 | 200 | 500 | | 0.0326 | 0.0005 | 1.50 | | 0.0374 | 0.0007 | 2.05 | | 0.0415 | 0.0009 | 1.95
10.0 | 300 | 100 | | 0.0221 | 0.0023 | 1.21 | | 0.0233 | 0.0011 | 2.01 | | 0.0231 | 0.0019 | 2.71
10.0 | 300 | 300 | | 0.0311 | 0.0004 | 1.30 | | 0.0350 | 0.0007 | 1.90 | | 0.0375 | 0.0010 | 2.27
10.0 | 300 | 500 | | 0.0328 | 0.0004 | 1.42 | | 0.0381 | 0.0007 | 2.22 | | 0.0423 | 0.0009 | 2.21
10.0 | 500 | 100 | | 0.0199 | 0.0011 | 1.80 | | 0.0224 | 0.0014 | 2.47 | | 0.0226 | 0.0019 | 3.04
10.0 | 500 | 300 | | 0.0323 | 0.0004 | 1.23 | | 0.0371 | 0.0007 | 2.12 | | 0.0406 | 0.0009 | 2.19
10.0 | 500 | 500 | | 0.0323 | 0.0004 | 1.22 | | 0.0373 | 0.0007 | 2.12 | | 0.0407 | 0.0009 | 2.12
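The $z_{50}$ values in Table 8 come from fitting sigmoid functions to the recovered fraction of interactions as a function of redshift. A hedged sketch of that procedure on synthetic data follows; the brute-force grid search and the exact sigmoid form are illustrative assumptions, not the paper's code:

```python
import numpy as np

def fit_z50(z, frac, widths=np.linspace(0.0005, 0.01, 40)):
    """Grid-search fit of a falling sigmoid f(z) = 1 / (1 + exp((z - z50)/w))
    to the recovered fraction; returns the best-fitting z50."""
    z50_grid = np.linspace(z.min(), z.max(), 200)
    best, best_err = None, np.inf
    for w in widths:
        # model for every trial z50 at once: shape (len(z50_grid), len(z))
        model = 1.0 / (1.0 + np.exp((z[None, :] - z50_grid[:, None]) / w))
        err = ((model - frac[None, :]) ** 2).sum(axis=1)
        i = err.argmin()
        if err[i] < best_err:
            best, best_err = z50_grid[i], err[i]
    return best

# synthetic recovery curve with a known 50 per cent point at z = 0.005
z = np.linspace(0.001, 0.01, 50)
frac = 1.0 / (1.0 + np.exp((z - 0.005) / 0.001))
```

On the synthetic curve above, the recovered $z_{50}$ lands on the injected value to within the grid resolution.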
## Appendix B Inputs to simsurvey simulations
The specific inputs to simsurvey used in Section 3.4.2 to determine the
detection efficiencies for SN 2015cp-like interaction are listed here.
* Model: SN 2011fe + H $\alpha$ line (Section 3.4.1).
* Sky distribution: RA $\in$ [$0^{\circ}$, $360^{\circ}$], Dec. $\geq-30^{\circ}$ (the area covered by ZTF; Bellm et al. 2019).
* Volumetric rate: The SN Ia rate is $2.4\times 10^{-5}$ Mpc$^{-3}$ yr$^{-1}$ for $z\leq 0.1$ (Frohmaier et al. 2019). simsurvey uses this to calculate the number of SNe to generate in a given redshift interval.
* SN peak time distribution: $58\,195\leq$ modified Julian date (MJD) $\leq 58\,487$ (between 18 March 2018 and 4 January 2019).
* Galactic extinction: dust maps from Schlegel et al. (1998).
* Host galaxy extinction: Cardelli et al. (1989) extinction law, with E(B – V) drawn from an exponential distribution with exponent $\lambda=0.11$ (Stanishev et al. 2018), the same way as host extinction was added in the original simsurvey paper (Feindt et al. 2019b).
* Telescope specifications: ZTF P48 camera, 4$\times$4 grid of CCDs with four readout channels each, resulting in 64 separate output channels (Dekany et al. 2020).
* Survey plan: ZTF observation logs between $58\,197\leq$ MJD $\leq 59\,211$ (between 20 March 2018 and 28 December 2020), ensuring all simulated SNe are followed for a minimum of about two years after peak.
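The volumetric-rate input above sets how many SNe simsurvey generates. A rough back-of-the-envelope version of that count is sketched below; the Euclidean low-redshift volume ($D \approx cz/H_0$), $H_0 = 70$, the $\approx$0.8 yr peak-time window, and the 75 per cent sky fraction corresponding to Dec $\geq-30^{\circ}$ are simplifying assumptions of this sketch, not simsurvey's actual cosmology-aware calculation, and the function name is hypothetical:

```python
import math

def expected_sne(rate=2.4e-5, z_max=0.1, H0=70.0, years=0.8,
                 sky_fraction=0.75):
    """Rough expected number of SNe Ia: the volumetric rate [Mpc^-3 yr^-1]
    times the comoving volume out to z_max (Euclidean approximation,
    D ~ c*z/H0) times the survey duration and footprint fraction."""
    c = 2.998e5                      # speed of light, km/s
    d_max = c * z_max / H0           # maximum distance, Mpc
    volume = 4.0 / 3.0 * math.pi * d_max ** 3
    return rate * volume * years * sky_fraction
```

With these numbers the estimate comes out at a few thousand SNe Ia over the simulated peak-time window, which is the right order of magnitude for a low-redshift, wide-field survey simulation.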
## Appendix C Late-time spectrum of SN 2020alm
After confirming that the late-time detections in SN 2020alm were still ongoing,
we obtained a spectrum using OSIRIS on the GTC on 26 July 2023, 1277 days
after the estimated peak date of the SN. As the observed spectrum was heavily
dominated by the host galaxy, we subtracted a host galaxy spectrum taken by
SDSS in 2003 to remove the host contamination. This was done after re-sampling
the host spectrum to have the same wavelength spacing as the new spectrum.
This left only the spectrum causing the late-time photometry detections, which
is shown in Fig. 14. The subtraction was confirmed to be successful by
checking that the Mg I $\lambda 5175$ and Na I D absorption lines were reduced to the noise level, as these lines are expected to originate purely from the host galaxy rather than from the late-time signal. Some of the host galaxy
emission lines were not completely subtracted during this process, most
noticeably [N II] ${\lambda 6583}$ and [S II] ${\lambda\lambda 6716,6730}$,
but our resolution is inadequate to draw any conclusions from this.
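The resample-and-subtract step described above can be sketched as follows; linear interpolation is an illustrative assumption here, and a flux-conserving resampler may be preferable in practice:

```python
import numpy as np

def subtract_host(wl_obs, flux_obs, wl_host, flux_host):
    """Resample the archival host spectrum onto the wavelength grid of the
    new spectrum, then subtract it to isolate the late-time signal."""
    host_resampled = np.interp(wl_obs, wl_host, flux_host)
    return flux_obs - host_resampled
```

The residual spectrum returned here is what is checked against the Mg I and Na I D lines: if the subtraction is good, those purely host features drop to the noise level.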
The only explanation for the late-time detections that uses a second transient
at the same location is a TDE. Hammerstein et al. (2023) show that the intrinsic spectrum of a TDE is flat in this wavelength range, sometimes with narrow emission lines. Therefore, we model a TDE spectrum as a line of constant flux density, and add Milky Way extinction (using the SFD98 dust maps
in the direction of the object; Schlegel et al. 1998) and variable host
extinction in an attempt to obtain the general shape of the observed spectrum.
We find that $0.6 \leq E(B-V)_{\text{host}} \leq 1$ mag reproduces the general spectral shape, suggesting that a TDE with approximately constant colour and moderate host extinction can explain the observed spectral excess for this event.
Figure 14: Spectrum of the late-time signal in SN 2020alm in its rest frame.
The top panel shows the late-time spectrum obtained on 26 July 2023 using
OSIRIS on the GTC, and the SDSS spectrum obtained in 2003. The bottom panel
shows the late-time excess, obtained by subtracting the SDSS host galaxy
spectrum from the observed late-time spectrum. A smoothed spectrum is shown in
blue; the smoothing used a rolling mean with a kernel of size 5. The red lines show a simple TDE model with Milky Way extinction and varying amounts of host galaxy extinction applied (as indicated in the legend), to approximate the shape of the observed spectrum. Narrow
emission and absorption lines that were notable in the unsubtracted spectrum
are marked with dashed lines. The grey regions are affected by sky lines, and
should be ignored.
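The rolling-kernel smoothing described in the caption amounts to a moving average. A minimal sketch, assuming the flat kernel of size 5 stated there (an illustration, not the paper's code):

```python
import numpy as np

def rolling_mean(flux, size=5):
    """Smooth a spectrum with a flat rolling kernel (size 5 in Fig. 14).
    mode="same" keeps the output on the input wavelength grid, at the
    cost of partial-kernel averages at the two edges."""
    kernel = np.ones(size) / size
    return np.convolve(flux, kernel, mode="same")
```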
## Appendix D Binned SMP light curves of the final candidates
Figure 15: Similar to Fig. 11, but showing the binned light curves generated
using scene modelling photometry instead of forced photometry. As no bins $\geq 5\sigma$ were recovered for SN 2018grt, it is not shown here. However, both SN 2019ldf and SN 2020tfc still have robust detections in some bands. The best-fitting alternate transients, shown as dotted lines, are the same as in Fig. 11. The i-band light curve of SN 2020tfc is not shown, as the background was not completely subtracted, resulting in a significant flux offset.
# Generalized Fresnel-Floquet equations for driven quantum materials
Marios H. Michael, Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA
Michael Först, Max Planck Institute for the Structure and Dynamics of Matter, Luruper Chaussee 149, 22761 Hamburg, Germany
Daniele Nicoletti, Max Planck Institute for the Structure and Dynamics of Matter, Luruper Chaussee 149, 22761 Hamburg, Germany
Sheikh Rubaiat Ul Haque, Department of Physics, University of California San Diego, La Jolla, California 92093, USA
Andrea Cavalleri, Max Planck Institute for the Structure and Dynamics of Matter, Luruper Chaussee 149, 22761 Hamburg, Germany; Department of Physics, University of Oxford, Clarendon Laboratory, Parks Road, Oxford OX1 3PU, UK
Richard D. Averitt, Department of Physics, University of California San Diego, La Jolla, California 92093, USA
Daniel Podolsky, Department of Physics, Technion, 32000 Haifa, Israel
Eugene Demler, Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA; Institute for Theoretical Physics, ETH Zurich, 8093 Zurich, Switzerland
###### Abstract
Optical drives at terahertz and mid-infrared frequencies in quantum materials
are increasingly used to reveal the nonlinear dynamics of collective modes in
correlated many-body systems and their interplay with electromagnetic waves.
Recent experiments demonstrated several surprising optical properties of
transient states induced by driving, including the appearance of photo-induced
edges in the reflectivity in cuprate superconductors, observed both below and
above the equilibrium transition temperature. Furthermore, other driven materials exhibit reflection coefficients larger than unity. In
this paper we demonstrate that unusual optical properties of photoexcited
systems can be understood from the perspective of a Floquet system: a system whose parameters are periodically modulated by pump-induced oscillations of a collective mode. We present a general phenomenological model
of reflectivity from Floquet materials, which takes into account parametric
generation of excitation pairs. We find a universal phase diagram of drive
induced features in reflectivity which evidence a competition between driving
and dissipation. To illustrate our general analysis we apply our formalism to
two concrete examples motivated by recent experiments: a single plasmon band,
which describes Josephson plasmons in layered superconductors, and a phonon-
polariton system, which describes upper and lower polaritons in materials such
as insulating SiC. Finally, we demonstrate that our model provides an accurate fit to the results of phonon-pump, terahertz-probe experiments in the high-temperature superconductor
$\rm{YBa_{2}Cu_{3}O_{6.5}}$. Our model explains the appearance of a pump-
induced edge, which is higher in energy than the equilibrium Josephson plasmon
edge, even if the interlayer Josephson coupling is suppressed by the pump
pulse.
## I Introduction and overview
### I.1 Motivation
Nonequilibrium dynamics in quantum materials is a rapidly developing area of
research that lies at the interface between nonlinear optics and quantum many-
body physics[1, 2]. Indeed, a panoply of experimental results highlight the
ability of ultrafast optical techniques to interrogate and manipulate emergent
states in quantum materials. This includes photo-augmented superconductivity
[3, 4, 5, 6], unveiling hidden states in materials proximal to the boundary of
an insulator-to-metal transition [7], and manipulating topological states [8,
9, 10]. The terahertz to mid-infrared spectral range is especially important
as numerous phenomena in quantum materials manifest at these energies,
including phonons, magnons, plasmons, and correlation gaps[11]. Access to this
spectral range enables preferential pump excitation of specific degrees of
freedom and probing of the resultant dynamics that are encoded in the
dielectric response (and hence the reflectivity or transmission). Therefore, a
particular challenge is to decode the optical reflectivity dynamics, a task
that typically requires developing models that can be related to the
underlying microscopic states. In short, it is crucial to develop a consistent framework
for interpreting experimental results to aid in identifying emergent universal
properties of driven states and to take full advantage of the plethora of
“properties-on-demand” exhibited by quantum materials [12].
So far, the predominant paradigm of understanding pump and probe experiments
has been based on the perspective of a dynamic trajectory in a complex energy
landscape, where “snapshots” track the evolution of the slowly evolving but
quasi-stationary many-body states [13]. Within this approach temporal
evolution of spectroscopic features is interpreted using the conventional
equilibrium formalism, and measured parameters serve as a fingerprint of the
underlying instantaneous state. In particular, this approach has been applied
to analyze c-axis terahertz reflectivity of the cuprate superconductors. In
equilibrium, the Josephson plasma (JP) edge appears only below $T_{c}$,
indicating coherent c-axis Cooper-pair tunneling. Interband or phononic
excitation along the c-axis in several distinct cuprates (including
$\rm La_{1.675}Eu_{0.2}Sr_{0.125}CuO_{4}$, $\rm La_{2-x}Ba_{x}CuO_{4}$, and
$\rm YBa_{2}Cu_{3}O_{6+\delta}$) resulted in the appearance of edge-like
features in the terahertz c-axis reflectivity at temperatures above the
equilibrium $T_{c}$ [3, 4, 5, 14]. These experiments were interpreted as
providing spectroscopic evidence for light-induced modification of interlayer
Josephson coupling. The
central goal of this paper is to develop an alternative framework for
interpreting optical responses of photoexcited materials. Our model focuses on
features unique to nonequilibrium systems, in particular to photoexcited
collective excitations which provide parametric driving of many-body systems.
While we do not argue that this mechanism explains all experimental results on
pump induced changes in reflectivity, we believe that this scenario is
sufficiently ubiquitous to merit detailed consideration. We provide universal
expressions for driving induced changes in the reflectivity, which can be
compared to experimental data, in order to examine the relevance of the
parametric driving scenario to any specific experiment.
Before proceeding to discuss details of our model, it is worth reviewing
several experiments that have already revealed pump-induced dynamics that
cannot be interpreted from the perspective of equilibrium systems.
Particularly striking are recent observations of light amplification in the
photoexcited insulator SiC and the superconductor $\rm K_{3}C_{60}$ above its
equilibrium $T_{c}$ [15, 16, 17]. Furthermore, in the case of pumped
$\rm YBa_{2}Cu_{3}O_{6+\delta}$ discussed
above, strong experimental evidence has accumulated indicating that an
effective photo-induced edge arises from parametric amplification of Josephson
plasmons rather than a modification of the effective coupling [18, 19] (see
discussion in Section IV of this paper). Prior work also demonstrated higher-
harmonic generation from Higgs and other collective modes [20, 21, 22] and
nonlinear effects including parametric amplification of superconducting plasma
waves[23, 24, 25, 26]. A cursory understanding of these experiments can be
obtained from the perspective of nonlinear optics deriving from coherent
dynamics of order parameters and associated degrees of freedom such as
phonons. However, several qualitative differences between collective mode
optics and standard nonlinear optics deserve a special mention. First, in
systems that we consider, a nonlinear response of the probe pulse persists at
delay times well beyond the duration of the pump pulse. Hence, one cannot
apply theoretical approaches based on the expansion of optical nonlinearities
in powers of the electric field, such as $\chi^{(2)}$, and $\chi^{(3)}$ [27].
Instead, it is imperative to analyze the interaction of the electromagnetic
field of the probe pulse with matter degrees of freedom excited by the pump
pulse. Second, it is important to properly account for the role of the
surface, since the probe wavelength can be comparable or even larger than the
penetration depth of the material. Thus, common assumptions of nonlinear
optics, including the slowly varying envelope approximation [27] and phase
matching, need to be replaced by the explicit solution of Maxwell equations
coupled to dynamical equations for matter.
Figure 1: Phase diagram of optical reflectivity of an interacting Floquet
material as a function of the parametric drive amplitude and dissipation. We
identify four regimes with qualitatively different types of reflectivity: (I)
Weakly driven underdamped modes in the stable regime where dissipation,
$\gamma$, is sufficient to prevent a lasing instability. The line shape is a
squared lorentzian dip given by equation (1). (II) Weakly driven overdamped
modes in the stable regime. The resonance feature is an edge-like line shape
given by equation (30). (III) Cross-over regime on the boundary of the stable
and unstable regions with a double dip structure. (IV) Unstable region, strong
driving overcomes dissipation and may even lead to parametric amplification.
### I.2 Overview of results and organization of the paper
The primary goal of this paper is to present a general phenomenological
formalism for discussing optical properties of driven states following a
resonant excitation of a collective mode. We analyze the problem from the
perspective of Floquet matter, in which a collective mode excited by the pump
provides temporal modulation of microscopic parameters of the material. This
results in parametric driving of the system and Floquet-type mixing of
frequencies. When the system is driven homogeneously in space (i.e. with wave
vector $k=0$) and frequency $\Omega_{d}$, a parametric resonance occurs
whenever two collective excitations that are IR-active have the opposite wave
vector, $k_{1}=-k_{2}$, and frequencies that add up to the drive frequency,
$\omega_{1}+\omega_{2}=\Omega_{d}$. Naively, one expects parametric resonances
to always lead to enhancement of reflectivity, with sharp peaks corresponding
to parametric resonance conditions. We find that the situation is far richer
and may include the appearance of edges, dips, and electromagnetically induced
transparency (EIT)[28] type structure in the reflectivity (see Fig. 1).
Physically, this comes from oscillation induced mixing between light-matter
fields of different frequency components. In this paper, we focus on the case
of oscillations with a small amplitude and/or strong dissipation, in which
case analysis can be limited to the mixing of only two frequencies, commonly
referred to as the signal and idler frequencies. They are defined such that
the signal frequency $\omega_{s}$ is equal to the frequency of the probe
pulse, whereas the idler frequency $\omega_{\rm id.}$ is equal to the
difference between the drive frequency $\Omega_{d}$ and $\omega_{s}$.
Interference between the signal and idler frequency components is reminiscent
of Fano-type phenomena in scattering theory and optics, where interference
between two channels can result in non-trivial energy dependence of the
reflection and transmission coefficients[29]. Describing the driving only in
terms of signal and idler mixing corresponds to a degenerate perturbation
theory in the Floquet basis [30, 31, 17].
What determines whether interference phenomena will dominate over parametric
amplification of reflectivity is the competition between parametric driving
and losses. We find a universal dynamical phase diagram of the optical
response as a function of the strength of the drive and dissipation.
Remarkably, we find that the entire breadth of these responses can be
specified using only a few effective (phenomenological) parameters. One of the
main achievements of this paper is to derive analytical formulas for the shape
of these resonances in section II.3 in the case of strong dissipation where
perturbation theory is valid, Regimes I and II in Fig. 1. In Regime I,
corresponding to the case of underdamped collective modes, we obtain a
squared lorentzian shape:
$R_{\rm driven}=R_{s}\left(1+\alpha\,\mbox{Re}\left\{\frac{1}{\left(\omega-\omega_{\rm para.}+i\gamma\right)^{2}}\right\}\right),$ (1)
where $R_{\rm driven}$ is the reflectivity in the Floquet state, $R_{s}$ the
reflectivity in equilibrium, $\alpha$ a frequency dependent parameter that
depends on dispersion of the IR collective modes of the material, $\omega_{\rm
para.}$ the frequency at which parametric resonance condition is satisfied and
$\gamma$ the dissipation in the system. Notably, in Regime I, we can use the
Floquet drive to directly extract the dissipation in the system on parametric
resonance. In Regime II, corresponding to overdamped collective modes, the
resonance peak has the form:
$R_{\rm driven}=R_{s}\left(1+\beta\,\mbox{Re}\left\{e^{i\theta}\frac{1}{\omega-\omega_{\rm para.}+i\gamma}\right\}\right),$ (2)
where $\beta$ and $\theta$ are frequency dependent parameters that depend on
the dispersion of IR collective modes. In this case, the shape is a linear
combination of the real and imaginary parts of a lorentzian function,
resulting in an effective "edge"-like feature.
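As a quick numerical illustration, the two line shapes of equations (1) and (2) can be evaluated directly. The sketch below is ours; the function names and all parameter values are illustrative and are not taken from the fits discussed in this paper.

```python
import numpy as np

def lineshape_regime_I(w, w_para, gamma, alpha, R_s=1.0):
    # Squared lorentzian of eq. (1): weakly driven underdamped modes (Regime I).
    return R_s * (1 + alpha * np.real(1.0 / (w - w_para + 1j * gamma)**2))

def lineshape_regime_II(w, w_para, gamma, beta, theta, R_s=1.0):
    # Edge-like shape of eq. (2): overdamped modes (Regime II). The phase
    # theta mixes the real and imaginary parts of a lorentzian.
    return R_s * (1 + beta * np.real(np.exp(1j * theta) / (w - w_para + 1j * gamma)))

w = np.linspace(0.5, 1.5, 201)
dip = lineshape_regime_I(w, w_para=1.0, gamma=0.1, alpha=0.01)
edge = lineshape_regime_II(w, w_para=1.0, gamma=0.1, beta=0.1, theta=np.pi / 4)
```

On resonance, eq. (1) gives $R_{s}(1-\alpha/\gamma^{2})$, so the dip depth directly encodes the dissipation, as noted in the text.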
For clarity, in this work we simplify our analysis by including Floquet
modulation at a single frequency. When the finite lifetime of the collective
mode is taken into account, this should be analyzed as a multi-tonal drive.
Our analysis can be generalized to this situation. However, in the current
paper we will only comment on the main changes that we expect in this case. We
postpone a detailed discussion of the multi-tonal Floquet-Fresnel problem to a
future publication [32].
It is worth noting conceptual connections between our Floquet approach and
previous work on the phenomenon of optical phase conjugation (OPC) [33, 34].
What makes our analysis different is that we focus on terahertz phenomena,
which correspond to much longer wavelengths than optical phenomena considered
in the context of OPC. It is important for our discussion to take into account
that non-linear processes take place near the material boundary rather than in
the bulk, which is why our starting point is the Fresnel formalism of
reflection of electromagnetic waves. This can be contrasted to phase matching
conditions used in most discussions of OPC, which essentially assume that non-
linear processes take place in the bulk of the material.
This paper is organized as follows. Section II presents a general formalism
for computing the reflectivity of Floquet materials. With the goal of setting
up notation, in section II.1 we remind the readers of the canonical formalism of
Fresnel’s solution of light reflection from an equilibrium material with an
index of refraction $n(\omega)$. In Section II.2 we discuss how to generalize
this approach to study light reflection from a material subject to a periodic
drive. We show a universal form of frequency dependence of reflectivity from
such systems, which we summarize in the phase diagram presented in Figure 1.
We show that this frequency dependence can be deduced from the dispersion of
collective modes and frequency of the periodic drive without developing a full
microscopic theory. Thus the Floquet-Fresnel equations allow for the same
level of conceptual understanding as the standard equilibrium Fresnel problem.
To make our discussion more concrete, in section III we apply this analysis to
two paradigmatic cases: i) a single low frequency plasmon band and ii) the two
band case of a phonon-polariton system with dispersions shown in Figure 2.
These two cases are not only exemplary but also provide accurate models for
real materials, such as the lower Josephson plasmon of
$\rm{YBa_{2}Cu_{3}O_{6+x}}$ (case (i)) and a phonon-polariton system in $\rm
SiC$ (case (ii)). They are reviewed in sections III.1 and III.2, respectively.
We note that most cases of parametric resonance in pump and probe experiments
can be reduced to these two examples, since the usual formulation of
parametric resonance involves generating pairs of excitations and the
resonance can be described by including up to two bands. However, in some
cases there may be additional features in the reflectivity arising from
singular behavior of matrix elements. We provide a concrete example of this in
section III.2 for the case of the phonon-polariton model in SiC. Finally, we
demonstrate that our theory enables a quantitatively accurate fit to the
results of pump and probe experiments in $\rm{YBa_{2}Cu_{3}O_{6.5}}$. These
experiments demonstrated the appearance of a photo-induced edge both below and
above the superconducting transition temperature, at frequencies close to the
lower plasmon edge. We demonstrate that these observations can be accurately
described by the Floquet-Fresnel model developed in this paper.
Figure 2: Examples of dispersion relations (a) - (b) and their corresponding
reflectivity spectrum (c) - (d). a) Dispersion relation of a plasmon in a SC
(black) and dispersion relation of light in air (green). The corresponding
equilibrium reflectivity in (c) shows perfect reflection below the gap, for
$\omega_{s}<\omega_{\rm pl.}$, and a minimum in reflectivity appears when the
dispersion of light in air crosses the dispersion of the plasmon in the
material, a condition called phase matching. (b) Dispersion relation of
phonon-polariton (black) and dispersion relation of light in air (green). The
corresponding equilibrium reflectivity in (d) shows perfect reflectivity
inside the reststrahlen band and a plasma edge when the dispersion of light in
air crosses the dispersion of the upper polariton, similarly to (c). The red
dots in (a) and (b) show the driving frequency, while the arrows depict the
parametrically resonant processes resulting from the Floquet drive. This leads
to features in the reflectivity predicted by our theory at the parametrically
resonant frequencies both for strong and weak drive relative to dissipation.
## II General formalism of Floquet-Fresnel reflection
### II.1 Equilibrium reflectivity
We begin our discussion by presenting coupled dynamical equations for light
and matter, assuming that the material has an infrared active collective mode,
such as a phonon or a Josephson plasmon. Information about the collective mode
is included in the frequency dependence of the linear electric susceptibility,
$\chi(\omega,k)$, which determines the index of refraction $n(\omega)$:
$\displaystyle\nabla\times B=$ $\displaystyle\mu_{0}\partial_{t}D,$ (3a)
$\displaystyle\nabla\times E=$ $\displaystyle-\partial_{t}B,$ (3b)
$\displaystyle D=$ $\displaystyle\epsilon_{0}E+P$ (3c)
where the dynamics of the polarization $P$ contain all optically active
collective modes inside the material, $E$ is the electric field and
$\epsilon_{0}$ and $\mu_{0}$ are the electric permittivity and magnetic
permeability in vacuum, respectively. The polarization in frequency and
momentum space, $P(\omega,k)$, is given in terms of the electric field through
the linear susceptibility,
$P(\omega,k)=\epsilon_{0}\chi(\omega,k)E(\omega,k)$. Due to the high speed of
light, $c$, considerable hybridization between the collective mode and light
occurs only at very small momenta, $k\sim\frac{\omega}{c}$. As a result, for
optical reflection problems we can take the susceptibility to be
dispersionless, $\chi(\omega,k)\approx\chi(\omega,k=0)\equiv\chi(\omega)$ to a
good approximation. Combining the Maxwell equations with the susceptibility we
find the dynamics of the electromagnetic transverse modes in frequency and
momentum space to be given by a wave equation with a solely frequency
dependent refractive index $n(\omega)$:
$\left(\frac{n^{2}(\omega)\omega^{2}}{c^{2}}-k^{2}\right)E(\omega,k)=0.$ (4)
Collective mode dispersion relations are found as solutions to the equation
$\left(k^{2}-\frac{\omega^{2}n^{2}(\omega)}{c^{2}}\right)=0$. The above
description is very general and any dispersion relation inside the material
can be captured by an appropriate choice of $n(\omega)$.
#### II.1.1 The case of a Plasmon
In superconductors (SC) the Anderson-Higgs mechanism gives rise to the gap in
the spectrum of transverse electromagnetic fields equal to the plasma
frequency, see Fig. 2(a). The plasmon excitation can be captured by a
refractive index of the type[35]:
$n^{2}_{SC}(\omega)=\epsilon_{\infty}\left(1-\frac{\omega^{2}_{\rm
pl.}}{\omega^{2}}\right),$ (5)
where $\omega_{\rm pl.}$ is the plasma frequency and $\epsilon_{\infty}$ a
constant background permittivity. Such a refractive index, when substituted
into equation (4), leads to the dispersion relation for the electromagnetic
field inside a SC:
$\omega_{SC}^{2}(k)=\omega^{2}_{\rm
pl.}+\frac{c^{2}}{\epsilon_{\infty}}k^{2}.$ (6)
We note that plasmon modes can have very different frequencies depending on
light polarization. In particular, in the case of layered systems, such as
$\rm YBa_{2}Cu_{3}O_{6+x}$ superconductors, the plasma frequency is small for
electric field polarization perpendicular to the layers. In layered metals one
can also find low energy plasmon modes, although they typically have stronger
damping than in superconductors.
#### II.1.2 Phonon-polariton systems
Another ubiquitous example is the case of phonon-polaritons. In this paper we
will primarily use $\rm SiC$ for illustration, which features an IR-active
phonon at frequency close to 30 THz with a large effective charge[15]. Another
related material that is currently under active investigation is $\rm
Ta_{2}NiSe_{5}$, which has an additional complication that multiple phonons
need to be included in the analysis.
In the case of a single IR phonon the dispersion relation of the phonon-
polariton system is depicted in Fig. 2 (b). It can be captured by substituting
in equation (4) the refractive index[31]:
$n_{\rm phonon}^{2}(\omega)=\epsilon_{\infty}\left(1-\frac{\omega_{\rm
pl.,phonon}^{2}}{\omega^{2}+i\gamma\omega-\omega^{2}_{\rm ph.}}\right),$ (7)
where $\omega_{\rm pl.,phonon}$ is the plasma frequency of the phonon mode,
$\omega_{\rm ph.}$ the transverse phonon frequency and $\gamma$ a dissipative
term for the phonon.
#### II.1.3 The case of multiple IR modes
In the case when multiple IR-active collective modes need to be included in
the analysis (phonons, plasmons, etc.), it is common to use the Lorentz model
which parametrizes the contribution of each collective mode to the refractive
index by a lorentzian [36]:
$n^{2}(\omega)=\epsilon_{\infty}\left(1-\sum_{i}\frac{\omega_{pl.,i}^{2}}{\omega^{2}+i\gamma_{i}\omega-\omega_{i}^{2}}\right),$
(8)
where $\omega_{i}$ is the bare frequency of the $i$th collective mode,
$\omega_{pl.,i}$ the plasma frequency which characterises the strength of the
coupling to light, $\gamma_{i}$ the dissipation and $\epsilon_{\infty}$ an
effective static component to the permittivity arising from high energy modes
not included in the sum. The above discussion illustrates that equation (4)
is very general, and in each case an appropriate $n(\omega)$ can be chosen to
capture the dispersion relations of the optically active bands.
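The Lorentz-model index of equation (8) is straightforward to implement numerically. The helper below is our own sketch; mode parameters are illustrative tuples in arbitrary units, not values for any specific material.

```python
import numpy as np

def n2_lorentz(omega, modes, eps_inf=3.0):
    # Lorentz-model n^2(omega) of eq. (8). Each IR-active mode i is a
    # tuple (w_pl_i, w_i, gamma_i): plasma frequency, bare frequency,
    # and damping (illustrative parameters).
    n2 = np.full_like(np.asarray(omega, dtype=complex), eps_inf)
    for w_pl, w_i, g_i in modes:
        n2 -= eps_inf * w_pl**2 / (omega**2 + 1j * g_i * omega - w_i**2)
    return n2

# A single mode reproduces the phonon-polariton index of eq. (7):
n2 = n2_lorentz(np.linspace(0.5, 3.0, 300), [(0.8, 1.0, 0.05)])
```

Far above all resonances $n^{2}\to\epsilon_{\infty}$, and $n^{2}$ passes near zero at each longitudinal frequency $\sqrt{\omega_{i}^{2}+\omega_{pl.,i}^{2}}$, consistent with the discussion above.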
#### II.1.4 Fresnel equations
We begin by reviewing the Fresnel light reflection problem at the interface
between air and a material in equilibrium. While this is textbook material, we
present it here with the goal of establishing notation for the subsequent
discussion of the non-equilibrium case. We consider an incoming beam with
frequency $\omega_{\rm s}$ at normal angle of incidence
$E_{s}=E_{0}e^{i\frac{\omega_{s}}{c}z-i\omega_{s}t}$ (9)
where $z$ is the direction perpendicular to the interface and the interface
lies at $z=0$. The reflected and transmitted waves at the signal frequency are
expressed through reflection and transmission coefficients,
$E_{r}=r_{s}E_{0}e^{-i\frac{\omega_{s}}{c}z-i\omega_{s}t}$ and
$E_{t}=t_{s}E_{0}e^{ik_{s}z-i\omega_{s}t}$. The momentum $k_{s}$ corresponds
to the mode inside the material oscillating at $\omega_{s}$ and using equation
(4), is given by $k_{s}=\frac{\omega_{s}n(\omega_{s})}{c}$. Matching the
electric field across the boundary at $z=0$ gives rise to the boundary
equation:
$1+r_{s}=t_{s}.$ (10)
For non-magnetic materials, the magnetic field is also continuous across the
surface. Using the homogeneous Maxwell equation, $\partial_{t}B=-\nabla\times
E$, we calculate the magnetic field in the two regions. Matching the two
regions at $z=0$ gives rise to the second boundary equation:
$1-r_{s}=n(\omega_{s})t_{s}.$ (11)
Solving for the reflection coefficient, we find the standard expression for
reflectivity in terms of the refractive index:
$\begin{split}R_{s}=|r_{s}|^{2}=\left|\frac{1-n(\omega_{s})}{1+n(\omega_{s})}\right|^{2}=\frac{\left(1-n^{\prime}\right)^{2}+\left(n^{\prime\prime}\right)^{2}}{\left(1+n^{\prime}\right)^{2}+\left(n^{\prime\prime}\right)^{2}}\end{split}$
(12)
where $n^{\prime}$ and $n^{\prime\prime}$ correspond to the real and imaginary
part of the refractive index respectively.
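Combining the superconductor index of equation (5) with equation (12) reproduces the plasma edge numerically. In this sketch (our own, with illustrative parameters) a small damping $\gamma$ is added as a regularization; it is an assumption, not part of equation (5).

```python
import numpy as np

def n_sc(omega, omega_pl=1.0, eps_inf=4.0, gamma=1e-3):
    # Superconductor refractive index, eq. (5), with a small damping
    # gamma added as a regularization (assumption, not in eq. (5)).
    n2 = eps_inf * (1.0 - omega_pl**2 / (omega * (omega + 1j * gamma)))
    return np.sqrt(n2 + 0j)

def fresnel_R(n):
    # Normal-incidence Fresnel reflectivity, eq. (12).
    return np.abs((1 - n) / (1 + n))**2

omega = np.linspace(0.2, 3.0, 500)
R = fresnel_R(n_sc(omega))
# R ~ 1 below the plasma gap (purely evanescent transmitted wave) and
# drops sharply above it, with a minimum near n'(omega) = 1.
```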
In equilibrium, reflectivity can be deduced, at least qualitatively, from the
form of collective mode dispersion inside the material. This is depicted in
Fig. 2(a),(c) for a SC and in Fig. 2(b),(d) for a phonon-polariton
system. In the case of a SC, at probing frequencies below the plasma gap
$\omega_{s}<\omega_{pl.}$, no bulk modes exist to propagate the energy and
$k_{s}$ is purely imaginary corresponding to an evanescent wave. In this
situation, we have near perfect reflectivity. As soon as the probing frequency
becomes larger than the plasma gap, transmission is allowed and reflectivity
drops abruptly, reaching a minimum at the frequency where the light-cone
crosses the plasma band. The minimum in reflectivity or equivalently the
maximum in transmission occurs when the incoming and transmitted waves are
"phase matched", a condition that is satisfied when the light cone in air
crosses a new band inside the material, i.e. $n^{\prime}(\omega_{s})=1$ in
equation (12). The sudden drop in reflectivity appearing whenever a new
optically active band becomes available is called in the literature a "plasma
edge". Similar reasoning can be used to determine qualitatively the
reflectivity of a phonon-polariton system from its dispersion relation alone:
At frequencies within the gap of the dispersion, $\omega_{\rm
ph.}<\omega_{s}<\sqrt{\omega_{\rm ph.}^{2}+\omega_{\rm pl.,phonon}^{2}}$,
called the reststrahlen band, only evanescent waves are allowed, and
reflectivity is expected to be close to one. On the other hand for probing
frequencies $\omega_{s}>\sqrt{\omega_{\rm ph.}^{2}+\omega_{\rm pl.,phonon}^{2}}$, when
the light cone crosses the upper polariton branch, a plasma edge appears.
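The same reasoning can be checked numerically with the phonon-polariton index of equation (7). The parameter values below are illustrative rather than SiC-specific.

```python
import numpy as np

def n2_phonon(omega, w_pl=1.0, w_ph=1.5, gamma=0.005, eps_inf=3.0):
    # Phonon-polariton n^2(omega), eq. (7) (illustrative parameters).
    return eps_inf * (1 - w_pl**2 / (omega**2 + 1j * gamma * omega - w_ph**2))

def fresnel_R(n2):
    # Eq. (12) written in terms of n^2.
    n = np.sqrt(n2 + 0j)
    return np.abs((1 - n) / (1 + n))**2

w = np.linspace(0.5, 3.0, 600)
R = fresnel_R(n2_phonon(w))
# R ~ 1 inside the reststrahlen band w_ph < omega < sqrt(w_ph^2 + w_pl^2),
# with a plasma edge where the light cone meets the upper polariton branch.
```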
#### II.1.5 Dissipation
Finally, we comment on the effects of dissipation on light reflection. While
in principle, equation (4) is completely general, it is sometimes helpful to
add dissipation explicitly through the conductivity of the normal electron
fluid which obeys Ohm’s law and modifies equation (4) to:
$\left(n^{2}(\omega)\omega^{2}+i\frac{\sigma_{n}}{\epsilon_{0}}\omega-c^{2}k^{2}\right)E=0,$
(13)
where $\sigma_{n}$ is the normal electron fluid conductivity. Such a term
provides a natural way of including increased dissipation in the pumped state
discussed below as a result of the presence of photo-excited carriers. In the
equilibrium case, dissipation acts to smooth out sharp features in
reflectivity such as the plasma edges.
### II.2 Floquet reflectivity
#### II.2.1 Floquet eigenstates
Our goal in this section is to introduce a simple model for Floquet materials
and discuss special features in reflectivity that appear in this model close
to parametric resonances. In the next section we will demonstrate that
features discussed in this section are ubiquitous, and can be found in more
accurate models[31, 19]. We model the Floquet medium by assuming that the
presence of an oscillating field inside the material results in a time
periodic refractive index, $n^{2}_{\rm driven}(t)=n^{2}(\omega)+\delta
n^{2}_{\rm drive}\cos\left(\Omega_{d}t\right)$. The equations of motion in
frequency space for the electric field in the presence of the time-dependent
perturbation become:
$\left(k^{2}-\frac{\omega^{2}n^{2}(\omega)}{c^{2}}\right)E(\omega,k)+A_{\it drive}\left(E(\omega-\Omega_{d},k)+E(\omega+\Omega_{d},k)\right)=0,$ (14)
where $A_{\it drive}$ is the mode coupling strength related to the amplitude
of the time-dependent drive, which in this section we assume to be constant
although, in principle, it may be frequency dependent (see e.g. section
III.2). Generally, equations of the type (14) should be solved simultaneously
for many frequency components that differ by integer multiples of the drive
frequency. However, to capture parametric resonances in the spectrum, it is
sufficient to limit analysis to mixing between only two modes, which are
commonly referred to as the signal and idler modes [37]. The signal frequency
is taken to be the frequency of the probe pulse, whereas the idler frequency
is chosen from the condition that the sum of the signal and idler frequency is
equal to the drive frequency. There may be other resonant Floquet conditions,
such as $\omega_{1}-\omega_{2}=\Omega_{d}$, which do not correspond to
parametric generation of excitations by the drive but instead correspond to
resonant re-scattering. We postpone discussion of such cases to subsequent
publications. Thus we consider
$E(t,z)=\left(E_{s}e^{-i\omega_{s}t}+E_{id.}^{*}e^{+i\omega_{\rm
id.}t}\right)e^{ikz}.$ (15)
Truncating the eigenmode ansatz to only signal and idler components
corresponds to using Floquet degenerate perturbation theory approximation[30].
The inclusion of higher harmonic contributions will give rise to sub-leading
perturbative corrections. With the ansatz in equation (15), the equations of
motion take the form:
$\begin{pmatrix}k^{2}-k_{s}^{2}&A_{\it drive}\\ A_{\it drive}&k^{2}-k_{id.}^{2}\end{pmatrix}\cdot\begin{pmatrix}E_{s}\\ E^{*}_{id.}\end{pmatrix}=0$ (16)
where $k_{s}^{2}(\omega_{s})=\frac{\omega^{2}_{s}n^{2}(\omega_{s})}{c^{2}}$
and $k_{id.}^{2}(\omega_{s})=\frac{\omega_{\rm id.}^{2}n^{2}(\omega_{\rm id.})}{c^{2}}$
are the momenta of the eigenstates oscillating at the signal and idler
frequencies, respectively, in the absence of the parametric drive $A_{\it
drive}$. The renormalized eigenvalues are given by:
$\displaystyle
k_{\pm}^{2}=\frac{k^{2}_{s}+k_{id.}^{2}}{2}\pm\sqrt{\left(\frac{k_{s}^{2}-k_{id.}^{2}}{2}\right)^{2}+A_{\it
drive}^{2}},$ (17a)
and the corresponding Floquet eigenstates are:
$\displaystyle E_{id,\pm}^{*}$ $\displaystyle=\alpha_{\pm}E_{s,\pm},$ (18a)
$\displaystyle\alpha_{\pm}$
$\displaystyle=\frac{k_{s}^{2}-k_{id.}^{2}}{2A_{\it
drive}}\mp\sqrt{\left(\frac{k_{s}^{2}-k_{id.}^{2}}{2A_{\it
drive}}\right)^{2}+1}$ (18b)
#### II.2.2 Floquet-Fresnel equations
The eigenstates in equation (18) represent two transmission channels for the
case where the Floquet material is probed at the signal frequency,
$E_{\pm}(t,z)=t_{\pm}E_{0}\left(e^{-i\omega_{s}t}+\alpha_{\pm}e^{+i\omega_{\rm
id.}t}\right)e^{ik_{\pm}z}$. Similarly, the transmitted magnetic field is
given by
$B_{\pm}(t,z)=k_{\pm}t_{\pm}E_{0}\left(\frac{1}{\omega_{s}}e^{-i\omega_{s}t}-\frac{\alpha_{\pm}}{\omega_{\rm
id.}}e^{+i\omega_{\rm id.}t}\right)e^{ik_{\pm}z}$. To find the reflectivity,
we need to satisfy boundary conditions corresponding to matching of magnetic
and electric fields across the boundary oscillating at the signal and idler
frequency separately:
$\displaystyle 1+r_{s}$ $\displaystyle=t_{+}+t_{-},$ (19a) $\displaystyle
1-r_{s}$
$\displaystyle=\frac{ck_{+}}{\omega_{s}}t_{+}+\frac{ck_{-}}{\omega_{s}}t_{-},$
(19b) $\displaystyle r_{id.}$ $\displaystyle=\alpha_{+}t_{+}+\alpha_{-}t_{-},$
(19c) $\displaystyle r_{id.}$ $\displaystyle=\frac{ck_{+}}{\omega_{\rm
id.}}\alpha_{+}t_{+}+\frac{ck_{-}}{\omega_{\rm id.}}\alpha_{-}t_{-}$ (19d)
where $r_{id.}$ is the coefficient of the light reflected at the idler
frequency. The Fresnel-Floquet problem in equation (19) together with
equations (17) and (18) form a closed set of equations that can be solved to
determine the reflectivity $R=|r_{s}|^{2}$.
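Equations (19a)-(19d) are linear in $(r_{s},r_{id.},t_{+},t_{-})$ and can be solved directly. The sketch below is our own helper with illustrative inputs; it assumes the Floquet momenta $k_{\pm}$ and amplitudes $\alpha_{\pm}$ have already been obtained from equations (17) and (18).

```python
import numpy as np

def floquet_fresnel(w_s, w_id, k_p, k_m, a_p, a_m, c=1.0):
    # Solve the boundary conditions (19a)-(19d) for
    # (r_s, r_id, t_plus, t_minus).
    M = np.array([
        [1, 0, -1, -1],                                        # (19a)
        [-1, 0, -c * k_p / w_s, -c * k_m / w_s],               # (19b)
        [0, 1, -a_p, -a_m],                                    # (19c)
        [0, 1, -c * k_p * a_p / w_id, -c * k_m * a_m / w_id],  # (19d)
    ], dtype=complex)
    rhs = np.array([-1, -1, 0, 0], dtype=complex)
    return np.linalg.solve(M, rhs)

# Undriven sanity check: with a_p = 0 the signal channel decouples and
# r_s reduces to the equilibrium Fresnel result (1 - n)/(1 + n).
r_s, r_id, t_p, t_m = floquet_fresnel(1.0, 0.5, 2.0, 1.2, 0.0, 3.0)
```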
#### II.2.3 Perturbation theory for large dissipation
In order to elucidate the physics of photo-induced resonances, it is
instructive to work perturbatively in the parametric driving strength, away
from parametric resonance. Since $k_{s}=k_{id.}$ corresponds to the parametric
resonance condition, the small parameter is chosen to be $\xi=\frac{2A_{\it
drive}}{k_{s}^{2}-k_{id.}^{2}}$. In the limit of small $\xi$, the two
solutions can be safely separated into a mostly signal solution and a mostly
idler solution. These correspond to expansions of $k_{\pm}^{2}$ to linear
order in $\xi$
$\displaystyle\tilde{k}_{s}^{2}$ $\displaystyle\approx k_{s}^{2}+\frac{A_{\it
drive}\xi}{2}+\mathcal{O}(A_{\it drive}\xi^{3}),$ (20a)
$\displaystyle\tilde{k}_{id.}^{2}$ $\displaystyle\approx
k_{id.}^{2}-\frac{A_{\it drive}\xi}{2}+\mathcal{O}(A_{\it drive}\xi^{3})$
(20b)
where $\tilde{k}_{s}$ and $\tilde{k}_{id.}$ are the renormalized momenta. The
corresponding transmission channels are given by expanding $\alpha_{\pm}$ to
leading order in $\xi$:
$\displaystyle E_{1}=$ $\displaystyle
t_{s}E_{0}\left(e^{-i\omega_{s}t}-\left(\frac{\xi}{2}+\mathcal{O}(\xi^{3})\right)e^{+i\omega_{\rm
id.}t}\right)e^{i\tilde{k}_{s}z}$ (21a) $\displaystyle E_{2}=$ $\displaystyle
t_{id.}E_{0}\left(\left(\frac{\xi}{2}+\mathcal{O}(\xi^{3})\right)e^{-i\omega_{s}t}+e^{+i\omega_{\rm
id.}t}\right)e^{i\tilde{k}_{id.}z}$ (21b)
where the eigenmodes have been rescaled in perturbation theory in order to
interpret $E_{1}$ as the channel oscillating primarily at the signal frequency
with a perturbative mixing of the term oscillating at the idler frequency,
while $E_{2}$ is the channel oscillating primarily at the idler frequency with
a perturbative mixing of a term oscillating at the signal frequency. By
integrating out the idler transmission channel the Floquet-Fresnel equations
can be reformulated through an effective renormalized refractive index (see
Appendix A for details):
$\displaystyle 1+r_{s}=$ $\displaystyle t_{s},$ (22a) $\displaystyle 1-r_{s}=$
$\displaystyle t_{s}\tilde{n}$ (22b)
where $\tilde{n}$ is given by:
$\tilde{n}=n_{eq.}\left(1+\frac{A_{\it
drive}\xi}{4k_{s}^{2}}+\frac{\xi^{2}}{4}\frac{c\tilde{k}_{s}-\omega_{\rm
id.}}{c\tilde{k}_{id.}-\omega_{\rm
id.}}\left(\frac{\tilde{k}_{id.}}{\tilde{k}_{s}}-1\right)\right)$ (23)
where $n_{\rm eq.}$ is the equilibrium refractive index. Unlike in
equilibrium, the dressed Floquet refractive index is allowed to be negative,
giving rise to parametric amplification of the reflected signal. Equation (23) has two
perturbative corrections to second order in the mode coupling strength’s
amplitude, $A_{\it drive}$: one of order $\xi$ and the other of order
$\xi^{2}$. The term linear in $\xi$ comes from the renormalization of the
transmitted wave-vector $\tilde{k}_{s}$, while the quadratic term results from
integrating out the idler channel and therefore originates from interference
effects between the signal and idler modes.
On parametric resonance within the same band, the phase matching condition
between signal and idler, $|\mbox{Re}\left(k_{s}\right)|=|\mbox{Re}\left(k_{\rm
id.}\right)|$, implies $\omega_{s}=\omega_{\rm id.}=\frac{\Omega_{d}}{2}$,
while the sign of each wave-vector is fixed by causality as we show below. The
perturbation theory developed above is valid even on resonance provided that
the dissipation is high enough. To show this we expand around the
parametrically resonant frequency, $\omega_{s}=\omega_{para}$, with a finite
dissipation that we include in a causal way through the substitution
$\omega_{s}\rightarrow\omega_{s}+i\gamma$. The expressions for the signal and
idler wave-vectors are then given by:
$\displaystyle k_{s}=k_{s,0}^{\prime}+\frac{\omega_{s}-\omega_{para}+i\gamma}{v_{g}(\omega_{para})},$ (24a) $\displaystyle k_{id.}=-k_{s,0}^{\prime}+\frac{\omega_{s}-\omega_{para}+i\gamma}{v_{g}(\omega_{para})}.$ (24b)
In equation (24), $v_{g}(\omega_{para})$ is the group velocity on parametric resonance, and $k_{s,0}^{\prime}$ is the real part of the wave-vector $k_{s}$ on parametric resonance. Boundary conditions require that the transmitted light vanishes at large distances inside the material, or equivalently ${\rm Im}\{k\}>0$. As a result, the real part of $k_{id.}$ is negative, and this channel counter-propagates with respect to the mostly-signal transmission channel inside the material. This situation is shown schematically in Fig. 3. Using equation (24),
the parameter $\xi$ is given by
$\xi\approx\frac{A_{\it
drive}}{2\left(\omega_{s}-\omega_{para}+i\gamma\right)\frac{k^{\prime}_{s,0}}{v_{g}(\omega_{para})}}$
(25)
and has a Lorentzian peak structure. This resonant form of $\xi$ is responsible for the resonances in the reflectivity. In the case of multiple bands, the above
result can be generalized by considering that the phase matching condition
between signal and idler wave can occur at different signal and idler
frequencies, $\omega_{para,1}+\omega_{para,2}=\Omega_{d}$. In this situation we have:
$\displaystyle k_{s}=k_{s,0}^{\prime}+\frac{\omega_{s}-\omega_{para,1}+i\gamma(\omega_{\rm para,1})}{v_{g}(\omega_{\rm para,1})},$ (26a) $\displaystyle k_{id.}=-k_{s,0}^{\prime}+\frac{\omega_{s}-\omega_{para,1}+i\gamma(\omega_{\rm para,2})}{v_{g}(\omega_{\rm para,2})},$ (26b)
where the group velocity and dissipation can be different for the different
bands at $\omega_{para,1}$ and $\omega_{para,2}$. However, the perturbative
parameter $\xi$ for multiple bands takes a form similar to the single band
case:
$\xi=\frac{2A_{\it drive}}{k_{s}^{2}-(k_{id.})^{2}}\approx\frac{A_{\it
drive}}{2\left(\omega_{s}-\omega_{para,1}+i\gamma_{eff}\right)\times\frac{k_{s,0}^{\prime}}{v_{eff}}}$
(27)
where $2v_{eff}^{-1}=v_{g}^{-1}(\omega_{\rm para,1})+v_{g}^{-1}(\omega_{\rm
para,2})$ and
$\gamma_{eff.}=\frac{v_{eff.}}{2}\cdot\left(\frac{\gamma(\omega_{\rm
para,1})}{v_{g}(\omega_{\rm para,1})}+\frac{\gamma(\omega_{\rm
para,2})}{v_{g}(\omega_{\rm para,2})}\right)$. Equation (27) demonstrates that for small driving strengths the resonant behaviour of parametric driving within a single band and between different bands is the same.
### II.3 Floquet-Fresnel Phase Diagram
Based on our analysis, resonant features in the reflectivity can be classified into four regimes, as shown in Fig. 1. Regimes I and II are in the stable region, where dissipation is stronger than the parametric drive. For these cases we can obtain analytic expressions for the changes in reflectivity. To second order in $A_{\it drive}$ we find the two contributions to the refractive index given in equation (23): a term linear in $\xi$, arising from band renormalization in the presence of the drive, and another of order $\xi^{2}$, which appears after integrating out the idler transmission channel and therefore corresponds to interference between the signal and idler transmission. On parametric resonance, $|{\rm Re}(k_{s})|=|{\rm Re}(k_{\rm id.})|$, their relative strength is given by $\frac{\delta n_{linear}}{\delta n_{quadratic}}\propto\frac{\gamma}{v_{g}k_{s}}$, c.f. equation (23). Interference phenomena dominate for underdamped photon modes, for which $\gamma<v_{g}k_{s}$, while for overdamped modes interference is suppressed and band renormalization is dominant. The corresponding changes to the reflectivity are calculated by expanding the reflectivity to linear order in $\delta n$:
Figure 3: Schematic depiction of optical reflection from a Floquet material.
Inside the Floquet material signal and idler frequency components are mixed
giving rise to two transmission channels, $E_{1}$ and $E_{2}$ propagating in
opposite directions. Since the idler component is counter-oscillating compared
to the signal, the Floquet drive which mixes signal and idler effectively acts
as a wall reflecting each transmission channel and changing its propagation
direction. The two channels are coupled at the interface through the boundary conditions. This picture schematically illustrates the physical mechanism of parametric amplification, in which the transmission channels are directed towards the surface by the drive.
$\displaystyle\tilde{n}=$ $\displaystyle n_{eq.}+\delta n,$ (28a) $\displaystyle\tilde{r}_{s}=$ $\displaystyle\frac{1-\tilde{n}}{1+\tilde{n}}\approx r_{s,eq.}-\frac{2\delta n}{(n+1)^{2}},$ (28b) $\displaystyle\tilde{R}_{s}\approx$ $\displaystyle R_{s,eq.}-4\mbox{Re}\left\{\frac{\delta n}{(n+1)^{2}}r_{s}^{*}\right\}.$ (28c)
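As a consistency check of the linearization in equation (28), the following sketch (with illustrative complex values for $n_{eq.}$ and $\delta n$) compares the exact Fresnel reflectivity of the dressed index $\tilde{n}=n_{eq.}+\delta n$ with the expanded forms (28b) and (28c):

```python
import numpy as np

# Illustrative equilibrium index and a small drive-induced correction delta_n.
n_eq = 2.5 + 0.1j
delta_n = 1e-4 * (1.0 - 0.5j)

def r_of(n):
    """Fresnel reflection coefficient at normal incidence."""
    return (1.0 - n) / (1.0 + n)

r_eq = r_of(n_eq)
r_exact = r_of(n_eq + delta_n)     # exact dressed reflection coefficient
R_exact = abs(r_exact) ** 2

# Linearized forms, equations (28b) and (28c).
r_lin = r_eq - 2.0 * delta_n / (1.0 + n_eq) ** 2
R_lin = abs(r_eq) ** 2 - 4.0 * np.real(delta_n * np.conj(r_eq) / (1.0 + n_eq) ** 2)
```

The residual difference between the exact and linearized forms is of second order in $\delta n$, as expected.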
Regime I: for the usual case of underdamped modes and a single band, we can take $r_{s,eq}$ and $n_{eq.}$ to be real. Moreover, the constant $A=\frac{c\tilde{k}_{s}-\omega_{\rm id.}}{-c\tilde{k}_{id.}+\omega_{\rm id.}}\left(1+\frac{-\tilde{k}_{id.}}{\tilde{k}_{s}}\right)$ can be expanded around parametric resonance to give $A=2\frac{n-1}{n+1}=2r_{s}$. Under these assumptions, interference of the signal and idler gives rise to a double-Lorentzian dip in the reflectivity, reminiscent of EIT:
$\tilde{R}_{s}\approx R_{s,eq.}\left(1-\frac{2}{(n+1)^{2}}\mbox{Re}\left\{\frac{C}{\left(\omega_{s}-\omega_{s,para}+i\gamma\right)^{2}}\right\}\right)$ (29)
where $C=\frac{v_{g}^{2}A_{\it drive}^{2}}{4k_{s}^{2}}$ is a constant proportional to the driving intensity.
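The lineshape predicted by equation (29) can be explored numerically. In the sketch below (illustrative values for $R_{s,eq.}$, $n$, $\gamma$, and $C$, with $C$ taken real and positive), the reflectivity develops a local maximum on resonance flanked by two symmetric dips at $\omega_{s}=\omega_{s,para}\pm\sqrt{3}\gamma$:

```python
import numpy as np

# Illustrative parameters (not from a fit); C is taken real and positive.
R_eq = 0.6          # equilibrium reflectivity
n = 2.0             # equilibrium refractive index
omega_para = 1.0    # parametric resonance frequency
gamma = 0.02        # dissipation rate
C = 1e-4            # proportional to the driving intensity

def R_signal(omega_s):
    """Driven reflectivity near resonance, equation (29)."""
    d = omega_s - omega_para + 1j * gamma
    return R_eq * (1.0 - (2.0 / (n + 1.0) ** 2) * np.real(C / d ** 2))

omega = np.linspace(0.9, 1.1, 20001)
R = R_signal(omega)
dip = omega[np.argmin(R)]   # one of the two dips at omega_para +/- sqrt(3)*gamma
```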
Regime II: In the opposite limit of overdamped dynamics in a single band, the dominant contribution comes from the term linear in $\xi$, and the reflectivity takes the form:
$\tilde{R}_{s}\approx R_{s,eq.}+C^{\prime}\mbox{Re}\left\{e^{i\theta}\frac{1}{\omega_{s}-\omega_{s,para}+i\gamma}\right\}$ (30)
where $C^{\prime}e^{i\theta}=-\frac{1}{(n+1)^{2}}\frac{A_{\it drive}^{2}v_{g}c}{4k_{s}^{3}}$. This feature appears as a drive-induced plasma edge emerging from a featureless overdamped background, as reported in Ref. [18].
Regimes III and IV: These regimes are not perturbative; however, in many cases we can use our simple theory of parametric resonance between two bands to capture the reflectivity of real experiments by solving equations (19a)-(19d). In particular, regime IV corresponds to a lasing instability regime, where we expect a strong peak in the reflectivity due to parametric amplification; the reflectivity can even be a discontinuous function (as was also shown in Ref. [31]). Regime III is an intermediate region where there is amplification on resonance. However, away from resonance perturbation theory still holds, giving rise to an interesting double-dip structure.
## III Examples of manifestations of parametric resonance in reflectivity
In the previous section, we investigated general aspects of Floquet resonances
while being agnostic about microscopic details of the system. In this section,
we discuss toy models with realistic dispersions. Pump-induced features of these toy models in the different regimes of the phase diagram can be used to build intuition for more complicated multi-band dispersions.
### III.1 Driven plasmon band
The simplest case of an optical system that we discuss is a single plasmon
band, which describes electrodynamics of metals and superconductors. The
equilibrium reflectivity in such systems was discussed in Subsec. II.1.
Maxwell’s equations in a superconductor can be written as [35]:
$\left(\omega^{2}-\omega_{pl.}^{2}+i\frac{\sigma_{n}}{\epsilon_{0}}\omega-c^{2}k^{2}\right)E=0$ (31)
where $\sigma_{n}$ represents the ohmic part of the conductivity and provides dissipation, while the photon obtains a mass given by the plasma frequency. At $\omega=0$, equation (31) is solved by $k=i\omega_{pl}/c$, which corresponds to the skin effect in metals and, for superconductors, can also be understood as the Meissner effect. The dissipation term with $\sigma_{n}$ in (31) can be present in superconductors due to quasiparticles [38]. The above equation of motion can be represented by the complex refractive index:
$n_{SC}^{2}(\omega)=\frac{\omega^{2}-\omega_{pl}^{2}}{\omega^{2}}+\frac{i\sigma_{n}}{\epsilon_{0}\omega}.$ (32)
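Treating the right-hand side of equation (32) as the squared index $n_{SC}^{2}$ (i.e., the dielectric function), the equilibrium reflectivity of the plasmon band can be sketched in a few lines; for $\sigma_{n}\rightarrow 0$ the reflectivity is exactly one below $\omega_{pl}$ and drops above it:

```python
import numpy as np

omega_pl = 1.0   # plasma frequency sets the unit of frequency

def reflectivity(omega, sigma_over_eps0=0.0):
    """Normal-incidence reflectivity of the plasmon band, taking the
    right-hand side of equation (32) as n^2 (the dielectric function)."""
    n2 = 1.0 - (omega_pl / omega) ** 2 + 1j * sigma_over_eps0 / omega
    n = np.sqrt(n2)
    n = np.where(n.imag < 0, -n, n)   # transmitted wave must decay
    return np.abs((1.0 - n) / (1.0 + n)) ** 2

omega = np.array([0.5, 0.9, 1.5, 3.0])
R = reflectivity(omega)   # total reflection below omega_pl when sigma_n -> 0
```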
We model the Floquet material as a system with time-periodic modulation of the
plasma frequency $\omega_{pl}$ at frequency $\Omega_{d}$. We assume that the
modulation frequency is higher than twice the frequency of the bottom of the
plasmon band, so that the drive can result in resonant generation of plasmon
pairs. Taking the amplitude of the modulation to be $A_{\it drive}$, we obtain equation (14) introduced previously.
Reflectivity spectra in the different regimes of the parametric-driving phase diagram are plotted in Fig. 4, obtained by tuning the dissipation through the normal conductivity $\sigma_{n}$ and the amplitude of the periodic oscillations $A_{\it drive}$.
Figure 4: Reflectivity spectra of a plasmon band driven at
$\Omega_{d}=3\omega_{pl}$ in the four different regimes of the phase diagram
of Fig. 1. a) Regime I: $\frac{\sigma_{n}}{\epsilon_{0}}=0.064\omega_{pl}$,
$A_{ampl.}=3\frac{\omega_{pl}^{2}}{c^{2}}$, b) Regime II:
$\frac{\sigma_{n}}{\epsilon_{0}}=2\omega_{pl}$,
$A_{ampl.}=60\frac{\omega_{pl}^{2}}{c^{2}}$, c) Regime III:
$\frac{\sigma_{n}}{\epsilon_{0}}=0.1\omega_{pl}$,
$A_{ampl.}=6\frac{\omega_{pl}^{2}}{c^{2}}$, d) Regime IV:
$\frac{\sigma_{n}}{\epsilon_{0}}=0.064\omega_{pl}$,
$A_{ampl.}=6\frac{\omega_{pl}^{2}}{c^{2}}$. Notice that dissipation suppresses parametric-driving effects, so a larger oscillation amplitude is needed to produce an appreciable effect in the reflectivity spectra. Notably, panel (b), which corresponds to an overdamped system, shows parametric driving giving rise to an interesting structure, with a dip on resonance, out of a featureless background.
### III.2 Floquet-Fresnel reflectivity in a phonon-polariton system
We now want to demonstrate that the four regimes presented in Fig. 1 are
universal and not limited to a single optical band model. To this end we
consider a system that features two branches of optical excitations: a phonon-
polariton system corresponding to light coupling to a single IR-active phonon
mode. The Hamiltonian describing such a model can be written as:
$H_{\rm ph}=ZEQ+M\omega_{\rm ph}^{2}\frac{Q^{2}}{2}+\frac{\Pi^{2}}{2M},$ (33)
where $Q$ is the phonon coordinate, $\Pi$ is the momentum conjugate to $Q$,
$\omega_{\rm ph.}$ is the transverse phonon frequency, $M$ is the ion mass,
and $Z$ is the effective charge of the phonon mode.
Solving the equations of motion corresponding to (33) together with Maxwell’s
equations we obtain two hybrid light-matter modes, corresponding to the upper
and lower polaritons. In equilibrium the dispersion and typical reflectivity
is given in Fig. 2 (b)-(d). The dispersion is modeled by taking a refractive index of the form (see the discussion in section II.1):
$n(\omega)^{2}=\epsilon_{\infty}\left(1+\frac{\omega_{\rm pl.,phonon}^{2}}{-\omega^{2}-i\gamma\omega+\omega_{ph.}^{2}}\right),$ (34)
where in terms of our Hamiltonian parameters the plasma frequency of the
phonon is given by, $\omega_{\rm pl.,phonon}^{2}=\frac{Z^{2}}{\epsilon_{0}M}$.
The bottom of the upper polariton branch is at frequency $\omega_{L}=\sqrt{\omega_{\rm ph}^{2}+\omega_{\rm pl,phonon}^{2}}$.
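The equilibrium reststrahlen band implied by equation (34) can be checked numerically. The sketch below uses the frequencies quoted later for Fig. 5 ($\omega_{\rm ph}=25$ THz, $\omega_{L}=30$ THz) together with an illustrative value of $\epsilon_{\infty}$; for $\gamma\rightarrow 0$ the reflectivity saturates at one between $\omega_{\rm ph}$ and $\omega_{L}$:

```python
import numpy as np

eps_inf = 3.0        # illustrative high-frequency dielectric constant
omega_T = 25.0       # transverse phonon frequency (THz), as in Fig. 5
omega_L = 30.0       # longitudinal phonon frequency (THz)
omega_pl2 = omega_L ** 2 - omega_T ** 2   # squared phonon plasma frequency

def reflectivity(omega, gamma=0.0):
    """Equilibrium reflectivity from the index of equation (34)."""
    n2 = eps_inf * (1.0 + omega_pl2 / (omega_T ** 2 - omega ** 2 - 1j * gamma * omega))
    n = np.sqrt(n2)
    n = np.where(n.imag < 0, -n, n)   # transmitted wave must decay
    return np.abs((1.0 - n) / (1.0 + n)) ** 2

R_band = reflectivity(np.array([26.0, 28.0, 29.9]))   # inside the reststrahlen band
R_above = reflectivity(np.array([35.0, 60.0]))        # above omega_L
```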
A new feature of the two-band system is the possibility of inter-band parametric resonances. The simplest type of optical pump corresponds to resonantly exciting the upper polariton branch at $k=0$, which then results in a parametric drive of the system at frequency $\Omega_{d}=2\omega_{L}$ [15] (for details see Appendix B). This is the situation that we will primarily focus on in this section. As shown in Fig. 2(b), in this case one finds a resonant process in which the drive produces one lower and one upper polariton at finite momentum. This process satisfies both momentum and energy conservation. This resonance leads to the singularities in the reflectivity shown in Fig. 5 at 20 and 40 THz. Another case of parametric resonance corresponds to the drive creating two upper polaritons at zero momentum, which leads to the singularity at $\omega_{L}=30$ THz in Fig. 5(d).
Another small peak in Fig. 5(d) (strong-drive regime) can be seen at a frequency of 35 THz. This feature arises from the singularity of the matrix element that mixes the signal and idler frequency components, which we pointed out in section I.2. In Appendix B, we consider non-linearities in the phonon system of the type
$H_{non-linear}=uQ^{4},$ (35)
and demonstrate that the matrix element $A_{\it drive}$, introduced in equation (14), can be written as
$A_{\it drive}(\omega_{s})=A_{\it drive,0}+\frac{B}{\left(\omega_{s}^{2}+i\gamma\omega_{s}-\omega_{ph.}^{2}\right)\left(\omega_{\rm id.}^{2}+i\gamma\omega_{\rm id.}-\omega_{ph.}^{2}\right)}.$ (36)
The last equation shows that Floquet mixing is dramatically enhanced when
either the signal or the idler frequencies are equal to $\omega_{ph}$. It is
also useful to present this result in terms of the effective change of the
index of refraction (at the signal frequency) after integrating out the contribution of the idler component (see equation (23)). In the phonon-polariton case we obtain a correction to the index of refraction:
$\delta n_{\rm phonon}\propto\frac{1}{\left(\omega_{\rm
id.}^{2}+i\gamma\omega_{\rm id.}-\omega_{\rm
ph.}^{2}\right)\left(\omega_{s}^{2}+i\gamma\omega_{s}-\omega_{ph.}^{2}\right)},$
(37)
which shows resonant enhancement around $\omega_{s}=\omega_{\rm ph.}$ and
$\omega_{s}=\Omega_{d}-\omega_{\rm ph.}$. We remind the readers, however, that
equation (37) is based on the perturbative treatment of the signal-idler
mixing and is not quantitatively accurate in the vicinity of singularities in
the reflection coefficient.
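The resonant enhancement in equation (37) can be made concrete with the parameters of Fig. 5 ($\Omega_{d}=60$ THz, $\omega_{\rm ph.}=25$ THz; the overall prefactor is dropped). The sketch below confirms the two symmetric enhancement frequencies at $\omega_{s}=\omega_{\rm ph.}$ and $\omega_{s}=\Omega_{d}-\omega_{\rm ph.}$:

```python
import numpy as np

Omega_d = 60.0   # drive frequency (THz), as in Fig. 5
omega_T = 25.0   # transverse phonon frequency (THz)
gamma = 0.5      # phonon linewidth (THz)

def delta_n_mag(omega_s):
    """|delta n| of equation (37), up to an overall constant prefactor."""
    omega_id = Omega_d - omega_s
    d_s = omega_s ** 2 + 1j * gamma * omega_s - omega_T ** 2
    d_id = omega_id ** 2 + 1j * gamma * omega_id - omega_T ** 2
    return 1.0 / np.abs(d_s * d_id)

omega = np.linspace(20.0, 40.0, 8001)
mag = delta_n_mag(omega)
peak = omega[np.argmax(mag)]   # near omega_T or Omega_d - omega_T
```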
Figure 5: Reflectivity spectra of a phonon-polariton system driven by an
effective drive $\Omega_{d}=60$ THz through exciting the upper phonon-
polariton at $30$ THz. The four regimes of the phase diagram are presented for
different parameters of the phonon dissipation and oscillation amplitude.
Numbers were chosen such that the transverse phonon is at $\omega_{T}=25$ THz
and the longitudinal phonon is at $\omega_{L}=30$ THz. a) Regime I:
$\gamma=0.5$ THz, $B=2.7\times 10^{7}\frac{{\rm THz}^{4}}{c^{2}}$, b) Regime II: $\gamma=5$ THz, $B=9\times 10^{7}\frac{{\rm THz}^{4}}{c^{2}}$, c) Regime III: $\gamma=1$ THz, $B=7000^{2}\frac{{\rm THz}^{4}}{c^{2}}$, d) Regime IV: $\gamma=0.5$ THz, $B=8.1\times 10^{7}\frac{{\rm THz}^{4}}{c^{2}}$. In regime IV, apart from the expected parametrically resonant instabilities, we find a Fano-type feature associated with divergences in the strength of the phonon-mediated parametric drive. This occurs at $\Omega_{d}-\omega_{T}=35$ THz.
## IV Blue-shifted edge in the bilayer high-$T_{c}$ cuprate $\rm{YBa_{2}Cu_{3}O_{6.5}}$
An experimental realization of the driven single-plasmon edge comes from terahertz pump-probe experiments on $\rm{YBa_{2}Cu_{3}O_{6.5}}$ [39]. In
equilibrium, $\rm{YBa_{2}Cu_{3}O_{6.5}}$ is a bi-layer superconductor with a
Josephson plasmon at $0.9$ THz. The low energy optical response for light
polarized along the c-axis of the material is captured by the equations of
motion[35]:
$\left(n^{2}_{0}\left(\omega^{2}-\omega_{\rm
pl.}^{2}\right)+i\frac{\sigma_{n}}{\epsilon_{0}}\omega-c^{2}k^{2}\right)E=0$
(38)
where $\sigma_{n}$ represents the conductivity of the normal-state electron fluid, which provides dissipation for the Josephson plasmon, $\omega_{pl}$ is the Josephson plasma frequency, and $n_{0}$ is the static refractive index inside the
material. This photon dispersion is shown in Fig. 2(a) with a gap
$\omega_{JP}\sim 0.9$ THz leading to a Josephson plasma edge at that frequency
in the equilibrium optical reflectivity. Equivalently, the equations of motion can be represented by the refractive index
$n_{SC}^{2}(\omega)=n^{2}_{0}\left(\frac{\omega^{2}-\omega_{\rm pl.}^{2}}{\omega^{2}}+i\frac{\sigma_{n}}{\epsilon_{0}n^{2}_{0}\omega}\right),$ (39)
which is substituted in equation (4).
We use our model to fit experimental data presented in reference[39]
(reprinted here with the authors' permission). Parameters used in this section to produce the figures are tabulated in Appendix C. We first consider a low-temperature state in the superconducting regime, $T=10$ K, and model the pumping as parametrically driving Josephson plasmons [19, 18]. Using our simple model, we
find excellent agreement with the experiments, shown in Fig. 6, and interpret the edge at $\sim 1$ THz as a consequence of parametric resonance from a drive at $\sim 2$ THz, corresponding to Regime IV in the phase diagram. To fit the data, we need to assume that the normal-state conductivity $\sigma_{n}$ is increased in the pumped state by photo-excited quasiparticles, and that $\omega_{\rm pl.}^{2}$, which is proportional to the superfluid density, is decreased. Remarkably, our simulation shows that even if we assume a suppressed superfluid density, we still find a blue-shifted edge as a result of internal oscillating fields parametrically driving Josephson plasmons. To fit the photo-induced edge above $T_{c}$, we model the pseudogap
phase as a superconductor with overdamped dynamics and a reduced plasma
resonance frequency. In Fig. 6 (b) we are able to fit the data assuming
parametric driving at the same frequency as the low temperature data. Our
theory suggests that reflectivity data from pump and probe experiments in the
pseudogap phase of $\rm{YBa_{2}Cu_{3}O_{6.5}}$ correspond to Regime II of our
phase diagram, which shows that a photo-induced edge appears as a result of
parametric driving of overdamped photon modes.
Figure 6: Optical reflectivity spectra of $\rm{YBa_{2}Cu_{3}O_{6.5}}$
extracted from Ref. [39], re-plotted with the permission of the authors and
fitted with the theory presented in this paper. The photo-induced reflectivity
edge is well captured by our simple model and suggests that Josephson plasmons
are parametrically driven by a coherently oscillating mode. (a) The reflectivity spectrum at $T=10$ K (below $T_{c}$) shows a dip-peak structure around 1 THz, corresponding to regime (I) of our phase diagram. (b) The reflectivity spectrum at $T=100$ K (above $T_{c}$) is fitted with our theory assuming an overdamped Josephson plasmon edge in the pseudogap regime. Parametric driving produces changes in reflectivity consistent with regime (II) of our phase diagram. Fitting parameters are reported in Appendix C.
## V Discussion and outlook
In this paper we developed a theory for computing the optical reflectivity of materials with oscillating collective modes. We demonstrated that using only a few phenomenological coefficients, which parametrize the frequency-dependent refractive index, as well as the frequency of the oscillations driving the system, it is possible to predict the position of the photo-induced features associated with parametric resonances. To obtain the shape of a resonant feature, one also needs to include information about the amplitude of the drive and the dissipation of the collective modes. In particular, we found that when dissipation dominates over the parametric drive, the system develops a Lorentzian-shaped dip, which arises from the interference of the signal and idler transmission channels. At stronger drives the dip turns into a peak, and the reflectivity can exceed one, indicating parametric amplification of the probe pulse. We also discussed an interesting double-dip crossover behaviour between the overdamped and amplification regimes. Our results should be ubiquitous in strongly driven systems, where the excitation of a well-defined collective mode can act as an external periodic drive.
Our analysis demonstrates that parametric resonances provide a general
universality class of reflectivity features from which both dynamical and
static properties of the system can be extracted. This puts them in the same
category as previously studied Fano resonance features and EIT [29]. Despite the simplicity of our model, the resulting reflectivity spectrum can be quite rich, as shown in the phase diagram in Fig. 1. Our results provide a tool for analyzing a variety of photo-induced features that have been observed in experiments but have not been given a theoretical interpretation until now. We show that photo-induced features, such as a photo-induced edge, can serve as a reporter of a long-lived collective mode excited in the material during pumping, and as a precursor of a lasing instability that can occur in the system at stronger drives. As a concrete case study we analyzed experimental results
of the pump-induced changes of reflectivity in a layered superconductor
$\rm{YBa_{2}Cu_{3}O_{6.5}}$ at frequencies close to the lower Josephson
plasmon edge. We find that we can obtain an accurate fit to the experimental
data if we include strong renormalization of the equilibrium parameters, such
as enhancement of real conductivity due to the photo-excitation of charge
carriers during the pump.
A natural generalization of the above framework is the inclusion of time
dependent drives at several frequencies. This is important, for example, for
analyzing Floquet drives with finite spectral width or including finite
lifetimes of collective modes. In this case different oscillating modes are expected to compete with each other, leading to an inhomogeneous broadening of the dip/peak features predicted in this work.
## Acknowledgements
SRUH and RDA acknowledge support from the DARPA DRINQS program (Grant No.
D18AC00014). DP acknowledges support from the Israel Science Foundation (Grant No. 1803/18).
## Appendix A Derivation of Floquet refractive index in the stable regime
In this section we derive the Floquet refractive index shown in equation (23).
Using equations (20) and (21) we derive the perturbative Floquet-Fresnel
equations:
$\displaystyle 1+r_{s}$ $\displaystyle=t_{s}+\frac{\xi}{2}t_{id.},$ (40a) $\displaystyle 1-r_{s}$ $\displaystyle=t_{s}\frac{c\tilde{k}_{s}}{\omega_{s}}+\frac{\xi}{2}\frac{c\tilde{k}_{id.}}{\omega_{s}}t_{id.},$ (40b) $\displaystyle r_{id.}$ $\displaystyle=-\frac{\xi}{2}t_{s}+t_{id.},$ (40c) $\displaystyle r_{id.}$ $\displaystyle=-\frac{\xi}{2}\frac{c\tilde{k}_{s}}{\omega_{\rm id.}}t_{s}+\frac{c\tilde{k}_{id.}}{\omega_{\rm id.}}t_{id.}.$ (40d)
We can integrate out the effects of the idler channel using equations (40c) and (40d). Solving the boundary conditions oscillating at the idler frequency for $t_{id.}$ in terms of $t_{s}$ gives:
$t_{id.}=\frac{\xi}{2}\frac{c\tilde{k}_{s}-\omega_{\rm
id.}}{c\tilde{k}_{id.}-\omega_{\rm id.}}t_{s}$ (41)
Substituting back, the remaining boundary conditions become:
$\displaystyle 1+r_{s}=$ $\displaystyle
t_{s}(1+\frac{\xi^{2}}{4}\frac{c\tilde{k}_{s}-\omega_{\rm
id.}}{c\tilde{k}_{id.}-\omega_{\rm id.}}),$ (42a) $\displaystyle 1-r_{s}=$
$\displaystyle
t_{s}\frac{c\tilde{k}_{s}}{\omega_{s}}\left(1+\frac{\xi^{2}}{4}\frac{c\tilde{k}_{s}-\omega_{\rm
id.}}{c\tilde{k}_{id.}-\omega_{\rm
id.}}\frac{\tilde{k}_{id.}}{\tilde{k}_{s}}\right)$ (42b)
After re-scaling the transmission coefficient $t_{s}$, the above equation can
be cast in the familiar form:
$\displaystyle 1+r_{s}=$ $\displaystyle t_{s},$ (43a) $\displaystyle 1-r_{s}=$
$\displaystyle t_{s}\tilde{n}$ (43b)
allowing us to encode the effects of driving into an effective renormalized
refractive index. In fact, the possibility for the dressed refractive index to
be negative is what gives rise to phenomena such as parametric amplification
of reflectivity. The renormalized refractive index is found to be:
$\displaystyle\tilde{n}\approx$
$\displaystyle\frac{c\tilde{k}_{s}}{\omega_{s}}\frac{1+\frac{\xi^{2}}{4}\frac{c\tilde{k}_{s}-\omega_{\rm
id.}}{c\tilde{k}_{id.}-\omega_{\rm
id.}}\cdot\frac{\tilde{k}_{id.}}{\tilde{k}_{s}}}{1+\frac{\xi^{2}}{4}\frac{c\tilde{k}_{s}-\omega_{\rm
id.}}{c\tilde{k}_{id.}-\omega_{\rm id.}}},$ (44a)
$\displaystyle\tilde{n}\approx$ $\displaystyle n_{eq.}\left(1+\frac{A_{\it
drive}\xi}{4k_{s}^{2}}+\frac{\xi^{2}}{4}\frac{c\tilde{k}_{s}-\omega_{\rm
id.}}{c\tilde{k}_{id.}-\omega_{\rm
id.}}\left(\frac{\tilde{k}_{id.}}{\tilde{k}_{s}}-1\right)\right)$ (44b)
as reported in equation (23).
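The algebra above can be verified numerically by solving the four boundary conditions (40a)-(40d) as a linear system and comparing the resulting $r_{s}$ with the effective-index form of equations (43) and (44a). The sketch below uses arbitrary illustrative complex parameters (in units with $c=1$) and reproduces the identification to machine precision:

```python
import numpy as np

# Arbitrary illustrative complex parameters; units with c = 1.
omega_s, omega_id = 1.0, 1.4    # signal and idler frequencies
k_s = 1.8 + 0.05j               # dressed signal wave-vector
k_id = -1.6 + 0.10j             # dressed idler wave-vector (counter-propagating)
xi = 0.08 + 0.02j               # perturbative mixing parameter

# Boundary conditions (40a)-(40d) for the unknowns x = (r_s, r_id, t_s, t_id);
# the xi/2 term in (40b) multiplies t_id.
M = np.array([
    [1.0, 0.0, -1.0, -xi / 2.0],
    [-1.0, 0.0, -k_s / omega_s, -(xi / 2.0) * k_id / omega_s],
    [0.0, 1.0, xi / 2.0, -1.0],
    [0.0, 1.0, (xi / 2.0) * k_s / omega_id, -k_id / omega_id],
], dtype=complex)
b = np.array([-1.0, -1.0, 0.0, 0.0], dtype=complex)
r_s, r_id, t_s, t_id = np.linalg.solve(M, b)

# Effective-index form after integrating out the idler, eqs. (41), (43), (44a).
F = (k_s - omega_id) / (k_id - omega_id)
n_eff = (k_s / omega_s) * (1.0 + (xi ** 2 / 4.0) * F * (k_id / k_s)) \
        / (1.0 + (xi ** 2 / 4.0) * F)
r_eff = (1.0 - n_eff) / (1.0 + n_eff)
```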
## Appendix B IR phonon mediated drive
The equation of motion for the phonon, given by the Hamiltonian in equations (33) and (35), is
$\left(\partial_{t}^{2}+\gamma\partial_{t}+\omega_{0}^{2}+4uQ^{2}\right)Q=\frac{Z}{M}E.$ (45)
Using a Gaussian ansatz for the phonons, we can linearize the above equation as:
$\left(\partial_{t}^{2}+\gamma\partial_{t}+\omega_{\rm ph.}^{2}+12u\left\langle Q^{2}\right\rangle\right)Q=\frac{Z}{M}E.$ (46)
The phonon mode appears in Maxwell's equations as:
$\left(\frac{1}{c^{2}}\partial_{t}^{2}-k^{2}\right)E=-Z\partial^{2}_{t}Q.$
(47)
Oscillating collective modes inside the material affect the above linear system through oscillations of $\left\langle Q^{2}\right\rangle=\left\langle Q^{2}\right\rangle_{0}+A\left(e^{i\Omega_{d}t}+e^{-i\Omega_{d}t}\right)$. Such a term can arise by pumping the system on resonance with the upper polariton, such that $\left\langle Q\right\rangle=A^{\prime}\cos{\omega_{L}t}$, where $\omega_{L}^{2}=\omega_{ph.}^{2}+\frac{Z^{2}}{\epsilon_{0}M}$ is the squared frequency of the upper polariton at $k=0$. Alternatively, for a pumping protocol at high frequencies, the upper polariton fluctuations, $\left\langle Q^{2}\right\rangle$, can be driven linearly by a Raman process. In both cases, the driving frequency would be twice the upper polariton frequency, $\Omega_{d}=2\omega_{L}$. However, in general $\Omega_{d}$ can also correspond to a different frequency not included in our model. Absorbing $\left\langle Q^{2}\right\rangle_{0}$ into the definition of $\omega_{\rm ph.}$ and expanding $Q$ in equation (46) in signal and idler components, $Q=Q_{s}e^{-i\omega_{s}t}+Q_{\rm id.}e^{i\omega_{\rm id.}t}$, we have
$\begin{pmatrix}Q_{s}\\ Q_{id}\end{pmatrix}=\begin{pmatrix}\frac{Z}{\omega_{s}^{2}+i\gamma\omega_{s}-\omega_{\rm ph.}^{2}}&0\\ 0&\frac{Z}{\omega_{\rm id.}^{2}+i\gamma\omega_{\rm id.}-\omega_{\rm ph.}^{2}}\end{pmatrix}\cdot\begin{pmatrix}E_{s}\\ E_{\rm id}\end{pmatrix}+\frac{ZA}{\left(\omega_{s}^{2}+i\gamma\omega_{s}-\omega_{\rm ph.}^{2}\right)\left(\omega_{\rm id.}^{2}+i\gamma\omega_{\rm id.}-\omega_{\rm ph.}^{2}\right)}\begin{pmatrix}E_{\rm id}\\ E_{s}\end{pmatrix}.$ (48)
Substituting equation (48) into Maxwell's equations, we find the equations of motion for the signal and idler components:
$\displaystyle\left(\frac{n^{2}_{eq.}(\omega_{s})}{c^{2}}\omega_{s}^{2}-k^{2}\right)E_{s}+A_{\rm drive,s}(\omega_{s},\omega_{\rm id})E_{\rm id}=$ $\displaystyle 0,$ (49a) $\displaystyle\left(\frac{n^{2}_{eq.}(\omega_{\rm id.})}{c^{2}}\omega_{\rm id.}^{2}-k^{2}\right)E_{\rm id}+A_{\rm drive,id}(\omega_{s},\omega_{\rm id})E_{s}=$ $\displaystyle 0,$ (49b)
where the signal and idler driving amplitudes, $A_{\rm drive,s}$ and $A_{\rm drive,id}$, are given by:
$\displaystyle A_{\rm drive,s}=\frac{Z^{2}A\omega_{s}^{2}}{\left(\omega_{s}^{2}+i\gamma\omega_{s}-\omega_{\rm ph.}^{2}\right)\left(\omega_{\rm id.}^{2}+i\gamma\omega_{\rm id.}-\omega_{\rm ph.}^{2}\right)},$ (50a) $\displaystyle A_{\rm drive,id}=\frac{Z^{2}A\omega_{\rm id}^{2}}{\left(\omega_{s}^{2}+i\gamma\omega_{s}-\omega_{\rm ph.}^{2}\right)\left(\omega_{\rm id.}^{2}+i\gamma\omega_{\rm id.}-\omega_{\rm ph.}^{2}\right)},$ (50b)
justifying the resonant structure presented in equation (36).
## Appendix C Fitting parameters for $\rm{YBa_{2}Cu_{3}O_{6.5}}$ data
As mentioned, equilibrium is modeled via the equations of motion of photons in
a superconductor:
$\left(\omega^{2}+i\frac{\sigma_{n}}{\epsilon_{0}}\omega-\left(\omega_{\rm
pl.}^{2}+\frac{c^{2}}{n^{2}}k^{2}\right)\right)E(\omega)=0.$ (51)
Driving is taken into account by mixing signal and idler frequency
contributions arising from a periodic drive at $\Omega_{d}$:
$\displaystyle\left(\omega_{s}^{2}+i\frac{\sigma_{n}}{\epsilon_{0}}\omega_{s}-\left(\omega_{\rm pl.}^{2}+\frac{c^{2}}{n^{2}}k^{2}\right)\right)E(\omega_{s},k)+A_{\it drive}E(-\omega_{\rm id.},k)=0,$ (52a) $\displaystyle\left(\omega_{\rm id.}^{2}+i\frac{\sigma_{n}}{\epsilon_{0}}\omega_{\rm id.}-\left(\omega_{\rm pl.}^{2}+\frac{c^{2}}{n^{2}}k^{2}\right)\right)E(-\omega_{\rm id.},k)+A_{\it drive}E(\omega_{s},k)=0.$ (52b)
To fit the data, we first fit the parameters $\{\omega_{pl.},\sigma_{n},n\}$ to the equilibrium reflectivity, and then fit the driving frequency $\Omega_{d}$ and driving amplitude $A_{\it drive}$ to match the driven reflectivity spectra.
##### Below $T_{c}$:
The experimental data were taken at 10 K, with a pumping frequency of $19.2$ THz, a width of $1$ THz, and an electric field amplitude of $1$ MV/cm. The equilibrium is fitted with $\omega_{pl.}=0.9$ THz, $\sigma/\epsilon_{0}=2.7$ THz, $n=4.2$. In the driven state we use $\Omega_{d}=2.1$ THz, $A_{\it drive}=8.4{\rm\frac{THz^{2}}{c^{2}}}$, $\omega_{pl.}=0.6$ THz and $\sigma_{n}/\epsilon_{0}=5.5$ THz. From our fit we infer that the dissipation has increased due to pumping, but also that the interlayer Josephson coupling has decreased during the pump. We see that the edge appears even if the Josephson coupling is suppressed.
##### Above $T_{c}$ :
The experimental data were taken at 100 K, with a pumping frequency of $19.2$ THz, a width of $1$ THz, and an electric field amplitude of $3$ MV/cm. The equilibrium is found to be overdamped, with $\omega_{pl.}=0.1$ THz, $\sigma/\epsilon_{0}=25.8$ THz, $n=5$. The pumped reflectivity is fitted with $\Omega_{d}=3.8$ THz, $A_{\it drive}=64{\rm\frac{THz^{2}}{c^{2}}}$, $\omega_{pl.}=0.1$ THz and $\sigma_{n}/\epsilon_{0}=54$ THz.
Finally, both signals were convolved with a Gaussian broadening function with a standard deviation of $0.05$ THz.
## References
* Aoki _et al._ [2014] H. Aoki, N. Tsuji, M. Eckstein, M. Kollar, T. Oka, and P. Werner, Nonequilibrium dynamical mean-field theory and its applications, Reviews of Modern Physics 86, 779 (2014).
* de la Torre _et al._ [2021] A. de la Torre, D. M. Kennes, M. Claassen, S. Gerber, J. W. McIver, and M. A. Sentef, Nonthermal pathways to ultrafast control in quantum materials (2021), arXiv:2103.14888 [cond-mat.str-el] .
* Fausti _et al._ [2011] D. Fausti, R. I. Tobey, N. Dean, S. Kaiser, A. Dienst, M. C. Hoffmann, S. Pyon, T. Takayama, H. Takagi, and A. Cavalleri, Light-induced superconductivity in a stripe-ordered cuprate, Science 331, 189 (2011).
* Nicoletti _et al._ [2014] D. Nicoletti, E. Casandruc, Y. Laplace, V. Khanna, C. R. Hunt, S. Kaiser, S. S. Dhesi, G. D. Gu, J. P. Hill, and A. Cavalleri, Optically induced superconductivity in striped ${\mathrm{La}}_{2-x}{\mathrm{Ba}}_{x}{\mathrm{CuO}}_{4}$ by polarization-selective excitation in the near infrared, Phys. Rev. B 90, 100503 (2014).
* Hu _et al._ [2014] W. Hu, S. Kaiser, D. Nicoletti, C. R. Hunt, I. Gierz, M. C. Hoffmann, M. L. Tacon, T. Loew, B. Keimer, and A. Cavalleri, Optically enhanced coherent transport in $\rm{YBa_{2}Cu_{3}O_{6.5}}$ by ultrafast redistribution of interlayer coupling, Nature Materials 13, 705 (2014).
* Okamoto _et al._ [2016] J.-i. Okamoto, A. Cavalleri, and L. Mathey, Theory of enhanced interlayer tunneling in optically driven high-$T_{c}$ superconductors, Physical Review Letters 117, 227001 (2016).
* Zhang _et al._ [2016] J. Zhang, X. Tan, M. Liu, S. W. Teitelbaum, K. W. Post, F. Jin, K. A. Nelson, D. N. Basov, W. Wu, and R. D. Averitt, Cooperative photoinduced metastable phase control in strained manganite films, Nature Materials 15, 956 (2016).
* Wang _et al._ [2013] Y. H. Wang, H. Steinberg, P. Jarillo-Herrero, and N. Gedik, Observation of floquet-bloch states on the surface of a topological insulator, Science 342, 453 (2013).
* Sie _et al._ [2019] E. J. Sie, C. M. Nyby, C. D. Pemmaraju, S. J. Park, X. Shen, J. Yang, M. C. Hoffmann, B. K. Ofori-Okai, R. Li, A. H. Reid, S. Weathersby, E. Mannebach, N. Finney, D. Rhodes, D. Chenet, A. Antony, L. Balicas, J. Hone, T. P. Devereaux, T. F. Heinz, X. Wang, and A. M. Lindenberg, An ultrafast symmetry switch in a weyl semimetal, Nature 565, 61 (2019).
* McIver _et al._ [2019] J. W. McIver, B. Schulte, F.-U. Stein, T. Matsuyama, G. Jotzu, G. Meier, and A. Cavalleri, Light-induced anomalous hall effect in graphene, Nature Physics 16, 38 (2019).
* Basov _et al._ [2011] D. N. Basov, R. D. Averitt, D. van der Marel, M. Dressel, and K. Haule, Electrodynamics of correlated electron materials, Reviews of Modern Physics 83, 471 (2011).
* Basov _et al._ [2017] D. N. Basov, R. D. Averitt, and D. Hsieh, Towards properties on demand in quantum materials, Nature Materials 16, 1077 (2017).
* Sun and Millis [2020] Z. Sun and A. J. Millis, Transient trapping into metastable states in systems with competing orders, Physical Review X 10, 10.1103/physrevx.10.021028 (2020).
* Cremin _et al._ [2019] K. A. Cremin, J. Zhang, C. C. Homes, G. D. Gu, Z. Sun, M. M. Fogler, A. J. Millis, D. N. Basov, and R. D. Averitt, Photoenhanced metastable c-axis electrodynamics in stripe-ordered cuprate la1.885ba0.115cuo4, Proceedings of the National Academy of Sciences 116, 19875 (2019), https://www.pnas.org/content/116/40/19875.full.pdf .
* Cartella _et al._ [2018] A. Cartella, T. F. Nova, M. Fechner, R. Merlin, and A. Cavalleri, Parametric amplification of optical phonons, Proceedings of the National Academy of Sciences 115, 12148 (2018), https://www.pnas.org/content/115/48/12148.full.pdf .
* Budden _et al._ [2020] M. Budden, T. Gebert, M. Buzzi, G. Jotzu, E. Wang, T. Matsuyama, G. Meier, Y. Laplace, D. Pontiroli, M. Riccó, F. Schlawin, D. Jaksch, and A. Cavalleri, Evidence for metastable photo-induced superconductivity in k3c60 (2020), arXiv:2002.12835 [cond-mat.supr-con] .
* Buzzi _et al._ [2021] M. Buzzi, G. Jotzu, A. Cavalleri, J. I. Cirac, E. A. Demler, B. I. Halperin, M. D. Lukin, T. Shi, Y. Wang, and D. Podolsky, Higgs-mediated optical amplification in a nonequilibrium superconductor, Phys. Rev. X 11, 011055 (2021).
* von Hoegen _et al._ [2020] A. von Hoegen, M. Fechner, M. Först, N. Taherian, E. Rowe, A. Ribak, J. Porras, B. Keimer, M. Michael, E. Demler, and A. Cavalleri, Parametrically amplified phase-incoherent superconductivity in yba2cu3o6+x, arXiv:1911.08284 (2020).
* Michael _et al._ [2020] M. H. Michael, A. von Hoegen, M. Fechner, M. Först, A. Cavalleri, and E. Demler, Parametric resonance of josephson plasma waves: A theory for optically amplified interlayer superconductivity in $yba_{2}cu_{3}o_{6+x}$, Phys. Rev. B 102, 174505 (2020).
* Shimano and Tsuji [2020] R. Shimano and N. Tsuji, Higgs mode in superconductors, Annual Review of Condensed Matter Physics 11, 103 (2020).
* Giorgianni _et al._ [2019] F. Giorgianni, T. Cea, C. Vicario, C. P. Hauri, W. K. Withanage, X. Xi, and L. Benfatto, Leggett mode controlled by light pulses, Nature Physics 15, 341 (2019).
* Gabriele _et al._ [2021] F. Gabriele, M. Udina, and L. Benfatto, Non-linear terahertz driving of plasma waves in layered cuprates, Nature Communications 12, 10.1038/s41467-021-21041-6 (2021).
* Denny _et al._ [2015] S. Denny, S. Clark, Y. Laplace, A. Cavalleri, and D. Jaksch, Proposed parametric cooling of bilayer cuprate superconductors by terahertz excitation, Physical Review Letters 114, 10.1103/physrevlett.114.137001 (2015).
* Rajasekaran _et al._ [2016] S. Rajasekaran, E. Casandruc, Y. Laplace, D. Nicoletti, G. D. Gu, S. R. Clark, D. Jaksch, and A. Cavalleri, Parametric amplification of a superconducting plasma wave, Nature Physics 12, 1012 (2016).
* Dienst _et al._ [2013] A. Dienst, E. Casandruc, D. Fausti, L. Zhang, M. Eckstein, M. Hoffmann, V. Khanna, N. Dean, M. Gensch, S. Winnerl, W. Seidel, S. Pyon, T. Takayama, H. Takagi, and A. Cavalleri, Optical excitation of josephson plasma solitons in a cuprate superconductor, Nature Materials 12, 535 (2013).
* Rajasekaran _et al._ [2018] S. Rajasekaran, J. Okamoto, L. Mathey, M. Fechner, V. Thampy, G. D. Gu, and A. Cavalleri, Probing optically silent superfluid stripes in cuprates, Science 359, 575 (2018).
* Boyd [2008] R. W. Boyd, _Nonlinear Optics, Third Edition_ , 3rd ed. (Academic Press, Inc., USA, 2008).
* Harris _et al._ [1996] S. E. Harris, G. Y. Yin, A. Kasapi, M. Jain, and Z. F. Luo, Electromagnetically induced transparency, in _Coherence and Quantum Optics VII_ , edited by J. H. Eberly, L. Mandel, and E. Wolf (Springer US, Boston, MA, 1996) pp. 295–304.
* Limonov _et al._ [2017] M. F. Limonov, M. V. Rybin, A. N. Poddubny, and Y. S. Kivshar, Fano resonances in photonics, Nature Photonics 11, 543 (2017).
* Eckardt and Anisimovas [2015] A. Eckardt and E. Anisimovas, High - frequency approximation for periodically driven quantum systems from a floquet - space perspective, New Journal of Physics 17, 093039 (2015).
* Sugiura _et al._ [2019] S. Sugiura, E. A. Demler, M. Lukin, and D. Podolsky, Resonantly enhanced polariton wave mixing and floquet parametric instability, arXiv:1910.03582 (2019).
* Dolgirev _et al._ [2021] P. Dolgirev, M. H. Michael, and E. Demler, Multi-tonal floquet-fresnel equations \- unpublished (2021).
* Fisher [1983] R. A. Fisher, _Optical Phase Conjugation_ (Academic Press, San Diego, 1983).
* Zel’Dovich _et al._ [1985] B. Zel’Dovich, N. Pilipetsky, and V. Shkunov, _Principles of Phase Conjugation_ , Springer Series in Optical Sciences (Springer-Verlag Berlin Heidelberg, 1985).
* Tinkham [2004] M. Tinkham, _Introduction to Superconductivity_ (Dover Publications, 2004).
* Wooten [1972] F. Wooten, Chapter 2 - maxwell’s equations and the dielectric function, in _Optical Properties of Solids_, edited by F. Wooten (Academic Press, 1972) pp. 15–41.
* Roy and Devoret [2016] A. Roy and M. Devoret, Introduction to parametric amplification of quantum signals with josephson circuits, Comptes Rendus Physique 17, 740 (2016), quantum microwaves / Micro-ondes quantiques.
* Bulaevskii _et al._ [1994] L. N. Bulaevskii, M. Zamora, D. Baeriswyl, H. Beck, and J. R. Clem, Time - dependent equations for phase differences and a collective mode in josephson - coupled layered superconductors, Phys. Rev. B 50, 12831 (1994).
* Liu _et al._ [2020] B. Liu, M. Först, M. Fechner, D. Nicoletti, J. Porras, T. Loew, B. Keimer, and A. Cavalleri, Pump frequency resonances for light-induced incipient superconductivity in ${\mathrm{yba}}_{2}{\mathrm{cu}}_{3}{\mathrm{o}}_{6.5}$, Phys. Rev. X 10, 011053 (2020).
|
©2021 IEEE. Personal use of this material is permitted. Permission from IEEE
must be obtained for all other uses, in any current or future media, including
reprinting/republishing this material for advertising or promotional purposes,
creating new collective works, for resale or redistribution to servers or
lists, or reuse of any copyrighted component of this work in other works.
This work has been submitted to the IEEE for possible publication. Copyright
may be transferred without notice, after which this version may no longer be
accessible.
# Hamilton-Jacobi Equations for Two Classes of State-Constrained Zero-Sum
Games
Donggun Lee Student Member, IEEE and Claire J. Tomlin Fellow, IEEE This
research is supported by ONR under the BRC program in multibody control
systems, by DARPA under the Assured Autonomy program, and by NSF grant
#1837244. Donggun Lee is with the Department of Mechanical Engineering,
University of California, Berkeley, USA <EMAIL_ADDRESS>. Claire J.
Tomlin is with the Department of Electrical Engineering and Computer Sciences,
University of California, Berkeley, USA <EMAIL_ADDRESS>.
###### Abstract
This paper presents Hamilton-Jacobi (HJ) formulations for two classes of two-
player zero-sum games: one with a maximum cost value over time, and one with a
minimum cost value over time. In the zero-sum game setting, player A minimizes
the given cost while satisfying state constraints, and player B wants to
prevent player A’s success. For each class of problems, this paper presents
two HJ equations: one for time-varying dynamics, cost, and state constraint;
the other for time-invariant dynamics, cost, and state constraint. Utilizing
the HJ equations, the optimal control for each player is analyzed, and a
numerical algorithm is presented to compute the solution to the HJ equations.
A two-dimensional water system is introduced as an example to demonstrate the
proposed HJ framework.
## 1 Introduction
In two-player zero-sum games, one player’s control signal minimizes a cost
subject to a state constraint, while the second player’s control signal tries
either to maximize the cost or to force a violation of the state constraint.
An optimal control problem may be viewed as a special case of the zero-sum
game in which only a single control signal, minimizing the given cost while
satisfying the constraint, is to be determined.
The Hamilton-Jacobi (HJ) partial differential equation (PDE) can be used to
represent zero-sum games for dynamical systems. The HJ formulation includes a
cost function in the form of the integration of a stage cost and a terminal
cost, nonlinear dynamics, control constraints, and state constraints.
The zero-sum game can be classified according to 1) whether the terminal time
is a given constant or a variable to be determined, and 2) whether or not
state constraints exist. If the problem is state-unconstrained and the
terminal time is a given constant, the Hamilton-Jacobi-Isaacs (HJI) PDE [1]
applies. For the state-constrained problem where the terminal time is given,
[2] presents the corresponding HJ equation. For problems where the terminal
time is a variable to be determined, [3] deals with the state-unconstrained
problem where the stage cost is zero, and [4, 5] deal with the zero-stage-cost
and state-constrained problems. This paper generalizes the previous work to
deal with the case of non-zero stage-cost and state constraint.
This paper proposes HJ equations for two classes of state-constrained problems
where the terminal time is a variable to be determined and the stage cost is
non-zero. In the first class of problems, player A wants to minimize the
maximum cost over time while satisfying the state constraint, and player B
wants to prevent player A’s success. This class of problems can be interpreted
as a robust control problem on the time and disturbances that optimizes the
maximum cost over time with respect to the worst disturbances. In the second
class of problems, player A wants to minimize the minimum cost over time while
satisfying the state constraint, and player B again wants to prevent player
A’s success. This class of problems can be interpreted as another robust
control problem on the disturbances that optimizes the minimum cost over time
with respect to the worst disturbances.
The proposed HJ equations can generally deal with both time-varying and time-
invariant dynamics, cost, and state constraint. Furthermore, this paper
presents additional HJ equations equivalent to the proposed HJ equations for
the time-invariant case.
### 1.1 Contribution
This paper presents four HJ equations for zero-sum games: two classes, and
time-varying and time-invariant. Among the four HJ equations, three equations
are proposed by this paper, and the other one is presented in [6] and reviewed
here for completeness. This paper also provides an analysis of the optimal
control signal for each player, a numerical algorithm to compute the solutions
to the proposed HJ equations, and a practical example.
### 1.2 Organization
The organization of this paper is as follows. Section 2 presents a
mathematical formulation for two classes of state-constrained zero-sum games.
Section 3 presents the HJ equations for the first class of problems, for both
the time-varying and time-invariant cases. Section 4 presents the HJ equations for the second
class of problems for both the time-varying and time-invariant cases. Section
5 presents analysis for an optimal control signal based on the solution to the
HJ equations. Section 6 presents a numerical algorithm to compute the solution
to the HJ equations for each class of problem. Section 7 provides a practical
example where our HJ formulation can be utilized, and Section 8 concludes this
paper. Proofs are detailed in the Appendices.
## 2 Two Classes of State-Constrained Zero-Sum Games
We first present the two classes of two-player zero-sum games, called Problems
1 and 2. Consider a dynamical system:
$\displaystyle\dot{\mathrm{x}}(s)=f(s,\mathrm{x}(s),\alpha(s),\beta(s)),s\in[t,T],\text{
and }\mathrm{x}(t)=x,$ (1)
where $(t,x)$ are the initial time and state,
$\mathrm{x}\mathrel{\mathop{\mathchar
58\relax}}[t,T]\rightarrow\mathbb{R}^{n}$ is the state trajectory,
$f\mathrel{\mathop{\mathchar 58\relax}}[0,T]\times\mathbb{R}^{n}\times A\times
B\rightarrow\mathbb{R}^{n}$ is the dynamics,
$A\subset\mathbb{R}^{m_{a}},B\subset\mathbb{R}^{m_{b}}$ are the control
constraints, $\alpha\in\mathcal{A}(t),\beta\in\mathcal{B}(t)$ are the control
signals, where player A controls $\alpha$ and player B controls $\beta$,
and the sets of measurable control signals are
$\displaystyle\begin{split}&\mathcal{A}(t)\coloneqq\\{\alpha\mathrel{\mathop{\mathchar
58\relax}}[t,T]\rightarrow A~{}|~{}\|\alpha\|_{L^{\infty}(t,T)}<\infty\\},\\\
&\mathcal{B}(t)\coloneqq\\{\beta\mathrel{\mathop{\mathchar
58\relax}}[t,T]\rightarrow
B~{}|~{}\|\beta\|_{L^{\infty}(t,T)}<\infty\\}.\end{split}$ (2)
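As an illustration, the dynamics (1) can be rolled out numerically for given control signals. The sketch below uses forward Euler with hypothetical scalar dynamics $f(s,x,a,b)=-x+a+b$ and constant controls; the function names, step count, and choice of integrator are ours, not the paper's.

```python
import numpy as np

def simulate(f, x0, alpha, beta, t, T, n_steps=1000):
    """Forward-Euler rollout of x'(s) = f(s, x(s), alpha(s), beta(s)) on [t, T]."""
    s_grid = np.linspace(t, T, n_steps + 1)
    h = (T - t) / n_steps
    x = np.array(x0, dtype=float)
    traj = [x.copy()]
    for s in s_grid[:-1]:
        x = x + h * np.asarray(f(s, x, alpha(s), beta(s)))
        traj.append(x.copy())
    return s_grid, np.array(traj)

# Hypothetical example: f = -x + a + b with constant controls a = 1, b = -1,
# so the effective drift is -x and the exact solution is x(s) = x0 * exp(-s).
f = lambda s, x, a, b: -x + a + b
s_grid, traj = simulate(f, [2.0], lambda s: 1.0, lambda s: -1.0, t=0.0, T=1.0)
# traj[-1] approximates 2 * exp(-1)
```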
In each zero-sum game, we specify each player’s control signal or strategy:
player A wants to minimize the cost under the state constraint, and player B
wants to prevent player A’s success, although the cost is defined in different
ways for Problems 1 and 2. In each problem, we introduce two value functions,
depending on which player plays first.
###### Problem 1
For given initial time and state $(t,x)$, solve
$\displaystyle\begin{split}&\vartheta_{1}^{+}(t,x)\coloneqq\sup_{\delta\in\Delta(t)}\inf_{\alpha\in\mathcal{A}(t)}\\\
&\quad\max_{\tau\in[t,T]}\int_{t}^{\tau}L(s,\mathrm{x}(s),\alpha(s),\delta[\alpha](s))ds+g(\tau,\mathrm{x}(\tau)),\end{split}$
(3) $\displaystyle\text{subject to }\quad c(s,\mathrm{x}(s))\leq 0,\quad
s\in[t,\tau],$ (4)
where $\mathrm{x}$ solves (1) for $(\alpha,\delta[\alpha])$; and solve
$\displaystyle\begin{split}&\vartheta_{1}^{-}(t,x)\coloneqq\inf_{\gamma\in\Gamma(t)}\sup_{\beta\in\mathcal{B}(t)}\\\
&\quad\max_{\tau\in[t,T]}\int_{t}^{\tau}L(s,\mathrm{x}(s),\gamma[\beta](s),\beta(s))ds+g(\tau,\mathrm{x}(\tau)),\end{split}$
(5) $\displaystyle\text{subject to }\quad c(s,\mathrm{x}(s))\leq 0,\quad
s\in[t,\tau],$ (6)
where $\mathrm{x}$ solves (1) for $(\gamma[\beta],\beta)$.
$\Delta(t)$ is a set of non-anticipative strategies for player B, and
$\Gamma(t)$ is a set of non-anticipative strategies for player A. A
non-anticipative strategy outputs a control signal for the second player as a
reaction to the first player’s control signal, without using future
information. Non-anticipative strategies were introduced by Elliott and
Kalton [7]:
$\displaystyle\Delta(t)\coloneqq$
$\displaystyle\\{\delta\mathrel{\mathop{\mathchar
58\relax}}\mathcal{A}(t)\rightarrow\mathcal{B}(t)~{}|~{}\forall
s\in[t,\tau]\text{ and }\alpha,\bar{\alpha}\in\mathcal{A}(t),$
$\displaystyle\text{ if }\alpha(\tau)=\bar{\alpha}(\tau)\text{ a.e.
}\tau\in[t,s],$ $\displaystyle\text{ then
}\delta[\alpha](\tau)=\delta[\bar{\alpha}](\tau)\text{ a.e. }\tau\in[t,s]\\}.$
(7) $\displaystyle\Gamma(t)\coloneqq$
$\displaystyle\\{\gamma\mathrel{\mathop{\mathchar
58\relax}}\mathcal{B}(t)\rightarrow\mathcal{A}(t)~{}|~{}\forall
s\in[t,\tau],\beta,\bar{\beta}\in\mathcal{B}(t),$ $\displaystyle\text{ if
}\beta(\tau)=\bar{\beta}(\tau)\text{ a.e. }\tau\in[t,s],$ $\displaystyle\text{
then }\gamma[\beta](\tau)=\gamma[\bar{\beta}](\tau)\text{ a.e.
}\tau\in[t,s]\\}.$ (8)
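In discrete time, non-anticipativity is simply causality: the strategy's output at step $k$ may depend only on the opponent's inputs up to step $k$. The toy strategy below (a running mean clipped into a hypothetical control set $B=[-1,1]$; all names are ours) satisfies the defining property in (7): input signals that agree up to some step produce outputs that agree up to that step.

```python
import numpy as np

def make_nonanticipative_delta(b_min=-1.0, b_max=1.0):
    """A discrete-time stand-in for delta in Delta(t): the output at step k
    depends only on the opponent's samples alpha[0..k] (causality)."""
    def delta(alpha_samples):
        # Running mean of player A's inputs so far, clipped into B = [b_min, b_max].
        out = np.cumsum(alpha_samples) / np.arange(1, len(alpha_samples) + 1)
        return np.clip(out, b_min, b_max)
    return delta

delta = make_nonanticipative_delta()
a1 = np.array([1.0, 1.0, 0.0, 0.0])
a2 = np.array([1.0, 1.0, 5.0, 5.0])   # agrees with a1 only on the first two steps
b1, b2 = delta(a1), delta(a2)
# Causality: the outputs agree wherever the inputs have agreed so far.
assert np.allclose(b1[:2], b2[:2])
```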
###### Problem 2
For given initial time and state $(t,x)$, solve
$\displaystyle\begin{split}&\vartheta_{2}^{+}(t,x)\coloneqq\sup_{\delta\in\Delta(t)}\inf_{\alpha\in\mathcal{A}(t)}\\\
&\quad\min_{\tau\in[t,T]}\int_{t}^{\tau}L(s,\mathrm{x}(s),\alpha(s),\delta[\alpha](s))ds+g(\tau,\mathrm{x}(\tau)),\end{split}$
(9) $\displaystyle\text{subject to }\quad c(s,\mathrm{x}(s))\leq 0,\quad
s\in[t,\tau],$ (10)
where $\mathrm{x}$ solves (1) for $(\alpha,\delta[\alpha])$; and solve
$\displaystyle\begin{split}&\vartheta_{2}^{-}(t,x)\coloneqq\inf_{\gamma\in\Gamma(t)}\sup_{\beta\in\mathcal{B}(t)}\\\
&\quad\min_{\tau\in[t,T]}\int_{t}^{\tau}L(s,\mathrm{x}(s),\gamma[\beta](s),\beta(s))ds+g(\tau,\mathrm{x}(\tau)),\end{split}$
(11) $\displaystyle\text{subject to }\quad c(s,\mathrm{x}(s))\leq 0,\quad
s\in[t,\tau],$ (12)
where $\mathrm{x}$ solves (1) for $(\gamma[\beta],\beta)$.
For both problems, $L\mathrel{\mathop{\mathchar
58\relax}}[t,T]\times\mathbb{R}^{n}\times A\times B\rightarrow\mathbb{R}$ is
the stage cost, $g\mathrel{\mathop{\mathchar
58\relax}}\mathbb{R}\times\mathbb{R}^{n}\rightarrow\mathbb{R}$ is the terminal
cost, $f\mathrel{\mathop{\mathchar 58\relax}}[t,T]\times\mathbb{R}^{n}\times
A\times B\rightarrow\mathbb{R}^{n}$ is the system dynamics, and
$c\mathrel{\mathop{\mathchar
58\relax}}[t,T]\times\mathbb{R}^{n}\rightarrow\mathbb{R}$ is the state
constraint function.
The difference between $\vartheta_{i}^{+}$ and $\vartheta_{i}^{-}$ ($i=1,2$)
is play order. In $\vartheta_{i}^{+}(t,x)$, at each time $s\in[t,T]$, player A
first plays $\alpha(s)$, and then player B reacts by following its own
strategy $\delta[\alpha](s)$. Despite this play order at each time, the choice
of player B’s strategy comes first since it should be chosen without
information about player A’s control signal. In other words, player B first
chooses its strategy, and then player A chooses its control signal. In
$\vartheta_{i}^{-}(t,x)$, at each time $s$, player B first plays $\beta(s)$,
and then player A reacts with its strategy $\gamma[\beta](s)$. Similarly to
$\vartheta_{i}^{+}(t,x)$, in $\vartheta_{i}^{-}(t,x)$, player A first chooses
its strategy, and then player B chooses its control signal.
Problems 1 and 2 are representative of many practical problems. For Problem 1,
consider a water system in which player A controls the water level of pond 1,
which is connected to pond 2, and player B is precipitation. Player A needs to
minimize the highest water level of pond 1 over time while satisfying
constraints on the water levels of ponds 1 and 2 under the worst-case
precipitation. For Problem 2, consider a car that tries to change lanes while
avoiding collision with other cars. Here, the cost is the distance to the goal
lane, and the car wants to complete the lane change at some time in the given
time interval, while the other cars may interfere with the maneuver.
This paper assumes the following.
###### Assumption 1 (Lipschitz continuity and compactness)
1. 1.
$A$ and $B$ are compact;
2. 2.
$f\mathrel{\mathop{\mathchar 58\relax}}[0,T]\times\mathbb{R}^{n}\times A\times
B\rightarrow\mathbb{R}^{n}$, $f=f(t,x,a,b)$ is Lipschitz continuous in $(t,x)$
for each $(a,b)\in A\times B$;
3. 3.
the stage cost $L\mathrel{\mathop{\mathchar
58\relax}}[0,T]\times\mathbb{R}^{n}\times A\times B\rightarrow\mathbb{R}$,
$L=L(t,x,a,b)$ is Lipschitz continuous in $(t,x)$ for each $(a,b)\in A\times
B$;
4. 4.
for all $(t,x)\in[0,T]\times\mathbb{R}^{n}$, $\\{f(t,x,a,b)~{}|~{}a\in A,b\in
B\\}$ and $\\{L(t,x,a,b)~{}|~{}a\in A,b\in B\\}$ are compact and convex;
5. 5.
the terminal cost $g\mathrel{\mathop{\mathchar
58\relax}}[0,T]\times\mathbb{R}^{n}\rightarrow\mathbb{R}$, $g=g(t,x)$ is
Lipschitz continuous in $(t,x)$;
6. 6.
the state constraint $c\mathrel{\mathop{\mathchar
58\relax}}[0,T]\times\mathbb{R}^{n}\rightarrow\mathbb{R}$, $c=c(t,x)$ is
Lipschitz continuous in $(t,x)$;
7. 7.
the stage cost ($L$) and the terminal cost ($g$) are bounded below.
## 3 Hamilton-Jacobi Equations for Problem 1
### 3.1 HJ equation for Problem 1 (time-varying case)
In this subsection, we derive an HJ equation for Problem 1
($\vartheta_{1}^{\pm}$). Unfortunately, for some initial time and state
$(t,x)$, there is no control $\alpha$ (or strategy $\gamma$) of player A that
satisfies the state constraint for all strategies $\delta$ of player B (or
control signal $\beta$). In this case, $\vartheta_{1}^{\pm}(t,x)$ is infinity.
Thus, $\vartheta_{1}^{\pm}$ is neither continuous nor differentiable in
$(0,T)\times\mathbb{R}^{n}$.
To overcome this issue, we utilize an additional variable $z\in\mathbb{R}$ to
define continuous value functions $V_{1}^{\pm}$ in (13) and (14) that combine
the cost $\vartheta_{1}^{\pm}$ in (3) or (5), and the constraint in (4) or
(6). We call this method the augmented-$z$ method. This method has been
utilized to handle state constraints to solve other HJ problems [2, 6].
$V_{1}^{\pm}$ is well-defined in $[0,T]\times\mathbb{R}^{n}\times\mathbb{R}$.
$\displaystyle
V_{1}^{+}(t,x,z)\coloneqq\sup_{\delta\in\Delta(t)}\inf_{\alpha\in\mathcal{A}(t)}J_{1}(t,x,z,\alpha,\delta[\alpha]),$
(13) $\displaystyle
V_{1}^{-}(t,x,z)\coloneqq\inf_{\gamma\in\Gamma(t)}\sup_{\beta\in\mathcal{B}(t)}J_{1}(t,x,z,\gamma[\beta],\beta),$
(14)
where cost $J_{1}\mathrel{\mathop{\mathchar
58\relax}}(t,x,z,\alpha,\beta)\rightarrow\mathbb{R}$ is defined as follows:
$\displaystyle\begin{split}&J_{1}(t,x,z,\alpha,\beta)\coloneqq\max_{\tau\in[t,T]}\max\bigg{\\{}\max_{s\in[t,\tau]}c(s,\mathrm{x}(s)),\\\
&\quad\quad\int_{t}^{\tau}L(s,\mathrm{x}(s),\alpha(s),\beta(s))ds+g(\tau,\mathrm{x}(\tau))-z\bigg{\\}},\end{split}$
(15)
where $\mathrm{x}$ solves (1). Define the auxiliary state trajectory
$\mathrm{z}$ solving
$\displaystyle\dot{\mathrm{z}}(s)=-L(s,\mathrm{x}(s),\alpha(s),\beta(s)),s\in[t,T],\text{
and }\mathrm{z}(t)=z.$ (16)
Then, (1) and (16) are the joint ODEs whose solution is the augmented state
trajectories: $(\mathrm{x},\mathrm{z})\mathrel{\mathop{\mathchar
58\relax}}[t,T]\rightarrow\mathbb{R}^{n+1}$
$\displaystyle\small\begin{bmatrix}\dot{\mathrm{x}}(s)\\\
\dot{\mathrm{z}}(s)\end{bmatrix}=\begin{bmatrix}f(s,\mathrm{x}(s),\alpha(s),\beta(s))\\\
-L(s,\mathrm{x}(s),\alpha(s),\beta(s))\end{bmatrix},s\in[t,T],\begin{bmatrix}\mathrm{x}(t)\\\
\mathrm{z}(t)\end{bmatrix}=\begin{bmatrix}x\\\ z\end{bmatrix}.$ (17)
Then, $J_{1}$ in (15) becomes
$\displaystyle\begin{split}J_{1}=&\max_{\tau\in[t,T]}\max\Big{\\{}\max_{s\in[t,\tau]}c(s,\mathrm{x}(s)),g(\tau,\mathrm{x}(\tau))-\mathrm{z}(\tau)\Big{\\}}\\\
=&\max\Big{\\{}\max_{s\in[t,T]}c(s,\mathrm{x}(s)),\max_{\tau\in[t,T]}g(\tau,\mathrm{x}(\tau))-\mathrm{z}(\tau)\Big{\\}}.\end{split}$
(18)
The last equality is derived by the distributive property of the maximum
operations.
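To make (17) and (18) concrete, the sketch below integrates the augmented state $(\mathrm{x},\mathrm{z})$ with forward Euler and tracks the two running maxima in (18). The scalar dynamics, costs, and constraint used for illustration are hypothetical choices of ours.

```python
def J1(f, L, g, c, x0, z0, alpha, beta, t, T, n=1000):
    """Evaluate the cost (18) along the augmented trajectory (17),
    using forward Euler (a sketch; any ODE integrator would do)."""
    h = (T - t) / n
    x, z, s = float(x0), float(z0), t
    c_max = c(s, x)          # running max of c(s, x(s))
    gz_max = g(s, x) - z     # running max of g(tau, x(tau)) - z(tau)
    for _ in range(n):
        a, b = alpha(s), beta(s)
        dx, dz = f(s, x, a, b), -L(s, x, a, b)
        x, z, s = x + h * dx, z + h * dz, s + h
        c_max = max(c_max, c(s, x))
        gz_max = max(gz_max, g(s, x) - z)
    return max(c_max, gz_max)

# Hypothetical scalar example: f = a + b, L = a^2, g(s, x) = x, c(s, x) = x - 2,
# with alpha = 1, beta = 0, and (x0, z0) = (0, 0) on [0, 1].  Then x(s) = s and
# z(s) = -s, so max_s c = -1 and max_tau (g - z) = 2, giving J1 = 2.
val = J1(lambda s, x, a, b: a + b, lambda s, x, a, b: a * a,
         lambda s, x: x, lambda s, x: x - 2.0,
         0.0, 0.0, lambda s: 1.0, lambda s: 0.0, 0.0, 1.0)
```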
Lemma 1 shows that $\vartheta_{1}^{\pm}$ can be found if $V_{1}^{\pm}$ are
known. For initial time and state $(t,x)$ for which there is no control or
strategy of player A such that the state constraint ($c(s,\mathrm{x}(s))\leq
0,s\in[t,T]$) is satisfied for player B’s best control signal or strategy,
$V_{1}^{\pm}(t,x,z)$ is always greater than 0 for all $z\in\mathbb{R}$. In
this case, Lemma 1 implies that $\vartheta_{1}^{\pm}(t,x)$ is infinity.
###### Lemma 1 (Equivalence of two value functions)
Suppose Assumption 1 holds. For all $(t,x)\in[0,T]\times\mathbb{R}^{n}$,
$\vartheta_{1}^{+}$ ((3) subject to (4)), $\vartheta_{1}^{-}$ ((5) subject to
(6)), $V_{1}^{+}$ in (13), and $V_{1}^{-}$ in (14) have the following
relationship.
$\displaystyle\vartheta_{1}^{\pm}(t,x)=\min z\text{ subject to
}V_{1}^{\pm}(t,x,z)\leq 0.$ (19)
This implies that
$\displaystyle\begin{split}\vartheta_{1}^{+}(t,x)&=\sup_{\delta\in\Delta(t)}\inf_{\alpha\in\mathcal{A}(t)}\max_{\tau\in[t,T]}\\\
&\int_{t}^{\tau}L(s,\mathrm{x}(s),\alpha(s),\delta[\alpha](s))ds+g(\tau,\mathrm{x}(\tau)),\end{split}$
(20) $\displaystyle\text{subject to }\quad c(s,\mathrm{x}(s))\leq 0,\quad
s\in[t,T],$ (21)
where $\mathrm{x}$ solves (1) for $(\alpha,\delta[\alpha])$, and
$\displaystyle\begin{split}\vartheta_{1}^{-}(t,x)&=\inf_{\gamma\in\Gamma(t)}\sup_{\beta\in\mathcal{B}(t)}\max_{\tau\in[t,T]}\\\
&\int_{t}^{\tau}L(s,\mathrm{x}(s),\gamma[\beta](s),\beta(s))ds+g(\tau,\mathrm{x}(\tau)),\end{split}$
(22) $\displaystyle\text{subject to }\quad c(s,\mathrm{x}(s))\leq 0,\quad
s\in[t,T],$ (23)
where $\mathrm{x}$ solves (1) for $(\gamma[\beta],\beta)$.
Proof. See Appendix .1.
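Given a way to evaluate $V_{1}^{\pm}(t,x,\cdot)$, the relation (19) can be realized numerically by a one-dimensional search in $z$. Since $z$ enters the cost (15) through $-z$, $V_{1}^{\pm}$ is nonincreasing in $z$, so bisection applies. The sketch below assumes a bracket containing the minimizer; the toy $V$ used for illustration is ours.

```python
def theta_from_V(V, t, x, z_lo=-10.0, z_hi=10.0, tol=1e-8):
    """Recover theta(t, x) = min { z : V(t, x, z) <= 0 } by bisection,
    exploiting that V is nonincreasing in z (z enters (15) through -z).
    Assumes the bracket [z_lo, z_hi] contains the minimizer."""
    if V(t, x, z_hi) > 0:
        return float('inf')  # no feasible z: the state constraint cannot be met
    if V(t, x, z_lo) <= 0:
        return z_lo          # the entire bracket is already feasible
    while z_hi - z_lo > tol:
        mid = 0.5 * (z_lo + z_hi)
        if V(t, x, mid) <= 0:
            z_hi = mid
        else:
            z_lo = mid
    return z_hi

# Toy stand-in for V (ours): V(t, x, z) = max(-1, 3 - z), whose minimizer is z = 3.
V = lambda t, x, z: max(-1.0, 3.0 - z)
```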
The rest of this subsection focuses on the derivation of the corresponding HJ
equation for $V_{1}^{\pm}$. The HJ equation is based on the principle of
dynamic programming in Lemma 2.
###### Lemma 2 (Optimality condition)
Fix $(t,x,z)\in[0,T]\times\mathbb{R}^{n}\times\mathbb{R}$. Consider a small
step $h>0$ such that $t+h\leq T$. Then $V_{1}^{+}$ in (13) has the following property:
$\displaystyle\begin{split}&V_{1}^{+}(t,x,z)=\sup_{\delta\in\Delta(t)}\inf_{\alpha\in\mathcal{A}(t)}\max\Big{\\{}\max_{s\in[t,t+h]}c(s,\mathrm{x}(s)),\\\
&\max_{s\in[t,t+h]}g(s,\mathrm{x}(s))-\mathrm{z}(s),V_{1}^{+}(t+h,\mathrm{x}(t+h),\mathrm{z}(t+h))\Big{\\}},\end{split}$
(24)
where $(\mathrm{x},\mathrm{z})$ solves (17) for $(\alpha,\delta[\alpha])$.
Similarly, for $V_{1}^{-}$ (14),
$\displaystyle\begin{split}&V_{1}^{-}(t,x,z)=\inf_{\gamma\in\Gamma(t)}\sup_{\beta\in\mathcal{B}(t)}\max\Big{\\{}\max_{s\in[t,t+h]}c(s,\mathrm{x}(s)),\\\
&\max_{s\in[t,t+h]}g(s,\mathrm{x}(s))-\mathrm{z}(s),V_{1}^{-}(t+h,\mathrm{x}(t+h),\mathrm{z}(t+h))\Big{\\}},\end{split}$
(25)
where $(\mathrm{x},\mathrm{z})$ solves (17) for $(\gamma[\beta],\beta)$.
Proof. See Appendix .2.
Theorem 1 presents the corresponding HJ equations for $V_{1}^{\pm}$ in (13)
and (14) using viscosity theory. Intuitively, the HJ equation in Theorem 1 is
derived as $h$ in Lemma 2 converges to zero.
###### Theorem 1
(HJ equation for Problem 1) For all
$(t,x,z)\in[0,T]\times\mathbb{R}^{n}\times\mathbb{R}$, $V_{1}^{\pm}$ in (13)
and (14) is the unique viscosity solution to the HJ equation:
$\displaystyle\begin{split}\max\Big{\\{}c(t,x)-&V_{1}^{\pm}(t,x,z),g(t,x)-z-V_{1}^{\pm}(t,x,z),\\\
&V_{1,t}^{\pm}-\bar{H}^{\pm}(t,x,z,D_{x}V_{1}^{\pm},D_{z}V_{1}^{\pm})\Big{\\}}=0\end{split}$
(26)
in $(0,T)\times\mathbb{R}^{n}\times\mathbb{R}$, where
$\bar{H}^{\pm}\mathrel{\mathop{\mathchar
58\relax}}[0,T]\times\mathbb{R}^{n}\times\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}\rightarrow\mathbb{R}$
$\displaystyle\bar{H}^{+}(t,x,z,p,q)\coloneqq\max_{a\in A}\min_{b\in B}-p\cdot
f(t,x,a,b)+qL(t,x,a,b),$ (27)
$\displaystyle\bar{H}^{-}(t,x,z,p,q)\coloneqq\min_{b\in B}\max_{a\in A}-p\cdot
f(t,x,a,b)+qL(t,x,a,b),$ (28)
and
$\displaystyle V_{1}^{\pm}(T,x,z)=\max\\{c(T,x),g(T,x)-z\\}$ (29)
on $\\{t=T\\}\times\mathbb{R}^{n}\times\mathbb{R}$. Denote
$V_{1,t}^{\pm}=\frac{\partial V_{1}^{\pm}}{\partial t}$,
$D_{x}V_{1}^{\pm}=\frac{\partial V_{1}^{\pm}}{\partial x}$, and
$D_{z}V_{1}^{\pm}=\frac{\partial V_{1}^{\pm}}{\partial z}$.
Proof. See Appendix .3.
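The Hamiltonians (27) and (28) can be approximated by replacing the compact control sets $A$ and $B$ with finite grids and taking the discrete max-min (or min-max). The sketch below does this for hypothetical scalar $f$ and $L$ of our choosing; by the minimax inequality, the two values satisfy $\bar{H}^{+}\leq\bar{H}^{-}$.

```python
import numpy as np

def H_plus(t, x, z, p, q, f, L, A_grid, B_grid):
    """Discretized H-bar^+ of (27): max over a in A of min over b in B of
    -p*f(t,x,a,b) + q*L(t,x,a,b), with A and B replaced by finite grids."""
    vals = np.array([[-p * f(t, x, a, b) + q * L(t, x, a, b)
                      for b in B_grid] for a in A_grid])
    return vals.min(axis=1).max()

def H_minus(t, x, z, p, q, f, L, A_grid, B_grid):
    """Discretized H-bar^- of (28): the min/max order is swapped."""
    vals = np.array([[-p * f(t, x, a, b) + q * L(t, x, a, b)
                      for b in B_grid] for a in A_grid])
    return vals.max(axis=0).min()

# Hypothetical scalar example: f = a + b, L = a*b, A = B = [-1, 1].
f = lambda t, x, a, b: a + b
L = lambda t, x, a, b: a * b
grid = np.linspace(-1.0, 1.0, 21)
hp = H_plus(0.0, 0.0, 0.0, 1.0, 1.0, f, L, grid, grid)
hm = H_minus(0.0, 0.0, 0.0, 1.0, 1.0, f, L, grid, grid)
# Minimax inequality: hp <= hm (here the game has a value and both equal -1).
```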
### 3.2 HJ equation for Problem 1 (time-invariant case)
We define the problem as time-invariant if the stage cost, terminal cost,
dynamics, and state constraints are all independent of time.
In this section, we convert $\vartheta_{1}^{\pm}$ ((3) subject to (4), and (5)
subject to (6)) to a fixed-terminal-time problem for the time-invariant case
of Problem 1, which allows us to utilize methods for fixed-terminal-time
problems [2]. In a fixed-terminal-time problem, the players’ optimal control
signals have to be determined, while the terminal time is given rather than
optimized over.
Problem 1 is converted to a fixed-terminal-time problem by introducing a
freezing control signal $\mu\mathrel{\mathop{\mathchar
58\relax}}[t,T]\rightarrow[0,1]$ into the dynamics, along with the set of
freezing control signals:
$\displaystyle\dot{\mathrm{x}}(s)=f(\mathrm{x}(s),\alpha(s),\beta(s))\mu(s),s\in[t,T],\mathrm{x}(t)=x,$
(30) $\displaystyle\mathcal{M}(t)\coloneqq\\{\mu\mathrel{\mathop{\mathchar
58\relax}}[t,T]\rightarrow[0,1]~{}|~{}\|\mu\|_{L^{\infty}(t,T)}<\infty\\}.$
(31)
The freezing control signal modulates the contribution of the two players to
the system. For example, $\mu(s)=0$ implies that the state stops at $s$, and
the two players do not contribute to the system; on the other hand, $\mu(s)=1$
allows the state to evolve under the players’ control signals. The
maximization over $\tau$ in Problem 1 can be replaced by a maximization over
the freezing control signal, since the freezing control can eliminate the
contribution of the two players after the maximizing terminal time.
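The effect of the freezing control can be seen in a short numerical sketch of the frozen dynamics (30) (forward Euler; the scalar dynamics and all names are hypothetical choices of ours): setting $\mu=0$ from some time on holds the state at its value at the freezing time.

```python
def simulate_frozen(f, x0, alpha, beta, mu, t, T, n=1000):
    """Forward Euler for the frozen dynamics (30):
    x'(s) = f(x, a, b) * mu(s); mu(s) = 0 stops the state."""
    h = (T - t) / n
    x, s = float(x0), t
    for _ in range(n):
        x += h * f(x, alpha(s), beta(s)) * mu(s)
        s += h
    return x

# Hypothetical example: f = a + b with a = 1, b = 0.  Freezing at s = 0.5
# halts the trajectory, so x(T) equals the state at the freezing time.
f = lambda x, a, b: a + b
x_full = simulate_frozen(f, 0.0, lambda s: 1.0, lambda s: 0.0,
                         lambda s: 1.0, 0.0, 1.0)
x_frozen = simulate_frozen(f, 0.0, lambda s: 1.0, lambda s: 0.0,
                           lambda s: 1.0 if s < 0.5 else 0.0, 0.0, 1.0)
# x_full is approximately 1.0; x_frozen is approximately 0.5
```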
We present the fixed-terminal-time problems below:
$\displaystyle\begin{split}\tilde{\vartheta}_{1}^{+}(t&,x)\coloneqq\sup_{\delta\in\Delta(t),\nu_{A}\in\textrm{N}_{A}(t)}\inf_{\alpha\in\mathcal{A}(t)}\\\
&\int_{t}^{T}L(\mathrm{x}(s),\alpha(s),\delta[\alpha](s))\nu_{A}[\alpha](s)ds+g(\mathrm{x}(T)),\end{split}$
(32) $\displaystyle\quad\quad\quad\text{subject to }c(\mathrm{x}(s))\leq
0,s\in[t,T],$ (33)
where $\mathrm{x}$ solves (30) for $(\alpha,\delta[\alpha],\nu_{A}[\alpha])$;
$\displaystyle\begin{split}\tilde{\vartheta}_{1}^{-}(t&,x)\coloneqq\inf_{\tilde{\gamma}\in\tilde{\Gamma}(t)}\sup_{\beta\in\mathcal{B}(t),\mu\in\mathcal{M}(t)}\\\
&\int_{t}^{T}L(\mathrm{x}(s),\tilde{\gamma}[\beta,\mu](s),\beta(s))\mu(s)ds+g(\mathrm{x}(T)),\end{split}$
(34) $\displaystyle\quad\quad\quad\text{subject to }c(\mathrm{x}(s))\leq
0,s\in[t,T],$ (35)
where $\mathrm{x}$ solves (30) for $(\tilde{\gamma}[\beta,\mu],\beta,\mu)$.
$\textrm{N}_{A}$ is a set of non-anticipative strategies mapping player A’s
control signal to a freezing control signal, and $\tilde{\Gamma}$ is a set of
non-anticipative strategies mapping player B’s control signal and the freezing
control signal to player A’s control signal:
$\displaystyle\textrm{N}_{A}(t)\coloneqq\\{\nu_{A}\mathrel{\mathop{\mathchar
58\relax}}\mathcal{A}(t)\rightarrow\mathcal{M}(t)~{}|~{}\forall
s\in[t,\tau],\alpha,\bar{\alpha}\in\mathcal{A}(t),$
$\displaystyle~{}~{}~{}\text{if }\alpha(\tau)=\bar{\alpha}(\tau)\text{ a.e.
}\tau\in[t,s],$ $\displaystyle~{}~{}~{}\text{then
}\nu_{A}[\alpha](\tau)=\nu_{A}[\bar{\alpha}](\tau)\text{ a.e.
}\tau\in[t,s]\\},$ (36)
$\displaystyle\tilde{\Gamma}(t)\coloneqq\\{\tilde{\gamma}\mathrel{\mathop{\mathchar
58\relax}}\mathcal{B}(t)\times\mathcal{M}(t)\rightarrow\mathcal{A}(t)~{}|~{}\forall
s\in[t,\tau],\beta,\bar{\beta}\in\mathcal{B}(t),$
$\displaystyle~{}~{}~{}\mu,\bar{\mu}\in{\mathcal{M}}(t),\text{if
}\beta(\tau)=\bar{\beta}(\tau),\mu(\tau)=\bar{\mu}(\tau)\text{ a.e.
}\tau\in[t,s],$ $\displaystyle~{}~{}~{}\text{then
}\tilde{\gamma}[\beta,\mu](\tau)=\tilde{\gamma}[\bar{\beta},\bar{\mu}](\tau)\text{
a.e. }\tau\in[t,s]\\}.$ (37)
After introducing the auxiliary variable $z\in\mathbb{R}$, define cost
$\tilde{J}$ by combining the cost and the constraint of
$\tilde{\vartheta}_{1}^{\pm}$:
$\displaystyle\begin{split}&\tilde{J}(t,x,z,\alpha,\beta,\mu)\coloneqq\max\big{\\{}\max_{s\in[t,T]}c(\mathrm{x}(s)),g(\mathrm{x}(T))-\mathrm{z}(T)\big{\\}},\end{split}$
(38)
where $(\mathrm{x},\mathrm{z})$ solves, for $s\in[t,T]$,
$\displaystyle\begin{split}&\begin{bmatrix}\dot{\mathrm{x}}(s)\\\
\dot{\mathrm{z}}(s)\end{bmatrix}=\begin{bmatrix}f(\mathrm{x}(s),\alpha(s),\beta(s))\\\
-L(\mathrm{x}(s),\alpha(s),\beta(s))\end{bmatrix}\mu(s),~{}\begin{bmatrix}\mathrm{x}(t)\\\
\mathrm{z}(t)\end{bmatrix}=\begin{bmatrix}x\\\ z\end{bmatrix}.\end{split}$
(39)
Lemma 3 states that the zero-sum games whose cost is $\tilde{J}$ are
equivalent to $V_{1}^{\pm}$ in (13) and (14), which correspond to
$\vartheta_{1}^{\pm}$.
###### Lemma 3
Consider $V_{1}^{\pm}$ in (13) and (14), and $\tilde{J}$ in (38). For all
$(t,x,z)\in[0,T]\times\mathbb{R}^{n}\times\mathbb{R}$,
$\displaystyle
V_{1}^{+}(t,x,z)=\sup_{\begin{subarray}{c}\delta\in\Delta(t),\\\
\nu_{A}\in\textrm{N}_{A}(t)\end{subarray}}\inf_{\alpha\in\mathcal{A}(t)}\tilde{J}(t,x,z,\alpha,\delta[\alpha],\nu_{A}[\alpha]),$
(40) $\displaystyle
V_{1}^{-}(t,x,z)=\inf_{\tilde{\gamma}\in\tilde{\Gamma}(t)}\sup_{\begin{subarray}{c}\beta\in\mathcal{B}(t),\\\
\mu\in\mathcal{M}(t)\end{subarray}}\tilde{J}(t,x,z,\tilde{\gamma}[\beta,\mu],\beta,\mu).$
(41)
Proof. See Appendix .4.
###### Corollary 1
(Equivalent fixed-terminal-time game to the time-invariant Problem 1)
$\displaystyle\vartheta_{1}^{\pm}\equiv\tilde{\vartheta}_{1}^{\pm},$ (42)
where $\vartheta_{1}^{+}$ is (3) subject to (4), $\vartheta_{1}^{-}$ is (5)
subject to (6), $\tilde{\vartheta}_{1}^{+}$ is (32) subject to (33), and
$\tilde{\vartheta}_{1}^{-}$ is (34) subject to (35).
Proof. Let the right-hand sides of (40) and (41) be denoted by $W_{1}^{\pm}$.
By Corollary 5.3 in [2], $\tilde{\vartheta}_{1}^{\pm}(t,x)=\min z$ subject to
$W_{1}^{\pm}(t,x,z)\leq 0$. This fact and Lemma 1 allow us to conclude (42). ∎
This corollary shows that the free-terminal-time games
($\vartheta_{1}^{\pm}$) can be converted to fixed-terminal-time games
($\tilde{\vartheta}_{1}^{\pm}$), in which only control signals and strategies
need to be specified, since the terminal time is fixed.
In Lemma 3, $V_{1}^{\pm}$ is converted to a fixed-terminal-time game, whose
corresponding HJ equation has been investigated in [2]. This allows us to
derive an HJ equation for the time-invariant Problem 1 in Theorem 2.
###### Theorem 2
(HJ equation for Problem 1 (time-invariant version)) Consider Problem 1 for
the time-invariant case. For all
$(t,x,z)\in[0,T]\times\mathbb{R}^{n}\times\mathbb{R}$, $V_{1}^{\pm}$ in (13)
and (14) are the unique viscosity solutions to the HJ equation:
$\displaystyle\begin{split}\max&\Big{\\{}c(x)-V_{1}^{\pm}(t,x,z),\\\
&V_{1,t}^{\pm}-\min\big{\\{}0,\bar{H}^{\pm}(x,z,D_{x}V_{1}^{\pm},D_{z}V_{1}^{\pm})\big{\\}}\Big{\\}}=0\end{split}$
(43)
in $(0,T)\times\mathbb{R}^{n}\times\mathbb{R}$, where $\bar{H}^{+}$ and
$\bar{H}^{-}$ are defined in (27) and (28), respectively, without the time
dependency, and
$\displaystyle V_{1}^{\pm}(T,x,z)=\max\\{c(x),g(x)-z\\}$ (44)
on $\\{t=T\\}\times\mathbb{R}^{n}\times\mathbb{R}$.
Proof. See Appendix .5.
Note that the Hamiltonian $\bar{H}^{\pm}$ in (43) is time-invariant.
We also observe that the last two terms in the HJ equation (26), $\max\\{g-z-
V_{1}^{\pm},V_{1,t}^{\pm}-\bar{H}^{\pm}\\}$, become
$V_{1,t}^{\pm}-\min\\{0,\bar{H}^{\pm}\\}$ in (43); these two expressions
are not algebraically equal.
### 3.3 HJ equation for Problem 1 (optimal control setting)
In this subsection, we solve Problem 1 in the optimal control problem setting:
for given initial time and state $(t,x)$,
$\displaystyle\vartheta_{1}(t,x)$
$\displaystyle\coloneqq\inf_{\alpha\in\mathcal{A}(t)}\max_{\tau\in[t,T]}\int_{t}^{\tau}L(s,\mathrm{x}(s),\alpha(s))ds+g(\tau,\mathrm{x}(\tau)),$
(45) $\displaystyle\text{subject to }\quad c(s,\mathrm{x}(s))\leq 0,\quad
s\in[t,\tau],$ (46)
where $\mathrm{x}$ solves
$\displaystyle\dot{\mathrm{x}}(s)=f(s,\mathrm{x}(s),\alpha(s)),s\in[t,T],\text{
and }\mathrm{x}(t)=x.$ (47)
Sections 3.1 and 3.2 present the HJ equations for Problem 1 in the zero-sum
game setting. By removing player B from the zero-sum game, we obtain HJ
equations for Problem 1 in the optimal control setting. Thus, Theorems 1 and 2
imply the following remark.
###### Remark 1
(HJ equation for Problem 1 (optimal control setting)) Let $V_{1}$ be the
unique viscosity solution to the HJ equation:
$\displaystyle\begin{split}\max\Big{\\{}c(t,x)-&V_{1}(t,x,z),g(t,x)-z-V_{1}(t,x,z),\\\
&V_{1,t}-\bar{H}(t,x,z,D_{x}V_{1},D_{z}V_{1})\Big{\\}}=0\end{split}$ (48)
in $(0,T)\times\mathbb{R}^{n}\times\mathbb{R}$, where
$\bar{H}\mathrel{\mathop{\mathchar
58\relax}}[0,T]\times\mathbb{R}^{n}\times\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}\rightarrow\mathbb{R}$
$\displaystyle\bar{H}(t,x,z,p,q)\coloneqq\max_{a\in A}-p\cdot
f(t,x,a)+qL(t,x,a),$ (49)
and
$\displaystyle V_{1}(T,x,z)=\max\\{c(T,x),g(T,x)-z\\}$ (50)
on $\\{t=T\\}\times\mathbb{R}^{n}\times\mathbb{R}$. Then,
$\displaystyle\vartheta_{1}(t,x)=\min z\text{ subject to }V_{1}(t,x,z)\leq 0,$
(51)
where $\vartheta_{1}$ is (45) subject to (46).
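The recovery rule (51) reduces to a one-dimensional scan once $V_{1}$ has been computed on a grid. The sketch below assumes $V_{1}(t,x,\cdot)$ has been sampled on a monotone `z_grid` covering the feasible range; the names are illustrative.

```python
import numpy as np

def recover_theta(V_slice, z_grid):
    """Given V(t, x, z) sampled over z at a fixed (t, x), return the
    smallest grid value of z with V <= 0, as in (51); returns inf
    when the constraint V <= 0 is infeasible on the grid."""
    z = np.asarray(z_grid, dtype=float)
    feasible = z[np.asarray(V_slice, dtype=float) <= 0]
    return feasible.min() if feasible.size else np.inf
```

The infeasible (`inf`) branch corresponds to initial states from which the state constraint cannot be satisfied, as in the example of Section 7.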
If Problem 1 is time-invariant, $V_{1}$ is the unique viscosity solution to
the HJ equation:
$\displaystyle\begin{split}\max&\Big{\\{}c(x)-V_{1}(t,x,z),\\\
&V_{1,t}-\min\big{\\{}0,\bar{H}(x,z,D_{x}V_{1},D_{z}V_{1})\big{\\}}\Big{\\}}=0\end{split}$
(52)
in $(0,T)\times\mathbb{R}^{n}\times\mathbb{R}$, where $\bar{H}$ is defined in
(49) without the time dependency, and
$\displaystyle V_{1}(T,x,z)=\max\\{c(x),g(x)-z\\}$ (53)
on $\\{t=T\\}\times\mathbb{R}^{n}\times\mathbb{R}$.
## 4 Hamilton-Jacobi Equations for Problem 2
Problem 2 in the zero-sum game setting is defined in Section 2. Sections 4.1
and 4.2 present HJ equations for the time-varying and time-invariant Problem
2, respectively. Section 4.3 presents HJ equations for Problem 2 and the time-
invariant Problem 2 in the optimal control setting.
For Problem 2 in the zero-sum game and optimal control settings, the
corresponding HJ equations have been presented in the authors’ previous work
[6]. This section first reviews that previous work and then proposes HJ
equations for the time-invariant version.
### 4.1 HJ equation for the time-varying Problem 2
This subsection provides an HJ formulation for Problem 2: solve
$\vartheta_{2}^{+}$ in (9) subject to (10) and $\vartheta_{2}^{-}$ in (11)
subject to (12). For $(t,x,z)\in[0,T]\times\mathbb{R}^{n}\times\mathbb{R}$,
define the augmented value functions corresponding to the upper and lower
value functions ($\vartheta_{2}^{\pm}$):
$\displaystyle
V_{2}^{+}(t,x,z)\coloneqq\sup_{\delta\in\Delta(t)}\inf_{\alpha\in\mathcal{A}(t)}J_{2}(t,x,z,\alpha,\delta[\alpha]),$
(54) $\displaystyle
V_{2}^{-}(t,x,z)\coloneqq\inf_{\gamma\in\Gamma(t)}\sup_{\beta\in\mathcal{B}(t)}J_{2}(t,x,z,\gamma[\beta],\beta),$
(55)
where cost $J_{2}\mathrel{\mathop{\mathchar
58\relax}}(t,x,z,\alpha,\beta)\rightarrow\mathbb{R}$ is defined as follows:
$\displaystyle\begin{split}J_{2}(t,x&,z,\alpha,\beta)\coloneqq\min_{\tau\in[t,T]}\max\bigg{\\{}\max_{s\in[t,\tau]}c(s,\mathrm{x}(s)),\\\
&\int_{t}^{\tau}L(s,\mathrm{x}(s),\alpha(s),\beta(s))ds+g(\tau,\mathrm{x}(\tau))-z\bigg{\\}},\end{split}$
(56)
where $\mathrm{x}$ solves (1) for $(\alpha,\beta)$. [6] proved that, for all
$(t,x)\in[0,T]\times\mathbb{R}^{n}$,
$\displaystyle\vartheta_{2}^{\pm}(t,x)=\min z\text{ subject to
}V_{2}^{\pm}(t,x,z)\leq 0,$ (57)
and $V_{2}^{\pm}$ are the unique viscosity solutions to the HJ equations in
Theorem 3.
###### Theorem 3
(HJ equation for Problem 2) [6] For all
$(t,x,z)\in[0,T]\times\mathbb{R}^{n}\times\mathbb{R}$, $V_{2}^{\pm}$ in (54)
and (55) are the unique viscosity solutions to the HJ equations:
$\displaystyle\max\Big{\\{}c$
$\displaystyle(t,x)-V_{2}^{\pm}(t,x,z),\min\big{\\{}g(t,x)-z-V_{2}^{\pm}(t,x,z),$
$\displaystyle
V_{2,t}^{\pm}-\bar{H}^{\pm}(t,x,z,D_{x}V_{2}^{\pm},D_{z}V_{2}^{\pm})\big{\\}}\Big{\\}}=0$
(58)
in $(0,T)\times\mathbb{R}^{n}\times\mathbb{R}$, where $\bar{H}^{\pm}$ are
defined in (27) and (28), and
$\displaystyle V_{2}^{\pm}(T,x,z)=\max\\{c(T,x),g(T,x)-z\\}$ (59)
on $\\{t=T\\}\times\mathbb{R}^{n}\times\mathbb{R}$.
We observe that the difference between the two types of HJ equations for
$V_{1}^{\pm}$ and $V_{2}^{\pm}$ is that the minimum operation in (58) for
$V_{2}^{\pm}$ is replaced by the maximum operation in (26). This stems from
the difference between $\vartheta_{1}^{\pm}$ and $\vartheta_{2}^{\pm}$:
$\vartheta_{1}^{\pm}$ in (3) and (5) involve a $\max_{\tau}$ operation, whereas
$\vartheta_{2}^{\pm}$ in (9) and (11) involve a $\min_{\tau}$ operation.
### 4.2 HJ equation for Problem 2 (time-invariant case)
Through analysis similar to that in Section 3.2, this subsection derives the
HJ equations for the time-invariant Problem 2. For Problem 1, the freezing
control signal $\mu\mathrel{\mathop{\mathchar 58\relax}}[t,T]\rightarrow[0,1]$
allows us to convert to fixed-terminal-time problems by replacing the maximum
over $\tau$ in Problem 1 with the supremum over the freezing
control signal or strategy. In contrast, Problem 2 is specified in terms of the
minimum over $\tau$, which will be replaced by the infimum over the freezing
control signal or strategy.
Consider two fixed-terminal-time problems:
$\displaystyle\begin{split}\tilde{\vartheta}_{2}^{+}(t&,x)\coloneqq\sup_{\tilde{\delta}\in\tilde{\Delta}(t)}\inf_{\alpha\in\mathcal{A}(t),\mu\in\mathcal{M}(t)}\\\
&\int_{t}^{T}L(\mathrm{x}(s),\alpha(s),\tilde{\delta}[\alpha,\mu](s))\mu(s)ds+g(\mathrm{x}(T)),\end{split}$
(60) $\displaystyle\quad\quad\quad\text{subject to }c(\mathrm{x}(s))\leq
0,s\in[t,T],$ (61)
where $\mathrm{x}$ solves (30) for $(\alpha,\tilde{\delta}[\alpha,\mu],\mu)$,
$\mathcal{M}$ is defined in (31), and $\tilde{\delta}$ is the non-anticipative
strategy for player B to both player A and the freezing control:
$\displaystyle\tilde{\Delta}$
$\displaystyle(t)\coloneqq\\{\tilde{\delta}\mathrel{\mathop{\mathchar
58\relax}}\mathcal{A}(t)\times\mathcal{M}(t)\rightarrow\mathcal{B}(t)~{}|~{}\forall
s\in[t,\tau],\alpha,\bar{\alpha}\in\mathcal{A}(t),$
$\displaystyle\mu,\bar{\mu}\in\mathcal{M}(t),\text{if
}\alpha(\tau)=\bar{\alpha}(\tau),\mu(\tau)=\bar{\mu}(\tau)\text{ a.e.
}\tau\in[t,s],$ $\displaystyle\text{then
}\tilde{\delta}[\alpha,\mu](\tau)=\tilde{\delta}[\bar{\alpha},\bar{\mu}](\tau)\text{
a.e. }\tau\in[t,s]\\};$ (62)
$\displaystyle\begin{split}\tilde{\vartheta}_{2}^{-}(t&,x)\coloneqq\inf_{\gamma\in\Gamma(t),\nu_{B}\in\textrm{N}_{B}(t)}\sup_{\beta\in\mathcal{B}(t)}\\\
&\int_{t}^{T}L(\mathrm{x}(s),\gamma[\beta](s),\beta(s))\nu_{B}[\beta](s)ds+g(\mathrm{x}(T)),\end{split}$
(63) $\displaystyle\quad\quad\quad\text{subject to }c(\mathrm{x}(s))\leq
0,s\in[t,T],$ (64)
where $\mathrm{x}$ solves (30) for $(\gamma[\beta],\beta,\nu_{B}[\beta])$, and
the non-anticipative strategy for the freezing control to player B is
$\displaystyle\textrm{N}_{B}(t)$
$\displaystyle\coloneqq\\{\nu_{B}\mathrel{\mathop{\mathchar
58\relax}}\mathcal{B}(t)\rightarrow\mathcal{M}(t)~{}|~{}\forall
s\in[t,\tau],\beta,\bar{\beta}\in\mathcal{B}(t),$ $\displaystyle\text{if
}\beta(\tau)=\bar{\beta}(\tau)\text{ a.e. }\tau\in[t,s],$
$\displaystyle\text{then
}\nu_{B}[\beta](\tau)=\nu_{B}[\bar{\beta}](\tau)\text{ a.e. }\tau\in[t,s]\\}.$
(65)
Recall $\tilde{J}$ in (38), which contains the cost and the constraint of
$\tilde{\vartheta}_{2}^{\pm}$. Following steps similar to those in the proof of
Lemma 3, Lemma 4 can be proved.
###### Lemma 4
Recall $\tilde{J}$ in (38), and consider $V_{2}^{\pm}$ in (54) and (55). For
all $(t,x,z)\in[0,T]\times\mathbb{R}^{n}\times\mathbb{R}$,
$\displaystyle
V_{2}^{+}(t,x,z)=\sup_{\tilde{\delta}\in\tilde{\Delta}(t)}\inf_{\begin{subarray}{c}\alpha\in\mathcal{A}(t),\\\
\mu\in\mathcal{M}(t)\end{subarray}}\tilde{J}(t,x,z,\alpha,\tilde{\delta}[\alpha,\mu],\mu),$
(66) $\displaystyle
V_{2}^{-}(t,x,z)=\inf_{\begin{subarray}{c}\gamma\in\Gamma(t),\\\
\nu_{B}\in\textrm{N}_{B}(t)\end{subarray}}\sup_{\beta\in\mathcal{B}(t)}\tilde{J}(t,x,z,\gamma[\beta],\beta,\nu_{B}[\beta]).$
(67)
By combining the HJ formulation for the fixed-terminal-time problems [2] and
Lemma 4, the HJ equation for the time-invariant Problem 2 is derived in
Theorem 4. The proof for Theorem 4 is analogous to the proof for Theorem 2.
###### Theorem 4
(HJ equation for Problem 2 (time-invariant version)) Consider Problem 2 in the
time-invariant case. For all
$(t,x,z)\in[0,T]\times\mathbb{R}^{n}\times\mathbb{R}$, $V_{2}^{\pm}$ in (54)
and (55) are the unique viscosity solutions to the HJ equations:
$\displaystyle\begin{split}\max&\Big{\\{}c(x)-V_{2}^{\pm}(t,x,z),\\\
&V_{2,t}^{\pm}-\max\big{\\{}0,\bar{H}^{\pm}(x,z,D_{x}V_{2}^{\pm},D_{z}V_{2}^{\pm})\big{\\}}\Big{\\}}=0\end{split}$
(68)
in $(0,T)\times\mathbb{R}^{n}\times\mathbb{R}$, where $\bar{H}^{+}$ and
$\bar{H}^{-}$ are defined in (27) and (28), respectively, without the time
dependency, and
$\displaystyle V_{2}^{\pm}(T,x,z)=\max\\{c(x),g(x)-z\\}$ (69)
on $\\{t=T\\}\times\mathbb{R}^{n}\times\mathbb{R}$.
Comparing the HJ equations for Problem 2 and its time-invariant
version, $\min\\{g-z-V_{2}^{\pm},V_{2,t}^{\pm}-\bar{H}^{\pm}\\}$ in (58)
becomes $V_{2,t}^{\pm}-\max\\{0,\bar{H}^{\pm}\\}$ in (68). Note that the
difference between Problems 1 and 2 leads to a difference in the HJ equations
for the time-invariant problems: (43) has the term
$V_{1,t}^{\pm}-\min\\{0,\bar{H}^{\pm}\\}$, whereas (68) has the term
$V_{2,t}^{\pm}-\max\\{0,\bar{H}^{\pm}\\}$.
### 4.3 HJ equation for Problem 2 (optimal control setting)
In this subsection, we solve Problem 2 in the optimal control setting: for
given initial time and state $(t,x)$,
$\displaystyle\vartheta_{2}(t,x)$
$\displaystyle\coloneqq\inf_{\alpha\in\mathcal{A}(t)}\min_{\tau\in[t,T]}\int_{t}^{\tau}L(s,\mathrm{x}(s),\alpha(s))ds+g(\tau,\mathrm{x}(\tau)),$
(70) $\displaystyle\text{subject to }\quad c(s,\mathrm{x}(s))\leq 0,\quad
s\in[t,\tau],$ (71)
where $\mathrm{x}$ solves (47). By removing the contribution of player B in
Theorems 3 and 4, we solve $\vartheta_{2}$ using the HJ equations in the
following remark.
###### Remark 2
(HJ equation for Problem 2 (optimal control setting)) Let $V_{2}$ be the
unique viscosity solution to the HJ equation [6]:
$\displaystyle\max\Big{\\{}$ $\displaystyle
c(t,x)-V_{2}(t,x,z),\min\big{\\{}g(t,x)-z-V_{2}(t,x,z),$ $\displaystyle
V_{2,t}-\bar{H}(t,x,z,D_{x}V_{2},D_{z}V_{2})\big{\\}}\Big{\\}}=0$ (72)
in $(0,T)\times\mathbb{R}^{n}\times\mathbb{R}$, where $\bar{H}$ is defined in
(49), and
$\displaystyle V_{2}(T,x,z)=\max\\{c(T,x),g(T,x)-z\\}$ (73)
on $\\{t=T\\}\times\mathbb{R}^{n}\times\mathbb{R}$. Then,
$\displaystyle\vartheta_{2}(t,x)=\min z\text{ subject to }V_{2}(t,x,z)\leq 0,$
(74)
where $\vartheta_{2}$ is (70) subject to (71).
If Problem 2 is time-invariant, $V_{2}$ is the unique viscosity solution to
the HJ equation:
$\displaystyle\begin{split}\max&\Big{\\{}c(x)-V_{2}(t,x,z),\\\
&V_{2,t}-\max\big{\\{}0,\bar{H}(x,z,D_{x}V_{2},D_{z}V_{2})\big{\\}}\Big{\\}}=0\end{split}$
(75)
in $(0,T)\times\mathbb{R}^{n}\times\mathbb{R}$, where $\bar{H}$ is defined in
(49) without the time dependency, and
$\displaystyle V_{2}(T,x,z)=\max\\{c(x),g(x)-z\\}$ (76)
on $\\{t=T\\}\times\mathbb{R}^{n}\times\mathbb{R}$.
## 5 Optimal Control Signal and Strategy
The optimal control signal or strategy for Problems 1 and 2 is specified by
the HJ equations in Sections 3 and 4. This section utilizes the HJ equations in
Theorems 1 and 3, and the method in this section extends directly to the
other HJ equations in Theorems 2 and 4 and Remarks 1 and 2.
Recall $V_{i}^{\pm}$ ($i=1,2$) defined in (13), (14), (54), (55), and suppose
$V_{i}^{\pm}$ is computed from the HJ equations in Theorems 1 and 3.
Lemmas 3 and 4 imply the following remark.
###### Remark 3 (Find $\vartheta_{i}^{\pm}$ from $V_{i}^{\pm}$)
For initial time $t=0$ and state $x\in\mathbb{R}^{n}$,
$\displaystyle(\mathrm{x}_{*}(0),\mathrm{z}_{*}(0))=(x,\vartheta_{i}^{\pm}(0,x)),$
(77)
where $(\mathrm{x}_{*},\mathrm{z}_{*})$ is an optimal trajectory for
$V_{i}^{\pm}$.
With the initial augmented state $(\mathrm{x}_{*}(0),\mathrm{z}_{*}(0))$, the
optimal control and strategy can be found at
$(t,\mathrm{x}_{*}(t),\mathrm{z}_{*}(t))$, and the optimal state trajectory is
also updated by solving the ODE (17).
Define $\tilde{H}_{i}^{\pm}\mathrel{\mathop{\mathchar 58\relax}}A\times
B\rightarrow\mathbb{R}$ for a fixed
$(t,x,z)\in(0,T)\times\mathbb{R}^{n}\times\mathbb{R}$
$\displaystyle\begin{split}\tilde{H}_{i}^{\pm}(a,b)\coloneqq&-D_{x}V_{i}^{\pm}(t,x,z)\cdot
f(t,x,a,b)\\\ &+D_{z}V_{i}^{\pm}(t,x,z)L(t,x,a,b)\end{split}$ (78)
for $i=1,2$, thus
$\displaystyle\bar{H}^{+}(t,x,z,D_{x}V_{i}^{+},D_{z}V_{i}^{+})=\max_{a\in
A}\min_{b\in B}\tilde{H}_{i}^{+}(a,b),$ (79)
$\displaystyle\bar{H}^{-}(t,x,z,D_{x}V_{i}^{-},D_{z}V_{i}^{-})=\min_{b\in
B}\max_{a\in A}\tilde{H}_{i}^{-}(a,b),$ (80)
where $\bar{H}^{+}$ and $\bar{H}^{-}$ are defined in (27) and (28),
respectively. In this section, we omit $(t,x,z)$ to simplify notation. Using
the notation with $\tilde{H}_{i}^{\pm}$ (78), the HJ equation (26) for
$V_{1}^{+}$ is equal to
$\displaystyle\max\\{c-V_{1}^{+},g-z-V_{1}^{+},V_{1,t}^{+}-\max_{a\in
A}\min_{b\in B}\tilde{H}_{1}^{+}(a,b)\\}=0.$ (81)
The HJ equation implies that the optimal control signal or strategy is
determined by the gradient information at the current time and augmented state
$(t,x,z)$; the past history of the state trajectory and control signals is not
needed. For example, in $V_{1}^{+}$, $\alpha_{*}(t)=a_{*}$,
$\delta_{*}[\alpha_{*}](t)=b_{*}$ where $a_{*}$ and $b_{*}$ are solutions to
$\displaystyle\max\\{c-V_{1}^{+},g-z-
V_{1}^{+},V_{1,t}^{+}-\tilde{H}_{1}^{+}(a,b)\\}=0$ (82)
at $(t,x,z)$. In other words, it is sufficient to specify optimal controls for
player A and B in $(0,T)\times\mathbb{R}^{n}\times\mathbb{R}$ to generate the
optimal control signal or strategy. The max-min solution $(a_{*},b_{*})$ for
the Hamiltonian $\bar{H}^{+}$ ($\max_{a\in A}\min_{b\in
B}\tilde{H}_{1}^{+}(a,b)$) is certainly optimal, but it is not the only optimal
pair. Similarly, for $V_{1}^{-}$ or $V_{2}^{\pm}$, any pair $(a_{*},b_{*})$
satisfying the corresponding HJ equation is optimal.
In the HJ equation (81) (or (26)) for $V_{1}^{\pm}$, we have three terms:
$c-V_{1}^{\pm}$, $g-z-V_{1}^{\pm}$, and $V_{1,t}^{\pm}-\bar{H}^{\pm}$, at
least one of which is zero. By considering which of the three terms is
largest, all possible optimal controls for
$\vartheta_{1}^{\pm}$ ($V_{1}^{\pm}$) and $\vartheta_{2}^{\pm}$
($V_{2}^{\pm}$) are derived in Remark 4.
Although $(a_{*},b_{*})$ satisfying the HJ equation is optimal, we need to
consider the order of the players: player A plays first in $V_{i}^{+}$, but
player B plays first in $V_{i}^{-}$. In $V_{i}^{+}$, we first find a set of
optimal controls for player A, and then investigate a set of optimal controls
for player B when player A applies its optimal control. On the other hand, in
$V_{i}^{-}$, we first investigate a set of optimal controls for player B, and
then find a set of optimal controls for player A after an optimal control of
player B is applied. Based on this argument, Remark 4 presents optimal
controls for $V_{i}^{+}$ according to this classification; optimal controls
for $V_{i}^{-}$ can be derived analogously.
###### Remark 4 (Optimal controls for $V_{i}^{+}$)
Fix $(t,x,z)\in(0,T)\times\mathbb{R}^{n}\times\mathbb{R}$.
Optimal controls for $\vartheta_{1}^{+}$ ($V_{1}^{+}$) are the following:
1. 1.
Case 1: $\max\\{c-V_{1}^{+},g-z-V_{1}^{+}\\}\geq V_{1,t}^{+}-\bar{H}^{+}$
$\displaystyle a_{*}\in\\{a\in A~{}|~{}V_{1,t}^{+}-\min_{b\in
B}\tilde{H}_{1}^{+}(a,b)\leq 0\\},$ (83) $\displaystyle b_{*}\in B;$ (84)
2. 2.
Case 2: $\max\\{c-V_{1}^{+},g-z-V_{1}^{+}\\}<V_{1,t}^{+}-\bar{H}^{+}$
$\displaystyle a_{*}\in\arg\max_{a\in A}\min_{b\in B}\tilde{H}_{1}^{+}(a,b),$
(85) $\displaystyle b_{*}\in\arg\min_{b\in B}\tilde{H}_{1}^{+}(a_{*},b).$ (86)
Optimal controls for $\vartheta_{2}^{+}$ $(V_{2}^{+})$ are the following:
1. 1.
Case 1: $c-V_{2}^{+}\geq V_{2,t}^{+}-\bar{H}^{+}$
(83) and (84) where $V_{1}^{+}$ and $\tilde{H}_{1}^{+}$ are replaced by
$V_{2}^{+}$ and $\tilde{H}_{2}^{+}$, respectively;
2. 2.
Case 2: $g-z-V_{2}^{+}\geq V_{2,t}^{+}-\bar{H}^{+}\geq c-V_{2}^{+}$
(85) and (86) where $V_{1}^{+}$ and $\tilde{H}_{1}^{+}$ are replaced by
$V_{2}^{+}$ and $\tilde{H}_{2}^{+}$, respectively;
3. 3.
Case 3: $V_{2,t}^{+}-\bar{H}^{+}\geq\max\\{c-V_{2}^{+},g-z-V_{2}^{+}\\}$
$\displaystyle a_{*}\in A,$ (87) $\displaystyle b_{*}\in\\{b\in
B~{}|~{}V_{2,t}^{+}-\tilde{H}_{2}^{+}(a_{*},b)\geq 0\\}.$ (88)
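For Case 2 above, the max-min pair (85)-(86) can be computed by brute force once the control sets are discretized. The sketch below assumes finite samples `A_grid`, `B_grid` of $A$ and $B$ and a user-supplied `H_tilde` implementing (78); these names and the discretization are illustrative assumptions.

```python
import numpy as np

def optimal_controls_case2(H_tilde, A_grid, B_grid):
    """Max-min controls (85)-(86): a* maximizes min_b H~(a, b) over the
    sampled A, then b* minimizes H~(a*, .) over the sampled B."""
    inner = np.array([[H_tilde(a, b) for b in B_grid] for a in A_grid])
    ia = inner.min(axis=1).argmax()   # a* in argmax_a min_b H~(a, b)
    ib = inner[ia].argmin()           # b* in argmin_b H~(a*, b)
    return A_grid[ia], B_grid[ib]
```

When $\tilde{H}_{i}^{\pm}$ is affine in each control, as in the example of Section 7, the extrema are attained at the boundary of the control boxes, so coarse grids containing the box corners suffice.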
## 6 Numerical Computation for the Hamilton-Jacobi equation
In this section, we present a numerical algorithm based on level set methods
[8] to compute the solutions to the four HJ equations for Problems 1 and 2.
Algorithm 1 deals with the HJ equations for Problem 1 ((26), (43), (48),
(52)), and Algorithm 2 deals with the HJ equations for Problem 2 ((58), (68),
(72), (75)). Level set methods have been utilized to solve a variety of HJ
formulations [9, 5, 6].
Algorithm 1 solves the HJ equation (26) in two steps. At line 6 in Algorithm
1, we first compute the HJ PDE
($V_{1,t}^{\pm}-\bar{H}^{\pm}(t,x,z,D_{x}V_{1}^{\pm},D_{z}V_{1}^{\pm})=0$),
and line 7 in Algorithm 1 replaces $V_{1}^{\pm}$ by one of $c(t,x)$,
$g(t,x)-z$ or itself to satisfy the HJ equation (26).
For solving the HJ PDE at step 1, the Lax-Friedrichs scheme [10] is utilized
on the temporal discretization $\\{t_{0}=0,...,t_{K}=T\\}$ and the spatial
discretization
$\\{(x_{0},z_{0}),...,(x_{N},z_{N})\\}\subset\mathbb{R}^{n}\times\mathbb{R}$:
$\displaystyle
V_{1}^{\pm}(t_{k},x_{i},z_{i})=V_{1}^{\pm}(t_{k+1},x_{i},z_{i})-\Delta_{k}\hat{\bar{H}}^{\pm}(\phi_{x}^{+},\phi_{x}^{-},\phi_{z}^{+},\phi_{z}^{-}),$
(89)
where $\Delta_{k}=t_{k+1}-t_{k}$, $(\phi_{x}^{\pm},\phi_{z}^{\pm})$ are
numerical approximation for $(D_{x}V_{1}^{\pm},D_{z}V_{1}^{\pm})$ (gradients
with respect to $(x,z)$ at $(t_{k+1},x_{i},z_{i})$), and
$\displaystyle\hat{\bar{H}}^{\pm}(\phi_{x}^{+},\phi_{x}^{-},\phi_{z}^{+}$
$\displaystyle,\phi_{z}^{-})=\bar{H}^{\pm}\big{(}t_{k+1},x_{i},z_{i},\frac{\phi_{x}^{+}+\phi_{x}^{-}}{2},\frac{\phi_{z}^{+}+\phi_{z}^{-}}{2}\big{)}$
$\displaystyle-\alpha_{x}\cdot\frac{\phi_{x}^{+}-\phi_{x}^{-}}{2}-\alpha_{z}\frac{\phi_{z}^{+}-\phi_{z}^{-}}{2},$
(90)
where $\bar{H}^{\pm}$ are defined in (27) and (28), and
$\alpha_{x}=(\alpha_{x_{1}},...,\alpha_{x_{n}})$
($\alpha_{x_{i}}=\max|D_{p_{i}}\bar{H}^{\pm}|$) and
$\alpha_{z}(=\max|D_{q}\bar{H}^{\pm}|)$ are dissipation coefficients for
numerical viscosity, based on the partial derivatives of $\bar{H}^{\pm}$ [11].
The fifth-order accurate HJ WENO (weighted essentially nonoscillatory) method
[11] is used for the gradients $\phi_{x}^{\pm},\phi_{z}^{\pm}$. In (89), the
first-order Euler method is used for the temporal partial derivative, but
higher-order methods, such as the third-order accurate TVD (total variation
diminishing) Runge-Kutta (RK) method [12], can be used. [11] provided the
empirical observation that level set methods are sensitive to spatial
accuracy; thus, a high-order scheme for spatial derivatives is desirable, but
high-order approximation of temporal derivatives does not significantly
increase the accuracy.
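The update (89) with the Lax-Friedrichs numerical Hamiltonian (90) can be sketched in one spatial dimension as follows. For brevity this sketch uses first-order one-sided differences in place of the fifth-order HJ WENO reconstruction, and the boundary extrapolation and the signature of `H` are illustrative assumptions.

```python
import numpy as np

def lax_friedrichs_step(V, dx, dt, H, alpha_x):
    """One backward-in-time step of (89) in 1D: V(t_k) = V(t_{k+1}) - dt * H_hat,
    where H_hat is the Lax-Friedrichs numerical Hamiltonian (90):
    H evaluated at the averaged one-sided gradients, minus dissipation
    proportional to their difference (alpha_x bounds |dH/dp|)."""
    phi_minus = np.empty_like(V)
    phi_plus = np.empty_like(V)
    phi_minus[1:] = (V[1:] - V[:-1]) / dx   # backward differences
    phi_minus[0] = phi_minus[1]             # simple boundary extrapolation
    phi_plus[:-1] = (V[1:] - V[:-1]) / dx   # forward differences
    phi_plus[-1] = phi_plus[-2]
    p_avg = 0.5 * (phi_plus + phi_minus)
    H_hat = H(p_avg) - alpha_x * 0.5 * (phi_plus - phi_minus)
    return V - dt * H_hat
```

On smooth regions the one-sided gradients agree, the dissipation vanishes, and the step reduces to plain explicit Euler on the HJ PDE; near kinks the dissipation stabilizes the scheme.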
For the time-invariant Problem 1, line 9 in Algorithm 1 solves the HJ equation
(43), whose Hamiltonian involves a minimum with 0. Then, line 10 in
Algorithm 1 updates $V_{1}^{\pm}$ with the maximum between $c$ and itself so
that the HJ equation (43) holds without the $g-z$ term.
For Problem 1 in the optimal control setting, Algorithm 1 also applies, using
the Hamiltonian $\bar{H}$ (49) instead of $\bar{H}^{\pm}$
((27) and (28)).
Algorithm 1 Computing the solution $V_{1}^{\pm}$ or $V_{1}$ to the HJ
equations for Problem 1 in the zero-sum game and optimal control settings.
This algorithm deals with the four HJ equations: (26), (43), (48), and (52).
1:Input: the temporal discretization: $\\{t_{0}=0,...,t_{K}=T\\}$, the spatial
discretization: $\\{(x_{0},z_{0}),...,(x_{N},z_{N})\\}$
2:Output: $V_{1}^{\pm}$ (or $V_{1}$)
3:$V_{1}^{\pm}(\text{or
}V_{1})(T,x_{i},z_{i})\leftarrow\max\\{c(T,x_{i}),g(T,x_{i})-z_{i}\\},\forall
i$
4:for $k\in\\{K-1,...,0\\}$ do
5: case solving the HJ equations (26) or (48)
6: [l]$V_{1}^{\pm}(\text{or }V_{1})(t_{k},x_{i},z_{i})\leftarrow
V_{1}^{\pm}(\text{or }V_{1})(t_{k+1},x_{i},z_{i})-$
7: $\Delta_{k}\hat{\bar{H}}^{\pm}(\text{or
}\hat{\bar{H}})(\phi_{x}^{+},\phi_{x}^{-},\phi_{z}^{+},\phi_{z}^{-}),\forall
i$
8: [l]$V_{1}^{\pm}(\text{or
}V_{1})(t_{k},x_{i},z_{i})\leftarrow\max\\{c(t_{k},x_{i}),$
9: $g(t_{k},x_{i})-z_{i},V_{1}^{\pm}(\text{or
}V_{1})(t_{k},x_{i},z_{i})\\},\forall i$
10: case solving the HJ equations (43) or (52)
11: [l]$V_{1}^{\pm}(\text{or }V_{1})(t_{k},x_{i},z_{i})\leftarrow
V_{1}^{\pm}(\text{or }V_{1})(t_{k+1},x_{i},z_{i})-$
12: $\Delta_{k}\min\\{0,\hat{\bar{H}}^{\pm}(\text{or
}\hat{\bar{H}})(\phi_{x}^{+},\phi_{x}^{-},\phi_{z}^{+},\phi_{z}^{-})\\},\forall
i$
13: [l]$V_{1}^{\pm}(\text{or
}V_{1})(t_{k},x_{i},z_{i})\leftarrow\max\\{c(t_{k},x_{i}),$
14: $V_{1}^{\pm}(\text{or }V_{1})(t_{k},x_{i},z_{i})\\},\forall i$
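The backward sweep of Algorithm 1 for the HJ equation (26) can be sketched as follows. Here `pde_step` stands in for the Lax-Friedrichs update (89), and the grid layout and function signatures are illustrative assumptions, not the paper's Matlab implementation.

```python
import numpy as np

def solve_problem1(c, g, z_grid, x_grid, times, pde_step):
    """Backward sweep of Algorithm 1 for the HJ equation (26).
    c(t, X), g(t, X) evaluate the constraint and terminal costs on the grid;
    pde_step(V, t, dt) performs the HJ PDE step, e.g. (89)."""
    X, Z = np.meshgrid(x_grid, z_grid, indexing="ij")
    # terminal condition (29): V(T, x, z) = max{c(T, x), g(T, x) - z}
    V = np.maximum(c(times[-1], X), g(times[-1], X) - Z)
    for k in range(len(times) - 2, -1, -1):
        t, dt = times[k], times[k + 1] - times[k]
        V = pde_step(V, t, dt)                          # HJ PDE step
        # correction step: enforce max{c - V, g - z - V, .} = 0 at each node
        V = np.maximum.reduce([c(t, X), g(t, X) - Z, V])
    return V
```

For the time-invariant HJ equation (43), the PDE step would instead apply $\min\{0,\hat{\bar{H}}^{\pm}\}$ and the correction would take the maximum with $c$ only, mirroring the second case of Algorithm 1.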
Algorithm 2 numerically solves Problem 2 in the zero-sum game and optimal
control settings. At line 6 in Algorithm 2, we compute the HJ PDE
($V_{2,t}^{\pm}-\bar{H}^{\pm}(t,x,z,D_{x}V_{2}^{\pm},D_{z}V_{2}^{\pm})=0$) in
the HJ equation (58). Lines 7 and 8 in Algorithm 2 update $V_{2}^{\pm}$ by
choosing among $c$, $g-z$, and itself so that the HJ equation (58) holds.
For the time-invariant Problem 2, line 10 in Algorithm 2 first solves the HJ
PDEs ((68) or (75)), whose Hamiltonian involves a maximum with 0. Line
11 updates $V_{2}^{\pm}$ (or $V_{2}$) by choosing between $c$ and itself so
that the HJ equation (68) (or (75)) holds.
Algorithm 2 Computing the solution $V_{2}^{\pm}$ or $V_{2}$ to the HJ
equations for Problem 2 in the zero-sum game and optimal control settings.
This algorithm deals with the four HJ equations: (58), (68), (72), and (75).
1:Input: the temporal discretization: $\\{t_{0}=0,...,t_{K}=T\\}$, the spatial
discretization: $\\{(x_{0},z_{0}),...,(x_{N},z_{N})\\}$
2:Output: $V_{2}^{\pm}$ (or $V_{2}$)
3:$V_{2}^{\pm}(\text{or
}V_{2})(T,x_{i},z_{i})\leftarrow\max\\{c(T,x_{i}),g(T,x_{i})-z_{i}\\},\forall
i$
4:for $k\in\\{K-1,...,0\\}$ do
5: case solving the HJ equations (58) or (72)
6: [l]$V_{2}^{\pm}(\text{or }V_{2})(t_{k},x_{i},z_{i})\leftarrow
V_{2}^{\pm}(\text{or }V_{2})(t_{k+1},x_{i},z_{i})-$
7: $\Delta_{k}\hat{\bar{H}}^{\pm}(\text{or
}\hat{\bar{H}})(\phi_{x}^{+},\phi_{x}^{-},\phi_{z}^{+},\phi_{z}^{-}),\forall
i$
8: [l]$V_{2}^{\pm}(\text{or
}V_{2})(t_{k},x_{i},z_{i})\leftarrow\min\\{g(t_{k},x_{i})-z_{i},$
9: $V_{2}^{\pm}(\text{or }V_{2})(t_{k},x_{i},z_{i})\\},\forall i$
10: [l]$V_{2}^{\pm}(\text{or
}V_{2})(t_{k},x_{i},z_{i})\leftarrow\max\\{c(t_{k},x_{i}),$
11: $V_{2}^{\pm}(\text{or }V_{2})(t_{k},x_{i},z_{i})\\},\forall i$
12: case solving the HJ equations (68) or (75)
13: [l]$V_{2}^{\pm}(\text{or }V_{2})(t_{k},x_{i},z_{i})\leftarrow
V_{2}^{\pm}(\text{or }V_{2})(t_{k+1},x_{i},z_{i})-$
14: $\Delta_{k}\max\\{0,\hat{\bar{H}}^{\pm}(\text{or
}\hat{\bar{H}})(\phi_{x}^{+},\phi_{x}^{-},\phi_{z}^{+},\phi_{z}^{-})\\},\forall
i$
15: [l]$V_{2}^{\pm}(\text{or
}V_{2})(t_{k},x_{i},z_{i})\leftarrow\max\\{c(t_{k},x_{i}),$
16: $V_{2}^{\pm}(\text{or }V_{2})(t_{k},x_{i},z_{i})\\},\forall i$
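The structural difference from Algorithm 1 lies in the correction steps of Algorithm 2: $V_{2}^{\pm}$ is first clipped from above by $g-z$ and then from below by $c$, matching the min/max nesting of (58). A minimal sketch of this correction, with illustrative array arguments:

```python
import numpy as np

def correction_problem2(V, c_val, g_val, Z):
    """Correction steps of Algorithm 2 for the HJ equation (58):
    V <- min{g - z, V}, then V <- max{c, V}, applied nodewise."""
    V = np.minimum(g_val - Z, V)   # clip from above by g - z
    return np.maximum(c_val, V)    # clip from below by c
```

Note the order matters: taking the maximum with $c$ last guarantees the constraint term dominates, as in (58).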
## 7 Example
This section provides an example for Problem 1; examples for Problem 2 can be
found in [6]. In this example, we solve a zero-sum game for two ponds, as
shown in Figure 1. This example is motivated by the water system in [13].
Figure 1: Water system with two ponds.
Precipitation on pond 1 increases the water level of pond 1, and pond 1
(player A) wants to minimize the highest water level over the time horizon (1
s) by controlling the amount of outflow to pond 2. We assume that the rate at
which precipitation increases the water level of pond 1 is unknown but bounded
between 0 and 10 $m/s$. The precipitation is considered as player B. These
numbers and units can easily be adapted to realistic problems. We determine an
optimal control signal and strategy even under the worst-case behavior of
player B.
In the water system, we have two states: $x_{1}$ and $x_{2}$ represent the
water levels of ponds 1 and 2, respectively. The state trajectories solve the
following dynamics:
$\displaystyle\begin{split}&\dot{\mathrm{x}}_{1}(s)=\beta(s)-\sqrt{2g\mathrm{x}_{1}(s)}\alpha(s),\\\
&\dot{\mathrm{x}}_{2}(s)=0.5\sqrt{2g\mathrm{x}_{1}(s)}\alpha(s)-0.5\mathrm{x}_{2}(s),\end{split}$
(91)
where $\alpha(s)\in A=[0,1]$, $\beta(s)\in B=[0,10]$, and $g$ is the
gravitational constant: 9.81 $m/s^{2}$. In the dynamics for $\mathrm{x}_{1}$,
the first term $\beta$ is due to the precipitation (player B), and the second
term $\sqrt{2g\mathrm{x}_{1}}\alpha$ is the rate at which pond 1 (player A)
decreases the water level. The term $\sqrt{2g\mathrm{x}_{1}}$ follows from
Bernoulli’s equation, and pond 1 controls the outflow area ($\alpha$) between
0 and 1. We set the bottom area of pond 2 to be twice that of pond 1, so the
dynamics for $\mathrm{x}_{2}$ contain $0.5\sqrt{2g\mathrm{x}_{1}}\alpha$.
Also, we assume that pond 2’s water is drawn for drinking water, which yields
the decrease rate $0.5\mathrm{x}_{2}$.
The dynamics (91) are not Lipschitz at $x_{1}=0$. To avoid this, we approximate
$\sqrt{2gx_{1}}$ with the sinusoidal function $4.82\sin(1.17x_{1})$ when
$x_{1}$ is less than 1. This sinusoidal approximation has the same value and
first derivative at $x_{1}=1$: $\sqrt{2g}\simeq 4.82\sin(1.17)$ and
$\sqrt{g/2}\simeq 4.82\cdot 1.17\cos(1.17)$. The approximated (Lipschitz)
dynamics are
$\displaystyle\begin{split}&\dot{\mathrm{x}}_{1}(s)=\beta(s)-\begin{cases}\sqrt{2g\mathrm{x}_{1}(s)}\alpha(s),&\mathrm{x}_{1}(s)\geq
1,\\\
4.82\sin(1.17\mathrm{x}_{1}(s))\alpha(s),&\mathrm{x}_{1}(s)<1,\end{cases}\\\
&\dot{\mathrm{x}}_{2}(s)=\begin{cases}0.5\sqrt{2g\mathrm{x}_{1}(s)}\alpha(s),&\mathrm{x}_{1}(s)\geq
1,\\\
2.41\sin(1.17\mathrm{x}_{1}(s))\alpha(s),&\mathrm{x}_{1}(s)<1,\end{cases}-0.5\mathrm{x}_{2}(s),\end{split}$
(92)
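The value matching claimed above can be checked numerically. A sketch of the piecewise outflow factor used in (92) (the function name is illustrative):

```python
import math

def outflow_rate(x1, g=9.81):
    """Outflow factor sqrt(2 g x1) from Bernoulli's equation, with the
    Lipschitz sinusoidal approximation 4.82 sin(1.17 x1) below x1 = 1,
    as in (92); the two branches agree closely at x1 = 1."""
    return math.sqrt(2 * g * x1) if x1 >= 1 else 4.82 * math.sin(1.17 * x1)
```

Evaluating both branches at $x_{1}=1$ gives $\sqrt{2g}\approx 4.429$ and $4.82\sin(1.17)\approx 4.438$, so the approximation is continuous to within about $0.01$.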
We solve the two zero-sum games: the upper value function is
$\displaystyle\quad\quad\vartheta_{1}^{+}(0,x_{1},x_{2})=\max_{\delta\in\Delta(0)}\min_{\alpha\in\mathcal{A}(0)}\max_{\tau\in[0,1]}\mathrm{x}_{1}(\tau),$
(93) $\displaystyle\text{subject to
}\max\\{|\mathrm{x}_{1}(s)-7.5|-7.5,|\mathrm{x}_{2}(s)-3|-2\\}\leq 0,$ (94)
where $A=[0,1]$, $B=[0,10]$, $\mathcal{A}(0)=\\{[0,1]\rightarrow
A~{}|~{}\|\alpha\|_{L^{\infty}(0,1)}<\infty\\}$,
$\mathcal{B}(0)=\\{[0,1]\rightarrow
B~{}|~{}\|\beta\|_{L^{\infty}(0,1)}<\infty\\}$, $\Delta(0)$ is a set of non-
anticipative strategies for player B (pond 2) as in (7), and
$(\mathrm{x}_{1},\mathrm{x}_{2})$ solves (92) for $(\alpha,\delta[\alpha])$;
and the lower value function is
$\displaystyle\quad\quad\vartheta_{1}^{-}(0,x_{1},x_{2})=\min_{\gamma\in\Gamma(0)}\max_{\beta\in\mathcal{B}(0)}\max_{\tau\in[0,1]}\mathrm{x}_{1}(\tau),$
(95) $\displaystyle\text{subject to
}\max\\{|\mathrm{x}_{1}(s)-7.5|-7.5,|\mathrm{x}_{2}(s)-3|-2\\}\leq 0,$ (96)
where $\Gamma(0)$ is a set of non-anticipative strategies for player A (pond
1) as in (8), and $(\mathrm{x}_{1},\mathrm{x}_{2})$ solves (92) for
$(\gamma[\beta],\beta)$. The state constraint implies that the water level of
pond 1 has to be between 0 and 15 $m$ and that of pond 2 between 1 and 5
$m$. In these games, pond 1 (player A) wants to minimize the worst water level
of pond 1 over the time horizon while satisfying the state constraint, which
prevents flooding in ponds 1 and 2.
We will solve the HJ equation (26) for $V_{1}^{\pm}$ corresponding to
$\vartheta_{1}^{\pm}$ ((93) subject to (94), and (95) subject to (96)). We
have the Hamiltonian
$\displaystyle\bar{H}^{+}(t,x,z,p,q)=\max_{a\in A}\min_{b\in
B}-p_{1}b+0.5p_{2}x_{2}$
$\displaystyle\quad\quad+\begin{cases}(p_{1}-0.5p_{2})\sqrt{2gx_{1}}a&\text{if
}x_{1}\geq 1\\\ (p_{1}-0.5p_{2})4.82\sin(1.17x_{1})a&\text{if
}x_{1}<1\end{cases}$ $\displaystyle\quad\quad=\begin{cases}-10p_{1}&\text{if
}p_{1}\geq 0\\\ 0&\text{if }p_{1}<0\end{cases}+0.5p_{2}x_{2}$
$\displaystyle\quad\quad+\begin{cases}(p_{1}-0.5p_{2})\sqrt{2gx_{1}}&\text{if
}\begin{tabular}[]{l}$p_{1}-0.5p_{2}\geq 0$\\\ $x_{1}\geq 1$\end{tabular}\\\
(p_{1}-0.5p_{2})4.82\sin(1.17x_{1})&\text{if
}\begin{tabular}[]{l}$p_{1}-0.5p_{2}\geq 0$\\\ $x_{1}<1$\end{tabular}\\\
0&\text{if }p_{1}-0.5p_{2}<0\end{cases}$
$\displaystyle=\bar{H}^{-}(t,x,z,p,q)$ (97)
where $x=(x_{1},x_{2})\in\mathbb{R}^{2}$ and
$p=(p_{1},p_{2})\in\mathbb{R}^{2}$. Equation (97) implies
$\displaystyle V_{1}^{+}\equiv V_{1}^{-}\equiv V_{1}^{\pm}\text{ and
}\vartheta_{1}^{+}\equiv\vartheta_{1}^{-}\equiv\vartheta_{1}^{\pm}.$ (98)
We use $V_{1}^{\pm}$ to denote the same value functions $V_{1}^{+}$ and
$V_{1}^{-}$.
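The closed form (97) is straightforward to implement, since the Hamiltonian is affine in each control and the extrema are attained at the corners of the control boxes $A=[0,1]$ and $B=[0,10]$. The sketch below uses the sinusoidal approximation below $x_{1}=1$ as in (92); the function name is illustrative, and the arguments $t$, $z$, $q$ are omitted because (97) does not depend on them.

```python
import math

def H_bar(x1, x2, p1, p2):
    """Closed-form Hamiltonian (97) for the two-pond game."""
    # min over b in [0, 10] of -p1 * b: attained at b = 10 if p1 >= 0, else b = 0
    h = -10.0 * p1 if p1 >= 0 else 0.0
    h += 0.5 * p2 * x2
    # max over a in [0, 1] of (p1 - 0.5 p2) * rate * a: a = 1 if the weight >= 0
    w = p1 - 0.5 * p2
    if w >= 0:
        rate = math.sqrt(2 * 9.81 * x1) if x1 >= 1 else 4.82 * math.sin(1.17 * x1)
        h += w * rate
    return h
```

Because the max-min and min-max coincide branch by branch, this single function serves as both $\bar{H}^{+}$ and $\bar{H}^{-}$, consistent with (98).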
The red curve in Figure 2 (a) shows the zero-level set of
$V_{1}^{\pm}(0,x_{1},x_{2})$, numerically computed by Algorithm 1. This
algorithm is implemented using the level set toolbox [8] and the
helperOC toolbox [14] in Matlab, and this simulation is carried out on a
laptop with a 2.8 GHz Quad-Core i7 CPU and 16 GB RAM. Each of the $x_{1}$,
$x_{2}$, and $z$ axes has 81 discretization points, and the time interval
$[0,1]$ is discretized with 201 points. The computation time for $V_{1}^{\pm}$
is 237 $s$. In Figure 2 (a), the value of $V_{1}^{\pm}$ is negative inside the
red curve and positive outside.
This example is time-invariant, so both HJ equations in (26) and (43) can be
utilized. In this example, we solve the HJ equation (26).
Figure 2: (a) The zero-level set of $V_{1}^{\pm}$ is shown in this figure.
The value of $V_{1}^{\pm}$ is negative inside the curve and positive outside.
The blue planes are $z$-level planes of 4, 6, 8, 10, 12,
14, and 16. (b) The $z$-level sets are shown in $(x_{1},x_{2})$-space. The
$z$-level sets for $z\geq 15$ are the same (the outer curve). The
$z$-level sets also show $\vartheta_{1}^{\pm}$ by Lemma 1. For example, for
$(1.60,2.85)$ on the $z$-level set of 4, $\vartheta_{1}^{\pm}$ is 4. On the
other hand, consider $(0.05,2)$, which lies on the $z$-level sets from 4.22 up
to any greater level; for such a point, $\vartheta_{1}^{\pm}$ is the minimum
$z$-level that contains it: $\vartheta_{1}^{\pm}(0.05,2)=4.22$.
Lemma 1 describes how to compute $\vartheta_{1}^{\pm}$ from the zero-level set
of $V_{1}^{\pm}$, as illustrated in Figure 2. Figure 2 (a) shows the
intersection of the zero-level set of $V_{1}^{\pm}$ with each $z$-level
plane, and Figure 2 (b) shows these intersections in the state space
$(x_{1},x_{2})$: the $z$-level sets on the zero-level set of $V_{1}^{\pm}$. As
illustrated in Figure 2 (b), lower $z$-levels are achieved in smaller regions
of $(x_{1},x_{2})$. In this example, as the $z$-level increases, the area
enclosed by the $z$-level set on the subzero-level set of $V_{1}^{\pm}$ grows
and converges at the $z$-level of 15, which is the outer curve in Figure 2
(b). For $(x_{1},x_{2})$ outside of the outer curve ($z\geq 15$), there is no
control signal or strategy for pond 1 (player A) that satisfies the state
constraint, which implies that $\vartheta_{1}^{\pm}(0,x_{1},x_{2})$ is
infinity. On the other hand, for $(x_{1},x_{2})$ on a unique $z$-level set,
the $z$-level is equal to $\vartheta_{1}^{\pm}$. For example, the $z$-level
set of 6 is the only $z$-level set passing through $(2.6,2)$, so
$\vartheta_{1}^{\pm}(0,2.6,2)=6$. For $(x_{1},x_{2})$ on multiple $z$-level
sets, the minimum $z$-level is $\vartheta_{1}^{\pm}$. For example, $(0.05,2)$
lies on the $z$-level sets of every level greater than or equal to 4.5, so
$\vartheta_{1}^{\pm}(0,0.05,2)=4.5$, since 4.5 is the minimum $z$-level that
contains the point $(0.05,2)$.
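The recipe of Lemma 1, $\vartheta_{1}^{\pm}(t,x)=\min\{z:V_{1}^{\pm}(t,x,z)\leq 0\}$ (with $+\infty$ when no such $z$ exists), is straightforward to apply to a gridded value function such as the one produced by Algorithm 1. A minimal sketch (the function name and array layout are ours):

```python
import numpy as np

def theta_from_V(V, z_grid):
    """Recover theta_1^± on the state grid from a gridded value function
    V of shape (nx1, nx2, nz): theta(x) = min { z : V(x, z) <= 0 },
    or +inf if V(x, .) is positive for every z-level."""
    feasible = V <= 0                       # boolean, shape (nx1, nx2, nz)
    any_feasible = feasible.any(axis=-1)
    first = feasible.argmax(axis=-1)        # index of first z with V <= 0
    return np.where(any_feasible, z_grid[first], np.inf)
```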
Figure 3: State trajectories under an optimal control signal and strategy for
the two players (pond 1 and the precipitation), with initial states (a)
$(x_{1},x_{2})=(10,4)$ and (b) $(x_{1},x_{2})=(2,1.1)$.
Using the value function $V_{1}^{\pm}$ and $\vartheta_{1}^{\pm}$, the method
presented in Section 5 provides a state trajectory and an optimal control and
strategy for the two players (pond 1 and the precipitation). Among multiple
solutions for optimal control and strategy presented in Remark 4, we choose
$\displaystyle\begin{split}&a_{*}\in\arg\max_{a\in A}\min_{b\in
B}\tilde{H}_{1}^{+}(a,b),\\\ &b_{*}\in\arg\min_{b\in
B}\tilde{H}_{1}^{+}(a_{*},b),\end{split}$ (99)
which satisfies the eight equations (83) to (86) since
$\bar{H}^{+}=\bar{H}^{-}$ and $\tilde{H}_{1}^{+}=\tilde{H}_{1}^{-}$, where
$\tilde{H}_{1}^{\pm}$ is equal to $D_{x}V_{1}^{\pm}\cdot f+D_{z}V_{1}^{\pm}L$
as defined in (78).
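In a discretized implementation, the selection (99) reduces to a brute-force maximin over sampled action sets. The sketch below is a generic routine over hypothetical grids of $A$ and $B$, not the exact code used in our simulation:

```python
import numpy as np

def maximin_controls(H_tilde, A_grid, B_grid):
    """Pick (a*, b*) as in (99): a* in argmax_a min_b H~(a, b),
    then b* in argmin_b H~(a*, b). H_tilde is any callable (a, b) -> float."""
    table = np.array([[H_tilde(a, b) for b in B_grid] for a in A_grid])
    i_star = table.min(axis=1).argmax()   # outer maximizer over a
    j_star = table[i_star].argmin()       # inner minimizer over b, given a*
    return A_grid[i_star], B_grid[j_star]
```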
Figure 3 shows state trajectories for two different initial states:
$(x_{1},x_{2})=(10,4)$ and $(2,1.1)$. As shown in Figure 3 (a), for the
initial state $(10,4)$, $\mathrm{x}_{2}$ hits the boundary of the state
constraint: $\mathrm{x}_{2}(1)=5$, and $\mathrm{x}_{1}$ is maximized at $t=1$.
Since the initial water levels of the two ponds are high, the precipitation
(player B) tries to increase the water level of pond 1 for all time, but
player A tries to balance the water levels of the two ponds. On the other
hand, for the initial state $(2,1.1)$, Figure 3 (b) shows that
$\mathrm{x}_{2}$ strictly satisfies the state constraint $[1,5]$, and
$\mathrm{x}_{1}$ is maximized at $t=0.015$ and increases thereafter.
Since the initial water levels of the two ponds are low, the precipitation
(player B) tries to violate the state constraint by not increasing the water
level of pond 1. However, player A tries to balance the two ponds’ water
levels so that both ponds have more water than their minimum levels.
As discussed in Section 6, Algorithm 1 has some numerical issues. First, we
observe that (99) provides a bang-bang control, so the state trajectories are
not smooth, as shown in Figure 3. This happens due to frequent sign changes of
the gradient along the time horizon. Second, numerical error in $V_{1}^{\pm}$
makes $\vartheta_{1}^{\pm}$ from Lemma 1 inaccurate, which could compromise
safety, although the violation of the state constraint shrinks as the grid
size decreases. In practice, we suggest adding a safety margin to the state
constraint: for example, use $c(s,\mathrm{x}(s))+\epsilon\leq 0$ for small
$\epsilon>0$ instead of $c(s,\mathrm{x}(s))\leq 0$.
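The suggested tightening can be enforced with a simple check along a sampled trajectory; a minimal sketch, assuming sampled values of $c(s,\mathrm{x}(s))$ and an illustrative margin $\epsilon$:

```python
def satisfies_constraint(c_values, eps=1e-2):
    """Tightened state-constraint check: require c(s, x(s)) + eps <= 0
    at every sample, leaving a margin eps > 0 against numerical error
    in the computed value function."""
    return all(c + eps <= 0 for c in c_values)
```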
## 8 Conclusion and Future Work
This paper presented four HJ equations for two classes of state-constrained
zero-sum games in which the terminal time is a variable to be determined and
the stage cost is non-zero. For each class of problems, two HJ equations were
presented: one for the time-varying version and one for the time-invariant
version. This paper also analyzed the optimal control and strategy for each
player using the gradient of the viscosity solution to the HJ equations, and
presented a numerical algorithm to compute the viscosity solution. As a
practical example, a 2D water system demonstrated one of the presented HJ
formulations.
Although our HJ formulation applies generally to the two classes of problems,
the numerical computation in Algorithms 1 and 2 is intractable for
high-dimensional systems (dimension higher than four or five), because the
computational complexity is exponential in the dimension of the state. In
future work, we aim to alleviate this complexity by deriving corresponding Lax
and Hopf theory or by applying approximation techniques from reinforcement
learning.
### .1 Proof of Lemma 1
Proof.
(i) $\vartheta_{1}^{+}(t,x)-z\leq 0\Rightarrow V_{1}^{+}(t,x,z)\leq 0$
$\vartheta_{1}^{+}(t,x)-z\leq 0$ implies that, for any small $\epsilon>0$ and
all $\delta\in\Delta(t)$, there exists $\alpha\in\mathcal{A}(t)$ such that
$\displaystyle\max_{\tau\in[t,T]}\int_{t}^{\tau}L(s,\mathrm{x}(s),\alpha(s),\delta[\alpha](s))ds+g(\tau,\mathrm{x}(\tau))-z\leq\epsilon$
(100)
and $c(s,\mathrm{x}(s))\leq 0$ for $s\in[t,\tau]$, where $\mathrm{x}$ solves
(1) for $(\alpha,\delta[\alpha])$. Thus, for all $\delta$, there exists
$\alpha$ such that $J_{1}(t,x,z,\alpha,\delta)\leq\epsilon$, which concludes
$V_{1}^{+}(t,x,z)\leq 0$. Note that $J_{1}$ is defined in (15).
(ii) $V_{1}^{+}(t,x,z)\leq 0\Rightarrow\vartheta_{1}^{+}(t,x)-z\leq 0$
Assumption 1 implies that, for any $\delta\in\Delta(t)$, there exists
$\alpha\in\mathcal{A}(t)$ such that $J_{1}(t,x,z,\alpha,\delta[\alpha])\leq
V_{1}^{+}(t,x,z)$. If $V_{1}^{+}(t,x,z)\leq 0$, for any $\delta\in\Delta(t)$,
there exists $\alpha$ such that $\max_{s\in[t,\tau]}c(s,\mathrm{x}(s))\leq 0$
and
$\int_{t}^{\tau}L(s,\mathrm{x}(s),\alpha(s),\delta[\alpha](s))ds+g(\tau,\mathrm{x}(\tau))-z\leq
0$ for all $\tau\in[t,T]$. Thus, $\vartheta_{1}^{+}(t,x)-z\leq 0$.
(i) and (ii) together conclude (19).
(iii) Let $\tilde{\vartheta}_{1}^{+}$ be the right-hand term in (20) subject
to (21). Then, the following statement can be proved by arguments analogous to
(i) and (ii).
$\displaystyle\tilde{\vartheta}_{1}^{+}(t,x)=\min z$ $\displaystyle\text{
subject to
}\sup_{\delta}\inf_{\alpha}\max\\{\max_{s\in[t,T]}c(s,\mathrm{x}(s)),$
$\displaystyle\max_{\tau\in[t,T]}g(\tau,\mathrm{x}(\tau))-\mathrm{z}(\tau)\\}\leq
0.$ (101)
By (18) and (19), we conclude
$\vartheta_{1}^{+}(t,x)=\tilde{\vartheta}_{1}^{+}(t,x)$.
(iv) The proof for $V_{1}^{-}$ and $\vartheta^{-}_{1}$ is similar to that for
$V_{1}^{+}$ and $\vartheta^{+}_{1}$. ∎
### .2 Proof of Lemma 2
Proof. Consider $(\mathrm{x},\mathrm{z})$ solving (17) for any
$(\alpha,\beta)$, and a small $h>0$. (18) implies
$\displaystyle J_{1}(t$
$\displaystyle,x,z,\alpha,\beta)=\max\Big{\\{}\max_{s\in[t,t+h]}c(s,\mathrm{x}(s)),$
$\displaystyle\max_{s\in[t,t+h]}g(s,\mathrm{x}(s))-\mathrm{z}(s),\max\big{\\{}\max_{s\in[t+h,T]}c(s,\mathrm{x}(s)),$
$\displaystyle\max_{s\in[t+h,T]}g(s,\mathrm{x}(s))-\mathrm{z}(s)\big{\\}}\Big{\\}}.$
(102)
(i) For all $\alpha\in\mathcal{A}(t)$ and $\delta\in\Delta(t)$, there exists
$\alpha_{1}\in\mathcal{A}(t)$, $\delta_{1}\in\Delta(t)$,
$\alpha_{2}\in\mathcal{A}(t+h)$, $\delta_{2}\in\Delta(t+h)$ such that
$\displaystyle\alpha(s)=\begin{cases}\alpha_{1}(s),&s\in[t,t+h],\\\
\alpha_{2}(s),&s\in(t+h,T],\end{cases}$ (103)
$\displaystyle\delta[\alpha](s)=\begin{cases}\delta_{1}[\alpha](s),&s\in[t,t+h],\\\
\delta_{2}[\alpha](s),&s\in(t+h,T].\end{cases}$ (104)
Then, we have
$\displaystyle
V_{1}^{+}(t,x,z)=\sup_{\begin{subarray}{c}\delta_{1}\in\Delta(t)\\\
\delta_{2}\in\Delta(t+h)\end{subarray}}\inf_{\begin{subarray}{c}\alpha_{1}\in\mathcal{A}(t)\\\
\alpha_{2}\in\mathcal{A}(t+h)\end{subarray}}J_{1}(t,x,z,\alpha,\delta[\alpha])$
$\displaystyle=\sup_{\delta_{1}\in\Delta(t)}\inf_{\alpha_{1}\in\mathcal{A}(t)}\max\Big{\\{}\max_{s\in[t,t+h]}c(s,\mathrm{x}(s)),$
$\displaystyle\max_{s\in[t,t+h]}g(s,\mathrm{x}(s))-\mathrm{z}(s),\sup_{\delta_{2}\in\Delta(t+h)}\inf_{\alpha_{2}\in\mathcal{A}(t+h)}\max$
$\displaystyle\big{\\{}\max_{s\in[t+h,T]}c(s,\mathrm{x}(s)),\max_{s\in[t+h,T]}g(s,\mathrm{x}(s))-\mathrm{z}(s)\big{\\}}\Big{\\}}.$
(105)
The last equality follows by combining (102) with the fact that the first two
terms of $V_{1}^{+}$ ($\max_{s\in[t,t+h]}c(s,\mathrm{x}(s))$ and
$\max_{s\in[t,t+h]}g(s,\mathrm{x}(s))-\mathrm{z}(s)$) are independent of
$(\alpha_{2},\delta_{2})$. (105) concludes (24).
(ii) The proof for (25) is similar to (i). ∎
### .3 Proof of Theorem 1
Proof. (i) At $t=T$, the definition of $V_{1}^{\pm}$ ((13) and (14)) implies
(29).
(ii) For $U\in C^{\infty}([0,T]\times\mathbb{R}^{n}\times\mathbb{R})$ such
that $V_{1}^{+}-U$ has a local maximum at
$(t_{0},x_{0},z_{0})\in(0,T)\times\mathbb{R}^{n}\times\mathbb{R}$ and
$(V_{1}^{+}-U)(t_{0},x_{0},z_{0})=0$, we will prove
$\displaystyle\begin{split}\max\big{\\{}&c(t_{0},x_{0})-U_{0},g(t_{0},x_{0})-z_{0}-U_{0},\\\
&U_{t0}-\bar{H}^{+}(t_{0},x_{0},z_{0},D_{x}U_{0},D_{z}U_{0})\big{\\}}\geq
0,\end{split}$ (106)
where $U_{0}=U(t_{0},x_{0},z_{0})$, $U_{t0}=U_{t}(t_{0},x_{0},z_{0})$,
$D_{x}U_{0}=D_{x}U(t_{0},x_{0},z_{0})$, and
$D_{z}U_{0}=D_{z}U(t_{0},x_{0},z_{0})$.
Suppose not; then there exist $\theta>0$ and $a_{1}\in A$ such that
$\displaystyle c(t,x)-U_{0}<-\theta,\quad g(t,x)-z-U_{0}<-\theta,$ (107)
$\displaystyle\begin{split}&U_{t}(t,x,z)+D_{x}U(t,x,z)\cdot f(t,x,a_{1},b)\\\
&\quad\quad\quad\quad\quad\quad-
D_{z}U(t,x,z)L(t,x,a_{1},b)\leq-\theta\end{split}$ (108)
for all $b\in B$ and all points $(t,x,z)$ sufficiently close to
$(t_{0},x_{0},z_{0})$: $|t-t_{0}|+\|x-x_{0}\|+|z-z_{0}|<h_{1}$ for small
enough $h_{1}>0$. Consider state trajectories $\mathrm{x}$ and $\mathrm{z}$
solving (17) for $\alpha_{1}\equiv a_{1}$, $t=t_{0}$, $x=x_{0}$, $z=z_{0}$,
and any $\beta\in\mathcal{B}(t_{0})$. By Assumption 1, there exists a small
$h$ such that $\|\mathrm{x}(s)-x_{0}\|+|\mathrm{z}(s)-z_{0}|<h_{1}-h$
($s\in[t_{0},t_{0}+h]$), then,
$\displaystyle
c(s,\mathrm{x}(s))-U_{0}<-\theta,~{}g(s,\mathrm{x}(s))-\mathrm{z}(s)-U_{0}<-\theta,$
(109) $\displaystyle U_{t}(s,\mathrm{x}(s),\mathrm{z}(s))$
$\displaystyle+D_{x}U(s,\mathrm{x}(s),\mathrm{z}(s))\cdot
f(s,\mathrm{x}(s),a_{1},\beta(s))$ $\displaystyle-
D_{z}U(s,\mathrm{x}(s),\mathrm{z}(s))L(s,\mathrm{x}(s),a_{1},\beta(s))\leq-\theta$
(110)
for all $s\in[t_{0},t_{0}+h]$ and $\beta\in\mathcal{B}(t_{0})$.
Since $V_{1}^{+}-U$ has a local maximum at $(t_{0},x_{0},z_{0})$,
$\displaystyle
V_{1}^{+}(t_{0}+h,\mathrm{x}(t_{0}+h),\mathrm{z}(t_{0}+h))-V_{1}^{+}(t_{0},x_{0},z_{0})$
$\displaystyle\leq$ $\displaystyle
U(t_{0}+h,\mathrm{x}(t_{0}+h),\mathrm{z}(t_{0}+h))-U(t_{0},x_{0},z_{0})$
$\displaystyle=$
$\displaystyle\int_{t_{0}}^{t_{0}+h}U_{t}(s,\mathrm{x}(s),\mathrm{z}(s))$
$\displaystyle+$ $\displaystyle D_{x}U(s,\mathrm{x}(s),\mathrm{z}(s))\cdot
f(s,\mathrm{x}(s),a_{1},\delta[\alpha_{1}](s))$ $\displaystyle-$
$\displaystyle
D_{z}U(s,\mathrm{x}(s),\mathrm{z}(s))L(s,\mathrm{x}(s),a_{1},\delta[\alpha_{1}](s))ds\leq-\theta
h$ (111)
for all $\delta\in\Delta(t_{0})$, according to (110). Lemma 2 implies
$\displaystyle
V_{1}^{+}(t_{0},x_{0},z_{0})\leq\sup_{\delta\in\Delta(t_{0})}\max\big{\\{}\max_{s\in[t_{0},t_{0}+h]}c(s,\mathrm{x}(s)),$
$\displaystyle\max_{s\in[t,t+h]}g(\mathrm{x}(s))-\mathrm{z}(s),V_{1}^{+}(t_{0}+h,\mathrm{x}(t_{0}+h),\mathrm{z}(t_{0}+h))\big{\\}}.$
(112)
Subtracting $U_{0}$ from both sides of (112) and then applying (109) and
(111), we have
$\displaystyle 0\leq\max\\{-\theta,-\theta,-\theta h\\}<0,$ (113)
which is a contradiction. Thus, (106) is proved.
(iii) For $U\in C^{\infty}([0,T]\times\mathbb{R}^{n}\times\mathbb{R})$ such
that $V_{1}^{+}-U$ has a local minimum at
$(t_{0},x_{0},z_{0})\in(0,T)\times\mathbb{R}^{n}\times\mathbb{R}$ and
$(V_{1}^{+}-U)(t_{0},x_{0},z_{0})=0$, we will prove
$\displaystyle\begin{split}\max\big{\\{}&c(t_{0},x_{0})-U_{0},g(t_{0},x_{0})-z_{0}-U_{0},\\\
&U_{t0}-\bar{H}^{+}(t_{0},x_{0},z_{0},D_{x}U_{0},D_{z}U_{0})\big{\\}}\leq
0,\end{split}$ (114)
Since $J_{1}(t_{0},x_{0},z_{0},\alpha,\delta[\alpha])$ in (15) is bounded
below by its value at $\tau=t_{0}$,
$\displaystyle
J_{1}(t_{0},x_{0},z_{0},\alpha,\delta[\alpha])\geq\max\big{\\{}c(t_{0},x_{0}),g(t_{0},x_{0})-z_{0}\big{\\}},$
(115)
for all $\alpha\in\mathcal{A}(t_{0}),\delta\in\Delta(t_{0})$. Subtracting
$U_{0}$ from both sides and taking the supremum over $\delta$ followed by the
infimum over $\alpha$, we have
$\displaystyle
0\geq\max\big{\\{}c(t_{0},x_{0})-U_{0},g(t_{0},x_{0})-z_{0}-U_{0}\big{\\}}.$
(116)
The rest of the proof is to show
$\displaystyle U_{t0}-\bar{H}^{+}(t_{0},x_{0},z_{0},D_{x}U_{0},D_{z}U_{0})\leq
0.$ (117)
Suppose not. For some $\theta>0$,
$\displaystyle\begin{split}U_{t}(t,x,z)+&\max_{b\in B}D_{x}U(t,x,z)\cdot
f(t,x,a,b)\\\ &-D_{z}U(t,x,z)L(t,x,a,b)\geq\theta\end{split}$ (118)
for all $a\in A$ and all points $(t,x,z)$ sufficiently close to
$(t_{0},x_{0},z_{0})$: $|t-t_{0}|+\|x-x_{0}\|+|z-z_{0}|<h_{1}$ for small
enough $h_{1}>0$. Consider state trajectories $\mathrm{x}_{1}$ and
$\mathrm{z}_{1}$ solving (17) for any $\alpha\in\mathcal{A}(t_{0})$,
$\beta=\delta_{1}[\alpha]$, where
$\displaystyle\delta_{1}[\alpha](s)\in\arg\max_{b\in
B}D_{x}U(s,\mathrm{x}_{1}(s),\mathrm{z}_{1}(s))\cdot
f(s,\mathrm{x}_{1}(s),\alpha(s),b)$ $\displaystyle\quad\quad\quad-
D_{z}U(s,\mathrm{x}_{1}(s),\mathrm{z}_{1}(s))L(s,\mathrm{x}_{1}(s),\alpha(s),b),$
(119)
$t=t_{0}$, $x=x_{0}$, and $z=z_{0}$. Since there exists a small $h>0$ such
that $\|\mathrm{x}_{1}(s)-x_{0}\|+|\mathrm{z}_{1}(s)-z_{0}|<h_{1}-h$
($s\in[t_{0},t_{0}+h]$),
$\displaystyle\begin{split}&U_{t}(s,\mathrm{x}_{1}(s),\mathrm{z}_{1}(s))\\\
+&D_{x}U(s,\mathrm{x}_{1}(s),\mathrm{z}_{1}(s))\cdot
f(s,\mathrm{x}_{1}(s),\alpha(s),\delta_{1}[\alpha](s))\\\
-&D_{z}U(s,\mathrm{x}_{1}(s),\mathrm{z}_{1}(s))L(s,\mathrm{x}_{1}(s),\alpha(s),\delta_{1}[\alpha](s))\geq\theta\end{split}$
(120)
for all $s\in[t_{0},t_{0}+h]$. By integrating (120) over
$s\in[t_{0},t_{0}+h]$, we have
$\displaystyle
U(t_{0}+h,\mathrm{x}_{1}(t_{0}+h),\mathrm{z}_{1}(t_{0}+h))-U(t_{0},x,z)\geq\theta
h.$ (121)
Since (121) holds for all $\alpha\in\mathcal{A}(t_{0})$ and
$\delta\in\Delta(t_{0})$,
$\displaystyle\begin{split}\sup_{\delta\in\Delta(t_{0})}\inf_{\alpha\in\mathcal{A}(t_{0})}U(t_{0}+h,\mathrm{x}(t_{0}+h),\mathrm{z}(t_{0}+h))\\\
\quad\quad-U(t_{0},x,z)\geq\theta h,\end{split}$ (122)
where $\mathrm{x},\mathrm{z}$ solve (17) for
$(\alpha,\delta,t_{0},x_{0},z_{0})$.
Since $V_{1}^{+}-U$ has a local minimum at $(t_{0},x_{0},z_{0})$,
$\displaystyle\sup_{\delta\in\Delta(t_{0})}\inf_{\alpha\in\mathcal{A}(t_{0})}\begin{tabular}[]{l}$V_{1}^{+}(t_{0}+h,\mathrm{x}(t_{0}+h),\mathrm{z}(t_{0}+h))$\\\
$\quad\quad\quad\quad\quad\quad\quad\quad\quad-
V_{1}^{+}(t_{0},x_{0},z_{0})$\end{tabular}$ (125) $\displaystyle\geq$
$\displaystyle\sup_{\delta\in\Delta(t_{0})}\inf_{\alpha\in\mathcal{A}(t_{0})}\begin{tabular}[]{l}$U(t_{0}+h,\mathrm{x}(t_{0}+h),\mathrm{z}(t_{0}+h))$\\\
$\quad\quad\quad\quad\quad\quad\quad\quad\quad-U(t_{0},x_{0},z_{0})$\end{tabular}$
(128) $\displaystyle\geq$ $\displaystyle~{}\theta h$ (129)
according to (122). However, Lemma 2 implies
$\displaystyle\sup_{\delta\in\Delta(t_{0})}\inf_{\alpha\in\mathcal{A}(t_{0})}V_{1}^{+}(t_{0}+h,\mathrm{x}$
$\displaystyle(t_{0}+h),\mathrm{z}(t_{0}+h))$ $\displaystyle\leq
V_{1}^{+}(t_{0},x_{0},z_{0}),$ (130)
which contradicts (129).
(iv) The proof for the viscosity solution $V_{1}^{-}$ is similar to (ii) and
(iii) for $V_{1}^{+}$. Also, the uniqueness follows from the uniqueness
theorems for viscosity solutions, Theorem 4.2 in [15], and the extension of
Theorem 1 in [16]. ∎
### .4 Proof of Lemma 3
Proof. Let $\tilde{V}_{1}^{+}$ and $\tilde{V}_{1}^{-}$ be the right-hand terms
in (40) and (41), respectively; $V_{1}^{+}$ and $V_{1}^{-}$ are defined in
(13) and (14), respectively.
(i) In this proof, we utilize the following properties from [3, 17], presented
below.
Define a pseudo-time operator $\sigma_{\mu}\mathrel{\mathop{\mathchar
58\relax}}[t,T]\rightarrow[t,T]$ for a given $\mu\in\mathcal{M}(t)$ (defined
in (31)) and the corresponding inverse operator:
$\displaystyle\sigma_{\mu}(s)=\int_{t}^{s}\mu(\tau)d\tau+t;$ (131)
$\displaystyle\sigma^{-1}_{\mu}(s)\coloneqq\min\tau\text{ subject to
}\sigma_{\mu}(\tau)=s.$ (132)
Then,
$\displaystyle\sigma_{\mu}\big{(}\sigma^{-1}_{\mu}(s)\big{)}=s,\quad
s\in[t,\sigma_{\mu}(T)],$ (133)
$\displaystyle\sigma^{-1}_{\mu}\big{(}\sigma_{\mu}(s)\big{)}=s,\quad
s\in\textrm{Range}(\sigma_{\mu}^{-1}),$ (134)
where
$\textrm{Range}(\sigma_{\mu}^{-1})\coloneqq\\{\sigma^{-1}_{\mu}(s)~{}|~{}s\in[t,\sigma_{\mu}(T)]\\}$.
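On a discretized time grid, the pseudo-time operator (131) and its inverse (132) can be approximated by cumulative integration and a first-hit search. A sketch (the function name and grid resolution are ours):

```python
import numpy as np

def sigma_and_inverse(mu, t, T, n=1001):
    """Discrete sketch of (131)-(132): sigma(s) = t + int_t^s mu(tau) dtau,
    and sigma_inv(v) = min tau with sigma(tau) = v.
    mu: callable on [t, T] with values in [0, 1] (the set M(t))."""
    s = np.linspace(t, T, n)
    m = np.array([mu(si) for si in s])
    # cumulative trapezoidal integral of mu from t, so sigma is nondecreasing
    sigma = t + np.concatenate(([0.0], np.cumsum((m[1:] + m[:-1]) / 2 * np.diff(s))))

    def sigma_inv(v):
        # first grid point where sigma reaches v (min tau with sigma(tau) = v)
        idx = np.searchsorted(sigma, v)
        return s[min(idx, n - 1)]

    return s, sigma, sigma_inv
```

For $\mu\equiv 1$ this recovers $\sigma_{\mu}(s)=s$ and $\sigma_{\mu}^{-1}(s)=s$, as (131)-(133) require.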
Consider two state trajectories: $(\mathrm{x},\mathrm{z})$ solving (17) for
$(\tilde{\alpha}(\sigma_{\mu}^{-1}(\cdot)),\tilde{\beta}(\sigma_{\mu}^{-1}(\cdot)))$
for $s\in[t,\sigma_{\mu}(T)]$; $(\tilde{\mathrm{x}},\tilde{\mathrm{z}})$
solving (39) for $(\tilde{\alpha},\tilde{\beta},\mu)$, and
$\mathrm{x}(t)=\tilde{\mathrm{x}}(t)=x$. Then,
$\displaystyle\mathrm{x}\big{(}\sigma_{\mu}(s)\big{)}=\tilde{\mathrm{x}}(s),\quad
s\in[t,T],$ (135) $\displaystyle
g\big{(}\mathrm{x}(\sigma_{\mu}(T))\big{)}-\mathrm{z}(\sigma_{\mu}(T))=g\big{(}\tilde{\mathrm{x}}(T)\big{)}-\tilde{\mathrm{z}}(T).$
(136)
(135) is according to Lemma 4 in [9], and (136) is derived by combining two
lemmas (Lemma 4 and 6) in [9].
(ii) $\tilde{V}_{1}^{+}(t,x,z)\geq V_{1}^{+}(t,x,z)$
For small $\epsilon>0$, there exists $\delta_{1}\in\Delta(t)$ such that
$\displaystyle\begin{split}V_{1}^{+}&(t,x,z)-\epsilon\leq\inf_{\alpha}\max_{\tau\in[t,T]}\max\big{\\{}\max_{s\in[t,\tau]}c(\mathrm{x}_{1}(s)),\\\
&\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
g\big{(}\mathrm{x}_{1}(\tau)\big{)}-\mathrm{z}_{1}(\tau)\big{\\}},\end{split}$
(137)
where $(\mathrm{x}_{1},\mathrm{z}_{1})$ solves (17) for
$(\alpha,\delta_{1}[\alpha])$. Let $\tau_{*}(\alpha)$ denote the maximizer of
the right-hand term in (137) for each $\alpha\in\mathcal{A}(t)$:
$\displaystyle\begin{split}&\tau_{*}(\alpha)\coloneqq\arg\max_{\tau\in[t,T]}\max\big{\\{}\max_{s\in[t,\tau]}c(\mathrm{x}_{1}(s)),g\big{(}\mathrm{x}_{1}(\tau)\big{)}-\mathrm{z}_{1}(\tau)\big{\\}}.\end{split}$
(138)
Define a particular strategy $\nu_{A,1}\in\textrm{N}_{A}(t)$:
$\displaystyle\nu_{A,1}[\alpha](s)\coloneqq\begin{cases}1,&s\in[t,\tau_{*}(\alpha)],\\\
0,&s\in(\tau_{*}(\alpha),T].\end{cases}$ (139)
Consider a state trajectory $(\tilde{\mathrm{x}}_{1},\tilde{\mathrm{z}}_{1})$
solving (39) for $(\alpha,\delta_{1}[\alpha],\nu_{A,1}[\alpha])$. Then, we
have
$\displaystyle(\tilde{\mathrm{x}}_{1},\tilde{\mathrm{z}}_{1})(s)=\begin{cases}(\mathrm{x}_{1},\mathrm{z}_{1})(s),&s\in[t,\tau_{*}(\alpha)],\\\
(\mathrm{x}_{1},\mathrm{z}_{1})(\tau_{*}(\alpha)),&s\in(\tau_{*}(\alpha),T],\end{cases}$
(140)
Since $\tilde{V}_{1}^{+}$ takes the supremum over $(\delta,\nu_{A})$,
$\displaystyle\tilde{V}_{1}^{+}(t,x,z)\geq\inf_{\alpha}\max\big{\\{}\max_{s\in[t,T]}c(\tilde{\mathrm{x}}_{1}(s)),g\big{(}\tilde{\mathrm{x}}_{1}(T)\big{)}-\tilde{\mathrm{z}}_{1}(T)\big{\\}}$
$\displaystyle=\inf_{\alpha}\max\big{\\{}\max_{s\in[t,\tau_{*}(\alpha)]}c(\mathrm{x}_{1}(s)),g\big{(}{\mathrm{x}}_{1}(\tau_{*}(\alpha))\big{)}-\mathrm{z}_{1}(\tau_{*}(\alpha))\big{\\}}$
$\displaystyle\geq V_{1}^{+}(t,x,z)-\epsilon.$ (141)
The equality follows from (140), and the last inequality from (137).
(iii) $V_{1}^{+}(t,x,z)\geq\tilde{V}_{1}^{+}(t,x,z)$
Define $\tilde{\mathfrak{A}}_{\mu}\mathrel{\mathop{\mathchar
58\relax}}\mathcal{A}(t)\rightarrow\mathcal{A}(t)$ and its pseudo-inverse
function ${\mathfrak{A}}_{\mu}\mathrel{\mathop{\mathchar
58\relax}}\mathcal{A}(t)\rightarrow\mathcal{A}(t)$:
$\displaystyle(\tilde{\mathfrak{A}}_{\mu}(\alpha))(s)\coloneqq\begin{cases}\alpha\big{(}\sigma_{\mu}(s)\big{)},&s\in\textrm{Range}(\sigma^{-1}_{\mu}),\\\
\text{any }a\in A,&s\notin\textrm{Range}(\sigma^{-1}_{\mu}),\end{cases}$ (142)
$\displaystyle(\mathfrak{A}_{\mu}(\tilde{\alpha}))(s)\coloneqq\begin{cases}\tilde{\alpha}(\sigma^{-1}_{\mu}(s)),&s\in[t,\sigma_{\mu}(T)],\\\
\text{any }a\in A,&s\in(\sigma_{\mu}(T),T],\end{cases}$ (143)
Also, define $\tilde{\mathfrak{D}}_{\mu}\mathrel{\mathop{\mathchar
58\relax}}\Delta(t)\rightarrow\Delta(t)$ and its pseudo-inverse function
$\mathfrak{D}_{\mu}\mathrel{\mathop{\mathchar
58\relax}}\Delta(t)\rightarrow\Delta(t)$:
$\displaystyle(\tilde{\mathfrak{D}}_{\mu}(\delta))[\tilde{\alpha}](s)=\begin{cases}\delta[\mathfrak{A}_{\mu}(\tilde{\alpha})](\sigma_{\mu}(s)),&s\in\textrm{Range}(\sigma^{-1}_{\mu}),\\\
\text{any }b\in B,&s\notin\textrm{Range}(\sigma^{-1}_{\mu}),\end{cases}$ (144)
$\displaystyle(\mathfrak{D}_{\mu}(\tilde{\delta}))[\alpha](s)=\begin{cases}\tilde{\delta}[\tilde{\mathfrak{A}}_{\mu}(\alpha)](\sigma^{-1}_{\mu}(s)),&s\in[t,\sigma_{\mu}(T)],\\\
\text{any }b\in B,&s\in(\sigma_{\mu}(T),T].\end{cases}$ (145)
These definitions satisfy the following properties:
$\displaystyle\begin{split}&(\tilde{\mathfrak{A}}_{\mu}(\mathfrak{A}_{\mu}(\tilde{\alpha})))(s)=\tilde{\alpha}(s),\\\
&(\tilde{\mathfrak{D}}_{\mu}(\mathfrak{D}_{\mu}(\tilde{\delta})))[\tilde{\alpha}](s)=\tilde{\delta}[\tilde{\alpha}](s),\end{split}\quad\text{for
}s\in\textrm{Range}(\sigma^{-1}_{\mu})$ (146)
$\displaystyle\begin{split}&(\mathfrak{A}_{\mu}(\tilde{\mathfrak{A}}_{\mu}(\alpha)))(s)={\alpha}(s),\\\
&(\mathfrak{D}_{\mu}(\tilde{\mathfrak{D}}_{\mu}(\delta)))[\alpha](s)=\delta[\alpha](s),\end{split}\quad\text{for
}s\in[t,\sigma_{\mu}(T)],$ (147)
$\displaystyle\big{\\{}\alpha={\mathfrak{A}}_{\mu}(\tilde{\alpha})~{}|~{}\tilde{\alpha}\in\mathcal{A}(t)\big{\\}}=\mathcal{A}(t),\forall\mu\in\mathcal{M}(t)$
(148)
$\displaystyle\big{\\{}\delta={\mathfrak{D}}_{\mu}(\tilde{\delta})~{}|~{}\tilde{\delta}\in\Delta(t)\big{\\}}=\Delta(t),\forall\mu\in\mathcal{M}(t).$
(149)
Consider $(\tilde{\mathrm{x}},\tilde{\mathrm{z}})$ solving (39) for
$(\tilde{\alpha},\tilde{\delta}[\tilde{\alpha}],\mu)$,
$(\mathrm{x},\mathrm{z})$ solving (17) for
$(\mathfrak{A}_{\mu}(\tilde{\alpha}),(\mathfrak{D}_{\mu}(\tilde{\delta}))[\mathfrak{A}_{\mu}(\tilde{\alpha})])$,
and $(\mathrm{x}_{1},\mathrm{z}_{1})$ solving (17) for
$(\alpha,\delta[\alpha])$. Then, we have
$\displaystyle\sup_{\tilde{\delta}\in\Delta(t)}\inf_{\tilde{\alpha}\in\mathcal{A}(t)}\max\big{\\{}\max_{s\in[t,T]}c(\tilde{\mathrm{x}}(s)),g(\tilde{\mathrm{x}}(T))-\tilde{\mathrm{z}}(T)\big{\\}}$
$\displaystyle=$
$\displaystyle\sup_{\tilde{\delta}\in\Delta(t)}\inf_{\tilde{\alpha}\in\mathcal{A}(t)}\max\big{\\{}\max_{s\in[t,\sigma_{\mu}(T)]}c(\mathrm{x}(s)),$
$\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad
g(\mathrm{x}(\sigma_{\mu}(T)))-{\mathrm{z}}(\sigma_{\mu}(T))\big{\\}},$ (150)
$\displaystyle=$
$\displaystyle\sup_{\delta\in\Delta(t)}\inf_{\alpha\in\mathcal{A}(t)}\max\big{\\{}\max_{s\in[t,\sigma_{\mu}(T)]}c(\mathrm{x}_{1}(s)),$
$\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad
g(\mathrm{x}_{1}(\sigma_{\mu}(T)))-{\mathrm{z}}_{1}(\sigma_{\mu}(T))\big{\\}},$
(151) $\displaystyle\leq$ $\displaystyle V_{1}^{+}(t,x,z).$ (152)
(150) is by (135) and (136), and (151) is according to (148) and (149). Since
the above inequality holds for all $\mu$, we substitute $\nu_{A}[\alpha]$ for
$\mu$ and take the supremum over $\nu_{A}$ on both sides, which concludes
$\tilde{V}_{1}^{+}(t,x,z)\leq V_{1}^{+}(t,x,z)$.
By (ii) and (iii), we conclude $V_{1}^{+}(t,x,z)=\tilde{V}_{1}^{+}(t,x,z)$.
(iv) $V_{1}^{-}(t,x,z)=\tilde{V}_{1}^{-}(t,x,z)$
Define $\tilde{\mathfrak{B}}_{\mu}\mathrel{\mathop{\mathchar
58\relax}}\mathcal{B}(t)\rightarrow\mathcal{B}(t)$ and its pseudo inverse
function $\mathfrak{B}_{\mu}\mathrel{\mathop{\mathchar
58\relax}}\mathcal{B}(t)\rightarrow\mathcal{B}(t)$:
$\displaystyle(\tilde{\mathfrak{B}}_{\mu}(\beta))(s)\coloneqq\begin{cases}\beta\big{(}\sigma_{\mu}(s)\big{)},&s\in\text{Range}(\sigma_{\mu}^{-1}),\\\
\text{any }b\in B,&s\notin\text{Range}(\sigma_{\mu}^{-1}),\end{cases}$ (153)
$\displaystyle(\mathfrak{B}_{\mu}(\tilde{\beta}))(s)\coloneqq\begin{cases}\tilde{\beta}(\sigma^{-1}_{\mu}(s)),&s\in[t,\sigma_{\mu}(T)],\\\
\text{any }b\in B,&s\in(\sigma_{\mu}(T),T],\end{cases}$ (154)
Also, define $\tilde{\mathfrak{C}}_{\mu}\mathrel{\mathop{\mathchar
58\relax}}\Gamma(t)\rightarrow\tilde{\Gamma}(t)$, where $\tilde{\Gamma}(t)$ is
defined in (37), and its pseudo inverse function
$\mathfrak{C}_{\mu}\mathrel{\mathop{\mathchar
58\relax}}\tilde{\Gamma}(t)\rightarrow\Gamma(t)$:
$\displaystyle(\tilde{\mathfrak{C}}_{\mu}(\gamma))[\tilde{\beta},\mu](s)=\begin{cases}\gamma[\mathfrak{B}_{\mu}(\tilde{\beta})]\big{(}\sigma_{\mu}(s)\big{)},&s\in\text{Range}(\sigma_{\mu}^{-1}),\\\
\text{any }a\in A,&s\notin\text{Range}(\sigma_{\mu}^{-1}),\end{cases}$ (155)
$\displaystyle({\mathfrak{C}_{\mu}}(\tilde{\gamma}))[\beta](s)=\begin{cases}\tilde{\gamma}[\tilde{\mathfrak{B}}_{\mu}(\beta),\mu]\big{(}\sigma^{-1}_{\mu}(s)\big{)},&s\in[t,\sigma_{\mu}(T)],\\\
\text{any }a\in A,&s\in(\sigma_{\mu}(T),T].\end{cases}$ (156)
These definitions satisfy the following properties: for any
$\mu\in\mathcal{M}(t)$,
$\displaystyle\big{\\{}\beta={\mathfrak{B}}_{\mu}(\tilde{\beta})~{}|~{}\tilde{\beta}\in\mathcal{B}(t)\big{\\}}=\mathcal{B}(t),$
(157)
$\displaystyle\big{\\{}\gamma={\mathfrak{C}}_{\mu}(\tilde{\gamma})~{}|~{}\tilde{\gamma}\in\tilde{\Gamma}(t)\big{\\}}=\Gamma(t).$
(158)
Consider $(\tilde{\mathrm{x}},\tilde{\mathrm{z}})$ solving (39) for
$(\tilde{\gamma}[\tilde{\beta},\mu],\tilde{\beta},\mu)$,
$(\mathrm{x},\mathrm{z})$ solving (17) for
$(\mathfrak{C}_{\mu}(\tilde{\gamma})[\mathfrak{B}_{\mu}(\tilde{\beta})],\mathfrak{B}_{\mu}(\tilde{\beta}))$,
and $(\mathrm{x}_{1},\mathrm{z}_{1})$ solving (17) for
$(\gamma[\beta],\beta)$.
$\displaystyle\tilde{V}_{1}^{-}(t,x,z)=\inf_{\tilde{\gamma}\in\tilde{\Gamma}(t)}\sup_{\tilde{\beta}\in\mathcal{B}(t),\mu\in\mathcal{M}(t)}\max\big{\\{}\max_{s\in[t,T]}c(\tilde{\mathrm{x}}(s)),$
$\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
g(\tilde{\mathrm{x}}(T))-\tilde{\mathrm{z}}(T)\big{\\}}$
$\displaystyle=\inf_{\tilde{\gamma}\in\tilde{\Gamma}(t)}\sup_{\tilde{\beta}\in\mathcal{B}(t),\mu\in\mathcal{M}(t)}\max\big{\\{}\max_{s\in[t,\sigma_{\mu}(T)]}c(\mathrm{x}(s)),$
$\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
g(\mathrm{x}(\sigma_{\mu}(T)))-\mathrm{z}(\sigma_{\mu}(T))\big{\\}}$ (159)
$\displaystyle=\inf_{\gamma\in\Gamma(t)}\sup_{\beta\in\mathcal{B}(t),\mu\in\mathcal{M}(t)}\max\big{\\{}\max_{s\in[t,\sigma_{\mu}(T)]}c(\mathrm{x}_{1}(s)),$
$\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad
g(\mathrm{x}_{1}(\sigma_{\mu}(T)))-\mathrm{z}_{1}(\sigma_{\mu}(T))\big{\\}}.$
(160)
(159) is by (135) and (136), and (160) is by (157) and (158). In (160), $\mu$
controls only the terminal time $\sigma_{\mu}(T)$; hence, the supremum over
$\mu$ can be converted to a maximum over $\tau$, which concludes
$V_{1}^{-}(t,x,z)=\tilde{V}_{1}^{-}(t,x,z)$.
∎
### .5 Proof of Theorem 2
Proof. The terminal value is derived by substituting $T$ for $t$ in (40) or
(41):
$\displaystyle V_{1}^{\pm}(T,x,z)=\max\\{c(T,x),g(T,x)-z\\}$ (161)
for all $(x,z)\in\mathbb{R}^{n}\times\mathbb{R}$.
(i) The work in [2] presented the HJ equation for state-constrained problems
with a fixed terminal time. Applying the HJ equation in [2] to $V_{1}^{+}$,
$\displaystyle 0=\max\big{\\{}c(x)$ $\displaystyle-
V_{1}^{+},V^{+}_{1,t}-\tilde{H}_{1}^{+}(x,z,D_{x}V_{1}^{+},D_{z}V_{1}^{+})\big{\\}},$
(162)
where
$\displaystyle\tilde{H}_{1}^{+}(x,z,p,q)\coloneqq\max_{a\in
A}\min_{\begin{subarray}{c}b\in B\\\ b_{d}\in[0,1]\end{subarray}}-p\cdot
f(x,a,b)b_{d}+qL(x,a,b)b_{d}.$ (163)
Since, for all $a\in A,b\in B$, the term $-p\cdot
f(x,a,b)b_{d}+qL(x,a,b)b_{d}$ is linear in $b_{d}$ and hence minimized at
$b_{d}=0$ or 1,
$\displaystyle\begin{split}\tilde{H}_{1}^{+}(x,z,p,q)&=\max_{a\in
A}\min\big{\\{}0,\\\ &\min_{b\in B}-p\cdot
f(x,a,b)+qL(x,a,b)\big{\\}}.\end{split}$ (164)
Also, since $0$ does not depend on $a$, the maximum over $a$ can be moved
inside the minimum:
$\displaystyle\begin{split}\tilde{H}_{1}^{+}(x,z,p&,q)=\min\\{0,\bar{H}^{+}(x,z,p,q)\\},\end{split}$
(165)
where $\bar{H}^{+}$ is defined in (27). By applying (165) to (162), (43) is
proved for $V_{1}^{+}$.
(ii) By applying [2] to $V_{1}^{-}$,
$\displaystyle 0=\max\big{\\{}c(x)$ $\displaystyle-
V_{1}^{-},V^{-}_{1,t}-\tilde{H}_{1}^{-}(x,z,D_{x}V_{1}^{-},D_{z}V_{1}^{-})\big{\\}},$
(166)
where
$\displaystyle\tilde{H}_{1}^{-}(x,z,p,q)\coloneqq\min_{\begin{subarray}{c}b\in
B\\\ b_{d}\in[0,1]\end{subarray}}\max_{a\in A}-p\cdot
f(x,a,b)b_{d}+qL(x,a,b)b_{d}.$ (167)
Since $b_{d}\in[0,1]$ is non-negative,
$\displaystyle\tilde{H}_{1}^{-}(x,z,p,q)$
$\displaystyle=\min_{b_{d}\in[0,1]}b_{d}\min_{b\in B}\max_{a\in A}[-p\cdot
f(x,a,b)$
$\displaystyle\quad\quad\quad\quad\quad\quad\quad\quad\quad+qL(x,a,b)]$
$\displaystyle=\min\\{0,\bar{H}^{-}(x,z,p,q)\\},$ (168)
where $\bar{H}^{-}$ is defined in (28). (166) and (168) prove (43) for
$V_{1}^{-}$. ∎
## References
* [1] L. C. Evans and P. E. Souganidis, “Differential games and representation formulas for solutions of Hamilton-Jacobi-Isaacs equations,” _Indiana University mathematics journal_ , vol. 33, no. 5, pp. 773–797, 1984.
* [2] A. Altarovici, O. Bokanowski, and H. Zidani, “A general Hamilton-Jacobi framework for non-linear state-constrained control problems,” _ESAIM: Control, Optimisation and Calculus of Variations_ , vol. 19, no. 2, pp. 337–357, 2013.
* [3] I. M. Mitchell, A. M. Bayen, and C. J. Tomlin, “A time-dependent Hamilton-Jacobi formulation of reachable sets for continuous dynamic games,” _IEEE Transactions on Automatic Control_ , vol. 50, no. 7, pp. 947–957, 2005.
* [4] K. Margellos and J. Lygeros, “Hamilton-Jacobi formulation for reach-avoid differential games,” _IEEE Transactions on Automatic Control_ , vol. 56, no. 8, pp. 1849–1861, 2011.
* [5] J. F. Fisac, M. Chen, C. J. Tomlin, and S. S. Sastry, “Reach-avoid problems with time-varying dynamics, targets and constraints,” in _Proceedings of the 18th international conference on hybrid systems: computation and control_. ACM, 2015, pp. 11–20.
* [6] D. Lee, A. Keimer, A. M. Bayen, and C. J. Tomlin, “Hamilton-Jacobi Formulation for State-Constrained Optimal Control and Zero-Sum Game Problems,” in _Decision and Control (CDC), 2020 IEEE 59th Conference on_. IEEE, 2020, accepted.
* [7] R. J. Elliott and N. J. Kalton, _The existence of value in differential games_. American Mathematical Soc., 1972, vol. 126.
* [8] I. M. Mitchell and J. A. Templeton, “A toolbox of Hamilton-Jacobi solvers for analysis of nondeterministic continuous and hybrid systems,” in _International Workshop on Hybrid Systems: Computation and Control_. Springer, 2005, pp. 480–494.
* [9] I. M. Mitchell and C. J. Tomlin, “Overapproximating reachable sets by Hamilton-Jacobi projections,” _J. Scientific Computing_ , vol. 19, no. 1-3, pp. 323–346, 2003.
* [10] M. G. Crandall and P.-L. Lions, “Two approximations of solutions of Hamilton-Jacobi equations,” _Mathematics of computation_ , vol. 43, no. 167, pp. 1–19, 1984.
* [11] S. Osher and R. Fedkiw, _Level Set Methods and Dynamic Implicit Surfaces_. Springer-Verlag, 2003, vol. 153.
* [12] C.-W. Shu and S. Osher, “Efficient implementation of essentially non-oscillatory shock-capturing schemes,” _Journal of computational physics_ , vol. 77, no. 2, pp. 439–471, 1988.
* [13] M. P. Chapman, K. M. Smith, V. Cheng, D. L. Freyberg, and C. J. Tomlin, “Reachability analysis as a design tool for stormwater systems,” in _2018 IEEE Conference on Technologies for Sustainability (SusTech)_. IEEE, 2018, pp. 1–8.
* [14] M. Chen, S. Herbert, S. Bansal, and C. Tomlin, “Optimal control helper toolbox,” http://github.com/HJReachability/helperOC.
* [15] E. Barron and H. Ishii, “The bellman equation for minimizing the maximum cost,” _Nonlinear Analysis: Theory, Methods & Applications_, vol. 13, no. 9, pp. 1067–1090, 1989.
* [16] L. C. Evans, _Partial differential equations_. American Mathematical Society, 2010.
* [17] D. Lee and C. J. Tomlin, “A Hopf-Lax formula in Hamilton-Jacobi analysis of reach-avoid problems,” _IEEE Control Systems Letters_ , vol. 5, no. 3, pp. 1055–1060, 2020.
Donggun Lee is a Ph.D. student in Mechanical Engineering at UC Berkeley. He received B.S. and M.S. degrees in Mechanical Engineering from Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea, in 2009 and 2011, respectively. Donggun works in the area of control theory and robotics.

Dr. Claire Tomlin is the Charles A. Desoer Professor of Engineering in EECS at Berkeley. She was an Assistant, Associate, and Full Professor in Aeronautics and Astronautics at Stanford from 1998 to 2007, and joined Berkeley in 2005. Claire works in the area of control theory and hybrid systems, with applications to air traffic management, UAV systems, energy, robotics, and systems biology. She is a MacArthur Foundation Fellow (2006) and an IEEE Fellow (2010); in 2017 she was awarded the IEEE Transportation Technologies Award, and in 2019 she was elected to the National Academy of Engineering and the American Academy of Arts and Sciences.
# How Good is ChatGPT in Giving Advice on Your Visualization Design?
Nam Wook Kim (Boston College, 140 Commonwealth Ave, Chestnut Hill, Massachusetts, USA 02467), Grace Myers (Boston College, 140 Commonwealth Ave, Chestnut Hill, Massachusetts, USA 02467), and Benjamin Bach (University of Edinburgh, Old College, South Bridge, Edinburgh, United Kingdom)
###### Abstract.
Data visualization practitioners often lack formal training, resulting in a
knowledge gap in visualization design best practices. Large-language models
like ChatGPT, with their vast internet-scale training data, offer
transformative potential in addressing this gap. To explore this potential, we
adopted a mixed-method approach. Initially, we analyzed the VisGuides forum, a
repository of data visualization questions, by comparing ChatGPT-generated
responses to Human replies. Subsequently, our user study delved into
practitioners’ reactions and attitudes toward ChatGPT as a visualization
assistant. Participants, who brought their visualizations and questions,
received feedback from both Human experts and ChatGPT in a randomized order.
They filled out experience surveys and shared deeper insights through post-
interviews. The results highlight the unique advantages and disadvantages of
ChatGPT, such as its ability to quickly provide a wide range of design options
based on a broad knowledge base, while also revealing its limitations in terms
of depth and critical thinking capabilities.
data visualization, design feedback, ChatGPT, LLM, AI
CCS Concepts: Human-centered computing → Empirical studies in visualization; Human-centered computing → User studies
## 1\. Introduction
Visualizations are ubiquitous and widely employed by practitioners across
various disciplines. However, many data visualization practitioners commonly
lack formal training and instead acquire the necessary skills on the go, often
leaning on examples from the internet, online blogs, and other publicly
available resources (Esteves and Neves, 2022; dvs, [n. d.]). They frequently
find themselves in situations where they must navigate intricate and
occasionally contradictory choices (Choi et al., 2023b). In such instances,
they often revert to their instincts, drawing from the experiences and
observations they have accumulated along the way (Choi et al., 2023b; Bako et
al., 2023). Alternatively, they seek feedback from colleagues, often field
experts, who can help provide novel perspectives, validate design choices, and
challenge assumptions (Choi et al., 2023b; Luther et al., 2015). Nevertheless,
not all practitioners have the privilege of accessing such feedback from
experienced colleagues.
This work delves into ways to address the knowledge gap in practical contexts.
We examine the emergence of large language models (LLMs) equipped with
extensive internet-scale training data, which holds transformative potential
(Okerlund et al., 2022). LLM-based chatbots, exemplified by ChatGPT (OpenAI,
[n. d.]), can serve as a design companion to offer tailored suggestions by
analyzing a practitioner’s design choices and considering established best
practices. This paradigm shift can level the playing field, enabling
practitioners with limited knowledge and resources to receive high-quality
guidance in their design journey. Our two specific research questions are as
follows:
* •
RQ1. Can ChatGPT rival Human expertise in data visualization knowledge?
* •
RQ2. How would visualization practitioners perceive ChatGPT’s design feedback?
Figure 1. Methodology overview: the methodology comprises two key phases. In the first phase, questions answered by Human respondents in a forum space are analyzed and then presented to ChatGPT, and the two sets of responses are compared through content analysis. In the second phase, practitioners bring a design-feedback question to a session, solicit visualization design feedback from both ChatGPT and Human experts, and share their experience in a follow-up interview.
We employ a mixed-method approach (Figure 1) to explore the potential of
ChatGPT in offering data visualization design knowledge. First, we
investigated the VisGuides forum, which provides a valuable repository of
questions and responses from practitioners. We filtered out inadequately
formulated questions lacking context, such as vague queries like “how do you
like my design?”, and retained only questions with at least one response. Our
final collection comprises 119 questions, each accompanied by an average of
1.54 replies. We fed the questions to ChatGPT and compared its responses to
the Human counterparts across 6 key metrics: coverage, topicality, breadth,
clarity, depth, and actionability. We found that ChatGPT’s performance is
comparable to, and often better than, human responses, especially in providing
clear and comprehensive answers that cover all questions asked.
While the VisGuides forum analysis provided insight into ChatGPT’s response
quality, it fell short of capturing practitioners’ reactions, attitudes, and
overall experiences. Thus, we conducted a subsequent user study wherein we
invited practitioners to partake in a comparative assessment of Human expert
feedback and ChatGPT-generated feedback. Participants were recruited through
the Data Visualization Society and a university mailing list. Before the
study, participants were requested to bring their visualizations along with
questions for feedback. The sequence in which they sought feedback, either
from Human experts or ChatGPT, was randomized to mitigate order effects.
Interestingly, all participants preferred Human experts, valuing their ability
to engage in fluid conversations and provide tailored recommendations.
However, they also appreciated the broad knowledge demonstrated by ChatGPT,
despite noting that ChatGPT’s responses lacked depth.
Synthesizing these two studies, we can draw valuable lessons and identify
future opportunities. For instance, the variation in human responses within
VisGuides could contribute to the observed differences between the studies, as
there was a mix of both high-quality and subpar responses. One significant
limitation of ChatGPT, as pointed out by participants, is its inability to
grasp nuanced visuals and engage in proactive discussions, especially when
compared to human experts. Furthermore, we explore the potential advantages of
integrating ChatGPT into visualization education and its role as a design
knowledge agent within existing visualization tools for practitioners.
## 2\. Related Work
### 2.1. Data Visualization Design Practice
Data visualization has gone mainstream, frequently used to explore and
communicate data in a wide variety of industry sectors. As a result, data
visualization practitioners have diverse professions, such as managers, scientists, journalists, and analysts, and do not necessarily hold a specific job title such as “visualization designer” or “visualization developer” (Esteves and Neves, 2022). These professionals typically lack formal training or education in data visualization and instead learn the necessary visualization skills on the go (Esteves and Neves, 2022; dvs, [n. d.]). While decades of empirical research in data visualization have produced fruitful knowledge about effective visualization design, it is unclear how accessible this knowledge is to practitioners (Choi et al., 2021; Diehl et al., 2018). In fact, it has been reported that many practitioners are not aware of theories and principles from research (Parsons, 2022).
As with any creative design profession, visualization practitioners run into
challenging situations where they must make decisions among conflicting design
alternatives (Choi et al., 2023b). Such decisions might include selecting
chart types, making competing decisions between aesthetics and functions,
picking appropriate color scales, and addressing conflicts with user and
business stakeholder needs (Choi et al., 2023b). Processes by practitioners to
address such design issues are more nuanced and multi-faceted than theoretical
process models proposed in the research field (Parsons, 2022). They depend on
a kind of situated planning that often relies on forms of experimentation and
responding to what is happening in the moment (Parsons, 2022; Parsons et al.,
2020). They balance readability and engagement and also consider contextual
factors such as whether their visualizations are business-oriented dashboards
or public-facing graphics (Parsons and Shukla, 2020).
Practitioners frequently resort to their intuition or gut instinct while also
looking for examples for inspiration (Choi et al., 2023b; Parsons, 2022).
Moreover, they also seek feedback from their colleagues to improve their data
visualizations (Choi et al., 2023b; Esteves and Neves, 2022). Such feedback is
essential to assess a design and generate revision ideas in the design
process, although the fear of criticism and non-anonymity can make people
uncomfortable receiving feedback from colleagues. Moreover, self-employed
people might struggle to find colleagues who can provide valuable feedback (Chalkidis et al., 2019).
In this work, we examine how an LLM-based chatbot, trained with extensive
knowledge in the wild, can serve as a design companion to provide knowledge
and feedback to practitioners.
### 2.2. Evaluating LLMs’ Knowledge Capacity
LLMs, such as BERT (Devlin et al., 2018), GPT (Brown et al., 2020; OpenAI,
2023), and Llama (Touvron et al., 2023), have rapidly gained popularity and
adoption across both industry and academia (Bommasani et al., 2021; Zhao et
al., 2023). Leveraging massive amounts of text data and computational power,
these foundation models have achieved impressive performance on a wide variety
of tasks, from generating human-like text to high-quality images (Radford et
al., 2019, 2018; Wei et al., 2022). However, the scale and complexity of these
models make it difficult to fully understand the scope and limitations of
their language capabilities (Bender et al., 2021).
A number of benchmark tasks have been proposed to assess the general knowledge
and reasoning capabilities of language models in a more holistic manner (Liang
et al., 2022; Chang et al., 2023). These include tasks that require models to
answer broad-coverage factoid questions based on pre-existing knowledge (Liang
et al., 2022). Other benchmarks evaluate numeric, analogical, and logical
reasoning skills across different modalities (Qin et al., 2023; Zhang et al.,
2023), while some others examine the ability of LLMs to generalize knowledge
to new concepts (Wei et al., 2022). Assessing models on such diverse reasoning
skills can provide deeper insight into their adaptability, knowledge gaps, and
limitations. In addition to assessing such capabilities, an important area of
research has focused on evaluating potential harms and biases in large
language models. Some studies have performed targeted adversarial evaluations
to expose problematic biases encoded in the models’ training data (Nadeem et
al., 2021; Tan and Celis, 2019; Gehman et al., 2020).
In addition to generic capabilities, research has been carried out to
investigate LLMs’ discipline-specific knowledge and skills. The inherent
intricacies and nuances of specific domains necessitate tailored evaluation
methods. For instance, researchers explored students’ and teachers’ usage of
and perceptions toward LLMs, investigating opportunities and potential
benefits for education (Kasneci et al., 2023; Baidoo-Anu and Owusu Ansah,
2023; Tlili et al., 2023). Researchers also tested LLMs’ ability to pass exams
in specific domains like medicine (Gilson et al., 2023), law (Choi et al.,
2023a), and business (Mollman, 2023). Others have evaluated LLMs’ ability to
carry out real-world tasks. Examples include analyzing diagnostic abilities by
answering questions about medications, symptoms and treatments (Lee et al.,
2023); producing research hypotheses (Lahat et al., 2023); predicting legal
judgment and recommending case citations (Chalkidis et al., 2019); and
predicting chemical properties (Pan, 2023).
Recent studies have begun exploring the applicability of LLMs in the realm of data visualization. Some investigations have
examined their capacity to generate data visualizations (Paik, 2023;
Hutchinson et al., 2023), facilitate exploratory data analysis (Dibia, 2023),
and create stylistic data graphics (Dibia, 2023), in addition to generating
graph drawings (Bartolomeo et al., 2023). Furthermore, studies have delved
into the usability of LLM-based code generation in coding data visualizations
(Vaithilingam et al., 2022), as well as understanding visualization
practitioners’ perceptions regarding generative text-to-image models
(Schetinger et al., 2023). Our goal in this work is to explore the potential
of LLMs in addressing the knowledge gap among data visualization practitioners
by offering design insights and feedback, thereby shedding light on the
efficacy of LLMs as design companions.
## 3\. Assessing ChatGPT’s Competence in Data Visualization Knowledge
As a step toward assessing ChatGPT’s data visualization knowledge, we were
interested in its ability to respond to real-world data visualization
questions in comparison to Humans.
### 3.1. Data Collection
We relied on VisGuides (Diehl et al., 2018) as our main source for
visualization questions. VisGuides is a discussion forum focused on
visualization guidelines, where practitioners from various backgrounds, such
as scientists, designers, engineers, and students, ask and respond to
questions regarding their visualizations and fundamental visualization
principles.
We identified a total of 226 questions within the VisGuides repository. Among
these, we selected 119 questions for the study based on specific inclusion
criteria. These criteria included the presence of a question in the post, the
requirement for a visualization description to possess sufficient visual
encoding (due to ChatGPT’s inability to process images), the need for
questions to be comprehensible and well-structured (excluding overly generic
inquiries like ’What do you think?’ or ’Is this good enough?’), and the
necessity of having at least one Human response. Two of the authors jointly
reviewed and unanimously approved the inclusion of these 119 questions.
The VisGuides questions broadly fell into two categories: design feedback questions (87) and visualization guideline
questions (32). The former aimed to enhance users’ personal visualizations
(see Figure 3 B & C), while the latter sought to comprehend visualization
guidelines or principles (see Figure 3 A). For a more comprehensive taxonomy
of questions on the platform, Diehl et al. provide detailed insights (Diehl et
al., 2021).
After compiling the list of qualifying questions, we presented each query to
ChatGPT, accompanied by a role-playing prompt designed as follows:
Please act as if you are a visualization expert.
Can you respond to this question delimited by ///.
///
{Question}
///
Please format your reply as JSON objects with
keys: Question and Response.
{
Question: xxx
Response: xxx
}
There may be more than one question,
format each as a new object.
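As a sketch of how this prompt could be assembled and its reply parsed programmatically (the helper names and the parsing logic here are our own illustration, not code from the study; the JSON skeleton shown in the prompt is abridged):

```python
import json

# Role-playing prompt template from the study (JSON skeleton abridged);
# ROLE_PROMPT, build_prompt, and parse_reply are hypothetical helper names.
ROLE_PROMPT = (
    "Please act as if you are a visualization expert.\n"
    "Can you respond to this question delimited by ///.\n"
    "///\n{question}\n///\n"
    "Please format your reply as JSON objects with keys: Question and Response.\n"
    "There may be more than one question, format each as a new object."
)

def build_prompt(question: str) -> str:
    """Wrap a single forum question in the role-playing prompt."""
    return ROLE_PROMPT.format(question=question)

def parse_reply(raw: str) -> list[dict]:
    """Parse the model's JSON reply into a list of {Question, Response} objects.

    Accepts either a single object or an array of objects, matching the
    'more than one question' clause of the prompt.
    """
    data = json.loads(raw)
    return data if isinstance(data, list) else [data]
```

Each parsed `{Question, Response}` pair can then be placed in its own spreadsheet cell alongside the Human replies for rating.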
To streamline our analysis, we created a spreadsheet where each row included a
question, the corresponding ChatGPT response, and a compilation of Human
responses. Each question had 1.54 Human responses on average ($std.dev$ = 1.03). The average approximate word count of the Human responses was 235.35 ($std.dev$ = 289.62), with a maximum of 1721 words and a minimum of 13 words. The average approximate word count of the ChatGPT responses was 463.42 ($std.dev$ = 189.34), with a maximum of 1192 words and a minimum of 115 words.
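These per-response length statistics can be tabulated with a short helper; a minimal sketch, assuming whitespace tokenization as the "approximate word count" (the helper name is ours):

```python
from statistics import mean, stdev

def summarize_word_counts(responses: list[str]) -> dict:
    """Approximate word-count summary (whitespace tokens) across a set of replies."""
    counts = [len(r.split()) for r in responses]
    return {
        "mean": round(mean(counts), 2),
        "std.dev": round(stdev(counts), 2) if len(counts) > 1 else 0.0,
        "min": min(counts),
        "max": max(counts),
    }
```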
### 3.2. Analysis Method
Two researchers performed an open coding process on a randomly selected 10%
sample of the questions and the corresponding ChatGPT and Human responses.
This process resulted in the development of six evaluation metrics: breadth,
clarity, depth, actionability, coverage, and topicality. These metrics were
assessed using a Likert scale that ranged from 1 to 5. For instance, a rating
of 1 indicated narrow responses, while a rating of 5 denoted very broad
responses in the context of the breadth metric. The researchers initially
assigned scores to the 10% sample, engaging in discussions to resolve any
discrepancies and establish consensus definitions and criteria for each score
within every metric to ensure consistency. Following this, one of the
researchers proceeded to score the remaining question responses.
Given that some questions elicited multiple Human responses while others received only one, we created a composite score to evaluate the collective
quality of the Human responses. Furthermore, any elements of the Human
responses that extended beyond the capabilities of ChatGPT, such as providing
citations to related literature, links to external resources, visual examples,
or posing follow-up questions, were duly noted. We also categorized whether
each question sought feedback on the questioner’s visualizations or inquired
about general visualization design knowledge.
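The paper does not specify how the composite score was formed; purely for illustration, one plausible aggregation (an assumption on our part, motivated by the later observation that multiple respondents complement each other) takes each metric's maximum across a question's Human responses:

```python
def composite_scores(responses: list[dict]) -> dict:
    """Illustrative composite: per-metric maximum across a question's Human responses.

    `responses` holds per-response metric ratings, e.g.
    [{"breadth": 3, "depth": 4}, {"breadth": 5, "depth": 2}].
    NOTE: this max-based aggregation is an assumption, not the paper's
    documented procedure.
    """
    metrics = responses[0].keys()
    return {m: max(r[m] for r in responses) for m in metrics}
```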
### 3.3. Results
Figure 2. Comparison of metric scores between Human and ChatGPT responses: interval plots (with confidence intervals) show significant differences for breadth, clarity, and coverage, where ChatGPT scored higher with non-overlapping intervals, while the scores for actionability, depth, and topicality were comparable between the two conditions.
Figure 3. Examples of VisGuides questions and responses: each question is accompanied by responses from both ChatGPT and a Human expert. The ChatGPT responses consistently exhibit breadth, topicality, and coverage, while the Human response ratings are more varied, as is evident when comparing the first and last examples: (A) a guideline question on color palettes, (B) a general design-feedback question about a bubble chart, and (C) a specific design-feedback question about a tabular heat map.
#### 3.3.1. Ratings on Quality Metrics
Figure 2 shows the scores of the response quality metrics for both ChatGPT and Human responses. We describe the findings for each quality metric below.
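The per-metric scores can be summarized as means with confidence intervals per condition; a minimal sketch, assuming a normal-approximation 95% interval (the paper does not state how its intervals were computed, and the helper names are ours):

```python
from math import sqrt
from statistics import mean, stdev

def mean_ci95(scores: list[float]) -> tuple[float, float, float]:
    """Mean with a normal-approximation 95% CI: mean ± 1.96·SD/√n."""
    m = mean(scores)
    half = 1.96 * stdev(scores) / sqrt(len(scores)) if len(scores) > 1 else 0.0
    return (m, m - half, m + half)

def intervals_overlap(a: tuple, b: tuple) -> bool:
    """Whether two (mean, lo, hi) intervals overlap; non-overlap suggests a clear difference."""
    return a[1] <= b[2] and b[1] <= a[2]
```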
##### Coverage:
Both ChatGPT and Humans excelled in covering the entirety of the question,
with ChatGPT slightly outperforming Humans. ChatGPT scored 4.92 ($std.dev$ =
0.33), while Humans scored 4.37 ($std.dev$ = 0.95). The lower score often
stemmed from the Human respondent’s failure to address all facets of the
question. For instance, in Figure 3C, the user raised a two-part question,
“…would you recommend all the states be placed on the Y-axis instead of
grouped as regions? If so, which visual layouts would be effective with
dealing with the large number of states?…”. However, the Human response only
addressed the second part of the question, suggesting a heat map as a useful
visual layout.
##### Topicality:
This score indicates the degree to which the response stays on topic. ChatGPT
averaged 4.87 ($std.dev$ = 0.45), while Humans averaged 4.72 ($std.dev$ =
0.59). There were moments when both ChatGPT and Humans strayed from the main
topic. For instance, when a user inquired about the application of visual variables in VR visualizations (https://visguides.org/t/visual-variables-for-visualizations-in-vr/143), ChatGPT not only addressed this question but also
came up with a hypothetical question asking what factors to consider when
choosing such visual variables. Human respondents also occasionally veered
off-topic by offering unsolicited advice and extending their responses beyond
the original question’s scope.
##### Breadth:
Breadth measures how widely a response explores various ideas, concepts,
options, or perspectives. In this category, ChatGPT outperformed Human
responses by a significant margin. ChatGPT achieved an average score of 4.62
($std.dev$ = 0.61), while Human responses averaged only 3.35 ($std.dev$ =
1.13). For example, in Figure 3A, when the user inquired about color palette
guidelines, ChatGPT provided an extensive list of potential approaches,
encompassing data type, color harmony, accessibility, etc. In contrast, the
Human respondent to this question concentrated primarily on the creation of
domain-specific color maps, offering in-depth insights into the process.
##### Clarity:
The clarity score assesses how easily a reader can comprehend a response,
taking into account conciseness, lack of verbosity, and the organization of
the response structure. Notably, ChatGPT consistently presented its responses
in a well-structured JSON array due to the specific prompt format we employed.
This contributed to ChatGPT receiving a high clarity score, averaging 4.93
($std.dev$ = 0.28). Human responses also demonstrated good clarity with an
average score of 4.30 ($std.dev$ = 0.79). They received lower ratings when the
responses were overly brief, making it difficult to discern the specific
reference points.
##### Depth:
The depth score evaluates the extent of explanations, expertise, and valuable
insights in a response. Human responses averaged a score of 3.52 ($std.dev$ =
1.07), while ChatGPT’s average was 3.44 ($std.dev$ = 0.71). Although the
scores are comparable to each other, a distinct difference in depth between a
ChatGPT response and a Human response is evident in Figure 3B. The ChatGPT
response primarily focuses on general, surface-level improvements applicable
to any visualization, such as adding labels and legends. In contrast, the
Human response often provides more tailored and specific suggestions for
enhancing the visualization, as demonstrated by the recommendation to “put a
label before the numerical attribute like ’Estimated Production: 1000’.” This
detailed guidance directly pertains to the unique aspects of the visualization
in question. However, Human responses also displayed greater variability. For
instance, when asked about color clarity and alternative data representation
to reduce clutter, a Human response simply confirmed that the current design choices were fine (https://visguides.org/t/latitude-distribution-of-solar-and-wind-farms-uk/815).
##### Actionability:
The actionability score assesses whether the response offers guidance that can
be readily implemented in the visualization. This metric is not applicable to
visualization guideline questions. The actionability scores were quite
similar: Human responses had an average score of 3.86 ($std.dev$ = 0.94),
while ChatGPT achieved an average of 3.99 ($std.dev$ = 0.61). Overall, ChatGPT
received a high score for actionability, as it consistently provided a list of
recommendations. However, the score was occasionally reduced due to the
generic or vague nature of these suggestions. For instance, in Figure 3B,
ChatGPT offered advice for addressing label clutter, selecting colors, and
refining the legend. Nevertheless, these recommendations were explained in a
manner that made them applicable to most visualizations. The actionability
scores for Human responses tend to be low when the Human refrains from offering recommendations and instead validates all of the user’s choices (https://visguides.org/t/comparison-of-type-a-b-acute-hepatitis-from-tycho-dataset/406).
#### 3.3.2. Further Observations in Human and ChatGPT Responses
Humans tend to provide external resources—43 out of 119 (36.13%) Human
responses included elements that ChatGPT would not provide unless explicitly
instructed. This additional content encompassed references to academic
research, links to related articles and websites, citations of studies, and
the inclusion of informative video links. For instance, in Figure 3A, where the user asked for guidelines on choosing a color palette, the Human responses include references to three different academic research papers, including “H. Fang, et al.” In another case, when the question was about using blow-apart effects, the Human respondent embedded an educational video related to the subject (https://visguides.org/t/the-blow-apart-effect/71).
Multiple Human responses tend to complement each other—Most questions had a
single response (84), while 35 questions had multiple responses. Average
ratings generally favored responses with multiple contributors across all
categories, except for topicality. Significant rating differences were
observed in breadth, depth, and actionability. Single-respondent questions
averaged 3.06 ($std.dev$ = 1.05) in breadth, 3.35 ($std.dev$ = 1.05) in depth,
and 3.72 ($std.dev$ = 0.96) in actionability. Conversely, multi-respondent
questions scored higher, averaging 4.06 ($std.dev$ = 0.98) in breadth, 3.94
($std.dev$ = 0.98) in depth, and 4.26 ($std.dev$ = 0.74) in actionability.
When there’s only one response to a question, the quality of that response
becomes heavily reliant on the individual respondent, leading to significant
variability in quality, as seen in Figure 3A (high quality) versus Figure 3C
(low quality). However, when multiple respondents contribute, they complement
each other and compensate for areas in which one respondent might fall short
(Figure 3B).
Human response ratings improve for visualization guideline questions—For
ChatGPT responses, the differences in ratings between feedback and
visualization guideline questions consistently remained small (less than 0.2
rating points). However, Human responses received higher scores for addressing
guideline questions and lower scores for addressing design feedback questions
across all metrics. For instance, Human responses excelled in breadth and
depth for visualization guideline questions, with average ratings of 4.09
($std.dev$ = 0.95) for breadth and 4.25 ($std.dev$ = 0.97) for depth. In
contrast, design feedback questions received lower average ratings, with 3.08
($std.dev$ = 1.06) for breadth and 3.25 ($std.dev$ = 0.97) for depth.
Guideline questions frequently require broader perspectives and readily
accessible design knowledge, with 84.4% (27/32) of their responses citing
external references, in contrast to design feedback questions, where only
18.39% (16/87) of Human responses did so. Conversely, feedback questions
entail an understanding of domain-specific data and tasks, potentially making
it challenging to offer comprehensive insights. For instance, a question like
“Is this color scheme suitable?” requires a deep understanding of the domain.
In contrast, questions such as the one about gendered colors in visualizations (https://visguides.org/t/use-of-gendered-colours-in-visualization-a-guideline-or-a-personal-principle/999) can easily elicit a variety of viewpoints and existing resources.
Question specificity corresponds with response quality—ChatGPT’s responses were sensitive to how specific and clear the question was. Figure 3B shows such
an example when the user query was “do you believe the graph is clear?” Other
examples of less specific user questions include: “is my color map optimal?”
and “how can my visual design be improved?” (https://visguides.org/t/map-and-bar-visualization/841). On the other hand, in Figure 3C, the question is more
specific, resulting in more meaningful options and insights offered by
ChatGPT.
ChatGPT’s color vision deficiency is usually not problematic—Color-related
questions were prominent. Out of the 119 questions, 32 were centered on color-
based design feedback (e.g., “Is the choice of the colour scheme appropriate?”,
https://visguides.org/t/area-chart-generation-capacity-over-the-years-for-
each-power-fuel-type/810). Users rarely mentioned the specific colors used in
their visualizations, often providing only visual encoding information. As a
result, ChatGPT’s responses were typically based on general color principles.
Despite this limitation, ChatGPT surpassed its overall average in most
categories, except for depth and actionability.
### 3.4. Takeaways
Our analysis reveals ChatGPT’s ability to respond to data visualization
queries compared to Human counterparts. It’s noteworthy that ChatGPT, despite
lacking vision, demonstrates remarkable reliability in providing answers,
possibly owing to its exceptional language understanding and extensive
knowledge base. While these findings are promising, they do not reveal how
practitioners perceive the value of such responses, prompting the need for
further research.
Participant ID | Role | Years in Data Vis | Gender | Age range | Racial background | Frequency of ChatGPT usage
---|---|---|---|---|---|---
P10 | Developer | 3-5 years | Male | 25-34 years old | Asian | Daily
P11 | Manager | 1-3 years | Male | 25-34 years old | Caucasian | Occasionally
P12 | Freelancer | 3-5 years | Male | 35-44 years old | Asian | Daily
P13 | Journalist | 1-3 years | Female | 18-24 years old | Hispanic | Weekly
P14 | Consultant | 3-5 years | Male | 25-34 years old | Caucasian | Daily
P15 | Product Designer | 3-5 years | Female | 25-34 years old | Caucasian | Weekly
P16 | Analyst | 5-10 years | Male | 35-44 years old | Caucasian, African American | Weekly
P17 | Scientist | 3-5 years | Female | 25-34 years old | Asian | Weekly
P18 | Scientist | 1-3 years | Female | 25-34 years old | African American | Weekly
P19 | Student | 5-10 years | Male | 25-34 years old | Caucasian | Only once or twice
P20 | Freelancer | > 10 years | Male | 45-54 years old | Caucasian | Occasionally
P21 | Student | 3-5 years | Male | 25-34 years old | Asian | Daily
Table 1. Demographic and experience related information for participants in
user study.
## 4\. Perception of practitioners toward ChatGPT’s utility
Building upon our previous research, we conducted a comparative interview
study involving feedback sessions with ChatGPT and Human experts. We aimed to
gain a deeper understanding of how data visualization practitioners perceive
ChatGPT’s role as a design companion, uncovering their attitudes and
perceptions regarding ChatGPT’s design feedback.
### 4.1. Recruitment
We aimed to engage a diverse group of data visualization practitioners in our
study. We recruited participants through academic mailing lists targeting
students and scientists, as well as the Data Visualization Society’s Slack
channel (dat, [n. d.]). Inclusion criteria required proficiency in English,
experience in data visualization creation, and a willingness to share at least
one of their data visualizations. Initially, 41 individuals responded to the
recruitment survey, from which we selected 12 participants to take part in the
study.
### 4.2. Participants
The study involved 12 participants (P10 to P21), forming a diverse group with
varying professional backgrounds, experience levels, and familiarity with
ChatGPT (see Table 1). Their roles encompassed a broad spectrum, including
developers, managers, journalists, consultants, product designers, analysts,
scientists, and students. Professional experience ranged from less than one
year to over ten years in their respective fields. Furthermore, participants’
engagement with ChatGPT varied, with some being daily users while others
interacted with it only occasionally or as the need arose.
### 4.3. Tasks & Procedures
Before each study session, participants were asked to prepare a visualization
they were comfortable sharing, along with a list of relevant questions. These
sessions were conducted using Zoom. The entire study session took about 60
minutes. Participants were compensated with a $50 Amazon gift card.
Each interview began with a brief introduction by the moderator and was
divided into three segments: a visualization feedback session with ChatGPT, a
similar session with Human expert(s), and an open-ended interview. Six
participants initiated the process with the ChatGPT session, while the other
six commenced with the Human expert session. After each feedback session, we
administered a survey. In five sessions, two visualization experts provided
feedback, while in the remaining seven sessions, we had one expert present.
Both experts are current professors in the field of visualization.
During the ChatGPT feedback session, participants shared their screens with
the moderator and presented their visualizations, accompanied by a brief
explanation. The moderator introduced a predefined input format for ChatGPT,
employing a role-playing structure (Ihwan, [n. d.]). Participants were guided
to furnish details about the visualization’s chart type, textual description,
visual encodings, and any related questions. They input this information into
ChatGPT while screen-sharing. Following ChatGPT’s response, participants could
pose follow-up questions or request clarifications. Afterward, participants
received a survey via the Zoom chat to gather feedback on their ChatGPT
experience.
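To make the input structure concrete, a minimal sketch of such a role-playing prompt template follows. This is a hypothetical reconstruction: the study’s exact wording, field labels, and role phrasing are not given in the text, so everything below is an assumption.

```python
# Hypothetical reconstruction of the predefined input format described above;
# the moderators' exact wording is not given in the text, so the field labels
# and role-play phrasing here are assumptions.
def build_feedback_prompt(chart_type, description, encodings, questions):
    """Assemble a role-playing prompt covering the four elements participants
    were guided to provide: chart type, textual description, visual encodings,
    and related questions."""
    encoding_lines = "\n".join(f"- {e}" for e in encodings)
    question_lines = "\n".join(f"- {q}" for q in questions)
    return (
        "Act as an expert data visualization design consultant.\n"
        f"Chart type: {chart_type}\n"
        f"Description: {description}\n"
        f"Visual encodings:\n{encoding_lines}\n"
        f"Questions:\n{question_lines}"
    )

prompt = build_feedback_prompt(
    chart_type="scatterplot",
    description="Monthly sales versus marketing spend, one point per region",
    encodings=["x: marketing spend", "y: sales", "color: region"],
    questions=["Is the current color scheme suitable?"],
)
print(prompt)
```

Structuring the input this way front-loads the context ChatGPT cannot observe visually, which is consistent with the template’s goal of compensating for the model’s lack of image understanding.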
During the Human expert session, one or two visualization experts joined the
Zoom meeting. Participants shared their screens and presented their
visualization-related questions to these experts. The experts answered queries
and provided additional feedback or further insights. Subsequently, following
this session, the visualization experts left the meeting, and participants
were presented with a Zoom survey to gather feedback about their experience.
In the final segment of the study, open-ended interviews were conducted. In
these interviews, the moderator inquired about participants’ experiences in
both ChatGPT and Human feedback sessions, delving deeper into the rationale
behind their survey responses. Furthermore, participants were prompted to
share their perspectives on the possible integration of ChatGPT into their
data visualization workflow, its constraints, and the future possibilities it
might offer.
### 4.4. Data & Analysis Methods
The final dataset from our user studies comprises participant questions (see
Table 2) and responses, survey data, and transcripts from post-session interviews. Our
primary focus for analysis was on the interview transcripts. Initially, two
researchers independently examined three interview transcripts, identifying
interesting quotes and assigning meaningful themes. Once we established a high
degree of consistency between the researchers’ findings, one of them proceeded
to review the remaining transcripts. Ultimately, we compiled a categorized
list of quotes based on initial higher-level themes (e.g., Where Human experts
excel). To further refine our analysis, we revisited these quotes and divided
the themes into sub-level categories (e.g., Collaborative and natural
conversations).
### 4.5. Post-Session Interview Analysis Results
P ID | Chart Type | Initial Questions
---|---|---
P10 | parallel coordinate map | Could you suggest an alternative way to visualize this? How can I make this visualization more engaging?
P11 | scatterplot | Does it make more sense to compare X or to compare Y? Would there be a better visualization type to show ___?
P12 | scatterplot | How can I improve this visualization in order to satisfy the customer?
P13 | scatterplot | What is the best way to show ___? what do you think about the current color scheme?
P14 | diverging stacked bar chart | How can we ensure the user looks at ___? How can I put these four bars next to each other in one visualization?
P15 | bubble cluster | How would I enable more properties to be seen? How would I let the user encode the strength of their preference?
P16 | interactive map | Which parts of the data visualization do you think are successful? Are there any areas you believe could be improved for better clarity and impact?
P17 | sankey chart | Is there a way to ascertain what visualization is most effective for communicating the data? How could ___ be more clearly displayed?
P18 | wav file visualization | Would it be visually overwhelming to use ___? Is this an appropriate visualization for demonstrating ___?
P19 | color heatmaps | Would another color map be more suitable? Would you have any suggestions as to how to indicate ___?
P20 | beeswarm, line chart | Does the special encoding for ___ make intuitive sense? Is it clear that ___ represents ___?
P21 | comparative line graph | How can I decrease the number of colors used in this visualization? Would this data be better represented if I ___?
Table 2. Initial questions from participants. These initial questions are
altered to maintain the privacy of the participants. The original questions
provided more contextual information and specificity.
#### 4.5.1. Participant Questions and Interaction Dynamics
Four of the participants (P10, P15, P16, and P18) shared interactive
visualizations. The other eight participants shared static visualizations. The
type of visualization, as well as the questions they initially brought, can be
seen in Table 2. To safeguard participant privacy, we refrain from disclosing
their visualizations or any elements of their questions that could potentially
reveal information about participants.
In the ChatGPT section of the interview, the majority of the participants
asked ChatGPT at least one follow-up question. The follow-up questions to
ChatGPT tended to be more specific and technical than the follow-up questions
asked to the Human experts. For example, P11 asked ChatGPT “What are the pros
and cons of having too many bubbles on the scatterplot from having a clear
comparison?”
On the other hand, the follow-up questions to the Human experts tended to be
less structured and more conversational. The participant and expert had a
conversation and pulled out different aspects of the question along the way.
The conversation often paused to clarify what different elements of the
visualization meant and how they fit into the overall design. As a result,
factors not explicitly mentioned in the question were often raised in these
sessions with the Human experts.
#### 4.5.2. Post Survey Results
In Figure 4, we present the outcomes of the experience surveys conducted
following each feedback session. The overarching consensus among participants
was a strong preference for Human experts. This preference was underscored by
statistically significant distinctions, as determined through two-tailed Mann-
Whitney U tests. Specifically, participants expressed significantly higher
levels of satisfaction in interacting with Human experts, perceiving their
responses as notably more accurate, helpful, reliable, and adaptable to their
preferences.
Furthermore, respondents indicated that Human experts exhibited a
significantly deeper understanding of the context and requirements of their
queries. Human experts were also acknowledged for their expertise in data
visualization, along with their ability to offer actionable recommendations.
It is worth noting that no significant differences were observed in responses
to questions related to the clarity and conciseness of explanations, concerns
about potential biases, or perceived risks of receiving misleading
information.
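As an illustrative sketch of the test named above, the following pure-Python implementation runs a two-tailed Mann-Whitney U test using the normal approximation (tie correction omitted for brevity). The Likert ratings below are invented for illustration and are not the study’s actual survey responses.

```python
import math

def midranks(values):
    """Rank values 1..n, averaging ranks across ties (common in Likert data)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1..j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def mann_whitney_u_two_tailed(a, b):
    """Two-tailed Mann-Whitney U test via the normal approximation
    (tie correction omitted for brevity)."""
    n1, n2 = len(a), len(b)
    ranks = midranks(list(a) + list(b))
    r1 = sum(ranks[:n1])                  # rank sum of the first sample
    u = r1 - n1 * (n1 + 1) / 2            # U statistic for sample a
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))  # two-tailed p-value
    return u, p

# Hypothetical 5-point Likert ratings, one per participant and condition
human = [5, 4, 5, 4, 5, 5, 4, 3, 5, 4, 5, 4]
chatgpt = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3, 3, 4]
u, p = mann_whitney_u_two_tailed(human, chatgpt)
print(f"U = {u}, two-tailed p = {p:.4f}")
```

In practice one would typically reach for `scipy.stats.mannwhitneyu` with `alternative="two-sided"`, which also applies a tie correction; the hand-rolled version here just makes the rank-sum mechanics visible.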
Figure 4. User preference for Human expert vs. ChatGPT responses in feedback
sessions: The overarching pattern evident in these charts predominantly
indicates a user preference for Human experts over ChatGPT in the context of
feedback sessions.
This figure is about user preferences for the feedback sessions. 11 grouped
bar charts are presented, each illuminating user feedback collected through
Likert scale questions. These questions gauge user perceptions of the quality
of responses received from ChatGPT compared to those from Human experts during
feedback sessions. Examples of these questions encompass aspects such as
overall satisfaction with interaction and engagement with the feedback
provider, and the level of trust in the accuracy and correctness of
information provided concerning data visualizations. For every question, two
bars stand side by side: one representing the Human response and the other
representing ChatGPT’s response. These bars are further divided into five
distinct sections, reflecting the spectrum of user sentiment—ranging from
“strongly disagree” to “strongly agree” with the respective question.
#### 4.5.3. Where Human Experts Excel
All participants were generally more satisfied with Human experts. They
mentioned a variety of reasons for such opinions. The participants highlighted
several key themes in their responses, shedding light on the strengths of
Human experts in the context of data visualization guidance.
##### Bespoke and focused feedback of Human experts
First of all, they liked Human experts’ ability to offer tailored
recommendations that were closely aligned with the specific visuals in
question (P10, P11, P15, P17-P21). P17 exemplified this sentiment by
stating, “[…] recommendations are a lot more tailored and specific in that
way, versus just like being tossed a list of potential tools we can look.”
P11 further elaborated on this point by highlighting that Human experts can
envision various options and present the ones they deem most valuable. They
also noted that Human experts could effectively understand the context of
visuals by directly observing them (P11, P17 - P19). P19 expressed this
sentiment by stating, “I felt the feedback was obviously more grounded to the
visualization at hand. Because again, they could see it.” Similarly, P18
commented, “As he’s looking at the pictures, he knows the information I had
access to and how that translates.” Furthermore, participants acknowledged
that Human experts excelled in staying focused on the specific problem at
hand. For instance, P15 articulated, “[the expert] was able to talk about
improvements within scope […] without redesigning it entirely.” Similarly, P10
commented, “The ones that [the expert] gave were within the scope of refining
existing visualizations.”
##### Collaborative and natural conversations
Participants consistently expressed the value of more fluid conversations when
interacting with Human experts (P10, P11, P15, P17, P19, P20). They
highlighted that Human responses felt more free-flowing, contributing to a
sense of genuine interaction (P10, P17). Participants also noted that Human-
expert interactions provided a collaborative and interactive experience (P11,
P15, P19, P20). For instance, P11 highlighted an efficient turn-taking dynamic
by saying, “the ability to ask follow-up questions and clarify I think was
easier to do within the Human interaction context.” Others similarly
emphasized the convenience of seeking clarifications without the need for
precise phrasing or additional context adjustments (P11, P15).
##### Enriched insights through lived experience
Participants felt that Human experts’ responses were often rooted in their
experience and education (P16). They valued the ability of Human experts to
explain the underlying rationale, promoting a more actionable approach to
implementing the suggestions (P10). Furthermore, participants appreciated the
Human experts’ ability to think beyond the obvious; as P13 noted, “I feel like
I was definitely more satisfied with the Human experts, especially because
they thought outside of the box without me asking … they just started thinking
so many things beyond anything I could have think of.” Moreover, participants
conveyed that interactions with Human experts felt personal and supportive.
One participant reflected, “they were really walking me through the different
things […] help you grow as a professional […] really pushing me,” while
another participant recognized the proactiveness of Human experts by saying,
“even if I didn’t ask question, they kind of brought it up.”
#### 4.5.4. Where ChatGPT excels
Participants identified the strengths of ChatGPT in contrast to Human experts.
##### Brainstorming and ideation
ChatGPT’s role as a creative catalyst emerged prominently during participant
discussions, underlining its capacity to spark innovative ideas (P11, P15,
P18). Participants like P18 found unexpected and impressive suggestions,
stating, “It did have some cool ideas, I didn’t expect this from ChatGPT. So
pretty. Pretty good,” while P11 commented, “I think it’s really interesting
to try and kind of break outside the box and get more creative… GPT has the
potential to disrupt that workflow in a positive way.” Likewise, P15 also
acknowledged that the tool may not always provide precise solutions but rather
serve as a springboard for creative exploration. P15 contrasts this to Human
experts by saying, “[expert] feedback was less brainstorming and more actual
critique.”
##### Broad knowledge base
In a similar vein to its brainstorming capability, participants also
highlighted ChatGPT’s capacity to serve as a vast repository of knowledge,
offering a wide array of ideas (P10, P11, P15). P10 stated, “it really does
kind of feel like a really fancy kind of search engine, where it is kinda, I
give it a problem. And it’s like, oh, like, this might be a solution.”, while
P15 said, “great place to go to start […] researching or come up with ideas
you hadn’t thought of.” P11 similarly emphasized that ChatGPT excels at
showcasing a range of data visualization options, making it a valuable
resource during the initial stages of projects. P11 elaborated on the tool’s
potential, envisioning a scenario where metadata about a dataset could be fed
to ChatGPT to receive chart-type recommendations. However, P10 emphasized that
a critical eye is needed to discern the relevant and valuable insights from
its outputs; “I asked to give 20 ideas and I’m […] experienced enough […] to
[…] know that, like, 16 of them are just nonsense. The other four things I
might not have thought of.”
##### Time saved via rapid understanding and response
Participants appreciated ChatGPT’s ability to quickly grasp and provide
information (P10, P11, P13). P10 said ChatGPT was able to quickly grasp
unfamiliar concepts auxiliary to visualization (e.g., explaining Wordle to
ChatGPT), while P11 and P13 felt ChatGPT provided faster and more
comprehensive lists of options, especially for alternative chart types, when
compared to the responses from Human experts. On a related note, P11 said that
Human interactions can sometimes involve a lengthier back-and-forth process to
arrive at the right question.
##### Soft attitude and neutral perspective
Other participants noted ChatGPT’s behavioral traits. For example, P19 pointed
out that ChatGPT often assumed a gentle and agreeable attitude by
acknowledging concerns without offering specific directions, which led to a
perception of excessive alignment. P19 also noted that ChatGPT demonstrated a
neutral stance, presenting a variety of viewpoints without favoring any
particular side. This contrasted with Human experts who might convey their own
biases, thereby making ChatGPT’s approach seem more balanced.
#### 4.5.5. Where ChatGPT falls short
Participants also discussed ChatGPT’s several shortcomings in comparison to
Human experts.
##### Lack of depth in responses
Participant feedback indicated a notable lack of depth in ChatGPT’s responses
(P13, P15, P19). For instance, P19 expressed that ChatGPT’s advice seemed to
lack actionable insights and depth that Human experts possess. This sentiment
was echoed by P15, who noted that while ChatGPT’s answers were broad in
knowledge, they lacked depth. P15 further elaborated that ChatGPT often
suggested radical changes (i.e., diverse ideas), while Human experts provided
improvements that were more aligned with refining existing visualizations. P19
similarly commented that ChatGPT’s ideas did not consider the specific
visualization. Participants acknowledged this contrast as a trade-off (P15,
P17), emphasizing that the desired level of response detail and granularity
from either ChatGPT or Human experts depends on user needs.
##### Generic advice, misalignment, and lack of critical thinking
Participants expressed that ChatGPT’s feedback often felt generic and lacked
contextual depth (P12, P16, P20). P20 expressed that ChatGPT’s “knowledge felt
[…] not necessarily backed up by experience.” P16 and P12 similarly noted that
ChatGPT’s responses were not necessarily wrong but were overly generic,
failing to consider the actual outcomes of the provided recommendations. P16
further commented that they “felt the areas of improvement were good
suggestions. But it didn’t feel like you have to do this.”
Several participants highlighted instances of misalignment and a lack of
specificity in ChatGPT’s recommendations (P11, P16, P19). P19 pointed out an
example where ChatGPT suggested color maps that might create confusion rather
than improve visualizations. This lack of alignment was further emphasized by
P11, who noted a communication hurdle hindering the establishment of a common
understanding and precise feedback.
Participants highlighted concerns related to critical thinking (P13, P16). P16
noted, “whereas the ChatGPT felt more like it was telling me what I wanted to
hear if that makes sense.” P16 further elaborated that ChatGPT’s responses
sometimes resembled textbook information rather than offering insightful
suggestions on visualization. P13 shared a similar sentiment, stating, “And I
feel like the issue with ChatGPT typically is that it only responds to your
question. So if you cannot think of the question, it will not give you the
answer.”
##### Efforts required for fluid conversation
Participants highlighted the challenges in having a fluid and interactive
conversation with ChatGPT (P10, P13, P16, P17, P20). For instance, P10
explained, “… kind of hoping that one of them would be what I wanted.” P17
expressed a similar sentiment, stating, “even though it’s still
conversational, but it still kind of feels a little bit static sometimes where
you’re just typing text into a text box.” P16 further emphasized the
difference in interaction dynamics, stating, “I really liked listening to the
Human experts go back and forth, and talk about different aspects… whereas
ChatGPT just fed that to me, and that was it.” P13 noted that ChatGPT’s
responses often required iterations and follow-ups, while P10 found that ChatGPT’s
insights became more specific and interesting when more constraints were
provided.
#### 4.5.6. Opinions on Trust and Reliability with ChatGPT
Participants had varied opinions on the trustworthiness and reliability of
ChatGPT’s responses, with some expressing low trust (P10, P18, P20). P20
mentioned the limitations in ChatGPT’s understanding of visuals, which relied
heavily on the accuracy of participants’ descriptions. P18 pointed out the
potential for fabricated information due to the algorithmic nature of the
responses. P10 echoed this sentiment, indicating that ChatGPT often sounded
too confident and authoritative, which could mislead users into placing
unwarranted trust. On the other hand, P17 said the two are relatively the
same, highlighting that the key distinction lies in how information is
delivered.
Several other participants pointed to the need for due diligence in evaluating
ChatGPT’s suggestions, especially when the user lacks domain expertise (P10,
P15). For instance, P10 commented: “If I was in a field that I wasn’t familiar
with, I think it’d be really easy to get fooled by it.” P15 drew a parallel
with the early inception of Wikipedia, stating, “it was like early days of
Wikipedia, I was taught like, never cite Wikipedia, it could be wrong. And I
think the same thing is true of something like a large language model.”
They also made comparisons to Human experts in terms of trust and authority
(P13, P16, P18, P21). They explained how the credentials of Human respondents
influenced their perception (P18, P21), e.g., “Well, the Human respondent is a
professor in visualization.”—P18. Others perceived Human experts as sounding
knowledgeable and educational (P13, P16). In contrast to these, participants
also shared their perceptions of expertise and self-awareness in AI. P16 said,
“Didn’t feel like that the ChatGPT response was incorrect … trusted it, but
not nearly as much as the Humans,” while P20 expressed, “ChatGPT doesn’t know
when it’s wrong.”
#### 4.5.7. Limitations and Opportunities with ChatGPT
Participants expressed an optimistic outlook regarding the future potential of
ChatGPT, despite acknowledging its current challenges (P10, P12, P15, P16).
They discussed aspects they would like to see improved in ChatGPT in the
future.
##### Ability to convey complex visualizations
Participants emphasized the significance of enabling ChatGPT to comprehend and
interpret complex visualizations, highlighting the limitations of text-based
communication when dealing with intricate design problems (P10, P11, P13,
P15-P21). P15 acknowledged the inherent loss of information when translating
between visual and written mediums, while P10 and P11 conveyed the complexity
of describing interactive dashboards and multimedia visualizations via text.
P10 also mentioned video recordings that could convey the interactivity of
designs.
##### Ability to generate visualizations based on feedback
Several participants pondered the prospect of ChatGPT being able to not only
analyze input information but also generate relevant visualizations (P18, P19,
P21). For example, P18 proposed the generation or modification of images based
on textual descriptions. P19 noted the convenience of sharing underlying code
to accompany visual content, emphasizing its value in effectively
communicating complex ideas instead of relying solely on text descriptions.
Similarly, P21 discussed the concept of inputting data and posing questions to
explore the space of chart design. P13 envisioned the integration of ChatGPT
within existing tools, enhancing workflow efficiency and facilitating a more
cohesive user experience.
##### Facilitating fluid and truthful conversations
Participants expressed the need for ChatGPT to initiate and support more fluid
conversations to enhance its usability and value (P11, P16, P21). P16
highlighted the need for AI to prompt users for further details about the
visualization; “I would really like it if it were to ask me questions to kind
of gauge my thought process.” Similarly, P11 noted the value of follow-up
questions in driving valuable insights, while adding that “it was harder to envision what
a follow-up question to that system would be.” On the other hand, P21 raised
concerns about the possibility of ChatGPT providing false or hallucinated
information.
##### The indispensable role of Human feedback
The consensus among participants highlighted the enduring value of Human
feedback in evaluating and improving visualizations (P11, P16, P21). P16
commented, “I think there’s always, at least in my lifetime, going to be a
need for Human interpretation. And the Human experts’ sessions, for me,
reinforce that they offer a lot of really good insight.” P11 further expressed
that despite ChatGPT’s capabilities, the feedback from actual users remains
incomparable, as it encompasses a deeper level of insight into user
experience. Additionally, P11 discussed ideas of experimenting with different
personas for hypothetical user interactions.
##### Issues with privacy and sensitive data
Participants in the interviews expressed significant concerns regarding the
privacy and handling of sensitive data by ChatGPT (P15, P16, P20, P21). They
emphasized that sharing visuals containing private or sensitive information
would be uncomfortable and unlikely unless the data was from a public source
or there was a guarantee that it would not be shared publicly. P15 shared an
illustrative example, stating, “I met a company that I had to sign an NDA
with, and we are not allowed to use ChatGPT with anything that might be
considered confidential information because that information goes onto OpenAI
servers and will live there for eternity.”
## 5\. Discussions
### 5.1. Lessons learned
Human experts were generally perceived as more effective in imparting data
visualization design knowledge compared to ChatGPT. Several factors
contributed to this perception, including Human experts’ ability to engage in
more fluid conversations, their adeptness in turn-tracking, and their capacity
to offer highly contextualized suggestions. Additionally, participants’ trust
in Human experts was influenced by their professional backgrounds. However,
not all feedback regarding ChatGPT was negative. Participants acknowledged its
utility as a brainstorming tool, primarily due to its extensive knowledge
base, which enabled it to generate novel ideas. Furthermore, participants
appreciated ChatGPT’s prompt responses. Nonetheless, concerns were raised
about ChatGPT, particularly regarding the need for some level of expertise to
evaluate the validity and appropriateness of its suggestions and responses
that are at the risk of being hallucinated. Overall, participants expressed
optimism about ChatGPT’s potential, believing that it will continue to improve
and become more valuable in the future.
### 5.2. Comparisons to Human Responses from Visguides
While we did not formally analyze responses in the feedback study, ChatGPT
responses from the study were similar to ChatGPT’s responses in Visguides. In
both instances, ChatGPT provided broad responses and listed ideas. The
responses from Human experts, however, differed more from the Human responses
in Visguides. One reason for this is that any member of the Visguides community
can answer questions on the forum, whereas in the feedback sessions, the
questions were answered by experienced visualization experts. These sessions
also happened in real time, which meant the Human experts did not miss parts
of participant questions and were especially attentive and thorough. The
Human experts in the feedback sessions also sometimes
supplemented their verbal responses by sharing supporting links and related
articles over the Zoom chat. Overall, the responses from Human experts in the
feedback sessions were similar to the high-scoring Human responses in
Visguides.
### 5.3. Improving perceptual and cognitive abilities
One significant challenge encountered while interacting with ChatGPT was its
lack of nuanced image understanding. Consequently, despite the effectiveness
of our template for describing chart types and visual encodings (P10), ChatGPT
struggled to comprehend the intricate visual patterns depicted in the charts.
We noticed that providing additional descriptions of these perceptual patterns
improved ChatGPT’s performance in the VisGuides analysis, although some
participants expressed concerns about the need to manually input such
descriptions. Currently, platforms like Bard and Bing support image input.
However, existing vision systems are primarily trained on natural images and
are ill-equipped to handle synthetic images such as graphic designs and data
visualizations (Bylinskii et al., 2017; Hoque et al., 2022). In a brief test
where we posed ten questions from VisGuides to Bard, its responses did not
seem to surpass ChatGPT’s performance. For instance, Bard consistently
recommended adding elements that were already clearly in the visualization,
such as legends, color encodings, and titles. Additionally, the
recommendations provided by Bard had less breadth and were oftentimes
not applicable, such as recommending adding a trend line to a Gantt chart.
### 5.4. Ability to proactively engage in discussions
Another major challenge reported by participants was the absence of fluid
conversation and turn-tracking. ChatGPT functions as a chatbot that responds
solely to user instructions, and it is also trained to align with users’
intentions (Ouyang et al., 2022). This design makes ChatGPT more passive and
affirmative in comparison to Human experts, who frequently pose clarifying
questions and may even offer unsolicited advice. There exists a potential
trade-off between these two communication styles. On the one hand, there is a
concern about unsafe or misaligned scenarios in which AI chatbots might
intervene and provide malicious or irrelevant feedback. On the other hand,
there are situations where practitioners struggle to formulate questions
because they may not yet discern what is wrong with their
visualizations. Our participants often found themselves in this latter
scenario, where they did not know what kind of follow-up questions to ask. It
would be valuable to conduct further investigations into passive and active
Human-AI alignment within this problem context. For instance, a middle-ground
approach could involve ChatGPT suggesting potential follow-up questions
similar to the current capabilities of Bing.
### 5.5. Integrating into a practitioner’s workflow
Several participants mentioned that they primarily use ChatGPT for tasks
related to writing or debugging code in order to generate visualizations (P10,
P12, P13, P16, P18). The robust code generation and comprehension capabilities
of large language models (LLMs), such as Github Copilot (Git, [n. d.]), are
relatively well-known. On the other hand, data visualization practice rests on
two pillars: design and implementation. Our study sheds light on ChatGPT’s
current state in terms of design knowledge beyond implementation knowledge.
However, existing data visualization tools often fall short of conveying
effective data visualization knowledge to designers. As P13 suggests, the
integration of a ChatGPT-like assistant within current tools would be highly
beneficial. Such a design assistant could help suggest appropriate chart
types (P11, P21) or provide rationales or critiques for generated
visualizations.
### 5.6. ChatGPT for Design Knowledge Education
Some participants noted ChatGPT’s potential for education (P20, P21), such as
assisting with understanding unfamiliar charts or aiding in writing
educational blogs on data visualization. A recent paper identifies challenges
in visualization education (Bach et al., 2023), highlighting AI as a valuable
tool for formal and informal learning contexts. However, concerns include the
consistency and quality of AI-generated content, as differing responses to
students could be problematic. There is also the risk of students over-relying
on AI instead of engaging critically with visualizations, e.g., through peer
feedback. Future AI systems in education may act as personalized tutors,
tracking progress, guiding learning goals, and fostering both comprehension
and generation of visualizations.
## 6. Conclusion
In this study, we explored the potential of ChatGPT as a design companion,
focusing on its capacity to provide valuable visualization insights to
practitioners. Our findings reveal that ChatGPT, despite its absence of visual
perception, capitalizes on a vast knowledge repository to generate diverse and
imaginative suggestions. While limitations like the depth of its responses and
interaction dynamics exist, participants expressed optimism about its future
utility. Looking ahead, our future research endeavors will involve
investigating state-of-the-art chart understanding vision models to develop a
multimodal conversational agent. Additionally, we plan to delve into the
design space of integrating ChatGPT into existing data visualization tools,
further enhancing its practical applicability.
## References
* dat ([n. d.]) [n. d.]. _Data Visualization Society_. Accessed: March 28, 2023.
* dvs ([n. d.]) [n. d.]. _Data Visualization Society Surveys_. https://www.datavisualizationsociety.org/survey-history Accessed on Sep 9, 2023.
* Git ([n. d.]) [n. d.]. GitHub Copilot. https://github.com/features/copilot. Accessed on Sep 9, 2023.
* Bach et al. (2023) Benjamin Bach, Mandy Keck, Fateme Rajabiyazdi, Tatiana Losev, Isabel Meirelles, Jason Dykes, Robert S Laramee, Mashael AlKadi, Christina Stoiber, Samuel Huron, et al. 2023\. Challenges and Opportunities in Data Visualization Education: A Call to Action. _arXiv preprint arXiv:2308.07703_ (2023).
* Baidoo-Anu and Owusu Ansah (2023) David Baidoo-Anu and Leticia Owusu Ansah. 2023. Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. _Available at SSRN 4337484_ (2023).
* Bako et al. (2023) Hannah K. Bako, Xinyi Liu, Leilani Battle, and Zhicheng Liu. 2023. Understanding how Designers Find and Use Data Visualization Examples. _IEEE Transactions on Visualization and Computer Graphics_ 29, 1 (2023), 1048–1058. https://doi.org/10.1109/TVCG.2022.3209490
* Bartolomeo et al. (2023) Sara Di Bartolomeo, Giorgio Severi, Victor Schetinger, and Cody Dunne. 2023. Ask and You Shall Receive (a Graph Drawing): Testing ChatGPT’s Potential to Apply Graph Layout Algorithms. In _EuroVis 2023 - Short Papers_ , Thomas Hoellt, Wolfgang Aigner, and Bei Wang (Eds.). The Eurographics Association. https://doi.org/10.2312/evs.20231047
* Bender et al. (2021) Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell. 2021. On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?. In _Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency_ (Virtual Event, Canada) _(FAccT ’21)_. Association for Computing Machinery, New York, NY, USA, 610–623. https://doi.org/10.1145/3442188.3445922
* Bommasani et al. (2021) Rishi Bommasani, Drew A. Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S. Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, Erik Brynjolfsson, S. Buch, Dallas Card, Rodrigo Castellon, Niladri S. Chatterji, Annie S. Chen, Kathleen A. Creel, Jared Davis, Dora Demszky, Chris Donahue, Moussa Doumbouya, Esin Durmus, Stefano Ermon, John Etchemendy, Kawin Ethayarajh, Li Fei-Fei, Chelsea Finn, Trevor Gale, Lauren E. Gillespie, Karan Goel, Noah D. Goodman, Shelby Grossman, Neel Guha, Tatsunori Hashimoto, Peter Henderson, John Hewitt, Daniel E. Ho, Jenny Hong, Kyle Hsu, Jing Huang, Thomas F. Icard, Saahil Jain, Dan Jurafsky, Pratyusha Kalluri, Siddharth Karamcheti, Geoff Keeling, Fereshte Khani, O. Khattab, Pang Wei Koh, Mark S. Krass, Ranjay Krishna, Rohith Kuditipudi, Ananya Kumar, Faisal Ladhak, Mina Lee, Tony Lee, Jure Leskovec, Isabelle Levent, Xiang Lisa Li, Xuechen Li, Tengyu Ma, Ali Malik, Christopher D. Manning, Suvir P. Mirchandani, Eric Mitchell, Zanele Munyikwa, Suraj Nair, Avanika Narayan, Deepak Narayanan, Benjamin Newman, Allen Nie, Juan Carlos Niebles, Hamed Nilforoshan, J. F. Nyarko, Giray Ogut, Laurel Orr, Isabel Papadimitriou, Joon Sung Park, Chris Piech, Eva Portelance, Christopher Potts, Aditi Raghunathan, Robert Reich, Hongyu Ren, Frieda Rong, Yusuf H. Roohani, Camilo Ruiz, Jack Ryan, Christopher R’e, Dorsa Sadigh, Shiori Sagawa, Keshav Santhanam, Andy Shih, Krishna Parasuram Srinivasan, Alex Tamkin, Rohan Taori, Armin W. Thomas, Florian Tramèr, Rose E. Wang, William Wang, Bohan Wu, Jiajun Wu, Yuhuai Wu, Sang Michael Xie, Michihiro Yasunaga, Jiaxuan You, Matei A. Zaharia, Michael Zhang, Tianyi Zhang, Xikun Zhang, Yuhui Zhang, Lucia Zheng, Kaitlyn Zhou, and Percy Liang. 2021. On the opportunities and risks of foundation models. _arXiv preprint arXiv:2108.07258_ (2021). https://crfm.stanford.edu/assets/report.pdf
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. In _Advances in Neural Information Processing Systems_ , H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (Eds.), Vol. 33. Curran Associates, Inc., 1877–1901. https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf
* Bylinskii et al. (2017) Zoya Bylinskii, Nam Wook Kim, Peter O’Donovan, Sami Alsheikh, Spandan Madan, Hanspeter Pfister, Fredo Durand, Bryan Russell, and Aaron Hertzmann. 2017. Learning Visual Importance for Graphic Designs and Data Visualizations. In _Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology_ (Québec City, QC, Canada) _(UIST ’17)_. Association for Computing Machinery, New York, NY, USA, 57–69. https://doi.org/10.1145/3126594.3126653
* Chalkidis et al. (2019) Ilias Chalkidis, Ion Androutsopoulos, and Nikolaos Aletras. 2019\. Neural Legal Judgment Prediction in English. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_. Association for Computational Linguistics, Florence, Italy, 4317–4323. https://doi.org/10.18653/v1/P19-1424
* Chang et al. (2023) Yupeng Chang, Xu Wang, Jindong Wang, Yuan Wu, Kaijie Zhu, Hao Chen, Linyi Yang, Xiaoyuan Yi, Cunxiang Wang, Yidong Wang, et al. 2023\. A survey on evaluation of large language models. _arXiv preprint arXiv:2307.03109_ (2023).
* Choi et al. (2023b) Jinhan Choi, Changhoon Oh, Yea-Seul Kim, and Nam Wook Kim. 2023b. VisLab: Enabling Visualization Designers to Gather Empirically Informed Design Feedback. In _Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems_ (Hamburg, Germany) _(CHI ’23)_. Association for Computing Machinery, New York, NY, USA, Article 813, 18 pages. https://doi.org/10.1145/3544548.3581132
* Choi et al. (2021) Jinhan Choi, Changhoon Oh, Bongwon Suh, and Nam Wook Kim. 2021\. Toward a Unified Framework for Visualization Design Guidelines. In _Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems_ (Yokohama, Japan) _(CHI EA ’21)_. Association for Computing Machinery, New York, NY, USA, Article 240, 7 pages. https://doi.org/10.1145/3411763.3451702
* Choi et al. (2023a) Jonathan H Choi, Kristin E Hickman, Amy Monahan, and Daniel Schwarcz. 2023a. Chatgpt goes to law school. _Available at SSRN_ (2023).
* Devlin et al. (2018) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. _arXiv preprint arXiv:1810.04805_ (2018).
* Dibia (2023) Victor Dibia. 2023\. LIDA: A Tool for Automatic Generation of Grammar-Agnostic Visualizations and Infographics using Large Language Models. In _Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)_. Association for Computational Linguistics, Toronto, Canada, 113–126. https://doi.org/10.18653/v1/2023.acl-demo.11
* Diehl et al. (2018) Alexandra Diehl, Alfie Abdul-Rahman, Mennatallah El-Assady, Benjamin Bach, Daniel Keim, and Min Chen. 2018. VisGuides: A Forum for Discussing Visualization Guidelines. In _EuroVis 2018 - Short Papers_ , Jimmy Johansson, Filip Sadlo, and Tobias Schreck (Eds.). The Eurographics Association. https://doi.org/10.2312/eurovisshort.20181079
* Diehl et al. (2021) Alexandra Diehl, Elif E. Firat, Thomas Torsney-Weir, Alfie Abdul-Rahman, Benjamin Bach, Robert Laramee, Renato Pajarola, and Min Chen. 2021. VisGuided: A Community-driven Approach for Education in Visualization. In _Eurographics 2021 \- Education Papers_ , Beatriz Sousa Santos and Gitta Domik (Eds.). The Eurographics Association. https://doi.org/10.2312/eged.20211003
* Esteves and Neves (2022) Salomé Esteves and Marco Neves. 2022. “I learned it on the job” Becoming a Data Visualization professional in news media. _Information Design Journal_ 27, 3 (2022), 309–319. https://doi.org/10.1075/idj.22004.est
* Gehman et al. (2020) Samuel Gehman, Suchin Gururangan, Maarten Sap, Yejin Choi, and Noah A Smith. 2020. RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models. In _Findings of the Association for Computational Linguistics: EMNLP 2020_. Association for Computational Linguistics, 3356–3369. https://doi.org/10.18653/v1/2020.findings-emnlp.301
* Gilson et al. (2023) Aidan Gilson, Conrad W Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor, David Chartash, et al. 2023\. How does ChatGPT perform on the United States medical licensing examination? The implications of large language models for medical education and knowledge assessment. _JMIR Medical Education_ 9, 1 (2023), e45312. https://doi.org/10.1101/2022.12.23.22283901
* Hoque et al. (2022) E. Hoque, P. Kavehzadeh, and A. Masry. 2022. Chart Question Answering: State of the Art and Future Directions. _Computer Graphics Forum_ 41, 3 (2022), 555–572. https://doi.org/10.1111/cgf.14573 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14573
* Hutchinson et al. (2023) Maeve Hutchinson, Aidan Slingsby, Radu Jianu, and Pranava Madhyastha. 2023. Towards Visualisation Specifications from Multilingual Natural Language Queries using Large Language Models. (2023). https://doi.org/10.2312/evp.20231072
* Ihwan ([n. d.]) Aris Ihwan. [n. d.]. Role Prompting. https://www.linkedin.com/pulse/role-prompting-aris-ihwan/?trk=pulse-article_more-articles_related-content-card. Accessed on Sep 8, 2023.
* Kasneci et al. (2023) Enkelejda Kasneci, Kathrin Sessler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, Stephan Krusche, Gitta Kutyniok, Tilman Michaeli, Claudia Nerdel, Jürgen Pfeffer, Oleksandra Poquet, Michael Sailer, Albrecht Schmidt, Tina Seidel, Matthias Stadler, Jochen Weller, Jochen Kuhn, and Gjergji Kasneci. 2023. ChatGPT for good? On opportunities and challenges of large language models for education. _Learning and Individual Differences_ 103 (2023), 102274. https://doi.org/10.1016/j.lindif.2023.102274
* Lahat et al. (2023) Adi Lahat, Eyal Shachar, Benjamin Avidan, Zina Shatz, Benjamin S. Glicksberg, and Eyal Klang. 2023. Evaluating the use of large language model in identifying top research questions in gastroenterology. _Scientific Reports_ 13, 1 (March 2023), 4164\. https://doi.org/10.1038/s41598-023-31412-2
* Lee et al. (2023) Peter Lee, Sebastien Bubeck, and Joseph Petro. 2023\. Benefits, limits, and risks of GPT-4 as an AI chatbot for medicine. _New England Journal of Medicine_ 388, 13 (2023), 1233–1239. https://doi.org/10.1056/NEJMsr2214184
* Liang et al. (2022) Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022\. Holistic evaluation of language models. _arXiv preprint arXiv:2211.09110_ (2022).
* Luther et al. (2015) Kurt Luther, Jari-Lee Tolentino, Wei Wu, Amy Pavel, Brian P. Bailey, Maneesh Agrawala, Björn Hartmann, and Steven P. Dow. 2015\. Structuring, Aggregating, and Evaluating Crowdsourced Design Critique. In _Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing_ (Vancouver, BC, Canada) _(CSCW ’15)_. Association for Computing Machinery, New York, NY, USA, 473–485. https://doi.org/10.1145/2675133.2675283
* Mollman (2023) Steve Mollman. 2023\. ChatGPT passed a Wharton MBA exam and it’s still in its infancy. One professor is sounding the alarm. https://fortune.com/2023/01/21/chatgpt-passed-wharton-mba-exam-one-professor-is-sounding-alarm-artificial-intelligence/. Accessed on Sep 7, 2023.
* Nadeem et al. (2021) Moin Nadeem, Anna Bethke, and Siva Reddy. 2021. StereoSet: Measuring stereotypical bias in pretrained language models. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)_. Association for Computational Linguistics, 5356–5371. https://doi.org/10.18653/v1/2021.acl-long.416
* Okerlund et al. (2022) Johanna Okerlund, Evan Klasky, Aditya Middha, Sujin Kim, Hannah Rosenfeld, Molly Kleinman, and Shobita Parthasarathy. 2022. What’s in the Chatterbox? Large Language Models, Why They Matter, and What We Should Do About Them.
* OpenAI ([n. d.]) OpenAI. [n. d.]. ChatGPT. https://openai.com/chatgpt. Accessed on Sep 7, 2023.
  * OpenAI (2023) OpenAI. 2023. GPT-4 Technical Report. _arXiv preprint arXiv:2303.08774_ (2023).
* Ouyang et al. (2022) Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022\. Training language models to follow instructions with human feedback. _Advances in Neural Information Processing Systems_ 35 (2022), 27730–27744.
* Paik (2023) Soonk Paik. 2023\. How I Created a Data Visualization With Zero Coding Skills, Thanks to ChatGPT. _Nightingale magazine by the Data Visualization Society_ (4 April 2023). https://nightingaledvs.com/data-visualization-using-chatgpt-to-code/
* Pan (2023) Jie Pan. 2023. Large language model for molecular chemistry. _Nature Computational Science_ 3, 1 (2023), 5–5. https://doi.org/10.1038/s43588-023-00399-1
* Parsons (2022) P. Parsons. 2022\. Understanding Data Visualization Design Practice. _IEEE Transactions on Visualization; Computer Graphics_ 28, 01 (jan 2022), 665–675. https://doi.org/10.1109/TVCG.2021.3114959
* Parsons et al. (2020) Paul Parsons, Colin M. Gray, Ali Baigelenov, and Ian Carr. 2020\. Design Judgment in Data Visualization Practice. In _2020 IEEE Visualization Conference (VIS)_. 176–180. https://doi.org/10.1109/VIS47514.2020.00042
* Parsons and Shukla (2020) Paul Parsons and Prakash Shukla. 2020. Data Visualization Practitioners’ Perspectives on Chartjunk. In _2020 IEEE Visualization Conference (VIS)_. 211–215. https://doi.org/10.1109/VIS47514.2020.00049
* Qin et al. (2023) Chengwei Qin, Aston Zhang, Zhuosheng Zhang, Jiaao Chen, Michihiro Yasunaga, and Diyi Yang. 2023\. Is ChatGPT a general-purpose natural language processing task solver? _arXiv preprint arXiv:2302.06476_ abs/2302.06476 (2023).
* Radford et al. (2018) Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. 2018\. Improving language understanding by generative pre-training. (2018).
* Radford et al. (2019) Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019\. Language models are unsupervised multitask learners. _OpenAI blog_ 1, 8 (2019), 9.
* Schetinger et al. (2023) V. Schetinger, S. Di Bartolomeo, M. El-Assady, A. McNutt, M. Miller, J. P. A. Passos, and J. L. Adams. 2023. Doom or Deliciousness: Challenges and Opportunities for Visualization in the Age of Generative Models. _Computer Graphics Forum_ 42, 3 (2023), 423–435. https://doi.org/10.1111/cgf.14841 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14841
* Tan and Celis (2019) Yi Chern Tan and L. Elisa Celis. 2019. _Assessing Social and Intersectional Biases in Contextualized Word Representations_. Curran Associates Inc., Red Hook, NY, USA.
* Tlili et al. (2023) Ahmed Tlili, Boulus Shehata, Michael Agyemang Adarkwah, Aras Bozkurt, Daniel T. Hickey, Ronghuai Huang, and Brighter Agyemang. 2023\. What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. _Smart Learning Environments_ 10, 1 (2023), 15\. https://doi.org/10.1186/s40561-023-00237-x
* Touvron et al. (2023) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023\. Llama: Open and efficient foundation language models. _arXiv preprint arXiv:2302.13971_ (2023).
* Vaithilingam et al. (2022) Priyan Vaithilingam, Tianyi Zhang, and Elena L. Glassman. 2022\. Expectation vs. Experience: Evaluating the Usability of Code Generation Tools Powered by Large Language Models. In _Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems_ (New Orleans, LA, USA) _(CHI EA ’22)_. Association for Computing Machinery, New York, NY, USA, Article 332, 7 pages. https://doi.org/10.1145/3491101.3519665
* Wei et al. (2022) Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. 2022\. Emergent abilities of large language models. _Transactions on Machine Learning Research_ (2022).
* Zhang et al. (2023) Zhuosheng Zhang, Aston Zhang, Mu Li, Hai Zhao, George Karypis, and Alex Smola. 2023\. Multimodal chain-of-thought reasoning in language models. _arXiv preprint arXiv:2302.00923_ (2023).
* Zhao et al. (2023) Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, Yifan Du, Chen Yang, Yushuo Chen, Zhipeng Chen, Jinhao Jiang, Ruiyang Ren, Yifan Li, Xinyu Tang, Zikang Liu, Peiyu Liu, Jian-Yun Nie, and Ji-Rong Wen. 2023\. A survey of large language models. _arXiv preprint arXiv:2303.18223_ (2023).
# A Proper Scoring Rule for Validation of Competing Risks Models
Zoe Guan1
1 Department of Epidemiology and Biostatistics, Memorial Sloan Kettering
Cancer Center, New York, NY 10017
###### Abstract
Scoring rules are used to evaluate the quality of predictions that take the
form of probability distributions. A scoring rule is strictly proper if its
expected value is uniquely minimized by the true probability distribution. One
of the most well-known and widely used strictly proper scoring rules is the
logarithmic scoring rule. We propose a version of the logarithmic scoring rule
for competing risks data and show that it remains strictly proper under non-
informative censoring.
## 1 Introduction
A probabilistic forecast is a prediction that specifies a probability
distribution over the set of possible outcomes. The quality of probabilistic
forecasts is typically assessed using scoring rules (for an overview, see
Gneiting and Raftery (2007), Chapter 10 of Parmigiani and Inoue (2009), and
Dawid and Musio (2014)). Given a set of outcomes $\mathcal{X}$ and a family of
probability measures $\mathcal{P}$ over $\mathcal{X}$, a scoring rule is a
loss function
$s:\mathcal{X}\times\mathcal{P}\to\mathds{R}\cup\{-\infty,+\infty\}$ that assigns
a number $s(x,Q)$ to each combination of $x\in\mathcal{X}$ and
$Q\in\mathcal{P}$.
A rational forecaster who believes that the true distribution is
$P\in\mathcal{P}$ will report a forecast $Q\in\mathcal{P}$ that minimizes the
expected score under $P$,
$\displaystyle S(P,Q)\coloneqq E_{P}[s(X,Q)].$ (1)
A scoring rule is proper if $S(P,P)\leq S(P,Q)$ for all $P,Q\in\mathcal{P}$,
and it is strictly proper if $S(P,P)<S(P,Q)$ for $Q\neq P$. Strictly proper
scoring rules are desirable because they encourage honesty (i.e. they
encourage forecasters to report their true beliefs) and reward accuracy
(Winkler, 1994).
One of the most well-known and widely used strictly proper scoring rules is
the logarithmic scoring rule proposed by Good (1952):
$\displaystyle s(x,Q)=-\log(q(x))$ (2)
where $q$ is the probability density or mass function corresponding to $Q$.
There are many theoretical and empirical arguments supporting the use of the
logarithmic scoring rule in various prediction problems (Winkler, 1969;
Phillips and Edwards, 1966; Benedetti, 2010). Besides being strictly proper,
the logarithmic scoring rule is also local, which means that it depends only
on the predicted probabilities for observed events and does not use the
predicted probabilities for unobserved events. Moreover, the logarithmic
scoring rule discourages the forecaster from assigning extreme probabilities
to very rare or very frequent events (Benedetti, 2010), which might be
desirable in settings where overconfident predictions have serious
consequences.
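As a small numerical illustration (not part of the original paper; the distributions are invented), the strict propriety of the logarithmic scoring rule can be checked directly for a discrete outcome space:

```python
import math

def log_score(x, q):
    # Logarithmic score s(x, Q) = -log q(x) for a discrete forecast q.
    return -math.log(q[x])

def expected_score(p, q):
    # Expected score S(P, Q) = E_P[s(X, Q)].
    return sum(p[x] * log_score(x, q) for x in p if p[x] > 0)

# True distribution P and a miscalibrated forecast Q over three outcomes.
p = {"a": 0.5, "b": 0.3, "c": 0.2}
q = {"a": 0.7, "b": 0.2, "c": 0.1}

# Strict propriety: reporting P itself gives the smaller expected score.
assert expected_score(p, p) < expected_score(p, q)
```

The difference $S(P,Q)-S(P,P)$ is exactly the Kullback-Leibler divergence between $P$ and $Q$, which is the usual route to proving strict propriety of the logarithmic rule.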
In survival or failure time analysis, interest lies in predicting the time
until the occurrence of a specific event, which might not be fully observed
due to censoring. Dawid and Musio (2014) described a proper scoring rule,
called the survival score, for the classical survival analysis setting with a
single event type and non-informative right censoring. The survival score
gives rise to a variant of the logarithmic scoring rule as a special case. In
this paper, we consider a competing risks setting where there are multiple
mutually exclusive event types. We propose a logarithmic scoring rule for this
setting and show that it remains strictly proper under non-informative right
censoring.
## 2 Competing Risks Notation
Suppose there are $M$ competing causes of failure. Let $T$ be the time to
failure and let $J\in\{1,\dots,M\}$ denote the cause of failure. $T$ is
potentially subject to right censoring. Let $C$ be the censoring time, which
is assumed to be independent of $(T,J)$. We observe $Y=\min(T,C)$ and
$\Delta_{j}=I[T\leq C,J=j]$, the indicator for whether failure type $j$ is
observed, for $j=1,\dots,M$. We assume the times are discrete, but similar
results apply for continuous-time data, with minor notation changes.
Let $Q$ be a probability distribution for $(T,J)$ and $G$ a probability
distribution for $C$. Let $f_{j,Q}(t)=Q(T=t,J=j)$, $F_{j,Q}(t)=Q(T\leq
t,J=j)$, and $F_{Q}(t)=\sum\limits_{j=1}^{M}F_{j,Q}(t)$. These functions can
depend on covariates, but for simplicity we omit them from the notation. The
joint probability mass function for $(Y,\Delta_{1},\dots,\Delta_{M})$ is
$\displaystyle\pi_{Q,G}(Y=y,\Delta_{1}=\delta_{1},\dots,\Delta_{M}=\delta_{M})=\prod_{j=1}^{M}f_{j,Q}(y)^{\delta_{j}}(1-F_{Q}(y))^{1-\delta}G(C\geq
y)^{\delta}G(C=y)^{1-\delta}$ (3)
where $\delta=\sum\limits_{j=1}^{M}\delta_{j}$.
## 3 Scoring Rule and Proof of Propriety
We define a logarithmic scoring rule that evaluates a probability distribution
for $(T,J)$ against the observed data $(y,\delta_{1},\dots,\delta_{M})$. We
show that this scoring rule is strictly proper.
###### Theorem 1.
Given a probability distribution $Q$ for $(T,J)$, define
$\displaystyle
s((y,\delta_{1},\dots,\delta_{M}),Q)\coloneqq-\sum_{j=1}^{M}\delta_{j}\log(f_{j,Q}(y))-(1-\delta)\log(1-F_{Q}(y)).$
(4)
This is a strictly proper scoring rule for the distribution of $(T,J)$.
When $M=1$, (4) is equivalent to a special case of the survival score from
Section 3.5 of Dawid and Musio (2014) that is obtained by setting
$\psi(\lambda)=\lambda\log{\lambda}$.
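As an illustration, the score in (4) can be computed directly from an observation $(y,\delta_{1},\dots,\delta_{M})$ and a discrete forecast for $(T,J)$. The following Python sketch is not from the paper; it uses an invented toy distribution with $M=2$ causes and times $t\in\{1,2,3\}$:

```python
import math

def competing_risks_log_score(y, deltas, f, F):
    """Score (4): -sum_j delta_j log f_{j,Q}(y) - (1 - delta) log(1 - F_Q(y)),
    where delta = sum_j delta_j (zero exactly when the observation is censored).
    f[(j, t)] holds Q(T = t, J = j); F(t) is the all-cause cdf Q(T <= t)."""
    delta = sum(deltas)
    score = -sum(d * math.log(f[(j, y)])
                 for j, d in enumerate(deltas, start=1) if d)
    if delta == 0:  # censored: only the all-cause survival term contributes
        score -= math.log(1.0 - F(y))
    return score

# Toy discrete forecast for (T, J): M = 2 causes, times 1..3 (invented numbers).
f = {(1, 1): 0.10, (2, 1): 0.20, (1, 2): 0.10,
     (2, 2): 0.20, (1, 3): 0.10, (2, 3): 0.30}
F = lambda t: sum(v for (j, s), v in f.items() if s <= t)

s_event = competing_risks_log_score(2, (0, 1), f, F)     # cause 2 fails at t = 2
s_censored = competing_risks_log_score(1, (0, 0), f, F)  # censored at t = 1
```

Note the locality of the rule: an uncensored observation is scored only through the cause-specific mass $f_{j,Q}(y)$, while a censored one is scored only through the all-cause survival probability $1-F_{Q}(y)$.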
Proof of Theorem 1.
Let $P$ and $Q$ be probability distributions for $(T,J)$. Let $G$ be a
probability distribution for $C$. Define
$\displaystyle S_{G}(P,Q)\coloneqq
E_{P,G}[s((Y,\Delta_{1},\dots,\Delta_{M}),Q)].$ (5)
We will show that for any choice of $G$, $S_{G}(P,Q)$ is uniquely minimized by
$Q=P$.
$\displaystyle S_{G}(P,Q)-S_{G}(P,P)$
$\displaystyle=\sum_{y,\delta_{1},\dots,\delta_{M}}\pi_{P,G}(y,\delta_{1},\dots,\delta_{M})\left(s((y,\delta_{1},\dots,\delta_{M}),Q)-s((y,\delta_{1},\dots,\delta_{M}),P)\right)$
$\displaystyle=\sum_{y,\delta_{1},\dots,\delta_{M}}\pi_{P,G}(y,\delta_{1},\dots,\delta_{M})\log\left(\frac{\prod_{j=1}^{M}f_{j,P}(y)^{\delta_{j}}(1-F_{P}(y))^{1-\delta}}{\prod_{j=1}^{M}f_{j,Q}(y)^{\delta_{j}}(1-F_{Q}(y))^{1-\delta}}\right)$
$\displaystyle=\sum_{y,\delta_{1},\dots,\delta_{M}}\pi_{P,G}(y,\delta_{1},\dots,\delta_{M})\log\left(\frac{\prod_{j=1}^{M}f_{j,P}(y)^{\delta_{j}}(1-F_{P}(y))^{1-\delta}G(C\geq
y)^{\delta}G(C=y)^{1-\delta}}{\prod_{j=1}^{M}f_{j,Q}(y)^{\delta_{j}}(1-F_{Q}(y))^{1-\delta}G(C\geq
y)^{\delta}G(C=y)^{1-\delta}}\right)$
$\displaystyle=\sum_{y,\delta_{1},\dots,\delta_{M}}\pi_{P,G}(y,\delta_{1},\dots,\delta_{M})\log\left(\frac{\pi_{P,G}(y,\delta_{1},\dots,\delta_{M})}{\pi_{Q,G}(y,\delta_{1},\dots,\delta_{M})}\right)$
$\displaystyle=D_{KL}(\pi_{P,G}||\pi_{Q,G})$
where $D_{KL}(p||q)$ denotes the Kullback-Leibler divergence from $p$ to $q$.
The Kullback-Leibler divergence is non-negative, and $D_{KL}(p||q)=0$ if and
only if $p(x)=q(x)$ for all $x$ (MacKay, 2003), so $S_{G}(P,Q)$ is uniquely
minimized by $Q=P$.
∎
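The theorem can also be checked numerically: computing $S_{G}(P,Q)$ by enumerating the observed-data pmf (3) over a small discrete example (invented distributions, $M=2$) confirms that the expectation is minimized at $Q=P$ for a given censoring distribution $G$. A minimal sketch:

```python
import math

TIMES = (1, 2, 3)
CAUSES = (1, 2)

def cdf(f, t):
    # All-cause cdf F(t): sum of f[(j, s)] over causes j and times s <= t.
    return sum(v for (j, s), v in f.items() if s <= t)

def expected_score(fP, fQ, g):
    # S_G(P, Q): expectation of score (4) under the observed-data pmf (3).
    total = 0.0
    for y in TIMES:
        g_ge = sum(g[c] for c in TIMES if c >= y)  # G(C >= y)
        for j in CAUSES:  # uncensored failure of cause j at time y
            pi = fP[(j, y)] * g_ge
            if pi > 0:
                total += pi * (-math.log(fQ[(j, y)]))
        pi = (1.0 - cdf(fP, y)) * g[y]  # censored at time y
        if pi > 0:
            total += pi * (-math.log(1.0 - cdf(fQ, y)))
    return total

fP = {(1, 1): 0.10, (2, 1): 0.20, (1, 2): 0.10,
      (2, 2): 0.20, (1, 3): 0.10, (2, 3): 0.30}
fQ = {(1, 1): 0.25, (2, 1): 0.05, (1, 2): 0.20,
      (2, 2): 0.10, (1, 3): 0.20, (2, 3): 0.20}
g = {1: 0.2, 2: 0.3, 3: 0.5}  # censoring-time pmf

# Propriety under non-informative censoring: the truth scores best.
assert expected_score(fP, fP, g) < expected_score(fP, fQ, g)
```

Zero-probability outcomes (e.g. censoring at the last time point, where $1-F_{P}(y)=0$) are skipped in the expectation, matching the convention that such terms contribute nothing to the sum.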
## Acknowledgements
I would like to thank Giovanni Parmigiani for pointing me to relevant
literature and providing helpful suggestions.
## References
* Benedetti (2010) Riccardo Benedetti. Scoring rules for forecast verification. _Monthly Weather Review_ , 138(1):203–211, 2010\.
* Dawid and Musio (2014) Alexander Philip Dawid and Monica Musio. Theory and applications of proper scoring rules. _Metron_ , 72(2):169–183, 2014.
* Gneiting and Raftery (2007) Tilmann Gneiting and Adrian E Raftery. Strictly proper scoring rules, prediction, and estimation. _Journal of the American statistical Association_ , 102(477):359–378, 2007.
* Good (1952) Irving J Good. Rational decisions. _Journal of the Royal Statistical Society, Series B, Methodological_ , 14:107–114, 1952.
* MacKay (2003) David JC MacKay. _Information theory, inference and learning algorithms_. Cambridge university press, 2003.
* Parmigiani and Inoue (2009) Giovanni Parmigiani and Lurdes Inoue. _Decision theory: Principles and approaches_ , volume 812. John Wiley & Sons, 2009.
* Phillips and Edwards (1966) Lawrence D Phillips and Ward Edwards. Conservatism in a simple probability inference task. _Journal of experimental psychology_ , 72(3):346, 1966.
* Winkler (1969) Robert L Winkler. Scoring rules and the evaluation of probability assessors. _Journal of the American Statistical Association_ , 64(327):1073–1078, 1969.
* Winkler (1994) Robert L Winkler. Evaluating probabilities: Asymmetric scoring rules. _Management Science_ , 40(11):1395–1405, 1994\.
Hysteresis Characteristics of Generalized Spin-S Magnetic Binary Alloys
Gülşen
The Graduate School of Natural and Applied Sciences, Dokuz Eylül University,
TR-35160 İzmir, Turkey
Ümit
Department of Physics, Dokuz Eylül University, TR-35160 İzmir, Turkey
## 1 Abstract
In this study, hysteresis characteristics of the generalized spin-S binary
alloy represented by the formula $A_{c}B_{1-c}$ have been investigated within
the framework of effective field approximation. The binary system consists of
type A (spin-S) and type B (spin-S) atoms which are randomly distributed on a
honeycomb lattice. Both integer and half-integer spin models of two magnetic
atom types are examined. By detailed investigation on hysteresis loops,
multiple hysteresis behaviors are obtained for a given set of Hamiltonian
parameters. Besides, characteristic hysteresis quantities such as the
hysteresis loop area, remanent magnetization, and coercive field have been
investigated as functions of concentration.
Keywords: Hysteresis characteristics, binary alloy, effective field theory
## 2 Introduction
The development and characterization of transition metal alloys and rare
earth alloys are of high interest due to the emergence of successful applications
in this rapidly evolving field [1]. Multicritical points of rare earth alloys
such as $Tb-Er$, $Dy-Er$, $Tb-Tm$, $Dy-Tm$ and $Gd-Er$ have been investigated
in the presence of crystal field effects [2]. Phase diagrams of magnetic and
non-magnetic transition metal alloys have been studied both experimentally and
theoretically based on Ising-type phenomenological models [3].
Disordered binary alloys represented by $A_{c}B_{1-c}$ can be modeled
theoretically using well-known Ising-like models. Binary alloy systems
consisting of different spin values, such as half integer - integer spin
valued models are investigated by means of the effective field theory (EFT)
[4, 5, 6], mean field theory (MFT) [7, 8, 9] and Monte Carlo (MC) simulations
[10]. Models with half integer - half integer spins are also examined by means
of MFT [11] and also within the two frameworks EFT and MFT [12, 13]. Besides,
several spin systems are modeled such as $S_{A}=1/2$ and another component is
generalized spin-$S$ by use of MFT [14] and within EFT and MFT [12].
A binary ferromagnetic alloy with both spin variables generalized has been
investigated with competing anisotropies by means of MFT [15]. Besides,
$Fe_{p}Al_{q}$ alloys [16] and $NiBi$ alloys [17] have been constructed on the
Ising model within the framework of EFT. The site-diluted Ising spin model for
$Fe_{1-q}Al_{q}$ alloys has been examined by use of EFT [18, 19] and pair
approximation [20]. The bond-disordered Blume-Capel model for the ternary
$(Fe_{0.65}Ni_{0.35})_{1-x}Mn_{x}$ and $Fe_{p}Al_{q}Mn_{x}$ alloys has been
studied with the mean-field renormalization group (MFRG) method [21]. The Potts-
like model has been utilized to describe the $Gd_{1-x}C_{x}$ alloy on the
basis of the MC method [22].
It is crucial to emphasize that magnetic materials used in important
technological applications are represented by higher spin systems.
$AB_{p}C_{1-p}$ ternary alloy system corresponding to the magnetic Prussian
blue analogs of
$(Ni_{p}^{II}Mn_{1-p}^{II})_{1.5}[Cr^{III}(CN)_{6}]\cdot zH_{2}O$ type
consisting of $S_{A}=3/2$, $S_{B}=1$, $S_{C}=5/2$ have been investigated by
use of MFT [23] and MC [24]. The
$(Fe_{p}^{II}Mn_{1-p}^{II})_{1.5}[Cr^{III}(CN)_{6}]\cdot nH_{2}O$ type analog
consisting of $S_{A}=3/2$, $S_{B}=2$, $S_{C}=5/2$ has been investigated by use
of MFT [25]. Besides, different spin variables have been studied such as
$S_{A}=1/2$, $S_{B}=1$, $S_{C}=3/2$ [26] and $S_{A}=1$, $S_{B}=3/2$,
$S_{C}=1/2$ [27] within the framework of EFT.
On the other hand, hysteresis behavior of magnetically ordered organic and
molecule-based materials has been inspected extensively [28, 29]. A molecular-
based magnetic material $AFe^{II}Fe^{III}(C_{2}O_{4})_{3}$ which corresponds
to ferrimagnetic mixed spin-$2$ and spin-$5/2$ Ising model on a honeycomb
lattice exhibits $2S+1$ magnetic plateaus in the presence of single-ion
anisotropies and at low temperatures. Triple and double hysteresis behaviors
have been found by means of MC [30] and EFT [31] for the given system.
In Ref. [32], Akıncı concluded that the crystal field diluted spin-$1$
Blume-Capel model has double and triple hysteresis behavior in the ground
state, at large negative values of the crystal field, within the framework of
EFT. Then, generalization of these results to spin-$S$ $(S>1)$ Blume-Capel
[33] and Heisenberg models [34] has been realized. These models display
$2S$-windowed
hysteresis loop in this region. These works also focused on the difference
between integer and half integer spin models. While half integer spin model
displays central loop (which is symmetric about the origin), integer spin
model does not exhibit central loop.
Similar results for other systems have been reported in the literature. For
instance, the binary alloy system consisting of spin-$1/2$ and spin-$1$ could
exhibit double hysteresis behavior in the concentration range $0<c<0.557$
within the framework of EFT [5]. The effects of the symmetric double Gaussian
random field distribution on this system have also been investigated by use
of EFT, and double hysteresis character has been obtained depending on the
Hamiltonian parameters [35]. The hysteresis behavior of a quenched disordered
binary alloy
cylindrical nanowire consisting of the same spin set as the previous model has
been examined by MC [36]. The effects of the concentration and temperature on
disordered $Fe_{x}Al_{1-x}$ alloys have been studied by using first-principle
theory and MC simulation [37]. There are also studies in the literature
involving the hysteresis features of higher spin models. For example, mixed
spin-$1/2$ and spin-$3/2$ Ising ferromagnetic and ferrimagnetic bilayer system
on a honeycomb lattice has been studied within EFT [38]. It is noteworthy
that multiple hysteresis behaviors have been obtained for the mixed spin-$7/2$
and spin-$3/2$ Ising ferrimagnetic model by use of MC simulation [39].
Furthermore, the dynamical hysteresis properties of mixed spin-$3/2$ and
spin-$5/2$ ferrimagnetic Ising model have been obtained by means of EFT based
on Glauber-type stochastic dynamics [40]. Single and triple dynamical
hysteresis loops have been observed in a spin-$1/2$ Ising bilayer system using
the same method [41].
Comprehensive experimental studies about magnetic alloy systems have also
been presented. Examples include amorphous $Fe_{100-x}B_{x}$ alloys [42],
$Fe$-doped $Au_{x}Pd_{1-x}$ alloys [43], $Mn_{x}Zn_{1-x}F_{2}$ alloys in the
presence of random field [44], $Tb_{x}Y_{1-x}$ and $Tb_{x}Gd_{1-x}$ alloys in
the presence of single-ion anisotropy [45], $Fe_{100-x}Ni_{x}$ and
$Mn_{100-x}Ni_{x}$ alloys [46], $Fe_{100-q}Al_{q}$ [47] and
$Fe_{x}Al_{1-x}Mn_{x}$ alloys [48].
In our recent work, we have investigated the hysteresis behavior of the binary
magnetic alloy that consists of spin-$1$ and spin-$1/2$ atoms [5]. The main
aim of this work is to generalize those results. We want to determine the
hysteresis characteristics of magnetic binary alloys by generalizing both
spins to spin-$S$, with type-$A$ and type-$B$ atoms carrying different spin
values. For this aim, the outline of this paper is as follows: in Sec. 3 we
briefly present the model and formulation, the results and discussions are
presented in Sec. 4, and finally Sec. 5 contains our conclusions.
## 3 Model and Formulation
The system consists of randomly distributed $A$ type atoms that have
spin-$S_{A}$ and $B$ type atoms that have spin-$S_{B}$ within the Ising model.
The concentrations of the $A$ type atoms are denoted as $c$, and the $B$ type
atoms are denoted as $1-c$. Therefore the chemical formula is given by
$A_{c}B_{1-c}$. Note that, the lattice has no vacancies. The Hamiltonian of
the binary Ising model is given by
$\mathcal{H}=-J{{\underset{<i,j>}{\overset{}{\displaystyle\sum}}}\left(\xi_{i}\xi_{j}\sigma_{i}\sigma_{j}+\xi_{i}\delta_{j}\sigma_{i}s_{j}+\delta_{i}\xi_{j}s_{i}\sigma_{j}+\delta_{i}\delta_{j}s_{i}s_{j}\right)}-D{{\underset{i}{\overset{}{\displaystyle\sum}}}\left(\xi_{i}\sigma_{i}^{2}+\delta_{i}s_{i}^{2}\right)}-H{{\underset{i}{\overset{}{\displaystyle\sum}}}\left(\xi_{i}\sigma_{i}+\delta_{i}s_{i}\right)},$
(1)
where $\sigma_{i},s_{i}$ are the $z$ components of the spin-$S_{A}$ and
spin-$S_{B}$ operators and they take the values
$\sigma_{i}=-S_{A},-S_{A}+1,\ldots,S_{A}-1,S_{A}$ and
$s_{i}=-S_{B},-S_{B}+1,\ldots,S_{B}-1,S_{B}$, respectively. $J>0$ is the
ferromagnetic exchange interaction between nearest neighbor spin pairs,
$D$ is the crystal field (single-ion anisotropy) and $H$ is the longitudinal
magnetic field. Here, $\xi_{i}=1$ means that lattice site $i$ is occupied by
a type-A atom and $\delta_{i}=1$ means that lattice site $i$ is occupied by a
type-B atom. The site occupation numbers satisfy the relation
$\xi_{i}+\delta_{i}=1$. The first summation in Eq. (1) is over the nearest-
neighbor pairs of spins and the other summations are over all the lattice
sites.
In a typical EFT approximation, we consider a specific site (denoted by $0$)
and its nearest neighbors. All interactions of this site can be represented by
$\mathcal{H}_{0}^{A}$/$\mathcal{H}_{0}^{B}$ if site $0$ is occupied by a
type-A/B atom, respectively. These terms can be treated as local fields acting
on site $0$,
$\mathcal{H}_{0}^{A}=-\xi_{0}\sigma_{0}\left[J{{\underset{j=1}{\overset{z}{\displaystyle\sum}}}\left(\xi_{j}\sigma_{j}+\delta_{j}s_{j}\right)}+H\right]-\xi_{0}\left(\sigma_{0}\right)^{2}D,$
(2)
$\mathcal{H}_{0}^{B}=-\delta_{0}s_{0}\left[J{{\underset{j=1}{\overset{z}{\displaystyle\sum}}}\left(\xi_{j}\sigma_{j}+\delta_{j}s_{j}\right)}+H\right]-\delta_{0}\left(s_{0}\right)^{2}D.$
(3)
By defining the spin-spin interaction part of these local fields as,
$E_{0}^{A}=J{{\underset{j=1}{\overset{z}{\displaystyle\sum}}}\left(\xi_{j}\sigma_{j}+\delta_{j}s_{j}\right)},\quad
E_{0}^{B}=J{{\underset{j=1}{\overset{z}{\displaystyle\sum}}}\left(\xi_{j}\sigma_{j}+\delta_{j}s_{j}\right)}$
(4)
we can write Eqs. (2) and (3) in a more compact form as,
$\mathcal{H}_{0}^{A}=-\xi_{0}\sigma_{0}\left[E_{0}^{A}+H\right]-\xi_{0}\left(\sigma_{0}\right)^{2}D,$
(5)
$\mathcal{H}_{0}^{B}=-\delta_{0}s_{0}\left[E_{0}^{B}+H\right]-\delta_{0}\left(s_{0}\right)^{2}D.$
(6)
For obtaining magnetizations ($m_{A},m_{B}$) and quadrupolar moments
($q_{A},q_{B}$) for the system, we can use the exact identities [49] which are
given by
$m_{A}=\frac{\left\langle\left\langle\xi_{0}\sigma_{0}\right\rangle\right\rangle_{r}}{\left\langle\xi_{0}\right\rangle_{r}}=\frac{1}{\left\langle\xi_{0}\right\rangle_{r}}\left\langle\left\langle\frac{Tr_{0}\xi_{0}\sigma_{0}\exp{\left(-\beta\mathcal{H}_{0}^{A}\right)}}{Tr_{0}\exp{\left(-\beta\mathcal{H}_{0}^{A}\right)}}\right\rangle\right\rangle_{r},$
$q_{A}=\frac{\left\langle\left\langle\xi_{0}\sigma_{0}^{2}\right\rangle\right\rangle_{r}}{\left\langle\xi_{0}\right\rangle_{r}}=\frac{1}{\left\langle\xi_{0}\right\rangle_{r}}\left\langle\left\langle\frac{Tr_{0}\xi_{0}\sigma_{0}^{2}\exp{\left(-\beta\mathcal{H}_{0}^{A}\right)}}{Tr_{0}\exp{\left(-\beta\mathcal{H}_{0}^{A}\right)}}\right\rangle\right\rangle_{r},$
$m_{B}=\frac{\left\langle\left\langle\delta_{0}s_{0}\right\rangle\right\rangle_{r}}{\left\langle\delta_{0}\right\rangle_{r}}=\frac{1}{\left\langle\delta_{0}\right\rangle_{r}}\left\langle\left\langle\frac{Tr_{0}\delta_{0}s_{0}\exp{\left(-\beta\mathcal{H}_{0}^{B}\right)}}{Tr_{0}\exp{\left(-\beta\mathcal{H}_{0}^{B}\right)}}\right\rangle\right\rangle_{r},$
(7)
$q_{B}=\frac{\left\langle\left\langle\delta_{0}s_{0}^{2}\right\rangle\right\rangle_{r}}{\left\langle\delta_{0}\right\rangle_{r}}=\frac{1}{\left\langle\delta_{0}\right\rangle_{r}}\left\langle\left\langle\frac{Tr_{0}\delta_{0}s_{0}^{2}\exp{\left(-\beta\mathcal{H}_{0}^{B}\right)}}{Tr_{0}\exp{\left(-\beta\mathcal{H}_{0}^{B}\right)}}\right\rangle\right\rangle_{r}.$
where $Tr_{0}$ is the partial trace over site $0$,
$\beta=1/\left(k_{B}T\right)$, $k_{B}$ is the Boltzmann constant and $T$ is
the temperature. We have two averages here: the thermal average (inner
bracket) and the random configurational average (bracket with subscript $r$).
The latter must be taken to include the effect of the random distribution of
the atoms in the system.
Since all relations in Eq. (7) have the same form, it is enough to derive one
of them to obtain the final form of the equations. Let us choose the equation
related to $m_{A}$ from Eq. (7).
By substituting Eq. (5) into Eq. (7) and performing the partial trace
operations with the identity $e^{\xi x}=\xi e^{x}+1-\xi$ (where $x$ is any
real number and $\xi=0,1$), we can obtain an expression in closed form as
$\frac{\left\langle\left\langle\xi_{0}\sigma_{0}\right\rangle\right\rangle_{r}}{\left\langle\xi_{0}\right\rangle_{r}}=\left\langle\left\langle
f_{m}^{A}\left(E_{0}^{A}\right)\right\rangle\right\rangle_{r},$ (8)
where the function $f_{m}^{A}$ is defined as in Ref. [50]; all definitions of
these functions will be given at the end of this section. By using the
differential operator technique [51], Eq. (8) can be written as
$\frac{\left\langle\left\langle\xi_{0}\sigma_{0}\right\rangle\right\rangle_{r}}{\left\langle\xi_{0}\right\rangle_{r}}=\left\langle\left\langle
e^{E_{0}^{A}\nabla}\right\rangle\right\rangle_{r}f_{m}^{A}(x)|_{x=0},$ (9)
where $\nabla$ represents the differential with respect to $x$. The effect of
the differential operator $\nabla$ on an arbitrary function $F$ is given by
$\exp{\left(a\nabla\right)}F\left(x\right)=F\left(x+a\right),$ (10)
with arbitrary constant $a$.
At this stage of the derivation, we have to convert the exponential operator
inside the average brackets to a polynomial form. For this purpose, it is
typical to use the approximated van der Waerden identity [52], which reads
$\exp{\left(aS\right)}=\cosh{\left(a\eta\right)}+\frac{S}{\eta}\sinh{\left(a\eta\right)},$
(11)
where $\eta^{2}=\left\langle S^{2}\right\rangle$ and $S$ is the spin
eigenvalue. By using $E_{0}^{A}$ of Eq. (4) in Eq. (9) we can obtain the
polynomial form of the operator. Then, by using Eq. (10) with the identity
$e^{\xi x}=\xi e^{x}+1-\xi$, we obtain the equation for $m_{A}$,
$m_{A}={{\underset{p=0}{\overset{z}{\displaystyle\sum}}}}{{\underset{q=0}{\overset{z-p}{\displaystyle\sum}}}}{{\underset{r=0}{\overset{p}{\displaystyle\sum}}}}{{\underset{s=0}{\overset{z-q-r}{\displaystyle\sum}}}}{{\underset{t=0}{\overset{q+r}{\displaystyle\sum}}}}C_{pqrst}(-1)^{t}c^{z-p}\left(1-c\right)^{p}\left(\frac{m_{A}}{\eta_{A}}\right)^{q}\left(\frac{m_{B}}{\eta_{B}}\right)^{r}f_{m}^{A}\left(\left[z-2s-2t\right]J,H,D\right),$
(12)
where $\eta_{A}^{2}=q_{A}=\left\langle\sigma^{2}\right\rangle$, $\eta_{B}^{2}=q_{B}=\left\langle s^{2}\right\rangle$, and
$C_{pqrst}=\frac{1}{2^{z}}\left(\begin{array}[]{c}z\\\
p\end{array}\right)\left(\begin{array}[]{c}z-p\\\
q\end{array}\right)\left(\begin{array}[]{c}p\\\
r\end{array}\right)\left(\begin{array}[]{c}z-q-r\\\
s\end{array}\right)\left(\begin{array}[]{c}q+r\\\ t\end{array}\right).$ (13)
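For numerical work, the coefficient of Eq. (13) is convenient to evaluate directly from binomials. The following is a minimal sketch (our own helper, not part of the formulation above), using Python's standard `math.comb`:

```python
from math import comb

def C_pqrst(z, p, q, r, s, t):
    """Combinatorial coefficient C_pqrst of Eq. (13)
    for coordination number z."""
    return (comb(z, p) * comb(z - p, q) * comb(p, r)
            * comb(z - q - r, s) * comb(q + r, t)) / 2**z
```

Summing $C_{pqrst}$ over all index ranges appearing in Eqs. (12)-(16) yields $4^{z}$, which provides a quick consistency check of the binomial structure.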
By applying the same steps used between Eqs. (8)-(12) to the other relations
in Eq. (7), we can obtain the equations for the other quantities as,
$q_{A}={{\underset{p=0}{\overset{z}{\displaystyle\sum}}}}{{\underset{q=0}{\overset{z-p}{\displaystyle\sum}}}}{{\underset{r=0}{\overset{p}{\displaystyle\sum}}}}{{\underset{s=0}{\overset{z-q-r}{\displaystyle\sum}}}}{{\underset{t=0}{\overset{q+r}{\displaystyle\sum}}}}C_{pqrst}(-1)^{t}c^{z-p}\left(1-c\right)^{p}\left(\frac{m_{A}}{\eta_{A}}\right)^{q}\left(\frac{m_{B}}{\eta_{B}}\right)^{r}f_{q}^{A}\left(\left[z-2s-2t\right]J,H,D\right),$
(14)
$m_{B}={{\underset{p=0}{\overset{z}{\displaystyle\sum}}}}{{\underset{q=0}{\overset{z-p}{\displaystyle\sum}}}}{{\underset{r=0}{\overset{p}{\displaystyle\sum}}}}{{\underset{s=0}{\overset{z-q-r}{\displaystyle\sum}}}}{{\underset{t=0}{\overset{q+r}{\displaystyle\sum}}}}C_{pqrst}(-1)^{t}c^{z-p}\left(1-c\right)^{p}\left(\frac{m_{A}}{\eta_{A}}\right)^{q}\left(\frac{m_{B}}{\eta_{B}}\right)^{r}f_{m}^{B}\left(\left[z-2s-2t\right]J,H,D\right),$
(15)
$q_{B}={{\underset{p=0}{\overset{z}{\displaystyle\sum}}}}{{\underset{q=0}{\overset{z-p}{\displaystyle\sum}}}}{{\underset{r=0}{\overset{p}{\displaystyle\sum}}}}{{\underset{s=0}{\overset{z-q-r}{\displaystyle\sum}}}}{{\underset{t=0}{\overset{q+r}{\displaystyle\sum}}}}C_{pqrst}(-1)^{t}c^{z-p}\left(1-c\right)^{p}\left(\frac{m_{A}}{\eta_{A}}\right)^{q}\left(\frac{m_{B}}{\eta_{B}}\right)^{r}f_{q}^{B}\left(\left[z-2s-2t\right]J,H,D\right).$
(16)
Here the functions are defined as [50],
$f_{m}^{A}\left(x,H,D\right)=\frac{{{\underset{k=-S_{A}}{\overset{S_{A}}{\displaystyle\sum}}}}k\exp{\left(\beta
Dk^{2}\right)\sinh{\left[\beta
k\left(x+H\right)\right]}}}{{{\underset{k=-S_{A}}{\overset{S_{A}}{\displaystyle\sum}}}}\exp{\left(\beta
Dk^{2}\right)\cosh{\left[\beta k\left(x+H\right)\right]}}},$ (17)
$f_{q}^{A}\left(x,H,D\right)=\frac{{{\underset{k=-S_{A}}{\overset{S_{A}}{\displaystyle\sum}}}}k^{2}\exp{\left(\beta
Dk^{2}\right)\cosh{\left[\beta
k\left(x+H\right)\right]}}}{{{\underset{k=-S_{A}}{\overset{S_{A}}{\displaystyle\sum}}}}\exp{\left(\beta
Dk^{2}\right)\cosh{\left[\beta k\left(x+H\right)\right]}}}.$ (18)
$f_{m}^{B}\left(x,H,D\right)=\frac{{{\underset{k=-S_{B}}{\overset{S_{B}}{\displaystyle\sum}}}}k\exp{\left(\beta
Dk^{2}\right)\sinh{\left[\beta
k\left(x+H\right)\right]}}}{{{\underset{k=-S_{B}}{\overset{S_{B}}{\displaystyle\sum}}}}\exp{\left(\beta
Dk^{2}\right)\cosh{\left[\beta k\left(x+H\right)\right]}}},$ (19)
$f_{q}^{B}\left(x,H,D\right)=\frac{{{\underset{k=-S_{B}}{\overset{S_{B}}{\displaystyle\sum}}}}k^{2}\exp{\left(\beta
Dk^{2}\right)\cosh{\left[\beta
k\left(x+H\right)\right]}}}{{{\underset{k=-S_{B}}{\overset{S_{B}}{\displaystyle\sum}}}}\exp{\left(\beta
Dk^{2}\right)\cosh{\left[\beta k\left(x+H\right)\right]}}}.$ (20)
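The functions in Eqs. (17)-(20) differ only in the spin magnitude over which $k$ runs, so a single generic routine covers all four. Below is a minimal numerical sketch (function names and arguments are ours), valid for integer and half integer $S$:

```python
import numpy as np

def f_m(x, H, D, S, beta):
    """Magnetization function of Eqs. (17)/(19); k runs over
    -S, -S+1, ..., S for integer or half integer S."""
    k = np.arange(-S, S + 1)          # spin eigenvalues
    w = np.exp(beta * D * k**2)       # crystal-field Boltzmann weights
    return (np.sum(k * w * np.sinh(beta * k * (x + H)))
            / np.sum(w * np.cosh(beta * k * (x + H))))

def f_q(x, H, D, S, beta):
    """Quadrupolar function of Eqs. (18)/(20)."""
    k = np.arange(-S, S + 1)
    w = np.exp(beta * D * k**2)
    return (np.sum(k**2 * w * np.cosh(beta * k * (x + H)))
            / np.sum(w * np.cosh(beta * k * (x + H))))
```

For $S=1/2$ and $D=0$ these reduce to $f_{m}=\frac{1}{2}\tanh\left[\beta\left(x+H\right)/2\right]$ and $f_{q}=1/4$, as expected.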
Eqs. (12) and (14)-(16) constitute a system of coupled nonlinear equations.
The coefficients in this system are given by Eq. (13). By using the functions
from Eqs. (17)-(20) we can solve this system by numerical procedures. After getting
the values of $m_{A},m_{B},q_{A},q_{B}$ from this solution we can calculate
the total magnetization ($m$) and quadrupolar moment ($q$) of the system via
$m=cm_{A}+\left(1-c\right)m_{B},\quad q=cq_{A}+\left(1-c\right)q_{B}.$ (21)
The hysteresis curves can be obtained by sweeping the magnetic field from $-H$
to $H$ and calculating the magnetization at each step. The reverse sweep will
yield the other branch of the curve, if present.
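The branch-following logic of such a sweep can be illustrated with a short sketch. For brevity we replace the full coupled system of Eqs. (12)-(16) with a single spin-$1/2$ mean-field-like self-consistency equation; this is purely illustrative and not the EFT calculation itself. The essential point is that each field step is seeded with the solution of the previous step, so a metastable branch is followed until it disappears:

```python
import numpy as np

def solve_m(h, t, m_seed, z=3, tol=1e-10, max_iter=20000):
    """Fixed-point iteration of m = tanh((z*m + h)/t); the seed
    selects which (possibly metastable) branch is found."""
    m = m_seed
    for _ in range(max_iter):
        m_new = np.tanh((z * m + h) / t)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

def sweep(h_max, n_steps, t):
    """Sweep h from -h_max to +h_max and back, seeding each step
    with the previous solution to trace both hysteresis branches."""
    fields = np.linspace(-h_max, h_max, n_steps)
    m, up = -1.0, []
    for h in fields:
        m = solve_m(h, t, m)
        up.append(m)
    down = []
    for h in fields[::-1]:
        m = solve_m(h, t, m)
        down.append(m)
    return fields, np.array(up), np.array(down)[::-1]
```

In the full calculation, `solve_m` would instead iterate the coupled equations for $m_{A}$, $m_{B}$, $q_{A}$ and $q_{B}$ at each field value.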
## 4 Results and Discussion
We consider the following scaled (dimensionless) quantities in this work
$d=\frac{D}{J},t=\frac{k_{B}T}{J},h=\frac{H}{J}.$ (22)
The results have been obtained on a honeycomb lattice (i.e. $z=3$).
In this study, we will discuss the effects of the crystal field and the
concentration on the hysteresis properties of generalized spin-S binary alloy
system. Note that the spin values of the two atom types can each be chosen as
integer or half integer. We have limited this work to the case of
$S_{A}<S_{B}$. Note also that the concentration $c$ in the case of
$S_{A}<S_{B}$ corresponds to the concentration $1-c$ in the $S_{A}>S_{B}$
case.
The hysteresis loop can be obtained by calculating the magnetization while
sweeping the magnetic field from $-h$ to $h$ and vice versa. For integer spin
atoms, the system prefers the non-magnetic $s=0$ ground state, which is the
disordered phase, at large negative crystal fields. For half integer spin
atoms, the system settles into the magnetic $s=\pm 1/2$ ground states,
represented by the ordered phase, at large negative crystal fields. When the
magnitude of the magnetic field becomes large enough to align the spins
parallel to the field direction, the system exhibits a transition from the
mostly occupied ground state $s$ to the next occupied $s+1$ ground state, at
low temperatures. If the magnetic field increases further, the transitions
$s+1\rightarrow s+2\rightarrow\ldots\rightarrow S$ occur, for both integer and
half integer spin values. This is the well known plateau-like
ground state structure. If we examine these transitions for both positive and
negative directions of the external magnetic field, the magnetization response
of the system could display interesting hysteresis characteristics. These
characteristics are the main investigation area of this work.
Firstly, we choose the spin values of two different types of atoms as half
integer spins. The hysteresis loops are depicted in Fig. 1 with selected
values of the concentrations $c=0.1$, $c=0.3$ and $c=0.8$. Spin variables of
the binary alloy system are $S_{A}=1/2$, $S_{B}=3/2$ in Fig 1 (a) and
$S_{A}=5/2$, $S_{B}=7/2$ in Fig 1 (b). One should notice that $c=0$ and $c=1$
cases correspond to a situation where all lattice sites are occupied by
spin-$3/2$ and spin-$1/2$ atoms in Fig 1 (a), spin-$7/2$ and spin-$5/2$ atoms
in Fig 1 (b), respectively. When the majority of the binary alloy system
consists of spin-$3/2$ atoms (such as $c=0.1$ in Fig. 1 (a)), the system
displays triple hysteresis (TH) behavior for the selected values of the
Hamiltonian parameters $t=0.45$ and $d=-2$ (see Fig 1 (a)). The central loop
implies the ordered phase for this system. When we start adding more type-A
atoms to the system (i.e. rising $c$), the outer windows of the hysteresis
curve, which appear at large negative and positive values of the magnetic
field, disappear. The central loop continues with the same structure, i.e.
single hysteresis (SH) is observed. When the majority of the binary alloy is
composed of type-A atoms (i.e. $c=0.8$), the ordered phase is preserved in
the central loop in the same way. This is because the system prefers the
magnetic $s=\pm 1/2$ ground states of a binary alloy which consists of half
integer spins. When the
system is exposed to a large magnetic field, all spins align parallel to the
field. The magnetization is also greater in the system with the higher spin
variable in a large longitudinal field. For instance, the magnetization for
$c=0.1$ is greater than that of the $c=0.8$ system at $h=2$ (see Figs. 1 (a) and
(b)). As seen in Fig. 1 (b), for the binary alloy system which consists of
higher spin values, the number of windows of the hysteresis loops increases
for a low temperature $t=0.5$ and for the crystal field parameter $d=-2$.
While the $c=0.1$ case exhibits seven-windowed ($2S_{B}$-windowed) hysteresis
(7H) loops, the $c=0.3$ and $c=0.8$ cases exhibit five-windowed
($2S_{A}$-windowed) hysteresis (5H) loops. The inset of Fig. 1 (b) shows the
central loop, which represents an ordered phase that exists for all
concentration values. As an important result, we can emphasize that, as the
concentration decreases,
two additional outermost symmetric windows of hysteresis loops appear. The
magnetization of the system also increases for large longitudinal magnetic
field, as expected. These results are compatible with the results presented in
Ref. [33].
In Fig. 2, we consider a system where both spin variables of the binary
alloy are integer valued. Hysteresis properties of the system are given for
selected values of concentrations $c=0.1$, $c=0.3$ and $c=0.8$. Spin values
are chosen as $S_{A}=1$ and $S_{B}=2$ in Fig. 2 (a) and $S_{A}=1$ and
$S_{B}=3$ in Fig. 2 (b) for $t=0.5$ and $d=-2$. In the case of $c=0.1$ the
system displays four-windowed hysteresis $(4H)$ loops which have no central
loop (see Fig. 2 (a)). This means the system has a disordered phase, which
corresponds to the non-magnetic $s=0$ ground state. The physical mechanism is
as follows: with rising field in the positive direction, the $s=0$ state
begins to evolve into the $s=\pm 1$ states, and these then evolve into the
$s=\pm 2$ states. For the concentration values of $c=0.3$ and $c=0.8$, the
two outer symmetric windows of the hysteresis curve disappear and the system
exhibits double hysteresis (DH) character. The magnetization decreases with
increasing concentration at large applied magnetic fields. As
seen in Fig 2 (b), if we fix the spin value of A atoms as $S_{A}=1$, and
increase the spin value of B atoms such that $S_{B}=3$, then the binary alloy
system with concentration $c=0.1$ exhibits $6$-windowed hysteresis $(6H)$
loops, without a central loop. A noteworthy result is that the
system displays DH behavior for $c=0.3$ and $c=0.8$, due to the higher spin
value of B atoms. The system eliminates the outermost four symmetric loops to
adapt from the $6$-windowed hysteresis loop behavior to the DH loop behavior.
Whatever the difference between the spin values of the binary alloy, the
number of outer windows that disappear will be twice that difference for
concentrations close to the smaller spin value. To clarify, when the
difference between the spin values of type-A and type-B atoms is $1$ ($2$),
the number of outer windows that disappear will be two (four) with an
increasing proportion of A atoms in Fig 2 (a) (Fig. 2 (b)), respectively.
In the previous examinations, the binary alloy system had only one of the
ordered (half integer spin valued binary alloys) or disordered (integer spin
valued binary alloys) phases at low temperatures and large negative crystal
field values. To find out which phase the system exhibits in binary alloys
consisting of integer and half integer spin values, we choose the spin value
of A atoms as integer and the spin value of B atoms as half integer in Fig 3.
Hysteresis behaviors of binary alloy consisting of (a) $S_{A}=1$, $S_{B}=3/2$,
(b) $S_{A}=1$, $S_{B}=7/2$ and (c) $S_{A}=2$, $S_{B}=5/2$ spin values have
quite striking results. In Fig. 3 (a), hysteresis curves are shown for the
values of the concentration $c=0.1$, $c=0.3$, $c=0.5$ and $c=0.8$ for $t=0.46$
and $d=-1.8$. For the case of $c=0.1$, namely when the majority of the
lattice sites are occupied by type B atoms, the system has TH behavior with a
central loop (see the curves related to $c=0.1$ in Figs. 3 (a), (b) and (c)).
For the concentration value $c=0.3$, the central loop disappears and the two
outer symmetric loops become narrower. Since the central loop is lost, the
system should exhibit a phase transition in this concentration range. For the
value of $c=0.3$, the system does not have a magnetic ground state at low
magnetic field values. As the applied magnetic field increases, a smaller
magnetic field induces the transition between the ground states of the
system. Contrary to common belief, the difference between the new ground
states dominated by the B atoms is less than one. The transition from TH to DH can
be stated systematically as follows: the number of windows that disappear
will be twice the difference between the two spin values of type A and type B
atoms, i.e. $2(S_{B}-S_{A})$ windows disappear. The difference between
$S_{A}=1$ and $S_{B}=3/2$ spins is $1/2$, so one loop disappears, which is
the central loop. For the concentration value $c=0.5$, the system exhibits
paramagnetic hysteresis (PH), which prefers the $s=0$ state (see the curves for
$c=0.5$ in Fig. 3 (a), (b) and (c)). When the concentration value increases to
$c=0.8$, due to the spin value of A atoms, $2S_{A}$-windowed (namely DH)
behavior appears. Depending on the concentration of the binary alloy, DH for
$c=0.3$ and $c=0.8$ are different from each other due to the difference
between their ground states. In this case, $TH-DH-PH-DH$ hysteresis behaviors
occur. The first one (TH) corresponds to the ordered phase and the others
(DH, PH and the other DH) are in the disordered phase. The phase transition
between the two phases
causes $2(S_{B}-S_{A})=1$ window to disappear. In Fig 3 (b) the Hamiltonian
parameters are set as $t=0.46$ and $d=-2$ for various concentrations. For
$c=0.1$ the system exhibits $7$-windowed hysteresis loops with a central
loop, which is demonstrated in the upper inset. For the value of $c=0.3$, the
system exhibits DH behavior, which is demonstrated in the lower inset. The
observed DH loops are also narrower than the concentric loops. So, a
transition from an ordered to a disordered phase occurs in this concentration
range. As the
concentration of type-A atoms increases in the system, we observe that some
loops disappear according to the new ground states, which are dominated by the
B atoms. From the general result mentioned above, $2(S_{B}-S_{A})=5$ windows
should disappear. Four of them are the outermost symmetric windows and the
other is the central loop, as can be seen in Fig 3 (b). For the value of
$c=0.5$, the system exhibits a disordered phase, and then for $c=0.8$ the
innermost double symmetric loops appear, which are also demonstrated in the
upper inset of Fig. 3 (b). For alloys consisting mostly of type-A atoms, the
ground state will be dominated by the type-A atoms, so the loops have
different centers from the other DH loops. In brief, $7H-DH-PH-DH$ hysteresis
behaviors are observed as
the concentration increases. In Fig 3 (c), the system exhibits $5H-4H-PH-4H$
hysteresis behaviors (as the concentration increases) for $t=0.47$ and
$d=-2$. The transition from the ordered phase ($5H$ windowed loop) to the
disordered phase ($4H$ windowed loop) causes $2(S_{B}-S_{A})=1$ window to
disappear, which is the central loop. The other $4$-windowed loops get
narrower. For the value of $c=0.5$ the system has a paramagnetic hysteresis
(PH) loop. It is worth emphasizing here that, for $c=0.8$, the innermost
$4$-windowed hysteresis loops (which have different centers from the other
$4$-windowed loops) appear.
When we generalize the integer-half integer results, for low temperatures and
large negative crystal fields, $2S_{B}-2S_{A}-PH-2S_{A}$-windowed hysteresis
loops are observed as the concentration increases for $S_{A}<S_{B}$. The
first one is the ordered phase and the other three are in the disordered
phase. The transition from the $2S_{B}$- to the $2S_{A}$-windowed loop causes
$2(S_{B}-S_{A})$ windows to disappear. The last $2S_{A}$-windowed hysteresis
appears as the innermost loops due to the case of $S_{A}<S_{B}$.
In Fig 4, we have investigated the binary alloy model consisting of half
integer-integer spins such as (a) $S_{A}=1/2$, $S_{B}=1$, (b) $S_{A}=3/2$,
$S_{B}=2$ and (c) $S_{A}=3/2$, $S_{B}=3$. In Fig 4 (a), the system exhibits
$DH-PH-PH-
SH$ hysteresis behaviors for the values of $t=0.46$ and $d=-1.6$. In Fig 4
(b), $4H-2H-PH-3H$ hysteresis behaviors are obtained for $t=0.5$ and $d=-1.8$.
In Fig 4 (c), $6H-2H-PH-3H$ hysteresis behaviors are observed for $t=0.5$ and
$d=-1.8$. Firstly, $2S_{B}$-windowed hysteresis loops are observed for lower
concentration values, as expected. If the concentration of the system is
increased from $c=0.1$ to $c=0.3$, the innermost DH disappears due to the new
ground states dominated by atoms of type-B (see Fig. 4 (b)). The system
exhibits paramagnetic hysteresis behavior for $c=0.5$. If the concentration
of the system is increased from $c=0.5$ to $c=0.8$, the innermost
$2S_{A}$-windowed hysteresis appears with a central loop. The first three
hysteresis behaviors ($4H-2H-PH$) correspond to the disordered paramagnetic
phase and the last one ($3H$) is the ordered phase. The phase transition
occurs between these two phases.
The effects of the crystal field parameter are examined for the binary alloys
consisting of the integer-half integer spin model, as can be seen in Fig 5. Spin
values of the system are selected as $S_{A}=1$, $S_{B}=7/2$ and for the
parameters $c=0.1$ and $t=0.5$ in Fig. 5 (a), $c=0.8$ and $t=0.46$ in Fig. 5
(b). As an evolution of the hysteresis loops with changing crystal field,
$SH-6H-6H-4H-2H$ is observed (see Fig. 5 (a)). Comparison of Fig. 3 (b) and
Fig. 5 (a) shows that, by increasing the temperature, the central loop
disappears for the value of $d=-2$. The system exhibits SH behavior, which is
represented by the ordered phase, for $d=-1$, and a disordered phase for
lower values of the crystal field. As the crystal field decreases to large
negative values, the outermost symmetric loops gradually disappear and the
windows start to separate from each other.
Fig. 5 (b), $SH-SH-DH-PH-PH$ hysteresis behaviors are observed for the given
values of the crystal field. A phase transition occurs when the crystal field
parameter changes from $d=-1$ to $d=-2$. When the system consists mostly of
type-A atoms, the multiple hysteresis behavior is replaced by SH and DH
behaviors. In
Fig. 6, we have investigated the effects of crystal field on binary alloy
consisting of half integer-integer spins at the temperature $t=0.46$. For the
spin values of $S_{A}=3/2$, $S_{B}=2$, the system displays $SH-4H-4H-2H$
hysteresis behaviors for $c=0.1$ and for selected crystal field parameters
(see Fig. 6 (a)). The system exhibits a phase transition with changing
crystal field values, in the region between $d=-0.5$ and $d=-2$. The
hysteresis behaviors observed as $SH-3H-DH-PH$ for $c=0.9$ can be seen in
Fig. 6 (b). The central loop disappears when passing from $3H$ to $DH$
behavior, and a phase transition occurs in this range.
In order to investigate the evolution of the hysteresis properties, we
describe the quantities hysteresis loop area (HLA), remanent magnetization
(RM) and coercive field (CF). The HLA is defined as the area covered by the
hysteresis loop in the $(m,h)$ plane, and it describes the loss of energy due
to hysteresis. The RM is the residual magnetization in the system after an
applied magnetic field is removed. The CF is the intensity of the magnetic
field needed to change the sign of the magnetization.
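Given the two sampled branches of a loop, these quantities can be extracted numerically. The sketch below (our own helper; it assumes both branches are tabulated on the same field grid and increase monotonically with $h$) uses linear interpolation for RM and CF and a trapezoidal rule for HLA:

```python
import numpy as np

def loop_characteristics(h, m_up, m_down):
    """HLA, RM and CF from the ascending (m_up) and descending
    (m_down) branches sampled on the same field grid h."""
    gap = m_down - m_up                     # branch separation
    hla = np.sum(0.5 * (gap[1:] + gap[:-1]) * np.diff(h))  # enclosed area
    # RM: |m| at h = 0, averaged over the two branches
    rm = 0.5 * (abs(np.interp(0.0, h, m_up))
                + abs(np.interp(0.0, h, m_down)))
    # CF: field at which each branch crosses m = 0 (branches assumed
    # monotonically increasing in h), averaged over the two branches
    cf = 0.5 * (abs(np.interp(0.0, m_up, h))
                + abs(np.interp(0.0, m_down, h)))
    return hla, rm, cf
```

For a loop symmetric about the origin, the two branch contributions to RM and CF coincide.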
In Figs. 7, 8 and 9 we demonstrate the variation of the quantities (HLA, RM
and CF, respectively) with the temperature, for one or both of two spin
variables chosen as integer or half integer spin. Chosen spin values at
crystal field $d=0$ are (a) $S_{A}=1/2$, $S_{B}=3/2$, (b) $S_{A}=1$,
$S_{B}=2$, (c) $S_{A}=1$, $S_{B}=3/2$ and (d) $S_{A}=3/2$, $S_{B}=2$. Besides,
the selected values of the concentrations are $c=0.0$, $c=0.5$ and $c=1.0$ in
these figures. In Fig. 7 (a) the HLA drops to zero as the temperature
increases. Note that the HLA for $c=0.0$ is greater than the HLA for
$c=0.5$. In the $c=1.0$ case there is no HLA, which means a paramagnetic
hysteresis loop. Rising temperature drives the system to a disordered phase,
due to thermal agitations. In Fig. 7 (b), for the integer-integer spin model,
HLA exists for the $c=1$ concentration at low temperatures and the value of
the HLA is greater than in Fig. 7 (a). Therefore, more energy dissipation
occurs. The temperature at which the HLA vanishes also increases. For $c=0$
the value of the HLA remains constant at low temperatures, as seen in Fig 7
(b). If we fix the spin value of A atoms as $S_{A}=1$ and change the spin
value of B atoms to $S_{B}=3/2$, the HLA curves of $c=0.0$ and $c=0.5$
decrease (compare Fig 7 (b) and (c)). If we fix the spin value of B atoms as
$S_{B}=2$ and increase the spin value of A atoms to $S_{A}=3/2$, the HLA
curves of $c=0.5$ and $c=1.0$ get higher (compare Fig. 7 (b) and (d)). As
seen in Fig. 8 for the RM and Fig. 9 for the CF,
decreasing behavior occurs with the increasing concentration. According to the
different spin variables, results of RM and CF are consistent with HLA’s
behavior that is obtained above.
Figure 1: Hysteresis evolutions of the binary alloy system which consists of
the half integer-half integer spin model, such as (a) $S_{A}=1/2$, $S_{B}=3/2$
with temperature $t=0.45$ and (b) $S_{A}=5/2$, $S_{B}=7/2$ spin variables with
$t=0.5$, for selected concentration values $c=0.1$, $c=0.3$ and $c=0.8$ at the
$d=-2$ crystal field parameter.
Figure 2: Hysteresis evolutions of the binary alloy system which consists of
the integer-integer spin model, such as (a) $S_{A}=1$, $S_{B}=2$ and (b)
$S_{A}=1$, $S_{B}=3$ spin variables, for selected concentration values
$c=0.1$, $c=0.3$ and $c=0.8$, with the temperature $t=0.5$ and the $d=-2$
crystal field parameter.
Figure 3: Hysteresis evolutions of the binary alloy system which consists of
the integer-half integer spin model, such as (a) $S_{A}=1$, $S_{B}=3/2$, (b)
$S_{A}=1$, $S_{B}=7/2$ and (c) $S_{A}=2$, $S_{B}=5/2$ spin variables, for a
given set of Hamiltonian parameters.
Figure 4: Hysteresis evolutions of the binary alloy system which consists of
the half integer-integer spin model, such as (a) $S_{A}=1/2$, $S_{B}=1$, (b)
$S_{A}=3/2$, $S_{B}=2$ and (c) $S_{A}=3/2$, $S_{B}=3$ spin variables, for a
given set of Hamiltonian parameters.
Figure 5: Hysteresis evolutions of the binary alloy system which consists of
the integer-half integer spin model, such as $S_{A}=1$, $S_{B}=7/2$, for the
concentrations (a) $c=0.1$ and (b) $c=0.8$ and a given set of Hamiltonian
parameters.
Figure 6: Hysteresis evolutions of the binary alloy system which consists of
the half integer-integer spin model, such as $S_{A}=3/2$, $S_{B}=2$, for the
concentrations (a) $c=0.1$ and (b) $c=0.9$ and a given set of Hamiltonian
parameters.
Figure 7: Variation of HLA with the temperature of the binary alloy system
which consists of (a) $S_{A}=1/2$, $S_{B}=3/2$, (b) $S_{A}=1$, $S_{B}=2$, (c)
$S_{A}=1$, $S_{B}=3/2$ and (d) $S_{A}=3/2$, $S_{B}=2$ spin variables for
selected values of concentrations $c=0.0$, $c=0.5$ and $c=1.0$ at $d=0$
crystal field parameter.
Figure 8: Variation of RM with the temperature of the binary alloy system
which consists of (a) $S_{A}=1/2$, $S_{B}=3/2$, (b) $S_{A}=1$, $S_{B}=2$, (c)
$S_{A}=1$, $S_{B}=3/2$ and (d) $S_{A}=3/2$, $S_{B}=2$ spin variables for
selected values of concentrations $c=0.0$, $c=0.5$ and $c=1.0$ at $d=0$
crystal field parameter.
Figure 9: Variation of CF with the temperature of the binary alloy system
which consists of (a) $S_{A}=1/2$, $S_{B}=3/2$, (b) $S_{A}=1$, $S_{B}=2$, (c)
$S_{A}=1$, $S_{B}=3/2$ and (d) $S_{A}=3/2$, $S_{B}=2$ spin variables for
selected values of concentrations $c=0.0$, $c=0.5$ and $c=1.0$ at $d=0$
crystal field parameter.
## 5 Conclusion
In conclusion, hysteresis characteristics of the generalized spin-S magnetic
binary alloy system represented by $A_{c}B_{1-c}$ have been investigated
within the framework of effective field theory. The system consists of
type-$A$ and type-$B$ atoms with the concentrations $c$ and $1-c$,
respectively. Results of the generalized spin-$S$ binary alloy model are
discussed for cases in which one or both of the spins of the $A$ and $B$ atoms
are chosen as integer or half-integer values.
The effects of the concentration and crystal field parameters on the magnetic
binary alloy model strongly depend on whether the spins of the atoms are
integer or half-integer. Consistent with the related literature, the special
cases ($c=0$, $c=1$) of the binary alloy system exhibit a $2S$-windowed
hysteresis character. The evolution of multiple hysteresis loops is observed
for higher-spin alloy systems at large negative values of the crystal field
and low temperatures. Integer (half-integer) spin valued binary alloy systems
exhibit a disordered (ordered) phase in this region. It has been found that,
when the majority of the binary alloy is composed of type-A atoms and
$S_{A}<S_{B}$, the number of outer windows that disappear is twice the
difference between the spin values.
A remarkable point of our results is that the concentration is one of the most
important parameters affecting the hysteresis behavior of the system. When the
majority of the lattice sites carry half-integer spin values, the central loop
is lost and the system exhibits a phase transition as the concentration
increases for the integer-half integer system. Besides, the magnetic ground
state of the system disappears at low magnetic field. As the applied magnetic
field increases, the transition between the other ground states of the system
occurs within a very small magnetic field interval. We have found that the
difference between the magnetization values of the new ground states dominated
by the B atoms is less than $1$. For alloys consisting mostly of integer spin
valued atoms, the ground state is dominated by type-A atoms. It is a
remarkable generalized result that $2S_{B}-2S_{A}-PH-2S_{A}$-windowed
hysteresis loops are observed with increasing concentration for the
integer-half integer system. The transition from the $2S_{B}$- to the
$2S_{A}$-windowed loop causes $2(S_{B}-S_{A})$ windows to vanish. When the
majority of the binary alloy is composed of integer spin values, the inner
(low magnetic field) ground states of the system disappear at low magnetic
field as more half-integer spins are added to the half integer-integer system.
As the concentration rises, the magnetic ground states dominated by
half-integer spins appear and the system remains in an ordered phase.
It has been demonstrated that the outermost symmetric loops disappear
gradually and the windows separate from each other as the crystal field
parameter is increased in the negative direction. Besides, the quantities
characterizing the hysteresis loops have been investigated as a function of
temperature. Rising temperature drags the system into a disordered phase due
to thermal agitation. As the concentration increases from $c=0$ to $c=1$, the
HLA, RM and CF decrease for all binary alloy systems in which one or both of
the two spin variables are chosen as integer or half-integer spins. These
quantities increase as the spin values get higher.
We hope that the results obtained in this work may be beneficial from both
theoretical and experimental points of view.
## References
* [1] E.P. Wohlfarth, Handbook of Magnetic Materials 1, North-Holland Publishing Company, Netherlands, 1980.
* [2] P.A. Lindgard, Physical Review B, 16/5 (1977) 2168.
* [3] M. C. Cadeville and J.L. Morán López, Physics Reports 153/6 (1987) 331-399.
* [4] T. Kaneyoshi, Physical Review B, 39/16 (1989) 12134.
* [5] Ü. Akıncı, G. Karakoyun, Physica B: Condensed Matter 521 (2017) 365.
  * [6] G. Karakoyun, Ü. Akıncı, Physica A: Statistical Mechanics and its Applications 510 (2018) 407.
* [7] J.A. Plascak, Physica A 198 (1993) 655.
* [8] D.G. Rancourt, M. Dubé and P.R.L. Heron, Journal of Magnetism and Magnetic Materials 125 (1993) 39.
* [9] T. Kawasaki, Progress of Theoretical Physics, 58/5 (1977) 1357.
  * [10] D.S. Cambuí, A.S. De Arruda and M. Godoy, International Journal of Modern Physics C, 23/08 (2012) 1240015.
* [11] T. Kaneyoshi, Journal of Physics: Condensed Matter 5/40 (1993) L501.
* [12] T. Kaneyoshi and M. Jascur, Journal of Physics: Condensed Matter 5/19 (1993) 3253.
* [13] T. Kaneyoshi, Physica B, 193 (1994) 255-264.
  * [14] T. Kaneyoshi, Physica B 210 (1995) 178-182.
* [15] A.I. Lukanin, M.V. Medvedev, physica status solidi (b), 121/2 (1984) 573-582.
* [16] D.A. Dias, J. Ricardo de Sousa, J.A. Plascak, Physics Letters A 373 (2009) 3513–3515.
* [17] N. Şarlı, M. Keskin, Chinese Journal of Physics 60 (2019) 502–509.
* [18] A.S. Freitas, D.F. de Albuquerque, N.O. Moreno, Physica A 391 (2012) 6332-6336.
* [19] G.A. Pérez Alcazar, J.A. Plascak and E. Galváo da Silva, Physical Review B 34/3 (1986) 1940.
* [20] J. A. Plascak, L. E. Zamora and G.A. Pérez Alcazar, Physical Review B 61/5 (2000) 3188.
* [21] D.P. Lara, G.A. Pérez Alcazar, L.E. Zamora, J.A. Plascak, Physical Review B 80 (2009) 014427.
* [22] Y. Ma, A. Du, Journal of Magnetism and Magnetic Materials 321 (2009) L65–L68.
* [23] A. Bobák, F. O. Abubrig, Physical Review B 68 (2003) 224405.
* [24] M. Źuković, A. Bobák, Journal of Magnetism and Magnetic Materials 322 (2010) 2868-2873.
* [25] J. Dely, A. Bobák, Journal of Magnetism and Magnetic Materials 305 (2006) 464-466.
* [26] A. Bobák, F. O. Abubrig, D. Horváth, Physica A 312 (2002) 187 – 207.
* [27] Y. Yüksel, Ü. Akıncı, Journal of Physics and Chemistry of Solids 112 (2018) 143-152.
* [28] J.S. Miller, Chemical Society Reviews, 40/6 (2011) 3266-3296.
* [29] J.S. Miller, Materials Today, 17/5 (2014) 224-235.
* [30] W. Wang, W. Jiang, D. Lv, physica status solidi (b) 249/1 (2012) 190-197.
* [31] W. Jiang, V.C. Lo, B.D. Bai, J. Yang, Physica A: Statistical Mechanics and its Applications 389/11 (2010) 2227-2233.
* [32] Ü. Akıncı, Physics Letters A 380 (2016) 1352-1357.
* [33] Ü. Akıncı, Physica A 483 (2017) 130-138.
* [34] Ü. Akıncı, Y. Yüksel, Physica B 549 (2018) 1-5.
* [35] G. Karakoyun, Ü. Akıncı, Physica B: Physics of Condensed Matter 578 (2020) 411870.
* [36] Z.D. Vatansever, Journal of Alloys and Compounds 720 (2017) 388-394.
* [37] T. Ghosh, A.P. Jena, A. Mookerjee, Journal of Alloys and Compounds 639 (2015) 583-587.
* [38] W. Jiang, B.D. Bai, physica status solidi (b) 243/12 (2006) 2892-2900.
* [39] H. Bouda, T. Bahlagui, L. Bahmad, R. Masrour, A. El Kenz, A. Benyoussef, Journal of Superconductivity and Novel Magnetism 32/8 (2019) 2539-2550.
* [40] M. Ertaş, A. Yılmaz, Computational Condensed Matter 14 (2018) 1-7.
* [41] M. Ertaş, Physica B: Physics of Condensed Matter 550 (2018) 154-162.
* [42] K. Fukamichi, M. Kikuchi, S. Arakawa and T. Masumoto, Solid State Communications, 23 (1977) 955.
* [43] E.R. Domp, D.J. Sellmyer, T.M. Quick and R.J. Borg, Physical Review B 17/5 (1978) 2233.
* [44] R.A. Cowley, R.J. Birgeneau and G. Shirane, Physica 140A (1986) 285-290.
* [45] S.A. Nikitin and N.P. Arutyunian, Zh. Eksp. Teor. Fiz, 77 (1979) 2018-2027.
* [46] S.U. Jen and S.S. Liou, Journal of Applied Physics 85/12 (1999) 8217.
* [47] F. Reyes-Gómez, W.R. Aguirre-Contreras, G.A. Pérez Alcazar, J.A. Tabares, Journal of Alloys and Compounds 735 (2018) 870-879.
* [48] M.A. Kobeissi, Journal of Physics: Condensed Matter 3 (1991) 4983-4998.
  * [49] F. C. Sá Barreto, I. P. Fittipaldi, B. Zeks, Ferroelectrics 39 (1981) 1103.
* [50] J. Strecka, M. Jascur, acta physica slovaca 65 (2015) 235.
* [51] R. Honmura, T. Kaneyoshi, J. Phys. C: Solid State Phys. 12 (1979) 3979.
  * [52] T. Kaneyoshi, J. Tucker, M. Jascur, Physica A 176 (1992) 495.
# Photocathode characterisation for robust PICOSEC Micromegas precise-timing
detectors
M. Lisowska<EMAIL_ADDRESS>, R. Aleksan, Y. Angelis, S. Aune, J.
Bortfeldt, F. Brunbauer, M. Brunoldi, E. Chatzianagnostou, J. Datta, K.
Dehmelt, G. Fanourakis, S. Ferry, D. Fiorina (now at Gran Sasso Science
Institute, Viale F. Crispi, 7 67100 L’Aquila, Italy), K. J. Floethner, M.
Gallinaro, F. Garcia, I. Giomataris, K. Gnanvo, F.J. Iguaz (now at SOLEIL
Synchrotron, L’Orme des Merisiers, Départementale 128, 91190 Saint-Aubin,
France), D. Janssens, A. Kallitsopoulou, M. Kovacic, B. Kross, C.C. Lai, P.
Legou, J. Liu, M. Lupberger, I. Maniatis (now at Department of Particle
Physics and Astronomy, Weizmann Institute of Science, Hrzl st. 234, Rehovot,
7610001, Israel), J. McKisson, Y. Meng, H. Muller, R. De Oliveira, E. Oliveri,
G. Orlandini, A. Pandey, T. Papaevangelou, M. Pomorski, M. Robert, L.
Ropelewski, D. Sampsonidis, L. Scharenberg, T. Schneider, E. Scorsone, L.
Sohl (now at TÜV NORD EnSys GmbH & Co. KG), M. van Stenis, Y. Tsipolitis, S.
Tzamarias, A. Utrobicic, I. Vai, R. Veenhof, L. Viezzi, P. Vitulo, C. Volpato,
X. Wang, S. White, W. Xi, Z. Zhang, Y. Zhou
###### Abstract
The PICOSEC Micromegas detector is a precise-timing gaseous detector based on
a Cherenkov radiator coupled with a semi-transparent photocathode and a
Micromegas amplifying structure, targeting a time resolution of tens of
picoseconds for minimum ionising particles. Initial single-pad prototypes have
demonstrated a time resolution below $\sigma$ = 25 ps, prompting ongoing
developments to adapt the concept for applications. The achieved performance
is being transferred to robust multi-channel detector modules suitable for
large-area detection systems requiring excellent timing precision. To enhance
the robustness and stability of the PICOSEC Micromegas detector, research on
robust carbon-based photocathodes, including Diamond-Like Carbon (DLC) and
Boron Carbide (B4C), is pursued. Results from prototypes equipped with DLC and
B4C photocathodes exhibited a time resolution of $\sigma$ $\approx$ 32 ps and
$\sigma$ $\approx$ 34.5 ps, respectively. Efforts dedicated to improving
detector robustness and stability enhance the feasibility of the PICOSEC
Micromegas concept for large experiments, ensuring sustained performance while
maintaining excellent timing precision.
###### keywords:
Gaseous detectors, Micromegas, Photocathodes, Timing resolution
Journal: NIM A
[inst1]organization=European Organization for Nuclear Research (CERN), 1211
Geneve 23, Switzerland
[inst2]organization=Université Paris-Saclay, F-91191 Gif-sur-Yvette, France
[inst3]organization=IRFU, CEA, Université Paris-Saclay, F-91191 Gif-sur-
Yvette, France
[inst4]organization=Department of Physics, Aristotle University of
Thessaloniki, University Campus, GR-54124, Thessaloniki, Greece
[inst5]organization=Center for Interdisciplinary Research and Innovation
(CIRI-AUTH), Thessaloniki 57001, Greece
[inst6]organization=Department for Medical Physics, Ludwig Maximilian
University of Munich, Am Coulombwall 1, 85748 Garching, Germany
[inst7]organization=Dipartimento di Fisica, Università di Pavia, Via Bassi 6,
27100 Pavia, Italy
[inst8]organization=INFN Sezione di Pavia, Via Bassi 6, 27100 Pavia, Italy
[inst9]organization=Department of Physics and Astronomy, Stony Brook
University, Stony Brook, NY 11794-3800, USA
[inst10]organization=Jefferson Lab, 12000 Jefferson Avenue, Newport News, VA
23606, USA
[inst11]organization=Institute of Nuclear and Particle Physics, NCSR
Demokritos, GR-15341 Agia Paraskevi, Attiki, Greece
[inst12]organization=Helmholtz-Institut für Strahlen- und Kernphysik,
University of Bonn, Nußallee 14–16, 53115 Bonn, Germany
[inst13]organization=Laboratório de Instrumentacão e Física Experimental de
Partículas, Lisbon, Portugal
[inst14]organization=Helsinki Institute of Physics, University of Helsinki,
FI-00014 Helsinki, Finland
[inst16]organization=University of Zagreb, Faculty of Electrical Engineering
and Computing, 10000 Zagreb, Croatia
[inst17]organization=European Spallation Source (ESS), Partikelgatan 2, 224 84
Lund, Sweden
[inst18]organization=State Key Laboratory of Particle Detection and
Electronics, University of Science and Technology of China, Hefei 230026,
China
[inst19]organization=Physikalisches Institut, University of Bonn, Nußallee 12,
53115 Bonn, Germany
[inst20]organization=Friedrich-Alexander-Universität Erlangen-Nürnberg,
Schloßplatz 4, 91054 Erlangen, Germany
[inst21]organization=CEA-LIST, Diamond Sensors Laboratory, CEA Saclay, F-91191
Gif-sur-Yvette, France
[inst22]organization=Queen’s University, Kingston, Ontario, Canada
[inst23]organization=National Technical University of Athens, Athens, Greece
[inst24]organization=Ruđer Bošković Institute, Bijenička cesta 54., 10 000
Zagreb, Croatia
[inst25]organization=Department of Physics and Astronomy, University of
Florence, Via Giovanni Sansone 1, 50019 Sesto Fiorentino, Italy
[inst26]organization=University of Virginia, USA
Figure 1: (a) PICOSEC detection concept: A charged particle passing through a
Cherenkov radiator generates UV photons, which are converted into electrons on
a photocathode. The electrons are multiplied in the gas volume in two stages
and induce a signal on the anode. (b) Typical PICOSEC waveform: The signal
displays a fast electron peak and a slow ion tail. Note that the figures are
not drawn to scale [11].
## 1 Introduction
The intense interest in advancing technologies for precise-timing detectors
has been driven by demanding environments anticipated in forthcoming High
Energy Physics experiments. Meeting the criteria of achieving an excellent
time resolution, ensuring stable long-term operation and providing large-area
coverage is crucial for adapting the concept for physics applications. The
PICOSEC Micromegas (hereafter PICOSEC) project [1] undertakes efforts to
develop a robust multi-channel gaseous detector, targeting a time resolution
of $\mathcal{O}(10)$ ps for Minimum Ionising Particles (MIPs). Initial single-
pad prototypes equipped with a Cesium Iodide (CsI) photocathode have
demonstrated a time resolution below $\sigma$ = 25 ps [1], prompting further
developments [2]-[13]. Although preliminary measurements of alternative
approaches have been conducted previously [2, 5, 11, 13], a comprehensive
characterisation of different carbon-based photocathodes has not been
reported. This work aims to advance the development of robust photocathodes
for PICOSEC detectors while maintaining an excellent time resolution.
Within the scope of this paper, measurements of three different photocathode
materials, including CsI, Diamond-Like Carbon (DLC) and Boron Carbide (B4C),
are reported. The samples were manufactured by various research institutes,
including the European Organization for Nuclear Research (CERN), the
University of Science and Technology of China (USTC), the French Atomic Energy
Commission (CEA) and the European Spallation Source (ESS).
## 2 PICOSEC Micromegas detection concept
The PICOSEC Micromegas detection concept is depicted in Fig. 1a [1]. The
detector is designed to minimise time jitter from primary ionisation in the
gas by using a Cherenkov radiator. A charged particle passing through the
crystal radiator generates a cone of ultraviolet (UV) photons which are
partially converted into primary electrons on a photocathode coated on the
window. Typically, the radiator in a PICOSEC detector is a 3 mm thick
Magnesium Fluoride (MgF2) crystal which transmits light above 120 nm. The
window is coated with a 3 nm thick chromium (Cr) conductive interfacial layer
that serves as a contact for the high voltage (HV). The UV photon-to-electron
converter is an 18 nm thick semi-transparent CsI photocathode known for its
high Quantum Efficiency (QE) for photons below 200 nm. All primary electrons
are created at the same surface, eliminating uncertainty about the ionisation
location and thereby minimising time jitter. The gas volume is filled with a
mixture of 80$\%$ Ne, 10$\%$ CF4 and 10$\%$ C2H6 at ambient pressure. The
typical electric fields across the pre-amplification and amplification gaps
are approximately 40 kV/cm and 20 kV/cm, respectively. In the presence of a
strong electric field, the extracted electrons successively ionise gas
molecules, causing electron multiplication, which first occurs in the pre-
amplification gap and, after passing through the Micromegas mesh, continues in
the amplification gap. The total gain achieved is on the level of
$\mathcal{O}(10^{5}-10^{6})$. The amplified electrons move towards the anode,
while the ions travel to the mesh. Their movement induces a signal on the
anode, which is then amplified and read out by a digitiser. A typical PICOSEC
waveform displaying a fast electron peak and a slow ion tail is illustrated in
Fig. 1b. The leading edge of the electron peak determines a Signal Arrival
Time (SAT).
Several optimisation studies were conducted on a 10 mm diameter active area
single-pad PICOSEC detector to enhance high voltage stability, reduce noise,
improve signal integrity and achieve uniform timing response, using a
simplified assembly process [12]. The design includes improvements to the
detector board, vessel, mechanical parts and electrical connections for both
high voltage and signal. A new single-channel detector board is shown in Fig.
2.
To enhance the robustness and stability of the PICOSEC detector, research on
robust photocathodes is underway. The initial prototype used a CsI
photocathode due to its high QE and UV sensitivity; however, the material is
vulnerable to damage from ion backflow, discharges and humidity. The
robustness of the photocathode is crucial for maintaining detector efficiency
and timing resolution during long-term operation. Potential alternatives
include using protective layers or other materials, with carbon-based
photocathodes like DLC and B4C being the most promising candidates.
Figure 2: Single-channel detector board with a 10 mm diameter active area.
## 3 Experimental methodology
R&D activities within the PICOSEC project have covered aspects ranging from
design and production, through assembly, to measurements in both laboratory
conditions as well as with particle beams. Prototypes were assembled in a
clean-room environment and tested in the laboratory. The ASSET (A Small Sample
Evaporation Test) setup was developed to characterise photocathodes,
facilitating transparency and QE measurements, as well as ageing studies [14,
15]. The time resolution of the detectors was evaluated using 150 GeV/c muon
beams at the CERN SPS H4 beamline.
### 3.1 ASSET photocathode characterisation device
The ASSET setup was developed to characterise photocathodes for gaseous
radiation detectors, as in the PICOSEC project [5]. The primary aims are to
quantify QE and transparency of the photocathodes as well as their degradation
due to ion backflow. Fig. 3 provides an overview of the ASSET setup.
Figure 3: Overview of the ASSET photocathode characterisation device: The
setup uses UV light from a deuterium lamp and a VUV monochromator, features a
collimating mirror chamber, a beam splitter and two calibrated CsI PMTs to
measure light intensity and stability within a wavelength range of 140-200 nm,
peaking at 160-180 nm. It allows for measurements in reflective and
transmission mode, as well as studies of the ageing process due to ion
backflow [5].
Measurements utilise UV light from a system consisting of a deuterium lamp
(McPherson, Model 632) and a Vacuum UltraViolet (VUV) monochromator
(McPherson, Monarch). The system is flushed with high-purity nitrogen to
prevent absorption of the short-wavelength portion of the UV spectrum. The
monochromator is attached to the measurement chamber through an extension
containing a collimating mirror chamber that focuses the light. A beam
splitter divides the light into two beams. Two calibrated CsI photomultiplier
tubes (PMTs) register the light intensity: one measures the amount of light at
the sample position, while the other monitors light stability during
measurements. The wavelength range for measurements is limited to about 140 nm
on the low-wavelength side by the transparency of the MgF2 lens and to about
200 nm on the high-wavelength side by the deuterium lamp’s light intensity.
The highest light intensity is observed at 160-180 nm.
The ASSET setup allows for measurements in reflective and transmission modes.
In both modes, the extraction grids record the current generated by electrons
extracted from the photocathodes. The reflective mode measures the QE of
photocathodes, while the transmission mode evaluates both QE and transparency.
The measurements are conducted in a vacuum of $10^{-6}$ mbar, achieved using
pre-pumps and turbomolecular pumps. Additionally, the device enables the
collection of defined amounts of ion current on samples to study potential
charge-induced ageing effects. Further details about the ASSET photocathode
characterisation device can be found in [5].
The primary function of the ASSET setup is the measurement of photocurrent
from photocathode samples. The QE is calculated from current measurements
using the formula
$QE=\frac{N_{e}}{N_{ph}}=\frac{\frac{I_{s}}{e}}{\frac{I_{PMT}}{e\cdot
C_{PMT}}}=\frac{I_{s}\cdot C_{PMT}}{I_{PMT}},$ (1)
where $N_{e}$ is the number of electrons extracted from the sample (measured
on the sample), $N_{ph}$ is the number of photons that arrive at the sample
(measured with the PMT), $I_{s}$ is the current measured on the sample (offset
subtracted), $I_{PMT}$ is the current measured on the PMT (offset subtracted),
$C_{PMT}$ is the calibration factor of PMT (depends on wavelength) and $e$ is
the elementary charge.
The current measured by the PMT is used to calculate the transparency (T) of
the samples, given by
$T=\frac{{I}_{PMT,~{}s.~{}in}}{{I}_{PMT,~{}s.~{}out}},$ (2)
where ${I}_{PMT,~{}s.~{}in}$ is the PMT current measured with the sample in
the measurement position and ${I}_{PMT,~{}s.~{}out}$ is the PMT current
measured with the sample out of the measurement position. Transparency
measurements in ASSET were performed in the VUV range between 140 nm and 200
nm. Transparency can also be assessed across the visible light spectrum, from
200 nm to 800 nm, using a spectrophotometer (PerkinElmer, Lambda 650 UV/VIS
Spectrometer).
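Equations (1) and (2) are direct ratios of measured currents, so they translate into one-line helpers. The sketch below is illustrative only; the function names and argument conventions are our own assumptions, not part of the ASSET software:

```python
def quantum_efficiency(i_sample, i_pmt, c_pmt):
    """QE from Eq. (1): QE = I_s * C_PMT / I_PMT.

    i_sample, i_pmt : offset-subtracted currents (same units);
    c_pmt : wavelength-dependent PMT calibration factor.
    """
    return i_sample * c_pmt / i_pmt

def transparency(i_pmt_sample_in, i_pmt_sample_out):
    """Transparency from Eq. (2): ratio of the PMT currents measured
    with the sample in and out of the measurement position."""
    return i_pmt_sample_in / i_pmt_sample_out
```

Note that the elementary charge $e$ cancels in Eq. (1), so it never appears in the computation.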
An important capability of the ASSET measurements is conducting ageing
studies. The procedure starts with an X-ray beam entering the irradiation
chamber filled with a gas mixture of Ar/CO2 (70/30 %) at ambient pressure that
ionises the gas and creates primary electrons. The primary electrons are
attracted to multiplication wires, where an avalanche occurs in the high
electric field region near the wires, producing additional electrons and ions.
The electrons from the avalanche are collected on the wires, while the ions
move towards the grounded sample. This process accumulates charge on the
sample, allowing for the quantification of the drop in QE after exposure.
### 3.2 Characterisation with particle beams
Particle beam campaigns are conducted to measure the time resolution of
prototypes assembled in various configurations. The measurements are conducted
using 150 GeV/c muon beams at the CERN SPS H4 beamline. The test setup
includes a beam telescope facilitating triggering, timing and tracking
capabilities. An example telescope configuration is shown in Fig. 4. Precise
particle tracking is achieved utilising three triple Gas Electron Multiplier
(GEM) detectors with a spatial resolution below 80 µm. These devices use APV25
front-end ASICs to shape the signals from all electrode channels, which are
subsequently digitised using the Scalable Readout System (SRS) [16]. The GEMs
are operated in a gas mixture of Ar/CO2 (70/30 %) at ambient pressure. A
micro-channel plate photomultiplier tube (MCP-PMT, Hamamatsu, R3809U-50)
serves as the timing reference and data acquisition (DAQ) trigger. The
telescope can be utilised for testing several PICOSEC prototypes
simultaneously.
Different components individually contribute to the overall time resolution in
a detector system, with the total time resolution being the sum of variances
from each of these contributions. It can be represented by a simplified model
${\sigma_{tot}}^{2}={\left({\sigma_{SPE}}\over{\sqrt{N_{PE}}}\right)}^{2}+{\sigma_{t_{0}}}^{2}+{\sigma_{n}}^{2}+...$
(3)
The first term is the time resolution of the PICOSEC prototype measured with
MIPs, where $\sigma_{SPE}$ is the time resolution achieved with single
photoelectron (SPE) events and $N_{PE}$ is the number of photoelectrons
extracted from the photocathode [2].
The second term is the contribution of the reference device. The third term is
the time jitter caused by noise in the detector and the connected external
electrical circuit. The time resolutions discussed in this paper include
contributions from all these components.
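Under this simplified model, the contributions add in quadrature, which can be sketched as follows (an illustrative helper with names of our own choosing, not code from the experiment):

```python
import math

def total_time_resolution(sigma_spe, n_pe, sigma_ref, sigma_noise):
    """Combine the contributions of Eq. (3) in quadrature:

    sigma_tot^2 = (sigma_spe / sqrt(N_PE))^2 + sigma_ref^2 + sigma_noise^2

    Units are arbitrary but must be consistent (e.g. ps).
    """
    return math.sqrt((sigma_spe / math.sqrt(n_pe)) ** 2
                     + sigma_ref ** 2
                     + sigma_noise ** 2)
```

The model makes explicit why a larger $N_{PE}$ improves the timing: the photocathode term scales as $1/\sqrt{N_{PE}}$.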
The photocathode performance is assessed by comparing the $N_{PE}$ each
photocathode generates. The number is estimated as the ratio of the mean
signal amplitude induced by a MIP to that induced by a single photoelectron
(SPE). SPE measurements are performed using a light-emitting diode
(LED), while the multiple photoelectron (MPE) measurements are conducted with
muon beams. The signals are amplified by a custom-developed charge-sensitive
ARTH amplifier [2] and read out by an oscilloscope (LeCroy WR8104, 10 GS/s
sampling rate, 1 GHz bandwidth).
The procedure for calculating $N_{PE}$ involves several steps. Firstly, the
maximum amplitude for each waveform is determined. Secondly, a histogram of
all maximum amplitudes is plotted. The noise component is fitted using a
Gaussian distribution, while the signal component is fitted using an equation
based on a Pólya distribution of the form
$P_{n}=\frac{(\theta+1)^{\theta+1}}{\bar{n}\Gamma(\theta+1)}\left(\frac{n}{\bar{n}}\right)^{\theta}e^{-(\theta+1)n/\bar{n}},$
(4)
where: $n$ is the number of charges produced, $\bar{n}$ the mean avalanche
size, and $\theta$ the shape parameter. From this Pólya distribution, the mean
value is calculated. Finally, the mean amplitude of the MPE is divided by the
mean amplitude of the SPE to obtain the $N_{PE}$ for a given photocathode. The
$N_{PE}$ is given by the formula
$N_{PE}=\frac{{V}_{MIP}}{{V}_{SPE}},$ (5)
where ${V}_{MIP}$ is the mean MIP amplitude and ${V}_{SPE}$ is the mean SPE
amplitude.
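A minimal numerical sketch of Eqs. (4) and (5) is given below. The mean of the Pólya distribution of Eq. (4) equals $\bar{n}$ analytically; here it is recovered by integration as a cross-check. All names are our own, and scipy is assumed to be available:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

def polya(n, n_bar, theta):
    """Polya distribution of Eq. (4)."""
    norm = (theta + 1.0) ** (theta + 1.0) / (n_bar * Gamma(theta + 1.0))
    return norm * (n / n_bar) ** theta * np.exp(-(theta + 1.0) * n / n_bar)

def polya_mean(n_bar, theta):
    """Mean avalanche size, recovered by numerical integration
    (analytically equal to n_bar)."""
    mean, _ = quad(lambda n: n * polya(n, n_bar, theta), 0, np.inf)
    return mean

def n_photoelectrons(v_mip, v_spe):
    """Eq. (5): ratio of the mean MIP amplitude to the mean SPE amplitude."""
    return v_mip / v_spe
```

In practice, the SPE and MPE amplitude spectra would be fitted with `polya` (e.g. via a least-squares fit) to extract the two mean amplitudes entering `n_photoelectrons`.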
Quantifying the time resolution of the PICOSEC detector requires a reference
device with significantly superior timing precision. To fulfill this
condition, the MCP-PMT with a time resolution below 6 ps in the central region
[1, 2] serves as the reference for the PICOSEC prototypes. The devices are
aligned to each other, ensuring that particles pass through the active areas
of both. To amplify and read out the signal, custom-developed RF pulse
amplifier cards optimised for PICOSEC (with built-in discharge protection up
to 350 V, 650 MHz bandwidth, 38 dB gain, 75 mW power consumption [10, 17]) are
used in conjunction with an oscilloscope.
In the analysis, the constant fraction discrimination (CFD) method is employed
to determine the timestamp at which the signal reaches a specific fraction of
its peak amplitude, thereby addressing the issue of time walk [1]. The leading
edge of the electron peak is fitted with a sigmoid function to identify the
signal position in time at 20$\%$ of its amplitude. The SAT is calculated
as the difference between the timestamps of the PICOSEC detector and the
reference device. The SAT distribution is analysed using a double Gaussian fit
and time resolution of the detector system is computed as the standard
deviation of this distribution.
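The CFD step can be illustrated with a sigmoid fit to the leading edge followed by an analytic inversion at the chosen fraction. This is a simplified sketch under our own naming conventions, not the experiment's analysis code:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, a, t0, tau):
    """Logistic model for the leading edge of the electron peak."""
    return a / (1.0 + np.exp(-(t - t0) / tau))

def cfd_timestamp(t, v, fraction=0.2):
    """Timestamp where the fitted edge reaches `fraction` of its amplitude."""
    i_peak = int(np.argmax(v))
    t_edge, v_edge = t[: i_peak + 1], v[: i_peak + 1]
    # Rough initial guesses: amplitude, mid-edge position, rise-time scale.
    p0 = [v[i_peak], t[i_peak // 2], 5.0 * (t[1] - t[0])]
    (a, t0, tau), _ = curve_fit(sigmoid, t_edge, v_edge, p0=p0)
    # Invert the sigmoid analytically at v = fraction * a.
    return t0 + tau * np.log(fraction / (1.0 - fraction))
```

Because the timestamp is taken at a fixed fraction of the amplitude rather than at a fixed threshold, signals of different amplitude but identical shape yield the same timestamp, which is what suppresses the time-walk effect.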
Figure 4: The beam telescope equipped with three triple-GEMs for particle
tracking, an MCP-PMT for timing reference as well as DAQ trigger, and PICOSEC
prototypes for testing.
The results presented in this paper are derived from calculations conducted
after applying several selection criteria (cuts) to the triggered events.
Initially, a time window cut is implemented, selecting events within 300 ps of
the median time difference of all recorded signals to exclude off-time events
and noise fluctuations. Additional cuts are applied to set signal amplitude
limits. Events with amplitudes below 1% of the dynamic range, categorised as
empty events, and those above 99%, classified as saturated waveforms, are
rejected. Furthermore, a geometrical cut is essential because the Cherenkov
radiator emits a UV photon cone at an approximately 45-degree angle. To
ensure accurate time resolution measurements, only fully contained events
within the detector’s active area should be included. Specifically, a cut of a
4 mm diameter circle around the pad center for a 10 mm diameter detector
equipped with a 3 mm thick radiator (or a larger diameter, if otherwise
specified) was implemented. Signals from tracks that pass outside this central
region show reduced amplitude because of the partial loss of NPE beyond the
area of the channel.
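The selection chain above can be sketched as a single boolean mask. This is illustrative only: the thresholds and units are taken from the text, while the function name and interface are assumptions:

```python
import numpy as np

def select_events(t_diff, amplitude, x, y, dyn_range,
                  window=0.3, amp_lo=0.01, amp_hi=0.99, radius=2.0,
                  pad_center=(0.0, 0.0)):
    """Boolean mask of accepted events.

    t_diff   : PICOSEC - reference time differences (ns)
    window   : accept events within 300 ps (0.3 ns) of the median t_diff
    amp_lo/hi: reject empty (<1%) and saturated (>99%) waveforms
    radius   : geometrical cut, 2 mm radius (4 mm diameter circle)
    """
    t_diff = np.asarray(t_diff, dtype=float)
    amplitude = np.asarray(amplitude, dtype=float)
    in_time = np.abs(t_diff - np.median(t_diff)) < window
    in_range = (amplitude > amp_lo * dyn_range) & (amplitude < amp_hi * dyn_range)
    r = np.hypot(np.asarray(x) - pad_center[0], np.asarray(y) - pad_center[1])
    contained = r < radius
    return in_time & in_range & contained
```

For the 15 mm diameter prototype with the 5 mm radiator, only `radius` would change (2.5 mm, i.e. a 5 mm diameter circle).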
## 4 Photocathode characterisation
Three different photocathode materials including Cesium Iodide, Diamond-Like
Carbon and Boron Carbide have been examined. The fabrication, characterisation
and performance evaluation of these photocathode materials have been detailed,
highlighting their strengths and challenges in achieving precise timing in
advanced detection systems. To ensure consistency of the results, all
measurements presented in this paper were performed using identical single-pad
non-resistive detectors featuring a 10 mm diameter active area (or 15 mm if
otherwise specified), with pre-amplification and amplification gaps of
approximately 127 µm. The detectors were operated in a sealed mode,
maintaining a gas pressure of 990 $\pm$ 5 mbar for the purpose of comparison.
### 4.1 Cesium Iodide
Cesium Iodide is the most widely used material for photocathodes in gaseous
radiation detectors due to its high QE and its sensitivity to UV radiation.
The baseline photocathode used in the PICOSEC detector consists of an 18 nm
thick CsI layer deposited onto a 3 mm thick MgF2 substrate with a 3 nm thick
Cr conductive interfacial layer, yielding more than 12 photoelectrons per MIP [1].
Previous studies on transparency in the VUV range have shown that the pure
MgF2 substrate has approximately 80% transparency, which decreases to 45% with
the addition of the Cr layer, and further to 10% with the CsI layer [5].
Particle beam measurements were performed using a PICOSEC prototype equipped
with a CsI photocathode, operated in a sealed mode at a gas pressure of 985
mbar. At voltage settings of V${}_{\text{C}}$ = -430 V on the cathode and
V${}_{\text{A}}$ = 265 V on the anode, the detector exhibited a time
resolution of $\sigma$ = 15.8 $\pm$ 2.2 ps, as illustrated in Fig. 5.
Figure 5: SAT distribution of a single-pad prototype, featuring a 10 mm
diameter active area, equipped with an 18 nm thick CsI photocathode with a 3
nm thick Cr conductive layer, operated in a sealed mode at a gas pressure of
985 mbar. The voltage settings were V${}_{\text{C}}$ = -430 V on the cathode
and V${}_{\text{A}}$ = 265 V on the anode. The histogram consists of the data
after implementing a geometrical cut of a 4 mm diameter circle around the pad
center to include only fully contained events. The results of a double
Gaussian fit yield a time resolution of $\sigma$ = 15.8 $\pm$ 2.2 ps.
Despite exhibiting excellent time resolution, CsI is vulnerable to damage from
ion backflow, discharges and humidity. Fig. 6 illustrates the decline in QE
observed in a CsI sample before and after multiple irradiation steps,
providing evidence of ion bombardment affecting CsI. Studies demonstrated that
after accumulating a charge of 6 mC/cm2, the QE for CsI decreased by 80%,
whereas for B4C, it decreased by 45%, indicating a more rapid degradation of
CsI [5]. One possible solution to reduce the impact of ion backflow on CsI and
mitigate degradation involves introducing protective layers, such as MgF2 or
Lithium Fluoride (LiF) [18]. Nonetheless, tests conducted within the PICOSEC
project revealed that these layers inhibited electron extraction, leading to
reduced QE [5]. Another approach to address the issue of non-robustness is to
explore alternative materials, with carbon-based photocathodes like DLC and
B4C being the most promising candidates.
### 4.2 Diamond-Like Carbon
Diamond-Like Carbon represents an alternative photocathode material that
offers greater resistance to environmental influences compared to CsI. DLC
belongs to a class of amorphous carbon materials, exhibiting excellent
properties including chemical and thermal stability, hardness and robustness.
Initial measurements of DLC photocathodes have been conducted by collaborators
from USTC [13], including tests of various layer thicknesses, measurements of
QE, NPE and time resolution as well as aging studies. DLC samples consistently
demonstrated good performance and robustness, making them a strong candidate
for PICOSEC photocathodes.
Figure 6: Ageing studies of a CsI photocathode performed in the ASSET setup:
Decrease in QE after multiple irradiation steps, indicating the impact of ion
bombardment on CsI [5]. Figure 7: Picture of DLC photocathodes with various
layer thicknesses deposited at the CERN MPT workshop using a magnetron
sputtering technique.
The DLC photocathodes presented in this paper were fabricated using a
magnetron sputtering technique at the CERN Micro-Pattern Technologies (MPT)
workshop. DLC photocathodes with thicknesses ranging from 1 nm to 100 nm were
deposited on glass and MgF2 substrates, with some samples including a Cr
interfacial layer. Fig. 7 depicts the DLC photocathodes with the various
thicknesses produced during one of the two deposition campaigns. The process
proved to be reproducible across both campaigns.
Profilometer measurements were performed to investigate the thicknesses of the
50 nm and 100 nm layers. For layers in the range of a few nanometers,
specifically from 1.5 nm to 4.5 nm, thicknesses were estimated by scaling the
coating time. Fig. 8 presents the results of transparency measurements
conducted in the wavelength range from 200 nm to 800 nm using a
spectrophotometer. The data demonstrate a correlation between the estimated
thicknesses and the measured transparency. The transparency of samples with a
thickness of a few nanometers in the VUV range is approximately 60%.
Figure 8: Transparency measurements of the DLC layers with thicknesses ranging
from 1.5 nm to 100 nm deposited on MgF2 and glass substrates, conducted in the
wavelength range from 200 nm to 800 nm using a spectrophotometer.
Surface resistivity measurements were conducted, exhibiting a correlation
between the thickness of the DLC layer and its resistivity, as shown in Fig.
9. The higher resistivity of the DLC photocathodes on the MgF2 radiator
compared to the glass substrate suggests that the layer deposited on the MgF2
is thinner, potentially due to the lower adhesion of the crystal.
Figure 9: Surface resistivity measurements of the DLC photocathodes,
presenting a correlation between estimated thicknesses and resistivity. The
higher resistivity of the DLC photocathodes on the MgF2 radiator compared to
the glass substrate suggests a thinner deposited layer on the MgF2, possibly
due to lower crystal adhesion.
The DLC photocathodes, with thicknesses ranging from 1.5 nm to 3.5 nm, were
characterised during particle beam measurements. The measurements were
performed at a gas pressure of 990 mbar. The NPE generated by the DLC
photocathodes varied between 2.5 and 3. The 10 mm diameter active area
detector featuring a 1.5 nm thick DLC photocathode deposited directly on the
radiator and operated at voltages of V${}_{\text{C}}$ = -500 V on the cathode
and V${}_{\text{A}}$ = 275 V on the anode demonstrated the best performance,
achieving a time resolution of $\sigma$ = 31.9 $\pm$ 1.3 ps, as illustrated in
Fig. 10. Thicker samples exhibited comparable time resolution values, with a
deviation of approximately 4 ps.
Figure 10: SAT distribution of a prototype equipped with a 1.5 nm DLC
photocathode. The voltage settings were V${}_{\text{C}}$ = -500 V on the
cathode and V${}_{\text{A}}$ = 275 V on the anode. The results show a time
resolution of $\sigma$ = 31.9 $\pm$ 1.3 ps.
The DLC photocathodes deposited directly on the radiator without a Cr
conductive interfacial layer are sufficient for studying photocathode
performance. Nonetheless, in high-rate environments, a Cr layer is essential
to mitigate charging-up effects and voltage drops, particularly for samples
with larger surface areas. Consequently, samples with Cr were evaluated,
showing a decrease in transparency of approximately 30% and 2 ps worse time
resolution.
To enhance UV photon production, a thicker radiator was introduced. At the
same time, the Cherenkov cone diameter was widened. A prototype with a 15 mm
diameter active area, equipped with a 2.5 nm thick DLC photocathode deposited
on a 5 mm thick MgF2 radiator, was tested. The analysis involved applying a
geometric cut, consisting of a 5 mm diameter circle around the pad center, to
include only fully contained events. The detector, operated at
V${}_{\text{C}}$ = -490 V on the cathode and V${}_{\text{A}}$ = 275 V on the
anode, yielded a time resolution of $\sigma$ = 28.0 $\pm$ 1.4 ps.
### 4.3 Boron Carbide
Boron Carbide is a chemical compound from the carbide group, known for its
crystalline structure and exceptional hardness, often used as a substitute for
diamond. The initial B4C photocathodes were developed at CEA, where a wide
range of thicknesses was tested, yielding promising results [2, 11]. More
detailed studies discussed in this paper were conducted with B4C samples
fabricated using a sputtering technique at ESS. During the coating campaigns,
B4C photocathodes with thicknesses ranging from 7 nm to 15 nm were deposited
on MgF2 radiators with Cr conductive interfacial layers.
Profilometer measurements were conducted to determine the thickness of the
photocathodes. The 7 nm layer was estimated based on the scaling of the
coating time, considering the machine’s resolution limits. Transparency
measurements of the B4C layers, conducted in the VUV wavelength range,
demonstrated a correlation between estimated thicknesses and transparency,
with values ranging from 40% for the thinnest layer to 20% for the thickest.
The B4C photocathodes were evaluated through particle beam measurements. The
prototypes were operated at a gas pressure of 990 mbar. The NPE created by the
B4C photocathodes ranged between 2 and 4, with higher PE production observed
in thinner layers. The detector equipped with a 9 nm thick B4C photocathode
and operated at V${}_{\text{C}}$ = -490 V on the cathode and V${}_{\text{A}}$
= 275 V on the anode, exhibited the best performance, achieving a time
resolution of $\sigma$ = 34.5 $\pm$ 1.5 ps, as shown in Fig. 11. The time
resolution of the remaining B4C samples is worse, with the thickest sample
showing a degradation of approximately 10 ps.
Figure 11: SAT distribution of a prototype equipped with a 9 nm B4C
photocathode. The voltages were at V${}_{\text{C}}$ = -490 V on the cathode
and V${}_{\text{A}}$ = 275 V on the anode. The results show a time resolution
of $\sigma$ = 34.5 $\pm$ 1.5 ps.
## 5 Discussion
In the comparative analysis of photocathode materials, distinct performance
characteristics were observed among CsI, DLC, and B4C. CsI exhibited the
highest yield, exceeding 12 photoelectrons per MIP, whereas DLC produced NPE
values between 2.5 and 3 and B4C between 2 and 4. Despite B4C being more
transparent than DLC, both materials exhibit similar efficiency, generating
comparable NPE, which correlates with the achievable time resolution. CsI
showcased the best performance with $\sigma$ $\approx$ 15.8 ps, followed by
DLC with $\sigma$ $\approx$ 32 ps and B4C with $\sigma$ $\approx$ 34.5 ps.
Although demonstrating excellent timing results, CsI also exhibited a faster
degradation in QE compared to carbon-based materials. Furthermore, carbon-
based materials displayed notably greater resistance to humidity. Considering
the demand for robust detectors with timing specifications that allow a margin
in operating conditions, DLC and B4C emerge as promising candidates.
Consequently, a detector configuration comprising a single-pad prototype with
a 15 mm diameter active area resistive Micromegas of 20 M$\Omega$/$\Box$,
utilising a 1.5 nm DLC photocathode and an amplifier integrated on the outer
PCB, was tested. Operating the device at V${}_{\text{C}}$ = -530 V on the
cathode and V${}_{\text{A}}$ = 275 V on the anode, a time resolution of
$\sigma$ = 31.4 $\pm$ 0.6 ps was achieved within a 9 mm diameter circle around
the pad center, including only fully contained events. A uniform time response
across this region, with an RMS = 38.8 $\pm$ 0.3 ps, was observed, as depicted
in Fig. 12.
## 6 Conclusions
The work described in this paper is focused on enhancing the robustness of
PICOSEC detectors. The research delves into the characterisation of robust
carbon-based photocathodes, such as DLC and B4C. Results obtained from
prototypes equipped with DLC and B4C photocathodes revealed time resolutions
of approximately $\sigma$ $\approx$ 32 ps and $\sigma$ $\approx$ 34.5 ps,
respectively. Efforts dedicated to detector development increase the
feasibility of the PICOSEC concept for experiments requiring sustained
performance while maintaining excellent timing precision. Current developments
include research on alternative materials, such as titanium to replace Cr, and
carbon-based nanostructures for use as photocathodes. Simultaneously, a 10×10
cm2 robust photocathode, incorporating a conductive interlayer to prevent a
voltage drop, will be tested with a 100-channel prototype [10] and a SAMPIC
digitiser [19]. In view of improving the stability of the detector, the
production of a high-rate 10×10 cm2 Micromegas with double-layer DLC for
vertical charge evacuation and evaluation of rate capability is ongoing.
Additionally, efforts to enhance spatial resolution involve testing high-
granularity layouts. Scaling up the PICOSEC detector by tiling 10×10 cm2
modules or developing larger prototypes are the next steps.
Figure 12: Time resolution map of a detector configuration consisting of a 15
mm diameter active area resistive Micromegas of 20 M$\Omega$/$\Box$, a 1.5 nm
DLC photocathode and an integrated amplifier. The black circle indicates the
active area of the detector, while the red circle highlights fully contained
events.
## Acknowledgements
We acknowledge the support of the CERN EP R&D Strategic Programme on
Technologies for Future Experiments; the RD51 Collaboration, in the framework
of RD51 common projects; the DRD1 Collaboration; the PHENIICS Doctoral School
Program of Université Paris-Saclay, France; the Cross-Disciplinary Program on
Instrumentation and Detection of CEA, the French Alternative Energies and
Atomic Energy Commission; the French National Research Agency (ANR), project
id ANR-21-CE31-0027; the Program of National Natural Science Foundation of
China, grants number 11935014 and 12125505; the COFUND-FP-CERN-2014 program,
grant number 665779; the Fundação para a Ciência e a Tecnologia (FCT),
Portugal; the Enhanced Eurotalents program, PCOFUND-GA-2013-600382; the US CMS
program under DOE contract No. DE-AC02-07CH11359; this material is based upon
work supported by the U.S. Department of Energy, Office of Science, Office of
Nuclear Physics under contracts DE-AC05-06OR23177.
## References
* [1] J. Bortfeldt, et al., for the PICOSEC Micromegas Collaboration, _PICOSEC: Charged particle timing at sub-25 picosecond precision with a Micromegas based detector_ , Nuclear Instruments and Methods in Physics Research A 903 (2018) 317–325.
* [2] L. Sohl, _Development of PICOSEC-Micromegas for fast timing in high rate environments_ , PhD dissertation, Université Paris-Saclay, 2020; available at https://hal-universite-paris-saclay.archives-ouvertes.fr/tel-03167728/.
* [3] J. Bortfeldt, et al., for the PICOSEC Micromegas Collaboration, _Modeling the timing characteristics of the PICOSEC Micromegas detector_ , Nuclear Instruments and Methods in Physics Research A 993 (2021) 165049.
* [4] S. Aune, et al., for the PICOSEC Micromegas Collaboration, _Timing performance of a multi-pad PICOSEC-Micromegas detector prototype_ , Nuclear Instruments and Methods in Physics Research A 993 (2021) 165076.
* [5] M. Lisowska, _Photocathode characterisation and ageing studies for precise-timing gaseous detectors_ , MSc thesis, Wroclaw University of Science and Technology, 2021; available at https://cds.cern.ch/record/2885929?ln=en.
* [6] A. Kallitsopoulou, _Development of a Simulation Model and Precise Timing Techniques for PICOSEC-Micromegas Detectors_ , MSc thesis, Aristotle University of Thessaloniki, 2021; available at arXiv:2112.14113.
* [7] I. Maniatis, _Research and Development of Micromegas Detectors for New Physics Searches_ , PhD dissertation, Aristotle University of Thessaloniki, 2022; available at: http://ikee.lib.auth.gr/record/339482/files/GRI-2022-35238.pdf.
* [8] E. Chatzianagnostou, _Study of multi-pad PICOSEC-MicroMegas detector prototype: Time resolution evaluation_ , MSc thesis, Aristotle University of Thessaloniki, 2022; available at: http://ikee.lib.auth.gr/record/343732/files/GRI-2022-37581.pdf.
  * [9] M. Lisowska, et al., for the PICOSEC Micromegas Collaboration, _Sub-25 ps timing measurements with 10×10 cm2 PICOSEC Micromegas detectors_ , Nuclear Instruments and Methods in Physics Research A 1046 (2023) 167687.
* [10] A. Utrobicic, et al., for the PICOSEC Micromegas Collaboration, _A large area 100-channel PICOSEC Micromegas detector with time resolution at the 20 ps level_ , Journal of Instrumentation 18 (2023) C07012.
* [11] M. Lisowska, et al., for the PICOSEC Micromegas Collaboration, _Towards robust PICOSEC Micromegas precise timing detectors_ , Journal of Instrumentation 18 (2023) C07018.
* [12] A. Utrobicic, et al., for the PICOSEC Micromegas Collaboration, _Single channel PICOSEC Micromegas detector with improved time resolution_ , 2024; available at arXiv:2406.05657.
* [13] X. Wang, et al., for the PICOSEC Micromegas Collaboration, _A Novel Diamond-like Carbon based photocathode for PICOSEC Micromegas detector_ , 2024; available at arXiv:2406.08712.
  * [14] The ALICE Collaboration, _ALICE Technical Design Report of the High Momentum Particle Identification Detector_ , CERN-LHCC-98-19, ALICE-TDR-1, 1998; available at: https://alice-collaboration.web.cern.ch/sites/default/files/Documents/PROJECTS/HMPID/HMPID_TDR.pdf
* [15] H. Hödlmoser, _Development of Large Area CsI Photocathodes for the ALICE/HMPID RICH Detector_ , PhD dissertation, Technischen Universität Wien, 2005; available at https://cds.cern.ch/record/924378/
* [16] S. Martoiu, et al., _Development of the scalable readout system for micro-pattern gas detectors and other applications_ , Journal of Instrumentation 18 (2013) C03015.
* [17] C. Hoarau, et al., _RF pulse amplifier for CVD-diamond particle detectors_ , Journal of Instrumentation 16 (2021) T04005.
  * [18] A. Buzulutskov, A. Breskin, R. Chechik, J. Va’vra, _Study of photocathode protection with thin dielectric films_ , Nuclear Instruments and Methods in Physics Research A 371 (1996) 147-150.
* [19] D. Breton, et al., _Measurements of timing resolution of ultra-fast silicon detectors with the SAMPIC waveform digitizer_ , Nuclear Instruments and Methods in Physics Research A 835 (2016) 51-60.
# Shear modulus and reversible particle trajectories of frictional granular materials under oscillatory shear

Michio Otsuki${}^{1}$ (e-mail: <EMAIL_ADDRESS>) and Hisao Hayakawa${}^{2}$

${}^{1}$ Graduate School of Engineering Science, Osaka University, Toyonaka, Osaka 560-8531, Japan
${}^{2}$ Yukawa Institute for Theoretical Physics, Kyoto University, Sakyo-ku, Kyoto 606-8502, Japan

(Received: date / Accepted: date)

Journal: Eur. Phys. J. E

Abstract. In this study, we numerically investigated the mechanical responses
and trajectories of frictional granular particles under oscillatory shear in
the reversible phase, where particle trajectories form closed loops below the
yielding point. When the friction coefficient is small, the storage modulus
exhibits softening and the loss modulus remains finite in the quasi-static
limit. As the friction coefficient increases, the softening and the residual
loss modulus are suppressed. The storage and loss moduli satisfy scaling laws
when plotted as functions of the areas of the loop trajectories divided by
the strain amplitude and the grain diameter, at least for small values of the
areas.
## 1 Introduction
Dense disordered materials, such as granular materials, foams, emulsions, and
colloidal suspensions, behave like solids when the packing fraction $\phi$
exceeds the jamming point Hecke ; Behringer . Under a small shear strain, the
shear stress is proportional to shear strain, which is characterized by the
shear modulus depending on $\phi$ OHern02 ; Tighe11 ; Otsuki17 . However, as
the shear strain increases, the stress-strain relation becomes nonlinear
Coulais ; Otsuki14 .
The nonlinear stress-strain relation is believed to result from the yielding
transition associated with plastic deformations Nagamanasa ; Knowlton ;
Kawasaki16 ; Leishangthem ; Clark ; Boschan19 . However, recent studies have
shown that the mechanical response becomes nonlinear even when the system is
in the reversible phase in which particle trajectories form closed loops below
the yielding point Boschan ; Nakayama ; Kawasaki20 ; Bohy . Such a response is
called (reversible) softening, where the storage modulus under oscillatory
shear decreases as the strain amplitude increases. In a previous paper
Otsuki21 , we demonstrated that the loss modulus remains finite in the quasi-
static limit when reversible softening occurs. We have clarified that the
reversible softening and residual loss modulus originate from the loop
trajectories of particles Lundberg ; Schreck ; Keim13 ; Keim14 ; Regev13 ;
Regev15 ; Priezjev ; Lavrentovich ; Nagasawa ; Das .
Most previous numerical studies assumed frictionless particles, although
realistic disordered materials consist of frictional grains. The friction
causes drastic changes in rheology. For example, frictional particles exhibit
discontinuous shear thickening Otsuki11 ; Chialvo ; Brown ; Seto ; Fernandez ;
Heussinger ; Bandi ; Ciamarra ; Mari ; Grob ; Kawasaki14 ; Wyart14 ; Grob16 ;
Peters ; Fall ; Sarkar ; Singh ; Kawasaki18 ; Thomas and shear jamming Bi11 ;
Zhang08 ; Zhang10 ; Wang18 ; Zhao ; Sarkar13 ; Sarkar16 ; Seto19 ; Pradipto ;
Otsuki20 , which hardly occurs in frictionless particles. Thus, it is natural
to expect that the friction between the grains affects the mechanical
responses and particle trajectories.
In this study, we numerically investigated the shear modulus of frictional
granular materials under oscillatory shear. In Sect. 2, we explain our model
and setup. In Sect. 3, we present our numerical results of a single-cycle
displacement and mean square displacement to distinguish the irreversible
phase from the reversible phase of particle trajectories. We show how particle
trajectories depend on the friction coefficient between the grains in Sect. 4.
In Sect. 5, we illustrate the existence of scaling laws of the storage and
loss moduli, at least, for small areas of reversible particles trajectories.
In Sect. 6, we conclude and discuss our results.
## 2 Model and setup
Let us consider two-dimensional frictional granular particles with identical
densities confined in a square box under oscillatory shear. Particle $i$ with
a diameter $d_{i}$ is driven by the SLLOD equation under the Lees–Edwards
boundary condition Evans :
$\displaystyle\frac{d}{dt}{\bm{r}}_{i}$ $\displaystyle=$
$\displaystyle\dot{\gamma}(t)y_{i}\bm{e}_{x}+\frac{\bm{p}_{i}}{m_{i}},$ (1)
$\displaystyle\frac{d}{dt}{\bm{p}}_{i}$ $\displaystyle=$
$\displaystyle-\dot{\gamma}(t)p_{i,y}\bm{e}_{x}+\bm{F}_{i},$ (2)
where ${\bm{r}}_{i}=(x_{i},y_{i})$ and
$\bm{p}_{i}=m_{i}\left(\dot{\bm{r}}_{i}-\dot{\gamma}(t)y_{i}\bm{e}_{x}\right)$ are the
position and peculiar momentum of particle $i$ with mass $m_{i}$, shear rate
$\dot{\gamma}(t)$, and the unit vector $\bm{e}_{x}$ along the $x$-direction,
respectively. The force $\bm{F}_{i}$ is given by:
$\bm{F}_{i}=\sum_{j\neq
i}\left(\bm{F}_{ij}^{\rm(n)}+\bm{F}_{ij}^{\rm(t)}\right)\Theta(d_{ij}-r_{ij}),$
(3)
where $\bm{F}_{ij}^{\rm(n)}$ and $\bm{F}_{ij}^{\rm(t)}$ are the normal and
tangential forces between particles $i$ and $j$, $d_{ij}=(d_{i}+d_{j})/2$ is
the average diameter, and $r_{ij}=|\bm{r}_{ij}|$ is the distance between
particles $i$ and $j$, with
$\bm{r}_{ij}=\bm{r}_{i}-\bm{r}_{j}=(x_{ij},y_{ij})$. Here, $\Theta(x)$ is the
Heaviside step function satisfying $\Theta(x)=1$ for $x>0$, and $\Theta(x)=0$
otherwise. To avoid crystallization, we constructed a bidisperse system with
an equal number of grains of two diameters ($d_{0}$ and $d_{0}/1.4$).
Then, we adopt the following model for the normal force:
$\bm{F}_{ij}^{\rm(n)}=-\left(k^{\rm(n)}u^{\rm(n)}_{ij}+\eta^{\rm(n)}v^{\rm(n)}_{ij}\right)\bm{n}_{ij}$
(4)
with $k^{\rm(n)}$ as the normal elastic constant, $\eta^{\rm(n)}$ as the
normal viscous constant, and the normal unit vector is
$\bm{n}_{ij}=\bm{r}_{ij}/r_{ij}$. The normal relative displacement and
velocity are given by, respectively:
$\displaystyle u^{\rm(n)}_{ij}=r_{ij}-d_{ij},$ (5)
and
$\displaystyle
v^{\rm(n)}_{ij}=\frac{d}{dt}u^{\rm(n)}_{ij}=\left(\frac{d}{dt}\bm{r}_{i}-\frac{d}{dt}\bm{r}_{j}\right)\cdot\frac{\bm{r}_{ij}}{r_{ij}}.$
(6)
We adopt the following model for the tangential force:
$\bm{F}_{ij}^{\rm(t)}={\rm min}\left(|\tilde{F}_{ij}^{\rm(t)}|,\mu
F_{ij}^{\rm(n,el)}\right){\rm sgn}(\tilde{F}_{ij}^{\rm(t)})\bm{t}_{ij},$ (7)
where $\bm{t}_{ij}=(-y_{ij}/r_{ij},x_{ij}/r_{ij})$ is the tangential unit
vector, and $\mu$ is the friction coefficient. Here, ${\rm min}(a,b)$ selects
the smaller one between $a$ and $b$; ${\rm sgn}(x)=1$ for $x\geq 0$ and ${\rm
sgn}(x)=-1$ for $x<0$. Furthermore,
$F_{ij}^{\rm(n,el)}=-k^{\rm(n)}u^{\rm(n)}_{ij}$ is the elastic part of the
normal force. $\tilde{F}_{ij}^{\rm(t)}$ is given by
$\tilde{F}_{ij}^{\rm(t)}=-\left(k^{\rm(t)}u_{ij}^{\rm(t)}+\eta^{\rm(t)}v_{ij}^{\rm(t)}\right)$
(8)
with $k^{\rm(t)}$ as the tangential elastic constant and $\eta^{\rm(t)}$ as
the tangential viscous constant. The tangential velocity $v_{ij}^{\rm(t)}$ is
expressed as
$v_{ij}^{\rm(t)}=(\bm{v}_{i}-\bm{v}_{j})\cdot\bm{t}_{ij}-(d_{i}\omega_{i}+d_{j}\omega_{j})/2$
(9)
with $\omega_{i}$ as the angular velocity of particle $i$. The tangential
displacement $u_{ij}^{\rm(t)}$ satisfies
$\dot{u}_{ij}^{\rm(t)}=v_{ij}^{\rm(t)}$ for $|\tilde{F}_{ij}^{\rm(t)}|<\mu
F_{ij}^{\rm(n,el)}$, whereas $u_{ij}^{\rm(t)}$ remains unchanged for
$|\tilde{F}_{ij}^{\rm(t)}|\geq\mu F_{ij}^{\rm(n,el)}$. We note that
$u_{ij}^{\rm(t)}$ is set to zero if particles $i$ and $j$ are detached. The
time evolution of $\omega_{i}$ is given by
$I_{i}\frac{d}{dt}\omega_{i}=T_{i}$ (10)
with the moment of inertia $I_{i}=m_{i}d_{i}^{2}/8$, and torque
$T_{i}=-\sum_{j}\frac{d_{i}}{2}\bm{F}_{ij}^{\rm(t)}\cdot\bm{t}_{ij}$.
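As an illustration of Eqs. (4)-(8), the following is a minimal numpy sketch of the pair forces with the Coulomb cap (the function name and interface are my own; the history-dependent update of $u_{ij}^{\rm(t)}$ described above is left to the caller):

```python
import numpy as np

def contact_forces(r_i, r_j, d_ij, u_t, v_t, mu,
                   k_n=1.0, eta_n=1.0, k_t=0.2, eta_t=1.0, v_n=0.0):
    """Normal and tangential contact forces for one pair, following
    Eqs. (4)-(8).  u_t, v_t are the tangential spring displacement and
    relative velocity; v_n is the normal relative velocity."""
    r_ij = np.asarray(r_i, float) - np.asarray(r_j, float)
    dist = np.linalg.norm(r_ij)
    if dist >= d_ij:                          # Theta(d_ij - r_ij): no contact
        return np.zeros(2), np.zeros(2)
    n_ij = r_ij / dist                        # normal unit vector
    t_ij = np.array([-n_ij[1], n_ij[0]])      # tangential unit vector
    u_n = dist - d_ij                         # (negative) overlap, Eq. (5)
    F_n_el = -k_n * u_n                       # elastic part of the normal force
    F_n = -(k_n * u_n + eta_n * v_n) * n_ij   # Eq. (4)
    F_t_tilde = -(k_t * u_t + eta_t * v_t)    # Eq. (8)
    sgn = 1.0 if F_t_tilde >= 0.0 else -1.0   # sgn(x) = 1 for x >= 0
    F_t = min(abs(F_t_tilde), mu * F_n_el) * sgn * t_ij   # Coulomb cap, Eq. (7)
    return F_n, F_t
```

When the tangential spring is stretched far enough, `min(...)` clips the tangential force at $\mu F_{ij}^{\rm(n,el)}$, reproducing sliding friction.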
The particles were randomly placed with an initial packing fraction of
$\phi_{\rm I}=0.75$. The system was slowly compressed until the packing
fraction reached $\phi=0.870$, which was sufficiently above the jamming point.
In each step of the compression, the packing fraction is increased by
$\Delta\phi=1.0\times 10^{-4}$ with the affine transformation. Thereafter, the
particles were relaxed to a mechanical equilibrium state with the kinetic
temperature $T_{\rm K}=\sum_{i}p_{i}^{2}/(mN)<T_{\rm th}$. Here, we chose
$T_{\rm th}=1.0\times 10^{-8}k^{\rm(n)}d_{0}^{2}$.
After the compression, we apply the shear strain:
$\gamma(t)=\gamma_{0}\sin\omega t$ (11)
at constant volume with a strain amplitude $\gamma_{0}$ and angular frequency
$\omega$ for $N_{\rm c}$ cycles. The shear rate is given by
$\dot{\gamma}(t)=\gamma_{0}\omega\cos\omega t$ (12)
In the last cycle, we measured the storage and loss moduli, defined by Doi
$\displaystyle G^{\prime}$ $\displaystyle=$
$\displaystyle\frac{\omega}{\pi}\int_{0}^{2\pi/\omega}dt\ \sigma(t)\sin\omega
t/\gamma_{0},$ (13) $\displaystyle G^{\prime\prime}$ $\displaystyle=$
$\displaystyle\frac{\omega}{\pi}\int_{0}^{2\pi/\omega}dt\ \sigma(t)\cos\omega
t/\gamma_{0}$ (14)
with the (symmetric contact) shear stress as
$\sigma=-\frac{1}{2L^{2}}\sum_{i}\sum_{j>i}(x_{ij}F_{ij,y}+y_{ij}F_{ij,x}).$
(15)
Here, we ignore the kinetic and asymmetric parts of the shear stress because
they are less than $1\%$ of $\sigma$.
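Equations (13)-(14) reduce to simple quadratures over one period of the sampled stress. A small numpy sketch (hypothetical helper names; the trapezoidal rule is written out explicitly to stay NumPy-version agnostic):

```python
import numpy as np

def _trapz(f, t):
    """Trapezoidal quadrature (written out for NumPy-version portability)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

def viscoelastic_moduli(t, sigma, gamma0, omega):
    """Storage and loss moduli of Eqs. (13)-(14) from one sampled
    stress cycle sigma(t) under gamma(t) = gamma0 sin(omega t)."""
    Gp = omega / np.pi * _trapz(sigma * np.sin(omega * t), t) / gamma0
    Gpp = omega / np.pi * _trapz(sigma * np.cos(omega * t), t) / gamma0
    return Gp, Gpp
```

For a purely linear response $\sigma(t)=\gamma_0\left(G^{\prime}\sin\omega t+G^{\prime\prime}\cos\omega t\right)$ the quadratures recover $G^{\prime}$ and $G^{\prime\prime}$ exactly.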
The number of particles is $N=1000$, $k^{\rm(t)}=0.2k^{\rm(n)}$, and
$\eta^{\rm(n)}=\eta^{\rm(t)}=k^{\rm(n)}t_{0}$ with
$t_{0}=\sqrt{m/k^{\rm(n)}}$, where $m$ is the mass of a grain with diameter
$d_{0}$. This model corresponds to the restitution coefficient $e=0.043$. We
adopt the leapfrog algorithm with the time step $\Delta t=0.05t_{0}$. We chose
$\omega=1.0\times 10^{-4}t_{0}^{-1}$ as the quasi-static shear deformation
because $G^{\prime}$ and $G^{\prime\prime}$ do not depend on $\omega$ for
$\omega\leq 1.0\times 10^{-4}t_{0}^{-1}$.
## 3 Single-cycle particle displacement and mean square displacement
First, we introduce the single-cycle particle displacement as
$dr(n)=\left\langle\sum_{i=1}^{N}|\bm{r}_{i}(nT)-\bm{r}_{i}((n-1)T)|\right\rangle/N$
(16)
with the period $T=2\pi/\omega$ and the ensemble average
$\langle\cdot\rangle$. We plot $dr(n)$ against $n$ for various values of
$\gamma_{0}$ with $\mu=1.0$ in Fig. 1. For $\gamma_{0}=0.2$ and $0.1$, $dr(n)$
is finite and almost independent of $n$. The finite $dr(n)$ indicates that
plastic deformations exist during the cycles. For $\gamma_{0}=0.04$ and
$0.02$, a negligibly small $dr(n)$ can be regarded as the reversible motion of
particles.
Figure 1: Plots of the single-cycle displacement $dr(n)$ versus $n$ for
various values of $\gamma_{0}$ with $\mu=1.0$.
In Fig. 2, we plot the mean square displacements for various values of
$\gamma_{0}$ with $\mu=1.0$ and $n_{0}=100$, defined by
$|\Delta{\mathbf{r}}(n)|^{2}=\sum_{i}|\bm{r}_{i}((n+n_{0})T)-\bm{r}_{i}(n_{0}T)|^{2}/N.$
(17)
Here, the position $\bm{r}_{i}(n_{0}T)$ after $n_{0}$ cycles is the reference
state. For $\gamma_{0}=0.4,0.2,$ and $0.1$, $|\Delta{\mathbf{r}}(n)|^{2}$ is
proportional to $n$, whereas $|\Delta{\mathbf{r}}(n)|^{2}$ reaches a small
saturated value for $\gamma_{0}\leq 0.04$. These results are consistent with
the behavior shown in Fig. 1, where the system is irreversible for
$\gamma_{0}\geq 0.1$, and reversible for $\gamma_{0}\leq 0.04$.
Figure 2: Plots of the mean square displacement $|\Delta{\mathbf{r}}(n)|^{2}$
versus $n$ for various values of $\gamma_{0}$ with $\mu=1.0$. The solid line
represents $|\Delta{\mathbf{r}}(n)|^{2}\sim n$.
From Figs. 1 and 2, we define the reversible phase where the displacement
$dr(n)/\gamma_{0}$ in the last cycle is lower than $0.01d_{0}$, and the
diffusion coefficient $D$ is lower than $1.0\times 10^{-5}d_{0}^{2}/t_{0}$.
Here, we estimate $D$ from the mean square displacement
$|\Delta{\mathbf{r}}(n)|^{2}$ for $n_{\rm I}\leq n\leq n_{\rm F}$ as
$D=\left(|\Delta{\mathbf{r}}(n_{\rm F})|^{2}-|\Delta{\mathbf{r}}(n_{\rm
I})|^{2}\right)/\left\{4(n_{\rm F}-n_{\rm I})\right\}$, where $n_{\rm I}=10$ and
$n_{\rm F}=100$. It should be noted that $|\Delta{\mathbf{r}}(n)|^{2}$ in the
reversible phase is subdiffusive, where $D$ decreases as $n_{\rm F}$
increases. We confirmed that the systems with $\mu=0.01,0.1,0.2,0.3,0.5,$ and
$1.0$ are in the reversible phase for $\gamma_{0}\leq 0.04$. (We have also
checked that particles in the reversible phase exhibit almost the same
trajectories for ten cycles in all samples.)
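The stroboscopic observables of Eqs. (16)-(17) and the diffusion-coefficient estimate can be evaluated directly from an array of particle positions sampled once per cycle. A sketch with an assumed array layout `traj[n, i, :]` (cycle index, particle index, Cartesian component), omitting the ensemble average over samples:

```python
import numpy as np

def single_cycle_displacement(traj, n):
    """dr(n) of Eq. (16) for one sample: mean displacement of all
    particles between cycles n-1 and n."""
    return float(np.mean(np.linalg.norm(traj[n] - traj[n - 1], axis=1)))

def msd(traj, n, n0):
    """|Delta r(n)|^2 of Eq. (17) with the reference state at cycle n0."""
    return float(np.mean(np.sum((traj[n + n0] - traj[n0]) ** 2, axis=1)))

def diffusion_coefficient(traj, n0, n_i=10, n_f=100):
    """D estimated from the MSD between cycles n_i and n_f."""
    return (msd(traj, n_f, n0) - msd(traj, n_i, n0)) / (4.0 * (n_f - n_i))
```

The reversibility criteria of the text then read `single_cycle_displacement(traj, n_last) / gamma0 < 0.01 * d0` and `diffusion_coefficient(traj, n0) < 1.0e-5 * d0**2 / t0`.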
## 4 Particle trajectories in the reversible phase
In the reversible phase, non-affine particle trajectories should form closed
loops, where the non-affine trajectory for particle $i$ is defined as
$\tilde{\bm{r}}_{i}(t)={\bm{r}}_{i}(t)-\gamma(t)y_{i}(t)\bm{e}_{x}.$ (18)
We plot $\tilde{\bm{r}}_{i}(t)$ for the last two cycles for $\gamma_{0}=0.04$
and $0.004$ with $\mu=0.1$ in Fig. 3. The particle returns to its original
position after each cycle, and the trajectory forms a loop; however, the
trajectories of the two cycles for $\gamma_{0}=0.04$ deviate in some parts
owing to the inertia of the particles.
Figure 3: Non-affine particle trajectories for $\gamma_{0}=0.04$ (a) and
$0.004$ (b) with $\mu=0.1$. The circles represent the trajectory in the last
cycle. The line represents the trajectory in the second to the last cycle.
In Figs. 4 and 5, the non-affine trajectories for $\mu=0.5$ and $1.0$ are
plotted. For $\mu=0.5$, the trajectory forms a loop with a finite area for
$\gamma_{0}=0.04$, as shown in Fig. 4(a), while the trajectory for
$\gamma_{0}=0.004$ in Fig. 4(b) becomes a line (or a loop with zero area).
Such a line trajectory is not observed in jammed frictionless particles
Lundberg ; Schreck ; Keim13 ; Keim14 ; Regev13 ; Regev15 ; Priezjev ;
Lavrentovich ; Nagasawa ; Das . For $\mu=1.0$, the trajectories for
$\gamma_{0}=0.04$ and $0.004$ form lines, as shown in Fig. 5. For
$\gamma_{0}=0.04$, the line trajectory is bent (Fig. 5(a)), while it is almost
straight for $\gamma_{0}=0.004$ (Fig. 5(b)).
Figure 4: Non-affine particle trajectories for $\gamma_{0}=0.04$ (a) and
$0.004$ (b) with $\mu=0.5$. The circles represent the trajectory in the last
cycle. The line represents the trajectory in the second to the last cycle.
Figure 5: Non-affine particle trajectories for $\gamma_{0}=0.04$ (a) and
$0.004$ (b) with $\mu=1.0$. The circles represent the trajectory in the last
cycle. The line represents the trajectory in the second to the last cycle.
The geometry of the trajectory is characterized by
$A_{i}=\oint_{C}\tilde{x}_{i}d\tilde{y}_{i}=\int_{0}^{2\pi/\omega}\tilde{x}_{i}(t)\frac{d\tilde{y}_{i}(t)}{dt}dt,$
(19)
where $C$ represents the trajectory of a cycle. $|A_{i}|$ coincides with the
area covered by the trajectory of particle $i$, if there is no intersection
for the trajectory. We introduce the average area $A$ as
$A=\sum_{i}|A_{i}|/N.$ (20)
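Eq. (19) discretizes naturally into a shoelace-type sum over the sampled trajectory points. The following sketch (our own trapezoidal discretization and helper names, not specified in the text) computes $A_i$ for one closed loop and the average $A$ of Eq. (20):

```python
import numpy as np

def loop_area(x, y):
    """Discrete version of A_i = ∮_C x dy over one closed trajectory.

    x, y: arrays of non-affine coordinates sampled over one cycle;
    the last point is implicitly connected back to the first.
    """
    dy = np.roll(y, -1) - y
    xm = 0.5 * (x + np.roll(x, -1))  # trapezoidal midpoint of x on each segment
    return np.sum(xm * dy)

def average_area(trajectories):
    """A = (1/N) sum_i |A_i|, Eq. (20), over a list of (x, y) loops."""
    return np.mean([abs(loop_area(x, y)) for x, y in trajectories])

# Sanity check: a unit circle traversed counterclockwise encloses area pi
t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
print(loop_area(np.cos(t), np.sin(t)))  # ≈ 3.1416
```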
If the characteristic length of the trajectory is scaled by $\gamma_{0}d_{0}$,
as in frictionless particles Otsuki21 , $A$ is proportional to
$(\gamma_{0}d_{0})^{2}$. Therefore, we plot the normalized average area
$A/(\gamma_{0}d_{0})^{2}$ against $\gamma_{0}$ for various values of $\mu$ in
Fig. 6. When $\gamma_{0}$ is sufficiently small, $A/(\gamma_{0}d_{0})^{2}$ is
almost zero, corresponding to the line trajectories for $\mu\geq 0.1$.
Furthermore, $A/(\gamma_{0}d_{0})^{2}$ increases with the strain amplitude
$\gamma_{0}$ above a critical value, which is dependent on $\mu$.
Figure 6: Plots of the normalized area $A/(\gamma_{0}d_{0})^{2}$ of loop
trajectories versus $\gamma_{0}$ for various values of $\mu$.
## 5 Storage and loss moduli
In Fig. 7, we plot the scaled storage modulus $G^{\prime}/G^{\prime}_{0}$ in
the reversible phase against the strain amplitude $\gamma_{0}$, where
$G^{\prime}_{0}$ is defined as the storage modulus
$G^{\prime}_{0}=\lim_{\gamma_{0}\to 0}G^{\prime}$ in the linear response
regime. Here, we estimate $G^{\prime}_{0}$ by $G^{\prime}$ at
$\gamma_{0}=1.0\times 10^{-4}$. As can be seen in Fig. 7,
$G^{\prime}/G^{\prime}_{0}$ decreases with $\gamma_{0}$ above the critical
strain amplitude, depending on $\mu$ for $\mu\geq 0.1$. The decay of
$G^{\prime}$ with $\gamma_{0}$ in the reversible phase is regarded as
reversible softening, which is also observed in frictionless particles
Otsuki21 . A similar decay of $G^{\prime}$ has been reported in an experiment
on photoelastic disks, but it is not clear whether it occurs in the reversible
phase Coulais . The critical strain amplitude for the decay of $G^{\prime}$
increases with $\mu$, and the softening decreases as $\mu$ increases. In Fig.
8, we plot $G^{\prime}_{0}$ as a function of $\mu$, where $G^{\prime}_{0}$
slowly decreases as $\mu$ increases.
Figure 7: Plots of the normalized storage modulus $G^{\prime}/G^{\prime}_{0}$
versus $\gamma_{0}$ for various $\mu$ values. Figure 8: Plots of the storage
modulus $G^{\prime}_{0}$ in the linear response regime versus $\mu$.
Figure 9 shows the loss modulus $G^{\prime\prime}$ against $\gamma_{0}$ for
various $\mu$ values. $G^{\prime\prime}$ is approximately zero for small
$\gamma_{0}$, while $G^{\prime\prime}$ increases with $\gamma_{0}$ above a
threshold strain amplitude except for $\mu=0.01$.
Figure 9: Plots of the loss modulus $G^{\prime\prime}$ versus $\gamma_{0}$
for various values of $\mu$.
In Ref. Otsuki21 , it was shown that the reversible softening in frictionless
systems can be characterized by the loop trajectories of particles. Even in
frictional granular materials, the average area $A$ characterizing the loop
trajectories appears to be related to $G^{\prime}$. To check the validity of
this conjecture, we plot $1-G^{\prime}/G^{\prime}_{0}$ against
$A/(\gamma_{0}d_{0})^{2}$ in Fig. 10 for $A/(\gamma_{0}d_{0})^{2}<0.6$. It is
remarkable that $1-G^{\prime}/G^{\prime}_{0}$ satisfies a scaling law in which
$1-G^{\prime}/G^{\prime}_{0}$ is a linear function of
$A/(\gamma_{0}d_{0})^{2}$ and is independent of $\mu$. Note that the data for
$A/(\gamma_{0}d_{0})^{2}>0.6$ with $\mu=0.1$ and $0.01$ deviate from this
linear behavior.
Figure 10: Plots of $1-G^{\prime}/G^{\prime}_{0}$ versus
$A/(\gamma_{0}d_{0})^{2}$ for various $\gamma_{0}$ and $\mu$ values.
The loss modulus $G^{\prime\prime}$ is also expected to be characterized by
the loop trajectories of the particles, even in the frictional system. The
connection between the loss modulus and loop trajectories has been suggested
in suspension experiments Keim13 ; Keim14 . To clarify this connection, we
plot $G^{\prime\prime}$ against $A/(\gamma_{0}d_{0})^{2}$ in Fig. 11 for
various $\mu$ with $A/(\gamma_{0}d_{0})^{2}\leq 0.6$. It is remarkable that
$G^{\prime\prime}$ satisfies a scaling law in which $G^{\prime\prime}$ is
proportional to $A/(\gamma_{0}d_{0})^{2}$, except for $\mu=0.1$ and $0.01$.
Figure 11: Plots of the loss modulus $G^{\prime\prime}$ versus
$A/(\gamma_{0}d_{0})^{2}$ for various values of $\gamma_{0}$ and $\mu$.
## 6 Conclusion and discussion
We numerically studied the relationship between the particle trajectories and
the shear moduli of frictional granular materials under oscillatory shear in
the reversible phase. The geometry of the particle trajectories depends on the
friction coefficient $\mu$, where the normalized area
$A/(\gamma_{0}d_{0})^{2}$ increases as $\gamma_{0}$ increases and $\mu$
decreases. The storage modulus $G^{\prime}$ exhibits reversible softening. The
loss modulus $G^{\prime\prime}$ remains finite for large $\gamma_{0}$ and
small $\mu$. We found scaling laws for $G^{\prime}$ and $G^{\prime\prime}$,
valid at least when $A/(\gamma_{0}d_{0})^{2}$ is not too large and $\mu$ is
not too small.
In this study, we investigated the shear modulus of frictional granular
materials for $\phi=0.87$. In future studies, the findings on the shear
modulus and loop trajectories will have to be confirmed in the vicinity of the
jamming point.
In Ref. Otsuki21 , the reversible softening and residual loss modulus of
frictionless particles were theoretically related to the Fourier components of
the loop trajectories. We have numerically connected $G^{\prime}$ and
$G^{\prime\prime}$ with the area of the loop trajectories, but the theoretical
basis has not been confirmed. An extension of the theory in Ref. Otsuki21 to
frictional particles is therefore left for future work.
###### Acknowledgements.
The authors thank K. Saitoh and D. Ishima for fruitful discussions. This work
was supported by JSPS KAKENHI Grant Numbers JP16H04025, JP19K03670, and
JP21H01006, and ISHIZUE 2020 of the Kyoto University Research Development
Program.
## References
* (1) M. van Hecke, J. Phys.: Condens. Matter 22, 033101 (2009)
* (2) R. P. Behringer and B. Chakraborty, Rep. Prog. Phys. 82, 012601 (2019)
* (3) C. S. O’Hern, S. A. Langer, A. J. Liu, and S. R. Nagel, Phys. Rev. Lett. 88, 075507 (2002).
* (4) B. P. Tighe, Phys. Rev. Lett. 107, 158303 (2011).
* (5) M. Otsuki and H. Hayakawa, Phys. Rev. E 95, 062902 (2017).
* (6) C. Coulais, A. Seguin, and O. Dauchot, Phys. Rev. Lett. 113, 198001 (2014).
* (7) M. Otsuki and H. Hayakawa, Phys. Rev. E 90, 042202 (2014).
* (8) K. Hima Nagamanasa, S. Gokhale, A. K. Sood, and R. Ganapathy, Phys. Rev. E 89, 062308 (2014).
* (9) E. D. Knowlton, D. J. Pine, and L. Cipelletti, Soft Matter 10, 6931 (2014).
* (10) T. Kawasaki and L. Berthier, Phys. Rev. E 94, 022615 (2016).
* (11) P. Leishangthem, A. D. S. Parmar, and S. Sastry, Nat. Commun. 8, 14653 (2017).
* (12) A. H. Clark, J. D. Thompson, M. D. Shattuck, N. T. Ouellette, and C. S. O’Hern, Phys. Rev. E 97, 062901 (2018).
* (13) J. Boschan, S. Luding, and B. P. Tighe, Granul. Matter 21, 58 (2019).
* (14) J. Boschan, D. Vågberg, E. Somfai, and B. P. Tighe, Soft Matter 12, 5450 (2016).
* (15) D. Nakayama, H. Yoshino, and F. Zamponi, J. Stat. Mech. 2016, 104001 (2016).
* (16) T. Kawasaki and K. Miyazaki, arXiv:2003.10716.
* (17) S. Dagois-Bohy, E. Somfai, B. P. Tighe, and M. van Hecke, Soft Matter 13, 9036 (2017).
* (18) M. Otsuki and H. Hayakawa, arXiv:2101.07473.
* (19) M. Lundberg, K. Krishan, N. Xu, C. S. O’Hern, and M. Dennin, Phys. Rev. E 77, 041505 (2008).
* (20) C. F. Schreck, R. S. Hoy, M. D. Shattuck, and C. S. O’Hern, Phys. Rev. E 88, 052205 (2013).
* (21) N. C. Keim and P. E. Arratia, Soft Matter 9, 6222 (2013).
* (22) N. C. Keim and P. E. Arratia, Phys. Rev. Lett. 112, 028302 (2014).
* (23) I. Regev, T. Lookman, and C. Reichhardt, Phys. Rev. E 88, 062401 (2013).
* (24) I. Regev, J. Weber, C. Reichhardt, K. A. Dahmen, and T. Lookman, Nat. Commun. 6, 8805 (2015).
* (25) N. V. Priezjev, Phys. Rev. E 93, 013001 (2016).
* (26) M. O. Lavrentovich, A. J. Liu, and S. R. Nagel, Phys. Rev. E 96, 020101(R) (2017).
* (27) K. Nagasawa, K. Miyazaki and T. Kawasaki, Soft Matter 15, 7557 (2019).
* (28) P. Das, H. A. Vinutha, and S. Sastry, Proc. Natl. Acad. Sci. USA 117, 10203 (2020).
* (29) M. Otsuki and H. Hayakawa, Phys. Rev. E 83, 051301 (2011).
* (30) S. Chialvo, J. Sun, and S. Sundaresan, Phys. Rev. E 85, 021305 (2012).
* (31) E. Brown and H. M. Jaeger, Phys. Rev. Lett. 103, 086001 (2009).
* (32) R. Seto, R. Mari, J. F. Morris, and M. M. Denn, Phys. Rev. Lett. 111, 218301 (2013).
* (33) N. Fernandez, R. Mani, D. Rinaldi, D. Kadau, M. Mosquet, H. Lombois-Burger, J. Cayer-Barrioz, H. J. Herrmann, N. D. Spencer, and L. Isa, Phys. Rev. Lett. 111, 108301 (2013).
* (34) C. Heussinger, Phys. Rev. E 88, 050201 (2013).
* (35) M. M. Bandi, M. K. Rivera, F. Krzakala and R.E. Ecke, Phys. Rev. E 87, 042205 (2013).
* (36) M. P. Ciamarra, R. Pastore, M. Nicodemi, and A. Coniglio, Phys. Rev. E 84, 041308 (2011).
* (37) R. Mari, R. Seto, J. F. Morris, and M. M. Denn, J. Rheol. 58, 1693 (2014).
* (38) M. Grob, C. Heussinger, and A. Zippelius, Phys. Rev. E 89, 050201(R) (2014).
* (39) T. Kawasaki, A. Ikeda, and L. Berthier, EPL 107, 28009 (2014).
* (40) M. Wyart and M. E. Cates, Phys. Rev. Lett. 112, 098302 (2014).
* (41) M. Grob, A. Zippelius, and C. Heussinger, Phys. Rev. E 93, 030901(R) (2016).
* (42) I. R. Peters, S. Majumdar, and H. M. Jaeger, Nature 532, 214 (2016).
* (43) A. Fall, F. Bertrand, D. Hautemayou, C. Mezière, P. Moucheront, A. Lemaître, and G. Ovarlez, Phys. Rev. Lett. 114, 098301 (2015).
* (44) S. Sarkar, E. Shatoff, K. Ramola, R. Mari, J. Morris, and B. Chakraborty, EPJ Web Conf. 140, 09045 (2017).
* (45) A. Singh, R. Mari, M. M. Denn, and J. F. Morris, J. Rheol. 62, 457 (2018).
* (46) T. Kawasaki and L. Berthier, Phys. Rev. E 98, 012609 (2018).
* (47) J. E. Thomas, K. Ramola, A. Singh, R. Mari, J. F. Morris, and B. Chakraborty, Phys. Rev. Lett. 121, 128002 (2018).
* (48) D. Bi, J. Zhang, B. Chakraborty and R. Behringer, Nature 480, 355 (2011).
* (49) J. Zhang, T. Majmudar, and R. Behringer, Chaos 18, 041107 (2008).
* (50) J. Zhang, T. S. Majmudar, A. Tordesillas, and R. P. Behringer, Granul. Matter 12, 159 (2010).
* (51) D. Wang, J. Ren, J. A. Dijksman, H. Zheng, and R. P. Behringer, Phys. Rev. Lett. 120, 208004 (2018).
* (52) Y. Zhao, J. Barés, H. Zheng, J. E. S. Socolar, and R. P. Behringer, Phys. Rev. Lett. 123, 158001 (2019).
* (53) S. Sarkar, D. Bi, J. Zhang, R. P. Behringer and B. Chakraborty, Phys. Rev. Lett. 111, 068301 (2013).
* (54) S. Sarkar, D. Bi, J. Zhang, J. Ren, R. P. Behringer and B. Chakraborty, Phys. Rev. E 93, 042901 (2016).
* (55) R. Seto, A. Singh, B. Chakraborty, M. M. Denn, J. F. Morris, Granul. Matter 21, 82 (2019).
* (56) Pradipto, H. Hayakawa, Soft Matter 16, 945 (2020).
* (57) M. Otsuki and H. Hayakawa, Phys. Rev. E 101, 032905 (2020).
* (58) D. J. Evans and G. P. Morriss, Statistical Mechanics of Nonequilibrium Liquids 2nd ed. (Cambridge University Press, Cambridge, MA, 2008).
* (59) M. Doi and S. F. Edwards, The Theory of Polymer Dynamics (Oxford University Press, Oxford, 1986).
# Counting unate and balanced monotone Boolean functions
Aniruddha Biswas and Palash Sarkar
Indian Statistical Institute
203, B.T.Road, Kolkata
India 700108.
Email: {anib_r<EMAIL_ADDRESS>
###### Abstract
We show that the problem of counting the number of $n$-variable unate
functions reduces to the problem of counting the number of $n$-variable
monotone functions. Using recently obtained results on $n$-variable monotone
functions, we obtain counts of $n$-variable unate functions up to $n=9$. We
use an enumeration strategy to obtain the number of $n$-variable balanced
monotone functions up to $n=7$. We show that the problem of counting the
number of $n$-variable balanced unate functions reduces to the problem of
counting the number of $n$-variable balanced monotone functions, and
consequently, we obtain the number of $n$-variable balanced unate functions up
to $n=7$. Using enumeration, we obtain the numbers of equivalence classes of
$n$-variable balanced monotone functions, unate functions and balanced unate
functions up to $n=6$. Further, for each of the considered sub-class of
$n$-variable monotone and unate functions, we also obtain the corresponding
numbers of $n$-variable non-degenerate functions.
Keywords: Boolean function, unate function, monotone function, Dedekind
number, non-degenerate function, balanced functions, equivalence classes of
functions.
MSC: 05A99.
## 1 Introduction
For a positive integer $n$, an $n$-variable Boolean function $f$ is a map
$f:\{0,1\}^{n}\rightarrow\{0,1\}$. A Boolean function $f$ is said to be
monotone increasing (resp. decreasing) in the $i$-th variable if
$f(x_{1},\ldots,x_{i-1},0,x_{i+1},\ldots,x_{n})\leq
f(x_{1},\ldots,x_{i-1},1,x_{i+1},\ldots,x_{n})$ $(\mbox{resp.
}f(x_{1},\ldots,x_{i-1},0,x_{i+1},\ldots,x_{n})\geq
f(x_{1},\ldots,x_{i-1},1,x_{i+1},\ldots,x_{n}))$
for all possible $x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{n}\in\{0,1\}$. The
function $f$ is said to be locally monotone, or unate, if for each
$i\in\{1,\ldots,n\}$ it is either monotone increasing or monotone
decreasing in the $i$-th variable. The function $f$ is said to be monotone
increasing (or simply monotone) if for each $i\in\{1,\ldots,n\}$ it is
monotone increasing in the $i$-th variable.
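These per-variable definitions translate directly into a brute-force truth-table check over all $2^{n}$ inputs. The sketch below (illustrative helper names of our own choosing) classifies each variable and tests unateness and monotonicity:

```python
from itertools import product

def variable_trend(f, n, i):
    """Return 'inc', 'dec', 'both', or 'const' for variable i (1-indexed)
    of an n-variable Boolean function f given as f(x1,...,xn) -> 0/1."""
    inc = dec = False
    for x in product((0, 1), repeat=n):
        if x[i - 1] == 0:
            lo = f(*x)
            hi = f(*(x[:i - 1] + (1,) + x[i:]))  # flip the i-th input
            if lo < hi:
                inc = True
            if lo > hi:
                dec = True
    return 'both' if inc and dec else 'inc' if inc else 'dec' if dec else 'const'

def is_unate(f, n):
    # Unate: each variable is monotone increasing OR monotone decreasing
    return all(variable_trend(f, n, i) != 'both' for i in range(1, n + 1))

def is_monotone(f, n):
    # Monotone: each variable is monotone increasing
    return all(variable_trend(f, n, i) in ('inc', 'const') for i in range(1, n + 1))

maj = lambda a, b, c: 1 if a + b + c >= 2 else 0   # majority: monotone
xor = lambda a, b: a ^ b                            # XOR: not unate
print(is_monotone(maj, 3), is_unate(xor, 2))        # True False
```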
Unate functions have been studied in the literature from various viewpoints
such as switching theory [14, 18, 28, 17, 5, 23], combinatorial aspects [28,
8, 10], and complexity theoretic aspects [2, 31, 19, 10]. Monotone functions
have been studied much more extensively than unate functions and have many
applications so much so that it is difficult to mention only a few
representative works.
A Boolean function is degenerate on some variable if its output does not
depend on the variable, and it is said to be non-degenerate if it is not
degenerate on any variable. A Boolean function is said to be balanced if it
takes the values 0 and 1 equal number of times. Two Boolean functions on the
same number of variables are said to be equivalent if one can be obtained from
the other by a permutation of variables. The notion of equivalence partitions
the set of Boolean functions into equivalence classes.
The number of $n$-variable monotone Boolean functions is called the $n$-th
Dedekind number $D(n)$, after Dedekind, who posed the problem in 1897. To
date, the Dedekind numbers have been obtained only up to $n=9$ (see [26,
7, 6, 29, 4, 30, 9, 12, 11]). A closed-form summation formula for $D(n)$ was
given in [13], though it was pointed out in [15] that using the formula to
compute $D(n)$ has the same complexity as direct enumeration of all
$n$-variable monotone Boolean functions. Dedekind numbers form the entry
A000372 of [26]. The number of $n$-variable non-degenerate Boolean functions
can be obtained as the inverse binomial transform of the Dedekind numbers and
are hence also known up to $n=9$. These numbers form the entry A006126 of
[26]. The numbers of $n$-variable inequivalent monotone Boolean functions are
known up to $n=9$ (see [27, 21, 22]) and form the entry A003182 of [26].
The focus of the present work is on counting unate and monotone Boolean
functions under various restrictions. For $n\leq 5$, it is possible to
enumerate all $n$-variable Boolean functions. Consequently, the problem of
counting various sub-classes of $n$-variable Boolean functions becomes a
reasonably simple problem. Non-triviality of counting Boolean functions arises
for $n\geq 6$.
We show that the problem of counting unate functions reduces to the problem of
counting monotone functions. Since the numbers of $n$-variable monotone
functions are known up to $n=9$, these values immediately provide the numbers
of $n$-variable unate functions up to $n=9$. The problem of counting balanced
monotone functions has not been considered in the literature. We use an
enumeration strategy to count the number of balanced monotone functions up to
$n=7$. We show that the problem of counting balanced unate functions reduces
to the problem of counting balanced monotone functions. Consequently, we
obtain the numbers of $n$-variable balanced unate functions up to $n=7$. We
further extend these results to obtain the numbers of non-degenerate balanced
monotone functions, non-degenerate unate functions, and non-degenerate
balanced unate functions.
We describe a simple filtering technique for counting the number of
equivalence classes of $n$-variable functions possessing a given property.
Using this technique, we compute the number of equivalence classes of
$n$-variable balanced monotone functions. Unlike the situation for counting
functions, the problem of counting the number of equivalence classes of unate
functions does not reduce to the problem of counting the number of equivalence
classes of monotone functions. So to count equivalence classes of unate
functions, we used a method to generate all $n$-variable unate functions and
applied our filtering technique to obtain the number of equivalence classes of
$n$-variable unate functions. This allowed us to obtain the numbers of
equivalence classes of $n$-variable unate and balanced unate functions up to
$n=6$. We further extend these results to obtain the numbers of equivalence
classes of $n$-variable non-degenerate monotone functions up to $n=9$.
Moreover, we obtain the numbers of equivalence classes of $n$-variable
balanced monotone functions, non-degenerate balanced monotone functions, non-
degenerate unate functions and non-degenerate balanced unate functions up to
$n=6$.
To summarise, the new results that we obtain for monotone and unate functions
are the following.
Monotone:
1. 1.
Numbers of $n$-variable balanced monotone functions and $n$-variable non-
degenerate balanced monotone functions up to $n=7$.
2. 2.
Numbers of equivalence classes of $n$-variable non-degenerate monotone
functions up to $n=9$.
3. 3.
Numbers of equivalence classes of $n$-variable balanced monotone functions,
and $n$-variable non-degenerate balanced monotone functions up to $n=6$.
Unate:
1. 1.
Numbers of $n$-variable unate functions and $n$-variable non-degenerate unate
functions up to $n=9$.
2. 2.
Numbers of $n$-variable balanced unate functions and $n$-variable non-
degenerate balanced unate functions up to $n=7$.
3. 3.
Numbers of equivalence classes of $n$-variable unate functions, $n$-variable
non-degenerate unate functions, $n$-variable balanced unate functions, and
$n$-variable non-degenerate balanced unate functions up to $n=6$.
#### Related counts:
The number of NPN-equivalence classes of unate Boolean functions has been
studied (see [3] and entry A003183 in [26]). (Two Boolean functions are said
to be NPN equivalent if one can be obtained from the other by some combination
of the following operations: a permutation of the variables, negation of a
subset of the variables, and negation of the output. Two functions are NPN
inequivalent if they are not NPN equivalent.) A proper subclass of unate
functions is the class of unate cascade functions, which have been studied in
[25, 16, 20]. Entry A005612 in [26] provides counts of unate cascade
functions.
#### Outline of the paper:
In Section 2 we describe the preliminaries and prove the mathematical results
required to obtain the various counts. In Section 3 we address the problem of
counting various sub-classes of monotone and unate functions and in Section 4
we take up the problem of counting equivalence classes of monotone and unate
functions possessing a combination of several basic properties. Finally,
Section 5 provides the concluding remarks.
## 2 Mathematical Results
We fix some terminology and notation. The cardinality of a finite set $S$ will
be denoted as $\#S$. For $x,y\in\{0,1\}$, $xy$ and $x\oplus y$ denote the
AND and XOR operations respectively, and $\overline{x}$ denotes the complement
(or negation) of $x$.
Elements of $\{0,1\}^{n}$, $n\geq 2$, are $n$-bit strings (or vectors) and
will be denoted using bold font. Given $n\geq 2$ and $1\leq i\leq n$, by
$\mathbf{e}_{i}$ we will denote the $n$-bit string whose $i$-th bit is 1 and
which is 0 elsewhere.
Let $f$ be an $n$-variable Boolean function. The weight ${\sf wt}(f)$ of $f$
is the size of its support, i.e. ${\sf
wt}(f)=\#\{\mathbf{x}:f(\mathbf{x})=1\}$. An $n$-variable Boolean function
$f$ can be uniquely represented by a binary string of length $2^{n}$ in the
following manner: for $0\leq i<2^{n}$, the $i$-th bit of the string is the
value of $f$ on the $n$-bit binary representation of $i$. We will use the same
notation $f$ to denote the string representation of $f$, so $f_{0}\cdots
f_{2^{n}-1}$ is the bit string of length $2^{n}$ which represents $f$.
By $\overline{f}$, we will denote the negation of $f$, i.e.
$\overline{f}(\mathbf{x})=1$ if and only if $f(\mathbf{x})=0$. Let $f^{r}$ be
a Boolean function defined as
$f^{r}(x_{1},\ldots,x_{n})=f(\overline{x}_{1},\ldots,\overline{x}_{n})$. The
bit string representation of $f^{r}$ is the reverse of the bit string
representation of $f$.
Let $g$ and $h$ be two $n$-variable Boolean functions having string
representations $g_{0}\cdots g_{2^{n}-1}$ and $h_{0}\cdots h_{2^{n}-1}$. We
write $g\leq h$ if $g_{i}\leq h_{i}$ for $i=0,\ldots,2^{n}-1$. From $g$ and
$h$, it is possible to construct an $(n+1)$-variable function $f$ whose string
representation is obtained by concatenating the string representations of $g$
and $h$. We denote this construction as $f=g||h$. For
$(x_{1},\ldots,x_{n+1})\in\{0,1\}^{n+1}$, we have
$\displaystyle f(x_{1},\ldots,x_{n+1})$ $\displaystyle=$
$\displaystyle\overline{x}_{1}g(x_{2},\ldots,x_{n+1})\oplus
x_{1}h(x_{2},\ldots,x_{n+1}).$ (1)
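The correspondence between the string representation and the construction $f=g||h$ of Eq. (1) can be verified directly. In the sketch below (illustrative helper `truth_string` of our own naming, with $x_{1}$ taken as the most significant bit of the input index), the string of $f$ is exactly the string of $g$ followed by the string of $h$:

```python
def truth_string(f, n):
    """Bit string f_0 ... f_{2^n - 1}: bit i is f evaluated on the n-bit
    binary representation of i (most significant bit = x1)."""
    out = []
    for i in range(2 ** n):
        bits = tuple((i >> (n - 1 - j)) & 1 for j in range(n))
        out.append(f(*bits))
    return ''.join(map(str, out))

g = lambda a, b: a & b
h = lambda a, b: a | b

# f = g || h via Eq. (1): f(x1,x2,x3) = ~x1 * g(x2,x3) XOR x1 * h(x2,x3)
f = lambda x1, x2, x3: ((1 - x1) * g(x2, x3)) ^ (x1 * h(x2, x3))

assert truth_string(f, 3) == truth_string(g, 2) + truth_string(h, 2)
print(truth_string(f, 3))  # 00010111
```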
An $n$-variable Boolean function $f$ is said to be non-degenerate on the
$i$-th variable, $1\leq i\leq n$, if there is an $\bm{\alpha}\in\{0,1\}^{n}$
such that $f(\bm{\alpha})\neq f(\bm{\alpha}\oplus\mathbf{e}_{i})$. The
function $f$ is said to be non-degenerate if it is non-degenerate on all the
$n$ variables.
By a property $\mathcal{P}$ of Boolean functions, we will mean a subset of the
set of all Boolean functions. For example, $\mathcal{P}$ could be the property
of being balanced, being monotone, being unate, or a combination of these
properties, where a combination of properties is given by the intersection of
the corresponding subsets of Boolean functions. For $n\geq 0$, let ${\sf
P}_{n}$ denote the number of $n$-variable Boolean functions possessing the
property $\mathcal{P}$, and let ${\sf nd}\mbox{-}{\sf P}_{n}$ denote the
number of $n$-variable non-degenerate Boolean functions possessing the
property $\mathcal{P}$. Since an $n$-variable function can be non-degenerate
on $i$ variables for some $i\in\{0,\ldots,n\}$, and the $i$ variables can be
chosen from the $n$ variables in ${n\choose i}$ ways, we obtain the following
result, which shows that the sequence $\{{\sf P}_{n}\}_{n\geq 0}$ is given by
the binomial transform of the sequence $\{{\sf nd}\mbox{-}{\sf
P}_{n}\}_{n\geq 0}$.
###### Proposition 1.
For any property $\mathcal{P}$ of Boolean functions,
$\displaystyle{\sf P}_{n}$ $\displaystyle=$
$\displaystyle\sum_{i=0}^{n}{n\choose i}{\sf nd}\mbox{-}{\sf P}_{i}.$ (2)
Consequently,
$\displaystyle{\sf nd}\mbox{-}{\sf P}_{n}$ $\displaystyle=$
$\displaystyle\sum_{i=0}^{n}(-1)^{n-i}{n\choose i}{\sf P}_{i}.$ (3)
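Proposition 1 is precisely the binomial transform and its inverse. As a sanity check, applying Eq. (3) to the Dedekind numbers quoted above reproduces the known counts of non-degenerate monotone functions (a sketch with helper names of our own choosing):

```python
from math import comb

def binomial_transform(nd):
    """Eq. (2): P_n = sum_{i=0}^{n} C(n, i) * nd-P_i."""
    return [sum(comb(n, i) * nd[i] for i in range(n + 1))
            for n in range(len(nd))]

def inverse_binomial_transform(P):
    """Eq. (3): nd-P_n = sum_{i=0}^{n} (-1)^{n-i} C(n, i) * P_i."""
    return [sum((-1) ** (n - i) * comb(n, i) * P[i] for i in range(n + 1))
            for n in range(len(P))]

# Dedekind numbers D(0)..D(5): counts of monotone functions (see Section 1)
M = [2, 3, 6, 20, 168, 7581]
nd_M = inverse_binomial_transform(M)
print(nd_M)                          # [2, 1, 2, 9, 114, 6894]
assert binomial_transform(nd_M) == M  # the two transforms are inverses
```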
###### Remark 1.
We assume that for $n=0$, there are two $n$-variable, non-degenerate, monotone
(and hence unate), and unbalanced Boolean functions whose string
representations are 0 and 1.
For $n\geq 0$, let ${\sf A}_{n}=2^{2^{n}}$ be the number of all $n$-variable
Boolean functions, and let ${\sf B}_{n}={2^{n}\choose 2^{n-1}}$ be the number
of $n$-variable balanced Boolean functions. Let ${\sf nd}\mbox{-}{\sf A}_{n}$
be the number of all non-degenerate $n$-variable Boolean functions, and ${\sf
nd}\mbox{-}{\sf B}_{n}$ be the number of all non-degenerate $n$-variable
balanced Boolean functions. Using Proposition 1, we obtain
$\displaystyle{\sf nd}\mbox{-}{\sf A}_{n}=\sum_{i=0}^{n}(-1)^{n-i}{n\choose
i}\cdot 2^{2^{i}}$ and $\displaystyle{\sf nd}\mbox{-}{\sf
B}_{n}=\sum_{i=0}^{n}(-1)^{n-i}{n\choose i}\cdot{2^{i}\choose 2^{i-1}}.$ (4)
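Eq. (4) can be evaluated directly. In the sketch below, the convention ${\sf B}_{0}=0$ is our reading of Remark 1 (both 0-variable functions are unbalanced), since the binomial coefficient ${2^{i}\choose 2^{i-1}}$ is not defined at $i=0$:

```python
from math import comb

def nd_all(n):
    """nd-A_n: non-degenerate n-variable Boolean functions, Eq. (4), left."""
    return sum((-1) ** (n - i) * comb(n, i) * 2 ** (2 ** i)
               for i in range(n + 1))

def balanced(i):
    # B_i = C(2^i, 2^{i-1}); B_0 = 0 because both 0-variable functions
    # are unbalanced (Remark 1) -- our convention for the i = 0 term.
    return comb(2 ** i, 2 ** (i - 1)) if i >= 1 else 0

def nd_balanced(n):
    """nd-B_n: non-degenerate balanced Boolean functions, Eq. (4), right."""
    return sum((-1) ** (n - i) * comb(n, i) * balanced(i)
               for i in range(n + 1))

print([nd_all(n) for n in range(4)])       # [2, 2, 10, 218]
print([nd_balanced(n) for n in range(4)])  # [0, 2, 2, 58]
```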
For $n\geq 0$, by ${\sf M}_{n},{\sf BM}_{n},{\sf U}_{n}$ and ${\sf BU}_{n}$,
we will denote the numbers of $n$-variable monotone, balanced-monotone, unate,
and balanced-unate functions respectively, and by ${\sf nd}\mbox{-}{\sf
M}_{n},{\sf nd}\mbox{-}{\sf BM}_{n},{\sf nd}\mbox{-}{\sf U}_{n}$ and ${\sf
nd}\mbox{-}{\sf BU}_{n}$ we will denote the corresponding numbers of non-
degenerate functions. The relations between the number of $n$-variable
functions possessing one of these properties and the number of non-degenerate
$n$-variable functions possessing the corresponding property are obtained from
Proposition 1.
The following result relates the numbers of monotone and unate Boolean
functions.
###### Proposition 2.
For $n\geq 0$, the following holds.
$\displaystyle{\sf nd}\mbox{-}{\sf U}_{n}$ $\displaystyle=$ $\displaystyle 2^{n}\cdot{\sf nd}\mbox{-}{\sf M}_{n},$ (5)
$\displaystyle{\sf nd}\mbox{-}{\sf BU}_{n}$ $\displaystyle=$ $\displaystyle 2^{n}\cdot{\sf nd}\mbox{-}{\sf BM}_{n},$ (6)
$\displaystyle{\sf U}_{n}$ $\displaystyle\leq$ $\displaystyle 2^{n}\cdot{\sf M}_{n},$ (7)
$\displaystyle{\sf BU}_{n}$ $\displaystyle\leq$ $\displaystyle 2^{n}\cdot{\sf BM}_{n}.$ (8)
###### Proof.
First we consider (5) and (6). We prove (5), the proof of (6) being similar.
Let $f$ be an $n$-variable monotone Boolean function. Then it is easy to see
that for any $\bm{\alpha}\in\{0,1\}^{n}$, the $n$-variable function
$f_{\bm{\alpha}}$ is unate, where $f_{\bm{\alpha}}$ is defined as
$f_{\bm{\alpha}}(\mathbf{x})=f(\mathbf{x}\oplus\bm{\alpha})$ for all
$\mathbf{x}\in\{0,1\}^{n}$. The proof of (5) follows from the following
claim.
Claim: If $f$ is monotone, then the $2^{n}$ possible functions
$f_{\bm{\alpha}}$ corresponding to the $2^{n}$ possible $\bm{\alpha}$’s are
distinct if and only if $f$ is non-degenerate.
Proof of the claim: Suppose $f$ is degenerate on the $i$-th variable. Then $f$
and $f_{\mathbf{e}_{i}}$ are equal. This proves one side of the claim. So
suppose that $f$ is non-degenerate. We have to show that for
$\bm{\alpha}\neq\bm{\beta}$, $f_{\bm{\alpha}}$ and $f_{\bm{\beta}}$ are
distinct functions. Suppose, if possible, that $f_{\bm{\alpha}}$ and
$f_{\bm{\beta}}$ are equal. Note that since $f$ is non-degenerate, both
$f_{\bm{\alpha}}$ and $f_{\bm{\beta}}$ are also non-degenerate. Since
$\bm{\alpha}=(\alpha_{1},\ldots,\alpha_{n})$ and
$\bm{\beta}=(\beta_{1},\ldots,\beta_{n})$ are distinct, there is a $j$ in
$\{1,\ldots,n\}$ such that $\alpha_{j}\neq\beta_{j}$. Suppose without loss
of generality that $\alpha_{j}=0$ and $\beta_{j}=1$. Since $f$ is monotone, it
is monotone increasing in all variables and hence in the $j$-th variable.
Further, since $\alpha_{j}=0$, $f_{\bm{\alpha}}$ is monotone increasing in the
$j$-th variable, and since $\beta_{j}=1$, $f_{\bm{\beta}}$ is monotone
decreasing in the $j$-th variable. Since $f_{\bm{\alpha}}$ is monotone
increasing in the $j$-th variable, for all
$\mathbf{y}=(y_{1},\ldots,y_{n})\in\{0,1\}^{n}$ with $y_{j}=0$ we have
$f_{\bm{\alpha}}(\mathbf{y})\leq
f_{\bm{\alpha}}(\mathbf{y}\oplus\mathbf{e}_{j})$. Further, since
$f_{\bm{\alpha}}$ is non-degenerate, and hence non-degenerate on the $j$-th
variable, equality cannot hold everywhere, i.e. there is a
$\mathbf{z}=(z_{1},\ldots,z_{n})\in\{0,1\}^{n}$ with $z_{j}=0$ such that
$f_{\bm{\alpha}}(\mathbf{z})=0$ and
$f_{\bm{\alpha}}(\mathbf{z}\oplus\mathbf{e}_{j})=1$. Since $f_{\bm{\alpha}}$
and $f_{\bm{\beta}}$ are assumed to be equal, it follows that
$f_{\bm{\beta}}(\mathbf{z})=0$ and
$f_{\bm{\beta}}(\mathbf{z}\oplus\mathbf{e}_{j})=1$, which contradicts the fact
that $f_{\bm{\beta}}$ is monotone decreasing in the $j$-th variable. This
proves the claim.
Next we consider (7) and (8). We provide the proof of (7), the proof of (8)
being similar. The relation given by (7) can be obtained from (5) and
Proposition 1 using the following calculation.
$\displaystyle{\sf U}_{n}$ $\displaystyle=$
$\displaystyle\sum_{i=0}^{n}{n\choose i}{\sf nd}\mbox{-}{\sf
U}_{i}=\sum_{i=0}^{n}\left({n\choose i}\cdot 2^{i}\cdot{\sf nd}\mbox{-}{\sf
M}_{i}\right)\leq 2^{n}\cdot\sum_{i=0}^{n}{n\choose i}{\sf nd}\mbox{-}{\sf
M}_{i}=2^{n}\cdot{\sf M}_{n}.$
We record two known facts about monotone functions.
###### Proposition 3.
[1] Let $g$ and $h$ be $n$-variable Boolean functions and $f=g||h$. Then $f$
is a monotone function if and only if $g$ and $h$ are both monotone functions
and $g\leq h$.
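Proposition 3 underlies the recursive enumeration strategy used in this paper. A minimal sketch (our own encoding of functions as integers, not the authors' implementation) recovers the first few Dedekind numbers:

```python
def monotone_functions(n):
    """All n-variable monotone functions, each encoded as a 2^n-bit
    integer whose bit i is the value of f on input i (MSB of i = x1).
    Built recursively via Proposition 3: f = g || h is monotone iff
    g and h are monotone and g <= h componentwise."""
    if n == 0:
        return [0, 1]
    prev = monotone_functions(n - 1)
    width = 2 ** (n - 1)
    return [g | (h << width)        # inputs with x1 = 0 give g, x1 = 1 give h
            for g in prev for h in prev
            if g & h == g]          # g <= h bit by bit

print([len(monotone_functions(n)) for n in range(5)])  # [2, 3, 6, 20, 168]
```

The bitwise test `g & h == g` is exactly the componentwise condition $g\leq h$ on the string representations.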
###### Proposition 4.
(A003183 of [26]) If $f$ is a monotone function, then $\overline{f^{r}}$ is
also a monotone function.
Next we present some results on unate and monotone functions which will be
useful in our enumeration strategy. The first result is the analogue of
Proposition 3 for unate functions.
###### Proposition 5.
Let $g$ and $h$ be $n$-variable functions and $f=g||h$. Then $f$ is a unate
function if and only if $g$ and $h$ are both unate functions satisfying the
following two conditions.
1. 1.
For each variable, $g$ and $h$ are either both monotone increasing, or both
monotone decreasing.
2. 2.
Either $g\leq h$ or $h\leq g$.
###### Proof.
First consider the proof of the “if” part. Suppose $g$ and $h$ are unate
functions satisfying the stated condition. We have to show that for each
variable, $f$ is either monotone increasing, or monotone decreasing. Consider
the variable $x_{1}$. If $g\leq h$, then from (1), $f$ is monotone increasing
on $x_{1}$, while if $g\geq h$, then again from (1), $f$ is monotone
decreasing on $x_{1}$. Now consider any variable $x_{i}$, with $i\geq 2$. If
$g$ and $h$ are both monotone increasing on $x_{i}$, then $f$ is also monotone
increasing on $x_{i}$, while if $g$ and $h$ are both monotone decreasing on
$x_{i}$, then $f$ is also monotone decreasing on $x_{i}$. Since for each
variable, $f$ is either monotone increasing, or monotone decreasing, it
follows that $f$ is a unate function.
For the converse, suppose that $f$ is a unate function. Then for each
variable $x_{i}$, $i\geq 1$, $f$ is either monotone increasing or monotone
decreasing. From (1), it follows that for each variable $x_{i}$, $i\geq 2$,
$g$ and $h$ are either both monotone increasing, or both monotone decreasing.
So in particular, $g$ and $h$ are unate. If $f$ is monotone increasing for
$x_{1}$, then $g\leq h$ and if $f$ is monotone decreasing for $x_{1}$, then
$g\geq h$.
###### Proposition 6.
If $f$ is a unate function then $\overline{f}$ is also a unate function.
###### Proof.
The proof is by induction on the number of variables $n$. The base case is
$n=1$ and is trivial. Suppose the result holds for some $n\geq 1$. Suppose
that $f$ is an $(n+1)$-variable unate function. Then $f$ can be written as
$f=g||h$, where $g$ and $h$ are $n$-variable unate functions satisfying the
conditions in Proposition 5. Then $\overline{f}=\overline{g}||\overline{h}$.
By induction hypothesis, $\overline{g}$ and $\overline{h}$ are $n$-variable
unate functions and the conditions in Proposition 5 hold for $\overline{g}$
and $\overline{h}$. So $\overline{f}$ is a unate function.
For $0\leq w\leq 2^{n}$, let ${\sf M}_{n,w}$ (resp. ${\sf U}_{n,w}$) be the
number of $n$-variable monotone (resp. unate) Boolean functions of weight $w$.
###### Proposition 7.
For any $n\geq 1$ and weight $w\in\\{0,\ldots,2^{n}\\}$, ${\sf M}_{n,w}={\sf
M}_{n,2^{n}-w}$.
###### Proof.
Proposition 4 sets up a one-one correspondence between $n$-variable monotone
functions having weight $w$ and $n$-variable monotone functions having weight
$2^{n}-w$. This shows that ${\sf M}_{n,w}={\sf M}_{n,2^{n}-w}$.
###### Proposition 8.
For any $n\geq 1$ and weight $w\in\\{0,\ldots,2^{n}\\}$, ${\sf U}_{n,w}={\sf
U}_{n,2^{n}-w}$.
###### Proof.
Proposition 6 sets up a one-one correspondence between $n$-variable unate
functions having weight $w$ and $n$-variable unate functions having weight
$2^{n}-w$. This shows that ${\sf U}_{n,w}={\sf U}_{n,2^{n}-w}$.
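Both weight symmetries can be confirmed computationally for small $n$. The following Python sketch (illustrative only; the function names are ours, and unateness is checked directly from the definition rather than by the recursive construction used later) tallies the 2-variable unate functions by weight:

```python
from itertools import product

def is_unate(f, n):
    """Definition-based check: for each variable x_i, f must be monotone
    increasing or monotone decreasing in x_i."""
    for i in range(n):
        bit = 1 << (n - 1 - i)        # variable x_i sits at bit position n-1-i
        inc = dec = True
        for x in range(1 << n):
            if not x & bit:           # compare inputs differing only in x_i
                inc &= f[x] <= f[x | bit]
                dec &= f[x] >= f[x | bit]
        if not (inc or dec):
            return False
    return True

# Tally the 2-variable unate functions by weight.
counts = [0] * 5
for f in product((0, 1), repeat=4):   # all 16 truth tables
    if is_unate(f, 2):
        counts[sum(f)] += 1
print(counts)  # [1, 4, 4, 4, 1] -- symmetric, as Proposition 8 predicts
```

The weight distribution is symmetric, in agreement with Proposition 8; replacing the unateness test by a monotonicity test illustrates Proposition 7 in the same way.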
### 2.1 Equivalence
Two Boolean functions are equivalent if they have the same number of variables
and one can be obtained from the other by a permutation of variables. Let
$\mathcal{P}$ be a property of Boolean functions. The set $\mathcal{P}$ is
partitioned into equivalence classes by the notion of equivalence. For $n\geq
0$, let $[P]_{n}$ denote the number of equivalence classes of $n$-variable
functions possessing the property $\mathcal{P}$. Also, let ${\sf
nd}\mbox{-}[P]_{n}$ denote the number of equivalence classes of non-degenerate
$n$-variable functions possessing the property $\mathcal{P}$.
###### Remark 2.
We assume that for $n=0$, there are two equivalence classes of $n$-variable,
non-degenerate, monotone (and hence unate), and unbalanced Boolean functions
given by $[0]$ and $[1]$.
We have the following analogue of Proposition 1.
###### Proposition 9.
Let $\mathcal{P}$ be a property of Boolean functions which is closed under
permutation of variables (i.e. if $f$ is in $\mathcal{P}$ and $g$ is obtained
from $f$ by applying a permutation to the variables, then $g$ is also in
$\mathcal{P}$). Then
$\displaystyle[P]_{n}$ $\displaystyle=$ $\displaystyle\sum_{i=0}^{n}{\sf
nd}\mbox{-}[P]_{i}.$ (9)
Consequently,
$\displaystyle{\sf nd}\mbox{-}[P]_{n}$ $\displaystyle=$
$\displaystyle[P]_{n}-[P]_{n-1}.$ (10)
For $n\geq 0$, let $[A]_{n}$ denote the number of equivalence classes of
$n$-variable Boolean functions and $[B]_{n}$ denote the number of equivalence
classes of $n$-variable balanced Boolean functions. The values of $[A]_{n}$
and $[B]_{n}$ can be obtained using Polya’s theory (see for example [24]). Let
${\sf nd}\mbox{-}[A]_{n}$ denote the number of equivalence classes of
$n$-variable non-degenerate Boolean functions and ${\sf nd}\mbox{-}[B]_{n}$
denote the number of equivalence classes of $n$-variable non-degenerate
balanced Boolean functions. Using Proposition 9,
$\displaystyle{\sf nd}\mbox{-}[A]_{n}=[A]_{n}-[A]_{n-1}$ and
$\displaystyle{\sf nd}\mbox{-}[B]_{n}=[B]_{n}-[B]_{n-1}.$ (11)
For $n\geq 0$, by $[M]_{n}$, $[BM]_{n}$, $[U]_{n}$ and $[BU]_{n}$ we will
denote the numbers of equivalence classes of $n$-variable monotone, balanced-
monotone, unate, and balanced-unate functions respectively and by ${\sf
nd}\mbox{-}[M]_{n}$, ${\sf nd}\mbox{-}[BM]_{n}$, ${\sf nd}\mbox{-}[U]_{n}$ and
${\sf nd}\mbox{-}[BU]_{n}$ we will denote the corresponding numbers of
equivalence classes of non-degenerate functions. The following result is the
analogue of Proposition 2.
###### Proposition 10.
For $n\geq 0$, the following holds.
$\displaystyle{\sf nd}\mbox{-}[U]_{n}$ $\displaystyle\leq$ $\displaystyle
2^{n}\cdot{\sf nd}\mbox{-}[M]_{n},$ (12) $\displaystyle{\sf
nd}\mbox{-}[BU]_{n}$ $\displaystyle\leq$ $\displaystyle 2^{n}\cdot{\sf
nd}\mbox{-}[BM]_{n},$ (13) $\displaystyle\mbox{$[U]$}_{n}$ $\displaystyle\leq$
$\displaystyle 2^{n}\cdot[M]_{n},$ (14) $\displaystyle\mbox{$[BU]$}_{n}$
$\displaystyle\leq$ $\displaystyle 2^{n}\cdot[BM]_{n}.$ (15)
The relations given by (14) and (15) are analogues of (7) and (8)
respectively. However, unlike (5) and (6), we do not have equalities in (12)
and (13). The reason is that two distinct input translations of a non-
degenerate monotone function can lead to two unate functions which are
equivalent. An example is the following. Suppose $f(X_{1},X_{2})=X_{1}X_{2}$,
i.e. $f$ is the AND function. Let $g(X_{1},X_{2})=f(1\oplus
X_{1},X_{2})=(1\oplus X_{1})X_{2}$ and $h(X_{1},X_{2})=f(X_{1},1\oplus
X_{2})=X_{1}(1\oplus X_{2})$. Then $g(X_{1},X_{2})=h(X_{2},X_{1})$, i.e. $g$
and $h$ are distinct, but equivalent unate functions obtained by distinct
input translations from the monotone function $f$.
## 3 Counting Functions
In this section, we consider the problem of counting various sub-classes of
monotone and unate Boolean functions.
### 3.1 Monotone Functions
Note that ${\sf M}_{n}$ is the $n$-th Dedekind number. For $0\leq n\leq 9$,
the values of ${\sf M}_{n}$ are known [26], with the value of ${\sf M}_{9}$
being obtained recently and independently by two groups of researchers [11,
12]. The values of ${\sf M}_{n}$ form entry A000372 of [26]. The numbers of
non-degenerate $n$-variable monotone functions, ${\sf nd}\mbox{-}{\sf M}_{n}$,
form entry A006126 of [26].
We used enumeration to obtain the number ${\sf BM}_{n}$ of $n$-variable
balanced monotone functions. For $n\leq 6$, we enumerated all monotone
functions and counted only the balanced functions. Our strategy for
enumerating monotone functions is based on Proposition 3. The approach is the
following. First generate all $1$-variable monotone functions and store these.
For $n\geq 2$, to generate all $n$-variable monotone functions, we consider
each pair $(g,h)$ of $(n-1)$-variable monotone functions and check whether the
pair satisfies the condition of Proposition 3. If it does, then $f=g||h$ is
stored. To generate all $n$-variable monotone functions, this approach
requires considering $({\sf M}_{n-1})^{2}$ pairs. Enumerating and then
filtering out unbalanced functions allows us to obtain the values of ${\sf
BM}_{n}$, for $n=1,\ldots,6$.
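This concatenation-based strategy can be sketched in a few lines of Python (an illustrative implementation with our own names, representing a function by its truth-table tuple; it is not the authors' code):

```python
def monotone_functions(n):
    """All n-variable monotone functions as truth-table tuples
    (f_0, ..., f_{2^n - 1}), built by repeated concatenation."""
    funcs = [(0, 0), (0, 1), (1, 1)]   # the three 1-variable monotone functions
    for _ in range(n - 1):
        funcs = [g + h                 # f = g || h, kept only when g <= h
                 for g in funcs for h in funcs
                 if all(a <= b for a, b in zip(g, h))]
    return funcs

# Counting the balanced functions among them reproduces the BM_n column of Table 1.
bm = [sum(1 for f in monotone_functions(n) if sum(f) == 2 ** (n - 1))
      for n in range(1, 6)]
print(bm)  # [1, 2, 4, 24, 621]
```

The enumerated totals also recover the Dedekind numbers ${\sf M}_{n}$ for small $n$.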
###### Remark 3.
To obtain a faster method, one may consider generating only non-degenerate
functions using Proposition 3. This, however, does not work. It is indeed true
that if $g$ and $h$ are distinct non-degenerate functions, $f=g||h$ is also
non-degenerate. On the other hand, it is possible that one of $g$ or $h$ is
degenerate, but $f$ is non-degenerate. For example, take $g$ to be the
2-variable constant function whose string representation is $0000$, and $h$ to
be the 2-variable AND function whose string representation is given by $0001$.
Then the string representation of the 3-variable function $f=g||h$ is
$00000001$ which is the AND of the three variables and hence non-degenerate.
So the set of all non-degenerate $n$-variable monotone functions cannot be
obtained by concatenating only non-degenerate $(n-1)$-variable monotone
functions.
To obtain ${\sf BM}_{7}$ we used a faster method. After enumerating all
6-variable monotone functions, we divided these functions into groups where
all functions in the same group have the same weight. Our modified strategy is
to take two $n$-variable monotone functions $g$ and $h$, where $g$ has weight
$w$ and $h$ has weight $2^{n}-w$ and check whether $g\leq h$. If the check
passes, then we generate the $(n+1)$-variable balanced monotone function
$f=g||h$. Recall that for $0\leq w\leq 2^{n}$, there are ${\sf M}_{n,w}$
$n$-variable monotone functions having weight $w$. The number of pairs needed
to be considered by the modified method is
$\sum_{w=0}^{2^{n}}{\sf M}_{n,w}{\sf
M}_{n,2^{n}-w}=\sum_{w=0}^{2^{n}}\left({\sf M}_{n,w}\right)^{2},$
where the equality follows from Proposition 7. Substituting $n=6$ and using
the values of ${\sf M}_{6,w}$ obtained through enumeration, we find that the
modified strategy for generating 7-variable balanced monotone functions
requires considering $\sum_{w=0}^{64}\left({\sf M}_{6,w}\right)^{2}\approx
2^{40}$ pairs, while the previous strategy would have required considering
$({\sf M}_{6})^{2}\approx 2^{45}$ pairs.
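A sketch of this weight-pairing variant follows (again illustrative and self-contained, with hypothetical helper names); it reproduces ${\sf BM}_{5}$ from the 4-variable monotone functions:

```python
from collections import defaultdict

def monotone_functions(n):
    """n-variable monotone functions as truth-table tuples (Proposition 3)."""
    funcs = [(0, 0), (0, 1), (1, 1)]
    for _ in range(n - 1):
        funcs = [g + h for g in funcs for h in funcs
                 if all(a <= b for a, b in zip(g, h))]
    return funcs

def balanced_monotone_count(n):
    """BM_{n+1}: pair weight-w functions g with weight-(2^n - w) functions h."""
    by_weight = defaultdict(list)
    for f in monotone_functions(n):
        by_weight[sum(f)].append(f)
    total = 0
    for w, gs in by_weight.items():
        hs = by_weight.get(2 ** n - w, [])
        for g in gs:
            for h in hs:
                if all(a <= b for a, b in zip(g, h)):  # g <= h, so g||h is monotone
                    total += 1
    return total

print(balanced_monotone_count(4))  # BM_5 = 621
```

Only pairs whose weights sum to $2^{n}$ are examined, which is the source of the $2^{45}$-to-$2^{40}$ saving described above.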
###### Remark 4.
Note that the above procedure to generate balanced monotone functions can be
applied only once. It uses the set of all $n$-variable monotone functions to
generate the set of all $(n+1)$-variable balanced monotone functions. Since
this does not provide all $(n+1)$-variable monotone functions, it cannot be
applied to generate the set of all $(n+2)$-variable balanced monotone
functions.
Having obtained ${\sf BM}_{n}$, for $n=1,\ldots,7$, we use Proposition 1 to
obtain the values of ${\sf nd}\mbox{-}{\sf BM}_{n}$, i.e. the number of
$n$-variable non-degenerate balanced monotone functions. The obtained values
of ${\sf BM}_{n}$ and ${\sf nd}\mbox{-}{\sf BM}_{n}$ are given in Table 1.
$n$ | ${\sf BM}_{n}$ | ${\sf nd}\mbox{-}{\sf BM}_{n}$
---|---|---
0 | 0 | 0
1 | 1 | 1
2 | 2 | 0
3 | 4 | 1
4 | 24 | 16
5 | 621 | 526
6 | 492288 | 488866
7 | 81203064840 | 81199631130
Table 1: Numbers of $n$-variable balanced monotone and non-degenerate balanced
monotone functions for $0\leq n\leq 7$.
### 3.2 Unate Functions
The problem of counting unate functions reduces to the problem of counting
monotone functions in the following manner. Suppose we wish to obtain the
number ${\sf U}_{n}$ of $n$-variable unate functions. Using Proposition 1,
this reduces to the problem of obtaining ${\sf nd}\mbox{-}{\sf U}_{i}$, for
$0\leq i\leq n$. From (5), this reduces to the problem of obtaining ${\sf
nd}\mbox{-}{\sf M}_{i}$ for $0\leq i\leq n$. Using another application of
Proposition 1 reduces the problem of obtaining ${\sf nd}\mbox{-}{\sf M}_{i}$
to that of obtaining ${\sf M}_{j}$ for $0\leq j\leq i$. So to obtain ${\sf
U}_{n}$, it is sufficient to know ${\sf M}_{i}$ for $0\leq i\leq n$. Since the
values of ${\sf M}_{i}$ are known for $0\leq i\leq 9$, we can obtain the
values of ${\sf U}_{n}$ for $0\leq n\leq 9$. From these values, using
Proposition 1, we obtain the values of ${\sf nd}\mbox{-}{\sf U}_{n}$ for
$0\leq n\leq 9$. The values of ${\sf U}_{n}$ and ${\sf nd}\mbox{-}{\sf U}_{n}$
are shown in Table 2.
$n$ | ${\sf U}_{n}$ | ${\sf nd}\mbox{-}{\sf U}_{n}$
---|---|---
0 | 2 | 2
1 | 4 | 2
2 | 14 | 8
3 | 104 | 72
4 | 2170 | 1824
5 | 230540 | 220608
6 | 499596550 | 498243968
7 | 309075799150640 | 309072306743552
8 | 14369391928071394429416818 | 14369391925598802012151296
9 | 146629927766168786368451678290041110762316052 | 146629927766168786239127150948525247729660416
Table 2: Numbers of $n$-variable unate and non-degenerate unate functions for
$0\leq n\leq 9$.
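This chain of reductions can be carried out numerically. The following Python sketch is illustrative: the exact binomial form of Proposition 1 (relating totals to non-degenerate counts) is assumed, and all variable names are ours. It reproduces the first entries of Table 2 from the known Dedekind numbers:

```python
from math import comb

# Known values M_0, ..., M_6 of the Dedekind numbers (entry A000372 of [26]).
M = [2, 3, 6, 20, 168, 7581, 7828354]

# Proposition 1 (assumed form): M_n = sum_i C(n, i) * ndM_i, so the
# non-degenerate counts ndM_n can be peeled off iteratively.
ndM = []
for n, m in enumerate(M):
    ndM.append(m - sum(comb(n, i) * ndM[i] for i in range(n)))

# Equation (5): ndU_n = 2^n * ndM_n; then U_n is reassembled the same way.
ndU = [(1 << n) * ndM[n] for n in range(len(M))]
U = [sum(comb(n, i) * ndU[i] for i in range(n + 1)) for n in range(len(M))]
print(U)  # [2, 4, 14, 104, 2170, 230540, 499596550]
```

The computed ${\sf U}_{n}$ and ${\sf nd}\mbox{-}{\sf U}_{n}$ agree with Table 2 for $0\leq n\leq 6$.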
In a similar manner, using Proposition 1 and (6), the problem of counting
balanced unate functions can be reduced to the problem of counting balanced
monotone functions. Since we have obtained the values of ${\sf BM}_{i}$ for
$0\leq i\leq 7$, we obtain the values of ${\sf BU}_{n}$ for $0\leq n\leq 7$.
Using Proposition 1, this gives us the values of ${\sf nd}\mbox{-}{\sf
BU}_{n}$ for $0\leq n\leq 7$. The values of ${\sf BU}_{n}$ and ${\sf
nd}\mbox{-}{\sf BU}_{n}$ are shown in Table 3.
$n$ | ${\sf BU}_{n}$ | ${\sf nd}\mbox{-}{\sf BU}_{n}$
---|---|---
0 | 0 | 0
1 | 2 | 2
2 | 4 | 0
3 | 14 | 8
4 | 296 | 256
5 | 18202 | 16832
6 | 31392428 | 31287424
7 | 10393772159334 | 10393552784640
Table 3: Numbers of $n$-variable balanced unate and non-degenerate balanced
unate functions for $0\leq n\leq 7$.
## 4 Counting Equivalence Classes of Functions
In this section, we present the results on numbers of equivalence classes of
various subsets of monotone and unate functions.
### 4.1 Filtering Procedure
The basic problem of enumerating equivalence classes is the following. Let
$\mathcal{S}$ be a subset of the set of all $n$-variable Boolean functions.
Given $\mathcal{S}$, we wish to generate a set
$\mathcal{T}\subseteq\mathcal{S}$ of functions such that no two functions in
$\mathcal{T}$ are equivalent, and each function in $\mathcal{S}$ is equivalent
to some function in $\mathcal{T}$. The technique for such filtering is the
following.
Given a permutation $\pi$ of $\\{1,\ldots,n\\}$, we define a permutation
$\pi^{\star}$ of $\\{0,\ldots,2^{n}-1\\}$ as follows. For
$i\in\\{0,\ldots,2^{n}-1\\}$, let $(i_{1},\ldots,i_{n})$ be the $n$-bit binary
representation of $i$. Then $\pi^{\star}(i)=j$, where the $n$-bit binary
representation of $j$ is $(i_{\pi(1)},\ldots,i_{\pi(n)})$. Given an
$n$-variable function $f$, let $f^{\pi}$ denote the function such that for all
$(x_{1},\ldots,x_{n})\in\\{0,1\\}^{n}$,
$f^{\pi}(x_{1},\ldots,x_{n})=f(x_{\pi(1)},\ldots,x_{\pi(n)})$. Suppose
$f_{0}\cdots f_{2^{n}-1}$ is the bit string representation of $f$. Then the
bit string representation of $f^{\pi}$ is $f_{\pi^{\star}(0)}\cdots
f_{\pi^{\star}(2^{n}-1)}$.
Note that for each permutation $\pi$, the permutation $\pi^{\star}$ can be
pre-computed and stored as an array say $P[0,\ldots,2^{n}-1]$. Suppose the bit
string representation of $f$ is stored as an array $A[0,\ldots,2^{n}-1]$. Then
the bit string representation of $f^{\pi}$ is obtained as the array
$B[0,\ldots,2^{n}-1]$, where $B[i]=A[P[i]]$, for $i=0,\ldots,2^{n}-1$. So
obtaining $f^{\pi}$ becomes simply a matter of array reindexing.
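The reindexing step can be sketched as follows (illustrative Python; `perm_star` and `apply_perm` are our own names):

```python
def perm_star(pi, n):
    """Precompute P with P[i] = pi*(i): the n-bit representation
    (i_1, ..., i_n) of i (most significant bit first) is permuted into
    (i_pi(1), ..., i_pi(n)).  pi is 1-indexed, as in the text."""
    P = []
    for i in range(1 << n):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        j = 0
        for k in range(n):
            j = (j << 1) | bits[pi[k] - 1]
        P.append(j)
    return P

def apply_perm(f, P):
    """Bit string of f^pi by array reindexing: B[i] = A[P[i]]."""
    return ''.join(f[P[i]] for i in range(len(f)))

# Swapping the two variables of f(x_1, x_2) = x_1 (bit string 0011) yields x_2:
print(apply_perm("0011", perm_star([2, 1], 2)))  # 0101
```

Once $P$ is precomputed for each of the $n!$ permutations, each $f^{\pi}$ costs a single pass over the $2^{n}$ positions.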
Suppose the set of functions $\mathcal{S}$ to be filtered is given as a list
of string representations of the functions. We incrementally generate
$\mathcal{T}$ as follows. The first function in $\mathcal{S}$ is moved to
$\mathcal{T}$. We iterate over the other functions in $\mathcal{S}$. For a
function $f$ in $\mathcal{S}$, we generate $f^{\pi}$ for all permutations
$\pi$ of $\\{1,\ldots,n\\}$ using the technique described above. For each such
$f^{\pi}$, we check whether it is present in $\mathcal{T}$. If none of the
$f^{\pi}$’s are present in $\mathcal{T}$, then we append $f$ to $\mathcal{T}$.
At the end of the procedure, $\mathcal{T}$ is the desired set of functions.
The check for the presence of $f^{\pi}$ in $\mathcal{T}$ involves a search in
$\mathcal{T}$. This is done using binary search. To apply binary search on a
list, it is required that the list be sorted. To ensure this, we initially
ensure that $\mathcal{S}$ is sorted (either by generating it in a sorted
manner, or by sorting it after generation). This ensures that at any point of
time, $\mathcal{T}$ is also a sorted list, so that binary search can be
applied.
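The whole filtering procedure, with binary search via the standard `bisect` module, can be sketched as follows (a self-contained illustrative implementation, not the authors' code):

```python
from bisect import bisect_left
from itertools import permutations

def permute_bits(f, pi, n):
    """Bit string of f^pi: position i of the result reads f at the index
    whose n-bit representation is (i_pi(1), ..., i_pi(n))."""
    out = []
    for i in range(1 << n):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        j = 0
        for k in range(n):
            j = (j << 1) | bits[pi[k] - 1]
        out.append(f[j])
    return ''.join(out)

def representatives(S, n):
    """One representative per equivalence class of the functions in S."""
    perms = [list(p) for p in permutations(range(1, n + 1))]
    T = []
    for f in sorted(S):                  # process in sorted order ...
        found = False
        for pi in perms:
            g = permute_bits(f, pi, n)
            k = bisect_left(T, g)        # binary search for g in T
            if k < len(T) and T[k] == g:
                found = True
                break
        if not found:
            T.append(f)                  # ... so T itself stays sorted
    return T

# The 16 two-variable functions fall into 12 classes under variable permutation.
print(len(representatives([format(m, '04b') for m in range(16)], 2)))  # 12
```

Applying the same routine to the six 2-variable monotone functions yields the five classes counted by $[M]_{2}$ in entry A003182 of [26].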
### 4.2 Monotone
For $n\geq 0$, the numbers $[M]_{n}$ of equivalence classes of $n$-variable
monotone functions form entry A003182 of [26]. Using Proposition 9, it is
possible to find the numbers ${\sf nd}\mbox{-}[M]_{n}$ of equivalence classes
of $n$-variable monotone functions. These values are shown in Table 4.
$n$ | ${\sf nd}\mbox{-}[M]_{n}$
---|---
0 | 2
1 | 1
2 | 2
3 | 5
4 | 20
5 | 180
6 | 16143
7 | 489996795
8 | 1392195548399980210
9 | 789204635842035039135545297410259322
Table 4: Numbers of equivalence classes of $n$-variable non-degenerate
monotone functions for $0\leq n\leq 9$.
For $0\leq n\leq 6$, the numbers $[BM]_{n}$ of equivalence classes of
$n$-variable balanced monotone functions are obtained by applying the
filtering procedure described in Section 4.1 to the strategy for generating
balanced monotone functions described in Section 3.1. Next applying
Proposition 9, we obtained the numbers ${\sf nd}\mbox{-}[BM]_{n}$ of
equivalence classes of $n$-variable non-degenerate balanced monotone
functions. The values of $[BM]_{n}$ and ${\sf nd}\mbox{-}[BM]_{n}$ are shown
in Table 5.
We briefly consider the computation required to obtain $[BM]_{7}$. From Table
1, ${\sf BM}_{7}\allowbreak=\allowbreak
81203064840\allowbreak\approx\allowbreak 2^{36.24}$. For each 7-variable
balanced monotone function $f$, it is required to consider $7!=5040\approx
2^{12.29}$ functions $f^{\pi}$ for the $7!$ permutations $\pi$ of
$\\{1,\ldots,7\\}$. So a total of about $2^{48.53}$ functions would have to be
considered. Each of these functions requires a binary search on the partially
generated set $\mathcal{T}$, which performs about $\log_{2}\\#\mathcal{T}$
comparisons. So the total number of comparisons
required is somewhat above $2^{50}$. This amount of computation is not
presently feasible on the computing resources available to us.
$n$ | $[BM]_{n}$ | ${\sf nd}\mbox{-}[BM]_{n}$
---|---|---
0 | 0 | 0
1 | 1 | 1
2 | 1 | 0
3 | 2 | 1
4 | 4 | 2
5 | 16 | 12
6 | 951 | 935
Table 5: Numbers of equivalence classes of $n$-variable balanced monotone and
non-degenerate balanced monotone functions for $0\leq n\leq 6$.
### 4.3 Unate
In the case of counting functions, the problems of counting unate and balanced
unate functions reduce to the problems of counting monotone and balanced
monotone functions respectively. In the case of counting equivalence classes
of functions, such reduction is no longer possible (using the results that we
could prove). The reason is that unlike (5) which expresses the number of non-
degenerate unate functions in terms of the number of non-degenerate monotone
functions, the relation (12) only provides an upper bound on the number of
equivalence classes of non-degenerate unate functions in terms of the number
of equivalence classes of non-degenerate monotone functions.
In view of the above, to count equivalence classes of unate functions, we
resorted to enumerating all unate functions and then applying the filtering
procedure of Section 4.1 to obtain the number of equivalence classes.
The technique of generating all unate functions is based on Proposition 5.
Along with the string representation of a unate function, we also need to
record whether the function is increasing or decreasing in each of its
variables. This is recorded as the signature of the function. The special
cases of the two constant functions cause some complications in the definition
of the signature.
For an $n$-variable unate function $f$, we define its signature, denoted ${\sf
sig}(f)$, to be an element of
$\\{0,1\\}^{n}\cup\\{\mathfrak{z},\mathfrak{o}\\}$ in the following manner. If
$f$ is the constant function $1$, then ${\sf sig}(f)=\mathfrak{o}$, if $f$ is
the constant function $0$, then ${\sf sig}(f)=\mathfrak{z}$; otherwise ${\sf
sig}(f)$ is an $n$-bit string $\alpha$, where for $i=1,\ldots,n$,
$\alpha_{i}=1$ if $f$ is monotone increasing in the variable $x_{i}$, and
$\alpha_{i}=0$ if $f$ is monotone decreasing in the variable $x_{i}$. The
signature ${\sf sig}(f)$ encodes whether $f$ is monotone increasing or
monotone decreasing on each variable. The function $f$ is both monotone
increasing and monotone decreasing in all the variables if and only if it is a
constant function. The signatures of the constant functions are defined
appropriately.
For enumeration, the bit string representations of the functions are used. A
unate function and its signature are stored as a pair. Consider the following
recursive algorithm to generate all $n$-variable unate functions and their
signatures for $n\geq 1$. At the base step, i.e. for $n=1$, store the four
pairs of 1-variable unate functions and their signatures as
$(00,\mathfrak{z})$, $(01,1)$, $(10,0)$ and $(11,\mathfrak{o})$. Suppose that
for some $n\geq 1$, we have already generated all $n$-variable unate functions
and their signatures. The generation of all $(n+1)$-variable unate functions
and their signatures is done as follows. For any two function-signature pairs
$(g,{\sf sig}(g))$ and $(h,{\sf sig}(h))$, where $g$ and $h$ are $n$-variable
unate functions (which are not necessarily distinct), perform the following
checks:
1. 1.
Whether at least one of ${\sf sig}(g)$ or ${\sf sig}(h)$ is equal to either
$\mathfrak{z}$ or $\mathfrak{o}$ (i.e. whether at least one of $g$ or $h$ is a
constant function).
2. 2.
${\sf sig}(g)={\sf sig}(h)=\alpha$, and either $g\leq h$ or $h\leq g$ holds.
If either of the checks passes, then generate $f=g||h$, and determine ${\sf
sig}(f)$ as follows.
$\displaystyle{\sf sig}(f)$ $\displaystyle=$
$\displaystyle\left\\{\begin{array}[]{ll}\mathfrak{z}&\mbox{if }{\sf
sig}(g)={\sf sig}(h)=\mathfrak{z},\\\ \mathfrak{o}&\mbox{if }{\sf sig}(g)={\sf
sig}(h)=\mathfrak{o},\\\ 1^{n+1}&\mbox{if }{\sf sig}(g)=\mathfrak{z},\ {\sf
sig}(h)=\mathfrak{o},\\\ 0^{n+1}&\mbox{if }{\sf sig}(g)=\mathfrak{o},\ {\sf
sig}(h)=\mathfrak{z},\\\ 1||\alpha&\mbox{if }{\sf sig}(g)=\mathfrak{z},\ {\sf
sig}(h)=\alpha\in\\{0,1\\}^{n},\\\ 0||\alpha&\mbox{if }{\sf
sig}(g)=\mathfrak{o},\ {\sf sig}(h)=\alpha\in\\{0,1\\}^{n},\\\
1||\alpha&\mbox{if }{\sf sig}(g)=\alpha\in\\{0,1\\}^{n},\ {\sf
sig}(h)=\mathfrak{o},\\\ 0||\alpha&\mbox{if }{\sf
sig}(g)=\alpha\in\\{0,1\\}^{n},\ {\sf sig}(h)=\mathfrak{z},\\\
1||\alpha&\mbox{if }g\leq h,\ {\sf sig}(g)={\sf
sig}(h)=\alpha\in\\{0,1\\}^{n},\\\ 0||\alpha&\mbox{if }g\geq h,\ {\sf
sig}(g)={\sf sig}(h)=\alpha\in\\{0,1\\}^{n}.\end{array}\right.$ (26)
Store $(f,{\sf sig}(f))$. Proposition 5 assures us that this recursive
procedure generates all $(n+1)$-variable unate functions and their signatures.
To generate all $(n+1)$-variable unate functions, the above method requires
considering all pairs of $n$-variable unate functions, i.e. a total of
$\left({\sf U}_{n}\right)^{2}$ pairs. Applying the filtering strategy of Section 4.1,
we obtain the value of $[U]_{n}$. Next, using Proposition 9, we obtain the value
of ${\sf nd}\mbox{-}[U]_{n}$. We could perform this computation for $n\leq 6$.
The obtained values of $[U]_{n}$ and ${\sf nd}\mbox{-}[U]_{n}$ are shown in
Table 6. To generate all 7-variable unate functions using this approach requires
considering $\left({\sf U}_{6}\right)^{2}\approx 2^{57.8}$ pairs of functions. This
is not feasible on the computing facility available to us.
$n$ | $[U]_{n}$ | ${\sf nd}\mbox{-}[U]_{n}$
---|---|---
0 | 2 | 2
1 | 4 | 2
2 | 10 | 6
3 | 34 | 24
4 | 200 | 166
5 | 3466 | 3266
6 | 829774 | 826308
Table 6: Numbers of equivalence classes of $n$-variable unate and non-
degenerate unate functions for $0\leq n\leq 6$.
To obtain the set of $n$-variable balanced unate functions, after generating
the set of all $n$-variable unate functions, we remove the ones that are
unbalanced. Then to the resulting set, we apply the technique of Section 4.1
to obtain the number $[BU]_{n}$ of equivalence classes of $n$-variable
balanced unate functions. Subsequently, we apply Proposition 9 to obtain the
number ${\sf nd}\mbox{-}[BU]_{n}$ of equivalence classes of $n$-variable non-
degenerate balanced unate functions. The values of $[BU]_{n}$ and ${\sf
nd}\mbox{-}[BU]_{n}$ are shown in Table 7.
$n$ | $[BU]_{n}$ | ${\sf nd}\mbox{-}[BU]_{n}$
---|---|---
0 | 0 | 0
1 | 2 | 2
2 | 2 | 0
3 | 6 | 4
4 | 24 | 18
5 | 254 | 230
6 | 50172 | 49918
Table 7: Numbers of equivalence classes of $n$-variable balanced unate and
non-degenerate balanced unate functions for $0\leq n\leq 6$.
## 5 Concluding Remarks
We have obtained the numbers of $n$-variable unate and monotone functions
possessing a combination of some basic properties. We have also obtained the
numbers of equivalence classes of $n$-variable unate and monotone functions
possessing combinations of those same properties. Our work raises a
number of questions that may be pursued in the future. One such question is
whether the techniques for counting monotone functions from the recent works
[12, 11] can be applied to the problem of counting balanced monotone
functions. Another similar question is whether the techniques for counting the
number of equivalence classes of monotone functions from [21, 22] can be
applied to the problem of counting the number of equivalence classes of
balanced monotone functions. A third question is whether the techniques for
counting the number of equivalence classes of monotone functions from [21, 22]
can be applied to the problem of counting the number of equivalence classes of
unate functions. Positive answers to these questions will allow extending the
results that we could obtain up to $n=6$ or $n=7$ to $n=9$.
## Acknowledgement
We are grateful to an anonymous reviewer of the PhD thesis of the first author
for suggesting that the number of $n$-variable non-degenerate unate functions
is equal to $2^{n}$ times the number of $n$-variable non-degenerate monotone
functions.
## References
* [1] Valentin Bakoev. Generating and identification of monotone Boolean functions. In Mathematics and Education in Mathematics, Sofia, pages 226–232, 2003.
* [2] József Balogh, Dingding Dong, Bernard Lidickỳ, Nitya Mani, and Yufei Zhao. Nearly all $k$-SAT functions are unate. https://arxiv.org/pdf/2209.04894.pdf, 2022.
* [3] Charles R. Baugh. Generation of representative functions of the NPN equivalence classes of unate Boolean functions. IEEE Transactions on Computers, 21(12), 1972.
* [4] Joel Berman and Peter Köhler. Cardinalities of finite distributive lattices. Mitt. Math. Sem. Giessen, 121:103–124, 1976.
* [5] Rodolfo Betancourt. Derivation of minimum test sets for unate logical circuits. IEEE Transactions on Computers, 100(11):1264–1269, 1971.
* [6] Randolph Church. Numerical analysis of certain free distributive structures. Duke Mathematical Journal, 6(3):732 – 734, 1940.
* [7] Richard Dedekind. Über zerlegungen von zahlen durch ihre grössten gemeinsamen teiler. Gesammelte Werke, 2:103 – 148, 1897.
* [8] Aaron Feigelson and Lisa Hellerstein. The forbidden projections of unate functions. Discrete Applied Mathematics, 77(3):221–236, 1997.
* [9] Robert Fidytek, Andrzej W. Mostowski, Rafal Somla, and Andrzej Szepietowski. Algorithms counting monotone Boolean functions. Inf. Process. Lett., 79(5):203–209, 2001.
* [10] Yutaka Hata, Masaharu Yuhara, Fujio Miyawaki, and Kazuharu Yamato. On the complexity of enumerations for multiple-valued Kleenean functions and unate functions. In 1991 Proceedings of the Twenty-First International Symposium on Multiple-Valued Logic, pages 55–56. IEEE Computer Society, 1991.
* [11] Lennart Van Hirtum, Patrick De Causmaecker, Jens Goemaere, Tobias Kenter, Heinrich Riebler, Michael Lass, and Christian Plessl. A computation of $D(9)$ using FPGA supercomputing. https://arxiv.org/abs/2304.03039, 2023.
* [12] Christian Jäkel. A computation of the ninth Dedekind number. https://arxiv.org/abs/2304.00895, 2023.
* [13] Andrzej Kisielewicz. A solution of Dedekind’s problem on the number of isotone Boolean functions. Journal fur die Reine und Angewandte Mathematik, 1988(386):139 – 144, 1988.
* [14] Zvi Kohavi. Switching and finite automata theory. McGraw-Hill (New York, NY [ua]), 1970.
* [15] Aleksej D. Korshunov. Monotone Boolean functions. Russian Mathematical Surveys, 58(5):929 – 1001, 2003.
* [16] Karuna K Maitra. Cascaded switching networks of two-input flexible cells. IRE Transactions on Electronic Computers, (2):136–143, 1962.
* [17] William S. Matheson. Recognition of monotonic and unate cascade realizable functions using an informational model of switching circuits. IEEE Transactions on Computers, 100(10):1214–1219, 1971.
* [18] Robert McNaughton. Unate truth functions. IRE Transactions on Electronic Computers, EC-10(1):1–6, 1961.
* [19] Hiroki Morizumi. Sensitivity, block sensitivity, and certificate complexity of unate functions and read-once functions. In IFIP International Conference on Theoretical Computer Science, pages 104–110. Springer, 2014.
* [20] Amar Mukhopadhyay. Unate cellular logic. IEEE Transactions on Computers, 100(2):114–121, 1969.
* [21] Bartłomiej Pawelski. On the number of inequivalent monotone Boolean functions of 8 variables. https://arxiv.org/abs/2108.13997, 2021.
* [22] Bartłomiej Pawelski. On the number of inequivalent monotone Boolean functions of 9 variables. https://arxiv.org/abs/2305.06346, 2023.
* [23] Vijay Pitchumani and Satish S. Soman. Functional test generation based on unate function theory. IEEE Transactions on Computers, 37(6):756–760, 1988.
* [24] Fred Roberts and Barry Tesman. Applied Combinatorics, 2nd edition. Chapman and Hall/CRC, 2009.
* [25] Tsutomu Sasao and Kozo Kinoshita. On the number of fanout-free functions and unate cascade functions. IEEE Transactions on Computers, 28(1):66–72, 1979.
* [26] Neil J.A. Sloane. The On-Line Encyclopedia of Integer Sequences. https://oeis.org/, 1964.
* [27] Tamon Stephen and Timothy Yusun. Counting inequivalent monotone Boolean functions. Discrete Applied Mathematics, 167:15–24, 2014.
* [28] André Thayse and Jean P. Deschamps. Logic properties of unate and of symmetric discrete functions. In Proceedings of the sixth international symposium on Multiple-valued logic, pages 79–87, 1976.
* [29] Morgan Ward. Note on the order of free distributive lattices. Bull. Amer. Math. Soc., 52:423, 1946.
* [30] Doug Wiedemann. A computation of the eighth Dedekind number. Order, 8(1):5 – 6, 1991.
* [31] Uri Zwick. A $4n$ lower bound on the combinational complexity of certain symmetric Boolean functions over the basis of unate dyadic boolean functions. SIAM Journal on Computing, 20(3):499–505, 1991.
# On a variety of right-symmetric algebras
Nurlan Ismailov Astana IT University, Mangilik El avenue, 55/11, Business
center EXPO, block C1, Astana, 010000, Kazakhstan<EMAIL_ADDRESS>and
Ualbai Umirbaev Department of Mathematics, Wayne State University, Detroit,
MI 48202, USA; and Institute of Mathematics and Mathematical Modeling, Almaty,
050010, Kazakhstan<EMAIL_ADDRESS>
###### Abstract.
We construct a finite-dimensional metabelian right-symmetric algebra over an
arbitrary field that does not have a finite basis of identities.
Mathematics Subject Classification (2020): Primary 17D25, 17A50. Secondary
15A24, 16R10.
Key words: right-symmetric algebras, identities, Specht property.
## 1\. Introduction.
We say that a variety of algebras has the Specht property or is Spechtian if
any of its subvarieties has a finite basis of identities. In other words, a
variety of algebras is Spechtian if the set of all its subvarieties satisfies
the descending chain condition with respect to inclusion. In 1950 Specht [31]
formulated a problem on the Specht property for the variety of all associative
algebras over a field of characteristic zero.
The study of this problem was later extended to other varieties of algebras
over fields of arbitrary characteristic. In 1970 Vaughan-Lee [36] constructed an
example of a finite-dimensional Lie algebra over a field of characteristic
$p=2$ that does not have a finite basis of identities. In 1974 Drensky [8]
extended this result to fields of any positive characteristic $p>0$. In 1978
Medvedev [25] showed that varieties of metabelian Malcev, Jordan, alternative,
and $(-1,1)$ algebras are Spechtian. In 1984 Umirbaev [33] proved that the
variety of metabelian binary Lie algebras over a field of characteristic $\neq
3$ has the Specht property. In 1980 Medvedev [26] also constructed an example
of a variety of solvable alternative algebras over a field of characteristic
$2$ with an infinite basis of identities. In 1985 Umirbaev [34] proved that
the varieties of solvable alternative algebras over a field of characteristic
$\neq 2,3$ have the Specht property. Pchelintsev [27] constructed an almost
Spechtian variety of alternative algebras over a field of characteristic 3.
The Specht property of so-called bicommutative algebras is proven in [9].
In 1976 Belkin [1] proved that the variety of metabelian right-alternative
algebras does not have the Specht property. In 1978 L’vov [24] constructed a
six-dimensional nonassociative algebra over an arbitrary field satisfying the
identity $x(yz)=0$ with an infinite basis of identities. In 1986 Isaev [15]
adapted L’vov’s methods for right-alternative algebras and constructed a
finite-dimensional metabelian right-alternative algebra over an arbitrary
field with an infinite basis of identities. In 2008 Kuz’min [22] gave a
sufficient condition for the varieties of metabelian right-alternative
algebras over a field of characteristic $\neq 2$ to be Spechtian.
In 1988 Kemer [16, 17] positively solved the famous Specht problem [31] and
proved that every variety of associative algebras over a field of
characteristic zero has a finite basis of identities. Later the Specht problem
was negatively solved for the variety of associative algebras over fields of
positive characteristic $p>0$ [2, 13, 28]. It is also known that the varieties
of Lie algebras generated by a finite-dimensional algebra over a field of
characteristic zero have the Specht property [14, 18]. Despite the efforts of
many specialists in this field, the question of whether the variety of Lie
algebras over a field of characteristic zero has the Specht property remains
open.
This paper is devoted to the study of the Specht property for the variety of
right-symmetric algebras. Recall that an algebra $A$ over a field $\mathbb{F}$
is called right-symmetric if it satisfies the identity
(1) $(a,b,c)=(a,c,b),$
where $(a,b,c)=(ab)c-a(bc)$ is the associator of $a,b,c\in A$.
Right-symmetric algebras are Lie admissible, that is, any right-symmetric
algebra becomes a Lie algebra under the commutator $[x,y]=xy-yx$. Very
often right-symmetric (or left-symmetric) algebras are called pre-Lie algebras
and play an important role in the theory of operads [23]. Right-symmetric
algebras arise in many different areas of mathematics and physics [3].
In 1994 Segal [30] constructed a basis of free right-symmetric algebras.
Chapoton and Livernet [5] and, independently, Löfwall and Dzhumadil’daev [11]
gave other bases of free right-symmetric algebras in terms of rooted trees.
The identities of right-symmetric algebras were studied by Filippov [12], and
he proved that any right-nil right-symmetric algebra over a field of
characteristic zero is right nilpotent. An analogue of the Poincaré–Birkhoff–Witt (PBW) theorem
for the universal (multiplicative) enveloping algebra of a right-symmetric
algebra was given in [19]. The Freiheitssatz and the decidability of the word
problem for one-relator right-symmetric algebras were proven in [20].
Recently, Dotsenko and Umirbaev [7] determined that the variety of right-
symmetric algebras over a field of characteristic zero is Nielsen–Schreier,
that is, every subalgebra of a free right-symmetric algebra is free.
A right-symmetric algebra with an additional identity
$a(bc)=b(ac)$
is called a Novikov algebra. The class of Novikov algebras is an important and
well-studied subclass of right-symmetric algebras. Recently there has been
great progress in the study of their identities, solvability, and nilpotency
[38, 12, 10, 29, 35, 32]. In 2022 Dotsenko, Ismailov, and Umirbaev [6] proved that (a)
every Novikov algebra satisfying a nontrivial polynomial identity over a field
of characteristic zero is right-associator nilpotent and (b) the variety of
Novikov algebras over a field of characteristic zero has the Specht property.
In this paper, we continue the study of the identities of right-symmetric
algebras. Namely, using the constructions and methods of L’vov [24] and Isaev
[15], we construct a finite-dimensional metabelian right-symmetric algebra
over an arbitrary field that does not have a finite basis of identities. In
fact, our algebra belongs to the variety of algebras $\mathcal{R}$ defined by
the identities
(2) $[[a,b],c]=0,$ (3) $(ab)a=0,$
and
(4) $(ab)(cd)=0.$
We determine some identities and operator identities of the variety
$\mathcal{R}$ in Section 2. In Section 3, a series of algebras $P_{n}$ of this
variety is constructed. A linear basis of free algebras of the variety
$\mathcal{R}$ is constructed in Section 4. Section 5 is devoted to the study
of the relationships between the polynomial identities and the operator
identities of the algebras $P_{n}$. The main result of the paper is given in
Section 6 and says that the algebra $P_{2}$ does not have a finite basis of
identities.
## 2\. A variety of right-symmetric algebras
Let $\mathbb{F}$ be an arbitrary fixed field. In what follows, all vector
spaces are considered over $\mathbb{F}$. As above, $\mathcal{R}$ denotes the
variety of algebras defined by the identities (2), (3), and (4).
###### Lemma 2.1.
Every algebra of the variety $\mathcal{R}$ is right-symmetric and right
nilpotent of index $4$.
###### Proof.
The linearization of (3) gives
(5) $(ab)c+(cb)a=0.$
This identity and (4) imply that
(6) $((ab)c)d=-(dc)(ab)=0.$
Using (5) and (2), one can also get
$(a,b,c)-(a,c,b)=(ab)c-a(bc)-(ac)b+a(cb)$ $=-(cb)a-a(bc)+(bc)a+a(cb)$
$=[b,c]a-a[b,c]=[[b,c],a]=0,$
i.e., $\mathcal{R}$ is a variety of right-symmetric algebras. ∎
Let $A$ be an arbitrary algebra of the variety $\mathcal{R}$. Recall that for
any $x\in A$ the operators of right multiplication $R_{x}$ and left
multiplication $L_{x}$ on $A$ are defined by
$aR_{x}=ax\quad\text{and}\quad aL_{x}=xa,$
respectively. Set also $V_{x,y}=L_{x}R_{y}$.
###### Lemma 2.2.
(7) $V_{x,x}=0,\quad\quad V_{x,y}=-V_{y,x}.$ (8)
$xR_{y}L_{z}L_{t}=yV_{x,z}L_{t}-xR_{y}V_{t,z}.$ (9)
$xR_{y}L_{z}=xV_{z,y}+yR_{x}L_{z}-yV_{z,x}.$ (10)
$xR_{y}V_{z,t}=yR_{x}V_{z,t}.$ (11) $V_{x,y}R_{z}=0.$ (12)
$V_{x,y}(L_{z}L_{t}+V_{t,z})=0.$
###### Proof.
The identities (3) and (5) immediately imply (7). By (1) and (4) we get
$xR_{y}L_{z}L_{t}=t(z(xy))$
$=(tz)(xy)+t((xy)z)-(t(xy))z=yV_{x,z}L_{t}-xR_{y}V_{t,z}.$
The identities (2) and (5) imply (9): by (2) we have $z(xy)-z(yx)=(xy)z-(yx)z$, and applying (5) to the right-hand side gives $z(xy)=(zx)y+z(yx)-(zy)x$, which is exactly (9).
Then (1) and (6) give that
$xR_{y}V_{z,t}=(z(xy))t$ $=((zx)y)t+(z(yx))t-((zy)x)t=yR_{x}V_{z,t}.$
By (6) we obtain $uV_{x,y}R_{z}=((xu)y)z=0$, and, therefore, $V_{x,y}R_{z}=0$, which is (11).
Set $v=uV_{x,y}$. Then (6), (1), and (4) imply that
$vL_{z}L_{t}=t(zv)-t(vz)=(tz)v-(tv)z=-vV_{t,z}.$
∎
## 3\. Algebras $P_{n}$
For each natural $n$ we define the algebra $P_{n}$ with a linear basis
$a_{ij},\,b_{ij},\,c_{i},\,d_{ij},\,e_{ij},$
where $i,j\in\\{1,2,\ldots,n\\}$, and with the product defined by
$a_{ij}c_{i}=d_{ij},\quad b_{ij}c_{i}=e_{ij},$
$a_{ij}e_{ij}=e_{ij}a_{ij}=-b_{ij}d_{ij}=-d_{ij}b_{ij}=c_{j},$
and all other products of basis elements are zero.
Set
$A_{n}=\mathrm{Span}\\{a_{ij},\,b_{ij}\,|\,1\leq i,j\leq n\\}$
and
$D_{n}=\mathrm{Span}\\{c_{i},\,d_{ij},\,e_{ij}\,|\,1\leq i,j\leq n\\},$
where $\mathrm{Span}\,X$ denotes the linear span of $X$. Then $A_{n}$ is a
subalgebra of $P_{n}$ and $D_{n}$ is an ideal of $P_{n}$. Moreover, $P_{n}$ is
a direct sum of the vector spaces $A_{n}$ and $D_{n}$. Set also
$C_{n}=\mathrm{Span}\\{c_{i}\,|\,1\leq i\leq n\\},\ \
\overline{C}_{n}=\mathrm{Span}\\{d_{ij},\,e_{ij}\,|\,1\leq i,j\leq n\\}.$
Then
(13) $P_{n}^{2}=D_{n},\quad A_{n}^{2}=D_{n}^{2}=0,\quad
D_{n}=C_{n}\oplus\overline{C}_{n},$ $D_{n}P_{n}=C_{n},\quad
P_{n}A_{n}=C_{n},\quad P_{n}C_{n}=\overline{C}_{n},\quad C_{n}P_{n}=0,\quad
P_{n}\overline{C}_{n}=C_{n}.$
###### Lemma 3.1.
The algebra $P_{n}$ belongs to the variety $\mathcal{R}$.
###### Proof.
Obviously the space of commutators $[P_{n},P_{n}]$ coincides with
$\overline{C}_{n}$, which is in the center of $P_{n}$, i.e., (2) holds.
In order to verify the identity (3), it is sufficient to check the identities
(3) and (5) for all elements of the basis of $P_{n}$. Let us begin with (3).
Since $A_{n}^{2}=D_{n}^{2}=(D_{n}A_{n})D_{n}=0$, we may assume that $a\in
A_{n}$ and $b\in D_{n}$. Consider all nonzero products of the space
$A_{n}D_{n}$. If $a=a_{ij}$ and $b=c_{i}$, then
$(a_{ij}c_{i})a_{ij}=d_{ij}a_{ij}=0.$
If $a=a_{ij}$ and $b=e_{ij}$, then
$(a_{ij}e_{ij})a_{ij}=c_{j}a_{ij}=0.$
The other cases can be verified similarly.
Now let’s verify (5). Since $(D_{n}P_{n})P_{n}=0$, the product $(ab)c$ is
nonzero only if $a=a_{ij},\,b=c_{i},\,c=b_{ij}$ or
$a=b_{ij},\,b=c_{i},\,c=a_{ij}$. Thus,
$(ab)c+(cb)a=-c_{j}+c_{j}=0.$
The identity (4) follows from the relations $P^{2}_{n}=D_{n}$ and
$D^{2}_{n}=0$. ∎
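The definition of $P_{n}$ and the claims of Lemma 3.1 can also be checked mechanically for small $n$. The following Python sketch (an illustration only, not part of the proof; it computes over the integers, hence verifies the identities over $\mathbb{Q}$) encodes the structure constants of $P_{2}$ and checks (3) on basis pairs together with the multilinear identities (5), (2), and (4) on all basis tuples, which, as in the proof of Lemma 3.1, suffices.

```python
from itertools import product

n = 2
idx = range(1, n + 1)
# Basis of P_n: a_ij, b_ij, c_i, d_ij, e_ij (Section 3)
basis = ([(s, i, j) for s in "ab" for i in idx for j in idx]
         + [("c", i) for i in idx]
         + [(s, i, j) for s in "de" for i in idx for j in idx])

# Nonzero products of basis elements; every product not listed is zero.
table = {}
for i in idx:
    for j in idx:
        table[("a", i, j), ("c", i)] = {("d", i, j): 1}   # a_ij c_i = d_ij
        table[("b", i, j), ("c", i)] = {("e", i, j): 1}   # b_ij c_i = e_ij
        table[("a", i, j), ("e", i, j)] = {("c", j): 1}   # a_ij e_ij = c_j
        table[("e", i, j), ("a", i, j)] = {("c", j): 1}   # e_ij a_ij = c_j
        table[("b", i, j), ("d", i, j)] = {("c", j): -1}  # b_ij d_ij = -c_j
        table[("d", i, j), ("b", i, j)] = {("c", j): -1}  # d_ij b_ij = -c_j

def mul(x, y):
    """Bilinear product of elements given as dicts basis -> coefficient."""
    out = {}
    for u, cu in x.items():
        for v, cv in y.items():
            for w, cw in table.get((u, v), {}).items():
                out[w] = out.get(w, 0) + cu * cv * cw
    return {w: c for w, c in out.items() if c}

def lin(x, y, s):
    """Linear combination x + s*y."""
    out = dict(x)
    for w, c in y.items():
        out[w] = out.get(w, 0) + s * c
    return {w: c for w, c in out.items() if c}

E = {u: {u: 1} for u in basis}  # basis elements as dicts

for a, b in product(basis, repeat=2):
    assert mul(mul(E[a], E[b]), E[a]) == {}             # (3): (ab)a = 0
for a, b, c in product(basis, repeat=3):
    ab, cb = mul(E[a], E[b]), mul(E[c], E[b])
    assert lin(mul(ab, E[c]), mul(cb, E[a]), 1) == {}   # (5): (ab)c + (cb)a = 0
    k = lin(mul(E[a], E[b]), mul(E[b], E[a]), -1)       # [a,b]
    assert lin(mul(k, E[c]), mul(E[c], k), -1) == {}    # (2): [[a,b],c] = 0
for a, b, c, d in product(basis, repeat=4):
    assert mul(mul(E[a], E[b]), mul(E[c], E[d])) == {}  # (4): (ab)(cd) = 0
print("P_2 satisfies (2), (3), (5), (4) on all basis tuples")
```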
###### Lemma 3.2.
For all $x,y\in P_{n}$, $d\in D_{n}$ we have
$(A_{n}+\overline{C}_{n})V_{x,y}=0,\quad V_{d,y}=V_{y,d}=0.$
###### Proof.
The relations (13) give that $(P_{n}A_{n})P_{n}\subseteq C_{n}P_{n}=0$ and
$(P_{n}\overline{C}_{n})P_{n}\subseteq C_{n}P_{n}=0$, i.e., the first equality
of the lemma holds. Similarly, by noting that $(D_{n}P_{n})P_{n}\subseteq
C_{n}P_{n}=0$ and $(P_{n}P_{n})D_{n}\subseteq D_{n}D_{n}=0$, we can deduce the
second equality of the lemma. ∎
Denote by $\mathrm{Ann}_{l}P_{n}$ the space of left annihilators of $P_{n}$.
###### Lemma 3.3.
$\mathrm{Ann}_{l}P_{n}=C_{n}$.
###### Proof.
Assume that $x\in\mathrm{Ann}_{l}P_{n}$ and
express it as
$x=\sum_{i,j}(\alpha_{ij}a_{ij}+\beta_{ij}b_{ij}+\gamma_{ij}d_{ij}+\delta_{ij}e_{ij}+\epsilon_{i}c_{i}),$
where
$\alpha_{ij},\beta_{ij},\gamma_{ij},\delta_{ij},\epsilon_{i}\in\mathbb{F}$.
Then we have
$xc_{i}=\sum_{j}(\alpha_{ij}d_{ij}+\beta_{ij}e_{ij}),\quad
xa_{ij}=\delta_{ij}c_{j},\quad xb_{ij}=-\gamma_{ij}c_{j}.$
From these equations, it can be deduced that
$\alpha_{ij}=\beta_{ij}=\gamma_{ij}=\delta_{ij}=0$. Therefore, we can conclude
that $x=\sum_{i}\epsilon_{i}c_{i}$. Consequently,
$\mathrm{Ann}_{l}P_{n}=C_{n}$. ∎
## 4\. Structure of free algebras of $\mathcal{R}$
Let $F(X)$ be the free algebra of the variety $\mathcal{R}$ generated by an
infinite countable set $X=\\{x_{1},x_{2},\ldots,x_{n},\ldots\\}$.
###### Proposition 4.1.
The set of elements $\mathcal{B}$ of $F(X)$ of the forms
$x_{i}$, $x_{i}\hat{R}_{x_{j}}L_{x_{s}}$,
$x_{i}\hat{R}_{x_{j}}V_{x_{p_{1}},x_{q_{1}}}\cdots
V_{x_{p_{k}},x_{q_{k}}}\hat{L}_{x_{s}},$
where $i<j$ and $p_{r}<q_{r}$ for all $r=1,2,\ldots,k$, $k\geq 1$, and
$\hat{T}_{x}$ denotes that the operator $T_{x}$ might not occur, is a basis of
$F(X)$.
###### Proof.
In order to show that $\mathcal{B}$ linearly spans $F(X)$ it is sufficient to
verify that, for any $v\in\mathcal{B}$, the elements $vR_{x_{i}}$ and
$vL_{x_{i}}$ belong to the linear span of $\mathcal{B}$. This is easy to do
using the identities (1), (4), and Lemma 2.2. For example, let
$v=x_{i}\hat{R}_{x_{j}}V_{x_{p_{1}},x_{q_{1}}}\cdots
V_{x_{p_{k}},x_{q_{k}}}L_{x_{s}}.$
Then
$vR_{x_{r}}=x_{i}\hat{R}_{x_{j}}V_{x_{p_{1}},x_{q_{1}}}\cdots
V_{x_{p_{k}},x_{q_{k}}}L_{x_{s}}R_{x_{r}}=x_{i}\hat{R}_{x_{j}}V_{x_{p_{1}},x_{q_{1}}}\cdots
V_{x_{p_{k}},x_{q_{k}}}V_{x_{s},x_{r}}.$
By (12), we get
$vL_{x_{r}}=x_{i}\hat{R}_{x_{j}}V_{x_{p_{1}},x_{q_{1}}}\cdots
V_{x_{p_{k}},x_{q_{k}}}L_{x_{s}}L_{x_{r}}=-x_{i}\hat{R}_{x_{j}}V_{x_{p_{1}},x_{q_{1}}}\cdots
V_{x_{p_{k}},x_{q_{k}}}V_{x_{r},x_{s}}.$
Applying (7) we can express $vR_{x_{r}}$ and $vL_{x_{r}}$ as a linear
combination of elements of $\mathcal{B}$.
It remains to prove the linear independence of elements of $\mathcal{B}$.
Suppose that $f=f(x_{1},x_{2},\ldots,x_{n})\in F(X)$ is a nontrivial linear
combination of elements of $\mathcal{B}$. Suppose that $v\in\mathcal{B}$ and
$\deg_{x_{i}}(v)=k$. Let’s write $v=v(x_{i},\ldots,x_{i})$ in order to distinguish
the occurrences of $x_{i}$ in different places. To linearize $v$ in $x_{i}$ we
use new variables $y_{1},\ldots,y_{k}\in X$ and, after renumeration, we can
assume that $y_{r}<x_{j}$ if $i<j$ and $x_{j}<y_{r}$ if $j<i$ for all $1\leq
r\leq k$. Notice that every word $v(y_{\sigma(1)},\ldots,y_{\sigma(k)})$,
where $\sigma\in S_{k}$ and $S_{k}$ is the symmetric group on $k$ symbols, is
an element of $\mathcal{B}$. Then the full linearization of $v$ in $x_{i}$ is
a linear combination of basis elements
$v(y_{\sigma(1)},\ldots,y_{\sigma(k)})$. Therefore, by linearizing a
nontrivial element $f$, we obtain a nontrivial element that is a linear
combination of multilinear elements from $\mathcal{B}$. Substituting zeroes
instead of some variables, if necessary, we can make $f$ linear in each
variable. Therefore, we can assume that $f$ is a multilinear nontrivial
identity in the variables $x_{1},\ldots,x_{n}$. Let
$f=\sum_{i}\alpha_{i}u_{i},$
where $\alpha_{i}\in\mathbb{F}$ and $u_{i}\in\mathcal{B}$. Suppose, for
example, that
$u_{1}=x_{i}R_{x_{j}}V_{x_{p_{1}},x_{q_{1}}}\cdots
V_{x_{p_{k}},x_{q_{k}}}L_{x_{s}}.$
Set $x_{i}=d_{1,2}$, $x_{j}=-b_{1,2}$, $x_{p_{r}}=a_{r+1,r+2}$,
$x_{q_{r}}=-b_{r+1,r+2}$ for all $r=1,2,\ldots,k$, $x_{s}=a_{k+2,k+3}$. We
have $c_{i}V_{a_{ij},-b_{ij}}=c_{j}$ for all $i,j$. Then the value of $u_{1}$
under this substitution is $d_{k+2,k+3}$ and the value of any other $u_{i}$ is
$0$. Consequently, the value of $f$ is $\alpha_{1}d_{k+2,k+3}\neq 0$. Thus,
$f$ is not an identity for $\mathcal{R}$.
If $L_{x_{s}}$ does not appear in $u_{1}$, then we perform the same
substitutions for the variables. If $R_{x_{j}}$ does not appear in $u_{1}$,
then we simply set $x_{i}=c_{2}$ and perform the same substitutions for the
rest of the variables as described above. In both cases the value of $f$ is
nonzero. This completes our proof. ∎
Let $M=M(F(X))$ be the multiplication algebra of the algebra $F(X)$. Denote by
$E_{0}$ the subalgebra (without identity) of $M$ generated by the operators
$V_{x_{i},x_{j}}$ with $i<j$ for all $i,j=1,2,\ldots$. Set also
$E_{1}=\sum_{j\geq 1}E_{0}L_{x_{j}},\quad E_{2}=\sum_{i\geq
1}R_{x_{i}}E_{0},\quad E_{3}=\sum_{i,j\geq 1}R_{x_{i}}E_{0}L_{x_{j}},$
and
$R_{k}=\sum_{i\geq 1}x_{i}E_{k},\quad\mbox{for }k=0,1,2,3.$
According to Proposition 4.1, the space $F(X)$ is the direct sum of the
subspaces $R_{k}$ and the linear span of the elements of $\mathcal{B}$ of the
first two forms, i.e., the elements $x_{i}$ and $x_{i}\hat{R}_{x_{j}}L_{x_{s}}$.
###### Lemma 4.2.
An identity $zf(x_{1},\ldots,x_{m})=0$, where $f\in E_{0}$, is a consequence
of a system of identities
(14) $tg_{j}(x_{1},\ldots,x_{l})=0,\quad g_{j}\in E_{0},\quad
j\in\mathcal{J},$
in the variety $\mathcal{R}$, where $\mathcal{J}$ is any set of indices, if
and only if the operator $f(x_{1},\ldots,x_{m})$ belongs to the ideal of the
associative algebra $E_{0}$ generated by the set $G$ of all operators
$\varphi(g_{j})$, where $\varphi$ runs over the set of all linear
endomorphisms $\varphi:X\rightarrow\mathbb{F}X=\sum_{i\geq 1}\mathbb{F}x_{i}$
and $j\in\mathcal{J}$.
###### Proof.
Suppose that $f$ belongs to the ideal of $E_{0}$ generated by $G$. Then
$f=\sum_{r=1}^{t}u_{r}g_{j_{r}}^{\varphi_{r}}v_{r},$
for some linear endomorphisms $\varphi_{r}$ and $u_{r},v_{r}\in E_{0}$.
Therefore,
$zf=\sum_{r=1}^{t}(zu_{r})g_{j_{r}}^{\varphi_{r}}v_{r}$
and $zf=0$ is a consequence of the system of identities (14).
Let’s describe all the consequences of the identities (14). Let
$\varphi:F(X)\to F(X)$ be an arbitrary endomorphism and set
$\varphi(x_{i})=y_{i}+h_{i}$, where $y_{i}\in\mathbb{F}X$ and $h_{i}\in
F(X)^{2}$ for all $i$. Since $g_{j}\in E_{0}$, using (4) and (6), we get
$t\varphi(g_{j})=tg_{j}(y_{1},\ldots,y_{l})=0.$
Thus, a general form of consequences of the identities (14) can be expressed
as
$\sum_{r=1}^{t}u_{r}g_{j_{r}}^{\varphi_{r}}v_{r},$
where $u_{r}\in F(X)$, $v_{r}\in M(F(X))$, and $\varphi_{r}$ are linear
endomorphisms. We know that $g_{j_{r}}^{\varphi_{r}}\in E_{0}$. We also claim
that $u_{r}$ and $v_{r}$ can be represented in the forms
$x_{i}\hat{R}_{x_{j}}V_{x_{p_{1}},x_{q_{1}}}\cdots
V_{x_{p_{k}},x_{q_{k}}},\quad\quad
V_{x_{p^{\prime}_{1}},x_{q^{\prime}_{1}}}\cdots V_{x_{p^{\prime}_{k}},x_{q^{\prime}_{k}}}\hat{L}_{x_{s}},$
respectively, where $i<j$, $p_{l}<q_{l}$, $p^{\prime}_{l}<q^{\prime}_{l}$ and
$k=0,1,\ldots$.
Suppose that $u_{r}$ is a basis element that ends with $L_{x_{s}}$. Then, by
(11) and (12), we can derive that
$V_{x_{i},x_{j}}L_{x_{s}}V_{x_{k},x_{t}}=V_{x_{i},x_{j}}L_{x_{s}}L_{x_{k}}R_{x_{t}}=-V_{x_{i},x_{j}}V_{x_{k},x_{s}}R_{x_{t}}=0.$
Consequently, we have $V_{x_{i},x_{j}}L_{x_{s}}E_{0}=0$.
If $u_{r}=x_{i}R_{x_{j}}L_{x_{k}}$, then by (8) and (11) we get
$u_{r}V_{x_{s},x_{t}}=x_{i}R_{x_{j}}L_{x_{k}}L_{x_{s}}R_{x_{t}}=x_{j}V_{x_{i},x_{k}}L_{x_{s}}R_{x_{t}}-x_{i}R_{x_{j}}V_{x_{s},x_{k}}R_{x_{t}}=x_{j}V_{x_{i},x_{k}}V_{x_{s},x_{t}}.$
So, we can conclude that $u_{r}$ has the claimed form.
Now, let’s consider the case when $v_{r}$ is a basis element that starts with
$R_{y}$. According to (11), we have $E_{0}R_{y}=0$. If $v_{r}$ starts with
$L_{x_{i}}L_{x_{j}}$, then by using (12), we find
$V_{x_{k},x_{s}}L_{x_{i}}L_{x_{j}}=-V_{x_{k},x_{s}}V_{x_{j},x_{i}}.$
Hence, $v_{r}$ also has the claimed form.
If $zf=0$ is a consequence of the identities (14), then we get an equality of
the form
$x_{m+1}f(x_{1},\ldots,x_{m})=\sum_{r=1}^{t}\lambda_{r}x_{i_{r}}w_{r}g_{j_{r}}^{\varphi_{r}}v_{r},$
where $x_{i_{r}}w_{r}=u_{r}$, $w_{r}\in E_{0}+E_{2}$ and $v_{r}\in
E_{0}+E_{1}$. Notice that every element
$x_{i_{r}}w_{r}g_{j_{r}}^{\varphi_{r}}v_{r}$ belongs to $\mathcal{B}$.
Consequently, we may assume that $x_{i_{r}}=x_{m+1}$, $w_{r},v_{r}\in E_{0}$,
and
$f(x_{1},\ldots,x_{m})=\sum_{r=1}^{t}\lambda_{r}w_{r}g_{j_{r}}^{\varphi_{r}}v_{r}.$
∎
## 5\. Identities of $P_{n}$.
In this section, we study the connections between the identities and the
operator identities of $P_{n}$ for $n\geq 2$.
###### Lemma 5.1.
If $f=f(x_{1},\ldots,x_{m})\in F(X)$ and $f=0$ is an identity of $P_{n}$ for
$n\geq 2$, then
(15) $f=f_{0}+f_{1}+f_{2}+f_{3}\in F(X),\quad f_{k}\in R_{k},$
and $f_{k}=0$ is an identity of $P_{n}$ for all $k=0,1,2,3$.
###### Proof.
Let
$f=\sum_{i=1}^{m}\lambda_{i}x_{i}+\sum_{i,j=1}^{m}\lambda_{ij}x_{i}x_{j}+\sum_{i,j,k=1,i<j}^{m}\lambda_{ijk}x_{i}R_{x_{j}}L_{x_{k}}+f^{\prime},$
where $f^{\prime}$ is a linear combination of elements from $\mathcal{B}$ of
degree $\geq 4$.
We first show that $\lambda_{i}=\lambda_{ij}=\lambda_{ijk}=0$ for all
$i,j,k=1,\ldots,m$. For any fixed $i$ the substitution $x_{i}=c_{1}$ and
$x_{j}=0$ for all $j\neq i$ gives that $\lambda_{i}c_{1}=0$, which implies
$\lambda_{i}=0$.
If $i\neq j$, then the substitution $x_{i}=a_{11}$, $x_{j}=c_{1}$, and
$x_{k}=0$ for all $k\neq i,j$, makes the value of $f$ equal to
$\lambda_{ij}d_{11}=0$. We get the same value if $i=j$ under the substitution
$x_{i}=x_{j}=a_{11}+c_{1}$ and $x_{k}=0$ for all $k\neq i,j$. This gives
$\lambda_{ij}=0$ in both cases.
Assume that $i<j$ and $k<j$. If $i\neq k$, then the substitution $x_{i}=b_{11}$,
$x_{j}=d_{11}$, $x_{k}=a_{12}$, and $x_{t}=0$ for all $t\neq i,j,k$, makes the
value of $f$ equal to $-\lambda_{ijk}d_{12}$. This gives that
$\lambda_{ijk}=0$. If $i=k$, then the substitution $x_{i}=b_{11}$,
$x_{j}=d_{11}$, and $x_{t}=0$ for all $t\neq i,j$, gives that
$-\lambda_{iji}e_{11}=0$ and $\lambda_{iji}=0$. If $i<j=k$, then the
substitution $x_{i}=d_{11}$, $x_{j}=b_{11}$, and $x_{t}=0$ for all $t\neq
i,j$, gives that $-\lambda_{ijj}e_{11}=0$ and $\lambda_{ijj}=0$. Finally, if
$i<j<k$, then the substitution $x_{i}=d_{11}$, $x_{j}=x_{k}=b_{11}$, and
$x_{t}=0$ for all $t\neq i,j,k$, gives that
$-\lambda_{ijk}e_{11}-\lambda_{ikj}e_{11}=0$. Since $\lambda_{ikj}=0$ by the
first case, this gives $\lambda_{ijk}=0$.
Thus, $f$ is a linear combination of elements of $\mathcal{B}$ of degree $\geq
4$. Suppose that $f$ is written as in (15). Taking into account the relations
$D_{n}P_{n}\subseteq C_{n}$ and $P_{n}C_{n}\subseteq\overline{C}_{n}$ it can
be observed that the images of $F_{0}=f_{0}+f_{2}$ and $F_{1}=f_{1}+f_{3}$
belong to $C_{n}$ and $\overline{C}_{n}$, respectively. Therefore, if $f=0$ is
an identity of $P_{n}$, then $F_{0}=0$ and $F_{1}=0$ are also identities of
$P_{n}$.
Suppose that
$f_{k}(x_{1},\ldots,x_{m})=\sum_{i=1}^{m}x_{i}g_{i}^{(k)}(x_{1},\ldots,x_{m}),$
where $g_{i}^{(k)}\in E_{k}$ and $k=0,1,2,3$. Let $p_{1},\ldots,p_{m}\in
P_{n}$ with $p_{s}=v_{s}+\overline{v}_{s}+a_{s}$, where $v_{s}\in C_{n}$,
$\overline{v}_{s}\in\overline{C}_{n}$, $a_{s}\in A_{n}$. By Lemma 3.2 we can
obtain that
$p_{i}V_{p_{j},p_{k}}=v_{i}V_{p_{j},p_{k}}=v_{i}V_{v_{j}+\overline{v}_{j}+a_{j},v_{k}+\overline{v}_{k}}+v_{i}V_{v_{j}+\overline{v}_{j},a_{k}}+v_{i}V_{a_{j},a_{k}}=v_{i}V_{a_{j},a_{k}}.$
Then
$f_{0}(p_{1},\ldots,p_{m})=\sum_{i=1}^{m}v_{i}g_{i}^{(0)}(a_{1},\ldots,a_{m})=f_{0}(v_{1}+a_{1},\ldots,v_{m}+a_{m}).$
It is easy to see that
$(A_{n}+C_{n})R_{a_{i}+v_{i}}V_{a_{j}+v_{j},a_{s}+v_{s}}=0$ for all
$a_{i},a_{j},a_{s}\in A_{n}$ and $v_{i},v_{j},v_{s}\in C_{n}$. It follows that
$f_{2}(v_{1}+a_{1},\ldots,v_{m}+a_{m})=0.$
Thus,
$f_{0}(p_{1},\ldots,p_{m})$
$=f_{0}(v_{1}+a_{1},\ldots,v_{m}+a_{m})+f_{2}(v_{1}+a_{1},\ldots,v_{m}+a_{m})$
$=F_{0}(v_{1}+a_{1},\ldots,v_{m}+a_{m})=0.$
Therefore, we can conclude that $f_{0}=0$ and $f_{2}=0$ are identities of
$P_{n}$. Similarly, we can establish that $f_{1}=0$ and $f_{3}=0$ are also
identities of $P_{n}$. ∎
###### Lemma 5.2.
If $f=f(x_{1},\ldots,x_{m})\in R_{1}+R_{3}$, then $fx_{m+1}\in R_{0}+R_{2}$
and if
$f(x_{1},\ldots,x_{m})x_{m+1}=0$ is an identity of $P_{n}$, then $f=0$ is an
identity of $P_{n}$ as well.
###### Proof.
We have $fx_{m+1}\in R_{0}+R_{2}$ by the definition of the spaces $R_{i}$,
where $0\leq i\leq 3$. If $fx_{m+1}=0$ is an identity of $P_{n}$, then all
values of $f$ in $P_{n}$ belong to $C_{n}=\mathrm{Ann}_{l}(P_{n})$ by Lemma
3.3. However, since $f$ is an element of $R_{1}+R_{3}$, the values of $f$ must
belong to $\overline{C}_{n}$. Consequently, $f=0$ is an identity of $P_{n}$. ∎
Recall the exact formal definition of the linearization of identities [37,
Chapter 1]. Let $\mathcal{V}$ be an arbitrary variety of algebras and
$\mathbb{F}\langle X\rangle$ be its free algebra over $\mathbb{F}$ generated
by $X=\\{x_{1},x_{2},\ldots\\}$. Let $y\in\mathbb{F}\langle X\rangle$ be an
arbitrary fixed element. For a nonnegative integer $k$, we define the linear
mapping $\Delta^{k}_{x_{i}}(y)$ on $\mathbb{F}\langle X\rangle$ as follows:
* •
$\Delta^{0}_{x_{i}}(y)$ is the identity mapping;
* •
$x_{s}\Delta^{k}_{x_{i}}(y)=0$, if either $k>1$, or $k=1$ and $i\neq s$;
* •
$x_{i}\Delta^{1}_{x_{i}}(y)=y$;
* •
$(uv)\Delta^{k}_{x_{i}}(y)=\sum_{r+s=k}(u\Delta^{r}_{x_{i}}(y))(v\Delta^{s}_{x_{i}}(y))$,
where $x_{i}\in X$ and $u,v$ are any monomials in $\mathbb{F}\langle
X\rangle$. We also write $\Delta_{x_{i}}(y)$ instead of
$\Delta^{1}_{x_{i}}(y)$.
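As a small worked example (not from the paper), applying the last rule with $k=1$ to the monomials $x_{1}x_{2}$ and $x_{1}x_{1}$ gives

```latex
(x_{1}x_{2})\Delta_{x_{1}}(y)
   =(x_{1}\Delta^{1}_{x_{1}}(y))(x_{2}\Delta^{0}_{x_{1}}(y))
    +(x_{1}\Delta^{0}_{x_{1}}(y))(x_{2}\Delta^{1}_{x_{1}}(y))
   =yx_{2}+x_{1}\cdot 0
   =yx_{2},
\qquad
(x_{1}x_{1})\Delta_{x_{1}}(y)=yx_{1}+x_{1}y,
```

so $\Delta_{x_{i}}(y)$ replaces one occurrence of $x_{i}$ by $y$ in all possible ways, as in the usual polarization of identities.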
###### Lemma 5.3.
Suppose that $f=f(x_{1},\ldots,x_{m})\in R_{2}$. Then
$f\Delta_{i}(x_{m+1}x_{m+2})\in R_{0}$ for all $1\leq i\leq m$. Moreover,
$f=0$ is an identity of $P_{n}$ if and only if $P_{n}$ satisfies the following
system of identities
(16) $f(x_{1},\ldots,x_{m})\Delta_{i}(x_{m+1}x_{m+2})=0,\quad 1\leq i\leq m.$
###### Proof.
Let $w=xR_{y}V_{z_{1},t_{1}}\cdots V_{z_{r},t_{r}}\in\mathcal{B}$ and $u,v\in
X$. We have
$(xR_{y})\Delta_{x}(uv)=(uv)R_{y}=vV_{u,y}.$
By (1), (4) and (6), we get
$(xR_{y}V_{z_{1},t_{1}})\Delta_{y}(uv)=(z_{1}(x(uv)))t_{1}=((z_{1}x)(uv)-(z_{1}(uv))x+z_{1}((uv)x))t_{1}$
$=-((z_{1}(uv))x)t_{1}+(z_{1}((uv)x))t_{1}=(z_{1}((uv)x))t_{1}=vV_{u,x}V_{z_{1},t_{1}}.$
By (4) we get that $w\Delta_{z_{i}}(uv)=w\Delta_{t_{i}}(uv)=0$ for any
$i=1,\ldots,r$. Thus, if $f(x_{1},\ldots,x_{m})\in R_{2}$, then
$f(x_{1},\ldots,x_{m})\Delta_{i}(x_{m+1}x_{m+2})\in R_{0}$.
If $p_{1},\ldots,p_{m}\in P_{n}$ and $v_{1},\ldots,v_{m}\in D_{n}$, then we
have
(17)
$f(p_{1}+v_{1},\ldots,p_{m}+v_{m})=f(p_{1},\ldots,p_{m})+\sum_{i=1}^{m}f(p_{1},\ldots,p_{m})\Delta_{i}(v_{i}).$
In fact, by Lemma 1.3 from [37], the relation
$f(x_{1}+y_{1},\ldots,x_{m}+y_{m})=\sum_{i_{1},\ldots,i_{m}\geq
0}f\Delta_{1}^{i_{1}}(y_{1})\cdots\Delta_{m}^{i_{m}}(y_{m})$
$=f(x_{1},\ldots,x_{m})+\sum_{i=1}^{m}f(x_{1},\ldots,x_{m})\Delta_{i}(y_{i})+g,$
where $y_{1},\ldots,y_{m}\notin\\{x_{1},\ldots,x_{m}\\}$ are distinct
variables and the degree of $g$ in the variables $y_{1},\ldots,y_{m}$ is
greater than one, holds in $\mathbb{F}\langle X\rangle$. By substituting
$x_{i}=p_{i},y_{i}=v_{i}$ and using the fact that $D_{n}^{2}=0$, we can obtain
the relation (17).
If $f=0$ is an identity of $P_{n}$, then the relation (17) implies that
$f(p_{1},\ldots,p_{m})\Delta_{i}(v)=f(p_{1},\ldots,p_{i}+v,\ldots,p_{m})-f(p_{1},\ldots,p_{m})=0$
for all $p_{i}\in P_{n}$ and $v\in D_{n}$. In other words, the algebra $P_{n}$
satisfies the system of identities (16).
Conversely, suppose that the system of identities (16) holds in $P_{n}$.
Assume that $p_{1},\ldots,p_{m}\in P_{n}$ are of the form $p_{i}=a_{i}+v_{i}$,
where $a_{i}\in A_{n}$ and $v_{i}\in D_{n}$. Then using the relation (17), we
have
$f(p_{1},\ldots,p_{m})=f(a_{1}+v_{1},\ldots,a_{m}+v_{m})$
$=f(a_{1},\ldots,a_{m})+\sum_{i=1}^{m}f(p_{1},\ldots,p_{m})\Delta_{i}(v_{i})=f(a_{1},\ldots,a_{m}).$
Considering $A^{2}_{n}=0$ and $f\in R_{2}\subseteq F(X)^{2}$, we can conclude
that $f(a_{1},\ldots,a_{m})=0$. Consequently, $f(p_{1},\ldots,p_{m})=0$. ∎
###### Lemma 5.4.
If $f=f(x_{1},\ldots,x_{m})\in R_{0}$ and $f=0$ is an identity of $P_{n}$ of
the form
$f=\sum_{i=1}^{m}x_{i}g_{i},$
where $g_{i}\in E_{0}$, then $x_{m+1}g_{i}=0$ is an identity of $P_{n}$.
###### Proof.
For a fixed $i$ set $x_{i}=v+a_{i}$ and $x_{j}=a_{j}$ for all $j\neq i$, where
$v\in D_{n}$ and $a_{j}\in A_{n}$. Taking into account the relations
$A_{n}^{2}=D_{n}^{2}=0$ and Lemma 3.2, we obtain
$f(x_{1},\ldots,x_{m})=vg_{i}(a_{1},\ldots,a_{m})=0.$
Hence, $x_{m+1}g_{i}=0$ is an identity of $P_{n}$. ∎
###### Proposition 5.5.
For an arbitrary polynomial $f=f(x_{1},\ldots,x_{m})\in F(X)$ there exist
$t(m)=2m(m+3)$ polynomials $g_{i}(x_{1},\ldots,x_{m+3})\in E_{0}$, where
$i=1,\ldots,t(m)$, such that $f(x_{1},\ldots,x_{m})=0$ is an identity of
$P_{n}$ for $n\geq 2$ if and only if $P_{n}$ satisfies the system of
identities
$zg_{i}(x_{1},\ldots,x_{m+3})=0,\quad 1\leq i\leq t(m).$
###### Proof.
Let $f=f(x_{1},\ldots,x_{m})\in F(X)$ and suppose that $f=0$ is an identity of
$P_{n}$. Then by Lemma 5.1 we obtain
$f=f_{0}+f_{1}+f_{2}+f_{3},\quad f_{k}\in R_{k},$
and $f_{k}=0$ is an identity of the algebra $P_{n}$.
By Lemma 5.4, the identity $f_{0}=\sum_{i=1}^{m}x_{i}g_{i}=0$ is equivalent to
the system of $m$ identities $x_{m+1}g_{i}=0$ of $P_{n}$, where $1\leq i\leq
m$.
By Lemma 5.2, the identity $f_{1}=0$ is equivalent to $f_{1}x_{m+1}=0$ and
$f_{1}x_{m+1}\in R_{0}$. Moreover, if
$f_{1}x_{m+1}=\sum_{i=1}^{m}x_{i}g_{i},\quad g_{i}\in E_{0},$
then, by Lemma 5.4, the identity $f_{1}x_{m+1}=0$ is equivalent to the system
of $m$ identities $x_{m+2}g_{i}=0$ of $P_{n}$, where $1\leq i\leq m$.
By Lemma 5.3, the identity $f_{2}=0$ is equivalent to the system of $m$
identities
$f_{2}(x_{1},\ldots,x_{m})\Delta_{i}(x_{m+1}x_{m+2})=0$, where $i=1,\ldots,m$,
and we have $f_{2}\Delta_{i}(x_{m+1}x_{m+2})\in R_{0}$. Hence, by Lemma 5.4,
it is equivalent to a system of $m(m+2)$ identities of the form
$x_{m+3}g_{i}=0$, where $g_{i}(x_{1},\ldots,x_{m+2})\in E_{0}$ and
$i=1,\ldots,m(m+2)$.
By Lemma 5.2, the identity $f_{3}=0$ is equivalent to $f_{3}x_{m+1}=0$ and
$f_{3}x_{m+1}\in R_{2}$. The identity (4) implies that
$(f_{3}x_{m+1})\Delta_{m+1}(x_{m+2}x_{m+3})=0$. Then, by Lemma 5.3, $f_{3}=0$
is equivalent to the system of $m$ identities
$0=(f_{3}x_{m+1})\Delta_{i}(x_{m+2}x_{m+3})\in R_{0}$, where $i=1,\ldots,m$.
Moreover, by Lemma 5.4, it is equivalent to a system of $m(m+2)$ identities of
the form $x_{m+4}g_{j}=0$, where $g_{j}(x_{1},\ldots,x_{m+3})\in E_{0}$ and
$1\leq j\leq m(m+2)$.
Thus, $f=0$ is equivalent to a system of $t(m)=2m(m+3)$ identities of the form
$zg_{i}(x_{1},\ldots,x_{m+3})=0$, where $g_{i}(x_{1},\ldots,x_{m+3})\in E_{0}$
and $i=1,\ldots,t(m)$. ∎
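For reference (a tally not spelled out in the text), the total $t(m)=2m(m+3)$ is the sum of the four counts obtained in the proof above:

```latex
\underbrace{m}_{f_{0}}+\underbrace{m}_{f_{1}}
+\underbrace{m(m+2)}_{f_{2}}+\underbrace{m(m+2)}_{f_{3}}
=2m+2m(m+2)=2m^{2}+6m=2m(m+3).
```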
Let $B$ be an arbitrary algebra in $\mathcal{R}$. We define $E_{0}(B)$ as the
algebra of operators on $B$ generated by $V_{b_{1},b_{2}}$ for all
$b_{1},b_{2}\in B$. Denote by $T(E_{0}(B))$ the ideal of $E_{0}$ defined as the
intersection of the kernels of the homomorphisms $E_{0}\rightarrow E_{0}(B)$
induced by all possible homomorphisms from $F(X)$ to $B$. The elements of
$T(E_{0}(B))$ are called $V$-identities of $B$.
###### Lemma 5.6.
$E_{0}(P_{n})\cong M_{n}(\mathbb{F})$, where $M_{n}(\mathbb{F})$ is the algebra of
$n\times n$ matrices over $\mathbb{F}$.
###### Proof.
According to Lemma 3.2, $E_{0}(P_{n})$ annihilates the subspace
$A_{n}+\overline{C}_{n}$, and $C_{n}$ is an invariant subspace of $P_{n}$
under its action. Consequently, $E_{0}(P_{n})$ is isomorphic to a subalgebra
$L$ of the algebra $End_{\mathbb{F}}C_{n}$. Furthermore, the operator
$V_{b_{ij},a_{ij}}\in E_{0}(P_{n})$ sends the element $c_{i}$ to $c_{j}$, and
$c_{k}$ to zero if $k\neq i$, i.e., it acts as a matrix unit.
Therefore, the subalgebra $L$ coincides with the entire algebra
$End_{\mathbb{F}}(C_{n})\cong M_{n}(\mathbb{F})$. ∎
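Since $E_{0}(P_{2})\cong M_{2}(\mathbb{F})$, every polynomial identity of $2\times 2$ matrices yields a $V$-identity of $P_{2}$; in Section 6 this is applied to the Hall identity $[[f_{1},f_{2}]\circ[f_{3},f_{4}],f_{5}]=0$. The following Python sketch (a numerical sanity check, not part of the paper) tests the Hall identity on random integer $2\times 2$ matrices.

```python
import random

def mm(A, B):
    """Product of 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lin(A, B, s):
    """Entrywise A + s*B."""
    return [[A[i][j] + s * B[i][j] for j in range(2)] for i in range(2)]

def comm(A, B):
    """Commutator [A, B] = AB - BA."""
    return lin(mm(A, B), mm(B, A), -1)

def anti(A, B):
    """Jordan product A o B = AB + BA."""
    return lin(mm(A, B), mm(B, A), 1)

def rand2():
    return [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]

random.seed(1)
for _ in range(1000):
    A, B, C, D, E = (rand2() for _ in range(5))
    # Hall identity: [[A,B] o [C,D], E] = 0 in M_2
    assert comm(anti(comm(A, B), comm(C, D)), E) == [[0, 0], [0, 0]]
print("Hall identity holds on 1000 random integer 2x2 matrix tuples")
```

The identity holds because $[A,B]$ and $[C,D]$ are traceless, so by the Cayley–Hamilton theorem their Jordan product is a scalar matrix, which commutes with every $E$.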
###### Proposition 5.7.
If the algebra $P_{n}$ has a finite basis of identities for $n\geq 2$, then
the ideal $T=T(E_{0}(P_{n}))$ is generated by polynomials of bounded degrees.
###### Proof.
Suppose that $P_{n}$ has a finite basis of identities for $n\geq 2$. By
Proposition 5.5, modulo the identities (2), (3), and (4) defining $\mathcal{R}$,
every identity is equivalent to a finite system of identities of the form (14).
Consequently, by Lemma 4.2, there exists
a finite set of elements $G\subseteq T$ such that the identities $tg=0$, where
$g\in G$, form a basis of identities of $P_{n}$. Let $m$ be the maximum of the
degrees of polynomials in $G$. By the same Lemma 4.2, the ideal $T$ is
generated by all $\varphi(g)$, where $g\in G$ and $\varphi$ is linear.
Consequently, $T$ is generated by elements of degrees $\leq m$. ∎
## 6\. Identities of $P_{2}$.
We are going to prove that $P_{2}$ does not have a finite basis of identities.
First, let’s construct some important examples of algebras.
###### Proposition 6.1.
For any $s>5$ there exists an algebra $B\in\mathcal{R}$ with the following two
properties:
1. (1)
$B$ is generated by a set $Q=\\{q_{1},\ldots,q_{s+3}\\}$ such that
$T\nsubseteq T(E_{0}(B))$.
2. (2)
Let $C$ be a subalgebra of $B$ generated by any subset $Q^{\prime}$ of $Q$
with $s$ elements. Then
$tg(c_{1},\ldots,c_{k})=0$
for all $g(x_{1},\ldots,x_{k})\in T$, $c_{1},\ldots,c_{k}\in C$, and $t\in B$.
###### Proof.
Set $n=s-5\geq 1$. Let $H$ be the free algebra with identity in the variety of
algebras generated by the field $\mathbb{F}$ with free generators
$\\{h_{1},\ldots,h_{n}\\}$. Denote by $W$ the subspace of $H$ spanned by all
words in $h_{1},\ldots,h_{n}$, including the identity $1$, that do not contain
at least one of the $h_{i}$. Then $W\neq H$. By Theorem 1.6 from [37], the algebra
$A=H\otimes_{\mathbb{F}}P_{3}$ belongs to $\mathcal{R}$. Consider the
subalgebra $L$ of $A$ generated by the following set of elements:
(18) $\\{1\otimes c_{1},1\otimes a_{11},1\otimes b_{11},1\otimes
a_{12},1\otimes b_{12},h_{i}\otimes a_{22},1\otimes b_{22},1\otimes
a_{23},1\otimes b_{23}\\}$
where $i=1,\ldots,n$.
We note that
$1\otimes c_{2}=-(1\otimes b_{12})((1\otimes a_{12})(1\otimes c_{1})),$
$h_{j}\otimes c_{2}=-(1\otimes b_{22})((h_{j}\otimes a_{22})(1\otimes
c_{2})),$ $h_{i}h_{j}\otimes c_{2}=-(1\otimes b_{22})((h_{i}\otimes
a_{22})(h_{j}\otimes c_{2})).$
Thus, by induction on the length of $h$, one can derive that $h\otimes
c_{2}\in L$ for any word $h$ in $h_{1},\ldots,h_{n}$. In addition, $h\otimes
c_{3}\in L$ since
$h\otimes c_{3}=-(1\otimes b_{23})((1\otimes a_{23})(h\otimes c_{2})).$
Note that $h\otimes c_{3}$ is a two-sided annihilator of $L$ since
$L\subseteq H\otimes(D_{3}+\sum_{i\leq j\leq
3,(i,j)\neq(3,3)}(\mathbb{F}a_{ij}+\mathbb{F}b_{ij})).$
Consequently, $N=H\otimes c_{3}$ and $N^{\prime}=W\otimes c_{3}$ are ideals of
$L$. Set $B=L/N^{\prime}$ and let’s show that it satisfies the properties (1)
and (2) of the proposition.
Verification of Property (1). Denote by $q_{1},\ldots,q_{s+3}$ the images of
the generators of (18) under the natural projection $L\rightarrow
L/N^{\prime}$. By Lemma 5.6, the algebra $E_{0}(P_{2})$ satisfies the
well-known Hall identity
$[[\overline{f}_{1},\overline{f}_{2}]\circ[\overline{f}_{3},\overline{f}_{4}],\overline{f}_{5}]=0,$
for all $\overline{f}_{i}\in E_{0}(P_{2})$, where $1\leq i\leq 5$ and $a\circ
b=ab+ba$. It follows that $S=[[f_{1},f_{2}]\circ[f_{3},f_{4}],f_{5}]\in T$ for
all $f_{i}\in E_{0}$.
It is easy to choose $f_{1},\ldots,f_{5}\in E_{0}$ and $\varphi:F(X)\to L$
such that
$f^{\varphi}_{1}=V_{1\otimes b_{12},1\otimes a_{12}}\prod_{i=1}^{n}V_{1\otimes
b_{22},h_{i}\otimes a_{22}},\quad f^{\varphi}_{2}=f^{\varphi}_{5}=V_{1\otimes
b_{11},1\otimes a_{11}},$ $f^{\varphi}_{3}=V_{1\otimes b_{22},1\otimes
a_{22}},\quad f^{\varphi}_{4}=V_{1\otimes b_{23},1\otimes a_{23}}.$
The actions of the operators
$f^{\varphi}_{1},f^{\varphi}_{2},f^{\varphi}_{3},f^{\varphi}_{4}$ on $L$ give
us
$f^{\varphi}_{1}f^{\varphi}_{2}=0,\quad
f^{\varphi}_{2}f^{\varphi}_{1}=V_{1\otimes b_{12},v\otimes a_{12}},$
$f^{\varphi}_{3}f^{\varphi}_{4}=V_{1\otimes b_{23},1\otimes a_{23}},\quad
f^{\varphi}_{4}f^{\varphi}_{3}=0,$
and we have
$S^{\varphi}=[-V_{1\otimes b_{12},v\otimes a_{12}}\circ V_{1\otimes
b_{23},1\otimes a_{23}},V_{1\otimes b_{11},1\otimes a_{11}}]=V_{1\otimes
b_{12},v\otimes a_{12}}V_{1\otimes b_{23},1\otimes a_{23}},$
where $v=h_{1}\cdots h_{n}$. Since
$(1\otimes c_{1})S^{\varphi}=v\otimes c_{3}\neq 0(mod\,N^{\prime}),$
we obtain $S\notin T(E_{0}(B))$ and therefore $T\nsubseteq T(E_{0}(B))$.
Verification of Property (2). Let $L^{\prime}$ be a subalgebra of $L$
generated by a subset of the set (18) that contains no more than $s$ elements.
Assume that $f(x_{1},\ldots,x_{k})\in T$. Let $M$ be the set of all elements
of the form $(1\otimes c_{1})f(l_{1},\ldots,l_{k})$, where $l_{i}\in
L^{\prime}$. We claim that
(19) $M\cap N\subseteq N^{\prime}.$
Let’s assume that (19) does not hold. In other words, there is an element
$g=(h_{1}\cdots h_{n}\otimes c_{3})(h^{\prime}\otimes
c_{3})+h^{\prime\prime}\otimes c_{3}\in M\cap N$
for some nonzero $h^{\prime}\in H$ and some $h^{\prime\prime}\in W$.
Note that
$(1\otimes c_{1})V_{1\otimes b_{12},1\otimes a_{12}}=1\otimes
c_{2},\quad(1\otimes c_{2})V_{1\otimes b_{22},h_{i}\otimes
a_{22}}=h_{i}\otimes c_{2},$ $(h_{i}\otimes c_{2})V_{1\otimes
b_{22},h_{j}\otimes a_{22}}=h_{i}h_{j}\otimes c_{2},\,(h_{1}\cdots
h_{n}\otimes c_{2})V_{1\otimes b_{23},1\otimes a_{23}}=h_{1}\cdots
h_{n}\otimes c_{3}.$
Without using $1\otimes c_{1}$ and all of the operators
$V_{1\otimes b_{12},1\otimes a_{12}},\quad V_{1\otimes b_{22},h_{i}\otimes
a_{22}},\quad V_{1\otimes b_{23},1\otimes a_{23}},$
we cannot get elements containing the product $h_{1}\cdots h_{n}$. This means
that $M\cap N$ contains $g$ if and only if the following $s+1$ elements appear
in our calculations:
$1\otimes c_{1},1\otimes a_{12},1\otimes b_{12},h_{i}\otimes a_{22},1\otimes
b_{22},1\otimes a_{23},1\otimes b_{23}\quad(i=1,\ldots,n).$
This is impossible, since $L^{\prime}$ is generated by at most $s$ elements.
This contradiction establishes the inclusion (19).
Set $\overline{f}=f(l_{1},\ldots,l_{k})$ for some fixed $l_{1},\ldots,l_{k}\in
L^{\prime}$. We show that $L\overline{f}=0\,(mod\,N^{\prime})$. By Lemma 3.2, we have
$(H\otimes(\overline{C}_{3}+A_{3}+\mathbb{F}c_{3}))E_{0}(L)=0$. Then, because
of (19), it is sufficient to prove that
(20) $(1\otimes c_{1})\overline{f}=0(mod\,N),\quad(H\otimes
c_{2})\overline{f}=0.$
If $l_{i}=l_{i}^{\prime}+l_{i}^{\prime\prime}$, where
$l_{i}^{\prime}\in H\otimes\sum_{i\leq j\leq
2}(\mathbb{F}a_{ij}+\mathbb{F}b_{ij}),\quad l_{i}^{\prime\prime}\in
H\otimes(\mathbb{F}a_{23}+\mathbb{F}b_{23}),$
then
(21) $(1\otimes c_{1})\overline{f}=(1\otimes
c_{1})f(l_{1}^{\prime},\ldots,l_{k}^{\prime})(mod\,N).$
We have
$f(x_{1}+y_{1},\ldots,x_{k}+y_{k})=f(x_{1},\ldots,x_{k})+g(x_{1},\ldots,x_{k},y_{1},\ldots,y_{k}),$
where every monomial of $g\in E_{0}$ involves at least one variable from
$y_{1},\ldots,y_{k}$. Since $LV_{l^{\prime\prime}_{i},L}\subseteq N$, we
obtain
$(1\otimes
c_{1})g(l^{\prime}_{1},\ldots,l^{\prime}_{k},l^{\prime\prime}_{1}\ldots,l^{\prime\prime}_{k})=0(mod\,N),$
which implies (21).
Consequently, to prove the first relation of (20), without loss of generality
we can assume that $l_{i}\in H\otimes\sum_{i\leq j\leq
2}(\mathbb{F}a_{ij}+\mathbb{F}b_{ij})$. In this case, the elements
$c_{1},l_{1},\ldots,l_{k}$ generate a subalgebra of $H\otimes P_{2}$. The
algebra $H$ can be embedded into the Cartesian product $\mathbb{F}^{\alpha}$
of the algebra $\mathbb{F}$. So, $H\otimes P_{2}$ can be embedded into
$\mathbb{F}^{\alpha}\otimes P_{2}\cong P^{\alpha}_{2}$ and satisfies all the
identities of $P_{2}$. Consequently, it satisfies all $V$-identities from $T$.
Thus, the first relation of (20) is established. The second relation of (20)
can be established similarly using the equality
$H\otimes(\sum_{j\leq 2}(\mathbb{F}a_{1j}+\mathbb{F}b_{1j}))c_{2}=0.$
In this case, we can assume that
$l_{i}\in H\otimes(\sum_{2\leq i,\,j\leq
3}(\mathbb{F}a_{ij}+\mathbb{F}b_{ij})).$
Then the elements $c_{2},l_{1},\ldots,l_{k}$ generate an algebra isomorphic to
a subalgebra of $H\otimes P_{2}$. Thus, we have
$L\overline{f}=0(mod\,N^{\prime})$. The factorization by $N^{\prime}$
completes the proof of Proposition 6.1. ∎
###### Lemma 6.2.
Let $\Sigma$ be a set of generators of the ideal $T=T(E_{0}(P_{2}))$ of
$E_{0}$. Then for any natural number $s$, there exists a polynomial
$f_{s}\in\Sigma$ that depends on more than $s$ variables.
###### Proof.
Suppose, on the contrary, that $\Sigma$ consists of polynomials that depend on
at most $s$ variables. Let $B$ be the algebra satisfying the conditions of Proposition
6.1. Consider the epimorphism $\tau:F(X)\rightarrow B$ defined by
$\tau(x_{i})=\begin{cases}q_{i}&\text{if }i\leq s+3,\\ 0&\text{if }i>s+3.\end{cases}$
This induces the epimorphism $\tilde{\tau}:E_{0}\rightarrow E_{0}(B)$ defined
by
$\tilde{\tau}g(x_{1},\ldots,x_{k})=g(\tau(x_{1}),\ldots,\tau(x_{k})).$
If $g(x_{1},\ldots,x_{k})\in\Sigma$, then $k\leq s$ and $c_{i}=\tau(x_{i})$
belong to a subalgebra of $B$ generated by $\leq s$ elements. By Proposition
6.1(2), $g(c_{1},\ldots,c_{k})=0$. Thus, $g\in\mathrm{Ker}\,\tilde{\tau}$. So
$\mathrm{Ker}\,\tilde{\tau}$ contains $\Sigma$ and, consequently, $T$ as well.
Now let $f(x_{1},\ldots,x_{m})\in T$. For any $b_{1},\ldots,b_{m}\in B$ there
exist $r_{i}\in F(X)$ such that $b_{i}=\tau(r_{i})$ for all $i=1,\ldots,m$.
Since $f(r_{1},\ldots,r_{m})\in T$, we have
$f(b_{1},\ldots,b_{m})=\tilde{\tau}f(r_{1},\ldots,r_{m})=0$. This proves that
every element of $T$ is a $V$-identity of $B$. This contradicts Proposition
6.1(1). ∎
###### Theorem 6.3.
The algebra $P_{2}$ over an arbitrary field $\mathbb{F}$ does not have a finite
basis of identities.
###### Proof.
If $P_{2}$ has a finite basis of identities, then the ideal
$T=T(E_{0}(P_{2}))$ is generated by polynomials of bounded degree by
Proposition 5.7. This contradicts Lemma 6.2. ∎
## Acknowledgments
The authors would like to thank the Max Planck Institute for Mathematics, and
the first author would like to thank Wayne State University, for their
hospitality and excellent working conditions, where part of this work was
done.
The second author is supported by the grant AP09261086 of the Ministry of
Science and Higher Education of the Republic of Kazakhstan.
Original Article
Shogo Nishiyama[A], Tomohiro Kara[A], Brian Thorsbro[B], Hiromi Saida[C],
Yohsuke Takamori[D], Masaaki Takahashi[E], Takayuki Ohgami[F],
Kohei Ichikawa[G,H], Rainer Schödel[I]
[A] Miyagi University of Education, Sendai, Miyagi, Japan
[B] Observatoire de la Côte d’Azur, CNRS UMR 7293, BP4229, Laboratoire Lagrange, F-06304 Nice Cedex 4, France
[C] Daido University, Nagoya, Aichi, Japan
[D] National Institute of Technology, Gobo, Wakayama, Japan
[E] Aichi University of Education, Kariya, Aichi, Japan
[F] National Astronomical Observatory of Japan, Tokyo, Japan
[G] Tohoku University, Sendai, Miyagi, Japan
[H] Waseda University, Tokyo, Japan
[I] Instituto de Astrofísica de Andalucía (IAA)-CSIC, Granada, Spain
Corresponding author: Shogo Nishiyama, Miyagi University of Education, 149 Aramaki-Aza-Aoba, Aoba-ku, Sendai, Miyagi 980-0845, Japan<EMAIL_ADDRESS>
# Origin of an Orbiting Star Around the Galactic Supermassive Black Hole
###### Abstract
The tremendous tidal force that is linked to the supermassive black hole
(SMBH) at the center of our galaxy is expected to strongly subdue star
formation in its vicinity. Stars within $1^{\prime\prime}$ of the SMBH thus
likely formed farther from the SMBH and migrated to their current positions.
In this study, spectroscopic observations were conducted of the star S0-6/S10,
one of the late-type stars closest to the SMBH (projected distance of
$\approx 0.3^{\prime\prime}$). Using
metal absorption lines in the spectra of S0-6, the radial velocity of S0-6
from 2014 to 2021 was measured, and a marginal acceleration was detected,
which indicated that S0-6 is close to the SMBH. The S0-6 spectra were employed
to determine its stellar parameters including temperature, chemical abundances
([M/H], [Fe/H], [$\alpha$/Fe], [Ca/Fe], [Mg/Fe], [Ti/Fe]), and age. As
suggested by the results of this study, S0-6 is very old ($\gtrsim 10$ Gyr)
and has an origin different from that of stars born in the central pc region.
###### keywords:
astrophysics, infrared astronomy, black hole, spectroscopy
## 1 Introduction
Stars within approximately $1^{\prime\prime}$ ($\approx 0.04$ pc at a distance
of 8 kpc from the Earth) of the supermassive black hole (SMBH) at the center
of our galaxy are referred to as “S-stars”. (There is no common definition for
S-stars. Here, S-stars are defined as those within $1^{\prime\prime}$ of
Sgr A*, including both early- and late-type stars.) S-stars are a collection
of both early-type and late-type stars: the early-type stars are B0–B3
main-sequence stars with masses of $8$–$14\,M_{\odot}$ [2, 1], and the
late-type stars are G–M red giants with estimated initial masses of
$0.5$–$2\,M_{\odot}$ [3].
S-stars provide a unique test bed for probing the strong gravity around SMBHs.
Owing to the large mass ratio between S-stars and the SMBH, named Sagittarius
A* (Sgr A*; $\approx 4\times 10^{6}M_{\odot}$[4]), the stars can be considered
as test particles that move in a static potential. A particularly interesting
target is the star named S0-2 or S2 (Fig. 1), whose orbital period is $\approx
16$ years [4, 2]. We have been carrying out spectroscopic monitoring
observations since 2014 with the aim of precisely measuring the radial velocity
(RV) of S0-2/S2 around the pericenter passage in 2018 [5]. We were able to
spatially resolve S0-2/S2 and obtain high-resolution near-infrared (NIR)
spectra with the use of the infrared camera and spectrograph (IRCS) [6] on the
Subaru telescope [7], in combination with the Subaru adaptive optics (AO) [8]
and laser guide star (LGS) systems [9]. The observational results and the
obtained orbit of S0-2/S2 are shown in Fig. 2.
Figure 1: $K^{\prime}$-band image of the galactic center region taken with
Subaru/IRCS/AO188 in 2017. North is up, and east is to the left in the image.
The positions of S0-2/S2, Sgr A*, and our target S0-6/S10, are shown by yellow
circles.
The observations and analysis provide not only the orbital parameters for
S0-2/S2 but also the physical parameters for the SMBH. In our study, the mass
and distance of the SMBH were determined to be $(4.23\pm 0.07)\times
10^{6}\,M_{\odot}$ and $8.10\pm 0.07$ kpc, respectively (Saida et al.[10]; see
also [11, 12]). It should be noted that by observing stars that orbit the
SMBH, we can determine the mass of the SMBH with an uncertainty of only a few
percent, which is much smaller than that from observations of the shadow of
the SMBH [13].
Figure 2: Observational results and derived orbit of S0-2/S2. The blue
squares, red diamonds, and green circles respectively denote the results of
the VLT, Keck/Gemini, and Subaru observations. The darkness of the colors
indicates the epoch of observations, with the most recent ones represented by
the darkest points. The results of an orbital fitting are shown by blue and
red lines, which indicate the blue- and red-shifted parts of the orbit,
respectively. (Left) Orbital motion of S0-2 on the sky. The positions are
relative to Sgr A*. S0-2/S2 revolves around Sgr A* in a clockwise direction in
this plot. (Top middle) From top to bottom: offset in right ascension (RA),
offset in declination (Dec), and distance from Sgr A*, respectively, as a
function of time. (Top right) Three-dimensional visualization of the orbit of
S0-2/S2. The direction to the Earth is indicated by the white arrow, and the
position of Sgr A* is depicted by the black cross. (Bottom right) Radial
velocity of S0-2 as a function of time.
Because of the tremendous tidal force associated with the SMBH, it is highly
unlikely that the S-stars formed at their present position [14]. Therefore,
the stars are generally assumed to form at larger distances from Sgr A* and
were brought to their current location via dynamical processes. Considering
that the late-type S-stars are likely to be old, $\sim 3$–$10$ Gyr [3], they
are expected to be dynamically relaxed. In other words, identifying where they
formed using only kinematic information is impossible. In this case, stellar
properties including age and chemical abundances are crucial in order to
understand their origin.
The SMBH and S-stars are surrounded by a dense stellar system, the nuclear
star cluster (NSC). This is a massive ($\sim 2.5\times 10^{7}M_{\odot}$)
cluster with a half-light radius of $\sim 5$ pc [15, 16]. Recent observations
revealed that, besides a major stellar population with a mean metallicity at
least as high as twice the solar value, there exists a subpopulation of stars
characterized by sub-solar metallicity [17, 18]. The sub-solar population may
be a relic of an old star cluster that migrated to the center of our galaxy
[19, 20]. In this paper, “NSC stars” are defined as stars within $\sim 5$ pc
from Sgr A*, excluding the S-stars. Habibi et al. [3] obtained spectra of 21 late-type
S-stars within $0.5\,$pc from Sgr A*, and estimated their $T_{\mathrm{eff}}$,
spectral type, age, and initial mass. However, owing to the medium resolution
($\sim 1500$) of the spectra, they could not determine the chemical abundance
of the late-type S-stars. Without metallicity information, stellar ages are
impossible to determine precisely. In this study, we obtain high-resolution
spectra of one of the late-type S-stars, S0-6, and determine its chemical
abundances and age to investigate its origin. S0-6 is located at $\approx
0.3^{\prime\prime}$ from Sgr A* (Fig. 1), which corresponds to
a transverse distance of $\approx 0.012\,$pc or $2400\,$AU. S0-6 is of great
interest because it appears to be one of the rare red giants that are very
close to Sgr A*.
## 2 Observations and data reduction
In this study, NIR spectroscopic observations were conducted using the Subaru
telescope and IRCS in the echelle mode (spectral resolution of $\sim 20,000$).
Spectra were obtained in the $K+$ setting each year from 2014 to 2019 and in a
special setting in the $K$-band ($K_{\mathrm{sp}}$) in 2021. The Subaru AO
system, AO188, was employed in the observations. During the observations in
2014, 2016, and 2017, the LGS system was used.
Table 1 shows the echelle order numbers and wavelength coverage for the
settings. Although seven echelle orders can be simultaneously observed, we
utilized only three orders, namely, 25, 26 and 27, in the following analysis.
This is mainly due to strong atmospheric absorption lines in other orders.
Table 1: Wavelength coverage for the echelle $K+$ and $K_{\mathrm{sp}}$ modes.

Setting | Order | Wavelength coverage [$\mathrm{\AA}$]
---|---|---
$K+$ | 25 | $22500$–$23039$
 | 26 | $21634$–$22154$
 | 27 | $20833$–$21335$
$K_{\mathrm{sp}}$ | 25 | $22324$–$22878$
 | 26 | $21466$–$21998$
 | 27 | $20670$–$21185$
The details of the data reduction are described in Nishiyama et al.[5]. The
first process of data reduction involves flat fielding, bad pixel correction,
and cosmic ray removal. Flat field images were acquired using observations of
a continuum source.
Prior to the extraction of spectra of S0-6, wavelength-calibrated stripe
images, where the $x$\- and $y$-axes of the stripe images are wavelength and
spatial position along the slit, respectively, were constructed. Here, all OH
emission lines along the slit direction are used for wavelength calibration of
the stripe images. The numbers of the OH lines used for the calibration are 6,
7, and 14 for the echelle orders 25, 26, and 27, respectively.
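The wavelength calibration step amounts to fitting a low-order dispersion solution, wavelength as a function of detector position, through the identified OH lines. A self-contained sketch follows; the pixel positions and the quadratic dispersion relation are made-up illustrative values, not the actual IRCS line list:

```python
import numpy as np

# Hypothetical pixel positions of identified OH emission lines
# (illustrative values only, not the real line list for any order).
pix = np.array([120.0, 310.0, 505.0, 640.0, 820.0, 990.0])

# "Laboratory" wavelengths, generated here from an assumed quadratic
# dispersion relation so that the example is self-checking.
true_coeff = [-2.0e-5, 0.52, 20833.0]   # quadratic, linear, constant terms
lam_lab = np.polyval(true_coeff, pix)   # angstrom

# Low-order polynomial wavelength solution lambda(pixel).
coeff = np.polyfit(pix, lam_lab, deg=2)
solution = np.poly1d(coeff)

# Fit residuals give a rough internal calibration accuracy.
resid = lam_lab - solution(pix)
rms = float(np.sqrt(np.mean(resid ** 2)))
```

In practice the residual RMS of such a fit, converted to velocity, sets the floor of the wavelength-calibration accuracy.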
We extracted S0-6 spectra and then performed telluric correction of the S0-6
spectra using spectra of standard stars, which were bright early-A-type
dwarfs. Then, the S0-6 spectra were divided by the standard star spectra. We
conducted continuum fitting for the telluric-corrected spectrum for each
observation run. Finally, we combined all spectra to come up with a combined
spectrum for each observed epoch. To determine the metallicity, we combined
all the RV-corrected spectra obtained in our observations.
We estimated the signal-to-noise (S/N) ratio for S0-6 employing the procedure
described in Fukue et al.[21]. Here, the S/N ratio is defined as the inverse
of the noise for the normalized spectrum, $\sigma$, that is, S/N ratio
$=\sigma^{-1}$. Table 2 shows the S/N ratio for each observing run and each
order. The S/N ratios vary from 9 to 36 and depend not only on the number of
data sets used but also on the seeing conditions of the observations and the
guide star system used (LGS or NGS). The S/N ratios calculated for the combined
spectra, composed of all the datasets from 2014 to 2021, were 57.3, 52.5, and
41.8, for echelle orders 27, 26, and 25, respectively.
Table 2: Signal-to-noise ratio of spectra with $R=20,000$.

Date | Order 27 | Order 26 | Order 25
---|---|---|---
2014.379 | 23.1 | 22.9 | 21.1
2015.635 | 22.5 | 23.5 | 20.4
2016.381 | 18.9 | 18.2 | 16.4
2017.341 | 22.1 | 20.7 | 19.6
2017.344 | 33.1 | 36.2 | 30.6
2017.347 | 30.1 | 30.2 | 28.9
2017.603 | 11.2 | 13.3 | 9.3
2017.609 | 23.1 | 23.1 | 14.1
2018.087 | 22.1 | 24.5 | 19.6
2021.420 | 28.1 | 12.3 | 22.7
Combined(a) | 57.3 | 52.5 | 41.8

(a) Combined spectra from 2014 to 2021.
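Under the definition above (S/N ratio $=\sigma^{-1}$ for the normalized spectrum), the estimate reduces to measuring the scatter of the continuum. A minimal sketch on synthetic data (the 2% noise level is illustrative, not a value from the observations):

```python
import numpy as np

# Synthetic continuum-normalized spectrum: continuum at 1.0 with 2% Gaussian
# noise, standing in for a line-free region of a real normalized spectrum.
rng = np.random.default_rng(42)
flux = 1.0 + rng.normal(0.0, 0.02, size=2000)

# Noise sigma of the normalized spectrum; the S/N ratio is its inverse.
sigma = float(np.std(flux))
snr = 1.0 / sigma
```

With a true noise of 2%, the recovered S/N ratio is close to 50, as expected.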
## 3 Radial velocities and location of S0-6
Absorption lines in the spectra of S0-6 were used to determine its RV. First,
we roughly estimated the wavelength shift of the observed spectra
using strong distinctive absorption lines in order 26. Second, the observed
spectra were compared with a wavelength-shifted model spectrum of a late-type
giant with a temperature of $\sim 4,000$ K [3]. If we found an absorption line
that was also identified in the model spectrum, we fitted the observed line
with a Gaussian function and calculated the RV of S0-6 from the wavelength
difference between the observed and model spectra. Finally, the amount of RV
correction was calculated using the IRAF rvcorrect task in order to obtain RVs
in the Local Standard of Rest reference frame.
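The per-line measurement described above, fitting a Gaussian to an absorption line and converting the wavelength shift to a velocity, can be sketched as follows. The rest wavelength, line depth, and shift are illustrative numbers, not values from the actual S0-6 line list:

```python
import numpy as np
from scipy.optimize import curve_fit

C_KMS = 299792.458  # speed of light in km/s

def absorption(lam, depth, center, width, cont):
    """Continuum minus a Gaussian absorption line."""
    return cont - depth * np.exp(-0.5 * ((lam - center) / width) ** 2)

# Synthetic line: rest wavelength 21500 A, Doppler-shifted by +97 km/s.
lam0 = 21500.0
rv_true = 97.0
lam = np.linspace(21490.0, 21515.0, 500)
flux = absorption(lam, 0.3, lam0 * (1.0 + rv_true / C_KMS), 1.5, 1.0)

# Fit the line profile; start the center guess at the flux minimum.
p0 = [0.2, float(lam[np.argmin(flux)]), 1.0, 1.0]
popt, _ = curve_fit(absorption, lam, flux, p0=p0)

# RV from the shift between the fitted and rest wavelengths.
rv = C_KMS * (popt[1] - lam0) / lam0
```

Averaging such per-line RVs over a few dozen lines, as done here, is what drives the statistical uncertainty down to the km/s level.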
The time variation of the RV for S0-6 is shown in Fig. 3. The error bars
include both statistical and systematic uncertainties (Table 3). The
statistical uncertainties are taken from the standard errors of the RV
measurements using the absorption lines, where 21 to 45 lines are employed.
The average statistical uncertainty is 1.25 km s-1.
Table 3: Radial velocities and uncertainties for S0-6.

Time (year) | Number of lines | RV (km/s) | Statistical uncertainty (km/s) | Total uncertainty(a) (km/s)
---|---|---|---|---
2014.379 | 31 | 94.96 | 1.27 | 1.46
2015.635 | 31 | 98.18 | 0.90 | 1.15
2016.381 | 26 | 96.07 | 1.52 | 1.68
2017.341 | 31 | 98.72 | 1.16 | 1.37
2017.344 | 39 | 97.72 | 0.81 | 1.09
2017.347 | 44 | 97.21 | 0.85 | 1.12
2017.603 | 21 | 98.90 | 2.00 | 2.13
2017.609 | 28 | 98.01 | 1.53 | 1.69
2018.087 | 45 | 96.93 | 1.04 | 1.27
2021.420 | 34 | 99.60 | 1.37 | 1.55

(a) The systematic uncertainty of $0.72\,$km s-1 is summed in quadrature with the statistical uncertainty.
To assess the long-term stability of the RV measurements, we used atmospheric
OH emission lines. We extracted spectra at a position close to, but different
from, S0-6 without subtracting the background emission, and fitted the OH
emission lines to measure their wavelengths in the wavelength-calibrated
spectra. By comparing these with the OH emission wavelengths measured in
laboratories, we obtain a long-term stability of 0.72 km s-1 for the 7-year
monitoring observations. We include this as a systematic uncertainty (Table 3). When the
statistical and systematic uncertainties are summed quadratically, the average
of the total uncertainties in our RV measurements is 1.45 km s-1. These small
uncertainties indicate that this study provides some of the most accurate and
precise RV measurements conducted for stars in the central pc region of our
galaxy.
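The quadrature combination of statistical and systematic uncertainties can be checked directly against Table 3:

```python
import math

# Statistical uncertainties from Table 3 (km/s) and the 0.72 km/s
# systematic term from the OH-line stability analysis.
stat = [1.27, 0.90, 1.52, 1.16, 0.81, 0.85, 2.00, 1.53, 1.04, 1.37]
sys_err = 0.72

# Total uncertainty per epoch: quadrature sum of the two terms.
total = [math.hypot(s, sys_err) for s in stat]
mean_total = sum(total) / len(total)
```

The per-epoch totals reproduce the last column of Table 3, and their mean is the 1.45 km/s quoted in the text.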
Figure 3: (Top) Radial velocity of S0-6 as a function of time. The blue line
represents the best-fit linear function. (Bottom) Residuals from the best-fit
linear function shown in the top panel.
Using a linear function, we fitted the RV plot, and the best-fit function is
shown in Fig. 3. The slope of the best-fit function is $0.48\pm 0.20$ km s-1
yr-1. This might suggest a marginal detection of the acceleration of S0-6
along the line of sight. The bottom panel in Fig. 3 represents the residuals
of the data points from the best-fit linear function. The standard deviation
of the data points around the best-fit function is 0.92 km/s, suggesting that
the observed data points are well-fitted with a linear function.
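The linear fit can be reproduced from Table 3 with an error-weighted least-squares fit; the arrays below are the epochs, RVs, and total uncertainties listed there:

```python
import numpy as np

# Epochs, RVs (km/s), and total uncertainties (km/s) from Table 3.
t = np.array([2014.379, 2015.635, 2016.381, 2017.341, 2017.344,
              2017.347, 2017.603, 2017.609, 2018.087, 2021.420])
rv = np.array([94.96, 98.18, 96.07, 98.72, 97.72,
               97.21, 98.90, 98.01, 96.93, 99.60])
err = np.array([1.46, 1.15, 1.68, 1.37, 1.09,
                1.12, 2.13, 1.69, 1.27, 1.55])

# Error-weighted linear fit. np.polyfit weights multiply the residuals,
# so w = 1/sigma gives the usual chi-square weighting.
coeff, cov = np.polyfit(t - t.mean(), rv, deg=1, w=1.0 / err, cov=True)
slope = float(coeff[0])                     # km/s/yr
slope_err = float(np.sqrt(cov[0, 0]))       # approximate 1-sigma uncertainty
```

The resulting slope is $\approx 0.48$ km s-1 yr-1, matching the acceleration quoted above; the exact uncertainty depends on how the covariance is scaled, so the quoted $\pm 0.20$ may differ slightly from `slope_err`.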
The RV of S0-6 has also been measured for more than 13 years using the Keck
telescope [22]. Acceleration along the line of sight was also detected as
$0.83\pm 0.12$ km s-1 yr-1, and the difference between the results is only
$1.5\sigma$, considering the uncertainties. The positive value of the
acceleration of S0-6 suggests that it is located in front of Sgr A* with its
acceleration vector pointing away from us.
The marginal detection of acceleration allows us to estimate the three-
dimensional distance $r$ of S0-6 from Sgr A*. The equation of motion for S0-6
whose mass is $m_{*}$ is
$m_{*}\boldsymbol{a}=-G\frac{Mm_{*}}{r^{2}}\frac{\boldsymbol{r}}{r},$ (1)
where $\boldsymbol{a}$ is the acceleration of S0-6, $M$ is the mass of Sgr A*,
$\boldsymbol{r}$ is the position vector, and $G$ is the gravitational
constant. We measured the acceleration along the line of sight, $a_{z}$, where
$\boldsymbol{a}=(a_{x},a_{y},a_{z})$ and $|a_{z}|$ is smaller than the three
dimensional acceleration $|\boldsymbol{a}|$. Hence, we obtain the following:
$|a_{z}|<|\boldsymbol{a}|=G\frac{M}{r^{2}}.$ (2)
This enables us to calculate the upper limit of the distance $r$ using the
following equation:
$r<\sqrt{\frac{GM}{|a_{z}|}}.$ (3)
Substituting $G=6.67\times 10^{-11}$ m3kg-1s-2, $M=4.23\times 10^{6}M_{\odot}$
[10], and $|a_{z}|=0.48$ km s-1 yr-1 into equation (3), we obtain $r<0.2$ pc
$\approx 4\times 10^{4}$ AU. This suggests that S0-6 is truly close to Sgr A*.
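Equation (3) can be evaluated numerically; the physical constants below are standard values:

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30    # solar mass, kg
PC = 3.0857e16      # parsec, m
AU = 1.496e11       # astronomical unit, m
YR = 3.156e7        # year, s

M = 4.23e6 * M_SUN               # mass of Sgr A*
a_z = 0.48e3 / YR                # 0.48 km/s/yr converted to m/s^2

r_max = math.sqrt(G * M / a_z)   # upper limit on r from equation (3)
r_pc = r_max / PC
r_au = r_max / AU
```

This reproduces the limits quoted in the text, $r < 0.2$ pc $\approx 4\times 10^{4}$ AU.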
Note that on the basis of our current data set, the zero-acceleration
hypothesis cannot be ruled out. The significance of the detection of
$\boldsymbol{a}$ is still $2.4\,\sigma$. When fitting the observed RVs using a
function without acceleration (i.e., slope $=0$), we obtain
$\chi^{2}/\mathrm{d.o.f}=7.53/9$. This results in a p-value of 0.58, and we
cannot rule out the zero-acceleration hypothesis for S0-6. Although the
acceleration was detected using the Keck observations ($6.9\,\sigma$)[22],
more observations are needed to conclusively confirm its detection using our
own data.
## 4 Stellar parameters of S0-6
### 4.1 Data sets, StarKit, and calibration
We constrained the stellar parameters of S0-6 by computing synthetic spectra
and comparing them with the observed spectra. To compare the observed and
synthetic spectra, we utilized the StarKit code developed by Kerzendorf & Do
(2015)[23]. StarKit is a spectral fitting framework using Bayesian inference
to determine the best-fit parameters. StarKit simultaneously fits stellar
parameters such as the effective temperature
$T_{\mathrm{eff}}$, the surface gravity $\log g$, the overall metallicity of
all elements [M/H], the $\alpha$ element abundance [$\alpha$/Fe], and the
spectral continuum.
A synthetic spectral library is necessary for StarKit to determine the set of
stellar parameters. Bentley et al.[24] found that the BOSZ grid[25] provides
smaller offsets and scatter than other ones, and we thus decided to utilize
the BOSZ grid. The parameter spaces covered by the BOSZ grid in our analysis
are as follows: $3500\leq T_{\mathrm{eff}}[\mathrm{K}]\leq 35,000$,
$0.0\leq\log g\leq+4.5$, $-2.0\leq\mathrm{[M/H]}\leq+0.75$, and
$-0.25\leq[\alpha/\mathrm{Fe}]\leq+0.5$.
Upon fitting $K$-band stellar spectra with StarKit, we excluded the following
absorption lines and wavelength ranges from the analysis. We did not use
echelle order 25 for the analysis because it includes CO absorption features
at $\gtrsim 22900$ Å, and fitting the CO lines could introduce significant
biases in the determinations of the parameters. The absorption lines due to
Na, Ca, Sc, and V were known to be strong when compared with those for stars
in the galactic disk or the solar neighborhood [28, 26, 27]. Thus, the lines
associated with these four elements were excluded from the fit of the spectra.
Table 4: Data for the calibration sample.

Name | Spectral type | $T_{\rm{eff}}$ (K) | $\log g$ (dex) | [Fe/H] (dex) | [$\alpha$/Fe] (dex)
---|---|---|---|---|---
BD$-01$ 2971 | M5 | $3587\pm 19$ | $0.5^{(\mathrm{a})}$ | $-0.63\pm 0.26$ | $0.023\pm 0.222^{(\mathrm{b})}$
2MASS J19213390+3750202 | K6-7 | $3734\pm 97$ | $1.12\pm 0.10$ | $+0.30\pm 0.08$ | $0.068\pm 0.012^{(\mathrm{c})}$
HD 787 | K4 | $3962\pm 72$ | $1.12\pm 0.31$ | $-0.04\pm 0.13$ | $0.133\pm 0.114^{(\mathrm{b})}$
$\alpha$ Boo (Arcturus, HD 124897) | K1.5 | $4296\pm 110$ | $1.66\pm 0.29$ | $-0.53\pm 0.11$ | $0.228\pm 0.015^{(\mathrm{c})}$
$\mu$ Leo (HD 85503) | K2 | $4494\pm 93$ | $2.44\pm 0.24$ | $+0.27\pm 0.15$ | $0.019\pm 0.009^{(\mathrm{c})}$
$\alpha$ Ari (HD 12929) | K2 | $4587\pm 65$ | $2.59\pm 0.18$ | $-0.11\pm 0.06$ | $0.06^{(\mathrm{b,d})}$
$\delta$ Dra (HD 180711) | G9 | $4856\pm 51$ | $2.69\pm 0.27$ | $-0.12\pm 0.10$ | $0.02\pm 0.02^{(\mathrm{e})}$
$\varepsilon$ Leo (HD 84441) | G1 | $5365\pm 76$ | $2.08\pm 0.29$ | $-0.05\pm 0.20$ | –

(a) Only two measurements, with the same result, are found. (b) [$\alpha$/Fe] $=$
([Mg/Fe]$+$[Ca/Fe]$+$[Ti/Fe])/3. (c) [$\alpha$/Fe] $=$
[$\alpha$/M]$+$[M/H]$-$[Fe/H], where [$\alpha$/M] and [M/H] are from APOGEE.
(d) [X/Fe] $=$ [X/H]$-$[Fe/H] $=$
$A$(X)$-\log(N_{\mathrm{X}}/N_{\mathrm{H}})_{\odot}-$[Fe/H], where $A$(X) is
the abundance of element X, and $N_{\mathrm{X}}$ and $N_{\mathrm{H}}$ are the
numbers of atoms per unit volume for element X and hydrogen, respectively.
Here, the $\log(N_{\mathrm{X}}/N_{\mathrm{H}})_{\odot}$ values for Mg, Ca, and Ti
are taken from Asplund et al. (2009) [30]. (e) Taken from Tautvaišienė et
al. (2020) [31].
The stellar parameters determined by StarKit appear to show systematic
offsets in comparison to other studies [18, 24]. We analyzed the stellar
parameters for a calibration sample (Table 4) to evaluate the systematic
offsets and the scatter when we use StarKit and IRCS $K$-band spectra. The
equivalent widths of Na, Ca, and CO absorption features in the $K$-band are
known to be sensitive to $\log g$ [29]. However, the CO absorption features are
only partly included in our observed spectrum. Thus, we fixed $\log g$ for the
calibration sample to values determined in past studies (Table 4).
Table 5 shows the derived parameter offsets between the StarKit and
literature values. Here, we assume that the stellar [M/H] values are the same
as the [Fe/H] values. These results suggest that the StarKit and IRCS spectra
tend to underestimate $T_{\mathrm{eff}}$, [M/H], and [$\alpha$/Fe] compared to
the literature values. This is consistent with the study by Bentley et
al. [24], who likewise concluded that StarKit underestimates these three
parameters.
Table 5: Offsets between StarKit (ST) and literature (lit) values determined
for different parameters of the calibration sample
Order | $\Delta T_{\mathrm{eff}}^{(a)}$ | $\Delta\mathrm{[M/H]}^{(b)}$ | $\Delta[\alpha/\mathrm{Fe}]^{(c)}$
---|---|---|---
| (K) | (dex) | (dex)
27 | $-199\pm 196$ | $-0.30\pm 0.15$ | $-0.13\pm 0.27$
26 | $-204\pm 142$ | $-0.17\pm 0.06$ | $-0.18\pm 0.06$
(a) $\Delta T_{\mathrm{eff}}=T_{\mathrm{ST}}-T_{\mathrm{lit}}$ (b)
$\Delta\mathrm{[M/H]}=\mathrm{[M/H]_{ST}}-{\mathrm{[Fe/H]_{lit}}}$ (c)
$\Delta[\alpha/\mathrm{Fe}]=[\alpha/\mathrm{Fe}]_{\mathrm{ST}}-[\alpha/\mathrm{Fe}]_{\mathrm{lit}}$
### 4.2 Analysis of S0-6 with StarKit
We conducted spectral fitting of S0-6 using StarKit. The fit was performed on
the spectra of orders 27 and 26 separately. Since the wavelength range of our
observed spectra is insensitive to $\log g$, we estimated $\log g$ by using
Yonsei-Yale isochrones [32], in which we find correlations between
$T_{\mathrm{eff}}$, the metallicity, and $\log g$. The late-type stars in the
central $0.\\!\\!^{\prime\prime}5$ region are likely to be old, 3–10 Gyr [3];
thus we used 5- and 10-Gyr isochrones, the difference between which is
negligible. S0-6 is a late-type giant, and its $T_{\mathrm{eff}}$ was measured
to be $\sim 4000$ K [3]. Giant stars with $T_{\mathrm{eff}}\sim 4000$ K are
expected to have $\log g$ in the range $\approx 0.8$ to $\approx 2.0$ if they
are not very metal poor ($[\mathrm{Fe/H}]>-1$) [19]. For the first attempt to
identify the stellar parameters, we fit the spectra of S0-6 using StarKit, by
constraining $\log g$ to $0.8\leq\log g\leq 2.0$. We then obtained the first
results of the fit for $T_{\mathrm{eff}}$, [M/H], and [$\alpha$/Fe]. On the
basis of the first results and the systematic offset and uncertainties
obtained in §4.1, we applied a stronger constraint on $\log g$ using the
Yonsei-Yale isochrones. With the stronger second constraint on $\log g$, we
fit the spectra with StarKit and obtained a second set of results for
$T_{\mathrm{eff}}$, [M/H], and [$\alpha$/Fe]. By repeating this procedure
several times, we obtained stronger constraints on $\log g$ and more reliable
stellar parameters. Finally, we fit the spectra with a fixed $\log g$. The
fixed values are $\log g=0.9$ for order 27 and $1.2$ for order 26.
Figure 4: Observed spectra (black) and best-fitting model (red) of S0-6, and
residuals between observed and model spectra for order 27 (top) and 26
(bottom). The model parameters for order 27 are $T_{\mathrm{eff}}=3630$ K,
$\log g=0.9$, [M/H]$=-0.69$, and [$\alpha$/Fe] $=-0.23$, and those for order
26 are $T_{\mathrm{eff}}=3790$ K, $\log g=1.2$, [M/H]$=-0.72$, and
[$\alpha$/Fe] $=-0.22$. The BOSZ synthetic spectral library [25] was used to
generate the model spectra. The vertical dashed lines represent
known spectral lines. They are labeled on top of the panels. The grey regions
of the spectra are excluded from the fitting process (see § 4.1).
Table 6 lists the resulting stellar parameters. Fig. 4 presents the spectra of
S0-6 and the best-fitting models for orders 27 and 26. The residuals between
the observed and model spectra are also shown. The standard deviations
of the residuals for orders 27 and 26 are 0.027 and 0.029, respectively.
Considering the systematic offset shown in Table 5, the calibrated parameters
are $T_{\mathrm{eff}}=3749$ K, $\mathrm{[M/H]}=-0.385$,
[$\alpha/\mathrm{Fe}]=-0.097$ (order 27), and $T_{\mathrm{eff}}=3998$ K,
$\mathrm{[M/H]}=-0.546$, [$\alpha/\mathrm{Fe}]=-0.041$ (order 26). Here, we
assume that the systematic offsets are the same for high-S/N spectra (the
calibration sample) and for the lower-quality spectra of S0-6.
Table 6: Results of spectral fitting
Order | StarKit results | | After calibration
---|---|---|---
| $T_{\mathrm{eff}}$ | $\log g$ | [M/H] | [$\alpha$/Fe] | | $T_{\mathrm{eff}}$ | $\log g$ | [M/H] | [$\alpha$/Fe]
| (K) | fixed | (dex) | (dex) | | (K) | fixed | (dex) | (dex)
27 | $3630$ | $0.90$ | $-0.685$ | $-0.227$ | | $3749$ | $0.90$ | $-0.385$ | $-0.097$
26 | $3794$ | $1.2$ | $-0.716$ | $-0.221$ | | $3998$ | $1.2$ | $-0.546$ | $-0.041$
Average | | | | | | $3874$ | | $-0.466$ | $-0.069$
Table 7: Error budget in stellar parameter determination
| $T_{\mathrm{eff}}$ | | [M/H] | | [$\alpha$/Fe]
---|---|---|---|---|---
Order | 27 | 26 | | 27 | 26 | | 27 | 26
Internal(a) | $56$ | $163$ | | $0.20$ | $0.13$ | | $0$ | $0.019$
Calibration(b) | $196$ | $142$ | | $0.15$ | $0.06$ | | $0.27$ | $0.06$
Total in each order(c) | $204$ | $216$ | | $0.25$ | $0.14$ | | $0.27$ | $0.063$
Average(d) | $210$ | | $0.20$ | | $0.196$
Standard error(e) | $124$ | | $0.081$ | | $0.028$
Total uncertainty(f) | $244$ | | $0.22$ | | $0.20$
Final result | $3870\pm 240$ | | $-0.47\pm 0.22$ | | $-0.07\pm 0.20$
(a)Standard deviation of results for two spectra combined from two subsets.
(b)Uncertainty in calibration, taken from Table 5. (c)Quadratic sum of
internal and calibration uncertainties. (d)Error propagation in averaging
results for orders 27 and 26. (e)Standard error of the results for orders 27
and 26, where the standard deviation of [M/H] for the two orders is divided by
$\sqrt{2}$. (f)Quadratic sum of uncertainties in averaging and standard error.
We summarize the uncertainties of our measurements in Table 7. We divided the
observed data into two subsets to measure the internal uncertainty for each
order. The spectra in the subsets were combined, which resulted in two
different spectra for each order. We performed fitting of the spectra with
StarKit and determined the stellar parameters. The “internal” uncertainties in
Table 7 are the standard errors of the results of the two subsets. To estimate
the total uncertainty in each order, the internal uncertainty and that derived
in calibration were quadratically summed (“Total in each order” in Table 7).
The “Average” uncertainties were determined by error propagation in averaging
the results for the two orders. In addition, we include the standard error of
the results for the two orders as a separate uncertainty. Finally, the
“average” and “standard error” uncertainties are
quadratically summed in order to derive the total uncertainties. The derived
stellar parameters of S0-6 are as follows: $T_{\mathrm{eff}}=3870\pm 240$ K,
[M/H] $=-0.47\pm 0.22$, and [$\alpha$/Fe] $=-0.07\pm 0.20$. For
$T_{\mathrm{eff}}$, our result agrees very well with that of Habibi et
al. [3], $\approx 4000$–$4100$ K.
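The error budget in Table 7 can be reproduced step by step from the per-order numbers. A minimal Python sketch, using the values from Tables 6 and 7 (note that the tabulated “Average” row is reproduced by the quadratic mean of the two per-order totals):

```python
from math import sqrt, hypot

# Per-order uncertainties from Table 7 and calibrated values from Table 6.
internal   = {"Teff": (56, 163),    "MH": (0.20, 0.13),      "aFe": (0.0, 0.019)}
calib      = {"Teff": (196, 142),   "MH": (0.15, 0.06),      "aFe": (0.27, 0.06)}
calibrated = {"Teff": (3749, 3998), "MH": (-0.385, -0.546),  "aFe": (-0.097, -0.041)}

budget = {}
for key in internal:
    # Total in each order: quadratic sum of internal and calibration errors.
    tot27 = hypot(internal[key][0], calib[key][0])
    tot26 = hypot(internal[key][1], calib[key][1])
    # "Average": quadratic mean (RMS) of the two per-order totals.
    avg = sqrt((tot27**2 + tot26**2) / 2)
    # Standard error: sample std of the two order results divided by sqrt(2),
    # which for two points reduces to half their absolute difference.
    se = abs(calibrated[key][0] - calibrated[key][1]) / 2
    # Total uncertainty: quadratic sum of "Average" and standard error.
    budget[key] = (tot27, tot26, avg, se, hypot(avg, se))

print(budget["Teff"])  # totals ~204 and ~216 K, average ~210 K, total ~244 K
```

Running this reproduces the “Total in each order”, “Average”, “Standard error”, and “Total uncertainty” rows of Table 7 to within rounding.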
### 4.3 Analysis of S0-6 with Scandium lines and SME
We know from earlier studies that the scandium transitions between levels of
the electron configurations 3d$^{2}$4s and 3d4s4p, found near $2.2\,\mu$m, are
sensitive to the $T_{\textrm{eff}}$ of stars [27, 33]. This is because, toward
4000 K, neutral scandium becomes a minority species, which makes the scandium
spectral features very sensitive to temperature. This sensitivity is further
strengthened by the fact that neutral scandium has a large nuclear spin and
hence shows hyperfine structure; for this transition, with a single s-electron
in the lower level, the hyperfine splitting is particularly large. Moreover,
the lower level of this transition is metastable, that is, it has a long
lifetime against spontaneous decay and therefore carries a large population.
The combination of hyperfine structure and a large level population makes
these lines particularly strong, which benefits establishing an empirical
relation between the equivalent widths of the scandium lines and
$T_{\textrm{eff}}$ [33].
The empirical relation developed by Thorsbro et al. [33] measures the
equivalent width using the IRAF sbands task with a 3 Å window around the
scandium lines. A linear regression between the equivalent width and the
temperature yields a relation with a standard deviation on the order of 50 K.
We apply this empirical relation to the scandium lines at 21730 Å, 21812 Å,
and 21842 Å. Assuming that [Sc/Fe] is not likely to vary by more than 0.15
dex [34] and that the variation of 0.1 dex in [Sc/Fe] leads to a variation of
$\sim$100 K [27], the uncertainty is $\sim$150 K, which includes the
statistical standard deviation from the empirical relation. Hence, using the
empirical relation, we found $T_{\textrm{eff}}=3830\pm 150$ K. This agrees
well with the results obtained using StarKit.
We also analyzed the star using SME [35, 36] to determine chemical abundances.
SME interpolates in a grid of one-dimensional MARCS atmosphere models [37].
These are hydrostatic model atmospheres in spherical geometry, computed
assuming LTE, chemical equilibrium, homogeneity, and conservation of the total
flux. The SME code has the advantage that it includes a flexible $\chi^{2}$
minimization tool to find the solution that best fits an observed spectrum in
a pre-specified spectral window.
SME also has a powerful continuum normalization routine, renormalizing the
spectra at every fit iteration. In this way, SME can account for the
suppressed continuum levels that can arise in the crowded spectra of cool
stars. We found that a good continuum normalization was possible for orders 26
and 27 but less so for order 25, confirming our earlier choice not to use that
order in the analysis. Order 25 contains a CO bandhead in the latter part of
the order, which extends beyond the order's edge, leaving no anchor points for
the continuum normalization at the long-wavelength end of the order.
To determine the chemical abundances, we need to have an accurate list of the
atomic and molecular energy-level transitions. We use a list of atomic energy
level transitions from a previous work by Thorsbro et al. [38], where
wavelengths and line strengths (astrophysical $\log gf$-values) have been
updated using the solar center intensity atlas [39]. We also include CN
molecular lines, since the $K$-band is known for crowded CN-lines [40]. Table
8 summarizes the lines used in this work.
Table 8: Lines used for abundance determinations
Wavelength in air (Å) | $\log gf$ | $E_{\mathrm{exc}}$ (eV)
---|---|---
Fe I
20948.086 | $-0.883$ | 6.1190
20991.083 | $-3.019$ | 4.1427
21123.885 | $-0.857$ | 6.0664
21124.505 | $-1.647$ | 5.3342
21756.929 | $-0.715$ | 6.2182
21779.651 | $-4.298$ | 3.6398
Mg I
21059.757 | $-0.709$ | 6.7791
21061.095 | $-0.278$ | 6.7791
Ca I
20937.903 | $-1.357$ | 4.6807
20962.570 | $-0.740$ | 4.6814
20972.529 | $-1.002$ | 4.6807
Ti I
21767.610 | $-2.155$ | 2.5784
21782.997 | $-1.170$ | 1.7489
21897.437 | $-1.476$ | 1.7393
22004.510 | $-1.950$ | 1.7335
Table 9 summarizes the results of the analysis with SME. We found a
metallicity value of [Fe/H] = $-0.4\pm 0.15$. For the $\alpha$ element
abundances, we found [Mg/Fe] = $-0.30\pm 0.15$, [Ca/Fe] = $0.1\pm 0.15$, and
[Ti/Fe] = $-0.4\pm 0.15$, giving an average [$\alpha$/Fe] = $-0.2\pm 0.15$,
based on an average of Mg, Ca, and Ti. These also agree well with the results
from StarKit.
Table 9: S0-6 parameters using the scandium method and SME
$T_{\mathrm{eff}}$ | [Fe/H] | [Mg/Fe] | [Ca/Fe] | [Ti/Fe] | [$\alpha$/Fe](a)
---|---|---|---|---|---
(K) | (dex) | (dex) | (dex) | (dex) | (dex)
$3830\pm 150$ | $-0.40\pm 0.15$ | $-0.30\pm 0.15$ | $+0.10\pm 0.15$ | $-0.40\pm 0.15$ | $-0.20\pm 0.15$
(a) [$\alpha$/Fe] = ([Ca/Fe] + [Mg/Fe] + [Ti/Fe])/3.
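The quoted [$\alpha$/Fe] is the straight mean of the three ratios in Table 9. As a quick check (note that if the three $\pm 0.15$ errors were fully independent, the error of the mean would shrink to $\approx 0.09$; the quoted $\pm 0.15$ presumably treats them as largely systematic):

```python
from math import sqrt

# SME abundance ratios from Table 9 (dex), each quoted with a 0.15 dex error.
mg_fe, ca_fe, ti_fe = -0.30, 0.10, -0.40
err = 0.15

alpha_fe = (mg_fe + ca_fe + ti_fe) / 3   # mean of Mg, Ca, Ti -> -0.20
err_if_independent = err / sqrt(3)       # ~0.087, smaller than the quoted 0.15
```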
## 5 Discussion
### 5.1 Location and age of S0-6
We have marginally detected an acceleration in the RV for S0-6 (§ 3). This
suggests that S0-6 is accelerated by the SMBH [22]. When we use the
acceleration value and the mass of the SMBH, we obtain an upper limit for the
distance of S0-6 from the SMBH of $\sim 0.2$ pc $\approx 4\times 10^{4}$ AU.
Hence, our result suggests that S0-6 is located close to the SMBH. For the
late-type stars within $\sim 1\mbox{${}^{\prime\prime}$}$ from Sgr A*,
$T_{\mathrm{eff}}$, bolometric magnitude, and spectral types are determined
[3], but no metallicity measurements have been made. If S0-6 is indeed
located within $0.2$ pc of the SMBH, then to our knowledge this study is the
first measurement of the chemical abundances of a late-type S-star.
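The quoted upper limit follows from requiring the measured RV acceleration not to exceed the line-of-sight gravitational acceleration, $a\leq GM/r^{2}$, i.e. $r\leq\sqrt{GM/a}$. A rough sketch, assuming an SMBH mass of $4\times 10^{6}\,M_{\odot}$ (a standard value, not stated in this excerpt):

```python
from math import sqrt

# Physical constants (SI) and unit conversions.
G     = 6.674e-11   # m^3 kg^-1 s^-2
M_sun = 1.989e30    # kg
pc    = 3.086e16    # m
au    = 1.496e11    # m
yr    = 3.156e7     # s

M_bh  = 4.0e6 * M_sun    # assumed SMBH mass
a_los = 0.48e3 / yr      # measured RV acceleration, 0.48 km/s/yr in m s^-2

# a_los <= GM/r^2  =>  r <= sqrt(GM/a_los)
r_max = sqrt(G * M_bh / a_los)
r_max_pc = r_max / pc    # ~0.2 pc
r_max_au = r_max / au    # ~4e4 AU
```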
We found no clear signal of binarity in our observations of S0-6. Fig. 3
shows that the observed RVs are very well fitted by a linear function, and no
clear periodic signal is found in our observations. S0-6 was also observed
with Keck/OSIRIS for 13 years [22]; those authors find no significant periodic
signal in the RVs of S0-6 and place an upper limit of $0.1M_{\odot}$ on the
mass of a hypothetical companion object.
In order to estimate the age of S0-6, we plotted it on an HR diagram. The
bolometric magnitude $M_{\mathrm{bol}}$ can be calculated as
$M_{\mathrm{bol}}=K-A_{K}-\mathrm{DM}+\mathrm{BC}_{K}$, where $K$, $A_{K}$,
and DM are the observed $K$-band magnitude, the amount of the interstellar
extinction in the $K$-band, and the distance modulus, respectively. Here, we
use $K=13.95\pm 0.04$ [41], $A_{K}=2.46\pm 0.03$ [42], and
$\mathrm{DM}=14.5\pm 0.18$ [10]. We also used the relation
$\mathrm{BC}_{K}=2.6-(T_{\mathrm{eff}}-3800)/1500$ [43] to determine the
bolometric correction $\mathrm{BC}_{K}$. We finally obtained
$M_{\mathrm{bol}}=-0.50\pm 0.25$ and $-0.47\pm 0.21$ for the StarKit and
scandium method results, respectively.
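As a check on the arithmetic, the $M_{\mathrm{bol}}$ computation and its error propagation can be sketched in a few lines of Python (inputs as quoted above; the propagated uncertainties reproduce the quoted $\pm 0.25$ and $\pm 0.21$, and the central values land within $\sim 0.05$ mag of the quoted ones):

```python
from math import sqrt

# Photometric inputs quoted in the text (value, 1-sigma uncertainty).
K,  sK  = 13.95, 0.04   # observed K-band magnitude
AK, sAK = 2.46,  0.03   # K-band extinction
DM, sDM = 14.5,  0.18   # distance modulus

def m_bol(teff, s_teff):
    """M_bol = K - A_K - DM + BC_K, with BC_K = 2.6 - (Teff - 3800)/1500."""
    bc = 2.6 - (teff - 3800.0) / 1500.0
    s_bc = s_teff / 1500.0   # only Teff contributes to the BC_K uncertainty
    m = K - AK - DM + bc
    s = sqrt(sK**2 + sAK**2 + sDM**2 + s_bc**2)
    return m, s

m_sk, s_sk = m_bol(3870, 240)   # StarKit temperature
m_sc, s_sc = m_bol(3830, 150)   # scandium-method temperature
```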
S0-6 is plotted on an HR diagram in Fig. 5. The result of Habibi et al.[3] is
also plotted, which agrees well with our results. When S0-6 is compared with
the theoretical $Z=0.3Z_{\odot}$ isochrones[44] (solid lines in Fig. 5), it is
located to the right of the 10 Gyr isochrone, which suggests that S0-6 is an
old star, as old as $\gtrsim 10$ Gyr.
Figure 5: HR diagram showing $T_{\mathrm{eff}}$ and $M_{\mathrm{bol}}$ of S0-6
obtained by our study and Habibi et al. [3] (black circle). The result of the
StarKit, and that of the scandium method and SME are represented by the red
circle and orange square, respectively. Overlaid are $Z=0.3Z_{\odot}$
theoretical isochrones [44] for 0.5, 1, 5, and 10 Gyr (solid lines, from left
to right). The dashed line is a theoretical 10 Gyr isochrone for
$Z=Z_{\odot}$.
To further confirm the reliability of our $T_{\mathrm{eff}}$ determination, we
estimated it using the line-depth-ratio (LDR) method. In the LDR method,
ratios of the depths of two stellar absorption lines are employed to determine
$T_{\mathrm{eff}}$. Using well-studied bright stars such as $\alpha$ Boo and
$\mu$ Leo, we found good LDR–$T_{\mathrm{eff}}$ relations for six line pairs
in echelle order 26 (Nishiyama et al., in preparation). The
line pairs are shown in Table 10. The mean and standard deviation of
$T_{\mathrm{eff}}$ using the six LDR-$T_{\mathrm{eff}}$ relations is $4100\pm
90$ K, where we show only the statistical uncertainty. This is slightly higher
than $T_{\mathrm{eff}}$ derived by StarKit and the scandium method but
consistent with them within the uncertainties. Additionally,
$T_{\mathrm{eff}}$ derived by StarKit for echelle order 26 is 3998 K, which
agrees very well with that derived by the LDR method.
Table 10: LDR pairs and derived $T_{\mathrm{eff}}$
Element | Wavelength | Element | Wavelength | $T_{\mathrm{eff}}$(a)
---|---|---|---|---
| (Å) | | (Å) | (K)
Ti I | 22010.51 | Na I | 22089.69 | 4105
Ti I | 22010.51 | Na I | 22062.42 | 4134
Sc I | 22058.00 | Na I | 22089.69 | 3977
Sc I | 22058.00 | Na I | 22062.42 | 4012
Ti I | 21903.35 | Na I | 22089.69 | 4166
Ti I | 21903.35 | Na I | 22062.42 | 4190
(a) The mean and standard deviation of the six $T_{\mathrm{eff}}$ values are $4100\pm 90$ K.
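The quoted mean and scatter follow directly from the six temperatures in Table 10; a quick check (the sample standard deviation comes out near 85 K, consistent with the quoted $\sim 90$ K):

```python
from statistics import mean, stdev

# Temperatures (K) from the six LDR pairs in Table 10.
t_ldr = [4105, 4134, 3977, 4012, 4166, 4190]

t_mean = mean(t_ldr)   # ~4097 K, quoted as 4100 K
t_std  = stdev(t_ldr)  # sample standard deviation, ~85 K
```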
Similar to S0-6, other NSC stars located to the right of and below the oldest
isochrone have been found [45, 46, 3]. Chen et al. [47] claimed that this
discrepancy can be explained by assuming younger stars with higher
metallicity. In this study, we find a similar discrepancy for an old, metal-
poor star, which suggests a previously unknown systematic uncertainty in the
observations and/or theoretical models.
### 5.2 Chemical abundance of S0-6 and stars in our galaxy
Recent observations [18, 19, 20] found that there is a metal-poor stellar
population ($\sim 7\,$%) in the NSC, whose mean metallicity is
$[\mathrm{M/H}]\sim-0.5$. Although Do et al. did not find stars with
$[\mathrm{M/H}]\sim-0.5$ [17], Rich et al. found stars with
$[\mathrm{Fe/H}]\sim-0.5$ [48]. These results suggest that stars with [M/H]
similar to S0-6 are not rare in the NSC, and S0-6 may share the origin of the
NSC metal-poor population. An interesting aspect of S0-6 is its proximity to
Sgr A*, because the strong tidal force associated with the SMBH may have an
observable impact on the nature of S0-6.
To examine the origin of S0-6 in more detail, we analyze its position in an
[$\alpha$/Fe] vs. [M/H] diagram, which is useful for clarifying the chemical
evolution of stars and stellar systems. The top panel in Fig. 6 plots S0-6 and
stars in our galaxy. It is clear that most of the stars in the galaxy with
[M/H]$\lesssim 0.0$ show positive [$\alpha$/Fe]. We can observe two sequences
in the diagram for disk stars (green circles) and bulge stars (yellow
circles). They overlap, but the bulge stars tend to have larger [$\alpha$/Fe]
than the disk stars. At [Fe/H] $\sim-0.5$, the [$\alpha$/Fe] value for S0-6 is
slightly smaller than that for the bulge stars, whereas it is consistent with
the disk stars considering the observational uncertainty. There is a metal-
poor population in the bulge, and for some elements, these metal-poor stars
show different distributions in the diagrams from the other bulge stars and
disk stars [52]. Although the sequence is not clear in Fig. 6, S0-6 might be
on the bulge metal-poor population sequence. We do not yet have a large
enough sample to determine the distribution of late-type NSC stars in the
diagram, because such NSC stars are difficult to observe. A pioneering study to
determine the $\alpha$-element abundance of the late-type stars in the NSC
region, a few pc from Sgr A*, was recently conducted by Thorsbro et al.[33],
in which they determined [Si/Fe] for 15 stars (Fig. 6). All of the stars
measured by Thorsbro et al.[33] show larger [M/H] and [$\alpha$/Fe] than S0-6.
The [M/H] and [$\alpha$/Fe] values were measured for two late-type stars
within 1 pc from Sgr A* by Bentley et al.[24]. They employed a method very
similar to ours and found metal-poor stars, although with positive
[$\alpha$/Fe] values. One of the stars, NE1-1-003, shows chemical abundances
very similar to those of S0-6, [M/H]$=-0.59\pm 0.11$ and [$\alpha$/Fe]$=0.05\pm 0.15$.
Although the abundances of S0-6 are likely to be different from a large part
of the stars in the NSC region, there might exist low [M/H] and low
[$\alpha$/Fe] abundance stars in the region, for example, S0-6 and NE1-1-003.
Figure 6: (Top) [$\alpha$/Fe] vs. [M/H] plot for stars and stellar systems in
our galaxy. S0-6 is plotted as the red filled circle (StarKit result) and red
square (SME result). The abundances of globular clusters are from Pritzl et
al.[49] (cyan circles) assuming [Fe/H] $=$ [M/H]. The abundances of disk
stars, bulge stars, and bulge metal-poor stars are from Bensby et al.[50]
(green circles), Bensby et al.[51](yellow circles), and Lucey et al.[52] (gold
circles), respectively, assuming [Fe/H] $=$ [M/H] and [$\alpha$/Fe]
$=($[Mg/Fe]$+$[Si/Fe]$+$[Ca/Fe]$)/3$. The abundance of NSC stars are from
Thorsbro et al.[33] (magenta circles) assuming [Fe/H] $=$ [M/H] and
[$\alpha$/Fe] $=$ [Si/Fe], and from Bentley et al.[24] (magenta squares).
(Bottom) [$\alpha$/Fe] vs. [M/H] plot for stars in nearby dwarf galaxies. S0-6
is plotted by the red-filled circle. The abundance of the Sculptor dwarf
galaxy is from Hill et al.[53] (olive squares), and those for the Sgr dwarf
galaxy are from Monaco et al.[54] (light green circles), assuming [Fe/H] $=$
[M/H] and [$\alpha$/Fe] = ([Mg/Fe] + [Ca/Fe])/2. The abundance of LMC is from
Van der Swaelmen et al.[55] (dark green triangle) assuming [Fe/H] $=$ [M/H]
and [$\alpha$/Fe] = ([Mg/Fe] + [Si/Fe] + [Ca/Fe])/3.
### 5.3 Chemical abundance of S0-6 and stars in dwarf galaxies
Generally, stars in dwarf galaxies experience a different chemical evolution
from those in our galaxy, and thus they show a different distribution in the
[$\alpha$/Fe] vs. [M/H] diagram. In the bottom panel in Fig. 6, stellar
abundances for stars in three dwarf galaxies, the Sagittarius dwarf spheroidal
(Sgr) galaxy, the Sculptor dwarf galaxy, and the Large Magellanic Cloud (LMC)
are shown and compared with S0-6.
As can be seen, the [M/H] of S0-6 is higher than that of the most metal-rich
population in the Sculptor dwarf galaxy. On the other hand, the position of
S0-6 lies within the distributions of the stars of the LMC and the Sgr galaxy.
Even so, we do not conclude that S0-6 was born in the LMC or the Sgr galaxy.
The LMC is located at a distance of $\approx 50$ kpc and is orbiting around
our galaxy. A close encounter between the LMC and our galaxy likely occurred
$\sim 1.5$ Gyr ago, but the LMC has not been clearly tidally disrupted. The
Sgr galaxy is experiencing strong and disruptive tidal interactions with our
galaxy and has been almost totally disrupted, but has not yet been fully
merged. Hence it is difficult to imagine that stars born in the LMC or Sgr
galaxy would have migrated to the very center of our galaxy.
A more natural explanation for the origin might be that S0-6 was born in a
dwarf galaxy and then migrated to the center of our galaxy. Recently, a minor,
metal-poor population with kinematics distinct from the major, metal-rich
population was found in the Galactic NSC [18, 19], and its
mass fraction was estimated to be $\sim 7$–$18\,\%$ [19, 56]. It might be
natural to assume that S0-6 was born in the same cluster as the metal-poor
population, migrated to the center of our galaxy as a member of the cluster,
and reached the central pc. One scenario to explain the low chemical
abundances of S0-6 is that it was born in the NSC of a dwarf galaxy, rather
than in a globular cluster of our galaxy.
## 6 Conclusions
In this paper, we report the results of our NIR spectroscopic monitoring
observations of the late-type S-star S0-6/S10 using the Subaru telescope and
IRCS. Our main results are as follows. We measured the RV of S0-6 with an
average uncertainty of $1.45\,$km s$^{-1}$. This is one of the most precise RV
measurements of a star in the central $1\mbox{${}^{\prime\prime}$}$ region of
our galaxy. We marginally detected an acceleration in the RV of S0-6 of
$0.48\pm 0.20$ km s$^{-1}$ yr$^{-1}$ from $2014$ to $2021$, and obtained an upper limit
for the distance of S0-6 from the SMBH of $0.2\,\mathrm{pc}\approx 4\times
10^{4}\,$AU. We determined the stellar parameters for S0-6 using the StarKit
code and the observed spectra. The parameters are $T_{\mathrm{eff}}=3870\pm
240$ K, [M/H] $=-0.47\pm 0.22$, [$\alpha$/Fe] $=-0.07\pm 0.20$. We also
determined stellar parameters using the scandium method and SME, and obtained
$T_{\mathrm{eff}}=3830\pm 150$ K, [Fe/H] $=-0.40\pm 0.15$, which are
consistent with the StarKit results, and both results suggest the very old age
of S0-6, $\gtrsim 10$ Gyr. The abundances of other elements are [Ca/Fe]
$=+0.10\pm 0.15$, [Mg/Fe] $=-0.30\pm 0.15$, and [Ti/Fe] $=-0.40\pm 0.15$. The
chemical abundances and age suggest that S0-6 experienced a different chemical
evolution from other stars in the center of our galaxy.
## 7 Acknowledgements
We wish to thank the Subaru Telescope staff for the support provided for our
observations. We thank Tuan Do, Rory Bentley, and Anja Feldmeier-Krause for
their kind assistance with the analysis using StarKit. This work was supported
by JSPS KAKENHI, Grant-in-Aid for Challenging Exploratory Research (Grant
Numbers 18K18760, 20H00178), Grant-in-Aid for Scientific Research (A) (Grant
Numbers 20H00178, 19H00695, 18H03720), and Grant-in-Aid for Scientific
Research (B) (50640843). This work was also supported by the Tohoku Initiative
for Fostering Global Researchers for Interdisciplinary Sciences (TI-FRIS) of
MEXT’s Strategic Professional Development Program for Young Researchers. RS
acknowledges financial support from the State Agency for Research of the
Spanish MCIU through the “Center of Excellence Severo Ochoa” award for the
Instituto de Astrofísica de Andalucía (SEV-2017-0709) and from grant
EUR2022-134031 funded by MCIN/AEI/10.13039/501100011033 and by the European
Union NextGenerationEU/PRTR. BT acknowledges the financial support from the
Wenner-Gren Foundation (WGF2022-0041). This research is based on data
collected at the Subaru Telescope, which is operated by the National
Astronomical Observatory of Japan. We are honored and grateful for the
opportunity to observe the Universe from Maunakea, which has cultural,
historical, and natural significance in Hawaii.
## References
* [1] Habibi, M., Gillessen, S., Martins, F., Eisenhauer, F., Plewa, P. M., Pfuhl, O., et al. (2017) Twelve Years of Spectroscopic Monitoring in the Galactic Center: The Closest Look at S-stars near the Black Hole. ApJ. 847, id.120.
* [2] Ghez, A. M., Duchêne, G., Matthews, K., Hornstein, S. D., Tanner, A., Larkin, J., et al. (2003) The First Measurement of Spectral Lines in a Short-Period Star Bound to the Galaxy’s Central Black Hole: A Paradox of Youth. ApJ. 586, L127–L131.
* [3] Habibi, M., Gillessen, S., Pfuhl, O., Eisenhauer, F., Plewa, P. M., von Fellenberg, S., et al. (2019) Spectroscopic Detection of a Cusp of Late-type Stars around the Central Black Hole in the Milky Way. ApJ. 872, id.L15
* [4] Schödel, R., Ott, T., Genzel, R., Hofmann, R., Lehnert, M., Eckart, A., et al. (2002) A star in a 15.2-year orbit around the supermassive black hole at the centre of the Milky Way. Nature. 419, 694–696.
* [5] Nishiyama, S., Saida, H., Takamori, Y., Takahashi, M., Schödel, R., Najarro, F., et al. (2018) Radial velocity measurements of an orbiting star around Sgr A*. PASJ. 70, id.74.
* [6] Kobayashi, N., Tokunaga, A. T., Terada, H., Goto, M., Weber, M., Potter, R., et al. (2000) IRCS: infrared camera and spectrograph for the Subaru Telescope. SPIE Conference Series. 4008, 1056–1066.
* [7] Iye, M., Karoji, H., Ando, H., Kaifu, N., Kodaira, K., Aoki, K., et al. (2004) Current Performance and On-Going Improvements of the 8.2 m Subaru Telescope. PASJ. 56, 381–397.
* [8] Hayano, Y., Takami, H., Oya, S., Hattori, M., Saito, Y., Watanabe, M., et al. (2010) Commissioning status of Subaru laser guide star adaptive optics system. SPIE Conference Series. 7736, id. 77360N.
* [9] Minowa, Y., Hayano, Y., Terada, H., Pyo, T-S., Oya, S., Hattori, M., et al. (2012) Subaru laser guide adaptive optics system: performance and science operation. SPIE Conference Series. 8447, id.84471F.
* [10] Saida, H., Nishiyama, S., Ohgami, T., Takamori, Y., Takahashi, M., Minowa, Y., et al. (2019) A significant feature in the general relativistic time evolution of the redshift of photons coming from a star orbiting Sgr A*. PASJ. 71, id.126.
* [11] GRAVITY Collaboration. (2019) A geometric distance measurement to the Galactic center black hole with 0.3% uncertainty. A&A. 625, id.L10.
* [12] Do, T., Hees, A., Ghez, A., Martinez, G. D., Chu, D. S., Jia, S., et al. (2019b) Relativistic redshift of the star S0-2 orbiting the Galactic Center supermassive black hole. Science. 365, 664–668.
* [13] Event Horizon Telescope Collaboration. (2022) First Sagittarius A* Event Horizon Telescope Results. I. The Shadow of the Supermassive Black Hole in the Center of the Milky Way. ApJ. 930, id.L12.
* [14] Mapelli, M. and Gualandris, A. (2016) Star Formation and Dynamics in the Galactic Centre. Lecture Notes in Physics, Berlin Springer Verlag. 905, 205.
* [15] Schödel, R., Feldmeier, A., Neumayer, N., Meyer, L., and Yelda, S. (2014) The nuclear cluster of the Milky Way: our primary testbed for the interaction of a dense star cluster with a massive black hole. Classical and Quantum Gravity, 31, id.244007
* [16] Neumayer, N., Seth, A. and Böker, T. (2020) Nuclear star clusters. A&A Rev.. 28, eid.4.
* [17] Do, T., Kerzendorf, W., Winsor, N., Støstad, M., Morris, M. R., Lu, J. R., et al. (2015) Discovery of Low-metallicity Stars in the Central Parsec of the Milky Way. ApJ. 809, id.143.
* [18] Feldmeier-Krause, A., Kerzendorf, W., Neumayer, N., Schödel, R., Nogueras-Lara, F., Do, T., et al. (2017) KMOS view of the Galactic Centre – II. Metallicity distribution of late-type stars. MNRAS. 464, 194–209.
* [19] Feldmeier-Krause, A., Kerzendorf, W., Do, T., Nogueras-Lara, F., Neumayer, N., Walcher, C. J., et al. (2020) Asymmetric spatial distribution of subsolar metallicity stars in the Milky Way nuclear star cluster. MNRAS. 494, 396–410.
* [20] Do, T., David Martinez, G., Kerzendorf, W., Feldmeier-Krause, A., Arca Sedda, M., Neumayer, N., et al. (2020) Revealing the Formation of the Milky Way Nuclear Star Cluster via Chemo-dynamical Modeling. ApJ. 901, id.L28.
* [21] Fukue, K., Matsunaga, N., Yamamoto, R., Kondo, S., Kobayashi, N., Ikeda, Y., et al. (2015) Line-depth Ratios in H-band Spectra to Determine Effective Temperatures of G- and K-type Giants and Supergiants. ApJ. 812, id.64.
* [22] Chu, D. S., Do, T., Ghez, A., Gautam, A. K., Ciurlo, A., O’neil, K. K., et al. (2023) Evidence of a Decreased Binary Fraction for Massive Stars within 20 milliparsecs of the Supermassive Black Hole at the Galactic Center. ApJ. 948, id.94.
* [23] Kerzendorf, W. and Do, T. (2015) starkit: First real release.
# Hyperparameter optimization of orthogonal functions in the numerical
solution of differential equations
Alireza Afzal Aghaei <EMAIL_ADDRESS> Department of Computer and Data Science, Faculty of Mathematical Sciences, Shahid Beheshti University, Tehran, Iran. Kourosh Parand <EMAIL_ADDRESS> Department of Computer and Data Science, Faculty of Mathematical Sciences, Shahid Beheshti University, Tehran, Iran; Department of Cognitive Modeling, Institute for Cognitive and Brain Sciences, Shahid Beheshti University, Tehran, Iran.
###### Abstract
This paper considers the hyperparameter optimization problem for the mathematical
techniques that arise in the numerical solution of differential and integral
equations. The well-known grid and random search approaches are developed, in a
parallel manner, to find the optimal set of hyperparameters. Employing rational
Jacobi functions, we run these algorithms on two nonlinear benchmark
differential equations on the semi-infinite domain. The configurations contain
different rational mappings along with their length scale parameter and the
Jacobi function parameters. These trials are run on the collocation
Least-Squares Support Vector Regression (CLS-SVR) model, a novel numerical
simulation approach based on spectral methods. In addition, we address the
sensitivity of the numerical stability and convergence of the CLS-SVR model to
these hyperparameters. The experiments show that this technique can effectively
improve state-of-the-art results.
Keywords— Nonlinear differential equations, Hyperparameter optimization,
Jacobi polynomials, Machine learning
## 1 Introduction
Estimating the unknown dynamics of a given physical system is an essential
task in science and engineering. Mathematicians simulate these systems after
expressing them in functional equations such as differential and integral
equations. The failure of analytical approaches on these problems, which mainly
contain nonlinear terms, has led researchers to develop various numerical
techniques; Finite Element (FEM), Finite Volume (FVM), meshless, and spectral
methods are some well-known examples. These techniques fit a simple mathematical
model defined by parameters and hyperparameters. The former are internal
variables learned during the training process, whereas the latter are external
configurations that should be optimized to find the best estimator. These terms
originate in the machine learning literature.
For example, in a supervised machine learning task, the Support Vector Machine
(SVM) algorithm considers a hyperplane for separating the given data into
different classes. The vector that defines the hyperplane is a parameter,
whereas the kernel function is a hyperparameter. Likewise, a linear
combination of unknown weights and basis functions is usually considered for
approximating the solution to a differential equation. Here the unknown
weights are parameters, while the basis functions are a hyperparameter.
Choosing the best combination of hyperparameters is critical and may
significantly affect the result. Scientists take advantage of prior knowledge to
choose reasonable, if sub-optimal, values. For example, it is common to use
logarithmically spaced values for the regularization parameter of a machine
learning model. In a mathematical setting, the intrinsic nature of the problem
is used to choose appropriate hyperparameters: periodic problems are usually
approximated with Fourier basis functions, problems that approach a steady state
may be approximated with rational functions, and so on.
With the development of artificial intelligence, hyperparameter optimization,
that is, tuning and finding the optimal hyperparameters, has evolved
considerably. Various approaches have been proposed to handle this issue:
exhaustive search [1], probabilistic algorithms [2], gradient-based methods [3],
and meta-heuristic algorithms [4] are some of these efforts. In a similar
manner, mathematicians have developed routines to find the optimal
hyperparameters that arise in numerical simulation techniques. To name a few,
Boyd [5] proved some theorems for rational Chebyshev functions; Sanyasiraju et
al. [6] used a local optimization algorithm for Radial Basis Functions (RBFs);
Cavoretto et al. [7] combined leave-one-out cross-validation with univariate
global optimization techniques for RBFs; Tanguy et al. [8] presented a quadratic
minimization procedure for the optimal placement of poles in rational
approximations by Müntz–Laguerre functions; and Mi et al. [9] developed two
algorithms for the adaptive selection of continuous rational orthogonal basis
functions. However, it is valuable to provide methods that apply to a broad
range of problems rather than only specific cases.
A wide range of physical problems involving initial and boundary value problems
are defined on infinite or semi-infinite domains, where approximating an
accurate solution is essential. These problems are mostly solved with orthogonal
polynomials or rational functions having completeness and orthogonality
properties on the problem domain. The Hermite, Laguerre, and rational Jacobi
functions are the most widely used families applied to these problems. The
computational and mathematical properties of rational functions have encouraged
scientists to choose them as basis functions [10, 11, 12, 13, 14, 15, 16, 17,
18]. However, these functions suffer from hyperparameters, such as the rational
mapping and the length scale parameter, which can disturb the numerical solution
in some cases [19].
In this research, we develop algorithms and investigate the application of
machine learning techniques to optimize the hyperparameters that appear in the
numerical simulation of differential equations on semi-infinite domains. In the
remainder of the article, we discuss the preliminaries (Section 2) and the
proposed method together with state-of-the-art numerical results (Section 3).
Finally, concluding remarks are given in the last section.
## 2 Preliminaries
In this section, we explain the prerequisites needed in the following sections.
We first describe the Jacobi polynomials, then recall the CLS-SVR method, and
finally discuss the hyperparameter optimization techniques used in the rest of
the work.
### 2.1 Orthogonal Polynomials
Hermite, Laguerre, and Jacobi polynomials are the most well-known orthogonal
polynomials, defined on infinite, semi-infinite, and finite intervals,
respectively. They can be used to approximate functions on the corresponding
domains. However, approximating rational-like functions with polynomials may not
be very accurate, so researchers have proposed various rational mappings to
handle this issue. In general, Jacobi polynomials with hyperparameters
$\alpha,\beta$ are defined on the interval $[-1,1]$ by the recursive expression
$\displaystyle
J_{i}^{\alpha,\beta}(x)=-\frac{(\alpha+i-1)(\beta+i-1)(\alpha+\beta+2i)}{i(\alpha+\beta+i)(\alpha+\beta+2i-2)}J_{i-2}^{\alpha,\beta}(x)$
(1)
$\displaystyle+\frac{(\alpha+\beta+2i-1)\left\\{\alpha^{2}-\beta^{2}+x(\alpha+\beta+2i)(\alpha+\beta+2i-2)\right\\}}{2i(\alpha+\beta+i)(\alpha+\beta+2i-2)}$
$\displaystyle\quad\times J_{i-1}^{\alpha,\beta}(x),\quad i=2,3,\ldots,$
where
$J_{0}^{\alpha,\beta}(x)=1,\quad
J_{1}^{\alpha,\beta}(x)=\frac{\alpha+\beta+2}{2}x+\frac{\alpha-\beta}{2}$
Their orthogonality is defined via the $L^{2}$ inner product:
${\displaystyle\int_{-1}^{1}(1-x)^{\alpha}(1+x)^{\beta}J_{m}^{(\alpha,\beta)}(x)J_{n}^{(\alpha,\beta)}(x)\,dx={\frac{2^{\alpha+\beta+1}}{2n+\alpha+\beta+1}}{\frac{\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}{\Gamma(n+\alpha+\beta+1)n!}}\delta_{nm}}.$
The Gegenbauer, Chebyshev, and Legendre polynomials are some special cases of
Jacobi polynomials. For Legendre polynomials, the equation (1) with
$\alpha=\beta=0$ reduces to:
$\displaystyle P_{0}(x)=1,\quad P_{1}(x)=x,$ (2)
$\displaystyle(n+1)P_{n+1}(x)=(2n+1)xP_{n}(x)-nP_{n-1}(x),\quad n\geq 1,$
and for Chebyshev with $\alpha=\beta=-\nicefrac{{1}}{{2}}$ we have
$\displaystyle T_{0}(x)=1,\quad T_{1}(x)=x,$ (3) $\displaystyle
T_{n+1}(x)=2x\,T_{n}(x)-T_{n-1}(x),\quad n\geq 1.$
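The three-term recurrence (1) and its special cases translate directly into code. The following Python sketch (the function name `jacobi` is ours, not from a library) evaluates $J_i^{\alpha,\beta}(x)$ iteratively and recovers the Legendre case $\alpha=\beta=0$:

```python
def jacobi(i, alpha, beta, x):
    """Evaluate the Jacobi polynomial J_i^{alpha,beta}(x) via the
    three-term recurrence (1)."""
    if i == 0:
        return 1.0
    # J_0 and J_1 seed the recurrence.
    j_prev = 1.0
    j_curr = (alpha + beta + 2) / 2 * x + (alpha - beta) / 2
    for n in range(2, i + 1):
        a = alpha + beta + 2 * n                   # recurring factor alpha+beta+2n
        denom = 2 * n * (alpha + beta + n) * (a - 2)
        c0 = -2 * (alpha + n - 1) * (beta + n - 1) * a / denom
        c1 = (a - 1) * (alpha ** 2 - beta ** 2 + x * a * (a - 2)) / denom
        j_prev, j_curr = j_curr, c0 * j_prev + c1 * j_curr
    return j_curr

# Legendre special case (2): J_2^{0,0}(x) = P_2(x) = (3x^2 - 1)/2
print(jacobi(2, 0.0, 0.0, 0.5))   # -0.125
```

The same routine covers the Chebyshev case up to the standard normalization $T_n(x)=J_n^{-1/2,-1/2}(x)/J_n^{-1/2,-1/2}(1)$.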
Although these polynomials are defined on a bounded domain, researchers have
used nonlinear maps $\phi$ with the property $\phi:[0,\infty)\to[-1,1)$ to
transfer the orthogonality onto the semi-infinite domain. To the best of our
knowledge, these three mappings are the most widely used [20]:
* •
Algebraic mapping:
$\displaystyle\phi(x)=\nicefrac{{(x-\theta)}}{{(x+\theta)}}$.
* •
Exponential mapping:
$\displaystyle\phi(x)=1-2{\textrm{e}^{-\displaystyle\nicefrac{{x}}{{\theta}}}}$.
* •
Logarithmic mapping:
$\displaystyle\phi(x)=2\tanh\left(\nicefrac{{x}}{{\theta}}\right)-1$.
The rational Jacobi functions are defined by the simple transformation
$\tilde{J}(x)=J(\phi(x))$, where $\phi$ is one of the nonlinear mappings above.
The orthogonality therefore takes the form
${\displaystyle\int_{0}^{\infty}\tilde{J}_{m}^{(\alpha,\beta)}(x)\tilde{J}_{n}^{(\alpha,\beta)}(x)w(x)\,dx={\frac{2^{\alpha+\beta+1}}{2n+\alpha+\beta+1}}{\frac{\Gamma(n+\alpha+1)\Gamma(n+\beta+1)}{\Gamma(n+\alpha+\beta+1)n!}}\delta_{nm}},$
(4)
where $w(x)$ is the corresponding weight function of the inner product.
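As a sketch of the transformation $\tilde J(x)=J(\phi(x))$, the three mappings above can be tabulated and composed with a Jacobi evaluator; here `scipy.special.eval_jacobi` supplies $J_i^{\alpha,\beta}$, while `MAPPINGS` and `rational_jacobi` are illustrative names of ours:

```python
import math
from scipy.special import eval_jacobi  # J_i^{alpha,beta} on [-1, 1]

# The three nonlinear mappings phi: [0, inf) -> [-1, 1), with length scale theta.
MAPPINGS = {
    "algebraic":   lambda x, theta: (x - theta) / (x + theta),
    "exponential": lambda x, theta: 1 - 2 * math.exp(-x / theta),
    "logarithmic": lambda x, theta: 2 * math.tanh(x / theta) - 1,
}

def rational_jacobi(i, alpha, beta, x, mapping="algebraic", theta=1.0):
    """Rational Jacobi function J~_i(x) = J_i(phi(x)) on the semi-infinite domain."""
    return eval_jacobi(i, alpha, beta, MAPPINGS[mapping](x, theta))

# All three mappings send x -> 1 as x -> infinity, so J~_i(x) -> J_i(1).
print(rational_jacobi(3, 0.0, 0.0, 1e9, "algebraic", theta=2.0))  # ≈ P_3(1) = 1
```

Note that all three mappings carry $x=0$ to $-1$, so initial conditions at the origin land on the left endpoint of the Jacobi interval.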
### 2.2 Collocation Least-Squares Support Vector Regression
Collocation Least-Squares Support Vector Regression (CLS-SVR) is a novel
formulation of support vector machines for solving functional equations [21, 22,
23]. In this machine learning model, the unknown solution of a differential
equation is approximated by a linear combination of unknown coefficients and
known basis functions. The basis functions, also known as feature maps,
transform the input data into a nonlinear space in which the approximation
accuracy is expected to increase. In the following, we recall the formulation of
CLS-SVR based on [24].
Suppose that we want to solve an arbitrary differential equation in the form
of $\mathcal{N}(u)=f$. Approximating the solution by $m$ basis functions, we
have:
$\tilde{u}(x)=w^{T}\varphi(x)+b=\sum_{i=1}^{m}w_{i}\varphi_{i}(x)+b.$
The primal optimization form of CLS-SVR takes the form:
$\displaystyle\min\limits_{w,e}$
$\displaystyle\frac{1}{2}w^{T}w+\frac{\gamma}{2}e^{T}e$ (5) s.t.
$\displaystyle\mathcal{N}(\tilde{u})(x_{k})-f(x_{k})=e_{k},\quad
k=1,\ldots,n,$ $\displaystyle\tilde{u}(c_{j})=u_{j},\quad j=1,\ldots,d.$
where $n$ is the number of training points, $e_{k}$ is the value of the residual
function at the $k$-th training point, and $\tilde{u}(c_{j})=u_{j}$ encodes the
initial and boundary conditions. The regularization hyperparameter $\gamma$
controls the fluctuations of the learned solution and reduces overfitting on the
training data; however, this hyperparameter need not be optimized for problems
with unique solutions.
The dual form of this optimization problem leads to a linear or nonlinear
system of equations. This system can be obtained using the Lagrangian function
for (5):
$\displaystyle\mathscr{L}(w,e,\alpha,\beta)=\frac{1}{2}w^{T}w+\frac{\gamma}{2}e^{T}e-\sum_{k=1}^{n}\alpha_{k}\left[\mathcal{N}(\tilde{u})(x_{k})-f(x_{k})-e_{k}\right]-\sum_{j=1}^{d}\beta_{j}\left[\tilde{u}(c_{j})-u_{j}\right].$
The solution of the problem is obtained at the saddle point of this function:
$\left\\{\frac{\partial\mathscr{L}}{\partial
w_{i}}=0,\frac{\partial\mathscr{L}}{\partial
e_{k}}=0,\frac{\partial\mathscr{L}}{\partial\alpha_{k}}=0,\frac{\partial\mathscr{L}}{\partial\beta_{j}}=0\right\\},$
where $i=1,\cdots,m$, $j=1,\cdots,d$, and $k=1,\cdots,n$. After solving this
system, we can use the obtained weights $w$ to approximate the equation or use
the dual variables in a kernel method sense.
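To make the construction concrete, the following is a minimal sketch of the collocation least-squares idea behind (5), not the authors' implementation: for the toy problem $u'+u=0$, $u(0)=1$ with a monomial basis, the boundary condition fixes $b$, and the primal problem reduces to a ridge regression that we solve as a stacked least-squares system instead of through the dual.

```python
import numpy as np

# Toy problem: N(u) = u' + u = 0 on [0, 1], u(0) = 1 (exact solution exp(-x)).
m, gamma = 10, 1e8                                 # basis size, regularization
k = np.arange(1, 21)
xk = (1 - np.cos(np.pi * k / 21)) / 2              # 20 collocation nodes in (0, 1)

# u~(x) = b + sum_i w_i x^i, with b = u(0) = 1 fixed by the constraint.
# Residual at x_k: N(u~)(x_k) = sum_i w_i (i x_k^{i-1} + x_k^i) + b = (A w + 1)_k.
i = np.arange(1, m + 1)
A = i * xk[:, None] ** (i - 1) + xk[:, None] ** i

# min_w (1/2)||w||^2 + (gamma/2)||A w + 1||^2, written as one least-squares solve.
M = np.vstack([A, np.eye(m) / np.sqrt(gamma)])
rhs = np.concatenate([-np.ones(len(xk)), np.zeros(m)])
w = np.linalg.lstsq(M, rhs, rcond=None)[0]

u1 = 1 + w.sum()                                   # u~(1), should be close to exp(-1)
print(abs(u1 - np.exp(-1)) < 1e-4)                 # True
```

With $\gamma$ large the residual term dominates and $\tilde u(1)$ matches $e^{-1}$ closely; the dual formulation of [24] reaches the same solution through the KKT system and additionally exposes the multipliers for kernel-based evaluation.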
### 2.3 Hyperparameter optimization
The problem of hyperparameter optimization is usually expressed as a non-convex
minimization or maximization problem. The various algorithms proposed to solve
it can be categorized into gradient-based and gradient-free families. The former
use the gradient of a loss function to find an optimum, while the latter
generate candidate points to explore the search space and find a local, or
possibly global, optimum. Grid search, random search, Bayesian optimization, and
meta-heuristic algorithms are examples of gradient-free methods. Grid and random
search are widely used because of their simplicity and acceptable performance;
moreover, they can be massively parallelized. Briefly, grid search evaluates the
Cartesian product of the given parameter sets, whereas random search samples a
fixed number of parameter values from user-defined distributions; for
categorical hyperparameters, values are chosen uniformly. In both methods, the
best parameter set is evaluated on a test dataset, usually generated by K-fold
cross-validation. Figure 1 compares these two algorithms.
A key difference between grid and random search is their computational
complexity. Grid search evaluates all possible combinations in $O(n^{k})$ time
for $k$ hyperparameters with $n$ candidate values each, whereas for random
search the user can define a budget based on the available resources, so the
algorithm runs in $O(n)$ time. For a more precise comparison, we refer the
reader to [25, 26].
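The budget difference can be illustrated on a toy criterion (a sketch unrelated to the paper's solver; the `loss` function and parameter names are ours):

```python
import itertools
import random

def loss(kernel, theta):                     # toy criterion to be minimized
    return (theta - 3.3) ** 2 + (0.1 if kernel == "chebyshev" else 0.0)

kernels = ["legendre", "chebyshev"]
thetas = [0.1 * j for j in range(1, 101)]    # 0.1, 0.2, ..., 10.0

# Grid search: Cartesian product, n^k trials (here 2 * 100 = 200).
best_grid = min(itertools.product(kernels, thetas), key=lambda p: loss(*p))

# Random search: a fixed budget of trials drawn from user-defined distributions.
rng = random.Random(0)
trials = [(rng.choice(kernels), rng.uniform(0, 10)) for _ in range(200)]
best_rand = min(trials, key=lambda p: loss(*p))

print(best_grid[0], round(best_grid[1], 1))  # legendre 3.3
```

Because every trial is independent, both loops can be distributed across workers without coordination, which is what makes these two methods attractive despite their simplicity.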
Figure 1: A comparison between random and grid search for a simple search space
with only two parameters. Grid search may fail to explore the search space
efficiently. The figure is adapted from Bergstra et al. [1].
## 3 Method and Results
In this section, we explain the proposed algorithm and then provide examples to
show its efficiency. We focus on the hyperparameter optimization of orthogonal
rational Jacobi functions, although the presented algorithm can easily be
extended to other mathematical models.
The optimal set of hyperparameters should be selected on a test dataset to
prevent overfitting. In machine learning, this is easily done using
cross-validation techniques. In mathematical equations, however, there is no
dataset to split into train and test sets. Three alternative options can handle
this issue. The first is to use a set of new nodes (different from the training
points) in the domain and compute the prediction error on these nodes. The
second is to employ some physical property of the model. The third is to use a
criterion obtained by analytical or accurate numerical techniques; this option
can be seen as a composition of the previous ones. Most of the criteria used in
the literature have important physical meanings, which makes them suitable
accuracy tests. Therefore, we use the absolute difference between the exact (or
state-of-the-art) value and the prediction given by CLS-SVR. The proposed grid
and random search algorithms are presented in Algorithms 1 and 2.
In the following sections, we obtain optimal numerical results for some
well-known benchmark nonlinear differential equations using the proposed method.
The configurations used to find accurate numerical solutions for these problems
are reported in Tables 1 and 2. The grid search evaluates $600$ different
configurations to find the optimal set; for a fair comparison, we set the
maximum number of iterations for random search equal to the number of grid
nodes. In addition, the roots of the Legendre and Chebyshev polynomials are
utilized as the training data.
Data: The differential equation as $ODE$
1 $S_{i}\leftarrow\text{Set of all desired values for the }i\text{-th hyperparameter}$
2 $CriteriaList\leftarrow[]$
3 $Parameter\_Set\leftarrow CartesianProduct(\{S_{i}:\forall i\})$
4 for _$param\_set\textbf{ in }Parameter\_Set\textit{ in parallel}$_ do
5 $\tilde{u}\leftarrow CLS\text{-}SVR(ODE)$
6 Compute Criteria for $\tilde{u}$
7 Push Criteria to $CriteriaList$
8 end for
Result: $param\_set$ associated with the best criteria
Algorithm 1 Grid search algorithm
Data: The differential equation as $ODE$
1 $S_{i}\leftarrow\text{A suitable distribution for the }i\text{-th hyperparameter}$
2 $CriteriaList\leftarrow[]$
3 for _$iter\textbf{ from }1\textbf{ to }MAX\_ITER\textit{ in parallel}$_ do
4 $param\_set\leftarrow\text{Sample each hyperparameter from its distribution }S_{i}$
5 $\tilde{u}\leftarrow CLS\text{-}SVR(ODE)$
6 Compute Criteria for $\tilde{u}$
7 Push Criteria to $CriteriaList$
8 end for
Result: $param\_set$ associated with the best criteria
Algorithm 2 Random search algorithm
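Both algorithms can be sketched in a few lines of Python; `solve` stands in for the CLS-SVR fit and `criteria` for the accuracy measure (both hypothetical placeholders here), and threads implement the parallel loop (processes would serve equally well for CPU-bound solvers):

```python
import itertools
import random
from concurrent.futures import ThreadPoolExecutor

def grid_search(ode, solve, criteria, space):
    """Algorithm 1: evaluate the Cartesian product of the value sets in parallel."""
    keys = list(space)
    candidates = [dict(zip(keys, v)) for v in itertools.product(*space.values())]
    return _best(ode, solve, criteria, candidates)

def random_search(ode, solve, criteria, dists, max_iter, seed=0):
    """Algorithm 2: evaluate max_iter samples drawn from per-hyperparameter
    distributions (each dists[k] maps a random.Random to one sample)."""
    rng = random.Random(seed)
    candidates = [{k: draw(rng) for k, draw in dists.items()}
                  for _ in range(max_iter)]
    return _best(ode, solve, criteria, candidates)

def _best(ode, solve, criteria, candidates):
    score = lambda p: criteria(solve(ode, **p))
    with ThreadPoolExecutor() as pool:           # trials are independent
        scores = list(pool.map(score, candidates))
    return candidates[min(range(len(scores)), key=scores.__getitem__)]

# Toy demo: "solve" just returns theta; the criterion prefers theta near 2.
demo = grid_search(None, lambda ode, kernel, theta: theta,
                   lambda u: abs(u - 2.0),
                   {"kernel": ["legendre", "chebyshev"], "theta": [1.0, 2.0, 3.0]})
print(demo["theta"])   # 2.0
```

In the paper's setting, `space` would hold the kernel, mapping, and $\theta$ values of Table 1, and `dists` the distributions of Table 2.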
Parameter | Values
---|---
Kernel | {Legendre, Chebyshev}
Mapping | {Algebraic, Exponential, Logarithmic}
$\theta$ | $\\{0.1,0.2,\cdots,9.9,10\\}$
Table 1: The search space of the grid search
Parameter | Values
---|---
Kernel | {Legendre, Chebyshev}
Mapping | {Algebraic, Exponential, Logarithmic}
$\theta$ | UniformDistribution(0, 10)
Table 2: The search space of the random search
### 3.1 Volterra’s population model
Volterra’s population model is a nonlinear integro-differential equation that
describes the population growth of a species within a closed system [24]. It is
common to transform this problem into an equivalent nonlinear differential
equation [16]:
$\displaystyle\kappa{u^{\prime\prime}}(x)={u^{\prime}}(x)-{u^{\prime}}(x)^{2}-{u}(x){u^{\prime}}(x),$
(6) $\displaystyle{u}(0)=0,{u^{\prime}}(0)=0.1.$
Here $\kappa$ is a non-dimensional parameter. The criterion for the prediction
correctness of this problem is the maximum value of the approximated population.
TeBeest [27] showed that the maximum peak is:
$\displaystyle u_{max}=1+\kappa\ln\left(\frac{\kappa}{1+\kappa-u^{\prime}(0)}\right).$ (7)
Considering (7) as the exact value for (6), the absolute error for this equation
is defined as
$\displaystyle\text{Absolute Error}\vcentcolon=\left|u_{max}-\max_{t\in(0,\infty)}\tilde{u}(t)\right|.$ (8)
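The accuracy criterion is a one-liner. A sketch follows, with $u_0=0.1$ the initial population; note the minus sign in the denominator, which is the form that reproduces the exact values tabulated in Tables 3 and 4:

```python
import math

def u_max_exact(kappa, u0=0.1):
    """TeBeest's closed-form peak of Volterra's population model [27]."""
    return 1 + kappa * math.log(kappa / (1 + kappa - u0))

def absolute_error(u_tilde_max, kappa):
    """Criterion (8): |u_max - max_t u~(t)|."""
    return abs(u_max_exact(kappa) - u_tilde_max)

# Matches the "Exact" column of Table 3:
print(u_max_exact(0.5))    # 0.4851902914...
print(u_max_exact(0.02))   # 0.9234271720...
```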
To find a reasonable range for the length scale parameter and to gauge the
effect of the other hyperparameters, we first ran a sensitivity analysis over a
large domain. The results are reported in Figure 2. It can be seen that large
values of the length scale do not yield a good approximation; the maximum
reasonable value for this task is about $10$. In addition, the choice of basis
functions does not significantly influence this parameter. Moreover, the
nonlinear mappings can affect the computational procedure of the optimization
problem (5), which produces the discontinuities in these figures. Figure 3 plots
some of the successfully learned approximate solutions; it is seen that the
equation may not be accurately simulated with improper hyperparameters. After
the sensitivity analysis, we simulate this problem with the five most commonly
used values of the non-dimensional parameter $\kappa$ using the grid and random
search algorithms. As reported in Tables 1 and 2, the interval $(0,10]$ is
chosen for the length scale parameter. Tables 3 and 4 report the best obtained
hyperparameters. From these, it can be inferred that the algebraic mapping is
the best choice for small values of $\kappa$, while for larger values the
exponential mapping obtains better approximations. Moreover, the kernel function
and its nonlinear mapping lead to only slightly different accuracies.
To show the effectiveness of the proposed algorithm, we compare the absolute
errors found by various authors in Table 5. The rational Legendre (RL) and
rational Chebyshev (RC) pseudospectral methods [17], the Sinc collocation method
(SCM) [16], and the fractional rational Legendre (FRL) method [24] are compared.
The length scale values used by Parand et al. [24] were optimized in a manner
similar to that proposed by Boyd [19].
(a) Legendre rational functions. (b) Chebyshev rational functions.
Figure 2: The effect of the length scale parameter $\theta$ over a large domain
for Volterra’s population model with $m=25$ and $\kappa=0.5$.
Figure 3: A set of learned solutions with different values of the length scale
$\theta$ for Volterra’s population model with $\kappa=0.5$.
$\kappa$ | Kernel | Mapping | $\theta$ | Exact [27] | Approximate | Error
---|---|---|---|---|---|---
$0.02$ | Legendre | $\displaystyle\nicefrac{{(x-\theta)}}{{(x+\theta)}}$ | $0.10000$ | $0.92342717207022$ | $0.92342711545307$ | $5.662\times 10^{-08}$
$0.04$ | Legendre | $\displaystyle\nicefrac{{(x-\theta)}}{{(x+\theta)}}$ | $0.70000$ | $0.87371998300000$ | $0.87371998000000$ | $3.090\times 10^{-09}$
$0.10$ | Legendre | $\displaystyle 1-2\exp(\nicefrac{{-x}}{{\theta}})$ | $1.00000$ | $0.76974149100000$ | $0.76974149100000$ | $7.890\times 10^{-12}$
$0.20$ | Chebyshev | $\displaystyle 1-2\exp(\nicefrac{{-x}}{{\theta}})$ | $1.90000$ | $0.65905038200000$ | $0.65905038200000$ | $4.660\times 10^{-11}$
$0.50$ | Chebyshev | $\displaystyle 1-2\exp(\nicefrac{{-x}}{{\theta}})$ | $3.80000$ | $0.48519029140942$ | $0.48519029141289$ | $3.468\times 10^{-12}$
Table 3: The obtained results for the Volterra population equation using a grid
search algorithm.
$\kappa$ | Kernel | Mapping | $\theta$ | Exact [27] | Approximate | Error
---|---|---|---|---|---|---
$0.02$ | Chebyshev | $\displaystyle\nicefrac{{(x-\theta)}}{{(x+\theta)}}$ | $0.539501186666072$ | $0.92342717207022$ | $0.92342733722160$ | $1.652\times 10^{-07}$
$0.04$ | Chebyshev | $\displaystyle\nicefrac{{(x-\theta)}}{{(x+\theta)}}$ | $0.318328463774207$ | $0.87371998315400$ | $0.87371998508417$ | $1.930\times 10^{-09}$
$0.10$ | Chebyshev | $\displaystyle 1-2\exp(\nicefrac{{-x}}{{\theta}})$ | $1.626117351946306$ | $0.76974149070060$ | $0.76974149073275$ | $3.216\times 10^{-11}$
$0.20$ | Chebyshev | $\displaystyle 1-2\exp(\nicefrac{{-x}}{{\theta}})$ | $2.510838579760311$ | $0.65905038155232$ | $0.65905038153414$ | $1.818\times 10^{-11}$
$0.50$ | Legendre | $\displaystyle 2\tanh(\nicefrac{{x}}{{\theta}})-1$ | $6.797026768536748$ | $0.48519029140942$ | $0.48519029141142$ | $2.007\times 10^{-12}$
Table 4: The obtained results for the Volterra population equation using a random search algorithm.
$\kappa$ | RL [17] | RC [17] | FRL [24] | SCM [16] | Presented Method
---|---|---|---|---|---
$m$ | $50$ | $50$ | $40$ | $35$ | $40$
$0.02$ | $3.72\times 10^{-07}$ | $7.51\times 10^{-07}$ | $6.33\times 10^{-06}$ | $7.00\times 10^{-08}$ | $5.66\times 10^{-08}$
$0.04$ | $1.43\times 10^{-08}$ | $5.27\times 10^{-08}$ | $6.75\times 10^{-08}$ | $3.00\times 10^{-08}$ | $1.93\times 10^{-09}$
$0.10$ | $1.07\times 10^{-10}$ | $2.13\times 10^{-10}$ | $2.73\times 10^{-08}$ | $8.00\times 10^{-08}$ | $7.89\times 10^{-12}$
$0.20$ | $3.53\times 10^{-11}$ | $2.33\times 10^{-10}$ | $8.57\times 10^{-10}$ | $3.30\times 10^{-07}$ | $1.82\times 10^{-11}$
$0.50$ | $2.44\times 10^{-09}$ | $4.87\times 10^{-09}$ | $2.73\times 10^{-10}$ | $3.40\times 10^{-07}$ | $2.01\times 10^{-12}$
Table 5: A comparison among mathematical methods for solving Volterra’s
population model on the semi-infinite domain.
### 3.2 Kidder equation
The unsteady isothermal flow of a gas through a micro-nano porous medium can be
expressed as a nonlinear differential equation [18]. This equation, defined on a
semi-infinite domain, is modeled as:
$\displaystyle u^{\prime\prime}(x)+\frac{2x}{\sqrt{1-\kappa
u(x)}}u^{\prime}(x)=0,$ (9) $\displaystyle u(0)=1,u(\infty)=0.$
The initial slope of the approximated solution is an essential measure of
accuracy for this problem. To date, no exact solution or exact initial slope has
been found. However, some researchers have developed advanced techniques to find
accurate solutions. In this research, we use the values obtained by Parand et
al. [20] as the exact reference; that paper obtained the initial slope to $38$
digits of precision using software that supports arbitrary-precision arithmetic.
Here, we focus only on the problem of choosing the best hyperparameters; thus,
we compare our results using a small and roughly equal number of basis
functions. Furthermore, they utilized the Quasi-Linearization Method (QLM) to
convert the problem of approximating solutions of nonlinear Ordinary
Differential Equations (ODEs) into a sequence of dependent linear ODEs, which
multiplies the computational cost by the number of QLM iterations. Here we solve
the original nonlinear problem directly, which is more computationally
efficient.
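As an independent sanity check on the reference slopes (not the method of [20]), the initial slope can be estimated by a simple shooting scheme: integrate (9) from $x=0$ with a trial slope $s=u'(0)$ over a truncated domain and root-find on the far-field value $u(L)\approx u(\infty)=0$. A sketch with SciPy, with the truncation length $L$ chosen by us:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

KAPPA, L = 0.5, 8.0   # problem parameter and truncation of [0, inf)

def rhs(x, y):
    # y = [u, u']; Kidder equation u'' = -2x u' / sqrt(1 - kappa*u)
    u, up = y
    return [up, -2 * x * up / np.sqrt(1 - KAPPA * u)]

def far_field(s):
    """u(L) for the trial initial slope s; since u' decays rapidly,
    u(L) is effectively the limit u(inf)."""
    sol = solve_ivp(rhs, (0, L), [1.0, s], rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

# Too shallow a slope leaves u(inf) > 0, too steep drives it below 0.
slope = brentq(far_field, -2.0, -0.5, xtol=1e-10)
print(slope)   # close to -1.1918, the Table 6/7 reference for kappa = 0.5
```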
As in the previous example, we first analyze the effect of the hyperparameters.
Figure 4 shows the absolute error obtained with different sets of
hyperparameters. The Legendre kernel reaches better results than the Chebyshev
functions. Furthermore, some hyperparameters lead to numerical issues with the
Legendre basis functions, hence the discontinuities in the plot. As before, the
interval $(0,10]$ for the length scale contains the best results, so we focus on
this range in the next experiments. Tables 6 and 7 report the best results
obtained by grid and random search, respectively. The Legendre kernel with the
algebraic mapping yields the best hyperparameters in all configurations, and the
random search algorithm outperformed grid search in all experiments.
A comparison between the presented method and other related works is made in
Table 8, and some of the simulated approximations are plotted in Figure 5.
(a) Legendre rational functions. (b) Chebyshev rational functions.
Figure 4: The effect of the length scale parameter $\theta$ over a large domain
for the Kidder equation with $m=25$.
Figure 5: A set of learned solutions with different values of the length scale
$\theta$ for the Kidder equation with $\kappa=0.5$.
$\kappa$ | Kernel | Mapping | $\theta$ | Exact [20] | Approximate | Error
---|---|---|---|---|---|---
$0.10$ | Legendre | $\displaystyle\nicefrac{{(x-\theta)}}{{(x+\theta)}}$ | $4.80000$ | $-1.13900720617830$ | $-1.13900783881046$ | $6.326\times 10^{-07}$
$0.30$ | Legendre | $\displaystyle\nicefrac{{(x-\theta)}}{{(x+\theta)}}$ | $4.80000$ | $-1.16294145829591$ | $-1.16294198206589$ | $5.238\times 10^{-07}$
$0.50$ | Legendre | $\displaystyle\nicefrac{{(x-\theta)}}{{(x+\theta)}}$ | $4.80000$ | $-1.19179064971942$ | $-1.19179149343671$ | $8.437\times 10^{-07}$
$0.90$ | Legendre | $\displaystyle\nicefrac{{(x-\theta)}}{{(x+\theta)}}$ | $6.20000$ | $-1.28188132220336$ | $-1.28188069301075$ | $6.292\times 10^{-07}$
Table 6: The obtained results for the Kidder equation using a grid search
algorithm.
$\kappa$ | Kernel | Mapping | $\theta$ | Exact [20] | Approximate | Error
---|---|---|---|---|---|---
$0.10$ | Legendre | $\displaystyle\nicefrac{{(x-\theta)}}{{(x+\theta)}}$ | $0.975404049994095$ | $-1.13900720617830$ | $-1.13900677991113$ | $4.263\times 10^{-07}$
$0.30$ | Legendre | $\displaystyle\nicefrac{{(x-\theta)}}{{(x+\theta)}}$ | $1.576130816775483$ | $-1.16294145829591$ | $-1.16294118714719$ | $2.711\times 10^{-07}$
$0.50$ | Legendre | $\displaystyle\nicefrac{{(x-\theta)}}{{(x+\theta)}}$ | $2.760250769985784$ | $-1.19179064971942$ | $-1.19179065703565$ | $7.316\times 10^{-09}$
$0.90$ | Legendre | $\displaystyle\nicefrac{{(x-\theta)}}{{(x+\theta)}}$ | $4.693906410582058$ | $-1.28188132220336$ | $-1.28188182747538$ | $5.053\times 10^{-07}$
Table 7: The obtained results for the Kidder equation using a random search
algorithm.
$\kappa$ | CPA [28] | Sinc [29] | RL [29] | HFC [30] | Bessel [31] | RJ [32] | Presented method
---|---|---|---|---|---|---|---
$m$ | $-$ | 32 | 32 | 20 | 26 | 20 | 25
$0.1$ | $2.94\times 10^{-04}$ | $-$ | $-$ | $-$ | $2.61\times 10^{-06}$ | $1.29\times 10^{-04}$ | $4.26\times 10^{-07}$
$0.5$ | $9.15\times 10^{-04}$ | $3.10\times 10^{-03}$ | $3.10\times 10^{-03}$ | $7.98\times 10^{-03}$ | $3.42\times 10^{-05}$ | $1.17\times 10^{-04}$ | $7.32\times 10^{-09}$
$0.9$ | $1.72\times 10^{-02}$ | $-$ | $-$ | $-$ | $3.65\times 10^{-06}$ | $1.02\times 10^{-04}$ | $5.05\times 10^{-07}$
Table 8: Comparison among various approximate initial slopes for the Kidder
equation (9). The values reported in [18] are assumed as the exact values.
## 4 Conclusion
This paper developed two machine learning techniques for increasing the
accuracy of numerical simulations of functional equations. The presented
Algorithms 1 and 2 are general tuning procedures capable of hyperparameter
optimization for various mathematical approaches, such as spectral methods,
RBFs, and wavelets. In this research, however, we focused on the spectral
method for approximating the solution of nonlinear differential equations on
the semi-infinite domain. To do so, we configured the search space over the
hyperparameters of rational Jacobi functions, including the basis functions,
the nonlinear rational mappings, and the length scale parameter. Finally, in
the numerical results, various experiments were conducted to measure the
search capability of the proposed algorithms. We discussed the role of the
length scale parameter in the stability and convergence of the method. In
addition, comparisons with related works were carried out to show the
superiority of these algorithms over traditional mathematical procedures. This
property, along with the small computational complexity handled with parallel
programming approaches, makes the process efficient and easy to use for other
researchers. Nevertheless, modern gradient-free global optimization
techniques, such as Bayesian optimization and the Tree-structured Parzen
estimator, could be developed to obtain better approximations.
## References
* [1] J. Bergstra and Y. Bengio, “Random search for hyper-parameter optimization,” Journal of Machine Learning Research, vol. 13, no. 10, pp. 281–305, 2012.
* [2] J. Snoek, H. Larochelle, and R. P. Adams, “Practical bayesian optimization of machine learning algorithms,” in Advances in Neural Information Processing Systems (F. Pereira, C. Burges, L. Bottou, and K. Weinberger, eds.), vol. 25, Curran Associates, Inc., 2012.
* [3] D. Maclaurin, D. Duvenaud, and R. Adams, “Gradient-based hyperparameter optimization through reversible learning,” in International conference on machine learning, pp. 2113–2122, PMLR, 2015.
* [4] F. Itano, M. A. d. A. de Sousa, and E. Del-Moral-Hernandez, “Extending mlp ann hyper-parameters optimization by using genetic algorithm,” in 2018 International joint conference on neural networks (IJCNN), pp. 1–8, IEEE, 2018.
* [5] J. P. Boyd, “The optimization of convergence for chebyshev polynomial methods in an unbounded domain,” Journal of computational physics, vol. 45, no. 1, pp. 43–79, 1982.
* [6] Y. Sanyasiraju and C. Satyanarayana, “On optimization of the rbf shape parameter in a grid-free local scheme for convection dominated problems over non-uniform centers,” Applied Mathematical Modelling, vol. 37, no. 12-13, pp. 7245–7272, 2013.
* [7] R. Cavoretto, A. De Rossi, M. S. Mukhametzhanov, and Y. D. Sergeyev, “On the search of the shape parameter in radial basis functions using univariate global optimization methods,” Journal of Global Optimization, vol. 79, no. 2, pp. 305–327, 2021.
* [8] N. Tanguy, N. Iassamen, M. Telescu, and P. Cloastre, “Parameter optimization of orthonormal basis functions for efficient rational approximations,” Applied Mathematical Modelling, vol. 39, no. 16, pp. 4963–4970, 2015.
* [9] W. Mi and W. X. Zheng, “Adaptive rational orthogonal basis functions for identification of continuous-time systems,” IEEE Transactions on Automatic Control, vol. 66, no. 4, pp. 1809–1816, 2021.
* [10] T. Tajvidi, M. Razzaghi, and M. Dehghan, “Modified rational legendre approach to laminar viscous flow over a semi-infinite flat plate,” Chaos, Solitons & Fractals, vol. 35, no. 1, pp. 59–66, 2008.
* [11] M. Khader and M. Adel, “Numerical approach for solving the riccati and logistic equations via qlm-rational legendre collocation method,” Computational and Applied Mathematics, vol. 39, no. 3, pp. 1–9, 2020.
* [12] A. Saadatmandi and Z. Sanatkar, “Collocation method based on rational legendre functions for solving the magneto-hydrodynamic flow over a nonlinear stretching sheet,” Applied Mathematics and Computation, vol. 323, pp. 193–203, 2018.
* [13] M. A. Abd El Salam, M. A. Ramadan, M. A. Nassar, P. Agarwal, and Y.-M. Chu, “Matrix computational collocation approach based on rational chebyshev functions for nonlinear differential equations,” Advances in Difference Equations, vol. 2021, no. 1, pp. 1–17, 2021.
* [14] S. Deniz and M. Sezer, “Rational chebyshev collocation method for solving nonlinear heat transfer equations,” International Communications in Heat and Mass Transfer, vol. 114, p. 104595, 2020.
* [15] X. Zhang and J. P. Boyd, “Revisiting the thomas–fermi equation: Accelerating rational chebyshev series through coordinate transformations,” Applied Numerical Mathematics, vol. 135, pp. 186–205, 2019.
* [16] K. Parand, Z. Delafkar, N. Pakniat, A. Pirkhedri, and M. K. Haji, “Collocation method using sinc and rational legendre functions for solving volterra’s population model,” Communications in Nonlinear Science and Numerical Simulation, vol. 16, no. 4, pp. 1811–1819, 2011.
* [17] M. Dehghan and M. Shahini, “Rational pseudospectral approximation to the solution of a nonlinear integro-differential equation arising in modeling of the population growth,” Applied Mathematical Modelling, vol. 39, no. 18, pp. 5521–5530, 2015.
* [18] K. Parand, S. Latifi, M. Delkhosh, and M. M. Moayeri, “Generalized lagrangian jacobi gauss collocation method for solving unsteady isothermal gas through a micro-nano porous medium,” The European Physical Journal Plus, vol. 133, no. 1, pp. 1–12, 2018.
* [19] J. P. Boyd, Chebyshev and Fourier spectral methods. Courier Corporation, 2001.
* [20] K. Parand and M. Delkhosh, “An Accurate Numerical Method for Solving Unsteady Isothermal Flow of a Gas Through a Semi-Infinite Porous Medium,” Journal of Computational and Nonlinear Dynamics, vol. 13, p. 011007, 2017.
* [21] A. G. Khoee, K. M. Mohammadi, M. Jani, and K. Parand, “A least squares support vector regression for anisotropic diffusion filtering,” arXiv preprint arXiv:2202.00595, 2022.
* [22] P. Ahadian and K. Parand, “Support vector regression for the temperature-stimulated drug release,” Chaos, Solitons & Fractals, vol. 165, p. 112871, 2022.
* [23] J. A. Rad, K. Parand, and S. Chakraverty, Learning with Fractional Orthogonal Kernel Classifiers in Support Vector Machines: Theory, Algorithms and Applications. Springer Singapore, 2023.
* [24] K. Parand, A. Aghaei, M. Jani, and A. Ghodsi, “Parallel ls-svm for the numerical simulation of fractional volterra’s population model,” Alexandria Engineering Journal, vol. 60, no. 6, pp. 5637–5647, 2021.
* [25] L. Yang and A. Shami, “On hyperparameter optimization of machine learning algorithms: Theory and practice,” Neurocomputing, vol. 415, pp. 295–316, 2020.
* [26] T. Yu and H. Zhu, “Hyper-parameter optimization: A review of algorithms and applications,” arXiv preprint arXiv:2003.05689, 2020.
* [27] K. G. TeBeest, “Classroom note: numerical and analytical solutions of volterra’s population model,” SIAM review, vol. 39, no. 3, pp. 484–493, 1997.
* [28] R. Iacono and J. P. Boyd, “The kidder equation,” Studies in Applied Mathematics, vol. 135, no. 1, pp. 63–85, 2015.
* [29] A. Rezaei, K. Parand, and A. Pirkhedri, “Numerical study on gas flow through a micro-nano porous media based on special functions,” Journal of Computational and Theoretical Nanoscience, vol. 8, no. 2, pp. 282–288, 2011.
* [30] J. Rad, S. Ghaderi, and K. Parand, “Numerical and analytical solution of gas flow through a micro-nano porous media: A comparison,” Journal of Computational and Theoretical Nanoscience, vol. 8, no. 10, pp. 2033–2041, 2011.
* [31] K. Parand and M. Nikarya, “Solving the unsteady isothermal gas through a micro-nano porous medium via bessel function collocation method,” Journal of Computational and Theoretical Nanoscience, vol. 11, no. 1, pp. 131–136, 2014.
* [32] K. Parand, P. Mazaheri, M. Delkhosh, and A. Ghaderi, “New numerical solutions for solving kidder equation by using the rational jacobi functions,” SeMA Journal, vol. 74, no. 4, pp. 569–583, 2017.
# Direct correspondence between Newtonian gravitation and general relativity
Thomas Buchert Univ Lyon, Ens de Lyon, Univ Lyon1, CNRS, Centre de Recherche
Astrophysique de Lyon UMR5574, F-69007, Lyon, France
Email: <EMAIL_ADDRESS>
###### Abstract
We present a strategy to obtain equations of general relativity for an
irrotational dust continuum within a flow-orthogonal foliation of spacetime
from the equations of Newtonian gravitation, and vice versa, without employing
a weak field expansion or a limiting process on the speed of light. We argue
that writing Newton’s equations in a Lagrangian frame and relaxing
integrability of vector gradients is sufficient to obtain equations that are
identical to Einstein’s equations in (3+1)-form when respecting the Lorentzian
signature of the time parametrization. We discuss implications and provide an
outlook on how to extend the obtained correspondence to more general
spacetimes.
## I Newtonian limits and weak fields: is this subject settled?
The question of how to obtain the Newtonian limit of general relativity enjoys
various answers. Many practical implementations of a Newtonian limit are
heuristic, e.g., expanding Einstein’s equations with respect to a flat
background spacetime and keeping only linear terms as perturbations of metric
components (the weak-field approximation), together with sending the causality
constant $1/c$ to zero, thus “opening up” the local light cones. A systematic
approach is the frame theory by Ehlers ehlers:frametheory that comprises both
theories in a single theory using the causality constant as a parameter, built
on earlier efforts around the Newton-Cartan theory (for details and references
see Ehlers’ work ehlers:frametheory , the editorial to Ehlers’ work bm , and
Ehlers’ investigation of examples ehlers:examples ). Ellis ellis:relativistic
has nicely compared the (1+3)-Einstein equations and their correspondences in
Newtonian gravitation. There are also proposals of a dictionary between the
theories for metric perturbations to linear order N-GRnumerics1 ;
wald:dictionary that have been applied in the context of general-relativistic
exact solutions and numerical simulations KoksbangHannestad , N-GRnumerics2 .
All of these approaches imply that both theories are substantially different
and that one has to neglect terms in Einstein’s equations to obtain back the
Newtonian equations of gravitation. It is, however, a matter of definition of
the Newtonian limit and the way the Newtonian equations of gravitation are
written.
In this Letter, we will investigate a two-step strategy for arriving at
Einstein’s field equations from Newton’s equations, in the context of an
irrotational dust matter model, that does not need elements of an
approximation. As a first step, we will demonstrate that the Newtonian
equations and Einstein’s equations in a flow-orthogonal foliation of spacetime
are algebraically correspondent, if the former are written in the Lagrangian
rest frame of the dust. As a second step, we will show that the Newtonian
equations become identical to their relativistic counterparts, if an
integrability requirement of the resulting tensor coefficients is relaxed in
the Newtonian equations. These statements extend to variables such as
connection, Ricci and Weyl curvatures, defined geometrically within general
relativity but are also algebraically present in the Newtonian equations, if
written in the Lagrangian frame.
We organize this Letter as follows. Sec. II recalls Newton’s equations for a
self-gravitating continuum of dust. It is then argued in Sec. III that the
Lagrangian form of Newton’s equations can be subjected to a “recipe” that
follows an idea by Einstein to “throw out the Euclidean embedding space” and
that employs Cartan coframes defining nonintegrable deformations of the fluid.
We thus obtain Einstein’s equations in a flow-orthogonal (3+1)-setting. Sec. IV
discusses the result and provides an outlook on how to generalize the obtained
correspondence.
NOTATIONS: Bold notation will be used for vectors and forms. The vector
product is denoted by $\times$, while the wedge product $\wedge$ denotes the
antisymmetrized tensor product $\otimes$, with components
$({\bm{a}}\otimes{\bm{b}})_{ij}=a_{i}b_{j}$. We define symmetrization and
antisymmetrization of indices by $v_{(i,j)}=\tfrac{1}{2}(v_{i,j}+v_{j,i})$,
$v_{[i,j]}=\tfrac{1}{2}(v_{i,j}-v_{j,i})$, respectively; $i,j,k\ldots=1,2,3$.
Spatial derivatives with respect to Eulerian coordinates $\bm{x}$ are denoted
by a comma, while spatial derivatives with respect to the Lagrangian
coordinates $\bm{X}$ introduced later are denoted by a vertical slash; an overdot will be used
to represent the Lagrangian time derivative; summation over repeated indices
is understood.
## II Newtonian equations for a dust continuum
We will employ a hydrodynamic picture and consider a self-gravitating
continuum of dust, i.e. pressureless matter with rest mass density
$\varrho({\bm{x}},t)$ and velocity field ${\bm{v}}({\bm{x}},t)$ (the words
‘matter’, ‘dust’, ‘fluid’ will be used interchangeably). Both fields are
represented in terms of Eulerian (inertial, nonrotating) coordinates $\bm{x}$
and a time parameter $t$. The acceleration field is equivalent to the
gravitational field strength $\bm{g}({\bm{x}},t)$ due to the equivalence of
inertial and gravitational mass,
$\frac{{\mathrm{d}}}{{\mathrm{d}}t}\bm{v}=\bm{g}\quad;\quad\frac{{\mathrm{d}}}{{\mathrm{d}}t}:=\frac{\partial}{\partial
t}\Big{|}_{\bm{x}}+\bm{v}\cdot\bm{\nabla}=\frac{\partial}{\partial
t}\Big{|}_{\bm{X}}\ ,$ (1)
where we introduced the Lagrangian time derivative
${\mathrm{d}}/{\mathrm{d}}t$ that reduces to a partial time derivative in a
Lagrangian coordinate system $\bm{X}$. In Eulerian coordinate components, we
write Euler’s equation (1) and its Eulerian spatial derivative:
$\frac{\partial}{\partial t}v^{i}+v^{k}v^{i}_{\
,k}=g^{i}\quad,\quad\frac{\mathrm{d}}{{\mathrm{d}}t}v^{i}_{\ ,j}+v^{i}_{\
,k}v^{k}_{\ ,j}=g^{i}_{\ ,j}\ .$ (2)
The Newtonian continuum theory of gravitation is a vector theory, so that the
sources of the curl and divergence of $\bm{g}$ suffice to define the complete
set of field equations (up to a harmonic vector field):
$\bm{\nabla}\times\bm{g}=\bm{0}\quad;\quad\bm{\nabla}\cdot\bm{g}=\Lambda-4\pi
G\varrho\ ,$ (3)
where $G$ denotes Newton’s gravitational coupling constant; for completeness
we included the cosmological constant $\Lambda$ (here with dimension ${\rm
time}^{-2}$). The rest mass density $\varrho$ obeys the continuity equation,
$\frac{{\mathrm{d}}}{{\mathrm{d}}t}\varrho+\varrho\bm{\nabla}\cdot\bm{v}=0\ .$
(4)
The Euler-Newton system comprises (1), (3) and (4). This overdetermined set of
equations can be written as a set of five equations for five variables
($\varrho,v^{i},\Phi$) through introduction of the gravitational potential,
$\bm{g}=:-\bm{\nabla}\Phi$.
In this Letter we will restrict ourselves to irrotational flows,
$v_{[i,j]}=0$. For later considerations we add the Newtonian gravitoelectric
tidal tensor (inserting (2) and (3) in the second line):
$\displaystyle{\mathcal{E}}_{ij}:$
$\displaystyle=g_{(i,j)}-\frac{1}{3}\delta_{ij}g^{k}_{\ ,k}$ (5)
$\displaystyle=\frac{\mathrm{d}}{{\mathrm{d}}t}v_{(i,j)}+v^{k}_{\
,(i}v_{j),k}-\frac{1}{3}\delta_{ij}(\Lambda-4\pi G\varrho)\ .$
## III Realizing Einstein’s vision of a transition strategy
In his Kyoto address in December 1922 ishiwara , Albert Einstein expressed an
intuition of an interesting strategy that we will freely interpret in this
Letter. He said: If all accelerated systems are equivalent, then Euclidean
geometry cannot hold in all of them. To throw out geometry [the Euclidean
embedding space] and keep [vectorial] physical laws is equivalent to
describing thoughts without words. We must search for words before we can
express thoughts […] This point remained insoluble to me until 1912, when I
suddenly realized, that Gauß’s theory of surfaces holds the key for unlocking
this mystery. […].
We have interpreted Einstein’s words [in brackets] to infer that the writing
of Newton’s equations in terms of vectors as functions of inertial coordinates
implies an embedding into a global vector space. A modern strategy to “throw
out” this embedding space consists in (i) moving to a Lagrangian
representation of Newton’s equations, introducing (intrinsic and noninertial)
Lagrangian coordinates, and (ii) relaxing integrability of the resulting
tensor coefficients, i.e. replacing exact gradients by general one-form
fields.
To achieve (i), we introduce a one-parameter family of spatial diffeomorphisms
to Lagrangian coordinates $\bm{X}$, labeling fluid parcels along their
trajectories, identified with their Eulerian positions at some initial time
$t_{\rm i}$ ehlersbuchert :111Henceforth, we use the indices $a,b,c\ldots$ as
counters, while $i,j,k\ldots$ remain coordinate indices referring to an exact
basis. We realize that the vector components $f^{a}$ are to be written with
counters, but if there exists an embedding space endowed with coordinates
$x^{i}=f^{i\equiv a}$, then these counters are also coordinate indices.
$(\bm{X},t)\mapsto(\bm{x},t)=({\bm{f}}({\bm{X},t}),t)\ ;\
\bm{X}:=\bm{f}(\bm{X},t_{\rm i})\ .$ (6)
We call $\bm{f}$ the field of trajectories and ${\bf d}f^{a}=f^{a}_{\ |k}{\bf
d}X^{k}$ the Lagrangian deformation gradient, represented in the exact
Lagrangian basis. Lagrangian coordinates are intrinsic to the fluid continuum
and they can as well be used as coordinates in a local chart around a point in
a spatial Riemannian manifold. Let us consider the following bilinear metric
form, the first representation of which we call the Lagrangian metric,
${{}^{3}}{\bm{\delta}}:=\delta_{ab}\,f^{a}_{\;|i}f^{b}_{\;|j}{\bf
d}X^{i}\otimes{\bf d}X^{j}=\delta_{ij}\;{\bf d}x^{i}\otimes{\bf d}x^{j}\ .$
(7)
This metric is Euclidean, since we can find a one-parameter family of
diffeomorphisms, namely ${\bm{X}}={\bm{h}}({\bm{x}},t)$, with
${\bm{h}}={\bm{f}}^{-1}$, that transforms the first representation into the
second.
A Riemannian 3-metric ${{}^{3}}{\bm{s}}$ can be written in terms of three
spatial Cartan coframes ${\bm{\eta}}^{a}$ that provide a more elementary
“reference body” than the metric,
${{}^{3}}{\bm{s}}:=\delta_{ab}\;\bm{\eta}^{a}\otimes\bm{\eta}^{b}=\delta_{ab}\;\eta^{a}_{\;i}\,\eta^{b}_{\;j}\,{\bf
d}X^{i}\otimes{\bf d}X^{j}\ .$ (8)
We notice that the mapping
$\bm{\eta}^{a}\;\mapsto\;{\bf d}f^{a}\ ,$ (9)
which we henceforth call Euclidean restriction, implies that the Riemannian
3-metric reduces to the Euclidean metric in the Lagrangian representation. In
the integrable case of exact one-form fields ${\bf d}f^{a}$, the Lagrangian
metric embodies the geometry of the fluid, however, still embedded into
Euclidean space.
We henceforth use the wording integrable when we want to express that the
coefficient matrix $\eta^{a}_{\ i}$ of $\bm{\eta}^{a}=\eta^{a}_{\ i}{\bf
d}X^{i}$ can be obtained through spatial derivatives of vector components
$f^{a}$, i.e. $\bm{\eta}^{a}$ are exact, and we say generalized gradient when
we mean relaxation of integrability that realizes step (ii) of the outlined
strategy.
In what follows we will apply the outlined two-step strategy, where we first
concentrate on kinematic properties of the fluid. To this end we are going to
find the analogy to the Newtonian velocity gradient $v^{a}_{\ ,b}$, now
written both with counter indices, since there will no longer be a reference
to an exact Eulerian basis after relaxing integrability in the sense of
inverting (9). Relating this to the Lagrangian gradient of
$\bm{v}=\dot{\bm{f}}({\bm{X}},t)$ involves the inverse transformation
${\bm{h}}({\bm{x}},t)$: $v^{a}_{\ ,b}=v^{a}_{\ |k}h^{k}_{\
,b}={\dot{f}}^{a}_{\ |k}h^{k}_{\ ,b}$.
Moving to the nonintegrable form of the velocity gradient, we have to
introduce the inverse matrix to ${\eta}^{a}_{\ k}$. We define three frame
fields, ${\bm{e}}_{b}$ (the Dreibein at the worldlines of fluid parcels),
being dual to Cartan’s coframe fields. We express both in the respective local
basis systems (${\bf d}X^{k}$ for forms and $\bm{\partial}_{X^{k}}$ for
vectors):
$\bm{\eta}^{a}={\eta}^{a}_{\ k}\,{\bf d}X^{k}\,;\,{\bm{e}}_{b}=e_{b}^{\
k}\,\bm{\partial}_{X^{k}}\,;\,{\eta}^{a}_{\ k}e_{b}^{\ k}=\delta^{a}_{\
b}\,;\,{\eta}^{a}_{\ k}e_{a}^{\ \ell}=\delta_{k}^{\ \ell}\;.$ (10)
Consequently, the nonintegrable form of the Newtonian velocity gradient is
represented by
$\Theta^{a}_{\ b}:={\dot{\eta}}^{a}_{\ k}e_{b}^{\ k}\ .$ (11)
It is expressed in the nonexact basis (remember that the velocity gradient has
both Eulerian values and derivatives with respect to Eulerian coordinates). We
transform this object into our local exact (Lagrangian) basis with the help of
the transformation matrices (10) and arrive at:
$\Theta^{i}_{\ j}:=e_{a}^{\ i}\eta^{b}_{\ j}\,\Theta^{a}_{\ b}=e_{a}^{\ i}{\dot{\eta}}^{a}_{\ j}\ .$ (12a)

This field can be entirely expressed in terms of coframe fields through the algebraic identity

$e_{a}^{\ i}=\frac{1}{2J}\;\epsilon_{abc}\epsilon^{ik\ell}\,\eta^{b}_{\;k}\,\eta^{c}_{\;\ell}\ ,$ (12b)

with the Levi-Cività symbol $\epsilon_{abc}$, and the nonintegrable analog of the Jacobian of the spatial diffeomorphism (6),

$J:=\det(\eta^{a}_{\ i})=\frac{1}{6}\epsilon_{abc}\epsilon^{ijk}\,\eta^{a}_{\ i}\eta^{b}_{\ j}\eta^{c}_{\ k}\ .$ (12c)
We notice that the variable (12a) that generalizes the Newtonian velocity
gradient has mixed indices, which holds true for the transformation of other
Newtonian fields too. The expansion tensor is then formed by lowering the
upper index using the nonintegrable form of the Lagrangian metric (8),
$\Theta_{ij}=\delta_{ab}\eta^{a}_{\ i}\eta^{b}_{\ k}\Theta^{k}_{\
j}=\delta_{ab}\,\eta^{a}_{\ i}{\dot{\eta}}^{b}_{\ j}$, with the rate of
expansion $\Theta^{k}_{\ k}=:\Theta$. The vanishing of the vorticity,
$v_{[i,j]}=0$, so translates to the symmetry condition $\Theta_{[ij]}=0$.
Turning now to dynamical properties of the dust fluid, we introduce the
nonintegrable form of the field strength gradient along the above lines,
$g^{a}_{\ ,b}\mapsto\mathcal{F}^{a}_{\ b}={\ddot{\eta}}^{a}_{\ k}e_{b}^{\ k}$,
${\mathcal{F}}^{i}_{\ j}:=e_{a}^{\ i}\eta^{b}_{\ j}\,\mathcal{F}^{a}_{\
b}=e_{a}^{\ i}{\ddot{\eta}}^{a}_{\ j}$ and
$\mathcal{F}_{ij}=\delta_{ab}\,\eta^{a}_{\ i}{\ddot{\eta}}^{b}_{\ j}$,
yielding the nonintegrable version of Euler’s equation (2):
${\mathcal{F}}^{i}_{\ j}=\dot{\Theta}^{i}_{\ j}+\Theta^{i}_{\ k}\Theta^{k}_{\ j}\ .$ (13a)

The field equations (3) generalize to the set

${\mathcal{F}}^{k}_{\ k}=\Lambda-4\pi G\varrho\quad;\quad{\mathcal{F}}_{[ij]}=0\ .$ (13b)

In the integrable version of Newton’s equations, these are enough to determine the gravitational field. However, in the nonintegrable version the tracefree symmetric part must be part of the gravitational field tensor. We therefore add the nonintegrable form of the Newtonian tidal tensor (5) (omitting here the redundant symmetrization):

$-{E}^{i}_{\ j}:=\mathcal{F}^{i}_{\ j}-\frac{1}{3}\mathcal{F}^{k}_{\ k}\delta^{i}_{\ j}=\dot{\Theta}^{i}_{\ j}+\Theta^{i}_{\ k}\Theta^{k}_{\ j}-\frac{1}{3}(\Lambda-4\pi G\varrho)\delta^{i}_{\ j}\ ,$ (13c)

where we introduced a sign convention that we will explain below.
We now show that Equations (13), together with the nonintegrable Lagrangian
form of (4), $\dot{\varrho}+\Theta\varrho=0$, are identical to Einstein’s
equations in a fluid-orthogonal foliation of spacetime via the definition of a
new (from the Newtonian point of view auxiliary) field:
$-\mathcal{R}^{i}_{\ j}:={\dot{\Theta}}^{i}_{\ j}+\Theta{\Theta}^{i}_{\
j}-(\Lambda+4\pi G\varrho)\delta^{i}_{\ j}\ .$ (14)
Equation (14) implies a key equation of the correspondence: with (13a) we
obtain a relation of the generalized field strength gradient to this newly
defined field:
${\mathcal{F}}^{i}_{\ j}=-\mathcal{R}^{i}_{\ j}+(\Lambda+4\pi
G\varrho)\delta^{i}_{\ j}+\Theta^{i}_{\ k}\Theta^{k}_{\ j}-\Theta\Theta^{i}_{\
j}\ .$ (15)
In the geometrical context of general relativity this field is the spatial
Ricci tensor, $\mathcal{R}_{ij}=\delta_{ab}\eta^{a}_{\ i}\eta^{b}_{\
k}\mathcal{R}^{k}_{\ j}$, the key equation (15) is known to emerge from the
Gauß embedding equation using the nonintegrable Euler equation (13a): the
components of the generalized Newtonian field strength gradient form
components of the spacetime Riemann tensor, $-{\mathcal{F}}^{i}_{\
j}={{}^{4}}R^{i}_{\ 0j0}$. Imposing the field equations (13b), the trace of
(15) becomes the energy constraint, and the antisymmetric part of (15)
vanishes due to the vanishing of the vorticity in a flow-orthogonal foliation:
${\mathcal{F}}_{[ij]}=\delta_{ab}\,\eta^{a}_{\ [i}{\ddot{\eta}}^{b}_{\ j]}=\frac{\mathrm{d}}{\mathrm{d}t}\big(\delta_{ab}\,\eta^{a}_{\ [i}{\dot{\eta}}^{b}_{\ j]}\big)=0$. The nonintegrable form of the Newtonian tidal tensor (13c) is the spatially projected gravitoelectric part of the Weyl tensor ehlersbuchert:weyl . It reduces to (5) in the Euclidean restriction (9), up to a sign convention.

[Footnote 2: The sign convention difference arises since, in the geometrical context, we consider $E_{ij}$ as (“passive”) curvature, while in Newtonian theory the corresponding field is defined “actively” in terms of gravitational acceleration. This remark also applies to the extrinsic curvature $K_{ij}$ vs. the expansion $\Theta_{ij}=-K_{ij}$.]
For completeness we list the Einstein equations for an irrotational dust fluid in a flow-orthogonal foliation of spacetime in the usual (3+1)-representation:

[Footnote 3: The metric signature is taken to be $(-,+,+,+)$, and the speed of light $c=1$. Greek indices run through $\mu,\nu\ldots=0,1,2,3$, and the semicolon denotes the covariant derivative with respect to the 4-metric, while a double vertical slash denotes the covariant spatial derivative with respect to the 3-metric ${{}^{3}}\bm{s}$ with components $s_{ij}$.]

${\dot{\varrho}}+\Theta\varrho=0\ ;$ (16a)

${\dot{s}}_{ij}=2\,s_{ik}\Theta^{k}_{\ j}\quad;\quad\Theta_{[ij]}=0\ ;$ (16b)

${\dot{\Theta}}^{i}_{\ j}+\Theta{\Theta}^{i}_{\ j}=-\mathcal{R}^{i}_{~{}j}+(\Lambda+4\pi G\varrho)\delta^{i}_{\ j}\ ;$ (16c)

$\Theta^{2}-\Theta^{i}_{\ j}\,\Theta^{j}_{\ i}=-{\mathcal{R}}^{k}_{\ k}+2\Lambda+16\pi G\varrho\ ;$ (16d)

$\Theta^{k}_{\ j||k}-\Theta_{||j}=0\ .$ (16e)
The first equation arises from $T^{\mu\nu}=\varrho u^{\mu}u^{\nu}$, with the
conservation law $T^{\mu\nu}_{\ \ ;\nu}=0$, while the second defines the
expansion tensor (or minus the extrinsic curvature); the third are its $6$
evolution equations that are identical to the nonintegrable Euler equation
(13a) by redefining $\mathcal{R}^{i}_{\ j}$ through (15), and the fourth is
one of the four constraint equations, the energy constraint, all – as shown –
arising from our strategy.
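As a quick consistency check, the trace of the evolution equations (16c) reads

$\dot{\Theta}+\Theta^{2}=-\mathcal{R}^{k}_{\ k}+3(\Lambda+4\pi G\varrho)\ ;$

subtracting the energy constraint (16d) eliminates the spatial Ricci scalar and yields

$\dot{\Theta}+\Theta^{i}_{\ j}\Theta^{j}_{\ i}=\Lambda-4\pi G\varrho\ ,$

which is exactly the trace of the nonintegrable Euler equation (13a) combined with the first of the field equations (13b), ${\mathcal{F}}^{k}_{\ k}=\Lambda-4\pi G\varrho$, i.e. Raychaudhuri's equation for irrotational dust.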
The momentum constraints (16e) seem to not directly arise from the Newtonian
system because their Euclidean restriction (9) does not imply a constraint.
This can be traced back to the fact that the spatially projected
gravitomagnetic part of the Weyl tensor $H_{ij}$ vanishes in the Euclidean
restriction (9), in the current setting:
$-H^{i}_{\ j}=\frac{1}{J}\epsilon^{ik\ell}\Theta_{jk||\ell}\ \mapsto\ 0\ ;\
J\neq 0\ ;$ (17)
$H_{[ij]}=0$ implies (16e). This result is in agreement with the Newtonian
limit in Ehlers’ frame theory ehlersbuchert:weyl , and here trivially follows
via integrability that implies the commutation of second derivatives, see
subsections III.A.3 and III.A.4 in rza1 .
We can derive (16e) by starting with the trivial Newtonian identity that
second derivatives of the velocity commute, $v^{a}_{\ ,b\,c}-v^{a}_{\ ,c\,b}=0$. Calculating $v^{a}_{\ ,b}=v^{i}_{\ |j}f^{a}_{\ |i}h^{j}_{\ ,b}$,
transforming the second derivatives to Lagrangian coordinates and projecting
onto the Lagrangian basis (step (i) of our strategy) yields:

$(v^{a}_{\ ,bc}-v^{a}_{\ ,cb})(h^{k}_{\ ,a}f^{b}_{\ |n}f^{c}_{\ |\ell})=v^{k}_{\ |n\ell}-v^{k}_{\ |\ell n}+{}^{N}\Gamma^{j}_{\ \ell n}v^{k}_{\ |j}-{}^{N}\Gamma^{j}_{\ n\ell}v^{k}_{\ |j}+{}^{N}\Gamma^{k}_{\ \ell i}v^{i}_{\ |n}-{}^{N}\Gamma^{k}_{\ ni}v^{i}_{\ |\ell}=v^{k}_{\ |n||\ell}-v^{k}_{\ |\ell||n}=\epsilon_{in\ell}\epsilon^{ijm}v^{k}_{\ |j||m}\ .$ (19)

[Footnote 4: Notice that the (noncovariant) Christoffel connection coefficients $\Gamma^{j}_{\ n\ell}$ do not vanish in the Euclidean restriction (9): they result in a Newtonian integrable connection (since the Lagrangian coordinate system is noninertial),

$\Gamma^{j}_{\ \ell n}=\Gamma^{j}_{\ n\ell}\mapsto{}^{\rm N}\Gamma^{j}_{\ n\ell}=h_{\ ,c}^{j}f^{c}_{\ |\ell n}=-h^{j}_{\ ,ab}f^{a}_{\ |\ell}f^{b}_{\ |n}={}^{\rm N}\Gamma^{j}_{\ \ell n}$ (18)

(both forms appear in the calculation of (III)). However, the Euclidean restriction of the (covariant) Cartan connection ${\bf d}{\bm{\eta}}^{a}=:-{\bm{\omega}}^{a}_{\ b}\wedge{\bm{\eta}}^{b}$ vanishes, i.e. the covariant requirement of integrability, ${\bf d}^{2}f^{a}={\bf 0}$, with ${\bf d}^{2}:={\bf d}\circ{\bf d}$, holds. Notice also that starting with the transformation of the vector, $v^{a}=f^{a}_{\ |i}v^{i}$, instead of its gradient, will result in extra terms proportional to the Lagrangian velocity $v^{i}=0$, which consequently vanish in a Lagrangian frame.]
Relaxing integrability (step (ii) of our strategy) then results in the
Peterson-Mainardi-Codazzi identity in the flow-orthogonal foliation
alcubierreGR [Sect. 8.3], eric (using (17) in the second equality):
$\Theta^{k}_{\ n||\ell}-\Theta^{k}_{\
\ell||n}=\epsilon_{in\ell}\epsilon^{ijm}\Theta^{k}_{\
j||m}=-J\epsilon_{in\ell}H^{ik}\ ,$ (20)
the trace of which ($k=\ell$) is (16e). Note that the trace of the
gravitomagnetic part of the Weyl tensor vanishes due to the symmetry condition
$\Theta_{[pj]}=0$. In the integrable case both sides are identically zero.
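Written out, the trace $k=\ell$ of (20) gives

$\Theta^{k}_{\ n||k}-\Theta_{||n}=-J\,\epsilon_{ink}H^{ik}=0\ ,$

where the right-hand side vanishes because $\epsilon_{ink}$ is antisymmetric in the pair $(i,k)$ while $H^{ik}$ is symmetric; this is the momentum constraint (16e).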
An important remark is in order here. Both steps (i) and (ii) are crucial for
our transformation strategy, but notice that step (ii) produces a
nonintegrable deformation leading to a general description of spatial
deformations in terms of Cartan coframe fields. Therefore, following from step
(ii), we can derive all the elements of geometry like a nonintegrable
connection and curvature via Cartan’s structure equations in space, ${\bf
d}{\bm{\eta}}^{a}=-\bm{\omega}^{a}_{\ b}\wedge\bm{\eta}^{b}\neq{\bf 0}$ and
$\bm{\Omega}^{a}_{\ b}:={\bf d}\bm{\omega}^{a}_{\ b}+\bm{\omega}^{a}_{\
c}\wedge\bm{\omega}^{c}_{\ b}\neq{\bf 0}$, together with the spatial Bianchi
identities ${\bf d}^{2}{\bm{\eta}}^{a}={\bm{0}}$ and ${\bf
d}^{2}{\bm{\omega}}^{a}_{\ b}={\bm{0}}$.
The final element of the correspondence arises when constructing the spacetime
metric. We notice that the Newtonian equations and the (3+1)-Einstein
equations appear to be parametrized by the coordinate $t$: our two-step
strategy produces the correct equations. However, when constructing a
4-dimensional spacetime, the introduction of the Lorentzian signature in the
4-metric is required: ${{}^{4}}{\bm{s}}:=-{\bf d}t\otimes{\bf
d}t+\delta_{ab}\;\eta^{a}_{\;i}\,\eta^{b}_{\;j}\,{\bf d}X^{i}\otimes{\bf
d}X^{j}$. The Euclidean restriction (9) is then extended to spacetime and
becomes the restriction to Minkowski spacetime. We understand that the
Lagrangian representation is a crucial cornerstone of the correspondence:
Lagrangian observers are at rest and “do not see” the local light cone, since
they do not experience a boost. The Lagrangian observers have just to be told
that their distances in time direction count negatively in a causal
4-dimensional spacetime.
## IV Summary and discussion
We looked at the Newtonian equations for self-gravitating systems in the
Lagrangian frame together with a generalization to a nonintegrable form of the
Newtonian deformation gradient ${\bf d}f^{a}$. We restricted our investigation
to the matter model of irrotational dust. We argued that the nonintegrable
form is equivalent to Einstein’s equations in a flow-orthogonal
(3+1)-foliation of spacetime. We observe that there is no weak field
approximation and no limiting process to be performed. None of the parts in
Einstein’s equations are neglected, which paints an alternative picture to the
current understanding of Newtonian and post-Newtonian dynamics. Under this modern interpretation, Newton’s theory appears to be stronger than commonly believed. It will be interesting to revisit predictions of general
relativity where Newtonian predictions in Eulerian representation appear to
fall short.
We could furthermore argue that the integrable (Lagrange-Newton) form is a
measure-zero representation of the general form that, from a pragmatic point
of view, can never be realized: any realization, e.g. a numerical
implementation of an exact gradient ${\bf d}f^{a}$, will be limited by finite
precision and generically produces a nonintegrable field. Newtonian gradients
form a measure zero set of fluid deformations and meet the strong condition
${\bf d}^{2}f^{a}={\bf 0}$ (second derivatives commute). Hence, any
realization will instantly produce the nonintegrable form, and therefore a
nonintegrable connection and curvature via Cartan’s structure equations. A
small perturbation of strict integrability can lead to strong curvature
without there being a smooth limit to Euclidean space. Stated pointedly: an attempt to realize Newtonian dynamics in the Lagrangian representation will, in practice, be a realization of general relativity.
Mathematically, the requirement of integrability has nevertheless interesting
and important implications. We think of the spatial integration of an
integrable field vs. a nonintegrable field. (Footnote: The difference can best be seen by performing a Hodge decomposition of Cartan forms into exact, co-exact and harmonic forms; for a component writing of one-forms in this context see rza4.) Integrable parts will allow for a transformation to surface integrals on the
boundary of a spatial domain, while nonintegrable parts remain in the bulk. An
example is the backreaction problem in cosmology, i.e. the impact of
inhomogeneities on global properties of world models buchert:dust ,GBC : in
flat space, the relevant terms describing a nonvanishing impact are
divergences of vector fields and as a result vanish for isolated systems and
for periodic boundary conditions corresponding to a 3-torus topological
architecture buchert:average . For nonflat spatial sections, i.e. in the
nonintegrable situation, backreaction terms in general furnish a global
contribution.
The reader may find examples where the proposed correspondence has already
been successfully employed in the transition from Lagrangian perturbation
solutions in Newtonian theory to corresponding general-relativistic
perturbation solutions, see the series of papers following rza1 and the
recent review paper Universe . We also point the reader to the construction of
exact solutions of general relativity from Newtonian solutions, see Kasai1995
, rza6 for Szekeres class II solutions with their corresponding Euclidean
class Buchert1989AA , that appear as subclasses of first-order Lagrangian
perturbation solutions at a FLRW (Friedmann-Lemaître-Robertson-Walker)
background; for Szekeres class I solutions see beyond .
It is possible to extend the proposed strategy to more general spacetimes. In
order to describe e.g. vortical flows and more general fluids, we have to
consider tilted flows within a general ADM (Arnowitt-Deser-Misner) foliation
of spacetime. For this purpose we have to diffeomorph the exact basis to
obtain the general metric form (with lapse $N$ and shift $N^{i}$), below
exemplified for a comoving description, where the coordinate velocity is set
to zero. The line element reads:
${{}^{4}}ds^{2}=-\frac{N^{2}}{\gamma^{2}}dt^{2}+2N{\rm
v}_{i}\,dt\,dX^{i}+s_{ij}\,dX^{i}dX^{j}\ ,$ (21)
with the covariant 3-velocity ${\rm v}^{i}=(N^{i}/N)$, the induced spatial
metric components $s_{ij}$, and a 4-velocity that is tilted with respect to
the hypersurface normal BMR :
$u^{\mu}=\frac{\gamma}{N}(1,0,0,0)\quad;\quad
u_{\mu}=(-\frac{N}{\gamma},\gamma{\rm v}_{i})\ .$ (22)
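As a brief consistency check (our addition, not in the original text), the tilted 4-velocity (22) is correctly normalized for any lapse $N$ and Lorentz factor $\gamma$, since its spatial contravariant components vanish in the comoving description:

```latex
u^{\mu}u_{\mu} \;=\; u^{0}u_{0} \;=\; \frac{\gamma}{N}\left(-\frac{N}{\gamma}\right) \;=\; -1\ .
```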
In the general comoving setting the appearance of the Lorentz factor $\gamma$
resurrects the causality constant, while the appearance of the lapse $N$ makes
the time deformation nonintegrable. Rendering the tilted 4-velocity
Lagrangian, $u^{\mu}=(1,0,0,0)$, as our strategy demands, a proper-time
foliation $\tau=\int(N/\gamma)dt=const.$, i.e. $N=\gamma$, investigated in BMR
, can be considered. A correspondence in proper-time foliation can thus be set
up in the spirit of what has been said in this Letter. Its realization is the
subject of work in progress. In the tilted foliation, gravitomagnetic
extensions of Newton’s theory, as proposed by Heaviside heaviside (see also
the comments in bm ), will become relevant.
Acknowledgments: This work is part of a project that has received funding from
the European Research Council under the European Union’s Horizon 2020 research
and innovation program (grant agreement ERC advanced grant 740021-ARTHUS, PI:
TB). Thanks to Hamed Barzegar, Henk van Elst, Asta Heinesen and Pierre Mourier
for valuable remarks on the manuscript, and to an anonymous referee for
insightful and constructive suggestions.
## References
* (1) M. Alcubierre, Introduction to 3+1 Numerical Relativity. International Series of Monographs on Physics, Oxford Academic, Oxford (2008)
* (2) F. Al Roumi, T. Buchert and A. Wiegand, Lagrangian theory of structure formation in relativistic cosmology. IV. Lagrangian approach to gravitational waves. Phys. Rev. D 96, 123538 (2017) [arXiv:1711.01597]
* (3) L. Brunswic and T. Buchert, Gauss–Bonnet–Chern approach to the averaged Universe. Class. Quant. Grav. 37, 215022 (2020) [arXiv:2002.08336]
* (4) T. Buchert, A class of solutions in Newtonian cosmology and the pancake theory. Astron. Astrophys. 223, 9 (1989)
* (5) T. Buchert, On average properties of inhomogeneous fluids in general relativity: dust cosmologies. Gen. Rel. Grav. 32, 105 (2000) [arXiv:gr-qc/9906015]
* (6) T. Buchert, I. Delgado Gaspar and J.J. Ostrowski, On general-relativistic Lagrangian perturbation theory and its non-perturbative generalization. Universe 8, 583 (2022) [arXiv:2209.13417]
* (7) T. Buchert and J. Ehlers, Averaging inhomogeneous Newtonian cosmologies. Astron. Astrophys. 320, 1 (1997) [arXiv:astro-ph/9510056]
* (8) T. Buchert and T. Mädler, Editorial Note to: On the Newtonian Limit of Einstein’s Theory of Gravitation (by Jürgen Ehlers). Gen. Rel. Grav. 51, 162 (2019) [arXiv:1910.12106]
* (9) T. Buchert, P. Mourier and X. Roy, On average properties of inhomogeneous fluids in general relativity III: general fluid cosmologies. Gen. Rel. Grav. 52, 27 (2020) [arXiv:1912.04213]
* (10) T. Buchert and M. Ostermann, Lagrangian theory of structure formation in relativistic cosmology. I. Lagrangian framework and definition of a nonperturbative approximation. Phys. Rev. D 86, 023520 (2012) [arXiv:1203.6263]
* (11) N.E. Chisari and M. Zaldarriaga, Connection between Newtonian simulations and general relativity. Phys. Rev. D 83, 123505 (2011) Erratum: Phys. Rev. D 84, 089901 (2011) [arXiv:1101.3555]
* (12) I. Delgado Gaspar and T. Buchert, Lagrangian theory of structure formation in relativistic cosmology. VI. Comparison with Szekeres exact solutions. Phys. Rev. D 103, 023513 (2021) [arXiv:2009.06339]
* (13) I. Delgado Gaspar, T. Buchert and J.J. Ostrowski, Beyond relativistic Lagrangian perturbation theory. I. An exact-solution controlled model for structure formation. Phys. Rev. D 107, 024018 (2023) [arXiv:2210.04004]
* (14) W.E. East, R. Wojtak and T. Abel, Comparing fully general relativistic and Newtonian calculations of structure formation. Phys. Rev. D 97, 043509 (2018) [arXiv:1711.06681]
* (15) J. Ehlers, Über den Newtonschen Grenzwert der Einsteinschen Gravitationstheorie. In: Nitsch, J., Pfarr, J., Stachow, E.-W. (eds.) Grundlagenprobleme der modernen Physik: Festschrift für Peter Mittelstaedt zum 50. Geburtstag, pp. 65-84. Bibliographisches Institut, Mannheim, Wien, Zürich (1981); Republication of: On the Newtonian limit of Einstein’s theory of gravitation. Gen. Rel. Grav. 51, 163 (2019)
* (16) J. Ehlers, Examples of Newtonian limits of relativistic space-times. Class. Quant. Grav. 14, A119 (1997)
* (17) J. Ehlers and T. Buchert, Newtonian cosmology in Lagrangian formulation: Foundations and perturbation theory. Gen. Rel. Grav. 29, 733 (1997) [arXiv:astro-ph/9609036]
* (18) J. Ehlers and T. Buchert, On the Newtonian limit of the Weyl tensor, Gen. Rel. Grav. 41, 2153 (2009) [arXiv:0907.2645]
* (19) G.F.R. Ellis, Republication of: Relativistic Cosmology, Gen. Rel. Grav. 41, 581 (2009)
* (20) E. Gourgoulhon, 3+1 formalism in general relativity. Bases of numerical relativity. Lecture Notes in Physics vol. 846, Springer, Berlin (2012) [arXiv:gr-qc/0703035]
* (21) S.R. Green and R.M. Wald, Newtonian and relativistic cosmologies. Phys. Rev. D 85, 063512 (2012) [arXiv:1111.2997]
* (22) O. Heaviside, A gravitational and electromagnetic analogy. The Electrician 31, 281 (part I), 359 (part II) (1893) html online by O. D. Jefimenko
* (23) J. Ishiwara, Einstein K$\overline{o}$en–Roku (Records of Professor Einstein’s Lectures). Ishiwara & Hitoshi (eds.), Tokyo–Tosho Corporation, Tokyo (1971)
* (24) M. Kasai, Tetrad-based perturbative approach to inhomogeneous universes: a general relativistic version of the Zel’dovich approximation. Phys. Rev. D 52, 5605 (1995)
* (25) S.M. Koksbang and S. Hannestad, Methods for studying the accuracy of light propagation in N-body simulations. Phys. Rev. D 91, 043508 (2015) [arXiv:1501.01413]
# Few-Shot Intent Detection via Contrastive Pre-Training and Fine-Tuning
Jian-Guo Zhang1, Trung Bui2, Seunghyun Yoon2, Xiang Chen2, Zhiwei Liu1
Congying Xia1, Quan Hung Tran2, Walter Chang2, Philip Yu1
1 University of Illinois at Chicago, Chicago, USA
2Adobe Research, San Jose, USA
{jzhan51, zliu213, cxia8<EMAIL_ADDRESS>
{bui, syoon, xiangche, qtran<EMAIL_ADDRESS>
Work done while the first author was an intern at Adobe Research.
###### Abstract
In this work, we focus on a more challenging few-shot intent detection
scenario where many intents are fine-grained and semantically similar. We
present a simple yet effective few-shot intent detection schema via
contrastive pre-training and fine-tuning. Specifically, we first conduct self-
supervised contrastive pre-training on collected intent datasets, which
implicitly learns to discriminate semantically similar utterances without
using any labels. We then perform few-shot intent detection together with
supervised contrastive learning, which explicitly pulls utterances from the
same intent closer and pushes utterances across different intents farther.
Experimental results show that our proposed method achieves state-of-the-art
performance on three challenging intent detection datasets under 5-shot and
10-shot settings.
## 1 Introduction
Intent detection, aiming to identify intents from user utterances, is a key
component in task-oriented dialog systems. In real systems such as Amazon
Alexa, correctly identifying user intents is crucial for downstream tasks
(Zhang et al., 2020b; Ham et al., 2020). A practical challenge is data
scarcity: it is expensive to annotate enough examples for emerging intents, and accurately identifying intents from only a few labeled examples has therefore attracted increasing attention.
Existing methods address the few-shot intent detection tasks mainly from two
perspectives: (1) data augmentation and (2) task-adaptive training with pre-
trained models. For the first category, Zhang et al. (2020a) and Mehri et al.
(2020b) propose a nearest neighbor classification schema with full use of the
limited training examples in both training and inference stages. Xia et al.
(2020b) and Peng et al. (2020) propose to generate utterances for emerging
intents based on variational autoencoder (Kingma and Welling, 2013) and GPT-2
(Radford et al., 2019), respectively. For the second category, Casanueva et
al. (2020) and Mehri et al. (2020a) conduct intent detection by leveraging
related conversational pre-training models based on a few hundred million
conversations. Meanwhile, they devise a task-adaptive training schema where
the model is pre-trained on all relative intent datasets or the target intent
datasets with mask language modeling.
However, previous methods, such as data-augmentation-based models Liu et al. (2021c), are inefficient to train and hard to scale to tasks with many intents. Moreover, these models do not handle a common real-world scenario well: few-shot intent detection becomes even more challenging when there are many fine-grained, semantically similar intents. For instance, BANKING77 (Casanueva et al., 2020) has a single domain
with 77 intents, and CLINC150 (Larson et al., 2019) has ten domains with 150
intents. Many intents in the datasets are similar. Therefore, training models
is rather challenging when there are only limited examples.
Inspired by the recent success of contrastive learning (He et al., 2020; Gunel
et al., 2020; Chen et al., 2020; Radford et al., 2021; Liu et al., 2021a; Gao
et al., 2021; Liu et al., 2021b), which aims to enhance discrimination
abilities of models, this work proposes improving few-shot intent detection
via Contrastive Pre-training and Fine-Tuning (CPFT). Intuitively, we first
learn to implicitly discriminate semantically similar utterances via
contrastive self-supervised pre-training on intent datasets without using any
intent labels. We then jointly perform few-shot intent detection and
supervised contrastive learning. The supervised contrastive learning helps the
model explicitly learn to pull utterances from the same intent close and push
utterances across different intents apart.
Our contributions are summarized as follows: 1) We design a simple yet
effective few-shot intent detection schema via contrastive pre-training and
fine-tuning. 2) Experimental results verify the state-of-the-art performance
of CPFT on three challenging datasets under 5-shot and 10-shot settings.
## 2 Related Work
Since this work is related to few-shot intent detection and contrastive
learning, we review recent work from both areas in this section.
The few-shot intent detection task typically covers three scenarios: (1) learn an intent detection model with only $K$ examples for each intent (Zhang et al., 2020a; Mehri et al., 2020a; Casanueva et al., 2020); (2) given a model trained on existing intents with all examples, learn to generalize the model to new intents with only $K$ examples for each new intent (Xia et al., 2020a, b, 2021a); (3) learn to identify both in-domain and out-of-scope queries with only $K$ examples for each intent (Zhang et al., 2020a, 2021; Xia et al., 2021b).
In this work, we focus on the first scenario, and several methods have been
proposed to tackle the challenge. Specifically, Zhang et al. (2020a) proposes a data augmentation schema, which pre-trains a model on annotated pairs from natural language inference (NLI) datasets and uses a nearest-neighbor classification schema to transfer this knowledge to user intent classification. However, the training is expensive and hard to scale to tasks with
hundreds of intents Liu et al. (2020). Mehri et al. (2020b); Casanueva et al.
(2020) propose the task-adaptive training, which leverages models pre-trained
from a few hundred million dialogues to tackle few-shot intent detection. It
also includes an unsupervised mask language modeling loss on the target intent
datasets and shows promising improvements.
Contrastive learning has shown superior performance on various domains, such
as visual representation (He et al., 2020; Chen et al., 2020; Radford et al.,
2021), graph representation Qiu et al. (2020); You et al. (2020), and
recommender systems Liu et al. (2021b). Moreover, recent works also adopt
contrastive learning in natural language processing tasks (Gunel et al., 2020;
Liu et al., 2021a; Gao et al., 2021), which employ contrastive learning to train the encoder. Specifically, Gunel et al. (2020) designs a supervised contrastive learning loss for fine-tuning. Gao et al. (2021) designs a
simple contrastive learning framework through dropout and it shows state-of-
the-art performance on unsupervised and full-shot supervised semantic textual
similarity tasks. Liu et al. (2021a) designs the self-supervised Mirror-BERT framework with two types of data augmentation: randomly erasing or masking parts of the input texts, and feature-level augmentation through dropout.
Our work differs from them in several respects: Firstly, we specifically
tackle the few-shot intent detection task rather than the general full-shot
learning; Secondly, we design a schema and employ contrastive learning in both
self-supervised pre-training and supervised fine-tuning stages.
## 3 CPFT Methodology
We consider a few-shot intent detection task that handles $C$ user intents,
where the task is to classify a user utterance $u$ into one of the $C$
classes. We adopt balanced $K$-shot learning for each intent Zhang et al. (2020a); Casanueva et al. (2020), i.e., each intent includes only $K$ examples in the training data. As such, there are $C\cdot K$ training examples in total.
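The balanced $K$-shot setup can be sketched as follows (a minimal illustration of the sampling described above, not the paper's released code; the `(utterance, intent)` data format is an assumption):

```python
import random
from collections import defaultdict

def sample_k_shot(examples, k, seed=0):
    """Build a balanced K-shot training set with k utterances per intent.

    `examples` is a list of (utterance, intent) pairs; with C intents the
    result contains C * k training examples, as in the paper's setting.
    """
    rng = random.Random(seed)
    by_intent = defaultdict(list)
    for utt, intent in examples:
        by_intent[intent].append(utt)
    shot = []
    for intent, utts in by_intent.items():
        shot.extend((u, intent) for u in rng.sample(utts, k))
    return shot
```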
In the following section, we first describe the self-supervised contrastive
pre-training for utterance understanding before introducing the supervised
fine-tuning for few-shot intent detection.
### 3.1 Self-supervised Pre-training
We retrieve the feature representation $\mathbf{h}_{i}$ for the $i$-th user
utterance through an encoder model, which in this paper is BERT Devlin et al.
(2019), i.e., $\mathbf{h}_{i}=\text{BERT}(u_{i})$. We implicitly learn the
sentence-level utterance understanding and discriminate semantically similar
utterances through the self-supervised contrastive learning method (Wu et al.,
2020b; Liu et al., 2021a; Gao et al., 2021):
$\mathcal{L}_{\text{uns\_cl}}=-\frac{1}{N}\sum_{i=1}^{N}\log\frac{\exp(\text{sim}(\mathbf{h}_{i},\bar{\mathbf{h}}_{i})/\tau)}{\sum_{j=1}^{N}\exp(\text{sim}(\mathbf{h}_{i},\bar{\mathbf{h}}_{j})/\tau)},$
(1)
where $N$ is the number of sentences in a batch. $\tau$ is a temperature
parameter that controls the penalty to negative samples.
$\text{sim}(\mathbf{h}_{i},\bar{\mathbf{h}}_{i})$ denotes the cosine similarity between the two input vectors $\mathbf{h}_{i}$ and $\bar{\mathbf{h}}_{i}$. $\bar{\mathbf{h}}_{i}$ is the representation of the sentence $\bar{u}_{i}$, where $\bar{u}_{i}$ is obtained from the same sentence $u_{i}$ but with a few (10%) tokens randomly masked Devlin et al. (2019). Specifically, we
dynamically mask tokens during batch training (Wu et al., 2020a), i.e., a
sentence has different masked positions across different training epochs, and
we find this is beneficial to utterance understanding. The sentences $u_{i}$ and $\bar{u}_{i}$ are fed together into a single encoder during batch training (Gao et al., 2021).
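Eq. (1) can be sketched in NumPy as follows. This is a simplified illustration, not the paper's implementation: the BERT encoder is abstracted away, and `h`, `h_bar` are assumed to be precomputed sentence embeddings for the original and the masked utterances.

```python
import numpy as np

def unsupervised_contrastive_loss(h, h_bar, tau=0.1):
    """Eq. (1): InfoNCE over N utterances and their masked copies.

    h, h_bar: (N, d) arrays of embeddings; row i of h_bar encodes the
    same utterance as row i of h with ~10% of its tokens masked.
    """
    # cosine similarity matrix sim(h_i, h_bar_j), scaled by temperature
    h_n = h / np.linalg.norm(h, axis=1, keepdims=True)
    hb_n = h_bar / np.linalg.norm(h_bar, axis=1, keepdims=True)
    sim = h_n @ hb_n.T / tau                                  # (N, N)
    # row-wise log-softmax; the positives sit on the diagonal
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

When each embedding is most similar to its own masked copy, the diagonal dominates each row and the loss is small; mismatched pairs drive it up.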
Besides the sentence-level enhancement, we also add the mask language modeling
loss Devlin et al. (2019); Wu et al. (2020a) to enhance the token-level
utterance understanding:
$\mathcal{L}_{\text{mlm}}=-\frac{1}{M}\sum_{m=1}^{M}\log P(x_{m}),$ (2)
where $P(x_{m})$ denotes the predicted probability of a masked token $x_{m}$
over the total vocabulary, and $M$ is the number of masked tokens in each
batch.
Our total loss for each batch is
$\mathcal{L}_{\text{stage1}}=\mathcal{L}_{\text{uns\_cl}}+\lambda\mathcal{L}_{\text{mlm}}$,
where $\lambda$ is a weight hyper-parameter.
### 3.2 Supervised Fine-tuning
Through self-supervised learning in the first stage, the model efficiently
utilizes many unlabeled user utterances. The model is given very limited
examples in the second stage, such as 5 or 10 examples for each intent. To better understand user intents, especially when intents are similar to each other, we utilize a supervised contrastive learning method (Gunel et al., 2020) and train it together with an intent classification loss. We treat two
utterances from the same class as a positive pair and the two utterances
across different classes as a negative pair for contrastive learning. Unlike previous work, an utterance can also form a positive pair with itself, since we input it twice into the single encoder and the two feature representations differ due to the dropout of BERT. The corresponding loss is:
$\mathcal{L}_{\text{s\_cl}}=-\frac{1}{T}\sum_{i=1}^{N}\sum_{j=1}^{N}\mathds{1}_{y_{i}=y_{j}}\log\frac{\exp(\text{sim}(\mathbf{h}_{i},\mathbf{h}_{j})/\tau)}{\sum_{n=1}^{N}\exp(\text{sim}(\mathbf{h}_{i},\mathbf{h}_{n})/\tau)},$
(3)
where $T$ is the number of pairs from the same class in the batch.
The intent classification loss is:
$\mathcal{L}_{\text{intent}}=-\frac{1}{N}\sum_{j=1}^{C}\sum_{i=1}^{N}\log
P(C_{j}|u_{i}),$ (4)
where $P(C_{j}|u_{i})$ is the predicted probability of the $i$-th sentence to
be the $j$-th intent class.
We jointly train the two losses for each batch:
$\mathcal{L}_{\text{stage2}}=\mathcal{L}_{\text{s\_cl}}+\lambda^{\prime}\mathcal{L}_{\text{intent}}$,
where $\lambda^{\prime}$ is a weight hyper-parameter.
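The two fine-tuning losses, Eqs. (3) and (4), can be sketched as follows. As before, this is a NumPy illustration under the assumption of precomputed embeddings and classifier logits, not the authors' code; note that the same-class indicator includes the $i=j$ self-pairs, matching the dropout-based positive pairs described above.

```python
import numpy as np

def supervised_contrastive_loss(h, labels, tau=0.1):
    """Eq. (3): pull same-intent utterances together, push others apart."""
    h_n = h / np.linalg.norm(h, axis=1, keepdims=True)
    sim = h_n @ h_n.T / tau
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = labels[:, None] == labels[None, :]   # indicator 1_{y_i = y_j}
    return -log_prob[same].sum() / same.sum()   # T = # same-class pairs

def intent_ce_loss(logits, labels):
    """Eq. (4): cross-entropy over the C intent classes."""
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(log_prob[np.arange(len(labels)), labels])
```

In stage two these would be combined as `L = scl + lambda_p * ce`, mirroring $\mathcal{L}_{\text{stage2}}$.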
Name | # Utterance | # Intent | # Domain
---|---|---|---
CLINC150 (Larson et al., 2019) | 18200 | 150 | 10
BANKING77 (Casanueva et al., 2020) | 10162 | 77 | 1
HWU64 (Liu et al., 2019) | 10030 | 64 | 21
TOP (Gupta et al., 2018) | 35741 | 25 | 2
SNIPS (Coucke et al., 2018) | 9888 | 5 | -
ATIS (Tur et al., 2010) | 4978 | 21 | -
Table 1: Data statistics for intent detection datasets.
| CLINC150 | BANKING77 | HWU64
---|---|---|---
Model | 5-shot | 10-shot | 5-shot | 10-shot | 5-shot | 10-shot
RoBERTa+Classifier (Zhang et al., 2020a) | 87.99 | 91.55 | 74.04 | 84.27 | 75.56 | 82.90
USE (Casanueva et al., 2020) | 87.82 | 90.85 | 76.29 | 84.23 | 77.79 | 83.75
CONVERT (Casanueva et al., 2020) | 89.22 | 92.62 | 75.32 | 83.32 | 76.95 | 82.65
USE+CONVERT (Casanueva et al., 2020) | 90.49 | 93.26 | 77.75 | 85.19 | 80.01 | 85.83
CONVBERT (Mehri et al., 2020a) | - | 92.10 | - | 83.63 | - | 83.77
CONVBERT + MLM (Mehri et al., 2020a) | - | 92.75 | - | 83.99 | - | 84.52
CONVBERT + Combined (Mehri et al., 2020b) | - | 93.97 | - | 85.95 | - | 86.28
DNNC (Zhang et al., 2020a) | 91.02 | 93.76 | 80.40 | 86.71 | 80.46 | 84.72
CPFT | 92.34 | 94.18 | 80.86 | 87.20 | 82.03 | 87.13
Table 2: Testing accuracy ($\times 100\%$) on three datasets under 5-shot and
10-shot settings.
## 4 Experimental Settings
### 4.1 Datasets
#### Pre-training Datasets
We collected six public datasets consisting of different user intents. The
dataset statistics are shown in Table 1 (datasets available at https://github.com/jianguoz/Few-Shot-Intent-Detection). For fair comparisons, we exclude their test sets during the pre-training phase, which is different from previous work (Mehri et al.,
2020a, b), where they use the whole datasets. We also remove utterances with
less than five tokens, and there are 80,782 training utterances in total. We
conduct self-supervised pre-training on the collected utterances without using
labels.
#### Evaluation Datasets
To better study the more challenging fine-grained few-shot intent detection
problem and compare with recent state-of-the-art baselines, we select three
challenging intent detection datasets for evaluation, i.e., CLINC150 (Larson
et al., 2019), BANKING77 (Casanueva et al., 2020) and HWU64 (Liu et al.,
2019). CLINC150 contains 23,700 utterances across ten different domains, and
there are in total 150 intents. BANKING77 contains 13,083 utterances with a
single banking domain and 77 intents. HWU64 includes 25,716 utterances with 64
intents spanning 21 domains. We follow the setup of Mehri et al. (2020a),
where a small portion of the training set is separated as a validation set,
and the test set is unchanged. Following previous work, we repeat our few-shot
learning model training five times and report the average accuracy.
### 4.2 Model Training and Baselines
We utilize RoBERTa with base configuration, i.e., roberta-base as the BERT
encoder. We pre-train on the combined intent datasets (test sets excluded) in the contrastive pre-training stage for 15 epochs, where we set the batch size to
64, $\tau$ to 0.1, and $\lambda$ to 1.0. The pre-training phase takes around
2.5 hours on a single NVIDIA Tesla V100 GPU with 32GB memory. We fine-tune the
model under 5-shot (5 training examples per intent) and 10-shot settings (10
training examples per intent). We set the batch size to 16 and search over the hyper-parameters $\tau\in\{0.1,0.3,0.5\}$ and $\lambda^{\prime}\in\{0.01,0.03,0.05\}$; the fine-tuning takes five
minutes for each run with 30 epochs. We apply label smoothing to the intent
classification loss, following Zhang et al. (2020a).
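Label smoothing replaces the one-hot classification target with a softened distribution before computing the cross-entropy. A minimal sketch (the smoothing value `eps` is illustrative only, since the paper does not state it here):

```python
import numpy as np

def label_smoothing_ce(logits, labels, eps=0.1):
    """Cross-entropy with label smoothing.

    The true class gets target probability 1 - eps; the remaining eps is
    spread uniformly over the other C - 1 classes.
    """
    n, c = logits.shape
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    smooth = np.full((n, c), eps / (c - 1))
    smooth[np.arange(n), labels] = 1.0 - eps
    return -np.mean((smooth * log_prob).sum(axis=1))
```

With `eps=0` this reduces to the plain intent classification loss of Eq. (4).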
#### Baselines
We compare with six strong models. 1, RoBERTa+Classifier (Zhang et al.,
2020a): it is a RoBERTa-based classification model. 2, USE Yang et al. (2020):
it is a large multilingual model pre-trained on 16 languages. 3, CONVERT
(Casanueva et al., 2020): it is an intent detection model with dual encoders,
and the dual encoder models are pre-trained on 654 million (input, response)
pairs from Reddit. 4, CONVBERT (Mehri et al., 2020a): it fine-tunes BERT on a large open-domain dialogue corpus with 700 million conversations. 5, CONVBERT+Combined (Mehri et al., 2020b): it is an intent detection model based on CONVBERT, with example-driven training based on similarity matching and observers for transformer attention. It also conducts task-adaptive self-supervised learning with mask language modeling (MLM) on the intent detection datasets. Combined represents the best MLM+Example+Observers setting in the
referenced paper. 6, DNNC (Zhang et al., 2020a): it is a discriminative
nearest-neighbor model which finds the best-matched example from the training
set through similarity matching. The model conducts data augmentation during
training and boosts performance by pre-training on three natural language
inference tasks.
| CLINC150 | BANKING77 | HWU64
---|---|---|---
Model | 5-shot | 10-shot | 5-shot | 10-shot | 5-shot | 10-shot
CPFT | 92.34 | 94.18 | 80.86 | 87.20 | 82.03 | 87.13
w/o Contrastive pre-training | -4.15 | -2.63 | -4.11 | -2.37 | -6.01 | -4.17
w/o Supervised contrastive learning | -0.56 | -0.32 | -2.06 | -0.88 | -1.14 | -0.27
w/o Contrastive pre-training + w/o Supervised contrastive learning | -4.35 | -2.69 | -6.82 | -2.93 | -6.47 | -4.23
Table 3: Testing accuracy ($\times 100\%$) of CPFT with variants on three
datasets under 5-shot and 10-shot settings.
## 5 Experimental Results
We show the overall comparisons on three datasets in Table 2. The proposed
CPFT method achieves the best performance across all datasets under both the
5-shot and 10-shot settings. Specifically, CPFT outperforms DNNC by 1.32% and
1.57% on CLINC150 and HWU64 under the 5-shot setting, respectively. It also
improves DNNC by 2.41% on HWU64 under the 10-shot setting. Our variances are
also lower than DNNC’s (ours vs. DNNC): 0.39 vs. 0.57 and 0.18 vs.
0.42 on CLINC150; 0.20 vs. 0.88 and 0.48 vs. 0.21 on BANKING77; 0.51 vs. 1.00
and 0.25 vs. 0.38 on HWU64 under 5-shot and 10-shot settings, respectively.
The improvements indicate that our proposed method has a better ability to
discriminate semantically similar intents than the strong discriminative nearest-neighbor model with data augmentation. Moreover, the DNNC training is
expensive, as when training models on a single NVIDIA Tesla V100 GPU with 32GB
memory, DNNC takes more than $3$ hours for 10-shot learning on CLINC150, and
it needs to retrain the model for every new setting. CPFT only needs $2.5$
hours for one-time pre-training, and the fine-tuning only takes five minutes
for each new setting. Compared with CONVBERT+MLM, which does a self-supervised
pre-training with MLM on the intent detection datasets, CPFT improves the
performance by 1.43%, 3.21%, and 2.61% on CLINC150, BANKING77, and HWU64 under
10-shot setting, respectively. CPFT also outperforms CONVBERT+Combined, which further adds example-driven training and a specific transformer attention design. We attribute the performance improvements to contrastive learning, which helps the model discriminate semantically similar intents.
## 6 Ablation Study and Analysis
#### Is the schema with both stages necessary?
We conduct an ablation study to investigate the effects of self-supervised
contrastive pre-training and supervised contrastive fine-tuning. Table 3 shows
the testing results of CPFT with model variants on three datasets.
Experimental results indicate that both stages are necessary to achieve the
best performance. The self-supervised contrastive pre-training on the first
stage is essential as the performance drops significantly on all datasets. We
hypothesize that contrastive pre-training on the intent datasets without using
labels benefits the discrimination of semantically similar utterances.
Additionally, the performance also drops without supervised contrastive learning during the few-shot fine-tuning stage. Specifically, it drops by 2%
on BANKING77 under the 5-shot setting; the reason is that BANKING77 is a
single domain dataset with many similar intents, where supervised contrastive
learning can explicitly discriminate semantically similar intents with very
limited training examples. We also jointly train the first and second stages
together, and compared with the proposed CPFT schema, we observe minimal
improvements. The joint training is also costly as it requires retraining the
model every time for new settings.
#### Is contrastive pre-training beneficial to the target intent dataset?
Additionally, we study whether contrastive pre-training can benefit the intent
detection when excluding the target datasets. Specifically, we pre-train the
model on the datasets except for the HWU64 dataset on the first stage and do
few-shot learning on HWU64 during the second stage. Compared to the model
without contrastive pre-training on the first stage, the performances are
improved by 1.98% and 1.21% under 5-shot and 10-shot settings, respectively.
The improvements indicate that the contrastive pre-training is helpful to
transfer knowledge to new datasets. However, there are still performance drops compared to contrastive pre-training that includes the HWU64 dataset, which shows that it is beneficial to include the target dataset during self-supervised contrastive learning. We leave the question of whether self-supervised contrastive pre-training only on the target intent dataset is itself beneficial to future study.
#### Is the training sensitive to hyper-parameters?
We also study the effects of hyper-parameters of contrastive learning, i.e.,
the temperature $\tau$ and weight $\lambda^{\prime}$. We set
$\tau\in\{0.05,0.1,0.3,0.5\}$ and $\lambda^{\prime}\in\{0.01,0.03,0.05,0.1\}$. In our primary
experiments, we do not find $\tau$ has a notable influence during the self-
supervised contrastive pre-training on the first stage. Besides, we found that
a batch size larger than 32 works well in the pre-training phase. However,
during the few-shot fine-tuning stage, when setting $\tau$ to a small value
$0.05$, which heavily enforces the penalty to hard negative examples and
$\lambda^{\prime}$ to a large value $0.1$, which increases the weight of
supervised contrastive learning loss, the performance drops significantly. In
addition, the batch size influences performance on this stage. Therefore, few-
shot supervised contrastive loss is sensitive to hyper-parameters when there
are limited training examples.
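To make the role of $\tau$ and $\lambda^{\prime}$ concrete, below is a minimal pure-Python sketch of a supervised contrastive loss over a batch and a weighted joint objective. The function names and the particular combination `ce + lam * scl` are illustrative assumptions, not the exact formulation used in this paper:

```python
import math

def supcon_loss(embeddings, labels, tau=0.1):
    """Supervised contrastive loss over a batch: pull same-intent examples
    together and push different-intent examples apart, with temperature tau
    controlling how strongly hard negatives are penalized."""
    def normalize(v):
        s = math.sqrt(sum(x * x for x in v))
        return [x / s for x in v]

    z = [normalize(v) for v in embeddings]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    n, total, count = len(z), 0.0, 0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors with no positive in the batch are skipped
        denom = sum(math.exp(dot(z[i], z[a]) / tau) for a in range(n) if a != i)
        for p in positives:
            total -= math.log(math.exp(dot(z[i], z[p]) / tau) / denom)
            count += 1
    return total / count

def joint_loss(ce, scl, lam=0.03):
    # One plausible weighting of the two objectives (illustrative assumption)
    return ce + lam * scl
```

With well-separated intent clusters, a smaller $\tau$ sharpens the softmax and yields a lower loss, while in the few-shot regime the same sharpening over-penalizes hard negatives, consistent with the sensitivity observed above.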
## 7 Conclusion
In this paper, we improve the performance of few-shot intent detection via
contrastive pre-training and fine-tuning. Our method first conducts
self-supervised contrastive pre-training on collected intent detection
datasets without using any labels, through which the model implicitly learns
to separate fine-grained intents. It then performs few-shot fine-tuning with a
joint intent classification loss and supervised contrastive learning loss,
where the supervised contrastive loss encourages the model to distinguish
intents explicitly. Experimental results on three challenging datasets show
that our proposed method achieves state-of-the-art performance.
## 8 Acknowledgements
This work is supported in part by NSF under grants III-1763325, III-1909323,
III-2106758, and SaTC-1930941. We thank the anonymous reviewers for their
helpful and thoughtful comments.
# The black hole low mass X-ray binary V404 Cygni is part of a wide
hierarchical triple, and formed without a kick
Kevin B. Burdge1,2∗, Kareem El-Badry3, Erin Kara1,2, Claude Canizares1,2,
Deepto Chakrabarty1,2, Anna Frebel1,2, Sarah C. Millholland1,2, Saul
Rappaport1,2, Rob Simcoe1,2, Andrew Vanderburg1,2
###### Abstract
Evidence suggests that when compact objects such as black holes and neutron
stars form, they may receive a “natal kick,” where the stellar remnant gains
momentum. Observational evidence for neutron star kicks is substantial[1, 2],
yet limited for black hole natal kicks, and some proposed black hole formation
scenarios result in very small kicks [3, 4]. Here, we report the discovery
that the canonical black hole low-mass X-ray binary V404 Cygni is part of a
wide hierarchical triple with a tertiary companion at least 3500 astronomical
units away from the inner binary. Given the orbital configuration, the black
hole likely received a sub-5 kilometer per second kick to have avoided
unbinding the tertiary. This discovery reveals that at least some black holes
form with nearly no natal kick. Furthermore, the tertiary in this system lends
credence to evolutionary models of low-mass X-ray binaries involving a
hierarchical triple structure[5]. Remarkably, the tertiary is evolved,
indicating that the system formed 3-5 billion years ago, and that the black
hole has removed at least half a solar mass of matter from its evolved
secondary companion. During the event in which the black hole formed, it is
likely that at least half of the mass of the black hole progenitor collapsed
into the black hole; it may even have undergone a complete implosion, enabling
the tertiary to remain loosely bound.
1 Department of Physics, Massachusetts Institute of Technology, Cambridge, MA
02139, USA
2 Kavli Institute for Astrophysics and Space Research, Massachusetts Institute
of Technology, Cambridge, MA 02139, USA
3 Division of Physics, Mathematics and Astronomy, California Institute of
Technology, Pasadena, CA, USA
Black holes (BHs) are the stellar remnants of the most massive stars.
Gravitational wave astronomy has raised key questions regarding the formation
of these objects, including the role of natal kicks[6] and dynamical
interactions[5, 7, 8]. There are several proposed formation pathways for these
stellar remnants, including some channels in which there is an implosion with
no associated natal kicks or supernovae [3, 9, 10]. The best constraints on BH
kick physics come from X-ray binaries’ Galactic orbits[11], astrometric
microlensing[12], and orbital dynamics[13]. These constraints have largely
ruled out the need for kicks of $>100\,\rm km\,s^{-1}$ except for in a handful
of systems with exceptional orbits[14, 15].
We report the discovery that the black hole X-ray binary V404 Cygni, the first
low-mass X-ray binary (LMXB) widely accepted to host a black hole (with a BH
mass of $9^{+0.2}_{-0.6}\rm\,M_{\odot}$[16]), is part of a wide hierarchical
triple. Our serendipitous discovery stemmed from examining an optical image of
V404 Cygni on the Aladin Lite tool, illustrated in panel a of Figure 1, and
taking note that there is a nearby star just $1.43$ arcseconds from V404 Cygni
with _Gaia_ proper motions matching those of V404 Cygni, as seen in panel b of
Figure 1. We investigated the proper motions of nearby sources, and in panel a
of Figure 2, show that such a chance agreement in proper motions is unlikely.
In panel c of Figure 2, we quantify the probability of such an alignment
occurring by chance, which we find is about $10^{-7}$ (Methods). While
searching the literature on V404 Cygni, we found several works that commented
on the nearby star, with most simply assuming that it was an interloper[17,
18]. One notable exception was Maitra et al. 2017 [19], which speculated “in
passing whether the blended star is truly unrelated to the V404 system” due to
the similar estimated distance and extinction of the blended star. V404 Cygni
was noted as having a peculiar velocity in previous work[20], and this was
attributed to a kick. As part of our astrometric analysis, we investigated
nearby stars and found that the velocity of V404 Cygni is typical of stars in
the vicinity (Methods).
Given the 1.43 arcseconds separation and estimated distance of $2.39\pm
0.15\,\rm kpc$[20], we find that the wide tertiary companion is at least 3500
astronomical units (AU) away from the inner binary, which corresponds to a
Keplerian velocity of just a few kilometers per second. As illustrated in
panels a and c of Figure 1, this separation is approximately 90 times larger
than the distance of Pluto to the Sun, and 25000 times larger than that of the
inner V404 Cygni binary, which has a separation of just 0.14 AU, less than
half the distance between the Sun and Mercury.
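The quoted Keplerian velocity follows from $v=\sqrt{GM/a}$; the short sketch below checks the order of magnitude, assuming an illustrative total inner-binary mass of roughly $9.7\,M_\odot$ (BH plus donor):

```python
import math

KMS_PER_AU_YR = 4.74  # 1 AU/yr expressed in km/s

def circular_velocity_km_s(m_total_msun, a_au):
    """Circular orbital velocity v = sqrt(G*M/a), using Kepler units in
    which G*Msun = 4*pi^2 AU^3/yr^2."""
    v_au_yr = math.sqrt(4.0 * math.pi ** 2 * m_total_msun / a_au)
    return v_au_yr * KMS_PER_AU_YR

# Illustrative values: ~9.7 Msun inner binary, tertiary at 3500 AU
v_tertiary = circular_velocity_km_s(9.7, 3500.0)  # ~1.6 km/s
v_escape = math.sqrt(2.0) * v_tertiary            # ~2.2 km/s
```

The resulting escape speed of only a couple of kilometers per second is what makes the tertiary such a sensitive probe of any natal kick.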
To further confirm this association, we analyzed archival VLT X-shooter
spectroscopic observations (which targeted V404 Cygni, but contained a
spatially resolved trace of the tertiary) and obtained additional follow-up
GMOS spectroscopy of the source (Methods). We conducted a radial velocity
analysis and found excellent agreement with the reported systemic velocity of
V404 Cygni. As seen in the histogram of radial velocities of nearby stars
shown in panel c of Figure 2, this is unlikely to have occurred by chance,
further solidifying the association of the two components.
We fit the spectroscopic observations with model atmospheres, focusing on the
region around the H$\alpha$ and H$\beta$ absorption lines in the GMOS spectra
(Methods), and the region around the Calcium triplet lines in the X-shooter
observations, shown in panel d of Figure 3. In addition to fitting the
spectroscopic observations with model atmospheres, we also fit the broadband
spectral energy distribution (SED), carefully extracting photometry by
modeling the point-spread function in epochal Pan-STARRS images of the source
due to the blending of the tertiary and V404 Cygni (Methods). Additionally, we
use archival Hubble Space Telescope observations to obtain a measurement in
the near ultraviolet at 330 nm, and archival observations with the Keck
observatory’s NIRC2 instrument to measure the near-infrared flux at $\sim 2$
microns. By jointly fitting the spectra and SED, we obtained constraints on
the temperature, metallicity, and radius, with values reported in panel c of
Figure 3.
One significant result in our modeling of the SED and spectra is that we find
the tertiary in V404 Cygni has started to evolve off of the main sequence and
is about twice its initial radius. By fitting the tertiary with MIST
isochrones shown in panel a of Figure 3, we find that this constrains the
system’s age to about 3-5 gigayears, and the mass of the tertiary to around
1.2 solar masses (our full parameter estimates can be found in panel c of
Figure 3).
V404 Cygni’s secondary, like the donors seen in some other BH LMXBs, exhibits
enhanced lithium abundance, and this has been attributed to formation as a
result of accretion processes onto the black hole, or in the supernova that
formed it[21]. Given our constraint on the age of the system, we can rule out
that this lithium abundance is a result of recent formation; however, we
inspected our X-shooter and GMOS spectra and concluded that at this time there
is insufficient signal-to-noise and resolution to determine whether the
tertiary has enhanced lithium.
To learn about the physical constraints imposed on the formation of the black
hole, we simulated the dynamics of the triple with a range of configurations,
accounting for the BH kick, mass lost during the BH formation, and the initial
orbital periods of the secondary and tertiary, to investigate which BH
formation scenarios could retain the loosely bound tertiary.
As seen in panel a of Figure 4, the only way the inner binary could have
experienced a large kick and retained the tertiary, is if the tertiary started
at a short orbital period, and was kicked into a highly eccentric orbit
reaching $>3500$ AU at apastron. This scenario is unlikely, as one would need
to fine-tune the kick to the inner system to be large, but just barely below
the escape velocity of the system (Methods). We find that scenarios in which
the tertiary started in a wide orbit, and the inner binary received a small
kick of just a few kilometers per second are thus strongly favored.
When considering the inner binary, we simulated two scenarios. In one case, we
allowed the secondary to start in a wide orbit between 100 AU and 300 AU. This
scenario is viable for reproducing the current system, as we find that von
Zeipel-Lidov-Kozai cycles could readily cause the secondary to migrate into
its current 6.4-day orbit (Methods). As illustrated by the red histograms in
panel b of Figure 4, we find that in this scenario, the BH kick was likely
smaller than 3 kilometers per second and that the mass lost in the BH
formation could have been up to about 10 solar masses, or about half the mass
of the inner binary.
Alternatively, we consider a scenario where the secondary starts with an
initial orbital period between 1 and 6 days (its current orbit is 6.4 days).
These simulations are illustrated as the blue histograms in panel b of Figure
4. In this case, the inner binary’s barycenter receives a significant Blaauw
kick[22] as a result of mass loss, even if the BH itself does not receive a
kick. This results from matter being ejected in the BH progenitor’s rest
frame, causing it to be jettisoned out of the binary at the orbital velocity
of the BH in the barycentric frame. At orbital periods of days, the BH orbits
with a velocity of a few 10s of kilometers per second, and thus, as seen in
panel b of Figure 4, this scenario strongly constrains any possible mass loss
during the BH formation, because even if the BH does not receive a kick, the
inner binary does as a consequence of the mass loss, ejecting the tertiary.
One curiosity of this scenario is that in fine-tuned cases, one can achieve
slightly larger BH kicks while keeping the tertiary bound, because the
velocity imparted on the barycenter of the inner binary by mass loss can
absorb the effect of the kick oriented in the opposite direction, resulting in
a relatively low net kick to the inner binary, allowing the retention of the
tertiary. Overall, this scenario still favors small kick velocities of less
than 5 kilometers per second and does not allow for more than about a solar
mass to be ejected. Thus, if the secondary started in a tight orbit of a few
days, the most likely BH formation scenario is one in which there was a near-
complete implosion of the progenitor star, with negligible mass loss.
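The Blaauw-kick constraint above can be made quantitative with a momentum-conservation sketch: ejecta released instantaneously in the progenitor's rest frame carry momentum $\Delta M\,v_1$, so the surviving binary recoils at $\Delta M\,v_1/(M_{\rm tot}-\Delta M)$. The progenitor mass, donor mass, and separation below are illustrative assumptions, not fitted values:

```python
import math

G_MSUN = 4.0 * math.pi ** 2   # G*Msun in AU^3/yr^2 (Kepler units)
KMS_PER_AU_YR = 4.74

def blaauw_kick_km_s(m_progenitor, m_companion, dm, a_au):
    """Barycentric recoil of a circular binary when the progenitor instantly
    sheds dm (masses in Msun, separation in AU): the ejecta carry away the
    progenitor's orbital momentum dm * v1."""
    m_tot = m_progenitor + m_companion
    v_rel = math.sqrt(G_MSUN * m_tot / a_au) * KMS_PER_AU_YR
    v1 = (m_companion / m_tot) * v_rel    # progenitor's speed about barycenter
    return dm * v1 / (m_tot - dm)

# Illustrative: 14 Msun progenitor -> 9 Msun BH (dm = 5 Msun), with a
# ~1.5 Msun donor at 0.1 AU (a roughly 3-day orbit)
kick_heavy_loss = blaauw_kick_km_s(14.0, 1.5, 5.0, 0.1)  # tens of km/s
kick_light_loss = blaauw_kick_km_s(14.0, 1.5, 1.0, 0.1)  # a few km/s
```

Under these assumed numbers, ejecting several solar masses recoils the inner binary at well over the tertiary's few-km/s escape speed, whereas ejecting about a solar mass or less keeps the recoil marginal, mirroring the constraint described in the text.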
In either scenario, a BH formation event in which there is a complete
implosion and no kick always results in the survival of the system. This would
challenge current models for such a scenario, as this black hole has a mass of
just $\sim 9\rm\,M_{\odot}$, which is a smaller BH mass than predicted for
such an event[19]. We cannot rule out mass loss in the system, but we find it
is improbable that more than half the BH progenitor’s mass was suddenly
ejected during its formation.
We consider it unlikely that the system formed dynamically in a dense
environment such as a globular cluster or the Galactic center and was ejected,
as the escape velocities of these environments exceed the orbital velocity of
the tertiary by an order of magnitude (the young age of the tertiary also
disfavors an origin in a globular cluster). We also find it unlikely that the
system captured the tertiary in the field as a result of the low cross-section
of such an interaction occurring.
The presence of a tertiary companion in one of the most well-known LMXBs
supports theoretical work which has suggested that hierarchical triples may be
key to forming BH LMXBs. Forming BH LMXBs via purely binary evolution has been
theoretically challenging because of the large mass ratios involved, resulting
in a common envelope event that proceeds to a merger rather than a successful
ejection of the envelope. Thus, theoretical modeling of the formation of such
systems has explored the possibility that wide tertiary companions helped the
donor migrate into a tight orbit after the formation of the BH[5]. This is
achieved through a gravitational interaction of the tertiary with the inner
orbit, known as a von Zeipel-Lidov-Kozai cycle, in which the inner orbit
cycles between an inclined and eccentric orbit. We note that models such as
those presented in Naoz et al. 2016[5] predict tertiary companions in orbits
at $\sim 10^{4}\rm AU$, which is consistent with what we observe in V404
Cygni. One plausible evolutionary scenario for V404 Cygni is that the inner
binary began its life with a separation of $\sim 10^{2}\rm AU$, and over time
Kozai-Lidov cycles drove this to a shorter orbit as a result of tidal
dissipation and magnetic braking draining orbital angular momentum from the
inner binary during the highly eccentric phase. Finally, the inner binary
hardened to an orbit of $<6.5$ days, too long for Roche-lobe overflow while
the secondary was on the main-sequence, but as the secondary evolved, it
overflowed its Roche-lobe. This is essentially the giant sub-channel described
in Naoz et al. 2016 [5].
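The plausibility of this migration channel can be checked against the standard quadrupole-order von Zeipel-Lidov-Kozai timescale, $t\sim(P_{\rm out}^2/P_{\rm in})\,[(m_{\rm in}+m_3)/m_3]\,(1-e_{\rm out}^2)^{3/2}$, with order-unity prefactors omitted. The input masses and separations below are illustrative assumptions:

```python
import math

def period_yr(a_au, m_msun):
    # Kepler's third law in AU/yr/Msun units: P^2 = a^3 / M
    return math.sqrt(a_au ** 3 / m_msun)

def zlk_timescale_yr(a_in, m_in, a_out, m3, e_out=0.0):
    """Order-of-magnitude quadrupole von Zeipel-Lidov-Kozai timescale,
    t ~ (P_out^2 / P_in) * ((m_in + m3) / m3) * (1 - e_out^2)^1.5."""
    p_in = period_yr(a_in, m_in)
    p_out = period_yr(a_out, m_in + m3)
    return (p_out ** 2 / p_in) * ((m_in + m3) / m3) * (1.0 - e_out ** 2) ** 1.5

# Illustrative: ~10.5 Msun inner binary at 100 AU, 1.2 Msun tertiary at 3500 AU
t_zlk = zlk_timescale_yr(100.0, 10.5, 3500.0, 1.2)  # ~1e8 yr
```

A cycle time of order $10^8$ years is comfortably shorter than the 3-5 Gyr age inferred for the system, so many oscillations could have occurred.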
We note that searches for wide binary companions are, in general, highly
incomplete. To be detected by Gaia, a companion must (a) be brighter than
$G\sim 20.7$, and (b) must be separated from the inner binary by at least
$\sim 1$ arcsec (depending somewhat on flux ratio). For V404 Cygni, this
corresponds to detection limits of $M\gtrsim 0.9\,M_{\odot}$ for main-sequence
stars, and separations $s\gtrsim 2500$ AU. The separation distribution of
solar-type tertiaries peaks at 10-100 AU, with companions at 100-2500 AU
outnumbering those at $>2500$ AU by a factor of two[23, 24]. While the
separation distribution of tertiaries to massive stars is quite uncertain,
this suggests that companions too close to be detected by Gaia may be common
and could have evaded detection thus far. Indeed, several BH LMXBs are thought
to have unresolved companions, which have thus far been interpreted as chance
alignments[25, 26, 27]. Proper motions for these companions have not yet been
measured.
V404 Cygni is nearer and brighter in the optical than most other BH LMXBs. If
the system were $\sim$50% more distant, the companion would be blended with
the inner binary, and Gaia would not have been able to measure its proper
motion. If the companion were $\gtrsim 10\%$ more massive, it would already be
a faint white dwarf below Gaia’s detection limit. If it were $\gtrsim 40\%$
less massive, it would be a main sequence star below the Gaia detection limit;
if it were $\gtrsim 20\%$ less massive it would be too faint for Gaia to have
measured a precise proper motion and establish the association with high
confidence. These considerations all suggest that harder-to-detect tertiaries
may well be hiding around other known BHs. It is quite possible that most BH
LMXBs formed through triple evolution, and deeper searches around other BHs
hold promise to detect them.
Evidence suggests that the spin axis of the BH in V404 Cygni is misaligned
from the orbital plane due to the rapidly changing orientation of the jets in
the system, and this has been attributed to a natal kick[28]. However, the
tertiary’s presence largely rules out a natal kick. An alternative possibility
is that the von Zeipel-Lidov-Kozai cycles induced this misalignment, as the
inner orbit would have evolved through a range of inclinations and may have
hardened at an inclination misaligned with the original orbit[29, 30]. If the
current spin of the black hole in the system was primarily inherited from the
progenitor star, this could naturally lead to a misalignment of this spin axis
with the current orbital plane. However, the evolved 1.2 solar mass tertiary
implies the black hole has removed at least 0.5 solar masses from the 0.7
solar mass secondary—which was originally more massive than the tertiary, as
it evolved first. If the black hole conservatively accreted this much mass, it
could account for the large spin, but one would not expect a misalignment from
the orbital plane.
The tertiary companion of V404 Cygni has provided favorable evidence for the
formation of at least some low-mass X-ray binaries in hierarchical triples.
Moreover, it has provided one of the strongest empirical constraints on natal
kicks in the formation of a black hole by indicating that the BH in V404 Cygni
likely formed with a kick of less than five kilometers per second,
demonstrating that at least some stellar mass black holes form without
substantial natal kicks. We conclude by noting that our simulations strongly
suggest that V404 Cygni’s secondary either started in a wider orbit, and
migrated in as a result of von Zeipel-Lidov-Kozai interactions, or if the
secondary originated in a tight orbit, the $9\,M_{\odot}$ BH formed without
ejecting more than a solar mass of matter, a near-complete implosion.
Figure 1: a): A Pan-STARRS image of V404 Cygni and its companion, displayed
using the Aladin interface, with V404 Cygni and the tertiary labeled in red
and the separation of 1.43 arcseconds indicated in blue. We discovered the
tertiary while viewing the source on Aladin and inspecting the Gaia astrometry
on the interface, finding remarkable agreement in the measured proper motions
of these two sources. b): A plot of the positions and proper motion vectors of
all stars in the field, with V404 Cygni and its tertiary indicated in red. c):
A zoom-in on the inner binary of V404 Cygni, illustrating the $2.5\times
10^{4}$ ratio of the semi-major axes of the inner and outer orbits in the
triple.

Figure 2: a): A plot illustrating the proper motions of all Gaia stars
brighter than 18th magnitude within 5 arcminutes of V404 Cygni, and the
remarkable agreement between V404 Cygni and its tertiary (shown in red). b): A
histogram of Gaia-measured radial velocities of stars in the vicinity of V404
Cygni, with the systemic velocity of V404 Cygni shown as a black vertical
line, and the measured radial velocity of the tertiary shown as the grey
vertical line. c): A plot illustrating the chance alignment probability as a
function of separation for stars brighter than 18th magnitude (shown in red)
and 20th magnitude (shown in blue). This probability accounts for agreement in
both proper motion and radial velocity.

Figure 3: a): MIST stellar isochrones, with the color bar indicating the age
of stars evolving according to these isochrones. The black star with error
bars indicates the position of the tertiary, which overlaps with tracks in the
range of 1.2-1.3 solar masses at ages of around 3-5 billion years. b): The
spectral energy distribution of the tertiary in V404 Cygni (black diamonds),
with the bluest point coming from Hubble Space Telescope ACS observations, the
reddest point from Keck NIRC2 observations, and the remaining photometric
measurements from Pan-STARRS images. The red curve shows our best-fit spectrum
to this data, and the blue crosses indicate the filter-averaged flux values
from this spectrum. c): Our derived parameters as a result of the models
illustrated in this figure. d): The X-shooter spectrum of the Calcium triplet
absorption features and other nearby absorption lines in the tertiary (black),
and our best-fit spectral model to it (red).

Figure 4: a): An illustration of the range of possible kick velocities to the
inner binary that could result in a bound tertiary at a separation greater
than 3500 AU. In general, large kicks are only allowed if the tertiary
originated at a very short orbital period, a scenario that is unlikely to
retain the tertiary (Methods). b): Histograms representing our simulations of
black hole kicks and mass loss in the system. The red histograms represent a
scenario in which the secondary originates at a wide range of orbital periods,
whereas the blue histograms represent a scenario in which the secondary
started at very short orbital periods.
We dedicate this work to the memory of our dear friend, Tom Marsh. K.B.B. is a
Pappalardo Postdoctoral Fellow in Physics at MIT and thanks the Pappalardo
fellowship program for supporting his research. This research was supported by
NSF grant AST-2307232.
The authors declare that they have no competing financial interests.
Correspondence and requests for materials should be addressed to K.B.B.
(email: [email protected]).
## 0.1 Astrometric association
We investigated the robustness of the astrometric association of the tertiary
with the inner binary by analyzing a sample of all _Gaia_ sources within $30$
arcminutes of V404 Cygni. In _Gaia_ DR3, there are 80739 sources in this
region of the sky. This corresponds to a source density of $0.0079$ sources
per square arcsecond. As an initial check, we parsed these 80739 sources for
objects with proper motion values that fall within 1 sigma of V404 Cygni’s
values and found only 21 such sources. When we account for the error bars of
not just V404 Cygni’s measured proper motions, but also of all other sources,
we find that 1256 sources have 1 sigma confidence intervals that overlap the 1
sigma confidence interval of V404 Cygni. However, the mean magnitude of this
sample is _Gaia_ $G=20.5$; it is thus dominated by sources with large
proper-motion uncertainties, meaning that V404 Cygni's well-measured proper
motion is consistent with over 1000 poorly measured proper motions in this
region.
To construct the curve shown in panel c of Figure 2, which we define as the
probability of finding a source within a given separation radius that has
proper motions in RA and Dec consistent within 1 sigma, as well as a radial
velocity consistent to within 1 sigma, we take the list of 80739 Gaia sources
within 30 arcminutes, and further downselect to the 39577 sources brighter
than _Gaia_ G magnitude 20 (to construct the blue curve) and the 10798 sources
brighter than _Gaia_ G magnitude 18 (to construct the red curve). We follow
the procedure outlined in El-Badry et al. 2018[31] to determine whether two
sources have consistent proper motions, accounting for possible orbital
motion influencing the astrometric solution. We find that for stars with
_Gaia_ $G<18$ (V404 Cygni's tertiary is $G=17.9$), there are only 3 sources within
30 arcminutes that have proper motions consistent with V404 Cygni (this number
ranges from 2-4 sources, as the number of sources consistent with the proper
motions of V404 Cygni depends on the assumed separation, as this is used to
compute possible variance in proper motions due to the orbital motion). In any
case, this leads to small chance alignment probabilities. Extended Data Table
1 lists the _Gaia_ astrometric solution of V404 Cygni and its tertiary.
We also consider the radial velocity of the sources in computing the chance
alignments. To do this, we selected all Gaia sources within 30 arcminutes of
V404 Cygni with a Gaia radial velocity error less than 5 kilometers per second
(this is comparable to our measured uncertainty on the RV of the tertiary). We
used these sources to construct the histogram seen in panel b of Figure 2.
From this distribution, we computed that there is approximately an 8.1 percent
probability of a source with a well-measured RV being consistent with that of
V404 Cygni to within 1 sigma, and we incorporated this information in our
chance alignment probabilities illustrated in panel c of Figure 2.
To translate these numbers into formal chance alignment probabilities, we ask,
what is the probability that such a source would fall within a circle of some
radius (represented by the x-axis in panel c of Figure 2) around V404 Cygni.
We use the vertical black line to denote the location of the actual tertiary,
which is 1.43 arcseconds away from V404 Cygni. A 1.43 arcsecond region
represents just 1/1584429 of the 30 arcminute region we queried for sources,
and thus the probability of any individual source falling by chance into such
a region is $6.3\times 10^{-7}$. Thus, if we consider the 3 sources with
_Gaia_ $G>18$ that have proper motions consistent with V404 Cygni, accounting
for possible orbital motion at a 1.43 arcsecond separation, the probability of
one of those sources falling within 1.43 arcseconds of V404 Cygni is
$1.26\times 10^{-6}$, and when we account for the probability of the radial
velocity agreeing to within 1 sigma, this becomes $\sim 10^{-7}$.
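The area-ratio argument above can be checked numerically. The sketch below (function name is ours) computes the probability that at least one of $n$ unrelated sources lands, by chance, within a given separation of the target inside the 30-arcminute search region:

```python
def chance_alignment_prob(sep_arcsec, search_radius_arcmin=30.0, n_candidates=1):
    """Probability that at least one of n_candidates unrelated sources falls,
    by chance, within sep_arcsec of the target inside the search region."""
    # Ratio of the small circle's area to the full search area.
    p_single = (sep_arcsec / (search_radius_arcmin * 60.0)) ** 2
    # Binomial complement: P(at least one of n lands inside).
    return 1.0 - (1.0 - p_single) ** n_candidates

p1 = chance_alignment_prob(1.43)  # single source: ~6.3e-7, i.e. 1/1584429
p3 = chance_alignment_prob(1.43, n_candidates=3)
```

A 1.43-arcsecond circle is $1/1584429$ of the search area, reproducing the single-source probability quoted in the text.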
While analyzing the astrometry of V404 Cygni and its tertiary, we used the
astrometry of nearby stars to investigate the peculiar velocity reported in
Miller-Jones et al. 2009 [20]. Using stars within 1 degree of V404 Cygni, with
similar distances (parallaxes between 1/3 and 1/2 mas), and radial velocity
errors of less than $5\,\rm km\,s^{-1}$, we constructed the Toomre diagram
shown in Extended Data Figure 1, and found that the _Gaia_ astrometry and
radial velocities of nearby stars are largely consistent with that of V404
Cygni.
Table 1: Gaia astrometry of V404 Cygni and the Tertiary
Object | RA (epoch 2016) | Dec (epoch 2016) | $\omega$ | $\mu_{\rm RA}$ | $\mu_{\rm Dec}$
---|---|---|---|---|---
V404 Cygni | $306.0159085525$ | $33.8671768648$ | $0.3024\pm 0.0783$¹ | $-5.1775\pm 0.0785$ | $-7.7776\pm 0.0922$
Tertiary | $306.0159299085$ | $33.8675732739$ | $0.1423\pm 0.1161$ | $-5.1500\pm 0.1564$ | $-7.7647\pm 0.1316$
¹ We note that the Gaia parallax measurement is much less precise than the
radio parallax of $\omega=0.418\pm 0.024$ mas reported in [20]. The proper
motions are consistent with what was measured in the radio.
Extended Data Figure 1: A Toomre diagram illustrating V404 Cygni’s Galactic
orbital velocity relative to stars within a degree of V404 Cygni. We find that
V404 Cygni’s velocity is largely consistent with nearby stars.
## 0.2 Constraining kicks and mass loss
To constrain the possible natal kicks the black hole could have received while
still retaining the tertiary, we treated the system as a triple with component
masses of $M_{BH}=8.5\,M_{\odot}$ as the post supernova black hole mass,
$M_{2}=1.5\,M_{\odot}$ as the initial mass of the inner donor star, and
$M_{3}=1.2\,M_{\odot}$ as the initial mass of the tertiary.
We consider four free parameters in our simulations: the initial orbital
period of the tertiary, the initial orbital period of the secondary, the
amount of mass ejected in the supernova, and the kick that the black hole
received. We marginalized over the kick angle and the orientation of the inner
binary with respect to the outer tertiary’s orbit, and in all cases we assumed
initially circular orbits. To determine whether the post-kick orbits remain
bound, we follow
the procedure outlined in Brandt & Podsiadlowski (1995)[32], which we outline
here. We define a non-dimensional characteristic mass:
$\tilde{m}=\frac{M_{1}+M_{2}}{M^{\prime}_{1}+M^{\prime}_{2}},$ (1)
where $M_{1}$ and $M_{2}$ are the initial masses in the system, and
$M^{\prime}_{1}$ is the post-supernova mass of the component that underwent mass
loss. We also compute a dimensionless ratio of the magnitude of the kick
velocity to orbital velocity:
$\tilde{v}=\frac{v_{\textrm{kick}}}{v_{\textrm{orb}}},$ (2)
where $v_{\textrm{orb}}$ is the relative orbital velocity of the two
components computed using Kepler’s laws (in the case of the inner binary, the
two components are the BH progenitor and the secondary, and in the case of the
outer orbit, the components are the tertiary and the center of mass of the
inner binary).
Following Brandt & Podsiadlowski (1995)[32], this yields a post-supernova
energy in the new center of mass frame given by:
$E^{\prime}=-\frac{GM^{\prime}_{1}M_{2}}{2a}\left[2-\tilde{m}(1+2\tilde{v}\cos\phi\cos\theta+\tilde{v}^{2})\right],$
(3)
where $\phi$ and $\theta$ represent the polar and azimuthal kick angles,
respectively. As observed in Brandt & Podsiadlowski (1995)[32], for the final
system to remain bound, $E^{\prime}$ must be negative, and the final semi-
major axis is given by
$a^{\prime}=-\frac{GM^{\prime}_{1}M_{2}}{2E^{\prime}}.$ (4)
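As an illustrative sketch (not the authors' production code), Eqs. (1)-(4) can be implemented directly. We take $M^{\prime}_{2}=M_{2}$, since only the black hole progenitor loses mass, and assume SI units throughout:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30        # solar mass, kg

def post_kick_orbit(m1, m2, m1_post, a, v_kick, phi, theta):
    """Post-supernova orbital energy E' and semi-major axis a' following
    Brandt & Podsiadlowski (1995), Eqs. (1)-(4) of the text.
    Masses in kg, a in m, v_kick in m/s; phi, theta are kick angles in rad.
    Returns (E_prime, a_prime); a_prime is None if the orbit is unbound."""
    v_orb = math.sqrt(G * (m1 + m2) / a)      # relative speed, circular orbit
    m_tilde = (m1 + m2) / (m1_post + m2)      # Eq. (1), with M2 unchanged
    v_tilde = v_kick / v_orb                  # Eq. (2)
    E = -(G * m1_post * m2 / (2.0 * a)) * (
        2.0 - m_tilde * (1.0 + 2.0 * v_tilde * math.cos(phi) * math.cos(theta)
                         + v_tilde ** 2))     # Eq. (3)
    if E >= 0.0:
        return E, None                        # unbound
    return E, -G * m1_post * m2 / (2.0 * E)   # Eq. (4)
```

For zero kick and zero mass loss the orbit is unchanged, and losing more than half the total system mass with no kick unbinds the orbit, recovering the classic Blaauw result.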
In each simulation, we first apply the BH kick and mass loss to the inner
orbit and compute the resulting change in its barycentric velocity,
$v_{\text{sys}}$. We then use this $v_{\text{sys}}$ as the input to a
calculation to check whether the outer tertiary will remain bound to the inner
binary. We compute $v_{\text{sys}}$ using the expression given in Brandt &
Podsiadlowski (1995)[32]:
$v_{\text{sys}}=\frac{v_{\text{orb}}}{M^{\prime}_{1}+M_{2}}\left[\left(\frac{\mu\Delta M_{1}}{M_{1}}\right)^{2}-2\frac{\mu\Delta M_{1}M^{\prime}_{1}}{M_{1}}\tilde{v}\cos\phi\cos\theta+(M^{\prime}_{1}\tilde{v})^{2}\right]^{1/2},$ (5)
where $\Delta M_{1}$ is the mass lost in the supernova, and $\mu$ is the
reduced mass of the pre-supernova binary, defined as
$\mu=\frac{M_{1}M_{2}}{M_{1}+M_{2}}$.
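Equation (5) can be sketched as follows (an illustrative implementation in SI units, not the authors' code; for zero mass loss it reduces to the momentum-conservation result $v_{\rm sys}=M^{\prime}_{1}v_{\rm kick}/(M^{\prime}_{1}+M_{2})$):

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MSUN = 1.989e30        # solar mass, kg

def systemic_velocity(m1, m2, m1_post, a, v_kick, phi, theta):
    """Post-supernova systemic (center-of-mass) velocity of the binary,
    Eq. (5) (Brandt & Podsiadlowski 1995). SI units; angles in radians."""
    v_orb = math.sqrt(G * (m1 + m2) / a)    # relative orbital speed
    v_tilde = v_kick / v_orb
    dm1 = m1 - m1_post                      # mass lost in the supernova
    mu = m1 * m2 / (m1 + m2)                # reduced mass of pre-SN binary
    term = ((mu * dm1 / m1) ** 2
            - 2.0 * (mu * dm1 * m1_post / m1) * v_tilde
              * math.cos(phi) * math.cos(theta)
            + (m1_post * v_tilde) ** 2)
    return v_orb / (m1_post + m2) * math.sqrt(term)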
In constructing the histograms shown in panel b of Figure 4, we consider two
cases: one in which the inner companion started out in an orbit between 1 and
6 days (we use a uniform distribution of orbital periods between these two
values), and another case in which it started out at an orbital separation
between 100 and 300 AU (we also always enforce that the secondary must start
out at a shorter orbital period than the tertiary). We use 100 AU as a lower
bound to ensure that the von Zeipel-Lidov-Kozai timescales were short enough
that the secondary would have migrated into its current orbit over the few Gyr
age of V404 Cygni. We enforce the 300 AU upper bound because if the orbit were
significantly wider, it would be dynamically unstable. In both scenarios, we
drew from uniform distributions for the BH kick and mass loss during BH
formation, with the BH kick being drawn from a uniform distribution between
$0\,\rm km\,s^{-1}$ and $300\,\rm km\,s^{-1}$, and the mass loss being drawn from a
uniform distribution ranging from $0\,M_{\odot}$ to $20\,M_{\odot}$. We
consider “surviving” solutions to be ones where the secondary remains bound in
an orbit with a semi-major axis less than the final semi-major axis of the
tertiary and the tertiary remains bound, with a final semi-major axis greater
than $3500\,\rm AU$.
To construct panel a of Figure 4, which represents the range of possible kicks
the inner binary could have experienced while retaining the tertiary as a
function of the initial orbital period of the tertiary, we iterate over
initial orbital periods of the tertiary and simply apply a range of systemic
velocities to the inner binary, without assuming any mass loss, and
investigate which solutions retain the tertiary in a bound orbit greater than
$3500\,\rm AU$.
Extended Data Figure 2: a): The probability of retaining a bound tertiary as a
function of the kick velocity in the inner binary. In general, this
probability diminishes at large kicks because these are only allowed when the
tertiary originates in a tight orbit, and must be fine-tuned to be large
enough to send it to a wide separation without unbinding it. b): The
probability of retaining a bound tertiary at greater than 3500 AU as a
function of the initial orbital period of the tertiary. These probabilities
are diminished for tertiaries that originate in tight orbits due to the large
degree of fine-tuning required to send the tertiary to such a wide orbit
without unbinding it.
## 0.3 Spectroscopic Observations
We searched the ESO archive for observations of V404 Cygni in which the
companion may have serendipitously fallen inside the slit. This yielded a
VLT/X-shooter spectrum obtained on 16 July 2015 (Program 295.D-5027;
PI: Rahoui), between the main outburst and mini-outburst [33], when the black
hole was in quiescence. To our knowledge, these data have not been published
elsewhere. As seen in panel A of Extended Data Figure 3, two separate traces
are clearly visible in the raw spectra, separated by about 1.4 arcsec, with
one 5-10 times brighter than the other. Since V404 Cygni has no other
comparably bright neighbors, there is little doubt that the fainter of the two
traces corresponds to the companion. The seeing was about 1.3 arcsec (FWHM),
meaning that the center of the companion’s trace is separated from V404 Cygni
by 2.5 times the “$\sigma$” of the seeing disk. Given that the companion is
5-10 times fainter than V404 Cygni in the VIS data, with its relative flux
contribution increasing toward redder wavelengths, the companion was likely
only partially in the slit. The data were taken with the 1.2 arcsec slit,
yielding a typical resolution of 6500 in the VIS band.
The mid-exposure time is HJD 2457219.692, when the ephemeris of Casares et al.
2019[33] predicts an RV of $-49.9\,\rm km\,s^{-1}$ for the donor. Twelve
separate 148s exposures were obtained sequentially in nod-and-shuffle mode,
for a total exposure time of 1776s in the VIS band. After inspecting the 12
exposures, we rejected one in which the companion was unusually faint,
presumably because it fell farther outside the slit. We reduced and combined
the other 11, for an effective exposure time of 1628s in the VIS band. We
reduced the data using the ESO Reflex pipeline[34] with standard calibrations.
This performs bias-subtraction, flat fielding, wavelength calibration using
afternoon ThAr arcs, and order merging. We set the extraction window for the
two sources manually using the localize-slit-position and localize-slit-height
parameters. To minimize contamination from V404 Cygni, we extracted only the
$\approx 50\%$ of the companion’s trace on the far side of V404 Cygni. The
fact that the companion’s extracted spectrum shows no emission in H$\alpha$,
where V404 Cygni shows a strong double-peaked emission line (see panels b and
c of Extended Data Figure 3), suggests there is little contamination.
Extended Data Figure 3: a): A region in the archival X-shooter spectrum of
V404 Cygni, with the traces associated with the inner binary of V404 Cygni,
and the tertiary both labeled. b): The extracted X-shooter spectra around the
calcium triplet region for V404 Cygni (shown in red), and the tertiary (shown
in blue), illustrating that we were able to extract a spectrum of the tertiary
with minimal contamination by V404 Cygni. c): The spectrum of V404 Cygni
(shown in red) and the tertiary (blue) around the H$\alpha$ absorption
line.
We used the “A” and “B” telluric bands to perform a flexure correction in the
wavelength solution, obtaining a $-12\rm\,km\,s^{-1}$ correction with “A”
band, and a $-11.0\rm\,km\,s^{-1}$ with “B” band, and adopted a correction of
$-11.5\rm\,km\,s^{-1}$, with a systematic uncertainty of $1\rm\,km\,s^{-1}$.
After applying these corrections, we converted the X-shooter spectrum to air
wavelengths, and applied the Barycentric correction of $8.6\rm\,km\,s^{-1}$ in
the file headers.
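Velocity corrections of this size act on the wavelength scale as small non-relativistic Doppler shifts. A minimal sketch (function name is ours; whether the flexure and barycentric corrections simply add is an approximation valid at these small velocities):

```python
C_KM_S = 299792.458  # speed of light, km/s

def shift_wavelengths(wavelengths, v_km_s):
    """Apply a small velocity correction v to a wavelength array using the
    non-relativistic Doppler formula: lambda' = lambda * (1 + v/c)."""
    return [w * (1.0 + v_km_s / C_KM_S) for w in wavelengths]

# e.g. combined flexure (-11.5 km/s) and barycentric (+8.6 km/s) corrections
# applied to a Ca II triplet wavelength:
corrected = shift_wavelengths([8542.0], -11.5 + 8.6)
```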
We also obtained an additional spectroscopic observation using Gemini Multi-
Object Spectrographs (GMOS) on the 8.1-m Gemini North telescope on Mauna Kea,
to determine whether there was any RV variability within the tertiary (program
GN-2023B-DD-105). We used the 0.5-arcsecond slit and the R831_G5302 grating,
which provides a resolution of approximately 4400. Our spectrum covered a
wavelength range from $4576\,\text{\AA}$ to $6925\,\text{\AA}$. While being
slightly lower resolution than the X-shooter data, the spectrum provided a
significantly higher signal-to-noise ratio (SNR) at bluer wavelengths.
Extended Data Figure 4: a): The Gemini GMOS-N spectrum of the H$\beta$
absorption feature in the tertiary (black), and our best fit spectral model to
it (red). b): The Gemini GMOS-N spectrum of the H$\alpha$ absorption feature in
the tertiary (black), and our best fit spectral model to it (red).
We aligned the position angle of the slit perpendicular to the angle between
V404 Cygni and the tertiary, effectively masking V404 Cygni. The observation
was obtained in good seeing (FWHM $\approx 0.7$ arcsec), meaning that V404
Cygni was nearly 4 times the “$\sigma$” of the seeing disk from the nearest
edge of the slit, and contamination is expected to be negligible. We used a
900 second exposure, yielding SNR of $\approx 30$ per pixel at 6600 Å,
$\approx 10$ per pixel at 5600 Å, and $\approx 5$ per pixel at 5000 Å. We
obtained a CuAr arc on-sky immediately after the science exposure.
We reduced the GMOS data using PypeIt, which required manual construction of a
new template for the R831 grating. PypeIt performs bias and flat field
correction, cosmic ray removal, wavelength calibration, sky subtraction,
extraction of 1D spectra, and heliocentric RV corrections. As with the X-shooter
data, we checked the wavelength solution using the telluric bands, and in this
case found that no correction was needed.
## 0.4 Radial Velocity Measurements
We used the X-shooter spectra to measure the radial velocities (RVs) of both
components, as well as perform atmospheric fits in conjunction with modeling
the spectral energy distribution (SED). To measure RVs, we used the ultranest
kernel density estimator to fit an atmospheric model constructed using the
PHOENIX stellar atmosphere library[35]. We selected a region between
$8480\rm\,\text{\AA}$ and $8700\rm\,\text{\AA}$ for the fit, as this
wavelength range contains the calcium triplet lines, which provide a strong RV
signal in the X-shooter spectrum. We allowed the RV to be a free parameter,
and fixed the other parameters in this atmospheric model to an effective
temperature of $T_{\rm eff}=6100\,\rm K$, $\log g=3.95$ (cgs), and $[Fe/H]=0.0$
based on our joint atmospheric+SED fit. Our fit yielded an RV measurement of
$-4.5\pm 2.5\pm 1\rm\,km\,s^{-1}$ for the tertiary, in excellent agreement
with the $-2.0\pm 0.4\rm\,km\,s^{-1}$ systemic velocity of the inner binary
reported in Casares et al. 2019[33].
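The core of such an RV measurement is matching a shifted template to the observed spectrum. As a simplified, hedged stand-in for the nested-sampling fit to a PHOENIX model (here a chi-square grid search over trial velocities, demonstrated on a synthetic Gaussian absorption line):

```python
import numpy as np

C = 299792.458  # speed of light, km/s

def measure_rv(wave, flux, template_wave, template_flux, rv_grid):
    """Estimate RV by chi-square matching a Doppler-shifted template over a
    grid of trial velocities."""
    chi2 = []
    for rv in rv_grid:
        # Shift the template to velocity rv and resample onto the data grid.
        shifted = np.interp(wave, template_wave * (1.0 + rv / C), template_flux)
        chi2.append(np.sum((flux - shifted) ** 2))
    return rv_grid[int(np.argmin(chi2))]

# Synthetic check: a Gaussian absorption line near the Ca II triplet,
# shifted by -4.5 km/s, is recovered by the grid search.
tw = np.linspace(8480.0, 8700.0, 4000)
tf = 1.0 - 0.5 * np.exp(-0.5 * ((tw - 8542.0) / 0.5) ** 2)
obs_flux = np.interp(tw, tw * (1.0 - 4.5 / C), tf)
rv = measure_rv(tw, obs_flux, tw, tf, np.arange(-20.0, 20.0, 0.5))
```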
We also conducted a radial velocity analysis of the GMOS spectra, fitting the
region between $6400\rm\,\text{\AA}$ and $6700\rm\,\text{\AA}$ to capture the
H$\alpha$ absorption line and nearby features, and the region between
$4750\rm\,\text{\AA}$ and $5000\rm\,\text{\AA}$ for the H$\beta$ absorption
feature. We find that this analysis yields a radial velocity of $-4.1\pm
1.9\rm\,km\,s^{-1}$, consistent with that of the X-shooter data, and of V404
Cygni’s systemic velocity. We note that the orbital motion of the tertiary is
on the order of a few kilometers per second, and may also introduce some
difference between the systemic velocities of the two components at this
level.
## 0.5 Spectral energy distribution
To model the spectral energy distribution of the tertiary in V404 Cygni, we
used a combination of observations obtained with the Panoramic Survey
Telescope and Rapid Response System (Pan-STARRS)[36], the Advanced Camera for
Surveys (ACS)[37] aboard the Hubble Space Telescope (HST), and near-infrared
camera (NIRC2) on the W.M. Keck Observatory[38].
Because of the overlapping point spread function of V404 Cygni’s inner binary
and the tertiary in the Pan-STARRS images, we performed our own point spread
function photometry of the source by constructing a routine using the
photutils package to iteratively run over the epochal Pan-STARRS images. We
used a small cutout region, illustrated in panel b) of Extended Data Figure 5
for our analysis, constructing a PSF model using the nearby bright stars near
the top and bottom of this cutout. After constructing the PSF model, we use
photutils to identify bright sources in each image and fit for the position of
the PSF, as seen in panel c) of Extended Data Figure 5, and we found that this
yielded good quality photometry, leaving only small residuals, as seen in
panel d) of Extended Data Figure 5. We note that initially when we performed
this analysis on the stacked Pan-STARRS images, we were unable to eliminate
substantial residuals, and discovered after inspecting the epochal images that
there was a nearby focal plane artifact, in the shape of a heart, which may
have been contaminating some of the stacked images. One example of this
artifact is illustrated in panel a) of Extended Data Figure 5, and we found
that its position regularly changed at different pointings, sometimes
contaminating V404 Cygni or our comparison stars, leading to a variable PSF
model over the stacked images. To quantify the uncertainties in the apparent
magnitude of the V404 Cygni tertiary, we computed the standard deviation of
the estimated apparent magnitude across all epochal Pan-STARRS images for
which we were able to perform photometry (in each filter). In addition to
using the photometric scatter across epochal images to estimate the
measurement error, we compared the inferred apparent magnitude using the star
just to the north of the triple, and the one just to the south of the triple,
to quantify the systematic error in each filter. We note that we used the Pan-
STARRS1 reference catalog of apparent magnitudes for our two comparison stars
as a basis for calibrating the photometry.
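The essential deblending step is fitting overlapping PSFs to the two sources simultaneously. The actual analysis used an empirical photutils PSF model; as a hedged, self-contained stand-in, the sketch below fits two blended circular-Gaussian PSFs plus a background to a synthetic cutout and recovers the flux ratio:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gauss(coords, a1, x1, y1, a2, x2, y2, sigma, bg):
    """Two blended circular-Gaussian PSFs plus a constant background,
    returned raveled for use with scipy.optimize.curve_fit."""
    x, y = coords
    g = lambda a, x0, y0: a * np.exp(-((x - x0) ** 2 + (y - y0) ** 2)
                                     / (2.0 * sigma ** 2))
    return (g(a1, x1, y1) + g(a2, x2, y2) + bg).ravel()

# Synthetic blended pair with separation comparable to the seeing scale.
yy, xx = np.mgrid[0:31, 0:31]
truth = (100.0, 13.0, 15.0, 20.0, 18.0, 15.0, 2.0, 5.0)
img = two_gauss((xx, yy), *truth).reshape(31, 31)

# Fit from a perturbed initial guess; the blend is cleanly separated.
p0 = (80.0, 12.0, 15.0, 15.0, 19.0, 15.0, 2.5, 0.0)
popt, _ = curve_fit(two_gauss, (xx, yy), img.ravel(), p0=p0)
flux_ratio = popt[0] / popt[3]   # amplitude ratio of the two components
```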
We used an archival HST observation (Proposal ID: 9686, PI: Hynes) to extract
flux in the ACS/HRC F330W filter. These observations consist of two 600s
exposures. As seen in panel a of Extended Data Figure 6, the tertiary is
visible and well resolved from V404 Cygni in these images. We used the
ACSTools python package to compute the flux of the tertiary, using a 0.2
arcsecond aperture and applying the appropriate corrections to the flux to
account for the portion of the PSF not encompassed by this aperture.
We performed infrared photometry using a single archival NIRC2 image of V404
Cygni, shown in panel b of Extended Data Figure 6. We used a 0.5 arcsecond
aperture to extract flux for the tertiary, as well as a comparison star just
to the north of the tertiary. We used the 2MASS reference magnitude of this
reference star to calibrate our flux, though we elected to apply a systematic
error of ten percent to our photometry due to the difference in filters in the
2MASS system (2MASS $K_{s}$), and the $K_{p}$ filter used in the NIRC2
observation. We elected not to perform more in-depth photometry on the full
archive of hundreds of images because we found that this photometric
measurement had little overall impact on our results.
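Differential calibration against a reference star of known magnitude reduces to a flux ratio. A minimal sketch (function name is ours; note that the adopted ten percent flux systematic corresponds to roughly 0.1 mag):

```python
import math

def calibrated_mag(flux_target, flux_ref, mag_ref):
    """Apparent magnitude of a target calibrated against a reference star of
    known magnitude measured in the same image (differential photometry)."""
    return mag_ref - 2.5 * math.log10(flux_target / flux_ref)

# A target with one quarter the reference flux is ~1.505 mag fainter.
m = calibrated_mag(250.0, 1000.0, 12.0)
```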
Extended Data Figure 5: a): An example of a heart-shaped artifact that
contaminates many of the Pan-STARRS images of V404 Cygni. We believe this
artifact degraded the quality of the stacked PS images by occasionally
occluding either V404 Cygni or comparison stars, resulting in a non-uniform
PSF across the stacked images. This is why we ultimately decided to use
epochal images as the basis of our photometry. b): An epochal Pan-STARRS
r-band image of V404 Cygni. We used epochal images such as this to extract
fluxes for our spectral energy distribution analysis. c): An image
illustrating the point spread function (PSF) model we applied to the image to
extract photometry for V404 Cygni’s tertiary. d): A cutout illustrating the
residuals after we extract PSF photometry, with the PSF model constructed
using the bottom star in the image (hence the zero residuals around this
star).
## 0.6 Joint SED, Spectroscopic, and Isochrone analysis
We performed an analysis in which we fit the SED with model atmospheres,
synthesizing fluxes using the pyphot tool. In addition to the SED measurement,
the only other measurement we included in this analysis was the estimated
radio parallax of $0.418\pm 0.024$ to constrain the distance (and
appropriately account for the uncertainty in distance in estimating the
radius), and the reddening value of $E(g-r)=1.27$ reported in Green et al.
2019[39]. As with the analysis of the spectra, we used the ultranest kernel
density estimate to perform this fit. We fit for four free parameters, the
distance, effective temperature, metallicity, and radius of the star. This
analysis of the SED alone yielded the following parameters: $T_{\rm eff}=6080\pm
86\,\rm K$, $r=1.882\pm 0.047\,R_{\odot}$, $d=2.42\pm 0.14\,\rm kpc$, and
$[Fe/H]=-0.21\pm 0.28$. We note that these estimates depend strongly on the
assumed reddening, but are broadly consistent with what we estimate from the
spectra.
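The SED fit amounts to scaling a model surface flux by the dilution factor $(R/d)^{2}$ and comparing with band fluxes. As a hedged sketch, the stand-in below uses a blackbody in place of the PHOENIX atmospheres and simple monochromatic band fluxes in place of pyphot synthesis, fitting $T_{\rm eff}$ and radius at fixed distance:

```python
import numpy as np
from scipy.optimize import curve_fit

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23   # SI: Planck, c, Boltzmann
RSUN, PC = 6.957e8, 3.086e16               # solar radius (m), parsec (m)

def band_flux(wav_m, T, R, d):
    """Observed flux density of a blackbody of radius R (m) at distance
    d (m): F_lambda = pi * B_lambda(T) * (R/d)^2."""
    B = 2.0 * H * C ** 2 / wav_m ** 5 / (np.exp(H * C / (wav_m * KB * T)) - 1.0)
    return np.pi * B * (R / d) ** 2

# Synthetic SED in a few optical/NIR bands at a fixed 2.4 kpc distance;
# fit for effective temperature and radius (in solar units).
bands = np.array([0.35, 0.50, 0.62, 0.75, 0.87, 2.1]) * 1e-6
d = 2400.0 * PC
obs = band_flux(bands, 6080.0, 1.88 * RSUN, d)
popt, _ = curve_fit(lambda w, T, r: band_flux(w, T, r * RSUN, d),
                    bands, obs, p0=(6000.0, 1.8), maxfev=10000)
```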
Extended Data Figure 6: a) A Hubble Space Telescope ACS/HRC F330W image of
V404 Cygni, with both objects labelled. We used this archival image to extract
flux in the F330W filter, which was included in our analysis of the spectral
energy distribution of the object. b) A Keck NIRC2 adaptive optics image
obtained in the $K_{p}$ filter, used in our analysis of the spectral energy
distribution of the object.
## References
* [1] Hobbs, G., Lorimer, D. R., Lyne, A. G. & Kramer, M. A statistical study of 233 pulsar proper motions. _MNRAS_ 360, 974–992 (2005). astro-ph/0504584.
* [2] Tauris, T. M. _et al._ Formation of Double Neutron Star Systems. _ApJ_ 846, 170 (2017). 1706.09438.
* [3] Fryer, C. L. & Kalogera, V. Theoretical Black Hole Mass Distributions. _ApJ_ 554, 548–560 (2001). astro-ph/9911312.
* [4] Burrows, A., Wang, T., Vartanyan, D. & Coleman, M. S. B. A Theory for Neutron Star and Black Hole Kicks and Induced Spins. _arXiv e-prints_ arXiv:2311.12109 (2023). 2311.12109.
* [5] Naoz, S., Fragos, T., Geller, A., Stephan, A. P. & Rasio, F. A. Formation of Black Hole Low-mass X-Ray Binaries in Hierarchical Triple Systems. _ApJ_ 822, L24 (2016). 1510.02093.
* [6] Callister, T. A., Farr, W. M. & Renzo, M. State of the Field: Binary Black Hole Natal Kicks and Prospects for Isolated Field Formation after GWTC-2. _ApJ_ 920, 157 (2021). 2011.09570.
* [7] O’Leary, R. M., Meiron, Y. & Kocsis, B. Dynamical Formation Signatures of Black Hole Binaries in the First Detected Mergers by LIGO. _ApJ_ 824, L12 (2016). 1602.02809.
* [8] Rodriguez, C. L., Haster, C.-J., Chatterjee, S., Kalogera, V. & Rasio, F. A. Dynamical Formation of the GW150914 Binary Black Hole. _ApJ_ 824, L8 (2016). 1604.04254.
* [9] Fryer, C. L. Mass Limits For Black Hole Formation. _ApJ_ 522, 413–418 (1999). astro-ph/9902315.
* [10] Mirabel, F. The formation of stellar black holes. _New A Rev._ 78, 1–15 (2017).
* [11] Mandel, I. Estimates of black hole natal kick velocities from observations of low-mass X-ray binaries. _MNRAS_ 456, 578–581 (2016). 1510.03871.
* [12] Andrews, J. J. & Kalogera, V. Constraining Black Hole Natal Kicks with Astrometric Microlensing. _ApJ_ 930, 159 (2022). 2203.15156.
* [13] Shenar, T. _et al._ An X-ray-quiet black hole born with a negligible kick in a massive binary within the Large Magellanic Cloud. _Nature Astronomy_ 6, 1085–1092 (2022). 2207.07675.
* [14] Fragos, T. _et al._ Understanding Compact Object Formation and Natal Kicks. II. The Case of XTE J1118 + 480. _ApJ_ 697, 1057–1070 (2009). 0809.1588.
* [15] Dashwood Brown, C., Gandhi, P. & Zhao, Y. On the natal kick of the black hole X-ray binary H 1705-250. _MNRAS_ 527, L82–L87 (2024).
* [16] Khargharia, J., Froning, C. S. & Robinson, E. L. Near-infrared Spectroscopy of Low-mass X-ray Binaries: Accretion Disk Contamination and Compact Object Mass Determination in V404 Cyg and Cen X-4. _ApJ_ 716, 1105–1117 (2010). 1004.5358.
* [17] Udalski, A. & Kaluzny, J. CCD Photometry of the X-ray Nova V404 Cygni after the 1989 Outburst. _PASP_ 103, 198 (1991).
* [18] Casares, J., Charles, P. A., Jones, D. H. P., Rutten, R. G. M. & Callanan, P. J. Optical studies of V404 Cyg, the X-ray transient GS 2023+338 -I. The 1989outburst and decline. _MNRAS_ 250, 712 (1991).
* [19] Maitra, D. _et al._ Simultaneous Multiwavelength Observations of V404 Cygni during its 2015 June Outburst Decay Strengthen the Case for an Extremely Energetic Jet-base. _ApJ_ 851, 148 (2017). 1712.06668.
* [20] Miller-Jones, J. C. A. _et al._ The First Accurate Parallax Distance to a Black Hole. _ApJ_ 706, L230–L234 (2009). 0910.5253.
* [21] Martin, E. L., Rebolo, R., Casares, J. & Charles, P. A. High lithium abundance in the secondary of the black-hole binary system V404 Cygni. _Nature_ 358, 129–131 (1992).
* [22] Blaauw, A. On the origin of the O- and B-type stars with high velocities (the “run-away” stars), and some related problems. _Bull. Astron. Inst. Netherlands_ 15, 265 (1961).
* [23] Raghavan, D. _et al._ A Survey of Stellar Families: Multiplicity of Solar-type Stars. _ApJS_ 190, 1–42 (2010). 1007.0414.
* [24] Tokovinin, A. From Binaries to Multiples. II. Hierarchical Multiplicity of F and G Dwarfs. _AJ_ 147, 87 (2014). 1401.6827.
* [25] Armas Padilla, M. _et al._ Multiwavelength spectroscopy of the black hole candidate MAXI J1813-095 during its discovery outburst. _MNRAS_ 485, 5235–5243 (2019). 1903.04498.
* [26] Mata Sánchez, D. _et al._ Dynamical confirmation of a stellar mass black hole in the transient X-ray dipping binary MAXI J1305-704. _MNRAS_ 506, 581–594 (2021). 2104.07042.
* [27] Hynes, R. I., Haswell, C. A., Chaty, S., Shrader, C. R. & Cui, W. The evolving accretion disc in the black hole X-ray transient XTE J1859+226. _MNRAS_ 331, 169–179 (2002). astro-ph/0111333.
* [28] Miller-Jones, J. C. A. _et al._ A rapidly changing jet orientation in the stellar-mass black-hole system V404 Cygni. _Nature_ 569, 374–377 (2019). 1906.05400.
* [29] Liu, B. & Lai, D. Spin-Orbit Misalignment of Merging Black Hole Binaries with Tertiary Companions. _ApJ_ 846, L11 (2017). 1706.02309.
* [30] Su, Y., Lai, D. & Liu, B. Spin-orbit misalignments in tertiary-induced binary black-hole mergers: Theoretical analysis. _Phys. Rev. D_ 103, 063040 (2021). 2010.11951.
* [31] El-Badry, K. & Rix, H.-W. Imprints of white dwarf recoil in the separation distribution of Gaia wide binaries. _MNRAS_ 480, 4884–4902 (2018). 1807.06011.
* [32] Brandt, N. & Podsiadlowski, P. The effects of high-velocity supernova kicks on the orbital properties and sky distributions of neutron-star binaries. _MNRAS_ 274, 461–484 (1995).
* [33] Casares, J. _et al._ Accretion and outflow in V404 Cyg. _MNRAS_ 488, 1356–1365 (2019). 1907.00005.
  * [34] Freudling, W. _et al._ Automated data reduction workflows for astronomy. The ESO Reflex environment. _A&A_ 559, A96 (2013). 1311.5411.
  * [35] Husser, T. O. _et al._ A new extensive library of PHOENIX stellar atmospheres and synthetic spectra. _A&A_ 553, A6 (2013). 1303.5632.
* [36] Chambers, K. C. _et al._ The Pan-STARRS1 Surveys. _arXiv e-prints_ arXiv:1612.05560 (2016). 1612.05560.
* [37] Sirianni, M. _et al._ The Photometric Performance and Calibration of the Hubble Space Telescope Advanced Camera for Surveys. _PASP_ 117, 1049–1112 (2005). astro-ph/0507614.
* [38] Matthews, K. & Soifer, B. T. The Near Infrared Camera on the W. M. Keck Telescope. In McLean, I. S. (ed.) _Astronomy with Arrays, The Next Generation_ , vol. 190 of _Astrophysics and Space Science Library_ , 239 (1994).
* [39] Green, G. M., Schlafly, E., Zucker, C., Speagle, J. S. & Finkbeiner, D. A 3D Dust Map Based on Gaia, Pan-STARRS 1, and 2MASS. _ApJ_ 887, 93 (2019). 1905.02734.
# Physics-Consistent Data-driven Waveform Inversion with Adaptive Data
Augmentation
Renán Rojas-Gómez†,⋄, Jihyun Yang†,#, Youzuo Lin†,⋆, James Theiler†, and
Brendt Wohlberg†
†: Los Alamos National Laboratory
⋄: Department of Electrical and Computer Engineering, University of Illinois at Urbana-Champaign
#: Department of Geophysics, Colorado School of Mines
⋆: Correspondence to: Y. Lin<EMAIL_ADDRESS>
###### Abstract
Seismic full-waveform inversion (FWI) is a nonlinear computational imaging
technique that can provide detailed estimates of subsurface geophysical
properties. Solving the FWI problem can be challenging due to its ill-
posedness and high computational cost. In this work, we develop a new hybrid
computational approach to solve FWI that combines physics-based models with
data-driven methodologies. In particular, we develop a data augmentation
strategy that can not only improve the representativity of the training set,
but also incorporate important governing physics into the training process and
therefore improve the inversion accuracy. To validate the performance, we
apply our method to synthetic elastic seismic waveform data generated from a
subsurface geologic model built on a carbon sequestration site at Kimberlina,
California. We compare our physics-consistent data-driven inversion method to
both purely physics-based and purely data-driven approaches and observe that
our method yields higher accuracy and greater generalization ability.
###### Index Terms:
Computational Imaging, Full-waveform Inversion, Convolutional Neural Networks,
Physics-consistent Machine Learning, Data Augmentation
## I Introduction
In solid earth geosciences, characterizing the subsurface geology is crucial
for energy exploration, civil infrastructure, groundwater contamination and
remediation, etc. However, nearly all of the earth’s interior is inaccessible
to direct observation. Inference of unknown subsurface properties therefore
relies on indirect and limited geophysical measurements taken at or near the
surface. Seismic inversion attempts to reconstruct an image of subsurface
structures from measurements of natural or artificially produced seismic waves
that have travelled through the subsurface. A forward model describes how the
observations depend on the subsurface map, while the inverse problem involves
inferring that map from the observations. The forward model of wave
propagation is nonlinear. Travel-time inversion methods [14] are based on a
linear approximation of the forward model, while seismic full-waveform
inversion (FWI) addresses the full non-linear problem, leading to superior
inversion accuracy and resolution [15].
The FWI problem is challenging due to the non-linearity of the forward model
and its under-determined nature. Conventional computational methods for
solving FWI are based on optimization techniques and generic regularization
[15]. For simplicity of description, we call these approaches “physics-based
FWI methods” to distinguish them from data-driven methods, and from our
proposed hybrid approach. The major advantage of these methods is their
robustness to out-of-distribution data due to noise, change of station, and
other external factors, while the main disadvantage is their computational
expense. The primary computational cost involved in FWI is associated with the
solution of the wave equation, and is affected by the details of the finite
difference solver, the velocity model dimension, sources and receivers.
Most existing regularization techniques used for solving FWI employ generic
functions, such as $\ell_{1}$ or $\ell_{2}$ penalties on the gradient of the
solution [9].
Recently, a new class of algorithm has been developed, based on machine
learning applied to large datasets that are produced from many runs of the
forward physics model. In direct end-to-end learning [17, 1, 4], a large
number of velocity maps and corresponding seismic waveforms (usually
constructed through extensive simulation) are used as training data in
learning the mapping from seismic waveform to velocity map. In low-wavenumber
learning [12, 6], this type of learning approach is used to predict an initial
velocity map containing the low-frequency components, which is then used as the initial
guess for traditional physics-based optimization.
Here, we describe an approach for seismic FWI that incorporates the physics
model into the learning procedure. Specifically, this physics-consistent data-
driven full waveform inversion consists of a carefully designed encoder-
decoder-structured neural network and an adaptive data augmentation technique.
This augmentation employs the forward model to produce new training data that
are more representative of the solution we seek. To validate its performance,
we applied our inversion method to detect carbon sequestration leakage using
synthetic seismic data sets generated using a subsurface model for a potential
CO2 storage site at Kimberlina, California [3].
## II Background
### II-A Governing Physics: the Forward Model
Mathematically, the forward model can be expressed in terms of the seismic
elastic-wave partial differential equation [15]:
$\displaystyle\rho({r})\frac{\partial^{2}u({r},t)}{\partial t^{2}}=$
$\displaystyle(\lambda({r})+\mu({r}))\nabla(\nabla\cdot u({r},t))$
$\displaystyle+\mu({r})\nabla^{2}u({r},t)+s({r},\,t),$ (1)
where $\rho({r})$ is the density at spatial location ${r}$, $\lambda({r})$ and
$\mu({r})$ are the Lamé parameters, $s({r},\,t)$ is the source term,
$u({r},t)$ is the displacement wavefield, $t$ represents time, and
$\nabla\cdot$ denotes the divergence operator. When a fluid such as
supercritical CO2 leaks into a subsurface formation, the P-wave and S-wave
velocities change correspondingly.
Instead of inverting for $\rho({r})$, $\lambda({r})$ and $\mu({r})$, it is
customary to invert for a velocity map $m\in\mathbb{R}^{M\times N}$, where $M$
and $N$ are its vertical and lateral dimensions, respectively; here $m$ refers
to either the P-wave or S-wave velocity, each of which can be expressed as a
function of $\rho({r})$, $\lambda({r})$ and $\mu({r})$. Similarly, we denote a
seismic
data observation $d_{\text{obs}}\in\mathbb{R}^{T\times S\times R}$, where $T$
corresponds to the number of samples in the temporal domain, $S$ to the number
of sources and $R$ to the number of receivers used in the data acquisition
process. The seismic data can be expressed in terms of a highly nonlinear
forward mapping $f$:
$d_{\text{obs}}=f({m}).$ (2)
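Equation (2) treats the physics solver as a black-box mapping from a velocity map to recorded traces. As a purely illustrative stand-in for $f$ (the paper uses a 2D elastic staggered-grid solver with PML boundaries; the sketch below is a toy 1D scalar-wave propagator with fixed edges, and all grid parameters are our own choices, not values from the paper):

```python
import numpy as np

def ricker(nt, dt, f0=15.0, t0=0.1):
    """Ricker wavelet source time series (the paper uses 25 Hz; 15 Hz here)."""
    t = np.arange(nt) * dt - t0
    a = (np.pi * f0 * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def forward_1d(velocity, src, dx=10.0, dt=1e-3, src_ix=1, rec_ix=-2):
    """Toy 1D scalar-wave stand-in for the forward mapping d = f(m).

    velocity : 1D array of wave speeds (m/s), playing the role of m.
    src      : source time series injected at cell src_ix.
    Returns the trace recorded at cell rec_ix. Second-order finite
    differences with fixed (zero) edge cells -- unlike the paper's 2D
    elastic staggered-grid solver with PML absorbing boundaries.
    """
    n, nt = len(velocity), len(src)
    c2 = (velocity * dt / dx) ** 2           # squared Courant number per cell
    u_prev, u_curr = np.zeros(n), np.zeros(n)
    trace = np.zeros(nt)
    for it in range(nt):
        lap = np.zeros(n)
        lap[1:-1] = u_curr[2:] - 2.0 * u_curr[1:-1] + u_curr[:-2]
        u_next = 2.0 * u_curr - u_prev + c2 * lap
        u_next[src_ix] += src[it] * dt ** 2  # inject the source
        u_prev, u_curr = u_curr, u_next
        trace[it] = u_curr[rec_ix]
    return trace
```

For a homogeneous 100-cell model at 2000 m/s, `forward_1d(np.full(100, 2000.0), ricker(800, 1e-3))` returns an 800-sample trace; the Courant number $c\,\Delta t/\Delta x = 0.2$ keeps the explicit scheme stable.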
### II-B Physics-Based Full-waveform Inversion
Various explicit regularization techniques have been developed to stabilize
the computation of seismic inversion, including $\ell_{1}$-norm [9] or
$\ell_{2}$-norm [5] methods. Given the forward model in Eq. (2), the
regularized seismic FWI can be posed as
$m=\underset{{m}}{\operatorname{argmin}}\left\\{\left\|d-f({m})\right\|_{2}^{2}+\lambda\,R({m})\right\\},$
(3)
where ${d}$ represents a recorded/field waveform dataset, $f({m})$ is the
corresponding forward modeling result, $\left\|d-f({m})\right\|_{2}^{2}$ is
the data misfit, $||\cdot||_{2}$ stands for the $\ell_{2}$ norm, $\lambda$ is
a regularization parameter and $R({m})$ is the regularization term. Note that
Eq. (3) is a general formulation for regularized FWI. More effective
regularization techniques have been developed for time-lapse monitoring [10].
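The regularized objective in Eq. (3) can be illustrated in miniature by replacing the wave-equation forward model $f$ with a linear operator $G$ and using an $\ell_{2}$ (Tikhonov) regularizer $R(m)=\|m\|_{2}^{2}$; the problem sizes, step size, and $\lambda$ below are our own illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((30, 50))   # under-determined: 30 data, 50 unknowns
m_true = np.zeros(50)
m_true[10:15] = 1.0                 # a compact "anomaly" to recover
d = G @ m_true                      # noise-free "observed" data

lam = 1e-2                          # regularization weight (lambda in Eq. (3))
m = np.zeros(50)
step = 1e-3
for _ in range(5000):               # gradient descent on the regularized misfit
    grad = 2.0 * G.T @ (G @ m - d) + 2.0 * lam * m
    m -= step * grad

misfit = np.linalg.norm(d - G @ m)  # data-misfit term of Eq. (3)
```

The regularizer makes the under-determined problem well-posed: without it, any vector in the null space of `G` could be added to `m` without changing the misfit.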
## III Physics-Consistent Data-Driven Full-waveform Inversion
### III-A Data-Driven Inversion and Network Structure
A data-driven FWI structure based on an encoder-decoder architecture [16],
denoted $\mathcal{G}$ and characterized by trainable parameters
$\boldsymbol{\theta}$, is proposed to approximate the inverse mapping $f^{-1}$
and obtain accurate velocity map predictions
$\hat{m}(\boldsymbol{\theta})\triangleq\mathcal{G}(\boldsymbol{\theta},d_{\text{obs}})$
under a supervised learning scheme. Optimal parameters
$\boldsymbol{\theta}^{*}$ are obtained by fitting the architecture to a
representative training set of $L$ samples
$\\{d_{\text{obs},\ell},m_{\ell}^{*}\\}$, $\ell\in\\{0,\dots,L-1\\}$.
We choose the mean-absolute error (MAE) as our optimality criterion:
$\displaystyle\boldsymbol{\theta}^{*}=\underset{\boldsymbol{\theta}}{\text{argmin}}\
\frac{1}{L}\sum_{\ell=0}^{L-1}\|m_{\ell}^{*}-\hat{m}_{\ell}(\boldsymbol{\theta})\|_{1}.$
(4)
For a more detailed discussion of loss function selection, please refer to our
earlier work [16].
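As a minimal sketch (in NumPy rather than the PyTorch used by the authors, and with array shapes assumed by us), the MAE objective of Eq. (4) can be written as:

```python
import numpy as np

def mae_loss(true_maps, pred_maps):
    """MAE objective of Eq. (4): the l1 norm of m* - m-hat, averaged over
    the L training samples.  Inputs have shape (L, M, N)."""
    L = true_maps.shape[0]
    per_sample_l1 = np.abs(true_maps - pred_maps).reshape(L, -1).sum(axis=1)
    return per_sample_l1.mean()
```

Here $\|\cdot\|_{1}$ sums the absolute values of all $M\times N$ entries of each map before averaging over samples.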
Our data-driven inversion network structure consists of an encoder network and
a decoder network [16]. Full details of our model are provided in the
Supplementary Material.
### III-B Data Description
We apply our method to detect CO2 leakage in the subsurface. To the best of
our knowledge, there are no real seismic data related to our problem of
interest. We use the simulated Kimberlina dataset from Lawrence Livermore
National Laboratory (which we refer to here as CO2leak). The aim of the
Kimberlina dataset is to understand and assess the effectiveness of various
geophysical monitoring techniques in detecting shallow CO2 leakage along
wellbores [7]. A portion of the Kimberlina simulations can be downloaded from
the DOE-EDX platform [11]. The Kimberlina dataset is generated based on a
hypothetical
numerical model built on the geologic structure of a commercial-scale geologic
carbon sequestration (GCS) reservoir at the Kimberlina site in the southern
San Joaquin Basin, 30 km northwest of Bakersfield, CA, USA. The P-wave and
S-wave velocity maps used in this work belong to the geophysical model, which
is created based on the realistic geologic-layer properties from the GCS site
[3].
The CO2leak dataset contains 991 CO2 leakage scenarios, each simulated over a
duration of 200 years, with 20 leakage maps provided (i.e., one every ten
years) for each scenario. We obtain synthetic seismograms from elastic forward
modeling on the CO2leak velocity maps. First, one-second traces are generated
with a time interval of $0.5$ ms using 7 sources and 114 receivers. We then
down-sample each trace by a factor of $2$, resulting in a temporal dimension
of $1000$ time steps. The sources and receivers are evenly distributed along
the top of the model, at depths of 5 m and 20 m, respectively. The source
interval is 125 m, and the receiver interval is 15 m. We use a Ricker wavelet
with a central frequency of $25$ Hz as the source to generate simulated
seismic waves, owing to its empirical success in processing seismic field data
[5]. The synthetic data are the staggered-grid solution of the elastic wave
equation using a finite-difference scheme with a perfectly matched layer (PML)
absorbing boundary condition.
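The down-sampling step described above (one second at a $0.5$ ms interval, decimated by 2 to $1000$ time steps) amounts to the following; the paper does not state whether an anti-alias filter is applied before decimation, so plain stride slicing is assumed here, and the trace itself is a dummy signal:

```python
import numpy as np

dt = 0.5e-3                              # 0.5 ms sampling interval
t = np.arange(int(1.0 / dt)) * dt        # one second: 2000 samples
trace = np.sin(2.0 * np.pi * 25.0 * t)   # dummy 25 Hz trace (illustrative only)
coarse = trace[::2]                      # down-sample by a factor of 2
assert coarse.shape == (1000,)           # 1000 time steps, as in the text
```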
### III-C Data Augmentation: Incorporation of Physics Knowledge
The CO2leak dataset includes $19,600$ velocity maps of $141\times 341$ grid
points describing CO2 and brine leakage plumes evolving with time. It is of
practical interest to detect plumes of leaking CO2 while they are still small.
This is particularly challenging when the available training data is dominated
by large plumes. Thus, CO2leak presents the opportunity to evaluate the
generalization of data-driven inversion with respect to different plume sizes.
Along with the data pairs included in the dataset, the ground-truth CO2 and
brine mass information for each sample is provided. Based on this, the full
dataset is split into four parts, according to their CO2 leak mass plus brine
leak mass: tiny plumes (from $3.53\times 10^{2}$ to $9.10\times 10^{6}$Kg),
small plumes (from $9.10\times 10^{6}$ to $2.67\times 10^{7}$Kg), medium
plumes (from $2.67\times 10^{7}$ to $8.05\times 10^{7}$Kg), and large plumes
(from $8.05\times 10^{7}$ to $1.62\times 10^{9}$ Kg). These cover $20\%$,
$20\%$, $20\%$, and $40\%$ of the data samples, respectively. Figure 5 in the
Supplementary Material shows representative labels from each subset.
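A minimal sketch of this four-way split by total leaked mass, using the thresholds quoted above (the function and variable names are ours, not the paper's):

```python
import numpy as np

# Mass thresholds (kg) quoted in the text: the tiny/small, small/medium,
# and medium/large boundaries of total leaked CO2-plus-brine mass.
edges = np.array([9.10e6, 2.67e7, 8.05e7])
labels = ["tiny", "small", "medium", "large"]

def plume_class(total_mass_kg):
    """Return the subset label for one sample's total leaked mass."""
    return labels[int(np.digitize(total_mass_kg, edges))]
```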
While conventional data augmentation techniques (such as rotation, flipping,
and scaling) have proved effective for image processing applications, it is
not clear that they have a useful role to play in our application. Our
adaptive data augmentation scheme provides additional training data that are
not only physically meaningful but also more closely related to the target
unlabeled data that we are trying to invert. We summarize our augmentation
method as the
following four steps (illustrated in Fig. 1; a detailed description is
provided in the Supplementary Material):
1. i.
Estimate approximate solver
$\mathcal{G}(\boldsymbol{\hat{\theta}},d_{\text{obs}})$;
2. ii.
Generate approximate velocity maps from unlabeled data
$\hat{m}_{r}=\mathcal{G}(\boldsymbol{\hat{\theta}},d_{\text{obs},r})$;
3. iii.
Create seismic data using forward model
$\hat{d}_{\text{obs},r}=f(\hat{m}_{r})$;
4. iv.
Add new pairs to the original training set.
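The four steps above can be sketched as a plain Python loop; `fit` and `forward` are placeholders for the network training routine and the physics forward model, and all names are ours rather than the paper's:

```python
def adaptive_augmentation(train_pairs, unlabeled_seismic, fit, forward,
                          n_rounds=1):
    """Sketch of the four augmentation steps (all names are ours).

    train_pairs      : list of labeled pairs (d_obs, m*).
    unlabeled_seismic: list of seismic records d_obs,r without labels.
    fit(pairs)       : trains the network G and returns a predict function.
    forward(m)       : the physics forward model f of Eq. (2).
    """
    pairs = list(train_pairs)
    for _ in range(n_rounds):                # optional iterative re-training
        predict = fit(pairs)                 # (i)   approximate solver G
        for d_r in unlabeled_seismic:
            m_hat = predict(d_r)             # (ii)  approximate velocity map
            d_hat = forward(m_hat)           # (iii) physically consistent data
            pairs.append((d_hat, m_hat))     # (iv)  extend the training set
    return pairs
```

With `n_rounds` greater than one, the solver is re-trained on the extended set, mirroring the iterative variant the paper describes.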
The augmented dataset plays a key role in model accuracy because it not only
carries useful physics information, but also provides examples of velocity
maps that are consistent with the geologic features of interest.
Furthermore, the full augmentation process can be applied in an iterative
fashion by re-training the approximate solver
$\mathcal{G}(\boldsymbol{\hat{\theta}},d_{\text{obs}})$ based on the extended
training set in order to generate new approximate velocity maps $\hat{m}_{r}$.
This approach allows further refinement of the mapping between velocity and
seismic subdomains, as empirically shown in Section IV-C.
Figure 1: Adaptive Data Augmentation: Approximate solver
$\mathcal{G}(\hat{\theta},d_{\text{obs}})$ is fully-trained over labeled set
$\\{m^{*}_{\ell},d_{\text{obs},\ell}\\}$, and applied to unlabeled seismic
data $d_{\text{obs},r}$ to generate new velocity maps $\hat{m}_{r}$.
Physically-coherent seismic data $\hat{d}_{\text{obs},r}$ is then generated
using the forward model $f$, producing a new labeled set
$\\{\hat{m}_{r},\hat{d}_{\text{obs},r}\\}$ which is added to the original
training set.
## IV Numerical Experiments
### IV-A Experimental Setup
For all evaluations, the training process is performed using a fixed stopping
criterion of $250$ epochs, random weight initialization, an initial learning
rate of $10^{-3}$ and its subsequent adaptive optimization via ADAM [8]. We
found that $250$ epochs is appropriate to guarantee diversity in the
reconstructed velocity maps while avoiding overfitting. The training is
performed using fixed batch sizes of $50$, where both features and labels are
normalized prior to their use in the training and inference processes. We
created $392$ batches in total. Consequently, reported loss values are
computed based on normalized training and testing pairs. All training and
testing routines are implemented in PyTorch and executed on four NVIDIA
GeForce GTX 1080 GPUs. We note that there are various inversion strategies for
elastic FWI [13]; we invert for P-wave velocities in all tests, given their
higher relative sensitivity to the response of CO2 leakage. The S-wave
velocities can be obtained through either sequential inversion or empirical
relations [13].
### IV-B Generalization Performance without Data Augmentation
We assess network generalization performance by first training on one subset
of the training data (e.g., large plumes) and testing on a different subset
(e.g., tiny plumes). For each case, we quantify how informative our data
augmentation approach is by measuring the mean absolute error (MAE), as
expressed in Eq. (4), attained by the network across epochs. We report the
reconstruction accuracy across epochs, $\varepsilon(\boldsymbol{\theta}_{i})$,
as the MAE normalized with respect to the velocity map dimensions, to clearly
depict our method’s effect on the predictions:
$\displaystyle\varepsilon(\boldsymbol{\theta}_{i})=$
$\displaystyle\frac{1}{\hat{L}MN}\sum_{\ell=0}^{\hat{L}-1}\|m_{\ell}^{*}-\hat{m}_{\ell}(\boldsymbol{\theta}_{i})\|_{1},$
(5)
where $\boldsymbol{\theta}_{i}$ corresponds to the set of network parameters
at the $i^{\text{th}}$ epoch, and $\hat{L}$ corresponds to the size of the
validation set.
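A minimal NumPy sketch of the normalized reconstruction metric in Eq. (5) (the array shapes are our assumption):

```python
import numpy as np

def epsilon(true_maps, pred_maps):
    """Reconstruction accuracy of Eq. (5): the MAE of Eq. (4) further
    normalized by the map dimensions M*N.  Inputs have shape (L, M, N)."""
    L_hat, M, N = true_maps.shape
    return np.abs(true_maps - pred_maps).sum() / (L_hat * M * N)
```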
Figure 2 shows the testing loss curve at each epoch. Although decreasing, the
curve shows that the network is unable to accurately reconstruct the velocity
maps under this scenario: given the reduced plume size in the testing set, an
$\ell_{1}$ loss value of $-8.7$ indicates either a large intensity deviation
between ground truth and prediction, or a very low intersection over union
between them. This is reflected in Figs. 2(b) and 2(c), which show a tiny
ground-truth velocity map and its prediction by the network trained over large
plumes. In general, the network tends to generate plumes of large size, which
is consistent with the plumes on which it was trained.
(a) Testing loss curve.
(b) Tiny ground-truth label.
(c) Tiny prediction.
Figure 2: Transferability: training on the large subset, testing on the tiny
subset (MAE 0.205, SSIM 0.899).
Figure 3: Reconstruction results for our data augmentation approach applied to
labeled data: testing MAE loss value at each training epoch.
Figure 4: Reconstruction results for our data augmentation approach applied to
unlabeled data: testing MAE loss value at each training epoch.
Figure 5: (a) Ground truth. Inversions and errors obtained using (b) physics-
based FWI (0.069, 0.972), a data-driven model trained on (c) the large and
medium subsets (0.0620, 0.990), (d) the large and augmented medium subsets
(augmented once) (0.0134, 0.993), and (e) the large and augmented medium
subsets (augmented twice) (0.0122, 0.994). Errors are reported as (MAE, SSIM).
### IV-C Generalization Performance with Data Augmentation
We evaluate the reconstruction accuracy of the augmentation method described
in Section III-C, and compare it to traditional data-driven methods with no
augmentation strategy.
For the first scenario, considering the data partitioning described in Section
III-C, we compute the reconstruction accuracy along epochs for the following
setups:
1. i.
Train on the large subset.
2. ii.
Train on the large and medium subsets.
3. iii.
Train on the large, medium and small subsets.
4. iv.
Train on the large and augmented medium subset.
In every case, performance is evaluated on the tiny subset.
For setups (i), (ii) and (iii), the main differences are the number of
training samples and the size of the plumes included in each training set.
Based on the network transferability results shown in Section IV-B, the
network trained over large, medium and small plumes is expected to attain the
best reconstruction accuracy on tiny plumes. Setup (iv), corresponding to the
network trained over the large and augmented medium subsets, utilizes the same
underlying information as setup (ii); the difference between them is that
setup (iv) increases the number of medium samples using our augmentation
strategy.
Figure 3 shows the reconstruction accuracy along epochs for the four described
setups. Regarding the training scenarios with no augmentation strategies, the
Mean Absolute Error along epochs behaves as expected: after 250 epochs,
training over large, medium and small plumes attains the best reconstruction
value of $-13.5$ for the error metric in Eq. 5, followed by the network
trained over large and medium plumes at $-9.3$, and the network trained over
large plumes at $-8.7$. On the other hand, our augmented approach from setup
(iv) reaches an accuracy of approximately $-12$, lower than the error obtained
by setup (ii). This implies that the augmented medium samples generated using
the forward model provide physically coherent information to the training
process, allowing a better velocity map reconstruction.
It is important to remark that the augmentation approach from setup (iv)
generates new samples by augmenting labeled data from the medium subset.
Specifically, we train a surrogate network over the labeled large subset, use
it to generate new data pairs over the labeled medium dataset, and then fully
train the network using the large subset along with both original and
augmented medium datasets. Although this experimental setup and its comparison
against non-augmented strategies illustrates the advantages of our proposed
augmentation approach, a more interesting scenario is the generation of
physically-consistent pairs from unlabeled data.
With this in mind, a second scenario is considered. Let the medium unlabeled
subset correspond exclusively to the medium seismic data (the corresponding
velocity maps are not used). Then, consider the following two experiments:
1. i.
Train on the large subset.
2. ii.
Train on the large and augmented medium unlabeled subset.
In both experiments, testing is performed on the small subset.
The augmentation process follows the procedure described in Section III-C. In
contrast with our first experiment, however, in which both medium and
augmented medium datasets are added to the large dataset, only the augmented
medium dataset is added to the initial large dataset. Figure 4 shows
reconstruction results along $250$ epochs. The network trained over large
plumes attains a reconstruction accuracy of approximately $-10.5$. In
contrast, the network trained over large and augmented medium unlabeled plumes
obtains a better reconstruction accuracy of approximately $-11.7$. These
results strengthen our observation regarding the information encapsulated in
the samples generated by the augmentation process: by including the forward
modeling operation in the process, physically-consistent data pairs are
generated, allowing a better domain adaptation. The use of unlabeled data
shows how our proposed method is not limited to labeled data, which can be
difficult to obtain in real applications.
Finally, we provide a visualization of the inverted velocity maps obtained
using both physics-based FWI and data-driven approaches in Fig. 5. Our network
trained over the large and medium subsets obtains a reasonably accurate
estimate of the tiny plume. The estimate obtained with our augmentation
approach further refines the plume shape and location. This example shows how
our augmentation approach, applied iteratively, improves the network output,
which reflects the potential of including the physics-based forward model in
learning. An additional test and detailed analysis of the robustness of our
inversion model with respect to varying levels of noise in the seismic data is
provided in the Supplementary Material.
## V Conclusion and Future Work
We develop a physics-consistent data-driven seismic full-waveform inversion
method. We design a novel data augmentation strategy that incorporates
critical physics information and improves the representativeness of the
training set. We validate its performance on detecting small CO2 leakage.
Compared with
purely physics-based and purely data-driven inversion methods, our physics-
consistent data-driven inversion yields higher accuracy and better
generalization. With respect to computational cost, the most expensive
component of our approach is in data generation (including modeling) and
training. The cost of the inference is very low. Considering a single
inversion, the overall modeling cost of our method can be more expensive (up
to 10 times) than that of physics-based FWI. However, for those applications
that require inverting multiple seismic surveys at the same location such as
time-lapse monitoring, our approach can be very cost-effective.
Nonrepeatability is an important issue for time-lapse seismic imaging [2]. Our
technique is general, and can be combined with existing time-lapse imaging
methods. We will study the robustness of our technique in nonrepeatable
scenarios. Different means have been proposed to incorporate physics in
solving inverse problems, cyclic consistency being one of those [18]. We will
compare the performance between our techniques and consistent generative
methods. The high cost in training may hinder the wide application of our
method to a broader class of problems. We believe, however, that there may be
room for further improving the efficiency of the training. Another future
direction would be to explore data augmentation in the low-data regime.
## Acknowledgements
This work was supported by the Center for Space and Earth Science at Los
Alamos National Laboratory (LANL) and by the Laboratory Directed Research and
Development program of LANL under project number 20200061DR. Renán A. Rojas-
Gómez would also like to thank Javier E. Santos and Manish Bhattarai for the
insightful discussions. We also thank two anonymous reviewers and the
Associate Editor, Dr. Luis Gómez, for their constructive suggestions and
comments that improved the quality of this work.
## References
* Araya-Polo et al., [2018] Araya-Polo, M., J. Jennings, A. Adler, and T. Dahlke, 2018, Deep-learning tomography: The Leading Edge, 37, 58–66.
* Asnaashari et al., [2015] Asnaashari, A., R. Brossier, S. Garambois, F. Audebert, P. Thore, and J. Virieux, 2015, Time-lapse seismic imaging using regularized full-waveform inversion with a prior model: which strategy?: Geophysical Prospecting, 63, 78–98.
* Buscheck et al., [2019] Buscheck, T., K. Mansoor, X. Yang, H. Wainwright, and S. Carroll, 2019, Downhole pressure and chemical monitoring for CO2 and brine leak detection in aquifers above a CO2 storage reservoir: Int. J. Greenhouse Gas Control, 91.
* Farris et al., [2018] Farris, S., M. Araya-Polo, J. Jennings, B. Clapp, and B. Biondi, 2018, Tomography: a deep learning vs full-waveform inversion comparison, in First EAGE Workshop on High Performance Computing for Upstream in Latin America, European Association of Geoscientists & Engineers.
* Fichtner, [2010] Fichtner, A., 2010, Full seismic waveform modelling and inversion: Springer Science & Business Media.
* Hu et al., [2019] Hu, W., Y. Jin, X. Wu, and J. Chen, 2019, Progressive transfer learning for low frequency data prediction in full waveform inversion: arXiv preprint arXiv:1912.09944.
* Jordan and Wagoner, [2017] Jordan, P., and J. Wagoner, 2017, Characterizing construction of existing wells to a CO2 storage target: The Kimberlina site, California: Technical report, U.S. Department of Energy - Office of Fossil Energy.
* Kingma and Ba, [2014] Kingma, D. P., and J. Ba, 2014, Adam: A method for stochastic optimization: arXiv preprint arXiv:1412.6980.
* Lin and Huang, [2015] Lin, Y., and L. Huang, 2015, Acoustic- and elastic-waveform inversion using a modified total-variation regularization scheme: Geophysical Journal International, 200, 489–502.
* Maharramov et al., [2016] Maharramov, M., B. Biondi, and M. Meadows, 2016, Time-lapse inverse theory with applications: Geophysics, 81, R485–R501.
* NETL, [2018] NETL, 2018, LLNL Kimberlina 1.2 Simulations. (https://edx.netl.doe.gov/dataset/llnl-kimberlina-1-2-nuft-simulations-june-2018-v2).
* Ovcharenko et al., [2019] Ovcharenko, O., V. Kazei, M. Kalita, D. Peter, and T. Alkhalifah, 2019, Deep learning for low-frequency extrapolation from multioffset seismic data: Geophysics, 84, no. 6, 58–66.
* Raknes and Arntsen, [2014] Raknes, E. B., and B. Arntsen, 2014, Strategies for elastic full waveform inversion, in SEG Technical Program Expanded Abstracts: Society of Exploration Geophysicists, 1222–1226.
* Tarantola, [2005] Tarantola, A., 2005, Inverse problem theory and methods for model parameter estimation: SIAM.
* Virieux et al., [2014] Virieux, J., A. Asnaashari, R. Brossier, L. Métivier, A. Ribodetti, and W. Zhou, 2014, Chapter 6: An introduction to full waveform inversion, in Encyclopedia of Exploration Geophysics: SEG.
* Wu and Lin, [2019] Wu, Y., and Y. Lin, 2019, InversionNet: An efficient and accurate data-driven full waveform inversion: IEEE Transactions on Computational Imaging, 6, 419–433.
* Yang and Ma, [2019] Yang, F., and J. Ma, 2019, Deep-learning inversion: A next-generation seismic velocity model building method: Geophysics, 84, no. 4, R583–R599.
* Zhu et al., [2017] Zhu, J., T. Park, P. Isola, and A. Efros, 2017, Unpaired image-to-image translation using cycle-consistent adversarial networks: Presented at the IEEE International Conference on Computer Vision.
# Broken living layers: dislocations in active smectics
Frank Jülicher, Max-Planck-Institut für Physik komplexer Systeme, Nöthnitzer
Str. 38, 01187 Dresden, Germany
Jacques Prost, Mechanobiology Institute and Department of Biological Sciences,
National University of Singapore, Singapore 117411, and Laboratoire Physico
Chimie Curie, Institut Curie, PSL Research University, CNRS UMR168, 75005
Paris, France
John Toner, Department of Physics and Institute for Fundamental Science,
University of Oregon, Eugene, OR 97403
###### Abstract
We show that dislocations in active 2d smectics with underlying rotational
symmetry are always unbound in the presence of noise, meaning the active
smectic phase does not exist for non-zero noise in $d=2$. The active smectic
phase can, like equilibrium smectics in 2d, be stabilized by applying
rotational symmetry breaking fields; however, even in the presence of such
fields, active smectics are still much less stable against noise than
equilibrium ones, when the symmetry breaking field(s) are weak.
## I Introduction
Much of the richness of condensed matter physics is due to the great variety
of possible different phases of matter. Each distinct phase breaks different
symmetries of the underlying physical laws of the universe [chaikin].
One of the most interesting equilibrium phases of matter is the smectic A
phase [degennes]. This liquid crystalline phase is, as the term “liquid crystal”
suggests, a hybrid of a liquid and a crystalline solid. Specifically, a
$d$-dimensional smectic A can be thought of as a one dimensional stack of
$d-1$-dimensional isotropic fluids. In three dimensions, this is a stack of
two dimensional fluid layers.
These fascinating phases exhibit a number of unique properties, including
quasi-long-ranged order - i.e., algebraically decaying translational
correlations - in three dimensions [caille], and a breakdown of linearized
hydrodynamics [MRT].
A priori, all of the phases found in equilibrium could also be exhibited by
active matter [Vicsek; TT1; TT2; TT3; TT4; TT5; Active4; Active1; Active2;
Active3; JF1; JF2] systems, in which the building blocks are kept
out of equilibrium by constant transduction of energy. Examples of such
systems are living organisms, molecular motors, and robots, to name just a
few. In this paper, we consider active smectics A.
Whether or not a particular active phase is stable, and robust against noise,
is, obviously, the first question one must ask about any potential phase of
active matter. Some phases of active matter - e.g., polar ordered active
matter, of which a uniformly moving flock is the most obvious example, are
actually more robust than their equilibrium counterparts. Polar ordered
flocks, for example, can exhibit long-ranged order in two dimensions, while
their nearest equilibrium analog, ferromagnets, can exhibit only
quasi-long-ranged order in two dimensions in the presence of noise - i.e., at finite
temperature.
Other active phases, on the other hand, are less stable than their equilibrium
counterparts. “Wet” active nematics - that is, active nematics with momentum
conservation - are actually unstable [simha].
So in this paper, we ask the question of whether or not active smectics are
robust against noise. These systems have already been shown [apolarsm; polarsm]
to be stable at zero noise, and to be stable against noise if topological
defects are ignored. These papers also asserted that these systems can be
stable in two spatial dimensions in the presence of noise against topological
defects as well. Here, we show that this is not the case, for the two simplest
possible types of active smectic: dry Malthusian active smectics, and dry
incompressible active smectics. We will now define these phases.
A dry Malthusian active smectic is one in which nothing - not energy, not
momentum, not even particle number [TT5] - is conserved [apolarsm; polarsm].
As a further simplification, we will consider only apolar smectics, i.e.,
smectics with up-down symmetry (symmetry between the two directions normal to
the smectic layers).
A dry incompressible active smectic is simply an active smectic with no
momentum conservation whose mean density is fixed. By “mean” density, we mean
the small wavelength components of the density; the components at
$q_{0}\equiv{2n\pi\over a}$, with $a$ the smectic layer spacing, are non-zero
for all integer $n$ as a result of the spontaneous density modulation that
defines the smectic.
Such completely non-conservative systems are by no means of purely academic
interest. Tissues growing or developing on a substrate [tissue1; tissue], for
example, lack all conservation laws: cell number is not conserved due to cell
division and death. Momentum is lost due to friction with the substrate, and
can be gained due to active forces between the cells and that substrate.
Finally, energy is not conserved, both due to friction with the substrate, and
because living cells have a fuel source (i.e., food) which enables them to
consume energy. Hence, if such a system was thin compared to its lateral
extent, and spontaneously formed a layered structure, it would be a two
dimensional dry Malthusian active smectic of exactly the type we describe
here. However, smectic order has not been observed experimentally yet in two-
dimensional tissues.
Interestingly, the cell actomyosin system may provide an example of a
Malthusian active smectic. Cell contractility is known to result mainly from
the action of the myosin II molecular motors on the cortical actin gel. Myosin
II motors have a long tail, and two active heads. The tails of several motors
tend to bundle in a way similar to the tail bundling of phospholipids, but at
a different scale: the resulting head-to-head distance is on the order of 300
nm [myosins]. Often the picture of a bundle is that of a symmetric flower
bouquet, and their distribution in the actin gel is essentially random. More
ordered structures exist: there is well-defined 3d crystalline order in
muscles, and 1d periodic patterns occur in stress fibers. An intermediate
arrangement is observed in lamellipodia of fibroblasts [myosins; myosins1;
myosins2]: the myosin tails arrange in lines separated from the heads,
building a clear 2d smectic order. The actin filaments, above this structure,
are on average orthogonal to the myosin lines with no specific translational
order. The bundling/unbundling process does not conserve myosin number, and
interactions with the substrate exchange momentum. Hence, this is a “dry”
system in the sense defined earlier. Such systems of stacked myosin II
filaments in cells thus have the characteristics of dry active Malthusian
smectics.
In addition, we think our results shed considerable light on the dislocation
behavior one might expect in more complex active smectic systems in which one
or more quantities are conserved. For instance, our results are valid in 2d
incompressible systems with boundary conditions allowing for layer number
variation, as shown in the last section of this manuscript. This is, for
instance, the case for the roll structures in Rayleigh-Bénard instabilities
[1st].
We find that, despite being more stable in spin-wave theory - that is, when
topological defects (i.e., dislocations) are neglected - active smectics in
rotation-invariant environments (which we will hereafter refer to by the
shorthand “rotation-invariant active smectics”) are unstable against
dislocation unbinding in the presence of any noise, no matter how small.
Furthermore, although they can be stabilized by rotational symmetry breaking
fields, they are still less stable than equilibrium smectics in symmetry
breaking fields of the same strength. Specifically, in the active smectics we
study here, we’ll define $\Delta_{c}$ as the critical value of the noise
strength $\Delta$ above which dislocations unbind, causing the smectic to melt
into an active nematic. This critical value $\Delta_{c}$ grows linearly with
the applied symmetry breaking field strength $g$ for small $g$; that is
$\Delta_{c}\propto g\,\,\,{\rm as}\,\,\,\,g\to 0\,.$ (I.1)
This result should be contrasted with the equilibrium result [KT] for the
transition temperature $T_{c}$ (temperature is the equilibrium analog of the
noise strength in active smectics):
$T_{c}^{\rm eq}\propto\sqrt{g}\,\,\,{\rm as}\,\,\,\,g\to 0\,,$ (I.2)
whose derivation we’ll review in section II.2. We therefore see that, for
small symmetry breaking fields $g$, the critical noise strength $\Delta_{c}$
for dislocation unbinding and the melting of the smectic phase is much smaller
for active smectics than for equilibrium smectics. That is, active smectics
are less robust against melting, even in the presence of symmetry breaking
fields, than their equilibrium counterparts.
Like equilibrium smectics in the presence of a rotational symmetry breaking
field, active smectics in such a symmetry breaking field exhibit quasi-long-
ranged translational correlations for noise strength smaller than the critical
value. This is most clearly manifest in the Fourier transformed density-
density correlation function (which is also the X-ray structure factor in
scattering experiments). This exhibits peaks at wavevectors
$\mathbf{q}_{n}=nq_{0}\hat{y}$, where $q_{0}={2\pi\over a}$, with $a$ the
smectic layer spacing. Near these peaks (i.e., for
$\mathbf{q}=\mathbf{q}_{n}+{\bf\delta q}$), we have
$\langle|\rho(\mathbf{q},t)|^{2}\rangle\propto|{\bf\delta
q}|^{-2+n^{2}\eta(g,\Delta)}\,,$ (I.3)
with $\eta(g,\Delta)$ a non-universal exponent that depends on the symmetry
breaking field strength $g$ and the noise strength $\Delta$ (as well as other
smectic parameters).
Another consequence of our result (I.1) is that the critical value $\eta_{c}$
of the exponent $\eta$ vanishes linearly with symmetry breaking field:
$\eta_{c}\propto g\,\,\,{\rm as}\,\,\,\,g\to 0\,,$ (I.4)
in contrast to equilibrium smectics, for which $\eta_{c}=1/4$, universally,
independent of the applied symmetry breaking field.
The remainder of this paper is organized as follows: in section II, we review
the theory of equilibrium smectics, both in rotation-invariant systems II.1
and with rotational symmetry breaking fields II.2. In section III, we review
the spin-wave theory of active smectics (that is, the theory in which
dislocations are neglected). Section IV presents the calculation of the fields
due to dislocations, which prove to be identical in form to those found in
equilibrium smectics in the presence of a rotational symmetry breaking field.
Section V derives the equation of motion for dislocations in an active,
rotation invariant smectic. In section VI.1, we show that this equation of
motion leads, for fixed boundary conditions, to the achievement of a type of
“homeostasis”, in which isolated dislocations do not spontaneously move. For
“constant stress” boundary conditions, on the other hand, the smectic will
either collapse, or grow without bound. In section VII, we show that even in
the homeostatic state, in the presence of noise, dislocations are always
unbound in rotation invariant smectics. Rotational symmetry breaking fields
can stabilize smectic quasi-long-ranged order in active smectics, as we show
in section VIII. We also show in that section that although symmetry breaking fields
do stabilize smectic order, the resultant order is still very weak for small
fields, in the sense that it can be much more easily destroyed by noise than
in equilibrium smectics. We then generalize these results to the case of
incompressible dry active smectics. Finally, in section X, we summarize our
results, speculate about the behavior of more complicated smectic systems with
conservation laws, and suggest avenues for further work.
## II Review of Equilibrium 2d smectics
### II.1 Rotation invariant 2d equilibrium smectics
#### II.1.1 “Spin wave” (Phonon) theory
Any smectic A phase (either equilibrium or active) is characterized by the
spontaneous breaking of translational invariance in one direction (in contrast
to a crystal, in which translational invariance is broken in all $d$
dimensions, where $d$ is the dimension of space). This is equivalent to saying
that the system spontaneously layers, with the additional requirement that the
layers are “liquid-like”, in the sense of being homogeneous along the layers.
We will choose our co-ordinates throughout this paper so that the direction in
which translational invariance is broken is $y$. This means the layers, in the
absence of fluctuations, run parallel to $x$ (see figure [1]).
Figure 1: Schematic of the ideal smectic state, in which the layers are
parallel, and uniformly spaced. We choose our coordinates $(x,y)$ so that the
$x$-axis runs parallel to the layers, as shown.
Since the smectic breaks the continuous translational symmetry of free space,
such systems (again whether equilibrium or active) have a “Goldstone mode”
associated with the breaking of this symmetry. In smectics, we usually take
this Goldstone mode to be the local displacement $u(\mathbf{r},t)$ of the
layers in the vicinity of the spatial point $\mathbf{r}$ away from some
reference set of parallel, regularly spaced layer positions (see figure [2]).
Figure 2: Definition of the layer displacement field $u(\mathbf{r},t)$. The
straight parallel lines are the reference positions of the layers, while the
curved lines depict a fluctuation in the layer positions. The layer
displacement field $u(\mathbf{r},t)$ is the distance from the reference
position to the fluctuating position of the layers in the vicinity of the
spatial point $\mathbf{r}$, as illustrated.
To describe such systems in equilibrium, one introduces caille ; chaikin ;
degennes a phenomenological elastic Hamiltonian (sometimes called the elastic
free energy). This is constructed as an expansion in powers of spatial
gradients of the displacement field $u(\mathbf{r},t)$, keeping all terms to
leading order in spatial gradients allowed by the symmetries of the system.
Translational invariance requires that all terms involve at least one spatial
derivative of $u(\mathbf{r},t)$, since a spatially uniform displacement (i.e.,
$u(\mathbf{r},t)={\rm constant}$) must cost no energy.
Rotation invariance is somewhat more subtle. Its implications can be
understood by recognizing that a uniform rotation of the layers by a small
angle $\phi\ll 1$ can be represented in our coordinate system by a non-uniform
displacement field
$u(\mathbf{r},t)=\phi x\,.$ (II.1)
From this expression, we see that
$\partial_{x}u=\phi\,,$ (II.2)
that is, the $x$-derivative of the displacement field $u$ gives the rotation
angle of the layers away from their reference orientation.
The relation (II.2) continues to apply for arbitrary layer distortions; that
is, the $x$-derivative of $u$ locally gives the local tilt of the layers away
from their reference orientation (provided that tilt is small, of course). We
will make much use of this relationship throughout this paper.
Rotation invariance therefore forbids the inclusion of terms that depend on
$\partial_{x}u$ in the elastic Hamiltonian, since such terms will be non-zero
for the uniform rotation (II.1). Therefore, the leading order term involving
$x$-derivatives of $u(\mathbf{r},t)$ in $H$ is a term proportional to
$(\partial_{x}^{2}u)^{2}$, which represents the energy cost of bending the
layers.
There is no such prohibition against terms involving $\partial_{y}u$. Indeed,
a term proportional to $(\partial_{y}u)^{2}$ can easily be seen to represent
the energy cost of compressing the layers closer together (for
$\partial_{y}u<0$) or stretching them further apart (for $\partial_{y}u>0$).
It is straightforward to show that
$\delta a=a\partial_{y}u\,,$ (II.3)
where $\delta a$ is the departure of the local layer spacing from its
energetically optimal value $a$. This is another relation which we will use
repeatedly throughout this paper.
These considerations lead, to quadratic order in $u$, to the elastic
Hamiltonian caille ; chaikin ; degennes:
$H_{\rm sm}=\frac{1}{2}\int{\rm
d}^{2}r\left[B(\partial_{y}u)^{2}+K(\partial_{x}^{2}u)^{2}\right]\,.$ (II.4)
While terms of higher than quadratic order in $u$ are actually important in 2d
equilibrium smectics Golub, we will not consider them here, since they do not
play any role in active apolar smectics polarsm ; polar smectic.
The simplest, purely relaxational, equilibrium dynamics associated with this
Hamiltonian is the time-dependent Landau-Ginsburg-Wilson equation of motion:
$\partial_{t}u=-\Gamma{\delta H_{\rm sm}\over\delta u}+f_{\rm u}\,,$ (II.5)
where $f_{\rm u}(\mathbf{r},t)$ is a Gaussian, zero mean white noise that
drives the smectic to thermal equilibrium, governed by the Hamiltonian $H_{\rm
sm}$ at temperature $T$. To do this, its variance must obey the “fluctuation-
dissipation theorem” chaikin, which requires
$\langle f_{u}({\bf r},t)f_{u}({\bf r}^{\prime},t^{\prime})\rangle=2\Gamma
k_{B}T\delta({\bf r}-{\bf r}^{\prime})\delta(t-t^{\prime})\,.$ (II.6)
Using the Hamiltonian (II.4) in the equation of motion (II.5) gives:
$\displaystyle\partial_{t}u=\Gamma B\partial_{y}^{2}u-\Gamma
K\partial_{x}^{4}u+f_{u}\,.$ (II.7)
Note that this equation exhibits subdiffusive behavior in the $x$-direction.
That is, relaxation along the $x$-direction is even slower than diffusive;
specifically, the lifetime of a plane wave $u$ field running along $x$ with
wavelength $\lambda$ grows like $\lambda^{4}$ for large $\lambda$, in contrast
to the $\lambda^{2}$ scaling of simple diffusion.
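This anisotropic scaling can be made concrete with a minimal numerical sketch of the linear decay rate $\omega(\mathbf{q})=\Gamma(Bq_{y}^{2}+Kq_{x}^{4})$ implied by (II.7); the parameter values here are arbitrary illustrative choices, not taken from any experiment:

```python
import math

# Illustrative parameters (arbitrary units; not from any experiment).
Gamma, B, K = 1.0, 1.0, 1.0

def decay_rate(qx, qy):
    """Relaxation rate of a Fourier mode of (II.7): omega = Gamma*(B*qy^2 + K*qx^4)."""
    return Gamma * (B * qy**2 + K * qx**4)

def lifetime_along(axis, wavelength):
    """Lifetime 1/omega of a plane-wave distortion of the given wavelength along x or y."""
    q = 2.0 * math.pi / wavelength
    return 1.0 / (decay_rate(q, 0.0) if axis == "x" else decay_rate(0.0, q))

# Doubling the wavelength multiplies the lifetime by 4 along y (diffusive) ...
print(lifetime_along("y", 2.0) / lifetime_along("y", 1.0))  # 4.0
# ... but by 16 along x (subdiffusive, lifetime ~ wavelength^4).
print(lifetime_along("x", 2.0) / lifetime_along("x", 1.0))  # 16.0
```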
This slowness of response to distortions in the $x$-direction, like the
corresponding “softness” in the $x$-direction of the elastic Hamiltonian
(II.4), is a direct consequence of the rotation invariance (i.e., the zero
energy cost of pure rotations (II.1)) discussed earlier.
This rotation invariance can be removed by applying an external symmetry
breaking field, as we’ll discuss in subsection II.2.
#### II.1.2 Dislocation effects: There are no 2d equilibrium smectics at
$T\neq 0$
The preceding discussion constitutes what is normally called “spin-wave”
theory. It assumes that it is possible to define a unique, single valued
displacement field $u(\mathbf{r},t)$ throughout the entire system. This is in
fact not the case if dislocations (see figure [3]) are present in the system.
We will now review the theory of these defects in equilibrium, as first
developed by Pershanpershan .
Figure 3: Illustration of the Burgers’ construction for dislocations described
in the text.
As can easily be seen from figure [3], one can think of a dislocation in a
smectic as a place where a layer ends. That these “topological defects” make
it impossible to define a single valued $u$ field can be seen by the well-
known “Burgers’ construction”, which is also illustrated in figure 3. In this
construction, one “walks” in a closed loop around the dislocation, crossing
equal numbers of layers while moving up and moving down. In the example of
figure 3, one crosses two layers while going up from the starting point S to
the corner A, plus another two while moving from the lower right corner E to
the final point F, for a total of four going up, and crosses four going down
between the upper left corner B and the lower left corner C. Clearly, if the
dislocation (at point D in figure 3) were not there, or if the path did not
enclose the dislocation (i.e., the point D), this path would have returned to
the starting point S. Equally clearly, when it does enclose the dislocation,
the path does not return to $S$, but, rather, overshoots closure by the thick
blue arrow between S and F, whose length is clearly the layer spacing $a$.
This failure to close is known as the “Burgers’ number” $b$ for the
dislocation, and in this case is $b=+1$.
It is straightforward to see that in general, the Burgers’ number will be an
integer, the integer being simply the difference between the number of layers
coming in from the left that end inside the loop, and the number coming in
from the right that do so.
The Burgers’ number, defined in this way, is the analog for smectics (which,
we remind the reader, only order translationally in one direction) of the
better-known “Burgers’ vector” defined by an almost identical construction in
crystalline solids (which translationally order in all directions).
One way of thinking about this result is that, if we had defined the
displacement field at the starting point S to be $u(S)=0$, we would, after
having completed the loop, have found that the layer had been displaced up by
an amount $na$ for $b=n$. That is, $u$ is no longer single valued, but instead
increases by $ba$ every time one moves around a loop enclosing the
dislocation. Mathematically, the contour integral $\int_{C}m\hat{\bf
n}\cdot{\bf dl}$ counts the number of layers traversed along the integration
path $C$, where $m=1/a(\mathbf{r})$ is the local layer density at the point
$\mathbf{r}$, and $\hat{n}(\mathbf{r})$ is the unit normal to the layers at
the point $\mathbf{r}$. The statement that $u$ is not single valued, but
changes when moving around a loop enclosing dislocations, is equivalent to the
statement that a nonvanishing number of layers is encountered by a closed
loop:
$\oint m\hat{\bf n}\cdot{\bf dl}=\sum_{\alpha}b_{\alpha}\,.$ (II.8)
Here we have now generalized to the case of many dislocations, labeled by
$\alpha$, with Burgers’ numbers $b_{\alpha}$, at positions
$\mathbf{r}_{\alpha}$ that are enclosed by the loop over which the contour
integral on the left-hand side of (II.8) is done. We reiterate that the
Burgers’ number $b_{\alpha}$ of each dislocation must be an integer; that is,
$b_{\alpha}=n$, with $n$ an integer.
Applying Stokes’ theorem to (II.8) gives
$\nabla\times m\hat{\bf
n}=\sum_{\alpha}b_{\alpha}\delta(\mathbf{r}-\mathbf{r}_{\alpha})\,.$ (II.9)
In (II.9), we have defined the curl in two dimensions in the usual way; that
is, as a scalar given, for any vector $\mathbf{v}$, by
$\nabla\times\mathbf{v}\equiv\partial_{x}v_{y}-\partial_{y}v_{x}$.
It is convenient to define ${\bf w}=(a_{0}m-1)\hat{\bf n}$, where $a_{0}$ is
the layer spacing in a reference state. For small displacements it can be
written as
$\mathbf{w}(\mathbf{r},t)\simeq\phi(\mathbf{r},t)\hat{x}+\left({\delta
a(\mathbf{r},t)\over a_{0}}\right)\hat{y}\,,$ (II.10)
with $\phi(\mathbf{r},t)$ and $\delta a(\mathbf{r},t)$ respectively the local
tilt of the layers at $\mathbf{r}$ at time $t$, and the local change in the
layer spacing.
Keeping in mind our earlier discussion of the relationships (II.2) and (II.3)
which hold between these two quantities $\phi(\mathbf{r},t)$ and $\delta
a(\mathbf{r},t)$ and the layer displacement field $u(\mathbf{r},t)$ in the
absence of dislocations, we see that $\mathbf{w}$ is simply the
generalization of $\nabla u$ to situations in which dislocations are present.
To say this another way, when dislocations are absent,
$\mathbf{w}=\nabla u\ \ \ ,\ \ \ {\rm no}\,{\rm dislocations}\,.$ (II.11)
Thus, $\mathbf{w}$ is the natural generalization of the vector field $\nabla
u$ in the presence of dislocations, see Appendix A.
We can use this idea to calculate the $\mathbf{w}$ field of a dislocation.
That will be the field that minimizes the energy of the system for a given
configuration of dislocations. That is, we wish to find the field
$\mathbf{w}(\mathbf{r})$ that minimizes the energy of the system subject to
the constraint
$\nabla\times\mathbf{w}(\mathbf{r})=\sum_{\alpha}a_{0}b_{\alpha}\delta(\mathbf{r}-\mathbf{r}_{\alpha})\,,$
(II.12)
which is where the dislocation configuration
$\{b_{\alpha},\mathbf{r}_{\alpha}\}$ enters the calculation.
In the absence of dislocations, the minimum energy configuration
$u(\mathbf{r})$ can be obtained from the Euler-Lagrange equation associated
with the smectic elastic Hamiltonian (II.4). That equation is easily seen to
be:
$B\partial^{2}_{y}u(\mathbf{r})-K\partial^{4}_{x}u(\mathbf{r})=0\,.$ (II.13)
We can obviously rewrite this as
$B\partial_{y}(\partial_{y}u(\mathbf{r}))-K\partial^{3}_{x}(\partial_{x}u(\mathbf{r}))=0\,.$
(II.14)
Now generalizing this equation to situations in which dislocations are present
by replacing $\nabla u$ with $\mathbf{w}$ gives
$B\partial_{y}w_{y}(\mathbf{r})-K\partial^{3}_{x}w_{x}(\mathbf{r})=0\,.$
(II.15)
The other condition on dislocations is the Burgers’ condition (II.12).
These two simultaneous linear equations (II.15) and (II.12) can be easily
solved by Fourier transforming in space; this gives
$Bq_{y}w_{y}(\mathbf{q})+Kq_{x}^{3}w_{x}(\mathbf{q})=0\,,$ (II.16)
and
$q_{x}w_{y}(\mathbf{q})-q_{y}w_{x}(\mathbf{q})=-i\sum_{\alpha}a_{0}b_{\alpha}e^{-i\mathbf{q}\cdot\mathbf{r}_{\alpha}}\,.$
(II.17)
Solving these simple linear equations gives
$\displaystyle w_{x}(\mathbf{q})={iBq_{y}\over
Bq_{y}^{2}+Kq_{x}^{4}}\sum_{\alpha}a_{0}b_{\alpha}e^{-i\mathbf{q}\cdot\mathbf{r}_{\alpha}}\,,$
(II.18) $\displaystyle w_{y}(\mathbf{q})={-iKq_{x}^{3}\over
Bq_{y}^{2}+Kq_{x}^{4}}\sum_{\alpha}a_{0}b_{\alpha}e^{-i\mathbf{q}\cdot\mathbf{r}_{\alpha}}\,.$
(II.19)
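As a sanity check, one can verify directly (here numerically, at an arbitrarily chosen wavevector, for a single dislocation, with purely illustrative parameter values) that the solutions (II.18) and (II.19) satisfy both (II.16) and (II.17):

```python
import cmath

# Illustrative parameters and an arbitrary (non-special) wavevector.
B, K, a0, b = 2.0, 0.5, 1.0, 1
qx, qy = 0.7, -1.3
rx, ry = 0.2, 0.4  # dislocation position

S = a0 * b * cmath.exp(-1j * (qx * rx + qy * ry))  # the source sum for one dislocation
D = B * qy**2 + K * qx**4                          # common denominator

wx = 1j * B * qy * S / D      # (II.18)
wy = -1j * K * qx**3 * S / D  # (II.19)

# (II.16): B qy wy + K qx^3 wx should vanish.
print(abs(B * qy * wy + K * qx**3 * wx))
# (II.17): qx wy - qy wx should equal -i S.
print(abs(qx * wy - qy * wx + 1j * S))
```

Both residuals vanish to machine precision, since substituting (II.18) and (II.19) into the left-hand sides reproduces (II.16) and (II.17) identically.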
Fourier transforming these solutions back to real space gives
$\displaystyle
w_{x}(\mathbf{r})=\sum_{\alpha}a_{0}b_{\alpha}G_{x}(\mathbf{r}-\mathbf{r}_{\alpha})\,,$
(II.20) $\displaystyle
w_{y}(\mathbf{r})=\sum_{\alpha}a_{0}b_{\alpha}G_{y}(\mathbf{r}-\mathbf{r}_{\alpha})\,,$
(II.21)
where the Green’s functions $G_{x,y}$ are given by
$\displaystyle G_{x}(\mathbf{r})=-{1\over
4\sqrt{\pi\lambda|y|}}\exp\bigg{[}-\left({x^{2}\over
4\lambda|y|}\right)\bigg{]}{\rm sgn}(y)\,,$ (II.22) $\displaystyle
G_{y}(\mathbf{r})={x\over
8\sqrt{\pi\lambda|y|^{3}}}\exp\bigg{[}-\left({x^{2}\over
4\lambda|y|}\right)\bigg{]}\,,$ (II.23)
where we’ve defined $\lambda\equiv(K/B)^{1/2}$.
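The real-space Green’s functions (II.22) and (II.23) can likewise be checked numerically: away from the dislocation at the origin they should be curl-free (so that (II.12) is satisfied with the delta function supported only at the origin), and they should satisfy the force-balance condition (II.15). A finite-difference sketch, with illustrative values of $B$ and $K$:

```python
import math

B, K = 1.0, 1.0
lam = math.sqrt(K / B)  # lambda = (K/B)^(1/2), as defined in the text

def Gx(x, y):
    """G_x of (II.22)."""
    return (-1.0 / (4.0 * math.sqrt(math.pi * lam * abs(y)))
            * math.exp(-x**2 / (4.0 * lam * abs(y))) * math.copysign(1.0, y))

def Gy(x, y):
    """G_y of (II.23)."""
    return (x / (8.0 * math.sqrt(math.pi * lam * abs(y)**3))
            * math.exp(-x**2 / (4.0 * lam * abs(y))))

def d(f, x, y, axis, h=1e-4):
    """Centered first difference along x or y."""
    return ((f(x + h, y) - f(x - h, y)) / (2 * h) if axis == "x"
            else (f(x, y + h) - f(x, y - h)) / (2 * h))

def d3x(f, x, y, h=1e-2):
    """Centered third difference in x."""
    return (f(x + 2*h, y) - 2*f(x + h, y) + 2*f(x - h, y) - f(x - 2*h, y)) / (2 * h**3)

x0, y0 = 1.0, 2.0  # an arbitrary point away from the dislocation at the origin
curl = d(Gy, x0, y0, "x") - d(Gx, x0, y0, "y")          # curl of w: zero away from r = 0
balance = B * d(Gy, x0, y0, "y") - K * d3x(Gx, x0, y0)  # left side of (II.15): zero
print(curl, balance)  # both ~0 up to finite-difference error
```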
We can also obtain the energy of interaction between dislocations by inserting
the solution (II.18), (II.19) for $\mathbf{w}$ into our elastic Hamiltonian
(II.4). To do so, we must first rewrite (II.4) in terms of $\mathbf{w}$ using
the same replacement $\nabla u\to\mathbf{w}$ we’ve been using. This gives:
$H_{\rm sm}=\frac{1}{2}\int{\rm
d}^{2}r\left[Bw_{y}^{2}+K(\partial_{x}w_{x})^{2}\right]\,.$ (II.24)
Fourier transforming this, inserting the solution (II.18), (II.19) for
$\mathbf{w}$ into the result, and Fourier transforming back to real space gives
$H(\{b_{\alpha},\mathbf{r}_{\alpha}\})=\sum_{\alpha,\beta}a_{0}^{2}b_{\alpha}b_{\beta}U(\mathbf{r}_{\alpha}-\mathbf{r}_{\beta})\,,$
(II.25)
where the pairwise interaction potential is
$U(\mathbf{r})={B\over
4}\sqrt{\lambda\over\pi|y|}\exp\bigg{[}-\left({x^{2}\over
4\lambda|y|}\right)\bigg{]}\,.$ (II.26)
Because this vanishes as the separation between dislocations goes to infinity,
dislocations will always be unbound in smectics at any non-zero temperature,
thereby destroying the smectic order. This means that 2d smectics do not
actually exist in rotation invariant systems at non-zero temperature, as first
shown (using exactly this argument) by 1st .
While we do not need the equation of motion for the dislocations in this
equilibrium system, since the statistical mechanics is determined entirely by
the Hamiltonian (II.4), it is instructive to formulate those equations of
motion. This will allow us later to compare and contrast them with the
equations of motion for dislocations in active smectics, for which the
equation of motion is the only information we have about the behavior of
dislocations in those non-equilibrium systems.
To obtain the equations of motion, we first calculate the forces arising from
the potential (II.26).
Consider a system with only two dislocations $\alpha=(1,2)$, with Burgers’
numbers $b_{1}$ and $b_{2}$, at positions $\mathbf{r}_{1}={\bf 0}$ and
$\mathbf{r}_{2}=\mathbf{r}=x\hat{x}+y\hat{y}$. The dislocation Hamiltonian
(II.25) then implies that the energy of this pair will be
$V(\mathbf{r})={Bb_{1}b_{2}a_{0}^{2}\over
4}\sqrt{\lambda\over\pi|y|}\exp\bigg{[}-\left({x^{2}\over
4\lambda|y|}\right)\bigg{]}\,.$ (II.27)
The force experienced by dislocation $\alpha=2$ will therefore be ${\bf
F}=F_{x}\hat{x}+F_{y}\hat{y}$, with its Cartesian components $F_{x,y}$ given
by
$\displaystyle F_{x}$ $\displaystyle=$
$\displaystyle-\partial_{x}V(\mathbf{r})={Bb_{1}b_{2}a_{0}^{2}x\over
8\sqrt{\pi\lambda|y|^{3}}}\exp\bigg{[}-\left({x^{2}\over
4\lambda|y|}\right)\bigg{]}=Bb_{2}a_{0}w_{y}^{(1)}(\mathbf{r})$ $\displaystyle
F_{y}$ $\displaystyle=$
$\displaystyle-\partial_{y}V(\mathbf{r})=-{Bb_{1}b_{2}a_{0}^{2}\,{\rm
sgn}(y)\over
16\sqrt{\pi\lambda|y|^{5}}}\left(x^{2}-2\lambda|y|\right)\exp\bigg{[}-\left({x^{2}\over
4\lambda|y|}\right)\bigg{]}=Kb_{2}a_{0}\partial_{x}^{2}w_{x}^{(1)}(\mathbf{r})$
(II.28)
where by $\mathbf{w}^{(1)}(\mathbf{r})$, we mean the contribution to the field
$\mathbf{w}$ at the position $\mathbf{r}$ of the $\alpha=2$ dislocation coming
from the $\alpha=1$ dislocation (i.e., neglecting the field created by
dislocation $\alpha=2$ itself). It is straightforward to show that the
generalization of these forces to configurations with more than two
dislocations is:
$\displaystyle F_{x}^{\alpha}$ $\displaystyle=$ $\displaystyle
Bb_{\alpha}a_{0}w_{y}(\mathbf{r}_{\alpha})$ $\displaystyle F_{y}^{\alpha}$
$\displaystyle=$ $\displaystyle
Kb_{\alpha}a_{0}\partial_{x}^{2}w_{x}(\mathbf{r}_{\alpha})$ (II.29)
where by $\mathbf{w}(\mathbf{r}_{\alpha})$, we mean the contribution to the
field $\mathbf{w}$ at the position $\mathbf{r}_{\alpha}$ of the $\alpha$’th
dislocation coming from all of the other dislocations (i.e., neglecting the
field created by dislocation $\alpha$ itself).
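The identification of the force with the local dislocation-induced field can itself be checked numerically: differentiating the pair potential (II.27) by finite differences and comparing with $Bb_{2}a_{0}w_{y}^{(1)}$, where $w_{y}^{(1)}=a_{0}b_{1}G_{y}$ with $G_{y}$ given by (II.23). A sketch with illustrative parameter values:

```python
import math

# Illustrative parameters (arbitrary units).
B, K, a0 = 1.0, 1.0, 1.0
b1, b2 = 1, -1
lam = math.sqrt(K / B)

def V(x, y):
    """Pair interaction energy (II.27) of two dislocations separated by (x, y)."""
    return (B * b1 * b2 * a0**2 / 4.0) * math.sqrt(lam / (math.pi * abs(y))) \
        * math.exp(-x**2 / (4.0 * lam * abs(y)))

def wy1(x, y):
    """y-component of the field of dislocation 1 at (x, y): a0*b1*G_y, from (II.21), (II.23)."""
    return a0 * b1 * x / (8.0 * math.sqrt(math.pi * lam * abs(y)**3)) \
        * math.exp(-x**2 / (4.0 * lam * abs(y)))

x0, y0, h = 1.5, 2.0, 1e-5
Fx_fd = -(V(x0 + h, y0) - V(x0 - h, y0)) / (2 * h)  # F_x = -dV/dx, centered difference
Fx_field = B * b2 * a0 * wy1(x0, y0)                # first line of (II.28)
print(abs(Fx_fd - Fx_field))  # ~0: the two expressions agree
```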
Since the dislocation cannot “tell” whether the local field $\mathbf{w}$ is
created by other dislocations, by spin waves, or by externally applied
stresses, we expect (II.29) to hold more generally if we take
$\mathbf{w}(\mathbf{r}_{\alpha})$ on the right-hand side of those equations to be the
entire $\mathbf{w}$ field, excluding the part due to dislocation $\alpha$
itself. This proves to be important when we consider the effect of stresses at
the boundary on dislocation motion.
There are two important features of the result (II.29) that should be noted:
1) the force on a given dislocation $\alpha$ is determined entirely by the
local value of the field $\mathbf{w}(\mathbf{r}_{\alpha})$ (and its
derivatives) at the location $\mathbf{r}_{\alpha}$ of that dislocation
(excluding the part of that field due to the given dislocation itself).
2) The dependence of the force on the $x$ component $w_{x}$ involves spatial
derivatives of that component; a uniform $w_{x}$ does not generate any force
on the dislocation. This is a consequence of rotation invariance: as shown by
equation (II.10), a spatially uniform $w_{x}$ corresponds to a uniform
rotation of the layers, which clearly cannot lead to any force on the
dislocation in a rotation-invariant system. This consideration will continue
to apply in active smectics, and will forbid certain terms in the force in
those systems which one might otherwise expect, as we will see in section [V]
below.
Since there will be friction between the dislocations and the underlying
substrate, we expect the velocity $\mathbf{v}$ (rather than the acceleration
$\dot{\mathbf{v}}$) of the dislocations to be linear in the force
$\mathbf{F}$. That is,
$\mathbf{v}_{\alpha}=\bm{\mu}\mathbf{F}_{\alpha}\,,$ (II.30)
where $\bm{\mu}$ is a constant “mobility tensor”. On symmetry grounds, we
expect this tensor to be diagonal in our $(x,y)$ coordinate system (i.e., with
the $x$ and $y$ axes respectively parallel and perpendicular to the mean
smectic layers); hence
$v_{x}^{\alpha}=\mu_{x}F_{x}^{\alpha}\,,\,v_{y}^{\alpha}=\mu_{y}F_{y}^{\alpha}\,.$
(II.31)
Using our earlier results (II.29) for the forces on the dislocations, we can
rewrite these as
$\displaystyle v_{x}^{\alpha}$ $\displaystyle=$
$\displaystyle\mu_{x}Bb_{\alpha}a_{0}w_{y}(\mathbf{r}_{\alpha})$
$\displaystyle v_{y}^{\alpha}$ $\displaystyle=$ $\displaystyle
\mu_{y}Kb_{\alpha}a_{0}\partial_{x}^{2}w_{x}(\mathbf{r}_{\alpha})\,.$ (II.32)
We see that, like the component $F_{y}$ of the force, the $y$-component of the
dislocation velocity (which is, after all, proportional to $F_{y}$) vanishes
for spatially uniform $w_{x}$. And it does so for the same reason: a spatially
uniform $w_{x}$ corresponds to a uniform rotation, which cannot lead to
dislocation motion in a rotation invariant system.
This result, based as it is purely on symmetry, proves to continue to apply
even in active, rotation invariant smectics, as we will see in section [V].
### II.2 Non-rotation invariant 2d equilibrium smectics: effects of a
symmetry breaking field
#### II.2.1 “Spin wave” (Phonon) theory with a symmetry breaking field:
Quasi-long-ranged order
As we have seen, rotation invariance plays a crucial role in the behavior of
equilibrium two-dimensional smectics - indeed, in some sense, it makes the
smectic phase impossible (at non-zero temperature) in $d=2$. One can, however,
make a 2d smectic non-rotation invariant. This can be done in a number of
ways. Two of the simplest are:
1) applying a magnetic field (${\bf H}$)
2) preparing the 2d surface on which the smectic lives in some non-rotation
invariant way. For example, one could rub the surface in one direction with an
abrasive cloth, or etch a set of parallel grooves along it.
Magnetic fields break rotation invariance by picking out a preferred direction
for the layer normal
$\hat{{\bf
n}}(\mathbf{r})=-\sin[\phi(\mathbf{r})]\,\hat{x}+\cos[\phi(\mathbf{r})]\,\hat{y}\,,$
(II.33)
where $\phi(\mathbf{r})$ is the angle between the local layers at $\mathbf{r}$
and the $y$-axis. They do this because the magnetic susceptibility tensor
$\chi^{H}_{ij}$ must, by symmetry, have the layer normal $\hat{{\bf n}}$ as
one of its principal axes. This implies that the
susceptibility tensor can be written in the form degennes ; chaikin
$\chi^{H}_{ij}=\chi^{H}_{0}\delta_{ij}-\Delta\chi^{H}n_{i}n_{j}\,,$ (II.34)
where the material parameters $\chi^{H}_{0}$ and $\Delta\chi^{H}$ are
respectively the isotropic and anisotropic parts of the susceptibility.
The expression II.34 in turn implies that the magnetic energy of the smectic
is given by (note that the $H$ on the left-hand side of the following
expressions stands for “Hamiltonian”, while the $H$ on the right-hand side of
the first of them stands for magnetic field):
$\displaystyle H_{mag}$ $\displaystyle={1\over 2}\int
d^{2}r\chi^{H}_{ij}H_{i}H_{j}$ (II.35) $\displaystyle=-{1\over 2}\int
d^{2}r\Delta\chi^{H}({\bf H}\cdot\hat{{\bf n}})^{2}+{\rm constant}\,$
where the “constant” includes those parts of the energy independent of the
layer normal $\hat{{\bf n}}$.
Inserting our expression II.33 for $\hat{{\bf n}}$, and choosing our $y$-axis
to be along the magnetic field, gives
$H_{mag}=-{1\over 2}\int
d^{2}r\Delta\chi^{H}H^{2}\cos^{2}[\phi(\mathbf{r})]+{\rm constant}\,.$ (II.36)
Clearly, the magnetic energy favors (for positive $\Delta\chi^{H}$) alignment
of $\hat{{\bf n}}$ along the magnetic field (i.e., $\phi=0$). If, e.g.,
$\Delta\chi^{H}$ is negative, then the lowest energy configuration will have
the layer normal $\hat{{\bf n}}$ perpendicular to ${\bf H}$. In that case,
we can still arrive at the expression II.36 simply by choosing the $y$-axis to
be perpendicular to the applied field. So the result II.36 can easily be made
to hold in general.
Now assuming that fluctuations away from this minimum energy state are small,
we can expand II.36 for small $\phi$, obtaining
$H_{mag}={1\over 2}\int d^{2}r\Delta\chi^{H}H^{2}[\phi(\mathbf{r})]^{2}+{\rm
constant}\,.$ (II.37)
Finally, using the relation $\partial_{x}u=\phi$ (i.e., (II.2)) between $\phi$
and the layer displacement field $u$, we can rewrite this as
$H_{mag}={1\over 2}\int d^{2}r\,g(\partial_{x}u)^{2}+{\rm constant}\,,$
(II.38)
where we’ve defined the “symmetry breaking field strength” $g$ via
$g\equiv\Delta\chi^{H}H^{2}$ (II.39)
for the case of an applied magnetic field.
Note that the symmetry breaking field strength $g$ is actually proportional to
the square of the applied field. This is a consequence of the fact that the
smectic is apolar; in a polar smectic, this symmetry breaking would be linear
in the applied field, since the constituents of the smectic would then have
spontaneous magnetic and electric dipole moments.
The second approach mentioned above, of breaking symmetry by preparing the
surface in a non-rotation invariant way, can be shown by similar arguments to
lead to a symmetry breaking contribution to the Hamiltonian of the form II.38
as well. The dependence of $g$ on whatever quantity one uses to characterize
the strength of the symmetry breaking of the surface preparation need not be
quadratic, however. For example, if the preparation consisted of etching or
rubbing a set of grooves onto the substrate, we would expect $g$ in II.38 to
be linear, not quadratic, in the density of such grooves, at least when that
density is small.
Adding this additional symmetry breaking energy II.38 to the terms already
present in our smectic energy gives the total smectic Hamiltonian for the non-
rotation invariant case:
$H_{\rm sm}=\frac{1}{2}\int{\rm
d}^{2}r\left[B(\partial_{y}u)^{2}+g(\partial_{x}u)^{2}+K(\partial_{x}^{2}u)^{2}\right]\,.$
(II.40)
The bend elasticity term $K(\partial_{x}^{2}u)^{2}$ is clearly negligible, at
long wavelengths (i.e., for small spatial gradients) relative to the symmetry
breaking term $g(\partial_{x}u)^{2}$. We will therefore henceforth drop it,
which leaves our Hamiltonian in the form:
$H_{\rm sm}=\frac{1}{2}\int{\rm
d}^{2}r\left[B(\partial_{y}u)^{2}+g(\partial_{x}u)^{2}\right]\,.$ (II.41)
This non-rotation invariant smectic problem can readily be seen to be
equivalent to an XY model. To see this, make a simple linear change of
variables from the layer displacement $u(\mathbf{r},t)$ to an “angle field”
$\theta(\mathbf{r},t)$ defined via
$\theta\equiv q_{0}u$ (II.42)
where $q_{0}\equiv{2\pi\over a_{0}}$ is the wavevector of the smectic
layering. This has the effect of converting the invariance of the smectic
system under the translation $u(\mathbf{r},t)\to u(\mathbf{r},t)+a_{0}$ (which
is a symmetry since the smectic structure is periodic with period $a_{0}$ in
the $y$-direction) to invariance under $\theta\to\theta+2\pi$. The latter
symmetry implies that $\theta$ can be interpreted as the angle between a unit
spin and some reference direction, or, equivalently, as the phase of a complex
scalar. Both of these systems are XY models.
With the change of variables (II.42), the Hamiltonian (II.40) becomes
$H_{XY}=\frac{1}{2}\int{\rm
d}^{2}r\left[K_{y}(\partial_{y}\theta)^{2}+K_{x}(\partial_{x}\theta)^{2}\right]\,,$
(II.43)
with
$K_{y}=Bq_{0}^{-2}\ \ \ ,\ \ \ K_{x}=gq_{0}^{-2}\,.$ (II.44)
We can convert this into the most familiar, isotropic form of the XY model by
rescaling lengths anisotropically so as to make the coefficients of the two
terms in (II.43) equal. It is easy to show that the change of variables
$x=x^{\prime}\sqrt{K_{x}\over K_{y}}=x^{\prime}\sqrt{g\over B}$ (II.45)
accomplishes this, leading to an isotropic model
$H_{XYiso}=\frac{\bar{K}}{2}\int{\rm d}x^{\prime}{\rm
d}y|\nabla^{\prime}\theta|^{2}\,,$ (II.46)
where
$\nabla^{\prime}\equiv\partial_{x^{\prime}}\hat{x}^{\prime}+\partial_{y}\hat{y}$,
and the spin wave stiffness $\bar{K}$ is just the geometric mean of $K_{x}$
and $K_{y}$:
$\bar{K}=\sqrt{K_{x}K_{y}}\,.$ (II.47)
The model (II.46) with the “compactness condition” that the operation
$\theta\to\theta+2\pi$ takes one to the same physical state is the extremely
well-studied “XY model” KT. It describes spin systems, with the local spin
$\mathbf{S}(\mathbf{r}^{\prime})=(\cos(\theta(\mathbf{r}^{\prime})),\sin(\theta(\mathbf{r}^{\prime})))$,
where
$\mathbf{r}^{\prime}\equiv(x^{\prime},y)=\left(x\sqrt{B\over g},y\right)\,.$
(II.48)
It exhibits quasi-long-ranged order, that is, algebraically decaying spin
correlations KT:
$\langle\mathbf{S}(\mathbf{r}^{\prime}_{1})\cdot\mathbf{S}(\mathbf{r}^{\prime}_{2})\rangle\propto|\mathbf{r}^{\prime}_{1}-\mathbf{r}^{\prime}_{2}|^{-\eta(T)}\,,$
(II.49)
with the non-universal, temperature-dependent exponent KT
$\eta(T)={k_{B}T\over 2\pi\bar{K}}={k_{B}T\over
2\pi\sqrt{K_{x}K_{y}}}={k_{B}Tq_{0}^{2}\over 2\pi\sqrt{gB}}\,.$ (II.50)
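The chain of identities in (II.50) follows from (II.44) and (II.47); here is a quick numerical consistency check, with purely illustrative dimensionless values for $B$, $g$, $q_{0}$, and $k_{B}T$:

```python
import math

# Illustrative dimensionless values (not from any experiment).
B, g, q0, kBT = 2.0, 0.5, 2.0 * math.pi, 0.3

# Effective XY stiffnesses, (II.44).
Ky = B / q0**2
Kx = g / q0**2

# Spin-wave stiffness as the geometric mean, (II.47).
Kbar = math.sqrt(Kx * Ky)

# The three expressions for eta in (II.50) must agree.
eta1 = kBT / (2.0 * math.pi * Kbar)
eta2 = kBT / (2.0 * math.pi * math.sqrt(Kx * Ky))
eta3 = kBT * q0**2 / (2.0 * math.pi * math.sqrt(g * B))
print(eta1, eta2, eta3)  # all three coincide
```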
The correlation function in smectics most closely analogous to the spin-spin
correlation function (II.49) is the density-density correlation function. This
follows from a standard result of the scattering theory of smectics degennes ,
which states that the Fourier transformed equal-time correlations of the
density near the $n$’th Bragg peak (i.e., near wavevector
$\mathbf{q}=nq_{0}\hat{y}$ with integer $n$) are given by
$\langle|\rho(\mathbf{q},t)|^{2}\rangle\propto\mathbf{FT}\bigg\{\langle\exp\{inq_{0}[u(\mathbf{r}^{\prime}_{1})-u(\mathbf{r}^{\prime}_{2})]\}\rangle\bigg\}\bigg|_{{\bf\delta q}}\,,$ (II.51)
where $\mathbf{FT}$ denotes a Fourier transform and ${\bf\delta
q}\equiv\mathbf{q}-nq_{0}\hat{y}$. Mapping this to the XY problem by using the
change of fields from $\theta$ to $u$ (II.42), and the change of coordinates
from $x$ to $x^{\prime}$ (II.45), we obtain power law correlations for the
complex exponential in (II.51):
$\langle\exp\{inq_{0}[u(\mathbf{r}^{\prime}_{1})-u(\mathbf{r}^{\prime}_{2})]\}\rangle\propto|\mathbf{r}^{\prime}_{1}-\mathbf{r}^{\prime}_{2}|^{-n^{2}\eta(T)}\,.$ (II.52)
This is easily shown to imply that the Bragg peaks become power law
singularities near the $n$’th Bragg peak:
$\langle|\rho(\mathbf{q},t)|^{2}\rangle\propto|{\bf\delta
q}|^{-2+n^{2}\eta(T)}\,.$ (II.53)
This can be measured experimentally by various scattering techniques (either
X-ray or light scattering, depending on the layer spacing $a$), or, in
experiments in which the constituent particles can actually be imaged, by
simply constructing the spatially Fourier transformed density correlations
directly from particle positions.
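Where particle positions are available, a direct construction of this kind amounts to evaluating the static structure factor. A minimal numerical sketch (the perfect-lattice test configuration, system sizes, and wavevectors below are illustrative assumptions, not data from any experiment):

```python
import numpy as np

def structure_factor(positions, q_list):
    """S(q) = |sum_j exp(i q . r_j)|^2 / N from an (N, 2) array of positions."""
    N = len(positions)
    out = []
    for q in q_list:
        amp = np.exp(1j * (positions @ np.asarray(q))).sum()
        out.append(abs(amp) ** 2 / N)
    return np.array(out)

# test on a perfect smectic: 20 layers of 20 particles, layer spacing a,
# which should give Bragg peaks at q_y = 2*pi*n/a along the layer normal
a = 1.0
ys = np.repeat(np.arange(20) * a, 20)
xs = np.tile(np.linspace(0.0, 20.0, 20, endpoint=False), 20)
pos = np.column_stack([xs, ys])
S_peak = structure_factor(pos, [(0.0, 2 * np.pi / a)])[0]   # on the n = 1 peak
S_off = structure_factor(pos, [(0.0, np.pi / a)])[0]        # between peaks
```

For a thermally fluctuating configuration, the same routine evaluated on a grid of $\delta\mathbf{q}$ around a peak yields the power-law lineshape (II.53).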
#### II.2.2 Equation of motion for the non-rotation invariant case
Again taking the simplest, purely relaxational, equilibrium dynamics
associated with this Hamiltonian, which is the time-dependent Landau-Ginzburg-
Wilson EOM (II.5), we obtain
$\displaystyle\partial_{t}u=D_{y}\partial_{y}^{2}u+D_{x}\partial_{x}^{2}u+f_{u}$
(II.54)
where we’ve defined $D_{y}\equiv\Gamma B$ and $D_{x}\equiv\Gamma g$.
In (II.54), we have, as in the rotation-invariant case, added a Gaussian white
noise $f_{u}$ with correlations:
$\langle f_{u}({\bf r},t)f_{u}({\bf r}^{\prime},t^{\prime})\rangle=2\Gamma
k_{B}T\delta({\bf r}-{\bf r}^{\prime})\delta(t-t^{\prime})\,,$ (II.55)
where the coefficient $\Gamma k_{B}T$ is required by the fluctuation-
dissipation theoremchaikin .
One somewhat surprising feature of the equation of motion (II.54) is that,
although it was derived from the non-rotation invariant free energy (II.40),
the equation of motion (II.54) itself is rotation invariant, as can be seen by
noting that the equation remains unchanged under the substitution
$u(\mathbf{r},t)\to u(\mathbf{r},t)+\phi x$, which corresponds to a uniform
rotation of all the smectic layers by an angle $\phi\ll 1$. We will use this
observation later to argue that (II.54) is therefore the spin wave equation of
motion we would expect even for rotation invariant active smectics, for which
there is no free energy.
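The relaxational dynamics (II.54) with the noise (II.55) can be integrated numerically with a pseudospectral scheme; the sketch below is one such discretization, with the grid size, time step, and parameter values chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
L, Dx, Dy, Gamma_kT, dt = 32, 1.0, 1.0, 0.01, 0.05   # illustrative values

# wavevectors of a periodic L x L grid (lattice constant 1)
q = 2 * np.pi * np.fft.fftfreq(L)
QX, QY = np.meshgrid(q, q, indexing="ij")
decay = Dx * QX**2 + Dy * QY**2        # relaxation rate of each Fourier mode

u = np.zeros((L, L))
for _ in range(2000):
    u_q = np.fft.fft2(u)
    u_q *= np.exp(-decay * dt)          # exact decay of the linear term over dt
    u = np.fft.ifft2(u_q).real
    # white noise with variance set by <f f> = 2 Gamma k_B T delta(r) delta(t)
    u += np.sqrt(2 * Gamma_kT * dt) * rng.standard_normal((L, L))
    u -= u.mean()                       # pin the neutral q = 0 mode
```

In steady state the mode variances approach the equipartition form $\langle|u_{\mathbf{q}}|^{2}\rangle\propto\Gamma k_{B}T/(D_{x}q_{x}^{2}+D_{y}q_{y}^{2})$, up to discrete-FFT normalization.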
#### II.2.3 Dislocation effects: Kosterlitz-Thouless transition
We can now treat dislocations in smectics without rotation invariance exactly
as we treated those with rotation invariance in the previous section. The
definition of the field $\mathbf{w}$ as the generalization of $\nabla u$ in
the presence of dislocations is unchanged, as is the Burgers’ condition
(II.12). All that changes is the Hamiltonian, which is now (II.40) rather than
(II.24). As a result, the Euler-Lagrange equation now becomes
$g\partial_{x}w_{x}(\mathbf{r})+B\partial_{y}w_{y}(\mathbf{r})=0\,,$ (II.56)
which can be rewritten in Fourier space as
$gq_{x}w_{x}(\mathbf{q})+Bq_{y}w_{y}(\mathbf{q})=0\,.$ (II.57)
Solving this simultaneously with the unchanged Burgers’ condition (II.12)
gives
$\displaystyle w_{x}(\mathbf{q})={iBq_{y}\over
gq_{x}^{2}+Bq_{y}^{2}}\sum_{\alpha}a_{0}b_{\alpha}e^{i\mathbf{q}\cdot\mathbf{r}_{\alpha}}\,,$
(II.58) $\displaystyle w_{y}(\mathbf{q})={-igq_{x}\over
gq_{x}^{2}+Bq_{y}^{2}}\sum_{\alpha}a_{0}b_{\alpha}e^{i\mathbf{q}\cdot\mathbf{r}_{\alpha}}\,.$
(II.59)
Fourier transforming these solutions back to real space gives
$\displaystyle
w_{x}(\mathbf{r})=\sum_{\alpha}a_{0}b_{\alpha}G_{x}(\mathbf{r}-\mathbf{r}_{\alpha})\,,$
(II.60) $\displaystyle
w_{y}(\mathbf{r})=\sum_{\alpha}a_{0}b_{\alpha}G_{y}(\mathbf{r}-\mathbf{r}_{\alpha})\,,$
(II.61)
where the Green’s functions $G_{x,y}$ are now given by
$\displaystyle G_{x}(\mathbf{r})=-{y\sqrt{gB}\over 2\pi(gy^{2}+Bx^{2})}\,,$
(II.62) $\displaystyle G_{y}(\mathbf{r})={x\sqrt{gB}\over
2\pi(gy^{2}+Bx^{2})}\,.$ (II.63)
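As a numerical consistency check (not part of the derivation), one can verify by finite differences that the fields built from the Green's functions (II.62) and (II.63) satisfy the force-balance condition (II.56) and carry zero Burgers' density away from the core; the elastic constants below are arbitrary:

```python
import numpy as np

g, B = 2.0, 5.0   # arbitrary elastic constants for this check

def Gx(x, y):      # (II.62)
    return -y * np.sqrt(g * B) / (2 * np.pi * (g * y**2 + B * x**2))

def Gy(x, y):      # (II.63)
    return x * np.sqrt(g * B) / (2 * np.pi * (g * y**2 + B * x**2))

# central finite differences at a point away from the dislocation core
x0, y0, h = 0.7, -1.3, 1e-5
dGx_dx = (Gx(x0 + h, y0) - Gx(x0 - h, y0)) / (2 * h)
dGy_dy = (Gy(x0, y0 + h) - Gy(x0, y0 - h)) / (2 * h)
force_balance = g * dGx_dx + B * dGy_dy   # should vanish, per (II.56)

dGy_dx = (Gy(x0 + h, y0) - Gy(x0 - h, y0)) / (2 * h)
dGx_dy = (Gx(x0, y0 + h) - Gx(x0, y0 - h)) / (2 * h)
burgers_density = dGy_dx - dGx_dy         # should vanish away from the core
```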
As we did for the rotation invariant smectic, we can calculate the energy of a
dislocation configuration by inserting these results into the elastic
Hamiltonian (II.40), which can be rewritten in terms of the components of
$\mathbf{w}$ as
$H_{\rm sm}=\frac{1}{2}\int{\rm d}^{2}r\left[gw_{x}^{2}+Bw_{y}^{2}\right]\,.$
(II.64)
Inserting our results (II.60) and (II.61) into this expression, and performing
the integral over $\mathbf{r}$ gives our dislocation Hamiltonian $H_{\rm
disl}$ for non-rotation invariant smectics:
$H_{\rm disl}=-{\sqrt{gB}\over
2\pi}\sum_{\langle\alpha\neq\beta\rangle}a_{0}^{2}b_{\alpha}b_{\beta}\ln\left({|\mathbf{r}^{\prime}_{\alpha}-\mathbf{r}^{\prime}_{\beta}|\over
a}\right)\,,$ (II.65)
where the sum is over pairs $\alpha$, $\beta$ of dislocations, with each pair
counted once, and $\alpha\neq\beta$. We remind the reader that
$\mathbf{r}\equiv(x,y)$ and $\mathbf{r}^{\prime}\equiv(x\sqrt{B\over g},y)$.
Note that the sign of this expression implies that the potential between two
oppositely charged dislocations is attractive.
From the form of this Hamiltonian, it is possible to see why a Kosterlitz-
Thouless defect unbinding transition must occur, and even to determine its
temperature, by the following very simple argument, originally given (in a
slightly different form) by Kosterlitz and ThoulessKT .
Consider a minimal neutral pair of dislocations $\alpha=(1,2)$ of Burgers’
charges: $b_{1}=1$ and $b_{2}=-1$. From the dislocation Hamiltonian (II.65),
we see that the energy of this pair will be
$V(\mathbf{R})={\sqrt{gB}a_{0}^{2}\over
2\pi}\ln\left({|\mathbf{r}^{\prime}_{1}-\mathbf{r}^{\prime}_{2}|\over
a}\right)+2E_{c}\,,$ (II.66)
where $\mathbf{R}\equiv\mathbf{r}_{1}-\mathbf{r}_{2}$ is the separation of the
pair, and $E_{c}$ is a “core energy” that we have added to the energy of the
pair to take into account the energy coming from distortions within a distance
$a$ of the cores of the two dislocations, which are not accurately captured by
our elastic theory, because that theory is only valid at long distances. This
core energy $E_{c}$ also contains some constant ($\mathbf{R}$-independent)
contributions to the energy of the pair coming from the elastic energy outside
this region. Hence, by simple Boltzmann statistics, the probability density
$p(\mathbf{R})$ for this pair is
$\displaystyle p(\mathbf{R})=\exp\left(-{V(\mathbf{R})\over
k_{B}T}\right)=\kappa^{2}|\mathbf{R}^{\prime}|^{-\nu}\,,$ (II.67)
where we’ve defined the “dislocation fugacity”
$\kappa\equiv\exp(-E_{c}/k_{B}T)$,
$\mathbf{R}^{\prime}\equiv(R_{x}\sqrt{B\over g},R_{y})$, and
$\nu\equiv{\sqrt{gB}a_{0}^{2}\over 2\pi k_{B}T}\,.$ (II.68)
The mean squared size of this dipole is
$\langle|\mathbf{R}|^{2}\rangle=\int
d^{2}R\,p(\mathbf{R})|\mathbf{R}|^{2}=\kappa^{2}\sqrt{g\over B}\int
d^{2}R^{\prime}\,|\mathbf{R}^{\prime}|^{2-\nu}\left({R\over
R^{\prime}}\right)^{2}\,.$ (II.69)
Since ${R\over R^{\prime}}$ is bounded between $1$ and $\sqrt{g\over B}$, as
the reader can easily convince herself, this mean squared value clearly
diverges if $\nu\leq 4$. This signals dislocation unbinding; i.e., the
Kosterlitz-Thouless transition, which corresponds to the loss of quasi-long-
ranged translational order, or equivalently, the melting of the smectic into a
nematic. This transition clearly occurs at the temperature $T_{KT}$ at which
$\nu=4$; hence, using (II.68), we have
${\sqrt{gB}a_{0}^{2}\over 2\pi k_{B}T_{KT}}=4\,.$ (II.70)
Using this result, and setting $T=T_{KT}$ in our expression (II.50) for the
exponent for algebraic decay of density correlations, we recover the famous
resultKT ; NK
$\eta(T_{KT})=1/4\,.$ (II.71)
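As a quick check of this algebra (assuming the layer spacing and ordering wavevector are related by $a_{0}=2\pi/q_{0}$), one can verify numerically that $\eta(T_{KT})=1/4$ independently of $g$, $B$, and $q_{0}$:

```python
import numpy as np

def eta_at_TKT(g, B, q0):
    a0 = 2 * np.pi / q0                              # assumed a0 = 2*pi/q0
    kB_TKT = np.sqrt(g * B) * a0**2 / (8 * np.pi)    # T_KT from (II.70)
    return kB_TKT * q0**2 / (2 * np.pi * np.sqrt(g * B))   # eta(T) from (II.50)

# the g, B, q0 dependences cancel, leaving the universal value 1/4
vals = [eta_at_TKT(g, B, q0)
        for g, B, q0 in [(1.0, 1.0, 1.0), (0.1, 7.0, 3.0), (4.0, 2.0, 0.5)]]
```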
The crucial point the reader should take away from this discussion is that,
when the probability density for the separation of a dislocation pair
falls off like a power law $R^{-\nu}$ with distance, dislocations will be
bound if $\nu>4$, and unbound if $\nu<4$. We will later apply this criterion
to active, non-rotation invariant smectics to determine the critical noise
strength at which dislocations unbind, melting the smectic phase.
Solving our expression (II.70) for $T_{KT}$ gives
$T_{KT}={\sqrt{gB}a_{0}^{2}\over 8\pi k_{B}}\propto\sqrt{g}\,.$ (II.72)
Thus, we see that the smectic melting temperature grows quite rapidly with the
symmetry breaking field strength $g$; specifically, like $\sqrt{g}$. We will
see in section [VIII] that, while active smectics can also be stabilized
against dislocation unbinding by rotational symmetry breaking fields, they are
much more delicate. Specifically, the critical noise strength (the analog in
those non-equilibrium systems of the critical temperature) grows only linearly
with the symmetry breaking field strength $g$.
Although we do not need to explicitly consider the dynamics of dislocations in
order to understand their unbinding in this equilibrium problem, it is
instructive to do so, in order to facilitate comparison with the active cases
that we will consider next, for which the dynamics is all we have.
To study the dynamics, consider first a general pair of dislocations
$\alpha=(1,2)$ of Burgers’ charges: $b_{1}$ and $b_{2}$, at positions
$\mathbf{r}_{1}={\bf 0}$, $\mathbf{r}_{2}=\mathbf{r}=x\hat{x}+y\hat{y}$. The
dislocation Hamiltonian (II.65) then implies that the energy of this pair will
be
$V(\mathbf{r})=-{\sqrt{gB}b_{1}b_{2}a_{0}^{2}\over
2\pi}\ln\left({|\mathbf{r}^{\prime}|\over a}\right)\,.$ (II.73)
The force experienced by dislocation $\alpha=2$ will therefore be ${\bf
F}=F_{x}\hat{x}+F_{y}\hat{y}$, with its Cartesian components $F_{x,y}$ given
by
$\displaystyle F_{x}$ $\displaystyle=$
$\displaystyle-\partial_{x}V(\mathbf{r})={b_{1}b_{2}a_{0}^{2}xB\sqrt{gB}\over
2\pi(gy^{2}+Bx^{2})}=Ba_{0}b_{2}w_{y}^{(1)}(\mathbf{r})$ $\displaystyle F_{y}$
$\displaystyle=$
$\displaystyle-\partial_{y}V(\mathbf{r})={b_{1}b_{2}a_{0}^{2}yg\sqrt{gB}\over
2\pi(gy^{2}+Bx^{2})}=-ga_{0}b_{2}w_{x}^{(1)}(\mathbf{r})$ (II.74)
where by $\mathbf{w}^{(1)}(\mathbf{r})$, we mean the contribution to the field
$\mathbf{w}$ at the position $\mathbf{r}$ of the $\alpha=2$ dislocation coming
from the $\alpha=1$ dislocation (i.e., neglecting the field created by
dislocation $\alpha=2$ itself). It is straightforward to show that the
generalization of these forces to configurations with more than two
dislocations is:
$\displaystyle F_{x}^{\alpha}$ $\displaystyle=$ $\displaystyle
Ba_{0}b_{\alpha}w_{y}(\mathbf{r}_{\alpha})$ $\displaystyle F_{y}^{\alpha}$
$\displaystyle=$ $\displaystyle-ga_{0}b_{\alpha}w_{x}(\mathbf{r}_{\alpha})$
(II.75)
where by $\mathbf{w}(\mathbf{r}_{\alpha})$, we mean the contribution to the field
$\mathbf{w}$ at the position $\mathbf{r}_{\alpha}$ of the $\alpha$’th
dislocation coming from all of the other dislocations (i.e., neglecting the
field created by dislocation $\alpha$ itself).
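The identities in (II.74) relating the force components to the local dislocation field can be checked by finite-differencing the pair potential (II.73); all parameter values below are arbitrary choices for the check:

```python
import numpy as np

g, B, a0, a = 2.0, 5.0, 1.0, 1.0   # arbitrary parameters for this check
b1, b2 = 1, -1                      # a neutral dislocation pair

def V(x, y):                        # pair potential (II.73), r' = (x sqrt(B/g), y)
    rp = np.hypot(x * np.sqrt(B / g), y)
    return -np.sqrt(g * B) * b1 * b2 * a0**2 / (2 * np.pi) * np.log(rp / a)

def w1(x, y):                       # field of dislocation 1, from (II.62), (II.63)
    D = 2 * np.pi * (g * y**2 + B * x**2)
    wx = -a0 * b1 * y * np.sqrt(g * B) / D
    wy = a0 * b1 * x * np.sqrt(g * B) / D
    return wx, wy

x0, y0, h = 0.8, 1.7, 1e-6
Fx = -(V(x0 + h, y0) - V(x0 - h, y0)) / (2 * h)   # F = -grad V
Fy = -(V(x0, y0 + h) - V(x0, y0 - h)) / (2 * h)
wx1, wy1 = w1(x0, y0)
# the identities of (II.74): Fx = B a0 b2 wy1 and Fy = -g a0 b2 wx1
```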
Since the dislocation cannot “tell” whether the local field $\mathbf{w}$ is
created by other dislocations, by spin waves, or by externally applied
stresses, we expect (II.75) to hold more generally if we take
$\mathbf{w}_{\alpha}$ on the right-hand side of those equations to be the
entire $\mathbf{w}$ field, excluding the part due to dislocation $\alpha$
itself. This proves to be important when we consider the effect of stresses at
the boundary on dislocation motion.
There are two important features of the result (II.75) that should be noted:
1) the force on a given dislocation $\alpha$ is determined entirely by the
local value of the field $\mathbf{w}(\mathbf{r}_{\alpha})$ (and its
derivatives) at the location $\mathbf{r}_{\alpha}$ of that dislocation
(excluding the part of that field due to the given dislocation itself).
2) The force in this non-rotation invariant case now depends directly on the
$x$ component $w_{x}$; no spatial derivatives are required. This means in
particular that a uniform $w_{x}$ does generate a force on the dislocation.
This is a consequence of the lack of rotation invariance: as shown by equation
(II.2), a spatially uniform $w_{x}$ corresponds to a uniform rotation of the
layers, which now can lead to a force on the dislocation, since the system is
not rotation-invariant. This consideration will continue to apply in active
smectics, and will allow certain terms in the force in those systems in the
rotation non-invariant case which are absent in the rotation invariant case.
As in the rotation-invariant case, we expect the velocity $\mathbf{v}$ of the
dislocations to be linear in the force $\mathbf{F}$. That is,
$\mathbf{v}_{\alpha}=\bm{\mu}\mathbf{F}_{\alpha}\,,$ (II.76)
where $\bm{\mu}$ is a constant “mobility tensor”. On the same symmetry grounds
as before, we expect this tensor to be diagonal in our $(x,y)$ coordinate
system (i.e., with the $x$ and $y$ axes respectively parallel and
perpendicular to the mean smectic layers); hence
$v_{x}^{\alpha}=\mu_{x}F_{x}^{\alpha}\,,\,v_{y}^{\alpha}=\mu_{y}F_{y}^{\alpha}\,.$
(II.77)
Using our earlier results (II.75) for the forces on the dislocations, we can
rewrite these as
$\displaystyle v_{x}^{\alpha}$ $\displaystyle=$
$\displaystyle\mu_{x}Ba_{0}b_{\alpha}w_{y}(\mathbf{r}_{\alpha})$ $\displaystyle
v_{y}^{\alpha}$ $\displaystyle=$
$\displaystyle-\mu_{y}ga_{0}b_{\alpha}w_{x}(\mathbf{r}_{\alpha})\,.$ (II.78)
We see that, like the component $F_{y}$ of the force, the $y$-component of the
dislocation velocity (which is, after all, proportional to $F_{y}$) no longer
vanishes for spatially uniform $w_{x}$. And it need not, since, although a
spatially uniform $w_{x}$ still corresponds to a uniform rotation, in a non-
rotation invariant system, no symmetry forbids a uniform rotation from causing
dislocation motion.
### II.3 Summary of the equilibrium cases
We have seen that, in equilibrium rotation invariant 2d smectics,
translational order at non-zero temperature is always short ranged, even in
spin wave theory (i.e., when dislocations are ignored). Furthermore,
dislocations are always unbound at any non-zero temperature, if the smectic is
rotation invariant. Effectively, this means that 2d smectics melt as soon as
the temperature becomes non-zero. Another way to say this is that 2d smectics
do not exist at temperatures $T>0$.
Breaking rotation invariance by, e.g., applying a rotational symmetry breaking
magnetic field, or breaking the underlying rotation invariance in other ways,
can stabilize quasi-long-ranged translational order (i.e., power law decay of
translational correlations) in spin wave theory. Rotational symmetry breaking
also stabilizes two-dimensional equilibrium smectics against dislocation
unbinding. The temperature $T_{m}$ at which these systems melt vanishes as the
strength $g$ of the applied symmetry breaking field vanishes, according to the
law (II.72).
We will see that, while the presence or absence of rotation invariance has no
important effect on the spin wave dynamics of active smectics, nor on the
fields created by dislocations, it has a profound effect on the motion of
dislocations. In fact, like equilibrium 2d smectics, active 2d smectics are
only stable against dislocations if rotation invariance has been explicitly
broken by an externally applied symmetry breaking field. Furthermore, the
field required to stabilize active 2d smectics against dislocations is much
higher than the field needed in equilibrium 2d smectics.
## III “Spin wave” (Phonon) theory of active smectics
### III.1 Rotation invariant 2d active smectics: spin wave theory
In this section, we will review the hydrodynamic theory of active, rotation-
invariant, apolar smectics. For more details, the interested reader is
referred to apolarsm . We will first limit our discussion to “spin-wave
theory”; that is, dislocation-free smectics.
Because there are no conserved quantities, the only hydrodynamic field in our
problem is the layer displacement $u$, which is the “Goldstone mode”
associated with the breaking of translational symmetry by the layering. The
long-wavelength hydrodynamics of this field is therefore simply the most
general equation of motion, to leading order in space and time derivatives,
that respects the symmetries of this system. These symmetries are rotation and
translation invariance. As we noted earlier in our discussion of the non-
rotation invariant equilibrium smectic, the equation of motion (II.54) for
that system, oddly, is rotation invariant. Therefore, that equation, which we
repeat here for the readers’ convenience, also describes active, rotation
invariant smectics:
$\displaystyle\partial_{t}u=D_{y}\partial_{y}^{2}u+D_{x}\partial_{x}^{2}u+f_{u},$
(III.1)
where $f_{u}$ is a Gaussian, zero-mean spatiotemporally white noise with
$\langle f_{u}({\bf r},t)f_{u}({\bf
r}^{\prime},t^{\prime})\rangle=2\Delta\delta({\bf r}-{\bf
r}^{\prime})\delta(t-t^{\prime})\,.$ (III.2)
Because this is a non-equilibrium system, there is no longer a fluctuation-
dissipation theoremchaikin relating the noise variance $\Delta$ to the
dissipative terms in (III.1). However, the equation of motion (III.1), with
the noise correlations (III.2) is, as noted, identical to that of an
equilibrium, non-rotation invariant smectic with
$\Gamma k_{B}T=\Delta\,\ \ \ ,\ \ \ \Gamma B=D_{y}\,\ \ \ ,\ \ \ \Gamma
g=D_{x}\,.$ (III.3)
We can therefore use these relations to obtain any spin-wave correlation
function in the active, rotation-invariant smectic from the corresponding
correlation function in an equilibrium, non-rotation invariant smectic. In
particular, this reasoning predicts, in the absence of dislocations, that
active, rotation invariant smectics will exhibit power law singularities near
the $n$’th Bragg peak (i.e., for wavevector
$\mathbf{q}=nq_{0}\hat{y}+\delta\mathbf{q}$ with integer $n$ and
$|\delta\mathbf{q}|\ll q_{0}$):
$\langle|\rho(\mathbf{q},t)|^{2}\rangle\propto|{\bf\delta
q}|^{-2+n^{2}\eta(\Delta)}\,,$ (III.4)
with the non-universal exponentKT
$\eta(D_{x},D_{y},\Delta)={\Delta q_{0}^{2}\over 2\pi\sqrt{D_{x}D_{y}}}\,.$
(III.5)
As in equilibrium systems, this can be measured experimentally by various
scattering techniques (either X-ray or light scattering, depending on the
layer spacing $a$), or, in experiments in which the constituent particles can
actually be imaged, by simply constructing the spatially Fourier transformed
density correlations directly from particle positions. It can be determined
from simulations by the latter approach as well.
Interestingly, as we shall see, in rotation invariant active smectics, this
quasi-long-ranged order is destroyed by unbound dislocations, and they
therefore do not exhibit any singular behavior at the Bragg spots at all. In
rotation non-invariant active smectics, however, the results (III.4) and (III.6)
will hold for sufficiently small noise.
Clearly, the non-universal exponent $\eta(D_{x},D_{y},\Delta)$ for algebraic
decay of translational correlations will continue to be given in terms of
$D_{x}$, $D_{y}$, and the noise strength $\Delta$ (all of which we expect to
be independent of the symmetry breaking field $g$ for small $g$) by the result
found earlier for rotation invariant active smectics; i.e.,
$\eta(T)={k_{B}Tq_{0}^{2}\over 2\pi\sqrt{gB}}={\Delta q_{0}^{2}\over
2\pi\sqrt{D_{x}D_{y}}}\,.$ (III.6)
Thus, it would appear, from this spin wave theory, that active smectics are
more robust against fluctuations than equilibrium ones: it looks like we can
get quasi-long-ranged translational order in these systems even without
breaking rotation invariance. As we’ll see in section (VII), this conclusion
is actually wrong, due to the unbinding of dislocations.
One last comment about our equation of motion (III.1) for active, rotation-
invariant smectics is in order. The term $D_{x}\partial_{x}^{2}u$ actten ;
Ramaswamy2000 is, as we discussed in section [II.1], forbidden in an equilibrium
rotation-invariant smectic, where it would correspond to a term
$\propto(\partial_{x}u)^{2}$ in the Hamiltonian (II.4), which is forbidden by
rotation invariance. In our out-of-equilibrium system, however, only the
equation of motion itself must be rotation-invariant, which we have already
shown equation (III.1) is. Physically, the origin of this term is that the local
vectorial asymmetry of a curved layer must inevitably lead to directed motion
in a self-driven system.
### III.2 Non-rotation invariant 2d active smectics: effects of a symmetry
breaking field (spin wave theory)
We now turn to the spin-wave theory for active smectics in the presence of a
symmetry breaking field. Even when rotational symmetry is broken, the lowest
order in derivative terms allowed by $x\to-x$, $y\to-y$ symmetry are still
second order derivatives. Therefore, the equation of motion is still:
$\displaystyle\partial_{t}u=D_{y}\partial_{y}^{2}u+D_{x}\partial_{x}^{2}u+f_{u},$
(III.7)
where $f_{u}$ is a Gaussian, zero-mean spatiotemporally white noise with
$\langle f_{u}({\bf r},t)f_{u}({\bf
r}^{\prime},t^{\prime})\rangle=2\Delta\delta({\bf r}-{\bf
r}^{\prime})\delta(t-t^{\prime})\,,$ (III.8)
as in the rotation-invariant case. The only difference between this and the
equilibrium case is that, in equilibrium, $D_{x}$ can only be non-zero due to
the presence of the symmetry breaking field. Indeed, we expect $D_{x}\propto
g$. In active smectics, on the other hand, as we’ve just seen, $D_{x}$ can be
non-zero simply due to the activity. Thus, for small symmetry breaking field
$g$ (the limit we will consider later), we expect $D_{x}$ to be essentially
independent of the symmetry breaking field $g$.
## IV Dislocations in active smectics: configurations
In our non-equilibrium system, we can no longer determine the fields of the
dislocations by minimizing the free energy, since there is no free energy for
a non-equilibrium system. However, we can readily obtain the fields of static
dislocations simply by looking for steady state solutions of the equations of
motion (III.1), once those equations are suitably rewritten to take into
account the presence of dislocations. As has already been discussed, this
amounts to making the replacements $\partial_{x}u\to w_{x}$ and
$\partial_{y}u\to w_{y}$. Doing so in the equation of motion (III.1), and
setting all time derivatives to zero, gives
$D_{x}\partial_{x}w_{x}(\mathbf{r})+D_{y}\partial_{y}w_{y}(\mathbf{r})=0\,.$
(IV.1)
The other condition on dislocations is the Burgers’ condition, which can be
written
$\partial_{x}w_{y}(\mathbf{r})-\partial_{y}w_{x}(\mathbf{r})=\sum_{\alpha}a_{0}b_{\alpha}\delta(\mathbf{r}-\mathbf{r}_{\alpha})\,.$
(IV.2)
These two simultaneous linear equations (IV.1) and (IV.2) are exactly the
same as those we obtained for non-rotation-invariant, equilibrium smectics if
we make the identifications (III.3). Therefore, we can simply transcribe the
solutions we obtained for that problem here. This gives
$\displaystyle
w_{x}(\mathbf{r})=\sum_{\alpha}a_{0}b_{\alpha}G_{x}(\mathbf{r}-\mathbf{r}_{\alpha})\,,$
(IV.3) $\displaystyle
w_{y}(\mathbf{r})=\sum_{\alpha}a_{0}b_{\alpha}G_{y}(\mathbf{r}-\mathbf{r}_{\alpha})\,,$
(IV.4)
where the Green’s functions $G_{x,y}$ are now given by
$\displaystyle G_{x}(\mathbf{r})=-{y\sqrt{D_{x}D_{y}}\over
2\pi(D_{x}y^{2}+D_{y}x^{2})}\,,$ (IV.5) $\displaystyle
G_{y}(\mathbf{r})={x\sqrt{D_{x}D_{y}}\over 2\pi(D_{x}y^{2}+D_{y}x^{2})}\,.$
(IV.6)
Note that we can write these Green’s functions in terms of the gradient of a
potential:
$\displaystyle G_{x}(\mathbf{r})=-\varpi\partial_{y}V(\mathbf{r})\,,$
$\displaystyle G_{y}(\mathbf{r})={1\over\varpi}\partial_{x}V(\mathbf{r})\,,$
(IV.7)
where the potential
$V(\mathbf{r})={1\over 2\pi}\ln\left({|\mathbf{r}^{\prime}|\over
a_{0}}\right)={1\over 4\pi}\ln\left({D_{y}x^{2}+D_{x}y^{2}\over
D_{x}a_{0}^{2}}\right)\,,$ (IV.8)
and we’ve defined $\varpi\equiv\sqrt{D_{y}\over D_{x}}$ and
$\mathbf{r}^{\prime}\equiv(x\sqrt{D_{y}\over D_{x}},y)$.
These dislocation fields are essentially identical in form to those we found
for equilibrium, non-rotation invariant smectics in section [II.2]. However,
we will see in the next section that the motion of dislocations in response to
these fields is very different from that case.
## V Dislocation equation of motion for rotation invariant active smectics
We have established that both the spin wave theory of active, rotation-
invariant smectics, and the field $\mathbf{w}(\mathbf{r})$ generated by
dislocations, are the same as those found for a non-rotation invariant
equilibrium smectic. It might therefore seem reasonable to assume that the
motion of dislocations, and, as a result, the dislocation unbinding transition
in these active, rotation invariant systems would be the same as those of the
equilibrium, non-rotation invariant system. Indeed, precisely this argument
was made in earlier publications by one of usapolarsm ; polarsm .
However, this conclusion proves to be wrong. The reason is that, despite the
just noted similarities between active rotation invariant systems and
equilibrium, non-rotation invariant systems, active, rotation invariant
smectics are still rotation invariant. This fairly obvious (indeed,
tautological!) statement makes the motion of dislocations in an active,
rotation-invariant smectic very different from that in an equilibrium, non-
rotation invariant one. The motion is so different, in fact, that although
dislocations are always bound at sufficiently low temperatures in equilibrium,
non-rotation invariant smectics, they are never bound in an active,
rotation-invariant smectic with any non-zero noise, no matter how small.
We will now demonstrate this. We begin by deriving the equation of motion for
dislocations in an active, rotation-invariant smectic.
We restrict ourselves to “unit” dislocations, by which we mean a dislocation
whose Burgers’ number $b$ has the smallest possible magnitude; i.e., $b=\pm
1$.
As can be seen from figure [3], and from our analytic solutions (IV.3) and (IV.4)
for the dislocation fields, a dislocation in a smectic is an inherently polar object;
it breaks the left-right symmetry along the layers. This means that, in an
active system, there is no symmetry forbidding spontaneous motion of
dislocations either left or right along the layers. Therefore, by the Curie
principle curie , that “anything that’s not forbidden is compulsory”, we
expect such motion to occur.
Since a unit dislocation with $b=-1$ is just the mirror image of one with
$b=1$, if $b=+1$ dislocations spontaneously move to the left, $b=-1$ must
spontaneously move to the right, and vice versa. The motion should be along
the local layer direction, since spontaneous motion perpendicular to the
layers is forbidden by the fact that dislocations do not break up-down
symmetry.
Of course, a local curvature of the layers will break up-down symmetry.
Indeed, this is the origin of the force in equilibrium smectics in the
$y$-direction given in equation (II.29). Note, however, that because this
effect involves curvature, it involves at least two derivatives of the
displacement field, or, equivalently, one derivative of $\mathbf{w}$. Hence,
to leading (i.e., zeroth) order in derivatives of $\mathbf{w}$, the motion of
a dislocation must be along the layers. This has profound implications for the
stability of the active smectic state in rotation invariant systems, as we
will see.
These considerations imply that, to zeroth order in gradients of $\mathbf{w}$,
the velocity $\mathbf{v}_{\alpha}$ of the $\alpha$’th dislocation must take
the form:
$\mathbf{v}_{\alpha}=v_{s}{\rm sgn}(b_{\alpha})\hat{z}\times\hat{{\bf
n}}(\mathbf{r}_{\alpha})$ (V.1)
where
$\hat{{\bf n}}\approx\hat{y}-\phi(\mathbf{r})\hat{x}$ (V.2)
is the local normal to the smectic layers. The characteristic speed $v_{s}$
appearing in (V.1) is a system-dependent parameter. It will depend,
importantly, on local properties like the mean layer spacing $a$ of the active
smectic. It is this dependence that makes it possible for the active smectic
to reach homeostasis, as we’ll argue below.
The direction $\hat{z}\times\hat{{\bf n}}$ of $\mathbf{v}$ is dictated by the
requirement that this spontaneous motion be along the layers, which, as we
just discussed, is required by up-down symmetry. The factor of ${\rm
sgn}(b_{\alpha})$ simply reflects the fact noted above that oppositely charged
dislocations must move in opposite directions.
Since our definition (II.10) of $\mathbf{w}$ implies that the local layer
spacing is simply
$a(\mathbf{r})=a_{0}(1+w_{y}(\mathbf{r}))\,,$ (V.3)
where $a_{0}$ is the “reference” layer spacing, relative to which we measure
$\mathbf{w}$, the dependence of the spontaneous speed $v_{s}$ on the layer
spacing $a$ is equivalent to dependence on $w_{y}$. That is, we can (and
will!) take $v_{s}$ to be a local function of $w_{y}$.
Note that rotation invariance forbids any dependence of $v_{s}$ on $w_{x}$,
since a uniform $w_{x}$ corresponds to a pure rotation, which cannot change
the spontaneous speed (or, indeed, any local scalar) in a rotation invariant
system.
These arguments imply that the spontaneous speed $v_{s}(\mathbf{r})$ at a
point $\mathbf{r}$ can be written as
$v_{s}(\mathbf{r})=v_{s}(a(\mathbf{r}))=v_{s}(a_{0}(1+w_{y}(\mathbf{r})))\equiv
v_{s}(w_{y}(\mathbf{r}))\,.$ (V.4)
Using (V.4) in our expression (V.1) for the dislocation velocity gives
$\displaystyle v_{x}^{\alpha}$ $\displaystyle=$ $\displaystyle v_{s}{\rm
sgn}(b_{\alpha})+\mu_{x}b_{\alpha}w_{y}(\mathbf{r}_{\alpha})$ $\displaystyle
v_{y}^{\alpha}$ $\displaystyle=$ $\displaystyle v_{s}{\rm
sgn}(b_{\alpha})w_{x}(\mathbf{r}_{\alpha})\,.$ (V.5)
We will argue in the next section that, for an active smectic confined between
fixed boundaries, $v_{s}$ vanishes in the steady state. We will refer to this
state as the state of “homeostasis”. Note that this implies that there is no
motion in the $y$-direction (i.e., normal to the smectic layers) in the
homeostatic state. This in turn implies that, in the presence of noise,
dislocations in an active smectic are always unbound. As a result, the active
smectic state is, at any non-zero noise, always destroyed by dislocation
unbinding in rotation invariant active smectics.
## VI Dislocations: Self-propulsion and the approach to homeostasis
The dominant term in the equations of motion (V.5) is the “self-propulsion”
term $v_{s}{\rm sgn}(b_{\alpha})$. For $v_{s}>0$, this term will make positive
dislocations move to the right, and negative dislocations move to the left.
Obviously, this switches for $v_{s}<0$.
Because $w_{x,y}(\mathbf{r}_{\alpha})\to 0$ as $|\mathbf{r}|\to\infty$ (as can
be seen from equation (IV.4)), the interactions between dislocations cannot
compete with this constant “external force”-like motion. Hence, even tightly
bound pairs of dislocations will eventually be rent asunder by the spontaneous
motion, with all of the positive dislocations moving to one side, and all of
the negative dislocations moving to the other.
We therefore expect dislocation pairs to constantly nucleate, be ripped apart
by this spontaneous velocity, and traverse the system.
Consider first the case $v_{s}(w_{y}=0)>0$. In this case, all positive
dislocations will eventually traverse the system from left to right, while all
negative dislocations will eventually traverse the system from right to left.
It is easy to see, both by inspection of figure 3, and from our expressions
for the dislocation fields, that each time one of these happens (i.e., each
time either a positive dislocation traverses the system from left to right, or
a negative dislocation traverses the system from right to left), the number of
layers in the system is increased by one. Therefore, if the boundaries of the
system at the top and bottom remain fixed, the strain $w_{y}$ will decrease
(i.e., become more negative), since the mean layer spacing is reduced.
The crucial point here is that this self-propelled dislocation motion changes
the mean strain $w_{y}$ in the system. But since the dislocation self-
propulsion speed $v_{s}(w_{y})$ is itself a function of $w_{y}$, this implies
that, as this process of pair nucleation, separation, and motion across the
system continues, the strain $w_{y}$ will continue to evolve.
Will it ever stop? Yes, it will, provided that the speed $v_{s}(w_{y})$
vanishes at some negative $w_{y}$. We would expect it to do so: as shown by
(II.78), in an equilibrium smectic, a more negative $w_{y}$ causes positive
dislocations to move to the left with a speed proportional to
$w_{y}$. Stability requires that the negative strain induced by the process
just described will oppose the motion of positive dislocations to the right.
This opposition should get stronger as $w_{y}$ becomes more negative, so it is
very plausible that a type of “homeostasis” will eventually be reached, at
which the strain $w_{y}$ takes on a value $w_{y,{\rm h}}$ such that
$v_{s}(w_{y,{\rm h}})=0\,.$ (VI.1)
This will happen whenever the spontaneous motion of the dislocations makes the
strain evolve in such a way as to oppose that motion.
As just discussed, this is most likely to occur if $v_{s}(w_{y}=0)>0$ and
${dv_{s}\over dw_{y}}>0$, giving rise to a steady state extensile stress normal
to the layers. The opposite case, $v_{s}(w_{y}=0)<0$ and ${dv_{s}\over
dw_{y}}<0$, will also reach homeostasis, and give rise to a contractile steady
state stress normal to the layers.
So we expect stable active smectic systems to reach homeostasis, as defined by
VI.1. In such systems, the homeostatic layer spacing $a_{h}$ will be
$a_{h}=a_{0}(1+w_{y,{\rm h}})\,.$ (VI.2)
It clearly makes sense to define this homeostatic layer spacing as our
reference layer spacing, and measure our displacement field $u$ relative to
that. It is easy to relate the $u$ field $u_{h}(\mathbf{r})$ defined relative
to this homeostatic state to that $u_{0}(\mathbf{r})$ defined relative to the
initial state with layer spacing $a_{0}$; the required transformation is just
a linear function of the Cartesian coordinate $y$:
$u_{h}(\mathbf{r})=u_{0}(\mathbf{r})-w_{y,{\rm h}}y\,.$ (VI.3)
Henceforth, we will drop the subscript ${\rm h}$, and implicitly assume that
our $u$ field is measured relative to the homeostatic state. With this choice
of variables, the homeostatic condition VI.1 becomes simply
$v_{s}(w_{y}=0)=0\,.$ (VI.4)
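Since (VI.3) is linear in $y$, it shifts the compressional strain by the constant $-w_{y,{\rm h}}$ while leaving the other strain component unchanged, which is exactly what makes (VI.4) equivalent to (VI.1). A minimal symbolic sketch (sympy), taking $\mathbf{w}=\nabla u$ as the strain convention and an arbitrary illustrative $u_{0}$:

```python
import sympy as sp

x, y, wyh = sp.symbols('x y w_yh')

# an arbitrary illustrative displacement field measured from the a_0 state
u0 = sp.Function('u0')(x, y)

# the change of reference state, equation (VI.3)
uh = u0 - wyh * y

# the shift changes w_y by the constant -w_yh and leaves w_x untouched
dw_y = sp.simplify(sp.diff(uh, y) - sp.diff(u0, y))
dw_x = sp.simplify(sp.diff(uh, x) - sp.diff(u0, x))
assert dw_y == -wyh and dw_x == 0
```

So, measured from the homeostatic reference state, the strain that enters $v_{s}$ is simply offset by $w_{y,{\rm h}}$, and the zero of $v_{s}$ sits at $w_{y}=0$.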
Note that the argument just presented for the approach of active smectics to
the homeostatic state depended on the presence of fixed boundaries without
layer flux at the top and bottom of the system. For active smectics confined
under constant stress between movable boundaries, however, a homeostatic state
is never reached. Instead, the active smectic either grows arbitrarily large,
pushing the boundaries ever further out, or shrinks and disappears. The
unbounded growth scenario occurs if the applied normal stress $\sigma_{n}$ at
the boundaries is less than some homeostatic value $\sigma_{c}$, while
shrinkage and disappearance occurs if $\sigma_{n}>\sigma_{c}$.
The reason for this behavior is clear: as in all elastic systems, fixing the
stress is equivalent to fixing the strain. In smectics, the “strain” is just
$w_{y}$. Hence, by fixing the stress at the boundary, we fix $w_{y}$. If
$w_{y}$ is fixed at a value such that $v_{s}(w_{y})>0$ (this corresponds to
$\sigma_{n}<\sigma_{c}$), then positive dislocations will move to the right,
and negative ones to the left, thereby adding layers to the system. The only
way to keep $w_{y}$ fixed, therefore, is for the top and bottom surfaces to
move out, to accommodate the extra layers. This process will continue
indefinitely.
On the other hand, if $w_{y}$ is fixed at a value such that $v_{s}(w_{y})<0$
(this corresponds to $\sigma_{n}>\sigma_{c}$), then positive dislocations will
move to the left, and negative ones to the right, thereby removing layers from
the system. The only way to keep $w_{y}$ fixed, therefore, is for the top and
bottom surfaces to move in, to accommodate the loss of layers. This process
will continue until the active smectic disappears completely. This behavior is
reminiscent of that predicted for tissues and observed in epithelia
homeoTissue , homeoTissuetwo , homeoTissueexp . The addition and removal of
layers corresponds to the addition of cells by cell division and the removal
of cells by cell death. In a tissue under homeostatic conditions, the cell
division rate is exactly balanced by cell death rate. Since both cell division
and cell death depend on tissue pressure, in general there will be only one
tissue pressure value for which these rates balance exactly. This defines the
homeostatic pressure, corresponding to the stress $\sigma_{c}$ in the
homeostatic state of the smectic. If the tissue is given a prescribed volume,
with biochemical conditions held constant, cells will divide and the tissue
will grow to occupy all the available space, settling to a steady state, i.e.,
homeostasis, when the tissue pressure reaches the homeostatic pressure.
Alternatively, if the tissue is kept at a constant pressure larger than the
homeostatic one, cell death will win over cell division and the tissue will
disappear. If the pressure is kept at a lower value, the tissue will invade
all space. Invasion of one type of tissue by another will occur if the
homeostatic pressure of the invading tissue is larger than the homeostatic
pressure of the invaded tissue.
While the above discussion has been very physical, biochemistry nonetheless
plays an important role, since the homeostatic pressure values depend on local
biochemical conditions.
Returning now to the case of fixed boundaries, and the resultant homeostatic
state, we can ask whether, since the spontaneous velocity tearing dislocation
pairs apart vanishes in the homeostatic state, it is possible
to achieve a state free of unbound dislocations. Only if such a state is
possible can we have a true smectic.
In the presence of noise, the answer to this question proves to be no:
dislocations in a rotation invariant active smectic will always be unbound. We
will demonstrate this in the next section.
## VII Motion at homeostasis and the destruction of the active smectic phase
As shown in the last section, at homeostasis,
$v_{s}(\mathbf{r})=v_{s}(a(\mathbf{r}))=v_{s}(a_{\rm
h}(1+w_{y}(\mathbf{r})))\approx\mu_{x}w_{y}(\mathbf{r})\,,$ (VII.1)
where in the last, approximate, equality, we have expanded for small
$w_{y}(\mathbf{r})$, and defined the “mobility” $\mu_{x}\equiv a_{\rm
h}\left({dv_{s}(a)\over da}\right)_{a=a_{\rm h}}$. We have also used the fact
that
$v_{s}(a_{\rm h})=0\,,$ (VII.2)
since, as discussed above, isolated dislocations do not spontaneously move in
the homeostatic state.
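The expansion in (VII.1) is a first-order Taylor expansion about $a=a_{\rm h}$, using $v_{s}(a_{\rm h})=0$; the chain rule then fixes the coefficient of $w_{y}$ to be $a_{\rm h}(dv_{s}/da)_{a=a_{\rm h}}$. A minimal symbolic sketch (sympy); the particular $v_{s}$ below is an arbitrary illustrative choice that vanishes at $a=a_{\rm h}$, as any admissible $v_{s}$ must:

```python
import sympy as sp

a, a_h, w_y = sp.symbols('a a_h w_y', positive=True)

# an illustrative self-propulsion speed vanishing at a = a_h (assumption,
# for concreteness only; any smooth v_s with v_s(a_h) = 0 works the same way)
v_s = sp.sin(a/a_h - 1)

# expand v_s(a_h (1 + w_y)) to first order in the strain w_y, as in (VII.1)
lin = sp.series(v_s.subs(a, a_h*(1 + w_y)), w_y, 0, 2).removeO()

# the linear coefficient is the "mobility" mu_x = a_h (dv_s/da) at a = a_h
mu_x = a_h * sp.diff(v_s, a).subs(a, a_h)
assert sp.simplify(lin - mu_x*w_y) == 0
```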
Inserting VII.1 into our general equation of motion (V.5) for the
dislocations, and linearizing those equations in the strain field $\mathbf{w}$
gives
$\displaystyle v_{x}^{\alpha}(t)$ $\displaystyle=$
$\displaystyle\mu_{x}b_{\alpha}w_{y}(\mathbf{r}_{\alpha},t)+f^{\alpha}_{x}(t)\,,$
$\displaystyle v_{y}^{\alpha}$ $\displaystyle=$ $\displaystyle
f^{\alpha}_{y}(t)\,,$ (VII.3)
where we have added a “Langevin force” - i.e., a random white noise
$\mathbf{f}^{\alpha}$ - to the equation of motion. In equilibrium, these
would simply be thermal noises, with variances proportional to temperature. In
our non-equilibrium system, they can have non-thermal, active contributions as
well. We will assume, as seems reasonable, that these forces are white,
Gaussian, zero-mean, and decorrelated between different dislocations. Taking
these conditions together with the $x\to-x$, $y\to-y$ symmetries of the apolar
smectic state we’re considering in this paper implies that these forces have
correlations
$\displaystyle\langle\mathbf{f}^{\alpha}(t)\rangle$ $\displaystyle={\bf 0}\,,$
$\displaystyle\langle f^{\alpha}_{i}(t)f^{\beta}_{j}(t^{\prime})\rangle$
$\displaystyle=\Bigg{(}\Delta_{x}\delta_{ij}^{x}+\Delta_{y}\delta_{ij}^{y}\Bigg{)}\delta(t-t^{\prime})\delta_{\alpha\beta}\,.$
Here $\delta_{ij}^{k}=1$ when $i=j=k$, and $0$ otherwise. Since the forces are
assumed to be Gaussian random variables (as suggested by the central limit
theorem), these correlations completely specify the distribution of the forces
$\mathbf{f}^{\alpha}(t)$.
This implies that, to linear order in the field $\mathbf{w}$, dislocation
motion in the $y$-direction is a perfectly random walk. Therefore, any pair of
dislocations will eventually wander arbitrarily far apart in the
$y$-direction. Hence, dislocations in active, rotation invariant smectics are
always unbound. Another way to say this is that a true active smectic phase
cannot exist in a noisy, rotation invariant system.
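To see this concretely, here is a minimal numerical sketch (numpy; all parameter values are illustrative, not taken from the paper) of the linearized $y$-dynamics in (VII.3): each dislocation's $y$-coordinate is driven only by its own noise, so the mean-square $y$-separation of a pair grows linearly in time, with no restoring term at this order.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative parameters (not from the paper)
Delta_y, dt, nsteps, npairs = 0.5, 1e-2, 2000, 2000

# Euler-Maruyama for dy/dt = f_y(t), <f_y(t) f_y(t')> = Delta_y delta(t - t'),
# applied independently to the + and - members of npairs dislocation pairs
def kick():
    return rng.normal(0.0, np.sqrt(Delta_y*dt), size=npairs)

y_plus = np.zeros(npairs)
y_minus = np.zeros(npairs)
half_var = 0.0
for step in range(1, nsteps + 1):
    y_plus += kick()
    y_minus += kick()
    if step == nsteps // 2:
        half_var = np.var(y_plus - y_minus)
full_var = np.var(y_plus - y_minus)

# the mean-square y-separation grows linearly in time: var(t) = 2*Delta_y*t,
# so a pair wanders arbitrarily far apart in y
t_final = nsteps * dt
print(full_var / (2 * Delta_y * t_final))  # ~1
```

Doubling the run time doubles the measured variance, which is the unbounded wandering asserted above.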
## VIII Binding dislocations with a symmetry breaking field
As in equilibrium smectics, it is possible to make dislocations bind in 2d
active smectics by applying symmetry breaking fields. Once we do so, there is
no longer any symmetry argument forcing the dislocation velocity
$\mathbf{v}_{s}$ to be independent of the “rotational” component of the strain
$w_{x}$. Nor must this velocity be directed along the layers. Therefore, we
can write in general:
$\mathbf{v}_{s}^{\alpha}=\bigg{(}v_{s\parallel}\hat{z}\times\hat{{\bf
n}}(\mathbf{r}_{\alpha})+v_{s\perp}\hat{{\bf
n}}(\mathbf{r}_{\alpha})\bigg{)}{\rm sgn}(b_{\alpha})$ (VIII.1)
By up-down symmetry, $v_{s\perp}(w_{x}=0)=0$. Therefore, the leading order
term in the expansion of $v_{s\perp}(w_{x})$ in powers of $w_{x}$ is the
linear term:
$v_{s\perp}(w_{x})=-\mu_{y}w_{x}\,,$ (VIII.2)
where we’ve defined
$\mu_{y}\equiv-\left({dv_{s\perp}(w_{x})\over dw_{x}}\right)_{w_{x}=0}\,.$
(VIII.3)
By the same reasoning about homeostasis that we applied to the rotation
invariant case, we expect that, at homeostasis,
$v_{s\parallel}(a_{\rm h})=0\,.$ (VIII.4)
We can therefore again choose the state with $a=a_{\rm h}$ to be our reference
state, and expand for small strain $w_{y}$, obtaining, just as we did for the
rotation invariant case,
$v_{s\parallel}(\mathbf{r})=v_{s\parallel}(a(\mathbf{r}))=v_{s\parallel}(a_{\rm
h}(1+w_{y}(\mathbf{r})))\approx\mu_{x}w_{y}(\mathbf{r})\,,$ (VIII.5)
where again, we have excluded a term proportional to $w_{x}$ by up-down
symmetry.
Inserting the results (VIII.2) and (VIII.5) into (VIII.1), using again the
relation V.2 $\hat{{\bf n}}\approx\hat{y}-\phi(\mathbf{r})\hat{x}$ between the
layer normal $\hat{{\bf n}}$ and the strain $w_{x}$, and expanding to linear
order in the strain $\mathbf{w}$, we obtain
$v_{x}=\mu_{x}w_{y}\,,\,v_{y}=-\mu_{y}w_{x}\,.$ (VIII.6)
Since the $\mu_{y}$ term in (VIII.6) only appears due to the rotational
symmetry breaking field, $\mu_{y}$ must vanish as the strength $g$ of that
symmetry breaking field goes to zero. We therefore expect
$\mu_{y}\propto g\,$ (VIII.7)
for small symmetry breaking field strength $g$. This is the only parameter in
the dislocation dynamics that we expect to exhibit any strong dependence on
$g$ at small $g$. We will use this fact later to determine how the
critical noise strength at which dislocation unbinding occurs varies with $g$.
Note also that, as we saw in our discussion in section [II.2.1] of symmetry
breaking fields in equilibrium smectics, $g$ could scale non-linearly with
experimentally tunable parameters like magnetic field $H$. Indeed, we expect
that, as in equilibrium, $g\propto H^{2}$ for magnetic symmetry breaking
fields. On the other hand, for symmetry breaking induced by etching grooves
into the 2d surface, we expect $g$ to scale linearly with the density of those
grooves.
Consider now an isolated neutral pair of fundamental dislocations, one with
Burgers number $b_{+}=+a$ located at $\mathbf{r}_{+}\equiv(x_{+},y_{+})$, the
other with Burgers number $b_{-}=-a$ located at
$\mathbf{r}_{-}\equiv(x_{-},y_{-})$.
Since, as discussed earlier, the spin-wave equation of motion (III.7) is
unchanged in form by the presence of the symmetry breaking fields, and since,
furthermore, the Burgers’ condition is also unchanged, we can use the
expressions (IV.4) for the dislocation fields for active, rotation invariant
active smectics for this non-rotation invariant case as well. Therefore, the
strain field $\mathbf{w}(\mathbf{r}_{+})$ due to the $-$ dislocation at the
position $\mathbf{r}_{+}$ of the $+$ dislocation is, from equation (IV.4),
$w_{x}(\mathbf{r}_{+})=-aG_{x}(\mathbf{r})\,\ \ \ ,\ \ \
w_{y}(\mathbf{r}_{+})=-aG_{y}(\mathbf{r})\,,$ (VIII.8)
where we’ve defined the relative displacement
$\mathbf{r}\equiv(x_{+}-x_{-},y_{+}-y_{-})\equiv(x,y)$. As always, we exclude
the field of the $+$ dislocation at $\mathbf{r}_{+}$ from the field
$\mathbf{w}$ above, since the $+$ dislocation only responds to the field of
the other dislocation. Likewise, the strain field $\mathbf{w}(\mathbf{r}_{-})$
due to the $+$ dislocation at the position $\mathbf{r}_{-}$ of the $-$
dislocation is, from equation (IV.4),
$w_{x}=aG_{x}(\mathbf{r})\,\ \ \ ,\ \ \ w_{y}=aG_{y}(\mathbf{r})\,.$ (VIII.9)
Using the dislocation equation of motion (VIII.6), together with our
expressions (VIII.8) and (VIII.9) for the fields at the $+$ and $-$
dislocations, we obtain the equations of motion for the dislocations:
$\displaystyle{dx_{\pm}\over
dt}=\mp\mu_{x}aG_{y}(\mathbf{r})+f^{\pm}_{x}(t)\,,$
$\displaystyle{dy_{\pm}\over
dt}=\pm\mu_{y}aG_{x}(\mathbf{r})+f^{\pm}_{y}(t)\,,$ (VIII.10)
where, as we did for the rotation invariant case, we have added random noises
$\mathbf{f}^{\pm}$ to the equation of motion for each dislocation, to take
into account random microscopic processes (including, but not limited to,
thermal fluctuations) that move the dislocations. We will continue to take
these noises to have the correlations given in section VII, with the
indices $\alpha$ and $\beta$ running over the two values $+$ and $-$.
From VIII.10, it follows that the relative displacement $\mathbf{r}$ obeys
$\displaystyle{dx\over dt}$ $\displaystyle={dx_{+}\over dt}-{dx_{-}\over
dt}=-2\mu_{x}aG_{y}(\mathbf{r})+f_{x}(t)\,,$ $\displaystyle{dy\over dt}$
$\displaystyle={dy_{+}\over dt}-{dy_{-}\over
dt}=2\mu_{y}aG_{x}(\mathbf{r})+f_{y}(t)\,,$ (VIII.11)
where the “relative force” $\mathbf{f}=\mathbf{f}^{+}-\mathbf{f}^{-}$. From
this, it follows that the relative force is also Gaussian, with mean and
variance given by:
$\displaystyle\langle\mathbf{f}(t)\rangle$ $\displaystyle={\bf 0}\,,$
$\displaystyle\langle f_{i}(t)f_{j}(t^{\prime})\rangle$
$\displaystyle=2\Bigg{(}\Delta_{x}\delta_{ij}^{x}+\Delta_{y}\delta_{ij}^{y}\Bigg{)}\delta(t-t^{\prime})\,.$
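The factor of $2$ here is simply additivity of variances for the difference of two independent noises. A quick numerical sanity check (numpy, with illustrative values):

```python
import numpy as np

rng = np.random.default_rng(1)
Delta_x, dt, n = 0.7, 1e-3, 400_000   # illustrative values

# discretized white noises for the + and - dislocations:
# each has variance Delta_x*dt per time step
f_plus = rng.normal(0.0, np.sqrt(Delta_x*dt), n)
f_minus = rng.normal(0.0, np.sqrt(Delta_x*dt), n)

# relative force f = f^+ - f^-: independent variances add, giving 2*Delta_x*dt
ratio = np.var(f_plus - f_minus) / (Delta_x*dt)
print(ratio)  # ~2
```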
Using our earlier expression (IV.7) relating the Greens functions $G_{x,y}$ to
the gradient of the potential (IV.8), we can rewrite this as
$\displaystyle{dx\over
dt}=-2{\mu_{x}\over\varpi}a\partial_{x}V(\mathbf{r})+f_{x}(t)\,,$
$\displaystyle{dy\over dt}=-2\mu_{y}\varpi
a\partial_{y}V(\mathbf{r})+f_{y}(t)\,.$ (VIII.13)
Note that if the noise variances $\Delta_{x,y}$ and the effective “relative
mobilities”
$\mu^{\rm rel}_{x}\equiv{2a\mu_{x}\over\varpi}\ \ \ ,\ \ \ \mu^{\rm
rel}_{y}\equiv 2a{\mu_{y}\varpi}$ (VIII.14)
satisfied
${\Delta_{x}\over\Delta_{y}}={\mu^{\rm rel}_{x}\over\mu^{\rm rel}_{y}}\,,$
(VIII.15)
then the equations of motion VIII.13 could be written in the form
$\displaystyle{dr_{i}\over
dt}=-\tilde{\Gamma}_{ij}\partial_{j}U(\mathbf{r})+f_{i}(t)\,,$ (VIII.16)
with $U(\mathbf{r})=KV(\mathbf{r})$, a diagonal kinetic coefficient tensor
$\displaystyle\tilde{\Gamma}_{ij}=\Bigg{(}\tilde{\Gamma}_{x}\delta_{ij}^{x}+\tilde{\Gamma}_{y}\delta_{ij}^{y}\Bigg{)}\,,$
(VIII.17)
and the noise correlations given by
$\displaystyle\langle f_{i}(t)f_{j}(t^{\prime})\rangle$
$\displaystyle=2k_{B}T\tilde{\Gamma}_{ij}\delta(t-t^{\prime})\,.$
The reader can easily show for herself that this works provided the
temperature $T$, kinetic coefficient tensor components $\tilde{\Gamma}_{x,y}$,
and effective spin wave stiffness $K$ obey
${k_{B}T\over K}={\Delta_{x}\over\mu^{\rm rel}_{x}}={\Delta_{y}\over\mu^{\rm
rel}_{y}}\ \ \ ,\ \ \ \tilde{\Gamma}_{x,y}={\mu^{\rm rel}_{x,y}\over K}\,.$
(VIII.19)
The relation just given between these noise correlations and the
kinetic coefficient tensor $\tilde{\Gamma}_{ij}$ is exactly that required by
the fluctuation-dissipation theorem chaikin . Therefore, the relative motion
of the dislocation pair is exactly that of a pair moving in equilibrium in the
potential $U(\mathbf{r})$. This is precisely the equilibrium model for the
Kosterlitz-Thouless transition discussed in section [II.2.3]. Therefore, if
this system did happen to satisfy the relation (VIII.15), it would undergo a
Kosterlitz-Thouless unbinding transition as noise was increased. Most
importantly, at sufficiently small, but non-zero, noise (i.e., sufficiently
low temperature in the equilibrium model), dislocations would remain bound,
and the active smectic phase would be stable.
However, in a weak symmetry breaking field, the system will always be far from
satisfying the relation (VIII.15). This can be seen from the behavior of
the $\Delta$’s and $\mu$’s in the limit of weak symmetry breaking field $g$:
they all go to finite constants as $g\to 0$ except $\mu_{y}$, which, as
discussed earlier, vanishes according to $\mu_{y}\propto g$ as $g\to 0$.
Even if the symmetry breaking field is not weak, there is no reason our non-
equilibrium active smectic should obey (VIII.15). Therefore, the connection to
the equilibrium Kosterlitz-Thouless transition just made for the special case
in which equation (VIII.15) is satisfied will not, in general, hold.
Nonetheless, we expect this system to undergo something very like a
Kosterlitz-Thouless transition, at least for very weak symmetry breaking
fields. To establish this, however, we cannot use equilibrium arguments.
Instead, we must use the more general Fokker-Planck equation for the generic
non-equilibrium system, to which we now turn.
Using standard techniques chaikin , we can show that the stochastic equation of
motion (VIII.13) for the relative displacement $\mathbf{r}$ of the two
dislocations implies that the probability density $\rho(x,y,t)$ for the
relative displacement vector $\mathbf{r}$ obeys the Fokker-Planck equation:
$\partial_{t}\rho+{\bf\nabla}\cdot(\mathbf{u}\rho)-(\Delta_{x}\partial_{x}^{2}+\Delta_{y}\partial_{y}^{2})\rho=0\,,$
(VIII.20)
where
$u_{i}(\mathbf{r})\equiv-\mu_{ij}\partial_{j}V(\mathbf{r})\,,$ (VIII.21)
with the effective relative mobility tensor
$\mu_{ij}=\Bigg{(}\mu^{\rm rel}_{x}\delta_{ij}^{x}+\mu^{\rm
rel}_{y}\delta_{ij}^{y}\Bigg{)}\,,$ (VIII.22)
is the deterministic part of the relative dislocation velocity in equation
(VIII.13).
Looking for steady-state solutions to this equation, we can set the time
derivative to zero, and drop the time dependence of $\rho(x,y,t)$ (that is,
set $\rho(x,y,t)=\rho(x,y)$). Doing so, and using our expression (IV.8) for
$V(\mathbf{r})$, leads to the steady state equation:
$(\Delta_{x}\partial_{x}^{2}+\Delta_{y}\partial_{y}^{2})\rho+\partial_{x}\left({\gamma_{x}x\rho\over
x^{2}+\alpha y^{2}}\right)+\partial_{y}\left({\gamma_{y}y\rho\over
x^{2}+\alpha y^{2}}\right)=0\,,$ (VIII.23)
where we’ve defined
$\alpha\equiv{D_{x}\over D_{y}}={1\over\varpi^{2}}$ (VIII.24)
and
$\gamma_{i}={\mu^{\rm rel}_{i}\over 2\pi}\ \ \ ,\ \ \ \,\,i=(x,y)\,.$
(VIII.25)
Since we expect that, for small symmetry breaking field $g$, $\mu_{y}\propto
g$, while all other parameters should go to finite, non-zero constants as
$g\to 0$, we therefore expect
$\gamma_{y}\propto g\,\,\,{\rm as}\,\,\,\,g\to 0\,.$ (VIII.26)
We will use this scaling law later to determine how the critical noise
strength $\Delta_{y}^{c}(\zeta,g)$ at which noise destroys smectic order
depends on activity $\zeta$ and symmetry breaking field strength $g$.
Our experience with the equilibrium case suggests that we seek a solution of
this equation which falls off like a power law; i.e., $\rho(\mathbf{r})\propto
r^{-\nu}$ for large $r$. We will therefore insert the scaling ansatz
$\rho(x,y)=y^{-\nu}\Phi(x/y)\,$ (VIII.27)
into VIII.23.
Doing so, we find that such a solution will indeed work, provided that the
scaling function $\Phi(z)$ obeys the ODE:
$\Delta_{x}\Phi^{\prime\prime}(z)+\Delta_{y}\bigg{[}z^{2}\Phi^{\prime\prime}(z)+2(\nu+1)z\Phi^{\prime}(z)+\nu(\nu+1)\Phi(z)\bigg{]}=-\gamma_{x}{d\over
dz}\left({z\Phi(z)\over
z^{2}+\alpha}\right)+\gamma_{y}\left[{(1+\nu)\Phi(z)\over
z^{2}+\alpha}+z{d\over dz}\left({\Phi(z)\over z^{2}+\alpha}\right)\right]\,.$
(VIII.28)
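Because the substitution leading to (VIII.28) is mechanical but easy to get wrong, it is worth a symbolic check. The sketch below (sympy) verifies that inserting the ansatz (VIII.27) into the steady-state equation (VIII.23) and multiplying through by $y^{\nu+2}$ reproduces (VIII.28); the particular trial $\Phi$ is an arbitrary illustrative choice, since the identity holds for any smooth $\Phi$.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
nu, al = sp.symbols('nu alpha', positive=True)
Dx, Dy, gx, gy = sp.symbols('Delta_x Delta_y gamma_x gamma_y', positive=True)

# a generic concrete trial scaling function -- illustrative only
Phi = sp.exp(-z**2) * (1 + z + z**3)

# the scaling ansatz rho = y^{-nu} Phi(x/y), equation (VIII.27)
rho = y**(-nu) * Phi.subs(z, x/y)

# steady-state Fokker-Planck operator of equation (VIII.23) applied to rho
fp = (Dx*sp.diff(rho, x, 2) + Dy*sp.diff(rho, y, 2)
      + sp.diff(gx*x*rho/(x**2 + al*y**2), x)
      + sp.diff(gy*y*rho/(x**2 + al*y**2), y))

# the claimed scaling ODE (VIII.28), arranged as (LHS - RHS)
ode = (Dx*sp.diff(Phi, z, 2)
       + Dy*(z**2*sp.diff(Phi, z, 2) + 2*(nu + 1)*z*sp.diff(Phi, z)
             + nu*(nu + 1)*Phi)
       + gx*sp.diff(z*Phi/(z**2 + al), z)
       - gy*((1 + nu)*Phi/(z**2 + al) + z*sp.diff(Phi/(z**2 + al), z)))

# multiplying the PDE by y^(nu+2) must reproduce the ODE; check numerically
# at an arbitrary point, since the identity holds for all x and y
vals = {x: 0.7, y: 1.3, nu: 4, al: 0.6, Dx: 1.1, Dy: 0.8, gx: 0.9, gy: 0.3}
residual = float((fp*y**(nu + 2) - ode.subs(z, x/y)).subs(vals))
print(residual)  # ~0 (machine precision)
```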
We will not need to solve this equation (fortunately!); rather, we will use it
simply to establish that the scaling function $\Phi(z)$ does nothing singular
as $\gamma_{y}\to 0$; that is, as the symmetry breaking field strength goes to
zero. In particular, the range of $\Phi(z)$ - that is, the value of $z$ at
which $\Phi(z)$ starts falling off fast enough that its integral converges -
does not diverge as $\gamma_{y}\to 0$. To see this, consider (VIII.28) at
$z=0$, where it reads
$\Phi^{\prime\prime}(z=0)=-\bigg{[}\left({\Delta_{y}\over\Delta_{x}}\right)\nu(1+\nu)+{(\gamma_{x}-\gamma_{y}(1+\nu))\over\alpha\Delta_{x}}\bigg{]}\Phi(z=0)\,.$
(VIII.29)
Since $\Delta_{x}$ and $\Delta_{y}$ are of the same order of magnitude, the
coefficient of $\Phi(z=0)$ in (VIII.29) is at least $O(1)$. Therefore,
$\Phi^{\prime\prime}(z=0)$ is a negative number whose magnitude is $\gtrsim
O(1)$ times $\Phi(z=0)$. This means that we would expect $\Phi(z)$ to drop
from $\Phi(z=0)$ to small values on a scale no larger than $O(1)$. For large
$z$ the behavior of $\Phi(z)$ obeys
$z^{2}\Phi^{\prime\prime}(z)+2(\nu+1)z\Phi^{\prime}(z)+\nu(\nu+1)\Phi(z)=0$,
which implies $\Phi(z)\sim z^{-\nu}$ for large $z$.
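The large-$z$ statement is the standard indicial analysis of an Euler equation: substituting $\Phi=z^{-s}$ gives a quadratic in $s$ whose roots are $s=\nu$ and $s=\nu+1$, so the slowest-decaying solution is $z^{-\nu}$. A short sympy sketch:

```python
import sympy as sp

z, s, nu = sp.symbols('z s nu', positive=True)

# substitute the power law Phi = z^{-s} into the large-z (Euler) equation
Phi = z**(-s)
euler = (z**2*sp.diff(Phi, z, 2) + 2*(nu + 1)*z*sp.diff(Phi, z)
         + nu*(nu + 1)*Phi)

# z^{-s} solves the equation iff the indicial polynomial in s vanishes
indicial = sp.expand(euler / Phi)
roots = sp.solve(indicial, s)
assert set(roots) == {nu, nu + 1}
```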
Our scaling ansatz (VIII.27) implies that
$f(y)\equiv\int_{-\infty}^{\infty}\rho(x,y)dx=\int_{-\infty}^{\infty}y^{-\nu}\Phi(x/y)dx=y^{1-\nu}\Upsilon_{1}\,,$
(VIII.30)
where we’ve defined
$\Upsilon_{1}\equiv\int_{-\infty}^{\infty}\Phi(z)dz\,.$ (VIII.31)
Our above observation that $\Phi(z)$ becomes small for $z\gtrsim{\cal O}(1)$
regardless of the value of $\gamma_{y}$ implies that $\Upsilon_{1}={\cal
O}(1)$ regardless of the value of $\gamma_{y}$ as well. Note also that
$\Upsilon_{1}$ is independent of $y$.
Now, integrating equation (VIII.23) over $x$ from $-\infty$ to $\infty$ gives
$\Delta_{y}f^{\prime\prime}(y)=-\gamma_{y}{d\over
dy}\bigg{[}y\int_{-\infty}^{\infty}{\rho(x,y)dx\over x^{2}+\alpha
y^{2}}\bigg{]}\,.$ (VIII.32)
In deriving this equation, we have dropped surface terms that must vanish
since the scaling function $\Phi(z)$ vanishes sufficiently rapidly to be
integrable.
Using our scaling ansatz (VIII.27) enables us to rewrite
$\int_{-\infty}^{\infty}{\rho(x,y)dx\over x^{2}+\alpha
y^{2}}=\int_{-\infty}^{\infty}{y^{-\nu}\Phi(x/y)dx\over x^{2}+\alpha
y^{2}}=y^{-1-\nu}\Upsilon_{2}$ (VIII.33)
where we’ve defined
$\Upsilon_{2}\equiv\int_{-\infty}^{\infty}{\Phi(z)dz\over z^{2}+\alpha}\,.$
(VIII.34)
Like $\Upsilon_{1}$, $\Upsilon_{2}$ can be readily seen to be ${\cal O}(1)$,
even when $\gamma_{y}\to 0$. Thus VIII.32 can be rewritten as
$\Delta_{y}{d^{2}\over dy^{2}}(\Upsilon_{1}y^{1-\nu})=-\gamma_{y}{d\over
dy}(\Upsilon_{2}y^{-\nu})\,,$ (VIII.35)
which can readily be seen to work (because both sides scale the same way with
$y$; specifically, like $y^{-(\nu+1)}$) provided
$\Delta_{y}\Upsilon_{1}\nu(\nu-1)=\gamma_{y}\Upsilon_{2}\nu\,.$ (VIII.36)
Since $\nu=4$ at the dislocation unbinding transition, (VIII.36) implies that
the value $\Delta_{y}^{c}$ of $\Delta_{y}$ at the transition obeys
$\Delta_{y}^{c}=\gamma_{y}\left({\Upsilon_{2}\over 3\Upsilon_{1}}\right)\,.$
(VIII.37)
Since $\Upsilon_{1}$ and $\Upsilon_{2}$ are of ${\cal O}(1)$, even when
$\gamma_{y}\to 0$, (VIII.37) implies that
$\Delta_{y}^{c}=\gamma_{y}\times{\cal O}(1)\,.$ (VIII.38)
Given our earlier argument that $\gamma_{y}$ should vanish linearly with the
symmetry breaking field $g$ as $g\to 0$, this result implies that the noise
strength $\Delta_{y}^{c}$ at the transition should also vanish linearly with
$g$ as $g\to 0$.
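The passage from (VIII.35) to (VIII.37) is a one-line coefficient match, which can be verified symbolically (sympy):

```python
import sympy as sp

y, nu = sp.symbols('y nu', positive=True)
Dy, gy = sp.symbols('Delta_y gamma_y', positive=True)
U1, U2 = sp.symbols('Upsilon_1 Upsilon_2', positive=True)

# the two sides of equation (VIII.35)
lhs = Dy * sp.diff(U1 * y**(1 - nu), y, 2)
rhs = -gy * sp.diff(U2 * y**(-nu), y)

# both sides scale as y^{-(nu+1)}; stripping that factor gives (VIII.36)
coef_eq = sp.simplify((lhs - rhs) * y**(nu + 1))

# at the unbinding transition nu = 4, solving for Delta_y yields (VIII.37)
crit = sp.solve(coef_eq.subs(nu, 4), Dy)[0]
assert sp.simplify(crit - gy*U2/(3*U1)) == 0
```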
Since we also expect that the noise strengths $\Delta_{x,y}$ in the
dislocation noise correlations written above are
both proportional to the spin wave noise strength $\Delta$, this implies that
the value $\Delta_{c}$ of the spin wave noise strength $\Delta$ at the
transition should also scale linearly with the symmetry breaking field
strength $g$ for small $g$; that is
$\Delta_{c}\propto g\,\,\,{\rm as}\,\,\,\,g\to 0\,.$ (VIII.39)
This result should be contrasted with the equilibrium result
$\Delta_{c}^{\rm eq}\propto\sqrt{g}\,\,\,{\rm as}\,\,\,\,g\to 0\,,$ (VIII.40)
which follows from equation (II.72) if we interpret $T_{KT}$ as the critical
noise correlation strength. We therefore see that, for small symmetry breaking
fields $g$, the critical noise strength $\Delta_{c}$ for dislocation unbinding
and the melting of the smectic phase is much weaker for active smectics than
for equilibrium smectics. Active smectics are less robust against melting,
even in the presence of symmetry breaking fields, than their equilibrium
counterparts.
Another consequence of this result VIII.39 is that the critical value
$\eta_{c}$ of the exponent $\eta$ for algebraic decay of smectic translational
order becomes non-universal. Indeed, since the diffusion constants $D_{x}$ and
$D_{y}$ go to finite, non-zero constants as the symmetry breaking field $g\to
0$, while the critical noise strength $\Delta_{c}$ vanishes according to
VIII.39, the value of $\eta_{c}$ also vanishes linearly with symmetry breaking
field:
$\eta_{c}\propto{\gamma_{y}\over\sqrt{D_{x}D_{y}}}\propto g\,\,\,{\rm
as}\,\,\,\,g\to 0\,.$ (VIII.41)
## IX From Malthusian to incompressible active smectics
In the preceding sections, for the sake of simplicity, we have ignored the
existence of a local velocity field $\mathbf{v}(\mathbf{r})$. This field has
two effects: it can advect the dislocations, and it can modify the expressions
of $w_{x}(\mathbf{r})$ and $w_{y}(\mathbf{r})$. The equations of motion for
the dislocation pair separation now read
$\displaystyle{dx\over dt}$ $\displaystyle=2\psi
v_{x}(\mathbf{r})-2\mu_{x}w_{y}(\mathbf{r})+f_{x}(t)\,,$
$\displaystyle{dy\over dt}$ $\displaystyle=2\psi
v_{y}(\mathbf{r})+2\mu_{y}w_{x}(\mathbf{r})+f_{y}(t)\,,$ (IX.1)
where by $\mathbf{v}(\mathbf{r})$ we mean the velocity field at the point
$\mathbf{r}$ generated by a $+1$ dislocation, and the factor of $2$ comes from
the fact that an equal and opposite velocity field is generated by the $-1$
dislocation at the position of the $+1$ dislocation, and the two effects on
the relative motion add. Note that the prefactors of $v_{x}(\mathbf{r})$ and
$v_{y}(\mathbf{r})$ are not $2$, as one might naively expect, but $2\psi$,
where the factor $\psi$ captures the effects of drag between the dislocations
and the substrate, as discussed in the appendix. However, these effects do not
modify our analysis since we will show that the velocity contribution is
subdominant compared to that of $w_{x},w_{y}$. The equations for the velocity
field involve the force balance, the layer dynamics equation, and the density
non-conservation equation. At steady state, the layer dynamical equation
(III.1) becomes (see Appendix A):
$\displaystyle
0=v_{y}(\mathbf{r})+D_{y}\partial_{y}w_{y}+D_{x}\partial_{x}w_{x}+f\,.$ (IX.2)
This equation shows that $\mathbf{v}(\mathbf{r})$ is of the order of the
gradients $\partial_{x}w_{x}$ and $\partial_{y}w_{y}$, which lead to terms
subdominant compared to $w_{x},w_{y}$ in the long distance limit; hence our
claim that the advection term in (IX.1) is subdominant. The force balance
expresses the fact that the momentum extracted from the substrate is balanced
by the divergence of the stress. The momentum exchange with the substrate
involves not only the usual friction term but, because we are dealing with an
active system, also gradients of the layer spacing plus bend and splay of
the layer normal. The stress tensor involves the layer compression term as in
conventional smectics, the pressure term, and the usual active stress of
anisotropic systems. The viscous term, which is of higher order in gradients
than the substrate friction, may be omitted. The density balance equation
involves a source term which expresses the fact that, whenever the pressure
departs from its homeostatic value, the elements building the active smectic
are either created or destroyed. It has been shown that, under these
conditions, on long time scales one can replace the pressure term by a bulk
viscous term homeoTissuetwo . As already pointed out, viscous terms can be
omitted in the long wavelength limit, and it is then straightforward to show
that the equation we found for
$w_{x},w_{y}$ given in Eq. (IV.1) is valid and the effect of flow is simply to
renormalize the coefficients. Thus our conclusions concerning the Malthusian
case are valid even when one takes into account a momentum exchange with the
substrate more complex than just friction.
We now turn to the incompressible active smectic case. The pressure becomes a
Lagrange multiplier which can be calculated with the condition that the
divergence of the velocity field vanishes. The other equations are identical
to the ones we just discussed, and they can be solved in a straightforward
way. Altogether, this adds up to eight parameters in the problem. Following the same
logic as before we obtain:
$\displaystyle w_{x}(\mathbf{r})=-{y\gamma^{(1)}_{x}\over
2\pi(\alpha_{1}y^{2}+x^{2})}+{y\gamma^{(2)}_{x}\over
2\pi(\alpha_{2}y^{2}+x^{2})}\,,$ (IX.3) $\displaystyle
w_{y}(\mathbf{r})={x\gamma^{(1)}_{y}\over
2\pi(\alpha_{1}y^{2}+D_{y}x^{2})}-{x\gamma^{(2)}_{y}\over
2\pi(\alpha_{2}y^{2}+D_{y}x^{2})}\,.$ (IX.4)
We have obtained expressions for
$\gamma^{(1)}_{x},\gamma^{(2)}_{x},\gamma^{(1)}_{y},\gamma^{(2)}_{y},\alpha_{1},\alpha_{2}$
in terms of the aforementioned eight parameters, but they are not very
illuminating and so we will not give them here. In the large friction regime
one recovers the results of the Malthusian case, with
$\gamma^{(2)}_{x},\gamma^{(2)}_{y}$ going to zero,
$\gamma^{(1)}_{x},\gamma^{(1)}_{y}$ going to $\gamma_{x},\gamma_{y}$ and
$\alpha_{1},\alpha_{2}$ going to $\alpha$. The important point is that the
expressions (IX.3) and (IX.4) appear as differences of two functions having
the same structure as the ones appearing in the Malthusian case, with the same
scaling. Thus there is an important region of parameter space for which the sign of
$w_{x}(\mathbf{r}),w_{y}(\mathbf{r})$ and the scaling are the same as in the
Malthusian case. This means that the same procedure as in the preceding
section can be followed and that the conclusions detailed in the Malthusian
case are valid in general in a broad range of parameters even in the
incompressible case.
## X Summary, conclusions, and suggestions for future work
We have shown that dislocations in dry active Malthusian smectics - that is,
smectics lacking all conservation laws, including that of particle number -
behave very differently from those in equilibrium smectics. Specifically:
1) They can move spontaneously, even in isolation.
2) Because of this, active smectics with “constant stress” boundary conditions
can never reach a steady state. Instead, they either grow forever, or shrink
and disappear. This behavior is similar to that of tissues tissue .
3) When their boundaries are fixed, active smectics reach a state of
“homeostasis”, in which the spontaneous motion of isolated dislocations
ceases.
4) However, even in the state of homeostasis, dislocations are always unbound
in rotation invariant active smectics, if there is any noise, however small.
This means that the active smectic phase does not, in fact, exist at finite
noise.
5) By applying rotational symmetry breaking fields, active smectics can be
stabilized against dislocation unbinding for sufficiently small noise.
However, for weak symmetry breaking fields, active smectics are less robust
against noise than their equilibrium counterparts. Specifically, the critical
noise strength $\Delta_{c}$ above which dislocations unbind and smectic order
is lost scales linearly with the symmetry breaking field strength $g$, in
contrast to the $\sqrt{g}$ scaling of the critical temperature in an
equilibrium smectic.
6) As a result, the exponent $\eta$ for the algebraic decay of smectic
correlations (given by equation (III.6)) becomes non-universal at melting, and
in fact vanishes linearly with symmetry breaking field strength $g$ as $g\to
0$. This should be contrasted with the universal value $\eta(T_{c})=1/4$ of
this exponent in equilibrium smectics with a symmetry breaking field, a result
that is completely independent of the symmetry breaking field $g$.
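Schematically, points 5) and 6) contrast the active and equilibrium cases as
$\Delta_{c}\propto g,\quad\eta(\Delta_{c})\propto g\quad{\rm(active)};\qquad T_{c}\propto\sqrt{g},\quad\eta(T_{c})=1/4\quad{\rm(equilibrium)},$
which is simply a compact restatement of the scalings described above.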
Our work here has, of course, focused mainly on a very particular type of
smectic, namely dry Malthusian apolar smectics. In contrast to equilibrium
systemschaikin , for which the exact nature of the dynamics, in particular,
which conservation laws the dynamics respects, has no effect on the equal time
correlations of smectic fluctuations, in active systems, because they are
non-equilibrium, all we have is dynamics; so, a priori, results could change
if we consider smectics with conservation laws. The most obvious of these is
number conservation, which our Malthusian smectics lack, due to “birth and
death” of the constituent active particles. One can of course imagine many
situations in which birth and death are absent (at least on the time scale of
an experiment). Such systems will have very different hydrodynamic equations,
because the conserved particle density will now become a slow, hydrodynamic
variableMPP , thereby completely changing the dynamics. The spin wave theory
for this case has already been worked out apolarsm , and our analysis of the
incompressible case suggests our results are quite general.
Likewise, momentum conservation, which will hold for freely suspended 2d
systems (i.e., those not in contact with a substrate to which they can lose
momentum) always radically changes the dynamics, for similar reasons.
Finally, polarity (i.e., the absence of head-tail symmetry) is also
knownpolarsm to change the nature of the spin-wave theory of active smectics.
So its role in dislocation behavior should also be investigated.
We strongly suspect that in all of these more complicated systems our
fundamental conclusion will continue to hold, namely, that dislocations in a
rotation invariant 2d active smectic will always be unbound in the presence of
noise, so that the active smectic phase cannot exist at finite noise in two
dimensions. We suspect this because, whatever the conserved quantities,
rotation invariance forbids motion of dislocations, other than Brownian motion
driven by noise, or motion driven by curvature of the layers, in the direction
perpendicular to
the layers in a rotation invariant system. Because the curvature field induced
by dislocations will always fall off quite rapidly with distance from the
dislocation, the motion induced by them will always be insufficient to bind a
dislocation pair. Instead, dislocations can always unbind by diffusing apart
in the direction normal to the layers, as we found here for dry Malthusian
active smectics.
But, obviously, this conclusion is at best speculative until those other
systems enumerated above are explicitly investigated.
###### Acknowledgements.
JT thanks The Higgs Centre for Theoretical Physics at the University of
Edinburgh for their hospitality and support while this work was in progress.
He likewise thanks the Max Planck Institute for the Physics of Complex
Systems, Dresden, Germany, for their support through the Martin Gutzwiller
Fellowship. JP thanks A. Bershadsky for drawing his attention to smectic order
in fibroblast cells.
## Appendix A Theory of active smectics
We describe the smectic configuration by the vector field $m\hat{\bf n}$ which
also defines ${\bf w}(\mathbf{r},t)=(a_{0}m-1)\hat{\bf n}$, where $m=1/a$ is
the density of layers, $\hat{\bf n}$ is a unit vector normal to the layers and
$a_{0}$ a reference layer spacing. The vector field $m\hat{\bf n}$ satisfies a
general balance equation
$\partial_{t}(m\hat{\bf n})+\nabla J_{m}=-\hat{z}\times{\bf J}_{d}\quad.$
(A.1)
Here, the dislocation current is
${\bf J}_{d}=\sum_{\alpha}b_{\alpha}{\bf
v}_{\alpha}\delta(\mathbf{r}-\mathbf{r}_{\alpha})\quad,$ (A.2)
where ${\bf v}_{\alpha}$ is the velocity of dislocation $\alpha$ and
$b_{\alpha}$ its Burgers number. The layer rate $J_{m}$ can in the absence of
dislocations be identified with the rate of change of layer displacement:
$J_{m}a_{0}=-\partial_{t}u\ \ \ ,\ \ \ {\rm no}\,{\rm dislocations}\,.$ (A.3)
Eqns. (A.1) and (A.2) imply Eq. (II.9).
In a hydrodynamic theory, we write phenomenological constitutive equations for
$J_{m}$ and for the force balance, using terms allowed by symmetry at lowest
order in spatial derivatives. For an up-down and rotationally symmetric
smectic, these can be written as
$\displaystyle J_{m}$ $\displaystyle=$ $\displaystyle m{\bf v}\cdot\hat{\bf
n}-D_{m}\hat{\bf n}\cdot\nabla m+\lambda_{m}\nabla\cdot\hat{\bf n}$ (A.4)
$\displaystyle\nabla\cdot\bm{\sigma}$ $\displaystyle=$
$\displaystyle\bm{\mu}\cdot{\bf v}+\bm{\nu}\cdot\nabla
m+\lambda_{b}^{v}(\hat{\bf n}\cdot\nabla)\hat{\bf n}$ (A.5) $\displaystyle+$
$\displaystyle\lambda_{s}^{v}\hat{\bf n}(\nabla\cdot\hat{\bf n})$
where $\bf v$ is the material flow velocity and we have introduced
phenomenological coefficients $D_{m}$, $\lambda_{m}$, $\lambda_{b,s}^{v}$ as
well as mobility and kinetic tensors $\bm{\mu}$, $\bm{\nu}$ and we have
omitted the noise for simplicity.
The stress tensor $\bm{\sigma}$ also obeys a constitutive relation which is of
the form
$\sigma_{ij}=-P\delta_{ij}-B\bigg{(}\hat{n}_{i}\hat{n}_{j}-\frac{1}{2}\delta_{ij}\bigg{)}\frac{m-m_{0}}{m_{h}}\quad,$
(A.6)
where $P$ is pressure and we have omitted higher order and viscous terms which
are subdominant in the hydrodynamic limit. To study small deformations, we
write $m=1/a_{0}+\delta m$ and $\hat{\bf
n}=\sin(\phi)\hat{x}-\cos(\phi)\hat{y}$. To linear order in $\phi$ and $\delta
m$ we then have
$a_{0}m\hat{\bf n}\simeq\phi\hat{x}-(1+a_{0}\delta m)\hat{y}\quad,$ (A.7)
which is the same as Eq. (II.10). Furthermore, we have to lowest order
$w_{x}\simeq\phi$ and $w_{y}\simeq-a_{0}\delta m$. Finally, Eq. (A.4) becomes
$-J_{m}a_{0}=v_{y}+D_{x}\partial_{x}w_{x}+D_{y}\partial_{y}w_{y}$ (A.8)
where $D_{y}=D_{m}$ and $D_{x}=-\lambda_{m}a_{0}$. In steady state $J_{m}=0$
and we obtain Eq. (IX.2).
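The linearization (A.7) used above can be checked in one line: with
$m=1/a_{0}+\delta m$ and $\hat{\bf n}=\sin(\phi)\hat{x}-\cos(\phi)\hat{y}$
expanded to first order in $\phi$,
$a_{0}m\hat{\bf n}=(1+a_{0}\delta m)\left(\sin(\phi)\hat{x}-\cos(\phi)\hat{y}\right)\simeq(1+a_{0}\delta m)\left(\phi\hat{x}-\hat{y}\right)\simeq\phi\hat{x}-(1+a_{0}\delta m)\hat{y}\quad,$
where the term $a_{0}\delta m\,\phi\,\hat{x}$ has been dropped as second order
in the fluctuations.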
The velocity $v_{y}$ can be determined using Eqns. (A.5) and (A.6). Doing so,
we have to distinguish the Malthusian from the incompressible case. In the
Malthusian case, material is not conserved and $\nabla\cdot{\bf v}$ does not
vanish. In this case, $P=-\eta_{b}\nabla\cdot{\bf v}$, where $\eta_{b}$ is a
bulk viscosity and the contributions from viscosity and pressure are
irrelevant in the hydrodynamic limit. We then have
$\mu_{yy}v_{y}\simeq\left(\lambda_{s}^{v}-B\left(\frac{m-m_{0}}{m_{h}}\right)\right)\partial_{x}\phi-\left(\nu_{yy}+\frac{B}{2m_{h}}\right)\partial_{y}m\quad.$
(A.9)
Using this expression in (A.8) to eliminate $v_{y}$ we arrive in steady state
at Eq. (IV.1) of a Malthusian active smectic but with renormalized
coefficients
$\displaystyle D_{x}$ $\displaystyle=$
$\displaystyle-\lambda_{m}a_{0}+\frac{\lambda_{s}^{v}-B(m_{h}-m_{0})/m_{h}}{\mu_{yy}}$
(A.10) $\displaystyle D_{y}$ $\displaystyle=$ $\displaystyle
D_{m}+\frac{\nu_{yy}+B/(2m_{h})}{a_{0}\mu_{yy}}\quad.$ (A.11)
In the incompressible case the pressure acts as a Lagrange multiplier to
impose the incompressibility constraint $\nabla\cdot{\bf v}=0$. This modifies
the hydrodynamic modes, see section IX.
The velocity of dislocation $\alpha$ can also be written on symmetry grounds
to lowest order as
${\bf v}_{\alpha}=\psi{\bf
v}+B^{\prime}\left(\frac{m-m_{h}}{m_{h}}\right)b_{\alpha}\hat{\bf
n}\times\hat{z}$ (A.12)
where $m_{h}$ is the layer density in the homeostatic state; this expression
is equivalent to Eq. (V.1).
In the presence of a symmetry breaking field ${\bf H}$ rotation invariance is
broken. In this case additional symmetry breaking terms involving a tensor
$S_{ij}=g\hat{H}_{i}\hat{H}_{j}$ are permitted. These additional terms do not
affect Eq. (A.8) at linear order, see Eq. (IX.2). However, $S_{ij}$ allows for
an additional term in the dislocation velocity when rotation invariance is
broken
${\bf v}_{\alpha}=\psi{\bf
v}+B^{\prime}\left(\frac{m-m_{h}}{m_{h}}\right)b_{\alpha}\hat{\bf
n}\times\hat{z}+B^{\prime\prime}b_{\alpha}{\bf S}\cdot\hat{\bf
n}\times\hat{z}$ (A.13)
where the field $\bf H$ is aligned such that ${\bf
S}=gH^{2}\hat{x}\otimes\hat{x}$, which corresponds to Eq. (VIII.1). The
dislocation velocities thus obey Eq. (VIII.6) with
$\mu_{x}=B^{\prime}b_{\alpha}$ and $\mu_{y}=gH^{2}B^{\prime\prime}b_{\alpha}$
and we obtain Eq. (VIII.7).
## References
* (1) P. M. Chaikin and T. C. Lubensky, Principles of condensed matter physics (Cambridge University Press, Cambridge UK, 1995).
* (2) P. G. de Gennes, J. Prost, The physics of liquid crystals (Oxford University Press, Oxford, UK, 1993).
* (3) A. Caille, C. R. Acad. Sci., Ser. B 274, 891 (1972); P. G. de Gennes, J. Phys. (Paris), Colloq. 30, C9-65 (1969); L. D. Landau and E. M. Lifshitz, Statistical Physics, 2nd ed. (Pergamon, Oxford, 1969), p. 402; T. C. Lubensky, Phys. Rev. Lett. 29, 206 (1972).
* (4) G. F. Mazenko, S. Ramaswamy and J. Toner, Viscosities diverge as $1/\omega$ in smectic-A liquid crystals, Phys. Rev. Lett. 49, 51 (1982); Breakdown of Conventional Hydrodynamics for Smectic-A, Hexatic-B, and Cholesteric Liquid Crystals, Phys. Rev. A28, 1618 (1983).
* (5) T. Vicsek, A. Czirók, E. Ben-Jacob, I. Cohen, and O. Shochet, Novel type of phase transition in a system of self-Driven particle, Phys. Rev. Lett. 75, 1226 (1995).
* (6) J. Toner and Y. Tu, Long-range order in a two-dimensional dynamical XY model: how birds fly together. Phys. Rev. Lett. 75, 4326 (1995).
* (7) Y. Tu, M. Ulm, and J. Toner, Sound waves and the absence of Galilean invariance in flocks, Phys. Rev. Lett. 80, 4819 (1998).
* (8) J. Toner and Y. Tu, Flocks, herds, and schools: a quantitative theory of flocking, Phys. Rev. E 58, 4828(1998).
* (9) J. Toner, Y. Tu, and S. Ramaswamy, Hydrodynamics and phases of flocks. Ann. Phys. 318, 170 (2005).
* (10) J. Toner, Birth, death and flight: a theory of Malthusian flocks. Phys. Rev. Lett. 108, 088102 (2012).
* (11) F. Schweitzer, Brownian Agents and Active Particles: Collective Dynamics in the Natural and Social Sciences, Springer Series in Synergetics (Springer, New York, 2003).
* (12) S. Ramaswamy, The mechanics and statics of active matter. Ann. Rev. Condens. Matt. Phys. 1, 323-345 (2010).
* (13) M.C. Marchetti, J.F. Joanny, S. Ramaswamy, T.B. Liverpool, J. Prost, M. Rao, and R.A. Simha, Hydrodynamics of soft active matter, Rev. Mod. Phys. 85, 1143-1188 (2013).
* (14) C. Bechinger, R. Di Leonardo, H. Löwen, C. Reichhardt, G. Volpe, and G. Volpe, Active particles in complex and crowded environments, Rev. Mod. Phys. 88, 045006 (2016).
* (15) K. Kruse, J.-F. Joanny, F. Jülicher, J. Prost and K. Sekimoto, Asters, Vortices and Rotating Spirals in Active Gels of Polar Filaments, Phys. Rev. Lett. 92, 078101 (2004).
* (16) J. Prost, F. Jülicher and J.F. Joanny, Active Gel Physics, Nature Physics 11, 111 (2015).
* (17) S. Ramaswamy and R. A. Simha, Hydrodynamic Fluctuations and Instabilities in Ordered Suspensions of Self-Propelled Particles, Phys. Rev. Lett. 89, 058101 (2002); Statistical hydrodynamics of ordered suspensions of self-propelled particles: waves, giant number fluctuations and instabilities, Physica A 306, 262-269 (2002); R.A. Simha, Ph D thesis, Indian Institute of Science (2003).
* (18) T. C. Adhyapak, S. Ramaswamy, and J. Toner, Live soap: Stability, order, and fluctuations in apolar active smectics, Phys. Rev. Lett. 110, 118102 (2013).
* (19) L. Chen and J. Toner, Universality for moving stripes: A hydrodynamic theory of polar active smectics, Phys. Rev. Lett. 111, 088701 (2013).
* (20) R. Etournay, M. Popović, M. Merkel, A. Nandi, C. Blasse, B. Aigouy, H. Brandl, G. Myers, G. Salbreux, F. Jülicher, and S. Eaton, Interplay of cell dynamics and epithelial tension during morphogenesis of the Drosophila pupal wing, eLife 4:e07090 (2015).
* (21) G. Duclos, C. Blanch-Mercader, V. Yashunsky, G. Salbreux, J. F. Joanny, J. Prost, and P. Silberzan, Spontaneous shear flow in confined cellular nematics, Nature Physics 14, 728 (2018)
* (22) S. Hu, K. Dasbiswas, Z. Guo, Ye.-H. Tee, V. Thiagarajan, P. Hersen, T.-L. Chew, S. A. Safran, R. Zaidel-Bar and A. D. Bershadsky, Long-range self-organization of cytoskeletal myosin II filament stacks, Nature Cell Biol. 19 133 (2017).
* (23) K. Dasbiswas, S. Hu, F. Schnorrer, S. A. Safran and A. D. Bershadsky, Ordering of myosin II filaments driven by mechanical forces: experiments and theory, Phil. Trans. R. Soc. B 373 20170114 (2018).
* (24) S. Hu, H. Grobe, Z. Guo, Y-H. Wang, B. L. Doss, M. Pan, B. Ladoux, A. D. Bershadsky, and Ronen Zaidel-Bar, Reciprocal regulation of actomyosin organization and contractility in nonmuscle cells by tropomyosins and alpha-actinins, Mol Bio of the Cell, 30 2025 (2019).
* (25) L. Golubović and Z.-G. Wang, Anharmonic elasticity of smectics A and the Kardar-Parisi-Zhang model. Phys. Rev. Lett. 69, 2535 (1992); Kardar-Parisi-Zhang model and anomalous elasticity of two- and three-dimensional smectic- $A$ liquid crystals, Phys. Rev. E 49, 2567 (1994).
* (26) There are non-linearities for polar active smectics that are relevant in spin-wave theory in certain regimes of parameter space; see polarsm for details.
* (27) J. M. Kosterlitz, The critical properties of the two-dimensional XY model, J. Phys. C: Solid State Phys. 7 1046 (1974); J.M. Kosterlitz and D.J. Thouless, Long range order and metastability in two-dimensional solids and superfluids, J. Phys. C 5, L124 (1972);J.M. Kosterlitz and D.J. Thouless, Ordering metastability and phase transitions in two-dimensional systems, J. Phys. C 6, 1181 (1973).
* (28) P. S. Pershan, Dislocation effects in smectic A liquid crystals, J. Appl. Phys, 45, 1590 (1974).
* (29) J. Toner and D. R. Nelson, Smectic, cholesteric and Rayleigh-Benard order in two dimensions, Phys. Rev. B 23, 316 (1981).
* (30) D. R. Nelson and J. M. Kosterlitz, Universal Jump in the Superfluid Density of Two-Dimensional Superfluids, Phys. Rev. Lett. 39, 1201 (1977).
* (31) This term was originally introduced as an activity-induced tension for a single membrane with active pumps; see Ramaswamy2000 .
* (32) S. Ramaswamy, J. Toner, and J. Prost, Nonequilibrium fluctuations, traveling waves, and instabilities in active membranes, Phys. Rev. Lett. 84, 3494 (2000).
* (33) P.C. Martin, O. Parodi, and P.S. Pershan, Unified Hydrodynamic Theory for Crystals, Liquid Crystals, and Normal Fluids, Phys. Rev. A 6, 2401 (1972).
* (34) P. Curie, Sur la symetrie dans les phenomenes physiques, symetrie d’un champ electrique et d’un champ magnetique, Journal de physique theorique et appliquee, vol. 3, no 1, p. 393-415, (1894).
* (35) M. Basan, T. Risler, J.-F. Joanny, X. Sastre-Garau, J. Prost, Homeostatic competition drives tumor growth and metastasis nucleation, HFSP J. 3 (4), 265-272 (2009).
* (36) J. Ranft, M. Basan, J. Elgeti, J. F. Joanny, J. Prost, and F. Jülicher, Fluidization of tissues by cell division and apoptosis, Proceedings of the National Academy of Sciences 107 (49), 20863-20868, (2010).
* (37) G. T. Eisenhoffer and J. Rosenblatt, Bringing balance by force: live cell extrusion controls epithelial cell numbers, Trends Cell Biol, 23(4), 185-92 (2013).
# POD-ROMs for incompressible flows including snapshots of the temporal
derivative of the full order solution
Bosco García-Archilla Departamento de Matemática Aplicada II, Universidad de
Sevilla, Sevilla, Spain. Research is supported by Spanish MCINYU under grants
PGC2018-096265-B-I00 and PID2019-104141GB-I00 <EMAIL_ADDRESS> Volker John
Weierstrass Institute for Applied Analysis and Stochastics, Leibniz Institute
in Forschungsverbund Berlin e. V. (WIAS), Mohrenstr. 39, 10117 Berlin,
Germany. Freie Universität of Berlin, Department of Mathematics and Computer
Science, Arnimallee 6, 14195 Berlin, Germany. Julia Novo Departamento de
Matemáticas, Universidad Autónoma de Madrid, Spain. Research is supported by
Spanish MINECO under grants PID2019-104141GB-I00 and VA169P20
<EMAIL_ADDRESS>
###### Abstract
In this paper we study the influence of including snapshots that approximate
the velocity time derivative in the numerical approximation of the
incompressible
Navier-Stokes equations by means of proper orthogonal decomposition (POD)
methods. Our set of snapshots includes the velocity approximation at the
initial time from a full order mixed finite element method (FOM) together with
approximations to the time derivative at different times. The approximation of
the initial velocity can be replaced by the mean value of the velocities at
the different times, so that, when the method is applied to the fluctuations,
as is mostly done in practice, only approximations to the time derivatives are
included in the set of snapshots. For the POD method we study the differences
between projecting onto $L^{2}$ and $H^{1}$. In both cases pointwise in time
error bounds can be proved. Including grad-div stabilization both in the FOM
and the POD method, error bounds with constants independent of inverse powers
of the viscosity can be obtained.
AMS subject classifications. 65M12, 65M15, 65M60.
Keywords. incompressible Navier–Stokes equations, proper orthogonal
decomposition (POD), reduced order models (ROMs), snapshots of the temporal
derivative of the full order solution, grad-div stabilization, robust
pointwise in time estimates
## 1 Introduction
It is well known that the computational cost of direct numerical simulations,
also called full order methods (FOMs), can be reduced by using reduced order
models (ROMs). In this paper, we study ROMs based on proper orthogonal
decomposition (POD) methods, so-called POD-ROMs. The computation of the
reduced basis uses solutions of a FOM, so-called snapshots.
We study incompressible flow problems that are modeled by means of the
incompressible Navier–Stokes equations
$\begin{array}[]{rcll}\boldsymbol{u}_{t}-\nu\Delta\boldsymbol{u}+(\boldsymbol{u}\cdot\nabla)\boldsymbol{u}+\nabla
p&=&\boldsymbol{f}&\text{in }\ (0,T]\times\Omega,\\\
\nabla\cdot\boldsymbol{u}&=&0&\text{in }\ (0,T]\times\Omega,\end{array}$ (1)
in a bounded domain $\Omega\subset{\mathbb{R}}^{d}$, $d\in\\{2,3\\}$, with
initial condition $\boldsymbol{u}(0)=\boldsymbol{u}^{0}$. In (1),
$\boldsymbol{u}$ is the velocity field, $p$ the kinematic pressure, $\nu>0$
the kinematic viscosity coefficient, and $\boldsymbol{f}$ represents the
accelerations due to external body forces acting on the fluid. The
Navier–Stokes equations (1) have to be complemented with boundary conditions.
For simplicity, we only consider homogeneous Dirichlet boundary conditions
$\boldsymbol{u}=\boldsymbol{0}$ on $[0,T]\times\partial\Omega$.
This paper studies the impact of including approximations of the temporal
derivative of the velocity in the set of snapshots. The idea consists in
taking, in addition to the mixed finite element approximation to the velocity
at the initial time, $\left\\{\boldsymbol{u}_{h}(t_{0})\right\\}$, the time
derivatives of the mixed finite element approximations,
$\left\\{\boldsymbol{u}_{h,t}(t_{j})\right\\}$. These temporal derivatives can
be easily computed using the right-hand side of the mixed finite element
Galerkin equation. In the present paper we follow an idea presented in the
recent paper [22] in which it is shown that there is no need to include in the
set of snapshots other than one approximation to the velocity at a fixed time,
instead of the full set $\left\\{\boldsymbol{u}_{h}(t_{j})\right\\}_{j=0}^{M}$
as it is usually done in the literature. The numerical analysis in [22] is
carried out for the heat equation and with difference quotients,
$\left\\{(\boldsymbol{u}_{h}(t_{j})-\boldsymbol{u}_{h}(t_{j-1}))/(t_{j}-t_{j-1})\right\\}$,
approximating the time derivative. In the present paper we consider instead the
Galerkin time derivatives although the analysis for the difference quotients
case is essentially the same (or even simpler). Actually, in practice, any
approximation to the time derivative can work equally well. We also prove that
the snapshot at the initial value can be replaced by the mean value
$\overline{\boldsymbol{u}}_{h}=(M+1)^{-1}\sum_{j=0}^{M}\boldsymbol{u}_{h}(t_{j})$,
which can be more efficient in the numerical simulations. It is standard to
apply the POD method to the fluctuations
$\left\\{\boldsymbol{y}_{h}(t_{j})\right\\}_{j=0}^{M}=\left\\{\boldsymbol{u}_{h}(t_{j})-\overline{\boldsymbol{u}}_{h}\right\\}_{j=0}^{M}$
that have zero mean by definition. Then, with the method we propose, only
approximations to the derivatives are needed.
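As a minimal illustration (toy data throughout: the mass matrix, the right-hand sides, and the Euclidean inner product below are hypothetical placeholders for the actual finite element mass matrix, the assembled right-hand side of the semi-discrete Galerkin system, and the $X$-inner product), the following sketch computes Galerkin time-derivative snapshots by one mass-matrix solve per time level and extracts a POD basis from the proposed snapshot set:

```python
import numpy as np

# Toy sketch: snapshot set {u_h(t_0)} together with the Galerkin time
# derivatives {u_{h,t}(t_j)}, each obtained from one mass-matrix solve
#   M u_{h,t}(t_j) = F_j,
# where F_j stands for the assembled right-hand side of the semi-discrete
# system at time t_j. M, F, and u0 are hypothetical placeholder data.

rng = np.random.default_rng(0)
n, n_times = 40, 11                        # dofs, number of time levels
M = np.diag(rng.uniform(0.5, 1.5, n))      # placeholder (lumped) mass matrix
F = rng.standard_normal((n, n_times))      # placeholder right-hand sides F_j
u0 = rng.standard_normal(n)                # FOM velocity at t_0

u_t = np.linalg.solve(M, F)                # time-derivative snapshots

# POD basis from the SVD of the snapshot matrix; the plain SVD corresponds
# to the Euclidean inner product (an L^2- or H^1-POD would weight the
# snapshots by the corresponding Gramian).
Y = np.column_stack([u0, u_t])
U, S, _ = np.linalg.svd(Y, full_matrices=False)
r = 4
Phi = U[:, :r]                             # first r POD modes
```

In the fluctuation variant discussed above, `u0` would be replaced by the mean value of the velocity snapshots, so that only the time-derivative snapshots enter the set.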
Several works in the literature have already studied the subject of increasing
the set of snapshots with approximations to the time derivatives. However,
apart from [22], all of them include as snapshots the set of the
approximations at different times, instead of only one snapshot. Also,
starting with the pioneering paper [21], most of these papers include a
different type of approximation than considered in this paper, namely
difference quotients. In particular, to the best of our knowledge, this is the
first paper that studies the inclusion of the temporal derivatives of the
mixed finite element velocity approximations in order to generate the reduced
order basis for the incompressible Navier–Stokes equations. With this more
general setting we can deduce that in practice any approximation to the time
derivative produces essentially the same results.
The initial motivation for investigating a different approximation than
difference quotients in the set of snapshots is that the results for
difference quotients are ambivalent. On the one hand, from the theoretical
point of view, the inclusion of the difference quotients possesses some
advantages. First of all, it makes it possible to prove optimal error bounds
for the POD-ROM when the POD basis functions are based on the projection onto
the Hilbert
space $X=H_{0}^{1}(\Omega)^{d}$, see [21, 15, 27]. In this way, the standard
finite element error analysis is mimicked, in which the Ritz or Stokes
projection is used to split the error in a projection error and a discrete
remainder. It was observed that if the POD basis functions are based on the
projection onto the Hilbert space $X=L^{2}(\Omega)^{d}$, the difference
quotients are not needed to prove optimal error bounds in certain norms, see
[4, 15, 27, 23]. However, as pointed out in [19], even in this case, the
inclusion of the difference quotients makes it possible to obtain
pointwise-in-time estimates that generally cannot be proved if there are no
difference quotients in
the set of snapshots. On the other hand, from the numerical point of view, it
is not clear that the difference quotients should be included in the actual
simulations with the POD-ROM. In fact, it is reported in [18, 17] that the
POD-ROM without the difference quotients performs considerably better than
with the difference quotients.
Trying to keep the theoretical advantages of including approximations of the
temporal derivative of the velocity in the set of snapshots but relaxing their
drawbacks in practical simulations, we study in this paper, both theoretically
and numerically, the inclusion of time derivatives of the discrete velocity.
As in [22], our approach needs half the number of snapshots of the standard
POD difference quotient approach. Also, as in the difference quotient case, we
are
able to get pointwise in time estimates using as projecting space
$X=H_{0}^{1}(\Omega)^{d}$ and $L^{2}(\Omega)^{d}$. To this end, we follow [22,
Lemma 3.3] (see also [19, Lemma 3.6]) to prove that the $X$-norm of a function
at any point in time (say $t_{j}$) is bounded in terms of the $X$-norm of its
value at $t_{0}$ plus the mean values of its time derivative taken at a set of
times (say $\left\\{t_{0},t_{1},\ldots,t_{M}\right\\}$), up to an error that
tends to zero with the length of the time step. From the numerical
point of view, including the snapshots of the time derivatives avoids the
potential problem of performing badly conditioned operations in the
computation of the snapshots like
$\left\\{(\boldsymbol{u}_{h}(t_{j})-\boldsymbol{u}_{h}(t_{j-1}))/(t_{j}-t_{j-1})\right\\}$
because the numerator may suffer from numerical cancellation and is then
divided by a small denominator.
Finally, we mimic model reduction ideas coming from dynamical systems, where
using snapshots from the time derivative is a more common approach, see for
example [20].
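The cancellation issue can be seen in a few lines (toy scalar example, with $\sin$ standing in for a smooth velocity component and $\cos$ for its exact time derivative, playing the role of the Galerkin time derivative):

```python
import numpy as np

# Toy illustration of the cancellation issue: for a smooth u(t), the
# difference quotient (u(t_j) - u(t_{j-1}))/(t_j - t_{j-1}) loses accuracy
# once the time step is so small that subtracting nearly equal values is
# dominated by rounding error, whereas evaluating an exact derivative
# expression does not degrade.

def dq_error(dt, t=1.0):
    """Error of the forward difference quotient of sin at time t."""
    dq = (np.sin(t + dt) - np.sin(t)) / dt
    return abs(dq - np.cos(t))

err_moderate = dq_error(1.0e-6)   # truncation-dominated regime
err_tiny = dq_error(1.0e-13)      # roundoff-dominated regime
```

In double precision, the error in the roundoff-dominated regime is typically several orders of magnitude larger than in the truncation-dominated one, even though the time step is much smaller.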
For the error analysis in the present paper, instead of considering a concrete
fully discrete scheme from which the values
$\left\\{\boldsymbol{u}_{h}(t_{j})\right\\}$ are taken, we consider a
continuous-in-time method, which has some advantages. In practice, one always
computes the snapshots with a fully discrete method but the error analysis
based on the continuous-in-time method holds for any time integrator used in
the FOM. With this approach one can use different time steps for the FOM,
which produces the snapshots, and for the POD-ROM method. Our error analysis
takes into account the temporal error coming from the POD-ROM. Finally,
following [23], we analyze the case in which stabilized approximations are
computed both for the FOM and the POD-ROM. More precisely, the considered
finite element method is based on a Galerkin discretization plus grad-div
stabilization with pairs of inf-sup stable elements. For the POD-ROM we also
use grad-div stabilization. In this way, the constants in the error bounds for
the snapshots do not depend explicitly on inverse powers of the viscosity,
i.e., they do not blow up for small viscosity coefficients, see [8]. Adapting
the results from [23], the same holds for the error bounds of the POD-ROM. The
importance of such so-called robust methods is stated in the survey [9]: In
the case of small viscosity coefficients and coarse grids, only robust
estimates provide useful information about the behavior of a numerical method
on coarse grids if the analytic solution is smooth.
In the numerical studies, we compare the different approaches obtained by
taking $X=H_{0}^{1}(\Omega)^{d}$ and $X=L^{2}(\Omega)^{d}$ in combination with
one of the following sets: the set of snapshots at different times, the set of
difference quotients, and the set of Galerkin time derivatives. We cannot
deduce from these studies that any of the approaches is much better than the
other ones, and the necessary comprehensive numerical studies are outside the
scope of this paper, whose focus is a rigorous numerical analysis from which
interesting properties and sharp bounds for the different methods can be
deduced.
The paper is organized as follows. In Section 2 we state some preliminaries
and notations. The POD method and some a priori bounds for the projection of
the FOM approximation onto the POD space are shown in Section 3. The analysis
of the POD method is included in Section 4 with a first subsection for the
case $X=H_{0}^{1}(\Omega)^{d}$ and a second one for the case
$X=L^{2}(\Omega)^{d}$. As stated above, Section 5 is devoted to studying the
performance of the methods with some numerical experiments. Finally, we have
included an appendix in which we get robust bounds for the second time
derivative of the FOM approximation in some norms, since this time derivative
appears in our bounds in the a priori error analysis.
## 2 Preliminaries and notations
Standard symbols will be used for Lebesgue and Sobolev spaces, with the usual
convention that $W^{s,2}(\Omega)=H^{s}(\Omega)$, $s\geq 1$. The inner product
in $L^{2}(\Omega)^{d}$, $d\geq 1$, is denoted by $(\cdot,\cdot)$.
The following Sobolev imbeddings [1] will be used in the analysis: For
$q\in[1,\infty)$, there exists a constant $C=C(\Omega,q)$ such that
$\|v\|_{L^{q^{\prime}}}\leq
C\|v\|_{W^{s,q}},\,\,\quad\frac{1}{q^{\prime}}\geq\frac{1}{q}-\frac{s}{d}>0,\quad
q<\infty,\quad v\in W^{s,q}(\Omega)^{d}.$ (2)
We will denote by $C_{p}$ the constant in the Poincaré inequality
$\|\boldsymbol{v}\|_{0}\leq
C_{p}\|\nabla\boldsymbol{v}\|_{0},\quad\boldsymbol{v}\in
H_{0}^{1}(\Omega)^{d}.$ (3)
The following inequality can be found in [16, Remark 3.35]
$\|\nabla\cdot\boldsymbol{v}\|_{0}\leq\|\nabla\boldsymbol{v}\|_{0},\quad\boldsymbol{v}\in
H_{0}^{1}(\Omega)^{d}.$ (4)
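A short sketch of the argument behind (4) (added here for completeness, not
taken from [16]; one argues for smooth compactly supported fields and extends
by density): two integrations by parts, with vanishing boundary terms,
followed by the Cauchy–Schwarz inequality give
$\|\nabla\cdot\boldsymbol{v}\|_{0}^{2}=\sum_{i,j=1}^{d}\int_{\Omega}\partial_{i}v_{i}\,\partial_{j}v_{j}\,dx=\sum_{i,j=1}^{d}\int_{\Omega}\partial_{j}v_{i}\,\partial_{i}v_{j}\,dx\leq\|\nabla\boldsymbol{v}\|_{0}^{2}.$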
Let us denote by ${\boldsymbol{V}}=H_{0}^{1}(\Omega)^{d}$ and
$Q=L_{0}^{2}(\Omega)=\\{q\in L^{2}(\Omega)\ \mid\ (q,1)=0\\}$.
Let $\mathcal{T}_{h}=(\sigma_{j}^{h},\phi_{j}^{h})_{j\in J_{h}}$, $h>0$, be a
family of partitions of $\overline{\Omega}$, where $h$ denotes the maximum
diameter of the mesh cells $\sigma_{j}^{h}\in\mathcal{T}_{h}$, and
$\phi_{j}^{h}$ are the mappings from the reference simplex $\sigma_{0}$ onto
$\sigma_{j}^{h}$. We shall assume that the family of partitions is shape-
regular and quasi-uniform. On these partitions, we define the following finite
element spaces
$\displaystyle Y_{h}^{l}$ $\displaystyle=$ $\displaystyle\left\\{v_{h}\in
C^{0}(\overline{\Omega})\ \mid\ {v_{h}}_{\mid_{K}}\in{\mathbb{P}}_{l}(K),\
\forall\ K\in\mathcal{T}_{h}\right\\},\ l\geq
1,\quad{\boldsymbol{Y}}_{h}^{l}=\left(Y_{h}^{l}\right)^{d},$
$\displaystyle{\boldsymbol{X}}_{h}^{l}$ $\displaystyle=$
$\displaystyle{\boldsymbol{Y}}_{h}^{l}\cap H_{0}^{1}(\Omega)^{d},\quad
Q_{h}^{l}=Y_{h}^{l}\cap L_{0}^{2}(\Omega),$
$\displaystyle{\boldsymbol{V}}_{h,l}$ $\displaystyle=$
$\displaystyle{\boldsymbol{X}}_{h}^{l}\cap\left\\{{\boldsymbol{v}}_{h}\in
H_{0}^{1}(\Omega)^{d}\ \mid\ (q_{h},\nabla\cdot{\boldsymbol{v}}_{h})=0\
\forall\ q_{h}\in Q_{h}^{l-1}\right\\},\quad l\geq 2.$ (5)
The space ${\boldsymbol{V}}_{h,l}$ is the space of discretely divergence-free
functions.
Since the family of partitions is quasi-uniform, the following inverse
inequality holds for each $\boldsymbol{v}_{h}\in Y_{h}^{l}$, e.g., see [6,
Theorem 3.2.6],
$|\boldsymbol{v}_{h}|_{W^{m,p}(K)}\leq
c_{\mathrm{inv}}h_{K}^{n-m-d\left(\frac{1}{q}-\frac{1}{p}\right)}|\boldsymbol{v}_{h}|_{W^{n,q}(K)},$
(6)
where $0\leq n\leq m\leq 1$, $1\leq q\leq p\leq\infty$, and $h_{K}$ is the
diameter of $K\in\mathcal{T}_{h}$.
The analysis uses a modified Stokes projection $\boldsymbol{s}_{h}^{m}\ :\
{\boldsymbol{V}}\rightarrow{\boldsymbol{V}}_{h,l}$ that was introduced in [7]
and that is defined by
$(\nabla\boldsymbol{s}_{h}^{m},\nabla\boldsymbol{\varphi}_{h})=(\nabla\boldsymbol{u},\nabla\boldsymbol{\varphi}_{h}),\quad\forall\
\boldsymbol{\varphi}_{h}\in\boldsymbol{V}_{h,l}.$ (7)
This projection satisfies the following error bound, see [7],
$\|\boldsymbol{u}-\boldsymbol{s}_{h}^{m}\|_{0}+h\|\boldsymbol{u}-\boldsymbol{s}_{h}^{m}\|_{1}\leq
C\|\boldsymbol{u}\|_{j}h^{j},\quad 1\leq j\leq l+1.$ (8)
From [5], we also have
$\|\nabla\boldsymbol{s}_{h}^{m}\|_{\infty}\leq
C\|\nabla\boldsymbol{u}\|_{\infty},$ (9)
and from [12, Lemma 3.8]
$\displaystyle\|\boldsymbol{s}_{h}^{m}\|_{\infty}$ $\displaystyle\leq$
$\displaystyle C(\|\boldsymbol{u}\|_{d-2}\|\boldsymbol{u}\|_{2})^{1/2},$ (10)
$\displaystyle\|\nabla\boldsymbol{s}_{h}^{m}\|_{L^{2d/(d-1)}}$
$\displaystyle\leq$ $\displaystyle
C\bigl{(}\|\boldsymbol{u}\|_{1}\|\boldsymbol{u}\|_{2}\bigr{)}^{1/2},$ (11)
where all constants $C$ in (9) – (11) are independent of $\nu$.
We consider the mixed finite element pair known as Hood–Taylor elements [2,
28] $({\boldsymbol{X}}_{h}^{l},Q_{h}^{l-1})$, $l\geq 2$. For these elements a
uniform inf-sup condition is satisfied (see [2]), that is, there exists a
constant $\beta_{\rm is}>0$ independent of the mesh size $h$ such that
$\inf_{q_{h}\in
Q_{h}^{l-1}}\sup_{\boldsymbol{v}_{h}\in{\boldsymbol{X}}_{h}^{l}}\frac{(q_{h},\nabla\cdot\boldsymbol{v}_{h})}{\|\boldsymbol{v}_{h}\|_{1}\|q_{h}\|_{L^{2}/{\mathbb{R}}}}\geq\beta_{\rm{is}}.$
(12)
As a direct method, or full order method, we consider a Galerkin method with
grad-div stabilization. The semi-discrete method reads as follows: Find
$(\boldsymbol{u}_{h},p_{h})\in{\boldsymbol{X}}_{h}^{l}\times Q_{h}^{l-1}$ such
that
$\displaystyle\left(\boldsymbol{u}_{h,t},\boldsymbol{v}_{h}\right)+\nu(\nabla\boldsymbol{u}_{h},\nabla\boldsymbol{v}_{h})+b(\boldsymbol{u}_{h},\boldsymbol{u}_{h},\boldsymbol{v}_{h})$
(13)
$\displaystyle-(p_{h},\nabla\cdot\boldsymbol{v}_{h})+\mu(\nabla\cdot\boldsymbol{u}_{h},\nabla\cdot\boldsymbol{v}_{h})$
$\displaystyle=$
$\displaystyle({\boldsymbol{f}},\boldsymbol{v}_{h})\quad\forall\
\boldsymbol{v}_{h}\in{\boldsymbol{X}}_{h}^{l},$
$\displaystyle(\nabla\cdot\boldsymbol{u}_{h},q_{h})$ $\displaystyle=$
$\displaystyle 0\quad\forall\ q_{h}\in Q_{h}^{l-1},$
where $\mu$ is the positive grad-div stabilization parameter.
It is well-known that considering the discretely divergence-free space
$\boldsymbol{V}_{h,l}$, we can remove the pressure from (13) since
$\boldsymbol{u}_{h}\in\boldsymbol{V}_{h,l}$ satisfies
$\left(\boldsymbol{u}_{h,t},\boldsymbol{v}_{h}\right)+\nu(\nabla\boldsymbol{u}_{h},\nabla\boldsymbol{v}_{h})+b(\boldsymbol{u}_{h},\boldsymbol{u}_{h},\boldsymbol{v}_{h})+\mu(\nabla\cdot\boldsymbol{u}_{h},\nabla\cdot\boldsymbol{v}_{h})=({\boldsymbol{f}},\boldsymbol{v}_{h}),\quad\forall\
\boldsymbol{v}_{h}\in{\boldsymbol{V}}_{h,l}.$ (14)
For this method the following bound holds, see [8],
$\|\boldsymbol{u}(\cdot,t)-\boldsymbol{u}_{h}(\cdot,t)\|_{0}+h\|\boldsymbol{u}(\cdot,t)-\boldsymbol{u}_{h}(\cdot,t)\|_{1}\leq
C(\boldsymbol{u},p,l+1)h^{l},\quad t\in(0,T],$ (15)
where the constant $C(\boldsymbol{u},p,l+1)$ does not explicitly depend on
inverse powers of $\nu$. Actually, only the first term on the left-hand side
of (15) is considered in [8] but the estimate for the second term follows then
from (8) and the inverse inequality (6). Numerical studies presented in [9]
show that the estimate from [8] is sharp. From the error analysis performed in
[8], it can be seen that an optimal order for the error of the velocity
gradient, in $L^{2}(0,T;L^{2}(\Omega)^{d})$, is obtained with a constant that
depends on $\nu^{-1}$, i.e., this estimate is not robust.
## 3 Proper orthogonal decomposition
We consider a POD method. Let us fix $T>0$ and $M>0$ and take $\Delta t=T/M$.
For $N=M+1$ we define the following space
${\cal\boldsymbol{U}}=\mbox{span}\left\\{\boldsymbol{y}_{h}^{1},\boldsymbol{y}_{h}^{2},\ldots,\boldsymbol{y}_{h}^{N}\right\\}=\mbox{span}\left\\{\sqrt{N}\boldsymbol{u}_{h}^{0},\tau\boldsymbol{u}_{h,t}^{1},\ldots,\tau\boldsymbol{u}_{h,t}^{M}\right\\},$
where we use the notation
$\boldsymbol{u}_{h}^{j}=\boldsymbol{u}_{h}(\cdot,t_{j})$ for the
approximations at time instance $t_{j}=j\Delta t$ and
$\boldsymbol{u}_{h,t}^{j}=\boldsymbol{u}_{h,t}(\cdot,t_{j})$ are the snapshots
of the temporal derivatives. The factor $\tau$ in front of the temporal
derivatives is a time scale and it makes the snapshots dimensionally correct,
i.e., all snapshots possess the same physical unit. Let $d_{v}$ be the
dimension of $\cal\boldsymbol{U}$.
The correlation matrix corresponding to the snapshots is given by
$K_{v}=((k_{i,j}^{v}))\in{\mathbb{R}}^{N\times N}$, with the entries
$k_{i,j}^{v}=\frac{1}{N}\left(\boldsymbol{y}_{h}^{i},\boldsymbol{y}_{h}^{j}\right)_{X},\quad
i,j=1,\ldots,N,$
and $(\cdot,\cdot)_{X}$ is the inner product in $X$, which is either
$L^{2}(\Omega)^{d}$ or $H_{0}^{1}(\Omega)^{d}$. Following [21], we denote by
$\lambda_{1}\geq\lambda_{2}\geq\ldots\geq\lambda_{d_{v}}>0$ the positive
eigenvalues of $K_{v}$ and by
$\boldsymbol{v}_{1},\ldots,\boldsymbol{v}_{d_{v}}\in{\mathbb{R}}^{N}$
associated eigenvectors of Euclidean norm $1$. Then, the (orthonormal) POD
basis functions of $\cal\boldsymbol{U}$ are given by
$\boldsymbol{\varphi}_{k}=\frac{1}{\sqrt{N}}\frac{1}{\sqrt{\lambda_{k}}}\sum_{j=1}^{N}v_{k}^{j}\boldsymbol{y}_{h}^{j},$
(16)
where $v_{k}^{j}$ is the $j$-th component of the eigenvector
$\boldsymbol{v}_{k}$. The following error estimate is known from [21,
Proposition 1]
$\frac{1}{N}\sum_{j=1}^{N}\left\|\boldsymbol{y}_{h}^{j}-\sum_{k=1}^{r}(\boldsymbol{y}_{h}^{j},\boldsymbol{\varphi}_{k})_{X}\boldsymbol{\varphi}_{k}\right\|_{X}^{2}=\sum_{k=r+1}^{d_{v}}\lambda_{k},$
from which one can deduce
$\left\|\boldsymbol{u}_{h}^{0}-\sum_{k=1}^{r}(\boldsymbol{u}_{h}^{0},\boldsymbol{\varphi}_{k})_{X}\boldsymbol{\varphi}_{k}\right\|_{X}^{2}+\frac{\tau^{2}}{M+1}\sum_{j=1}^{M}\left\|\boldsymbol{u}_{h,t}^{j}-\sum_{k=1}^{r}(\boldsymbol{u}_{h,t}^{j},\boldsymbol{\varphi}_{k})_{X}\boldsymbol{\varphi}_{k}\right\|_{X}^{2}=\sum_{k=r+1}^{d_{v}}\lambda_{k}.$
(17)
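As a concrete illustration, the method of snapshots behind (16) and the error identity (17) can be sketched with NumPy; the random snapshot matrix and the identity Gram matrix below are purely illustrative stand-ins for finite element data:

```python
import numpy as np

rng = np.random.default_rng(0)
m, N = 50, 8                       # spatial dofs, number of snapshots
Y = rng.standard_normal((m, N))    # columns are the snapshots y_h^1, ..., y_h^N
G = np.eye(m)                      # Gram matrix of the X-inner product (identity as L^2 surrogate)

# correlation matrix with entries k_ij = (1/N) (y_h^i, y_h^j)_X
K = (Y.T @ G @ Y) / N

lam, V = np.linalg.eigh(K)         # eigh returns eigenvalues in ascending order
lam, V = lam[::-1], V[:, ::-1]     # reorder so that lambda_1 >= ... >= lambda_N > 0

# POD basis (16): phi_k = (1 / (sqrt(N) sqrt(lambda_k))) sum_j v_k^j y_h^j
Phi = (Y @ V) / (np.sqrt(N) * np.sqrt(lam))

# projection error onto the first r modes; by (17) it equals sum_{k>r} lambda_k
r = 3
E = Y - Phi[:, :r] @ (Phi[:, :r].T @ (G @ Y))
err = np.mean(np.sum(E * (G @ E), axis=0))
```

In exact arithmetic `Phi.T @ G @ Phi` is the identity matrix and `err` coincides with `lam[r:].sum()`, mirroring (17).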
We will denote the mass matrix of the POD basis by
$M^{v}=((m_{i,j}^{v}))\in{\mathbb{R}}^{d_{v}\times d_{v}}$, where
$m_{i,j}^{v}=(\boldsymbol{\varphi}_{j},\boldsymbol{\varphi}_{i})_{X}$. In the
case $X=H_{0}^{1}(\Omega)^{d}$, for any
$\boldsymbol{v}\in{\cal\boldsymbol{U}}$, the following inverse inequality
holds, see [21, Lemma 2],
$\|\nabla\boldsymbol{v}\|_{0}\leq\sqrt{\|(M^{v})^{-1}\|_{2}}\|\boldsymbol{v}\|_{0}.$
(18)
The stiffness matrix of the POD basis is given by
$S^{v}=((s_{i,j}^{v}))\in{\mathbb{R}}^{d_{v}\times d_{v}}$, with the entries
$s_{i,j}^{v}=(\nabla\boldsymbol{\varphi}_{j},\nabla\boldsymbol{\varphi}_{i})_{X}$.
If $X=L^{2}(\Omega)^{d}$, the following inequality holds for all
$\boldsymbol{v}\in{\cal\boldsymbol{U}}$, see [21, Lemma 2],
$\|\nabla\boldsymbol{v}\|_{0}\leq\sqrt{\|S^{v}\|_{2}}\|\boldsymbol{v}\|_{0}.$
(19)
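Both inverse estimates are plain linear algebra once the POD mass and stiffness matrices are available. The following sketch checks (19) in a toy discrete setting, where a random matrix `D` plays the role of the gradient operator (all names and data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, r = 40, 5
D = rng.standard_normal((m, m))           # discrete surrogate of the gradient

# L^2-orthonormal basis of a toy POD space (columns), via QR
Phi, _ = np.linalg.qr(rng.standard_normal((m, r)))

# stiffness matrix s_ij = (D phi_j, D phi_i)
S = Phi.T @ (D.T @ D) @ Phi

# inequality (19): ||D v|| <= sqrt(||S||_2) ||v|| for every v in the span
a = rng.standard_normal(r)
v = Phi @ a
lhs = np.linalg.norm(D @ v)
rhs = np.sqrt(np.linalg.norm(S, 2)) * np.linalg.norm(v)
```

The estimate is sharp: equality is attained when the coefficient vector `a` is an eigenvector of `S` associated with its largest eigenvalue.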
The following lemma will be the basis for proving pointwise in time estimates.
We follow [22, Lemma 3.3] (see also [19, Lemma 3.6]).
###### Lemma 1
Let $T>0$, $\Delta t=T/M$, $t_{n}=n\Delta t$, $n=0,1,\ldots,M$, let $X$ be a
normed space, $\boldsymbol{z}(t,\boldsymbol{x})$ be a function defined for
$t\in[0,T]$, $\boldsymbol{x}\in\Omega$ with
$\boldsymbol{z}^{n}=\boldsymbol{z}(t_{n},\cdot)\in X$,
$\boldsymbol{z}_{t}^{n}=\boldsymbol{z}_{t}(t_{n},\cdot)\in X$ and
$\boldsymbol{z}_{tt}(t,\cdot)\in X$ for any $t\in[0,T]$. Then, the following
estimate holds
$\max_{0\leq k\leq M}\|\boldsymbol{z}^{k}\|_{X}^{2}\leq 3\|\boldsymbol{z}^{0}\|_{X}^{2}+\frac{3T^{2}}{M}\sum_{n=1}^{M}\|\boldsymbol{z}_{t}^{n}\|_{X}^{2}+\frac{4T}{3}(\Delta t)^{2}\int_{0}^{T}\|\boldsymbol{z}_{tt}(s)\|_{X}^{2}\ ds.$ (20)
###### Proof:
For each $k$ we have
$\boldsymbol{z}^{k}=\boldsymbol{z}^{0}+\int_{t_{0}}^{t_{k}}\boldsymbol{z}_{t}\
ds.$ (21)
Adding and subtracting terms leads to
$\boldsymbol{z}^{k}=\boldsymbol{z}^{0}+\int_{t_{0}}^{t_{k}}\boldsymbol{z}_{t}\
ds=\boldsymbol{z}^{0}+\sum_{n=0}^{k-1}\Delta
t\boldsymbol{z}_{t}^{n}+\sum_{n=0}^{k-1}\int_{t_{n}}^{t_{n+1}}\left(\boldsymbol{z}_{t}(s)-\boldsymbol{z}_{t}(t_{n})\right)\
ds.$ (22)
To bound the last term on the right-hand side, we first notice that
$\boldsymbol{z}_{t}(s)-\boldsymbol{z}_{t}(t_{n})=\int_{t_{n}}^{s}\boldsymbol{z}_{tt}(\sigma)\
d\sigma.$
With the Cauchy–Schwarz inequality, it follows that
$\displaystyle\left\|\int_{t_{n}}^{t_{n+1}}(\boldsymbol{z}_{t}(s)-\boldsymbol{z}_{t}(t_{n}))\
ds\right\|_{X}$ $\displaystyle\leq$
$\displaystyle\int_{t_{n}}^{t_{n+1}}\left\|\boldsymbol{z}_{t}(s)-\boldsymbol{z}_{t}(t_{n})\right\|_{X}\
ds$ $\displaystyle\leq$
$\displaystyle\int_{t_{n}}^{t_{n+1}}(s-t_{n})^{1/2}\left(\int_{t_{n}}^{s}\left\|\boldsymbol{z}_{tt}(\sigma)\right\|_{X}^{2}\
d\sigma\right)^{1/2}\ ds$ $\displaystyle\leq$ $\displaystyle\frac{2}{3}(\Delta
t)^{3/2}\left(\int_{t_{n}}^{t_{n+1}}\left\|\boldsymbol{z}_{tt}(s)\right\|_{X}^{2}\
ds\right)^{1/2}.$
Consequently, for the last term on the right-hand side of (22), we obtain,
using the Cauchy–Schwarz inequality for sums,
$\displaystyle\left\|\sum_{n=0}^{k-1}\left(\int_{t_{n}}^{t_{n+1}}(\boldsymbol{z}_{t}(s)-\boldsymbol{z}_{t}(t_{n}))\
ds\right)\right\|_{X}$ $\displaystyle\leq$
$\displaystyle\sum_{n=0}^{k-1}\left\|\int_{t_{n}}^{t_{n+1}}(\boldsymbol{z}_{t}(s)-\boldsymbol{z}_{t}(t_{n}))\
ds\right\|_{X}$ $\displaystyle\leq$
$\displaystyle\frac{2}{3}\sum_{n=0}^{k-1}\left[(\Delta
t)^{3/2}\left(\int_{t_{n}}^{t_{n+1}}\left\|\boldsymbol{z}_{tt}(s)\right\|_{X}^{2}\
ds\right)^{1/2}\right]$ $\displaystyle\leq$
$\displaystyle\frac{2}{3}\left(\sum_{n=0}^{k-1}(\Delta
t)^{3}\right)^{1/2}\left(\int_{t_{0}}^{t_{k}}\left\|\boldsymbol{z}_{tt}(s)\right\|_{X}^{2}\
ds\right)^{1/2}$ $\displaystyle\leq$ $\displaystyle\frac{2}{3}T^{1/2}\Delta
t\left(\int_{t_{0}}^{t_{k}}\left\|\boldsymbol{z}_{tt}(s)\right\|_{X}^{2}\
ds\right)^{1/2}.$
Taking norms in (22) gives the estimate
$\displaystyle\|\boldsymbol{z}^{k}\|_{X}$ $\displaystyle\leq$
$\displaystyle\|\boldsymbol{z}^{0}\|_{X}+\sum_{n=0}^{k-1}\Delta
t\|\boldsymbol{z}_{t}^{n}\|_{X}+\frac{2}{3}T^{1/2}\Delta
t\left(\int_{t_{0}}^{t_{k}}\left\|\boldsymbol{z}_{tt}(s)\right\|_{X}^{2}\
ds\right)^{1/2}$ $\displaystyle\leq$
$\displaystyle\|\boldsymbol{z}^{0}\|_{X}+T^{1/2}(\Delta
t)^{1/2}\left(\sum_{n=1}^{M}\|\boldsymbol{z}_{t}^{n}\|_{X}^{2}\right)^{1/2}+\frac{2}{3}T^{1/2}\Delta
t\left(\int_{0}^{T}\left\|\boldsymbol{z}_{tt}(s)\right\|_{X}^{2}\
ds\right)^{1/2},$
from which we conclude (20). $\Box$
The above lemma also holds true if the initial value is replaced by the mean value.
###### Lemma 2
With the assumptions of Lemma 1, it holds
$\max_{0\leq k\leq M}\|\boldsymbol{z}^{k}\|_{X}^{2}\leq 3\|\overline{\boldsymbol{z}}\|_{X}^{2}+\frac{12T^{2}}{M}\sum_{n=1}^{M}\|\boldsymbol{z}_{t}^{n}\|_{X}^{2}+\frac{16T}{3}(\Delta t)^{2}\int_{0}^{T}\|\boldsymbol{z}_{tt}(s)\|_{X}^{2}\ ds,$ (23)
where
$\overline{\boldsymbol{z}}=\frac{1}{M+1}\sum_{j=0}^{M}\boldsymbol{z}^{j}$.
###### Proof:
We first observe that (21) gives
$\overline{\boldsymbol{z}}=\boldsymbol{z}^{0}+\frac{1}{M+1}\left\\{\int_{t_{0}}^{t_{1}}\boldsymbol{z}_{t}\
ds+\int_{t_{0}}^{t_{2}}\boldsymbol{z}_{t}\
ds+\ldots+\int_{t_{0}}^{t_{M}}\boldsymbol{z}_{t}\ ds\right\\},$
so that
$\|\boldsymbol{z}^{0}\|_{X}\leq\|\overline{\boldsymbol{z}}\|_{X}+\frac{1}{M+1}\sum_{j=1}^{M}\left\|\int_{t_{0}}^{t_{j}}\boldsymbol{z}_{t}\
ds\right\|_{X}.$
Since we have obtained in the proof of Lemma 1
$\left\|\int_{t_{0}}^{t_{j}}\boldsymbol{z}_{t}\ ds\right\|_{X}\leq T^{1/2}(\Delta
t)^{1/2}\left(\sum_{n=1}^{M}\|\boldsymbol{z}_{t}^{n}\|_{X}^{2}\right)^{1/2}+\frac{2}{3}T^{1/2}\Delta
t\left(\int_{0}^{T}\left\|\boldsymbol{z}_{tt}(s)\right\|_{X}^{2}\
ds\right)^{1/2},$ (24)
it follows that
$\|\boldsymbol{z}^{0}\|_{X}\leq\|\overline{\boldsymbol{z}}\|_{X}+T^{1/2}(\Delta
t)^{1/2}\left(\sum_{n=1}^{M}\|\boldsymbol{z}_{t}^{n}\|_{X}^{2}\right)^{1/2}+\frac{2}{3}T^{1/2}\Delta
t\left(\int_{0}^{T}\left\|\boldsymbol{z}_{tt}(s)\right\|_{X}^{2}\
ds\right)^{1/2}.$ (25)
Now, taking into account (21), (24), and (25), we obtain for any $k$
$\|\boldsymbol{z}^{k}\|_{X}\leq\|\overline{\boldsymbol{z}}\|_{X}+2T^{1/2}(\Delta
t)^{1/2}\left(\sum_{n=1}^{M}\|\boldsymbol{z}_{t}^{n}\|_{X}^{2}\right)^{1/2}+\frac{4}{3}T^{1/2}\Delta
t\left(\int_{0}^{T}\left\|\boldsymbol{z}_{tt}(s)\right\|_{X}^{2}\
ds\right)^{1/2},$
from which we conclude (23). $\Box$
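A quick numerical sanity check of the estimate (20) for a smooth scalar function confirms the inequality; the particular choice $z(t)=\sin(3t)$, $T=1$, $M=20$ is arbitrary, and the integral of $z_{tt}^{2}$ is evaluated in closed form:

```python
import numpy as np

T, M = 1.0, 20
dt = T / M
t = dt * np.arange(M + 1)            # t_0, ..., t_M

z   = lambda s: np.sin(3 * s)        # test function and its derivatives
zt  = lambda s: 3 * np.cos(3 * s)
ztt = lambda s: -9 * np.sin(3 * s)

lhs = np.max(z(t) ** 2)              # max_k |z^k|^2

# int_0^T z_tt(s)^2 ds = 81 (T/2 - sin(6T)/12), computed analytically
integral = 81 * (T / 2 - np.sin(6 * T) / 12)

rhs = (3 * z(0) ** 2
       + (3 * T ** 2 / M) * np.sum(zt(t[1:]) ** 2)
       + (4 * T / 3) * dt ** 2 * integral)
```

Here the right-hand side of (20) overestimates the left-hand side by a wide margin, which is consistent with the crude constants obtained by squaring the triangle inequality.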
###### Remark 1
In the sequel we will apply Lemma 1 and assume that we have in the set of
snapshots $\boldsymbol{y}_{h}^{1}=\sqrt{N}\boldsymbol{u}_{h}^{0}$. However,
applying Lemma 2 instead of Lemma 1, we can substitute the first snapshot by
$\boldsymbol{y}_{h}^{1}=\sqrt{N}\overline{\boldsymbol{u}}_{h}$, where
$\overline{\boldsymbol{u}}_{h}$ is the mean value
$\overline{\boldsymbol{u}}_{h}=\frac{1}{M+1}\sum_{j=0}^{M}\boldsymbol{u}_{h}^{j}$,
to obtain the same results. The set of snapshots in this case is
${\cal\boldsymbol{U}}=\mbox{span}\left\\{\boldsymbol{y}_{h}^{1},\boldsymbol{y}_{h}^{2},\ldots,\boldsymbol{y}_{h}^{N}\right\\}=\mbox{span}\left\\{\sqrt{N}\overline{\boldsymbol{u}}_{h},\tau\boldsymbol{u}_{h,t}^{1},\ldots,\tau\boldsymbol{u}_{h,t}^{M}\right\\}.$
It is standard in numerical simulations to subtract the mean and apply the POD
method to the fluctuations, see [3], which by definition have zero mean. The
approach we propose, in which the first snapshot is a weighted mean and the
remaining snapshots are weighted time derivatives, then has the advantage that
applying the POD method to the fluctuations involves only the snapshots of the
approximations to the time derivatives (since the mean of the fluctuations is
zero). This procedure was applied in our numerical studies.
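For concreteness, the two snapshot sets (initial value versus weighted mean in the first column) differ only in that first column; a minimal sketch with random stand-in data for the finite element snapshots:

```python
import numpy as np

rng = np.random.default_rng(3)
m, M = 30, 10
N = M + 1
tau = 0.1                                   # time scale for the derivative snapshots
U = rng.standard_normal((m, M + 1))         # u_h^0, ..., u_h^M (illustrative data)
Ut = rng.standard_normal((m, M))            # u_{h,t}^1, ..., u_{h,t}^M

# snapshot set of Section 3: sqrt(N) u_h^0 followed by tau * time derivatives
Y_initial = np.column_stack([np.sqrt(N) * U[:, 0], tau * Ut])

# Remark 1 variant: the first snapshot is the weighted mean value
u_bar = U.mean(axis=1)
Y_mean = np.column_stack([np.sqrt(N) * u_bar, tau * Ut])
```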
In the sequel, we will denote by
${\cal\boldsymbol{U}}^{r}=\mbox{span}\\{\boldsymbol{\varphi}_{1},\boldsymbol{\varphi}_{2},\ldots,\boldsymbol{\varphi}_{r}\\}$,
$1\leq r\leq d_{v},$ and by $P_{r}^{v}\ :\
{\boldsymbol{X}}_{h}^{l}\to{\cal\boldsymbol{U}}^{r}$, the $X$-orthogonal
projection onto ${\cal\boldsymbol{U}}^{r}$.
Taking $\boldsymbol{z}=P_{r}^{v}\boldsymbol{u}_{h}-\boldsymbol{u}_{h}$ in (20)
and applying (17) yields
$\displaystyle\max_{0\leq n\leq
M}\|P_{r}^{v}\boldsymbol{u}_{h}^{n}-\boldsymbol{u}_{h}^{n}\|_{X}^{2}$
$\displaystyle\leq$
$\displaystyle\left(3+6\frac{T^{2}}{\tau^{2}}\right)\sum_{k={r+1}}^{d_{v}}\lambda_{k}$
$\displaystyle+\frac{4T}{3}(\Delta
t)^{2}\int_{0}^{T}\|P_{r}^{v}\boldsymbol{u}_{h,tt}(s)-\boldsymbol{u}_{h,tt}(s)\|_{X}^{2}\
ds.$
Let us bound the second term on the right-hand side above. Using the triangle
inequality and taking into account that
$\|P_{r}^{v}\boldsymbol{w}\|_{X}^{2}\leq\|\boldsymbol{w}\|_{X}^{2}$, we
conclude that
$\|P_{r}^{v}\boldsymbol{u}_{h,tt}(s)-\boldsymbol{u}_{h,tt}(s)\|_{X}^{2}\leq
2\|P_{r}^{v}\boldsymbol{u}_{h,tt}(s)\|_{X}^{2}+2\|\boldsymbol{u}_{h,tt}(s)\|_{X}^{2}\leq
4\|\boldsymbol{u}_{h,tt}(s)\|_{X}^{2},$
so that
$\max_{0\leq n\leq
M}\|P_{r}^{v}\boldsymbol{u}_{h}^{n}-\boldsymbol{u}_{h}^{n}\|_{X}^{2}\leq
C_{X}^{2}:=\left(3+6\frac{T^{2}}{\tau^{2}}\right)\sum_{k={r+1}}^{d_{v}}\lambda_{k}+\frac{16T}{3}(\Delta
t)^{2}\int_{0}^{T}\|\boldsymbol{u}_{h,tt}(s)\|_{X}^{2}\ ds.$ (26)
Finally, from (26) we obtain
$\displaystyle\frac{1}{M}\sum_{j=1}^{M}\left\|\boldsymbol{u}_{h}^{j}-\sum_{k=1}^{r}(\boldsymbol{u}_{h}^{j},\boldsymbol{\varphi}_{k})_{X}\boldsymbol{\varphi}_{k}\right\|_{X}^{2}\leq
C_{X}^{2}.$ (27)
The integral term in (26) and (27) is bounded in the appendix. If difference
quotients are used instead of Galerkin time derivatives in the set of
snapshots, this term does not appear.
The remainder of this section is devoted to proving a priori bounds for the
orthogonal projections $P_{r}^{v}\boldsymbol{u}_{h}^{j}$, $j=0,\ldots,M$.
These bounds are obtained from a priori bounds for the Galerkin approximations
$\boldsymbol{u}_{h}^{j}$, $j=0,\ldots,M$; we argue as in [23]. We thus start
by deriving a priori bounds for the stabilized approximation
$\boldsymbol{u}_{h}^{n}$, following the same arguments we introduced in [11].
We start with the $L^{\infty}$ norm. Using (6), (10), (15), and (8), we get
$\displaystyle\max_{0\leq j\leq M}\|\boldsymbol{u}_{h}^{j}\|_{\infty}$
$\displaystyle\leq$
$\displaystyle\|\boldsymbol{u}_{h}^{j}-\boldsymbol{s}_{h}^{m}(\cdot,t_{j})\|_{\infty}+\|\boldsymbol{s}_{h}^{m}(\cdot,t_{j})\|_{\infty}$
$\displaystyle\leq$ $\displaystyle
Ch^{-d/2}\|\boldsymbol{u}_{h}^{j}-\boldsymbol{s}_{h}^{m}(\cdot,t_{j})\|_{0}+C(\|\boldsymbol{u}^{j}\|_{d-2}\|\boldsymbol{u}^{j}\|_{2})^{1/2}$
$\displaystyle\leq$ $\displaystyle
Ch^{-d/2}C(\boldsymbol{u},p,3)h^{2}+C(\|\boldsymbol{u}^{j}\|_{d-2}\|\boldsymbol{u}^{j}\|_{2})^{1/2}$
$\displaystyle\leq$ $\displaystyle C_{\boldsymbol{u},{\rm
inf}}:=C\left(C(\boldsymbol{u},p,3)+\left(\|\boldsymbol{u}\|_{L^{\infty}(H^{d-2})}\|\boldsymbol{u}\|_{L^{\infty}(H^{2})}\right)^{1/2}\right).$
The $L^{\infty}$ norm of the gradient is bounded in a similar way. Using (6),
(9), (15), and (8), we obtain
$\displaystyle\max_{0\leq j\leq M}\|\nabla\boldsymbol{u}_{h}^{j}\|_{\infty}$
$\displaystyle\leq$
$\displaystyle\|\nabla\boldsymbol{u}_{h}^{j}-\nabla\boldsymbol{s}_{h}^{m}(\cdot,t_{j})\|_{\infty}+\|\nabla\boldsymbol{s}_{h}^{m}(\cdot,t_{j})\|_{\infty}$
(29) $\displaystyle\leq$ $\displaystyle
Ch^{-d/2}\|\boldsymbol{u}_{h}^{j}-\boldsymbol{s}_{h}^{m}(\cdot,t_{j})\|_{1}+C\|\nabla\boldsymbol{u}^{j}\|_{\infty}$
$\displaystyle\leq$ $\displaystyle
Ch^{-d/2}C(\boldsymbol{u},p,d+1)h^{d-1}+C\|\nabla\boldsymbol{u}^{j}\|_{\infty}$
$\displaystyle\leq$ $\displaystyle C_{\boldsymbol{u},1,{\rm
inf}}:=C\left(C(\boldsymbol{u},p,d+1)+\|\nabla\boldsymbol{u}\|_{L^{\infty}(L^{\infty})}\right).$
Note that the estimate for $d=3$ requires the use of cubic elements for the
velocity. Finally, applying (6), (11), (15), and (8) leads to the following
bound of the $L^{2d/(d-1)}$ norm of the velocity gradient
$\displaystyle\max_{0\leq j\leq
M}\|\nabla\boldsymbol{u}_{h}^{j}\|_{L^{2d/(d-1)}}$ $\displaystyle\leq$
$\displaystyle\|\nabla(\boldsymbol{u}_{h}^{j}-\boldsymbol{s}_{h}^{m}(\cdot,t_{j}))\|_{L^{2d/(d-1)}}+\|\nabla\boldsymbol{s}_{h}^{m}(\cdot,t_{j})\|_{L^{2d/(d-1)}}$
$\displaystyle\leq$ $\displaystyle
Ch^{-1/2}\|\boldsymbol{u}_{h}^{j}-\boldsymbol{s}_{h}^{m}(\cdot,t_{j})\|_{1}+C\bigl{(}\|\boldsymbol{u}^{j}\|_{1}\|\boldsymbol{u}^{j}\|_{2}\bigr{)}^{1/2}$
$\displaystyle\leq$ $\displaystyle
Ch^{-1/2}C(\boldsymbol{u},p,3)h+C\bigl{(}\|\boldsymbol{u}^{j}\|_{1}\|\boldsymbol{u}^{j}\|_{2}\bigr{)}^{1/2}$
$\displaystyle\leq$ $\displaystyle C_{\boldsymbol{u},{\rm
ld}}:=C\left(C(\boldsymbol{u},p,3)+\left(\|\boldsymbol{u}\|_{L^{\infty}(H^{1})}\|\boldsymbol{u}\|_{L^{\infty}(H^{2})}\right)^{1/2}\right).$
Now, we prove a priori bounds in the same norms for
$P_{r}^{v}\boldsymbol{u}_{h}^{j}=(P_{r}^{v}\boldsymbol{u}_{h}^{j}-\boldsymbol{u}_{h}^{j})+\boldsymbol{u}_{h}^{j}.$
Since we have already proved error bounds for the second term on the right-
hand side, we only need to bound the first one. First, we consider the case
$X=L^{2}(\Omega)^{d}$. From (26) we get for $j=0,\ldots,M$,
$\displaystyle\|\boldsymbol{u}_{h}^{j}-P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{0}\leq
C_{L^{2}}.$ (31)
Applying the inverse inequality (6), (3), and (31) gives
$\|P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{\infty}\leq\|\boldsymbol{u}_{h}^{j}\|_{\infty}+c_{\rm
inv}h^{-d/2}\|\boldsymbol{u}_{h}^{j}-P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{0}.$
Then
$\displaystyle C_{\rm inf}:=\max_{0\leq j\leq
M}\|P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{\infty}\leq C_{\boldsymbol{u},{\rm
inf}}+c_{\rm inv}h^{-d/2}C_{L^{2}},$ (32)
and
$K_{\rm inf}:=\Delta
t\sum_{j=0}^{M}\|P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{\infty}^{2}\leq TC_{\rm
inf}^{2}.$ (33)
Now, we observe that from (19) and (31), we get
$\|\nabla(\boldsymbol{u}_{h}^{j}-P_{r}^{v}\boldsymbol{u}_{h}^{j})\|_{0}\leq\|S^{v}\|_{2}^{1/2}C_{L^{2}}.$
Applying this inequality together with the inverse inequality (6) and (29), we
obtain
$C_{1,\rm inf}:=\max_{0\leq j\leq M}\|\nabla
P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{\infty}\leq C_{\boldsymbol{u},1,{\rm
inf}}+c_{\rm inv}h^{-d/2}\|S^{v}\|_{2}^{1/2}C_{L^{2}},$ (34)
and then, as before,
$\displaystyle K_{1,{\rm inf}}:=\Delta t\sum_{j=0}^{M}\|\nabla
P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{\infty}\leq TC_{1,\rm inf}.$ (35)
Finally, arguing in the same way but applying (3) instead of (29), we find
$C_{\rm ld}:=\max_{0\leq j\leq M}\|\nabla P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{L^{2d/(d-1)}}\leq C_{\boldsymbol{u},{\rm ld}}+c_{\rm inv}h^{-1/2}\|S^{v}\|_{2}^{1/2}C_{L^{2}}.$ (36)
The case $X=H_{0}^{1}(\Omega)^{d}$ is simpler. As before, from (26), we get
for $j=0,\ldots,M$,
$\|\nabla(\boldsymbol{u}_{h}^{j}-P_{r}^{v}\boldsymbol{u}_{h}^{j})\|_{0}\leq
C_{H^{1}}.$
Applying Poincaré’s inequality yields
$\|\boldsymbol{u}_{h}^{j}-P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{0}\leq
C_{p}C_{H^{1}}.$
Arguing as before, we obtain
$\displaystyle C_{\rm inf}:=\max_{0\leq j\leq
M}\|P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{\infty}$ $\displaystyle\leq$
$\displaystyle C_{\boldsymbol{u},{\rm inf}}+c_{\rm
inv}C_{p}h^{-d/2}C_{H^{1}},$ (37) $\displaystyle C_{1,\rm inf}:=\max_{0\leq
j\leq M}\|\nabla P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{\infty}$
$\displaystyle\leq$ $\displaystyle C_{\boldsymbol{u},1,{\rm inf}}+c_{\rm
inv}h^{-d/2}C_{H^{1}},$ (38) $\displaystyle C_{\rm ld}:=\max_{0\leq j\leq
M}\|\nabla P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{L^{2d/(d-1)}}$ $\displaystyle\leq$
$\displaystyle C_{\boldsymbol{u},{\rm ld}}+c_{\rm inv}h^{-1/2}C_{H^{1}}.$ (39)
From (37), it follows that
$K_{\rm inf}:=\Delta
t\sum_{j=0}^{M}\|P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{\infty}^{2}\leq TC_{\rm
inf}^{2}$ (40)
and from (38) that
$\displaystyle K_{1,{\rm inf}}$ $\displaystyle:=$ $\displaystyle\Delta
t\sum_{j=0}^{M}\|\nabla P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{\infty}\leq
TC_{1,\rm inf}.$ (41)
###### Remark 2
Let us observe that the factor $\|S^{v}\|_{2}^{1/2}$ appearing in (34), (35),
and (36) does not appear in (38), (39), and (41). Hence, for a comparable
value of $\sqrt{\lambda_{r+1}}$ in the first and the second bounds, the second
bounds, i.e., those corresponding to the case $X=H_{0}^{1}(\Omega)^{d}$, are
smaller. $\Box$
## 4 The POD-ROM method
### 4.1 The case $X=H_{0}^{1}(\Omega)^{d}$
We now consider the grad-div POD-ROM model. For the sake of simplifying the
error analysis, we use the implicit Euler method as time integrator: For
$n\geq 1$, find $\boldsymbol{u}_{r}^{n}\in{\cal\boldsymbol{U}}^{r}$ such that
for all $\boldsymbol{\varphi}\in{\cal\boldsymbol{U}}^{r}$
$\left(\frac{\boldsymbol{u}_{r}^{n}-\boldsymbol{u}_{r}^{n-1}}{\Delta
t},\boldsymbol{\varphi}\right)+\nu(\nabla\boldsymbol{u}_{r}^{n},\nabla\boldsymbol{\varphi})+b_{h}(\boldsymbol{u}_{r}^{n},\boldsymbol{u}_{r}^{n},\boldsymbol{\varphi})+\mu(\nabla\cdot\boldsymbol{u}_{r}^{n},\nabla\cdot\boldsymbol{\varphi})=(\boldsymbol{f}^{n},\boldsymbol{\varphi}).$
(42)
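In reduced coordinates, one step of (42) amounts to solving a small nonlinear system. Below is a minimal sketch of an implicit Euler step with a fixed-point (Picard) linearization of the convective term, assuming all reduced matrices have been precomputed offline; the toy data at the end are purely illustrative:

```python
import numpy as np

def rom_step(a_old, dt, nu, mu, Mm, S, G, Trilinear, f):
    """One implicit Euler step of the reduced system (42), linearized by a
    fixed-point (Picard) iteration. Assumed precomputed reduced matrices:
      Mm[i,j] = (phi_j, phi_i),   S[i,j] = (grad phi_j, grad phi_i),
      G[i,j]  = (div phi_j, div phi_i),
      Trilinear[i,j,k] = b_h(phi_j, phi_k, phi_i),   f[i] = (f^n, phi_i)."""
    a = a_old.copy()
    for _ in range(50):                                # Picard iterations
        C = np.einsum('ijk,j->ik', Trilinear, a)       # convection frozen at current iterate
        A = Mm / dt + nu * S + mu * G + C
        a_new = np.linalg.solve(A, Mm @ a_old / dt + f)
        if np.linalg.norm(a_new - a) < 1e-12:
            a = a_new
            break
        a = a_new
    return a

# toy data (r = 3 modes), purely illustrative
rng = np.random.default_rng(2)
r = 3
Mm = np.eye(r)                                         # orthonormal POD basis
B = rng.standard_normal((r, r))
S = B @ B.T + r * np.eye(r)                            # SPD stiffness surrogate
G = 0.1 * np.eye(r)
Trilinear = 0.01 * rng.standard_normal((r, r, r))
a1 = rom_step(np.ones(r), dt=0.01, nu=1e-3, mu=0.1,
              Mm=Mm, S=S, G=G, Trilinear=Trilinear, f=np.zeros(r))
```

For small `dt`, the matrix `Mm / dt` dominates, so the Picard iteration contracts rapidly; a Newton linearization would be the natural alternative for larger steps.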
Taking $t=t_{j}$ in (13) and considering the second equation, we get
$\boldsymbol{u}_{h}^{j}\in\boldsymbol{V}_{h,l}$. Differentiating the second
equation in (13) with respect to $t$ and taking again $t=t_{j}$, we also
obtain $\boldsymbol{u}_{h,t}^{j}\in\boldsymbol{V}_{h,l}$. As a consequence, we
observe that ${\cal\boldsymbol{U}}^{r}\subset\boldsymbol{V}_{h,l}$ so that
$\boldsymbol{u}_{r}^{n}$ belongs to the space $\boldsymbol{V}_{h,l}$ of
discretely divergence-free functions.
Choosing $t=t_{n}$ in (14) yields for all
$\boldsymbol{v}_{h}\in{\boldsymbol{V}}_{h,l}$
$\left(\boldsymbol{u}_{h,t}^{n},\boldsymbol{v}_{h}\right)+\nu(\nabla\boldsymbol{u}_{h}^{n},\nabla\boldsymbol{v}_{h})+b(\boldsymbol{u}_{h}^{n},\boldsymbol{u}_{h}^{n},\boldsymbol{v}_{h})+\mu(\nabla\cdot\boldsymbol{u}_{h}^{n},\nabla\cdot\boldsymbol{v}_{h})=({\boldsymbol{f}^{n}},\boldsymbol{v}_{h}).$
(43)
Using the notation
$\boldsymbol{\eta}_{h}^{n}=P_{r}^{v}\boldsymbol{u}_{h}^{n}-\boldsymbol{u}_{h}^{n}$,
a straightforward calculation gives
$\displaystyle\left(\frac{P_{r}^{v}\boldsymbol{u}^{n}_{h}-P_{r}^{v}\boldsymbol{u}^{n-1}_{h}}{\Delta
t},\boldsymbol{\varphi}\right)+\nu(\nabla
P_{r}^{v}\boldsymbol{u}^{n}_{h},\nabla\boldsymbol{\varphi})+b_{h}(P_{r}^{v}\boldsymbol{u}^{n}_{h},P_{r}^{v}\boldsymbol{u}^{n}_{h},\boldsymbol{\varphi})+\mu(\nabla\cdot
P_{r}^{v}\boldsymbol{u}^{n}_{h},\nabla\cdot\boldsymbol{\varphi})$ (44)
$\displaystyle=$
$\displaystyle(\boldsymbol{f}^{n},\boldsymbol{\varphi})+\left(\frac{P_{r}^{v}\boldsymbol{u}^{n}_{h}-P_{r}^{v}\boldsymbol{u}^{n-1}_{h}}{\Delta
t}-\boldsymbol{u}_{h,t}^{n},\boldsymbol{\varphi}\right)+\mu(\nabla\cdot\boldsymbol{\eta}_{h}^{n},\nabla\cdot\boldsymbol{\varphi})$
$\displaystyle+b_{h}(P_{r}^{v}\boldsymbol{u}^{n}_{h},P_{r}^{v}\boldsymbol{u}^{n}_{h},\boldsymbol{\varphi})-b_{h}(\boldsymbol{u}^{n}_{h},\boldsymbol{u}^{n}_{h},\boldsymbol{\varphi})\quad\forall\
\boldsymbol{\varphi}\in{\cal\boldsymbol{U}}^{r}.$
Subtracting (44) from (42) and denoting
$\boldsymbol{e}_{r}^{n}=\boldsymbol{u}_{r}^{n}-P_{r}^{v}\boldsymbol{u}_{h}^{n}\in{\cal\boldsymbol{U}}^{r}$
leads to
$\displaystyle\left(\frac{\boldsymbol{e}_{r}^{n}-\boldsymbol{e}_{r}^{n-1}}{\Delta
t},\boldsymbol{\varphi}\right)+\nu(\nabla\boldsymbol{e}_{r}^{n},\nabla\boldsymbol{\varphi})+\mu(\nabla\cdot\boldsymbol{e}_{r}^{n},\nabla\cdot\boldsymbol{\varphi})$
(45)
$\displaystyle+b_{h}(\boldsymbol{u}_{r}^{n},\boldsymbol{u}_{r}^{n},\boldsymbol{\varphi})-b_{h}(P_{r}^{v}\boldsymbol{u}^{n}_{h},P_{r}^{v}\boldsymbol{u}^{n}_{h},\boldsymbol{\varphi})$
$\displaystyle=$
$\displaystyle\left(\boldsymbol{u}_{h,t}^{n}-\frac{P_{r}^{v}\boldsymbol{u}^{n}_{h}-P_{r}^{v}\boldsymbol{u}^{n-1}_{h}}{\Delta
t},\boldsymbol{\varphi}\right)-\mu(\nabla\cdot\boldsymbol{\eta}_{h}^{n},\nabla\cdot\boldsymbol{\varphi})$
$\displaystyle+b_{h}(\boldsymbol{u}^{n}_{h},\boldsymbol{u}^{n}_{h},\boldsymbol{\varphi})-b_{h}(P_{r}^{v}\boldsymbol{u}_{h}^{n},P_{r}^{v}\boldsymbol{u}_{h}^{n},\boldsymbol{\varphi}),\quad\forall\
\boldsymbol{\varphi}\in{\cal\boldsymbol{U}}^{r}.$
Taking now $\boldsymbol{\varphi}=\boldsymbol{e}_{r}^{n}$ yields
$\displaystyle\frac{1}{2\Delta
t}\left(\|\boldsymbol{e}_{r}^{n}\|_{0}^{2}-\|\boldsymbol{e}_{r}^{n-1}\|_{0}^{2}\right)+\nu\|\nabla\boldsymbol{e}_{r}^{n}\|_{0}^{2}+\mu\|\nabla\cdot\boldsymbol{e}_{r}^{n}\|_{0}^{2}$
(46) $\displaystyle\leq$
$\displaystyle\left(\boldsymbol{u}_{h,t}^{n}-\frac{P_{r}^{v}\boldsymbol{u}^{n}_{h}-P_{r}^{v}\boldsymbol{u}^{n-1}_{h}}{\Delta
t},\boldsymbol{e}_{r}^{n}\right)+\big{(}b_{h}(P_{r}^{v}\boldsymbol{u}^{n}_{h},P_{r}^{v}\boldsymbol{u}^{n}_{h},\boldsymbol{e}_{r}^{n})-b_{h}(\boldsymbol{u}_{r}^{n},\boldsymbol{u}_{r}^{n},\boldsymbol{e}_{r}^{n})\big{)}$
$\displaystyle-\mu(\nabla\cdot\boldsymbol{\eta}_{h}^{n},\nabla\cdot\boldsymbol{e}_{r}^{n})+\big{(}b_{h}(\boldsymbol{u}^{n}_{h},\boldsymbol{u}^{n}_{h},\boldsymbol{e}_{r}^{n})-b_{h}(P_{r}^{v}\boldsymbol{u}_{h}^{n},P_{r}^{v}\boldsymbol{u}_{h}^{n},\boldsymbol{e}_{r}^{n})\big{)}$
$\displaystyle=$ $\displaystyle I+II+III+IV.$
The first term on the right-hand side of (46) is estimated by using the
Cauchy–Schwarz and Young inequalities
$\displaystyle|I|\leq\frac{T}{2}\left\|\boldsymbol{u}_{h,t}^{n}-\frac{P_{r}^{v}\boldsymbol{u}^{n}_{h}-P_{r}^{v}\boldsymbol{u}^{n-1}_{h}}{\Delta
t}\right\|_{0}^{2}+\frac{1}{2T}\|\boldsymbol{e}_{r}^{n}\|_{0}^{2}.$ (47)
To bound the second term on the right-hand side of (46), we use, following
[23, Eq. (45)], the skew-symmetry of the trilinear term, Hölder’s inequality,
(37), (38), and Young’s inequality to obtain
$\displaystyle|II|=|b_{h}(\boldsymbol{e}_{r}^{n},P_{r}^{v}\boldsymbol{u}_{h}^{n},\boldsymbol{e}_{r}^{n})|$
$\displaystyle\leq$ $\displaystyle\|\nabla
P_{r}^{v}\boldsymbol{u}_{h}^{n}\|_{\infty}\|\boldsymbol{e}_{r}^{n}\|_{0}^{2}+\frac{1}{2}\|\nabla\cdot\boldsymbol{e}_{r}^{n}\|_{0}\|P_{r}^{v}\boldsymbol{u}_{h}^{n}\|_{\infty}\|\boldsymbol{e}_{r}^{n}\|_{0}$
(48) $\displaystyle\leq$ $\displaystyle\left(\|\nabla
P_{r}^{v}\boldsymbol{u}_{h}^{n}\|_{\infty}+\frac{\|P_{r}^{v}\boldsymbol{u}_{h}^{n}\|_{\infty}^{2}}{4\mu}\right)\|\boldsymbol{e}_{r}^{n}\|_{0}^{2}+\frac{\mu}{4}\|\nabla\cdot\boldsymbol{e}_{r}^{n}\|_{0}^{2}.$
For the third term, the application of the Cauchy–Schwarz and Young
inequalities leads to
$\displaystyle|III|\leq\mu\|\nabla\boldsymbol{\eta}_{h}^{n}\|_{0}^{2}+\frac{\mu}{4}\|\nabla\cdot\boldsymbol{e}_{r}^{n}\|_{0}^{2}.$
(49)
For estimating the fourth term, we follow [23, Eq. (50)]. Using Hölder’s
inequality, (3), the Sobolev embedding (2) with $s=1$ and $q=2$, (4), and
Young’s inequality leads to
$\displaystyle|IV|$ $\displaystyle\leq$
$\displaystyle|b_{h}(P_{r}^{v}\boldsymbol{u}_{h}^{n},\boldsymbol{\eta}_{h}^{n},\boldsymbol{e}_{r}^{n})|+|b_{h}(\boldsymbol{\eta}_{h}^{n},\boldsymbol{u}_{h}^{n},\boldsymbol{e}_{r}^{n})|$
(50) $\displaystyle\leq$
$\displaystyle\|P_{r}^{v}\boldsymbol{u}_{h}^{n}\|_{\infty}\|\nabla\boldsymbol{\eta}_{h}^{n}\|_{0}\|\boldsymbol{e}_{r}^{n}\|_{0}+\frac{1}{2}\|\nabla\cdot
P_{r}^{v}\boldsymbol{u}_{h}^{n}\|_{L^{2d/(d-1)}}\|\boldsymbol{\eta}_{h}^{n}\|_{L^{2d}}\|\boldsymbol{e}_{r}^{n}\|_{0}$
$\displaystyle+\|\boldsymbol{\eta}_{h}^{n}\|_{L^{2d}}\|\nabla\boldsymbol{u}_{h}^{n}\|_{L^{2d/(d-1)}}\|\boldsymbol{e}_{r}^{n}\|_{0}+\frac{1}{2}\|\nabla\cdot\boldsymbol{\eta}_{h}^{n}\|_{0}\|\boldsymbol{u}_{h}^{n}\|_{\infty}\|\boldsymbol{e}_{r}^{n}\|_{0}$
$\displaystyle\leq$ $\displaystyle C_{\rm
inf}\|\nabla\boldsymbol{\eta}_{h}^{n}\|_{0}\|\boldsymbol{e}_{r}^{n}\|_{0}+CC_{\rm
ld}\|\nabla\boldsymbol{\eta}_{h}^{n}\|_{0}\|\boldsymbol{e}_{r}^{n}\|_{0}$
$\displaystyle+C\|\nabla\boldsymbol{\eta}_{h}^{n}\|_{0}C_{\boldsymbol{u},\rm
ld}\|\boldsymbol{e}_{r}^{n}\|_{0}+\frac{1}{2}\|\nabla\boldsymbol{\eta}_{h}^{n}\|_{0}C_{\boldsymbol{u},\rm
inf}\|\boldsymbol{e}_{r}^{n}\|_{0}$ $\displaystyle\leq$ $\displaystyle
C_{m}^{2}T\|\nabla\boldsymbol{\eta}_{h}^{n}\|_{0}^{2}+\frac{1}{2T}\|\boldsymbol{e}_{r}^{n}\|_{0}^{2},$
where
$C_{m}=C(C_{\rm inf}+C_{\boldsymbol{u},\rm inf}+C_{\rm ld}+C_{\boldsymbol{u},\rm
ld}).$ (51)
Inserting (47), (48), (49), and (50) into (46) and adding over the time
instances, we get
$\displaystyle\|\boldsymbol{e}_{r}^{n}\|_{0}^{2}+2\nu\sum_{j=1}^{n}\Delta
t\|\nabla\boldsymbol{e}_{r}^{j}\|_{0}^{2}+\mu\sum_{j=1}^{n}\Delta
t\|\nabla\cdot\boldsymbol{e}_{r}^{j}\|_{0}^{2}$ (52) $\displaystyle\leq$
$\displaystyle\|\boldsymbol{e}_{r}^{0}\|_{0}^{2}+\sum_{j=1}^{n}\Delta
t\left(2\|\nabla
P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{\infty}+\frac{\|P_{r}^{v}\boldsymbol{u}_{h}^{j}\|_{\infty}^{2}}{2\mu}+\frac{2}{T}\right)\|\boldsymbol{e}_{r}^{j}\|_{0}^{2}$
$\displaystyle+2(\mu+C_{m}^{2}T)\sum_{j=1}^{n}\Delta
t\|\nabla\boldsymbol{\eta}_{h}^{j}\|_{0}^{2}+T\sum_{j=1}^{n}\Delta
t\left\|\boldsymbol{u}_{h,t}^{j}-\frac{P_{r}^{v}\boldsymbol{u}^{j}_{h}-P_{r}^{v}\boldsymbol{u}^{j-1}_{h}}{\Delta
t}\right\|_{0}^{2}.$
Adding and subtracting $P_{r}^{v}\boldsymbol{u}_{h,t}^{j}$ to bound the last
term on the right-hand side of (52) gives
$\boldsymbol{u}_{h,t}^{j}-\frac{P_{r}^{v}\boldsymbol{u}^{j}_{h}-P_{r}^{v}\boldsymbol{u}^{j-1}_{h}}{\Delta
t}=(\boldsymbol{u}_{h,t}^{j}-P_{r}^{v}\boldsymbol{u}_{h,t}^{j})+\left(P_{r}^{v}\boldsymbol{u}_{h,t}^{j}-\frac{P_{r}^{v}\boldsymbol{u}^{j}_{h}-P_{r}^{v}\boldsymbol{u}^{j-1}_{h}}{\Delta
t}\right),$ (53)
from which it follows
$\displaystyle\sum_{j=1}^{n}\Delta
t\left\|\boldsymbol{u}_{h,t}^{j}-\frac{P_{r}^{v}\boldsymbol{u}^{j}_{h}-P_{r}^{v}\boldsymbol{u}^{j-1}_{h}}{\Delta
t}\right\|_{0}^{2}$ (54) $\displaystyle\leq$ $\displaystyle 2\sum_{j=1}^{n}\Delta
t\left\|P_{r}^{v}\boldsymbol{u}_{h,t}^{j}-\boldsymbol{u}_{h,t}^{j}\right\|_{0}^{2}+2\sum_{j=1}^{n}\Delta
t\left\|P_{r}^{v}\left(\boldsymbol{u}_{h,t}^{j}-\frac{\boldsymbol{u}^{j}_{h}-\boldsymbol{u}^{j-1}_{h}}{\Delta
t}\right)\right\|_{0}^{2}.$
To bound the first term on the right-hand side of (54), we apply Poincaré’s
inequality (3) and (17), taking into account that $(M+1)/M\leq 2$, to get
$\sum_{j=1}^{n}\Delta
t\left\|P_{r}^{v}\boldsymbol{u}_{h,t}^{j}-\boldsymbol{u}_{h,t}^{j}\right\|_{0}^{2}\leq
C_{p}^{2}\sum_{j=1}^{n}\Delta
t\left\|\nabla\left(P_{r}^{v}\boldsymbol{u}_{h,t}^{j}-\boldsymbol{u}_{h,t}^{j}\right)\right\|_{0}^{2}\leq\frac{2C_{p}^{2}T}{\tau^{2}}\sum_{k=r+1}^{d_{v}}\lambda_{k}.$
(55)
For the second term, we also use Poincaré’s inequality (3) and then the
stability of the orthogonal projection in Hilbert spaces, $\|\nabla
P_{r}^{v}\boldsymbol{w}\|_{0}^{2}\leq\|\nabla\boldsymbol{w}\|_{0}^{2}$ for
every $\boldsymbol{w}\in H_{0}^{1}(\Omega)^{d}$, so that
$\sum_{j=1}^{n}\Delta
t\left\|P_{r}^{v}\left(\boldsymbol{u}_{h,t}^{j}-\frac{\boldsymbol{u}^{j}_{h}-\boldsymbol{u}^{j-1}_{h}}{\Delta
t}\right)\right\|_{0}^{2}\leq C_{p}^{2}\sum_{j=1}^{n}\Delta
t\left\|\nabla\left(\boldsymbol{u}_{h,t}^{j}-\frac{\boldsymbol{u}_{h}^{j}-\boldsymbol{u}_{h}^{j-1}}{\Delta
t}\right)\right\|_{0}^{2}.$ (56)
Inserting (55) and (56) into (54), we obtain
$\displaystyle\sum_{j=1}^{n}\Delta
t\left\|\boldsymbol{u}_{h,t}^{j}-\frac{P_{r}^{v}\boldsymbol{u}^{j}_{h}-P_{r}^{v}\boldsymbol{u}^{j-1}_{h}}{\Delta
t}\right\|_{0}^{2}$ (57) $\displaystyle\leq$
$\displaystyle\frac{2C_{p}^{2}T}{\tau^{2}}\sum_{k=r+1}^{d_{v}}\lambda_{k}+C_{p}^{2}\sum_{j=1}^{n}\Delta
t\left\|\nabla\left(\boldsymbol{u}_{h,t}^{j}-\frac{\boldsymbol{u}^{j}_{h}-\boldsymbol{u}^{j-1}_{h}}{\Delta
t}\right)\right\|_{0}^{2}.$
For the second term on the right-hand side of (57), a standard argument gives
$C_{p}^{2}\sum_{j=1}^{n}\Delta
t\left\|\nabla\left(\boldsymbol{u}_{h,t}^{j}-\frac{\boldsymbol{u}^{j}_{h}-\boldsymbol{u}^{j-1}_{h}}{\Delta
t}\right)\right\|_{0}^{2}\leq CC_{p}^{2}(\Delta
t)^{2}\int_{0}^{T}\|\nabla(\boldsymbol{u}_{h,tt})\|_{0}^{2}\ ds.$ (58)
Arguing as in [14, Proposition 3.2], one can get an a priori bound for
$\int_{0}^{T}\|\nabla(\boldsymbol{u}_{h,tt})\|_{0}^{2}\ ds.$ (59)
However, the error analysis in [14] is not valid for high Reynolds numbers. In
the appendix we derive a bound for this term with constants independent of
inverse powers of the viscosity.
Assuming that
$\Delta t\left(2C_{1,\rm inf}+\frac{C_{\rm
inf}^{2}}{2\mu}+\frac{2}{T}\right)\leq\frac{1}{2},$ (60)
denoting by
$C_{u}=2K_{1,{\rm inf}}+\frac{K_{\rm inf}^{2}}{2\mu}+2,$ (61)
applying Gronwall’s Lemma [14, Lemma 5.1], (27), and taking into account that
$(M+1)/M\leq 2$, we obtain from (52)
$\displaystyle\|\boldsymbol{e}_{r}^{n}\|_{0}^{2}+2\nu\sum_{j=1}^{n}\Delta
t\|\nabla\boldsymbol{e}_{r}^{j}\|_{0}^{2}+\mu\sum_{j=1}^{n}\Delta
t\|\nabla\cdot\boldsymbol{e}_{r}^{j}\|_{0}^{2}$ $\displaystyle\leq$
$\displaystyle
e^{2C_{u}}\biggl{(}\|\boldsymbol{e}_{r}^{0}\|_{0}^{2}+\left(2T(\mu+C_{m}^{2}T)(3+6(T/\tau)^{2})+4C_{p}^{2}(T/\tau)^{2}\right)\sum_{k=r+1}^{d_{v}}\lambda_{k}\
$
$\displaystyle{}+\left(CC_{p}^{2}T+2T(\mu+C_{m}^{2}T)\frac{16T}{3}\right)(\Delta
t)^{2}\int_{0}^{T}\|\nabla(\boldsymbol{u}_{h,tt})\|_{0}^{2}\ ds\biggr{)}.$
###### Theorem 4.1
Let $\boldsymbol{u}$ be the velocity in the Navier–Stokes equations (1), let
$\boldsymbol{u}_{r}$ be the grad-div POD stabilized approximation defined in
(42), assume that the solution $(\boldsymbol{u},p)$ of (1) is regular enough
and that (60) holds. Then, the following bound is valid
$\displaystyle\sum_{j=1}^{n}\Delta
t\|\boldsymbol{u}_{r}^{j}-\boldsymbol{u}^{j}\|_{0}^{2}$ $\displaystyle\leq$
$\displaystyle
3Te^{2C_{u}}\left[\|\boldsymbol{e}_{r}^{0}\|_{0}^{2}+\left(2T(\mu+C_{m}^{2}T)(3+6(T/\tau)^{2})+4C_{p}^{2}(T/\tau)^{2}\right)\sum_{k=r+1}^{d_{v}}\lambda_{k}\right.$
$\displaystyle\left.{}+\left(CC_{p}^{2}T+2T(\mu+C_{m}^{2}T)\frac{16T}{3}+16T^{2}C_{p}^{2}\right)\int_{0}^{T}\|\nabla(\boldsymbol{u}_{h,tt})\|_{0}^{2}\
ds\right]$
$\displaystyle+3TC(\boldsymbol{u},p,l)^{2}h^{2l}+3TC_{p}^{2}\left(3+6(T/\tau)^{2}\right)\sum_{k=r+1}^{d_{v}}\lambda_{k}.$
###### Proof:
We have $\sum_{j=1}^{n}\Delta t\|\boldsymbol{e}_{r}^{j}\|_{0}^{2}\leq
T\max_{1\leq j\leq n}\|\boldsymbol{e}_{r}^{j}\|_{0}^{2}$ and
$\sum_{j=1}^{n}\Delta
t\|\boldsymbol{u}_{r}^{j}-\boldsymbol{u}^{j}\|_{0}^{2}\leq
3\left(\sum_{j=1}^{n}\Delta
t\|\boldsymbol{e}_{r}^{j}\|_{0}^{2}+\sum_{j=1}^{n}\Delta
t\|P_{r}^{v}\boldsymbol{u}_{h}^{j}-\boldsymbol{u}_{h}^{j}\|_{0}^{2}+\sum_{j=1}^{n}\Delta
t\|\boldsymbol{u}_{h}^{j}-\boldsymbol{u}^{j}\|_{0}^{2}\right).$
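The factor $3$ in this splitting stems from the elementary inequality (a direct consequence of the triangle and Cauchy–Schwarz inequalities), recorded here for completeness:

```latex
\|a+b+c\|_{0}^{2}
\le \left(\|a\|_{0}+\|b\|_{0}+\|c\|_{0}\right)^{2}
\le 3\left(\|a\|_{0}^{2}+\|b\|_{0}^{2}+\|c\|_{0}^{2}\right),
```

applied (norms being invariant under a change of sign) to the decomposition $\boldsymbol{u}_{r}^{j}-\boldsymbol{u}^{j}=(\boldsymbol{u}_{r}^{j}-P_{r}^{v}\boldsymbol{u}_{h}^{j})+(P_{r}^{v}\boldsymbol{u}_{h}^{j}-\boldsymbol{u}_{h}^{j})+(\boldsymbol{u}_{h}^{j}-\boldsymbol{u}^{j})$.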
Inserting the estimates (4.1), (27), and (15) leads directly to (4.1). $\Box$
A robust estimate for (59), in the case $l\geq 3$, is derived in the appendix.
Hence, for $l\geq 3$, there is no explicit appearance of inverse powers of the
viscosity coefficient in the error bound (4.1). The technical reason for not
obtaining a robust estimate for $l=2$ with the technique from the appendix is
the gradient in front of $\boldsymbol{u}_{h,tt}$. This gradient was introduced
with the transition from the $L^{2}(\Omega)^{d}$ norm to the corresponding
norm of the gradient in (56) in order to be able to apply the Hilbert space
argument. Note that with the approach presented in the appendix the
boundedness can be shown also for $l=2$, but not the robustness of the bound.
###### Remark 3
With the error decomposition in the proof of Theorem 4.1 and applying (18) or
the inverse inequality (6) to (4.1), one can also prove a robust error bound
for $\sum_{j=1}^{n}\Delta
t\|\nabla(\boldsymbol{u}_{r}^{j}-\boldsymbol{u}^{j})\|_{0}^{2}$. $\Box$
We can apply Lemma 1 to get pointwise estimates with respect to time both in
$L^{2}(\Omega)$ and $H^{1}(\Omega)$. Let us prove pointwise estimates in
$L^{2}(\Omega)$, the argument for proving bounds in $H^{1}(\Omega)$ is the
same. Since
$\|\boldsymbol{u}_{r}^{n}-\boldsymbol{u}^{n}\|_{0}^{2}\leq
3\|\boldsymbol{e}_{r}^{n}\|_{0}^{2}+3\|P_{r}^{v}\boldsymbol{u}_{h}^{n}-\boldsymbol{u}_{h}^{n}\|_{0}^{2}+3\|\boldsymbol{u}_{h}^{n}-\boldsymbol{u}^{n}\|_{0}^{2},$
we can utilize (4.1) and (15) to bound the first and third term on the right-
hand side. For bounding the second term, (26) is utilized. More precisely,
taking into account we are analyzing the case $X=H_{0}^{1}(\Omega)^{d}$ we
apply Poincaré’s inequality (3) to bound
$\|P_{r}^{v}\boldsymbol{u}_{h}^{n}-\boldsymbol{u}_{h}^{n}\|_{0}^{2}\leq
C_{p}^{2}\|\nabla(P_{r}^{v}\boldsymbol{u}_{h}^{n}-\boldsymbol{u}_{h}^{n})\|_{0}^{2}$
and then (26). Collecting the estimates (4.1), (26), (15) proves the following
theorem.
###### Theorem 4.2
Let the assumptions of Theorem 4.1 be satisfied, then the following bound is
valid
$\displaystyle\max_{0\leq n\leq
M}\|\boldsymbol{u}_{r}^{n}-\boldsymbol{u}^{n}\|_{0}^{2}$ $\displaystyle\leq$
$\displaystyle
3e^{2C_{u}}\left[\|\boldsymbol{e}_{r}^{0}\|_{0}^{2}+\left(2T(\mu+C_{m}^{2}T)(3+6(T/\tau)^{2})+4C_{p}^{2}(T/\tau)^{2}\right)\sum_{k=r+1}^{d_{v}}\lambda_{k}\right.$
$\displaystyle+\left.\left(CC_{p}^{2}T+2T(\mu+C_{m}^{2}T)\frac{16T}{3}+16TC_{p}^{2}\right)(\Delta
t)^{2}\int_{0}^{T}\|\nabla(\boldsymbol{u}_{h,tt})\|_{0}^{2}\ ds\right]$
$\displaystyle+3C_{p}^{2}(3+6(T/\tau)^{2})\sum_{k=r+1}^{d_{v}}\lambda_{k}+C(\boldsymbol{u},p,l+1)h^{l},$
where the constants for $l\geq 3$ do not blow up for small viscosity
coefficients since (59) is bounded.
### 4.2 The case $X=L^{2}(\Omega)^{d}$
For the sake of brevity, we are going to mention only the differences with
respect to the analysis for the case $X=H_{0}^{1}(\Omega)^{d}$.
Observing that the orthogonality property of $P_{r}^{v}$ affects now the term
with the approximation of the temporal derivative (instead of the viscous
term) we obtain, instead of (45), the following error equation
$\displaystyle\left(\frac{\boldsymbol{e}_{r}^{n}-\boldsymbol{e}_{r}^{n-1}}{\Delta
t},\boldsymbol{\varphi}\right)+\nu(\nabla\boldsymbol{e}_{r}^{n},\nabla\boldsymbol{\varphi})+\mu(\nabla\cdot\boldsymbol{e}_{r}^{n},\nabla\cdot\boldsymbol{\varphi})$
(64)
$\displaystyle+b_{h}(\boldsymbol{u}_{r}^{n},\boldsymbol{u}_{r}^{n},\boldsymbol{\varphi})-b_{h}(P_{r}^{v}\boldsymbol{u}^{n}_{h},P_{r}^{v}\boldsymbol{u}^{n}_{h},\boldsymbol{\varphi})$
$\displaystyle=$
$\displaystyle\left(\boldsymbol{u}_{h,t}^{n}-\frac{\boldsymbol{u}_{h}^{n}-\boldsymbol{u}_{h}^{n-1}}{\Delta
t},\boldsymbol{\varphi}\right)-\nu(\nabla\boldsymbol{\eta}_{h}^{n},\nabla\boldsymbol{\varphi})-\mu(\nabla\cdot\boldsymbol{\eta}_{h}^{n},\nabla\cdot\boldsymbol{\varphi})$
$\displaystyle+b_{h}(\boldsymbol{u}^{n}_{h},\boldsymbol{u}^{n}_{h},\boldsymbol{\varphi})-b_{h}(P_{r}^{v}\boldsymbol{u}^{n}_{h},P_{r}^{v}\boldsymbol{u}^{n}_{h},\boldsymbol{\varphi})\quad\forall\
\boldsymbol{\varphi}\in{\cal\boldsymbol{U}}^{r}.$
Using the same techniques as for the other case, we infer, instead of (52),
that
$\displaystyle\|\boldsymbol{e}_{r}^{n}\|_{0}^{2}+2\nu\sum_{j=1}^{n}\Delta
t\|\nabla\boldsymbol{e}_{r}^{j}\|_{0}^{2}+\mu\sum_{j=1}^{n}\Delta
t\|\nabla\cdot\boldsymbol{e}_{r}^{j}\|_{0}^{2}$ (65) $\displaystyle\leq$
$\displaystyle\|\boldsymbol{e}_{r}^{0}\|_{0}^{2}+\sum_{j=1}^{n}\Delta
t\left(2\|\nabla
P_{r}^{v}\boldsymbol{u}_{h}^{n}\|_{\infty}+\frac{\|P_{r}^{v}\boldsymbol{u}_{h}^{n}\|_{\infty}^{2}}{2\mu}+\frac{2}{T}\right)\|\boldsymbol{e}_{r}^{j}\|_{0}^{2}$
$\displaystyle+2(\nu+\mu+C_{m}^{2}T)\sum_{j=1}^{n}\Delta
t\|\nabla\boldsymbol{\eta}_{h}^{j}\|_{0}^{2}+CT(\Delta
t)^{2}\int_{0}^{T}\|\boldsymbol{u}_{h,tt}(s)\|_{0}^{2}\ ds.$
From (65) we continue as before, compare [23, Theorem 5.3], and take into
account that, by applying (19) and (27), it holds
$\sum_{j=1}^{n}\Delta t\|\nabla\boldsymbol{\eta}_{h}^{j}\|_{0}^{2}\leq
T\|S^{v}\|_{2}\left((3+6(T/\tau)^{2})\sum_{k=r+1}^{d_{v}}\lambda_{k}+\frac{16T^{2}}{3}(\Delta
t)^{2}\int_{0}^{T}\|\boldsymbol{u}_{h,tt}(s)\|_{0}^{2}\ ds\right).$
Then, instead of (4.1) we conclude
$\displaystyle\|\boldsymbol{e}_{r}^{n}\|_{0}^{2}+2\nu\sum_{j=1}^{n}\Delta
t\|\nabla\boldsymbol{e}_{r}^{j}\|_{0}^{2}+\mu\sum_{j=1}^{n}\Delta
t\|\nabla\cdot\boldsymbol{e}_{r}^{j}\|_{0}^{2}$ (66) $\displaystyle\leq$
$\displaystyle
e^{2C_{u}}\biggl{(}\|\boldsymbol{e}_{r}^{0}\|_{0}^{2}+2T(\nu+\mu+C_{m}^{2}T)\|S^{v}\|_{2}(3+6(T/\tau)^{2})\sum_{k=r+1}^{d_{v}}\lambda_{k}\
$ $\displaystyle{}+(\Delta
t)^{2}\left(CT+2T(\nu+\mu+C_{m}^{2}T)\|S^{v}\|_{2}(16T^{2})/3\right)\int_{0}^{T}\|\boldsymbol{u}_{h,tt}(s)\|_{0}^{2}\
ds\biggr{)}.$
A robust estimate for the following term and $l\geq 2$ is proved in the
appendix
$\int_{0}^{T}\|\boldsymbol{u}_{h,tt}\|_{0}^{2}\ ds.$ (67)
In [23, Section 5], there is an error analysis for the case in which a fully
discrete Galerkin method with the same implicit Euler method is utilized for
the FOM.
###### Remark 4
Let us observe that at this point it seems that the snapshots from the time
derivative are not helpful in the $L^{2}(\Omega)^{d}$ case. However, it turns
out that these snapshots are needed for obtaining pointwise estimates in time.
In fact, we can repeat the arguments of the previous section to get such
estimates for the $L^{2}(\Omega)^{d}$ error, and also for the
$H^{1}(\Omega)^{d}$ error by applying the inverse inequality (19); these
estimates cannot be obtained, at least with the same arguments, without
adding those snapshots.
$\Box$
###### Remark 5
Comparing the $L^{2}(\Omega)^{d}$ and the $H_{0}^{1}(\Omega)^{d}$ cases, we
can observe that in the $L^{2}(\Omega)^{d}$ case the eigenvalues are
multiplied by $\|S^{v}\|_{2}$ in the error bound (66), which increases the
size of this term, compared with (4.1). In addition, the factor
$\|S^{v}\|_{2}^{1/2}$ appears in the definition (61) of $C_{u}$, which gives a
bigger constant in the exponential term of the error bound, and also in the
assumption for the time steps (60), i.e., the condition for applying
Gronwall’s lemma leads to a severe time step restriction if $\|S^{v}\|_{2}$ is
very large. On the other hand, using $L^{2}(\Omega)^{d}$ as projection space
leads to an integral term in the error bound, (67), that can be bounded in a
robust way for $l\geq 2$, whereas $l\geq 3$ is necessary if the projection
space is $H_{0}^{1}(\Omega)^{d}$, to bound the term (59), at least with the
approach from the appendix. Thus, both approaches possess advantages with
respect to certain aspects of the error analysis. $\Box$
###### Remark 6
A popular pair of finite element spaces that leads to weakly divergence-free
discrete velocity fields is the Scott–Vogelius pair
$({\boldsymbol{X}}_{h}^{l},Q_{h,\mathrm{disc}}^{l-1})$. It is inf-sup stable
for $l\geq d$ on so-called barycentric-refined grids [29]. A favorable feature
of the Scott–Vogelius pair is that it leads to so-called pressure-robust
velocity estimates, i.e., the velocity error bounds do not depend on the
pressure. In particular, an estimate of form (15) was derived in [26] with
$C(\boldsymbol{u},l+1)$. In estimating the integral term as performed in the
appendix, the term $(\sigma_{2},\nabla\cdot\boldsymbol{\varphi}_{h})$ vanishes
if $\boldsymbol{\varphi}_{h}$ is weakly divergence-free, such that the bound
of the integral term does not contain the pressure. Finally, the snapshots are
only from the velocity field, which can be (formally) computed by solving an
equation of type (14), which does not contain the pressure. Hence all terms in
estimates (27) and (17) are independent of the pressure. Applying the same
analysis as for the Taylor–Hood pair of spaces, even with some
simplifications, gives for the Scott–Vogelius pair pressure-robust error
bounds of the same type, under the conditions on the inf-sup stability
mentioned above. $\Box$
## 5 Numerical studies
We now present some results for the well-known benchmark problem defined in
[25]. The domain is given by
$\Omega=(0,2.2)\times(0,0.41)/\left\\{(x,y)\mid(x-0.2)^{2}+(y-0.2)^{2}\leq
0.0025\right\\}.$
On the inflow boundary the velocity is prescribed by
$\boldsymbol{u}(0,y)=\frac{6}{0.41^{2}}\sin\left(\frac{\pi
t}{8}\right)\left(\begin{array}[]{c}y(0.41-y)\\\ 0\end{array}\right)$
and on the outflow boundary $x=2.2$ we set the so-called “do nothing” boundary
condition. On the rest of the boundary the velocity is set to
$\boldsymbol{u}={\bf 0}$. It is well known that for $\nu=0.001$ there is a
stable periodic orbit.
We use piecewise quadratic and linear elements for velocity and pressure,
respectively, on the same mesh as in [10], resulting in 27168 degrees of
freedom for the velocity and 3480 for the pressure. We call this grid here the
main grid. The grid obtained from this one by regular refinement (107328
degrees of freedom for the velocity and 13584 for the pressure), called
refined grid henceforth, was used only to compute a reference solution and the
errors of the snapshots. On both grids we computed the periodic orbit by
finding the fixed point of the return map to a Poincaré section (see e.g.,
[24]). The periods, computed with a relative error below $10^{-6}$, were
$T=0.331761$ and $T_{r}=0.331338$ on the main and refined grid, respectively.
We computed snapshots over one period with a spacing of $T/1024$ and
$T_{r}/1024$. Taking as exact the results on the refined grid, the relative
errors of the snapshots are shown in Fig. 1, both in velocity and in
acceleration, and both in the $L^{2}$ and $H^{1}$ norms. It can be seen that
the errors in velocity are around 0.003 in the $L^{2}$ norm and around 0.03 in
the $H^{1}$ norm, and that the acceleration errors are, approximately, eight
times larger than those of the velocity in the $L^{2}$ norm, and four times
larger in the $H^{1}$ norm.
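The relative errors shown in Fig. 1 are computed in the $L^{2}$ and $H^{1}$ norms which, for finite element coefficient vectors, are matrix-weighted Euclidean norms. A minimal sketch of such a computation (the matrix `M` is a hypothetical stand-in for the assembled mass or stiffness matrix):

```python
import numpy as np

def rel_err(u_ref, u, M):
    """Relative error ||u_ref - u||_M / ||u_ref||_M with ||v||_M = sqrt(v^T M v)."""
    d = u_ref - u
    return np.sqrt(d @ M @ d) / np.sqrt(u_ref @ M @ u_ref)

# Stand-in check: with M the identity, the M-norm reduces to the Euclidean
# norm, so a uniform 1% perturbation has relative error 0.01.
u_ref = np.ones(5)
err = rel_err(u_ref, 1.01 * u_ref, np.eye(5))
assert abs(err - 0.01) < 1e-12
```

In practice `M` would be stored in a sparse format; the dense version is kept only for brevity.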
Figure 1: Snapshot errors in the $L^{2}$ norm (left) and in the $H^{1}$ norm
(right).
Fig. 2 shows the first 256 singular values $\sigma_{k}=\sqrt{\lambda_{k}}$
relative to their Euclidean norm, that is,
$\sigma_{k}/\biggl{(}\sum_{j=1}^{N}\sigma_{j}^{2}\biggr{)}^{1/2},$ (68)
where $N$ is the number of snapshots, both when the inner product is that of
$L^{2}$ and $H^{1}$, in the cases where the elements in the data sets are
$\boldsymbol{y}_{h}^{j}=\boldsymbol{u}_{h}^{j}-\overline{\boldsymbol{u}}_{h}$,
$\boldsymbol{y}_{h}^{j}=T\boldsymbol{u}_{h,t}^{j}$, and
$\boldsymbol{y}_{h}^{j}=T\delta_{t}\boldsymbol{u}_{h}^{j}=T(\boldsymbol{u}_{h}^{j}-\boldsymbol{u}_{h}^{j-1})/\Delta
t$.
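The computation of the normalized singular values in (68) can be sketched as follows; the snapshot matrix `Y` holds hypothetical stand-in data, and for the $L^{2}$ or $H^{1}$ inner products one would take the SVD of $M^{1/2}Y$, with $M$ the corresponding Gram matrix (here $M=I$ for simplicity):

```python
import numpy as np

# Columns of Y play the role of the data-set elements y_h^j.
rng = np.random.default_rng(0)
Y = rng.standard_normal((200, 64))          # hypothetical: 200 dofs, N = 64 snapshots

sigma = np.linalg.svd(Y, compute_uv=False)  # sigma_k = sqrt(lambda_k), decreasing
sigma_rel = sigma / np.sqrt(np.sum(sigma**2))  # normalization (68)

# The normalized values decrease and their squares sum to one.
assert np.all(np.diff(sigma_rel) <= 0.0)
assert abs(np.sum(sigma_rel**2) - 1.0) < 1e-10
```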
Figure 2: First 256 singular values of data set relative to their Euclidean
norm (68), for the $L^{2}$ inner product (left) and the $H^{1}$ inner product
(right).
We see that, with both inner products, the singular values are slightly
larger when the data set consists of the time derivatives or their
approximation, but already for $k=50$ their value is considerably smaller
than the approximation errors in Fig. 1. In fact, for the singular values
corresponding to the data set given by the fluctuations
$\boldsymbol{u}_{h}^{j}-\overline{\boldsymbol{u}}_{h}$ (blue line), there are
only $16$ values above $10^{-3}$ in Fig. 2 (left) and $14$ above $10^{-2}$ in
the right plot. Note that $10^{-3}$ and $10^{-2}$ are about one third of the
average errors in Fig. 1 for the $L^{2}$ and the $H^{1}$ norm, so it is
reasonable to use a POD basis with no more elements: although a larger basis
may approximate the elements of the data set better, those elements already
carry larger approximation errors. Consequently, in the sequel, we use a POD
basis with 16 elements if the $L^{2}$ inner product is utilized and $14$ in
case of the $H^{1}$ inner product.
Next we present in Fig. 3 the projection errors of the snapshots,
$\left\|\boldsymbol{u}_{h}^{j}-P_{r}^{v}\boldsymbol{u}_{h}^{j}\right\|_{i}\bigg{/}\left\|\boldsymbol{u}_{h}^{j}\right\|_{i},\quad
j=1,\ldots,N,\quad i\in\\{0,1\\},$ (69)
for data sets
$\boldsymbol{y}_{h}^{j}=\boldsymbol{u}_{h}^{j}-\overline{\boldsymbol{u}}_{h}$,
$\boldsymbol{y}_{h}^{j}=T\boldsymbol{u}_{h,t}^{j}$, and
$\boldsymbol{y}_{h}^{j}=T\delta_{t}\boldsymbol{u}_{h}^{j}=T(\boldsymbol{u}_{h}^{j}-\boldsymbol{u}_{h}^{j-1})/\Delta
t$, when $r=16$ and $r=14$ for the POD bases based on the $L^{2}$ and $H^{1}$
inner products, respectively.
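A sketch of how the projection errors (69) can be evaluated, again with hypothetical stand-in snapshots and the Euclidean inner product; the POD projection is $P_{r}\boldsymbol{y}=\Phi\Phi^{T}\boldsymbol{y}$, with $\Phi$ the matrix of the first $r$ modes:

```python
import numpy as np

rng = np.random.default_rng(1)
Y = rng.standard_normal((200, 64))           # hypothetical snapshots, columns y_h^j
U, s, _ = np.linalg.svd(Y, full_matrices=False)
r = 16
Phi = U[:, :r]                                # first r POD modes

proj = Phi @ (Phi.T @ Y)                      # orthogonal projection of each snapshot
rel_err = np.linalg.norm(Y - proj, axis=0) / np.linalg.norm(Y, axis=0)

# An orthogonal projection never increases the norm, and the total squared
# projection error equals the tail sum of the eigenvalues lambda_k = s_k^2.
assert np.all(rel_err <= 1.0 + 1e-12)
assert abs(np.sum((Y - proj) ** 2) - np.sum(s[r:] ** 2)) < 1e-8 * np.sum(s**2)
```

The second assertion is the discrete counterpart of the tail-sum bound quoted below (68); for the weighted inner products the projection would be carried out in the corresponding $M$-inner product.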
Figure 3: Snapshot projection errors (69) for the correlation matrix based on
$L^{2}$ product (left) and $H^{1}$ products (right). Results for data sets
$\boldsymbol{y}_{h}^{j}=\boldsymbol{u}_{h}^{j}-\overline{\boldsymbol{u}}_{h}$
(blue), $\boldsymbol{y}_{h}^{j}=T\boldsymbol{u}_{h,t}^{j}$ (magenta), and
$\boldsymbol{y}_{h}^{j}=T\delta_{t}\boldsymbol{u}_{h}^{j}=T(\boldsymbol{u}_{h}^{j}-\boldsymbol{u}_{h}^{j-1})/\Delta
t$ (black).
We notice that the results on the left plot are quite independent of the data
set used, whereas those on the right plot show some advantage for the data set
based on the fluctuations,
$\boldsymbol{y}_{h}^{j}=\boldsymbol{u}_{h}^{j}-\overline{\boldsymbol{u}}_{h}$,
although in all cases the projection errors are well below the upper bounds
$\biggl{(}\sum_{k=r+1}^{N}\lambda_{k}\biggr{)}^{1/2}\bigg{/}\biggl{(}\sum_{k=1}^{N}\lambda_{k}\biggr{)}^{1/2}$
which, we recall, are $0.001$ and $0.01$ for the POD bases based on $L^{2}$
and $H^{1}$, respectively.
Fig. 4 depicts the POD-ROM errors
$\left\|\boldsymbol{u}_{r}^{j}-\boldsymbol{u}_{h}^{j}\right\|_{i}\bigg{/}{\left\|\boldsymbol{u}_{h}^{j}\right\|_{i}},\quad
j=1,\ldots,N,\quad i\in\\{0,1\\},$ (70)
with the same color convention as in Fig. 3.
Figure 4: POD-ROM errors (70) for correlation matrix based on $L^{2}$ product
(left) and $H^{1}$ products (right). Results for data sets
$\boldsymbol{y}_{h}^{j}=\boldsymbol{u}_{h}^{j}-\overline{\boldsymbol{u}}_{h}$
(blue), $\boldsymbol{y}_{h}^{j}=T\boldsymbol{u}_{h,t}^{j}$ (magenta), and
$\boldsymbol{y}_{h}^{j}=T\delta_{t}\boldsymbol{u}_{h}^{j}=T(\boldsymbol{u}_{h}^{j}-\boldsymbol{u}_{h}^{j-1})/\Delta
t$ (black).
For the time integration, instead of the backward Euler method in (42), we
used the two-step BDF formula (BDF2) with fixed step size $\Delta t=T/N$,
$N=1024$, except for the first step, where the backward Euler method was
applied. The parameter for the grad-div term was $\mu=0.01$, as in the
computation of the snapshots. We see that the results are largely independent
of the data set used, for both inner products, and that the POD-ROM errors
are slightly larger than the corresponding projection errors, except for the
data set based on the fluctuations
$\boldsymbol{y}_{h}^{j}=\boldsymbol{u}_{h}^{j}-\overline{\boldsymbol{u}}_{h}$
(blue line) with the POD basis based on the $H^{1}$ inner product (right
plot), where the errors are approximately twice the size of the corresponding
errors in Fig. 3.
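The time stepping just described can be sketched on a scalar linear test problem $u'=\lambda u$ (an assumption made for illustration only; in the paper the scheme is applied to the nonlinear POD-ROM system (42)): one backward Euler step to start, then the two-step BDF formula.

```python
import math

def bdf2_linear(lam, u0, dt, n_steps):
    """BDF2 for u' = lam*u with fixed step dt, initialized by one backward Euler step."""
    u_prev = u0
    u_curr = u0 / (1.0 - dt * lam)  # backward Euler: (u^1 - u^0)/dt = lam*u^1
    for _ in range(n_steps - 1):
        # BDF2: (3 u^{n+1} - 4 u^n + u^{n-1}) / (2 dt) = lam * u^{n+1}
        u_prev, u_curr = u_curr, (4.0 * u_curr - u_prev) / (3.0 - 2.0 * dt * lam)
    return u_curr

lam, T = -1.0, 1.0
errs = [abs(bdf2_linear(lam, 1.0, T / N, N) - math.exp(lam * T)) for N in (64, 128)]
# Halving the step size reduces the error roughly by a factor of four (order two).
assert 3.0 < errs[0] / errs[1] < 5.0
```

The single backward Euler starting step contributes a local error of the same order as the global BDF2 error, so the overall second-order accuracy is preserved.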
Finally, lift and drag coefficients are studied. For the FOM, they were
computed as indicated in [16] (see also [10]). For the POD-ROM method they
were computed as in [11]. In Fig. 5 we see the evolution of the drag and lift
coefficients along 12 periods for the reference solution (computed on the
refined grid), the FOM, and the POD-ROM approximation with the fluctuations
$\boldsymbol{u}_{h}^{j}-\overline{\boldsymbol{u}}_{h}$ of the snapshots
computed in the first period as data set with the $L^{2}$ inner product (all
other POD-ROM approximations gave similar results).
Figure 5: Drag (left) and lift (right) coefficients for the reference solution
(black), the FOM (magenta), and the POD-ROM approximation with snapshots
$\boldsymbol{u}_{h}^{j}$ as data set and $L^{2}$ inner product.
At first sight, the agreement is excellent. Fig. 6 compares the first and
twelfth periods. There is a phase difference between the black line and the
other two, since, as commented above, the “exact” solution computed on the
refined grid has a slightly different period than that computed on the main
grid. We notice, however, that the difference between the other two lines is
hardly altered from the first period to the last one.
Figure 6: Drag (left) and lift (right) coefficients for the reference solution
(black), the FOM (magenta), and the POD-ROM approximation with snapshots
$\boldsymbol{u}_{h}^{j}$ as data set and $L^{2}$ inner product.
In addition, the results for the lift coefficient corresponding to the FOM and
the POD-ROM approximation are on top of each other (in fact the maximum
relative error of this coefficient with respect to that of the FOM in the 12
periods is below $1.7\times 10^{-4}$). Some differences can be seen in the
drag coefficient. However, the maximum relative error with respect to the
coefficient computed with the FOM is below 0.0011.
In summary, there are no significant differences between the POD-ROM
approximations, i.e., they are quite independent of the data set used. Only
some minor differences in the projection errors depending on the data set,
when the $H^{1}$ inner product is used, could be observed in the computation
of the POD basis.
## 6 Conclusions
We analyzed reduced order models for the incompressible Navier–Stokes
equations based on proper orthogonal decomposition methods. The influence of
including approximations to the time derivative in the set of snapshots was
studied. Our set of snapshots is constituted by the approximation to the
velocity at the initial time together with approximations to the time
derivative at different times. The approximation to the velocity at the
initial time can be replaced by the mean value and then only approximations to
the time derivatives are required for applying the POD method to the
fluctuations. The Galerkin time derivative can be replaced by any other
approximation, such as the standard difference quotient.
We studied the differences between projecting onto $L^{2}$ and $H^{1}$. We
proved that including the Galerkin time derivatives (or the difference
quotients) leads to pointwise estimates for both projections. In the $L^{2}$
case, error bounds can be proved in the no-time-derivatives case (with only
snapshots for the velocity) as shown in the literature, e.g., see [23].
However, the time derivatives approach is useful also in this case to get
pointwise estimates. In the numerical analysis, we utilized the projection of
the continuous-in-time Galerkin approximation, since this allows any time
integrator to be used in practice for computing the snapshots. It is easy to
include the corresponding error in time in the bounds of the present paper.
Also, different times can be considered to compute the set of snapshots and
the fully discrete POD approximations. Finally, as in one of the methods in
[23], we added grad-div stabilization to the approximations computed with the
FOM and POD methods to be able to prove error bounds in which the constants do
not depend on inverse powers of the viscosity.
In the numerical studies, we compared three different sets of snapshots, for
both inner products, where the elements in the data sets were
$\boldsymbol{y}_{h}^{j}=\boldsymbol{u}_{h}^{j}-\overline{\boldsymbol{u}}_{h}$,
$\boldsymbol{y}_{h}^{j}=T\boldsymbol{u}_{h,t}^{j}$, and
$\boldsymbol{y}_{h}^{j}=T\delta_{t}\boldsymbol{u}_{h}^{j}=T(\boldsymbol{u}_{h}^{j}-\boldsymbol{u}_{h}^{j-1})/\Delta
t$. While the snapshots were computed using only one period, we showed that
very good approximations to the lift and drag coefficients are obtained in a
time interval of 12 periods. In our numerical studies there were no
significant differences between the different procedures. More comprehensive
studies have to further investigate this topic.
Altogether, we think that the rigorous numerical analysis presented in this
paper shows interesting properties and sharp bounds for the different methods.
It also supports the idea of the recent paper [22], which shows that in the
case of the heat equation the set of snapshots needs to contain only one
approximation to the solution at a fixed time. We prove that the same holds
for the approximation to the incompressible Navier–Stokes equations.
Moreover, in case of applying the POD method to fluctuations, as is standard,
only snapshots of the time derivative are needed.
## Appendix A Robust bounds for the terms with the second temporal derivative
In this section we derive bounds for the terms (67) and (59). As a
consequence, the terms on the right-hand sides of (4.1) and (66) that contain
(59) and (67), respectively, are bounded.
The constants in the bounds below do not blow up as $\nu\to 0$. It will be
assumed that all functions are sufficiently smooth such that the performed
operations are well defined. It holds that
$\|\boldsymbol{u}_{h,tt}\|_{0}\leq\|\boldsymbol{u}_{h,tt}-\boldsymbol{s}_{h,tt}\|_{0}+\|\boldsymbol{u}_{tt}\|_{0}+\|\boldsymbol{s}_{h,tt}-\boldsymbol{u}_{tt}\|_{0},$
(71)
and likewise for the norm of the gradient. The first term on the right-hand
side will be bounded below. The second term is bounded because the solution is
sufficiently smooth, and then the last term can be bounded with (8), where the
notation $\boldsymbol{s}_{h,tt}$ has to be understood in the sense that the
modified Stokes projection is applied to $\boldsymbol{u}_{tt}$.
Denote $\boldsymbol{e}_{h}=\boldsymbol{u}_{h}-\boldsymbol{s}_{h}$. The error
equation for (13) is given by
$\displaystyle\left(\boldsymbol{e}_{h,t},\boldsymbol{\varphi}_{h}\right)+\nu(\nabla\boldsymbol{e}_{h},\nabla\boldsymbol{\varphi}_{h})+b(\boldsymbol{u}_{h},\boldsymbol{u}_{h},\boldsymbol{\varphi}_{h})-b(\boldsymbol{u},\boldsymbol{u},\boldsymbol{\varphi}_{h})+\mu(\nabla\cdot\boldsymbol{e}_{h},\nabla\cdot\boldsymbol{\varphi}_{h})$
(72) $\displaystyle=$
$\displaystyle(\sigma_{1},\boldsymbol{\varphi}_{h})+(\sigma_{2},\nabla\cdot\boldsymbol{\varphi}_{h})\quad\forall\
\boldsymbol{\varphi}_{h}\in{\boldsymbol{V}}_{h}^{l},$
with $\sigma_{1}=\boldsymbol{u}_{t}-\boldsymbol{s}_{h,t}$,
$\sigma_{2}=(p-P_{Q_{h}}p)+\mu\nabla\cdot(\boldsymbol{u}-\boldsymbol{s}_{h})$,
and $P_{Q_{h}}p$ is the best approximation of $p$ in $Q_{h}^{l}$. Note that if
a (discrete) velocity function is in $\boldsymbol{V}$ (or
${\boldsymbol{V}}_{h}^{l}$), then also the temporal derivatives of this
function are in the same space. Hence, taking
$\boldsymbol{\varphi}_{h}=\boldsymbol{e}_{h,t}\in{\boldsymbol{V}}_{h}^{l}$ in
(72) and using the inverse inequality (6) yields
$\displaystyle\|\boldsymbol{e}_{h,t}\|_{0}^{2}+\frac{d}{dt}\frac{1}{2}\nu\|\nabla\boldsymbol{e}_{h}\|_{0}^{2}+\frac{d}{dt}\frac{1}{2}\mu\|\nabla\cdot\boldsymbol{e}_{h}\|_{0}^{2}\leq|b(\boldsymbol{u}_{h}-\boldsymbol{u},\boldsymbol{u},\boldsymbol{e}_{h,t})|$
(73)
$\displaystyle+|b(\boldsymbol{u}_{h},\boldsymbol{u}_{h}-\boldsymbol{u},\boldsymbol{e}_{h,t})|+\|\sigma_{1}\|_{0}\|\boldsymbol{e}_{h,t}\|_{0}+c_{\rm
inv}\|\sigma_{2}\|_{0}h^{-1}\|\boldsymbol{e}_{h,t}\|_{0}.$
The first term on the right-hand side is estimated with Hölder’s inequality,
(15), and Young’s inequality
$\displaystyle|b(\boldsymbol{u}_{h}-\boldsymbol{u},\boldsymbol{u},\boldsymbol{e}_{h,t})|$
$\displaystyle\leq$
$\displaystyle\|\boldsymbol{u}_{h}-\boldsymbol{u}\|_{0}\|\nabla\boldsymbol{u}\|_{\infty}\|\boldsymbol{e}_{h,t}\|_{0}+\frac{1}{2}\|\nabla\cdot(\boldsymbol{u}_{h}-\boldsymbol{u})\|_{0}\|\boldsymbol{u}\|_{\infty}\|\boldsymbol{e}_{h,t}\|_{0}$
$\displaystyle\leq$ $\displaystyle
Ch^{2(l-1)}+\frac{1}{8}\|\boldsymbol{e}_{h,t}\|_{0}^{2}.$
For the second term, an estimate from [16, Lemma 6.11], the inverse
inequality (6), and the condition $l\geq 2$ are utilized in addition
$\displaystyle|b(\boldsymbol{u}_{h},\boldsymbol{u}_{h}-\boldsymbol{u},\boldsymbol{e}_{h,t})|$
$\displaystyle\leq$
$\displaystyle|b(\boldsymbol{u}_{h}-\boldsymbol{u},\boldsymbol{u}_{h}-\boldsymbol{u},\boldsymbol{e}_{h,t})|+|b(\boldsymbol{u},\boldsymbol{u}_{h}-\boldsymbol{u},\boldsymbol{e}_{h,t})|$
$\displaystyle\leq$ $\displaystyle
C\|\boldsymbol{u}_{h}-\boldsymbol{u}\|_{1}^{2}\|\boldsymbol{e}_{h,t}\|_{1}+\|\boldsymbol{u}\|_{\infty}\|\nabla(\boldsymbol{u}_{h}-\boldsymbol{u})\|_{0}\|\boldsymbol{e}_{h,t}\|_{0}$
$\displaystyle\leq$ $\displaystyle
C\|\boldsymbol{u}_{h}-\boldsymbol{u}\|_{1}^{2}c_{\rm
inv}h^{-1}\|\boldsymbol{e}_{h,t}\|_{0}+C\|\boldsymbol{u}_{h}-\boldsymbol{u}\|_{1}\|\boldsymbol{e}_{h,t}\|_{0}$
$\displaystyle\leq$ $\displaystyle
C\left(h^{4l-6}+h^{2l-2}\right)+\frac{1}{8}\|\boldsymbol{e}_{h,t}\|_{0}^{2}\leq
Ch^{2(l-1)}+\frac{1}{8}\|\boldsymbol{e}_{h,t}\|_{0}^{2}.$
For the last two terms, we obtain, with (8), the $L^{2}(\Omega)$ best
approximation error for the pressure, and Young’s inequality, the following
bound
$\displaystyle\|\sigma_{1}\|_{0}\|\boldsymbol{e}_{h,t}\|_{0}+c_{\rm
inv}\|\sigma_{2}\|_{0}h^{-1}\|\boldsymbol{e}_{h,t}\|_{0}$ $\displaystyle\leq$
$\displaystyle 2\|\sigma_{1}\|_{0}^{2}+c_{\rm
inv}^{2}\|\sigma_{2}\|_{0}^{2}h^{-2}+\frac{1}{4}\|\boldsymbol{e}_{h,t}\|_{0}^{2}$
$\displaystyle\leq$ $\displaystyle
Ch^{2(l-1)}+\frac{1}{4}\|\boldsymbol{e}_{h,t}\|_{0}^{2}.$
Absorbing terms in the left-hand side of (73) and collecting the other terms
on the right-hand side leads to
$\frac{1}{2}\left(\|\boldsymbol{e}_{h,t}\|_{0}^{2}+\frac{d}{dt}\nu\|\nabla\boldsymbol{e}_{h}\|_{0}^{2}+\frac{d}{dt}\mu\|\nabla\cdot\boldsymbol{e}_{h}\|_{0}^{2}\right)\leq
Ch^{2(l-1)},$ (74)
such that, assuming for simplicity
$\boldsymbol{u}_{h}(0)=\boldsymbol{s}_{h}(0)$, we derive the estimate
$\int_{0}^{t}\|\boldsymbol{e}_{h,t}(s)\|_{0}^{2}\
ds+\nu\|\nabla\boldsymbol{e}_{h}(t)\|_{0}^{2}+\mu\|\nabla\cdot\boldsymbol{e}_{h}(t)\|_{0}^{2}\leq
Ch^{2(l-1)}.$ (75)
Considering $t\to 0$ in (72), the viscous term and the grad-div stabilization
term vanish, since $\boldsymbol{e}_{h}(0)=\boldsymbol{0}$. A pressure at $t=0$
can be defined as proposed in [13]. Then, one can perform the same analysis as
above and obtains, instead of (74),
$\frac{1}{2}\|\boldsymbol{e}_{h,t}(0)\|_{0}^{2}\leq Ch^{2(l-1)}.$
Applying the inverse inequality yields
$\|(\nabla\boldsymbol{e}_{h,t})(0)\|_{0}^{2}\leq
Ch^{2(l-2)},\quad\|(\nabla\cdot\boldsymbol{e}_{h,t})(0)\|_{0}^{2}\leq
Ch^{2(l-2)},$ (76)
such that the norms on the left-hand sides are bounded for $l\geq 2$.
Taking the time derivative of (72) gives
$\displaystyle\left(\boldsymbol{e}_{h,tt},\boldsymbol{\varphi}_{h}\right)+\nu(\nabla\boldsymbol{e}_{h,t},\nabla\boldsymbol{\varphi}_{h})+b(\boldsymbol{u}_{h,t},\boldsymbol{u}_{h},\boldsymbol{\varphi}_{h})-b(\boldsymbol{u}_{t},\boldsymbol{u},\boldsymbol{\varphi}_{h})$
$\displaystyle+b(\boldsymbol{u}_{h},\boldsymbol{u}_{h,t},\boldsymbol{\varphi}_{h})-b(\boldsymbol{u},\boldsymbol{u}_{t},\boldsymbol{\varphi}_{h})+\mu(\nabla\cdot\boldsymbol{e}_{h,t},\nabla\cdot\boldsymbol{\varphi}_{h})$
$\displaystyle=$
$\displaystyle(\sigma_{1,t},\boldsymbol{\varphi}_{h})+(\sigma_{2,t},\nabla\cdot\boldsymbol{\varphi}_{h})\quad\forall\
\boldsymbol{\varphi}_{h}\in{\boldsymbol{V}}_{h}^{l}.
Choosing $\boldsymbol{\varphi}_{h}=\boldsymbol{e}_{h,tt}$ and arguing as
before leads to
$\displaystyle\|\boldsymbol{e}_{h,tt}\|_{0}^{2}+\frac{d}{dt}\frac{1}{2}\nu\|\nabla\boldsymbol{e}_{h,t}\|_{0}^{2}+\frac{d}{dt}\frac{1}{2}\mu\|\nabla\cdot\boldsymbol{e}_{h,t}\|_{0}^{2}$
$\displaystyle\leq$ $\displaystyle\|\sigma_{1,t}\|_{0}^{2}+c_{\rm
inv}^{2}h^{-2}\|\sigma_{2,t}\|_{0}^{2}+\frac{1}{4}\|\boldsymbol{e}_{h,tt}\|_{0}^{2}+|b(\boldsymbol{u}_{h,t}-\boldsymbol{u}_{t},\boldsymbol{u},\boldsymbol{e}_{h,tt})|$
$\displaystyle+|b(\boldsymbol{u}_{h,t},\boldsymbol{u}-\boldsymbol{u}_{h},\boldsymbol{e}_{h,tt})|+|b(\boldsymbol{u}_{h}-\boldsymbol{u},\boldsymbol{u}_{t},\boldsymbol{e}_{h,tt})|+|b(\boldsymbol{u}_{h},\boldsymbol{u}_{t}-\boldsymbol{u}_{h,t},\boldsymbol{e}_{h,tt})|.$
Let us observe that the bounds for $\|\sigma_{1,t}\|_{0}$ and
$\|\sigma_{2,t}\|_{0}$ depend on the regularity of $\boldsymbol{u}_{t}$,
$\boldsymbol{u}_{tt}$, and $p_{t}$, but all are bounded if the solution is
sufficiently regular. For the nonlinear terms, the same arguments as above are
used. For the first term, we obtain
$\displaystyle|b(\boldsymbol{u}_{h,t}-\boldsymbol{u}_{t},\boldsymbol{u},\boldsymbol{e}_{h,tt})|$
(79) $\displaystyle\leq$
$\displaystyle\|\boldsymbol{u}_{h,t}-\boldsymbol{u}_{t}\|_{0}\|\nabla\boldsymbol{u}\|_{\infty}\|\boldsymbol{e}_{h,tt}\|_{0}+\frac{1}{2}\|\nabla\cdot(\boldsymbol{u}_{h,t}-\boldsymbol{u}_{t})\|_{0}\|\boldsymbol{u}\|_{\infty}\|\boldsymbol{e}_{h,tt}\|_{0}$
$\displaystyle\leq$ $\displaystyle
C\|\boldsymbol{u}_{t}-\boldsymbol{u}_{h,t}\|_{1}^{2}+\frac{1}{16}\|\boldsymbol{e}_{h,tt}\|_{0}^{2}.$
It follows from (8), (75), and the inverse inequality (6) that
$\int_{0}^{t}\|\boldsymbol{u}_{t}-\boldsymbol{u}_{h,t}\|_{1}^{2}\leq
Ch^{2(l-2)},$
so that absorbing the second term on the right-hand side of (79) in the left-
hand side of (A) and integrating in time, the corresponding first term on the
right-hand side of (79) is bounded. For the second nonlinear term on the
right-hand side of (A), we obtain
$\displaystyle|b(\boldsymbol{u}_{h,t},\boldsymbol{u}-\boldsymbol{u}_{h},\boldsymbol{e}_{h,tt})|$
$\displaystyle\leq$ $\displaystyle
C\|\boldsymbol{u}_{h,t}\|_{1}\|\boldsymbol{u}-\boldsymbol{u}_{h}\|_{1}c_{\rm
inv}h^{-1}\|\boldsymbol{e}_{h,tt}\|_{0}$ $\displaystyle\leq$ $\displaystyle
Ch^{2(l-2)}\|\boldsymbol{u}_{h,t}\|_{1}^{2}+\frac{1}{16}\|\boldsymbol{e}_{h,tt}\|_{0}^{2}.$
Again, the last term on the right-hand side can be absorbed in the left-hand
side of (A) and the integral with respect to time of the first term is
bounded. For the third nonlinear term we argue as for the second one to get
$|b(\boldsymbol{u}_{h}-\boldsymbol{u},\boldsymbol{u}_{t},\boldsymbol{e}_{h,tt})|\leq C\|\boldsymbol{u}_{h}-\boldsymbol{u}\|_{1}\|\boldsymbol{u}_{t}\|_{1}c_{\rm inv}h^{-1}\|\boldsymbol{e}_{h,tt}\|_{0}\leq
Ch^{2(l-2)}\|\boldsymbol{u}_{t}\|_{1}^{2}+\frac{1}{16}\|\boldsymbol{e}_{h,tt}\|_{0}^{2}.$
Using for the fourth nonlinear term the inverse inequality (6) and (8) gives
$\displaystyle|b(\boldsymbol{u}_{h},\boldsymbol{u}_{t}-\boldsymbol{u}_{h,t},\boldsymbol{e}_{h,tt})|\leq\|\boldsymbol{u}_{h}\|_{\infty}\|\boldsymbol{u}_{t}-\boldsymbol{u}_{h,t}\|_{1}\|\boldsymbol{e}_{h,tt}\|_{0}$
$\displaystyle+\frac{1}{2}\|\nabla\cdot(\boldsymbol{u}_{h}-\boldsymbol{s}_{h})\|_{\infty}\|\boldsymbol{u}_{t}-\boldsymbol{u}_{h,t}\|_{0}\|\boldsymbol{e}_{h,tt}\|_{0}+\frac{1}{2}\|\nabla\cdot\boldsymbol{s}_{h}\|_{\infty}\|\boldsymbol{u}_{t}-\boldsymbol{u}_{h,t}\|_{0}\|\boldsymbol{e}_{h,tt}\|_{0}$
$\displaystyle\leq$ $\displaystyle
C\left(\|\boldsymbol{u}_{t}-\boldsymbol{u}_{h,t}\|_{1}^{2}+c_{\rm
inv}^{2}h^{-d}h^{2(l-1)}\|\boldsymbol{u}_{t}-\boldsymbol{u}_{h,t}\|_{0}^{2}\right)+\frac{1}{16}\|\boldsymbol{e}_{h,tt}\|_{0}^{2}.$
Again, the last term on the right-hand side above is absorbed into the left-
hand side of (A), the integral of the first term is bounded, and the integral
for the second is
$c_{\rm
inv}^{2}h^{-d}h^{2(l-1)}\int_{0}^{t}\|\boldsymbol{u}_{t}-\boldsymbol{u}_{h,t}(s)\|_{0}^{2}\
ds\leq Ch^{4l-4-d},$
which is bounded since $l\geq 2$. Collecting all error bounds and taking (76)
into account, we conclude that
$\int_{0}^{t}\|\boldsymbol{e}_{h,tt}(s)\|_{0}^{2}\ ds\leq Ch^{2(l-2)},$ (80)
where the constant does not depend explicitly on inverse powers of $\nu$.
Estimate (80) can be applied, in combination with (71), in (66). To bound
(59), which is the term appearing in (4.1), the inverse inequality gives
$\int_{0}^{t}\|\nabla(\boldsymbol{e}_{h,tt})(s)\|_{0}^{2}\ ds\leq
Ch^{2(l-3)},$
such that a robust estimate is proved only for pairs of finite element spaces
with $l\geq 3$.
# Scaling Black Holes and Modularity
Aradhita Chattopadhyaya, Jan Manschot, Swapnamay Mondal
1 School of Mathematics, Trinity College, Dublin 2, Ireland
2 Hamilton Mathematical Institute, Trinity College, Dublin 2, Ireland
###### Abstract:
Scaling black holes are solutions of supergravity with multiple black hole
singularities, which can be adiabatically connected to a single center black
hole solution. We develop techniques to determine partition functions for such
scaling black holes, if each constituent carries a non-vanishing magnetic
charge corresponding to a D4-brane in string theory, or equivalently M5-brane
in M-theory. For three constituents, we demonstrate that the partition
function is a mock modular form of depth two, and we determine the appropriate
non-holomorphic completion using generalized error functions. From the four-
dimensional perspective, the modular parameter is the axion-dilaton, and our
results show that $S$-duality leaves this subset of the spectrum invariant.
From the five-dimensional perspective, the modular parameter is the complex
structure of a torus $T^{2}$, and the scaling black holes are dual to states
in the dimensional reduction of the M5-brane worldvolume theory to $T^{2}$. As
a case study, we specialize the compactification manifold to a K3 fibration,
and explicitly evaluate holomorphic parts of partition functions.
## 1 Introduction
Solutions of supergravity with multiple black hole singularities provide
interesting insights on the spectrum of quantum gravity and the dependence on
the compactification moduli [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. Such multi-center
solutions are particularly intriguing if their solution space contains a
region where the distances between the centers become arbitrarily small. In
other words, these solutions, known as “scaling” solutions [6, 11] in four
dimensions, are adiabatically connected to a supergravity solution with a
single black hole singularity.
We will study such scaling black holes formed from M5-branes in M-theory, or
equivalently D4-branes in Type IIA string theory [12]. Such black holes are
well studied using the AdS3/CFT2 correspondence [12, 13, 14, 15], as well as
the hypermultiplet geometry of the IIB string theory [16, 17, 18]. The
D4/M5-branes wrap a four-cycle (or divisor) $P\in H_{4}(X,\mathbb{Z})$ of the
Calabi-Yau threefold $X$, which can also be studied within algebraic geometry
[19, 20, 21, 22, 23].
Multi-center solutions exist with each center having a positive D4-brane
charge but vanishing D6-brane charge. Their walls of marginal stability extend
to the large volume regime of the Kähler moduli space [24, 25, 26, 27], which
corresponds to $\mathrm{Im}(t)=J=\lambda\,\underline{J}$, where
$\underline{J}$ is the normalized, dimensionless Kähler modulus,
$\underline{J}^{3}=1$, and $\lambda\gg 1$. While attractor points lie
typically in the interior of the Kähler moduli space, there is an analogue of
the attractor point at large volume $t^{\lambda}_{\gamma}$ defined in Eq. (58)
[18, 25, 27]. Similarly, single center black holes, whose internal degrees of
freedom are independent of the asymptotic moduli, have an analogue at large
volume, where families of black hole solutions appear effectively as single
center black holes, much like nucleons can be considered as elementary
particles at sufficiently low energies.
To deal with these effectively single center solutions, we introduce the
notion of “$\lambda$-core”, or “core” for short. The defining property of a
“$\lambda$-core” is that the distances between constituents in a
$\lambda$-core are bounded by $C\lambda^{-3}$ for sufficiently large
$\lambda$, and for some fixed length $C$. Examples of such bound black hole
states are of course proper single center black holes, as well as bound states
of a D6-brane and anti-D6-brane, for which the wall of marginal stability lies
in the interior of the Kähler moduli space.
The $\lambda$-core solutions are also distinguished in the uplift to five
dimensions, and in the decoupling limit. Recall that the four-dimensional Newton's
constant equals $G_{4}=\ell_{5}^{3}/R$, with $\ell_{5}$ the five-dimensional
Planck length and $R$ the radius of the M-theory circle. Then, $\lambda$ is
related to these variables by $\lambda=R/\ell_{5}$ [27]. When uplifted to five
dimensions and in the decoupling limit $\ell_{5}\to 0$, states with D4-D2-D0
charges and within a $\lambda$-core will develop an asymptotically AdS3
throat, whereas bound states with larger separation will decouple from the
spectrum since their energy diverges [27].
In a series of works [18, 28, 29, 30], the modular properties of partition
functions enumerating D4-D2-D0 black holes are studied from a complementary
perspective, namely by mapping this D-brane system to D3-D1-D(-1) instantons.
These D-instantons correct the hypermultiplet geometry [17, 31, 32, 33], which
is constrained by IIB $SL(2,\mathbb{Z})$ S-duality group. This duality group
acts on the axion-dilaton field $\chi:=C^{(0)}+ie^{-\phi}$ by linear
fractional transformations. Here $C^{(0)}$ is the RR scalar and $\phi$ is the
dilaton. Alternatively, the duality can be identified with the modular
symmetry of the worldvolume reduction of M5-branes. These have proven to be
fruitful connections to determine the modular and analytic properties of the
partition functions for D4-D2-D0 black holes. In this way, non-holomorphic
contributions to the partition function are determined, which imply that the
partition functions involve mock modular forms [34, 35], and mock modular
forms of higher depth [36, 37, 38, 39]. This implies potentially interesting
arithmetic of the BPS indices, while the non-holomorphic contribution is also
interesting independently, and a generalization of similar non-holomorphic
terms in partition functions of $\mathcal{N}=4$ Yang-Mills on four-manifolds
[40, 41]. The origins and explicit expressions of these terms have been
understood better recently [30, 42, 43, 44, 45, 46, 47, 48].
We will study in this paper the partition function of scaling black holes with
three constituents. Each constituent is a $\lambda$-core carrying a positive
D4-brane charge, but vanishing D6-brane charge. Intriguingly, each core gives
rise to an associated AdS3/CFT2 after uplifting to M-theory, while the near
coincident region gives rise to an asymptotic AdS3 solution for the total
magnetic charge, and should thus be captured by the AdS3/CFT2 correspondence
for the total charge. In other words, they are examples of AdS fragmentation,
and are attributed to the Coulomb branch of the CFT [1]. The existence of scaling
black holes with these charges puts constraints on the topology of the Calabi-
Yau threefold. In particular, these only exist if the second Betti number of
the Calabi-Yau manifold satisfies $b_{2}\geq 2$.
Our explicit analysis of scaling solutions gives rise to a decomposition of
the partition function of attractor black holes $\mathcal{Z}^{\lambda}_{P}$
with fixed magnetic charge $P$ in terms of partition functions of single-core
and $n$-core scaling black holes $\mathcal{Z}^{nT}_{P}$ (see also Eq. (128)).
This reads explicitly,
$\mathcal{Z}^{\lambda}_{P}=\mathcal{Z}^{T}_{P}+\mathcal{Z}^{3T}_{P}+\dots,$
(1)
where $\mathcal{Z}^{T}_{P}$ is the partition function of single-core black
holes, $\mathcal{Z}^{3T}_{P}$ the partition function of scaling black holes
consisting of three cores, and the dots stand for scaling black holes with
$n>3$ cores. We determine the holomorphic part of the partition function using
formulas for the degeneracies of black hole bound states such as studied in
[6, 11, 49, 50, 51, 52, 53], which gives rise to a holomorphic indefinite
theta series $\Psi_{\boldsymbol{\mu}}$ of signature $(2,2b_{2}-2)$. Since its
coefficients grow polynomially, this demonstrates that the entropy arising
from these solutions is exponentially smaller than the entropy of a single
center black hole. This is expected since we have not included pure Higgs
degeneracies [11, 49] or single-center degeneracies [51]. We have worked out two explicit
case studies, where we specialize the CY three-fold to a K3 fibration and
determine explicit $q$-series for the partition function. We do observe that
the exponent of the leading term in the $q$-expansions is rather large.
We have moreover demonstrated that the partition function
$\mathcal{Z}^{3T}_{P}$ admits a theta function decomposition as a consequence
of a spectral flow symmetry familiar from the MSW CFT. This is in contrast to generic
bound states with non-vanishing D4-brane charge for which this symmetry is not
present [25]. On the other hand for modular transformations,
$\mathcal{Z}^{3T}_{P}$ needs to be complemented with additional non-
holomorphic terms, a phenomenon which is also familiar for the attractor
partition function $\mathcal{Z}^{\lambda}_{P}$ as mentioned above. We
distinguish the completed functions from the original functions by a hat, thus
$\widehat{\mathcal{Z}}^{\lambda}_{P}$ for $\mathcal{Z}^{\lambda}_{P}$ and
$\widehat{\mathcal{Z}}^{3T}_{P}$ for $\mathcal{Z}^{3T}_{P}$ and similarly for
other functions. We then establish that $\widehat{\mathcal{Z}}^{\lambda}_{P}$
and $\widehat{\mathcal{Z}}^{3T}_{P}$ transform identically. Alexandrov and
Pioline have derived in Ref. [29] the non-holomorphic terms for the rhs,
$\widehat{\mathcal{Z}}^{\lambda}_{P}$. It would be interesting to combine this
with the non-holomorphic terms derived in the present paper for
$\widehat{\mathcal{Z}}^{3T}_{P}$ to deduce the non-holomorphic terms of
$\widehat{\mathcal{Z}}^{T}_{P}$.
The identical transformation properties of
$\widehat{\mathcal{Z}}^{\lambda}_{P}$ and $\widehat{\mathcal{Z}}^{nT}_{P}$
raise the question of which term(s) correspond to the partition function of
the conformal field theory. Upon taking the decoupling limit of vanishing 5d
Planck length $\ell_{5}\to 0$ [27], the supergravity solutions contributing to
$\widehat{\mathcal{Z}}^{3T}_{P}$ decouple from the AdS3 geometry. This term is
therefore not expected to correspond to states within the MSW conformal field
theory, and it seems plausible to us that $\widehat{\mathcal{Z}}^{T}_{P}$ is
to be identified with the MSW CFT partition
function. It would be interesting to understand whether the finite difference
between $\widehat{\mathcal{Z}}^{T}_{P}$ and
$\widehat{\mathcal{Z}}^{\lambda}_{P}$ can arise by turning on the irrelevant
perturbation in the CFT, which corresponds to moving away from the near
horizon geometry and up the attractor flow [54].
Determination of the completion $\widehat{\mathcal{Z}}^{3T}_{P}$ amounts to
determining the completion of the indefinite theta series
$\Psi_{\boldsymbol{\mu}}$. We determine furthermore the non-holomorphic
completion of a closely related function $\Phi_{\boldsymbol{\mu}}$ (123),
which enumerates the number of “scaling” charge configurations. Although still
involved, determination of the non-holomorphic completion of this function is
simpler than for the generating function of the BPS indices. The analysis of
$\Phi_{\boldsymbol{\mu}}$ and $\Psi_{\boldsymbol{\mu}}$ demonstrates that
these are mock modular forms of depth 2. There are different representations
of the completion:
1. 1.
As a non-holomorphic kernel of the theta series. This involves (generalized)
error functions [36]. The modular properties of the completion follow from
application of results by Vignéras [55]. This is applied to the partition
functions for scaling black holes in Section 4.4. For the partition function
$\widehat{\Phi}_{\boldsymbol{\mu}}$ see Equation (152), and for
$\widehat{\Psi}_{\boldsymbol{\mu}}$, see Equation (181).
2. 2.
Another useful representation, well-known for mock modular forms [34, 35], is
as a holomorphic $q$-series plus an (iterated) integral of modular forms [36,
43, 56]. This form is determined for three-center scaling black holes in
equations (155) and (170). The modular transformations of the $q$-series follow
directly from those of the iterated integral. This representation is also
relevant physically, where the non-holomorphic part is attributed to the
continuum of states [57], or to the Coulomb branch [44, 48].
3. 3.
The third representation we mention here is as an integral over a domain in a
symmetric space, studied by Funke, Kudla and Millson [38, 39, 58], also known
as Narain moduli space in the context of conformal field theory. In the case
of a 3-center scaling solution, this would be a union of triangles in
$SO(2,2b_{2}-2;\mathbb{Z})\backslash
SO(2,2b_{2}-2;\mathbb{R})/SO(2;\mathbb{R})\times SO(2b_{2}-2;\mathbb{R})$.
While we have not explored this representation in detail, it is interesting to
mention in light of recent discussions on averaging over Narain moduli space
and the AdS3/CFT2 correspondence [59, 60].
Though technically involved, we believe that our results can be extended to
scaling solutions with $n>3$ cores, and will give rise to mock modular forms
of depth $n-1$. Any Calabi-Yau manifold with $b_{2}\geq 2$ gives rise to such
mock modular forms, which thus provides a large resource of holomorphic higher
depth mock modular forms. Further, it will be interesting to include single
center degeneracies. Moreover, we hope that our results could be used for the
study of AdS3 fragmentation, and the interpretation of these solutions in the
dual CFT. In this way, it may be possible to derive the non-holomorphic terms
within gravity or the worldvolume theory of intersecting D-branes.
The outline of this paper is as follows. Section 2 reviews aspects of multi-
center black holes, and in particular the index of scaling solutions. Section
3 reviews partition functions of D4-brane black holes. Section 4 discusses
charge lattices for D4-brane bound states, and defines the partition functions
of scaling black holes. We define here also the partition functions
$\Phi_{\boldsymbol{\mu}}$ and $\Psi_{\boldsymbol{\mu}}$, which enumerate
scaling configurations and their BPS indices, and determine their modular
completion. Section 5 discusses the relation to M-theory and the decoupling
limit AdS3. Section 6 considers two case studies for a specific Calabi-Yau
3-fold and charges, and presents the holomorphic $q$-series which are mock
modular forms of depth 2.
## 2 Black hole solutions in $\mathcal{N}=2$ Supergravity
We briefly review in this section supersymmetric black holes in
$\mathcal{N}=2$ supergravity and partition functions.
### 2.1 Black hole bound states
Let $X$ be a simply connected Calabi-Yau threefold, with triple intersection
product $d_{abc}$, $a,b,c=1,\dots,b_{2}$. The intersection product $d_{abc}$
is symmetric in its indices. The classical central charge of a BPS state is
given by
$Z(\gamma,t)=-\int_{X}e^{-t}\wedge\gamma,$ (2)
where $\gamma$ on the rhs is the Poincaré dual differential form of the
homology class of the cycle which supports the D-branes, and is in 1-to-1
correspondence with the electric-magnetic charge of the BPS state. Moreover,
$t$ is the Kähler modulus of the Calabi-Yau three-fold.
The scalar fields $X^{I}$, $I=0,\dots,b_{2}$ of the vector multiplets are
related to the Calabi-Yau moduli, $t^{a}=B^{a}+iJ^{a}$, as $t^{a}=X^{a}/X^{0}$
for $a=1,\dots,b_{2}$. Near the horizon, their values are determined by the
attractor equations at the horizon in terms of the electric-magnetic charges
of the black hole [61, 62, 63]. On the other hand, their asymptotic values for
$|\vec{r}|\to\infty$ are boundary conditions for the equations of motion.
Besides the single center black hole, the equations of motion of
$\mathcal{N}=2$ supergravity give rise to intricate multi-center black hole
solutions [3, 6, 64, 65]. Upon varying the asymptotic values of the scalar
fields, multi-center solutions can cease to exist as proper solutions to the
supergravity equations of motion, or conversely new solutions can appear. If
the asymptotic values are chosen equal to the attractor values, only a few multi-
center solutions exist. That is to say, only multi-center solutions which can
be continuously connected to a single center black hole exist. These are the
scaling solutions mentioned above, and are the main focus of this paper.
To understand this more explicitly, recall that an $n$-center solution is
required to satisfy the following $n-1$ Denef equations,
$\sum_{i\neq
j}\frac{\left<\gamma_{i},\gamma_{j}\right>}{r_{ij}}=2\,\left.\mathrm{Im}\left(e^{-{\rm
i}\alpha}Z(\gamma_{i},t)\right)\right|_{r=\infty},$ (3)
where $\left<,\right>$ is the symplectic inner product between the charges,
$\left<\gamma_{1},\gamma_{2}\right>=-P_{1}^{0}Q_{0,2}+P_{1}\cdot
Q_{2}-P_{2}\cdot Q_{1}+P_{2}^{0}Q_{0,1}.$ (4)
Moreover, $r_{ij}=|\vec{r}_{i}-\vec{r}_{j}|$ is the distance between the
centers $i$ and $j$, and $\alpha$ is the phase of the central charge
$Z(\gamma,t)$ for the total charge $\gamma=\sum_{i=1}^{n}\gamma_{i}$. We set
$c_{j}=2\,\left.\mathrm{Im}\left(e^{-{\rm
i}\alpha}Z(\gamma_{j},t)\right)\right|_{r=\infty}.$ (5)
We fix $\vec{r}_{1}$ at the origin of $\mathbb{R}^{3}$ and let $\mathcal{M}_{n}$
be the solution space for $\vec{r}_{j}\in\mathbb{R}^{3}$, $j=2,\dots,n$ to
(3). Then $\mathcal{M}_{n}$ has dimension $2n-2$. The low energy degrees of
freedom of the supersymmetric multi-center black hole give rise to
$\mathcal{N}=4$ quiver quantum mechanics [64]. The quiver for an $n$-center
bound state with charges $\\{\gamma_{j}\\}$ consists of $n$ nodes, and
$\gamma_{ij}>0$ arrows from node $i$ to node $j$.
Note that the equations (3) are necessary but not sufficient for the multi-
center solution to exist; to this end, one needs to verify that the full
supergravity solution is regular away from the black hole singularities, and
free of closed time-like curves [3]. Since we restrict to the large volume limit of
the Calabi-Yau moduli space, we assume that this is the case in the following,
and that we can determine the existence of bound states from (3).
The gravity perspective has led to the following form for the index of an
$n$-center bound state [6, 9]. To express this, we first introduce the
rational index $\bar{\Omega}(\gamma)$ associated to the integer index
$\Omega(\gamma)$,
$\bar{\Omega}(\gamma)=\sum_{m|\gamma}\frac{\Omega(\gamma/m)}{m^{2}}.$ (6)
The single center invariants $\Omega_{S}(\gamma)$ are the internal
degeneracies of a supersymmetric particle or black hole with charge $\gamma$.
It is expected to be a positive integer for a black hole. To analyze the
spectrum of bound states it is convenient to introduce a fugacity $y$ for
angular momentum. The rational variant of the refined index is defined as
$\bar{\Omega}(\gamma,y)=\sum_{m|\gamma}\frac{1}{m}\frac{y-y^{-1}}{y^{m}-y^{-m}}\,\Omega(\gamma/m,y^{m}).$
(7)
This reproduces (6) in the limit $y\to 1$. A few variants of BPS indices will
be important for us. We mention,
* •
The BPS invariant $\Omega(\gamma;t)$, which enumerates BPS states for a given
value $t$ of the asymptotic moduli. This include single-center BPS states as
well as bound states.
* •
The single-center invariant $\Omega_{S}(\gamma)$, which is the internal
degeneracy of a BPS particle or black hole center. This invariant is
independent of the moduli $t$.
* •
The total invariant $\Omega_{T}(\gamma)$, which is a composite of
$\Omega_{S}(\gamma)$ and independent of the moduli $t$. We give the expression
below in (10).
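The statement below (7), that the rational refined index reduces to (6) in the limit $y\to 1$, can be checked directly: by l'Hôpital's rule,

```latex
\lim_{y\to 1}\frac{y-y^{-1}}{y^{m}-y^{-m}}
  =\lim_{y\to 1}\frac{1+y^{-2}}{m\left(y^{m-1}+y^{-m-1}\right)}
  =\frac{1}{m},
\qquad\text{so}\qquad
\bar{\Omega}(\gamma,1)
  =\sum_{m|\gamma}\frac{1}{m}\cdot\frac{1}{m}\,\Omega(\gamma/m,1)
  =\sum_{m|\gamma}\frac{\Omega(\gamma/m)}{m^{2}},
```

in agreement with (6), upon identifying $\Omega(\gamma/m,1)=\Omega(\gamma/m)$.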
The rational refined BPS index $\bar{\Omega}(\gamma,y;t)$ can be expressed as
a sum over partitions of $\gamma$, each representing a BPS bound state. It
takes the form [9]
$\bar{\Omega}(\gamma,y;t)=\sum_{\gamma=\sum_{i=1}^{n}\gamma_{i}}\frac{g_{C}(\\{\gamma_{j}\\},\\{c_{j}\\},y)}{|{\rm
Aut}(\\{\gamma_{j}\\})|}\,\prod_{j=1}^{n}\bar{\Omega}_{T}(\gamma_{j},y),$ (8)
with,
* •
$|{\rm Aut}(\\{\gamma_{j}\\})|$ is the order of the subgroup of the
permutation group, which preserves the ordered set
$\\{\gamma_{1},\dots,\gamma_{n}\\}$.
* •
The index $g_{C}$ can be determined using localization of the black hole
solution with respect to rotation around a fixed axis generated by $J_{3}$,
say the $z$-axis [66]. A fixed point $p\in\mathcal{M}_{n}$ corresponds to a
collinear solution with all centers placed on the $z$-axis. If the associated
bound state quiver has no oriented loop, $g_{C}$ is the refined index of the
$\mathcal{N}=4$ quiver quantum mechanics describing the bound state [9],
$g_{C}(\\{\gamma_{j}\\},\\{c_{j}\\},y)=\mathrm{Tr}^{\prime}_{\mathcal{H}_{\rm
qm}}(-y)^{2J_{3}},$ (9)
where the trace is over the BPS Hilbert space $\mathcal{H}_{\rm qm}$ of the
quiver quantum mechanics, and $J_{3}$ is one of the generators of $SU(2)$.
* •
$\Omega_{T}$ is the total invariant defined by
$\Omega_{T}(\gamma,y)=\Omega_{S}(\gamma,y)+\sum_{\sum_{j=1}^{n}m_{j}\gamma_{j}=\gamma}H(\\{\gamma_{i},m_{i}\\},y)\,\prod_{i=1}^{n}\Omega_{S}(\gamma_{i},y^{m_{i}}),$
(10)
where $m_{j}\in\mathbb{N}$ are multiplicities of the charges in the partition
of $\gamma$. For bound states whose associated quiver has no closed loops, the
$H(\\{\gamma_{i},m_{i}\\},y)$ vanish. Otherwise they are determined by the
“minimal modification hypothesis”. This has the effect that if we express (8)
as
$\bar{\Omega}(\gamma,y;t)=\sum_{\gamma=\sum_{i=1}^{n}\gamma_{i}}\frac{\bar{g}_{C}(\\{\gamma_{j}\\},\\{c_{j}\\},y)}{|{\rm
Aut}(\\{\gamma_{j}\\})|}\,\prod_{j=1}^{n}\bar{\Omega}_{S}(\gamma_{j},y),$ (11)
then the $\bar{g}_{C}$ are $SU(2)$ characters.
To determine $g_{C}$ in (9) using localization, one sums over fixed points of
the rotation generated by $J_{3}$ around the $z$-axis.
Let $z_{j}$ be the position of the center with charge $\gamma_{j}$. The
localization technique then gives the following sum over collinear fixed
points with respect to rotation around this axis,
$g_{C}(\\{\gamma_{j}\\},\\{c_{j}\\};y)=(-1)^{n-1}(y-y^{-1})^{-n+1}\sum_{p\in\mathcal{M}_{n}}s(p)\,(-y)^{\sum_{i<j}\gamma_{ij}\mathop{\mathrm{sgn}}(z_{j}-z_{i})},$
(12)
where $s(p)\in\pm 1$ is a sign depending on the details of the fixed point,
and $z_{j}$ is the $z$-coordinate of center $j$. If the associated quiver does
not contain loops, this is the complete index and the $y\to 1$ limit is well-
defined. However, if the quiver contains loops, the distances between the
black hole centers may be arbitrarily small [6]. Such solutions are known as
scaling solutions, and additional fixed points need to be included in (12). An
($n$-center) scaling black hole is a multi-center solution of $n$ black holes,
whose phase space $\mathcal{M}_{n}$ contains a region where the centers can
approach each other arbitrarily close. Thus, while the centers are spatially
separated for generic points of $\mathcal{M}_{n}$, they are adiabatically
connected to the black hole solution with a single center.
While many BPS bound states decay if we tune the moduli to their attractor
values, scaling solutions remain part of the BPS spectrum. Since the index is
evaluated at the attractor point $c_{j}^{*}$, each term on the rhs of (8) with
$n\geq 3$ corresponds to a scaling solution.
Various quantities may diverge in the limit $y\to 1$, such as $\Omega_{T}$ and
$g_{C}$. In order to arrive at numerical counterparts for these quantities, we
propose to regularize a rational function of the form
$\frac{f(y)}{(y-y^{-1})^{\ell}},\qquad\text{with}\qquad\lim_{y\to 1}f(y)\neq
0,$ (13)
as follows
$\frac{f(y)}{(y-y^{-1})^{\ell}}\longrightarrow\frac{1}{2^{\ell}\,\ell!}\,\left.\left(y\frac{d}{dy}\right)^{\ell}f(y)\right|_{y=1}.$
(14)
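As a sanity check of the prescription (14), note that whenever $f$ itself contains a factor $(y-y^{-1})^{\ell}$, the quotient (13) is actually regular at $y=1$, and the prescription reproduces the honest limit, since $y\frac{d}{dy}(y-y^{-1})=y+y^{-1}\to 2$. A minimal symbolic sketch in terms of Laurent polynomials (the test function $g$ below is an arbitrary, hypothetical choice):

```python
from fractions import Fraction
from math import factorial

# Represent Laurent polynomials in y as {exponent: coefficient}.
def theta(p):
    """Apply the operator y d/dy: y^k -> k y^k."""
    return {k: k * c for k, c in p.items()}

def mul(p, q):
    """Product of two Laurent polynomials."""
    r = {}
    for k, c in p.items():
        for l, d in q.items():
            r[k + l] = r.get(k + l, 0) + c * d
    return r

def at_one(p):
    """Evaluate at y = 1."""
    return sum(p.values())

def regularize(p, ell):
    """Prescription (14): (y d/dy)^ell p, at y = 1, divided by 2^ell ell!."""
    for _ in range(ell):
        p = theta(p)
    return Fraction(at_one(p), 2**ell * factorial(ell))

# Consistency check: if f(y) = (y - 1/y)^ell g(y), the quotient (13) is
# regular at y = 1 and the prescription reproduces the honest limit g(1).
u = {1: 1, -1: -1}                 # y - y^{-1}
g = {3: 1, -1: 2}                  # g(y) = y^3 + 2/y (hypothetical test function)
for ell in (1, 2, 3):
    f = g
    for _ in range(ell):
        f = mul(f, u)
    assert regularize(f, ell) == at_one(g)   # honest limit g(1) = 3

# A genuinely divergent quotient: ell = 1, f = y^5 + y^{-5}, so f(1) = 2 != 0;
# the prescription assigns the value 0, as f is odd under y -> 1/y.
assert regularize({5: 1, -5: 1}, 1) == 0
```

The check confirms that (14) extends the ordinary $y\to 1$ limit rather than contradicting it on regular quotients.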
### 2.2 Bound state indices
Let us consider the equations (3) for small values of $n$. For $n=2$, there is
a single equation,
$\frac{\gamma_{12}}{r_{12}}=c_{1}.$ (15)
We deduce that the two-center solution only exists as a physical solution if
$\gamma_{12}\,c_{1}>0$. This depends on the moduli $t$. If $t$ approaches a
value where $c_{1}$ vanishes, $r_{12}$ diverges and the solution disappears as
a solution to the supergravity equations of motion. In particular, at the
attractor point the two-center solution never exists.
The quantum states of the two-center solution correspond to the product of the
internal degeneracies of the centers times the states of a spin
$(|\gamma_{12}|-1)/2$ multiplet, which arises due to the electric-magnetic
fields sourced by the charges of the two centers [64]. We express it here as
the product
$\begin{split}\Omega_{2}(\gamma_{1}+\gamma_{2};t)&=g_{C}(\\{\gamma_{1},\gamma_{2}\\},\\{c_{1},c_{2}\\})\,\Omega(\gamma_{1})\,\Omega(\gamma_{2}),\end{split}$
(16)
with
$\begin{split}&g_{C}(\\{\gamma_{1},\gamma_{2}\\},\\{c_{1},c_{2}\\})=\tfrac{1}{2}\left(\mathop{\mathrm{sgn}}(\gamma_{12})+\mathop{\mathrm{sgn}}(c_{1})\right)\,(-1)^{\gamma_{12}-1}\,\gamma_{12},\end{split}$
(17)
and $\Omega(\gamma_{j})$ are degeneracies of the individual centers. For
$\mathop{\mathrm{sgn}}$, we use the definition
$\mathop{\mathrm{sgn}}(x)=\left\\{\begin{array}[]{rl}1,&x>0,\\\ 0,&x=0,\\\
-1,&x<0.\end{array}\ \right.$ (18)
The function
$\frac{1}{2}\left(\mathop{\mathrm{sgn}}(\gamma_{12})+\mathop{\mathrm{sgn}}(c_{1})\right)$
equals 1 if the solution to (15) is physical, i.e. $r_{12}>0$, and it vanishes
if the sign of $r_{12}$ is unphysical, $r_{12}<0$. The factor
$(-1)^{\gamma_{12}-1}\gamma_{12}$ is the number of states of the bound state,
assuming that it exists. The case $c_{1}=0$ is special, and we aim to avoid it. At the attractor point for the total charge, the $c_{j}$ (5) are equal
to $c^{*}_{j}$, given by [3, 4]
$c^{*}_{j}=|Z(\gamma,t^{*}_{\gamma})|\,\left<\gamma,\gamma_{j}\right>.$ (19)
As a result, two-center solutions do not exist at the attractor point, because
substituting $c^{*}_{1}$ in (15) gives a negative value for $r_{12}$ which is
unphysical.
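As a minimal sketch (with function names of our choosing, and treating $c_{1}$ as a real input), the two-center index (16)–(18) can be coded directly:

```python
from fractions import Fraction

def sgn(x):
    # Sign function of (18), with sgn(0) = 0.
    return (x > 0) - (x < 0)

def g_C2(gamma12, c1):
    # Two-center factor (17): existence factor (1/2)(sgn(gamma12) + sgn(c1))
    # times the spin-multiplet count (-1)^(gamma12 - 1) * gamma12.
    return Fraction(sgn(gamma12) + sgn(c1), 2) * (-1)**(gamma12 - 1) * gamma12

# A physical bound state with gamma12 = 3 and c1 > 0 contributes 3 states;
# at the attractor point c1* has the opposite sign and the index vanishes.
print(g_C2(3, 1.0), g_C2(3, -1.0))   # 3 0
```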
For $n=3$ distinct charges, (3) gives two independent equations,
$\begin{split}\frac{\gamma_{12}}{r_{12}}+\frac{\gamma_{13}}{r_{13}}=c_{1},\\\
\frac{\gamma_{21}}{r_{12}}+\frac{\gamma_{23}}{r_{23}}=c_{2}.\\\ \end{split}$
(20)
An intriguing aspect of these equations is that for appropriate values of
$\gamma_{ij}$, they can be satisfied with positive $r_{ij}$ for all pairs
$i\neq j$, for $t$ at the attractor point. Then, there is a one-parameter
family of solutions [6], with
$r_{ij}=\varepsilon\,\gamma_{ij}+O(\varepsilon^{2}),\qquad\varepsilon\to 0,$
(21)
for $ij$ equal to $12$, $23$ and $31$.
We can in fact do better, and give an all-order solution in $\varepsilon$. The
parameter $\varepsilon$ together with three angular variables form the
4-dimensional solution space to (20). We set $\gamma_{12}=a$, $\gamma_{23}=b$
and $\gamma_{31}=c$ in the following, and assume $a,b,c>0$.111We apologize for the multiple use of $a$, $b$ and $c$. We can then verify that the following 1-parameter family of distances $r_{ij}$ satisfies Denef’s equations (3),
$\displaystyle\frac{1}{r_{12}}$
$\displaystyle=\frac{1}{a\varepsilon}-\rho_{a}\,,\quad\,\frac{1}{r_{23}}=\frac{1}{b\varepsilon}-\rho_{b}\,,\quad\,\frac{1}{r_{31}}=\frac{1}{c\varepsilon}-\rho_{c}\,,$
(22)
where $\rho_{a},\rho_{b},\rho_{c}$ satisfy
$\left(\rho_{c}c-\rho_{a}a\right)=c_{1},\,\left(\rho_{a}a-\rho_{b}b\right)=c_{2},\,\left(\rho_{b}b-\rho_{c}c\right)=c_{3}$.
The range of $\varepsilon$ is determined by triangle inequalities and
positivity of $r_{12},r_{23},r_{31}$. The equations for $\rho_{a,b,c}$ allow
for a shift, which modifies the range of $\varepsilon$ accordingly.
We discuss this now in detail for the attractor point, where we can use (19)
such that $|Z(\gamma,t^{*}_{\gamma})|=M$ and $\rho_{a}=\rho_{b}=\rho_{c}=M$.
We then have,
$\frac{1}{r_{12}}=\frac{1}{a\varepsilon}-M\,,\quad\frac{1}{r_{23}}=\frac{1}{b\varepsilon}-M\,,\quad\frac{1}{r_{31}}=\frac{1}{c\varepsilon}-M\,.$
(23)
The free parameter $\varepsilon$ is bounded from below by 0. In the $\varepsilon\ll 1/M$ regime, this solution reduces to Eq. (21); thus the existence of the scaling solution requires $a,b,c$ to obey the triangle inequalities [6]. As $\varepsilon$ is increased, the shape of the triangle changes, and we need to determine the correct maximum of the $\varepsilon$ domain. First, positivity of
$r_{12},r_{23},r_{31}$ imposes the upper bound
$\varepsilon\leq\frac{1}{M\max{}(a,b,c)}$. However, we also need to impose
that $r_{ij}$ satisfy the triangle inequalities, which imposes an even
stronger upper bound. E.g. the condition $r_{12}+r_{23}\geq r_{31}$ requires
$(a+b-c)-2ab\varepsilon M+abc\varepsilon^{2}M^{2}\geq 0.$ (24)
If $(c-a)(c-b)<0$, this condition is always satisfied since $a+b-c>0$ and the
lhs does not have real roots in this case. Moreover if $(c-a)(c-b)\geq 0$, the
condition is saturated for
$\varepsilon_{c}^{\pm}=\frac{1}{Mc}\left[1\pm\sqrt{\frac{(c-a)(c-b)}{ab}}\right]$
and violated for $\varepsilon_{c}^{-}<\varepsilon<\varepsilon_{c}^{+}$. Both
roots are non-negative provided $(a,b,c)$ obeys the triangle inequality
$a+b\geq c$. Noting that $\varepsilon_{c}^{+}>\frac{1}{Mc}$, we see
$\varepsilon\geq\varepsilon_{c}^{+}$ makes $r_{31}$ negative. Thus we must
have $\varepsilon\leq\varepsilon_{c}^{-}$. Using two other triangle
inequalities, we have
$\varepsilon\in(0,\min{}(\varepsilon_{a}^{-},\varepsilon_{b}^{-},\varepsilon_{c}^{-})_{\mathbb{R}}]$
where
$\varepsilon_{a}^{-}=\frac{1}{Ma}\left[1-\sqrt{\frac{(a-b)(a-c)}{bc}}\right],\,\varepsilon_{b}^{-}=\frac{1}{Mb}\left[1-\sqrt{\frac{(b-a)(b-c)}{ac}}\right]$,
and
$\min{}(\varepsilon_{a}^{-},\varepsilon_{b}^{-},\varepsilon_{c}^{-})_{\mathbb{R}}$
means the minimum among the $\varepsilon^{-}_{a,b,c}\in\mathbb{R}$. One of
$\varepsilon^{-}_{a,b,c}$ may be complex. At the maximal value of
$\varepsilon$ the configuration is collinear. Using the three additional angular variables, one can align the solution along the $z$-axis, thus giving a fixed point for rotations around this axis.
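These bounds are simple to check numerically. The sketch below (our own helper names; the charges and $M$ are illustrative inputs) computes the $\varepsilon^{-}_{a,b,c}$, takes the minimum over the real roots, and verifies that the configuration becomes collinear there:

```python
import math

def eps_minus(x, y, z, M):
    # epsilon_x^- from the saturated triangle inequality; None if complex.
    disc = (x - y) * (x - z) / (y * z)
    return None if disc < 0 else (1 - math.sqrt(disc)) / (M * x)

def eps_max(a, b, c, M):
    # Upper end of the epsilon range: minimum over the real eps^-_{a,b,c}.
    roots = [eps_minus(a, b, c, M), eps_minus(b, a, c, M), eps_minus(c, a, b, M)]
    return min(e for e in roots if e is not None)

def radii(a, b, c, M, eps):
    # Distances r12, r23, r31 from (23).
    return tuple(1.0 / (1.0 / (x * eps) - M) for x in (a, b, c))

a, b, c, M = 3, 4, 5, 1.0
r12, r23, r31 = radii(a, b, c, M, eps_max(a, b, c, M))
# At eps_max the triangle degenerates: r12 + r23 = r31 (collinear centers).
```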
For three centers, the sum over fixed points reads
$\displaystyle
g_{C}\left(\\{\gamma_{j}\\};\\{c_{j}\\};y\right)=\frac{(-1)^{a+b+c}}{(y-y^{-1})^{2}}\Bigg{[}F(123)y^{a+b-c}+F(321)y^{-a-b+c}+F(213)y^{-a+b-c}$
$\displaystyle+F(312)y^{a-b+c}+F(132)y^{a-b-c}+F(231)y^{-a+b+c}\Bigg{]},$ (25)
where
$F(ijk):=F(ijk;\\{c_{j}\\})=\left\\{\begin{array}[]{rl}s(p),&\qquad\exists\,{\rm
a\,\,fixed\,\,point\,\,}p\in\mathcal{M}_{n}\,\,{\rm
with\,\,}z_{i}<z_{j}<z_{k},\\\ 0,&\qquad\nexists\,{\rm
a\,\,fixed\,\,point\,\,}p\in\mathcal{M}_{n}{\rm\,\,with\,\,}z_{i}<z_{j}<z_{k}.\end{array}\right.$
(26)
Since $a,b,c\in\mathbb{Z}$, the signs $(-1)^{a\pm b\pm c}$ equal
$(-1)^{a+b+c}$, and we can factor this out from the sum over permutations. The dependence of the rhs on $\\{c_{j}\\}$ is through $\mathcal{M}_{n}$.
Since $F(123;\\{c_{j}\\})=F(321;\\{c_{j}\\})$, we can shorten $g_{C}$ to
$\begin{split}&g_{C}\left(\\{\gamma_{j}\\};\\{c_{j}\\};y\right)=\frac{(-1)^{a+b+c}}{(y-y^{-1})^{2}}\Bigg{[}F(123;\\{c_{j}\\})\left(y^{a+b-c}+y^{-a-b+c}\right)\\\
&+F(213;\\{c_{j}\\})\left(y^{-a+b-c}+y^{a-b+c}\right)+F(132;\\{c_{j}\\})\left(y^{a-b-c}+y^{-a+b+c}\right)\Bigg{]}.\end{split}$
(27)
This expression does not have a smooth $y\rightarrow 1$ limit if
$\mathcal{M}_{n}$ is non-compact and contains a scaling region. In that case,
only one of the $F(ijk;\\{c_{j}\\})$ is non-vanishing. Turning on a fugacity is
indeed known to be subtle for non-compact phase spaces [67].
In the present context, the minimal modification hypothesis is put forward in
[66] to correct this. It adds a term with minimal angular momentum
corresponding to the fixed point with coincident centers. The effect of the
minimal modification hypothesis for three distinct charges is that the refined
index $g_{C}\left(\\{\gamma_{j}\\};\\{c_{j}\\}\right)$ is completed to
$\bar{g}_{C}\left(\\{\gamma_{j}\\};\\{c_{j}\\},y\right)=g_{C}\left(\\{\gamma_{j}\\};\\{c_{j}\\},y\right)+H(\\{\gamma_{j}\\},y),$
(28)
with
$H(\\{\gamma_{j}\\},y)=\left\\{\begin{array}[]{ll}-\frac{2}{(y-y^{-1})^{2}},&\quad{\rm
if}~{}a+b+c\in 2\mathbb{Z},\\\ \frac{y+y^{-1}}{(y-y^{-1})^{2}},&\quad{\rm
if}~{}a+b+c\in 2\mathbb{Z}+1,\end{array}\right.$ (29)
$\bar{g}_{C}$ has a well-defined $y\to 1$ limit, which reads,
$\begin{split}\bar{g}_{C}\left(\\{\gamma_{j}\\};\\{c_{j}\\}\right)&=\lim_{y\to
1}\bar{g}_{C}\left(\\{\gamma_{j}\\};\\{c_{j}\\};y\right)\\\
&=(-1)^{a+b-c}\left[F(123)\,s(a,b,c)+F(213)\,s(a,c,b)+F(132)\,s(c,b,a)\right],\end{split}$
(30)
with
$s(a,b,c)=\left\\{\begin{array}[]{rl}\frac{(a+b-c)^{2}}{4},&\qquad\text{if}~{}a+b+c\in
2\mathbb{Z},\\\ \frac{(a+b-c)^{2}-1}{4},&\qquad\text{if}~{}a+b+c\in
2\mathbb{Z}+1.\end{array}\right.\,$ (31)
We note that for degenerate cases, where one or more among $a,b$ and $c$
vanish, $g_{C}$ can be a multiple of $1/2$ or $1/4$ rather than in
$\\{-1,0,1\\}$.
Using the regularization (14), we obtain for the numerical version of $H$ (29),
$H(\\{\gamma_{j}\\})=\left\\{\begin{array}[]{rl}0,&\quad{\rm if}~{}a+b+c\in
2\mathbb{Z},\\\ \frac{1}{4},&\quad{\rm if}~{}a+b+c\in
2\mathbb{Z}+1,\end{array}\right.$ (32)
which in turn gives the numerical counterpart for $\Omega_{T}$,
$\Omega_{T}(\gamma)=\Omega_{S}(\gamma)+\left\\{\begin{array}[]{rl}0,&\quad{\rm
if}~{}a+b+c\in 2\mathbb{Z},\\\
\frac{1}{4}\prod_{j=1}^{3}\Omega_{S}(\gamma_{j}),&\quad{\rm if}~{}a+b+c\in
2\mathbb{Z}+1.\end{array}\right.$ (33)
We obtain similarly using (14) for the numerical version of $g_{C}$,
$\begin{split}&g_{C}\left(\\{\gamma_{j}\\};\\{c_{j}\\}\right)=\frac{(-1)^{a+b+c}}{4}\times\left[F(123;\\{c_{j}\\})\,(a+b-c)^{2}\right.\\\
&\quad\left.+F(213,\\{c_{j}\\})\,(a-b+c)^{2}+F(132,\\{c_{j}\\})\,(a-b-c)^{2}\right].\end{split}$
(34)
Thus the numerical $g_{C}$ essentially corresponds to the $y\to 1$ limit of
the equivariant volume of the solution space $\mathcal{M}_{3}$ [66].
The term $F(123,\\{c_{j}\\})$ is determined in [51, Equation (2.57)]. For our
purposes we rewrite this in terms of $\mathop{\mathrm{sgn}}$ rather than the
step function. This reads,
$F(123,\\{c_{j}\\})=F_{1}(a,b,c,\\{c_{j}\\})+F_{2}(a,b,c),$ (35)
with
$\begin{split}F_{1}(a,b,c,\\{c_{j}\\})&=\frac{1}{4}(\mathop{\mathrm{sgn}}(a)+\mathop{\mathrm{sgn}}(c_{1}))\,(\mathop{\mathrm{sgn}}(b)-\mathop{\mathrm{sgn}}(c_{3})),\\\
F_{2}(a,b,c)&=\frac{1}{4}\left(\mathop{\mathrm{sgn}}(a)+\mathop{\mathrm{sgn}}(b)\right)\left(\mathop{\mathrm{sgn}}(a+b-c)-\mathop{\mathrm{sgn}}(a+b)\right).\end{split}$
(36)
At special charge configurations, where one or more arguments of the signs
vanish, this may differ from [51, Equation (2.57)]. In such cases, the $F_{j}$
can be a fraction rather than an integer. To deal properly with such cases, we
include additional terms below in Eqs (42) and (44).
Our interest in this paper is the index at the attractor point, thus
$g_{C}\left(\\{\gamma_{j}\\};\\{c_{j}^{*}\\}\right),$ (37)
where $c_{j}^{*}$ (19) is $c_{j}$ evaluated at the attractor point
$t^{*}_{\gamma}$. Then $F_{1}$ reads
$\begin{split}F_{1}(a,b,c)&=\frac{1}{4}\left(\mathop{\mathrm{sgn}}(a)+\mathop{\mathrm{sgn}}(c-a)\right)\left(\mathop{\mathrm{sgn}}(b)+\mathop{\mathrm{sgn}}(c-b)\right).\end{split}$
(38)
Assuming non-vanishing arguments of the sign functions, we can simplify the
sum $F^{*}(123)=F_{1}+F_{2}$ using the identity [68, Eq. (A.1)]222We thank
Sergey Alexandrov for stressing the simplifying power of this identity.
$(\mathop{\mathrm{sgn}}(x_{1})+\mathop{\mathrm{sgn}}(x_{2}))\,\mathop{\mathrm{sgn}}(x_{1}+x_{2})-\mathop{\mathrm{sgn}}(x_{1})\,\mathop{\mathrm{sgn}}(x_{2})=1,\qquad(x_{1},x_{2})\neq(0,0),$
(39)
to $F^{*}(123)=F_{1}(a,b,c)+F_{2}(a,b,c)$ [68, Eq. (4.10)],
$F^{*}(123)=\frac{1}{4}(1+\mathop{\mathrm{sgn}}(a-c)\,\mathop{\mathrm{sgn}}(b-c)+\mathop{\mathrm{sgn}}(b-c)\,\mathop{\mathrm{sgn}}(c-a-b)+\mathop{\mathrm{sgn}}(c-a-b)\,\mathop{\mathrm{sgn}}(a-c)\,).$
(40)
We stress that this expression may differ from (35) if the arguments of some
products of $\mathop{\mathrm{sgn}}$’s vanish. For example for $a=0,b=c=1$,
$F(123)=0$, while $F^{*}(123)$ equals $\frac{1}{4}$.
For the other permutations, we also define
$\begin{split}F_{3}(a,b,c)&=\frac{1}{4}\left(\mathop{\mathrm{sgn}}(a)+\mathop{\mathrm{sgn}}(b-a)\right)\left(\mathop{\mathrm{sgn}}(c)+\mathop{\mathrm{sgn}}(b-c)\right),\\\
F_{4}(a,b,c)&=\frac{1}{4}\left(\mathop{\mathrm{sgn}}(a)+\mathop{\mathrm{sgn}}(c)\right)\left(\mathop{\mathrm{sgn}}(a+c-b)-\mathop{\mathrm{sgn}}(a+c)\right),\\\
F_{5}(a,b,c)&=\frac{1}{4}\left(\mathop{\mathrm{sgn}}(c)+\mathop{\mathrm{sgn}}(a-c)\right)\left(\mathop{\mathrm{sgn}}(b)+\mathop{\mathrm{sgn}}(a-b)\right),\\\
F_{6}(a,b,c)&=\frac{1}{4}\left(\mathop{\mathrm{sgn}}(c)+\mathop{\mathrm{sgn}}(b)\right)\left(\mathop{\mathrm{sgn}}(c+b-a)-\mathop{\mathrm{sgn}}(c+b)\right),\end{split}$
(41)
and
${}F^{*}(213)=\frac{1}{4}(1+\mathop{\mathrm{sgn}}(a-b)\,\mathop{\mathrm{sgn}}(c-b)+\mathop{\mathrm{sgn}}(c-b)\,\mathop{\mathrm{sgn}}(b-a-c)+\mathop{\mathrm{sgn}}(b-a-c)\,\mathop{\mathrm{sgn}}(a-b)\,),$
${}F^{*}(132)=\frac{1}{4}(1+\mathop{\mathrm{sgn}}(c-a)\,\mathop{\mathrm{sgn}}(b-a)+\mathop{\mathrm{sgn}}(b-a)\,\mathop{\mathrm{sgn}}(a-b-c)+\mathop{\mathrm{sgn}}(a-b-c)\,\mathop{\mathrm{sgn}}(c-a)\,).$
Having defined the $F^{*}(ijk)$, we turn to
$g_{C}(\\{\gamma_{j}\\};\\{c_{j}^{*}\\})$ and take care of the special cases where both arguments of a product of $\mathop{\mathrm{sgn}}$’s vanish. Let us
first determine for which values of $a$, $b$ and $c$,
$g_{C}(\\{\gamma_{j}\\};\\{c_{j}^{*}\\})$ is affected. When the last two
products of $\mathop{\mathrm{sgn}}$’s of $F^{*}(123)$ (40) vanish, the angular
momentum factor $(a+b-c)^{2}/4$ also vanishes. Thus replacing these products
of $\mathop{\mathrm{sgn}}$’s with a non-vanishing value will not affect
$g_{C}$. For the remaining product, $\mathop{\mathrm{sgn}}(a-c)\,\mathop{\mathrm{sgn}}(b-c)$, the arguments
vanish in the equilateral case $a=b=c$ for which the angular momentum factor
is $a^{2}/4$. This is the same as for the other permutations, $F^{*}(213)$ and
$F^{*}(132)$, such that we can take all three into account by adding a single
additional term with yet undetermined coefficient $A$. We obtain thus for
$g_{C}$ at the attractor point,
$\begin{split}g_{C}(\\{\gamma_{j}\\};\\{c_{j}^{*}\\})&=\frac{(-1)^{a+b+c}}{4}\left[F^{*}(123)\,(a+b-c)^{2}\,+F^{*}(213)\,(a-b+c)^{2}\,\right.\\\
&\quad\left.+F^{*}(132)\,(-a+b+c)^{2}\,+\frac{1}{4}A\,\delta_{a,c}\,\delta_{b,c}\,a^{2}\right].\end{split}$
(42)
At this point, we can make a “guess” of the preferred “physical” value for
$A$. From the gravity perspective, the equilateral configurations with $a=b=c$
are perfectly fine multi-center solutions, such that it is most natural that
these contribute with $1\times(-1)^{a}\,a^{2}/4$ to $g_{C}$. For (42), we have
instead $(3+A)\times(-1)^{a}\,a^{2}/16$, thus suggesting $A=1$. We will
demonstrate in Section 4.5 that modularity of the $q$-series leads to exactly
the same value.
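The formulas (40)–(42) are straightforward to implement. The following sketch (with the preferred $A=1$ and helper names of our choosing) works with exact fractions:

```python
from fractions import Fraction

def sgn(x):
    return (x > 0) - (x < 0)

def Fstar(a, b, c):
    # F*(123) of (40); F*(213) = Fstar(a, c, b) and F*(132) = Fstar(c, b, a).
    return Fraction(1 + sgn(a - c) * sgn(b - c) + sgn(b - c) * sgn(c - a - b)
                    + sgn(c - a - b) * sgn(a - c), 4)

def g_C_attractor(a, b, c, A=1):
    # Attractor-point bound state index (42), with the preferred value A = 1.
    val = (Fstar(a, b, c) * (a + b - c)**2
           + Fstar(a, c, b) * (a - b + c)**2
           + Fstar(c, b, a) * (-a + b + c)**2
           + Fraction(A, 4) * int(a == c and b == c) * a**2)
    return Fraction((-1)**(a + b + c), 4) * val

# The scaling triple (3, 4, 5) and the equilateral case a = b = c = 2:
print(g_C_attractor(3, 4, 5), g_C_attractor(2, 2, 2))   # 1 1
```

For $a=b=c$ this reproduces the $(-1)^{a}\,a^{2}/4$ contribution discussed above.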
Besides the bound state index $g_{C}$, we are also interested in enumerating the number of charge configurations giving rise to scaling black holes. Up to
vanishing arguments, the sum $F_{\rm total}(a,b,c)=\sum_{j=1}^{6}F_{j}(a,b,c)$
can be further simplified to
$\begin{split}F_{\rm
total}(a,b,c)&=\frac{1}{4}\Big{[}1+\mathop{\mathrm{sgn}}{}(a+b-c)\mathop{\mathrm{sgn}}{}(a+c-b)\\\
&+\mathop{\mathrm{sgn}}{}(a+c-b)\mathop{\mathrm{sgn}}{}(b+c-a)+\mathop{\mathrm{sgn}}{}(b+c-a)\mathop{\mathrm{sgn}}{}(a+b-c)\Big{]}\,.\end{split}$
(43)
This expression has the advantage that for $a+b+c$ odd, the arguments of the
$\mathop{\mathrm{sgn}}$’s never vanish. With the inclusion of additional terms to deal with vanishing arguments, we define the number $f_{C}$,
$\begin{split}f_{C}(\\{\gamma_{j}\\},\\{c_{j}^{*}\\})&=F_{\rm
total}(a,b,c)\,(-1)^{a+b+c}\\\
&+\tfrac{1}{4}A_{1}\,\delta_{a,0}\,\delta_{b,c}+\tfrac{1}{4}A_{2}\,\delta_{c,0}\,\delta_{a,b}+\tfrac{1}{4}A_{3}\,\delta_{b,0}\,\delta_{a,c},\end{split}$
(44)
where the constants $A_{j}$ are yet to be determined. We will determine these
from the modular completion. To our surprise, these numbers are typically
irrational for $a+b+c\in 2\mathbb{Z}$, while they do not contribute for
$a+b+c\in 2\mathbb{Z}+1$. We find the irrationality quite peculiar. On the
other hand, $f_{C}$ is not a BPS index, such that it is not really violating
any physical principles.
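Up to the vanishing-argument corrections, (43) reduces to a triangle-inequality test. A minimal sketch (our naming):

```python
from fractions import Fraction

def sgn(x):
    return (x > 0) - (x < 0)

def F_total(a, b, c):
    # Eq. (43): equals 1 when (a, b, c) strictly obeys all triangle
    # inequalities (a scaling solution exists) and 0 otherwise,
    # up to vanishing arguments of the sgn's.
    return Fraction(1 + sgn(a + b - c) * sgn(a + c - b)
                    + sgn(a + c - b) * sgn(b + c - a)
                    + sgn(b + c - a) * sgn(a + b - c), 4)

print(F_total(3, 4, 5), F_total(1, 1, 5))   # 1 0
```

A degenerate triple such as $(1,1,2)$ gives the fractional value $1/2$, illustrating why the $A_{j}$ corrections in (44) are needed.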
## 3 Review of D4-brane black holes
We review in this section aspects of partition functions of D4-brane black
holes. We start by reviewing the uplift of D4-brane black holes to M-theory. In Subsection 3.1, we discuss the “supergravity” partition
function, which enumerates D4-brane BPS indices $\Omega(\gamma;t)$ for fixed
Kähler modulus $t$. In Subsection 3.2, we discuss the “attractor” partition
function, which is a generating function of BPS indices
$\Omega(\gamma;t_{\gamma}^{*})$ evaluated at the attractor point
$t_{\gamma}^{*}$ of the corresponding charge $\gamma$.
### 3.1 Supergravity partition function
From the perspective of asymptotically flat $\mathbb{R}^{4}$, the most natural
BPS partition function enumerates the BPS indices for a fixed, asymptotic
value of the (Kähler) moduli $t$, and in the mixed ensemble where the magnetic
charge $P$ is fixed [69, 13, 14, 6] and the electric charge $Q$ is varied. The
electric charge takes values in the lattice $\Lambda\simeq\mathbb{Z}^{b_{2}}$
with bi-linear form $D_{ab}=d_{abc}P^{c}$. For a positive magnetic charge
$P^{a}$, $d_{abc}P^{c}$ provides a non-degenerate quadratic form with
signature $(1,b_{2}-1)$ for the lattice $\Lambda$ of magnetic charges. The
electric charges $Q_{a}$ take values in the dual lattice $\Lambda^{*}$, with
quadratic form $D^{ab}=(d_{abc}P^{c})^{-1}$. We abbreviate the pairing between
an element $Q\in\Lambda^{*}$ and $k\in\Lambda$ as
$\sum_{a=1}^{b_{2}}Q_{a}P^{a}=Q.P$ (45)
and extend this by linearity in each argument to
$\Lambda^{*}\otimes\mathbb{R}$ and $\Lambda\otimes\mathbb{R}$. For later use,
we also introduce the notation $\mu\in\Lambda^{*}/\Lambda$,
$\Lambda_{\mu}^{*}=\left\\{Q\in\Lambda+\mu+P/2\right\\}.$ (46)
We stress that the elements of $\Lambda_{\mu}^{*}$ do not necessarily lie in
$\Lambda^{*}$ due to the shift by $P/2$. Using this notation, the partition
function for D4-D2-D0 black holes reads schematically
$\displaystyle\mathcal{Z}_{SG}(\tau,C,t)$
$\displaystyle=\sum_{Q_{0},Q_{a}}\bar{\Omega}(\gamma,t)\,(-1)^{P.Q}\,e^{-2\pi\tau_{2}|Z(\gamma,t)|+2\pi
i\tau_{1}\left(Q_{0}-Q.B+B^{2}/2\right)+2\pi iC.(Q-B/2)},$ (47)
where $\bar{\Omega}(\gamma,t)$ is the rational index (6),
$\tau_{1}\in\mathbb{R}$ is the RR 1-form $C_{1}$, and
$\tau_{2}=e^{-\phi}\in\mathbb{R}_{+}$ with $\phi$ being the dilaton, $C$ the
RR 3-form and $B$ the $B$-field.
Here $Z(\gamma,t)$ is the central charge of the $\mathcal{N}=2$ algebra.
Ignoring the non-perturbative terms in the strict large volume limit, $Z$
reads
$\displaystyle Z(\gamma,t)$
$\displaystyle=\frac{1}{2}P.(J^{2}-B^{2})+Q.B-Q_{0}+i(Q-BP).J\,.$ (48)
The BPS mass becomes in this limit,
$\displaystyle|Z(\gamma,t)|$
$\displaystyle=\frac{1}{2}P.(J^{2}-B^{2})+Q.B-Q_{0}+\frac{\left((Q-BP).J\right)^{2}}{P.J^{2}}\,,$
(49)
up to inverse powers of $J$. By assumption $J^{2}>0$, hence $J^{a}$ lies in
the positive cone of $\Lambda$. Thus $\frac{k.J\,J}{J^{2}}$ is the component
of the vector $k$ along $J$.
In the large volume limit, supergravity has an $Sp(2b_{2}+2,\mathbb{Z})$
symmetry, generated by matrices
$\displaystyle\mathbb{K}(k)$ $\displaystyle=\begin{pmatrix}1&0&0&0\\\
k^{a}&\mathbb{I}_{b_{2}}&0&0\\\
\frac{1}{2}d_{abc}k^{b}k^{c}&d_{abc}k^{c}&\mathbb{I}_{b_{2}}&0\\\
\frac{1}{6}d_{abc}k^{a}k^{b}k^{c}&\frac{1}{2}d_{abc}k^{b}k^{c}&k^{a}&1\end{pmatrix},\,\,k\in\mathbb{Z}^{b_{2}}\,,$
(50)
which act linearly on $\gamma=(P^{0},P^{a},Q_{a},Q_{0})$. For $P^{0}=0$, these
transformations preserve the magnetic charge, and act on remaining charges and
moduli as follows:
$\displaystyle{}Q_{a}$ $\displaystyle\rightarrow Q_{a}+d_{abc}k^{b}P^{c}\,,$
$\displaystyle Q_{0}$ $\displaystyle\rightarrow
Q_{0}+k^{a}Q_{a}+\frac{1}{2}d_{abc}k^{a}k^{b}P^{c}\,,$ (51)
$\displaystyle{}t^{a}$ $\displaystyle\rightarrow t^{a}+k^{a}\,.$
We introduce the abbreviations:
$\begin{split}\hat{Q}_{\bar{0}}&:=-Q_{0}+\frac{1}{2}Q^{2}\,,\\\ \,\hat{Q}&:=Q-B\,,\\\ \,\hat{Q}^{2}_{-}&:=\hat{Q}^{2}-\hat{Q}^{2}_{+}\,,\\\ \,\hat{Q}^{2}_{+}&:=\frac{\left((Q-BP).J\right)^{2}}{P.J^{2}}\,.\end{split}$ (52)
Note that the combinations $\hat{Q},\hat{Q}_{\bar{0}}$ and the conjugacy class
$\mu$ of electric charge vector are invariant under the transformations (51).
Invariance of the conjugacy class is seen by noting that spectral flow shifts
$Q$ by integer lattice vectors, when mapped to the magnetic lattice $\Lambda$, and such shifts do not change the conjugacy class.
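As a toy check of this invariance (with $b_{2}=2$ and an illustrative, invertible quadratic form $D_{ab}$ standing in for $d_{abc}P^{c}$), one can verify that $\hat{Q}_{\bar{0}}=-Q_{0}+\frac{1}{2}Q^{2}$ is unchanged under (51):

```python
from fractions import Fraction as F

# Hypothetical quadratic form D_ab = d_abc P^c for b_2 = 2, and its inverse.
D = [[F(2), F(1)], [F(1), F(2)]]
det = D[0][0] * D[1][1] - D[0][1] * D[1][0]
Dinv = [[D[1][1] / det, -D[0][1] / det],
        [-D[1][0] / det, D[0][0] / det]]

def quad(M, u, v):
    # Bilinear form u^T M v.
    return sum(M[a][b] * u[a] * v[b] for a in range(2) for b in range(2))

def Qhat0(Q, Q0):
    # \hat{Q}_{\bar 0} = -Q_0 + Q^2/2, with Q^2 = D^{ab} Q_a Q_b.
    return -Q0 + quad(Dinv, Q, Q) / 2

def flow(Q, Q0, k):
    # Spectral flow (51): Q_a -> Q_a + D_ab k^b, Q_0 -> Q_0 + k.Q + D(k,k)/2.
    Qp = [Q[a] + sum(D[a][b] * k[b] for b in range(2)) for a in range(2)]
    return Qp, Q0 + sum(k[a] * Q[a] for a in range(2)) + quad(D, k, k) / 2

Q, Q0, k = [F(1), F(3)], F(2), [F(2), F(-1)]
Qp, Q0p = flow(Q, Q0, k)
assert Qhat0(Qp, Q0p) == Qhat0(Q, Q0)
```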
The holomorphic and anti-holomorphic dependence on $\tau$ can be made more
manifest, if we rewrite $\mathcal{Z}_{SG}$ as
$\displaystyle\mathcal{Z}_{SG}(\tau,C,t)$
$\displaystyle=e^{-\pi\tau_{2}J^{2}}\sum_{Q_{0},Q}\bar{\Omega}(P,Q,Q_{0};t)\,(-1)^{P.Q}\bar{q}^{\hat{Q}_{\bar{0}}-\hat{Q}^{2}_{-}/2}q^{\hat{Q}^{2}_{+}/2}e^{2i\pi
C.(Q-B/2)},$ (53)
where $q=e^{2\pi i\tau}$. The transformations (51) act on $\mathcal{Z}_{SG}$
as
$\displaystyle\mathcal{Z}_{SG}(\tau,C,t)$
$\displaystyle\rightarrow(-1)^{P.k}e^{\pi iC.k}\mathcal{Z}_{SG}(\tau,C,t)\,,$
(54)
under the action of $\mathbb{K}(k)$.
We can map the system of D4-D2-D0 branes to IIB string theory, by a T-duality
along the time circle. The RR 1-form $C_{1}$ is mapped to the RR 0-form $C_{0}$,
while the D4, D2 and D0-branes respectively become D3, D1 branes and
D(-1)-instantons. Moreover, the action of the IIB strong-weak duality on the
instantonic branes carries over to the spectrum of D4-D2-D0 branes in IIA
string theory. The duality acts as follows. The modular parameter is
$\tau=C_{1}+ie^{-\phi}$, with $\phi$ being the dilaton and $C_{1}$ the Ramond-
Ramond 1-form flux along $S^{1}_{t}$. For an element $\begin{pmatrix}a&b\\\ c&d\end{pmatrix}\in SL(2,\mathbb{Z})$, the duality acts as
$\begin{split}&\tau\rightarrow\frac{a\tau+b}{c\tau+d},\\\ &\begin{pmatrix}C\\\
B\end{pmatrix}\rightarrow\begin{pmatrix}a&b\\\
c&d\end{pmatrix}\begin{pmatrix}C\\\ B\end{pmatrix},\\\
&J\rightarrow|c\tau+d|J\,.\end{split}$ (55)
In a series of papers [6, 13, 14, 18, 25, 26, 29], the supergravity partition
function (47) has been analyzed. We summarize the main properties:
* •
Quasi-periodicity in the two-form fields $B$ and $C$:
For $k\in\Lambda$, we have
$\begin{split}&\mathcal{Z}_{SG}(\tau,C,t+k)=(-1)^{P.k}\,e^{\pi
iC.k}\,\mathcal{Z}_{SG}(\tau,C,t),\\\
&\mathcal{Z}_{SG}(\tau,C+k,t)=(-1)^{P.k}\,e^{-\pi
iB.k}\,\mathcal{Z}_{SG}(\tau,C,t).\end{split}$ (56)
These can be seen as large gauge transformations.
* •
$SL(2,\mathbb{Z})$ S-duality:
$\begin{split}S&:\qquad\,\mathcal{Z}_{SG}(-1/\tau,-B,C+|\tau|J)=\tau^{1/2}\bar{\tau}^{-3/2}\varepsilon(S)\,\mathcal{Z}_{SG}(\tau,C,t)\,,\\\
T&:\qquad\,\mathcal{Z}_{SG}(\tau+1,C+B,t)=\varepsilon(T)\,\mathcal{Z}_{SG}(\tau,C,t)\,,\end{split}$
(57)
where
$\varepsilon(S)=\varepsilon(T)^{-3},\varepsilon(T)=e^{-i\pi\frac{c_{2}(X).P}{12}}$
are phases. The factor $\bar{\tau}^{-3/2}$ can be understood from the non-
compact bosons in the CFT due to the center of mass in $\mathbb{R}^{3}$.
* •
The partition function involves an intricate dependence on the moduli $t$
through wall-crossing. The proper modular invariant partition function differs
from (53) by additional subleading terms which are non-holomorphic in $\tau$.
### 3.2 Attractor partition function
The supergravity partition function is a rather complicated function. It has
become clear that an alternative partition function is a more amenable object
to study [18]. This is the attractor partition function
$\mathcal{Z}_{P}^{\lambda}$, which is the generating function of indices
$\bar{\Omega}(\gamma;t^{*}_{\gamma})$, where each index is evaluated at its
(large volume) attractor point $t^{\lambda}_{\gamma}$. The indices
$\bar{\Omega}(\gamma;t^{\lambda}_{\gamma})$ are also referred to as MSW
invariants [18]. For an irreducible magnetic charge $P$, and for magnetic charges which can be written as a sum of at most two magnetic charges, these indices are conjectured to coincide with the CFT indices. However, as discussed in Section
5, our findings in this paper suggest that there is a difference for generic
magnetic charges due to scaling black holes.
This partition function is obtained by replacing $\Omega(\gamma,t)$ in the definition
of $\mathcal{Z}_{SG}$ by its attractor value
$\Omega(\gamma,t^{\lambda}_{\gamma})$. For a D4-D2-D0 black hole with charge
$\gamma=(0,P^{a},Q_{a},Q_{0})$, we have for the “large volume” attractor value
$(t^{\lambda}_{\gamma})^{a}$,
$(t^{\lambda}_{\gamma})^{a}=D^{ab}Q_{b}+i\lambda\,P^{a},$ (58)
with $\lambda$ sufficiently large, such that subleading terms in a $\lambda$
expansion can be ignored. Eq. (58) is equivalent to
$B^{a}_{\gamma}=D^{ab}Q_{b},\qquad(J^{\lambda}_{\gamma})^{a}=\lambda\,P^{a}.$
(59)
The precise proportionality constant $\lambda$ between $J^{a}$ and $P^{a}$
will not be important for us, since we will restrict to the large volume
limit, $|J|\gg 1$. On the other hand, we do not take the limit
$\lambda\to\infty$, since physical quantities such as the BPS mass diverge in
this limit.
The attractor or MSW partition function then reads,
$\displaystyle\mathcal{Z}^{\lambda}_{P}(\tau,C,t)$
$\displaystyle=\sum_{Q_{0},Q_{a}}\bar{\Omega}(\gamma,t_{\gamma}^{\lambda})\,(-1)^{P.Q}\,e^{-2\pi\tau_{2}|Z(\gamma,t)|+2\pi
i\tau_{1}\left(Q_{0}-Q.B+B^{2}/2\right)+2\pi iC.(Q-B/2)}.$ (60)
Note that although the degeneracy is evaluated at the attractor point $t^{\lambda}_{\gamma}$, the mass in the exponent is the ADM mass evaluated with the moduli at infinity equal to $t$. The other moduli dependence in the exponent
is also similar to that of $\mathcal{Z}_{SG}(\tau,C,t)$.
Since black hole states contributing to the attractor index exist everywhere
in moduli space, their quantum degeneracies should be moduli independent.
Moreover, one can show that $\Omega(\gamma;t^{\lambda}_{\gamma})$ are
invariant under spectral flow transformations (51). This entails that
$\Omega(\gamma;t^{\lambda}_{\gamma})$ depends on the charges only through the
invariant combination $\hat{Q}_{\bar{0}}$ and the conjugacy class $\mu$. These imply that the sum
$\displaystyle h_{P,\mu}(\tau)$
$\displaystyle:=\sum_{Q_{0}}\overline{\Omega}(\gamma;t^{\lambda}_{\gamma})\,q^{\hat{Q}_{\bar{0}}}\,,$
(61)
for fixed $Q$ is also invariant under spectral flow transformations and apart
from the magnetic charge, depends solely on the conjugacy class $\mu$. It has
been understood from wall-crossing and the perspective of D-instantons that $h_{P,\mu}$ receives additional non-holomorphic contributions [18, 29]. The result is the non-holomorphic function
$\widehat{h}_{P,\mu}(\tau,\bar{\tau})$.
Hence one is led to the following theta function decomposition of the
attractor partition function,
$\displaystyle\mathcal{Z}^{\lambda}_{P}(\tau,C,t)$
$\displaystyle=e^{-\pi\tau_{2}J^{2}}\sum_{\mu\in\Lambda^{*}/\Lambda}\overline{h_{P,\mu}(\tau)}\,\Theta_{\mu}(\tau,\bar{\tau},C,B)\,,$
(62) $\displaystyle\text{where}\,\,\Theta_{\mu}(\tau,\bar{\tau},C,B)$
$\displaystyle=\sum_{Q\in\Lambda^{*}_{\mu}}(-1)^{P.Q}q^{\hat{Q}^{2}_{+}/2}\bar{q}^{-\hat{Q}^{2}_{-}/2}e^{2\pi
iC.(Q-B/2)}\,,$ (63)
where the set $\Lambda^{*}_{\mu}$ is given in (46). We define analogously the
completion $\widehat{\mathcal{Z}}^{\lambda}_{P}$ by replacing $h_{P,\mu}$ by
$\widehat{h}_{P,\mu}$ in (62).
Various factors on the right hand side of (62) also have definite modular
transformation properties. The prefactor $e^{-\pi\tau_{2}J^{2}}$ is modular
invariant. Moreover, the Siegel-Narain theta function $\Theta_{\mu}(\tau,C,B)$
transforms as
$\begin{split}S&:\,\Theta_{\mu}(-1/\tau,-1/\bar{\tau},-B,C)=\frac{1}{\sqrt{|\Lambda^{*}/\Lambda|}}(-i\tau)^{b_{2}^{+}/2}(i\bar{\tau})^{b_{2}^{-}/2}e^{-i\pi
P^{2}/2}\\\
&\qquad\qquad\qquad\qquad\times\sum_{\nu\in\Lambda^{*}/\Lambda}e^{-2\pi
i\mu.\nu}\Theta_{\nu}(\tau,\bar{\tau},C,B)\,,\\\
T&:\,\Theta_{\mu}(\tau+1,\bar{\tau}+1,C+B,B)=e^{i\pi(\mu+P/2)^{2}}\Theta_{\mu}(\tau,\bar{\tau},B,C)\,.\end{split}$
(64)
This implies that the appropriate completion $\widehat{h}_{P,\mu}$ transforms
as a vector valued modular form,
$\begin{split}S&:\,\widehat{h}_{P,\mu}(-1/\tau,-1/\bar{\tau})=-\frac{1}{\sqrt{|\Lambda^{*}/\Lambda|}}(-i\tau)^{-b_{2}/2-1}\varepsilon(S)^{*}e^{-i\pi
P^{2}/2}\sum_{\delta\in\Lambda^{*}/\Lambda}e^{-2\pi
i\delta.\mu}\widehat{h}_{P,\delta}(\tau,\bar{\tau})\,,\\\
T&:\,\widehat{h}_{P,\mu}(\tau+1,\bar{\tau}+1)=\varepsilon(T)^{*}e^{i\pi(\mu+P/2)^{2}}\,\widehat{h}_{P,\mu}(\tau,\bar{\tau}),\end{split}$
(65)
where $\varepsilon(S)$ and $\varepsilon(T)$ are as below (57).
## 4 Partition functions for scaling black holes
We consider multi-core black holes, where each core carries a positive
D4-brane charge and vanishing D6-brane charge. The notion of “core” is as
introduced in the Introduction; it is a bound state with spatial extent of the order $\ell_{5}^{3}/R^{2}$, which decreases as $\lambda^{-3}$ in the large
$\lambda$ limit. We will determine the partition function at the large volume
attractor point (59) for such black holes. For this value of the moduli and
assuming that $\gamma$ and $\gamma_{j}$ both carry D4-brane charge, the
$c_{j}$ (5) are to leading order equal to $c^{\lambda}_{j}$,
$c^{\lambda}_{j}=2\lambda\left<\gamma,\gamma_{j}\right>.$ (66)
Since the factor $2\lambda$ on the rhs is positive, the large volume attractor values $c^{\lambda}_{j}$ are similar in nature to the $c_{j}^{*}$ (19) for the proper attractor point. Restricting to three-center solutions with D4-brane charge, we have for
$a$, $b$ and $c$,
$\begin{split}a&=P_{1}Q_{2}-P_{2}Q_{1},\\\ b&=P_{2}Q_{3}-P_{3}Q_{2},\\\
c&=P_{3}Q_{1}-P_{1}Q_{3}.\end{split}$ (67)
This gives for the $c_{j}^{\lambda}$,
$c_{1}^{\lambda}=2\lambda\,(c-a),c_{2}^{\lambda}=2\lambda\,(a-b)$. Since the
$c^{*}_{j}$ (19) and $c^{\lambda}_{j}$ (66) are simply related by the
substitution $|Z(\gamma;t_{\gamma}^{*})|\to 2\lambda$, the discussion on the
solution to Denef’s equations below (23) is applicable, and in particular
gives the separation of collinear solutions.
At the attractor point, many multi-center configurations do not exist as
physical solutions. However we have seen that some multi-center solutions do
exist. These solutions are distinguished from generic solutions, since the
centers can approach each other arbitrarily close. The scaling solutions do
respect the spectral flow symmetry, since the symplectic inner products are
invariant under (51).
We note that the existence of scaling solutions poses constraints on the magnetic
charges $P_{j}$. For example since for a scaling solution $a,b,c$ must have
the same sign, we deduce that there are no scaling solutions for
$P_{1}=P_{2}=P_{3}=P/3$, since this gives rise to
$\gamma_{12}+\gamma_{23}+\gamma_{31}=a+b+c=0$. Therefore, there must be an
asymmetry in the magnetic charge of the centers for such scaling solutions to
exist. To see that $F$ vanishes in the case of three equal magnetic charges,
note that the second factors in $F_{2},F_{4},F_{6}$ vanish immediately as
${\rm sgn}(2x)-{\rm sgn}(x)=0$. Moreover, in $F_{1}$ we can replace $c=-a-b$
and when $a>0$ the first factor imposes the constraint $-2a-b\geq 0$, so $b\leq-2a$ and $b$ must be negative; then $c-b=-a-2b\geq 3a>0$. Hence the second factor in $F_{1}$ vanishes.
The argument goes similarly for $F_{3},F_{5}$. Similarly one may show that
such scaling black holes only exist for $b_{2}>1$.
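The vanishing of $a+b+c$ for equal magnetic charges is immediate from (67); a one-modulus ($b_{2}=1$) toy illustration, with a helper name of our choosing:

```python
def dsz(P1, Q1, P2, Q2):
    # Symplectic pairing gamma_12 = P_1 Q_2 - P_2 Q_1, cf. (67), for b_2 = 1.
    return P1 * Q2 - P2 * Q1

# With equal magnetic charges the pairings sum to zero, so a, b, c cannot all
# have the same sign and no scaling solution exists:
P, Q = [1, 1, 1], [5, -2, 3]
a = dsz(P[0], Q[0], P[1], Q[1])
b = dsz(P[1], Q[1], P[2], Q[2])
c = dsz(P[2], Q[2], P[0], Q[0])
print(a + b + c)   # 0
```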
### 4.1 Bound states and lattices
We discuss in this subsection various aspects of charge lattices of bound
states, and the projection of the charge lattice to a sublattice with fixed
total charge. The discussion in this subsection does not rely on the existence
of scaling black holes for these charges and lattices.
Lattice decomposition
We apply some techniques of decompositions and gluings of integral lattices to
charge lattices of bound states. See for example [70, 71]. We will consider a
black hole bound state of $n$ centers, with non-vanishing, positive magnetic
charge $P_{j}\in\Lambda_{j},$ $j=1,2,\dots,n$, where $\Lambda_{j}$ is the
$b_{2}$-dimensional lattice associated to the $j$’th center with inner product
$D_{j}$,
$D_{jab}=d_{abc}P^{c}_{j},\qquad a,b,c=1,\dots,b_{2}.$ (68)
The total magnetic charge is $P=\sum_{j=1}^{n}P_{j}$ with quadratic form $D$
and $b_{2}$-dimensional lattice $\Lambda$. The signature of $\Lambda_{j}$ and $\Lambda$ is $(1,b_{2}-1)$.
We use boldface notation for the lattices for bound states. For $n$ centers, we
introduce $(n\,b_{2})$-dimensional lattices and vectors,
$\begin{split}\vec{k}&=(k_{1},k_{2},\dots,k_{n})\in{\boldsymbol{\Lambda}}:=\Lambda_{1}\oplus\Lambda_{2}\oplus\dots\oplus\Lambda_{n},\\\
\vec{x}&=(x_{1},x_{2},\dots,x_{n})\in{\boldsymbol{\Lambda}^{*}}:=\Lambda_{1}^{*}\oplus\Lambda_{2}^{*}\oplus\dots\oplus\Lambda_{n}^{*},\\\
\vec{Q}&=(Q_{1},Q_{2},\dots,Q_{n})\in{\boldsymbol{\Lambda}}^{*}+\vec{P}/2=\Lambda_{1}^{*}\oplus\Lambda_{2}^{*}\oplus\dots\oplus\Lambda_{n}^{*}+(P_{1},P_{2},\dots,P_{n})/2.\end{split}$
(69)
We denote the quadratic form for ${\boldsymbol{\Lambda}}$ by $\vec{D}={\rm
diag}(D_{1},D_{2},\dots,D_{n})$.
Since we typically sum over bound states with fixed total electric charge, we
aim to decompose the lattice ${\boldsymbol{\Lambda}}$ into a $b_{2}$-dimensional
sublattice $\overline{\boldsymbol{\Lambda}}\subset{\boldsymbol{\Lambda}}$
representing the total charge, and its orthogonal complement
$\underline{{\boldsymbol{\Lambda}}}$ with dimension $(n-1)b_{2}$ representing
the relative charge distribution over the constituents. To introduce this
properly, let $\overline{\boldsymbol{\Lambda}}$ be the sublattice
$\overline{\boldsymbol{\Lambda}}\subset{\boldsymbol{\Lambda}}$,
$\overline{\boldsymbol{\Lambda}}=\\{\vec{k}=(k,k,\dots,k)\in{\boldsymbol{\Lambda}}\,|\,k\in\mathbb{Z}^{b_{2}}\\}.$
(70)
The lattice ${\boldsymbol{\Lambda}}$ induces a quadratic form on
$\overline{\boldsymbol{\Lambda}}$. Namely, for
$\vec{k}=(k,k,\dots,k)\in\overline{\boldsymbol{\Lambda}}$,
$\vec{D}(\vec{k})=\sum_{j=1}^{n}D_{j}(k)=D(k)$. For $k\in\mathbb{Z}^{b_{2}}$,
this is the quadratic form for the total charge $P$ as desired. As a result,
we have the group isomorphism
$\Lambda^{*}/\Lambda=\overline{\boldsymbol{\Lambda}}^{*}/\overline{\boldsymbol{\Lambda}}$.
Let $\overline{\pi}$ be the orthogonal projection,
$\overline{\pi}:{\boldsymbol{\Lambda}}\to\overline{\boldsymbol{\Lambda}}\otimes\mathbb{Q}.$
(71)
For $\vec{k}\in{\boldsymbol{\Lambda}}$ and
$\vec{m}\in\overline{\boldsymbol{\Lambda}}$, we have
$\vec{D}(\overline{\pi}(\vec{k}),\vec{m})=\vec{D}(\vec{k},\overline{\pi}(\vec{m}))=\vec{D}(\vec{k},\vec{m})\in\mathbb{Z}.$
(72)
Therefore, $\overline{\pi}$ gives an injection of ${\boldsymbol{\Lambda}}$ to
the dual lattice $\overline{\boldsymbol{\Lambda}}^{*}$,
$\overline{\pi}:{\boldsymbol{\Lambda}}\to\overline{\boldsymbol{\Lambda}}^{*}.$
(73)
Moreover, if we extend $\overline{\pi}$ by linearity to
${\boldsymbol{\Lambda}}^{*}$, a similar argument to (72) shows that
$\overline{\pi}:{\boldsymbol{\Lambda}}^{*}\to\overline{\boldsymbol{\Lambda}}^{*}$.
The projection $\overline{\pi}$ is explicitly given for
$\vec{k}=(k_{1},k_{2},\dots,k_{n})\in{\boldsymbol{\Lambda}}$ by,
$\overline{\pi}(\vec{k})=(k,k,\dots,k)\in(\overline{\boldsymbol{\Lambda}})^{*}\quad{\rm
with}\quad k=D^{-1}\sum_{j=1}^{n}D_{j}k_{j}.$ (74)
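The projection (74) and the compatibility property (72) can be illustrated with a small numerical sketch; the symmetric matrices $D_{j}$ below are arbitrary illustrative choices, not derived from a specific intersection form.

```python
import numpy as np

# Toy check of the projection (74) for n = 3 centers and b2 = 2;
# the quadratic forms D_j are illustrative symmetric integral matrices.
D1 = np.array([[2, 1], [1, 2]])
D2 = np.array([[4, 0], [0, 2]])
D3 = np.array([[2, 0], [0, 4]])
D = D1 + D2 + D3                      # quadratic form of the total charge

k = [np.array([1, -2]), np.array([0, 3]), np.array([2, 1])]

# pi_bar(k) = (kbar, ..., kbar) with kbar = D^{-1} sum_j D_j k_j, eq. (74)
kbar = np.linalg.solve(D, D1 @ k[0] + D2 @ k[1] + D3 @ k[2])

# Check (72): for m_vec = (m, m, m) in the diagonal sublattice,
# vecD(pi(k), m_vec) = kbar^T D m equals vecD(k, m_vec) = sum_j k_j^T D_j m.
m = np.array([1, 1])
lhs = kbar @ D @ m
rhs = sum(kj @ Dj @ m for kj, Dj in zip(k, (D1, D2, D3)))
assert np.isclose(lhs, rhs)
```
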
We define furthermore the kernel $\underline{{\boldsymbol{\Lambda}}}={\rm
Ker}(\overline{\pi})$,
$\begin{split}&\underline{{\boldsymbol{\Lambda}}}\coloneqq\left\\{\vec{k}\in{\boldsymbol{\Lambda}}\,\left|\,\sum_{j=1}^{n}D_{j}k_{j}=0\right.\right\\}.\end{split}$
(75)
Elements of $\underline{{\boldsymbol{\Lambda}}}$ have vanishing inner product
with $\overline{\boldsymbol{\Lambda}}$, so that the two lattices are indeed
each other's orthogonal complements in ${\boldsymbol{\Lambda}}$.
If $\overline{\pi}(\vec{k})\in\overline{\boldsymbol{\Lambda}}$ for
$\vec{k}\in{\boldsymbol{\Lambda}}$, then
$\vec{k}-\overline{\pi}(\vec{k})\in\underline{{\boldsymbol{\Lambda}}}$. In
particular, $\overline{\boldsymbol{\Lambda}}\oplus\underline{{\boldsymbol{\Lambda}}}$
lies in the kernel of the homomorphism
$\overline{h}:{\boldsymbol{\Lambda}}\to\overline{\boldsymbol{\Lambda}}^{*}/\overline{\boldsymbol{\Lambda}}$.
We call
$G={\boldsymbol{\Lambda}}/(\overline{\boldsymbol{\Lambda}}\oplus\underline{{\boldsymbol{\Lambda}}}),$
(76)
the glue group for the decomposition of ${\boldsymbol{\Lambda}}$, and the
image of $G$ under $\overline{h}$,
$\overline{h}(G)\subset\overline{\boldsymbol{\Lambda}}^{*}/\overline{\boldsymbol{\Lambda}}$,
the glue group for $\overline{\boldsymbol{\Lambda}}$ [70]. The homomorphism
$\overline{h}$ gives an injection of $G$ to the subgroup
$\overline{h}(G)\subset\overline{\boldsymbol{\Lambda}}^{*}/\overline{\boldsymbol{\Lambda}}$.
Therefore, the number of glue vectors,
$N_{g}=|{\boldsymbol{\Lambda}}/(\overline{\boldsymbol{\Lambda}}\oplus\underline{{\boldsymbol{\Lambda}}})|$
is a factor in
$|\overline{\boldsymbol{\Lambda}}^{*}/\overline{\boldsymbol{\Lambda}}|=\det(\overline{D})$.
In the special case that $\det(\overline{D})$ is prime, or more generally
coprime with $\det(D_{j})$ for all $j$, $N_{g}=\det(\overline{D})$. By the same
arguments, there is a projection
$\underline{\pi}:{\boldsymbol{\Lambda}}\to\underline{{\boldsymbol{\Lambda}}}^{*}$,
and a homomorphism
$\underline{h}:G\to\underline{{\boldsymbol{\Lambda}}}^{*}/\underline{{\boldsymbol{\Lambda}}}$.
Therefore, $N_{g}$ is also a factor in $\det(\underline{D})$. This gives for
the number of glue vectors $N_{g}$ in general
$N_{g}=\sqrt{\frac{\det(\overline{D})\,\det(\underline{D})}{\prod_{j=1}^{n}\det(D_{j})}}.$
(77)
The order of the quotient group
$(\underline{{\boldsymbol{\Lambda}}}^{*}/\underline{{\boldsymbol{\Lambda}}})/\underline{h}(G)$
is
$N_{q}=\frac{\det(\underline{D})}{N_{g}}.$ (78)
This will be useful for us in the following way. If we consider the class of
vectors $\vec{k}\in{\boldsymbol{\Lambda}}$ with fixed projection to
$\overline{\boldsymbol{\Lambda}}^{*}$, for example
$\overline{\pi}(\vec{k})=0$, this fixes an element of the glue group $G$, and
thus also of the images of $G$ in
$\overline{\boldsymbol{\Lambda}}^{*}/\overline{\boldsymbol{\Lambda}}$ and
$\underline{{\boldsymbol{\Lambda}}}^{*}/\underline{{\boldsymbol{\Lambda}}}$.
As a result, the number of possible conjugacy classes of
$\underline{\pi}(\vec{k})$ in
$\underline{{\boldsymbol{\Lambda}}}^{*}/\underline{{\boldsymbol{\Lambda}}}$
is $N_{q}$ (78). This is also the case for a
vector $\vec{x}\in{\boldsymbol{\Lambda}}^{*}$ with fixed projection
$\overline{\pi}(\vec{x})\in\overline{\boldsymbol{\Lambda}}^{*}$.
Similarly to $\Lambda^{*}_{\mu}$ (46), we introduce the notation
$\underline{\boldsymbol{\Lambda}}^{*}_{{\boldsymbol{\mu}}}$,
$\begin{split}&\underline{\boldsymbol{\Lambda}}^{*}_{{\boldsymbol{\mu}}}\coloneqq\left\\{\vec{Q}=(Q_{1},Q_{2},\dots,Q_{n})\in{\boldsymbol{\Lambda}}+(\mu_{1},\mu_{2},\dots,\mu_{n})+\vec{P}/2\,\left|\,\sum_{j=1}^{n}Q_{j}=\mu+P/2\right.\right\\},\end{split}$
(79)
where the subscript
${\boldsymbol{\mu}}\in\underline{\boldsymbol{\Lambda}}^{*}/\underline{\boldsymbol{\Lambda}}$,
$\mu=\sum_{j=1}^{n}\mu_{j}\in\Lambda^{*}$ and
$P=\sum_{j=1}^{n}P_{j}\in\Lambda$. We define the quadratic form
${\boldsymbol{Q}}^{2}$ for
$\underline{{\boldsymbol{\Lambda}}}^{*}_{\boldsymbol{\mu}}$,
$\begin{split}{\boldsymbol{Q}}^{2}&=-Q^{2}+\sum_{j=1}^{3}(Q_{j})_{j}^{2}\\\
&=-(\mu+P/2)^{2}+\sum_{j=1}^{3}(Q_{j})_{j}^{2}.\end{split}$ (80)
This form of the quadratic form appears naturally, when we determine the
partition function $h^{s}_{\mu}$ (118) of scaling solutions in the next
subsection. On the other hand, it should match with the inverse of the
quadratic form for $\underline{{\boldsymbol{\Lambda}}}$. In the cases
considered below, we find that this is indeed the case.
Note that in components, $P$ in (79) has a lower index and equals
$P_{a}=d_{abc}P^{b}P^{c}=d_{abc}(\sum_{j}P_{j})^{b}(\sum_{j}P_{j})^{c}$. To
understand the constraint $\sum_{j}Q_{j}=\mu+P/2$ better, we change variables from
$Q_{j}\in\Lambda^{*}_{j}$ to $k_{j}\in\Lambda_{j}$ using
$\begin{split}&Q_{1,a}=\mu_{1,a}+D_{1ab}(k^{b}_{1}+P_{2}^{b}+P_{1}^{b}/2),\\\
&Q_{2,a}=\mu_{2,a}+D_{2ab}(k^{b}_{2}+P_{3}^{b}+P_{2}^{b}/2),\\\
&\qquad\vdots\\\
&Q_{n-1,a}=\mu_{n-1,a}+D_{(n-1)ab}(k^{b}_{n-1}+P_{n}^{b}+P_{n-1}^{b}/2),\\\
&Q_{n,a}=\mu_{n,a}+D_{nab}(k^{b}_{n}+P_{1}^{b}+P_{n}^{b}/2),\end{split}$ (81)
with $k_{j}^{b}\in\mathbb{Z}^{b_{2}}$ for $j=1,\dots,n$. The shifts of
$k_{j}^{b}$ by $P_{j}^{b}$ are included such that the required identity for
the $k_{j}$ takes a compact form,
$\sum_{j=1}^{n}D_{jab}\,k_{j}^{b}=0,\qquad k_{j}\in\mathbb{Z}^{b_{2}},$ (82)
which is indeed identical to the defining condition for the lattice
$\underline{{\boldsymbol{\Lambda}}}$ in (75). Solving this relation over the
integers, $k_{j}\in\mathbb{Z}^{b_{2}}$, is in general a complicated problem
depending on the $D_{j}$. Solving say for $k_{n}$, we have
$k_{n}=-D_{n}^{-1}\sum_{j=1}^{n-1}D_{j}k_{j}.$ (83)
Thus we find that if $D_{n}^{-1}D_{j}$, $j=1,\dots,n-1$, are not integral
matrices, not all $k_{j}\in\mathbb{Z}^{b_{2}}$ can correspond to bound state
charges for these conjugacy classes. In some cases, this problem can be avoided
by solving for another $k_{j}$ instead of $k_{n}$, but not in general. We
restrict to special cases in the following.
2- and 3-center bound states for $b_{2}=1$
For $b_{2}=1$, the $D_{j}$ are simply positive numbers. For $n=2$, the
relation (82) becomes
$D_{1}k_{1}+D_{2}k_{2}=0.$ (84)
The solutions with $k_{1,2}\in\mathbb{Z}$ are
$k_{1}=\frac{D_{2}}{\gcd(D_{1},D_{2})}\,m,\qquad
k_{2}=-\frac{D_{1}}{\gcd(D_{1},D_{2})}\,m,\qquad m\in\mathbb{Z},$ (85)
where $\gcd$ stands for the greatest common divisor. Substituting (85) in the
quadratic form $\sum_{j=1,2}D_{j}k_{j}^{2}$, gives for the quadratic form
$\underline{D}$ on $\underline{{\boldsymbol{\Lambda}}}$,
$\underline{D}=\frac{D_{1}D_{2}(D_{1}+D_{2})}{\gcd(D_{1},D_{2})^{2}}.$ (86)
Thus the number of glue vectors (77) and order $N_{q}$ (78) are in this case,
$n=2:\qquad N_{g}=\frac{D_{1}+D_{2}}{\gcd(D_{1},D_{2})},\qquad
N_{q}=\frac{D_{1}D_{2}}{\gcd(D_{1},D_{2})}.$ (87)
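The two-center relations (85)–(87) are elementary to verify numerically; the sketch below uses illustrative values for $D_{1},D_{2}$.

```python
from math import gcd, isqrt

# Numerical check of (85)-(87) for two centers with b2 = 1; the
# magnetic data enters only through the illustrative integers D1, D2 > 0.
D1, D2 = 6, 10
g = gcd(D1, D2)

# primitive solution (85) of D1*k1 + D2*k2 = 0
k1, k2 = D2 // g, -(D1 // g)
assert D1 * k1 + D2 * k2 == 0

# quadratic form (86) on the relative-charge lattice, by substitution
D_under = D1 * k1**2 + D2 * k2**2
assert D_under == D1 * D2 * (D1 + D2) // g**2

# number of glue vectors (77) and order (78), reproducing (87)
Ng = isqrt((D1 + D2) * D_under // (D1 * D2))
Nq = D_under // Ng
assert Ng == (D1 + D2) // g and Nq == D1 * D2 // g
```
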
Moving on to $n=3$, we need to find the integral solutions to
$D_{1}k_{1}+D_{2}k_{2}+D_{3}k_{3}=0.$ (88)
To this end, we first consider $k_{3}=0$. Then, the solutions are obviously
given by (85). Next for a fixed non-vanishing $k_{3}$, there are only
solutions if $\gcd(D_{1},D_{2})$ divides $D_{3}k_{3}$ by Bézout’s identity. As
a result, if we choose $\gcd(D_{1},D_{2})/\gcd(D_{1},D_{2},D_{3})$ for
$k_{3}$, Bézout’s identity asserts that there is an integral solution
$(\ell_{1},\ell_{2})$ for $(k_{1},k_{2})$,
$D_{1}\ell_{1}+D_{2}\ell_{2}+D_{3}\frac{\gcd(D_{1},D_{2})}{\gcd(D_{1},D_{2},D_{3})}=0.$
(89)
Since this choice for $k_{3}$ has the smallest non-vanishing magnitude with
integral solutions, the other choices for $k_{3}$ follow by multiplication by
an integer $m_{1}$. Including also the solutions (85), we find for the general
solution,
$\begin{split}k_{1}&=\ell_{1}m_{1}+\frac{D_{2}}{\gcd(D_{1},D_{2})}\,m_{2},\\\
k_{2}&=\ell_{2}m_{1}-\frac{D_{1}}{\gcd(D_{1},D_{2})}\,m_{2},\\\
k_{3}&=\frac{\gcd(D_{1},D_{2})}{\gcd(D_{1},D_{2},D_{3})}m_{1}.\end{split}$
(90)
Substituting (90) in $\sum_{j=1}^{3}D_{j}k_{j}^{2}$, one finds for the
2-dimensional quadratic form $\underline{D}$ for $(m_{1},m_{2})$,
$\underline{D}=\left(\begin{array}[]{cc}D_{1}\ell_{1}^{2}+D_{2}\ell_{2}^{2}+D_{3}\frac{\gcd(D_{1},D_{2})^{2}}{\gcd(D_{1},D_{2},D_{3})^{2}}&\quad\frac{D_{1}D_{2}(\ell_{1}-\ell_{2})}{\gcd(D_{1},D_{2})}\\\
\frac{D_{1}D_{2}(\ell_{1}-\ell_{2})}{\gcd(D_{1},D_{2})}&\frac{D_{1}D_{2}(D_{1}+D_{2})}{\gcd(D_{1},D_{2})^{2}}\end{array}\right).$
(91)
Using (89), we find for its determinant,
$\det(\underline{D})=\frac{D_{1}D_{2}D_{3}(D_{1}+D_{2}+D_{3})}{\gcd(D_{1},D_{2},D_{3})^{2}},$
(92)
which is symmetric in the $D_{j}$. For the number of glue vectors $N_{g}$ and
order $N_{q}$, we now obtain,
$n=3:\qquad N_{g}=\frac{D_{1}+D_{2}+D_{3}}{\gcd(D_{1},D_{2},D_{3})},\qquad
N_{q}=\frac{D_{1}D_{2}D_{3}}{\gcd(D_{1},D_{2},D_{3})}.$ (93)
Together with the discussion below, this strongly suggests that the
generalization to $n>3$ is
$\det(\underline{D})=\frac{(\sum_{j=1}^{n}D_{j})\,\prod_{j=1}^{n}D_{j}}{\gcd(D_{1},D_{2},\dots,D_{n})^{2}}.$
(94)
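As a sanity check, the three-center relations (89), (92) and (93) can be verified numerically for illustrative values of the $D_{j}$; the particular Bézout solution $(\ell_{1},\ell_{2})$ is found by a brute-force search.

```python
from math import gcd

# Check of (89)-(93) for three centers with b2 = 1; D1, D2, D3 are
# illustrative values.
D1, D2, D3 = 4, 6, 10
g12 = gcd(D1, D2)
g = gcd(g12, D3)

def bezout_particular(a, b, target):
    """Brute-force particular solution (x, y) of a*x + b*y = target."""
    for x in range(-abs(target) - abs(b), abs(target) + abs(b) + 1):
        if (target - a * x) % b == 0:
            return x, (target - a * x) // b
    raise ValueError("no solution")

# a particular solution of (89): D1*l1 + D2*l2 + D3*g12/g = 0
l1, l2 = bezout_particular(D1, D2, -D3 * g12 // g)
assert D1 * l1 + D2 * l2 + D3 * g12 // g == 0

# quadratic form (91) in the basis (m1, m2) and its determinant (92)
a11 = D1 * l1**2 + D2 * l2**2 + D3 * (g12 // g) ** 2
a12 = D1 * D2 * (l1 - l2) // g12
a22 = D1 * D2 * (D1 + D2) // g12**2
det = a11 * a22 - a12**2
assert det == D1 * D2 * D3 * (D1 + D2 + D3) // g**2

# glue data (93)
Ng = (D1 + D2 + D3) // g
Nq = D1 * D2 * D3 // g
assert Ng * Nq == det
```

Note that the determinant (92) is independent of which particular Bézout solution the search returns, as the text argues using (89).
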
2- and 3-center bound states for $b_{2}\geq 1$ with simplifications
We continue to discuss the general case $b_{2}\geq 1$. Let us first consider
$n=2$. We make the assumption that $D_{2}^{-1}D_{1}$ is an integral matrix.
The equation (82) can be solved by setting $k_{2}=-D_{2}^{-1}D_{1}k_{1}$.
Substituting this in the quadratic form $\vec{D}$, gives us for the
$b_{2}$-dimensional quadratic form on $\underline{{\boldsymbol{\Lambda}}}$
$\underline{D}=D_{1}+D_{1}D_{2}^{-1}D_{1}.$ (95)
This agrees with (86) upon specialization to $b_{2}=1$. We find then for
$N_{g}$ and $N_{q}$,
$n=2:\qquad N_{g}={\rm det}(D_{2}^{-1}D),\qquad N_{q}={\rm det}(D_{1}).$ (96)
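A quick numerical check of (95)–(96), with illustrative matrices for which $D_{2}^{-1}D_{1}$ is integral:

```python
import numpy as np

# Check of (95)-(96) for n = 2 with b2 = 2; the matrices are illustrative.
D2 = np.array([[2, 1], [1, 2]])
D1 = 2 * D2                       # D2^{-1} D1 = 2*Id is integral
D = D1 + D2

# k2 = -D2^{-1} D1 k1 solves D1 k1 + D2 k2 = 0; substituting into
# diag(D1, D2) gives the relative quadratic form (95)
D_under = D1 + D1 @ np.linalg.inv(D2) @ D1

# glue data (96); N_g * N_q must equal det(D_under)
Ng = round(np.linalg.det(np.linalg.inv(D2) @ D))
Nq = round(np.linalg.det(D1))
assert Ng * Nq == round(np.linalg.det(D_under))
```
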
To deal with our main case, $n=3$, we make two technical simplifications:
1. 1.
We assume that $D_{3}^{-1}D_{1}$ and $D_{3}^{-1}D_{2}$ are integral matrices,
such that $k_{3}\in\mathbb{Z}^{b_{2}}$ in (83) if $k_{1}$ and
$k_{2}\in\mathbb{Z}^{b_{2}}$.
2. 2.
We assume that $d_{abc}\in 2\mathbb{Z}$ for all $a$, $b$, $c$. Then
$\Lambda^{*}+P/2=\Lambda^{*}$ for any $P$, so that the shifts by $P$ (and
$P_{j}$) in (79), (80) and (81) are unnecessary.
These assumptions are satisfied in the examples in Section 6.
If we substitute now (83) in ${\boldsymbol{Q}}^{2}$, we arrive at
$\begin{split}{\boldsymbol{Q}}^{2}&=-\mu^{2}+(\mu_{1})^{2}_{1}+(\mu_{2})^{2}_{2}+(\mu_{3})^{2}_{3}+2(\mu_{1}-D_{3}^{-1}D_{1}\mu_{3}).k_{1}+2(\mu_{2}-D_{3}^{-1}D_{2}\mu_{3}).k_{2}\\\
&\quad+(k_{1},k_{2})\,\underline{D}\,(k_{1},k_{2})^{T},\end{split}$ (97)
where $\underline{D}$ is the quadratic form of the lattice
$\underline{{\boldsymbol{\Lambda}}}$,
$\underline{D}=\left(\begin{array}[]{cc}D_{1}+D_{1}D_{3}^{-1}D_{1}&\quad
D_{1}D_{3}^{-1}D_{2}\\\ D_{2}D_{3}^{-1}D_{1}&\quad
D_{2}+D_{2}D_{3}^{-1}D_{2}\end{array}\right).$ (98)
The determinant of $\underline{D}$ is given by333We use that for an invertible
$n\times n$ matrix $A$ and $n\times m$ matrices $U$ and $V$, we have
$\det(A+UV^{T})=\det(A)\,\det(1_{m}+V^{T}A^{-1}U)$,
en.wikipedia.org/wiki/Matrix_determinant_lemma
$\begin{split}\det(\underline{D})&=\det(D_{1})\,\det(D_{2})\,\det(D_{3}^{-1})\,\det(D_{1}+D_{2}+D_{3})\\\
&=\det(D_{1})\,\det(D_{2})\,\det(D_{3}^{-1}D).\end{split}$ (99)
We have for $N_{g}$ and $N_{q}$ in this case,
$N_{g}=\det(D_{3}^{-1}D),\qquad N_{q}=\det(D_{1})\det(D_{2}).$ (100)
These formulas are in agreement with (92) for $b_{2}=1$, and
$\gcd(D_{1},D_{2},D_{3})=D_{3}$. Moreover for generic $b_{2}$, we can consider
cases where we can solve (82) in terms of $k_{2}$ as well as $k_{3}$, and we
expect a symmetry in $D_{2}\leftrightarrow D_{3}$. Indeed, then
$D_{2}^{-1}D_{1}$ and $D_{2}^{-1}D_{3}$ are integral matrices too. The only
way for $D_{2}^{-1}D_{3}$ and $D_{3}^{-1}D_{2}$ both to be integral is to
satisfy $|\det(D_{2})|=|\det(D_{3})|$, in which case (99) reduces to
$\det\underline{D}=\pm\det(D_{1})\,\det(D_{1}+D_{2}+D_{3})$, which is
symmetric under the exchange of 2 and 3. Similar comments of course apply to
$k_{1}$ as well.
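The determinant formula (99) and the glue data (100) can be checked numerically; the matrices below are illustrative and satisfy the first assumption above.

```python
import numpy as np

# Check of (98)-(100) with b2 = 2; D1, D2 are integral multiples of D3,
# so D3^{-1} D1 and D3^{-1} D2 are integral.
D3 = np.array([[2, 0], [0, 2]])
D1, D2 = 2 * D3, 3 * D3
D = D1 + D2 + D3
D3inv = np.linalg.inv(D3)

# block quadratic form (98) on the relative-charge lattice
D_under = np.block([[D1 + D1 @ D3inv @ D1, D1 @ D3inv @ D2],
                    [D2 @ D3inv @ D1, D2 + D2 @ D3inv @ D2]])

lhs = np.linalg.det(D_under)
rhs = np.linalg.det(D1) * np.linalg.det(D2) * np.linalg.det(D3inv @ D)
assert np.isclose(lhs, rhs)          # determinant formula (99)

Ng = round(np.linalg.det(D3inv @ D))
Nq = round(np.linalg.det(D1) * np.linalg.det(D2))
assert Ng * Nq == round(lhs)         # glue data (100)
```
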
Using general formulas for inverses of block matrices [72], we derive that
the inverse of $\underline{D}$ reads
$\begin{split}\underline{D}^{-1}&=\left(\begin{array}[]{cc}D^{-1}(D_{2}+D_{3})D_{1}^{-1}&\quad-D^{-1}\\\
-D^{-1}&D^{-1}(D_{1}+D_{3})D_{2}^{-1}\end{array}\right)\\\
&=\left(\begin{array}[]{cc}D_{1}^{-1}-D^{-1}&\quad-D^{-1}\\\
-D^{-1}&D_{2}^{-1}-D^{-1}\end{array}\right).\end{split}$ (101)
One can verify that the determinant of $\underline{D}^{-1}$ is indeed the
inverse of (99). If we introduce the two components
${\boldsymbol{\mu}}_{1}$ and ${\boldsymbol{\mu}}_{2}$,
$\begin{split}{\boldsymbol{\mu}}_{1}&=\mu_{1}-D_{1}D_{3}^{-1}\mu_{3},\\\
{\boldsymbol{\mu}}_{2}&=\mu_{2}-D_{2}D_{3}^{-1}\mu_{3},\end{split}$ (102)
the quadratic form ${\boldsymbol{Q}}^{2}$ (97) for $k_{1}=k_{2}=0$,
${\boldsymbol{Q}}^{2}={\boldsymbol{\mu}}^{2}$, can be written as
${\boldsymbol{\mu}}^{2}=-({\boldsymbol{\mu}}_{1}+{\boldsymbol{\mu}}_{2})^{2}+({\boldsymbol{\mu}}_{1})_{1}^{2}+({\boldsymbol{\mu}}_{2})_{2}^{2}=({\boldsymbol{\mu}}_{1},{\boldsymbol{\mu}}_{2})\underline{D}^{-1}({\boldsymbol{\mu}}_{1},{\boldsymbol{\mu}}_{2})^{T}.$
(103)
For $k_{1},k_{2}$ non-zero, we have
${\boldsymbol{Q}}=({\boldsymbol{Q}}_{1},{\boldsymbol{Q}}_{2})$ with
$\displaystyle{\boldsymbol{Q}}_{1}$
$\displaystyle=\mu_{1}-D_{1}D_{3}^{-1}\mu_{3}+(D_{1}+D_{1}D_{3}^{-1}D_{1})k_{1}+D_{1}D_{3}^{-1}D_{2}k_{2},$
(104) $\displaystyle{}{\boldsymbol{Q}}_{2}$
$\displaystyle=\mu_{2}-D_{2}D_{3}^{-1}\mu_{3}+D_{2}D_{3}^{-1}D_{1}k_{1}+(D_{2}+D_{2}D_{3}^{-1}D_{2})k_{2},$
or equivalently
$\begin{split}{\boldsymbol{Q}}_{1}&=Q_{1}-D_{1}D_{3}^{-1}Q_{3},\\\
{\boldsymbol{Q}}_{2}&=Q_{2}-D_{2}D_{3}^{-1}Q_{3},\end{split}$ (105)
which we can write more compactly as
${\boldsymbol{Q}}={\boldsymbol{\mu}}+\underline{D}\,{\boldsymbol{k}}$ with
${\boldsymbol{k}}=(k_{1},k_{2})^{T}$. Thus an
element
${\boldsymbol{\mu}}\in\underline{{\boldsymbol{\Lambda}}}^{*}/\underline{{\boldsymbol{\Lambda}}}$
is completely determined by
${\boldsymbol{\mu}}=\\{(\mu_{1},\mu_{2},\mu_{3},\mu)|\,\,\mu_{j}\in\Lambda_{j}^{*},\quad\mu_{1}+\mu_{2}+\mu_{3}=\mu\in\Lambda^{*}\\}.$
(106)
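The block inverse (101) can likewise be verified numerically for illustrative data satisfying the integrality assumption.

```python
import numpy as np

# Check of the block inverse (101) of the quadratic form (98);
# D1, D2 are integral multiples of D3, chosen for illustration.
D3 = np.array([[2, 1], [1, 2]])
D1, D2 = 2 * D3, 4 * D3
D = D1 + D2 + D3
D3inv = np.linalg.inv(D3)
Dinv = np.linalg.inv(D)

# quadratic form (98) and the claimed inverse (101), second form
D_under = np.block([[D1 + D1 @ D3inv @ D1, D1 @ D3inv @ D2],
                    [D2 @ D3inv @ D1, D2 + D2 @ D3inv @ D2]])
D_under_inv = np.block([[np.linalg.inv(D1) - Dinv, -Dinv],
                        [-Dinv, np.linalg.inv(D2) - Dinv]])
assert np.allclose(D_under @ D_under_inv, np.eye(4))
```
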
Generic number of constituents (with simplifications)
The analysis for a generic number $n$ of constituents (including $n=2$)
follows analogously. The quadratic form reads as in (80), but with 3 replaced
by $n$ in the summation, and with constraint $\sum_{j=1}^{n}Q_{j}=\mu+P/2$.
Similarly to (82), the constraint can be expressed as
$\sum_{j=1}^{n}D_{jab}k_{j}^{b}=0.$ (107)
With the assumption that $D_{n}^{-1}D_{j}$ is an integral matrix for all
$j=1,\dots,n$, this can be solved and the quadratic form becomes
$\underline{D}={\rm
diag}(D_{1},\dots,D_{n-1})+\left(\begin{array}[]{c}D_{1}\\\ \dots\\\
D_{n-1}\end{array}\right)\,(D_{n}^{-1}D_{1},\dots,D_{n}^{-1}D_{n-1}),$ (108)
with inverse
$\underline{D}^{-1}={\rm
diag}(D_{1}^{-1},\dots,D_{n-1}^{-1})-\left(\begin{array}[]{c}D^{-1}\\\
\dots\\\ D^{-1}\end{array}\right)\,(1,\dots,1),$ (109)
where $D=\sum_{j=1}^{n}D_{j}$. Moreover, the determinant of $\underline{D}$ is
$\begin{split}{\rm
det}(\underline{D})&=\det(D_{n}^{-1})\,\det(D)\,\prod_{j=1}^{n-1}\det(D_{j})\\\
&=\det(D_{n}^{-1}D)\,\prod_{j=1}^{n-1}\det(D_{j}),\end{split}$ (110)
from which $N_{g}$ and $N_{q}$ are easily determined.
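The generic-$n$ determinant formula (110) can be checked in the same way; the sketch below uses $n=4$ with illustrative matrices, each an integral multiple of $D_{n}$.

```python
import numpy as np

# Check of (108)-(110) for n = 4 centers with b2 = 2; each D_j is an
# integral multiple of D_n so the integrality assumption holds.
Dn = np.array([[2, 1], [1, 2]])
Ds = [2 * Dn, 3 * Dn, 4 * Dn, Dn]          # D_1, ..., D_n
n = len(Ds)
D = sum(Ds)
Dninv = np.linalg.inv(Dn)

# block form (108) on the first n-1 charges
blocks = [[Ds[i] @ Dninv @ Ds[j] + (Ds[i] if i == j else 0)
           for j in range(n - 1)] for i in range(n - 1)]
D_under = np.block(blocks)

# determinant formula (110)
lhs = np.linalg.det(D_under)
rhs = np.linalg.det(Dninv @ D) * np.prod([np.linalg.det(Dj) for Dj in Ds[:-1]])
assert np.isclose(lhs, rhs)
```
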
Characteristic vectors
We briefly discuss here a characteristic vector for the lattice
$\underline{{\boldsymbol{\Lambda}}}$ with $n=3$, which is important for the
theta series of the scaling solutions. A sign which frequently occurred in
Section 2 is $(-1)^{a+b+c}$. Such signs in a theta series are typically
written in terms of a characteristic vector. We therefore express $a+b+c$ as
$a+b+c={\boldsymbol{K}}.{\boldsymbol{Q}},$ (111)
with
${\boldsymbol{K}}=(P_{3}-P_{2},P_{1}-P_{3}).$ (112)
This is a characteristic vector of $\underline{{\boldsymbol{\Lambda}}}$.
Indeed, we have with $\vec{P}=(P_{1},P_{2},P_{3})$,
$\vec{k}\cdot\vec{P}+\vec{k}^{2}\in 2\mathbb{Z},$ (113)
since $P_{j}$ is a characteristic vector for $\Lambda_{j}$, $j=1,2,3$. We can
decompose with respect to the lattice decomposition
$\underline{{\boldsymbol{\Lambda}}}\oplus\overline{\boldsymbol{\Lambda}}$,
${\boldsymbol{k}}\cdot{\boldsymbol{K}}+{\boldsymbol{k}}^{2}+k\cdot P+k^{2}.$
(114)
Since $P$ is a characteristic vector for $\Lambda$, this shows that
${\boldsymbol{K}}$ is a characteristic vector for ${\boldsymbol{\Lambda}}$.
However, if we express $a+b+c+P.Q$ in terms of vectors in
${\boldsymbol{\Lambda}}=\sum_{j}\Lambda_{j}$, such that
$a+b+c=\vec{K}.\vec{Q}\mod 2$, then
$\vec{K}=(P_{3}-P_{2},P_{1}-P_{3},P_{2}-P_{1})$. Then
$(\vec{K}+\vec{P})\cdot\vec{Q}={\boldsymbol{K}}\cdot{\boldsymbol{Q}}+\vec{P}\cdot\vec{Q}=P\cdot
Q\mod 2$. Moreover, $\vec{P}$ is a characteristic vector of
${\boldsymbol{\Lambda}}$, and
$\vec{P}^{2}=P^{3}+{\boldsymbol{K}}^{3}\mod 4.$ (115)
### 4.2 Partition functions
We consider black hole bound states with three cores. The $j^{\rm th}$ core
carries electric and magnetic charges $Q_{j}$ and $P_{j}$ respectively. It is
natural to work with a mixed ensemble with total magnetic charge $P$ held
fixed. For the present purpose, we shall fix the total electric D2-brane
charge $Q=\sum_{j}Q_{j}\in\Lambda^{*}+P/2$ with $\mu\in\Lambda^{*}/\Lambda$ as
well. We work at the attractor value of the moduli, corresponding to total
charge vector $(P,Q)$. At this point, apart from single-core black holes, the
only other black holes that survive are the scaling black holes. A natural
question is: for a fixed total charge, how many scaling black holes are there
and what is their contribution to the index?
To this end, we define the generating function $h^{T}_{P,\mu}(\tau)$ of
numerical total core invariants $\Omega_{T}$ in analogy to the attractor
indices,
$h^{T}_{P,\mu}(\tau)=\sum_{Q_{0}}\bar{\Omega}_{T}(\gamma)\,q^{\hat{Q}_{\bar{0}}}.$
(116)
We can similarly define the partition function of single core indices
$h^{S}_{P,\mu}(\tau)$, with $\Omega_{T}$ replaced by $\Omega_{S}$. The
$\Omega_{T}(\gamma)$ are determined from the refined ones (10) using the
regularization (14). For the 3-core case, this gives (33). If $P$ is
irreducible, i.e. it cannot be written as a sum of more than one positive
magnetic charge, the three partition functions agree.
Based on (8), we can express the attractor partition function $h_{P,\mu}$ in
terms of the partition function $h^{T}_{P,\mu}$. We have schematically
$h_{P,\mu}(\tau)=h^{T}_{P,\mu}(\tau)+\sum_{n>1}\sum_{\sum_{j=1}^{n}P_{j}=P\atop\sum_{j=1}^{n}Q_{j}=Q}\frac{g_{C}(\\{\gamma_{j}\\},\\{c_{j}^{\lambda}\\})}{|{\rm
Aut(\\{\gamma_{j}\\})}|}q^{\mu^{2}/2-\sum_{j}(Q_{j})_{j}^{2}/2}\prod_{j=1}^{n}h^{T}_{P_{j},\mu_{j}}(\tau).$
(117)
Recall that there are no 2-center/core scaling black holes, such that there is
no contribution from $n=2$ on the rhs.
We will proceed by considering the term in (117) with $n=3$. Using the
notation introduced in Section 4.1, we can enumerate the three-core
scaling black holes as
$h^{3T}_{\\{P_{j}\\},\mu}(\tau)=\sum_{\mu_{j}\in\Lambda^{*}_{j}/\Lambda_{j},\,\,j=1,2,3,\atop{\mu_{1}+\mu_{2}+\mu_{3}=\mu}}h^{T}_{P_{1},\mu_{1}}(\tau)\,h^{T}_{P_{2},\mu_{2}}(\tau)\,h^{T}_{P_{3},\mu_{3}}(\tau)\,\Psi_{{\boldsymbol{\mu}}}(\tau)\,,$
(118)
with ${\boldsymbol{\mu}}$ as in (106), and where $\Psi_{\boldsymbol{\mu}}$ is
the indefinite theta series
$\Psi_{{\boldsymbol{\mu}}}(\tau)=\sum_{{\boldsymbol{Q}}\in\underline{\boldsymbol{\Lambda}}^{*}_{{\boldsymbol{\mu}}}}g_{C}(\\{\gamma_{j}\\},\\{c_{j}^{\lambda}\\})\,q^{-{\boldsymbol{Q}}^{2}/2}.$
(119)
With $y=e^{2\pi iz}$, we define the refined series as
$\Psi_{{\boldsymbol{\mu}}}(\tau,z)=(y-y^{-1})^{2}\sum_{{\boldsymbol{Q}}\in\underline{\boldsymbol{\Lambda}}^{*}_{{\boldsymbol{\mu}}}}g_{C}(\\{\gamma_{j}\\},\\{c_{j}^{\lambda}\\};y)\,q^{-{\boldsymbol{Q}}^{2}/2},$
(120)
with $g_{C}(\\{\gamma_{j}\\},\\{c_{j}^{\lambda}\\})$ as in (42) and
$g_{C}(\\{\gamma_{j}\\},\\{c_{j}^{\lambda}\\};y)$ as in (27). Note that
$\Psi_{\boldsymbol{\mu}}$ is symmetric as a function of $z$,
$\Psi_{{\boldsymbol{\mu}}}(\tau,-z)=\Psi_{{\boldsymbol{\mu}}}(\tau,z).$ (121)
The two functions are related by (14),
$\Psi_{{\boldsymbol{\mu}}}(\tau)=\left(\frac{1}{4\pi
i}\frac{\partial}{\partial
z}\right)^{2}\left.\Psi_{{\boldsymbol{\mu}}}(\tau,z)\right|_{z=0}.$ (122)
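The prefactor in (122) can be checked symbolically: for a generic smooth function $g$ standing in for the $y$-dependence of the summand, the operator in (122) applied to $(y-y^{-1})^{2}g(y)$ at $z=0$ returns $2\,g(1)$, consistent with the unrefined kernel arising as a regularized $y\to 1$ limit (a sketch, with $g$ a placeholder function).

```python
import sympy as sp

# Symbolic check of the derivative operator in (122): act with
# ((1/(4*pi*I)) d/dz)^2 at z = 0 on (y - 1/y)^2 * g(y), y = exp(2*pi*I*z).
z = sp.symbols('z')
g = sp.Function('g')
y = sp.exp(2 * sp.pi * sp.I * z)
refined = (y - 1 / y) ** 2 * g(y)

unrefined = (sp.diff(refined, z, 2) / (4 * sp.pi * sp.I) ** 2).subs(z, 0)
# only the term with two derivatives hitting (y - 1/y)^2 survives at z = 0
assert sp.simplify(unrefined - 2 * g(1)) == 0
```
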
The kernel $g_{C}(\\{\gamma_{j}\\},\\{c_{j}^{\lambda}\\})$ and therefore
$\Psi_{{\boldsymbol{\mu}}}$ is unchanged under a symplectic transformation
(51), such that $h^{3T}_{\\{P_{j}\\},\mu}$ is invariant under spectral flow as
required. The number of terms in the sum over $\mu_{j}$ in (118) is given by
$N_{q}$ (78).
To determine the modular properties of $\Psi_{\boldsymbol{\mu}}$, we consider
first the generating function of $f_{C}$ (44), which counts the
scaling charge configurations for a given total charge. We define this
function $\Phi_{{\boldsymbol{\mu}}}$ as the following theta series,
$\begin{split}\Phi_{{\boldsymbol{\mu}}}(\tau)&=\sum_{{\boldsymbol{Q}}\in\underline{\boldsymbol{\Lambda}}^{*}_{{\boldsymbol{\mu}}}}f_{C}(\\{\gamma_{j}\\},\\{c_{j}^{\lambda}\\})\,q^{-{\boldsymbol{Q}}^{2}/2},\end{split}$
(123)
with $f_{C}(\\{\gamma_{j}\\},\\{c_{j}^{\lambda}\\})$ as in (44).
The following subsections will demonstrate that
$\Phi_{{\boldsymbol{\mu}}}(\tau)$ is a convergent $q$-series, which can be
completed to a function $\widehat{\Phi}_{\boldsymbol{\mu}}$ that transforms as
a vector-valued modular form. The transformation properties under the $S$ and
$T$ transformations are
$\begin{split}\widehat{\Phi}_{{\boldsymbol{\mu}}}(-1/\tau,-1/\bar{\tau})&=-\frac{(-i\tau)^{b_{2}}}{\sqrt{|\underline{{\boldsymbol{\Lambda}}}^{*}/\underline{{\boldsymbol{\Lambda}}}|}}\,e^{\pi
i{\boldsymbol{K}}^{2}/2}\sum_{{\boldsymbol{\nu}}\in\underline{{\boldsymbol{\Lambda}}}^{*}/\underline{{\boldsymbol{\Lambda}}}}e^{2\pi
i{\boldsymbol{\mu}}.{\boldsymbol{\nu}}}\,\widehat{\Phi}_{{\boldsymbol{\nu}}}(\tau,\bar{\tau}),\\\
\widehat{\Phi}_{{\boldsymbol{\mu}}}(\tau+1,\bar{\tau}+1)&=e^{\pi
i({\boldsymbol{\mu}}+{\boldsymbol{K}}/2)^{2}}\,\widehat{\Phi}_{{\boldsymbol{\mu}}}(\tau,\bar{\tau}).\end{split}$
(124)
The partition function $\widehat{\Psi}_{{\boldsymbol{\mu}}}(\tau)$ can be
obtained by introducing a suitable elliptic variable in
$\widehat{\Phi}_{{\boldsymbol{\mu}}}$ and subsequently differentiating twice
with respect to this variable. As a result, the modular transformations of the completed
function $\widehat{\Phi}_{{\boldsymbol{\mu}}}$ equal those of the completed
$\widehat{\Psi}_{{\boldsymbol{\mu}}}$ except that the weight of
$\widehat{\Psi}_{{\boldsymbol{\mu}}}$ is increased by two compared to
$\widehat{\Phi}_{{\boldsymbol{\mu}}}$. The weight of
$\widehat{\Psi}_{{\boldsymbol{\mu}}}$ is thus $b_{2}+2$. The non-holomorphic
terms are determined in this way in Section 4.5, specifically Eq. (181). The
end result is that $\widehat{\Psi}_{{\boldsymbol{\mu}}}$ transforms as
$\begin{split}\widehat{\Psi}_{{\boldsymbol{\mu}}}(-1/\tau,-1/\bar{\tau})&=\frac{(-i\tau)^{b_{2}+2}}{\sqrt{|\underline{{\boldsymbol{\Lambda}}}^{*}/\underline{{\boldsymbol{\Lambda}}}|}}\,e^{\pi
i{\boldsymbol{K}}^{2}/2}\sum_{{\boldsymbol{\nu}}\in\underline{{\boldsymbol{\Lambda}}}^{*}/\underline{{\boldsymbol{\Lambda}}}}e^{2\pi
i{\boldsymbol{\mu}}.{\boldsymbol{\nu}}}\,\widehat{\Psi}_{{\boldsymbol{\nu}}}(\tau,\bar{\tau}),\\\
\widehat{\Psi}_{{\boldsymbol{\mu}}}(\tau+1,\bar{\tau}+1)&=e^{\pi
i({\boldsymbol{\mu}}+{\boldsymbol{K}}/2)^{2}}\,\widehat{\Psi}_{{\boldsymbol{\mu}}}(\tau,\bar{\tau}).\end{split}$
(125)
Therefore, the completion of $h^{3T}_{\\{P_{j}\\},\mu}$ (118),
$\widehat{h}^{3T}_{\\{P_{j}\\},\mu}(\tau,\bar{\tau})=\sum_{\mu_{j}\in\Lambda^{*}_{j}/\Lambda_{j},\,\,j=1,2,3,\atop{\mu_{1}+\mu_{2}+\mu_{3}=\mu}}\widehat{h}^{T}_{P_{1},\mu_{1}}(\tau)\,\widehat{h}^{T}_{P_{2},\mu_{2}}(\tau)\,\widehat{h}^{T}_{P_{3},\mu_{3}}(\tau)\,\widehat{\Psi}_{{\boldsymbol{\mu}}}(\tau)\,,$
(126)
transforms as $\widehat{h}_{P,\mu}$ (65) as we aimed to show. We can
furthermore combine $\widehat{h}_{P,\mu}$ with the theta series
$\Theta_{\mu}$,
$\widehat{\mathcal{Z}}^{3T}_{P}(\tau,C,t)=\sum_{\sum_{j=1}^{3}P_{j}=P}\sum_{\mu\in\Lambda^{*}/\Lambda}\widehat{h}^{3T}_{\\{P_{j}\\},\mu}(\tau,\bar{\tau})\,\Theta_{\mu}(\tau,\bar{\tau},C,B).$
(127)
We can then decompose the attractor partition
$\widehat{\mathcal{Z}}^{\lambda}_{P}$ in terms of the multi-core partition
functions $\widehat{\mathcal{Z}}^{nT}_{P}$,
$\widehat{\mathcal{Z}}^{\lambda}_{P}(\tau,C,t)=\widehat{\mathcal{Z}}^{T}_{P}(\tau,C,t)+\widehat{\mathcal{Z}}^{3T}_{P}(\tau,C,t)+\dots.$
(128)
Since the partition functions transform the same way, this raises the question
of which terms are captured by the MSW conformal field theory. As mentioned in
the introduction, it will also be interesting to deduce the non-holomorphic
terms of $\widehat{\mathcal{Z}}^{T}_{P}$ using those determined for
$\widehat{\mathcal{Z}}^{3T}_{P}$ in this paper, and those for
$\widehat{\mathcal{Z}}^{\lambda}_{P}$ in [29].
### 4.3 Convergence
A crucial aspect of $\Phi_{\boldsymbol{\mu}}$ (and $\Psi_{\boldsymbol{\mu}}$)
is whether the sum on the rhs of (123) (and (119)) is convergent. If
${\boldsymbol{Q}}^{2}$ were negative definite, convergence of these series
would be guaranteed. However, this is not the case, since the electric charge
lattice has signature $(2,2b_{2}-2)$, i.e. it has 2 positive directions.
To prove the convergence, we first introduce a theta series
$\Theta_{\mu}[\mathcal{K}](\tau)$ with kernel $\mathcal{K}$ for a generic
indefinite lattice $L$ and $\mu\in L^{*}$,
$\displaystyle\Theta_{\mu}[\mathcal{K}](\tau;L)$ $\displaystyle=\sum_{x\in
L+\mu}\mathcal{K}(x)\,q^{-B(x)/2}\,,$ (129)
with integral quadratic form $B$. If $L$ is negative definite, we also use
$\displaystyle\theta_{\mu}(\tau;L)$ $\displaystyle=\Theta_{\mu}[1](\tau;L).$
(130)
For an indefinite lattice the kernel
$\mathcal{K}(x)=\mathcal{K}(x,\mathcal{V})$ depends on a collection
$\mathcal{V}=\\{V_{1},V_{2},\dots,V_{N}\\},$
of positive vectors. For signature $(2,2b_{2}-2)$,
$\mathcal{K}(x,\mathcal{V})$ can be expressed as [36, 28, 73]
$\mathcal{K}(x,\mathcal{V})=\frac{1}{4}\left(w(\mathcal{V})+\sum_{j=1}^{N}\mathop{\mathrm{sgn}}(B(x,V_{j}))\mathop{\mathrm{sgn}}(B(x,V_{j+1}))\right)\,,$
(131)
where for any strictly positive vector $v\in L$, $v^{2}>0$,
$w(\mathcal{V})=-\sum_{j=1}^{N}\mathop{\mathrm{sgn}}(B(v,V_{j}))\mathop{\mathrm{sgn}}(B(v,V_{j+1})),$
(132)
which is independent of the choice of positive vector $v$ [73]. There are
various sufficient conditions for convergence put forward in the literature
[28, 36, 38, 39, 73]. We will consider here the following $N$-gon conditions
put forward in [28, 36, 73], which read
$\begin{split}&B(V_{j},V_{j})>0,\\\
&B(V_{j},V_{j})\,B(V_{j+1},V_{j+1})-B(V_{j},V_{j+1})^{2}>0,\\\
&B(V_{j},V_{j})\,B(V_{j-1},V_{j+1})-B(V_{j},V_{j-1})\,B(V_{j},V_{j+1})<0.\end{split}$
(133)
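The conditions (133) are straightforward to implement as a cyclic check over the $V_{j}$; the toy example below uses $B=\mathrm{Id}$ on $\mathbb{R}^{2}$ (two positive directions, as for $b_{2}=1$), where three unit vectors at $120$-degree angles satisfy all three conditions.

```python
import numpy as np

# The N-gon conditions (133); B is the bilinear form, vecs an ordered
# list of N vectors with cyclic indices.
def ngon_conditions(B, vecs):
    n = len(vecs)
    bil = lambda u, v: u @ B @ v
    for j in range(n):
        Vm, V, Vp = vecs[j - 1], vecs[j], vecs[(j + 1) % n]
        if not bil(V, V) > 0:
            return False
        if not bil(V, V) * bil(Vp, Vp) - bil(V, Vp) ** 2 > 0:
            return False
        if not bil(V, V) * bil(Vm, Vp) - bil(V, Vm) * bil(V, Vp) < 0:
            return False
    return True

# Toy example: three unit vectors at 120-degree angles.
B = np.eye(2)
angles = [0, 2 * np.pi / 3, 4 * np.pi / 3]
vecs = [np.array([np.cos(t), np.sin(t)]) for t in angles]
assert ngon_conditions(B, vecs)
```
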
Now let us return to the sum (123) at hand. It comprises six individual
sums, which are each of the form
$\displaystyle\begin{split}&\sum_{Q_{i}\in\mu_{i}+\Lambda_{i}+P_{i}/2\atop{Q_{1}+Q_{2}+Q_{3}=\mu+P/2}}F_{\ell}(a,b,c)\,(-1)^{a+b+c}\,q^{Q^{2}/2-\sum_{i}(Q_{i})_{i}^{2}/2}=\sum_{{\boldsymbol{Q}}\in\underline{\boldsymbol{\Lambda}}^{*}_{{\boldsymbol{\mu}}}}F_{\ell}(a,b,c)\,(-1)^{a+b+c}\,q^{-{\boldsymbol{Q}}^{2}/2},\end{split}$
(134)
with $\ell=1,\dots,6$.
The simplification $F_{\rm total}(a,b,c)$ of $\sum_{j=1}^{6}F_{j}(a,b,c)$ put
forward in (43) is precisely of the form (131) with $N=3$. To present the
vectors, we define
$\begin{split}&C_{a}=(-P_{2},P_{1},0),\\\ &C_{b}=(0,-P_{3},P_{2}),\\\
&C_{c}=(P_{3},0,-P_{1}),\end{split}$ (135)
such that $C_{a}.{\boldsymbol{Q}}=a$, $C_{b}.{\boldsymbol{Q}}=b$ and
$C_{c}.{\boldsymbol{Q}}=c$, with $a,b$ and $c$ as in (67). The $V_{j}$ are
then identified with $C_{j}\in\underline{{\boldsymbol{\Lambda}}}$ with the
$C_{j}$ given by,
$\begin{split}&C_{1}=C_{a}+C_{b}-C_{c}=(-P_{2}-P_{3},\,P_{1}-P_{3},P_{1}+P_{2}),\\\
&C_{2}=C_{a}-C_{b}+C_{c}=(-P_{2}+P_{3},\,P_{1}+P_{3},\,-P_{1}-P_{2}),\\\
&C_{3}=-C_{a}+C_{b}+C_{c}=(P_{2}+P_{3},\,-P_{1}-P_{3},\,-P_{1}+P_{2})\,.\end{split}$
(136)
If we assume that $P_{j}$ is an ample divisor for each $j\in\\{1,2,3\\}$, the
triple intersections satisfy $P_{i}P_{j}P_{k}>0$ for all
$i,j,k\in\\{1,2,3\\}$. The conditions for convergence (133) are then satisfied.
It is also useful to consider the convergence for the kernel due to a single
permutation separately, since the different permutations are weighted by a
different factor in $g_{C}$. The vectors $V_{j}$ for the kernel $F^{(123)}$
(40) can be chosen as $C_{j}^{(123)}$,
$\begin{split}C_{1}^{(123)}&=C_{a}-C_{c}=(C_{1}-C_{2})/2,\\\
C_{2}^{(123)}&=C_{b}-C_{c}=(C_{1}-C_{3})/2,\\\
C_{3}^{(123)}&=C_{c}-C_{a}-C_{b}=-C_{1}.\end{split}$ (137)
Again one may verify that with the assumption $P_{i}P_{j}P_{k}>0$ for all
$i,j,k\in\\{1,2,3\\}$, these vectors satisfy the conditions for convergence
(133).
### 4.4 Modular completion of $\Phi_{\boldsymbol{\mu}}$
Having discussed the convergence of $\Phi_{\boldsymbol{\mu}}$, we proceed in
this section to discuss its modularity. Since $\Phi_{\boldsymbol{\mu}}$ is a
sum over a subset of an indefinite lattice, the function is not modular in the
classical sense. Our task is to determine a modular completion
$\widehat{\Phi}_{\boldsymbol{\mu}}$, which differs from
$\Phi_{\boldsymbol{\mu}}$ by subleading non-holomorphic terms, and which does
transform as a modular form. Essentially, products of sgn-functions are
replaced by a generalized error function [36]. Following this approach, we
will demonstrate that the difference between
$\widehat{\Phi}_{\boldsymbol{\mu}}$ and $\Phi_{\boldsymbol{\mu}}$ is given by
iterated integrals of modular forms.
Such non-holomorphic contributions have appeared in similar contexts. In
specific cases, the non-holomorphic contributions are derived from different
physical points of view, for example the continuum of multi-particle states in
$\mathbb{R}^{4}$ [57], or the quantum field theory on the world volume of the
D-branes [44, 47, 48], or the perspective of D3-instantons in the
hypermultiplet moduli space [18].
To this end let us consider $F_{\rm total}(a,b,c)$ in (43). Under modular
completion, one adds certain extra terms to
$\mathop{\mathrm{sgn}}(V_{1},x)\mathop{\mathrm{sgn}}(V_{2},x)$, thereby
replacing $\mathop{\mathrm{sgn}}(V_{1},x)\mathop{\mathrm{sgn}}(V_{2},x)$ with
the double error function $E_{2}(\alpha,u_{1},u_{2})$:
$\mathop{\mathrm{sgn}}(V_{1},x)\mathop{\mathrm{sgn}}(V_{2},x)\to
E_{2}(\alpha,\sqrt{2\tau_{2}}\,{\boldsymbol{u}}),$ (138)
with [36]
$E_{2}(\alpha;{\boldsymbol{u}})=\int_{\mathbb{R}^{2}}e^{-\pi(u_{1}-u_{1}^{\prime})^{2}-\pi(u_{2}-u_{2}^{\prime})^{2}}\,\mathop{\mathrm{sgn}}(u_{2}^{\prime})\,\mathop{\mathrm{sgn}}(u_{1}^{\prime}+\alpha\,u_{2}^{\prime})\,du_{1}^{\prime}\,du_{2}^{\prime},$
(139)
whose arguments are given in terms of $V_{1}$, $V_{2}$ and $x$ by
$\displaystyle\alpha=\alpha(V_{1},V_{2})=\frac{(V_{1},V_{2})}{\sqrt{V_{1}^{2}\,V_{2}^{2}-(V_{1},V_{2})^{2}}},$
(140)
$\displaystyle{\boldsymbol{u}}={\boldsymbol{u}}(V_{1},V_{2};x)=(u_{1}(V_{1},V_{2};x),u_{2}(V_{1},V_{2};x)),$
with
$\displaystyle u_{1}(V_{1},V_{2};x)=\frac{(V_{1\perp 2},x)}{\sqrt{(V_{1\perp
2},V_{1\perp 2})}}\,,$ (141) $\displaystyle
u_{2}(V_{1},V_{2};x)=\frac{(V_{2},x)}{\sqrt{(V_{2},V_{2})}}\,$ (142)
and $V_{1\perp 2}$ the component of $V_{1}$ orthogonal to $V_{2}$,
$V_{1\perp 2}=V_{1}-\frac{(V_{1},V_{2})}{(V_{2},V_{2})}\,V_{2}.$ (143)
To stress the dependence of $E_{2}$ on the vectors $V_{1}$, $V_{2}$ and $x$,
we will also use $E_{2}$ with alternative arguments,
$E_{2}(\alpha,{\boldsymbol{u}})\equiv E_{2}(V_{1},V_{2};x),$ (144)
with the identifications as in (140).
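As a concrete illustration of the map (140)–(143), the following numpy sketch computes $V_{1\perp 2}$, $\alpha$, $u_{1}$ and $u_{2}$, using an ordinary Euclidean inner product as a stand-in for the lattice pairing (the actual pairing on $\underline{\boldsymbol{\Lambda}}$ is indefinite):

```python
import numpy as np

def e2_arguments(V1, V2, x):
    """Return (alpha, u1, u2, V1perp2) of (140)-(143); Euclidean pairing."""
    ip = np.dot                  # stand-in for the (indefinite) lattice pairing
    V1perp2 = V1 - ip(V1, V2) / ip(V2, V2) * V2               # (143)
    alpha = ip(V1, V2) / np.sqrt(ip(V1, V1) * ip(V2, V2) - ip(V1, V2)**2)
    u1 = ip(V1perp2, x) / np.sqrt(ip(V1perp2, V1perp2))       # (141)
    u2 = ip(V2, x) / np.sqrt(ip(V2, V2))                      # (142)
    return alpha, u1, u2, V1perp2

V1 = np.array([2.0, 1.0, 0.5])
V2 = np.array([1.0, 3.0, -1.0])
x = np.array([0.7, -0.4, 1.2])

alpha, u1, u2, V1perp2 = e2_arguments(V1, V2, x)
assert abs(np.dot(V1perp2, V2)) < 1e-12       # V_{1 perp 2} orthogonal to V_2
assert np.sign(u1 + alpha * u2) == np.sign(np.dot(V1, x))
```

The last assertion is consistent with (148) below: $u_{1}+\alpha u_{2}$ equals $\sqrt{1+\alpha^{2}}\,u_{2}(V_{1},x)$, a positive multiple of $(V_{1},x)$.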
In addition to (139), another (equivalent) expression for $E_{2}$ is in terms
of Eichler integrals. To this end, we first define the Eichler integral
$M_{1}$,
$M_{1}(u)=\left\\{\begin{array}[]{cc}\frac{iu}{\sqrt{2\tau_{2}}}q^{\frac{u^{2}}{4\tau_{2}}}\int_{-\bar{\tau}}^{i\infty}\frac{e^{\frac{i\pi
u^{2}w}{2\tau_{2}}}}{\sqrt{-i(w+\tau)}}dw,&\qquad u\neq 0,\\\ 0,&\qquad
u=0.\end{array}\right.\\\ $ (145)
and the (iterated) Eichler integrals $M_{2}$ and $m_{2}$, for $u_{1}\neq 0$,
and $u_{2}-\alpha u_{1}\neq 0$,
$\begin{split}m_{2}(u_{1},u_{2})&=\left\\{\begin{array}[]{cc}\frac{u_{1}u_{2}}{2\tau_{2}}q^{\frac{u_{1}^{2}}{4\tau_{2}}+\frac{u_{2}^{2}}{4\tau_{2}}}\int_{-\bar{\tau}}^{i\infty}dw_{2}\int_{w_{2}}^{i\infty}dw_{1}~{}\frac{e^{\frac{\pi
iu_{1}^{2}w_{1}}{2\tau_{2}}+\frac{\pi
iu_{2}^{2}w_{2}}{2\tau_{2}}}}{\sqrt{-(w_{1}+\tau)(w_{2}+\tau)}},&\qquad
u_{1}\neq 0\\\ 0,&\qquad u_{1}=0.\end{array}\right.\,\\\
M_{2}(\alpha;u_{1},u_{2})&=\left\\{\begin{array}[]{rl}&-m_{2}(u_{1},u_{2})-m_{2}\left(\frac{u_{2}-\alpha
u_{1}}{\sqrt{1+\alpha^{2}}},\frac{u_{1}+\alpha
u_{2}}{\sqrt{1+\alpha^{2}}}\right)\,\quad u_{1}\neq 0,u_{2}-\alpha u_{1}\neq
0\,,\\\ &-m_{2}\left(\frac{u_{2}-\alpha
u_{1}}{\sqrt{1+\alpha^{2}}},\frac{u_{1}+\alpha
u_{2}}{\sqrt{1+\alpha^{2}}}\right)\,\hskip 76.82243ptu_{1}=0,u_{2}\neq 0\,,\\\
&-m_{2}(u_{1},u_{2})\,\hskip 128.0374ptu_{1}\neq 0,u_{2}-\alpha u_{1}=0\,,\\\
&\frac{2}{\pi}\arctan\alpha\,\hskip 139.4185ptu_{1}=u_{2}=0.\end{array}\
\right.\\\ \end{split}$ (146)
With ${\boldsymbol{u}}=(u_{1},u_{2})$ as before, the double error function
$E_{2}$ is then defined as a linear combination of $M_{1}$ and $M_{2}$ [36,
56]
$\begin{split}E_{2}(\alpha;{\boldsymbol{u}})&=\mathop{\mathrm{sgn}}(u_{2})\mathop{\mathrm{sgn}}(u_{1}+\alpha
u_{2})+\mathop{\mathrm{sgn}}(u_{1})M_{1}(u_{2})\\\
&\quad+\mathop{\mathrm{sgn}}(u_{2}-\alpha
u_{1})\,M_{1}\\!\left(\frac{u_{1}+\alpha
u_{2}}{\sqrt{1+\alpha^{2}}}\right)+M_{2}(\alpha;u_{1},u_{2})\,.\end{split}$
(147)
See [36] for other representations of $E_{2}$.
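As a sanity check on (145), parametrizing the contour as $w=-\bar{\tau}+2i\tau_{2}s$ with $s\in(0,\infty)$ cancels all $\tau$-dependence and leaves a real integral (our parametrization). The sketch below evaluates it with scipy and compares against the closed form $M_{1}(u)=\operatorname{erf}(\sqrt{\pi}\,u)-\operatorname{sgn}(u)$, which reproduces the known relation between $M_{1}$ and the error function:

```python
import math
from scipy.integrate import quad

def M1(u):
    """M_1(u) of (145), after the substitution w = -conj(tau) + 2i*tau_2*s
    (our parametrization), which removes all tau-dependence."""
    if u == 0.0:
        return 0.0
    integral, _ = quad(
        lambda s: math.exp(-math.pi * u**2 * s) / math.sqrt(1.0 + s),
        0.0, math.inf)
    return -u * math.exp(-math.pi * u**2) * integral

for u in (0.3, 0.7, -1.1, 2.0):
    closed = math.erf(math.sqrt(math.pi) * u) - math.copysign(1.0, u)
    assert abs(M1(u) - closed) < 1e-7, (u, M1(u), closed)
```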
Thus $E_{2}$ consists of the original
$\mathop{\mathrm{sgn}}(V_{1},x)\mathop{\mathrm{sgn}}(V_{2},x)$ plus four more
terms. Noting that
$\displaystyle{}\frac{u_{2}(V_{2},x)-\alpha
u_{1}(V_{1},V_{2},x)}{\sqrt{1+\alpha^{2}}}$
$\displaystyle=u_{1}(V_{2},V_{1},x)\,,$
$\displaystyle\frac{u_{1}(V_{1},V_{2},x)+\alpha
u_{2}(V_{2},x)}{\sqrt{1+\alpha^{2}}}$ $\displaystyle=u_{2}(V_{1},x)\,,$ (148)
we can write
$\displaystyle{}E_{2}(\alpha;{\boldsymbol{u}})$
$\displaystyle=\mathop{\mathrm{sgn}}{}(V_{1}.x)\,\mathop{\mathrm{sgn}}{}(V_{2}.x)+\mathop{\mathrm{sgn}}{}(u_{1}(V_{1},V_{2},x))\,M_{1}(u_{2}(V_{2},x))$
$\displaystyle+\mathop{\mathrm{sgn}}{}(u_{1}(V_{2},V_{1},x))\,M_{1}(u_{2}(V_{1},x))$
(149)
$\displaystyle{}-m_{2}(u_{1}(V_{1},V_{2},x),\,u_{2}(V_{2},x))-m_{2}(u_{1}(V_{2},V_{1},x),\,u_{2}(V_{1},x))\,.$
$E_{2}$ satisfies an identity similar to (39). This reads
$E_{2}(V_{1},V_{1}+V_{2};x)+E_{2}(V_{2},V_{1}+V_{2};x)-E_{2}(V_{1},V_{2};x)=1$
(150)
and is valid for any choice of the arguments such that the corresponding
$\alpha$, $u_{1}$ and $u_{2}$’s are in $\mathbb{R}$.
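The identity (150) can be checked numerically. Carrying out the $u_{1}'$-integral in (139) analytically reduces $E_{2}$ to a one-dimensional integral, $E_{2}(\alpha;u_{1},u_{2})=\int_{\mathbb{R}}e^{-\pi(y-u_{2})^{2}}\operatorname{sgn}(y)\operatorname{erf}(\sqrt{\pi}(u_{1}+\alpha y))\,dy$ (our reduction). The sketch below implements this with scipy, again with a Euclidean stand-in for the lattice pairing, and verifies (150):

```python
import math
import numpy as np
from scipy.integrate import quad

def E2_scalar(alpha, u1, u2):
    """E_2(alpha; u1, u2) of (139), with the u1'-integral done in closed form."""
    def f(y):
        return (math.exp(-math.pi * (y - u2)**2) * math.copysign(1.0, y)
                * math.erf(math.sqrt(math.pi) * (u1 + alpha * y)))
    lo, hi = min(u2 - 8.0, -1.0), max(u2 + 8.0, 1.0)
    val, _ = quad(f, lo, hi, points=[0.0], limit=200)  # split at the sgn kink
    return val

def E2(V1, V2, x):
    """E_2(V1, V2; x) of (144); Euclidean pairing as a stand-in."""
    ip = np.dot
    V1p = V1 - ip(V1, V2) / ip(V2, V2) * V2                    # (143)
    alpha = ip(V1, V2) / math.sqrt(ip(V1, V1) * ip(V2, V2) - ip(V1, V2)**2)
    u1 = ip(V1p, x) / math.sqrt(ip(V1p, V1p))                  # (141)
    u2 = ip(V2, x) / math.sqrt(ip(V2, V2))                     # (142)
    return E2_scalar(alpha, u1, u2)

V1, V2 = np.array([2.0, 1.0]), np.array([1.0, 3.0])
x = np.array([0.7, -0.4])
lhs = E2(V1, V1 + V2, x) + E2(V2, V1 + V2, x) - E2(V1, V2, x)
assert abs(lhs - 1.0) < 1e-6                                   # the identity (150)
```

For $\alpha=0$ the integral factorizes, $E_{2}(0;u_{1},u_{2})=\operatorname{erf}(\sqrt{\pi}u_{1})\operatorname{erf}(\sqrt{\pi}u_{2})$, which provides an independent check of the implementation.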
The sum at hand (43) has the form
$\displaystyle\mathop{\mathrm{sgn}}{}(C_{1}.x)\,\mathop{\mathrm{sgn}}{}(C_{2}.x)\,+\,\mathop{\mathrm{sgn}}{}(C_{2}.x)\,\mathop{\mathrm{sgn}}{}(C_{3}.x)\,+\,\mathop{\mathrm{sgn}}{}(C_{3}.x)\,\mathop{\mathrm{sgn}}{}(C_{1}.x)\,,$
(151)
with $C_{1},C_{2},C_{3}$ given in (136). The modular completion
$\widehat{\Phi}_{\boldsymbol{\mu}}$ of $\Phi_{\boldsymbol{\mu}}$ (123) follows
by adding to the coefficient $f_{C}$ (44) the following terms
$\frac{1}{4}\sum_{\ell=1,2,3}\left[E_{2}(C_{\ell},C_{\ell+1};\sqrt{2\tau_{2}}x)-\mathop{\mathrm{sgn}}{}(C_{\ell}.x)\,\mathop{\mathrm{sgn}}{}(C_{\ell+1}.x)-A_{\ell}\,\delta_{(C_{\ell}.x)}\,\delta_{(C_{\ell+1}.x)}\right].$
(152)
This essentially amounts to replacing $f_{C}$ by a linear combination of
$E_{2}$’s. Since the latter satisfies Vignéras’ equation, modular
transformation properties are ensured [55].
Our first aim is to determine the value of $A_{\ell}$ such that the completion
is subleading, i.e. it vanishes in the limit that ${\rm
Im}(\tau)=\tau_{2}\to\infty$. This follows from realizing that the difference
$E_{2}(C_{\ell},C_{\ell+1};\sqrt{2\tau_{2}}x)-\mathop{\mathrm{sgn}}{}(C_{\ell}.x)\,\mathop{\mathrm{sgn}}{}(C_{\ell+1}.x)$
vanishes in this limit except if $(C_{\ell}.x)=(C_{\ell+1}.x)=0$, when it
equals $\frac{2}{\pi}\arctan(\alpha_{\ell})$,
$\alpha_{\ell}=\alpha(C_{\ell},C_{\ell+1})$ (140). Requiring that (152)
vanish, we thus arrive at
$A_{\ell}=\frac{2}{\pi}\arctan(\alpha_{\ell}).$ (153)
Surprisingly this implies that $A_{\ell}$ can be irrational, as we will see in
the explicit case studies in Section 6. This of course obstructs an
interpretation as a “counting” function for the coefficients to which
$A_{\ell}$ contribute. On the other hand, since the function
$\Phi_{\boldsymbol{\mu}}$ is not a proper physical partition function summing
over a Hilbert space, we are not very concerned about this.
To proceed with determining the completion, we rearrange the terms and write
(152) as
$\begin{split}&\frac{1}{4}\sum_{\ell=1,2,3}\left[\mathop{\mathrm{sgn}}{}(u_{1}(C_{\ell+1},C_{\ell},x))+\mathop{\mathrm{sgn}}{}(u_{1}(C_{\ell-1},C_{\ell},x))\right]M_{1}(\sqrt{2\tau_{2}}\,u_{2}(C_{\ell},x))\\\
&-m_{2}(\sqrt{2\tau_{2}}u_{1}(C_{\ell+1},C_{\ell},x),\sqrt{2\tau_{2}}u_{2}(C_{\ell},x))-m_{2}(\sqrt{2\tau_{2}}u_{1}(C_{\ell-1},C_{\ell},x),\sqrt{2\tau_{2}}u_{2}(C_{\ell},x)).\end{split}$
(154)
The values of $A_{\ell}$ determined above ensure that this expression vanishes
in the limit $\tau_{2}\to\infty$.
Using these expressions, we thus naturally separate the holomorphic part
$\Phi_{\boldsymbol{\mu}}$ from the completion
$\widehat{\Phi}_{\boldsymbol{\mu}}$,
$\widehat{\Phi}_{{\boldsymbol{\mu}}}(\tau,\bar{\tau})=\Phi_{\boldsymbol{\mu}}(\tau)+R^{\Phi}_{\boldsymbol{\mu}}(\tau,\bar{\tau}),$
(155)
with the non-holomorphic completion $R^{\Phi}_{\boldsymbol{\mu}}$ defined by
$\begin{split}R^{\Phi}_{\boldsymbol{\mu}}(\tau,\bar{\tau})=&\sum_{{\boldsymbol{Q}}\in\underline{\boldsymbol{\Lambda}}^{*}_{\mu}}\,\,\sum_{\ell=1,2,3}\\\
&\left[\left[\mathop{\mathrm{sgn}}{}(u_{1}(C_{\ell+1},C_{\ell},x))+\mathop{\mathrm{sgn}}{}(u_{1}(C_{\ell-1},C_{\ell},x))\right]M_{1}(\sqrt{2\tau_{2}}u_{2}(C_{\ell},x))\right.\\\
&\left.-m_{2}(\sqrt{2\tau_{2}}u_{1}(C_{\ell+1},C_{\ell},x),\sqrt{2\tau_{2}}u_{2}(C_{\ell},x))\right.\\\
&\left.-m_{2}(\sqrt{2\tau_{2}}u_{1}(C_{\ell-1},C_{\ell},x),\sqrt{2\tau_{2}}u_{2}(C_{\ell},x))\right]\,\\\
&\times(-1)^{{\boldsymbol{K}}.{\boldsymbol{Q}}}q^{-{\boldsymbol{Q}}^{2}/2}.\end{split}$
(156)
Our next aim is to write the non-holomorphic part as an (iterated) integral
over modular forms. This makes the modular properties of the holomorphic
$q$-series manifest, since the modular properties of integrals of modular
forms are readily determined. Moreover, it is straightforward to determine the
non-holomorphic anomaly.
To determine this form of the non-holomorphic part, we write
$R^{\Phi}_{\boldsymbol{\mu}}$ as
$\displaystyle R^{\Phi}_{\boldsymbol{\mu}}(\tau,\bar{\tau})$
$\displaystyle=\sum_{\ell=1,2,3}\left[R_{{\boldsymbol{\mu}},1,\ell}(\tau,\bar{\tau})+R_{{\boldsymbol{\mu}},2,(\ell-1,\ell)}(\tau,\bar{\tau})+R_{{\boldsymbol{\mu}},2,(\ell+1,\ell)}(\tau,\bar{\tau})\right]$
(157)
and $R_{{\boldsymbol{\mu}},1,\ell},R_{{\boldsymbol{\mu}},2,(k,\ell)}$ are
defined as
$\displaystyle{}R_{{\boldsymbol{\mu}},1,\ell}(\tau,\bar{\tau})$
$\displaystyle=\sum_{{\boldsymbol{Q}}\in\underline{\boldsymbol{\Lambda}}^{*}_{\mu}}\,\,\left[\mathop{\mathrm{sgn}}{}(u_{1,(\ell+1,\ell)})+\mathop{\mathrm{sgn}}{}(u_{1,(\ell-1,\ell)})\right]M_{1}(\sqrt{2\tau_{2}}u_{2,\ell})(-1)^{{\boldsymbol{K}}.{\boldsymbol{Q}}}q^{-{\boldsymbol{Q}}^{2}/2}\,,$
$\displaystyle R_{{\boldsymbol{\mu}},2,(k,\ell)}(\tau,\bar{\tau})$
$\displaystyle=-\sum_{{\boldsymbol{Q}}\in\underline{\boldsymbol{\Lambda}}^{*}_{\mu}}\,\,m_{2}(\sqrt{2\tau_{2}}u_{1,(k,\ell)},\,\sqrt{2\tau_{2}}u_{2,\ell})\,(-1)^{{\boldsymbol{K}}.{\boldsymbol{Q}}}q^{-{\boldsymbol{Q}}^{2}/2}\,,$
(158)
where we have used the abbreviations
$u_{1,(k,\ell)}=u_{1}(C_{k},C_{\ell},x),\,u_{2,\ell}=u_{2}(C_{\ell},x)$.
To evaluate the sums (158), it is useful to split ${\boldsymbol{Q}}^{2}$ as
follows
$\displaystyle{\boldsymbol{Q}}^{2}$
$\displaystyle=u_{1,(k,\ell)}^{2}+u_{2,\ell}^{2}+({\boldsymbol{Q}}^{2}-u_{1,(k,\ell)}^{2}-u_{2,\ell}^{2})\,,$
(159)
The terms $u_{1,(k,\ell)}^{2}$, $u_{2,\ell}^{2}$ and
$({\boldsymbol{Q}}^{2}-u_{1,(k,\ell)}^{2}-u_{2,\ell}^{2})$ are naturally
associated with quadratic forms and lattices as described in Table 1.
term | form | dual lattice | signature
---|---|---|---
${\boldsymbol{Q}}^{2}$ | $\underline{D}$ | $\underline{{\boldsymbol{\Lambda}}}^{*}_{\boldsymbol{\mu}}$ | $(2,2b_{2}-2)$
$u_{2,\ell}^{2}$ | $\underline{D}_{\ell}=|C_{\ell}|^{2}$ | $(L_{\ell})^{*}_{\boldsymbol{\mu}}\subset\underline{{\boldsymbol{\Lambda}}}_{\boldsymbol{\mu}}^{*}$ | (1,0)
${\boldsymbol{Q}}^{2}-u_{2,\ell}^{2}$ | $\underline{D}_{\perp\ell}$ | $(L_{\ell}^{\perp})^{*}_{\boldsymbol{\mu}}\subset\underline{{\boldsymbol{\Lambda}}}_{\boldsymbol{\mu}}^{*}$ | $(1,2b_{2}-2)$
$u_{1,(k,\ell)}^{2}$ | $\underline{D}_{(k,\ell)}=|C_{k\perp\ell}|^{2}$ | $(L_{k\ell})^{*}_{\boldsymbol{\mu}}\subset(L_{\ell}^{\perp})^{*}_{\boldsymbol{\mu}}$ | (1,0)
${\boldsymbol{Q}}^{2}-u_{1,(k,\ell)}^{2}-u_{2,\ell}^{2}$ | $\underline{D}^{\perp}_{(k,\ell)}$ | $(L_{k\ell}^{\perp})^{*}_{\boldsymbol{\mu}}\subset(L_{\ell}^{\perp})^{*}_{\boldsymbol{\mu}}$ | $(0,2b_{2}-2)$
Table 1: Lattices and quadratic forms associated to the splitting in Equation
(159).
We let $L_{\ell}$ be the 1-dimensional lattice spanned by $C_{\ell}$. The
quadratic form is a number in this case, $D_{\ell}=|C_{\ell}|^{2}$. We denote
the dual lattice with quadratic form $|C_{\ell}|^{-2}$ by $(L_{\ell})^{*}$.
The projection of ${\boldsymbol{\mu}}$ to $(L_{\ell})^{*}$ is
$\mu_{\ell}=({\boldsymbol{\mu}}.C_{\ell})\,C_{\ell}\in\mathbb{Z}^{2b_{2}}$. To
express a generic vector in
$\underline{{\boldsymbol{\Lambda}}}_{\boldsymbol{\mu}}^{*}$ as an element of
$(L_{\ell})^{*}\oplus(L_{\ell}^{\perp})^{*}$, we introduce glue vectors
$\rho$. We denote by $(L_{\ell})^{*}_{\boldsymbol{\mu}}$ the set of vectors
$\mu_{\ell}\mod L_{\ell}\in(L_{\ell})^{*}$, and by
$(L_{\ell})^{*}_{{\boldsymbol{\mu}}+\rho}$ the set of vectors
$\mu_{\ell}+\rho\mod L_{\ell}\in(L_{\ell})^{*}$. We introduce similar notation
for $L_{\ell}^{\perp}$. Using this notation, the direct sum
$(L_{\ell})^{*}_{\boldsymbol{\mu}}\oplus(L_{\ell}^{\perp})^{*}_{\boldsymbol{\mu}}$
is a subset of the lattice
$\underline{{\boldsymbol{\Lambda}}}_{\boldsymbol{\mu}}^{*}$,
$(L_{\ell})^{*}_{\boldsymbol{\mu}}\oplus(L_{\ell}^{\perp})^{*}_{\boldsymbol{\mu}}\subset\underline{{\boldsymbol{\Lambda}}}_{\boldsymbol{\mu}}^{*}$.
A generic element ${\boldsymbol{k}}$ of
$\underline{{\boldsymbol{\Lambda}}}_{\boldsymbol{\mu}}^{*}$ can be written as
a sum,
${\boldsymbol{k}}={\boldsymbol{l}}_{\ell}+{\boldsymbol{l}}_{\ell}^{\perp}\in(L_{\ell})^{*}_{{\boldsymbol{\mu}},\rho}\oplus(L_{\ell}^{\perp})^{*}_{{\boldsymbol{\mu}},\rho}$
for some $\rho$, where the projection ${\boldsymbol{l}}_{\ell}$ to
$(L_{\ell}^{\perp})^{*}$ vanishes, and similarly for the projection of
${\boldsymbol{l}}_{\ell}^{\perp}$ to $(L_{\ell})^{*}$. We have a similar
decomposition of vectors in $(L_{\ell})^{\perp}$ with respect to the
decomposition $(L_{k\ell})^{*}\oplus(L_{k\ell}^{\perp})^{*}$. The
representatives of minimal length in
$(L_{\ell})^{*}\oplus(L_{\ell}^{\perp})^{*}$ appearing in such splits are
called “glue vectors”. We use glue vectors $\rho$ for the splitting
$(L_{\ell})^{*}_{{\boldsymbol{\mu}},\rho}\oplus(L_{\ell}^{\perp})^{*}_{{\boldsymbol{\mu}},\rho}$
and $\nu$ for the splitting $(L_{k\ell})^{*}\oplus(L_{k\ell}^{\perp})^{*}$. We
then have,
$\begin{split}&\underline{{\boldsymbol{\Lambda}}}_{{\boldsymbol{\mu}}}^{*}=\sum_{\rho}(L_{\ell})^{*}_{{\boldsymbol{\mu}}+\rho}\oplus(L_{\ell}^{\perp})^{*}_{{\boldsymbol{\mu}}+\rho},\\\
&(L^{\perp}_{\ell})^{*}_{\rho}=\sum_{\nu}(L_{k\ell})^{*}_{\rho+\nu}\oplus(L_{k\ell}^{\perp})^{*}_{\rho+\nu}.\end{split}$
(160)
Now let us evaluate the sums $R_{{\boldsymbol{\mu}},1,\ell}$ and
$R_{{\boldsymbol{\mu}},2,(k,\ell)}$ (158). The embedding of $L_{\ell}$ in
$\underline{{\boldsymbol{\Lambda}}}_{\boldsymbol{\mu}}\otimes\mathbb{Q}$ is
spanned by $k\,C_{\ell}$ with $k\in\mathbb{Z}+\rho$ where $\rho\in\mathbb{Q}$
is (the projection of) the glue vector. Then, $u_{2,\ell}=|C_{\ell}|k$, such
that $R_{{\boldsymbol{\mu}},1,\ell}$ reads
$\displaystyle R_{{\boldsymbol{\mu}},1,\ell}(\tau,\bar{\tau})$
$\displaystyle=\sum_{\rho}\sum_{{\boldsymbol{l}}_{\ell}^{\perp}\in(L_{\ell}^{\perp})^{*}_{{\boldsymbol{\mu}}+\rho}\atop
k\in\mathbb{Z}+{\boldsymbol{\mu}}+\rho}(\mathop{\mathrm{sgn}}(u_{1,(\ell-1,\ell)})+\mathop{\mathrm{sgn}}(u_{1,(\ell+1,\ell)}))$
(161)
$\displaystyle\qquad\times(-1)^{{\boldsymbol{K}}.{\boldsymbol{l}}_{\ell}^{\perp}+{\boldsymbol{K}}.C_{\ell}\,k}\,M_{1}\\!\left(\sqrt{2\tau_{2}}\,|C_{\ell}|\,k\right)q^{-({\boldsymbol{l}}_{\ell}^{\perp})^{2}/2-C_{\ell}^{2}\,k^{2}/2}.$
To write this more compactly, we define for a generic lattice $L$ of signature
$(1,\dim(L)-1)$ and characteristic vector $K$,
$\displaystyle\Theta_{\alpha}(\tau;L,\\{V,V^{\prime}\\})$
$\displaystyle:=\sum_{x\in
L+\alpha}\left(\mathop{\mathrm{sgn}}(V,x)-\mathop{\mathrm{sgn}}(V^{\prime},x)\right)(-1)^{K.x}q^{-x^{2}/2}\,.$
(162)
Convergence of $\Theta_{\alpha}$ requires [34]
$\displaystyle V^{2}$
$\displaystyle>0\,,\,(V^{\prime})^{2}>0\,,\,(V,V^{\prime})>0\,.$ (163)
Moreover, we define the unary theta series $\Upsilon_{\alpha}$,
$\Upsilon_{\alpha}(\tau,M,N)=\sum_{x\in\mathbb{Z}+\alpha}x\,(-1)^{Nx}\,e^{\pi
i\tau Mx^{2}}.$ (164)
For $\sigma\in\bar{\mathbb{H}}$, we introduce the period integral
$\begin{split}R_{\alpha}(\tau,\sigma;M,N)&:=i\sum_{x\in\mathbb{Z}+\alpha}(-1)^{Nx}x\int_{-\sigma}^{i\infty}dw\,\frac{e^{\pi
iwMx^{2}}}{\sqrt{-i(w+\tau)}}\,\\\
&=i\int_{-\sigma}^{i\infty}dw\,\frac{\Upsilon_{\alpha}(w,M,N)}{\sqrt{-i(w+\tau)}}\,.\end{split}$
(165)
The non-holomorphic modular completion $\widehat{\Theta}_{\alpha}$ of
$\Theta_{\alpha}$ is expressed in terms of $R_{\alpha}$ as
$\begin{split}\widehat{\Theta}_{\alpha}(\tau,\bar{\tau};L,\\{V,V^{\prime}\\})&=\Theta_{\alpha}(\tau;L,\\{V,V^{\prime}\\})\\\
&\quad+\sum_{\nu}\theta_{\alpha+\nu}(\tau;L^{\perp}_{V})\,R_{\alpha+\nu}(\tau,\bar{\tau};V^{2},K_{L}.V)\\\
&\quad-\sum_{\nu^{\prime}}\theta_{\alpha+\nu^{\prime}}(\tau;L^{\perp}_{V^{\prime}})\,R_{\alpha+\nu^{\prime}}(\tau,\bar{\tau};(V^{\prime})^{2},K_{L}.V^{\prime}),\end{split}$
(166)
with $\theta_{\alpha}$ as in (130).
Returning to $R_{{\boldsymbol{\mu}},1,\ell}$, we can now write this as
$R_{{\boldsymbol{\mu}},1,\ell}(\tau,\bar{\tau})=\sum_{\rho}\Theta_{{\boldsymbol{\mu}}+\rho}(\tau;L_{\ell}^{\perp},\\{C_{\ell-1},\,C_{\ell+1}\\})\,R_{{\boldsymbol{\mu}}+\rho}(\tau,\bar{\tau};C_{\ell}^{2},{\boldsymbol{K}}.C_{\ell}).$
(167)
The conditions for convergence of $\Theta_{{\boldsymbol{\mu}}+\rho}$ are
indeed satisfied for the vectors given in (136). We next turn to the terms
involving $m_{2}$, which are captured by $R_{{\boldsymbol{\mu}},2,(k,\ell)}$
in (157). We have
$\begin{split}R_{{\boldsymbol{\mu}},2,(k,\ell)}(\tau,\bar{\tau})&=-\sum_{{\boldsymbol{Q}}\in\underline{{\boldsymbol{\Lambda}}}_{\boldsymbol{\mu}}^{*}}m_{2}(\sqrt{2\tau_{2}}\,u_{1}(C_{k},C_{\ell},{\boldsymbol{Q}}),\sqrt{2\tau_{2}}\,u_{2}(C_{\ell},{\boldsymbol{Q}}))\,(-1)^{{\boldsymbol{K}}\cdot{\boldsymbol{Q}}}\,q^{-{\boldsymbol{Q}}^{2}/2}\\\
&=-\sum_{{\boldsymbol{Q}}\in\underline{{\boldsymbol{\Lambda}}}_{\boldsymbol{\mu}}^{*}}u_{1}(C_{k},C_{\ell},{\boldsymbol{Q}})u_{2}(C_{\ell},{\boldsymbol{Q}})\,(-1)^{{\boldsymbol{K}}\cdot{\boldsymbol{Q}}}\,q^{(u_{1}(C_{k},C_{\ell},{\boldsymbol{Q}})^{2}+u_{2}(C_{\ell},{\boldsymbol{Q}})^{2}-{\boldsymbol{Q}}^{2})/2}\\\
&\quad\times\int_{-\bar{\tau}}^{i\infty}dw_{2}\int_{w_{2}}^{i\infty}dw_{1}\frac{e^{\pi
i(w_{1}u_{1}(C_{k},C_{\ell},{\boldsymbol{Q}})^{2}+w_{2}u_{2}(C_{\ell},{\boldsymbol{Q}})^{2})}}{\sqrt{-(w_{1}+\tau)(w_{2}+\tau)}}.\end{split}$
(168)
Using the splits of the lattices and $\Upsilon_{\alpha}$ (164), we can express
this as
$\begin{split}&R_{{\boldsymbol{\mu}},2,(k,\ell)}(\tau,\bar{\tau})=-\sum_{\rho,\nu}\theta_{{\boldsymbol{\mu}}+\rho+\nu}(\tau;L_{k\ell}^{\perp})\\\
&\quad\times\,\int_{-\bar{\tau}}^{i\infty}dw_{2}\int_{w_{2}}^{i\infty}dw_{1}\frac{\Upsilon_{{\boldsymbol{\mu}}+\rho}(w_{2};C_{\ell}^{2},{\boldsymbol{K}}\cdot
C_{\ell})\,\Upsilon_{{\boldsymbol{\mu}}+\rho+\nu}(w_{1};C_{k\perp\ell}^{2},{\boldsymbol{K}}\cdot
C_{k\perp\ell})}{\sqrt{-(w_{1}+\tau)(w_{2}+\tau)}}.\end{split}$ (169)
Next, we aim to combine the sums in (156). We recognize the two $\Upsilon$’s
in the integrand in (169). The first, $\Upsilon_{{\boldsymbol{\mu}}+\rho}$,
appears in the integrand of $R_{{\boldsymbol{\mu}}+\rho}$ on the rhs of (167),
whereas the second $\Upsilon_{{\boldsymbol{\mu}}+\rho+\nu}$ matches with one
of the non-holomorphic terms on the rhs of (166). More precisely and
concisely, we arrive at
$R^{\Phi}_{\boldsymbol{\mu}}(\tau,\bar{\tau})=\sum_{\ell=1,2,3}i\int_{-\bar{\tau}}^{i\infty}dw\frac{\sum_{\rho}\widehat{\Theta}_{{\boldsymbol{\mu}}+\rho}(\tau,-w;L^{\perp}_{\ell},\\{C_{\ell-1},C_{\ell+1}\\})\,\Upsilon_{{\boldsymbol{\mu}}+\rho}(w;C_{\ell}^{2},{\boldsymbol{K}}\cdot
C_{\ell})}{\sqrt{-i(w+\tau)}}.$ (170)
We have thus succeeded in expressing $R^{\Phi}_{\boldsymbol{\mu}}$ in (155) as
an iterated integral of modular forms.
Note that for $\ell=1$, there is a symmetry exchanging $P_{1}\leftrightarrow
P_{2}$, and there are similar symmetries for $\ell=2$ and $\ell=3$. The form
(170) makes the determination of the anti-holomorphic derivative of
$\widehat{\Phi}_{\boldsymbol{\mu}}$ immediate. We have,
$\frac{\partial\widehat{\Phi}_{\boldsymbol{\mu}}(\tau,\bar{\tau})}{\partial\bar{\tau}}=\sum_{\ell=1,2,3}\sum_{\rho}\frac{\widehat{\Theta}_{{\boldsymbol{\mu}}+\rho}(\tau,\bar{\tau};L^{\perp}_{\ell},\\{C_{\ell-1},C_{\ell+1}\\})\,\Upsilon_{{\boldsymbol{\mu}}+\rho}(-\bar{\tau};C_{\ell}^{2},{\boldsymbol{K}}\cdot
C_{\ell})}{\sqrt{2\tau_{2}}}.$ (171)
### 4.5 Modular completion of $\Psi_{\boldsymbol{\mu}}$
In this subsection we treat the completion of $\Psi_{\boldsymbol{\mu}}$.
Eq. (122) related the holomorphic $q$-series $\Psi_{\boldsymbol{\mu}}(\tau)$
to the second derivative of the function
$\Psi_{\boldsymbol{\mu}}(\tau,z)$. To determine the non-holomorphic completion
$\widehat{\Psi}_{\boldsymbol{\mu}}(\tau,\bar{\tau})$, we consider the non-
holomorphic differential operator
$-\frac{1}{4\pi^{2}}\left(\frac{\partial^{2}}{\partial z^{2}}+\frac{2\pi
m}{\tau_{2}}\right),$ (172)
on the completion
$\widehat{\Psi}_{\boldsymbol{\mu}}(\tau,\bar{\tau},z,\bar{z})$ of
$\Psi_{\boldsymbol{\mu}}(\tau,z)$ (120). The differential operator maps a
Jacobi form of weight $k$ and index $m$ to a Jacobi form of weight $k+2$ and
index $m$. The second non-holomorphic term in (172) is required for modularity
but is not relevant for the holomorphic part.
To determine the completion
$\widehat{\Psi}_{\boldsymbol{\mu}}(\tau,\bar{\tau},z,\bar{z})$, we set
$z=\beta\tau+\delta$ with $\beta,\delta\in\mathbb{R}$, so that $\beta={\rm
Im}(z)/\tau_{2}$. We first note that completing the square gives for a generic
vector $V\in\underline{{\boldsymbol{\Lambda}}}$,
$y^{V.{\boldsymbol{Q}}}\,q^{-\frac{1}{2}{\boldsymbol{Q}}^{2}}=q^{\frac{\beta^{2}}{2}V^{2}-\frac{1}{2}({\boldsymbol{Q}}-\beta
V)^{2}}e^{2\pi i(V.{\boldsymbol{Q}})\delta},$ (173)
To write the modular completion, we can treat the
three permutations separately. With $C_{1}$ as in (136), and
$C^{(123)}_{\ell}$ as in (137), we find that the kernel
$(-1)^{a+b-c}F^{*}(123)\,(y^{a+b-c}+y^{-a-b+c})$ is to be completed to
$\widehat{F}^{*}(123,y)=\frac{(-1)^{a+b-c}}{4}\sum_{\pm}\left[1+\sum_{\ell=1}^{3}E_{2}(C^{(123)}_{\ell},C^{(123)}_{\ell+1};\sqrt{2\tau_{2}}\,({\boldsymbol{Q}}\mp\beta
C_{1}))\right]y^{\pm(a+b-c)}.$ (174)
Then the completion
$\widehat{\Psi}_{\boldsymbol{\mu}}(\tau,\bar{\tau},z,\bar{z})$ reads,
$\widehat{\Psi}_{\boldsymbol{\mu}}(\tau,\bar{\tau},z,\bar{z})=\sum_{{\boldsymbol{Q}}\in\underline{{\boldsymbol{\Lambda}}}_{\boldsymbol{\mu}}^{*}}[\widehat{F}^{*}(123,y)+\widehat{F}^{*}(213,y)+\widehat{F}^{*}(132,y)]\,q^{-{\boldsymbol{Q}}^{2}/2}.$
(175)
The completion $\widehat{\Psi}_{\boldsymbol{\mu}}(\tau,\bar{\tau},z,\bar{z})$
transforms as a Jacobi form of weight $b_{2}/2$ and index $m_{P}$ given by,
$m_{P}=-\frac{1}{2}C_{1}^{2}=-P_{1}P_{2}P_{3}-\frac{1}{2}\sum_{\ell=1}^{3}P_{\ell}\,(P_{\ell+1}^{2}+P_{\ell+2}^{2}).$
(176)
This is symmetric under permutations, and thus also equal to
$-C_{2}^{2}/2=-C_{3}^{2}/2$.
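The permutation symmetry of (176) is immediate to verify symbolically. In the sketch below the triple intersection numbers are modelled by products of three commuting scalars, as would be the case for a one-parameter Calabi-Yau; this modelling is an assumption made only for the check.

```python
import itertools
import sympy as sp

P = sp.symbols('P1 P2 P3')

def m_P(p1, p2, p3):
    # (176): m_P = -P1 P2 P3 - (1/2) sum_l P_l (P_{l+1}^2 + P_{l+2}^2)
    ps = (p1, p2, p3)
    return -p1 * p2 * p3 - sp.Rational(1, 2) * sum(
        ps[l] * (ps[(l + 1) % 3]**2 + ps[(l + 2) % 3]**2) for l in range(3))

ref = m_P(*P)
for perm in itertools.permutations(P):
    assert sp.simplify(m_P(*perm) - ref) == 0   # symmetric under S_3
```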
To write the modular completion, we define the function
$\begin{split}G(V_{1},V_{2},V_{3};{\boldsymbol{Q}},\tau_{2})&=-\frac{1}{4\pi^{2}}\partial_{z}^{2}\left(E_{2}(V_{1},V_{2};\sqrt{2\tau_{2}}\,({\boldsymbol{Q}}-\beta
V_{3}))\,y^{V_{3}.{\boldsymbol{Q}}}\right)|_{z=0}\\\
&=(V_{3}.{\boldsymbol{Q}})^{2}E_{2}(V_{1},V_{2};\sqrt{2\tau_{2}}\,{\boldsymbol{Q}})\\\
&\quad-\frac{1}{4\pi^{2}}\partial_{z}^{2}E_{2}(V_{1},V_{2};\sqrt{2\tau_{2}}\,({\boldsymbol{Q}}-\beta
V_{3}))|_{z=0}\\\ &\quad+\frac{1}{2\pi
i}(V_{3}.{\boldsymbol{Q}})\,\partial_{z}E_{2}(V_{1},V_{2};\sqrt{2\tau_{2}}\,({\boldsymbol{Q}}-\beta
V_{3}))|_{z=0}.\end{split}$ (177)
The limit $\tau_{2}\to\infty$ is determined by the first term,
$\lim_{\tau_{2}\to\infty}G(V_{1},V_{2},V_{3};{\boldsymbol{Q}},\tau_{2})=\left\\{\begin{array}[]{rl}(V_{3}.{\boldsymbol{Q}})^{2}\,\arctan(\alpha),&\quad{\rm
if}\,\,V_{1}.{\boldsymbol{Q}}=V_{2}.{\boldsymbol{Q}}=0,\\\
(V_{3}.{\boldsymbol{Q}})^{2}\,\mathop{\mathrm{sgn}}(V_{1}.{\boldsymbol{Q}})\,\mathop{\mathrm{sgn}}(V_{2}.{\boldsymbol{Q}}),&\quad{\rm
otherwise},\end{array}\right.$ (178)
with $\alpha$ as in (140).
To write the modular completion, we can treat the three permutations
separately. We set
$\widehat{G}^{*}(123)=\frac{(-1)^{a+b-c}}{4}\left((a+b-c)^{2}+\sum_{\ell=1}^{3}\,G(C^{(123)}_{\ell},C^{(123)}_{\ell+1},C_{1};{\boldsymbol{Q}},\tau_{2})\right),$
(179)
with $C_{1}$ as in (136), and $C^{(123)}_{\ell}$ as in (137). The other
permutations $(213)$ and $(132)$ similarly give rise to $\widehat{G}^{*}(213)$
and $\widehat{G}^{*}(132)$. We define,
$\begin{split}\widehat{g}_{C}(\\{\gamma_{j}\\};\\{c_{j}^{*}\\})&=\frac{1}{4}\left[\widehat{G}^{*}(123)+\widehat{G}^{*}(213)+\widehat{G}^{*}(132)\right].\end{split}$
(180)
The modular completion $\widehat{\Psi}_{\boldsymbol{\mu}}$ now reads
$\widehat{\Psi}_{\boldsymbol{\mu}}(\tau,\bar{\tau})=\sum_{{\boldsymbol{Q}}\in\underline{{\boldsymbol{\Lambda}}}_{\boldsymbol{\mu}}^{*}}\widehat{g}_{C}(\\{\gamma_{j}\\};\\{c_{j}^{*}\\})\,q^{-{\boldsymbol{Q}}^{2}/2}-\frac{m_{P}}{2\pi\tau_{2}}\,\widehat{\Phi}_{\boldsymbol{\mu}}(\tau,\bar{\tau}).$
(181)
where the last term is due to the $m$-dependent term in (172). This is our
final form for $\widehat{\Psi}_{{\boldsymbol{\mu}}}(\tau,\bar{\tau})$.
Our last task is to determine the constant $A$ in
$g_{C}(\\{\gamma_{j}\\};\\{c_{j}^{*}\\})$ (42). To this end, we consider the
$\tau_{2}\to\infty$ limit of
$\widehat{g}_{C}(\\{\gamma_{j}\\};\\{c_{j}^{*}\\})$ and require that it
reduces to $g_{C}(\\{\gamma_{j}\\};\\{c_{j}^{*}\\})$. If we consider the term
$F^{*}(123)\,(a+b-c)^{2}$, we see that only the completion of the term
$\mathop{\mathrm{sgn}}(a-c)\,\mathop{\mathrm{sgn}}(b-c)$ in $g_{C}$ can
contribute a non-vanishing difference. Indeed, if we include the other
permutations, the only remaining terms in the difference are the equilateral
cases with $a=b=c$,⁴
⁴We stress that the equilateral condition $a=b=c$ is the condition on
${\boldsymbol{Q}}$ to satisfy
$C_{a}.{\boldsymbol{Q}}=C_{b}.{\boldsymbol{Q}}=C_{c}.{\boldsymbol{Q}}$, and
does not imply equalities among $C_{a}$, $C_{b}$ and $C_{c}$.
$\begin{split}&\lim_{\tau_{2}\to\infty}\widehat{g}_{C}(\\{\gamma_{j}\\};\\{c_{j}^{*}\\})-g_{C}(\\{\gamma_{j}\\};\\{c_{j}^{*}\\})=\frac{(-1)^{a}}{4}\,a^{2}\,\delta_{a,b}\,\delta_{b,c}\\\
&\qquad\times\lim_{\tau_{2}\to\infty}\left(E_{2}(C_{a}-C_{c},C_{b}-C_{c};\sqrt{2\tau_{2}}\,{\boldsymbol{Q}})+E_{2}(C_{a}-C_{b},C_{c}-C_{b};\sqrt{2\tau_{2}}\,{\boldsymbol{Q}})\right.\\\
&\qquad\left.+E_{2}(C_{c}-C_{a},C_{b}-C_{a};\sqrt{2\tau_{2}}\,{\boldsymbol{Q}})-A\right).\end{split}$
(182)
Now the sum of $E_{2}$’s is precisely of the form (150), and thus equals 1.
As a result, we find that with $A=1$ the limit vanishes for any choice of
Calabi-Yau and charge configurations. This matches perfectly with the physical
expectation. Note that the individual values of $E_{2}$ are given by
arctangents and are generically irrational, but this combination adds up to 1.
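The mechanism can be made concrete: at the equilateral point each $E_{2}$ reduces to $\frac{2}{\pi}\arctan\alpha$ for one of the three vector pairs in (182), so the claim $A=1$ amounts to the three arctangents summing to $\pi/2$. The sketch below checks this for generic stand-in vectors, with a Euclidean pairing replacing the lattice one:

```python
import math
import numpy as np

def alpha(V1, V2):
    """alpha(V1, V2) of (140); Euclidean pairing for illustration."""
    ip = np.dot
    return ip(V1, V2) / math.sqrt(ip(V1, V1) * ip(V2, V2) - ip(V1, V2)**2)

# Stand-ins for C_a - C_c and C_b - C_c; the three pairs appearing in (182)
# are then (W1, W2), (W1 - W2, -W2) and (-W1, W2 - W1).
W1 = np.array([1.3, -0.2, 0.8])
W2 = np.array([0.4, 1.1, -0.6])

alphas = [alpha(W1, W2), alpha(W1 - W2, -W2), alpha(-W1, W2 - W1)]
total = sum(math.atan(a) for a in alphas)
assert abs(total - math.pi / 2) < 1e-12
# hence the sum of the three E_2's at vanishing arguments is
# (2/pi) * (pi/2) = 1, fixing A = 1
```

Geometrically, each $\arctan\alpha$ equals $\pi/2$ minus an interior angle of the triangle with vertices $0$, $W_{1}$, $W_{2}$, so the sum is $3\pi/2-\pi=\pi/2$.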
It is quite striking that the values of $E_{2}$’s precisely confirm the
physical expectation. Also in other cases [43, 30], the values of the
generalized error functions for vanishing arguments have matched the
expectations based on BPS invariants.
## 5 Relation to M5-branes and AdS3/CFT2
In this brief section, we discuss our findings from the point of view of
M-theory and the MSW CFT. We discuss how the partition functions
$\widehat{Z}_{P}^{nT}$, $n>2$, may arise from the 2-dimensional perspective.
The uplift of the D4-branes to M5-branes in M-theory is useful to understand
the modular properties of the partition functions [6, 12, 13, 14]. The spatial
dimensions of the M5-brane are $P\times S^{1}_{M}$, where $P\in
H_{4}(X,\mathbb{Z})$ is a four-cycle in the Calabi-Yau threefold and
$S^{1}_{M}$ is the M-theory circle. The D2-branes of IIA string theory are
realized as excitations of the self-dual 2-form field on the M5-brane world
volume, while the D0-branes are realized as momenta of the brane system along
$S^{1}_{M}$ with radius $R=g_{s}\,\ell_{s}=\ell_{11}^{3}/\ell_{s}^{2}$ in
terms of the string coupling $g_{s}$, string length $\ell_{s}$, and eleven
dimensional Planck length $\ell_{11}$. The world volume theory of the M5 brane
gives a low energy description of the system, provided gravitational effects
can be ignored. This can be ensured by taking the volume of the CY3 to be
large, namely $V_{X}/\ell_{11}^{6}$ large but fixed. Supersymmetry of the M5
world volume theory implies that the MSW CFT has (0,4) supersymmetry, and is
dual to the near horizon AdS3 geometry [12, 74]. Bosons of this CFT include,
moduli of the divisor $P$ inside $X$, translations along flat $\mathbb{R}^{3}$
and (anti)-chiral scalars coming from reduction of self-dual two-form field in
the M5 brane world volume. The number of fields can be determined using
geometric data of $X$, when $P$ is a very ample divisor. Consequently the
central charges can be determined. The central charge of the left-moving, non-
supersymmetric sector is
$c_{L}=P^{3}+c_{2}(X)\cdot P,$ (183)
where $c_{2}(X)$ is the second Chern class of the Calabi-Yau three-fold $X$.
Using Cardy’s formula, Reference [12] demonstrated that the microscopic
entropy is in agreement with the one-loop corrected Bekenstein-Hawking
entropy.
As for any CFT, a key feature of the MSW CFT is its modular symmetry. When the
time direction is compactified, the CFT lives on a 2-torus
$T^{2}=S^{1}_{M}\times S^{1}_{t}$. The linear fractional transformation of the
complex structure modulus $\tau$ of $T^{2}$ by an element of
$SL(2,\mathbb{Z})$ corresponds to the same torus $T^{2}$. Thus the
$SL(2,\mathbb{Z})$ symmetry, or modular symmetry, appears as a symmetry of the
CFT. This is a strong constraint on the degeneracies. Another property of the
CFT is the spectral flow of the $U(1)^{b_{2}}$ current algebra. This leads to
a symmetry of the degeneracies as a function of the charges, such that the partition
function can be decomposed as a finite sum of theta series $\times$ modular
functions, much as is the case for the attractor partition function in Section
3.2.
We proceed by considering the AdS3 dual to the MSW CFT following [27]. The
relation of $\lambda$ in (58) to the five-dimensional quantities can be
understood from the four-dimensional Newton constant. From the IIA perspective,
we have
$G_{4}=\frac{g_{s}^{2}\,\ell_{s}^{8}}{V_{X}}=\frac{g_{s}^{2}\,\ell_{s}^{2}}{\lambda^{3}},$
(184)
while in terms of the five-dimensional quantities,
$G_{4}=\frac{\ell_{5}^{3}}{R}=g_{s}^{2}\,\ell_{s}^{2}\frac{\ell_{5}^{3}}{R^{3}},$
(185)
such that one has,
$\lambda=\frac{R}{\ell_{5}}.$ (186)
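Relation (186) follows by equating the two expressions (184) and (185) for $G_{4}$; a short sympy check (the symbol names are ours):

```python
import sympy as sp

gs, ls, l5, R, lam = sp.symbols('g_s ell_s ell_5 R lambda', positive=True)

G4_IIA = gs**2 * ls**2 / lam**3        # (184)
G4_5d = gs**2 * ls**2 * l5**3 / R**3   # (185), using R = g_s ell_s

sols = sp.solve(sp.Eq(G4_IIA, G4_5d), lam)
assert len(sols) == 1                        # positivity discards complex roots
assert sp.simplify(sols[0] - R / l5) == 0    # (186): lambda = R / ell_5
```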
Starting from five-dimensional asymptotically flat geometries with appropriate
charges, one takes the decoupling limit as follows: the size of the M-theory
circle $R$ is kept fixed (in absolute units), while the Calabi-Yau volume is
kept finite in units of the five-dimensional Planck length $\ell_{5}$. In the
decoupling limit to AdS3, $\ell_{5}\rightarrow 0$, hence $\lambda\to\infty$ by
(186). Multi-centered geometries whose centers have mutual distances of the
order of $\ell_{5}^{3}\sim\lambda^{-3}$ or less survive this limit and go over
to
asymptotically AdS${}_{3}\times S^{2}$ geometries, with asymptotic moduli
fixed to their attractor values. These are the $\lambda$-core geometries
mentioned in the introduction, and include centers with non-vanishing
individual D6 brane charges, which add up to zero. On the other hand, if the
distances between the centers is larger than $\sim\ell_{5}^{3}/R^{2}$ for
$\ell_{5}\to 0$, the bound states decouple from the spectrum. As we can see
from the Denef equations (3), this is for example the case if the centers
carry a non-vanishing D4-brane charge with vanishing D6-brane charge. Then the
distances between the centers scale as $\ell_{5}$ for non-scaling bound
states, and the centers give rise to disconnected AdS3 throats in the
decoupling limit. On the other hand for scaling solutions, the distance
between the centers contains a regime where the centers can come arbitrarily
close.
As explained in Section 2, the BPS index can be determined using localization
with respect to rotation around the $z$-axis, and we can therefore concentrate
on collinear solutions to Denef’s equations [66]. These collinear solutions
admit two branches, one corresponding to centers at finite distances, and the
other to centers that are nearly coincident. The second branch, sometimes
referred to as the “deep scaling regime”, reproduces pure Higgs states and goes
over to a single throat in the $\ell_{5}\rightarrow 0$ limit [53]. This is in
accordance with the expectation that the MSW CFT captures the near-coincident
regime of scaling solutions, since these are smoothly connected to the single
center black hole. The separation between the centers at the collinear fixed
point is of order $\lambda^{-1}\sim\ell_{5}$, and these therefore decouple
from the AdS3 geometry. As a result, these do not appear to be captured by the
CFT. This leads us to speculate that the first term on the rhs of (1),
$\widehat{\mathcal{Z}}_{P}^{T}$, corresponds to the AdS3/CFT2 partition
function while the other terms on the rhs do not.
It is intriguing that the terms due to scaling solutions
$\widehat{\mathcal{Z}}_{P}^{nT}$ with $n\geq 3$ do satisfy the restrictive
modular transformations as well as the spectral flow symmetry, and it is
desirable to understand their origin. We think that these terms can appear
after turning on an irrelevant deformation in the $(0,4)$ CFT, moving away from
the conformal fixed point and reversing the attractor flow and the
$\ell_{5}\to 0$ limit. While such a deformation does not lead to a finite
change in the partition function for a $(4,4)$ CFT [54], it seems plausible to
us that this can happen with reduced supersymmetry. This deformation is
distinguished among the other deformations spanning the space of attractor
moduli, since it preserves the spectral flow symmetry, which is in general not
the case for variations of the Kähler moduli orthogonal to $P$ [25]. We leave
a further exploration of these interesting aspects for future work.
## 6 Case studies
To make a suitable choice of a Calabi-Yau threefold, we note that the lattices
involved have dimensions linear in $b_{2}$, the second Betti number. Thus the
computations are less involved for smaller $b_{2}$. However, for the simplest
case $b_{2}=1$, there are no cyclic quivers and hence no scaling solutions. (To
see this, consider a triangular quiver to start with. For $b_{2}=1$, electric
and magnetic charges are numbers and satisfy the identity
$aP_{3}+bP_{1}+cP_{2}=0$. Since $P_{i}>0$, this implies that $a,b,c$ cannot all
have the same sign, hence the quiver cannot be cyclic. This argument is easily
generalized to any cyclic quiver.) Also, the lattices concerned are positive
definite, and therefore do not give rise to mock modular forms. So we settle
for the next simplest case, $b_{2}=2$.
In order to define quadratic forms on the lattices, one needs the intersection
numbers. For a Calabi-Yau threefold with $b_{2}=2$, there are $2^{3}=8$
intersection numbers, of which only 4 are independent. For the sake of
simplicity we shall choose a Calabi-Yau threefold $X$ for which all
intersection numbers are even, such that Simplification 2 of Section 4.1
holds, and for which most of these numbers vanish. To be specific, we choose
$X$ to be a $K3$ fibration over $\mathbb{P}^{1}$. This Calabi-Yau manifold
corresponds to Polytope ID #14 in the online database [75]. $X$ can be
realized as a divisor in the ambient toric variety
$\mathbb{P}^{3}\times\mathbb{P}^{1}$, specified by the weight matrix
$W=\begin{pmatrix}0&0&0&1&1&0\\\ 1&1&1&0&0&1\end{pmatrix}\,.$ (187)
$W$ defines the toric variety through the identification on
$\mathbb{C}^{6}\setminus\\{0\\}$
$(z_{1},\dots,z_{6})\sim(\lambda^{W_{i1}}z_{1},\dots,\lambda^{W_{i6}}z_{6}),\quad\lambda\in\mathbb{C}^{\star},\quad i=1,2\,,$
(188)
where $z_{1},\dots,z_{6}$ are coordinates of $\mathbb{C}^{6}$. The
anti-canonical divisor of this toric variety defines the Calabi-Yau threefold
under consideration. It has the following Hodge numbers
$h^{1,1}=2,\qquad h^{2,1}=86\,,$ (189)
and consequently the Euler number is $\chi=2(h^{1,1}-h^{2,1})=-168$.
The intersection polynomial and the second Chern class of $X$ are given,
respectively, by
$\begin{split}\text{intersection polynomial}&=4J_{1}J_{2}^{2}+2J_{2}^{3},\,\\\
\text{second Chern class}&=8J_{1}J_{2}+6J_{2}^{2}\,,\end{split}$ (190)
where $J_{i}$’s are a basis of 2-forms on $X$ and generators of the Kähler
cone. $X$ is favorable, meaning that Kähler forms on $X$ descend from the
ambient toric variety. In our notation, the intersection numbers $d_{abc}$ and
second Chern class $c_{2,a}$ read
$\begin{split}&d_{111}=d_{112}=0,\qquad d_{122}=4,\qquad d_{222}=2,\\\
&c_{2,1}=24,\qquad c_{2,2}=44\end{split}$ (191)
and the $d_{abc}$ for other indices follow by permutation.
### 6.1 Charge configuration 1: $P_{1}=P_{2}=(0,1),\;P_{3}=(1,1)$
In this example we choose the total magnetic charge $P=(1,3)$, split into 3
centers as
$\displaystyle P_{1}=P_{2}=(0,1),\quad P_{3}=(1,1).$ (192)
We note that $P_{1,2}$ are irreducible while $P_{3}$ is reducible. Therefore
$h_{P_{1,2},\mu}$ is a weakly holomorphic modular form, while $h_{P_{3},\mu}$
is a mock modular form. The corresponding central charges of the left-moving
sector of the CFT (183) are,
$c_{L}(P_{1,2})=46,\qquad c_{L}(P_{3})=92,\qquad c_{L}(P)=318.$ (193)
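As a quick cross-check, the values $c_{L}(P_{1,2})=46$ and $c_{L}(P)=318$ can be reproduced from the intersection data (191), assuming the standard MSW formula $c_{L}=P^{3}+c_{2}\cdot P$ (all helper names below are ours):

```python
# intersection numbers d_abc of X from (191), symmetric in a, b, c
d = {(1, 1, 1): 0, (1, 1, 2): 0, (1, 2, 2): 4, (2, 2, 2): 2}

def d_abc(a, b, c):
    return d[tuple(sorted((a, b, c)))]

c2 = {1: 24, 2: 44}  # second Chern class components c_{2,a}

def c_L(P):
    """MSW left-moving central charge c_L = P^3 + c_2 . P (our assumption)."""
    P = dict(zip((1, 2), P))
    cube = sum(d_abc(a, b, c) * P[a] * P[b] * P[c]
               for a in (1, 2) for b in (1, 2) for c in (1, 2))
    return cube + sum(c2[a] * P[a] for a in (1, 2))

print(c_L((0, 1)))  # 46, matching c_L(P_1) = c_L(P_2)
print(c_L((1, 3)))  # 318, matching c_L(P)
```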
For the $j$-th center, we have for the inner product $D_{j}=d_{abc}P_{j}^{c}$,
$\displaystyle D_{1}=D_{2}=\begin{pmatrix}0&4\\\ 4&2\end{pmatrix},\quad
D_{3}=\begin{pmatrix}0&4\\\ 4&6\end{pmatrix}.$ (194)
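The integrality condition invoked next is easy to verify with exact rational arithmetic; the following sketch (helper names are ours) checks that $D_{3}^{-1}D_{1}$ is an integer matrix and computes $D_{1}D_{3}^{-1}D_{1}$:

```python
from fractions import Fraction as F

D1 = [[0, 4], [4, 2]]   # = D2, from (194)
D3 = [[0, 4], [4, 6]]

def inv2(m):
    # exact inverse of a 2x2 integer matrix
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[F(m[1][1], det), F(-m[0][1], det)],
            [F(-m[1][0], det), F(m[0][0], det)]]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

M = mul(inv2(D3), D1)  # D3^{-1} D1
assert all(x.denominator == 1 for row in M for x in row)  # integral
print([[int(x) for x in row] for row in mul(D1, M)])  # [[0, 4], [4, -2]]
```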
For this choice, $D_{3}^{-1}D_{1}=D_{3}^{-1}D_{2}$ is an integral matrix, such
that Simplification 1 of Section 4.1 is satisfied, and we can use the results
of that section. We have, $D_{1}D_{3}^{-1}D_{1}=\begin{pmatrix}0&4\\\
4&-2\end{pmatrix}$ which produces the quadratic form of
$\underline{{\boldsymbol{\Lambda}}}$ (98),
$\underline{D}=\begin{pmatrix}0&8&0&4\\\ 8&0&4&-2\\\ 0&4&0&8\\\
4&-2&8&0\end{pmatrix},\quad\det{\underline{D}}=2304.$ (195)
The quadratic form on the lattice of electric charges, $\underline{D}^{-1}$, is
given by
$\underline{D}^{-1}=\begin{pmatrix}-\frac{1}{18}&\frac{1}{6}&\frac{5}{72}&-\frac{1}{12}\\\
\frac{1}{6}&0&-\frac{1}{12}&0\\\
\frac{5}{72}&-\frac{1}{12}&-\frac{1}{18}&\frac{1}{6}\\\
-\frac{1}{12}&0&\frac{1}{6}&0\end{pmatrix}.$ (200)
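Both the determinant quoted in (195) and the inverse in (200) can be verified exactly (a self-contained sketch; the recursive determinant helper is ours):

```python
from fractions import Fraction as F

D = [[0, 8, 0, 4],
     [8, 0, 4, -2],
     [0, 4, 0, 8],
     [4, -2, 8, 0]]

def det(m):
    # cofactor expansion along the first row (fine for a 4x4 matrix)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# the claimed inverse from (200)
Dinv = [[F(-1, 18), F(1, 6),   F(5, 72),  F(-1, 12)],
        [F(1, 6),   F(0),      F(-1, 12), F(0)],
        [F(5, 72),  F(-1, 12), F(-1, 18), F(1, 6)],
        [F(-1, 12), F(0),      F(1, 6),   F(0)]]

identity = [[F(int(i == j)) for j in range(4)] for i in range(4)]
product = [[sum(F(D[i][k]) * Dinv[k][j] for k in range(4)) for j in range(4)]
           for i in range(4)]
print(det(D))               # 2304
print(product == identity)  # True
```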
Electric charge vectors $Q_{j}\in\Lambda_{j}^{*}$ have the following form
$Q_{j}=\begin{pmatrix}q_{j1}\\\ q_{j2}\end{pmatrix}.$
There are $\det(D_{1})\,\det(D_{2})=256$ conjugacy classes of the form
$q_{11}=4k_{12}+\mu_{11},\qquad q_{12}=4k_{11}+2k_{12}+\mu_{12},$ (201)
$q_{21}=4k_{22}+\mu_{21},\qquad q_{22}=4k_{21}+2k_{22}+\mu_{22},$
where $\mu_{ij}\in\\{0,1,2,3\\}$ and $k_{ij}\in\mathbb{Z}$. These conjugacy
classes have an exchange symmetry between $\mu_{1}$ and $\mu_{2}$. Since $D_{3}$
divides $D_{1}$ and $D_{2}$, $N_{q}=\det(D_{1})\,\det(D_{2})$ (100) is the
number of conjugacy classes of
$\underline{{\boldsymbol{\Lambda}}}^{*}/\underline{{\boldsymbol{\Lambda}}}$
for fixed $\mu_{1}+\mu_{2}+\mu_{3}=\mu\in\Lambda^{*}/\Lambda$, since the class
# Grothendieck classes of
quadrics and involution varieties
Gonçalo Tabuada, Mathematics Institute, Zeeman Building, University of
Warwick, Coventry CV4 7AL, UK. <EMAIL_ADDRESS>
https://homepages.warwick.ac.uk/~u1972846/
To Yuri I. Manin on the occasion of his $85^{\mathrm{th}}$ birthday, with my
deepest admiration and gratitude.
###### Abstract.
In this article, by combining the recent theory of noncommutative motives with
the classical theory of motives, we prove that if two quadrics (or, more
generally, two involution varieties) have the same Grothendieck class, then
they have the same even Clifford algebra and the same signature. As an
application, we show in numerous cases (e.g., when the base field is a local
or global field) that two quadrics (or, more generally, two involution
varieties) have the same Grothendieck class if and only if they are
isomorphic.
We adore perfection, because we cannot have it; we would loathe it if we had
it.
The perfect is inhuman, because the human is imperfect. Bernardo Soares, Livro
do Desassossego.
## 1. Introduction
Let $k$ be a field and $\mathrm{Var}(k)$ the category of varieties, i.e.,
reduced separated $k$-schemes of finite type. The Grothendieck ring of
varieties $K_{0}\mathrm{Var}(k)$, introduced in a letter from Grothendieck to
Serre (consult [6, letter of 16/08/1964]), is defined as the quotient of the
free abelian group on the set of isomorphism classes of varieties $[X]$ by the
“scissor” relations $[X]=[Z]+[X\backslash Z]$, where $Z$ is a closed
subvariety of $X$. The multiplication law is induced by the product of
varieties. Despite the efforts of several mathematicians (consult, for
example, the articles [3, 4, 18, 21, 22, 24, 25] and the references therein),
the structure of the Grothendieck ring of varieties still remains nowadays
poorly understood. In this article, in order to better understand the
structure of the Grothendieck ring of varieties, we address the following
question:
Question: For which pairs of varieties $X$ and $Y$ does the implication
$[X]=[Y]\Rightarrow X\simeq Y$ hold?
Such an implication does not hold in general, because numerous identifications
occur in the Grothendieck ring of varieties. For example, when $X$ and $Y$ are
piecewise isomorphic, we have $[X]=[Y]$; a concrete example is given by the
ordinary cusp $X:=\mathrm{Spec}(k[x,y]/\langle y^{2}-x^{3}\rangle)$ and the
affine line $Y:=\mathbb{A}^{1}$, which are not isomorphic but are piecewise
isomorphic, so that $[X]=[Y]$. Moreover, as explained in [5, Chap. 2 §6.3],
there also exist varieties which are not piecewise isomorphic but which still
have the same Grothendieck class. A concrete example, in characteristic zero,
is the following: let $V$ be a $k$-vector space of dimension $7$ and
$W\subset\bigwedge^{2}V^{\vee}$ a generic subspace of dimension $7$. By
definition, an element $w$ of $W$ corresponds to a skew-symmetric bilinear
form $\varphi_{w}$ on $V$. Under these identifications, we can consider the
varieties $X:=\\{P\subset
V\,|\,{\varphi_{w}}_{|P}=0\,\,\,\forall w\in W\\}\subset\mathrm{Gr}(2,V)$ and
$Y:=\\{w\in W\,|\,\mathrm{rank}(\varphi_{w})<6\\}\subset\mathbb{P}(W)$, where
$\mathrm{Gr}(2,V)$ stands for the Grassmannian of $2$-dimensional subspaces of
$V$. As proved by Borisov in [4], we have
$[X\times\mathbb{A}^{6}]=[Y\times\mathbb{A}^{6}]$, although
$X\times\mathbb{A}^{6}$ and $Y\times\mathbb{A}^{6}$ are not piecewise
isomorphic. In this article we show that, surprisingly, the aforementioned
implication nevertheless holds for numerous pairs of quadrics and involution
varieties!
### Statement of results - quadrics
Let $k$ be a field of characteristic zero. Given a (finite-dimensional) non-
degenerate quadratic form $q\colon V\to k$, let us write $\mathrm{dim}(q)$ for
its dimension, $\delta(q)\in k^{\times}/(k^{\times})^{2}$ for its discriminant
(when $\mathrm{dim}(q)$ is even), $C_{0}(q)$ for its even Clifford algebra,
and $Q_{q}\subset\mathbb{P}(V)$ for the associated quadric; consult [17, §V].
Recall from [17, §V Thm. 2.4] that when $\mathrm{dim}(q)$ is odd, $C_{0}(q)$
is a central simple $k$-algebra; that when $\mathrm{dim}(q)$ is even and
$\delta(q)\notin(k^{\times})^{2}$, $C_{0}(q)$ is a central simple algebra over
its center $k(\sqrt{\delta(q)})$; and that when $\mathrm{dim}(q)$ is even and
$\delta(q)\in(k^{\times})^{2}$, $C_{0}(q)\simeq C_{0}(q)^{+}\times
C_{0}(q)^{-}$ is a product of two isomorphic central simple $k$-algebras.
Recall also that $\mathrm{dim}_{k}(C_{0}(q))=2^{\mathrm{dim}(q)-1}$ and
$\mathrm{dim}(Q_{q})=\mathrm{dim}(q)-2$. Finally, when $k$ is formally-real,
we will write $\mathrm{sgn}_{P}(q)\in\mathbb{Z}$ for the signature of $q$ with
respect to an ordering $P$ of $k$; consult [17, §VIII].
###### Theorem 1.1.
Let $q$ and $q^{\prime}$ be two non-degenerate quadratic forms. If
$[Q_{q}]=[Q_{q^{\prime}}]$, then the following holds:
* (i)
We have $\mathrm{dim}(q)=\mathrm{dim}(q^{\prime})$.
* (ii)
We have
$(\delta(q)\in(k^{\times})^{2})\Leftrightarrow(\delta(q^{\prime})\in(k^{\times})^{2})$.
* (iii)
We have $C_{0}(q)\simeq C_{0}(q^{\prime})$.
* (iv)
When $k$ is formally-real, we have
$|\mathrm{sgn}_{P}(q)|=|\mathrm{sgn}_{P}(q^{\prime})|$ for every ordering $P$
of $k$.
###### Remark 1.2.
Note that if $C_{0}(q)\simeq C_{0}(q^{\prime})$, then
$\delta(q)=\delta(q^{\prime})$.
Intuitively speaking, Theorem 1.1 shows that the dimension, the discriminant,
the even Clifford algebra, and the absolute value of the signature of a
quadratic form are preserved by the “scissor” relations. Among other
ingredients, the proof of items (ii)-(iii), resp. of item (iv), makes use of
the recent theory of noncommutative motives, resp. of the classical theory of
motives (the first article on the theory of motives was written by Yuri I.
Manin [23] in the sixties); consult §2.2-§2.6 below. Theorem 1.1 enables the
following applications:
###### Theorem 1.3.
Let $q$ and $q^{\prime}$ be two non-degenerate quadratic forms. If
$[Q_{q}]=[Q_{q^{\prime}}]$, then the following holds:
* (i)
When $\mathrm{dim}(q)\leq 4$ (or, equivalently, $\mathrm{dim}(q^{\prime})\leq
4$), we have $Q_{q}\simeq Q_{q^{\prime}}$.
* (ii)
When $I(k)^{3}$ is torsion-free, where $I(k)$ stands for the fundamental ideal
of the Witt ring of quadratic forms $W(k)$, we have $Q_{q}\simeq
Q_{q^{\prime}}$. In the case where $k$ is formally-real and $\mathrm{dim}(q)$
is even (or, equivalently, $\mathrm{dim}(q^{\prime})$ is even), we assume
moreover that the Hasse-number $\tilde{u}(k)$ of $k$ is finite.
###### Remark 1.4.
* (a)
When $k$ is not formally-real, the Witt ring $W(k)$ is torsion; consult [7,
§5]. Consequently, $I(k)^{3}$ is torsion-free if and only if $I(k)^{3}=0$.
* (b)
All the assumptions of Theorem 1.3 hold when $k$ is a local or global field,
or a field of transcendence degree $\leq 1$ over a real-closed field or over
an algebraically closed field; consult [7, §V-§VI].
Item (i) of Theorem 1.3 shows that, in dimensions $\leq 2$, two quadrics have
the same Grothendieck class if and only if they are isomorphic! In other
words, we have the following inclusion:
(1.5) $\frac{\\{\text{Quadrics}\,\,Q_{q}\,|\,\mathrm{dim}(Q_{q})\leq
2\\}}{\text{isomorphism}}\subset K_{0}\mathrm{Var}(k)\,.$
###### Remark 1.6 (Quaternion algebras).
The assignment $(a,b)\mapsto
C(a,b):=(-ax^{2}-by^{2}+abu^{2}=0)\subset\mathbb{P}^{2}$ induces a bijection
between the set of quaternion $k$-algebras up to isomorphism and the set of
quadrics in $\mathbb{P}^{2}$ up to isomorphism (a.k.a. conics). In the same
vein, the assignment
$(a,b)\mapsto(x^{2}-ay^{2}-bu^{2}+(ab)w^{2}=0)\subset\mathbb{P}^{3}$ induces a
bijection between the set of quaternion $k$-algebras up to isomorphism and the
set of quadrics in $\mathbb{P}^{3}$ with trivial discriminant up to
isomorphism.
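The second quadric in Remark 1.6 indeed has trivial discriminant: its Gram matrix is $\mathrm{diag}(1,-a,-b,ab)$, with determinant $(ab)^{2}$, a square. A small numerical illustration (the sample values and helper name are ours):

```python
from math import isqrt

def disc_det(a, b):
    """Determinant of the Gram matrix diag(1, -a, -b, ab)."""
    return 1 * (-a) * (-b) * (a * b)

# (ab)^2 is a perfect square, so the class in k^x/(k^x)^2 is trivial
for a in (1, 2, 3, 5):
    for b in (1, 2, 7):
        d = disc_det(a, b)
        assert d == (a * b) ** 2 and isqrt(d) ** 2 == d
```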
###### Example 1.7 (Quadrics of dimension $\leq 2$ over $\mathbb{R}$).
When $k=\mathbb{R}$, there are two quadrics in $\mathbb{P}^{2}$ up to
isomorphism, namely $C(1,1)\simeq\mathbb{P}^{1}$ and $C({\bf H})$, where ${\bf
H}:=(-1,-1)$ stands for Hamilton’s $\mathbb{R}$-algebra of quaternions. In
$\mathbb{P}^{3}$ there are three quadrics up to isomorphism, namely
$(x^{2}-y^{2}-u^{2}+w^{2}=0)\subset\mathbb{P}^{3}$,
$(x^{2}+y^{2}+u^{2}+w^{2}=0)\subset\mathbb{P}^{3}$ and
$(x^{2}+y^{2}+u^{2}-w^{2}=0)\subset\mathbb{P}^{3}$. Making use of the above
inclusion (1.5), we hence conclude that the Grothendieck classes of these
quadrics remain distinct in $K_{0}\mathrm{Var}(\mathbb{R})$. Note that since
$\mathbb{R}^{\times}/(\mathbb{R}^{\times})^{2}\simeq\\{\pm 1\\}$, the
discriminant of the quadric $(x^{2}+y^{2}+u^{2}-w^{2}=0)\subset\mathbb{P}^{3}$
is non-trivial.
###### Example 1.8 (Quadrics of dimension $\leq 2$ over $\mathbb{Q}_{p}$).
When $k=\mathbb{Q}_{p}$, with $p\neq 2$, there are two quadrics in
$\mathbb{P}^{2}$ up to isomorphism, namely $C(1,1)\simeq\mathbb{P}^{1}$ and
$C(\epsilon,p)$, where $\epsilon$ is a(ny) unit of $\mathbb{Q}_{p}^{\times}$
such that $\overline{\epsilon}$ is not a square in
$(\mathbb{Z}/p\mathbb{Z})^{\times}$. In $\mathbb{P}^{3}$ there are six
quadrics up to isomorphism:
(1.9) $\displaystyle(x^{2}-y^{2}-u^{2}+w^{2}=0)\subset\mathbb{P}^{3}$
$\displaystyle(x^{2}-\epsilon y^{2}-pu^{2}+(\epsilon
p)w^{2}=0)\subset\mathbb{P}^{3}$ (1.10)
$\displaystyle(x^{2}-y^{2}-u^{2}+\epsilon w^{2}=0)\subset\mathbb{P}^{3}$
$\displaystyle(x^{2}-y^{2}-u^{2}+pw^{2}=0)\subset\mathbb{P}^{3}$ (1.11)
$\displaystyle(x^{2}-y^{2}-u^{2}+(\epsilon p)w^{2}=0)\subset\mathbb{P}^{3}$
$\displaystyle(x^{2}+y^{2}-\epsilon u^{2}-pw^{2}=0)\subset\mathbb{P}^{3}\,.$
Making use of the above inclusion (1.5), we hence conclude that the
Grothendieck classes of these quadrics remain distinct in
$K_{0}\mathrm{Var}(\mathbb{Q}_{p})$. Note that since
$\mathbb{Q}_{p}^{\times}/(\mathbb{Q}_{p}^{\times})^{2}=\\{1,\epsilon,p,\epsilon
p\\}$, the discriminant of the quadrics (1.10)-(1.11) is non-trivial.
###### Example 1.12 (Quadrics of dimension $\leq 2$ over $\mathbb{Q}$).
When $k=\mathbb{Q}$, there are infinitely many quadrics in $\mathbb{P}^{2}$ up
to isomorphism. More specifically, following Remark 1.6, there is a bijection
between the set of quaternion $\mathbb{Q}$-algebras up to isomorphism and the
set of those positive integers which are not squares; consult [28, §III-IV].
Under such a bijection, the prime numbers $p$ which are congruent to $3$ modulo
$4$ correspond to the quaternion algebras $(-1,-p)$. Consequently, we have,
for example, the infinite family of non-isomorphic quadrics
$\\{C(-1,-p)\\}_{p\equiv 3(\mathrm{mod}\,4)}$. In the same vein, we have, for
example, the infinite family of non-isomorphic quadrics with trivial
discriminant
$\\{(x^{2}+y^{2}+pu^{2}+pw^{2}=0)\subset\mathbb{P}^{3}\\}_{p\equiv
3(\mathrm{mod}\,4)}$. Note that we also have, for example, the infinite family
of non-isomorphic quadrics with non-trivial discriminant
$\\{(x^{2}+y^{2}+u^{2}+pw^{2}=0)\subset\mathbb{P}^{3}\\}_{p}$ parametrized by
all prime numbers $p$. Making use of the above inclusion (1.5), we hence
conclude that the Grothendieck classes of all the aforementioned quadrics
remain distinct in $K_{0}\mathrm{Var}(\mathbb{Q})$.
Item (ii) of Theorem 1.3 shows that when $I(k)^{3}$ is torsion-free (and
$\tilde{u}(k)<\infty$), two quadrics have the same Grothendieck class if and
only if they are isomorphic! Consequently, the above inclusion (1.5) admits
the following extension to all dimensions:
(1.13)
$\displaystyle\frac{\\{\text{Quadrics}\,\,Q_{q}\\}}{\text{isomorphism}}\subset
K_{0}\mathrm{Var}(k)$
$\displaystyle\mathrm{with}\,\,I(k)^{3}\,\,\mathrm{torsion}\text{-}\mathrm{free}\,\,\,(\mathrm{and}\,\,\tilde{u}(k)<\infty)\,.$
###### Example 1.14 (Quadrics over $\mathbb{R}$).
When $k=\mathbb{R}$, there are $\lfloor\frac{n+3}{2}\rfloor$ quadrics in
$\mathbb{P}^{n}$ up to isomorphism, namely the following family
$\\{(x^{2}_{1}+\cdots+x^{2}_{i}-x^{2}_{i+1}-\cdots-x^{2}_{n+1}=0)\subset\mathbb{P}^{n}\\}_{i}$,
with $\frac{n+1}{2}\leq i\leq n+1$ when $n$ is odd, and with
$\lfloor\frac{n+3}{2}\rfloor\leq i\leq n+1$ when $n$ is even. Making use of
the above inclusion (1.13), we hence conclude that the Grothendieck classes of
these quadrics remain distinct in $K_{0}\mathrm{Var}(\mathbb{R})$.
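The count $\lfloor\frac{n+3}{2}\rfloor$ in Example 1.14 can be checked by enumerating signatures: a non-degenerate real form in $n+1$ variables is determined up to similarity by its number $i$ of positive squares, with $i$ identified with $(n+1)-i$. A sketch (the function name is ours):

```python
def real_quadrics(n):
    """Quadrics in P^n over R up to isomorphism: non-degenerate forms in
    n + 1 variables, classified by i positive squares, with i ~ (n+1) - i."""
    nvars = n + 1
    return len({frozenset({i, nvars - i}) for i in range(nvars + 1)})

for n in range(1, 10):
    assert real_quadrics(n) == (n + 3) // 2
print(real_quadrics(2), real_quadrics(3))  # 2 3
```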
###### Example 1.15 (Quadrics over $\mathbb{Q}_{p}$).
When $k=\mathbb{Q}_{p}$, with $p\neq 2$, there are finitely many quadrics in
$\mathbb{P}^{n}$ up to isomorphism. More specifically, since the $u$-invariant
$u(\mathbb{Q}_{p})$ of $\mathbb{Q}_{p}$ is equal to $4$ (consult [7, §VI]),
there are two, resp. six, quadrics in $\mathbb{P}^{n}$ up to isomorphism when
$n$ is even, resp. odd. The corresponding quadratic forms (well-defined up to
similarity) are obtained by taking the orthogonal sum of the corresponding
quadratic forms of Example 1.8 with a finite number of copies of the
hyperbolic plane. Making use of the inclusion (1.13), we hence conclude that
the Grothendieck classes of these quadrics remain distinct in
$K_{0}\mathrm{Var}(\mathbb{Q}_{p})$.
###### Example 1.16 (Quadrics over $\mathbb{Q}$).
When $k=\mathbb{Q}$, there are infinitely many quadrics in $\mathbb{P}^{n}$ up
to isomorphism. Note that the assignment
$m\mapsto(\mathrm{sign}(m),\\{r_{p}\\}_{p})$, where
$m=\pm\prod_{p\,\mathrm{prime}}p^{r_{p}}$ is the prime factorization, gives
rise to a group isomorphism
$\mathbb{Q}^{\times}/(\mathbb{Q}^{\times})^{2}\simeq(\\{\pm
1\\}\times\oplus_{p}\mathbb{Z})/(\\{1\\}\times\oplus_{p}2\mathbb{Z})$.
Consequently, when $n$ is odd, we have, for example, the following infinite
family of non-isomorphic quadrics $\\{(x^{2}_{1}+\cdots+x^{2}_{n}+\lambda
x^{2}_{n+1}=0)\subset\mathbb{P}^{n}\\}_{\lambda}$ parametrized by the square
classes $\lambda\in\mathbb{Q}^{\times}/(\mathbb{Q}^{\times})^{2}$. Making use
of the above inclusion (1.13), we hence conclude that the Grothendieck classes
of all these quadrics remain distinct in $K_{0}\mathrm{Var}(\mathbb{Q})$.
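The group isomorphism used in Example 1.16 can be made concrete: the square class of $m\in\mathbb{Q}^{\times}$ is determined by its sign together with the parities of the exponents in its prime factorization. A sketch (the function name is ours):

```python
def square_class(m):
    """Class of a nonzero integer m in Q^x/(Q^x)^2: the sign together with
    the set of primes appearing with odd exponent."""
    sign = 1 if m > 0 else -1
    m = abs(m)
    parities = {}
    p = 2
    while p * p <= m:
        while m % p == 0:
            parities[p] = parities.get(p, 0) ^ 1
            m //= p
        p += 1
    if m > 1:
        parities[m] = parities.get(m, 0) ^ 1
    return sign, frozenset(q for q, e in parities.items() if e)

assert square_class(18) == square_class(2)    # 18 = 2 * 3^2
assert square_class(-12) == square_class(-3)  # -12 = -3 * 2^2
assert square_class(60) != square_class(5)    # 60 = 2^2 * 3 * 5
```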
### Statement of results - involution varieties
Let $k$ be a field of characteristic zero. Given a central simple $k$-algebra
$A$, let us write $\mathrm{deg}(A)$ for its degree and $\mathrm{SB}(A)$ for
the associated Severi-Brauer variety. In the same vein, given a central simple
$k$-algebra with involution of orthogonal type $(A,\ast)$, let us write
$\delta(A,\ast)\in k^{\times}/(k^{\times})^{2}$ for its discriminant (when
$\mathrm{deg}(A)$ is even), $C_{0}(A,\ast)$ for its even Clifford $k$-algebra,
and $\mathrm{Iv}(A,\ast)\subset\mathrm{SB}(A)$ for the associated involution
variety; consult [16, §II] [35]. Recall from [16, Thm. 8.10] that when
$\mathrm{deg}(A)$ is odd, $C_{0}(A,\ast)$ is a central simple $k$-algebra;
that when $\mathrm{deg}(A)$ is even and
$\delta(A,\ast)\notin(k^{\times})^{2}$, $C_{0}(A,\ast)$ is a central simple
algebra over its center $k(\sqrt{\delta(A,\ast)})$; and that when
$\mathrm{deg}(A)$ is even and $\delta(A,\ast)\in(k^{\times})^{2}$,
$C_{0}(A,\ast)\simeq C_{0}(A,\ast)^{+}\times C_{0}(A,\ast)^{-}$ is a product
of two central simple $k$-algebras. Recall also that
$\mathrm{dim}_{k}(C_{0}(A,\ast))=2^{\mathrm{deg}(A)-1}$ and
$\mathrm{dim}(\mathrm{Iv}(A,\ast))=\mathrm{deg}(A)-2$. Finally, when $k$ is
formally-real, we will write $\mathrm{sgn}_{P}(A,\ast)\in\mathbb{N}$ for the
signature of $(A,\ast)$ with respect to an ordering $P$ of $k$; consult [16,
§11].
###### Example 1.17 (Quadrics).
In the particular case where $A$ is split, the central simple $k$-algebra with
involution of orthogonal type $(A,\ast)$ becomes isomorphic to
$(\mathrm{M}_{\mathrm{deg}(A)\times\mathrm{deg}(A)}(k),\ast_{q})$, where
$\ast_{q}$ is the adjoint involution of a uniquely determined quadratic form
$q$ (up to similarity). Hence, in this particular case, the involution variety
$\mathrm{Iv}(A,\ast)$ reduces to the quadric $Q_{q}$. Moreover,
$\mathrm{deg}(A)=\mathrm{dim}(q)$, $\delta(A,\ast)=\delta(q)$ and
$C_{0}(A,\ast)=C_{0}(q)$. Furthermore, when $k$ is formally-real,
$\mathrm{sgn}_{P}(A,\ast)$ reduces to $|\mathrm{sgn}_{P}(q)|$.
###### Example 1.18 (Odd dimensional involution varieties).
Given a central simple $k$-algebra $A$ of odd degree, it is well-known that
$A$ admits an involution of orthogonal type if and only if $A$ is split.
Thanks to Example 1.17, this shows that the odd dimensional involution
varieties are the odd dimensional quadrics.
###### Example 1.19 (Forms of quadrics).
Following Example 1.17, note that every involution variety
$\mathrm{Iv}(A,\ast)$ becomes isomorphic to a quadric after extension of
scalars to a splitting field of $A$. Hence, involution varieties may be
understood as “forms of quadrics”. Moreover, as explained in [35, §2], the
involution varieties admit the following characterization: a smooth projective
$k$-variety $X$ of dimension $n-2$ is an involution variety if and only if
$X\times_{k}\overline{k}$ is isomorphic to the (unique) quadric
$(x_{1}^{2}+\cdots+x^{2}_{n}=0)\subset\mathbb{P}_{\overline{k}}^{n-1}$.
Furthermore, two involution varieties $\mathrm{Iv}(A,\ast)$ and
$\mathrm{Iv}(A^{\prime},\ast^{\prime})$ are isomorphic if and only if the
central simple $k$-algebras with involution of orthogonal type $(A,\ast)$ and
$(A^{\prime},\ast^{\prime})$ are isomorphic.
###### Example 1.20 (Products of two conics).
When $\mathrm{deg}(A)=4$ and $\delta(A,\ast)\in(k^{\times})^{2}$, we have an
isomorphism $(A,\ast)\simeq(Q_{1},\ast_{1})\otimes(Q_{2},\ast_{2})$, where
$Q_{1}=(a_{1},b_{1})$ and $Q_{2}=(a_{2},b_{2})$ are uniquely determined
quaternion $k$-algebras (up to isomorphism) and $\ast_{1}$ and $\ast_{2}$ are
the conjugation involutions of $Q_{1}$ and $Q_{2}$, respectively. Following
[35, Thm. 4.15], this leads to an isomorphism $\mathrm{Iv}(A,\ast)\simeq
C(a_{1},b_{1})\times C(a_{2},b_{2})$. This shows that the involution varieties
of dimension $2$ with trivial discriminant are the products of two conics.
Note that in the particular case where $A$ is split, we have
$Q_{1}=Q_{2}=(a,b)$. Consequently, we conclude from Remark 1.6 that
$(x^{2}-ay^{2}-bu^{2}+(ab)w^{2}=0)\simeq C(a,b)\times C(a,b)$ for every
quaternion $k$-algebra $(a,b)$.
Example 1.20 motivates the following notation:
###### Notation 1.21 (Condition $(\star)$).
Let $(A,\ast)$ be a central simple $k$-algebra with involution of orthogonal
type, with $\mathrm{deg}(A)=4$ and $\delta(A,\ast)\in(k^{\times})^{2}$. We
will say that $(A,\ast)$ satisfies condition $(\star)$ if $A$ is split.
###### Theorem 1.22.
Let $(A,\ast)$ and $(A^{\prime},\ast^{\prime})$ be two central simple
$k$-algebras with involutions of orthogonal type. If
$[\mathrm{Iv}(A,\ast)]=[\mathrm{Iv}(A^{\prime},\ast^{\prime})]$, then the
following holds:
* (i)
We have $\mathrm{deg}(A)=\mathrm{deg}(A^{\prime})$.
* (ii)
We have
$(\delta(A,\ast)\in(k^{\times})^{2})\Leftrightarrow(\delta(A^{\prime},\ast^{\prime})\in(k^{\times})^{2})$.
* (iii)
We have $C_{0}(A,\ast)\simeq C_{0}(A^{\prime},\ast^{\prime})$.
* (iv)
We have $A\simeq A^{\prime}$.
* (v)
When $k$ is formally-real, we have
$\mathrm{sgn}_{P}(A,\ast)=\mathrm{sgn}_{P}(A^{\prime},\ast^{\prime})$ for
every ordering $P$ of $k$.
In the particular case where $\mathrm{deg}(A)=4$ and
$\delta(A,\ast)\in(k^{\times})^{2}$ (or, equivalently,
$\mathrm{deg}(A^{\prime})=4$ and
$\delta(A^{\prime},\ast^{\prime})\in(k^{\times})^{2}$), we assume moreover in
items (iii)-(v) that $(A,\ast)$ and $(A^{\prime},\ast^{\prime})$ satisfy
condition $(\star)$.
###### Remark 1.23.
Note that if $C_{0}(A,\ast)\simeq C_{0}(A^{\prime},\ast^{\prime})$, then
$\delta(A,\ast)=\delta(A^{\prime},\ast^{\prime})$.
###### Remark 1.24 (Severi-Brauer varieties).
It is well-known that two central simple $k$-algebras $A$ and $A^{\prime}$ are
isomorphic if and only if the associated Severi-Brauer varieties
$\mathrm{SB}(A)$ and $\mathrm{SB}(A^{\prime})$ are isomorphic. Consequently,
item (iv) of Theorem 1.22 shows that if two involution varieties
$\mathrm{Iv}(A,\ast)\subset\mathrm{SB}(A)$ and
$\mathrm{Iv}(A^{\prime},\ast^{\prime})\subset\mathrm{SB}(A^{\prime})$ have the
same Grothendieck class, then the corresponding “ambient” Severi-Brauer
varieties $\mathrm{SB}(A)$ and $\mathrm{SB}(A^{\prime})$ are necessarily
isomorphic!
Theorem 1.22 enables the following applications:
###### Theorem 1.25.
Let $(A,\ast)$ and $(A^{\prime},\ast^{\prime})$ be two central simple
$k$-algebras with involutions of orthogonal type. If
$[\mathrm{Iv}(A,\ast)]=[\mathrm{Iv}(A^{\prime},\ast^{\prime})]$, then the
following holds:
* (i)
When $\mathrm{deg}(A)\leq 4$ (or, equivalently, $\mathrm{deg}(A^{\prime})\leq
4$), we have $\mathrm{Iv}(A,\ast)\simeq\mathrm{Iv}(A^{\prime},\ast^{\prime})$.
* (ii)
When $I(k)^{3}$ is torsion-free, we have
$\mathrm{Iv}(A,\ast)\simeq\mathrm{Iv}(A^{\prime},\ast^{\prime})$. In the case
where $k$ is formally-real and $\mathrm{deg}(A)$ is even (or, equivalently,
$\mathrm{deg}(A^{\prime})$ is even), we assume moreover that
$\tilde{u}(k)<\infty$.
In the particular case where $\mathrm{deg}(A)=4$ and
$\delta(A,\ast)\in(k^{\times})^{2}$ (or, equivalently,
$\mathrm{deg}(A^{\prime})=4$ and
$\delta(A^{\prime},\ast^{\prime})\in(k^{\times})^{2}$), we assume moreover in
items (i)-(ii) that $(A,\ast)$ and $(A^{\prime},\ast^{\prime})$ satisfy
condition $(\star)$.
Note that Theorems 1.22 and 1.25 are far reaching generalizations of Theorems
1.1 and 1.3, respectively. Consequently, the above inclusions (1.5) and (1.13)
admit extensions to the realm of involution varieties. In particular, the
above Example 1.8 admits the following extension:
###### Example 1.26 (Involution varieties of dimension $\leq 2$ over
$\mathbb{Q}_{p}$).
As mentioned in Example 1.18, there is no difference between involution
varieties of dimension $1$ and quadrics of dimension $1$. Hence, we address
solely the $2$-dimensional case. First, recall from [27, §XII-XIII] that given
any local field $l$, there is a unique division quaternion $l$-algebra ${\bf
D}$ up to isomorphism; when $l=\mathbb{Q}_{p}$, we have ${\bf
D}\simeq(\epsilon,p)$. As explained in [16, §15.B], the assignment
$(A,\ast)\mapsto C_{0}(A,\ast)$ gives rise to a bijection between the set of
central simple $\mathbb{Q}_{p}$-algebras of degree $4$ with involution of
orthogonal type up to isomorphism and the set of quaternion algebras over some
étale quadratic extension of $\mathbb{Q}_{p}$ up to isomorphism. The inverse
bijection is induced by the assignment
$Q\mapsto(N_{l/\mathbb{Q}_{p}}(Q),N_{l/\mathbb{Q}_{p}}(\ast))$, where $l$ is
the center of $Q$, $\ast$ is the conjugation involution of $Q$, and
$N_{l/k}(-)$ is the norm construction. Consequently, since
$\mathbb{Q}^{\times}_{p}/(\mathbb{Q}_{p}^{\times})^{2}=\\{1,\epsilon,p,\epsilon
p\\}$, there are nine involution varieties of dimension $2$ over
$\mathbb{Q}_{p}$ up to isomorphism. Note that under the assignment
$\mathrm{Iv}(A,\ast)\mapsto(A,\ast)\mapsto C_{0}(A,\ast)$, the involution
varieties (1.9)-(1.11) correspond to the following quaternion algebras:
$\displaystyle(1,1)\times(1,1)\,\,\mathrm{over}\,\,\mathbb{Q}_{p}\times\mathbb{Q}_{p}$
$\displaystyle(\epsilon,p)\times(\epsilon,p)\,\,\mathrm{over}\,\,\mathbb{Q}_{p}\times\mathbb{Q}_{p}$
$\displaystyle(1,1)\,\,\mathrm{over}\,\,\mathbb{Q}_{p}(\sqrt{\epsilon})$
$\displaystyle(1,1)\,\,\mathrm{over}\,\,\mathbb{Q}_{p}(\sqrt{p})$
$\displaystyle(1,1)\,\,\mathrm{over}\,\,\mathbb{Q}_{p}(\sqrt{\epsilon p})$
$\displaystyle{\bf D}\,\,\mathrm{over}\,\,\mathbb{Q}_{p}(\sqrt{\epsilon
p})\,.$
Therefore, in addition to (1.9)-(1.11), we can also consider the following two
involution varieties:
(1.27)
$\displaystyle\mathrm{Iv}(N_{\mathbb{Q}_{p}(\sqrt{\epsilon})/\mathbb{Q}_{p}}({\bf
D}),N_{\mathbb{Q}_{p}(\sqrt{\epsilon})/\mathbb{Q}_{p}}(\ast))$
$\displaystyle\mathrm{Iv}(N_{\mathbb{Q}_{p}(\sqrt{p})/\mathbb{Q}_{p}}({\bf
D}),N_{\mathbb{Q}_{p}(\sqrt{p})/\mathbb{Q}_{p}}(\ast))\,.$
Making use of item (i) of Theorem 1.25, we hence conclude that the
Grothendieck classes of the eight involution varieties, namely (1.9)-(1.11)
and (1.27), remain distinct in $K_{0}\mathrm{Var}(\mathbb{Q}_{p})$.
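The count of nine in Example 1.26 can be reproduced from the bijection $(A,\ast)\mapsto C_{0}(A,\ast)$: over the split étale algebra $\mathbb{Q}_{p}\times\mathbb{Q}_{p}$ one counts unordered pairs drawn from the two quaternion $\mathbb{Q}_{p}$-algebras, and over each of the three quadratic field extensions one counts two quaternion algebras. A counting sketch under these assumptions:

```python
from itertools import combinations_with_replacement

# the two quaternion algebras over a p-adic field: split and the division one
quaternions = ("split", "D")
quadratic_fields = 3  # Q_p(sqrt(eps)), Q_p(sqrt(p)), Q_p(sqrt(eps*p))

# center Q_p x Q_p: unordered pairs, since an isomorphism may swap the factors
split_center = len(list(combinations_with_replacement(quaternions, 2)))
field_center = quadratic_fields * len(quaternions)
print(split_center, field_center, split_center + field_center)  # 3 6 9
```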
The following result of Kollár [12] (consult also the subsequent work [8])
explains why condition $(\star)$ is needed in Theorems 1.22 and 1.25:
###### Theorem 1.28 (Products of two conics).
Let $Q_{1}$ and $Q_{2}$, resp. $Q^{\prime}_{1}$ and $Q^{\prime}_{2}$, be
quaternion $k$-algebras and $C(Q_{1})$ and $C(Q_{2})$, resp.
$C(Q^{\prime}_{1})$ and $C(Q^{\prime}_{2})$, the associated conics. The
following conditions are equivalent:
* (a)
$[C(Q_{1})\times C(Q_{2})]=[C(Q^{\prime}_{1})\times C(Q^{\prime}_{2})]$
* (b)
$C(Q_{1})\times
C(Q_{2})\,\,\mathrm{is}\,\,\mathrm{birational}\,\,\mathrm{to}\,\,C(Q^{\prime}_{1})\times
C(Q^{\prime}_{2})$
* (c)
$\langle[Q_{1}],[Q_{2}]\rangle=\langle[Q^{\prime}_{1}],[Q^{\prime}_{2}]\rangle$
in the Brauer group $\mathrm{Br}(k)$.
###### Example 1.29 (Products of two conics over $\mathbb{R}$).
Let $k=\mathbb{R}$, $Q_{1}={\bf H}$, $Q_{2}=(1,1)$, and
$Q^{\prime}_{1}=Q^{\prime}_{2}={\bf H}$. Following Example 1.20, let
$(A,\ast):=(Q_{1},\ast_{1})\otimes(Q_{2},\ast_{2})$ and
$(A^{\prime},\ast^{\prime}):=(Q^{\prime}_{1},\ast^{\prime}_{1})\otimes(Q^{\prime}_{2},\ast^{\prime}_{2})$.
Note that $(A^{\prime},\ast^{\prime})\simeq(\mathrm{M}_{4\times
4}(\mathbb{R}),\ast^{\prime}_{q^{\prime}})$, where
$\ast^{\prime}_{q^{\prime}}$ is the adjoint involution of the quadratic form
$q^{\prime}=\langle 1,1,1,1\rangle$. On the one hand, since $\langle[{\bf
H}],[(1,1)]\rangle=\langle[{\bf H}],[{\bf H}]\rangle$ in the Brauer group
$\mathrm{Br}(\mathbb{R})$, Theorem 1.28 implies that
$[\mathrm{Iv}(A,\ast)]=[C({\bf H})\times\mathbb{P}^{1}]=[C({\bf H})\times
C({\bf H})]=[\mathrm{Iv}(A^{\prime},\ast^{\prime})]$. On the other hand, the
even Clifford algebra $C_{0}(A,\ast)\simeq{\bf H}\times(1,1)$ is not
isomorphic to $C_{0}(A^{\prime},\ast^{\prime})\simeq{\bf H}\times{\bf H}$, the
central simple $\mathbb{R}$-algebra $A\simeq\mathrm{M}_{2\times 2}({\bf H})$
is not isomorphic to $A^{\prime}\simeq\mathrm{M}_{4\times 4}(\mathbb{R})$, the
signature $\mathrm{sgn}(A,\ast)=0$ is different from
$\mathrm{sgn}(A^{\prime},\ast^{\prime})=|\mathrm{sgn}(\langle
1,1,1,1\rangle)|=4$, and the involution variety $C({\bf
H})\times\mathbb{P}^{1}$ is not isomorphic to $C({\bf H})\times C({\bf H})$.
This shows, in particular, that the above Theorems 1.22 and 1.25 are false
without assuming condition $(\star)$.
Finally, note that by combining Kollár’s Theorem 1.28 with item (i) of Theorem
1.25, we obtain a complete description of the Grothendieck classes of all the
involution varieties of dimension $2$ (over any base field $k$). Here is one
illustrative example:
###### Example 1.30 (Grothendieck classes of the involution varieties of
dimension $2$ over $\mathbb{Q}_{p}$).
As explained in Example 1.26, there are nine involution varieties of dimension
$2$ over $\mathbb{Q}_{p}$, with $p\neq 2$, up to isomorphism. Eight of them
were described in Examples 1.8 and 1.26, namely (1.9)-(1.11) and (1.27), and
these have distinct Grothendieck classes. The remaining involution variety
(which has trivial discriminant) is the following one:
$\mathrm{Iv}(((\epsilon,p),\ast)\otimes((1,1),\ast))\simeq C(\epsilon,p)\times
C(1,1)\simeq C(\epsilon,p)\times\mathbb{P}^{1}\,.$ Following Example 1.20, note
that the right-hand side of (1.9) is isomorphic to the product
$C(\epsilon,p)\times C(\epsilon,p)$. Since
$\langle[(\epsilon,p)],[(\epsilon,p)]\rangle=\langle[(\epsilon,p)],[(1,1)]\rangle$
in the Brauer group $\mathrm{Br}(\mathbb{Q}_{p})$, Theorem 1.28 implies that
$[C(\epsilon,p)\times C(\epsilon,p)]=[C(\epsilon,p)\times\mathbb{P}^{1}]$.
Consequently, we conclude that the nine involution varieties of dimension $2$
give rise to eight distinct Grothendieck classes.
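The Brauer-group equality invoked above can be spelled out (a short verification, not part of the original argument): the quaternion algebra $(1,1)$ is split, so $[(1,1)]=0$ in $\mathrm{Br}(\mathbb{Q}_{p})$, and every quaternion class is $2$-torsion, whence the two generated subgroups coincide:

```latex
\langle[(\epsilon,p)],[(\epsilon,p)]\rangle
  =\{0,[(\epsilon,p)]\}
  =\langle[(\epsilon,p)],[(1,1)]\rangle
  \subseteq\mathrm{Br}(\mathbb{Q}_{p})\,.
```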
## 2\. Preliminaries
Throughout the article $k$ denotes a base field of characteristic zero, and we
will write $G:=\mathrm{Gal}(\overline{k}/k)$ for its absolute Galois group.
Given a central simple $k$-algebra $A$, we will write $\mathrm{ind}(A)$ for
its index and $[A]$ for its class in the Brauer group $\mathrm{Br}(k)$. Recall
that $\mathrm{Br}(k)\simeq\oplus_{p\,\mathrm{prime}}\mathrm{Br}(k)\\{p\\}$,
where $\mathrm{Br}(k)\\{p\\}$ stands for the $p$-power torsion subgroup.
Finally, in order to simplify the exposition, we will write $\mathrm{pt}$
instead of $\mathrm{Spec}(k)$.
### 2.1. Dg categories
Throughout the article, we will assume some basic familiarity with the
language of dg categories; consult, for example, Keller’s survey [10]. We will
write $\mathrm{dgcat}(k)$ for the category of (small) dg categories and
$\mathrm{dgcat}_{\mathrm{sp}}(k)$ for the full subcategory of smooth proper dg
categories in the sense of Kontsevich [13, 14, 15]. Examples of smooth proper
dg categories include the finite-dimensional $k$-algebras $A$ of finite global
dimension, as well as the dg categories of perfect complexes
$\mathrm{perf}_{\mathrm{dg}}(X)$ associated to smooth proper $k$-schemes $X$.
As explained in [31, §1.7], the symmetric monoidal category
$(\mathrm{dgcat}_{\mathrm{sp}}(k),\otimes)$ is rigid (recall that a symmetric
monoidal category is called rigid if all its objects are dualizable), with the
dual of a smooth proper dg category ${\mathcal{A}}$ being its opposite dg
category ${\mathcal{A}}^{\mathrm{op}}$.
### 2.2. Chow motives
Given a commutative ring of coefficients $R$, recall from [1, §4.1] and [23] the
definition of the category of Chow motives $\mathrm{Chow}(k)_{R}$. This
category is $R$-linear, rigid symmetric monoidal and idempotent complete.
Moreover, it comes equipped with a symmetric monoidal functor
$\mathfrak{h}(-)_{R}\colon\mathrm{SmProj}(k)\to\mathrm{Chow}(k)_{R}$ defined
on the category of smooth projective $k$-schemes. The Chow motive
$\mathfrak{h}(\mathbb{P}^{1})_{R}$ of the projective line $\mathbb{P}^{1}$
decomposes into a direct sum $\mathfrak{h}(\mathrm{pt})_{R}\oplus R(-1)$. The
direct summand $R(-1)$ is called the Lefschetz motive and its
$\otimes$-inverse $R(1)$ the Tate motive. Given a smooth projective $k$-scheme
$X$ and an integer $i\in\mathbb{Z}$, we will write $\mathfrak{h}(X)_{R}(i)$
instead of $\mathfrak{h}(X)_{R}\otimes R(1)^{\otimes i}$. Under these
notations, given two smooth projective $k$-schemes $X$ and $Y$ and two
integers $i,j\in\mathbb{Z}$, we have an isomorphism
$\mathrm{Hom}_{\mathrm{Chow}(k)_{R}}(\mathfrak{h}(X)_{R}(i),\mathfrak{h}(Y)_{R}(j))\simeq{\mathcal{Z}}^{\mathrm{dim}(X)-i+j}_{\sim\mathrm{rat}}(X\times
Y)_{R}\,,$
where the right-hand side stands for the $R$-module of algebraic cycles on
$X\times Y$ of codimension $\mathrm{dim}(X)-i+j$ up to rational equivalence.
Finally, in the particular case where $R=\mathbb{Z}$, we will write
$\mathrm{Chow}(k)$ instead of $\mathrm{Chow}(k)_{\mathbb{Z}}$ and
$\mathfrak{h}(-)$ instead of $\mathfrak{h}(-)_{\mathbb{Z}}$.
### 2.3. Noncommutative Chow motives
Given a commutative ring of coefficients $R$, recall from [31, §4.1] the
definition of the category of noncommutative Chow motives
$\operatorname{NChow}(k)_{R}$. This category is $R$-linear, rigid symmetric
monoidal, idempotent complete, and comes equipped with a symmetric monoidal
functor
$U(-)_{R}\colon\mathrm{dgcat}_{\mathrm{sp}}(k)\to\operatorname{NChow}(k)_{R}$.
Given smooth proper dg categories ${\mathcal{A}}$ and
${\mathcal{A}}^{\prime}$, we have an isomorphism
$\mathrm{Hom}_{\operatorname{NChow}(k)_{R}}(U({\mathcal{A}})_{R},U({\mathcal{A}}^{\prime})_{R})\simeq
K_{0}({\mathcal{A}}^{\mathrm{op}}\otimes{\mathcal{A}}^{\prime})_{R}\,,$
where the right-hand side stands for the $R$-linearized Grothendieck group of
${\mathcal{A}}^{\mathrm{op}}\otimes{\mathcal{A}}^{\prime}$. Moreover, the
composition law on $\operatorname{NChow}(k)_{R}$ is induced by the (derived)
tensor product of bimodules, and the identity of $U({\mathcal{A}})_{R}$ is the
Grothendieck class of the diagonal
${\mathcal{A}}\text{-}{\mathcal{A}}$-bimodule ${\mathcal{A}}$. In the
particular case where $R=\mathbb{Z}$, we will write $\operatorname{NChow}(k)$
instead of $\operatorname{NChow}(k)_{\mathbb{Z}}$ and $U(-)$ instead of
$U(-)_{\mathbb{Z}}$.
###### Theorem 2.1.
(see [32, Thm. 9.1]) Given two central simple $k$-algebras $A$ and
$A^{\prime}$, we have the equivalence:
$\displaystyle U(A)\simeq
U(A^{\prime})\,\,\mathrm{in}\,\,\operatorname{NChow}(k)$
$\displaystyle\Leftrightarrow$
$\displaystyle[A]=[A^{\prime}]\,\,\mathrm{in}\,\,\mathrm{Br}(k)\,.$
###### Theorem 2.2.
(see [33, Thm. 2.20(iv)]) Given two families of central simple $k$-algebras
$\\{A_{i}\\}_{1\leq i\leq n}$ and $\\{A^{\prime}_{j}\\}_{1\leq j\leq m}$, we
have an isomorphism $\oplus_{i=1}^{n}U(A_{i})\simeq\oplus_{j=1}^{m}U(A^{\prime}_{j})$
in $\operatorname{NChow}(k)$ if and only if $n=m$ and for every prime number
$p$ there exists a permutation $\sigma_{p}$ such that
$[A^{\prime}_{i}]^{p}=[A_{\sigma_{p}(i)}]^{p}$ in $\mathrm{Br}(k)\\{p\\}$ for
every $i$.
###### Remark 2.3 (Central simple algebras over field extensions).
Let $l/k$, resp. $l^{\prime}/k$, be a finite Galois field extension and $A$,
resp. $A^{\prime}$, a central simple $l$-algebra, resp. central simple
$l^{\prime}$-algebra. Let us denote by $H\subseteq G$, resp.
$H^{\prime}\subseteq G$, the (unique) subgroup such that $\overline{k}^{H}=l$,
resp. $\overline{k}^{H^{\prime}}=l^{\prime}$. Given a commutative ring of
coefficients $R$, recall from [33, Thm. 2.13] that
$\mathrm{Hom}_{\operatorname{NChow}(k)_{R}}(U(A)_{R},U(A^{\prime})_{R})$ can
be identified with the $R$-module $\mathrm{Map}^{G}(G/H\times G/H^{\prime},R)$
of $G$-invariant maps from the finite $G$-set $G/H\times G/H^{\prime}$
(equipped with the diagonal $G$-action) to $R$. Under these identifications,
the composition map
$\mathrm{Hom}_{\operatorname{NChow}(k)_{R}}(U(A)_{R},U(A^{\prime})_{R})\times\mathrm{Hom}_{\operatorname{NChow}(k)_{R}}(U(A^{\prime})_{R},U(A)_{R})\longrightarrow\mathrm{Hom}_{\operatorname{NChow}(k)_{R}}(U(A)_{R},U(A)_{R})$
corresponds to the bilinear pairing
$\mathrm{Map}^{G}(G/H\times
G/H^{\prime},R)\times\mathrm{Map}^{G}(G/H^{\prime}\times
G/H,R)\longrightarrow\mathrm{Map}^{G}(G/H\times G/H,R)$
that sends a pair of $G$-invariant maps $(\alpha,\beta)$ to the following
$G$-invariant map:
$(\overline{g_{1}},\overline{g_{3}})\mapsto\sum_{\overline{g_{2}}\in
G/H^{\prime}}\alpha(\overline{g_{1}},\overline{g_{2}})\beta(\overline{g_{2}},\overline{g_{3}})\mathrm{ind}\big{(}(A\otimes_{\overline{k}^{H}}\overline{k}^{(H\cap
H^{\prime})})^{\mathrm{op}}\otimes_{\overline{k}^{(H\cap
H^{\prime})}}(A^{\prime}\otimes_{\overline{k}^{H^{\prime}}}\overline{k}^{(H\cap
H^{\prime})})\big{)}^{2}\,.$
Moreover, the identity of $U(A)_{R}$ corresponds to the $G$-invariant map
$G/H\times G/H\to R$ with $1$ in the diagonal and $0$ elsewhere.
### 2.4. Numerical motives
Given an additive rigid symmetric monoidal category ${\mathcal{C}}$, recall
from [2, §7] that its ${\mathcal{N}}$-ideal is defined as follows
${\mathcal{N}}(a,b):=\\{f\in\mathrm{Hom}_{\mathcal{C}}(a,b)\,|\,\forall
g\in\mathrm{Hom}_{\mathcal{C}}(b,a)\,\,\mathrm{we}\,\,\mathrm{have}\,\,\mathrm{tr}(g\circ
f)=0\\}\,,$
where $\mathrm{tr}(g\circ f)$ is the categorical trace of $g\circ f$. Via the
adjunction isomorphism
$\mathrm{Hom}_{\mathcal{C}}(a,b)\simeq\mathrm{Hom}_{\mathcal{C}}({\bf
1},a^{\vee}\otimes b)$, where ${\bf 1}$ stands for the $\otimes$-unit and
$a^{\vee}$ for the dual of $a$, the ${\mathcal{N}}$-ideal can also be
described as follows:
${\mathcal{N}}(a,b)=\\{f\in\mathrm{Hom}_{\mathcal{C}}({\bf 1},a^{\vee}\otimes
b)\,|\,\forall g\in\mathrm{Hom}_{\mathcal{C}}(a^{\vee}\otimes b,{\bf
1})\,\,\mathrm{we}\,\,\mathrm{have}\,\,g\circ f=0\\}\,.$
Given a commutative ring of coefficients $R$, recall from [1, §4.1] and [23] that
the category of numerical motives $\operatorname{Num}(k)_{R}$ is defined as
the idempotent completion of the quotient of $\mathrm{Chow}(k)_{R}$ by the
$\otimes$-ideal ${\mathcal{N}}$. This category is $R$-linear, rigid symmetric
monoidal and idempotent complete. Moreover, given two smooth projective
$k$-schemes $X$ and $Y$ and two integers $i,j\in\mathbb{Z}$, we have an
isomorphism
$\mathrm{Hom}_{\mathrm{Num}(k)_{R}}(\mathfrak{h}(X)_{R}(i),\mathfrak{h}(Y)_{R}(j))\simeq{\mathcal{Z}}_{\sim\mathrm{num}}^{\mathrm{dim}(X)-i+j}(X\times
Y)_{R}\,,$
where the right-hand side stands for the $R$-module of algebraic cycles up to
numerical equivalence.
### 2.5. Noncommutative numerical motives
Given a commutative ring of coefficients $R$, recall from [31, §4.6] that the
category of noncommutative numerical motives $\operatorname{NNum}(k)_{R}$ is
defined as the idempotent completion of the quotient of
$\operatorname{NChow}(k)_{R}$ by the $\otimes$-ideal ${\mathcal{N}}$. This
category is $R$-linear, rigid symmetric monoidal and idempotent complete.
### 2.6. Noncommutative radical motives
Given an additive category ${\mathcal{C}}$, its ${\mathcal{R}}$-ideal is
defined as follows:
${\mathcal{R}}(a,b):=\\{f\in\mathrm{Hom}_{\mathcal{C}}(a,b)\,|\,\forall
g\in\mathrm{Hom}_{\mathcal{C}}(b,a)\,\,\mathrm{the}\,\,\mathrm{endomorphism}\,\,\operatorname{id}_{a}-g\circ
f\,\,\mathrm{is}\,\,\mathrm{invertible}\\}\,.$
Given a commutative ring of coefficients $R$, the category of noncommutative
radical motives $\mathrm{NRad}(k)_{R}$ is defined as the idempotent completion
of the quotient of $\operatorname{NChow}(k)_{R}$ by the ideal ${\mathcal{R}}$.
By construction, this category is $R$-linear, additive, and idempotent
complete.
### 2.7. Motivic measures
Let $K_{0}(\operatorname{Num}(k))$ and $K_{0}(\operatorname{NChow}(k))$ be the
Grothendieck rings of the additive symmetric monoidal categories
$\operatorname{Num}(k)$ and $\operatorname{NChow}(k)$, respectively. The
following motivic measures will be used throughout the article:
###### Proposition 2.4.
(see [1, Cor. 13.2.2.1]) The assignment $X\mapsto\mathfrak{h}(X)$, with $X$ a
smooth projective $k$-scheme, gives rise to a motivic measure
$\mu_{\mathrm{c}}\colon K_{0}\mathrm{Var}(k)\to K_{0}(\operatorname{Num}(k))$.
###### Proposition 2.5.
(see [30, Prop. 4.1]) The assignment $X\mapsto
U(\mathrm{perf}_{\mathrm{dg}}(X))$, with $X$ a smooth projective $k$-scheme,
gives rise to a motivic measure $\mu_{\mathrm{nc}}\colon
K_{0}\mathrm{Var}(k)\to K_{0}(\operatorname{NChow}(k))$.
## 3\. Cancellation property
We start by recalling the following cancellation result:
###### Proposition 3.1.
(see [30, Prop. 4.9]) Let $\\{A_{i}\\}_{1\leq i\leq n}$ and
$\\{A^{\prime}_{j}\\}_{1\leq j\leq m}$ be two families of central simple
$k$-algebras. Given any noncommutative Chow motive
$N\\!\\!M\in\operatorname{NChow}(k)$, we have the following implication:
(3.2) $\displaystyle N\\!\\!M\oplus\oplus_{i=1}^{n}U(A_{i})\simeq
N\\!\\!M\oplus\oplus_{j=1}^{m}U(A^{\prime}_{j})$ $\displaystyle\Rightarrow$
$\displaystyle
n=m\,\,\,\mathrm{and}\,\,\oplus_{i=1}^{n}U(A_{i})\simeq\oplus_{j=1}^{m}U(A^{\prime}_{j})\,.$
The following result, which is of independent interest, will play a key role
in the sequel:
###### Theorem 3.3 (Cancellation).
Let $l/k$, resp. $l^{\prime}/k$, be a field extension of degree $2$ and $A$,
resp. $A^{\prime}$, a central simple $l$-algebra, resp. central simple
$l^{\prime}$-algebra, such that $\mathrm{ind}(A)$, resp.
$\mathrm{ind}(A^{\prime})$, is a power of $2$. Moreover, let $B$ and
$B^{\prime}$ be two central simple $k$-algebras. Under these assumptions,
given any noncommutative Chow motive $N\\!\\!M\in\operatorname{NChow}(k)$ and
integer $n\geq 0$, we have the following implication:
$\displaystyle N\\!\\!M\oplus U(B)^{\oplus n}\oplus U(A)\simeq N\\!\\!M\oplus
U(B^{\prime})^{\oplus n}\oplus U(A^{\prime})$ $\displaystyle\Rightarrow$
$\displaystyle U(A)\simeq U(A^{\prime})\,.$
In order to prove Theorem 3.3, we need several ingredients.
###### Lemma 3.4.
Given two field extensions $l/k$ and $l^{\prime}/k$ of degree $2$, we have the
following equivalences:
$l\simeq l^{\prime}\quad\Leftrightarrow\quad U(l)_{\mathbb{Q}}\simeq
U(l^{\prime})_{\mathbb{Q}}\,\,\mathrm{in}\,\,\operatorname{NChow}(k)_{\mathbb{Q}}\quad\Leftrightarrow\quad
U(l)_{\mathbb{Q}}\simeq
U(l^{\prime})_{\mathbb{Q}}\,\,\mathrm{in}\,\,\operatorname{NNum}(k)_{\mathbb{Q}}\,.$
###### Proof.
Both implications $(\Rightarrow)$ are clear. We start by proving the right-
hand side implication $(\Leftarrow)$. Note that since the field extension
$l/k$ is of degree $2$, the composition map
$\mathrm{Hom}_{\operatorname{NChow}(k)_{\mathbb{Q}}}(U(k)_{\mathbb{Q}},U(l)_{\mathbb{Q}})\times\mathrm{Hom}_{\operatorname{NChow}(k)_{\mathbb{Q}}}(U(l)_{\mathbb{Q}},U(k)_{\mathbb{Q}})\longrightarrow\mathrm{Hom}_{\operatorname{NChow}(k)_{\mathbb{Q}}}(U(k)_{\mathbb{Q}},U(k)_{\mathbb{Q}})$
corresponds to the bilinear pairing
$\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q},(\alpha,\beta)\mapsto 2\alpha\beta$.
Note also that since the field extension $l/k$ is Galois, we have the
following isomorphism of $k$-algebras
$l\otimes_{k}l\stackrel{{\scriptstyle\simeq}}{{\to}}l\times
l,\lambda_{1}\otimes\lambda_{2}\mapsto(\lambda_{1}\lambda_{2},\sigma(\lambda_{1})\lambda_{2})$,
where $\sigma$ stands for the generator of the Galois group of $l/k$. This
implies that
$U(l)^{\vee}_{\mathbb{Q}}\otimes U(l)_{\mathbb{Q}}\simeq
U(l^{\mathrm{op}})_{\mathbb{Q}}\otimes U(l)_{\mathbb{Q}}\simeq
U(l^{\mathrm{op}}\otimes_{k}l)_{\mathbb{Q}}=U(l\otimes_{k}l)_{\mathbb{Q}}\simeq
U(l)_{\mathbb{Q}}\oplus U(l)_{\mathbb{Q}}\,.$
Therefore, following the definition of the category of noncommutative
numerical motives, we observe that
$\mathrm{End}_{\operatorname{NChow}(k)_{\mathbb{Q}}}(U(l)_{\mathbb{Q}})=\mathrm{End}_{\operatorname{NNum}(k)_{\mathbb{Q}}}(U(l)_{\mathbb{Q}})$.
All the above holds mutatis mutandis with $l$ replaced by $l^{\prime}$. Hence,
we conclude that if $U(l)_{\mathbb{Q}}\simeq
U(l^{\prime})_{\mathbb{Q}}\,\,\mathrm{in}\,\,\operatorname{NNum}(k)_{\mathbb{Q}}$,
then $U(l)_{\mathbb{Q}}\simeq
U(l^{\prime})_{\mathbb{Q}}\,\,\mathrm{in}\,\,\operatorname{NChow}(k)_{\mathbb{Q}}$.
We now prove the left-hand side implication $(\Leftarrow)$. Let $H\subseteq G$
and $H^{\prime}\subseteq G$ be the subgroups of index $2$ such that
$\overline{k}^{H}=l$ and $\overline{k}^{H^{\prime}}=l^{\prime}$, respectively.
Recall from Remark 2.3 that
$\mathrm{Hom}_{\operatorname{NChow}(k)_{\mathbb{Q}}}(U(l)_{\mathbb{Q}},U(l^{\prime})_{\mathbb{Q}})$
can be identified with the $\mathbb{Q}$-vector space
$\mathrm{Map}^{G}(G/H\times G/H^{\prime},\mathbb{Q})$ of $G$-invariant maps
from the finite $G$-set $G/H\times G/H^{\prime}$ to $\mathbb{Q}$. Recall also
that the composition map
$\mathrm{Hom}_{\operatorname{NChow}(k)_{\mathbb{Q}}}(U(l)_{\mathbb{Q}},U(l^{\prime})_{\mathbb{Q}})\times\mathrm{Hom}_{\operatorname{NChow}(k)_{\mathbb{Q}}}(U(l^{\prime})_{\mathbb{Q}},U(l)_{\mathbb{Q}})\longrightarrow\mathrm{Hom}_{\operatorname{NChow}(k)_{\mathbb{Q}}}(U(l)_{\mathbb{Q}},U(l)_{\mathbb{Q}})$
corresponds to the bilinear pairing
$\mathrm{Map}^{G}(G/H\times
G/H^{\prime},\mathbb{Q})\times\mathrm{Map}^{G}(G/H^{\prime}\times
G/H,\mathbb{Q})\longrightarrow\mathrm{Map}^{G}(G/H\times G/H,\mathbb{Q})$
that sends $(\alpha,\beta)$ to the $G$-invariant map
$(\overline{g_{1}},\overline{g_{3}})\mapsto\sum_{\overline{g_{2}}\in
G/H^{\prime}}\alpha(\overline{g_{1}},\overline{g_{2}})\beta(\overline{g_{2}},\overline{g_{3}})$.
Moreover, the identity of $U(l)_{\mathbb{Q}}$ corresponds to the $G$-invariant
map $G/H\times G/H\to\mathbb{Q}$ with $1$ in the diagonal and $0$ elsewhere.
This shows, in particular, that the $\mathbb{Q}$-algebra of endomorphisms
$\mathrm{End}_{\operatorname{NChow}(k)_{\mathbb{Q}}}(U(l)_{\mathbb{Q}})$
corresponds to the group $\mathbb{Q}$-algebra $\mathbb{Q}[G/H]$. Now, let us
assume that $l\not\simeq l^{\prime}$, i.e., that the $k$-algebras $l$ and
$l^{\prime}$ are not isomorphic. Thanks to Galois theory, we have $H\neq
H^{\prime}$. Moreover, since the field extensions $l/k$ and $l^{\prime}/k$ are
of degree $2$, we have $H\not\subseteq H^{\prime}$ and
$H^{\prime}\not\subseteq H$. This implies that the (diagonal) $G$-action on
the set $G/H\times G/H^{\prime}$ is transitive. Therefore, it follows from the
above description of the category of noncommutative Chow motives that
$U(l)_{\mathbb{Q}}\simeq U(l^{\prime})_{\mathbb{Q}}$ in
$\operatorname{NChow}(k)_{\mathbb{Q}}$ if and only if there exist rational
numbers $\alpha,\beta\in\mathbb{Q}$ such that
(3.5) $\begin{cases}\alpha\beta+\alpha\beta=1\\\
\alpha\beta+\alpha\beta=0\end{cases}\,.$
Since the system of equations (3.5) is impossible, we hence conclude that
$U(l)_{\mathbb{Q}}\not\simeq U(l^{\prime})_{\mathbb{Q}}$ in
$\operatorname{NChow}(k)_{\mathbb{Q}}$. ∎
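For the reader's convenience, here is the one-line verification (included as a reminder, not part of the original text) that the system (3.5) is inconsistent: since the $G$-action on $G/H\times G/H^{\prime}$ is transitive, the $G$-invariant maps $\alpha$ and $\beta$ are constant, and both the diagonal and the off-diagonal entries receive the same value,

```latex
\alpha\beta+\alpha\beta=2\alpha\beta=1
\quad\text{and}\quad
\alpha\beta+\alpha\beta=2\alpha\beta=0
\;\Longrightarrow\;
1=0\,\,\mathrm{in}\,\,\mathbb{Q}\,,
```

which is absurd.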
###### Lemma 3.6.
Given a field extension $l/k$ of degree $2$ and two central simple
$l$-algebras $A$ and $A^{\prime}$, we have the following equivalence:
$\displaystyle U(A)\simeq
U(A^{\prime})\,\,\mathrm{in}\,\,\operatorname{NChow}(k)$
$\displaystyle\Leftrightarrow$
$\displaystyle[A]=[A^{\prime}]\,\,\mathrm{in}\,\,\mathrm{Br}(l)\,.$
###### Proof.
Let $H\subseteq G$ be the subgroup of index $2$ such that
$\overline{k}^{H}=l$. Given a commutative ring of coefficients $R$, recall
from Remark 2.3 that
$\mathrm{Hom}_{\operatorname{NChow}(k)_{R}}(U(A)_{R},U(A^{\prime})_{R})$ can
be identified with the $R$-module $\mathrm{Map}^{G}(G/H\times G/H,R)$ of
$G$-invariant maps from the finite $G$-set $G/H\times G/H$ to $R$. Recall also
that, under these identifications, the composition map
$\displaystyle\mathrm{Hom}_{\operatorname{NChow}(k)_{R}}(U(A)_{R},U(A^{\prime})_{R})\times\mathrm{Hom}_{\operatorname{NChow}(k)_{R}}(U(A^{\prime})_{R},U(A)_{R})\longrightarrow\mathrm{Hom}_{\operatorname{NChow}(k)_{R}}(U(A)_{R},U(A)_{R})$
corresponds to the bilinear pairing
$\mathrm{Map}^{G}(G/H\times G/H,R)\times\mathrm{Map}^{G}(G/H\times
G/H,R)\longrightarrow\mathrm{Map}^{G}(G/H\times G/H,R)$
that sends $(\alpha,\beta)$ to the $G$-invariant map
$(\overline{g_{1}},\overline{g_{3}})\mapsto\sum_{\overline{g_{2}}\in
G/H}\alpha(\overline{g_{1}},\overline{g_{2}})\beta(\overline{g_{2}},\overline{g_{3}})\mathrm{ind}(A^{\mathrm{op}}\otimes_{l}A^{\prime})^{2}$.
Moreover, the identity of $U(A)_{R}$ corresponds to the $G$-invariant map
$G/H\times G/H\to R$ with $1$ in the diagonal and $0$ elsewhere. This shows,
in particular, that the $R$-algebra of endomorphisms
$\mathrm{End}_{\operatorname{NChow}(k)_{R}}(U(A)_{R})$ corresponds to the
group $R$-algebra $R[G/H]$. Note that since the field extension $l/k$ is of
degree $2$, the $G$-set $G/H\times G/H$ has two orbits (the elements in the
diagonal and the elements outside the diagonal). Therefore, thanks to the
above description of the category of noncommutative Chow motives, we observe
that $U(A)_{R}\simeq U(A^{\prime})_{R}$ in $\operatorname{NChow}(k)_{R}$ if
and only if there exist elements $\alpha^{+},\alpha^{-}\in R$ and
$\beta^{+},\beta^{-}\in R$ such that
(3.7)
$\begin{cases}\alpha^{+}\beta^{+}\mathrm{ind}(A^{\mathrm{op}}\otimes_{l}A^{\prime})^{2}+\alpha^{-}\beta^{-}\mathrm{ind}(A^{\mathrm{op}}\otimes_{l}A^{\prime})^{2}=1\\\
\alpha^{+}\beta^{-}\mathrm{ind}(A^{\mathrm{op}}\otimes_{l}A^{\prime})^{2}+\alpha^{-}\beta^{+}\mathrm{ind}(A^{\mathrm{op}}\otimes_{l}A^{\prime})^{2}=0\end{cases}\Leftrightarrow\begin{cases}(\alpha^{+}\beta^{+}+\alpha^{-}\beta^{-})\mathrm{ind}(A^{\mathrm{op}}\otimes_{l}A^{\prime})^{2}=1\\\
(\alpha^{+}\beta^{-}+\alpha^{-}\beta^{+})\mathrm{ind}(A^{\mathrm{op}}\otimes_{l}A^{\prime})^{2}=0\,.\end{cases}$
Now, let us restrict ourselves to the case where $R=\mathbb{Z}$. Note that if
the system of equations (3.7) holds, then
$\mathrm{ind}(A^{\mathrm{op}}\otimes_{l}A^{\prime})=1$. Conversely, if
$\mathrm{ind}(A^{\mathrm{op}}\otimes_{l}A^{\prime})=1$, then the system of
equations (3.7) holds; take, for example, $\alpha^{+}=\beta^{+}=1$ and
$\alpha^{-}=\beta^{-}=0$. Consequently, the proof follows now from the
classical fact that $\mathrm{ind}(A^{\mathrm{op}}\otimes_{l}A^{\prime})=1$ if
and only if $[A]=[A^{\prime}]$ in $\mathrm{Br}(l)$. ∎
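The classical fact quoted at the end of the proof can be unwound as follows (a standard Brauer-group computation, spelled out here as a reminder): in $\mathrm{Br}(l)$ one has $[A^{\mathrm{op}}\otimes_{l}A^{\prime}]=[A^{\prime}]-[A]$, and a central simple algebra has index $1$ exactly when it is split, i.e., when its Brauer class vanishes. Hence

```latex
\mathrm{ind}(A^{\mathrm{op}}\otimes_{l}A^{\prime})=1
\;\Leftrightarrow\;
[A^{\prime}]-[A]=0
\;\Leftrightarrow\;
[A]=[A^{\prime}]\,\,\mathrm{in}\,\,\mathrm{Br}(l)\,.
```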
###### Lemma 3.8.
Let $B$ be a central simple $k$-algebra, $l/k$ a field extension of degree $2$,
and $A$ a central simple $l$-algebra. Under these assumptions, we have
$\mathrm{Hom}_{\mathrm{NRad}(k)_{\mathbb{F}_{2}}}(U(B)_{\mathbb{F}_{2}},U(A)_{\mathbb{F}_{2}})=0$.
###### Proof.
Let $H\subseteq G$ be the subgroup of index $2$ such that
$\overline{k}^{H}=l$. Given a commutative ring of coefficients $R$, recall
from Remark 2.3 that
$\mathrm{Hom}_{\operatorname{NChow}(k)_{R}}(U(B)_{R},U(A)_{R})$ can be
identified with the $R$-module $\mathrm{Map}^{G}(G/G\times G/H,R)$ of
$G$-invariant maps from the finite $G$-set $G/G\times G/H=G/H$ to $R$.
Recall also that, under these identifications, the composition map
(3.9)
$\mathrm{Hom}_{\operatorname{NChow}(k)_{R}}(U(B)_{R},U(A)_{R})\times\mathrm{Hom}_{\operatorname{NChow}(k)_{R}}(U(A)_{R},U(B)_{R})\longrightarrow\mathrm{Hom}_{\operatorname{NChow}(k)_{R}}(U(B)_{R},U(B)_{R})$
corresponds to the bilinear pairing $R\times R\to R,(\alpha,\beta)\mapsto
2\alpha\beta\mathrm{ind}((B\otimes_{k}l)^{\mathrm{op}}\otimes_{l}A)^{2}$.
Consequently, in the particular case where $R=\mathbb{F}_{2}$, the composition
map (3.9) is equal to zero. By definition of the category of noncommutative
radical motives, this hence implies that
$\mathrm{Hom}_{\mathrm{NRad}(k)_{\mathbb{F}_{2}}}(U(B)_{\mathbb{F}_{2}},U(A)_{\mathbb{F}_{2}})=0$.
∎
###### Lemma 3.10.
Let $l/k$ be a field extension of degree $2$ and $A$ a central simple
$l$-algebra. Given any noncommutative Chow motive
$N\\!\\!M\in\operatorname{NChow}(k)$, we have a direct sum decomposition
$N\\!\\!M_{\mathbb{F}_{2}}\simeq M\\!\\!N\oplus U(A)_{\mathbb{F}_{2}}^{\oplus
m}$ in the category $\mathrm{NRad}(k)_{\mathbb{F}_{2}}$ for some integer
$m\geq 0$, where $M\\!\\!N$ is a noncommutative radical motive which does not
contain $U(A)_{\mathbb{F}_{2}}$ as a direct summand.
###### Proof.
Recall from the proof of Lemma 3.6 that
$\mathrm{End}_{\operatorname{NChow}(k)}(U(A))$ corresponds to
$\mathbb{Z}[G/H]$, where $H\subseteq G$ is the subgroup of index $2$ such that
$\overline{k}^{H}=l$. Note that since the field extension $l/k$ is of degree
$2$, we have
$\mathbb{Z}[G/H]=\\{\alpha^{+}1+\alpha^{-}\sigma\,|\,\alpha^{+},\alpha^{-}\in\mathbb{Z}\\}$,
where $\sigma$ stands for the generator of the Galois group of $l/k$. By
definition, the category $\mathrm{NRad}(k)_{\mathbb{F}_{2}}$ is idempotent
complete. Therefore, we can inductively split the direct summand
$U(A)_{\mathbb{F}_{2}}$ from $N\\!\\!M_{\mathbb{F}_{2}}$. We claim that this
inductive process stops. Let us suppose, for the sake of contradiction, that it does not. If this is
the case, then, given any integer $n\geq 1$, $U(A)_{\mathbb{F}_{2}}^{\oplus
n}$ is a direct summand of $N\\!\\!M_{\mathbb{F}_{2}}$. Using the fact that
the quotient functor
$\operatorname{NChow}(k)_{\mathbb{F}_{2}}\to\mathrm{NRad}(k)_{\mathbb{F}_{2}}$
reflects (co)retractions (consult [2, Prop. 1.4.4]), we conclude that
$U(A)_{\mathbb{F}_{2}}^{\oplus n}$ is a direct summand of
$N\\!\\!M_{\mathbb{F}_{2}}$ in the category
$\operatorname{NChow}(k)_{\mathbb{F}_{2}}$. Hence, there exist morphisms
$\rho\colon U(A)^{\oplus n}\to N\\!\\!M$ and $\varrho\colon N\\!\\!M\to
U(A)^{\oplus n}$ in the category $\operatorname{NChow}(k)$ such that:
(3.11)
$\displaystyle\varrho\circ\rho=\begin{bmatrix}\alpha^{+}_{11}1+\alpha^{-}_{11}\sigma&\cdots&\alpha^{+}_{1n}1+\alpha^{-}_{1n}\sigma\\\
\vdots&&\vdots\\\
\alpha^{+}_{n1}1+\alpha^{-}_{n1}\sigma&\cdots&\alpha^{+}_{nn}1+\alpha^{-}_{nn}\sigma\end{bmatrix}_{n\times
n}$ $\displaystyle\mathrm{with}$
$\displaystyle\begin{cases}\alpha^{+}_{ij}\,\,\mathrm{odd}\quad i=j\\\
\alpha^{+}_{ij}\,\,\mathrm{even}\,\,\,\,i\neq
j\end{cases}\,\,\,\,\alpha^{-}_{ij}\,\,\mathrm{even}\,.$
Recall from [31, §2.2.8] that Hochschild homology gives rise to a functor
$HH_{0}(-):\mathrm{dgcat}_{\mathrm{sp}}(k)\to\mathrm{vect}(k)$, with values in
the category of finite-dimensional $k$-vector spaces, such that
$HH_{0}(U({\mathcal{A}}))\simeq HH_{0}({\mathcal{A}})$ for every smooth proper
dg category ${\mathcal{A}}$. As proved in [34, Thm. 2.1], the canonical map
$l\to A$ induces an isomorphism
$HH_{0}(l)\stackrel{{\scriptstyle\simeq}}{{\to}}HH_{0}(A)$. Moreover, it is
well-known that $HH_{0}(l)\simeq l/[l,l]\simeq l$. Note that since the field
extension $l/k$ is of degree $2$, we have $l\simeq k(\sqrt{\lambda})$ for some
$\lambda\in k^{\times}/(k^{\times})^{2}$. Therefore, we conclude that
$HH_{0}(A)\simeq k(\sqrt{\lambda})$. Under the identification
$\mathrm{End}_{\operatorname{NChow}(k)}(U(A))\simeq\mathbb{Z}[G/H]$, the
additive functor $HH_{0}(-)$ sends the element $\sigma$ to the conjugation
automorphism $\sqrt{\lambda}\mapsto-\sqrt{\lambda}$ of the $k$-vector space
$k(\sqrt{\lambda})$. Consequently, making use of the identification
$\mathrm{End}_{\mathrm{vect}(k)}(HH_{0}(A))\simeq M_{2\times 2}(k)$ induced by
the basis $\\{1,\sqrt{\lambda}\\}$ of $k(\sqrt{\lambda})$, we conclude that
the functor $HH_{0}(-)$ induces the following ring homomorphism:
$\displaystyle\mathbb{Z}[G/H]\longrightarrow M_{2\times 2}(k)$
$\displaystyle\alpha^{+}1+\alpha^{-}\sigma\mapsto\begin{bmatrix}\alpha^{+}+\alpha^{-}&0\\\
0&\alpha^{+}-\alpha^{-}\end{bmatrix}_{2\times 2}\,.$
This implies that the composition of $HH_{0}(\rho)\colon
k(\sqrt{\lambda})^{\oplus n}\to HH_{0}(N\\!\\!M)$ with $HH_{0}(\varrho)\colon
HH_{0}(N\\!\\!M)\to k(\sqrt{\lambda})^{\oplus n}$, in the category
$\mathrm{vect}(k)$, admits the following block matrix representation:
(3.12)
$HH_{0}(\varrho\circ\rho)=\left[\begin{array}[]{cc|cc|cc}\alpha^{+}_{11}+\alpha^{-}_{11}&0&\cdots&\cdots&\alpha^{+}_{1n}+\alpha^{-}_{1n}&0\\\
0&\alpha^{+}_{11}-\alpha^{-}_{11}&\cdots&\cdots&0&\alpha^{+}_{1n}-\alpha^{-}_{1n}\\\
\hline\cr\vdots&\vdots&&&\vdots&\vdots\\\ \vdots&\vdots&&&\vdots&\vdots\\\
\hline\cr\alpha^{+}_{n1}+\alpha^{-}_{n1}&0&\cdots&\cdots&\alpha^{+}_{nn}+\alpha^{-}_{nn}&0\\\
0&\alpha^{+}_{n1}-\alpha^{-}_{n1}&\cdots&\cdots&0&\alpha^{+}_{nn}-\alpha^{-}_{nn}\end{array}\right]_{2n\times
2n}\,.$
Note that (3.11) implies that all the elements in the diagonal of the matrix
(3.12) are odd and that all the elements outside the diagonal are even.
Therefore, a simple inductive argument shows that the determinant of (3.12) is
odd. In particular, the determinant of (3.12) is non-zero. This implies that
the homomorphism $HH_{0}(\rho)$ is injective and consequently that
$\mathrm{dim}_{k}HH_{0}(N\\!\\!M)\geq 2n$. Since the integer $n\geq 1$ is
arbitrary and the dimension of the $k$-vector space $HH_{0}(N\\!\\!M)$ is
finite, we hence arrive at a contradiction. Therefore, there exists an integer
$m\geq 0$ such that $N\\!\\!M_{\mathbb{F}_{2}}$ contains $U(A)^{\oplus
m}_{\mathbb{F}_{2}}$, but not $U(A)^{\oplus(m+1)}_{\mathbb{F}_{2}}$, as a
direct summand. ∎
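The parity claim used in the proof (a square integer matrix with odd diagonal entries and even off-diagonal entries has odd determinant, because such a matrix is congruent to the identity modulo $2$) can be sanity-checked numerically. The following sketch is merely an illustration, not part of the argument:

```python
import random

def int_det(m):
    """Exact integer determinant via cofactor expansion (fine for small n)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    # Expand along the first row, deleting column j from the remaining rows.
    return sum((-1) ** j * m[0][j]
               * int_det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(n))

def random_odd_diagonal_matrix(n):
    """Random integer matrix: odd on the diagonal, even off the diagonal."""
    return [[2 * random.randint(-5, 5) + (1 if i == j else 0)
             for j in range(n)] for i in range(n)]

# Every such matrix reduces to the identity mod 2, so its determinant is odd.
assert all(int_det(random_odd_diagonal_matrix(n)) % 2 == 1
           for n in range(1, 6) for _ in range(20))
```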
We now have all the ingredients necessary for the proof of Theorem 3.3.
### Proof of Theorem 3.3
Note first that we have induced isomorphisms:
(3.13) $\displaystyle N\\!\\!M_{\mathbb{Q}}\oplus U(B)^{\oplus
n}_{\mathbb{Q}}\oplus U(A)_{\mathbb{Q}}\simeq N\\!\\!M_{\mathbb{Q}}\oplus
U(B^{\prime})^{\oplus n}_{\mathbb{Q}}\oplus
U(A^{\prime})_{\mathbb{Q}}\,\,\mathrm{in}\,\,\operatorname{NNum}(k)_{\mathbb{Q}}$
(3.14) $\displaystyle N\\!\\!M_{\mathbb{F}_{2}}\oplus
U(B)_{\mathbb{F}_{2}}^{\oplus n}\oplus U(A)_{\mathbb{F}_{2}}\simeq
N\\!\\!M_{\mathbb{F}_{2}}\oplus U(B^{\prime})_{\mathbb{F}_{2}}^{\oplus
n}\oplus
U(A^{\prime})_{\mathbb{F}_{2}}\,\,\mathrm{in}\,\,\mathrm{NRad}(k)_{\mathbb{F}_{2}}\,.$
Moreover, as explained in [34, Thm. 2.1], we have isomorphisms
$U(B)_{\mathbb{Q}}\simeq U(k)_{\mathbb{Q}}$ and $U(A)_{\mathbb{Q}}\simeq
U(l)_{\mathbb{Q}}$ in the category $\operatorname{NChow}(k)_{\mathbb{Q}}$;
similarly for $B^{\prime}$ and $A^{\prime}$. Consequently, the above
isomorphism (3.13) reduces to $N\\!\\!M_{\mathbb{Q}}\oplus U(k)^{\oplus
n}_{\mathbb{Q}}\oplus U(l)_{\mathbb{Q}}\simeq N\\!\\!M_{\mathbb{Q}}\oplus
U(k)^{\oplus n}_{\mathbb{Q}}\oplus U(l^{\prime})_{\mathbb{Q}}$. Since the
category $\operatorname{NNum}(k)_{\mathbb{Q}}$ is abelian semi-simple (consult
[31, Thm. 4.27]), we hence conclude that $U(l)_{\mathbb{Q}}\simeq
U(l^{\prime})_{\mathbb{Q}}$ in $\operatorname{NNum}(k)_{\mathbb{Q}}$. Thanks
to Lemma 3.4, this implies that the $k$-algebras $l$ and $l^{\prime}$ are
isomorphic. Therefore, in the remainder of the proof we can (and will) assume
without loss of generality that $l=l^{\prime}$. Let $H\subseteq G$ be the
subgroup of index $2$ such that $\overline{k}^{H}=l$. As explained in the
proof of Lemma 3.6, the $\mathbb{F}_{2}$-algebras of endomorphisms
$\mathrm{End}_{\operatorname{NChow}(k)_{\mathbb{F}_{2}}}(U(A)_{\mathbb{F}_{2}})$
and
$\mathrm{End}_{\operatorname{NChow}(k)_{\mathbb{F}_{2}}}(U(A^{\prime})_{\mathbb{F}_{2}})$
can be identified with the group $\mathbb{F}_{2}$-algebra
$\mathbb{F}_{2}[G/H]$. Consequently, by definition of the category of
noncommutative radical motives, we obtain induced identifications:
(3.15)
$\displaystyle\mathrm{End}_{\mathrm{NRad}(k)_{\mathbb{F}_{2}}}(U(A)_{\mathbb{F}_{2}})\simeq\mathbb{F}_{2}$
$\displaystyle\mathrm{End}_{\mathrm{NRad}(k)_{\mathbb{F}_{2}}}(U(A^{\prime})_{\mathbb{F}_{2}})\simeq\mathbb{F}_{2}\,.$
Let us assume, for the sake of contradiction, that $U(A)\not\simeq U(A^{\prime})$ in
$\operatorname{NChow}(k)$. Thanks to Lemma 3.6, this implies that
$[A]\neq[A^{\prime}]$ in $\mathrm{Br}(l)$ or, equivalently, that
$\mathrm{ind}(A^{\mathrm{op}}\otimes_{l}A^{\prime})\neq 1$. Since
$\mathrm{ind}(A^{\mathrm{op}}\otimes_{l}A^{\prime})$ divides the product
$\mathrm{ind}(A^{\mathrm{op}})\mathrm{ind}(A^{\prime})$ and
$\mathrm{ind}(A^{\mathrm{op}})=\mathrm{ind}(A)$ and $\mathrm{ind}(A^{\prime})$
are powers of $2$, the index
$\mathrm{ind}(A^{\mathrm{op}}\otimes_{l}A^{\prime})$ is also a power of $2$.
Therefore, following the proof of Lemma 3.6, we observe that the composition
map
$\mathrm{Hom}_{\operatorname{NChow}(k)_{\mathbb{F}_{2}}}(U(A)_{\mathbb{F}_{2}},U(A^{\prime})_{\mathbb{F}_{2}})\times\mathrm{Hom}_{\operatorname{NChow}(k)_{\mathbb{F}_{2}}}(U(A^{\prime})_{\mathbb{F}_{2}},U(A)_{\mathbb{F}_{2}})\longrightarrow\mathrm{Hom}_{\operatorname{NChow}(k)_{\mathbb{F}_{2}}}(U(A)_{\mathbb{F}_{2}},U(A)_{\mathbb{F}_{2}})$
is equal to zero; similarly with $A$ and $A^{\prime}$ interchanged. By
definition of the category of noncommutative radical motives, this hence
implies that
(3.16)
$\mathrm{Hom}_{\mathrm{NRad}(k)_{\mathbb{F}_{2}}}(U(A)_{\mathbb{F}_{2}},U(A^{\prime})_{\mathbb{F}_{2}})=\mathrm{Hom}_{\mathrm{NRad}(k)_{\mathbb{F}_{2}}}(U(A^{\prime})_{\mathbb{F}_{2}},U(A)_{\mathbb{F}_{2}})=0\,.$
Thanks to Lemma 3.10, we have isomorphisms $N\\!\\!M_{\mathbb{F}_{2}}\simeq
M\\!\\!N\oplus U(A)^{\oplus m}_{\mathbb{F}_{2}}$ and
$N\\!\\!M_{\mathbb{F}_{2}}\simeq M\\!\\!N^{\prime}\oplus U(A^{\prime})^{\oplus
m^{\prime}}_{\mathbb{F}_{2}}$ in the category
$\mathrm{NRad}(k)_{\mathbb{F}_{2}}$ for some integers $m,m^{\prime}\geq 0$,
where $M\\!\\!N$, resp. $M\\!\\!N^{\prime}$, is a noncommutative radical
motive which does not contain $U(A)_{\mathbb{F}_{2}}$, resp.
$U(A^{\prime})_{\mathbb{F}_{2}}$, as a direct summand. Note that, thanks to
(3.16), these direct sum decompositions also yield a direct sum decomposition
$N\\!\\!M_{\mathbb{F}_{2}}\simeq M\\!\\!N^{\prime\prime}\oplus U(A)^{\oplus
m}_{\mathbb{F}_{2}}\oplus U(A^{\prime})^{\oplus m^{\prime}}_{\mathbb{F}_{2}}$
in $\mathrm{NRad}(k)_{\mathbb{F}_{2}}$, where $M\\!\\!N^{\prime\prime}$ is a noncommutative radical motive which contains neither $U(A)_{\mathbb{F}_{2}}$ nor $U(A^{\prime})_{\mathbb{F}_{2}}$ as a direct summand. Consequently,
the above isomorphism (3.14) can be rewritten as follows:
(3.17) $M\\!\\!N^{\prime\prime}\oplus U(B)_{\mathbb{F}_{2}}^{\oplus n}\oplus
U(A)^{\oplus(m+1)}_{\mathbb{F}_{2}}\oplus U(A^{\prime})^{\oplus
m^{\prime}}_{\mathbb{F}_{2}}\simeq M\\!\\!N^{\prime\prime}\oplus
U(B^{\prime})_{\mathbb{F}_{2}}^{\oplus n}\oplus U(A)_{\mathbb{F}_{2}}^{\oplus
m}\oplus U(A^{\prime})^{\oplus(m^{\prime}+1)}_{\mathbb{F}_{2}}\,.$
Now, by combining the above computations (3.15)-(3.16) with Lemma 3.8 and with
the fact that $M\\!\\!N^{\prime\prime}$ contains neither $U(A)_{\mathbb{F}_{2}}$ nor $U(A^{\prime})_{\mathbb{F}_{2}}$ as a direct
summand, we observe that the above isomorphism (3.17) restricts to an
isomorphism $U(A)^{\oplus(m+1)}_{\mathbb{F}_{2}}\simeq U(A)^{\oplus
m}_{\mathbb{F}_{2}}$. By applying the functor
$\mathrm{Hom}_{\mathrm{NRad}(k)_{\mathbb{F}_{2}}}(U(A)_{\mathbb{F}_{2}},-)$ to
this latter isomorphism, we hence obtain an induced isomorphism
$\mathbb{F}_{2}^{\oplus(m+1)}\simeq\mathbb{F}_{2}^{\oplus m}$ of
$\mathbb{F}_{2}$-vector spaces, which is a contradiction. Therefore, we
conclude that $U(A)\simeq U(A^{\prime})$ in $\operatorname{NChow}(k)$.
###### Remark 3.18 (Motivation for noncommutative radical motives).
The category of noncommutative numerical motives has several good properties.
In particular, it is abelian semi-simple. However, given a field extension
$l/k$ of degree $2$ and a central simple $l$-algebra $A$, we observe that
$U(A)_{\mathbb{F}_{2}}$ is isomorphic to the zero object in the category
$\operatorname{NNum}(k)_{\mathbb{F}_{2}}$. Consequently, the proof of Theorem
3.3 breaks down when the category $\mathrm{NRad}(k)_{\mathbb{F}_{2}}$ is
replaced by the category $\operatorname{NNum}(k)_{\mathbb{F}_{2}}$. This was
our main motivation for the use of the category of noncommutative radical
motives (instead of the category of noncommutative numerical motives), which
“interpolates” between noncommutative Chow motives and noncommutative
numerical motives.
## 4\. Proof of Theorem 1.22
Proof of item (i). Let $X$ and $Y$ be two varieties. As proved in [5, §2 Cor.
3.5.12 b)], if $[X]=[Y]$ in the Grothendieck ring of varieties
$K_{0}\mathrm{Var}(k)$, then $\mathrm{dim}(X)=\mathrm{dim}(Y)$. Consequently,
if $[\mathrm{Iv}(A,\ast)]=[\mathrm{Iv}(A^{\prime},\ast^{\prime})]$, we
conclude that
$\mathrm{dim}(\mathrm{Iv}(A,\ast))=\mathrm{dim}(\mathrm{Iv}(A^{\prime},\ast^{\prime}))$.
Using the fact that $\mathrm{dim}(\mathrm{Iv}(A,\ast))=\mathrm{deg}(A)-2$ and
$\mathrm{dim}(\mathrm{Iv}(A^{\prime},\ast^{\prime}))=\mathrm{deg}(A^{\prime})-2$,
this implies that $\mathrm{deg}(A)=\mathrm{deg}(A^{\prime})$.
Proof of item (ii). Following [32, Thm. 2.1 and Rk. 2.3], we have the following computation
$\displaystyle
U(\mathrm{Iv}(A,\ast))\simeq\begin{cases}U(k)^{\oplus(\mathrm{deg}(A)-2)}\oplus
U(C_{0}(A,\ast))&\mathrm{deg}(A)\,\,\mathrm{odd}\\\
U(k)^{\oplus(\mathrm{deg}(A)/2-1)}\oplus
U(A)^{\oplus(\mathrm{deg}(A)/2-1)}\oplus
U(C_{0}(A,\ast))&\mathrm{deg}(A)\,\,\mathrm{even}\end{cases}$
in the category $\operatorname{NChow}(k)$; similarly with $(A,\ast)$ replaced
by $(A^{\prime},\ast^{\prime})$. If
$[\mathrm{Iv}(A,\ast)]=[\mathrm{Iv}(A^{\prime},\ast^{\prime})]$ in
$K_{0}\mathrm{Var}(k)$, then
$\mu_{\mathrm{nc}}(\mathrm{Iv}(A,\ast))=\mu_{\mathrm{nc}}(\mathrm{Iv}(A^{\prime},\ast^{\prime}))$
in $K_{0}(\mathrm{NChow}(k))$, where $\mu_{\mathrm{nc}}$ is the motivic
measure of Proposition 2.5. Thanks to the definition of the Grothendieck ring
$K_{0}(\mathrm{NChow}(k))$, this implies that there exists a noncommutative
Chow motive $N\\!\\!M\in\operatorname{NChow}(k)$ such that
(4.1) $\displaystyle\quad\begin{cases}N\\!\\!M\oplus U(k)^{\oplus d}\oplus
U(C_{0}(A,\ast))\simeq N\\!\\!M\oplus U(k)^{\oplus d}\oplus
U(C_{0}(A^{\prime},\ast^{\prime}))&\mathrm{deg}(A)\,\,\mathrm{odd}\\\
N\\!\\!M\oplus U(k)^{\oplus d}\oplus U(A)^{\oplus d}\oplus
U(C_{0}(A,\ast))\simeq N\\!\\!M\oplus U(k)^{\oplus d}\oplus
U(A^{\prime})^{\oplus d}\oplus
U(C_{0}(A^{\prime},\ast^{\prime}))&\mathrm{deg}(A)\,\,\mathrm{even}\,,\end{cases}$
where $d:=\mathrm{deg}(A)-2$ when $\mathrm{deg}(A)$ is odd and
$d:=\mathrm{deg}(A)/2-1$ when $\mathrm{deg}(A)$ is even. Therefore, the proof
follows now from Lemma 4.2 below.
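To fix ideas, here are the two branches of this computation in low degree (direct specializations of the displayed formula, not additional claims): $\displaystyle\mathrm{deg}(A)=5\colon\,U(\mathrm{Iv}(A,\ast))\simeq U(k)^{\oplus 3}\oplus U(C_{0}(A,\ast))\qquad\mathrm{deg}(A)=6\colon\,U(\mathrm{Iv}(A,\ast))\simeq U(k)^{\oplus 2}\oplus U(A)^{\oplus 2}\oplus U(C_{0}(A,\ast))\,,$ so that $d=3$, resp. $d=2$, in (4.1).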
###### Lemma 4.2.
Let $(A,\ast)$ and $(A^{\prime},\ast^{\prime})$ be two central simple
$k$-algebras with involutions of orthogonal type such that $\mathrm{deg}(A)$
and $\mathrm{deg}(A^{\prime})$ are even. Given any noncommutative Chow motive
$N\\!\\!M\in\operatorname{NChow}(k)$ and integer $n\geq 0$, we have the
following implication:
$\displaystyle N\\!\\!M\oplus U(A)^{\oplus n}\oplus U(C_{0}(A,\ast))\simeq
N\\!\\!M\oplus U(A^{\prime})^{\oplus n}\oplus
U(C_{0}(A^{\prime},\ast^{\prime}))$ $\displaystyle\Rightarrow$
$\displaystyle\big{(}\delta(A,\ast)\in(k^{\times})^{2}\Leftrightarrow\delta(A^{\prime},\ast^{\prime})\in(k^{\times})^{2}\big{)}\,.$
###### Proof.
Note first that we have an induced isomorphism
(4.3) $N\\!\\!M_{\mathbb{Q}}\oplus U(A)^{\oplus n}_{\mathbb{Q}}\oplus
U(C_{0}(A,\ast))_{\mathbb{Q}}\simeq N\\!\\!M_{\mathbb{Q}}\oplus
U(A^{\prime})^{\oplus n}_{\mathbb{Q}}\oplus
U(C_{0}(A^{\prime},\ast^{\prime}))_{\mathbb{Q}}\,\,\mathrm{in}\,\,\operatorname{NNum}(k)_{\mathbb{Q}}\,.$
Moreover, as proved in [34, Thm. 2.1], we have isomorphisms
$U(A)_{\mathbb{Q}}\simeq U(k)_{\mathbb{Q}}$ and
$U(A^{\prime})_{\mathbb{Q}}\simeq U(k)_{\mathbb{Q}}$ in
$\operatorname{NChow}(k)_{\mathbb{Q}}$. Consequently, since the category
$\operatorname{NNum}(k)_{\mathbb{Q}}$ is abelian semi-simple (consult [31,
Thm. 4.27]), we conclude from (4.3) that $U(C_{0}(A,\ast))_{\mathbb{Q}}\simeq
U(C_{0}(A^{\prime},\ast^{\prime}))_{\mathbb{Q}}$ in
$\operatorname{NNum}(k)_{\mathbb{Q}}$. Let us now prove the implication
$\delta(A,\ast)\in(k^{\times})^{2}\Rightarrow\delta(A^{\prime},\ast^{\prime})\in(k^{\times})^{2}$;
we will assume that $\delta(A,\ast)\in(k^{\times})^{2}$ and, by way of contradiction, that
$\delta(A^{\prime},\ast^{\prime})\not\in(k^{\times})^{2}$. These assumptions
imply that $C_{0}(A,\ast)\simeq C^{+}_{0}(A,\ast)\times C^{-}_{0}(A,\ast)$ is
a product of two central simple $k$-algebras and that
$C_{0}(A^{\prime},\ast^{\prime})$ is a central simple algebra over its center
$l^{\prime}:=k(\sqrt{\delta(A^{\prime},\ast^{\prime})})$. Hence, we obtain an
isomorphism $U(C_{0}(A,\ast))\simeq U(C^{+}_{0}(A,\ast))\oplus
U(C^{-}_{0}(A,\ast))$ in the category $\operatorname{NChow}(k)$. As proved in
[34, Thm. 2.1], we have moreover isomorphisms
$U(C^{+}_{0}(A,\ast))_{\mathbb{Q}}\simeq U(k)_{\mathbb{Q}}$,
$U(C^{-}_{0}(A,\ast))_{\mathbb{Q}}\simeq U(k)_{\mathbb{Q}}$ and
$U(C_{0}(A^{\prime},\ast^{\prime}))_{\mathbb{Q}}\simeq
U(l^{\prime})_{\mathbb{Q}}$ in the category
$\operatorname{NChow}(k)_{\mathbb{Q}}$. Therefore, we obtain from the above
considerations an isomorphism $U(k)_{\mathbb{Q}}\oplus U(k)_{\mathbb{Q}}\simeq
U(l^{\prime})_{\mathbb{Q}}$ in $\operatorname{NNum}(k)_{\mathbb{Q}}$ and,
consequently, an induced isomorphism of $\mathbb{Q}$-vector spaces:
(4.4)
$\mathrm{Hom}_{\operatorname{NNum}(k)_{\mathbb{Q}}}(U(k)_{\mathbb{Q}},U(k)_{\mathbb{Q}}\oplus
U(k)_{\mathbb{Q}})\simeq\mathrm{Hom}_{\operatorname{NNum}(k)_{\mathbb{Q}}}(U(k)_{\mathbb{Q}},U(l^{\prime})_{\mathbb{Q}})\,.$
On the one hand, the left-hand side of (4.4) is isomorphic to
$\mathbb{Q}\oplus\mathbb{Q}$. On the other hand, since the field extension
$l^{\prime}/k$ is of degree $2$, the following composition map
$\mathrm{Hom}_{\operatorname{NChow}(k)_{\mathbb{Q}}}(U(k)_{\mathbb{Q}},U(l^{\prime})_{\mathbb{Q}})\times\mathrm{Hom}_{\operatorname{NChow}(k)_{\mathbb{Q}}}(U(l^{\prime})_{\mathbb{Q}},U(k)_{\mathbb{Q}})\longrightarrow\mathrm{Hom}_{\operatorname{NChow}(k)_{\mathbb{Q}}}(U(k)_{\mathbb{Q}},U(k)_{\mathbb{Q}})$
corresponds to the bilinear pairing
$\mathbb{Q}\times\mathbb{Q}\to\mathbb{Q},(\alpha,\beta)\mapsto 2\alpha\beta$.
By definition of the category $\operatorname{NNum}(k)_{\mathbb{Q}}$, this
implies that the right-hand side of (4.4) is isomorphic to $\mathbb{Q}$, which
is a contradiction. The proof of the converse implication is similar; simply
interchange $(A,\ast)$ and $(A^{\prime},\ast^{\prime})$. ∎
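The rank comparison behind this contradiction can be made explicit. Assuming the standard description of morphisms in $\operatorname{NChow}(k)_{\mathbb{Q}}$ as rationalized Grothendieck groups of bimodules (consult §2), we have $\mathrm{Hom}_{\operatorname{NChow}(k)_{\mathbb{Q}}}(U(k)_{\mathbb{Q}},U(l^{\prime})_{\mathbb{Q}})\simeq K_{0}(l^{\prime})_{\mathbb{Q}}\simeq\mathbb{Q}$. Since the pairing $(\alpha,\beta)\mapsto 2\alpha\beta$ has trivial left and right kernels, no morphism becomes numerically trivial, and hence $\dim_{\mathbb{Q}}\mathrm{Hom}_{\operatorname{NNum}(k)_{\mathbb{Q}}}(U(k)_{\mathbb{Q}},U(l^{\prime})_{\mathbb{Q}})=1$, whereas the left-hand side of (4.4) has $\dim_{\mathbb{Q}}=2$.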
Proof of item (iii). We start with the following cancellation result:
###### Proposition 4.5.
Let $(A,\ast)$ and $(A^{\prime},\ast^{\prime})$ be two central simple
$k$-algebras with involutions of orthogonal type such that $\mathrm{deg}(A)$
and $\mathrm{deg}(A^{\prime})$ are even. Given any noncommutative Chow motive
$N\\!\\!M\in\operatorname{NChow}(k)$ and integer $n\geq 0$, we have the
following implication:
$\displaystyle N\\!\\!M\oplus U(A)^{\oplus n}\oplus U(C_{0}(A,\ast))\simeq
N\\!\\!M\oplus U(A^{\prime})^{\oplus n}\oplus
U(C_{0}(A^{\prime},\ast^{\prime}))$ $\displaystyle\Rightarrow$ $\displaystyle
U(C_{0}(A,\ast))\simeq U(C_{0}(A^{\prime},\ast^{\prime}))\,.$
In the particular case where $n=1$, $\mathrm{deg}(A)\equiv 0$ (mod $4$),
$\mathrm{deg}(A^{\prime})\equiv 0$ (mod $4$), and
$\delta(A,\ast)\in(k^{\times})^{2}$ (or, equivalently,
$\delta(A^{\prime},\ast^{\prime})\in(k^{\times})^{2}$), we assume moreover
that $A$ and $A^{\prime}$ are split.
###### Proof.
We consider first the case where $\delta(A,\ast)\in(k^{\times})^{2}$. Thanks
to Lemma 4.2, we also have
$\delta(A^{\prime},\ast^{\prime})\in(k^{\times})^{2}$. This implies that
$C_{0}(A,\ast)\simeq C^{+}_{0}(A,\ast)\times C^{-}_{0}(A,\ast)$ and
$C_{0}(A^{\prime},\ast^{\prime})\simeq
C^{+}_{0}(A^{\prime},\ast^{\prime})\times C^{-}_{0}(A^{\prime},\ast^{\prime})$
are products of central simple $k$-algebras. Hence, we obtain induced
isomorphisms
(4.6) $\displaystyle U(C_{0}(A,\ast))\simeq U(C_{0}^{+}(A,\ast))\oplus
U(C_{0}^{-}(A,\ast))$ $\displaystyle U(C_{0}(A^{\prime},\ast^{\prime}))\simeq
U(C_{0}^{+}(A^{\prime},\ast^{\prime}))\oplus
U(C_{0}^{-}(A^{\prime},\ast^{\prime}))\,,$
which lead to the following isomorphism of noncommutative Chow motives:
$N\\!\\!M\oplus U(A)^{\oplus n}\oplus U(C^{+}_{0}(A,\ast))\oplus
U(C^{-}_{0}(A,\ast))\simeq N\\!\\!M\oplus U(A^{\prime})^{\oplus n}\oplus
U(C^{+}_{0}(A^{\prime},\ast^{\prime}))\oplus
U(C^{-}_{0}(A^{\prime},\ast^{\prime}))\,.$
Note that thanks to Proposition 3.1, the latter isomorphism further restricts
to an isomorphism
(4.7) $U(A)^{\oplus n}\oplus U(C^{+}_{0}(A,\ast))\oplus
U(C^{-}_{0}(A,\ast))\simeq U(A^{\prime})^{\oplus n}\oplus
U(C^{+}_{0}(A^{\prime},\ast^{\prime}))\oplus
U(C^{-}_{0}(A^{\prime},\ast^{\prime}))\,\,\mathrm{in}\,\,\operatorname{NChow}(k)\,.$
When $\mathrm{deg}(A)\equiv 2$ (mod $4$), we have the following relations in
the Brauer group:
(4.8) $\displaystyle 2[C^{+}_{0}(A,\ast)]=[A]$ $\displaystyle
3[C^{+}_{0}(A,\ast)]=[C^{-}_{0}(A,\ast)]$ $\displaystyle
4[C^{+}_{0}(A,\ast)]=[k]\,.$
In the same vein, when $\mathrm{deg}(A)\equiv 0$ (mod $4$), we have the
following relations in the Brauer group:
(4.9) $\displaystyle 2[C^{+}_{0}(A,\ast)]=[k]$ $\displaystyle 2[C^{-}_{0}(A,\ast)]=[k]$
$\displaystyle[C^{+}_{0}(A,\ast)]+[C^{-}_{0}(A,\ast)]=[A]\,.$
This implies, in particular, that the Brauer classes $[C^{+}_{0}(A,\ast)]$,
$[C^{-}_{0}(A,\ast)]$, and $[A]$, belong to $\mathrm{Br}(k)\\{2\\}$; similarly
with $A$ and $\ast$ replaced by $A^{\prime}$ and $\ast^{\prime}$.
Consequently, by applying Theorem 2.2 to the above isomorphism (4.7), we
conclude that the following two sets of Brauer classes are the same up to
permutation:
$\displaystyle\\{\underbrace{[A],\ldots,[A]}_{n\text{-}\mathrm{copies}},[C_{0}^{+}(A,\ast)],[C_{0}^{-}(A,\ast)]\\}$
$\displaystyle\\{\underbrace{[A^{\prime}],\ldots,[A^{\prime}]}_{n\text{-}\mathrm{copies}},[C_{0}^{+}(A^{\prime},\ast^{\prime})],[C_{0}^{-}(A^{\prime},\ast^{\prime})]\\}\,.$
When $n\neq 1$, the above relations (4.8)-(4.9) imply that
(4.10)
$\displaystyle\begin{cases}[C_{0}^{+}(A,\ast)]=[C_{0}^{+}(A^{\prime},\ast^{\prime})]\\\
[C_{0}^{-}(A,\ast)]=[C_{0}^{-}(A^{\prime},\ast^{\prime})]\end{cases}$
$\displaystyle\mathrm{or}$
$\displaystyle\begin{cases}[C_{0}^{+}(A,\ast)]=[C_{0}^{-}(A^{\prime},\ast^{\prime})]\\\
[C_{0}^{-}(A,\ast)]=[C_{0}^{+}(A^{\prime},\ast^{\prime})]\end{cases}\,.$
Similarly, when $n=1$ and $\mathrm{deg}(A)\equiv 2$ (mod $4$) or
$\mathrm{deg}(A^{\prime})\equiv 2$ (mod $4$), the above relations (4.8)-(4.9)
yield the equalities (4.10). The same happens in the particular case where
$n=1$, $\mathrm{deg}(A)\equiv 0$ (mod $4$), $\mathrm{deg}(A^{\prime})\equiv 0$
(mod $4$), and $A$ and $A^{\prime}$ are split. Making use of Theorem 2.1, we
hence conclude from the above isomorphisms (4.6) that in all these different
cases we have $U(C_{0}(A,\ast))\simeq U(C_{0}(A^{\prime},\ast^{\prime}))$ in
$\operatorname{NChow}(k)$. We now consider the case where
$\delta(A,\ast)\notin(k^{\times})^{2}$. Thanks to Lemma 4.2, we also have
$\delta(A^{\prime},\ast^{\prime})\notin(k^{\times})^{2}$. This implies that
$C_{0}(A,\ast)$ and $C_{0}(A^{\prime},\ast^{\prime})$ are central simple
algebras over their centers $l:=k(\sqrt{\delta(A,\ast)})$ and
$l^{\prime}:=k(\sqrt{\delta(A^{\prime},\ast^{\prime})})$, respectively. Note
that since the field extension $l/k$ is of degree $2$ and
$\mathrm{dim}_{k}(C_{0}(A,\ast))=2^{\mathrm{deg}(A)-1}$, the index of the
central simple $l$-algebra $C_{0}(A,\ast)$ is a power of $2$; similarly for
the central simple $l^{\prime}$-algebra $C_{0}(A^{\prime},\ast^{\prime})$.
Consequently, the proof follows now from Theorem 3.3. ∎
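The combinatorial step in the split case, namely that for $n\neq 1$ the multiset of Brauer classes determines the unordered pair $\{[C_{0}^{+}(A,\ast)],[C_{0}^{-}(A,\ast)]\}$ while for $n=1$ it does not, can be checked by brute force in a small model. The following sketch is an illustration only, not part of the proof: it models the $2$-torsion classes occurring in the $\mathrm{deg}(A)\equiv 0$ (mod $4$) case as elements of the hypothetical ambient group $(\mathbb{Z}/2)^{2}$, using the relations $2[C_{0}^{+}]=2[C_{0}^{-}]=0$ and $[C_{0}^{+}]+[C_{0}^{-}]=[A]$:

```python
from collections import Counter
from itertools import product

# Illustration only: model 2-torsion Brauer classes as elements of (Z/2)^2.
# In the deg(A) = 0 (mod 4) case, [C0+] and [C0-] are 2-torsion and
# [C0+] + [C0-] = [A], so each side of the comparison is the multiset
# consisting of n copies of [A] together with [C0+] and [C0-].

G = list(product([0, 1], repeat=2))  # the group (Z/2)^2, written additively

def add(u, v):
    return (u[0] ^ v[0], u[1] ^ v[1])

def multiset(a, b, n):
    """n copies of [A] = a + b, plus the classes [C0+] = a and [C0-] = b."""
    return Counter([add(a, b)] * n + [a, b])

def counterexamples(n):
    """Pairs with equal multisets whose unordered pairs {a, b} differ."""
    return [((a, b), (c, d))
            for a, b, c, d in product(G, repeat=4)
            if multiset(a, b, n) == multiset(c, d, n)
            and Counter([a, b]) != Counter([c, d])]

# For n >= 2 there are no counterexamples, so the multiset determines the
# pair {[C0+], [C0-]}; for n = 1 ambiguities exist, which is why
# Proposition 4.5 adds the split hypothesis in that case.
```

For instance, with $a=(1,0)$, $b=(0,1)$ and $a^{\prime}=(1,1)$, $b^{\prime}=(0,1)$ one obtains the same multiset $\{(1,1),(1,0),(0,1)\}$ for $n=1$, although the pairs $\{a,b\}$ and $\{a^{\prime},b^{\prime}\}$ differ.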
###### Corollary 4.11.
Under the assumptions of Proposition 4.5, we have the following implication:
(4.12) $\displaystyle N\\!\\!M\oplus U(A)^{\oplus n}\oplus
U(C_{0}(A,\ast))\simeq N\\!\\!M\oplus U(A^{\prime})^{\oplus n}\oplus
U(C_{0}(A^{\prime},\ast^{\prime}))$ $\displaystyle\Rightarrow$ $\displaystyle
U(A)\simeq U(A^{\prime})\,.$
###### Proof.
Proposition 4.5 yields an isomorphism $U(C_{0}(A,\ast))\simeq
U(C_{0}(A^{\prime},\ast^{\prime}))$. Therefore, by applying Proposition 3.1 to
the left-hand side of (4.12), we conclude that $U(A)^{\oplus n}\simeq
U(A^{\prime})^{\oplus n}$. Thanks to Theorems 2.1 and 2.2, this hence implies
that $U(A)\simeq U(A^{\prime})$ in $\operatorname{NChow}(k)$. ∎
Recall from the proof of item (ii) of Theorem 1.22 that if
$[\mathrm{Iv}(A,\ast)]=[\mathrm{Iv}(A^{\prime},\ast^{\prime})]$, then we
obtain the above computation (4.1) in the category of noncommutative Chow
motives. Consequently, by applying Proposition 3.1 to the above isomorphism
(4.1) when $\mathrm{deg}(A)$ is odd, we conclude that $U(C_{0}(A,\ast))\simeq
U(C_{0}(A^{\prime},\ast^{\prime}))$ in $\operatorname{NChow}(k)$. In the same
vein, by applying Proposition 4.5 to the above isomorphism (4.1) when
$\mathrm{deg}(A)$ is even, we conclude that $U(C_{0}(A,\ast))\simeq
U(C_{0}(A^{\prime},\ast^{\prime}))$ in $\operatorname{NChow}(k)$; note that
Proposition 4.5 can be applied because in the particular case where
$\mathrm{deg}(A)=4$ and $\delta(A,\ast)\in(k^{\times})^{2}$ (or, equivalently,
$\mathrm{deg}(A^{\prime})=4$ and
$\delta(A^{\prime},\ast^{\prime})\in(k^{\times})^{2}$), we assume moreover
that $(A,\ast)$ and $(A^{\prime},\ast^{\prime})$ satisfy condition $(\star)$.
Therefore, since $\mathrm{deg}(A)=\mathrm{deg}(A^{\prime})$, the proof follows
now from Proposition 4.13 below.
###### Proposition 4.13.
Let $(A,\ast)$ and $(A^{\prime},\ast^{\prime})$ be two central simple
$k$-algebras with involutions of orthogonal type such that
$\mathrm{deg}(A)=\mathrm{deg}(A^{\prime})$. Under these assumptions, we have
the following implication:
$\displaystyle U(C_{0}(A,\ast))\simeq
U(C_{0}(A^{\prime},\ast^{\prime}))\,\,\mathrm{in}\,\,\operatorname{NChow}(k)$
$\displaystyle\Rightarrow$ $\displaystyle C_{0}(A,\ast)\simeq
C_{0}(A^{\prime},\ast^{\prime})\,.$
###### Proof.
Recall that $\mathrm{dim}_{k}(C_{0}(A,\ast))=2^{\mathrm{deg}(A)-1}$ and
$\mathrm{dim}_{k}(C_{0}(A^{\prime},\ast^{\prime}))=2^{\mathrm{deg}(A^{\prime})-1}$.
Therefore, since $\mathrm{deg}(A)=\mathrm{deg}(A^{\prime})$, we have
$\mathrm{dim}_{k}(C_{0}(A,\ast))=\mathrm{dim}_{k}(C_{0}(A^{\prime},\ast^{\prime}))$.
We consider first the case where $\mathrm{deg}(A)$ is odd. In this case,
$C_{0}(A,\ast)$ and $C_{0}(A^{\prime},\ast^{\prime})$ are central simple
$k$-algebras. Hence, Theorem 2.1 implies that
$[C_{0}(A,\ast)]=[C_{0}(A^{\prime},\ast^{\prime})]$. Consequently, by
combining the equality
$\mathrm{dim}_{k}(C_{0}(A,\ast))=\mathrm{dim}_{k}(C_{0}(A^{\prime},\ast^{\prime}))$
with the Wedderburn theorem, we conclude that $C_{0}(A,\ast)\simeq
C_{0}(A^{\prime},\ast^{\prime})$. We now consider the case where
$\mathrm{deg}(A)$ is even and $\delta(A,\ast)\in(k^{\times})^{2}$. Thanks to
Lemma 4.2, we also have $\delta(A^{\prime},\ast^{\prime})\in(k^{\times})^{2}$.
This implies that $C_{0}(A,\ast)\simeq C^{+}_{0}(A,\ast)\times
C^{-}_{0}(A,\ast)$ and $C_{0}(A^{\prime},\ast^{\prime})\simeq
C^{+}_{0}(A^{\prime},\ast^{\prime})\times C^{-}_{0}(A^{\prime},\ast^{\prime})$
are products of central simple $k$-algebras. Hence, we obtain induced
isomorphisms
$\displaystyle U(C_{0}(A,\ast))\simeq U(C_{0}^{+}(A,\ast))\oplus
U(C_{0}^{-}(A,\ast))$ $\displaystyle U(C_{0}(A^{\prime},\ast^{\prime}))\simeq
U(C_{0}^{+}(A^{\prime},\ast^{\prime}))\oplus
U(C_{0}^{-}(A^{\prime},\ast^{\prime}))\,,$
which lead to the following isomorphism
(4.14) $U(C^{+}_{0}(A,\ast))\oplus U(C^{-}_{0}(A,\ast))\simeq
U(C^{+}_{0}(A^{\prime},\ast^{\prime}))\oplus
U(C^{-}_{0}(A^{\prime},\ast^{\prime}))\,\,\mathrm{in}\,\,\operatorname{NChow}(k)\,.$
As explained in the proof of Proposition 4.5, the Brauer classes
$[C_{0}^{+}(A,\ast)]$ and $[C_{0}^{-}(A,\ast)]$ belong to
$\mathrm{Br}(k)\\{2\\}$; similarly with $(A,\ast)$ replaced by
$(A^{\prime},\ast^{\prime})$. Therefore, by applying Theorem 2.2 to the above
isomorphism (4.14), we conclude that the sets of Brauer classes
$\\{[C_{0}^{+}(A,\ast)],[C_{0}^{-}(A,\ast)]\\}$ and
$\\{[C_{0}^{+}(A^{\prime},\ast^{\prime})],[C_{0}^{-}(A^{\prime},\ast^{\prime})]\\}$
are the same up to permutation. Thanks to the assumption
$\mathrm{deg}(A)=\mathrm{deg}(A^{\prime})$, to the following equalities
$\displaystyle\mathrm{dim}_{k}(C^{+}_{0}(A,\ast))=\mathrm{dim}_{k}(C^{-}_{0}(A,\ast))=2^{\mathrm{deg}(A)-2}$
$\displaystyle\mathrm{dim}_{k}(C^{+}_{0}(A^{\prime},\ast^{\prime}))=\mathrm{dim}_{k}(C^{-}_{0}(A^{\prime},\ast^{\prime}))=2^{\mathrm{deg}(A^{\prime})-2}\,,$
and to the Wedderburn theorem, this implies that
$\displaystyle\begin{cases}C_{0}^{+}(A,\ast)\simeq
C_{0}^{+}(A^{\prime},\ast^{\prime})\\\ C_{0}^{-}(A,\ast)\simeq
C_{0}^{-}(A^{\prime},\ast^{\prime})\end{cases}$ $\displaystyle\mathrm{or}$
$\displaystyle\begin{cases}C_{0}^{+}(A,\ast)\simeq
C_{0}^{-}(A^{\prime},\ast^{\prime})\\\ C_{0}^{-}(A,\ast)\simeq
C_{0}^{+}(A^{\prime},\ast^{\prime})\end{cases}\,.$
In both cases, we have an isomorphism $C_{0}(A,\ast)\simeq
C_{0}(A^{\prime},\ast^{\prime})$. Finally, we consider the case where
$\mathrm{deg}(A)$ is even and $\delta(A,\ast)\notin(k^{\times})^{2}$. Thanks
to Lemma 4.2, we also have
$\delta(A^{\prime},\ast^{\prime})\notin(k^{\times})^{2}$. This implies that
$C_{0}(A,\ast)$ and $C_{0}(A^{\prime},\ast^{\prime})$ are central simple algebras over
their centers $l:=k(\sqrt{\delta(A,\ast)})$ and
$l^{\prime}:=k(\sqrt{\delta(A^{\prime},\ast^{\prime})})$, respectively. Hence,
we obtain an induced isomorphism $U(C_{0}(A,\ast))_{\mathbb{Q}}\simeq
U(C_{0}(A^{\prime},\ast^{\prime}))_{\mathbb{Q}}$ in
$\operatorname{NChow}(k)_{\mathbb{Q}}$. As proved in [34, Thm. 2.1], we have
isomorphisms $U(C_{0}(A,\ast))_{\mathbb{Q}}\simeq U(l)_{\mathbb{Q}}$ and
$U(C_{0}(A^{\prime},\ast^{\prime}))_{\mathbb{Q}}\simeq
U(l^{\prime})_{\mathbb{Q}}$ in $\operatorname{NChow}(k)_{\mathbb{Q}}$. Thanks
to Lemma 3.4, this implies that $l\simeq l^{\prime}$. Therefore, we can
consider the central simple $l$-algebra
$\mathbf{C}_{0}:=C_{0}(A^{\prime},\ast^{\prime})\otimes_{l^{\prime}}l$. Note
that since the $k$-algebras $C_{0}(A^{\prime},\ast^{\prime})$ and
$\mathbf{C}_{0}$ are isomorphic, we have an isomorphism
$U(C_{0}(A^{\prime},\ast^{\prime}))\simeq U(\mathbf{C}_{0})$ in
$\operatorname{NChow}(k)$. Thanks to Lemma 3.6, this latter isomorphism,
combined with the hypothesis that $U(C_{0}(A,\ast))\simeq
U(C_{0}(A^{\prime},\ast^{\prime}))$ in $\operatorname{NChow}(k)$, implies that
$[C_{0}(A,\ast)]=[\mathbf{C}_{0}]$ in $\mathrm{Br}(l)$. Consequently, making
use of the following equalities
$\mathrm{dim}_{l}(C_{0}(A,\ast))=\mathrm{dim}_{k}(C_{0}(A,\ast))/2=\mathrm{dim}_{k}(C_{0}(A^{\prime},\ast^{\prime}))/2=\mathrm{dim}_{k}(\mathbf{C}_{0})/2=\mathrm{dim}_{l}(\mathbf{C}_{0})$
and of the Wedderburn theorem, we conclude that the $l$-algebras
$C_{0}(A,\ast)$ and $\mathbf{C}_{0}$ are isomorphic. This implies that
$C_{0}(A,\ast)\simeq C_{0}(A^{\prime},\ast^{\prime})$. ∎
Proof of item (iv). Thanks to item (i) of Theorem 1.22, we have
$\mathrm{deg}(A)=\mathrm{deg}(A^{\prime})$. Moreover, recall from Example 1.18
that when $\mathrm{deg}(A)=\mathrm{deg}(A^{\prime})$ is odd, both $A$ and
$A^{\prime}$ are isomorphic to
$\mathrm{M}_{\mathrm{deg}(A)\times\mathrm{deg}(A)}(k)$. Hence, it suffices to
consider the case where $\mathrm{deg}(A)=\mathrm{deg}(A^{\prime})$ is even.
Recall from item (ii) of Theorem 1.22 that if
$[\mathrm{Iv}(A,\ast)]=[\mathrm{Iv}(A^{\prime},\ast^{\prime})]$, then we
obtain the above computation (4.1) in the category of noncommutative Chow
motives. By applying Corollary 4.11 to the above isomorphism (4.1), we hence
conclude that $U(A)\simeq U(A^{\prime})$ in $\operatorname{NChow}(k)$; note
that Corollary 4.11 can be applied because in the particular case where
$\mathrm{deg}(A)=4$ and $\delta(A,\ast)\in(k^{\times})^{2}$ (or, equivalently,
$\mathrm{deg}(A^{\prime})=4$ and
$\delta(A^{\prime},\ast^{\prime})\in(k^{\times})^{2}$), we assume moreover
that $(A,\ast)$ and $(A^{\prime},\ast^{\prime})$ satisfy condition $(\star)$.
Therefore, since $\mathrm{deg}(A)=\mathrm{deg}(A^{\prime})$, it follows now
from Theorem 2.1 and from the Wedderburn theorem that $A\simeq A^{\prime}$.
Proof of item (v). We start with a general result of independent interest:
###### Proposition 4.15.
Let $q$ and $q^{\prime}$ be two non-degenerate quadratic forms such that
$\mathrm{dim}(q)=\mathrm{dim}(q^{\prime})$. If
$\mu_{\mathrm{c}}([Q_{q}])=\mu_{\mathrm{c}}([Q_{q^{\prime}}])$ in
$K_{0}(\operatorname{Num}(k))$, then $q$ and $q^{\prime}$ are both isotropic
or both anisotropic.
###### Proof.
Let $l/k$ be a field extension making the quadratic form $q\otimes_{k}l$
isotropic. As explained in [1, §4.2.3], extension of scalars along the
inclusion $k\subset l$ gives rise to the commutative diagram:
$\begin{array}{ccc}\mathrm{Chow}(k)&\xrightarrow{-\times_{k}l}&\mathrm{Chow}(l)\\ \downarrow&&\downarrow\\ \operatorname{Num}(k)&\xrightarrow{-\times_{k}l}&\operatorname{Num}(l)\,.\end{array}$
This leads to the following induced commutative diagram of abelian groups
(4.16)
$\begin{array}{ccc}\mathrm{Hom}_{\mathrm{Chow}(k)}(\mathfrak{h}(\mathrm{pt})(-d),\mathfrak{h}(Q_{q}))&\xrightarrow{\phi_{q}}&\mathrm{Hom}_{\mathrm{Chow}(l)}(\mathfrak{h}(\mathrm{pt})(-d),\mathfrak{h}(Q_{q}\times_{k}l))\\ \downarrow&&\downarrow\\ \mathrm{Hom}_{\operatorname{Num}(k)}(\mathfrak{h}(\mathrm{pt})(-d),\mathfrak{h}(Q_{q}))&\xrightarrow{\varphi_{q}}&\mathrm{Hom}_{\operatorname{Num}(l)}(\mathfrak{h}(\mathrm{pt})(-d),\mathfrak{h}(Q_{q}\times_{k}l))\,,\end{array}$
where $d:=\mathrm{dim}(Q_{q})=\mathrm{dim}(q)-2$. Since the quadratic form
$q\otimes_{k}l$ is isotropic, we have an orthogonal sum decomposition
$q\otimes_{k}l=\underline{q}\perp\mathbb{H}$. Consequently, as proved by Rost
in [26, Prop. 2], the Chow motive
$\mathfrak{h}(Q_{q}\times_{k}l)=\mathfrak{h}(Q_{q\otimes_{k}l})$ admits the
direct sum decomposition
$\mathfrak{h}(\mathrm{pt})\oplus\mathfrak{h}(Q_{\underline{q}})(-1)\oplus\mathbb{Z}(-d)$
in $\mathrm{Chow}(l)$. Since $\mathrm{dim}(Q_{\underline{q}})=d-2$, this
decomposition implies that the upper and lower right corners of the
commutative diagram (4.16) are both isomorphic to $\mathbb{Z}$ and that the
(vertical) homomorphism between them is the identity. Note that by definition
(consult §2), the upper and lower left corners of the commutative diagram
(4.16) are isomorphic to ${\mathcal{Z}}^{d}_{\sim\mathrm{rat}}(Q_{q})$ and
${\mathcal{Z}}^{d}_{\sim\mathrm{num}}(Q_{q})$, respectively. Note also that
these are the abelian groups of $0$-cycles on $Q_{q}$ up to rational
equivalence and numerical equivalence, respectively. Following Karpenko [9,
§2], under the above identifications, the homomorphism $\phi_{q}$ corresponds
to the degree map
$\mathrm{deg}\colon{\mathcal{Z}}^{d}_{\sim\mathrm{rat}}(Q_{q})\to\mathbb{Z}$.
Consequently, since two $0$-cycles on $Q_{q}$ are numerically equivalent if and only if they have the same degree, we conclude that
$\mathrm{cok}(\varphi_{q})\simeq\mathrm{cok}(\phi_{q})$. All the above holds
mutatis mutandis with $q$ replaced by $q^{\prime}$. Now, let $l/k$ be a field
extension making both quadratic forms $q\otimes_{k}l$ and
$q^{\prime}\otimes_{k}l$ isotropic. By definition of the Grothendieck ring
$K_{0}(\operatorname{Num}(k))$, if
$\mu_{\mathrm{c}}([Q_{q}])=\mu_{\mathrm{c}}([Q_{q^{\prime}}])$, then there
exists a numerical motive $M\in\operatorname{Num}(k)$ such that
$M\oplus\mathfrak{h}(Q_{q})\simeq M\oplus\mathfrak{h}(Q_{q^{\prime}})$. Let us
choose an isomorphism $\theta\colon M\oplus\mathfrak{h}(Q_{q})\to
M\oplus\mathfrak{h}(Q_{q^{\prime}})$. Note that since the extension of scalars
functor $-\times_{k}l\colon\operatorname{Num}(k)\to\operatorname{Num}(l)$ is
additive, it leads to the following commutative diagram of abelian groups:
$\begin{array}{ccc}\mathrm{Hom}_{\operatorname{Num}(k)}(\mathfrak{h}(\mathrm{pt})(-d),M\oplus\mathfrak{h}(Q_{q}))&\xrightarrow{\varphi\oplus\varphi_{q}}&\mathrm{Hom}_{\operatorname{Num}(l)}(\mathfrak{h}(\mathrm{pt})(-d),(M\times_{k}l)\oplus\mathfrak{h}(Q_{q}\times_{k}l))\\ \theta_{\ast}\downarrow\simeq&&(\theta\times_{k}l)_{\ast}\downarrow\simeq\\ \mathrm{Hom}_{\operatorname{Num}(k)}(\mathfrak{h}(\mathrm{pt})(-d),M\oplus\mathfrak{h}(Q_{q^{\prime}}))&\xrightarrow{\varphi\oplus\varphi_{q^{\prime}}}&\mathrm{Hom}_{\operatorname{Num}(l)}(\mathfrak{h}(\mathrm{pt})(-d),(M\times_{k}l)\oplus\mathfrak{h}(Q_{q^{\prime}}\times_{k}l))\,.\end{array}$
This implies that
$\mathrm{cok}(\varphi\oplus\varphi_{q})\simeq\mathrm{cok}(\varphi\oplus\varphi_{q^{\prime}})$
or, equivalently, that
$\mathrm{cok}(\varphi)\oplus\mathrm{cok}(\varphi_{q})\simeq\mathrm{cok}(\varphi)\oplus\mathrm{cok}(\varphi_{q^{\prime}})$.
Using the fact that these abelian groups are finitely generated (consult [1,
Prop. 3.2.7.1]), we hence conclude that
$\mathrm{cok}(\varphi_{q})\simeq\mathrm{cok}(\varphi_{q^{\prime}})$. The proof
follows now from a celebrated result of Springer (consult [7, Cor. 71.3] and [29]),
which asserts that $\mathrm{cok}(\varphi_{q})\simeq 0$ if $q$ is isotropic and
$\mathrm{cok}(\varphi_{q})\simeq\mathbb{Z}/2$ if $q$ is anisotropic. ∎
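The role that Springer's result plays here admits a simple numerical illustration: the image of the degree map is the subgroup of $\mathbb{Z}$ generated by the degrees of the closed points of $Q_{q}$, so its cokernel is cyclic of order equal to the gcd of those degrees. A minimal sketch, where the degree lists are hypothetical inputs rather than data computed from an actual quadric:

```python
from functools import reduce
from math import gcd

def degree_cokernel_order(point_degrees):
    """Order of coker(deg), i.e. the gcd of the closed-point degrees."""
    return reduce(gcd, point_degrees)

# Isotropic quadric: a rational point has degree 1, so deg is surjective.
degree_cokernel_order([1, 2, 4])   # order 1, i.e. coker = 0
# Anisotropic quadric: by Springer, every closed point has even degree.
degree_cokernel_order([2, 4, 6])   # order 2, i.e. coker = Z/2
```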
Let $P$ be a (fixed) ordering of $k$ and $k_{P}$ the associated real-closure
of $k$. Note first that extension of scalars along the inclusion $k\subset
k_{P}$ gives rise to a ring homomorphism $-\times_{k}k_{P}\colon
K_{0}\mathrm{Var}(k)\to K_{0}\mathrm{Var}(k_{P})$. Consequently, since by
hypothesis $[\mathrm{Iv}(A,\ast)]=[\mathrm{Iv}(A^{\prime},\ast^{\prime})]$, we
obtain the following equality in $K_{0}\mathrm{Var}(k_{P})$:
(4.17)
$[\mathrm{Iv}(A\otimes_{k}k_{P},\ast\otimes_{k}k_{P})]=[\mathrm{Iv}(A,\ast)\times_{k}k_{P}]=[\mathrm{Iv}(A^{\prime},\ast^{\prime})\times_{k}k_{P}]=[\mathrm{Iv}(A^{\prime}\otimes_{k}k_{P},\ast^{\prime}\otimes_{k}k_{P})]\,.$
Now, recall from item (iv) that if
$[\mathrm{Iv}(A,\ast)]=[\mathrm{Iv}(A^{\prime},\ast^{\prime})]$, then $A\simeq
A^{\prime}$. This yields an induced isomorphism $A\otimes_{k}k_{P}\simeq
A^{\prime}\otimes_{k}k_{P}$. On the one hand, if the central simple
$k_{P}$-algebra $A\otimes_{k}k_{P}$ is not split, we have
$\mathrm{sgn}_{P}(A,\ast)=0$; consult [20, Thm. 1]. If this is the case, then
$A^{\prime}\otimes_{k}k_{P}$ is also not split and
$\mathrm{sgn}_{P}(A^{\prime},\ast^{\prime})=0$. On the other hand, if
$A\otimes_{k}k_{P}$ is split, we have an isomorphism of central simple
$k_{P}$-algebras with involutions of orthogonal type
$(A\otimes_{k}k_{P},\ast\otimes_{k}k_{P})\simeq(\mathrm{M}_{\mathrm{deg}(A)\times\mathrm{deg}(A)}(k_{P}),\ast_{q})$
(consult Example 1.17) and $\mathrm{sgn}_{P}(A,\ast)=|\mathrm{sgn}(q)|$. If
this is the case, then $A^{\prime}\otimes_{k}k_{P}$ is also split, and we have an isomorphism of central simple $k_{P}$-algebras with involutions of
orthogonal type
$(A^{\prime}\otimes_{k}k_{P},\ast^{\prime}\otimes_{k}k_{P})\simeq(\mathrm{M}_{\mathrm{deg}(A)\times\mathrm{deg}(A)}(k_{P}),\ast_{q^{\prime}})$,
and $\mathrm{sgn}_{P}(A^{\prime},\ast^{\prime})=|\mathrm{sgn}(q^{\prime})|$.
Consequently, thanks to the above equality (4.17) and to the fact that
$\mathrm{Iv}(A\otimes_{k}k_{P},\ast\otimes_{k}k_{P})\simeq Q_{q}$ and
$\mathrm{Iv}(A^{\prime}\otimes_{k}k_{P},\ast^{\prime}\otimes_{k}k_{P})\simeq
Q_{q^{\prime}}$, the proof of item (v) of Theorem 1.22 follows now from
Proposition 4.18 below.
###### Proposition 4.18.
Given two non-degenerate quadratic forms $q$ and $q^{\prime}$ over a real-
closed field $k$, we have the following implication:
$\displaystyle[Q_{q}]=[Q_{q^{\prime}}]\,\,\mathrm{in}\,\,K_{0}\mathrm{Var}(k)$
$\displaystyle\Rightarrow$
$\displaystyle|\mathrm{sgn}(q)|=|\mathrm{sgn}(q^{\prime})|\,.$
###### Proof.
Recall first from item (i) of Theorem 1.1 that if by hypothesis
$[Q_{q}]=[Q_{q^{\prime}}]$, then $\mathrm{dim}(q)=\mathrm{dim}(q^{\prime})$.
It is well-known that every real-closed field is, in particular, euclidean.
Consequently, thanks to Sylvester’s law of inertia (consult [7, Prop. 31.5]),
the quadratic forms $q$ and $q^{\prime}$ are similar to the following ones
(4.19) $\displaystyle
q=\langle\underbrace{1,\ldots,1}_{m\text{-}\text{copies}},\underbrace{-1,\ldots,-1}_{n\text{-}\text{copies}}\rangle$
$\displaystyle
q^{\prime}=\langle\underbrace{1,\ldots,1}_{m^{\prime}\text{-}\text{copies}},\underbrace{-1,\ldots,-1}_{n^{\prime}\text{-}\text{copies}}\rangle$
for uniquely determined integers $m\geq n\geq 0$ and $m^{\prime}\geq
n^{\prime}\geq 0$, respectively. Note that since
$\mathrm{dim}(q)=\mathrm{dim}(q^{\prime})$, we have
$m+n=m^{\prime}+n^{\prime}$. In what follows we will assume that
$|\mathrm{sgn}(q)|\neq|\mathrm{sgn}(q^{\prime})|$ and, by absurd, that
$[Q_{q}]=[Q_{q^{\prime}}]$. If $|\mathrm{sgn}(q)|=m-n\neq
m^{\prime}-n^{\prime}=|\mathrm{sgn}(q^{\prime})|$, then $m\neq m^{\prime}$.
Hence, we can (and will) assume without loss of generality that
$m>m^{\prime}$. Let us consider first the particular case where $n=0$. In this
case, the quadratic form $q$ is anisotropic while the quadratic form
$q^{\prime}$ is isotropic (because $n^{\prime}>0$). However, if
$[Q_{q}]=[Q_{q^{\prime}}]$, then it follows from Proposition 4.18 that $q$ and
$q^{\prime}$ are both isotropic or both anisotropic. Hence, we arrive at a
contradiction. Let us consider now the case where $n>0$. In this case, we have
orthogonal sum decompositions
$\displaystyle q=\underline{q}\perp
n\mathbb{H}\,\,\,\,\text{with}\,\,\,\,\underline{q}=\langle\underbrace{1,\ldots,1}_{(m-n)\text{-}\text{copies}}\rangle$
$\displaystyle q^{\prime}=\underline{q^{\prime}}\perp
n\mathbb{H}\,\,\,\,\text{with}\,\,\,\,\underline{q^{\prime}}=\langle\underbrace{1,\ldots,1}_{(m^{\prime}-n)\text{-}\text{copies}},\underbrace{-1,\ldots,-1}_{(n^{\prime}-n)\text{-}\text{copies}}\rangle\,,$
where $\mathbb{H}=\langle 1,-1\rangle$ stands for the hyperbolic plane. Note
that the quadratic form $\underline{q}$ is anisotropic while the quadratic
form $\underline{q^{\prime}}$ is isotropic (because $n^{\prime}-n>0$). Making
use of Lemma 4.20 below, we hence obtain the following computations in the
Grothendieck ring of varieties:
$[Q_{q}]=(1+\mathbb{L}+\cdots+\mathbb{L}^{n-1})+[Q_{\underline{q}}]\cdot\mathbb{L}^{n}+\mathbb{L}^{\mathrm{dim}(q)-2n}\cdot(\mathbb{L}^{n-1}+\cdots+\mathbb{L}^{2(n-1)})$
$[Q_{q^{\prime}}]=(1+\mathbb{L}+\cdots+\mathbb{L}^{n-1})+[Q_{\underline{q^{\prime}}}]\cdot\mathbb{L}^{n}+\mathbb{L}^{\mathrm{dim}(q)-2n}\cdot(\mathbb{L}^{n-1}+\cdots+\mathbb{L}^{2(n-1)})\,,$
where we used that $\mathrm{dim}(\underline{q})=\mathrm{dim}(\underline{q^{\prime}})=\mathrm{dim}(q)-2n$.
This implies that
$[Q_{\underline{q}}]\cdot\mathbb{L}^{n}=[Q_{\underline{q^{\prime}}}]\cdot\mathbb{L}^{n}$ in
$K_{0}\mathrm{Var}(k)$. Since $\mu_{\mathrm{c}}(\mathbb{L})=[\mathbb{Z}(-1)]$
(consult [1, §13.2.2]) and the Lefschetz motive
$\mathbb{Z}(-1)\in\operatorname{Num}(k)$ is $\otimes$-invertible, this further
implies that
$\mu_{\mathrm{c}}([Q_{\underline{q}}]\cdot\mathbb{L}^{n})=\mu_{\mathrm{c}}([Q_{\underline{q^{\prime}}}]\cdot\mathbb{L}^{n})\Leftrightarrow\mu_{\mathrm{c}}([Q_{\underline{q}}])\cdot[\mathbb{Z}(-n)]=\mu_{\mathrm{c}}([Q_{\underline{q^{\prime}}}])\cdot[\mathbb{Z}(-n)]\Leftrightarrow\mu_{\mathrm{c}}([Q_{\underline{q}}])=\mu_{\mathrm{c}}([Q_{\underline{q^{\prime}}}])\,.$
Consequently, Proposition 4.18 implies that $\underline{q}$ and
$\underline{q^{\prime}}$ are both isotropic or both anisotropic, which is a
contradiction. This concludes the proof. ∎
###### Lemma 4.20.
Given a non-degenerate quadratic form $q$ and an integer $n\geq 1$, we have
the following computation in $K_{0}\mathrm{Var}(k)$ ($\mathbb{L}$ stands for
the Grothendieck class $[\mathbb{A}^{1}]$):
(4.21) $[Q_{q\perp
n\mathbb{H}}]=(1+\mathbb{L}+\cdots+\mathbb{L}^{n-1})+[Q_{q}]\cdot\mathbb{L}^{n}+\mathbb{L}^{\mathrm{dim}(q)}\cdot(\mathbb{L}^{n-1}+\cdots+\mathbb{L}^{2(n-1)})\,.$
###### Proof.
Let ${\mathcal{P}}(n)$ be the equality (4.21). The proof will be done by
induction on $n$. We start by proving the equality $\mathcal{P}(1)$ (for every
quadratic form $q$). Note that there exists a choice of coordinates
$x_{1},\ldots,x_{\mathrm{dim}(q)},u,w$ such that
$(q\perp\mathbb{H})(x_{1},\ldots,x_{\mathrm{dim}(q)},u,w)=q(x_{1},\ldots,x_{\mathrm{dim}(q)})+uw$.
Following Rost [26, Prop. 2], consider the stratification $\mathrm{pt}\subset
Z\subset Q_{q\perp\mathbb{H}}$ of $Q_{q\perp\mathbb{H}}$, where
$Z:=\\{[x_{1}:\cdots:x_{\mathrm{dim}(q)}:0:w]\,|\,q(x_{1},\ldots,x_{\mathrm{dim}(q)})+0w=0\\}$
and $\mathrm{pt}:=[0:\cdots:0:1:0]$. By definition of $K_{0}\mathrm{Var}(k)$,
we hence obtain the following computation:
(4.22) $[Q_{q\perp\mathbb{H}}]=[Z]+[Q_{q\perp\mathbb{H}}\backslash
Z]=[\mathrm{pt}]+[Z\backslash\mathrm{pt}]+[Q_{q\perp\mathbb{H}}\backslash
Z]\,.$
On the one hand, note that $Q_{q\perp\mathbb{H}}\backslash
Z\simeq\mathbb{A}^{\mathrm{dim}(q)}$. This implies that
$[Q_{q\perp\mathbb{H}}\backslash Z]=\mathbb{L}^{\mathrm{dim}(q)}$. On the
other hand, as explained in [26, Prop. 2], the following projection map
$\displaystyle Z\backslash\mathrm{pt}\longrightarrow Q_{q}$
$\displaystyle[x_{1}:\cdots:x_{\mathrm{dim}(q)}:0:w]\mapsto[x_{1}:\cdots:x_{\mathrm{dim}(q)}]$
is a $1$-dimensional vector bundle. Following [5, §2 Prop. 2.3.3], this
implies that $[Z\backslash\mathrm{pt}]=[Q_{q}]\cdot[\mathbb{A}^{1}]$.
Consequently, since $[\mathrm{pt}]=1$, we conclude that (4.22) reduces to the
following computation:
(4.23)
$[Q_{q\perp\mathbb{H}}]=1+[Q_{q}]\cdot\mathbb{L}+\mathbb{L}^{\mathrm{dim}(q)}\,.$
Let us now prove the implication $\mathcal{P}(n)\Rightarrow\mathcal{P}(n+1)$.
If the equality $\mathcal{P}(n)$ holds (for every quadratic form $q$), then we
obtain the following computation in $K_{0}\mathrm{Var}(k)$:
$[Q_{(q\perp\mathbb{H})\perp
n\mathbb{H}}]=(1+\mathbb{L}+\cdots+\mathbb{L}^{n-1})+[Q_{q\perp\mathbb{H}}]\cdot\mathbb{L}^{n}+\mathbb{L}^{\mathrm{dim}(q)+2}\cdot(\mathbb{L}^{n-1}+\cdots+\mathbb{L}^{2(n-1)})\,.$
By combining it with (4.23), and using the equality $\mathbb{L}^{\mathrm{dim}(q)}\cdot\mathbb{L}^{n}+\mathbb{L}^{\mathrm{dim}(q)+2}\cdot(\mathbb{L}^{n-1}+\cdots+\mathbb{L}^{2(n-1)})=\mathbb{L}^{\mathrm{dim}(q)}\cdot(\mathbb{L}^{n}+\cdots+\mathbb{L}^{2n})$, we hence obtain
$[Q_{q\perp(n+1)\mathbb{H}}]=(1+\mathbb{L}+\cdots+\mathbb{L}^{n})+[Q_{q}]\cdot\mathbb{L}^{n+1}+\mathbb{L}^{\mathrm{dim}(q)}\cdot(\mathbb{L}^{n}+\cdots+\mathbb{L}^{2n})\,,$
which is precisely the desired equality $\mathcal{P}(n+1)$. ∎
## 5\. Proof of Theorem 1.25
Proof of item (i). Recall first from item (i) of Theorem 1.22 that if
$[\mathrm{Iv}(A,\ast)]=[\mathrm{Iv}(A^{\prime},\ast^{\prime})]$, then
$\mathrm{deg}(A)=\mathrm{deg}(A^{\prime})$. Also, recall from Example 1.18
that the odd dimensional involution varieties are the odd dimensional
quadrics. As proved in [16, §15.A], the assignment $q\mapsto C_{0}(q)$ gives
rise to a bijection between the similarity classes of non-degenerate
quadratic forms of dimension $3$ and the isomorphism classes of quaternion
algebras. In the same vein, as proved in [16, §15.B], the assignment
$(A,\ast)\mapsto C_{0}(A,\ast)$ gives rise to a bijection between the
isomorphism classes of central simple $k$-algebras with involution of
orthogonal type of degree $4$ and the isomorphism classes of quaternion
algebras over an étale quadratic $k$-algebra. Consequently, given two central
simple $k$-algebras with involutions of orthogonal type $(A,\ast)$ and
$(A^{\prime},\ast^{\prime})$ such that
$[\mathrm{Iv}(A,\ast)]=[\mathrm{Iv}(A^{\prime},\ast^{\prime})]$ and
$\mathrm{deg}(A)=\mathrm{deg}(A^{\prime})\leq 4$, the proof follows now from
the combination of item (iii) of Theorem 1.22 with the well-known fact that
$(A,\ast)$ and $(A^{\prime},\ast^{\prime})$ are isomorphic if and only if
$\mathrm{Iv}(A,\ast)$ and $\mathrm{Iv}(A^{\prime},\ast^{\prime})$ are
isomorphic.
Proof of item (ii). Given a central simple $k$-algebra $A$ and two
involutions of orthogonal type $\ast$ and $\ast^{\prime}$ on $A$, recall from
[19, Thm. B] that if $I(k)^{3}$ is torsion-free, then we have the following
equivalence:
(5.1)
$(A,\ast)\simeq(A,\ast^{\prime})\Leftrightarrow\big{(}C_{0}(A,\ast)\simeq
C_{0}(A,\ast^{\prime})\,\,\text{and}\,\,\mathrm{sgn}_{P}(A,\ast)=\mathrm{sgn}_{P}(A,\ast^{\prime})\,\,\text{for}\,\,\text{every}\,\,\text{ordering}\,\,P\,\,\text{of}\,\,k\big{)}\,;$
in the case where $k$ is formally real and $\mathrm{deg}(A)$ is even,
we assume moreover that $\tilde{u}(k)<\infty$. Now, let $(A,\ast)$ and
$(A^{\prime},\ast^{\prime})$ be two central simple $k$-algebras with
involution of orthogonal type such that
$[\mathrm{Iv}(A,\ast)]=[\mathrm{Iv}(A^{\prime},\ast^{\prime})]$. Thanks to
item (iv) of Theorem 1.22, we have an isomorphism $A\simeq A^{\prime}$.
Therefore, we can (and will) assume without loss of generality that
$A=A^{\prime}$. Consequently, the proof follows now from items (iii) and (v)
of Theorem 1.22, from the above equivalence (5.1), and from the well-known
fact that $(A,\ast)$ and $(A^{\prime},\ast^{\prime})$ are isomorphic if and
only if $\mathrm{Iv}(A,\ast)$ and $\mathrm{Iv}(A^{\prime},\ast^{\prime})$ are
isomorphic.
## References
* [1] Y. André, Une introduction aux motifs (motifs purs, motifs mixtes, périodes). Panoramas et Synthèses, vol. 17, Société Mathématique de France, Paris, 2004.
* [2] Y. André and B. Kahn, Nilpotence, radicaux et structures monoïdales, Rend. Sem. Mat. Univ. Padova 108 (2002), 107–291, With an appendix by Peter O’Sullivan. Erratum: Rend. Sem. Mat. Univ. Padova 113 (2005), 125–128.
* [3] F. Bittner, The universal Euler characteristic for varieties of characteristic zero. Compos. Math. 140 (2004), no. 4, 1011–1032.
* [4] L. Borisov, The class of the affine line is a zero divisor in the Grothendieck ring. J. Algebr. Geom. 27 (2018), no. 2, 203–209.
* [5] A. Chambert-Loir, J. Nicaise and J. Sebag, Motivic integration. Progress in Mathematics, 325. Birkhäuser, Springer, 2018.
* [6] P. Colmez and J.-P. Serre, Correspondance Grothendieck-Serre. Documents Mathématiques 2, 2001.
* [7] R. Elman, N. Karpenko and A. Merkurjev, The Algebraic and Geometric Theory of Quadratic Forms. Colloquium Publications, vol. 56. American Mathematical Society, 2008.
* [8] A. Hogadi, Products of Brauer-Severi surfaces. Proc. AMS 137 (2009), no. 1, 45–50.
* [9] N. Karpenko, Criteria of motivic equivalence for quadratic forms and central simple algebras. Math. Ann. 317 (2000), no. 3, 585–611.
* [10] B. Keller, On differential graded categories. International Congress of Mathematicians (Madrid), Vol. II, 151–190. Eur. Math. Soc., Zürich (2006).
* [11] G. M. Kelly, On the radical of a category. J. Australian Math. Soc. 4 (1964), 299–307.
* [12] J. Kollár, Conics in the Grothendieck ring. Adv. Math. 198 (2005), no. 1, 27–35.
* [13] M. Kontsevich, Mixed noncommutative motives. Talk at the Workshop on Homological Mirror Symmetry, Miami, 2010. Notes available at www-math.mit.edu/auroux/frg/miami10-notes.
* [14] by same author, Notes on motives in finite characteristic. Algebra, arithmetic, and geometry: in honor of Yu. I. Manin. Vol. II, 213–247, Progr. Math., 270, Birkhäuser, Boston, MA, 2009.
* [15] by same author, Noncommutative motives. Talk at the IAS on the occasion of the $61^{\mathrm{st}}$ birthday of Pierre Deligne (2005). Available at http://video.ias.edu/Geometry-and-Arithmetic.
* [16] M.-A. Knus, A. Merkurjev, M. Rost, and J.-P. Tignol, The book of involutions. With a preface in french by J. Tits. AMS Colloquium Publications, 44, Providence, RI, 1998.
* [17] T.-Y. Lam, Introduction to quadratic forms over fields. Graduate Studies in Mathematics, 67. American Mathematical Society, Providence, RI, 2005.
* [18] M. Larsen and V. Lunts, Motivic measures and stable birational geometry. Mosc. Math. J. 3 (2003), no. 1, 85–95, 259.
* [19] D. Lewis and J.-P. Tignol, Classification theorems for central simple algebras with involution (with an appendix by R. Parimala). Manuscripta Mathematica, 100 (1999), 259–276.
* [20] by same author, On the signature of an involution. Arch. Math. 60 (1993), 128–135.
* [21] J. Lieber, Y. I. Manin and M. Marcolli, Bost-Connes systems and $\mathbb{F}_{1}$-structures in Grothendieck rings, spectra, and Nori motives. Facets of Algebraic Geometry. A collection in Honor of William Fulton’s 80th birthday, vol. 2, (2022), 147–227.
* [22] Q. Liu and J. Sebag, The Grothendieck ring of varieties and piecewise isomorphisms. Math. Z. 265 (2010), no. 2, 321–342.
* [23] Y. Manin, Correspondences, motifs and monoidal transformations, Mat. Sb. (N.S.) 77 (119) (1968), 475–507.
* [24] N. Naumann, Algebraic independence in the Grothendieck ring of varieties. Trans. Amer. Math. Soc. 359 (2007), no. 4, 1653–1683.
* [25] B. Poonen, The Grothendieck ring of varieties is not a domain. Math. Res. Lett. 9 (2002), no. 4, 493–497.
* [26] M. Rost, The motive of a Pfister form. Available at https://www.math.uni-bielefeld.de/~rost/motive.html.
* [27] J.-P. Serre, Local fields. Springer Graduate Texts in Mathematics, vol. 67 (2013).
* [28] by same author, A course in arithmetic. Springer Graduate Texts in Mathematics, vol. 7 (1973).
* [29] T. A. Springer, Sur les formes quadratiques d’indice zéro. C. R. Acad. Sci. Paris 234 (1952), 1517–1519.
* [30] G. Tabuada, Jacques Tits motivic measure. Math. Ann. 382 (2022), no. 3-4, 1245–1278.
* [31] by same author, Noncommutative Motives. With a preface by Yuri I. Manin. University Lecture Series 63. American Mathematical Society, Providence, RI, 2015.
* [32] by same author, Additive invariants of toric and twisted projective homogeneous varieties via noncommutative motives. Journal of Algebra, 417 (2014), 15–38.
* [33] G. Tabuada and M. Van den Bergh, Noncommutative motives of separable algebras. Adv. Math. 303 (2016), 1122–1161.
* [34] by same author, Noncommutative motives of Azumaya algebras. J. Inst. Math. Jussieu 14 (2015), no. 2, 379–403.
* [35] D. Tao, A variety associated to an algebra with involution. J. Algebra 168 (1994), no. 2, 479–520.
# Deep learning in physics: a study of dielectric quasi-cubic particles in a
uniform electric field
Zhe Wang
Energy Research Institute
Nanyang Technological University
637141 Singapore
<EMAIL_ADDRESS>
&Claude Guet
Energy Research Institute
Nanyang Technological University
637141 Singapore
School of Materials Science and Engineering
Nanyang Technological University
639798 Singapore
<EMAIL_ADDRESS>
###### Abstract
Solving physics problems for which we know the equations, boundary conditions
and symmetries can be done by deep learning. The constraints can be either
imposed as terms in a loss function or used to formulate a neural ansatz. In
the present case study, we calculate the induced field inside and outside a
dielectric cube placed in a uniform electric field, wherein the dielectric
mismatch at edges and corners of the cube makes accurate calculations
numerically challenging. The electric potential is expressed as an ansatz
incorporating neural networks with known leading order behaviors and
symmetries and the Laplace’s equation is then solved with boundary conditions
at the dielectric interface by minimizing a loss function. The loss function
ensures that both Laplace’s equation and boundary conditions are satisfied
everywhere inside a large solution domain. We study how the electric potential
inside and outside a quasi-cubic particle evolves through a sequence of shapes
from a sphere to a cube. The neural network being differentiable, it is
straightforward to calculate the electric field over the whole domain, the
induced surface charge distribution and the polarizability. The neural network
being retentive, one can efficiently follow how the field changes with the particle’s shape or dielectric constant by iterating from any previously converged solution. The present work’s objective is two-fold: first, to show how a priori knowledge can be incorporated into neural networks to achieve
efficient learning and second to apply the method and study how the induced
field and polarizability change when a dielectric particle progressively
changes its shape from a sphere to a cube.
## 1 Introduction
Solving physics problems for which we know the equations, boundary conditions
and symmetries by deep learning follows from the universal approximation
theorem [1, 2, 3], which states that a sufficiently deep artificial neural network
(ANN) can approximate any well-behaved function with a finite number of
parameters. In 1994, Meade and Fernandez approximated the solution of linear
[4] and nonlinear [5] ordinary differential equations using a single-layer
perceptron. This approach was soon generalized by Lagaris et al. [6] and
applied to two-dimensional Poisson equations with various source terms in a
rectangular domain. Their trial functions consisted of two terms: the first
one which contained no trainable variable satisfied the boundary conditions,
whereas the second one involved neural networks, whose parameters, e.g.
weights and biases, were adjusted to minimize a loss function. A neural
network solution of Poisson’s equation in a 3D domain with irregular
boundaries was achieved by including the boundary conditions into the loss
function [7]. Alternatively, McFall and Mahan constructed a proper trial
function so that the boundary conditions were automatically satisfied [8].
Massive growth of available scientific data has introduced a new flavor into
the ANN approach to differential equations. By incorporating data and
governing equations into the loss functions, physics-informed neural
networks enable inferring hidden physics from measured data [9, 10, 11] with
successful applications on the visualization of turbulent flows [12, 13, 14],
where multiple scales interact, and the design of metamaterials in nano-optics
[15], where the finite size effect dominates. However, as pointed out by Wong
et al. [16], high-dimensional, non-convex loss functions require significant
optimization efforts, calling for sophisticated hyper-parameter optimization
[17, 18, 19] and transfer neuroevolution [16] algorithms.
In this work, we combine physics knowledge with recent advances in deep
learning to calculate the induced electric field of semiconductor colloidal
nanocrystals in an external photon field. Semiconductor colloidal nanocrystals
show numerous advantages as opto-electronic materials, since their
optical properties change with shape and size due to quantum effects [20, 21].
In order to characterize correctly their absorption/emission properties, one
needs to know the electric field induced by the external photon field [22,
23]. As the particle’s size is much smaller than the photon wavelength, the
inner field results from a homogeneous applied electric field of amplitude
given by the laser intensity. Whereas there are analytical solutions of
Laplace’s equation for dielectric particles with shapes that allow to define a
system of curvilinear coordinates such as sphere, ellipsoids, torus, etc. [20,
24, 25, 26], numerical solvers are required for other shapes. The case of a
cube is challenging because the edges and corners lead to sharp variations of
the induced fields. To our knowledge there is only one published calculation
based on finite element methods applied to a cube with relative dielectric
constant ranging from $2$ to $10$ [22]. The authors claimed a $1\%$
accuracy at the far field of their finite element simulation domain. At low
values of the relative dielectric constant considered, they found that the
electric field at the center of the particle is lower for a cube than for a
sphere. This would imply a lower polarizability for a cube than for a sphere,
at variance with accurate calculations [27, 28, 29, 30, 31, 32, 33]. Note that
the accurate calculations cited herein all focused on solving a surface
integral equation to estimate the dielectric polarizability and consequently
did not provide the induced electric field inside and outside the cube.
In this paper, we introduce an alternative method to calculate the electric
field inside and outside a dielectric nanoparticle embedded in another
dielectric medium. From the field inside, the polarizability is extracted.
More specifically, we approximate the solution of Laplace’s equation by a
function combining a known analytical term and an ANN. Then, instead of
solving numerically Laplace’s equation with boundary conditions, we express
the problem as an optimization problem and construct a loss function which can
be minimized by machine learning algorithms to yield the full electric
potential inside and outside the particle. One clear advantage of the ANN
approach over finite element methods is that it provides the solution of
Laplace’s equation as a differentiable function that can be used as such.
Additionally, the retentive nature of neural networks allows for a systematic
tracking of the evolution of induced fields as dielectric particles deviate
from canonical spherical and cubic shapes. Note that physical nanoparticles
usually have rounded edges and corners, see e.g. Fig. 2s in [34] and in [35]
for perovskite nanocrystals.
Most previous works considered partial differential equations in a homogeneous
domain with explicit boundary conditions. The electrostatic problem of a
colloidal particle requires an accurate treatment of the discontinuity of the
displacement field at the interface. To the best of our knowledge, the present
work is the first ANN-based attempt to address this problem, namely solving
Laplace’s equation in a three-dimensional piece-wise homogeneous domain with
implicit boundary conditions on an irregular interface. In addition to
formulating the best loss functions, one can substantially reduce the
optimization effort by incorporating known symmetries and specific features,
e.g. leading order behaviors, into the ANN ansatz.
The rest of the paper is organized as follows: in Sec. 2, we present the
physics model. Starting from a general solution in the form of a linear
combination of spherical harmonics, we replace the higher order terms by a
function containing neural networks and construct a loss function that
includes all constraints. The architecture of the neural network is then
discussed. In Sec. 3.1, we benchmark the proposed ANN method by studying
dielectric spheroids for which analytical solutions are known [24]. In Sec.
3.2, we study the evolution of polarizabilities and induced fields inside and
outside a dielectric particle through a sequence of shapes from a sphere to a
cube. Finally, conclusions are drawn in Sec. 4, with a highlight on future
work.
## 2 Neural Network solution of Laplace’s equation
### 2.1 Governing equations
Consider a neutral homogeneous dielectric 3D particle, with surface $S$ and
permittivity $\epsilon_{1}$, embedded in a homogeneous medium with
permittivity $\epsilon_{0}$. In a spherical coordinate system $(r,\theta,\varphi)$ whose origin coincides with the center of the particle, we take the direction of a uniform electric field $\boldsymbol{E}_{\text{ext}}$ to be the
axis from which the polar angle $\theta$ is measured. In the absence of
external charges, the electric potential obeys Laplace’s equation inside and
outside the particle,
$\displaystyle\nabla^{2}\phi=0.$ (1)
The continuity of the tangential components of the electric field
($\boldsymbol{E}=-\nabla\phi$) and the normal component of the displacement
field ($\boldsymbol{D}=\epsilon\boldsymbol{E}$) at the interface $S$ leads to
the following boundary conditions for the potential:
$\displaystyle\phi_{0}$ $\displaystyle=\phi_{1},$ (2a)
$\displaystyle\nabla\phi_{0}\cdot\hat{\boldsymbol{n}}$
$\displaystyle=\epsilon_{r}\nabla\phi_{1}\cdot\hat{\boldsymbol{n}}.$ (2b)
Here, $\hat{\boldsymbol{n}}=\nabla S/|\nabla S|$, is the unit normal vector to
$S$, $\epsilon_{r}=\epsilon_{1}/\epsilon_{0}$ is the relative dielectric
constant, and subscripts 0 and 1 denote quantities outside and inside the
particle, respectively. Furthermore, the electric field tends asymptotically
to the applied field and the potential is zero at the origin, yielding
$\displaystyle\phi_{0}$
$\displaystyle=-E_{\text{ext}}r\cos(\theta),\quad\mbox{as}\quad r\to\infty,$
(3a) $\displaystyle\phi_{1}$ $\displaystyle=0,\quad\mbox{at}\quad r=0,$ (3b)
where $E_{\text{ext}}=|\boldsymbol{E}_{\text{ext}}|$.
Figure 1: $(a)$ Sequence of shapes from a sphere to a quasi-cube with increasing values of $N$, cf. Eq. (4) with $a=b=c$. $(b)$ Volume and surface area of quasi-cubes, normalized by those of the sphere, as a function of $N$.
Within the perspective of approaching a real cube and benchmarking the present
numerical method with analytical solutions, we consider a super-ellipsoidal
inclusion whose surface is described by the equation
$\displaystyle
S(x,y,z)\equiv\left|\frac{x}{a}\right|^{2N}+\left|\frac{y}{b}\right|^{2N}+\left|\frac{z}{c}\right|^{2N}=R^{2N},$
(4)
in the Cartesian coordinates. Here, $a,b,c,R\in\mathbb{R}^{+}$ and the
exponent $N\geq 1$. For $a=b=c$ and $N=1$, Eq. (4) defines a sphere with
radius $R$. A continuous deformation of a sphere with radius $R$ to a cube of edge length $2R$ as $N\to\infty$ is shown in Fig. 1. For cases where $a$, $b$, $c$ are not all equal, one obtains an ellipsoid for $N=1$ and a rectangular parallelepiped as $N\to\infty$. Note in passing that “real” nanoparticles have rounded corners, resembling quasi-cubes
with $N\in[1,8]$ in Fig. 1. For instance, by fitting $2D$ images of $100$
perovskite nanocrystals to a superellipse, cf. Eq. (4) with $z=0$, Tang et al.
[35] found that $N\approx 2.65$ for freshly synthesized and $N\approx 1.8$ for
aged samples, respectively.
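As an aside, for $a=b=c=1$ the distance from the origin to the surface of Eq. (4) along a given ray has a closed form, which is convenient for the interface distance $r_S(\theta,\varphi)$ used later. A minimal sketch (the function name is ours, not from the paper):

```python
import numpy as np

def r_surface(theta, phi, N, R=1.0):
    """Distance from the origin to the quasi-cube surface of Eq. (4),
    with a = b = c = 1, along the ray (theta, phi)."""
    # Direction cosines of the ray.
    ux = np.sin(theta) * np.cos(phi)
    uy = np.sin(theta) * np.sin(phi)
    uz = np.cos(theta)
    # Insert x = r*ux, y = r*uy, z = r*uz into Eq. (4) and solve for r.
    s = np.abs(ux) ** (2 * N) + np.abs(uy) ** (2 * N) + np.abs(uz) ** (2 * N)
    return R / s ** (1.0 / (2 * N))
```

For $N=1$ this returns $R$ in every direction (a sphere), while for large $N$ the distance along a body diagonal approaches $R\sqrt{3}$, the half-diagonal of a cube of edge $2R$.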
A general solution to Eq. (1) can be expressed as a linear combination of
spherical harmonics $Y_{l}^{m}(\theta,\varphi)$ weighted by appropriate
scaling factors $r^{l}$ inside and $r^{-(l+1)}$ outside of the dielectric
particle
$\phi=\sum_{l=0}^{\infty}\sum_{m=-l}^{l}\left[A_{l}^{m}r^{-(l+1)}+B_{l}^{m}r^{l}\right]Y_{l}^{m}(\theta,\varphi),$
(5)
where coefficients $A_{l}^{m}$ and $B_{l}^{m}$ are determined by the boundary
conditions. The homogeneity and the direction of the external electric field
$\boldsymbol{E}_{\text{ext}}$ along the $z$-axis impose an anti-symmetry on
the electric potential with respect to the mid-plane $\theta=\pi/2$, which is
orthogonal to $\boldsymbol{E}_{\text{ext}}$ and crosses the center of the
particle. Thus,
$\displaystyle\phi(r,\theta,\varphi)=-\phi(r,\pi-\theta,\varphi),$ (6)
such that only odd terms remain in the spherical harmonics expansions (5). For
the sake of simplicity, we select one of the principal axes of the dielectric
particle to be aligned with $\boldsymbol{E}_{\text{ext}}$. As such, there is a
further mirror symmetry,
$\displaystyle\phi(r,\theta,\varphi)$ $\displaystyle=\phi(r,\theta,-\varphi),$
(7a) $\displaystyle\phi(r,\theta,\varphi)$
$\displaystyle=\phi(r,\theta,\pi+\varphi),$ (7b)
$\displaystyle\phi(r,\theta,\varphi)$
$\displaystyle=\phi(r,\theta,\pi-\varphi).$ (7c)
For $N=1$, only the dipole moment contributes to $\phi_{1}$, leading to a
constant electric field inside the particle [24]. Deviating from a sphere,
$N\geq 2$, however, brings in contributions from higher-order moments with no
clear optimal order of truncation, rendering an analytical solution
unfeasible.
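For reference, the $N=1$ case admits the classical closed form [24]: the inner field is uniform, $\phi_{1}=-\tfrac{3}{\epsilon_{r}+2}E_{\text{ext}}\,r\cos\theta$, with outer dipole coefficient $d_{0}=E_{\text{ext}}R^{3}\tfrac{\epsilon_{r}-1}{\epsilon_{r}+2}$. A short sketch (parameter values are illustrative), useful as a sanity check for the boundary conditions (2):

```python
import numpy as np

def sphere_potentials(eps_r, R=1.0, E_ext=1.0):
    """Classical dielectric-sphere solution (N = 1): phi0(r, theta)
    outside and phi1(r, theta) inside; only the dipole term survives."""
    d0 = E_ext * R**3 * (eps_r - 1.0) / (eps_r + 2.0)   # induced dipole strength
    d1 = -3.0 * E_ext / (eps_r + 2.0)                   # uniform inner field
    phi0 = lambda r, th: (-E_ext * r + d0 / r**2) * np.cos(th)
    phi1 = lambda r, th: d1 * r * np.cos(th)
    return phi0, phi1
```

Both Eq. (2a) (continuity of $\phi$ at $r=R$) and Eq. (2b) ($\partial_{r}\phi_{0}=\epsilon_{r}\,\partial_{r}\phi_{1}$ on the sphere) can then be verified numerically.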
Figure 2: $(a)$ Sketch for a dielectric particle immersed in a uniform
electric field along the $z$-axis, $\boldsymbol{E}_{\text{ext}}$. $(b)$ A
superposition of collocation points sampled from a uniform distribution and a
Gaussian distribution with mean centered at the dielectric interface $r_{S}$.
$(c)$ Proposed ANN ansatz for electric potentials, see Sec. 2.2. $(d)$ Loss
function which measures the deviation from the Laplace equation (1) and the
boundary conditions (2) on the dielectric interface, see Sec. 2.3. $(e)$
Visualization of radial, polar, and azimuthal components of $\nabla^{2}\phi$
and its deviation from Eq. (1). $(f)$ Recovery of electric fields
$\boldsymbol{E}$ and charge density $\sigma$ from $\phi$. Here and thereafter,
results are plotted in a circular domain of radius $3$, wherein the blue curve
indicates the interface.
### 2.2 ANN ansatz
Our ansatz for solving Laplace’s equation with the boundary conditions given
by Eqs. (2,3) is to explicitly retain the dipole contributions and group the
infinite series of higher order terms into two ANN functions:
$\displaystyle\phi_{0}$
$\displaystyle=-\left[E_{\text{ext}}r-d_{0}r^{-2}\right]\cos(\theta)+\frac{r-R_{\text{max}}}{r_{S}-R_{\text{max}}}H_{0},$
(8a) $\displaystyle\phi_{1}$
$\displaystyle=d_{1}r\cos(\theta)+\frac{r-R_{\text{min}}}{r_{S}-R_{\text{min}}}H_{1},$
(8b)
where the dipole coefficients $d_{0}$ and $d_{1}$ are unknown. Here,
$r_{S}=r_{S}(\theta,\varphi)$ is an implicit solution to Eq. (4) measuring the
distance from the origin to points on the interface, $R_{\text{min}}$ and
$R_{\text{max}}$ are the inner and outer boundaries of the solution domain and
finally, $H_{0}$ and $H_{1}$ denote ANN based functions modeling a deviation
from the leading order dipolar behavior of the induced electric field. A
breakdown of the proposed ansatz (8) is detailed below.
To get around the singularity at the origin of the Laplacian operator in
spherical coordinates [36], we assume a practical origin of the radial
coordinate: $R_{\text{min}}\ll 1$. Similarly, to enable a numerical
calculation, the infinity $r\to\infty$ is replaced by a practical infinity:
$R_{\text{max}}\gg 1$. To mitigate the effect of such a truncation, we modify
the boundary conditions (3) by including the dipole contributions at the
practical infinity and practical origin, leading to the first terms on the
right hand side of the ANN ansatz (8). Then, the approximate solution domain
is a spherical shell with inner and outer radii $R_{\text{min}}$ and
$R_{\text{max}}$, wherein the modified boundary conditions are satisfied by
construction. In the limit, $R_{\text{min}}=0$ and $R_{\text{max}}\to\infty$,
boundary conditions (3) are recovered.
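The role of the prefactors in Eqs. (8) is easy to see numerically: each blending weight vanishes on the truncated boundary ($r=R_{\text{max}}$ outside, $r=R_{\text{min}}$ inside), so the modified boundary conditions hold by construction, and equals one on the interface $r=r_{S}$, where the ANN terms take over. A sketch (function names and values are ours):

```python
def outer_weight(r, r_S, R_max):
    """Multiplier of H0 in Eq. (8a): vanishes at the practical infinity
    r = R_max and equals 1 on the dielectric interface r = r_S."""
    return (r - R_max) / (r_S - R_max)

def inner_weight(r, r_S, R_min):
    """Multiplier of H1 in Eq. (8b): vanishes at the practical origin
    r = R_min and equals 1 on the interface r = r_S."""
    return (r - R_min) / (r_S - R_min)
```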
To inform ANN ansatz (8) with the leading octupolar radial trends, namely
$r^{+3}$ inside and $r^{-4}$ outside the particle, as well as the symmetry
constraints of Eqs. (6,7), we select
$\displaystyle H_{0}$
$\displaystyle=\cos(\theta)r^{-4}\text{atanh}\left[\eta\text{NN}_{0}(r^{*},\theta^{*},\varphi^{*};\boldsymbol{\xi}_{0})\right],$
(9a) $\displaystyle H_{1}$
$\displaystyle=\cos(\theta)r^{+3}\text{atanh}\left[\eta\text{NN}_{1}(r^{*},\theta^{*},\varphi^{*};\boldsymbol{\xi}_{1})\right],$
(9b)
so that the parameters $\boldsymbol{\xi}_{i}$ of neural networks
$\text{NN}_{i}$, with $i=0,1$, are adapted to learn deviations from octupolar
radial trends. The symmetry constraints are imposed on $\text{NN}_{i}$ through
a reparameterization of spatial variables $(r^{*},\theta^{*},\varphi^{*})$ and
the inclusion of $\cos(\theta)$. The neural network architectures, the
constant $\eta$, and the reparameterized coordinate variables
$(r^{*},\theta^{*},\varphi^{*})$, are discussed below.
We describe both $\text{NN}_{i}$ as multilayer perceptrons [37] each
consisting of four densely connected hidden layers with $16$ neurons per layer
$\displaystyle\boldsymbol{x}^{[k]}=\tanh(\boldsymbol{W}^{[k]}\cdot\boldsymbol{x}^{[k-1]}+\boldsymbol{b}^{[k]}),$
(10)
where $k$ is the index of the current layer, and the weight
$\boldsymbol{W}^{[k]}$ and bias $\boldsymbol{b}^{[k]}$ operate a linear
transformation of the input vector $\boldsymbol{x}^{[k-1]}$. As the biases of both the input and output layers are set to zero vectors, the neural
network eventually consists of $880$ trainable variables. Instead of the
canonical rectified linear unit (ReLU) which vanishes upon second-order
differentiation, we assign the activation function to be the hyperbolic
tangent function. Since the output of $\text{NN}_{i}$ measures a deviation
from the octupolar trend, it must remain bounded, with an amplitude that
depends on the strength of the external field, the geometry of the dielectric
inclusion and the mismatch at the dielectric interface. Therefore, an
additional operation $\text{atanh}(\eta\cdot)$ is included in Eqs. (9) to
transform the output of $\text{NN}_{i}$, ranging over $[-1,1]$, into the interval $[-\text{atanh}(\eta),\text{atanh}(\eta)]$. A comparison of the
proposed activation function with the canonical linear and $\tanh$ activation
is discussed in Appendix A. In this work, with $E_{\text{ext}}=1$, we select
$\eta=0.99$. The model parameters
$\boldsymbol{\xi}_{i}=[\boldsymbol{W}_{i},\boldsymbol{b}_{i}]$, as well as
$d_{i}$ defined in Eqs. (8a, 8b), are determined by minimizing the loss
function discussed in §2.3.
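As an illustration, the architecture of Eqs. (9)–(10) can be sketched with a plain NumPy forward pass. The layer sizes, tanh activations, zero input/output biases, and the $\text{atanh}(\eta\,\cdot)$ output transform follow the text; the initialization and helper names are our own, and the paper's actual implementation uses TensorFlow.

```python
import numpy as np

def init_params(rng, n_in=3, width=16, n_hidden=4):
    """Random weights for a 4-hidden-layer MLP; input/output biases fixed to zero."""
    sizes = [n_in] + [width] * n_hidden + [1]
    params = []
    for k in range(len(sizes) - 1):
        W = rng.standard_normal((sizes[k + 1], sizes[k])) * 0.1
        # biases of the first (input) and last (output) layers are zero, untrained
        trainable_bias = 0 < k < len(sizes) - 2
        b = np.zeros(sizes[k + 1])
        params.append((W, b, trainable_bias))
    return params

def nn_forward(params, x):
    """Eq. (10): x^[k] = tanh(W^[k] x^[k-1] + b^[k]); output lies in (-1, 1)."""
    for W, b, _ in params:
        x = np.tanh(W @ x + b)
    return x[0]

def H0(params, r, r_star, theta, theta_star, phi_star, eta=0.99):
    """Eq. (9a): octupolar prefactor times bounded correction atanh(eta * NN_0)."""
    nn = nn_forward(params, np.array([r_star, theta_star, phi_star]))
    return np.cos(theta) * r**-4 * np.arctanh(eta * nn)

rng = np.random.default_rng(0)
params = init_params(rng)
# trainable variables: 3*16 weights + 3*(16*16 + 16) + 16 output weights = 880
n_train = sum(W.size + (b.size if tb else 0) for W, b, tb in params)
print(n_train)  # 880
```

The count of $880$ trainable variables quoted in the text is exactly recovered by this layer layout with untrained input/output biases.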
Noting that the Laplacian operator is of second order, we consider the
following continuous and differentiable transformation of angular variables
$\displaystyle\theta^{*}=-\cos(2\theta)\quad\mbox{and}\quad\varphi^{*}=-\cos(2\varphi),$
(11)
which simultaneously maps $\theta\in[0,\pi]$ and $\varphi\in[-\pi,\pi]$ to the
first quadrant and rescales them to the range $[-1,1]$. Similarly, the radial
variable $r\in[R_{\text{min}},R_{\text{max}}]$ is normalized to the range
$[-1,1]$ using the min-max normalization
$\displaystyle r^{*}=\begin{cases}-1+2\dfrac{r-\text{min}(r_{S})}{R_{\text{max}}-\text{min}(r_{S})},&\mbox{for}\quad r>r_{S},\\[10.0pt] -1+2\dfrac{r-R_{\text{min}}}{\text{max}(r_{S})-R_{\text{min}}},&\mbox{for}\quad r<r_{S}.\end{cases}$ (14)
With $(r^{*},\theta^{*},\varphi^{*})$, the outputs of the neural networks are symmetric with respect to the mid-plane $\theta=\pi/2$, and the anti-symmetry is enforced by the multipliers $\cos(\theta)$ in Eqs. (9). A flowchart of the proposed ANN ansatz is sketched in Fig. 2$(c)$.
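A minimal sketch of the reparameterization of Eqs. (11) and (14), assuming for illustration a spherical interface $r_{S}\equiv 1$ (so $\text{min}(r_{S})=\text{max}(r_{S})=1$; for quasi-cubes $r_{S}$ depends on the angles); $R_{\text{min}}$ and $R_{\text{max}}$ take the values used later in the paper:

```python
import numpy as np

R_MIN, R_MAX = 0.01, 10.0  # solution-domain radii used in the paper

def reparameterize(r, theta, phi, r_s=1.0):
    """Map (r, theta, phi) to (r*, theta*, phi*) per Eqs. (11) and (14).

    For illustration the particle surface is a unit sphere, so
    min(r_S) = max(r_S) = r_s; for a quasi-cube r_S depends on the angles.
    """
    theta_star = -np.cos(2.0 * theta)  # maps [0, pi] onto [-1, 1], symmetric about pi/2
    phi_star = -np.cos(2.0 * phi)      # maps [-pi, pi] onto [-1, 1]
    if r > r_s:  # outside the particle
        r_star = -1.0 + 2.0 * (r - r_s) / (R_MAX - r_s)
    else:        # inside the particle
        r_star = -1.0 + 2.0 * (r - R_MIN) / (r_s - R_MIN)
    return r_star, theta_star, phi_star
```

The symmetry claim is easy to verify: $\theta$ and $\pi-\theta$ map to the same $\theta^{*}$, so any function of $(r^{*},\theta^{*},\varphi^{*})$ is mirror-symmetric about the equatorial plane.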
### 2.3 Loss function
Given governing equation (1) and boundary conditions (2), we write the loss
function as:
$\displaystyle L=L_{ge}+L_{bc},$ (15)
where $L_{ge}$ and $L_{bc}$ measure the mean squared deviations of the ansatz
functions (8a, 8b) from the exact solutions of the governing equation (1) and
the dielectric interface boundary conditions (2), respectively. Although an
exact minimization of the loss function (15) ensures the uniqueness of the
solution to the boundary value problem, an approximation of that solution by
neural networks yields a small nonzero loss. The first term writes as:
$\displaystyle L_{ge}$
$\displaystyle=\frac{w_{g_{0}}}{N_{0}+N_{b}}\sum_{j=1}^{N_{0}+N_{b}}\left[\tilde{r}_{j}^{\beta_{0}}\sin^{2}(\theta_{j})\nabla^{2}\phi_{0}(\boldsymbol{r}_{j})\right]^{2}+\frac{w_{g_{1}}}{N_{1}+N_{b}}\sum_{j=1}^{N_{1}+N_{b}}\left[\tilde{r}_{j}^{\beta_{1}}\sin^{2}(\theta_{j})\nabla^{2}\phi_{1}(\boldsymbol{r}_{j})\right]^{2},$
(16)
where $\boldsymbol{r}_{j}=(r_{j},\theta_{j},\varphi_{j})$ denotes the $j$-th collocation point, sampled from a superposition of a uniform distribution and a Gaussian distribution centered at $r_{S}$, as shown in Fig.
2$(b)$; $N_{0}$, $N_{1}$ and $N_{b}$ denote the number of collocation points
outside, inside, and on the interface of the dielectric particle,
respectively. The multiplier $\sin^{2}(\theta_{j})$ is included to compensate for the singularity of the Laplacian operator at $\theta=0,\pi$. The Laplacian has vanishing magnitude at large $r$, whereas it diverges near $R_{\text{min}}$. Therefore, the solutions $\phi_{i}$ are not equally
optimized throughout the solution domain. Inspired by van der Meer et al.
[38], we introduce the scaling factors $\tilde{r}_{j}^{\beta_{i}}$, with
$\tilde{r}_{j}=r_{j}/r_{S}(\theta_{j},\varphi_{j})$ defined at each
collocation point. We select the exponents $\beta_{0}=4$ and $\beta_{1}=1$, in
order to ensure that the radial, polar, and azimuthal components of $L_{ge}$
are of the same order throughout. A breakdown of the Laplacian
$\nabla^{2}\phi$ componentwise and its visualization are presented in Fig. 2
$(d)$ and $(e)$. With increasing $N$, larger and larger derivatives associated
with sharper and sharper edges and corners cause a strong mismatch among
components of $\nabla^{2}\phi_{i}$ near the interface. Consequently, large
oscillations of the loss function emerge during the training process, leading
to a slow convergence. In order to balance such a mismatch, the weights $w_{g_{0}}$ and $w_{g_{1}}$ are introduced in Eq. (16), in addition to a normalization by $r_{S}(\theta_{j},\varphi_{j})$.
The dielectric interface boundary conditions (2) lead to the following loss
function:
$\displaystyle L_{bc}$
$\displaystyle=\frac{1}{N_{b}}\sum_{j=1}^{N_{b}}\left\\{w_{b_{t}}\left[\phi_{0}(\boldsymbol{r}_{j})-\phi_{1}(\boldsymbol{r}_{j})\right]^{2}+\frac{w_{b_{n}}}{|\nabla
S|^{2}}\left[{\epsilon_{r}}^{-1}\nabla\phi_{0}(\boldsymbol{r}_{j})\cdot\nabla
S-\nabla\phi_{1}(\boldsymbol{r}_{j})\cdot\nabla S\right]^{2}\right\\}.$ (17)
The weights $w_{b_{t}}$ and $w_{b_{n}}$ help balance the losses of tangential
and normal boundary conditions during the training. Convergence and accuracy
are significantly improved when the normalization factor $|\nabla
S(r,\theta,\varphi)|$ is included.
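The two loss terms can be sketched as follows, treating the Laplacian residuals and interface gradients as precomputed arrays. The weighting factors follow Eqs. (16)–(17); the function names and the default $w_{g_{0}}$ are illustrative only, since the weights are case-dependent (cf. Appendix B):

```python
import numpy as np

def loss_ge(lap0, lap1, r0, r1, theta0, theta1, r_s0, r_s1,
            w_g0=0.3, w_g1=1.0, beta0=4, beta1=1):
    """Eq. (16): scaled mean-squared Laplacian residuals outside (0) and inside (1).

    lap0/lap1 are precomputed Laplacians of the ansatz at the collocation
    points; sin^2(theta) compensates for the polar singularity and
    (r / r_S)^beta rescales the radial growth/decay of each residual.
    """
    t0 = ((r0 / r_s0) ** beta0 * np.sin(theta0) ** 2 * lap0) ** 2
    t1 = ((r1 / r_s1) ** beta1 * np.sin(theta1) ** 2 * lap1) ** 2
    return w_g0 * t0.mean() + w_g1 * t1.mean()

def loss_bc(phi0_b, phi1_b, dn_phi0, dn_phi1, grad_s_norm2,
            eps_r, w_bt=1.0, w_bn=1.0):
    """Eq. (17): tangential continuity and normal-flux jump on the interface."""
    tangential = (phi0_b - phi1_b) ** 2
    normal = (dn_phi0 / eps_r - dn_phi1) ** 2 / grad_s_norm2
    return np.mean(w_bt * tangential + w_bn * normal)
```

An exact solution makes both terms vanish: the Laplacian residuals are zero, and the interface values satisfy $\phi_{0}=\phi_{1}$ with $\epsilon_{r}^{-1}\partial_{n}\phi_{0}=\partial_{n}\phi_{1}$.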
## 3 Computational results
Throughout the experiments, the loss function (15) was evaluated over a
sampling of $2^{n}$ collocation points, with $n=13$ on the boundary, $n=14$
inside, and $n=15$ outside of the particle, respectively. The numbers of
collocation points were selected to accommodate the GPU memory. The gradients
of the loss function with respect to model parameters ($d_{i}$ and
$\boldsymbol{\xi}_{i}$) were computed using automatic differentiation [39];
they were subsequently applied to update the model parameters by using the
ADAM optimizer [40] with a starting learning rate of $10^{-3}$. Every $2,000$ iterations, the learning rates were reset to twice and half of the current loss magnitude for $d_{i}$ and $NN_{i}$, respectively. The collocation points were re-sampled every $10,000$ iterations, amounting to unsupervised multi-task learning by mini-batch gradient descent with an effectively infinite set of collocation points. Our numerical models were implemented
using the Python language with a TensorFlow backend [41]. During training, we keep $w_{g_{1}}=w_{b_{n}}=w_{b_{t}}=1$ constant, while the value of $w_{g_{0}}\in(0,1]$ varies across cases. The selection of $w_{g_{0}}$ is detailed in Appendix B. As a reference, on a workstation equipped with two Nvidia GeForce GTX 1080 Ti graphics cards, each iteration takes around $0.1$ seconds.
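The learning-rate and re-sampling schedule described above can be summarized in a small helper. This is a hedged skeleton of the schedule as stated in the text, not the authors' TensorFlow code:

```python
def schedule(step, loss):
    """Return (lr_d, lr_nn, resample) for the current iteration and loss value.

    Every 2,000 iterations the learning rates are reset to twice and half
    the current loss magnitude for d_i and NN_i respectively; collocation
    points are re-sampled every 10,000 iterations. None means "keep the
    previous rate".
    """
    adjust = step > 0 and step % 2_000 == 0
    lr_d = 2.0 * loss if adjust else None
    lr_nn = 0.5 * loss if adjust else None
    resample = step > 0 and step % 10_000 == 0
    return lr_d, lr_nn, resample
```

Tying the learning rates to the current loss magnitude automatically shrinks the step size as the optimization converges, without a fixed decay schedule.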
Figure 3: Dielectric $(a)$ oblate $(a=2/3,b=c=1)$ and $(b)$ prolate
$(a=3/2,b=c=1)$ particles with $\epsilon_{r}=6$ placed in a homogeneous field
$\boldsymbol{E}_{\text{ext}}=\hat{\boldsymbol{z}}$. From top to bottom: the
electric potential $\phi$, the electric fields $E_{x}/E_{\text{ext}}$ and
$E_{z}/E_{\text{ext}}$, and a distribution of $E_{z}/E_{\text{ext}}$ along
principal axes in the $xz$-plane spanned by $r$ and $\theta$. As a reminder, the
exact electric potential outside a dielectric spheroid ($a,b=c=1$) is:
$\phi_{0}=-E_{0}z\left[1+(\epsilon_{r}-1)n_{z}^{\xi}\right]/\left[1+(\epsilon_{r}-1)n_{z}^{\infty}\right]$,
where
$n_{z}^{a}=(a/2)\int_{0}^{a}\mathrm{d}s/\left[(s+1)^{2}\sqrt{s+a^{2}}\right]$
and $\xi=z^{2}-1$ along $z$-axis, cf. [26].
The present model allows us to calculate the induced potential over the whole
domain. By differentiation, we obtain the total electric field,
$\boldsymbol{E}$, the polarization field
$\boldsymbol{P}=\frac{\epsilon_{r}-1}{4\pi}\boldsymbol{E}$ inside the
particle, and the source of it, which is the induced surface charge,
$\sigma=-\boldsymbol{P}\cdot\hat{\boldsymbol{n}}$, as illustrated in Fig.
2$(f)$. The polarizability is $\boldsymbol{\alpha}=\boldsymbol{p}/E_{\text{ext}}$, where $\boldsymbol{p}$ is the induced moment given by
$\boldsymbol{p}=\frac{\epsilon_{r}-1}{4\pi}\int_{V}\boldsymbol{E}\mathrm{d}V.$
(18)
Considering our problem symmetry, all terms but the dipole one cancel upon
integration over volume, leading to:
$p_{x}=p_{y}=0\quad\mbox{and}\quad
p_{z}=\frac{\epsilon_{r}-1}{4\pi}E_{z}|_{r=0}V,$ (19)
where $E_{z}|_{r=0}$ is the amplitude of the electric field at the center of
the particle. In order to compare with previous works, we introduce the volume
normalized polarizability defined by Sihvola et al. [30] as:
$\overline{\alpha}_{j}=4\pi\alpha_{j}/V$, with $j=x,y,z$. Thus,
$\displaystyle\overline{\alpha}_{z}\approx(\epsilon_{r}-1)\frac{E_{z}|_{r=R_{\text{min}}}}{E_{\text{ext}}},$
(20)
because $r=0$ is excluded from the solution domain.
In the following, we fix the parameters $E_{\text{ext}}=R=b=c=1$, and let
$R_{\text{min}}=0.01$. In order to keep the dipole contributions on the
boundaries of the solution domain of the same order, we take
$R_{\text{max}}=10$. The remaining three parameters: $N$, $a$, and
$\epsilon_{r}$, enable one to assess the emergence of edges and corners, the
stretching/squeeze of the geometry, and the dielectric mismatch at the
interface.
Let us first assess the present method with dielectric spheroids for which
there are analytical solutions and then apply it to quasi-cubes with
increasing values of $N$.
### 3.1 Spheroid
For $N=1$, Eq. (4) defines a unit sphere for $a=1$, and it deforms into a
spheroid as $a$ deviates from $a=1$. In both cases, the induced electric field
inside the dielectric particle is uniform and given by an analytical
expression [20, 24, 25, 26], thereby providing a benchmark for the accuracy of
the proposed ANN approach. Here, we consider a sphere ($a=1$), an oblate shape
with $a=2/3$ and a prolate one with $a=3/2$.
The training process is stopped when the loss function (15) drops below
$10^{-5}$. From the electric potential $\phi(r,\theta,\varphi)$, we calculate
the Cartesian components of the electric field $E_{x}$, $E_{y}$ and $E_{z}$ by
means of the chain rule. Their distributions inside and outside an oblate
spheroid with $a=2/3$ and a prolate spheroid with $a=3/2$ with relative
dielectric constant $\epsilon_{r}=6$ are plotted in Fig. 3. Indeed, the
calculated induced electric field is quite uniform inside the dielectric
particle and decays towards $E_{\text{ext}}$ outside, with values in good
agreement with theory [20, 24, 26].
In Fig. 4 we show the distribution of charges on the dielectric interface for
spheroidal particles. The accumulation of positive charges on the north pole
and negative charges on the south pole leads to an induced electric field
which counteracts $\boldsymbol{E}_{\text{ext}}$. Our calculated surface charge
distribution agrees nicely with the exact solution except for a small
deviation at the tips of the oblate spheroid in Fig. 4$(d)$, where the high
curvature in the $xz$-plane leads to large variations of the $L_{ge}$ components of $\phi_{0}$, degrading the numerical accuracy.
Figure 4: Visualization of the surface charge distribution for: $(a)$ an oblate $(a=2/3,b=c=1)$, $(b)$ a spherical $(a=1,b=c=1)$, and $(c)$ a prolate $(a=3/2,b=c=1)$ dielectric particle, all with $\epsilon_{r}=6$. Comparison of the
obtained surface charge distributions with their exact solutions on a cross
section with $(d)$ $xz$-plane $(\varphi=0)$ and $(e)$ $yz$-plane
$(\varphi=\pi/2)$. Table 1: Calculated normalized polarizabilities of
dielectric spheroidal particles compared with the corresponding exact values
and relative errors.
| | | $\epsilon_{r}=2$ | | | $\epsilon_{r}=6$ | | | $\epsilon_{r}=10$ | | |
|---|---|---|---|---|---|---|---|---|---|---|
| | | This work | Exact | Error(%) | This work | Exact | Error(%) | This work | Exact | Error(%) |
| $L\leq 10^{-4}$ | Oblate | 0.7845 | 0.78306 | 0.185 | 2.1028 | 2.09623 | 0.313 | 2.5888 | 2.57627 | 0.486 |
| | Sphere | 0.7492 | 0.75000 | 0.107 | 1.8711 | 1.87500 | 0.208 | 2.2521 | 2.25000 | 0.093 |
| | Prolate | 0.7245 | 0.72280 | 0.235 | 1.7179 | 1.71377 | 0.241 | 2.0391 | 2.02175 | 0.858 |
| $L\leq 10^{-5}$ | Oblate | 0.78355 | 0.78306 | 0.063 | 2.09661 | 2.09623 | 0.017 | 2.57710 | 2.57627 | 0.032 |
| | Sphere | 0.75001 | 0.75000 | 0.001 | 1.87507 | 1.87500 | 0.004 | 2.25008 | 2.25000 | 0.004 |
| | Prolate | 0.72314 | 0.72280 | 0.047 | 1.71379 | 1.71377 | 0.001 | 2.02157 | 2.02175 | 0.009 |
To obtain the polarizability, we compute the integral in Eq. (18) using a
Monte Carlo method [42]. The comparison is made for two values reached by the
loss function $L$ during optimization. As a reminder, the normalized
polarizability (columns “Exact”) of a spheroid with axes ($a,b=c=1$) is:
$\overline{\alpha}_{z}=(\epsilon_{r}-1)/\left[1+(\epsilon_{r}-1)n_{z}^{\infty}\right]$,
with $n_{z}^{\infty}$ defined in the caption of Fig. 3, cf. [24]. The calculated normalized polarizabilities are within less than $1\%$ of their exact values, depending upon the loss function accuracy, see Table 1. More precisely, a
$99\%$ accuracy is found for $L\leq 10^{-4}$ and it reaches $99.9\%$ when the
loss function is minimized below $L\leq 10^{-5}$. A further minimization to
$L\leq 10^{-6}$ would require a significant increase in computation time for a
marginal increase in accuracy. Therefore, we shall limit ourselves to a loss
level $L\leq 10^{-5}$ for applications to quasi-cubic inclusions in the next
section.
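A sketch of the Monte Carlo evaluation of Eq. (18), specialized for illustration to a dielectric sphere, whose interior field is the known uniform value $E_{z}/E_{\text{ext}}=3/(\epsilon_{r}+2)$; for quasi-cubes $E_{z}$ would instead come from the trained ansatz. Because the integrand is constant inside the sphere, the ratio of the two Monte Carlo estimates is exact here, and the result reproduces the sphere entries of Table 1:

```python
import numpy as np

def normalized_polarizability_mc(eps_r, n_samples=200_000, seed=0):
    """Monte Carlo estimate of alpha_bar_z = 4*pi*alpha_z/V via Eq. (18).

    Illustration for a unit sphere, where the interior field is the known
    uniform value E_z/E_ext = 3/(eps_r + 2).
    """
    rng = np.random.default_rng(seed)
    pts = rng.uniform(-1.0, 1.0, size=(n_samples, 3))
    inside = (pts ** 2).sum(axis=1) <= 1.0           # rejection sampling in a cube
    ez = np.where(inside, 3.0 / (eps_r + 2.0), 0.0)  # E_z/E_ext, zero outside
    cube_vol = 8.0
    integral = cube_vol * ez.mean()    # MC estimate of the integral of E_z/E_ext dV
    volume = cube_vol * inside.mean()  # MC estimate of V
    # alpha_bar_z = 4*pi*p_z/(E_ext*V) = (eps_r - 1)/V * integral of (E_z/E_ext) dV
    return (eps_r - 1.0) * integral / volume

print(normalized_polarizability_mc(6.0))  # 1.875
```

For $\epsilon_{r}=2,6,10$ this gives $0.75$, $1.875$, $2.25$, matching the “Exact” sphere values in Table 1.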
Figure 5: Contours for the normalized electric field $E_{z}/E_{\text{ext}}$
associated with dielectric quasi-cubes ($N=6$) with $(a)$ $\epsilon_{r}=2$ and
$(b)$ $\epsilon_{r}=8$ on the $xz$-plane. The insets reveal a saddle-shaped
field inside particles. Distribution of $E_{z}/E_{\text{ext}}$ along $(c)$
$x$- and $(d)$ $z$-axes for dielectric particles with various values of
$\epsilon_{r}=2,4,6,8$.
### 3.2 Quasi-cube
For quasi-cubic particles with $N>1$, the emergence of edges and corners
strongly modifies the electric potential and its derivatives. Fig. 5 shows the
normalized induced electric field $E_{z}/E_{\text{ext}}$ in a quasi-cubic
particle with $N=6$ for values of $\epsilon_{r}=2,4,6,8$. The strong rise of
the electric field along the $z$-axis as one approaches the particle from
outside, its discontinuous drop at the interface, and the continuous decrease
of $E_{z}$ along the $x$-axis from infinity to the origin, all together lead
to a saddle-shaped field inside the particle, as shown in Fig. 5$(a,b)$. With
varying values of $\epsilon_{r}$, despite an apparent difference of amplitude,
the electric field remains qualitatively unchanged.
Figure 6: Shape dependent polarizability for quasi-cubic particles with
$\epsilon_{r}=2,4,6,8$ where, as shown in the inset, points indicate values
computed using Eq. (21) and curves are fitted polynomial functions with an
asymptote $\tilde{\alpha}_{z}|_{N\to\infty}=1$. The shaded regions indicate the deviation of the computed data points from the fitted curves.
Figure 7: $(a)$ Visualization of surface charge on dielectric quasi-cubic
particles with $\epsilon_{r}=6$ and $N=2,4,6,8$. $(b)$ Comparison of the
obtained surface charge distribution on cross sections $\varphi=0$ and
$\varphi=\pi/4$ of quasi-cubes with $N\in[1,10]$.
In Fig. 5$(c,d)$, it is observed that the induced electric field
$E_{z}/E_{\text{ext}}$ at the center of the particle is higher for quasi-cubes
than for a sphere, which implies a higher value of the normalized
polarizability for quasi-cubes than that for a sphere, cf. Eq. (20). To
quantify the relation between the polarizability and the shape of the
particle, we plot
$\displaystyle\tilde{\alpha}_{z}(N)=\frac{\overline{\alpha}_{z}(N)-\overline{\alpha}_{z}^{1}}{\overline{\alpha}_{z}^{\infty}-\overline{\alpha}_{z}^{1}},$
(21)
as a function of $N$ in Fig. 6. In most cases, $\overline{\alpha}_{z}$
obtained using the integral (18) is systematically smaller than a direct
evaluation using Eq. (20) by $0.2\%$. This small deviation can arise from multiple origins, e.g. numerical accuracy, finite-size effects, or a nonzero dipolar component in the neural networks, which complicates the task of isolating the primary contribution. In Eq. (21), the polarizability of quasi-cubic particles is rescaled by that of a sphere $\overline{\alpha}^{1}_{z}$ and that of a cube
$\overline{\alpha}_{z}^{\infty}$ to enable a comparison with different values
of $\epsilon_{r}$; hence, $\tilde{\alpha}_{z}=0$ for a sphere and
$\tilde{\alpha}_{z}=1$ for a cube, independent of $\epsilon_{r}$. For the
dependence of $\overline{\alpha}_{z}^{\infty}$ upon $\epsilon_{r}$ we use the
approximation formula given in Ref. [30]. Then, the obtained polarizabilities
are fitted to a polynomial function of $N$ which displays an asymptotic
behavior $\tilde{\alpha}_{z}(N)\to 1$ as $N\to\infty$. It is observed that
$\tilde{\alpha}_{z}(N)$ exhibits a rapid transition to almost $80\%$ and
$90\%$ of the asymptotic value for $N\leq 10$ and $N\leq 20$, respectively.
This numerical result is in accordance with the fact that the geometry of Eq.
(4) converges virtually to a cube for $N>8$, as seen in Fig. 1. Note that the
transition of $\tilde{\alpha}_{z}(N)$ to its asymptotic value is slower as
$\epsilon_{r}$ increases.
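The rescaling of Eq. (21) is a simple affine map. A sketch, where the sphere value is the exact $\overline{\alpha}_{z}^{1}=1.875$ for $\epsilon_{r}=6$ and the cube value $2.4$ is a made-up placeholder (the actual $\overline{\alpha}_{z}^{\infty}$ comes from the approximation formula of Ref. [30]):

```python
def rescaled_polarizability(alpha_n, alpha_sphere, alpha_cube):
    """Eq. (21): rescale alpha_bar_z(N) by the sphere (N=1) and cube (N->inf) limits."""
    return (alpha_n - alpha_sphere) / (alpha_cube - alpha_sphere)
```

By construction $\tilde{\alpha}_{z}=0$ for the sphere and $\tilde{\alpha}_{z}=1$ for the cube, independent of $\epsilon_{r}$, which is what makes curves for different dielectric constants comparable in Fig. 6.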
In order to investigate the behavior of induced surface charges with the
emergence of ever sharper edges and corners, we visualize $\sigma$ for quasi-
cubic particles with fixed $\epsilon_{r}=6$ and increasing values of
$N\in[1,10]$ in Fig. 7. Compared with the sinusoidal charge distribution on
the surface of a sphere, charges accumulate towards edges and are peaked at
corners as $N$ increases. Such a re-distribution of surface charges leads to
an enhanced dipole moment, which in turns leads to the higher value of
polarizability observed in Fig. 6, thereby a higher electric field at the
center of dielectric quasi-cubic particles observed in Fig. 5.
As a baseline comparison, we compare our ansatz with a vanilla ansatz. For the
latter, we replaced $H_{0}$ and $H_{1}$, cf. Eqs. (9), by multilayer
perceptrons with the same network architecture as $\text{NN}_{i}$ but a linear
activation at the output layer. The input variables $r$, $\theta$ and
$\varphi$ were rescaled to $[-1,1]$ using the min-max normalization. As
explicitly shown in Fig. 8 $(a)$ and $(b)$, the inclusion of physical
constraints in ansatz (9) comparatively reduces the optimization effort by a
large degree. It is worth noting that in order to capture the transition
regime from sphere to cube, classical numerical methods, e.g. finite element,
would require a re-mesh and a re-simulation for each different value of $N$.
Alternatively, the proposed ANN approach is able to tackle more efficiently
this progressive transition. Since each change in $N$ will only affect the
boundary conditions and a few collocation points near the dielectric interface,
while Laplace’s equation remains satisfied in the bulk of the solution domain,
a minimization initiated from a previous converged solution for, e.g. $N-1$,
leads to a drastic reduction in computational time as shown in Fig. 8 $(c)$
and $(d)$. For piece-wise homogeneous dielectric media considered in this
work, $\epsilon_{r}$ appears only through the normal boundary condition (2b).
Thus, for simulations with different values of $\epsilon_{r}$ and a fixed
value of $N$, neural networks are committed to learn the change in mismatch,
which is induced by a continuous variation of $\epsilon_{r}$, at the
dielectric interface, cf. Fig. 8 $(e)$ and $(f)$. Once the loss function is
minimized below the target value, only the converged solutions, which consist
of the model variables and the structure of the ANN ansatz, are stored
independently of collocation points. Unlike finite element methods whose
solutions are defined on meshes, our ANN approach provides a continuous
mapping: $(r,\theta,\varphi)\to\phi$ over the entire solution domain
$r\in[R_{\text{min}},R_{\text{max}}]$, resembling analytical solutions.
Figure 8: Decay of loss function $L$ for the case $N=3$ and $\epsilon_{r}=4$
initiated from $(a)$ random conditions using the proposed ansatz (8); $(b)$
random conditions using the vanilla ansatz; $(c)$ converged solution of $N=2$
and $\epsilon_{r}=4$; $(d)$ converged solution of $N=4$ and $\epsilon_{r}=4$;
$(e)$ converged solution of $N=3$ and $\epsilon_{r}=2$; $(f)$ converged
solution of $N=3$ and $\epsilon_{r}=6$. Iteration stops at $L\leq 10^{-4}$ or
after $150,000$ iterations. In $(a)$ $L$ decays below $10^{-4}$ within
$60,000$ iterations; whereas in $(b)$ $L$ remains of the order of $10^{-3}$ after
$150,000$ iterations. In $(a)$ and $(b)$, the first $100$ iterations are
removed for a better visualization. Due to the slow convergence in $(b)$, the
learning rate is decreased by a factor of $1.2$ at every $2000$ iterations
until a minimal learning rate $5\times 10^{-5}$ is reached.
## 4 Conclusions
Recent advances in machine learning enable a revisit of longstanding physics
problems from a new perspective. We have presented a neural-network-based calculation of the response of dielectric particles with shapes varying from
sphere to cube, placed in an external uniform electric field. Our ansatz
intertwined boundary conditions at the borders of the solution domain, as well
as symmetry constraints resulting from the geometry of the particle and the
external field, with neural networks. Then, solving Laplace’s equation with
dielectric boundary conditions was translated to a minimization of a loss
function defined in Sec. 2.3. To evaluate the accuracy, we applied the
proposed ANN approach to spheroids with various aspect ratios and relative
dielectric constants. An overall $99.9\%$ accuracy for the normalized
polarizability was achieved, with however, a slight deviation of surface
charge from the exact solution at the tips of an oblate spheroid with large
curvature, as shown in Fig. 4$(d)$.
As a sphere progressively deforms into a cube, the accumulation of surface
charges towards the ever sharper edges and corners leads to a rapid transition
of polarizability to its asymptotic value at lower values of $N\leq 10$. This
implies that the shape effect has a significant impact on determining the
induced polarizability by dielectric nano-particles. The enhanced
polarizability with increasing values of $N$ leads to an amplified dipole
moment, which, in turn, results in a higher electric field at the center of a
quasi-cubic particle than that of a sphere, independent of the relative
dielectric constant of the particle.
Since neural networks are infinitely differentiable, auto-differentiation enables a conversion of the obtained ANN solution into higher-order derivatives, which are again continuous functions, instead of interpolating among discrete values. This feature can be advantageous in a broad range of physical
applications where higher-order derivatives are of interest. By contrast to
the finite element method, where the entire mesh is required for computation,
the ANN methods during training use only a fraction of collocation points to
compute the loss function and update the model parameters by mini-batch
stochastic gradient descent techniques. A successive re-sampling enables a
complete covering of the solution domain given a sufficient number of iterations.
Therefore, the mesh-free ANN approach overcomes two major deficiencies of the
finite element method stemming from the mesh dependence: (i) the finite
differentiability; and (ii) the high memory usage associated with a fine mesh.
Finally, it is worth emphasizing some limitations which need to be overcome in
future works. Loss functions stemming from physics problems often consist of
several components, e.g. governing equation, initial and boundary conditions,
symmetries and conservation laws. Since the loss function is not strictly
zero, an imbalance in its components can deteriorate the accuracy of the
solution. In this work, we introduced weights to balance the amplitudes of
each loss component during the training process. However, it is unclear how the selection of these weights affects the numerical accuracy. Therefore, it would be beneficial to establish a relation between the accuracy and the relative amplitudes of each component, and devise an adaptive algorithm which
automatically adjusts the weights during the training process.
## Appendix
## Appendix A Output activation
The linear $x$, $\tanh(x)$, and $g(x)=\text{atanh}(\eta\tanh(x))$ activation
functions and their derivatives are shown in Fig. 9. By varying
$\eta\in(0,1)$, we adjust the output of the ANN to an arbitrary bounded interval.
Alternatively, one can achieve the same purpose by scaling the $\tanh$ output. However, as observed in Fig. 9$(b)$, scaling up the $\tanh$ output leads to sharp variations of the gradients. Therefore, we select the activation function at the output layer to be $g(x)$.
Figure 9: Comparison of $(a)$ linear $x$, $\tanh$, and proposed $g(x)$
activation functions; and $(b)$ their derivatives with respect to $x$.
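A sketch of the proposed output activation and its analytic derivative, $g'(x)=\eta(1-\tanh^{2}x)/(1-\eta^{2}\tanh^{2}x)$:

```python
import numpy as np

ETA = 0.99

def g(x):
    """Proposed output activation: atanh(eta * tanh(x)), bounded by +/- atanh(eta)."""
    return np.arctanh(ETA * np.tanh(x))

def g_prime(x):
    """Analytic derivative: eta * (1 - tanh^2 x) / (1 - eta^2 tanh^2 x)."""
    t = np.tanh(x)
    return ETA * (1.0 - t * t) / (1.0 - ETA * ETA * t * t)
```

For $\eta=0.99$, a scaled $\tanh$ with the same range, $\text{atanh}(0.99)\tanh(x)\approx 2.65\tanh(x)$, has slope $\approx 2.65$ at the origin, whereas $g'(0)=\eta=0.99$; this is the milder gradient variation visible in Fig. 9$(b)$.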
## Appendix B Weight tuning
The weight $w_{g_{0}}$ is selected to balance the components of $L_{ge}$ on
the dielectric interface. It is observed from Fig. 10 that, due to the
emergence of ever sharper corners with increasing $N$, the components of
$L_{ge}$ inside and outside of the particle differ in magnitudes
substantially. To enable an optimization of $\phi_{0}$ and $\phi_{1}$ on the
same footing, we take the spherical case $N=1$ as a reference and select
$w_{g_{0}}=0.3,0.2,0.15$ for $N=3,6,9$, respectively. However, since the relative magnitudes of the loss components are not known a priori, an adaptive algorithm which adjusts $w_{g_{0}}$ during the training can be beneficial.
Figure 10: Visualization of the radial (blue), the polar (orange) and the
azimuthal (green) components of $L_{ge}$ against $\theta$ on the interface
with $\varphi=\pi/4$ and $\epsilon_{r}=4$. The solid and the dashed lines are
associated with $\phi_{0}$ and $\phi_{1}$, respectively. The gray vertical
lines mark the locations of corners for a canonical cube.
## Acknowledgment
The authors would like to thank Steven Blundell and Tan Nguyen for discussions
that initiated this work, Liang Mong Koh, Sean Ooi and Kavitha Srinivasan for
providing computational resources and Subodh Mhaisalkar for support. The
computational work for this article was partially performed on resources of
the National Supercomputing Centre, Singapore (https://www.nscc.sg).
## References
* [1] A. N. Gorban and D. C. Wunsch, “The general approximation theorem,” in _Proceedings of the International Joint Conference on Neural Networks_, 1998.
* [2] D. A. Winkler and T. C. Le, “Performance of deep and shallow neural networks, the universal approximation theorem, activity cliffs, and qsar,” _Molecular Informatics_ , vol. 36, no. 1-2, p. 1600118, 2017.
* [3] H. Lin and S. Jegelka, “Resnet with one-neuron hidden layers is a universal approximator,” in _Advances in Neural Information Processing Systems_, 2018.
* [4] A. J. Meade Jr. and A. A. Fernandez, “The numerical solution of linear ordinary differential equations by feedforward neural networks,” _Mathematical and Computer Modelling_ , vol. 19, no. 12, pp. 1–25, 1994.
* [5] ——, “Solution of nonlinear ordinary differential equations by feedforward neural networks,” _Mathematical and Computer Modelling_ , vol. 20, no. 9, pp. 19–44, 1994.
* [6] I. E. Lagaris, A. Likas, and D. I. Fotiadis, “Artificial neural networks for solving ordinary and partial differential equations,” _IEEE Transactions on Neural Networks_ , vol. 9, no. 5, pp. 987 – 1000, 1998.
* [7] I. E. Lagaris, A. C. Likas, and D. G. Papageorgiou, “Neural-network methods for boundary value problems with irregular boundaries,” _IEEE Transactions on Neural Networks_ , vol. 11, no. 5, pp. 1041 – 1049, 2000.
* [8] K. S. McFall and J. R. Mahan, “Artificial neural network method for solution of boundary value problems with exact satisfaction of arbitrary boundary conditions,” _IEEE Transactions on Neural Networks_ , vol. 20, no. 8, pp. 1221 – 1233, 2009.
* [9] M. Raissi, “Deep hidden physics models: Deep learning of nonlinear partial differential equations,” _Journal of Machine Learning Research_ , vol. 19, no. 25, pp. 1–24, 2018.
* [10] Y. Yang and P. Perdikaris, “Adversarial uncertainty quantification in physics-informed neural networks,” _Journal of Computational Physics_ , vol. 394, pp. 136–152, 2019.
* [11] C. Rackauckas, Y. Ma, J. Martensen, C. Warner, K. Zubov, R. Supekar, D. Skinner, A. Ramadhan, and A. Edelman, “Universal differential equations for scientific machine learning,” arXiv:2001.04385.
* [12] M. Raissi, Z. Wang, M. S. Triantafyllou, and G. E. Karniadakis, “Deep learning of vortex-induced vibrations,” _Journal of Fluid Mechanics_ , vol. 861, pp. 119–137, 2018.
* [13] M. Raissi, P. Perdikaris, and G. E. Karniadakis, “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” _Journal of Computational Physics_ , vol. 378, pp. 686–707, 2019.
* [14] M. Raissi, A. Yazdani, and G. E. Karniadakis, “Hidden fluid mechanics: Learning velocity and pressure fields from flow visualizations,” _Science_ , vol. 367, no. 6481, pp. 1026–1030, 2020.
* [15] Y. Chen, L. Lu, G. E. Karniadakis, and L. D. Negro, “Physics-informed neural networks for inverse problems in nano-optics and metamaterials,” _Optics Express_ , vol. 28, no. 8, p. 11618, 2020.
* [16] J. C. Wong, A. Gupta, and Y. S. Ong, “Can transfer neuroevolution tractably solve your differential equations?” arXiv:2101.01998.
* [17] S. Wang, X. Yu, and P. Perdikaris, “When and why pinns fail to train: A neural tangent kernel perspective,” arXiv:2007.14527.
* [18] S. Wang, Y. Teng, and P. Perdikaris, “Understanding and mitigating gradient pathologies in physics-informed neural networks,” arXiv:2001.04536.
* [19] M. Elhamod, J. Bu, C. Singh, M. Redell, A. Ghosh, W. C. L. V. Podolskiy, and A. Karpatne, “Cophy-pgnn: Learning physics-guided neural networks with competing loss functions for solving eigenvalue problems,” arXiv:2007.01420.
* [20] V. Klimov, _Nanoplasmonics_. CRC Press, Taylor and Francis Group, 2013.
* [21] C. Delerue and M. Lannoo, _Nanostructures: theory and modeling_. Springer Science & Business Media, 2013.
* [22] M. A. Becker et al., “Bright triplet excitons in caesium lead halide perovskites,” _Nature_ , vol. 553, no. 1, pp. 189–193, 2018.
* [23] T. P. T. Nguyen, S. A. Blundell, and C. Guet, “One-photon absorption by inorganic perovskite nanocrystals: a theoretical study,” _Physical Review B_ , vol. 101, p. 195414, 2020.
* [24] L. D. Landau, E. M. Lifshitz, and L. P. Pitaevshiǐ, _Electrodynamics of continuous media_. Pergamon Press, 1984.
* [25] J. D. Jackson, _Classical Electrodynamics_. John Wiley & Sons, 2007.
* [26] J. A. Stratton, _Electromagnetic Theory_. John Wiley & Sons, 2007.
* [27] T. W. Edwards and J. V. Bladel, “Electrostatic dipole moment of a dielectric cube,” _Applied Scientific Research, Section B_ , vol. 9, no. 9, pp. 151–154, 1961.
* [28] D. F. Herrick and T. A. Senior, “The dipole moment of a dielectric cube,” _IEEE Transactions on Antennas and Propagation_, vol. 25, pp. 590–592, 1977.
* [29] L. Eyges and P. Gianino, “Polarizabilities of rectangular dielectric cylinders and of a cube,” _IEEE Transactions on Antennas and Propagation_ , vol. 27, pp. 557–560, 1979.
* [30] J. Avelin, H. H. R. Sharma, and A. H. Sihvola, “Polarizability analysis of cubical and square-shaped dielectric scatterers,” _IEEE Transactions on Antennas and Propagation_ , vol. 49, no. 3, pp. 451–457, 2001.
* [31] A. Sihvola, P. Ylä-Oijala, S. Järvenpää, and J. Avelin, “Polarizabilities of platonic solids,” _IEEE Transactions on Antennas and Propagation_ , vol. 52, no. 3, pp. 451–457, 2004.
* [32] A. Sihvola, “Dielectric polarization and particle shape effects,” _Journal of Nanomaterials_ , vol. 2007, pp. 1–9, 2007.
* [33] J. Helsing and K.-M. Perfekt, “On the polarizability and capacitance of the cube,” _Applied and Computational Harmonic Analysis_ , vol. 34, p. 445–468, 2013.
* [34] M. V. Kovalenko, L. Protesescu, and M. I. Bodnarchuk, “Properties and potential optoelectronic applications of lead halide perovskite nanocrystals,” _Science_ , vol. 358, no. 6364, pp. 745–750, 2017.
* [35] Y. Tang et al., “Highly stable perovskite supercrystals via oil-in-oil templating,” _Nano Letters_ , vol. 20, no. 8, p. 5997–6004, 2020.
* [36] A. A. Khelashvili and T. P. Nadareishvili, “Singular behavior of the laplace operator in polar spherical coordinates and some of its consequences for the radial wave function at the origin of coordinates physics of particles and nuclei letters,” _Physics of Particles and Nuclei Letters_ , vol. 12, pp. 11–25, 2015.
* [37] G. Cybenko, “Approximation by superpositions of a sigmoidal function,” _Mathematics of Control, Signals and Systems_ , vol. 2, pp. 303–314, 1989\.
* [38] R. van der Meer, C. Oosterlee, and A. Borovykh, “Optimally weighted loss functions for solving pdes with neural networks,” _arXiv:2002.06269_ , 2020\.
* [39] A. G. Baydin, B. A. Pearlmutter, A. A. Radul, and J. M. Siskind, “Automatic differentiation in machine learning: a survey,” _Journal of Machine Learning Research_ , vol. 18, no. 1, pp. 5595–5637, 2017.
* [40] D. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in _International Conference on Learning Representations_ , 2015.
* [41] M. Abadi et al., “Tensorflow: Large-scale machine learning on heterogeneous distributed systems,” _arXiv:1603.04467_ , 2016.
* [42] R. Y. Rubinstein and D. P. Kroese, _Simulation and the Monte Carlo method_. John Wiley & Sons, 2016.
|
# Aperiodic subshifts of finite type on groups which are not finitely
generated
Sebastián Barbieri
###### Abstract
We provide an example of a non-finitely generated group which admits a
nonempty strongly aperiodic SFT. Furthermore, we completely characterize the
groups with this property in terms of their finitely generated subgroups and
the roots of their conjugacy classes.
_Keywords: symbolic dynamics, subshift of finite type, aperiodicity._
_MSC2020:_ 37B10.
Let $A$ be a finite set and $G$ a group with identity $1$. Consider the set of
configurations $A^{G}=\\{x\colon G\to A\\}$ endowed with the left shift action
$G\curvearrowright A^{G}$ given by
$(gx)(h)=x(g^{-1}h)\mbox{ for every }x\in A^{G}\mbox{ and }g,h\in G.$
A set $X\subset A^{G}$ is called a subshift of finite type (SFT) if there
exists a finite set $F\subset G$ and $L\subset A^{F}$ such that $x\in X$ if
and only if for every $g\in G$ we have that the restriction of $gx$ to $F$ is
in $L$. An SFT can be thought of as a set of colorings of $G$ using colors from
$A$ which satisfy a finite set of local rules encoded by $L$.
An SFT $X$ is called strongly aperiodic (SA) if the shift action is free,
namely, given $x\in X$ we have that $gx=x$ can only hold for $g=1$. Clearly
$X=\varnothing$ is SA, but this case is not very interesting. This definition
raises the question of whether nonempty SA SFTs actually exist, and the answer
turns out to depend on which group $G$ we are considering.
Let us illustrate the failure of the existence of nonempty SA SFTs in the case
$G=\mathbb{Z}$. Given $A,F$ and $L$ which define an SFT $X$ we may assume
without loss of generality that $F$ is of the form $\\{0,\dots,n-1\\}$ for
some $n\geq 1$. Consider the finite directed graph with vertices $A^{n+1}$ and
such that there is an edge from $u_{0}\dots u_{n}$ to $v_{0}\dots v_{n}$ if
and only if $u_{1}\dots u_{n}=v_{0}\dots v_{n-1}\in L$. It is clear that the
elements of $X$ are precisely the projections of the bi-infinite walks in this
graph to the first symbol in each coordinate (for further details, see [17]).
If $X$ is nonempty, it follows that the graph must contain a cycle, say of
length $m>0$, and consequently there must exist $x\in X$ for which $mx=x$,
showing that $X$ is not SA.
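The graph argument above can be made concrete in a few lines of code. The following sketch (not part of the original text; the toy rule and all names are illustrative) builds the overlap graph for the SFT on $\mathbb{Z}$ forbidding equal adjacent symbols, finds a cycle, and projects it to a periodic configuration:

```python
from itertools import product

# Toy instance of the argument for G = Z (illustrative, not from the paper):
# alphabet A = {0, 1}, window length n = 2, allowed patterns
# L = {01, 10}, i.e. adjacent symbols must differ.
A, n = (0, 1), 2
L = {(0, 1), (1, 0)}

def legal(w):
    # every length-n window of the word w is an allowed pattern
    return all(w[i:i + n] in L for i in range(len(w) - n + 1))

# Vertices: legal words of length n + 1; edge u -> v when the length-n
# overlap matches (pruning illegal words leaves the bi-infinite walks intact).
V = [w for w in product(A, repeat=n + 1) if legal(w)]
E = {u: [v for v in V if u[1:] == v[:n]] for u in V}

def find_cycle():
    # a greedy walk suffices on this small graph; in general use DFS
    for start in V:
        path, seen = [start], {start: 0}
        while E[path[-1]]:
            v = E[path[-1]][0]
            if v in seen:
                return path[seen[v]:]
            seen[v] = len(path)
            path.append(v)
    return None

cycle = find_cycle()
m = len(cycle)                       # cycle length = period of the configuration
x = [w[0] for w in cycle]            # project each vertex to its first symbol
assert legal(tuple(x) * 2)           # the periodic word obeys the local rule
print(m, x)
```

For this rule the walk finds the $2$-cycle of alternating words, recovering the familiar fact that every nonempty SFT on $\mathbb{Z}$ contains a periodic point.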
The case of $G=\mathbb{Z}^{2}$ is much more interesting. In [20] Wang studied
the algorithmic problem of deciding, given as inputs $A,F$ and $L$ whether the
SFT $X$ they define is empty or not (to be rigorous, Wang studied an equivalent problem where the SFT is described by square tiles with colored edges, and in any tiling of $\mathbb{Z}^{2}$ the colors at the edges must match). Wang did not solve this problem but showed that if $\mathbb{Z}^{2}$
does not admit any nonempty SA SFT, then the problem would be algorithmically
decidable. Years later Berger solved the problem [7] showing that it was
algorithmically undecidable, and in doing so constructed the first known
example of a nonempty SA SFT in $\mathbb{Z}^{2}$. After this breakthrough,
several beautiful SA SFTs on $\mathbb{Z}^{2}$ have been constructed, see [15,
16, 19].
In recent years the problem of classifying the groups which admit a nonempty
SA SFT has attracted considerable attention. This has led to wonderful discoveries that relate
this property with the algorithmic and geometric properties of groups. For
instance, the property of admitting a nonempty SA SFT is invariant under
commensurability [8] and under quasi-isometries of finitely presented groups
[9]. It is also known that finitely generated groups with infinitely many ends
(in particular, virtually free groups) cannot admit nonempty SA SFTs [9], and
that finitely generated and recursively presented groups which admit SA SFTs
must necessarily have decidable word problem [13]. Within the groups that
satisfy these constraints, several are known to admit SA SFTs. For instance
polycyclic-by-finite groups which are not virtually cyclic [3, 14], one-ended
word-hyperbolic groups [10, 11], some Baumslag-Solitar groups [1, 2, 12],
groups of the form $\mathbb{Z}^{d}\rtimes_{\varphi}G$ with $d\geq 2$,
$\varphi\in\operatorname{GL}_{d}(\mathbb{Z})$ and $G$ infinite, finitely
generated and with decidable word problem [5], groups which are the direct
product of three infinite, finitely generated groups with decidable word
problem [4], the Grigorchuk group, and more generally, any finitely generated
branch group with decidable word problem [4], and any self-simulable group
with decidable word problem such as Thompson’s $V$,
$\operatorname{GL}_{n}(\mathbb{Z})$ and $\operatorname{SL}_{n}(\mathbb{Z})$
for $n\geq 5$, or the direct product of any two finitely generated non-
amenable groups with decidable word problem [6].
A common point shared by all the existence results mentioned above is that
they apply to finitely generated groups. For obvious reasons this is not very
surprising: if $X\subset A^{G}$ is an SFT on a group $G$ described by
$L\subset A^{F}$ and $H=\langle F\rangle$ is the subgroup of $G$ finitely
generated by $F$, we may consider instead $Y\subset A^{H}$ as the SFT which is
defined by $L$ but on $H$. It turns out that configurations in $X$ can be seen
as independent copies of configurations of $Y$ on each coset of $H$.
###### Proposition 1.
Let $H\leqslant G$, $F\subset H$ finite, $L\subset A^{F}$ and let $X,Y$ be the
SFTs defined by $L$ on $G$ and $H$ respectively. Let $(g_{i})_{i\in I}$ be a
set of left coset representatives of $H$ in $G$. Then $x\in X$ if and only if
for every $i\in I$ we have $y_{i}=(x(g_{i}h))_{h\in H}\in Y$.
###### Proof.
Let $x\in X$ and $i\in I$. For every $u\in H$ we have
$(uy_{i})(h)=y_{i}(u^{-1}h)=x(g_{i}u^{-1}h)=(ug_{i}^{-1}x)(h)$. As $x\in X$,
it follows that $(ug_{i}^{-1}x)|_{F}\in L$ and thus $(uy_{i})|_{F}\in L$ and
$y_{i}\in Y$. Conversely, suppose $y_{i}\in Y$ for every $i\in I$ and let
$g\in G$. We may choose $i\in I$ and $v\in H$ such that $g^{-1}=g_{i}v$ and
thus we have
$(gx)(h)=(v^{-1}g_{i}^{-1}x)(h)=x(g_{i}vh)=y_{i}(vh)=(v^{-1}y_{i})(h)$ for
every $h\in H$. As $y_{i}\in Y$, it follows that $v^{-1}y_{i}|_{F}\in L$ and
thus $gx|_{F}\in L$, showing that $x\in X$. ∎
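Proposition 1 can be checked by brute force in a toy case. The sketch below (illustrative only, not from the paper) takes $G=\mathbb{Z}$, $H=2\mathbb{Z}$ and the rule $x(g)\neq x(g+2)$, whose defining window $F=\{0,2\}$ lies inside $H$, and verifies on finite windows that the global rule holds exactly when each coset of $H$, read as a configuration over $H\simeq\mathbb{Z}$, satisfies the induced rule independently:

```python
from itertools import product

# Proposition 1 in miniature (finite windows): G = Z, H = 2Z, and the
# local rule "x(g) != x(g + 2) for all g".  The two cosets of H are the
# even and odd positions.
N = 6

def sft_rule(x):
    # rule on G, checked wherever the window {0, 2} fits
    return all(x[i] != x[i + 2] for i in range(N - 2))

def coset_rule(x):
    # induced rule on each coset of H separately: y(h) != y(h + 2) on
    # H ~ Z becomes "consecutive entries of the subsequence differ"
    alt = lambda y: all(y[k] != y[k + 1] for k in range(len(y) - 1))
    return alt(x[0::2]) and alt(x[1::2])

for x in product((0, 1), repeat=N):
    assert sft_rule(x) == coset_rule(x)
print("agree on all", 2 ** N, "windows")
```

The two conditions coincide window by window, mirroring the statement that $x\in X$ if and only if every coset restriction $y_{i}$ lies in $Y$.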
Based on the argument above, one might expect that non-finitely generated
groups would have a hard time admitting nonempty SA SFTs, as one could always
pick the same configuration on each coset and use that to create periodicity.
Let us illustrate that this intuition is wrong with a real-world situation.
Let us suppose the reader is walking on the street, while suddenly they get
jumped by a thief who demands: “Quickly! give me an example of a non-finitely
generated group which admits a nonempty SA SFT or I shall take your wallet!”.
The reader, scared by the gravity of the situation, might wrongly answer that
such examples do not exist by explaining the argument above. The thief, with a
victorious grin on their face, would take out their portable blackboard and
write down the following example.
###### Example 2.
Consider $G=\mathbb{Q}^{2}$ and let $Y\subset A^{\mathbb{Z}^{2}}$ be a
nonempty SA SFT. Let $X\subset A^{\mathbb{Q}^{2}}$ be given by the condition
$x\in X\mbox{ if and only if }(sx)|_{\mathbb{Z}^{2}}\in Y,\mbox{ for every
}s\in\mathbb{Q}^{2}.$
As $Y$ is a nonempty SFT, it follows that $X$ is a nonempty SFT. For
$q\in\mathbb{Q}^{2}$, we may write
$q=\left(\frac{p_{1}}{r_{1}},\frac{p_{2}}{r_{2}}\right)$ with
$p_{1},p_{2},r_{1},r_{2}\in\mathbb{Z}$ and $r_{1}r_{2}\neq 0$. It follows that
$r_{1}r_{2}q\in\mathbb{Z}^{2}$. Now suppose we have $qx=x$, then we also have
$(r_{1}r_{2}q)x=x$ and from the fact that $x|_{\mathbb{Z}^{2}}\in Y$, we
obtain that necessarily $r_{1}r_{2}q=(0,0)$. As $r_{1}r_{2}\neq 0$, we deduce
that $q=(0,0)$ and thus $X$ is SA. $\ocircle$
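The arithmetic at the heart of Example 2 is easy to check mechanically. The following sketch (illustrative; the function name is hypothetical) uses Python's `fractions` module to compute, for a nonzero $q\in\mathbb{Q}^{2}$, an integer $n$ with $nq\in\mathbb{Z}^{2}\setminus\{(0,0)\}$, which is exactly what defeats any period $q$:

```python
from fractions import Fraction

# Example 2 in computation: for q in Q^2 \ {(0,0)}, the integer
# n = r1 * r2 (product of the denominators) satisfies n·q in Z^2 and
# n·q != (0,0).  A configuration fixed by q would then also be fixed by
# the nonzero element n·q of Z^2, contradicting strong aperiodicity of Y.
def integral_multiple(q):
    r1, r2 = q[0].denominator, q[1].denominator
    n = r1 * r2
    m = (n * q[0], n * q[1])
    assert all(c.denominator == 1 for c in m)   # n·q lies in Z^2
    return n, (int(m[0]), int(m[1]))

q = (Fraction(3, 4), Fraction(-5, 6))
n, m = integral_multiple(q)
print(n, m)        # n = 24 and n·q = (18, -20), a nonzero element of Z^2
assert m != (0, 0)
```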
Faced with this example, the reader would have no other choice but to
surrender their wallet to the thief. The argument for nonexistence sketched
above is incomplete: while it is true that one may choose independently any
configuration in every coset, it may happen that powers of the coset
representatives end up inevitably falling on a non-trivial element of the
finitely generated subgroup and thus destroy any global translational
symmetry. As we shall show, it turns out that modulo conjugacy this is
essentially the sole way that SA SFTs can arise in non-finitely generated
groups. In fact, we shall show that global aperiodicity may arise even if the
subshift in the finitely generated subgroup is not strongly aperiodic.
For an SFT $X\subset A^{G}$, define its free part as the set
$\operatorname{Free}(X)=G\setminus\bigcup_{x\in
X}\operatorname{Stab}_{G}(x)=\\{g\in G:gx\neq x\mbox{ for every }x\in X\\}.$
In particular, a nonempty SFT $X$ is SA if and only if
$\operatorname{Free}(X)=G\setminus\\{1\\}$. For a subset $M\subset G$, let us
denote the set of its roots in $G$ by
$R_{G}(M)=\\{g\in G:\mbox{ there is }n>0\mbox{ such that }g^{n}\in M\\}.$
Finally, given $g\in G$ let us denote its conjugacy class by
$\operatorname{Cl}(g)=\\{tgt^{-1}:t\in G\\}$.
###### Proposition 3.
Let $H\leqslant G$, $F\subset H$ finite, $L\subset A^{F}$ and let $X,Y$ be
nonempty SFTs defined by $L$ on $G$ and $H$ respectively. We have that
$g\in\operatorname{Free}(X)$ if and only if
$\operatorname{Cl}(g)\cap R_{G}(\operatorname{Free}(Y))\neq\varnothing.$
###### Proof.
If $g\notin\operatorname{Free}(X)$, then there exists $x\in X$ such that
$gx=x$. Suppose there is $n>0$, $h\in H$ and $t\in G$ such that
$g^{n}=t^{-1}ht$. It follows that we would have $t^{-1}htx=g^{n}x=x$ and thus
$htx=tx$. Letting $z=tx$, it follows that $hz=z$. As $z\in X$, it follows that
if we let $y=z|_{H}\in Y$ we have $hy=y$ and thus
$h\notin\operatorname{Free}(Y)$. This shows that $\operatorname{Cl}(g)\cap
R_{G}(\operatorname{Free}(Y))=\varnothing$.
Conversely, let $g\in G$ be such that $\operatorname{Cl}(g)\cap
R_{G}(\operatorname{Free}(Y))=\varnothing$ and choose (using the axiom of
choice) a set $(\ell_{i})_{i\in I}$ of left coset representatives of $H$.
There is a unique well-defined permutation $\varphi\colon I\to I$ which
satisfies $g\ell_{i}=\ell_{\varphi(i)}h$ for some $h\in H$. In particular,
$\varphi$ induces an action of $\mathbb{Z}$ on $I$ given by $m\cdot
i=\varphi^{m}(i)$ for $m\in\mathbb{Z}$ and $i\in I$.
Choose (using the axiom of choice) $J\subset I$ such that it contains exactly
one representative of each orbit of $\varphi$, that is, for every $i\in I$
there is a unique $j\in J$ such that $i=\varphi^{m}(j)$ for some (not necessarily unique) $m\in\mathbb{Z}$. For $n\in\mathbb{Z}$ and $j\in J$ let
$h_{n,j}=\ell_{\varphi^{n}(j)}^{-1}g^{n}\ell_{j}\in H.$
We will now define a configuration $x\in A^{G}$. Let $y^{*}\in Y$ be a fixed
configuration and let $j\in J$, there are two cases to consider:
1. 1.
If $\\{\varphi^{m}(j)\\}_{m\in\mathbb{Z}}$ is infinite, we let $y_{j}=y^{*}$
and define $x(\ell_{\varphi^{m}(j)}s)=h_{m,j}y_{j}(s)=y_{j}(h^{-1}_{m,j}s)$
for every $s\in H$ and $m\in\mathbb{Z}$.
2. 2.
If $\\{\varphi^{m}(j)\\}_{m\in\mathbb{Z}}$ is finite, let $n>0$ be the least
positive integer such that $\varphi^{n}(j)=j$. As
$h_{n,j}=\ell_{j}^{-1}g^{n}\ell_{j}\in\operatorname{Cl}(g^{n})$ it follows by
our assumption that $h_{n,j}\notin\operatorname{Free}(Y)$. Therefore there
exists $y_{j}\in Y$ such that $h_{n,j}y_{j}=y_{j}$. We define
$x(\ell_{\varphi^{m}(j)}s)=h_{m,j}y_{j}(s)=y_{j}(h_{m,j}^{-1}s)$ for every
$s\in H$ and $m\in\mathbb{Z}$. Notice that as $h_{n,j}y_{j}=y_{j}$, this is
well-defined.
By construction, we have that $(x(\ell_{i}h))_{h\in H}\in Y$ for every $i\in
I$ and thus by Proposition 1 we have that $x\in X$. Let us show that $gx=x$.
Indeed, for $j\in J$, $n\in\mathbb{Z}$ and $s\in H$ we have
$gx(\ell_{\varphi^{n}(j)}s)=x(g^{-1}\ell_{\varphi^{n}(j)}s)=x(\ell_{\varphi^{n-1}(j)}\ell_{\varphi^{n-1}(j)}^{-1}g^{-1}\ell_{\varphi^{n}(j)}s)$
As $\ell_{\varphi^{n-1}(j)}^{-1}g^{-1}\ell_{\varphi^{n}(j)}\in H$, we obtain
that
$x(\ell_{\varphi^{n-1}(j)}\ell_{\varphi^{n-1}(j)}^{-1}g^{-1}\ell_{\varphi^{n}(j)}s)=y_{j}(h_{n-1,j}^{-1}\ell_{\varphi^{n-1}(j)}^{-1}g^{-1}\ell_{\varphi^{n}(j)}s)$.
From here we obtain that
$gx(\ell_{\varphi^{n}(j)}s)=y_{j}(h_{n-1,j}^{-1}\ell_{\varphi^{n-1}(j)}^{-1}g^{-1}\ell_{\varphi^{n}(j)}s)=y_{j}(h_{n,j}^{-1}s)=x(\ell_{\varphi^{n}(j)}s)$
This shows that $gx=x$ and thus we conclude that
$g\notin\operatorname{Free}(X)$. ∎
The previous proposition tells us that we may determine whether a group admits
a nonempty SA SFT by just looking at its finitely generated subgroups.
###### Theorem 4.
A group $G$ admits a nonempty strongly aperiodic subshift of finite type if
and only if there exists a finitely generated subgroup $H\leqslant G$ and a
nonempty SFT $Y\subset A^{H}$ such that for every $g\in G\setminus\\{1\\}$ we
have
$\operatorname{Cl}(g)\cap R_{G}(\operatorname{Free}(Y))\neq\varnothing.$
###### Proof.
If $X\subset A^{G}$ is a nonempty SA SFT given by $F\subset G$ finite and
$L\subset A^{F}$, then $H=\langle F\rangle$ is a finitely generated subgroup
and by Proposition 3 for every $g\in\operatorname{Free}(X)=G\setminus\\{1\\}$
we have $\operatorname{Cl}(g)\cap
R_{G}(\operatorname{Free}(Y))\neq\varnothing$, where $Y\subset A^{H}$ is the
nonempty SFT given by $L$ on $H$. Conversely, if $H\leq G$ is a finitely
generated subgroup, $Y\subset A^{H}$ a nonempty SFT given by $F\subset H$
finite and $L\subset A^{F}$, and such that for every $g\in G\setminus\\{1\\}$
we have $\operatorname{Cl}(g)\cap
R_{G}(\operatorname{Free}(Y))\neq\varnothing$, then again by Proposition 3 we
have that $G\setminus\\{1\\}\subset\operatorname{Free}(X)$ for the nonempty
SFT $X\subset A^{G}$ induced by $L$ on $G$. Thus $X$ is SA. ∎
###### Corollary 5.
Let $G$ be a group and $H\leqslant G$ a finitely generated subgroup which
admits a nonempty strongly aperiodic SFT and such that for every $g\in
G\setminus\\{1\\}$ we have $\operatorname{Cl}(g)\cap
R_{G}(H\setminus\\{1\\})\neq\varnothing$. Then $G$ admits a nonempty strongly
aperiodic SFT.
Notice that for any nonempty subshift $Y\subset A^{H}$ we have
$\operatorname{Free}(Y)\subset H\setminus\\{1\\}$ and thus the condition
$\operatorname{Cl}(g)\cap R_{G}(\operatorname{Free}(Y))\neq\varnothing$ always
implies that $\operatorname{Cl}(g)\cap
R_{G}(H\setminus\\{1\\})\neq\varnothing$. This trivial remark suggests the
question of whether there are any examples where $X$ is SA but $Y$ is not. The
following example, due to Salo (it also appears, with a slightly different construction, in an article by Jeandel [13]), shows that this case can indeed occur.
###### Example 6.
A result of Osin [18] shows that every countable torsion-free group can be
embedded in a $2$-generated group with exactly two conjugacy classes. In
particular there is a $2$-generated group $G$ with two conjugacy classes and
such that there is $t\in G$ with $\langle t\rangle\simeq\mathbb{Z}$. Let
$A=\\{\mathtt{a},\mathtt{b}\\}$, $F=\\{1,t\\}$ and
$L=\\{(1\mapsto\mathtt{a},t\mapsto\mathtt{b}),(1\mapsto\mathtt{b},t\mapsto\mathtt{a})\\}$,
that is, we let $X\subset\\{\mathtt{a},\mathtt{b}\\}^{G}$ be the SFT which
consists of all maps for which $x(gt)\neq x(g)$ for every $g\in G$.
Let $Y\subset\\{\mathtt{a},\mathtt{b}\\}^{\langle t\rangle}$ be the SFT
defined by $F$ and $L$ on $\langle F\rangle\simeq\mathbb{Z}$. On the one hand,
we have that $Y$ is nonempty and consists of the two periodic configurations
which alternate symbols, thus
$\operatorname{Free}(Y)=\\{t^{2n+1}:n\in\mathbb{Z}\\}$ consists of the odd
powers of $t$, in particular $Y$ is not SA. On the other hand, as there is
only one non-trivial conjugacy class, we have that for every $g\in
G\setminus\\{1\\}$, $t\in\operatorname{Cl}(g)\cap
R_{G}(\operatorname{Free}(Y))$ which is nonempty, and thus $X$ is SA by
Theorem 4. $\ocircle$
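The computation of $\operatorname{Free}(Y)$ in Example 6 can be verified on a finite window. The sketch below (illustrative only, not from the paper) checks that a shift by $k$ fixes the alternating $2$-coloring of $\langle t\rangle\simeq\mathbb{Z}$ exactly when $k$ is even, so the odd powers of $t$ make up $\operatorname{Free}(Y)$:

```python
# Example 6 restricted to <t> ~ Z: Y consists of the two alternating
# 2-colorings, and a shift by k fixes them iff k is even.
def y(i):
    # one of the two alternating configurations on Z
    return i % 2

def shift_fixes(k, window=range(-8, 8)):
    # (t^k y)(i) = y(i - k); compare with y on a finite window
    return all(y(i - k) == y(i) for i in window)

fixed = {k for k in range(-5, 6) if shift_fixes(k)}
print(sorted(fixed))            # exactly the even shifts in this range
assert fixed == {k for k in range(-5, 6) if k % 2 == 0}
```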
Acknowledgments: The author is grateful to Ville Salo for finding a subtle
error in the first version of this paper and suggesting the Osin group
example. The author also wishes to thank the referees for suggesting various
improvements. The author was supported by the FONDECYT grant 11200037.
S. Barbieri, Departamento de Matemática y ciencia de la computación,
Universidad de Santiago de Chile, Santiago, Chile.
E-mail address: <EMAIL_ADDRESS>
## References
* [1] N. Aubrun and J. Kari. Tiling problems on Baumslag-Solitar groups. Electronic Proceedings in Theoretical Computer Science, 128:35–46, 2013.
* [2] N. Aubrun and J. Kari. On the domino problem of the Baumslag-Solitar groups. Theoretical Computer Science, 894:12–22, 2021.
* [3] A. Ballier and M. Stein. The domino problem on groups of polynomial growth. Groups, Geometry, and Dynamics, 12(1):93–105, 2018.
* [4] S. Barbieri. A geometric simulation theorem on direct products of finitely generated groups. Discrete Analysis, pages 1–25, Paper No. 9, 2019.
* [5] S. Barbieri and M. Sablik. A generalization of the simulation theorem for semidirect products. Ergodic Theory and Dynamical Systems, 39(12):3185–3206, 2019.
* [6] S. Barbieri, M. Sablik, and V. Salo. Groups with self-simulable zero-dimensional dynamics. arXiv:2104.05141, 2021.
* [7] R. Berger. The Undecidability of the Domino Problem. American Mathematical Society, 1966.
* [8] D. Carroll and A. Penland. Periodic points on shifts of finite type and commensurability invariants of groups. New York Journal of Mathematics, 21:811–822, 2015.
* [9] D. B. Cohen. The large scale geometry of strongly aperiodic subshifts of finite type. Advances in Mathematics, 308:599–626, 2017.
* [10] D. B. Cohen and C. Goodman-Strauss. Strongly aperiodic subshifts on surface groups. Groups, Geometry, and Dynamics, 11(3):1041–1059, 2017.
* [11] D. B. Cohen, C. Goodman-Strauss, and Y. Rieck. Strongly aperiodic subshifts of finite type on hyperbolic groups. Ergodic Theory and Dynamical Systems, pages 1–44, 2021.
* [12] S. J. Esnay and E. Moutot. Aperiodic SFTs on Baumslag-Solitar groups. Theoretical Computer Science, 2022.
* [13] E. Jeandel. Aperiodic subshifts of finite type on groups. arXiv:1501.06831, 2015.
* [14] E. Jeandel. Aperiodic subshifts on polycyclic groups. arXiv:1510.02360, 2015.
* [15] E. Jeandel and M. Rao. An aperiodic set of 11 Wang tiles. Advances in Combinatorics, pages Paper No. 1, 37, 2021.
* [16] J. Kari. A small aperiodic set of Wang tiles. Discrete Mathematics, 160:259–264, 1996.
* [17] D. A. Lind and B. Marcus. An Introduction to Symbolic Dynamics and Coding. Cambridge University Press, 2 edition, 2021.
* [18] D. Osin. Small cancellations over relatively hyperbolic groups and embedding theorems. Annals of Mathematics. Second Series, 172(1):1–39, 2010.
* [19] R. M. Robinson. Undecidability and nonperiodicity for tilings of the plane. Inventiones Mathematicae, 12:177–209, 1971.
* [20] H. Wang. Proving theorems by pattern recognition, II. In Computation, Logic, Philosophy, pages 159–192. Springer Netherlands, 1961.
# Dirichlet spaces over chord-arc domains
Huaying Wei Department of Mathematics and Statistics, Jiangsu Normal
University Xuzhou 221116, PR China<EMAIL_ADDRESS>and Michel Zinsmeister
Institut Denis Poisson, Université d’ Orléans, Orléans, 45067, France
<EMAIL_ADDRESS>
###### Abstract.
If $U$ is a $C^{\infty}$ function with compact support in the plane, we let
$u$ be its restriction to the unit circle $\mathbb{S}$, and denote by
$U_{i},\,U_{e}$ the harmonic extensions of $u$ respectively in the interior
and the exterior of $\mathbb{S}$ on the Riemann sphere. About a hundred years
ago, Douglas [9] showed that
$\displaystyle\iint_{\mathbb{D}}|\nabla U_{i}|^{2}(z)dxdy$
$\displaystyle=\iint_{\bar{\mathbb{C}}\backslash\bar{\mathbb{D}}}|\nabla
U_{e}|^{2}(z)dxdy$
$\displaystyle=\frac{1}{2\pi}\iint_{\mathbb{S}\times\mathbb{S}}\left|\frac{u(z_{1})-u(z_{2})}{z_{1}-z_{2}}\right|^{2}|dz_{1}||dz_{2}|,$
thus giving three ways to express the Dirichlet norm of $u$. On a rectifiable
Jordan curve $\Gamma$ we have obvious analogues of these three expressions,
which will of course not be equal in general. The main goal of this paper is
to show that these $3$ (semi-)norms are equivalent if and only if $\Gamma$ is
a chord-arc curve.
###### Key words and phrases:
Dirichlet space, chord-arc curve, quasicircle, Ahlfors-regular curve, Douglas
formula
###### 2020 Mathematics Subject Classification:
31A05, 31C25, 30C62
Research supported by the National Natural Science Foundation of China (Grant
No. 12271218).
## 1\. Introduction
If $\Omega$ is a domain of the Riemann sphere, the Dirichlet space
$\mathcal{D}(\Omega)$ is the space of harmonic functions
$U:\Omega\mapsto\mathbb{C}$ such that
$\iint_{\Omega}|\nabla U|^{2}(z)dxdy<\infty.$
The Dirichlet space over the unit disk $\mathbb{D}$ is, together with the
Hardy and Bergman spaces, one of three most classical Hilbert spaces in the
unit disk. The key references are survey articles [5, 24] and the book [12].
It plays an important role in areas as distinct as Minimal Surfaces, Operator Theory and Teichmüller Theory (e.g. [22]). Concerning the latter, there has recently been a resurgence of interest with the explosion of results about Weil-Petersson curves, that is, the connected component of the identity in the universal Teichmüller space viewed as a complex Hilbert manifold; see [7, 27, 28, 30].
In the case of the unit disk, we will see in the next section that functions
$U_{i}$ in $\mathcal{D}(\mathbb{D})$ have well-defined limits $u$ on the unit
circle $\mathbb{S}$ that moreover characterize $U_{i}$. We may thus consider
the Dirichlet space $\mathcal{D}(\mathbb{D})$ as a quotient space of a space
of functions defined on $\mathbb{S}$ modulo constants. Moreover, using the
reflection $z\mapsto 1/\bar{z}$ we have that this latter space coincides with
the space of boundary values of functions $U_{e}$ in
$\mathcal{D}(\mathbb{D}_{e})$ , where $\mathbb{D}_{e}$ is the unbounded
component of $\bar{\mathbb{C}}\backslash\mathbb{S}$. A deep theorem of Douglas
[9] asserts that this space of functions coincides with the space
$H^{1/2}(\mathbb{S})$. Its norm may thus be expressed in three equal ways as
follows:
$\displaystyle\|u\|_{i}^{2}:=\frac{1}{2\pi}\iint_{\mathbb{D}}|\nabla
U_{i}|^{2}(z)dxdy$ $\displaystyle=$
$\displaystyle\|u\|_{e}^{2}:=\frac{1}{2\pi}\iint_{\mathbb{D}_{e}}|\nabla
U_{e}|^{2}(z)dxdy$ $\displaystyle=$
$\displaystyle\|u\|_{H^{1/2}(\mathbb{S})}^{2}:=\iint_{\mathbb{S}\times\mathbb{S}}\Big{|}\frac{u(z_{1})-u(z_{2})}{z_{1}-z_{2}}\Big{|}^{2}\frac{|dz_{1}|}{2\pi}\frac{|dz_{2}|}{2\pi},$
where $\|u\|_{i}^{2}$ is called the interior energy of $u$, while
$\|u\|_{e}^{2}$ is the exterior energy. $\|\cdot\|_{H^{1/2}(\mathbb{S})}$ is
called the Douglas norm, and the equality of the first and third integrals is
the Douglas formula, as introduced by Douglas in his solution of the Plateau
problem ([9],[3]). The formula inspired important developments in the theory
of Dirichlet forms; see [13].
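As a quick sanity check (not in the original text), the Douglas formula can be tested numerically for $u(e^{i\theta})=\cos\theta$: the harmonic extension is $U_{i}(z)=\operatorname{Re}z$, so the interior energy is $\frac{1}{2\pi}\cdot\pi=\frac{1}{2}$, and the double integral should agree. The sketch below assumes NumPy and uses offset grids so that $\theta_{1}$ never equals $\theta_{2}$, avoiding the (removable) diagonal singularity:

```python
import numpy as np

# Douglas norm of u(e^{iθ}) = cos θ:
#   (1/4π²) ∬ |u(z1) - u(z2)|² / |z1 - z2|² |dz1| |dz2|,
# expected to equal the interior energy 1/2 of U_i(z) = Re z.
N = 400
t1 = 2 * np.pi * (np.arange(N) + 0.3) / N   # offset grids: θ1 never
t2 = 2 * np.pi * (np.arange(N) + 0.7) / N   # coincides with θ2
T1, T2 = np.meshgrid(t1, t2, indexing="ij")

z1, z2 = np.exp(1j * T1), np.exp(1j * T2)
integrand = np.abs(np.cos(T1) - np.cos(T2)) ** 2 / np.abs(z1 - z2) ** 2
douglas = integrand.sum() * (2 * np.pi / N) ** 2 / (4 * np.pi ** 2)

print(douglas)                 # ≈ 0.5, the interior energy of Re z
assert abs(douglas - 0.5) < 1e-6
```

(For this particular $u$ the integrand simplifies exactly to $\sin^{2}\!\big(\tfrac{\theta_{1}+\theta_{2}}{2}\big)$, which is why the equispaced sum is so accurate.)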
Inspired by Problem 38 of [5], which asks for the development of a theory of
Dirichlet spaces in planar domains, our aim in this paper is to start
investigating Dirichlet spaces over planar domains and more specifically to
find the right class of Jordan domains for which these three norms make sense
and are equivalent (we cannot of course expect the equality in general). Since
the analogue of the Douglas norm makes sense only for rectifiable curves we
will, from now on, restrict our study to rectifiable Jordan curves. For such
a curve $\Gamma$ we thus define the space (modulo constants) $H^{1/2}(\Gamma)$
of measurable functions $u:\Gamma\to\mathbb{C}$ such that
$\|u\|_{H^{1/2}(\Gamma)}^{2}:=\frac{1}{4\pi^{2}}\iint_{\Gamma\times\Gamma}\Big{|}\frac{u(z_{1})-u(z_{2})}{z_{1}-z_{2}}\Big{|}^{2}|dz_{1}||dz_{2}|<\infty.$
Concerning the other two norms $\|\cdot\|_{i},\,\|\cdot\|_{e}$, we will show that in the good case the spaces of boundary functions of $\mathcal{D}(\Omega)$ and $\mathcal{D}(\Omega_{e})$ coincide, where $\Omega$ and $\Omega_{e}$ denote the bounded and unbounded components of the complement of $\Gamma$, respectively. To explain this claim at this stage, we consider instead the space $E(\Gamma)$ of restrictions
to $\Gamma$ of $C^{\infty}$ functions with compact support in the whole plane,
and will see that the norms $\|\cdot\|_{i},\,\|\cdot\|_{e}$ are equivalent on
$E(\Gamma)$ for a rectifiable Jordan curve $\Gamma$ if and only if it is a
quasicircle. We may define, for such a curve, $\mathcal{H}(\Gamma)$ as the
completion of $E(\Gamma)$ with respect to one of these two equivalent norms
(but this is not the order in which we will derive things).
The main result of this paper will be that
$H^{1/2}(\Gamma)=\mathcal{H}(\Gamma)$ if and only if $\Gamma$ is a chord-arc
curve, that is, a rectifiable quasicircle with the Ahlfors-regularity property.
The paper is structured as follows: In Section 2 we give the precise
definition of the Dirichlet space over any Jordan domain. In Section 3 we
study the case of quasicircles and define precisely the “two-sided” space
$\mathcal{H}(\Gamma)$. Section 4 is devoted to proving that
$\mathcal{H}(\Gamma)=H^{1/2}(\Gamma)$ when $\Gamma$ is a chord-arc curve. In
Section 5 we consider the sharpness of the result in Section 4: we prove more
precisely that if $\Gamma$ is a rectifiable quasicircle such that
$\mathcal{H}(\Gamma)\subset H^{1/2}(\Gamma)$ then $\Gamma$ must be chord-arc
and the two spaces are equal. On the other hand we construct a rectifiable
quasicircle $\Gamma$ that is not chord-arc but such that
$H^{1/2}(\Gamma)\subset\mathcal{H}(\Gamma)$.
## 2\. Dirichlet spaces over a Jordan domain
Recall that if $\Omega$ is a bounded Jordan domain of the complex plane
$\mathbb{C}$, the Dirichlet space $\mathcal{D}(\Omega)$ is the space of
harmonic functions $F$ on $\Omega$ with finite Dirichlet energy $D(F)$, where
the energy of any $C^{1}$ map on $\Omega$ is defined as the
$L^{2}(\Omega)$-norm of the gradient vector $\nabla F(w)=(F_{u},F_{v})$.
Precisely,
$D(F):=\frac{1}{2\pi}\iint_{\Omega}|\nabla F|^{2}(w)dudv<+\infty.$ (1)
Complex notation is much more convenient. Let us first write
$F_{\bar{w}}=(F_{u}+iF_{v})/2$ and $F_{w}=(F_{u}-iF_{v})/2$. This gives
$D(F)=1/\pi\iint_{\Omega}(|F_{w}|^{2}+|F_{\bar{w}}|^{2})dudv$. The space
$\mathcal{D}(\Omega)$ is a Hilbert space modulo constant functions. Let
$\varphi$ map another bounded Jordan domain $\Omega^{\prime}$ conformally onto
$\Omega$. Using
$|G_{z}|^{2}+|G_{\bar{z}}|^{2}=(|F_{w}|^{2}+|F_{\bar{w}}|^{2})\circ\varphi(z)|\varphi^{\prime}(z)|^{2},$
we see that $F\mapsto G:=F\circ\varphi$ is a bijective isometry between
$\mathcal{D}(\Omega)$ and $\mathcal{D}(\Omega^{\prime})$. Similarly, an anti-
conformal mapping $\varphi(\bar{z})$ also induces the invariance of Dirichlet
energies.
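The conformal invariance just stated can be illustrated numerically (a sketch, not from the paper): take $F(w)=\operatorname{Re}w$ on $\mathbb{D}$, whose energy is $\frac{1}{2\pi}\cdot\pi=\frac{1}{2}$, and precompose with a Möbius automorphism $\varphi$ of the disk; since $|\nabla(F\circ\varphi)|^{2}=|\varphi'|^{2}$, the energy of $G=F\circ\varphi$ should again be $\frac{1}{2}$. The parameter $a$ below is an arbitrary illustrative choice:

```python
import numpy as np

# Conformal invariance of the Dirichlet energy, checked numerically.
# φ(z) = (z - a)/(1 - conj(a) z) is a Möbius automorphism of D, and
# φ'(z) = (1 - |a|²)/(1 - conj(a) z)².  For F(w) = Re w we have
# |∇(F∘φ)|² = |φ'|², so D(F∘φ) = (1/2π) ∬_D |φ'|² dx dy.
a = 0.4 + 0.2j

def phi_prime(z):
    return (1 - abs(a) ** 2) / (1 - np.conj(a) * z) ** 2

Nr, Nt = 600, 600
r = (np.arange(Nr) + 0.5) / Nr                 # midpoint radial grid
t = 2 * np.pi * (np.arange(Nt) + 0.5) / Nt     # midpoint angular grid
R, T = np.meshgrid(r, t, indexing="ij")
z = R * np.exp(1j * T)

# dx dy = r dr dθ on the polar grid
energy = (np.abs(phi_prime(z)) ** 2 * R).sum() * (1 / Nr) * (2 * np.pi / Nt) / (2 * np.pi)
print(energy)                # ≈ 0.5 = D(Re w) on the unit disk
assert abs(energy - 0.5) < 1e-3
```

The value agrees because $\iint_{\mathbb{D}}|\varphi'|^{2}\,dxdy$ is the area of $\varphi(\mathbb{D})=\mathbb{D}$, namely $\pi$.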
In the case of the classical Dirichlet space $\mathcal{D}(\mathbb{D})$, one may
make the theory precise by the use of Fourier series. For a real-valued
function $u\in\mathcal{D}(\mathbb{D})$, let $v$ be the unique harmonic
conjugate function of $u$ in $\mathbb{D}$ with the requirement $v(0)=0$ so
that $\Phi=u+iv$ is holomorphic. Writing $\Phi(z)=\sum_{n=0}^{\infty}a_{n}z^{n}$, an easy calculation involving Parseval's formula leads to
$D(\Phi)=\sum_{n=1}^{\infty}n|a_{n}|^{2}.$
By the Cauchy-Riemann equations, $D(u)=D(v)$, and thus
$D(u)=\frac{1}{2}\sum_{n=1}^{\infty}n|a_{n}|^{2}$. In particular,
$\sum_{n=1}^{\infty}|a_{n}|^{2}<\infty$, which means that $\Phi\in
H^{2}(\mathbb{D})$, the Hardy space of analytic functions on $\mathbb{D}$, and
that the function $u$ belongs to the Hardy space of harmonic functions
$h^{2}(\mathbb{D})$. As a consequence, $u$ has angular limits almost
everywhere on the unit circle $\mathbb{S}$.
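The identity $D(\Phi)=\sum_{n\geq 1}n|a_{n}|^{2}$ can be checked numerically for a concrete polynomial, say $\Phi(z)=z+z^{2}/2$, where the predicted energy is $1+2\cdot\frac{1}{4}=\frac{3}{2}$. The sketch below (assuming NumPy; not part of the original text) integrates $\frac{1}{\pi}|\Phi'|^{2}$ over the disk on a polar grid:

```python
import numpy as np

# For holomorphic Φ, |∇u|² = |∇v|² = |Φ'|², hence
#   D(Φ) = (1/2π) ∬_D (|∇u|² + |∇v|²) dx dy = (1/π) ∬_D |Φ'(z)|² dx dy.
# With Φ(z) = z + z²/2 we have a_1 = 1, a_2 = 1/2, so the Fourier-side
# formula predicts D(Φ) = 1·1² + 2·(1/2)² = 3/2.
Nr, Nt = 800, 256
r = (np.arange(Nr) + 0.5) / Nr
t = 2 * np.pi * (np.arange(Nt) + 0.5) / Nt
R, T = np.meshgrid(r, t, indexing="ij")
z = R * np.exp(1j * T)

phi_prime = 1 + z                   # Φ'(z) for Φ(z) = z + z²/2
energy = (np.abs(phi_prime) ** 2 * R).sum() * (1 / Nr) * (2 * np.pi / Nt) / np.pi

print(energy)                       # ≈ 1.5
assert abs(energy - 1.5) < 1e-3
```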
Suppose now that $\Omega$ is a domain bounded by a rectifiable Jordan curve
$\Gamma$. Let $\varphi$ map $\mathbb{D}$ conformally onto $\Omega$. Then
$\varphi$ extends to a homeomorphism of the closures
$\mathbb{D}\cup\mathbb{S}$ and $\Omega\cup\Gamma$. Using the F. and M. Riesz theorem for the rectifiable curve $\Gamma$ (see [18]), we have that a
subset of $\mathbb{S}$ has measure zero if and only if its image under
$\varphi$ has length zero, and it also makes sense to speak of a tangential
direction almost everywhere on $\Gamma$. Furthermore, the mapping $\varphi$
preserves angles at almost every boundary point on $\Gamma$, see [10, 23] for
details. Consequently, for any $F\in\mathcal{D}(\Omega)$, since
$G:=F\circ\varphi\in\mathcal{D}(\mathbb{D})$ one may say that $F$ has angular
limits $f(w)$ almost everywhere on $\Gamma$ such that $f\circ\varphi\in
L^{2}(\mathbb{S})$. So $F$ can be recovered from its boundary function by the
“Poisson integral” of $f$, in the sense that
$F=P(f\circ\varphi)\circ\varphi^{-1}$ (2)
where $P$ stands for the classical Poisson integral in the unit disk
$\mathbb{D}$. If $f_{1}$ and $f_{2}$ are boundary functions of $F_{1}$ and
$F_{2}$ in $\mathcal{D}(\Omega)$, respectively, and $f_{1}=f_{2}$ almost
everywhere on $\Gamma$, we then have $F_{1}=F_{2}$ by (2). Hence, we may say
that $f_{1}=f_{2}$ if they are equal except possibly on a subset with length
zero. The one-to-one correspondence $F\leftrightarrow f$ allows us to view the
Dirichlet space on $\Omega$ as a space of functions defined on $\Gamma$. We
denote it by $\mathcal{H}(\Gamma,\Omega)$.
Since the function $F$ in (2) is the solution to Laplace’s equation $\Delta F=0$ in the domain $\Omega$ with boundary condition $F|_{\Gamma}=f$,
Dirichlet’s principle says that $F$ can be obtained as the minimizer of
Dirichlet energies $D(V)$ amongst all $C^{1}$ extensions $V$ to $\Omega$ of
$f$. We call $\|f\|_{i}^{2}:=D(F)$ the interior energy of $f$. Clearly,
$\mathcal{H}(\Gamma,\Omega)$ is the function space consisting of all $f$ with
a finite semi-norm $\|f\|_{i}$.
Let us make one remark on Dirichlet’s principle: The requirement for $C^{1}$
extensions can be relaxed. Precisely, let $\dot{W}^{1,2}(\Omega)$ be the
homogeneous Sobolev space of locally integrable functions on any domain
$\Omega$ of the Riemann sphere with $L^{2}$-integrable gradient taken in the
sense of distributions, equipped with the natural $L^{2}(\Omega)$-norm of the
gradient vector $\nabla F$ as in (1). It has an important subspace
$\dot{W}^{1,2}_{0}(\Omega)$ defined as the closure in $\dot{W}^{1,2}(\Omega)$
of $C_{0}^{\infty}(\Omega)$, the space of $C^{\infty}$ functions with compact
support included in $\Omega$. By the Meyers-Serrin theorem [21], the space
$C^{\infty}(\Omega)\cap\dot{W}^{1,2}(\Omega)$ is dense in
$\dot{W}^{1,2}(\Omega)$. With this setting, a simple approximation argument
shows that
$D(F)=\min\\{D(V):V\;\text{ranges over all functions
in}\;\dot{W}^{1,2}(\Omega)\;\text{with}\;V-F\in\dot{W}_{0}^{1,2}(\Omega)\\}.$
(3)
By the Jordan curve theorem the complement of the boundary Jordan curve
$\Gamma$ of $\Omega$ has two components: one is $\Omega=\Omega_{i}$, the so-
called interior of $\Gamma$, and the second is $\Omega_{e}$, the exterior of
$\Gamma$. In order to define a Dirichlet space over $\Omega_{e}$, we may first
assume without loss of generality that $0\in\Omega$, and we consider the
reflection $\iota(z)=\frac{1}{\bar{z}}.$ It maps $\Omega_{e}$ onto a Jordan
domain $\tilde{\Omega}=\tilde{\Omega}_{i}$ bounded by a bounded Jordan curve
$\tilde{\Gamma}$. The image of $\Omega$ is $\tilde{\Omega}_{e}$, the exterior
of $\tilde{\Gamma}$. One may see that the point $0=\iota(\infty)$ is an
interior point of $\tilde{\Omega}$. Define $\mathcal{D}(\Omega_{e})$ as the
set $\\{F:=\tilde{F}\circ\iota,\,\tilde{F}\in\mathcal{D}(\tilde{\Omega})\\}$.
Then the Dirichlet energy of $F$ over $\Omega_{e}$ can be written as
$D(F)=\frac{1}{2\pi}\iint_{\Omega_{e}\backslash{\\{\infty\\}}}|\nabla
F|^{2}(w)dudv.$
That is, the point $\infty$ can be removed from the domain of integration
without changing the convergence properties or the value of the integral. This is
justified as follows: if $F\in\mathcal{D}(\Omega_{e})$ then for any
$\epsilon>0$ there is an $R>0$ such that
$\iint_{|w|>R}|\nabla F|^{2}(w)dudv=\iint_{|z|\leq
1/R}|\nabla\tilde{F}|^{2}(z)dxdy<\epsilon.$
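Since $\iota$ is anti-conformal, composition with it preserves Dirichlet energy, which is what makes the definition of the exterior energy consistent. The following numerical sketch checks this invariance for a test function of our own choosing, $F(z)=\operatorname{Re}z$ on the annulus $1<|z|<2$, which $\iota$ reflects to $1/2<|z|<1$ (both choices are illustrative, not from the text).

```python
import math

def dirichlet_energy_polar(grad_sq, r0, r1, nr=400, nt=400):
    """Midpoint-rule approximation of (1/(2*pi)) * double integral of
    |grad F|^2 over the annulus r0 < |z| < r1."""
    total = 0.0
    dr = (r1 - r0) / nr
    dt = 2 * math.pi / nt
    for i in range(nr):
        r = r0 + (i + 0.5) * dr
        for j in range(nt):
            t = (j + 0.5) * dt
            x, y = r * math.cos(t), r * math.sin(t)
            total += grad_sq(x, y) * r * dr * dt
    return total / (2 * math.pi)

# F(z) = Re z on 1 < |z| < 2: |grad F|^2 = 1.
E1 = dirichlet_energy_polar(lambda x, y: 1.0, 1.0, 2.0)

# (F o iota)(z) = Re(1/conj(z)) = x/(x^2 + y^2) on 1/2 < |z| < 1;
# a direct computation gives |grad (F o iota)|^2 = 1/(x^2 + y^2)^2.
E2 = dirichlet_energy_polar(lambda x, y: 1.0 / (x * x + y * y) ** 2, 0.5, 1.0)

print(E1, E2)  # both approximate 3/2
assert abs(E1 - 1.5) < 1e-6 and abs(E2 - 1.5) < 1e-2
```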
Let $\tilde{\varphi}$ map $\mathbb{D}=\iota(\mathbb{D}_{e})$ conformally onto
$\tilde{\Omega}=\iota(\Omega_{e})$. The one-to-one correspondence
$\tilde{F}=P(\tilde{f}\circ\tilde{\varphi})\circ\tilde{\varphi}^{-1}\leftrightarrow\tilde{f}$
between the elements of $\mathcal{D}(\tilde{\Omega})$ and
$\mathcal{H}(\tilde{\Gamma},\tilde{\Omega})$ leads to the one-to-one
correspondence $F=\tilde{F}\circ\iota\leftrightarrow f=\tilde{f}\circ\iota$
between the elements of $\mathcal{D}(\Omega_{e})$ and
$\mathcal{H}(\Gamma,\Omega_{e})$. Here, $\mathcal{H}(\Gamma,\Omega_{e})$ is a
function space consisting of all boundary functions $f$ of elements $F$ of
$\mathcal{D}(\Omega_{e})$ assigned a semi-norm $\|f\|_{e}$, where
$\|f\|_{e}^{2}:=D(F)$. We call $\|f\|_{e}^{2}$ the exterior energy of $f$.
For the simple case $\Omega=\mathbb{D}$, we notice that
$\tilde{\Omega}=\mathbb{D}$ since $\iota(z)=z$ on $\mathbb{S}$. Then, the
identity operator
$\mathcal{H}(\mathbb{S},\mathbb{D})\to\mathcal{H}(\mathbb{S},\mathbb{D}_{e})$
is an isometric isomorphism with respect to $\|\cdot\|_{i}$ and
$\|\cdot\|_{e}$. In the next section, we investigate the Jordan domains to
which this property may be generalized.
## 3\. Dirichlet spaces over quasidisks
In this section we look for a necessary and sufficient condition on the
rectifiable Jordan curve $\Gamma$ so that the interior and exterior energies
on $\Gamma$ are equivalent. Precisely, we will show the following
###### Theorem 1.
With the above notation, the identity operator
$\mathcal{H}(\Gamma,\Omega)\to\mathcal{H}(\Gamma,\Omega_{e})$ is a bounded
isomorphism with respect to $\|\cdot\|_{i}$ and $\|\cdot\|_{e}$ if and only if
$\Gamma$ is a quasicircle.
Before starting the proof of Theorem 1, we recall some preliminary facts about
quasicircles and quasisymmetric mappings; see [2] for additional background. A
quasicircle is the image of a circle under a quasiconformal mapping of the
complex plane $\mathbb{C}$, and the inner domain of a quasicircle is called a
quasidisk. Here, by a quasiconformal mapping $f$ of $\mathbb{C}$ we mean a
homeomorphism $f$ whose gradient in the sense of distributions belongs to
$L^{2}_{loc}(\mathbb{C})$ and satisfies
$f_{\bar{z}}=\mu(z)f_{z}$
for a function $\mu\in L^{\infty}(\mathbb{C})$ with $\|\mu\|_{\infty}\leq k$
for some constant $k<1$. Here, $f$ may also be called $k$-quasiconformal to
specify the constant $k$.
A sense-preserving homeomorphism $h$ of $\mathbb{S}$ is called a conformal
welding of the Jordan curve $\Gamma$ if $h=\psi^{-1}\circ\varphi$ where
$\varphi$ and $\psi$ are conformal maps from $\mathbb{D}$ onto $\Omega$ and
from $\mathbb{D}_{e}$ onto $\Omega_{e}$, respectively. So there are many
weldings of $\Gamma$ but they differ from each other by compositions with
Möbius transformations of $\mathbb{S}$. In particular, the conformal welding
of a quasicircle is exactly a quasisymmetric homeomorphism of $\mathbb{S}$,
and the conformal maps $\varphi$ and $\psi$ for a quasicircle can be extended
to $\mathbb{C}$ quasiconformally. Saying a homeomorphism $h$ of $\mathbb{S}$
is quasisymmetric means that there exists a constant $C_{1}>0$ such that
$C_{1}^{-1}\leq\frac{|h(e^{i(\theta+\alpha)})-h(e^{i\theta})|}{|h(e^{i\theta})-h(e^{i(\theta-\alpha)})|}\leq
C_{1}$
for all $\theta\in\mathbb{R}$ and $-\pi/2<\alpha\leq\pi/2$. Here, the optimal
constant $C_{1}$ is called the quasisymmetry constant of $h$. A sense-
preserving homeomorphism $h$ of $\mathbb{S}$ is quasisymmetric if and only if
$h$ preserves the modules of quadrilaterals quasi-invariantly, namely, there
exists a constant $C_{2}>0$ such that for any quadrilateral $Q$ it holds that
$C_{2}^{-1}m(Q)\leq m(h(Q))\leq C_{2}m(Q).$
Moreover, the optimal constant $C_{2}$ and the quasisymmetry constant $C_{1}$
of $h$ only depend on each other. Here, by quadrilateral $Q$ we mean the unit
disk $\mathbb{D}$ together with a pair of disjoint closed arcs on the boundary
$\mathbb{S}$. It is a well-known fact that for a quadrilateral $Q$ with two
disjoint closed arcs $\alpha_{1}$, $\beta_{1}$ on $\mathbb{S}$, its module
$m(Q)$ multiplied by $\frac{1}{2\pi}$ is exactly the minimum of Dirichlet
energies $D(P(f))$ of harmonic functions $P(f)$ on $\mathbb{D}$ ranging over
all boundary values $f$ with $f=0$ on $\alpha_{1}$ and $f=1$ on $\beta_{1}$
(see [6]). Then, by the definition of $\|f\|_{i}^{2}$, we have
$\frac{1}{2\pi}m(Q)=\min\|f\|_{i}^{2}$.
The above concept of quasisymmetry of $\mathbb{S}$ onto $\mathbb{S}$ was
introduced by Beurling and Ahlfors [6], and later formulated for general
metric spaces by Tukia and Väisälä [29]. For our purpose, we only need to
consider quasisymmetric mappings from a curve $\Gamma_{1}$ onto another curve
$\Gamma_{2}$. Let $h:\Gamma_{1}\to\Gamma_{2}$ be a sense-preserving mapping.
Let $\eta:[0,+\infty)\to[0,+\infty)$ be an increasing homeomorphism with
$\lim_{t\to+\infty}\eta(t)=+\infty$. We say that $h$ is $\eta$-quasisymmetric
if for each triple $z_{0}$, $z_{1}$, $z_{2}\in\Gamma_{1}$ we have
$\frac{|h(z_{0})-h(z_{1})|}{|h(z_{0})-h(z_{2})|}\leq\eta\bigg{(}\frac{|z_{0}-z_{1}|}{|z_{0}-z_{2}|}\bigg{)}.$
The more general quasisymmetric mapping on an open subset of $\mathbb{C}$ can
be defined in the same way. It is known that if $h:\mathbb{C}\to\mathbb{C}$ is
a $k$-quasiconformal homeomorphism of $\mathbb{C}$ then $h$ is
$\eta$-quasisymmetric where $\eta$ depends only on $k$. Conversely, if
$h:D\to\mathbb{C}$ is an $\eta$-quasisymmetric mapping on a domain $D$ then it
is quasiconformal (see e.g. Chapter 3 of [4]).
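The $\eta$-quasisymmetry condition above can be made concrete: a bi-Lipschitz map is always $\eta$-quasisymmetric with a linear gauge. The sketch below checks this numerically for a map of our own choosing, $h(x)=x+0.3\sin x$ on the real line (so $0.7\leq h'\leq 1.3$), with the candidate gauge $\eta(t)=(1.3/0.7)\,t$; all names and parameters here are illustrative assumptions, not from the text.

```python
import math, random

def h(x):
    # Hypothetical bi-Lipschitz map of the real line: h'(x) = 1 + 0.3*cos(x),
    # so 0.7 <= |h(a) - h(b)| / |a - b| <= 1.3 for all a != b.
    return x + 0.3 * math.sin(x)

def eta(t):
    # Candidate gauge: the bi-Lipschitz bounds give the ratio bound (1.3/0.7) * t.
    return (1.3 / 0.7) * t

random.seed(0)
for _ in range(10000):
    z0, z1, z2 = (random.uniform(-10.0, 10.0) for _ in range(3))
    if abs(z0 - z2) < 1e-9:
        continue  # skip degenerate triples
    lhs = abs(h(z0) - h(z1)) / abs(h(z0) - h(z2))
    t = abs(z0 - z1) / abs(z0 - z2)
    assert lhs <= eta(t) + 1e-9
print("eta-quasisymmetry verified on 10000 random triples")
```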
Concerning the quasisymmetric homeomorphism of $\mathbb{S}$ onto $\mathbb{S}$,
the following result of Nag-Sullivan [22] is well-known.
###### Proposition 1.
A sense-preserving homeomorphism $h$ of $\mathbb{S}$ is quasisymmetric if and
only if the composition operator $V_{h}:g\mapsto g\circ h$ gives an
isomorphism of $H^{1/2}(\mathbb{S})$, that is, $V_{h}$ and $(V_{h})^{-1}$ (or
$V_{h^{-1}}$) are bounded linear operators. Here, the operator norm
$\|V_{h}\|$ depends only on the quasisymmetry constant of $h$.
In the remainder of the paper we will also give its generalization to
quasisymmetric mappings of $\mathbb{S}$ onto a curve $\Gamma$, in the form of
three corollaries of the main theorems, which can be summarized as follows.
###### Proposition 2.
Let $\Gamma$ be a rectifiable quasicircle. Let $h$ be a sense-preserving
homeomorphism of $\mathbb{S}$ onto $\Gamma$. The composition operator
$V_{h}:g\mapsto g\circ h$ is defined on $H^{1/2}(\Gamma)$. Consider the
following four statements:
* (a)
$\Gamma$ is chord-arc,
* (b)
$h$ is quasisymmetric,
* (c1)
$(V_{h})^{-1}$ is a bounded linear operator from $H^{1/2}(\mathbb{S})$ into
$H^{1/2}(\Gamma)$,
* (c2)
$V_{h}$ is a bounded linear operator from $H^{1/2}(\Gamma)$ into
$H^{1/2}(\mathbb{S})$.
If any two of the three statements $\rm(a)$, $\rm(b)$, and $\rm(c1)$ hold,
then the third one holds true. Moreover, $\rm(a)$ and $\rm(b)$ imply $\rm(c2)$,
and $\rm(a)$ and $\rm(c2)$ imply $\rm(b)$, but $\rm(b)$ and $\rm(c2)$ do not
necessarily imply $\rm(a)$.
Actually, one can view $\rm(a,c1)\Rightarrow\rm(b)$ and
$\rm(a,c2)\Rightarrow\rm(b)$ as consequences of Proposition 1. The complete
version of $\rm(a,b)\Rightarrow\rm(c1,c2)$ is Corollary 1, while
$\rm(b,c1)\Rightarrow\rm(a)$ and $\rm(b,c2)\nRightarrow\rm(a)$ are
Corollary 2 and Corollary 3, respectively.
We are now ready to prove Theorem 1.
###### Proof.
If $\Gamma$ is a quasicircle, then there is a bi-Lipschitz map $\phi$ that
fixes $\Gamma$ pointwise and exchanges the two complementary components
$\Omega$ and $\Omega_{e}$ of $\Gamma$, see [2]. For any
$f\in\mathcal{H}(\Gamma,\Omega)$, recall that
$F=P(f\circ\varphi)\circ\varphi^{-1}$ extends $f$ to $\Omega$, minimizing
Dirichlet energies for the boundary function $f$. Then $F\circ\phi$ extends
$f$ to $\Omega_{e}$ and
$\frac{1}{2\pi}\iint_{\Omega_{e}\backslash\\{\infty\\}}|\nabla(F\circ\phi)|^{2}(w)dudv\approx\frac{1}{2\pi}\iint_{\Omega}|\nabla
F|^{2}(z)dxdy=\|f\|_{i}^{2}$
where the implied constants depend only on the bi-Lipschitz constant of
$\phi$. We conclude by the definition of $\|f\|_{e}^{2}$ and (3) that
$\|f\|_{e}^{2}\leq\frac{1}{2\pi}\iint_{\Omega_{e}\backslash\\{\infty\\}}|\nabla(F\circ\phi)|^{2}(w)dudv$
and thus $\|f\|_{e}\lesssim\|f\|_{i}$. The roles of $\Omega$ and $\Omega_{e}$
can be switched in the above argument, so that we also have
$\|f\|_{i}\lesssim\|f\|_{e}$ for any $f\in\mathcal{H}(\Gamma,\Omega_{e})$.
This gives a proof of sufficiency of Theorem 1.
Conversely, let $\varphi$ map $\mathbb{D}$ conformally onto $\Omega$ with the
normalizations $\varphi(0)=0$ and $\varphi^{\prime}(0)=1$, and
$\tilde{\varphi}$ map $\mathbb{D}=\iota(\mathbb{D}_{e})$ conformally onto
$\tilde{\Omega}=\iota(\Omega_{e})$ with $\tilde{\varphi}(0)=0$,
$\tilde{\varphi}^{\prime}(0)=1$. We denote
$\iota\circ\tilde{\varphi}\circ\iota$ by $\psi$, which maps $\mathbb{D}_{e}$
onto $\Omega_{e}$. Then, these three maps extend to homeomorphisms of the closures.
For any $f\in\mathcal{H}(\Gamma,\Omega)$, we see that $\tilde{f}:=f\circ\iota$
is defined on $\tilde{\Gamma}$. These maps fit into a commutative diagram;
noting that $\tilde{\varphi}=\iota\circ\psi\circ\iota$ and that $\iota$ is the
identity on $\mathbb{S}$, we see
$\displaystyle\tilde{f}\circ\tilde{\varphi}$
$\displaystyle=(f\circ\iota)\circ(\iota\circ\psi\circ\iota)$
$\displaystyle=f\circ\psi\circ\iota=f\circ\varphi\circ(\varphi^{-1}\circ\psi)\circ\iota\;$
$\displaystyle=f\circ\varphi\circ(\varphi^{-1}\circ\psi).$
Set $\psi^{-1}\circ\varphi=h$ so that
$f\circ\varphi=\tilde{f}\circ\tilde{\varphi}\circ h.$ (4)
Denote the quadrilateral with two disjoint closed arcs $\alpha_{1}$,
$\beta_{1}$ on $\mathbb{S}$ by $Q$, and denote the image of $Q$ under $h$ by
$Q^{\prime}$, the quadrilateral with two disjoint closed arcs
$\alpha_{2}:=h(\alpha_{1})$ and $\beta_{2}:=h(\beta_{1})$.
Take $f\in\mathcal{H}(\Gamma,\Omega)$ such that $f\circ\varphi$ is prescribed
only on part of $\mathbb{S}$: $f\circ\varphi=0$ on $\alpha_{1}$ and $f\circ\varphi=1$
on $\beta_{1}$. By (4) we see that $\tilde{f}\circ\tilde{\varphi}=0$ on
$\alpha_{2}$ and $\tilde{f}\circ\tilde{\varphi}=1$ on $\beta_{2}$. Then, we
have $\frac{1}{2\pi}m(Q)=\min\|f\circ\varphi\|_{i}^{2}=\min\|f\|_{i}^{2}$
ranging over all desired boundary functions $f$. Moreover, the minimum value
is attained on the function $f_{0}\circ\varphi$ which is $0,1$ on
$\alpha_{1}$, $\beta_{1}$ and whose normal derivative vanishes on the
complementary arcs (see [6]), that is, $\frac{1}{2\pi}m(Q)=\|f_{0}\|_{i}^{2}$.
Similarly, we have
$\frac{1}{2\pi}m(Q^{\prime})=\min\|\tilde{f}\circ\tilde{\varphi}\|_{i}^{2}=\min
D(P(\tilde{f}\circ\tilde{\varphi})\circ\tilde{\varphi}^{-1}\circ\iota)=\min\|f\|_{e}^{2}\leq\|f_{0}\|_{e}^{2}.$
By the isomorphism of the identity operator
$\mathcal{H}(\Gamma,\Omega)\to\mathcal{H}(\Gamma,\Omega_{e})$, we have that
$\|f_{0}\|_{e}^{2}\approx\|f_{0}\|_{i}^{2}$, and thus $m(Q^{\prime})\lesssim
m(Q)$. The above reasoning clearly implies the other inequality $m(Q)\lesssim
m(Q^{\prime})$ by exchanging the roles of $Q$ and $Q^{\prime}$. Then from
$m(Q^{\prime})\approx m(Q)$ it follows that $h$ is quasisymmetric, and thus
$\Gamma$ is a quasicircle. ∎
Theorem 1 implies that $\Gamma$ is a quasicircle if and only if
$\mathcal{H}(\Gamma,\Omega)=\mathcal{H}(\Gamma,\Omega_{e})$. Now we define a
“two-sided” space $\mathcal{H}(\Gamma)$ to be
$\mathcal{H}(\Gamma)=\mathcal{H}(\Gamma,\Omega)=\mathcal{H}(\Gamma,\Omega_{e})$.
This is regarded as a Banach space consisting of all functions $f$ whose
harmonic extensions $F$ to $\Omega$ have finite Dirichlet energy, where $f$
is assigned the norm $\|f\|_{\mathcal{H}(\Gamma)}:=\sqrt{D(F)}$ after
identifying functions that differ by a complex constant. Moreover,
$\|f\circ\varphi\|_{H^{1/2}(\mathbb{S})}^{2}$ is, by the Douglas formula,
equal to $D(F)$, so we also have
$\|f\|_{\mathcal{H}(\Gamma)}=\|f\circ\varphi\|_{H^{1/2}(\mathbb{S})}.$ (5)
We end this section with three comments about its content:
1. (i)
As we have learned from one of the referees, Theorem 1 has been proved by
Schippers-Staubach [25] (also see [26]) with the use of Proposition 1, even
without the assumption of rectifiability, and the space $\mathcal{H}(\Gamma)$
has already been constructed by them. For the sake of completeness, we give a
new proof of Theorem 1 above taking a more direct approach for the rectifiable
case.
2. (ii)
Let $\varphi$ be a quasiconformal mapping of $\mathbb{C}$ and
$\Omega=\varphi(\mathbb{D})$. Gol’dshtein et al. ([14], see also [15]) have
noticed that if $F\in\mathcal{D}(\Omega)$ then $F$ extended by
$z\mapsto F\circ\varphi\Bigg{(}\frac{1}{\overline{\varphi^{-1}(z)}}\Bigg{)}$
(6)
belongs to the homogeneous Sobolev space $\dot{W}^{1,2}(\mathbb{C})$. If now
$g\in C_{0}^{\infty}(\mathbb{C})$, the set of infinitely smooth functions in
$\mathbb{C}$ with compact support, we clearly have
$\iint_{\Omega}|\nabla g|^{2}(z)dxdy\leq\iint_{\mathbb{C}}|\nabla
g|^{2}(z)dxdy<+\infty,$
and in particular $g|_{\Gamma}\in\mathcal{H}(\Gamma)$ by Dirichlet’s
principle. Let $f\in\mathcal{H}(\Gamma)$ and $F$ its extension to $\mathbb{C}$
of the form (6) as a function in $\dot{W}^{1,2}(\mathbb{C})$. There is a
sequence of functions $F_{n}$ in $C_{0}^{\infty}(\mathbb{C})$ converging to
$F$ in $\dot{W}^{1,2}(\mathbb{C})$ (see e.g. Chapter 11 of [19]); using
Dirichlet’s principle again, we may conclude that the space of restrictions to
$\Gamma$ of $C_{0}^{\infty}(\mathbb{C})$ is dense in $\mathcal{H}(\Gamma)$.
3. (iii)
Suppose $\varphi$ maps a quasidisk $\Omega^{\prime}$ onto another one $\Omega$
conformally. It is easy to see that $f\mapsto f\circ\varphi$ is an isometric
isomorphism from $\mathcal{H}(\Gamma)$ onto $\mathcal{H}(\Gamma^{\prime})$. We
now claim that if $\varphi$ is $k$-quasiconformal then $f\mapsto
f\circ\varphi$ is still a bounded isomorphism from $\mathcal{H}(\Gamma)$ onto
$\mathcal{H}(\Gamma^{\prime})$. This is a consequence of quasiconformality and
the change of variable formula. Specifically, if $F$ is the harmonic extension
to $\Omega$ of $f\in\mathcal{H}(\Gamma)$ as in (2) then
$D(F\circ\varphi)\lesssim D(F)$, where the implied constant only depends on
$k$. By (3) and Theorem 1, we conclude that
$\|f\circ\varphi\|_{\mathcal{H}(\Gamma^{\prime})}^{2}\leq
D(F\circ\varphi)\lesssim D(F)=\|f\|_{\mathcal{H}(\Gamma)}^{2}.$
Combined with the quasiconformality of $\varphi^{-1}$, the above argument
actually implies the double inequality
$\|f\circ\varphi\|_{\mathcal{H}(\Gamma^{\prime})}\approx\|f\|_{\mathcal{H}(\Gamma)}$.
## 4\. Chord-arc curves
In this section we establish the equivalence between the Banach spaces
$\mathcal{H}(\Gamma)$ and $H^{1/2}(\Gamma)$ when the curve $\Gamma$ is chord-
arc. Let us start with a geometric description of chord-arc curves.
###### Definition 1.
A rectifiable Jordan curve $\Gamma$ is said to be a chord-arc curve (or
$K$-chord-arc curve) if there exists a (least) positive constant $K$, called
the chord-arc constant, such that
$\mathrm{length}(\Gamma(z_{1},z_{2}))\leq K|z_{1}-z_{2}|,\quad\text{for
all}\;z_{1},z_{2}\in\Gamma$
where $\Gamma(z_{1},z_{2})$ is the shorter arc of $\Gamma$ between $z_{1}$ and
$z_{2}$.
Chord-arc curves are also called “Lavrentiev curves”. The inner domain of a
chord-arc curve is called a chord-arc domain. A chord-arc curve is the image
of a circle under a bi-Lipschitz homeomorphism of $\mathbb{C}$. A sense-
preserving bi-Lipschitz map of $\mathbb{C}$ onto $\mathbb{C}$ is
quasiconformal but not vice versa. Hence, a chord-arc curve must be a
quasicircle. Indeed, it is exactly a quasicircle with the “regularity
property” that will be explored in detail in the next section. In this
section, we assume without loss of generality that $\text{length}(\Gamma)=2\pi$.
Now we can state the main result of this section.
###### Theorem 2.
The identity operator $\mathcal{H}(\Gamma)\to H^{1/2}(\Gamma)$ is a bounded
isomorphism with respect to $\|\cdot\|_{\mathcal{H}(\Gamma)}$ and
$\|\cdot\|_{H^{1/2}(\Gamma)}$ if the curve $\Gamma$ is chord-arc.
Before proceeding to the proof of Theorem 2, we point out the following basic
observation.
###### Lemma 1.
Let $\Gamma$ be a rectifiable Jordan curve with $\text{length}(\Gamma)=2\pi$.
Set $z(s)$, $0\leq s<2\pi$, to be an arc-length parametrization of $\Gamma$.
Then it holds that, for any $s,t\in[0,2\pi)$,
$\frac{\pi}{2}|e^{it}-e^{is}|=\pi\left|\sin{\frac{t-s}{2}}\right|\geq|z(t)-z(s)|.$
(7)
Moreover, if $\Gamma$ is $K$-chord-arc, then
$\frac{1}{K}|e^{it}-e^{is}|=\frac{2}{K}\left|\sin{\frac{t-s}{2}}\right|\leq|z(t)-z(s)|.$
(8)
###### Proof.
If $|t-s|\leq\pi$, it is easy to see that
$\Big{|}\sin{\frac{t-s}{2}}\Big{|}\geq\frac{2}{\pi}\Big{|}\frac{t-s}{2}\Big{|}\geq\frac{1}{\pi}|z(t)-z(s)|.$
If $\pi<|t-s|<2\pi$, we may also see that
$\Big{|}\sin{\frac{t-s}{2}}\Big{|}=\sin{\frac{2\pi-|t-s|}{2}}\geq\frac{2}{\pi}\cdot\frac{2\pi-|t-s|}{2}\geq\frac{1}{\pi}|z(t)-z(s)|.$
This proves the inequality (7).
Further, with the use of the chord-arc condition one may see that if
$|t-s|\leq\pi$,
$\Big{|}\sin{\frac{t-s}{2}}\Big{|}\leq\frac{1}{2}|t-s|\leq\frac{K}{2}|z(t)-z(s)|,$
and if $\pi<|t-s|<2\pi$,
$\Big{|}\sin{\frac{t-s}{2}}\Big{|}\leq\frac{1}{2}(2\pi-|t-s|)\leq\frac{K}{2}|z(t)-z(s)|.$
This gives a proof of the inequality (8). ∎
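As a concrete instance of Lemma 1, the unit circle with arc-length parametrization $z(s)=e^{is}$ is chord-arc with constant $K=\pi/2$ (antipodal points realize the supremum of arc over chord). The sketch below checks inequalities (7) and (8) numerically in this case; the grid size is an illustrative choice of ours.

```python
import math

# The unit circle: arc-length parametrization z(s) = e^{is}, length 2*pi,
# chord-arc constant K = pi/2.
K = math.pi / 2
z = lambda s: complex(math.cos(s), math.sin(s))

n = 300
for i in range(n):
    for j in range(n):
        s, t = 2 * math.pi * i / n, 2 * math.pi * j / n
        chord = abs(z(t) - z(s))              # |z(t) - z(s)|
        half = abs(math.sin((t - s) / 2))     # |sin((t - s)/2)|
        assert math.pi * half >= chord - 1e-12    # inequality (7)
        assert (2 / K) * half <= chord + 1e-12    # inequality (8)
print("inequalities (7) and (8) verified on a grid for the unit circle")
```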
###### Proof of Theorem 2.
Suppose that $\Gamma$ is a $K$-chord-arc curve with
$\text{length}(\Gamma)=2\pi$. We notice by Lemma 1 that its arc-length
parametrization $z(e^{is})$, $0\leq s<2\pi$, satisfies
$\frac{1}{K}|e^{it}-e^{is}|\leq|z(e^{it})-z(e^{is})|\leq\frac{\pi}{2}|e^{it}-e^{is}|$
(9)
for any $s,t\in[0,2\pi)$. Then, we see that $z$ is a bi-Lipschitz embedding of
$\mathbb{S}$ into $\mathbb{C}$. Recall that $\varphi$ is a conformal map of
$\mathbb{D}$ onto the chord-arc domain $\Omega$, and a homeomorphism of
closures $\mathbb{D}\cup\mathbb{S}$ onto $\Omega\cup\Gamma$. It follows that
$\varphi$ restricted to $\mathbb{S}$ is a quasisymmetric mapping of
$\mathbb{S}$ onto $\Gamma$. As a consequence, $z^{-1}\circ\varphi$ is a
quasisymmetry of $\mathbb{S}$. Note that
$\|f\|_{H^{1/2}(\Gamma)}^{2}=\frac{1}{4\pi^{2}}\int_{0}^{2\pi}\int_{0}^{2\pi}\Big{|}\frac{f(z(e^{it}))-f(z(e^{is}))}{z(e^{it})-z(e^{is})}\Big{|}^{2}dtds$
and
$\|f\circ
z\|_{H^{1/2}(\mathbb{S})}^{2}=\frac{1}{4\pi^{2}}\int_{0}^{2\pi}\int_{0}^{2\pi}\Big{|}\frac{f(z(e^{it}))-f(z(e^{is}))}{e^{it}-e^{is}}\Big{|}^{2}dtds.$
Using (9) gives
$\frac{4}{\pi^{2}}\|f\circ
z\|_{H^{1/2}(\mathbb{S})}^{2}\leq\|f\|_{H^{1/2}(\Gamma)}^{2}\leq K^{2}\|f\circ
z\|_{H^{1/2}(\mathbb{S})}^{2}.$
By Proposition 1, the quasisymmetry of $z^{-1}\circ\varphi$ implies
$\|f\circ\varphi\|_{H^{1/2}(\mathbb{S})}=\|f\circ
z\circ(z^{-1}\circ\varphi)\|_{H^{1/2}(\mathbb{S})}\approx\|f\circ
z\|_{H^{1/2}(\mathbb{S})}.$
Then, we conclude that
$\|f\circ\varphi\|_{H^{1/2}(\mathbb{S})}\approx\|f\|_{H^{1/2}(\Gamma)}.$ (10)
It follows from (5) that
$\|f\|_{\mathcal{H}(\Gamma)}\approx\|f\|_{H^{1/2}(\Gamma)}.$ This completes
the proof of Theorem 2. ∎
We noticed recently that the result corresponding to Theorem 2 for the
critical Besov space has been obtained in [20] via different reasoning. By
inspecting the above proof we can see that (10) is still valid when replacing the
conformal mapping $\varphi$ by any quasisymmetric mapping $h$ of $\mathbb{S}$
onto $\Gamma$, and thus we have the following
###### Corollary 1.
Let $h$ be a quasisymmetric mapping of $\mathbb{S}$ onto a chord-arc curve
$\Gamma$. Then the composition operator $V_{h}:f\mapsto f\circ h$ gives a
bounded isomorphism of $H^{1/2}(\Gamma)$ onto $H^{1/2}(\mathbb{S})$.
## 5\. About the necessity of the chord-arc condition
In the last section (Theorem 2) we proved that if $\Gamma$ is a chord-arc
curve then $\mathcal{H}(\Gamma)=H^{1/2}(\Gamma)$. In other words, we have
proven the existence of a constant $C>0$ such that the following two
inequalities hold for chord-arc curves:
* (a)
$\|\cdot\|_{H^{1/2}(\Gamma)}\leq C\|\cdot\|_{\mathcal{H}(\Gamma)}$.
* (b)
$\|\cdot\|_{\mathcal{H}(\Gamma)}\leq C\|\cdot\|_{H^{1/2}(\Gamma)}$.
Inequality (a) is equivalent to $\mathcal{H}(\Gamma)\subset H^{1/2}(\Gamma)$
and Inequality (b) to $H^{1/2}(\Gamma)\subset\mathcal{H}(\Gamma)$. The purpose
of this section is to examine the possible converse to Theorem 2; that is,
the following theorem.
###### Theorem 3.
Let $\Gamma$ be a rectifiable quasicircle. The following two statements hold.
* (1)
If (a) holds then $\Gamma$ is a chord-arc curve.
* (2)
There exists a non-chord-arc rectifiable quasicircle $\Gamma$ such that (b)
holds.
We will discuss notions of Ahlfors regularity in the next subsection; these
will be the key tool in the proof of part $(1)$ of Theorem 3 and are also of
independent interest. We then prove parts $(1)$ and $(2)$ of Theorem 3 in the
final two subsections.
### 5.1. Ahlfors-regular curves
In this subsection we discuss Ahlfors-regular curves, sometimes also called
Ahlfors-David regular curves. Ahlfors’ name appears because he already
considered this condition in [1], and David’s because he proved that these
curves are precisely the curves for which the Cauchy operator is bounded on
$L^{2}$ (see [8]). In the sequel we will simply call these curves regular;
their precise definition is as follows.
###### Definition 2.
Let $\Gamma$ be a rectifiable curve in the plane. We say that $\Gamma$ is
$M$-regular if for any $z\in\mathbb{C}$ and $r>0$,
$\mathrm{length}(\Gamma\cap D(z,r))\leq Mr,$
where $D(z,r)$ stands for the open disk centered at $z$ and of radius $r$.
Our aim is to prove the following theorem giving two equivalent properties of
regularity. The first of these properties involves the Riemann sphere $S^{2}$,
which is $\mathbb{C}\cup\\{\infty\\}$ equipped with the metric
$d\rho=\frac{|dz|}{1+|z|^{2}}.$
We will denote by s-length the spherical length of a curve. The group of
conformal automorphisms of $S^{2}$ is the group of Möbius transformations
$z\mapsto\frac{az+b}{cz+d},\quad ad-bc=1,$
which is thus isomorphic to ${\rm PSL}(2,\mathbb{C})$.
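As a sanity check on the spherical metric, the spherical length of the circle $|z|=R$ is $2\pi R/(1+R^{2})\leq\pi$: Euclidean length grows without bound while s-length stays bounded. The numerical sketch below verifies this; the parametrizations and grid size are illustrative choices of ours.

```python
import math, cmath

def s_length(gamma, dgamma, n=20000):
    # Midpoint-rule approximation of the spherical length
    # integral of |gamma'(t)| / (1 + |gamma(t)|^2) dt over [0, 2*pi].
    dt = 2 * math.pi / n
    return sum(abs(dgamma((k + 0.5) * dt)) / (1 + abs(gamma((k + 0.5) * dt)) ** 2) * dt
               for k in range(n))

def circle(R):
    # The circle |z| = R and its derivative with respect to t.
    return (lambda t: R * cmath.exp(1j * t), lambda t: 1j * R * cmath.exp(1j * t))

for R in (0.1, 1.0, 10.0, 1000.0):
    g, dg = circle(R)
    sl = s_length(g, dg)
    assert abs(sl - 2 * math.pi * R / (1 + R * R)) < 1e-9
    assert sl <= math.pi + 1e-9  # bounded uniformly in R, maximum at R = 1
print("spherical lengths of circles stay below pi")
```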
###### Lemma 2.
If $\Gamma$ is an $M$-regular curve then $T(\Gamma)$ is, for any Möbius
transformation $T$, $12M$-regular.
###### Proof.
Every Möbius transformation $T$ may be factorized as $T=S_{1}\circ\Theta\circ
S_{2}$ where $S_{i},\,i=1,2$, are similitudes, and $\Theta(z)=1/z$. The
property of regularity, along with the constant $M$, is obviously invariant
for similitudes so it suffices to prove the lemma for $\Theta$.
Let us consider an $M$-regular curve $\Gamma$. Set
$z_{0}=r_{0}e^{it}\in\mathbb{C},\,r>0,\,D=D(z_{0},r)$. Without loss of
generality we may assume that $z_{0}$ is real positive (i.e., $t=0$). Set
$\Delta=\Theta^{-1}(D)$. If $r<r_{0}/2$ then $\Delta$ is the disk centered at
$\frac{r_{0}}{r_{0}^{2}-r^{2}}$ with radius
$\frac{r}{r_{0}^{2}-r^{2}}<\frac{4r}{3r_{0}^{2}}$. Moreover, if
$z\in\Delta$, then $|z|\geq\frac{1}{r_{0}+r}>\frac{2}{3r_{0}}$. It follows that
$\displaystyle\mathrm{length}(\Theta(\Gamma)\cap D)$
$\displaystyle=\int_{\Gamma\cap\Delta}\frac{|dz|}{|z|^{2}}\leq\int_{\Gamma\cap\Delta}\frac{|dz|}{\left(2/(3r_{0})\right)^{2}}$
$\displaystyle\leq
M\left(\frac{4r}{3r_{0}^{2}}\right)\left(\frac{3r_{0}}{2}\right)^{2}=3Mr.$
If now $r\geq r_{0}/2$, $D$ is included in $D(0,3r)$ and $\Delta$ in
$\\{|z|>1/(3r)\\}$. We may then write
$\displaystyle\mathrm{length}(\Theta(\Gamma)\cap D)$
$\displaystyle=\int_{\Gamma\cap\Delta}\frac{|dz|}{|z|^{2}}\leq\int_{\Gamma\cap\\{|z|>\frac{1}{3r}\\}}\frac{|dz|}{|z|^{2}}$
$\displaystyle=\sum\limits_{n\geq
0}\int_{\Gamma\cap\\{\frac{2^{n}}{3r}<|z|\leq\frac{2^{n+1}}{3r}\\}}\frac{|dz|}{|z|^{2}}\leq\sum\limits_{n\geq
0}M\frac{2^{n+1}}{3r}\frac{9r^{2}}{2^{2n}}=12Mr.$
∎
We are now ready to state the main result of this subsection.
###### Theorem 4.
Let $\Gamma$ be a rectifiable curve in the plane. Then the following
statements are equivalent.
* (i)
$\Gamma$ is an $M$-regular curve,
* (ii)
There exists $K>0$ such that for any Möbius transformation $T$,
$\text{\rm s-length}(T(\Gamma))\leq K,$
* (iii)
There exists $C>0$ such that for every $w\notin\Gamma$,
$\mathrm{length}(M_{w}(\Gamma))\leq\frac{C}{d(w,\Gamma)},$
where $M_{w}(z)=1/(z-w)$ is an inversion and $d(w,\Gamma)$ is the distance
from $w$ to $\Gamma$.
###### Proof.
We first show that (i) $\Rightarrow$ (ii). Suppose $\Gamma$ is $M$-regular. It
follows from Lemma 2 that for any Möbius transformation $T$, $T(\Gamma)$ is
$12M$-regular. Let $\eta:\,[0,1]\to\mathbb{C}$ be an absolutely continuous
parametrization of $T(\Gamma)$. We see that the s-length of $T(\Gamma)$ is
$\int_{0}^{1}\frac{|\eta^{\prime}(t)|dt}{1+|\eta(t)|^{2}}=I+\sum\limits_{n=0}^{\infty}I_{n},$
where
$I=\int_{|\eta(t)|<1}\frac{|\eta^{\prime}(t)|dt}{1+|\eta(t)|^{2}}\leq 12M$
while
$I_{n}=\int_{2^{n}\leq|\eta(t)|<2^{n+1}}\frac{|\eta^{\prime}(t)|dt}{1+|\eta(t)|^{2}}\leq\frac{12M2^{n+1}}{1+2^{2n}}\leq
12M2^{1-n},$
and thus we have $\text{\rm s-length}(T(\Gamma))\leq K$ by taking $K=60M$.
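For reference, the constant $K=60M$ in the last line arises from summing the estimates $I\leq 12M$ and $I_{n}\leq 12M\cdot 2^{1-n}$ over $n\geq 0$; a trivial numerical check of this geometric series, with the illustrative value $M=1$:

```python
M = 1.0
I = 12 * M  # bound for the part of the curve with |eta(t)| < 1
# bounds I_n <= 12*M*2^(1-n) for the dyadic annuli 2^n <= |eta(t)| < 2^(n+1)
tail = sum(12 * M * 2 ** (1 - nn) for nn in range(200))
total = I + tail  # the series sums to 48*M, giving K = 12*M + 48*M = 60*M
assert total <= 60 * M + 1e-9
print(total)  # approaches 60*M
```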
Now we prove (ii) $\Rightarrow$ (iii). Let $\gamma:\,[0,1]\to\mathbb{C}$ be an
absolutely continuous parametrization of $\Gamma$. Set $w\notin\Gamma$ so that
$\mathrm{length}(M_{w}(\Gamma))=\int_{0}^{1}\frac{|\gamma^{\prime}(t)|dt}{|\gamma(t)-w|^{2}}.$
We may write, by definition of $d:=d(w,\Gamma),$
$|\gamma(t)-w|^{2}\geq\frac{1}{2}(|\gamma(t)-w|^{2}+d^{2})$
for all $t\in[0,1]$, so that, if we define $\eta(t)=\frac{\gamma(t)-w}{d},$
$\int_{0}^{1}\frac{|\gamma^{\prime}(t)|dt}{|\gamma(t)-w|^{2}}\leq
2\int_{0}^{1}\frac{|\gamma^{\prime}(t)|dt}{d^{2}+|\gamma(t)-w|^{2}}=\frac{2}{d}\int_{0}^{1}\frac{|\eta^{\prime}(t)|dt}{1+|\eta(t)|^{2}}\leq\frac{2K}{d}$
by (ii).
Finally, we prove (iii) $\Rightarrow$ (i). This is the hardest part of the
proof. We first notice that in order to test the “regularity property” of a
curve we may replace disks by squares centered on $\Gamma$ with diameter not
greater than the diameter of $\Gamma$. Let $\mathcal{C}$ be such
a square. By the invariance of regularity under similitudes, one may assume
that $\mathcal{C}$ has side-length $1$ and center $0$. We define
$L=\mathrm{length}(\mathcal{C}\cap\Gamma).$
The goal is to estimate $L$ from above by a constant depending only on $C$,
the constant in (iii). If $L\leq 100C$, we are done. Otherwise, let $n$
be the integer part of $\frac{L}{2C}$; it follows that $L<3Cn$. It remains to
prove that $n$ is bounded from above by a constant depending only on $C$. We
will do this in two steps.
We first cut $\mathcal{C}$ into $n^{2}$ sub-squares of side-length
$\frac{1}{n}$, that we call $\mathcal{C}_{j}$. We denote by $k\mathcal{C}_{j}$
( $k>0$) the intersection of $\mathcal{C}$ and the square with the same center
as $\mathcal{C}_{j}$ whose side-length is $k$ times the side-length of
$\mathcal{C}_{j}$. If $z_{0}\in\mathcal{C}\\!\setminus\\!\\!\Gamma$ then we
have on one side
$\int_{\Gamma\cap\mathcal{C}}|z-z_{0}|^{-2}|dz|\geq 2L,$
while, on the other side, by (iii),
$\int_{\Gamma\cap\mathcal{C}}|z-z_{0}|^{-2}|dz|\leq\frac{C}{d(z_{0},\Gamma)}.$
Consequently,
$d(z_{0},\Gamma)\leq\frac{C}{2L}\leq\frac{1}{4n},$ (11)
from which it follows that every $\mathcal{C}_{j}$ meets $\Gamma$. Since
$\mathrm{diam}(\mathcal{C}_{j})\leq\mathrm{diam}(\Gamma)/10$, $\Gamma$ also meets the
complement of $3\mathcal{C}_{j}$ and, as a consequence,
$\mathrm{length}(\Gamma\cap 3\mathcal{C}_{j})\geq\frac{1}{n}.$
Next, note that
$\int_{\Gamma}|z-z_{0}|^{-2}|dz|\geq\frac{1}{9}\sum\limits_{j\in
J}\int_{3\mathcal{C}_{j}\cap\Gamma}|z-z_{0}|^{-2}|dz|,$
where $J=\\{j:z_{0}\notin 27\mathcal{C}_{j}\\}$. If $z,z^{\prime}\in
3\mathcal{C}_{j},\,j\in J,$ then $|z^{\prime}-z_{0}|\leq 2|z-z_{0}|$, from
which it follows easily that
$\displaystyle\iint_{3\mathcal{C}_{j}}|z-z_{0}|^{-2}dxdy$ $\displaystyle\leq
n\times\mathrm{length}(\Gamma\cap
3\mathcal{C}_{j})\times\Big{(}\frac{3}{n}\Big{)}^{2}\times\max_{z\in
3\mathcal{C}_{j}}|z-z_{0}|^{-2}$
$\displaystyle\leq\frac{36}{n}\int_{\Gamma\cap
3\mathcal{C}_{j}}|z-z_{0}|^{-2}|dz|.$
By comparison with an integral we have
$\displaystyle\sum\limits_{j\in J}\iint_{3\mathcal{C}_{j}}|z-z_{0}|^{-2}dxdy$
$\displaystyle\geq\int_{0}^{\pi/4}\int_{12/n}^{1/2}\frac{1}{r}drd\theta$
$\displaystyle\geq\frac{\pi}{4}\log\frac{n}{24}.$
Combining these estimates we deduce that
$\int_{\Gamma}|z-z_{0}|^{-2}|dz|\geq\frac{n}{500}\log\frac{n}{24},$
and this, combined with (iii), implies
$d(z_{0},\Gamma)\leq\frac{500C}{n\log\frac{n}{24}}.$ (12)
Let $n^{\prime}$ be the integer part of $\frac{n\log\frac{n}{24}}{2000C}$. We
can suppose that $n^{\prime}$ is bigger than $n$. If not, $n$ must be bounded
from above by a constant depending only on $C$, and then we are done. By (12)
we have
$d(z_{0},\Gamma)\leq\frac{1}{4n^{\prime}}$ (13)
which is an improvement of (11). Then we repeat the above discussion. We cut
$\mathcal{C}$ into $n^{\prime 2}$ sub-squares of side-length
$\frac{1}{n^{\prime}}$, and we realize that each of these sub-squares meets
$\Gamma$. On the other hand, $\Gamma$ meets the complement of
$3\mathcal{C}_{j}$. Then, we see
$\mathrm{length}(\Gamma\cap 3\mathcal{C}_{j})\geq\frac{1}{n^{\prime}}.$
As a consequence,
$L>\frac{1}{9}\times\sum_{j=1}^{n^{\prime 2}}\mathrm{length}(\Gamma\cap
3\mathcal{C}_{j})>\frac{n^{\prime}}{9}>\frac{1}{10}\times\frac{n\log\frac{n}{24}}{2000C}.$
(14)
Combining this with $L<3Cn$ we conclude that $n<24e^{60000C^{2}}$. Finally, we
deduce that $L<72Ce^{60000C^{2}},$ where $C$ is the constant occurring in
(iii). The proof of the theorem is complete. ∎
### 5.2. Necessity of the chord-arc property: part $(1)$ of Theorem 3
In this subsection we prove part $(1)$ of Theorem 3. To this end we first
consider $w\in\Omega_{e}$ and apply the Inequality (a) to the analytic
function $f(z)=\frac{1}{z-w}$ defined on $\Omega\cup\Gamma$. The square of the
norm on the right-hand side is then
$\displaystyle\|f\|_{\mathcal{H}(\Gamma)}^{2}$
$\displaystyle=\frac{1}{\pi}\iint_{\Omega}|f^{\prime}(z)|^{2}dxdy=\frac{1}{\pi}\iint_{\Omega}|z-w|^{-4}dxdy$
$\displaystyle\leq\frac{1}{\pi}\iint_{|z-w|>d(w,\Gamma)}|z-w|^{-4}dxdy$
$\displaystyle=d(w,\Gamma)^{-2}$
as we see using polar coordinates. The square of the norm on the left-hand
side is
$\displaystyle\|f\|_{H^{1/2}(\Gamma)}^{2}$
$\displaystyle=\frac{1}{4\pi^{2}}\iint_{\Gamma\times\Gamma}\left|\frac{f(z)-f(\zeta)}{z-\zeta}\right|^{2}|dz||d\zeta|$
$\displaystyle=\frac{1}{4\pi^{2}}\iint_{\Gamma\times\Gamma}\left|\frac{\frac{1}{z-w}-\frac{1}{\zeta-w}}{z-\zeta}\right|^{2}|dz||d\zeta|$
$\displaystyle=\frac{1}{4\pi^{2}}\int_{\Gamma}\int_{\Gamma}\frac{1}{|z-w|^{2}|\zeta-w|^{2}}|dz||d\zeta|$
$\displaystyle=\left(\frac{1}{2\pi}\mathrm{length}\left(M_{w}(\Gamma)\right)\right)^{2}$
by Fubini’s theorem and with the notation of the last subsection.
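The Fubini factorization and the interpretation of $\int_{\Gamma}|z-w|^{-2}|dz|$ as $\mathrm{length}(M_{w}(\Gamma))$ can be spot-checked numerically. In the sketch below, the unit circle as $\Gamma$, the point $w=0.3$, and the grid size are illustrative choices of ours; the closed form $2\pi/(1-w^{2})$ for the circle is the classical Poisson-kernel integral, not a formula from the text.

```python
import math, cmath

w = 0.3
n = 400
dt = 2 * math.pi / n
# midpoints of an arc-length grid on the unit circle (|dz| = dt there)
pts = [cmath.exp(1j * (k + 0.5) * dt) for k in range(n)]

# length(M_w(Gamma)) = integral over Gamma of |z - w|^{-2} |dz|;
# for the unit circle this equals 2*pi / (1 - w^2).
I = sum(dt / abs(z - w) ** 2 for z in pts)

# The Fubini step: the double integral over Gamma x Gamma factors as I^2.
double = sum(dt * dt / (abs(z - w) ** 2 * abs(zeta - w) ** 2)
             for z in pts for zeta in pts)

assert abs(I - 2 * math.pi / (1 - w * w)) < 1e-6
assert abs(double - I * I) < 1e-6 * I * I
print(I)
```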
We have thus proven that if (a) is valid then there exists a constant $C>0$
such that
$\mathrm{length}(M_{w}(\Gamma))\leq\frac{C}{d(w,\Gamma)}$ (15)
for any $w\in\Omega_{e}$. Next we consider $w\in\Omega$ and apply $(a)$ to the
analytic function $f(z)=\frac{1}{z-w}$ defined on $\Omega_{e}\cup\Gamma$.
Using the fact that $\mathcal{H}(\Gamma)$ is a “two-sided” space for a
quasicircle $\Gamma$, we may similarly see that the inequality (15) remains
true for any $w\in\Omega$. We can then invoke Theorem 4, and see that $\Gamma$
has to be regular. Consequently, $\Gamma$ is chord-arc since we have assumed
it is a quasicircle. This completes the proof of part $(1)$ of Theorem 3.
As noted above, it follows from Proposition 1 that when $h$ is a
quasisymmetric mapping from $\mathbb{S}$ onto a rectifiable quasicircle
$\Gamma$ it holds that
$\|f\|_{\mathcal{H}(\Gamma)}=\|f\circ\varphi\|_{H^{1/2}(\mathbb{S})}\approx\|f\circ
h\|_{H^{1/2}(\mathbb{S})}$, and thus part $(1)$ of Theorem 3 immediately
implies the following fact.
###### Corollary 2.
Let $h$ be a quasisymmetric mapping of $\mathbb{S}$ onto a rectifiable
quasicircle $\Gamma$. Let the composition operator $(V_{h})^{-1}:f\mapsto
f\circ h^{-1}$ be a bounded linear operator from $H^{1/2}(\mathbb{S})$ into
$H^{1/2}(\Gamma)$. Then $\Gamma$ is a chord-arc curve.
### 5.3. Non-Smirnov domains: part $(2)$ of Theorem 3
In this subsection we present a counter-example to show part $(2)$ of Theorem
3. When $\Gamma$ is a Jordan curve with $\Omega$ as interior domain and
$\varphi:\mathbb{D}\to\Omega$ a Riemann mapping, we know (F. and M. Riesz
theorem [23]) that $\Gamma$ is rectifiable if and only if $\varphi^{\prime}$
belongs to the Hardy space $H^{1}(\mathbb{D})$.
###### Definition 3.
Let $\Gamma$ be a rectifiable curve in the plane. We say that $\Omega$ is a
Smirnov domain if $\varphi^{\prime}$ is an outer function of
$H^{1}(\mathbb{D})$; that is,
$\log|\varphi^{\prime}(z)|=\int_{\mathbb{S}}p(z,\zeta)\log|\varphi^{\prime}(\zeta)||d\zeta|\quad\text{\rm
for}\;z\in\mathbb{D},$ (16)
where $p$ is the Poisson kernel.
It has been shown by Lavrentiev [18] that chord-arc domains are Smirnov
domains, and later by the second author [31] that Jordan domains with
Ahlfors-regular boundary also have the Smirnov property. On the other hand,
there exists a rectifiable quasidisk $\Omega$ whose Riemann map $\varphi$
satisfies that $\varphi^{\prime}$ is an inner function, see [11], [17]; that is,
$|\varphi^{\prime}(\zeta)|=1\quad\text{\rm for almost\;all
}\,\zeta\in\mathbb{S},\quad|\varphi^{\prime}(z)|<1\quad\text{\rm
for}\,z\in\mathbb{D}.$ (17)
In particular, harmonic measure on $\Gamma$ with respect to $\varphi(0)$ is
equal to arc-length measure on $\Gamma$ despite the fact that $\Omega$ is not
a disk. But the Smirnov condition (16) is not satisfied because of (17).
We are going to exploit this fact to show that this domain satisfies
Inequality (b) even though it is not chord-arc. This follows immediately from
the following observation:
$\displaystyle\|f\|^{2}_{H^{1/2}(\Gamma)}$
$\displaystyle=\int_{0}^{2\pi}\int_{0}^{2\pi}\left|\frac{f(\varphi(e^{it}))-f(\varphi(e^{is}))}{\varphi(e^{it})-\varphi(e^{is})}\right|^{2}\frac{dt}{2\pi}\frac{ds}{2\pi}$
$\displaystyle\geq\frac{4}{\pi^{2}}\int_{0}^{2\pi}\int_{0}^{2\pi}\Big{|}\frac{f(\varphi(e^{it}))-f(\varphi(e^{is}))}{e^{it}-e^{is}}\Big{|}^{2}\frac{dt}{2\pi}\frac{ds}{2\pi}$
$\displaystyle=\frac{4}{\pi^{2}}\|f\|^{2}_{\mathcal{H}(\Gamma)},\quad\text{for}\,f\in
H^{1/2}(\Gamma).$
Here, the inequality “$\geq$” is due to the inequality (7). This completes the
proof of part $(2)$ of Theorem 3.
Immediately, by taking the quasisymmetric mapping $h$ to be $\varphi$ on
$\mathbb{S}$ we have the following
###### Corollary 3.
There exists a quasisymmetric mapping $h$ of $\mathbb{S}$ onto a rectifiable
quasicircle $\Gamma$ such that the composition operator $V_{h}:f\mapsto f\circ
h$ is a bounded linear operator from $H^{1/2}(\Gamma)$ into
$H^{1/2}(\mathbb{S})$, but $\Gamma$ is non-chord-arc.
As a side remark, in [16], it is proven that if $\Omega$ is a quasidisk such
that $\varphi^{\prime}$, the derivative of its Riemann map, is a singular
inner function, then $\Omega_{e}$ has to be a Smirnov domain. It follows that
there are counterexamples which are Smirnov domains.
Acknowledgement. We would like to thank Yves Meyer for sharing his idea
leading to (iii) $\Rightarrow$ (i) of Theorem 4. We also thank the referees
for many constructive comments, which helped us to greatly improve the quality
of this paper.
## Statements and Declarations
Conflict of interest. On behalf of all authors, the corresponding author
states that there is no conflict of interest regarding the publication of this
paper.
Data Availability. This paper has no associated data.
## References
* [1] Ahlfors, L.V.: Zur Theorie der Überlagerungsflächen. Acta Math. 65 (1935), 157-194.
* [2] Ahlfors, L.V.: Lectures on quasiconformal mappings. Van Nostrand Math. Studies, 10 (1966).
* [3] Ahlfors, L.V.: Conformal Invariants, Topics in Geometric Function Theory. McGraw-Hill, 1973.
* [4] Astala, K., Iwaniec, T. and Martin, G.J.: Elliptic Partial Differential Equations and Quasiconformal Mappings in the Plane. Princeton University Press, Princeton, 2009.
* [5] Arcozzi, N., Rochberg, R., Sawyer, T. and Wick, B.: The Dirichlet Space: A Survey. New York J. Math. 17a (2011), 45-86.
* [6] Ahlfors, L.V. and Beurling A.: The boundary correspondence under quasiconformal mappings. Acta Math. 96 (1956), 125-142.
* [7] Bishop, C.: Function-Theoretic characterizations of Weil-Petersson Curves. Rev. Math. Iberoam 38 (2022), no. 7, 2355-2384.
* [8] David, G.: Opérateurs intégraux singuliers sur certaines courbes du plan complexe. Ann. Sci. École Norm. Sup. 17 (1984), no. 4, 157-189.
* [9] Douglas, J.: Solution of the problem of Plateau. Trans. Amer. Math. Soc. 33 (1931), no. 1, 263-321.
* [10] Duren, P.L.: Theory of $H^{p}$ spaces. Academic Press, New York, 1970.
* [11] Duren, P.L., Shapiro, H.S. and Shields, A.L.: Singular measures and domains not of Smirnov type. Duke Math. J. 33 (1966), 247-254.
* [12] El-Fallah, O., Kellay, K., Mashreghi, J. and Ransford, T.: A Primer on the Dirichlet space. Cambridge University Press, Cambridge, 2014.
* [13] Fukushima, M., Oshima, Y. and Takeda, M.: Dirichlet Forms and symmetric Markov Processes. Vol. 19. Walter de Gruyter, 2010.
* [14] Gol'dshtein, V., Latfullin, T. and Vodop'yanov, S.: Criteria for extensions of functions of the class $L_{2}^{1}$ from unbounded plane domains. Siberian Math. J. (English translation) 20:2 (1979), 298-301.
* [15] Jones, P.W.: Quasiconformal mappings and extendability of functions in Sobolev spaces. Acta Math. 147 (1981), 71-88.
* [16] Jones, P. and Smirnov, S.: On V. I. Smirnov domains. Ann. Acad. Sci. Fenn. Math. 24 (1999), 105-108.
* [17] Kahane, J.-P.: Trois notes sur les ensembles parfaits linéaires. Enseign. Math. 15 (1969), 185-192.
* [18] Lavrentiev, M.: Boundary problems in the theory of univalent functions. Amer. Math. Soc. Transl. Ser. 2, 32 (1963), 1-35.
* [19] Leoni, G.: A First Course in Sobolev Spaces. Volume 18 of Graduate Studies in Mathematics, second edition. American Mathematical Society, Providence, RI, 2017.
* [20] Liu, T. and Shen, Y.: The jump problem for the critical Besov space. Math. Z. 59, 306 (2024).
* [21] Meyers, N.G. and Serrin, J.: $H=W$. Proc. Nat. Acad. Sci. U.S.A. 51 (1964), no. 6, 1055-1056.
* [22] Nag, S. and Sullivan, D.: Teichmüller Theory and the universal periodic mapping via quantum calculus and the $H^{1/2}$ space on the circle. Osaka J. Math. 32 (1995), no. 1, 1-34.
* [23] Pommerenke, C.: Boundary behavior of conformal maps. Grundlehren Math. Wiss., vol. 299, Springer-Verlag, Berlin, 1992.
* [24] Ross, W.T.: The classical Dirichlet space. In Recent advances in operator-related function theory, pp. 171-197. Contemp. Math. 393, Amer. Math. Soc., Providence, RI, 2006.
* [25] Schippers, E. and Staubach, W.: Harmonic reflection in quasicircles and well-posedness of a Riemann-Hilbert problem on quasidisks. J. Math. Anal. Appl. 448 (2017), 864-884.
* [26] Schippers, E. and Staubach, W.: Analysis on quasidisks: A unified approach through transmission and jump problems. EMS Surv. Math. Sci. 9 (2022), 31-97.
* [27] Shen, Y.: Weil-Petersson Teichmüller Space. Amer. J. Math. 140 (2018), 1041-1074.
* [28] Takhtajan, L.A. and Teo, L.-P.: Weil-Petersson metric on the Universal Teichmüller Space. Mem. Amer. Math. Soc. 183 (2006), no. 861.
* [29] Tukia, P. and Väisälä, J.: Quasisymmetric embeddings of metric spaces. Ann. Acad. Sci. Fenn. Ser. A. I. Math. 5 (1980), 97-114.
* [30] Wang, Y.: Equivalent descriptions of the Loewner energy. Invent. Math. 218 (2019), no. 2, 573-621.
* [31] Zinsmeister, M.: Domaines de Lavrentiev. Publ. Math. Orsay, 1985.
# On extropy of past lifetime distribution
Osman Kamari
University of Human Development
Sulaymaniyah, Iraq
Francesco Buono
Università di Napoli Federico II
Italy
###### Abstract
Recently, Qiu et al. (2017) introduced residual extropy as a measure of
uncertainty in residual lifetime distributions, analogous to residual entropy
(Ebrahimi, 1996), and obtained some of its properties and applications. In
this paper, we study the extropy to measure the uncertainty in a past
lifetime distribution. This measure of uncertainty is called past extropy. We
also show a characterization result about the past extropy of the largest
order statistic.
Keywords: Reversed residual lifetime, Past extropy, Characterization, Order
statistics.
AMS Subject Classification: 94A17, 62B10, 62G30
## 1 Introduction
The concept of Shannon entropy as a seminal measure of uncertainty for a
random variable was proposed by Shannon (1948). Shannon entropy $H(f)$ for a
non-negative and absolutely continuous random variable $X$ is defined as
follows:
$H\left(f\right)=-\mathbb{E}[\log f(X)]=-\int_{0}^{+\infty}f\left(x\right)\log
f\left(x\right)\ \mathrm{d}x,$ (1)
where $F$ and $f$ are the cumulative distribution function (CDF) and the
probability density function (pdf) of $X$, respectively. There is a huge
literature devoted to the applications, generalizations and properties of
Shannon entropy (see, e.g., Cover and Thomas, 2006).
Recently, a new measure of uncertainty called extropy was proposed by Lad et
al. (2015) as a complementary dual of Shannon entropy. For a non-negative
random variable $X$ the extropy is defined as
$J\left(X\right)=-\frac{1}{2}\int_{0}^{+\infty}f^{2}(x)\mathrm{d}x.$ (2)
It is obvious that $J\left(X\right)\leq 0.$
One of the statistical applications of extropy is to score forecasting
distributions using the total log scoring rule.
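As a quick numerical sanity check (ours, not part of the paper), definition (2) can be evaluated for an exponential distribution, where the closed form is $J(X)=-\lambda/4$. The helper name `extropy` is illustrative.

```python
import math

# For X ~ Exp(lam), (2) gives J(X) = -1/2 * integral of lam^2 e^{-2 lam x},
# which equals -lam/4. We approximate the integral by the trapezoidal rule.
def extropy(pdf, a, b, n=200_000):
    h = (b - a) / n
    ys = [pdf(a + i * h) ** 2 for i in range(n + 1)]
    return -0.5 * h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

lam = 2.0
f_exp = lambda x: lam * math.exp(-lam * x)
print(extropy(f_exp, 0.0, 40.0), -lam / 4)  # the two values agree closely
```

The truncation at $b=40$ is harmless here because the tail of $f^{2}$ is exponentially small.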
The study of duration is a subject of interest in many fields of science,
such as reliability, survival analysis, and forensic science. In these areas,
the additional lifetime, given that the component, system, or living organism
has survived up to time $t$, is termed the residual life function of the
component. If $X$ is the life of a component, then
$X_{t}=\left(X-t|X>t\right)$ is called the residual life function. If a
component is known to have survived to age $t$, then the extropy $J(X)$ is no
longer useful to measure the uncertainty of the remaining lifetime of the
component. Therefore, Ebrahimi (1996) defined the entropy of the residual
lifetime $X_{t}=\left(X-t|X>t\right)$, a dynamic form of uncertainty called
the residual entropy at time $t$, as
$H(X;t)=-\int_{t}^{+\infty}\frac{f(x)}{\overline{F}(t)}\log\frac{f(x)}{\overline{F}(t)}\
\mathrm{d}x,$
where $\overline{F}(t)=\mathbb{P}(X>t)=1-F(t)$ is the survival (reliability)
function of $X$.
Analogously to the residual entropy, Qiu et al. (2017) defined the extropy of
the residual lifetime $X_{t}$, called the residual extropy at time $t$, as
$J\left(X_{t}\right)=-\frac{1}{2}\int_{0}^{+\infty}f_{X_{t}}^{2}(x)\mathrm{d}x=-\frac{1}{2\overline{F}^{2}(t)}\int_{t}^{+\infty}f^{2}(x)\mathrm{d}x.$
(3)
In many situations, uncertainty can relate to the past. Suppose the random
variable $X$ is the lifetime of a component, system or a living organism,
having an absolutely continuous distribution function $F_{X}(t)$ and the
density function $f_{X}(t)$. For $t>0$, let the random variable
${}_{t}X=(t-X|X<t)$ be the time elapsed after failure till time $t$, given
that the component has already failed by time $t$. We call the random
variable ${}_{t}X$ the reversed residual life (past lifetime). For instance,
suppose that at time $t$ one has undergone a medical test to check for a
certain disease, and that the test result is positive. If $X$ is the age when
the patient was infected, then it is known that $X<t$. Now the question is:
how much time has elapsed since the patient was infected by this disease?
Based on this
idea, Di Crescenzo and Longobardi (2002) introduced the entropy of the
reversed residual lifetime ${}_{t}X$ as a dynamic measure of uncertainty
called past entropy as follows:
$H\left(X;[t]\right)=-\int_{0}^{t}\frac{f(x)}{F(t)}\log\frac{f(x)}{F(t)}\
\mathrm{d}x.$
This measure is dual of residual entropy introduced by Ebrahimi (1996).
In this paper, we study the extropy for ${}_{t}X$ as the dual of the residual
extropy; it is called past extropy and is defined as below (see also Krishnan
et al. (2020)):
$J\left({}_{t}X\right)=-\frac{1}{2}\int_{0}^{+\infty}f_{{}_{t}X}^{2}(x)\mathrm{d}x=-\frac{1}{2F^{2}(t)}\int_{0}^{t}f^{2}(x)\mathrm{d}x,$
(4)
where $f_{{}_{t}X}(x)=\frac{f(t-x)}{F(t)}$, for $x\in(0,t)$. It can be seen
that for $t\geq 0$, $J\left({}_{t}X\right)$ possesses all the properties of
$J(X)$.
###### Remark 1.
It’s clear that $J\left({}_{+\infty}X\right)=J(X)$.
Past extropy has applications in the context of information theory,
reliability and survival analysis, insurance, forensic science and other
related fields, because in these fields a lifetime distribution truncated
from above is of utmost importance.
The paper is organized as follows. In Section 2, an approach to measuring the
uncertainty in the past lifetime distribution is proposed, and a
characterization result involving the reversed failure rate is studied. In
Section 3, a characterization result is given based on the past extropy of
the largest order statistic.
## 2 Past extropy and some characterizations
Analogously to the residual extropy (Qiu et al. (2017)), the extropy of
${}_{t}X$ is called past extropy and, for a non-negative random variable $X$,
is given by
$J\left({}_{t}X\right)=-\frac{1}{2}\int_{0}^{+\infty}f_{{}_{t}X}^{2}(x)\mathrm{d}x=-\frac{1}{2F^{2}(t)}\int_{0}^{t}f^{2}(x)\mathrm{d}x,$
(5)
where $f_{{}_{t}X}(x)=\frac{f(t-x)}{F(t)}$, for $x\in(0,t)$, is the density
function of ${}_{t}X$. It is clear that $J({{}_{t}}X)\leq 0$, while the past
entropy of a continuous distribution may take any value in
$[-\infty,+\infty]$. Also, $J\left({}_{+\infty}X\right)=J(X)$.
###### Example 1.
* a)
If $X\sim Exp(\lambda)$, then
$J\left({}_{t}X\right)=-\frac{\lambda}{4}\frac{1+\mathrm{e}^{-\lambda
t}}{1-\mathrm{e}^{-\lambda t}}$ for $t>0$. This shows that the past extropy
of the exponential distribution is an increasing function of $t$.
* b)
If $X\sim U(0,b)$, then $J\left({}_{t}X\right)=-\frac{1}{2t}$.
* c)
If $X$ has power distribution with parameter $\alpha>0$, i.e. $f(x)=\alpha
x^{(\alpha-1)}$, $0<x<1$, then
$J\left({}_{t}X\right)=\frac{-\alpha^{2}}{2(2\alpha-1)t}$.
* d)
If $X$ has Pareto distribution with parameters $\theta>0,x_{0}>0$, i.e.
$f(x)=\frac{\theta}{x_{0}}\frac{x_{0}^{\theta+1}}{x^{\theta+1}}$, $x>x_{0}$,
then
$J\left({}_{t}X\right)=\frac{\theta^{2}}{2(2\theta+1)(t^{\theta}-x_{0}^{\theta})^{2}}\left[\frac{x_{0}^{2\theta}}{t}-\frac{t^{2\theta}}{x_{0}}\right]$.
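The closed forms above can be verified numerically against definition (4). The sketch below (ours, not the paper's) checks cases (a) and (b); the helper name `past_extropy` is illustrative.

```python
import math

# J(tX) = -1/(2 F(t)^2) * integral_0^t f(x)^2 dx, Eq. (4), via the trapezoidal rule.
def past_extropy(pdf, cdf, t, n=100_000):
    h = t / n
    ys = [pdf(i * h) ** 2 for i in range(n + 1)]
    integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return -integral / (2 * cdf(t) ** 2)

lam, t = 1.5, 0.8
f = lambda x: lam * math.exp(-lam * x)
F = lambda x: 1 - math.exp(-lam * x)
closed = -(lam / 4) * (1 + math.exp(-lam * t)) / (1 - math.exp(-lam * t))
print(past_extropy(f, F, t), closed)        # case (a): the values agree

b = 2.0                                     # case (b): uniform on (0, b)
print(past_extropy(lambda x: 1 / b, lambda x: x / b, t), -1 / (2 * t))
```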
There is a functional relation between past extropy and residual extropy as
follows:
$J(X)=F^{2}(t)J\left({}_{t}X\right)+\overline{F}^{2}(t)J\left(X_{t}\right),\forall
t>0.$
In fact
$\displaystyle
F^{2}(t)J\left({}_{t}X\right)+\overline{F}^{2}(t)J\left(X_{t}\right)$
$\displaystyle=$
$\displaystyle-\frac{1}{2}\int_{t}^{+\infty}f^{2}(x)\mathrm{d}x-\frac{1}{2}\int_{0}^{t}f^{2}(x)\mathrm{d}x$
$\displaystyle=$
$\displaystyle-\frac{1}{2}\int_{0}^{+\infty}f^{2}(x)\mathrm{d}x=J(X).$
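For the exponential distribution the decomposition can also be checked in closed form, since by the memoryless property the residual extropy equals $J(X)=-\lambda/4$ for every $t$; a small numerical illustration (ours) follows.

```python
import math

# Check J(X) = F(t)^2 J(tX) + (1 - F(t))^2 J(X_t) for X ~ Exp(lam).
# By memorylessness J(X_t) = J(X) = -lam/4; J(tX) is taken from Example 1(a).
lam, t = 1.0, 0.7
e = math.exp(-lam * t)                      # so F(t) = 1 - e, 1 - F(t) = e
J_X = -lam / 4
J_residual = -lam / 4
J_past = -(lam / 4) * (1 + e) / (1 - e)
lhs = (1 - e) ** 2 * J_past + e ** 2 * J_residual
print(lhs, J_X)  # both equal -lam/4
```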
From (4) we can rewrite the following expression for the past extropy:
$J\left({}_{t}X\right)=\frac{-\tau^{2}(t)}{2f^{2}(t)}\int_{0}^{t}f^{2}(x)\mathrm{d}x,$
where $\tau(t)=\frac{f(t)}{F(t)}$ is the reversed failure rate.
###### Definition 1.
A random variable is said to be increasing (decreasing) in past extropy if
$J\left({}_{t}X\right)$ is an increasing (decreasing) function of $t$.
###### Theorem 2.1.
$J\left({}_{t}X\right)$ is increasing (decreasing) if and only if
$J\left({}_{t}X\right)\leq(\geq)\frac{-1}{4}\tau(t)$.
###### Proof.
From (5) we get
$\frac{\mathrm{d}}{\mathrm{d}t}J\left({}_{t}X\right)=-2\tau(t)J\left({}_{t}X\right)-\frac{1}{2}\tau^{2}(t).$
Then $J\left({}_{t}X\right)$ is increasing if and only if
$2\tau(t)J\left({}_{t}X\right)+\frac{1}{2}\tau^{2}(t)\leq 0,$
but $\tau(t)\geq 0$ so this is equivalent to
$J\left({}_{t}X\right)\leq-\frac{1}{4}\tau(t).$
∎
###### Theorem 2.2.
The past extropy $J\left({}_{t}X\right)$ of $X$ is uniquely determined by
$\tau(t)$.
###### Proof.
From (5) we get
$\frac{\mathrm{d}}{\mathrm{d}t}J\left({}_{t}X\right)=-2\tau(t)J\left({}_{t}X\right)-\frac{1}{2}\tau^{2}(t).$
So we have a first-order linear differential equation, which can be solved as
follows:
$J\left({}_{t}X\right)=\mathrm{e}^{-2\int_{t_{0}}^{t}\tau(s)\mathrm{d}s}\left[J\left({}_{t_{0}}X\right)-\int_{t_{0}}^{t}\frac{1}{2}\tau^{2}(s)\mathrm{e}^{2\int_{t_{0}}^{s}\tau(y)\mathrm{d}y}\mathrm{d}s\right],$
where we can use the boundary condition $J\left({}_{+\infty}X\right)=J(X)$, so
we get
$J\left({}_{t}X\right)=\mathrm{e}^{2\int_{t}^{+\infty}\tau(s)\mathrm{d}s}\left[J\left(X\right)+\int_{t}^{+\infty}\frac{1}{2}\tau^{2}(s)\mathrm{e}^{-2\int_{s}^{+\infty}\tau(y)\mathrm{d}y}\mathrm{d}s\right].$
(6)
∎
###### Example 2.
Let $X\sim Exp(\lambda)$, with reversed failure rate
$\tau(t)=\frac{\lambda\mathrm{e}^{-\lambda t}}{1-\mathrm{e}^{-\lambda t}}$. It
follows from (6) that
$\displaystyle J\left({}_{t}X\right)$ $\displaystyle=$
$\displaystyle\mathrm{e}^{2\int_{t}^{+\infty}\frac{\lambda\mathrm{e}^{-\lambda
s}}{1-\mathrm{e}^{-\lambda
s}}\mathrm{d}s}\left[J\left(X\right)+\int_{t}^{+\infty}\frac{1}{2}\frac{\lambda^{2}\mathrm{e}^{-2\lambda
s}}{\left(1-\mathrm{e}^{-\lambda
s}\right)^{2}}\mathrm{e}^{-2\int_{s}^{+\infty}\frac{\lambda\mathrm{e}^{-\lambda
y}}{1-\mathrm{e}^{-\lambda y}}\mathrm{d}y}\mathrm{d}s\right]$ $\displaystyle=$
$\displaystyle\left(1-\mathrm{e}^{-\lambda
t}\right)^{-2}\left[J(X)+\frac{1}{2}\int_{t}^{+\infty}\lambda^{2}\mathrm{e}^{-2\lambda
s}\mathrm{d}s\right]$ $\displaystyle=$
$\displaystyle\frac{\lambda}{4}\frac{\mathrm{e}^{-2\lambda
t}-1}{\left(1-\mathrm{e}^{-\lambda
t}\right)^{2}}=-\frac{\lambda}{4}\frac{1+\mathrm{e}^{-\lambda
t}}{1-\mathrm{e}^{-\lambda t}},$
so we find again the same result of example 1.
Using the following definition (see Shaked and Shanthikumar, 2007), we give
in Theorem 2.3 a sufficient condition under which $J\left({}_{t}X\right)$ is
increasing in $t$.
###### Definition 2.
Let $X$ and $Y$ be two non-negative random variables with reliability functions
$\overline{F},\overline{G}$ and pdfs $f,g$ respectively. $X$ is smaller than
$Y$
* a)
in the likelihood ratio order, denoted by $X\leq_{lr}Y$, if
$\frac{f(x)}{g(x)}$ is decreasing in $x\geq 0$;
* b)
in the usual stochastic order, denoted by $X\leq_{st}Y$ if
$\overline{F}(x)\leq\overline{G}(x)$ for $x\geq 0$.
###### Remark 2.
It is well known that if $X\leq_{lr}Y$ then $X\leq_{st}Y$, and that
$X\leq_{st}Y$ if and only if
$\mathbb{E}(\varphi(Y))\leq(\geq)\mathbb{E}(\varphi(X))$ for any
decreasing (increasing) function $\varphi$.
###### Theorem 2.3.
Let $X$ be a random variable with CDF $F$ and pdf $f$. If
$f\left(F^{-1}(x)\right)$ is decreasing in $x\geq 0$, then
$J\left({}_{t}X\right)$ is increasing in $t\geq 0$.
###### Proof.
Let $U_{t}$ be a random variable with uniform distribution on $(0,F(t))$ with
pdf $g_{t}(x)=\frac{1}{F(t)}$ for $x\in(0,F(t))$, then based on (4) we have
$\displaystyle J\left({}_{t}X\right)$ $\displaystyle=$
$\displaystyle-\frac{1}{2F^{2}(t)}\int_{0}^{F(t)}f\left(F^{-1}(u)\right)\mathrm{d}u=-\frac{1}{2F(t)}\int_{0}^{F(t)}g_{t}(u)f\left(F^{-1}(u)\right)\mathrm{d}u$
$\displaystyle=$
$\displaystyle-\frac{1}{2F(t)}\mathbb{E}\left[f\left(F^{-1}(U_{t})\right)\right].$
Let $0\leq t_{1}\leq t_{2}$. If $0<x\leq F(t_{1})$, then
$\frac{g_{t_{1}}(x)}{g_{t_{2}}(x)}=\frac{F(t_{2})}{F(t_{1})}$ is a non-
negative constant. If $F(t_{1})<x\leq F(t_{2})$, then
$\frac{g_{t_{1}}(x)}{g_{t_{2}}(x)}=0$. Therefore
$\frac{g_{t_{1}}(x)}{g_{t_{2}}(x)}$ is decreasing in $x\in(0,F(t_{2}))$, which
implies $U_{t_{1}}\leq_{lr}U_{t_{2}}$. Hence $U_{t_{1}}\leq_{st}U_{t_{2}}$ and
so
$0\leq\mathbb{E}\left[f\left(F^{-1}(U_{t_{2}})\right)\right]\leq\mathbb{E}\left[f\left(F^{-1}(U_{t_{1}})\right)\right]$
using the assumption that $f\left(F^{-1}(U_{t})\right)$ is a decreasing
function. Since $0\leq\frac{1}{F(t_{2})}\leq\frac{1}{F(t_{1})}$ then
$J\left({}_{t_{1}}X\right)=-\frac{1}{2F(t_{1})}\mathbb{E}\left[f\left(F^{-1}(U_{t_{1}})\right)\right]\leq-\frac{1}{2F(t_{2})}\mathbb{E}\left[f\left(F^{-1}(U_{t_{2}})\right)\right]=J\left({}_{t_{2}}X\right).$
∎
###### Remark 3.
Let $X$ be a random variable with CDF $F(x)=x^{2}$, for $x\in(0,1)$. Then
$f\left(F^{-1}(x)\right)=2\sqrt{x}$ is increasing in $x\in(0,1)$.
Nevertheless, $J\left({}_{t}X\right)=-\frac{2}{3t}$ is increasing in
$t\in(0,1)$. So the condition in Theorem 2.3 that $f\left(F^{-1}(x)\right)$
is decreasing in $x$ is sufficient but not necessary.
## 3 Past extropy of order statistics
Let $X_{1},X_{2},\dots,X_{n}$ be a random sample from a distribution function
$F$; the order statistics of the sample are obtained by arranging
$X_{1},X_{2},\dots,X_{n}$ from the minimum to the maximum and are denoted by
$X_{(1)},X_{(2)},\dots,X_{(n)}$. Qiu and Jia (2018) defined the residual
extropy of the $i$-th order statistic and showed that the residual extropy of
order statistics can determine the underlying distribution uniquely. Let
$X_{1},X_{2},\dots,X_{n}$ be continuous i.i.d. random variables with CDF $F$
indicating the lifetimes of the $n$ components of a parallel system, and let
$X_{1:n},X_{2:n},\dots,X_{n:n}$ be the ordered lifetimes of the components.
Then $X_{n:n}$ represents the lifetime of the parallel system, with CDF
$F_{X_{n:n}}(x)=(F(x))^{n}$, $x>0$. The CDF of
$\left[t-X_{n:n}|X_{n:n}<t\right]$, called the reversed residual lifetime of
the system, is $1-\left(\frac{F(t-x)}{F(t)}\right)^{n}$. The past extropy of
the reversed residual lifetime of the parallel system, with distribution
function $F_{X_{n:n}}(x)$, is as follows:
$J\left({}_{t}X_{n:n}\right)=-\frac{n^{2}}{2(F(t))^{2n}}\int_{0}^{t}f^{2}(x)[F(x)]^{2n-2}\mathrm{d}x.$
###### Theorem 3.1.
If $X$ has an increasing pdf $f$ on $[0,T]$, with $T>t$, then
$J\left({}_{t}X_{n:n}\right)$ is decreasing in $n\geq 1$.
###### Proof.
The pdf of $(X_{n:n}|X_{n:n}\leq t)$ can be expressed as
$g_{n:n}^{t}(x)=\frac{nf(x)F^{n-1}(x)}{F^{n}(t)},\mbox{ }x\leq t.$
We note that
$\frac{g_{2n-1:2n-1}^{t}(x)}{g_{2n+1:2n+1}^{t}(x)}=\frac{2n-1}{2n+1}\frac{F^{2}(t)}{F^{2}(x)}$
is decreasing in $x\in[0,t]$ and so $(X_{2n-1:2n-1}|X_{2n-1:2n-1}\leq
t)\leq_{lr}(X_{2n+1:2n+1}|X_{2n+1:2n+1}\leq t)$ which implies
$(X_{2n-1:2n-1}|X_{2n-1:2n-1}\leq t)\leq_{st}(X_{2n+1:2n+1}|X_{2n+1:2n+1}\leq
t)$. If $f$ is increasing on $[0,T]$ we have
$\mathbb{E}\left[f\left(X_{2n-1:2n-1}\right)|X_{2n-1:2n-1}\leq
t\right]\leq\mathbb{E}\left[f\left(X_{2n+1:2n+1}\right)|X_{2n+1:2n+1}\leq
t\right].$
From the definition of the past extropy it follows that
$\displaystyle J\left({}_{t}X_{n:n}\right)$ $\displaystyle=$
$\displaystyle-\frac{n^{2}}{2F^{2n}(t)}\int_{0}^{t}f^{2}(x)F^{2n-2}(x)\mathrm{d}x$
$\displaystyle=$
$\displaystyle\frac{-n^{2}}{2(2n-1)F(t)}\int_{0}^{t}\frac{(2n-1)F^{2n-2}(x)f(x)}{F^{2n-1}(t)}f(x)\mathrm{d}x$
$\displaystyle=$
$\displaystyle\frac{-n^{2}}{2(2n-1)F(t)}\mathbb{E}\left[f\left(X_{2n-1:2n-1}\right)|X_{2n-1:2n-1}\leq
t\right].$
Then it follows that
$\displaystyle\frac{J\left({}_{t}X_{n:n}\right)}{J\left({}_{t}X_{n+1:n+1}\right)}$
$\displaystyle=$
$\displaystyle\frac{n^{2}}{(n+1)^{2}}\frac{2n-1}{2n+1}\frac{\mathbb{E}\left[f\left(X_{2n-1:2n-1}\right)|X_{2n-1:2n-1}\leq
t\right]}{\mathbb{E}\left[f\left(X_{2n+1:2n+1}\right)|X_{2n+1:2n+1}\leq
t\right]}$ $\displaystyle\leq$
$\displaystyle\frac{\mathbb{E}\left[f\left(X_{2n-1:2n-1}\right)|X_{2n-1:2n-1}\leq
t\right]}{\mathbb{E}\left[f\left(X_{2n+1:2n+1}\right)|X_{2n+1:2n+1}\leq
t\right]}\leq 1.$
Since the past extropy of a random variable is non-positive, we have
$J\left({}_{t}X_{n:n}\right)\geq J\left({}_{t}X_{n+1:n+1}\right)$ and the
proof is completed. ∎
###### Example 3.
Let $X$ be a random variable distributed as a two-parameter Weibull, $X\sim
W2(\alpha,\lambda)$, i.e. $f(x)=\lambda\alpha
x^{\alpha-1}\exp\left(-\lambda x^{\alpha}\right)$. It can be shown that for
$\alpha>1$ this pdf has a maximum point
$T=\left(\frac{\alpha-1}{\lambda\alpha}\right)^{\frac{1}{\alpha}}$. Let us
consider the case in which $X$ has a Weibull distribution with parameters
$\alpha=2$ and $\lambda=1$, $X\sim W2(2,1)$, so that $T=\frac{\sqrt{2}}{2}$.
The hypotheses of Theorem 3.1 are satisfied for $t=0.5<T=\frac{\sqrt{2}}{2}$.
Figure 1 shows that $J\left({}_{0.5}X_{n:n}\right)$ is decreasing in
$n\in\{1,2,\dots,10\}$. Moreover, the result of Theorem 3.1 does not hold
for the smallest order statistic, as shown in Figure 2.
Figure 1: $J\left({}_{0.5}X_{n:n}\right)$ of a $W2(2,1)$ for $n=1,2,\dots,10$
Figure 2: $J\left({}_{0.5}X_{1:n}\right)$ of a $W2(2,1)$ for $n=1,2,\dots,10$
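The monotonicity in Example 3 can also be reproduced numerically from the formula for $J\left({}_{t}X_{n:n}\right)$; a sketch (ours, with an illustrative helper name) follows.

```python
import math

# Theorem 3.1 check for X ~ W2(2,1) at t = 0.5 < T = sqrt(2)/2:
# J(tX_{n:n}) = -n^2/(2 F(t)^{2n}) * integral_0^t f(x)^2 F(x)^{2n-2} dx
# should be decreasing in n. Trapezoidal rule for the integral.
f = lambda x: 2 * x * math.exp(-x ** 2)
F = lambda x: 1 - math.exp(-x ** 2)

def past_extropy_max(n, t, m=50_000):
    h = t / m
    ys = [f(i * h) ** 2 * F(i * h) ** (2 * n - 2) for i in range(m + 1)]
    integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return -n ** 2 * integral / (2 * F(t) ** (2 * n))

vals = [past_extropy_max(n, 0.5) for n in range(1, 11)]
print(all(a >= b for a, b in zip(vals, vals[1:])))  # True: decreasing in n
```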
In the case in which $X$ has an increasing pdf on $[0,T]$ with $T>t$ we give a
lower bound for $J\left({}_{t}X\right)$.
###### Theorem 3.2.
If $X$ has an increasing pdf $f$ on $[0,T]$, with $T>t$, then
$J\left({}_{t}X\right)\geq-\frac{\tau(t)}{2}$.
###### Proof.
From the definition we get
$\displaystyle J\left({}_{t}X\right)$ $\displaystyle=$
$\displaystyle-\frac{1}{2F^{2}(t)}\int_{0}^{t}f^{2}(x)\mathrm{d}x$
$\displaystyle=$
$\displaystyle\frac{-f(t)}{2F(t)}+\frac{1}{2F^{2}(t)}\int_{0}^{t}F(x)f^{\prime}(x)\mathrm{d}x$
$\displaystyle\geq$ $\displaystyle-\frac{\tau(t)}{2}.$
∎
###### Example 4.
Let $X\sim W2(2,1)$, as in Example 3, so its pdf is increasing on $[0,T]$
with $T=\frac{\sqrt{2}}{2}$. The hypotheses of Theorem 3.2 are satisfied for
$t<T=\frac{\sqrt{2}}{2}$. Figure 3 shows that the function
$-\frac{\tau(t)}{2}$ (in red) is a lower bound for the past extropy (in
black). We remark that the theorem gives information only for $t\in[0,T]$;
in fact, for larger values of $t$ the function $-\frac{\tau(t)}{2}$ may no
longer be a lower bound, as shown in Figure 3.
Figure 3: $J\left({}_{t}X\right)$ (in black) and $-\frac{\tau(t)}{2}$ (in red)
of a $W2(2,1)$
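The bound of Theorem 3.2 can be checked numerically for this Weibull example; the sketch below (ours) evaluates $J\left({}_{t}X\right)$ and $-\tau(t)/2$ at a few points of $(0,T]$.

```python
import math

# For X ~ W2(2,1): check J(tX) >= -tau(t)/2 on (0, T], T = sqrt(2)/2,
# where tau(t) = f(t)/F(t) is the reversed failure rate.
f = lambda x: 2 * x * math.exp(-x ** 2)
F = lambda x: 1 - math.exp(-x ** 2)

def past_extropy(t, m=20_000):
    h = t / m
    ys = [f(i * h) ** 2 for i in range(m + 1)]
    integral = h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))
    return -integral / (2 * F(t) ** 2)

T = math.sqrt(2) / 2
ok = all(past_extropy(t) >= -f(t) / (2 * F(t)) for t in (0.1, 0.3, 0.5, T))
print(ok)  # True: the bound holds on (0, T]
```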
Qiu (2017) and Qiu and Jia (2018) showed that the extropy of the $i$-th order
statistic and the residual extropy of the $i$-th order statistic can
characterize the underlying distribution uniquely. In the following theorem,
whose proof requires the next lemma, we show that the past extropy of the
largest order statistic can uniquely characterize the underlying
distribution.
###### Lemma 3.1.
Let $X$ and $Y$ be non-negative random variables such that
$J\left(X_{n:n}\right)=J\left(Y_{n:n}\right)$, $\forall n\geq 1$. Then
$X\overset{d}{=}Y$.
###### Proof.
From the definition of the extropy,
$J\left(X_{n:n}\right)=J\left(Y_{n:n}\right)$ holds if and only if
$\int_{0}^{+\infty}F_{X}^{2n-2}(x)f_{X}^{2}(x)\mathrm{d}x=\int_{0}^{+\infty}F_{Y}^{2n-2}(x)f_{Y}^{2}(x)\mathrm{d}x$
i.e. if and only if
$\int_{0}^{+\infty}F_{X}^{2n-2}(x)\tau_{X}(x)\mathrm{d}F^{2}_{X}(x)=\int_{0}^{+\infty}F_{Y}^{2n-2}(x)\tau_{Y}(x)\mathrm{d}F^{2}_{Y}(x).$
Putting $u=F^{2}_{X}(x)$ in the left side of the above equation and
$u=F^{2}_{Y}(x)$ in the right side we have
$\int_{0}^{1}u^{n-1}\tau_{X}\left(F_{X}^{-1}(\sqrt{u})\right)\mathrm{d}u=\int_{0}^{1}u^{n-1}\tau_{Y}\left(F_{Y}^{-1}(\sqrt{u})\right)\mathrm{d}u.$
that is equivalent to
$\int_{0}^{1}u^{n-1}\left[\tau_{X}\left(F_{X}^{-1}(\sqrt{u})\right)-\tau_{Y}\left(F_{Y}^{-1}(\sqrt{u})\right)\right]\mathrm{d}u=0\mbox{
}\forall n\geq 1.$
Then from Lemma 3.1 of Qiu (2017) we get
$\tau_{X}\left(F_{X}^{-1}(\sqrt{u})\right)=\tau_{Y}\left(F_{Y}^{-1}(\sqrt{u})\right)$
for all $u\in(0,1)$. By taking $\sqrt{u}=v$ we have
$\tau_{X}\left(F_{X}^{-1}(v)\right)=\tau_{Y}\left(F_{Y}^{-1}(v)\right)$ and so
$f_{X}\left(F_{X}^{-1}(v)\right)=f_{Y}\left(F_{Y}^{-1}(v)\right)$ for all
$v\in(0,1)$. This is equivalent to
$(F_{X}^{-1})^{\prime}(v)=(F_{Y}^{-1})^{\prime}(v)$ i.e.
$F_{X}^{-1}(v)=F_{Y}^{-1}(v)+C$, for all $v\in(0,1)$ with $C$ constant. But
for $v=0$ we have $F_{X}^{-1}(0)=F_{Y}^{-1}(0)=0$ and so $C=0$. ∎
###### Theorem 3.3.
Let $X$ and $Y$ be two non-negative random variables with cumulative
distribution functions $F(x)$ and $G(x)$, respectively. Then $F$ and $G$
belong to the same family of distributions if and only if for $t\geq 0$,
$n\geq 1$,
$J\left({}_{t}X_{n:n}\right)=J\left({}_{t}Y_{n:n}\right).$
###### Proof.
It suffices to prove the sufficiency. $J\left({}_{t}X_{n:n}\right)$ is the
past extropy of $X_{n:n}$, but it is also the extropy of the variable
${}_{t}X_{n:n}$. So through Lemma 3.1 we get ${}_{t}X\overset{d}{=}\mbox{
}_{t}Y$. Then $\frac{F(t-x)}{F(t)}=\frac{G(t-x)}{G(t)}$, for $x\in(0,t)$. If
there exists $t^{\prime}$ such that $F(t^{\prime})\neq G(t^{\prime})$, then
on $(0,t^{\prime})$ we have $F(x)=\alpha G(x)$ with $\alpha\neq 1$. But for
all $t>t^{\prime}$ there exists $x\in(0,t)$ such that $t-x=t^{\prime}$, so
$F(t)\neq G(t)$ and, as in the previous step, $F(x)=\alpha G(x)$ for
$x\in(0,t)$. Letting $t$ tend to $+\infty$ we obtain a contradiction, because
$F$ and $G$ are both distribution functions and their limit is 1. ∎
## 4 Conclusion
In this paper we studied a measure of uncertainty, the past extropy, which is
the extropy of the inactivity time. It is relevant when an observation finds
the system down and we want to investigate how much time has elapsed since
its failure. Moreover, we studied some connections with the largest order
statistic.
## 5 Acknowledgement
Francesco Buono is partially supported by the GNAMPA research group of INdAM
(Istituto Nazionale di Alta Matematica) and MIUR-PRIN 2017, Project
”Stochastic Models for Complex Systems” (No. 2017 JFFHSH).
On behalf of all authors, the corresponding author states that there is no
conflict of interest.
## References
* [1] Cover, T. M. and Thomas, J. A., 2006. Elements of Information Theory, (2nd edn). Wiley, New York.
* [2] Di Crescenzo, A., Longobardi, M., 2002. Entropy-based measure of uncertainty in past lifetime distributions, Journal of Applied Probability, 39, 434–440.
* [3] Ebrahimi, N., 1996. How to measure uncertainty in the residual life time distribution. Sankhya: The Indian Journal of Statistics, Series A, 58, 48–56.
* [4] Krishnan, A. S., Sunoj S. M., Nair N. U., 2020. Some reliability properties of extropy for residual and past lifetime random variables. Journal of the Korean Statistical Society. https://doi.org/10.1007/s42952-019-00023-x.
* [5] Lad, F., Sanfilippo, G., Agrò, G., 2015. Extropy: complementary dual of entropy. Statistical Science 30, 40–58.
* [6] Qiu, G., 2017. The extropy of order statistics and record values. Statistics & Probability Letters, 120, 52–60.
* [7] Qiu, G., Jia, K., 2018. The residual extropy of order statistics. Statistics & Probability Letters, 133, 15–22.
* [8] Shaked, M., Shanthikumar, J. G., 2007. Stochastic orders. Springer Science & Business Media.
* [9] Shannon, C. E., 1948. A mathematical theory of communication. Bell System Technical Journal, 27, 379–423.
# Wave packet dynamics and edge transport in anomalous Floquet topological
phases
Miguel F. Martínez Department of Physics, KTH Royal Institute of Technology,
106 91, Stockholm, Sweden TCM Group, Cavendish Laboratory, University of
Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE, United Kingdom F. Nur Ünal
TCM Group, Cavendish Laboratory, University of Cambridge, JJ
Thomson Avenue, Cambridge CB3 0HE, United Kingdom
###### Abstract
The possibility of attaining chiral edge modes under periodic driving has
spurred tremendous attention both theoretically and experimentally, especially
in light of anomalous Floquet topological phases that feature vanishing Chern
numbers unlike any static counterpart. We here consider a periodically
modulated honeycomb lattice and experimentally relevant driving protocols,
which allows us to obtain edge modes of various character in a simple model.
We calculate the phase diagram over a wide range of parameters and recover an
anomalous topological phase with quasienergy gaps harbouring edge states with
opposite chirality. Motivated by the advances in single-site control in
optical lattices, we investigate wave packet dynamics localized at the edges
in distinct Floquet topological regimes that cannot be achieved in
equilibrium. We analyse transport properties in edge modes which originate
from the same bands, but with support at different quasienergies and
sublattices as well as possessing different chiralities. We find that an
anomalous Floquet topological phase can in general generate more robust chiral
edge motion than a Haldane phase, allowing for more effective loading of the
wave packet into edge channels. Our results demonstrate that the rich
interplay of wave packet dynamics and topological edge states can serve as a
versatile tool in ultracold quantum gases in optical lattices.
## I Introduction
Topologically protected phenomena entail a prominent research direction in
condensed matter physics Hasan and Kane (2010); Qi and Zhang (2011). A wide
range of novel phases arising from the interplay of topology and symmetries
have been theorised and observed, with intriguing features being unearthed
regularly especially in highly nontrivial many-body or out-of-equilibrium
settings Thouless _et al._ (1982); Bernevig _et al._ (2006); Fang _et al._
(2012); Kane and Mele (2005); Slager _et al._ (2013); Kruthoff _et al._
(2017); Po _et al._ (2017); Bradlyn _et al._ (2017); Jotzu _et al._ (2014);
Aidelsburger _et al._ (2015); Tran _et al._ (2017); Asteria _et al._
(2019); Kemp _et al._ (2022); Tan _et al._ (2019); Xu _et al._ (2022);
Jangjan _et al._ (2022); Jangjan and Hosseini (2020). Regarding the latter,
rapid developments have extended topological characterisations to periodically
driven Floquet systems Roy and Harper (2017); Oka and Aoki (2009); Kitagawa
_et al._ (2010); Rudner _et al._ (2013); Ünal _et al._ (2019a);
Wintersperger _et al._ (2020) and dynamic quench settings with new invariants
Wang _et al._ (2017); Tarnowski _et al._ (2019); Ünal _et al._ (2019b); Hu
and Zhao (2020), even reaching to exotic multi-gap topologies with non-Abelian
braiding properties Ünal _et al._ (2020); Slager _et al._ (2022); Ahn _et
al._ (2019); Bouhon _et al._ (2020a, b); Jiang _et al._ (2021a, b). From an
experimental point of view, Floquet engineering Goldman and Dalibard (2014);
Eckardt (2017); Bukov _et al._ (2015) has been established as a powerful tool
to realise paradigmatic models in periodically driven non-equilibrium quantum
matter in platforms such as ultracold atoms Cooper _et al._ (2019); Wang _et
al._ (2018); Račiūnas _et al._ (2018); Wintersperger _et al._ (2020); Reichl
and Mueller (2014) and photonic lattices Maczewsky _et al._ (2017); Mukherjee
_et al._ (2017), allowing not only for a high degree of control and efficient
quantum simulation but also for the exploration of new regimes unattainable in
equilibrium.
In a Floquet system, where energy is not a conserved quantity due to broken
continuous time-translation invariance, one can adopt a description in terms
of a periodic quasienergy since discrete time translations are still present.
An effective quasienergy spectrum and the topological information are encoded
by the time evolution over a period, $T$. Upon evaluating stroboscopically
Eckardt (2017), the quasienergies can be defined as the phase eigenvalues of the
time-evolution operator, namely as $\varepsilon_{n}T\in[-\pi,\pi)$ for the $n$
bands, modulo $2\pi$, in a Floquet Brillouin zone (FBZ). The fact that
quasienergy bands are phases forming a circle induces one additional,
anomalous, $\pi$-gap connecting the bands through the FBZ edge. The
periodicity of the Floquet spectrum has paved the way for novel phases that
truly arise from this out-of-equilibrium nature such as helical edge states
crossing across the FBZ, anomalous Floquet Anderson insulators and anomalous
Dirac string phases Budich _et al._ (2017); Titum _et al._ (2016); Slager
_et al._ (2022).
Most interestingly, the possibility to obtain anomalous edge states in the
FBZ-edge gap renders the equilibrium topological classification in terms of
the Chern number, $C_{n}$, in two dimensions inept to characterize driven
systems Kitagawa _et al._ (2010); Rudner _et al._ (2013). Rather than
invariants of individual bands, one needs to consider winding numbers,
$W_{g}$, associated with gaps centered around, e.g. $g=0$ and $g=\pi$ for two
levels. Consequently, the anomalous Floquet topological phase has attracted
great attention, characterised by the winding number combination
$[W_{0},W_{\pi}]=[1,1]$ harbouring edge states in both gaps despite a
vanishing Chern number Wintersperger _et al._ (2020); Rudner _et al._
(2013). Experimentally, individual edge modes have been probed in photonic
lattices Maczewsky _et al._ (2017); Mukherjee _et al._ (2017). Advances in
optical lattices have allowed for directly measuring the topological
invariants Wintersperger _et al._ (2020); Tarnowski _et al._ (2019); Jotzu
_et al._ (2014) and in particular distinguishing the quasienergy gaps to
unambiguously assign the observed winding numbers to individual gaps. However,
coherent edge dynamics and transport properties in different quasienergy gaps,
particularly with respect to each other and equilibrium phases, remain an open
question. Recent advances in single-site accessibility Gross and Bakr (2021)
in optical lattices now offer new possibilities for the creation of sharp
edges and probing topological edge modes by using localised wave packets to
investigate unique Floquet topological features as well as the effect of
different quasienergy gaps associated with different branch cuts.
In this work, we consider a periodically driven two-band model in two
dimensions (2D) that is also experimentally relevant and analyse transport
properties in different quasienergy gaps focusing on the distinct Floquet
nature Ünal _et al._ (2019a); Wintersperger _et al._ (2020). We calculate
the phase diagram over a wide range of parameters and contrast different
driving protocols. Going beyond the originally introduced anomalous $[1,1]$
phase, this allows us to reach an unexplored anomalous Floquet topological
phase in which a pair of edge states with opposite chiralities is induced in
different gaps ($[W_{0},W_{\pi}]=[\pm 1,\mp 1]$), supported by the same two
bands with finite Chern number, which cannot be obtained in
equilibrium. We investigate the wave packet dynamics in various topological,
and in particular anomalous, phases. We study populating edge modes at
different quasienergies and with different winding numbers by applying kicks,
controlling the shape of the wave packet, and examine their robustness and
efficiency of preparation. Addressing chiral edge dynamics in different phases
where only a single gap or both gaps harbour edge modes, we show that an
anomalous Floquet topological phase can give rise to much more robust edge
transport than equilibrium Chern insulating phases. We further analyze the
effect of the Floquet gauge and the sublattice character of the edge states.
## II The Model
We consider a honeycomb lattice in two dimensions with a Hamiltonian given by
$\hat{H}=-\sum_{\langle
i,j\rangle}J\hat{a}^{\dagger}_{i}\hat{a}^{\phantom{{\dagger}}}_{j}+\dfrac{\Delta}{2}\sum_{i\in
A}\hat{a}^{\dagger}_{i}\hat{a}^{\phantom{{\dagger}}}_{i}-\frac{\Delta}{2}\sum_{i\in
B}\hat{a}^{\dagger}_{i}\hat{a}^{\phantom{{\dagger}}}_{i},$ (1)
where $\hat{a}^{\dagger}_{i}(\hat{a}^{\phantom{{\dagger}}}_{i})$ creates
(annihilates) a particle on lattice site $i$, with a nearest-neighbor
tunnelling strength $J$ and an energy offset $\Delta$ between the two
sublattices $A$ and $B$. We will introduce the periodic driving via the
modulation of the hopping amplitudes $J_{m}$ for $m=1,2,3$ along the three
nearest-neighbor vectors $\bm{d}_{1}=(0,-1)a,\,\bm{d}_{2}=(-1/2,\sqrt{3}/2)a$
and $\bm{d}_{3}=(1/2,\sqrt{3}/2)a$, where $a$ is the nearest-neighbor
distance. The Hamiltonian at a given time instance is diagonal with respect to
quasimomentum k, and hence can be written as
$\hat{H}(\textbf{{k}},t)=-\sum_{m=1}^{3}J_{m}(t)\big{(}\cos(\bm{d}_{m}\cdot\textbf{{k}})\sigma_{x}+\sin(\bm{d}_{m}\cdot\textbf{{k}})\sigma_{y}\big{)}+\frac{\Delta}{2}\sigma_{z},$
(2)
where $\boldsymbol{\sigma}$ are the Pauli matrices.
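As a concrete illustration, the momentum-space Hamiltonian of Eq. (2) is straightforward to assemble numerically. The following is a minimal sketch (our own naming; lattice constant set to $a=1$), not code from the original work:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# nearest-neighbour vectors d_1, d_2, d_3 (a = 1)
d = np.array([[0.0, -1.0],
              [-0.5, np.sqrt(3) / 2],
              [0.5, np.sqrt(3) / 2]])

def bloch_hamiltonian(k, J_m, Delta):
    """Two-band Bloch Hamiltonian of Eq. (2) for tunnelling amplitudes
    J_m = (J_1, J_2, J_3) and sublattice offset Delta, at quasimomentum k."""
    H = 0.5 * Delta * sz
    for Jm, dm in zip(J_m, d):
        phase = np.dot(dm, k)
        H += -Jm * (np.cos(phase) * sx + np.sin(phase) * sy)
    return H
```

With all $J_m$ equal and $\Delta=0$ this reduces to the familiar graphene Bloch Hamiltonian, gapless at the $K$ points.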
The driving protocols that we implement are of step-wise nature Kitagawa _et
al._ (2010); Rudner _et al._ (2013), which not only offer conceptual
simplicity for our theoretical characterisation but are also experimentally
relevant, as they have recently been implemented in optical lattices
Wintersperger _et al._ (2020) with a smoothed modulation Quelle _et al._
(2017). Namely, one period of the drive is divided into three equal steps of
length $T/3$. For the first driving scheme, the tunnelling is allowed
cyclically only along one of the three directions during each stage with
amplitude $J$ Rudner _et al._ (2013); Ünal _et al._ (2019a). Secondly, we
employ another protocol, where during the $m$-th step the tunneling
$J_{m}=\lambda J$ is enhanced by a factor of $\lambda$, while the tunneling
along the other two directions is kept fixed at $J$ Kitagawa _et al._
(2010). The first drive can be seen as a limiting case of the second one for
$\lambda\rightarrow\infty,J\rightarrow 0$, while keeping $\lambda J$ fixed. We
will illustrate in detail the difference between the two schemes in the
following.
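The two step-wise schemes can be encoded in a small helper returning the instantaneous tunnelling amplitudes; this is a sketch under our own conventions (step boundaries at multiples of $T/3$, our own function and argument names), not the authors' implementation:

```python
def step_amplitudes(t, T, J, protocol="switch", lam=3.0):
    """Tunnelling amplitudes (J_1, J_2, J_3) at time t within a period T.
    protocol="switch":  only J_m = J is on during step m (first scheme);
    protocol="enhance": J_m = lam * J during step m, the other two stay at J."""
    m = int(3 * (t % T) / T)          # current step index: 0, 1 or 2
    if protocol == "switch":
        J_m = [0.0, 0.0, 0.0]
        J_m[m] = J
    else:
        J_m = [J, J, J]
        J_m[m] = lam * J
    return tuple(J_m)
```

Setting `protocol="enhance"` with `lam` large and `J` small reproduces the first scheme in the limit described above.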
Since during each step of the driving cycle, the Hamiltonian (2) becomes time-
independent, $\hat{H}_{m}(\textbf{{k}})$, for both driving protocols, the
time-evolution operator at the end of one period can be written as
$\hat{\mathcal{U}}(\textbf{{k}},T)=e^{-i\hat{H}_{3}(\textbf{{k}})\frac{T}{3}}e^{-i\hat{H}_{2}(\textbf{{k}})\frac{T}{3}}e^{-i\hat{H}_{1}(\textbf{{k}})\frac{T}{3}}=e^{-i\hat{\mathcal{H}}_{F}(\textbf{{k}})T},$
(3)
where we set $\hbar=1$ throughout this paper. This stroboscopic evolution is
captured by the Floquet Hamiltonian $\hat{\mathcal{H}}_{F}(\textbf{{k}})$,
defining the quasienergy spectrum through
$\hat{\mathcal{H}}_{F}(\textbf{{k}})|\phi_{n}(\textbf{{k}})\rangle=\varepsilon_{n}|\phi_{n}(\textbf{{k}})\rangle$.
The Berry curvature and the Chern numbers of these quasienergy bands are
calculated using the Floquet eigenstates $\phi_{n}(\textbf{{k}})$. These
topological invariants in the 2D momentum-space BZ have, however, been shown
to be insufficient to capture the Floquet topology Rudner _et al._ (2013).
One instead needs to consider also the time evolution within the period,
$\hat{\mathcal{U}}(\textbf{{k}},t)$.
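A minimal numerical sketch of the stroboscopic evolution of Eq. (3) and the resulting quasienergies, assuming the three step Hamiltonians are supplied as Hermitian matrices (function names are ours, not the paper's):

```python
import numpy as np

def step_propagator(H, dt):
    """Exact propagator e^{-i H dt} of a Hermitian step Hamiltonian (hbar = 1),
    built from its eigendecomposition H = V diag(evals) V^dagger."""
    evals, evecs = np.linalg.eigh(H)
    return (evecs * np.exp(-1j * evals * dt)) @ evecs.conj().T

def floquet_operator(H_steps, T):
    """Stroboscopic evolution over one period, Eq. (3):
    U(T) = e^{-i H_3 T/3} e^{-i H_2 T/3} e^{-i H_1 T/3}."""
    U = np.eye(H_steps[0].shape[0], dtype=complex)
    for H in H_steps:                 # applied in the order H_1, H_2, H_3
        U = step_propagator(H, T / 3) @ U
    return U

def quasienergies(U, T):
    """Quasienergies from the eigenphases of U(T), with eps*T in (-pi, pi]."""
    return np.sort(np.angle(np.linalg.eigvals(U))) / T
```

Diagonalising $\hat{\mathcal{U}}(\textbf{k},T)$ on a grid of quasimomenta in this way yields the quasienergy bands and the Floquet eigenstates used below.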
Figure 1: Phase diagrams of the step-wise driven honeycomb lattice with
corresponding winding numbers $[W_{0},W_{\pi}]$. (a) For the first driving
protocol with switching tunnelling amplitudes along the three directions
on/off completely, for modulation frequency $\omega$ and sublattice offset
$\Delta$. (b) The second protocol where tunnelling amplitudes are cyclically
enhanced by a factor of $\lambda$, here $\Delta=2J$.
In Floquet settings, the two bands can close and re-open in two distinct ways;
in the quasienergy gaps at zero but also at $\pi$, corresponding to a change
in the branch cut for defining $\hat{\mathcal{H}}_{F}(\textbf{{k}})$, as
opposed to one possibility in a static system. This quasienergy gap labelling
originating from the time-periodicity requires a topological characterisation
that employs winding numbers defined in the $(k_{x},k_{y},t)$-space. Each gap
closing induces a transfer of Berry curvature between the bands, leaving
chiral edge states behind in their respective gaps characterized by finite
winding numbers. When the transitions in zero and $\pi$ gaps trivialise each
other, we arrive at an anomalous Floquet topological phase with a vanishing
Chern number Rudner _et al._ (2013). The latter can in general be expressed
as the difference of the winding numbers (net number of edge states in a gap
factoring in their chiralities) above and below a band,
$C_{n}=W_{n,above}-W_{n,below}$. In the Floquet case, the extra $\pi$-gap
renders the spectrum unbounded and, hence, offers more interesting
possibilities.
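The bulk-boundary relation $C_{n}=W_{n,above}-W_{n,below}$ can be checked directly for the phases discussed here. A trivial sketch, using the convention that for the lower band of a two-band Floquet spectrum the gap above is the zero-gap and the gap below is the $\pi$-gap:

```python
def chern_from_windings(w_above, w_below):
    """Chern number of a Floquet band from the winding numbers of the
    quasienergy gaps above and below it: C = W_above - W_below."""
    return w_above - w_below
```

For the anomalous $[W_0, W_\pi] = [1,1]$ phase the lower band gives `chern_from_windings(1, 1) == 0`, edge states in both gaps despite trivial Chern numbers; for $[-1,1]$ it gives $C = -2$, consistent with the finite Chern numbers discussed below.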
## III Phase diagrams
Fig. 1 demonstrates the phase diagrams that we numerically calculate in our
driven honeycomb models for a representative parameter range. The winding
numbers can in general be computed using the time evolution operator at every
point in the $(2+1)$D parameter space Rudner _et al._ (2013), although this
may prove computationally and experimentally demanding in most cases. Instead,
we here employ an approach based on tracking the change of the winding numbers
in each gap as introduced in Ref. Ünal _et al._ (2019a) and successfully
implemented in Munich Wintersperger _et al._ (2020) to measure anomalous
winding numbers. In particular, in the high-frequency regime we utilise the
equilibrium topological classification based on Chern numbers with a trivial
winding number in the $\pi$-gap Ünal _et al._ (2019a); Nathan and Rudner
(2015). In the case of the second driving protocol, $\lambda=1$ automatically
satisfies the static definition. We start from the topological invariants
that we calculate for these initial parameters at high frequencies for both
driving protocols. As the model parameters are tuned, we compute the winding
numbers, which change via band touching points in each gap, by evaluating the
charge of these topological singularities in a gap-specific way (see
Supplementary Material for details). The band singularities (hence, edge
modes) at the FBZ edge involve a change in the branch cut by $\pi$. We further
confirm these winding numbers by computing the Hopf invariant at
representative points in a given phase, which has been shown to equal the
winding numbers Ünal _et al._ (2019b).
Figure 2: Quasienergy spectra for a ribbon with armchair (upper panel) and
zigzag (lower panel) terminations, with crystal momentum $k_{\parallel}$ along
the periodic direction. In the $[1,1]$ phase, here given for the first driving
protocol introduced in the main text, zero- and $\pi$-gap edge states occur
well separated in momentum at the fine-tuned point $\omega=4J/3,\Delta=0$ (a,d)
and $\omega=1.5J,\Delta=0.5J$ (b,e). (c,f) In the $[-1,1]$ phase with the
second driving scheme, the two edge states appear closer in momentum, for
$\omega=4.5J$, $\Delta=2J$, $\lambda=3$.
For the first driving protocol where the tunneling amplitudes are cyclically
turned on and off completely, the relevant tuning parameters are the
sublattice offset and frequency. Indeed, this simple model illustrates a rich
phase diagram including the previously predicted and observed anomalous
Floquet topological phase $([1,1])$ (see Fig. 1a), which can be understood by
considering the limit of $\Delta=0$ and $\omega=4J/3$. For particles starting
from one of the sublattices, this fine-tuned point corresponds to a complete
population transfer to the other sublattice at the end of each step with
tunnelling allowed for a time of $JT/3=\pi/2$. As we follow the driving cycle,
it can be easily seen that particles in the bulk remain localized and only
circularly move around each hexagon, ending up in alternating sublattice
flavor at the end of each period and, hence, mixing the pseudospin character.
However, in a finite system, particles move along the edge in a direction set
by the chirality of the drive, corresponding to unit winding numbers of same
sign in both gaps despite the trivial invariant of the bulk bands.
The second driving scheme on the other hand provides one more knob to tune,
namely the driving amplitude $\lambda$. This allows for reaching more exotic
phases as we present in Fig. 1b as a function of $\lambda$ and $\omega$ for a
fixed sublattice offset $\Delta=2J$. We identify a phase with winding numbers
$[W_{0},W_{\pi}]=[-1,1]$, which we numerically verify to be inaccessible using
the first driving protocol owing to the smaller number of tuning parameters in
that case. Distinct from the previously introduced anomalous phase, this is a
hitherto-unexplored anomalous Floquet topological phase that harbours edge
states in both the zero- and the anomalous $\pi$-gap with opposite chiralities,
supported by the same two bands. Hence, the lower (upper) band carries a Chern
number $C_{1}=-2\;(C_{2}=+2)$. This anomalous phase is unique to the driven
system, since in equilibrium there is only one gap where the topological
transition can occur between the two bands (i.e. the zero-gap). This gap could
in principle host two edge modes of opposite chiralities also in the static
case, provided that they occur at two different quasimomenta. This would
however correspond to vanishing Chern numbers of the two bands, making the
anomalous $[-1,1]$ phase emerge exclusively in the Floquet setting.
Interestingly, these subtle differences also reflect on the edge transport and
wave-packet dynamics as will be illustrated subsequently.
## IV Anomalous edge states
In order to investigate transport properties and chiral edge dynamics in
different Floquet (anomalous) topological phases, we consider a ribbon
geometry extended along the $x$-direction, with $N_{y}$ layers along the
finite $y$-direction. We present the edge spectra in Fig. 2 for both armchair
and zigzag terminations. At the fine-tuned point within the $[1,1]$ phase of
the first driving protocol, the Floquet spectrum features completely flat
bands corresponding to the localised bulk motion with extended edge modes
crossing the entire FBZ, see Fig. 2(a,d). Although the armchair termination
folds these two edge states to the same point, the zigzag spectrum reveals
that the edge modes are well separated in momentum: While the zero-gap states
form at the $K$-point with finite momentum $\pi/\sqrt{3}$ in units of the
lattice constant, $\pi$-gap states appear at the $\Gamma$-point. We find that
this is true in general in the $[1,1]$ phase owing to the nature of band
inversions required in this phase, also away from the fine-tuned case as
illustrated in Fig. 2(b,e) with dispersive bands, as well as for the second
driving protocol. Since armchair and zigzag terminations correspond to
projecting along perpendicular directions in the momentum space,
$k\rightarrow-k$ symmetry is naturally broken in the presence of a finite
sublattice offset for the latter (see Fig. 2(b,e)). The edge modes nonetheless
still carry a large momentum difference. On the contrary, in the $[-1,1]$
phase in Fig. 2(c,f), the two edge modes with opposite chiralities appear
closer in momentum, which will bring about an important distinction for the
dynamics in the two anomalous phases.
Figure 3: Distribution of edge states at quasienergy zero (left panel) and
$\pi$ (right panel), marked by dots in the insets on their corresponding
quasienergy spectra, on a cylinder periodic along the $x$-direction. (a,b) In
the $[1,1]$ phase with the same parameters as Fig. 2b, both edge states are
counterclockwise and localised at the same sublattices. (c,d) In the $[-1,1]$
phase for the parameters given in Fig. 2c, while the $\pi$-gap state localises
on $A$ at the top edge, the zero energy state with the opposite chirality
localises on the $B$ sublattice.
The different chiralities of the edge states in the anomalous Floquet
topological phases also affect their sublattice character as shown in Fig. 3.
We here consider a cylinder geometry with periodic boundary conditions
connecting $N_{x}$ layers along the $x$-direction. While in the $[1,1]$ phase,
the counter-clockwise edge states in the zero-gap are localised on the $A(B)$
sublattice on the upper (lower) end of the cylinder, the $\pi$-gap states
support the same chiral motion localised on the same sublattice flavors. In
the case of $[W_{0},W_{\pi}]=[-1,1]$, however, the system harbours both
clockwise and counter-clockwise edge modes. Fig. 3(c,d) demonstrates that this
is facilitated by swapping of the sublattice character of the zero-gap states
along with their chirality. Hence, on the upper/lower end of the cylinder, the
two edge modes support opposite currents in different sublattices.
Interestingly, the layers where zero and $\pi$-gap currents have maximum
density, depicted by the size of the circles, are also different. We emphasize
that these two chiral modes do not hybridize despite being on the same edge as
they are well separated in quasienergy. We now analyse how the interplay
between the edge modes located at different momenta, quasienergies, and
sublattices, and with different chiralities, affects their transport properties, especially
with respect to each other and in different Floquet (anomalous) topological
phases.
Figure 4: Overlap of a wave packet with the Floquet eigenstates (bottom panel)
and its evolution (upper panel) at the edge sites (shaded two layers in the
inset) on a cylinder periodic along $x$ with $N_{x}=104$, $N_{y}=41$ layers.
The initial wave packets have $\sigma_{x}=1$, $\sigma_{y}=0.5$ as depicted in
the inset where the radius of the circles is proportional to the initial
probability at each site. (a, e) The $[1,1]$ phase at the fine-tuned point.
The wave packet initialised without a kick follows a clear chiral motion. For
the $[1,1]$ phase for the same parameters as in Fig. 2b, the wave packet is
initially given (b,f) a small kick $\textbf{{q}}=(-0.17,0)$ and (c,g) a
larger kick $\textbf{{q}}=(1.56,0)$, to target the Dirac cones at the $\pi$-
and zero-gaps, respectively. As revealed by the overlaps, applying appropriate
kicks allows us to mainly populate a given gap, where the dynamics evolve
similarly. (d, h) The $[-1,1]$ phase for the same parameters as Fig. 2c. We
now can largely populate both gaps simultaneously without a kick since the
opposite chirality edge modes appear closer in momentum. The wave packet
separates into two, giving rise to topologically protected currents travelling
in both directions at the edge, with the Chern number ($|C|=2$) corresponding
to the difference of them.
## V Wave packet dynamics in anomalous topological phases
Ultracold atomic systems have recently enjoyed rapid advances in single-site
accessibility and local control with techniques like quantum gas microscopes
and optical tweezers Gross and Bakr (2021); Kaufman and Ni (2021); Braun _et
al._ (2023); Nixon _et al._ (2023). By giving access to the creation of sharp
edges, these techniques offer new opportunities for investigating wave packet
dynamics localised at the edges of a topological system as a powerful tool in
optical lattices. The models that we implement
allow us to retrieve a rich phase diagram within a simple geometry, where we
compare edge transport in the conventional Haldane ($[1,0]$) Haldane (1988)
and the Haldane-like phases ($[0,1]$), which are gauge equivalent, with the
anomalous Floquet topological phase ($[1,1]$). Indeed, we show that edge state
population in a given gap can be mostly controlled by employing appropriate
kicks. Most importantly, we find that the $[1,1]$ phase allows for a more
robust chiral edge motion than the Haldane phases. We analyse the effects of
opposite chiralities with the opportunity provided by the second anomalous
phase ($[-1,1]$) and the simultaneous activation of both edge modes giving
rise to interesting chiral edge dynamics.
We consider a cylindrical geometry of $N_{x}$ by $N_{y}$ layers and a Gaussian
wave packet,
$\Psi(x,y)=\exp\left\{-(x-x_{0})^{2}/4\sigma^{2}_{x}-(y-y_{0})^{2}/4\sigma^{2}_{y}+iq_{x}x+iq_{y}y\right\}/\mathcal{N}$,
initially localized at the upper edge (see Fig. 4e inset), at $(x_{0},y_{0})$
with a spread given by $(\sigma_{x},\sigma_{y})$ and normalization
$\mathcal{N}$. We allow for an initial kick with momentum
$\textbf{{q}}=(q_{x},q_{y})$ which can be applied to control the overlap of
the wave packet with edge states that are projected from the specified momenta
onto the edge. We numerically calculate the evolution of the wave packet and
present the probability at the edge sites in the upper two layers. In the
$[1,1]$ phase, a wave packet without any kick ($\textbf{{q}}=0$) predominantly
overlaps with the $\pi$-gap states in Fig. 4(e) and (f), at and away from the
fine-tuned point, since these edge states form at the $\Gamma$ point (cf. Fig.
2). The corresponding wave packet dynamics display a clear chiral motion for
long times around the edge of the cylinder which is periodic along the
$x$-direction. While some of the probability naturally disperses into the bulk
for the dispersive $[1,1]$ phase (see Fig. 4b), at the fine-tuned point the
edge states are exclusively localised at $A$ sublattices. The probability at
$B$ sites, hence, follows a chiral motion around each hexagon with completely
flat bulk bands, giving rise to some probability dwelling around the initial
position at all times in Fig. 4a.
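The wave-packet initialisation and stroboscopic evolution used above can be sketched as follows, with the one-period Floquet operator `U_T` assumed to be given as a matrix in the real-space site basis (a simplified sketch with our own names, not the paper's code):

```python
import numpy as np

def gaussian_wave_packet(x, y, x0, y0, sigma_x, sigma_y, qx=0.0, qy=0.0):
    """Normalised Gaussian wave packet localised at (x0, y0) with spreads
    (sigma_x, sigma_y) and an optional momentum kick q = (qx, qy);
    x, y are flat arrays of site coordinates."""
    psi = np.exp(-(x - x0)**2 / (4 * sigma_x**2)
                 - (y - y0)**2 / (4 * sigma_y**2)
                 + 1j * (qx * x + qy * y))
    return psi / np.linalg.norm(psi)

def evolve_stroboscopically(psi0, U_T, n_periods):
    """Apply the one-period Floquet operator U_T repeatedly and record
    the site-resolved probability |psi|^2 after each period."""
    psi = psi0.copy()
    densities = [np.abs(psi)**2]
    for _ in range(n_periods):
        psi = U_T @ psi
        densities.append(np.abs(psi)**2)
    return np.array(densities)
```

Summing the recorded densities over the outermost layers then gives the edge-site probability plotted in the upper panels of Fig. 4.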
Due to the large separation of the zero and $\pi$-gap states in the $[1,1]$
phase, we can populate the edge modes at the zero-gap by applying an initial
kick of amount $|\textbf{{q}}|=|K|$ to the wave packet. Fig. 4(c,g) displays a
chiral motion, visibly indistinguishable from that of the unkicked wave packet,
which is now mainly supported by the zero-gap states. We
note that the kick direction (along the chiral edge mode or opposite to it) in
fact does not matter: it only contributes an overall phase, so the overlaps
with the eigenstates do not change and we observe the same dynamics. Moreover,
since these wave packets are well localised in position space, we still obtain
some spurious probability at the other gap, with or without a kick. This
originates from the extensive overlap of the edge modes from the two gaps in
position space (see Fig. 3(a,b)), despite the fact that they are well
separated in momentum. Nevertheless, in both cases, we can control the chiral
motion to be carried predominantly by the target quasienergy-gap modes. This
shows that although these edge modes form at different gaps and despite the
difference in the branch cuts by $\pi$ while defining them, the wave packets
can be controlled to populate mainly a given edge mode by applying appropriate
kicks depending on their localisation in quasimomentum rather than
quasienergy. We present wider wave packets in the Supplementary Material where
the population of a given gap can be further increased, which could be
experimentally realised for example by scanning a wider region with the laser
initialising the wave packets.
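The gap-resolved edge populations discussed here can be estimated from the overlaps of the wave packet with the Floquet eigenstates. A hedged sketch, in which the criterion for flagging edge-localised states (`edge_mask`, e.g. states with most weight in the outermost layers) and the quasienergy window `width` are our own assumptions rather than the paper's prescription:

```python
import numpy as np

def gap_populations(psi, evecs, eigenphases, edge_mask, width=0.5):
    """Fraction of |psi> carried by edge states near quasienergy zero and pi.
    evecs[:, n] are Floquet eigenstates with eigenphases eps_n * T
    in [-pi, pi); edge_mask is a boolean array flagging edge-localised states."""
    overlaps = np.abs(evecs.conj().T @ psi)**2
    near_zero = np.abs(eigenphases) < width
    near_pi = (np.pi - np.abs(eigenphases)) < width
    p_zero = overlaps[near_zero & edge_mask].sum()
    p_pi = overlaps[near_pi & edge_mask].sum()
    return p_zero, p_pi
```

Scanning these two fractions while varying the kick momentum reproduces the kind of gap-selective loading described above.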
On the other hand, in the $[-1,1]$ phase achievable with the second driving
protocol, both zero and $\pi$ gaps harbour edge modes but their separation in
momentum is less pronounced. Both edge modes can then be extensively populated
with the same wave packet as demonstrated in Fig. 4h without applying any
kick. Distinctively, since the winding numbers have opposite signs, we observe
that the wave packet separates into two (see Fig. 4d), with a topologically
protected current going in both directions at the edge of the cylinder, where
the $\pi$-modes travel slightly faster in accordance with the spectrum (c.f.
Fig. 2(c,f)). We emphasise that in this phase, the quasienergy bands carry a
Chern number $|C|=2$, which is visible in the difference of the chiral
currents at the edge of the system, rather than two topologically protected
channels travelling along the same direction that would be expected in a
static setting.
Figure 5: Edge dynamics of a wave packet (a) with an initial kick
$\textbf{{q}}=(\pi/\sqrt{3},0)$ in the $[1,0]$ phase for $\omega=2.2J$,
$\Delta=0$, and (b) without a kick in the $[0,1]$ phase for $\omega=2J$,
$\Delta=J$. Both dynamics in the Haldane phases are less pronounced, in
qualitative contrast with Fig. 4a-c. (c) Percentage of the total
probability carried by edge states in each gap, along the $\Delta=0$ cut of
Fig. 1a crossing different phases. The edge state population is much higher in
the $[1,1]$ (red shaded area) than in the $[1,0]$ phase (green shaded area),
for a wave packet with an initial kick as in (a) to target zero-gap states.
(d) Similarly, the total probability per gap along a diagonal cut on the phase
diagram crossing from $[1,1]$ to $[0,1]$, where the wave packet is given a
small initial momentum to target the $\pi$-gap states forming close to
$\Gamma$. A greater probability is supported by the two edge channels in the
$[1,1]$ phase along both cuts (c,d), quantifying that edge transport is
overall more robust in the anomalous phase.
The anomalous Floquet topological phases carry distinct features arising from
their out of equilibrium nature. First of all, both edge modes behave
effectively as one single channel in the $[1,1]$ phase due to their similar
chirality. It is, hence, instructive to explore whether this anomalous Floquet
topological phase (with two edge states of the same chirality at different
quasienergy gaps) gives rise to a different edge transport than the Haldane
phases with a single chiral mode, i.e. to contrast anomalous Floquet
topological phases with the equilibrium Chern insulating phases. We
demonstrate the wave packet dynamics in the $[1,0]$ and $[0,1]$ phases in Fig.
5a and b, respectively. We apply a kick ($\textbf{{q}}=K$) to the wave
packet to target the zero-gap states in $[1,0]$, as they form at the $K$ point,
and obtain chiral transport at the edge in both phases, as expected.
Most importantly, upon comparing with the Haldane phases as visible in their
color scales, we observe that the wave packet dynamics in the anomalous
Floquet phase $[1,1]$ is much more robust, with the edge separating more from
the bulk. To quantify this finding, we consider cuts on the phase diagram
(Fig. 1a) across different phases and evaluate the total overlap of a wave
packet with the edge states in Fig. 5(c,d). Targeting the zero-gap states with
a kick, we indeed find that the total percentage of the edge state population
is overall much higher in the $[1,1]$ than in the $[1,0]$ phase, where the
former comprises mostly zero-gap states that are further enhanced by the
$\pi$-gap contribution (see Fig. 5c). Similarly, Fig. 5d shows that the wave
packet, now initialised without a kick, overlaps with the $\pi$-gap states
much more in $[1,1]$ than $[0,1]$, where the zero-gap states further
strengthen edge dynamics in the former. We observe that this behaviour is in
general present across the phase diagram and also for the second driving
protocol.
Remarkably, this demonstrates that a wave packet can be prepared to populate
the edge modes more easily and efficiently in the anomalous phase than in
Haldane phases. The anomalous topological phase then supports a more
pronounced chiral edge motion with less leakage into the bulk. The more robust
edge transport in the anomalous Floquet phase stems from the fact that the
edge of the system accommodates two different channels rather than one,
increasing the relative weight of the edge channels compared to the bulk so
that they better separate spatially. Compared to the topological phases with
equilibrium counterparts, there is one more gap available to harbour edge
states in the anomalous Floquet topological phase, which makes stronger edge
transport possible, supported by more edge states, in the driven system. This
can also aid novel anomalous Floquet Anderson phases under disorder Titum _et
al._ (2016). We note that this qualitative analysis naturally depends on
system details such as the bulk gap width and the properties of the phase.
Indeed, we also obtain that the total edge population by a wave packet is
overall larger for the $[-1,1]$ phase than the Haldane-like phase (see
Supplementary Material). Although the trend is less pronounced due to
different chiralities and parameter dependencies, the effect is still clearly
discernible when the anomalous phase harbours two edge channels.
Figure 6: In the $[-1,1]$ phase with the parameters in Fig. 2c, the wave
packet is initialised without a kick on a smaller cylinder with $N_{x}=52$,
$N_{y}=41$. (a) Opposite-going currents circle the cylinder and cross without
disruption. (b) Two wave packets initialised at the edge pass through each
other without hybridising, as they are well separated in quasienergy.
The anomalous Floquet phase $[-1,1]$ manifests appealing features as well with
the edge transport of opposite chiralities supported by the same bands. Since
these edge modes are located at different quasienergy gaps and have support on
different sublattices, they do not hybridise, as demonstrated in Fig. 6a. When
we consider a cylinder of smaller size, we confirm that the opposite-going
currents at the edge circle the cylinder and pass through each other without
disruption for several cycles. Similarly, when we introduce two wave packets,
the crossing of the edge currents is clearly visible in Fig. 6b. We note that
introducing two wave packets is an experimentally promising route for probing
these anomalous topological dynamics with opposite chiralities, where
localised particles can be created around sharp circular edges punched
into a system with laser potentials.
Although the analysis carried out here is generic for any parameters in a
given phase, the particular details of different edge currents depend on
various aspects. For example, in some dynamics we observe tails developing and
travelling with different velocities. This generally arises when many momentum
states are populated due to the finite size of the wave packet, since the
velocity of a given edge mode can differ at different quasimomenta (cf. Fig. 2)
and different velocities can be associated with different edge modes. We
analyse some important details that can
give rise to varying edge dynamics in a real system next.
## VI Floquet gauge and sublattice dependence
For the wave packet dynamics presented in previous sections, we numerically
calculate the time evolution by employing the Floquet Hamiltonian given in
Eq.(3). While such stroboscopic definitions are useful, we verify that these
dynamic features are overall reproduced also by following the exact time
evolution under the two driving protocols, justifying the employment of an
effective description neglecting the micromotion Eckardt (2017). Although we
here set the initial time to zero ($t_{0}=0$) for simplicity, the time
evolution operator and hence the Floquet Hamiltonian in Eq.(3) actually depend
on this so-called Floquet gauge,
$\hat{\mathcal{H}}^{t_{0}}_{F}(\textbf{{k}})$, as well as the Floquet
eigenstates Eckardt (2017); Bukov _et al._ (2015).
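As a minimal numerical sketch of this Floquet-gauge dependence (not tied to the specific lattice model here: three random Hermitian matrices stand in for the step Hamiltonians of the step-wise drive, and SciPy's matrix exponential and logarithm are assumed), one can check that cyclically shifting the initial time changes $\hat{\mathcal{H}}^{t_{0}}_{F}$ while leaving the quasienergy spectrum untouched:

```python
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)

def random_hermitian(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

T = 1.0
steps = [random_hermitian(4) for _ in range(3)]   # stand-ins for H1, H2, H3

def floquet_hamiltonian(steps, T):
    """H_F^{t0} = (i/T) log U(t0 + T, t0) for a step-wise drive."""
    U = np.eye(4, dtype=complex)
    for H in steps:                        # earlier steps act first (rightmost)
        U = expm(-1j * H * T / len(steps)) @ U
    HF = 1j * logm(U) / T
    return (HF + HF.conj().T) / 2          # remove tiny anti-Hermitian noise

HF0 = floquet_hamiltonian(steps, T)                   # gauge t0 = 0:   H1->H2->H3
HF1 = floquet_hamiltonian(steps[1:] + steps[:1], T)   # gauge t0 = T/3: H2->H3->H1

# The Floquet Hamiltonian itself depends on the gauge choice t0 ...
gauge_dependent = not np.allclose(HF0, HF1)
# ... but the quasienergies do not: the two one-period evolution operators
# are related by a unitary conjugation, so their spectra coincide.
q0, q1 = np.linalg.eigvalsh(HF0), np.linalg.eigvalsh(HF1)
```

Here the gauge-shifted evolution operator is the unitary conjugate $e^{-iH_{1}T/3}\,U(T)\,e^{iH_{1}T/3}$, so `gauge_dependent` is `True` while `q0` and `q1` agree to numerical precision.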
Figure 7: (a) Illustration of the relation between the upper/lower edge
dynamics and the sublattice character by considering the fine-tuned point of
the first driving protocol. Starting from $t_{0}=0$ and following the chiral
motion with $J_{1}\rightarrow J_{2}\rightarrow J_{3}$ turned on cyclically, a
particle localised on $A$ sublattice at the upper edge moves along the edge
leftwards (red arrow). Starting from a different initial time of $t_{0}=T/3$
induces a bulk motion around the hexagon (purple). At the lower edge, the
sublattice characters giving rise to the edge and bulk motions are swapped,
where starting from $B$ at $t_{0}=0$ generates edge motion (blue). (b,c)
Dynamics on a cylinder in the $[-1,1]$ phase for the same parameters as in
Fig. 2c, where we localise the wave packet without a kick at the same $x$
position on the upper and lower edges respectively. We average over $20$
equally spaced initial times $t_{0}$ within a period, which produce similar
chiral motion.
The Floquet gauge can influence the wave packet dynamics in experiments
especially at lower frequencies where the details within a period become more
crucial. We elucidate this by considering the extreme limit of a wave packet
completely localised on an $A$ site at the upper edge in Fig. 7a and the
fine-tuned point in the first driving protocol, where we obtain complete
population transfer between sublattices at the end of each step of the drive.
When we start from $t_{0}=0$ with the tunnelling amplitudes turned on/off
cyclically as $J_{1}\rightarrow J_{2}\rightarrow J_{3}$, particles move along
the edge leftwards. However, if we consider another initial time, e.g.
$t_{0}=T/3$ within the same sequence (i.e. $J_{2}\rightarrow J_{3}\rightarrow
J_{1}$), particles would follow a chiral bulk motion around the hexagon. Away
from the fine-tuned point, eigenstates mix sublattice flavours at varying
degrees, but the Floquet gauge dependence remains. While this can give rise to
some modifications in the exact dynamics observed, it does not alter our general
conclusions. Furthermore, this is naturally less pronounced for more balanced
or wider wave packets.
The upper and lower edges of the system may give rise to different dynamics as
well, since the pseudospin character reflects onto these two physical edges
oppositely. Namely, the chiral edge motion observed at the upper edge when we
start from the sublattice $A$ as depicted in Fig. 7a can be obtained at the
bottom edge starting from the $B$ sublattice (and similarly for the bulk
localised states). When $\Delta=0$, the Hamiltonian can be cast into a
diagonal form in the pseudospin. The up spin at the upper edge corresponds to
the down spin at the lower edge, which can hence bring about different
dynamics depending on their population by the wave packet. While in general
for $\Delta\neq 0$ the spin characters are mixed, they are mixed in mirrored
ways at the upper and lower edges, i.e. some “up-down” mixture on
one end corresponds to the opposite “down-up” mixture on the other end.
Changing the sign of the offset $\Delta\rightarrow-\Delta$ then inverts the
upper/lower edge character, which we confirm in our numerics.
Overall, when we average over various dynamics, the upper and lower ends
of the system naturally give rise to the same behaviour, as the average samples
different initial times and sublattice characters. We consider a wave packet
localised in the same way at the upper and lower edges of a cylinder, at the
same $x$ positions in the geometry depicted in Fig. 7a. We numerically
calculate the chiral edge currents starting from different $t_{0}$ initial
times (for 20 data points equally spaced over one period) which correspond to
different eigenstates. We demonstrate the average wave packet dynamics at the
respective edges in Fig. 7(b,c). While the upper/lower edge dynamics might be
very different at a given $t_{0}$, we observe that the $\pi$-gap states
generate similar leftward/rightward currents at the top/bottom edges on
average, as expected, with the zero-gap modes supporting the opposite
chirality. We note that the negligible difference between the upper/lower
dynamics in the figure stems from the coarse sampling for averaging, where
more data points wash out these differences further.
## VII Conclusion
In this work, we focus on the topological edge transport and chiral motion at
the edge of a periodically driven system, particularly in anomalous Floquet
topological phases unique to these out-of-equilibrium settings Kitagawa _et
al._ (2010); Rudner _et al._ (2013). In light of recent developments in
single-site accessibility in optical lattices, which offers the possibility to
probe the dynamics locally at sharp edges, we investigate wave packet dynamics
as a versatile tool in exploring topologically protected Floquet edge modes.
We consider a honeycomb lattice under experimentally relevant and conceptually
insightful step-wise modulated driving protocols Ünal _et al._ (2019a);
Wintersperger _et al._ (2020), which allow us to retrieve a rich phase
diagram, involving the conventional Haldane (-like) phases that can be
realised in equilibrium as well as two different anomalous Floquet topological
phases with no static counterparts that harbour edge states of different
chiralities in both quasienergy gaps.
We show that, in the $[1,1]$ phase, the edge modes living in different
quasienergy gaps behave predominantly like a single channel with the same
chirality and sublattice character, where wave packets can be controlled to
mostly populate one of the gaps by applying appropriate kicks, as they are well
separated in momentum. While this behaviour is similar to the Haldane phases
with a topologically protected edge mode present only in a single gap Haldane
(1988), we find that the anomalous $[1,1]$ phase can generally support a more
robust edge transport with edge states separating more from the bulk. We
explain this with having two different channels coming from both gaps
accommodated at the edge of the system, which enhances the population of the
edge modes by the wave packets, providing an overall advantage over Haldane
phases with equilibrium counterparts.
We further show a hitherto-unexplored anomalous phase with opposite winding
numbers ($[-1,1]$) that can be achieved using the second driving protocol in a
honeycomb lattice, where the system features both clockwise and anti-clockwise
currents now carried mainly by different sublattices at a given edge. Since
the edge modes of opposite chiralities lie much closer in momentum, we
observe that wave packets localised at the edge largely populate both
channels, which generates two independent currents going in opposite ways that
can pass through one another without hybridising. We also analyse the
dependence of the dynamics on the details of the drive and the exact position
where the wave packets are localised. We discuss that these effects can be
controlled by changing the shape of the wave packet or averaging over
different initial times. Our results demonstrate that investigating Floquet
topological edge modes by using wave packets in optical lattices can reveal
unique out-of-equilibrium features. These insights can also be employed in
phases involving larger numbers of edge states and different chirality
combinations Zhang _et al._ (2022), which might reveal more interesting edge
dynamics and more robust edge transport engaging multiple edge channels.
Photonic lattices employing quantum walks offer another promising route to
study various anomalous Floquet phases with different winding number
combinations Adiyatullin _et al._ (2023).
Note added: During the completion of this manuscript, we became aware of a
relevant experimental work, which was announced following our work, in
Ref. Braun _et al._ (2023), studying and probing wave packet dynamics in an
anomalous Floquet topological phase.
Data sharing not applicable to this article as no datasets were generated or
analysed during the current study.
###### Acknowledgements.
We thank Nigel Cooper and Robert-Jan Slager for insightful discussions and
comments on the manuscript, as well as the experimental team Monika
Aidelsburger, Raphaël Saint-Jalm, Christoph Braun, Alexander Hesse and
Johannes Arceri. F.N.Ü. acknowledges funding from the Royal Society under a
Newton International Fellowship, the Marie Skłodowska-Curie programme of the
European Commission Grant No 893915, and Trinity College Cambridge. This work
received funding from the European Research Council (ERC) under the European
Union’s Horizon 2020 research and innovation program (Grant Agreement No.
101001902).
## References
* Hasan and Kane (2010) M. Z. Hasan and C. L. Kane, “Colloquium: Topological insulators,” Rev. Mod. Phys. 82, 3045–3067 (2010).
* Qi and Zhang (2011) Xiao-Liang Qi and Shou-Cheng Zhang, “Topological insulators and superconductors,” Rev. Mod. Phys. 83, 1057–1110 (2011).
* Thouless _et al._ (1982) D. J. Thouless, M. Kohmoto, M. P. Nightingale, and M. den Nijs, “Quantized hall conductance in a two-dimensional periodic potential,” Phys. Rev. Lett. 49, 405–408 (1982).
* Bernevig _et al._ (2006) B Andrei Bernevig, Taylor L Hughes, and Shou-Cheng Zhang, “Quantum spin hall effect and topological phase transition in hgte quantum wells,” science 314, 1757–1761 (2006).
* Fang _et al._ (2012) Chen Fang, Matthew J. Gilbert, and B. Andrei Bernevig, “Bulk topological invariants in noninteracting point group symmetric insulators,” Phys. Rev. B 86, 115112 (2012).
* Kane and Mele (2005) C. L. Kane and E. J. Mele, “${Z}_{2}$ topological order and the quantum spin hall effect,” Phys. Rev. Lett. 95, 146802 (2005).
* Slager _et al._ (2013) Robert-Jan Slager, Andrej Mesaros, Vladimir Juričić, and Jan Zaanen, “The space group classification of topological band-insulators,” Nat. Phys. 9, 98 (2013).
* Kruthoff _et al._ (2017) Jorrit Kruthoff, Jan de Boer, Jasper van Wezel, Charles L. Kane, and Robert-Jan Slager, “Topological classification of crystalline insulators through band structure combinatorics,” Phys. Rev. X 7, 041069 (2017).
* Po _et al._ (2017) Hoi Chun Po, Ashvin Vishwanath, and Haruki Watanabe, “Symmetry-based indicators of band topology in the 230 space groups,” Nat. Commun. 8, 50 (2017).
* Bradlyn _et al._ (2017) Barry Bradlyn, L. Elcoro, Jennifer Cano, M. G. Vergniory, Zhijun Wang, C. Felser, M. I. Aroyo, and B. Andrei Bernevig, “Topological quantum chemistry,” Nature 547, 298 (2017).
* Jotzu _et al._ (2014) Gregor Jotzu, Michael Messer, Rémi Desbuquois, Martin Lebrat, Thomas Uehlinger, Daniel Greif, and Tilman Esslinger, “Experimental realization of the topological Haldane model with ultracold fermions,” Nature 515, 237 (2014).
* Aidelsburger _et al._ (2015) Monika Aidelsburger, Michael Lohse, C Schweizer, Marcos Atala, Julio T Barreiro, S Nascimbene, NR Cooper, Immanuel Bloch, and N Goldman, “Measuring the Chern number of Hofstadter bands with ultracold bosonic atoms,” Nat. Phys. 11, 162–166 (2015).
* Tran _et al._ (2017) Duc Thanh Tran, Alexandre Dauphin, Adolfo G Grushin, Peter Zoller, and Nathan Goldman, “Probing topology by “heating”: Quantized circular dichroism in ultracold atoms,” Science advances 3, e1701207 (2017).
* Asteria _et al._ (2019) L. Asteria, D.T. Tran, T. Ozawa, M. Tarnowski, B. S. Rem, N. Flashner, K. Sengstock, N. Goldman, and C. Weitenberg, “Measuring quantized circular dichroism in ultracold topological matter,” Nat. Phys 15, 449 (2019).
* Kemp _et al._ (2022) Cameron J. D. Kemp, Nigel R. Cooper, and F. Nur Ünal, “Nested-sphere description of the $n$-level chern number and the generalized bloch hypersphere,” Phys. Rev. Res. 4, 023120 (2022).
* Tan _et al._ (2019) Xinsheng Tan, Dan-Wei Zhang, Zhen Yang, Ji Chu, Yan-Qing Zhu, Danyu Li, Xiaopei Yang, Shuqing Song, Zhikun Han, Zhiyuan Li, Yuqian Dong, Hai-Feng Yu, Hui Yan, Shi-Liang Zhu, and Yang Yu, “Experimental measurement of the quantum metric tensor and related topological phase transition with a superconducting qubit,” Phys. Rev. Lett. 122, 210401 (2019).
* Xu _et al._ (2022) Peng Xu, Wei Zheng, and Hui Zhai, “Topological micromotion of floquet quantum systems,” Phys. Rev. B 105, 045139 (2022).
* Jangjan _et al._ (2022) Milad Jangjan, Luis E. F. Foa Torres, and Mir Vahid Hosseini, “Floquet topological phase transitions in a periodically quenched dimer,” Phys. Rev. B 106, 224306 (2022).
* Jangjan and Hosseini (2020) Milad Jangjan and Mir Vahid Hosseini, “Floquet engineering of topological metal states and hybridization of edge states with bulk states in dimerized two-leg ladders,” Scientific Reports 10, 14256 (2020).
* Roy and Harper (2017) Rahul Roy and Fenner Harper, “Periodic table for floquet topological insulators,” Phys. Rev. B 96, 155118 (2017).
* Oka and Aoki (2009) Takashi Oka and Hideo Aoki, “Photovoltaic hall effect in graphene,” Phys. Rev. B 79, 081406 (2009).
* Kitagawa _et al._ (2010) Takuya Kitagawa, Erez Berg, Mark Rudner, and Eugene Demler, “Topological characterization of periodically driven quantum systems,” Phys. Rev. B 82, 235114 (2010).
* Rudner _et al._ (2013) Mark S. Rudner, Netanel H. Lindner, Erez Berg, and Michael Levin, “Anomalous edge states and the bulk-edge correspondence for periodically driven two-dimensional systems,” Phys. Rev. X 3, 031005 (2013).
* Ünal _et al._ (2019a) F. Nur Ünal, Babak Seradjeh, and André Eckardt, “How to directly measure floquet topological invariants in optical lattices,” Phys. Rev. Lett. 122, 253601 (2019a).
* Wintersperger _et al._ (2020) Karen Wintersperger, Christoph Braun, F Nur Ünal, André Eckardt, Marco Di Liberto, Nathan Goldman, Immanuel Bloch, and Monika Aidelsburger, “Realization of an anomalous floquet topological system with ultracold atoms,” Nature Physics 16, 1058–1063 (2020).
* Wang _et al._ (2017) Ce Wang, Pengfei Zhang, Xin Chen, Jinlong Yu, and Hui Zhai, “Scheme to measure the topological number of a chern insulator from quench dynamics,” Phys. Rev. Lett. 118, 185701 (2017).
* Tarnowski _et al._ (2019) Matthias Tarnowski, F Nur Ünal, Nick Fläschner, Benno S Rem, André Eckardt, Klaus Sengstock, and Christof Weitenberg, “Measuring topology from dynamics by obtaining the Chern number from a linking number,” Nat. Commun. 10 (2019).
* Ünal _et al._ (2019b) F. Nur Ünal, André Eckardt, and Robert-Jan Slager, “Hopf characterization of two-dimensional floquet topological insulators,” Phys. Rev. Research 1, 022003 (2019b).
* Hu and Zhao (2020) Haiping Hu and Erhai Zhao, “Topological invariants for quantum quench dynamics from unitary evolution,” Phys. Rev. Lett. 124, 160402 (2020).
* Ünal _et al._ (2020) F. Nur Ünal, Adrien Bouhon, and Robert-Jan Slager, “Topological euler class as a dynamical observable in optical lattices,” Phys. Rev. Lett. 125, 053601 (2020).
* Slager _et al._ (2022) Robert-Jan Slager, Adrien Bouhon, and F. Nur Ünal, “Floquet multi-gap topology: Non-abelian braiding and anomalous dirac string phase,” (2022), arXiv:2208.12824 .
* Ahn _et al._ (2019) Junyeong Ahn, Sungjoon Park, and Bohm-Jung Yang, “Failure of nielsen-ninomiya theorem and fragile topology in two-dimensional systems with space-time inversion symmetry: Application to twisted bilayer graphene at magic angle,” Phys. Rev. X 9, 021013 (2019).
* Bouhon _et al._ (2020a) Adrien Bouhon, QuanSheng Wu, Robert-Jan Slager, Hongming Weng, Oleg V. Yazyev, and Tomáš Bzdušek, “Non-abelian reciprocal braiding of weyl points and its manifestation in zrte,” Nature Physics 16, 1137–1143 (2020a).
* Bouhon _et al._ (2020b) Adrien Bouhon, Tomáš Bzdušek, and Robert-Jan Slager, “Geometric approach to fragile topology beyond symmetry indicators,” Phys. Rev. B 102, 115135 (2020b).
* Jiang _et al._ (2021a) Bin Jiang, Adrien Bouhon, Zhi-Kang Lin, Xiaoxi Zhou, Bo Hou, Feng Li, Robert-Jan Slager, and Jian-Hua Jiang, “Experimental observation of non-abelian topological acoustic semimetals and their phase transitions,” Nature Physics 17, 1239–1246 (2021a).
* Jiang _et al._ (2021b) Tianshu Jiang, Qinghua Guo, Ruo-Yang Zhang, Zhao-Qing Zhang, Biao Yang, and C. T. Chan, “Four-band non-abelian topological insulator and its experimental realization,” Nature Communications 12, 6471 (2021b).
* Goldman and Dalibard (2014) N. Goldman and J. Dalibard, “Periodically driven quantum systems: Effective hamiltonians and engineered gauge fields,” Phys. Rev. X 4, 031027 (2014).
* Eckardt (2017) André Eckardt, “Colloquium: Atomic quantum gases in periodically driven optical lattices,” Rev. Mod. Phys. 89, 011004 (2017).
* Bukov _et al._ (2015) Marin Bukov, Luca D’Alessio, and Anatoli Polkovnikov, “Universal high-frequency behavior of periodically driven systems: from dynamical stabilization to floquet engineering,” Advances in Physics 64, 139–226 (2015).
* Cooper _et al._ (2019) N. R. Cooper, J. Dalibard, and I. B. Spielman, “Topological bands for ultracold atoms,” Rev. Mod. Phys. 91, 015005 (2019).
* Wang _et al._ (2018) Botao Wang, F. Nur Ünal, and André Eckardt, “Floquet engineering of optical solenoids and quantized charge pumping along tailored paths in two-dimensional Chern insulators,” Phys. Rev. Lett. 120, 243602 (2018).
* Račiūnas _et al._ (2018) Mantas Račiūnas, F. Nur Ünal, Egidijus Anisimovas, and André Eckardt, “Creating, probing, and manipulating fractionally charged excitations of fractional chern insulators in optical lattices,” Phys. Rev. A 98, 063621 (2018).
* Reichl and Mueller (2014) Matthew D. Reichl and Erich J. Mueller, “Floquet edge states with ultracold atoms,” Phys. Rev. A 89, 063628 (2014).
* Maczewsky _et al._ (2017) Lukas J. Maczewsky, Julia M. Zeuner, Stefan Nolte, and Alexander Szameit, “Observation of photonic anomalous Floquet topological insulators,” Nat. Commun. 8 (2017), 10.1038/ncomms13756.
* Mukherjee _et al._ (2017) Sebabrata Mukherjee, Alexander Spracklen, Manuel Valiente, Erika Andersson, Patrik Öhberg, Nathan Goldman, and Robert R Thomson, “Experimental observation of anomalous topological edge modes in a slowly driven photonic lattice,” Nat. Commun. 8 (2017), 10.1038/ncomms13918.
* Budich _et al._ (2017) Jan Carl Budich, Ying Hu, and Peter Zoller, “Helical floquet channels in 1d lattices,” Phys. Rev. Lett. 118, 105302 (2017).
* Titum _et al._ (2016) Paraj Titum, Erez Berg, Mark S. Rudner, Gil Refael, and Netanel H. Lindner, “Anomalous floquet-anderson insulator as a nonadiabatic quantized charge pump,” Phys. Rev. X 6, 021013 (2016).
* Gross and Bakr (2021) C. Gross and W.S. Bakr, “Quantum gas microscopy for single atom and spin detection,” Nat. Phys. 17, 1316 (2021), and references therein.
* Quelle _et al._ (2017) A Quelle, C Weitenberg, K Sengstock, and C Morais Smith, “Driving protocol for a floquet topological phase without static counterpart,” New Journal of Physics 19, 113010 (2017).
* Nathan and Rudner (2015) Frederik Nathan and Mark S Rudner, “Topological singularities and the general classification of floquet–bloch systems,” New Journal of Physics 17, 125014 (2015).
* (51) “See Supplementary material at xxxx for details.”
* Kaufman and Ni (2021) A.M. Kaufman and KK. Ni, “Quantum science with optical tweezer arrays of ultracold atoms and molecules.” Nat. Phys. 17, 1324 (2021), and references therein.
* Braun _et al._ (2023) Christoph Braun, Raphaël Saint-Jalm, Alexander Hesse, Johannes Arceri, Immanuel Bloch, and Monika Aidelsburger, “Real-space detection and manipulation of topological edge modes with ultracold atoms,” (2023), arXiv:2304.01980 .
* Nixon _et al._ (2023) Georgia M Nixon, F Nur Ünal, and Ulrich Schneider, “Individually tunable tunnelling coefficients in optical lattices using local periodic driving,” arXiv preprint arXiv:2309.12124 (2023), 10.48550/arXiv.2309.12124.
* Haldane (1988) F. D. M. Haldane, “Model for a quantum hall effect without landau levels: Condensed-matter realization of the ”parity anomaly”,” Phys. Rev. Lett. 61, 2015–2018 (1988).
* Zhang _et al._ (2022) J.-Y. Zhang, C.-R. Yi, L. Zhang, R.-H. Jiao, K.-Y. Shi, H. Y., W. Z., X.-J. L., S. Chen, and J.-W. Pan, “Tuning anomalous floquet topological bands with ultracold atoms,” (2022), arXiv:2211.04739 .
* Adiyatullin _et al._ (2023) Albert F. Adiyatullin, Lavi K. Upreti, Corentin Lechevalier, Clement Evain, Francois Copie, Pierre Suret, Stephane Randoux, Pierre Delplace, and Alberto Amo, “Topological properties of floquet winding bands in a photonic lattice,” Phys. Rev. Lett. 130, 056901 (2023).
# A problem for Hankel measures on Hardy space
Guanlong Bao and Fangqin Ye
Guanlong Bao, Department of Mathematics, Shantou University, Shantou, Guangdong 515063, China,<EMAIL_ADDRESS>
Fangqin Ye, Business School, Shantou University, Shantou, Guangdong 515063, China,<EMAIL_ADDRESS>
###### Abstract.
In this note, we investigate a condition related to the characterization of
Hankel measures on Hardy space. We address a problem mentioned by J. Xiao in
2000.
###### Key words and phrases:
Hankel measure; Hardy space; Hankel matrix.
###### 2010 Mathematics Subject Classification:
30H10, 47B35
The work was supported by NNSF of China (No. 11720101003 and No. 11571217),
Department of Education of Guangdong Province (No. 2017KQNCX078) and STU
Scientific Research Foundation for Talents (No. NTF17020 and No. STF17005).
## 1\. Introduction
Let $\mathbb{D}$ be the open unit disk in the complex plane $\mathbb{C}$ and
let $H(\mathbb{D})$ be the space of analytic functions in $\mathbb{D}$. Recall
that for $0<p<\infty$ the Hardy space $H^{p}$ consists of functions $f\in
H(\mathbb{D})$ such that
$\|f\|_{H^{p}}=\sup_{0<r<1}\left(\frac{1}{2\pi}\int_{0}^{2\pi}|f(re^{i\theta})|^{p}d\theta\right)^{1/p}<\infty.$
It is well known that every function $f\in H^{p}$ has non-tangential limits
$f(\zeta)$ for almost every $\zeta$ on the unit circle $\mathbb{T}$. See [8]
for the theory of Hardy spaces.
Let $BMOA$ be the space of analytic functions of bounded mean oscillation. It
is well known (cf. [4, 9]) that the space $BMOA$ can be defined as the set of
functions $f\in H(\mathbb{D})$ satisfying
$\|f\|_{BMOA}^{2}=\sup_{a\in\mathbb{D}}\int_{\mathbb{D}}|f^{\prime}(z)|^{2}(1-|\sigma_{a}(z)|^{2})dA(z)<\infty,$
where
$dA(z)=\frac{1}{\pi}dxdy=\frac{r}{\pi}drd\theta,\ \ z=x+iy=re^{i\theta},$
and
$\sigma_{a}(z)=\frac{a-z}{1-\overline{a}z}$
is the Möbius transformation of $\mathbb{D}$ interchanging $a$ and $0$. The
Fefferman-Stein duality theorem tells us that the dual space of $H^{1}$ is
$BMOA$. Also, $BMOA$ is a proper subset of the Bloch space $\mathcal{B}$
consisting of functions $f\in H(\mathbb{D})$ for which
$\|f\|_{\mathcal{B}}=\sup_{z\in\mathbb{D}}(1-|z|^{2})|f^{\prime}(z)|<\infty.$
Given an arc $I$ of $\mathbb{T}$ with arclength $|I|$, the Carleson box $S(I)$
is
$S(I)=\left\\{r\zeta\in\mathbb{D}:1-\frac{|I|}{2\pi}<r<1,\ \zeta\in
I\right\\}.$
A complex measure $\mu$ on $\mathbb{D}$ is called a Carleson measure if
$\sup_{I\subseteq\mathbb{T}}\frac{|\mu|(S(I))}{|I|}<\infty.$
It is well known (cf. [8]) that $\mu$ is a Carleson measure if and only if
there exists a positive constant $C$ such that
$\int_{\mathbb{D}}|f(z)|^{2}d|\mu|(z)\leq C\|f\|^{2}_{H^{2}}$
for all $f\in H^{2}$.
Following J. Xiao [15], a complex measure $\mu$ on $\mathbb{D}$ is said to be
a Hankel measure if there exists a positive constant $C$ such that
(1.1) $\left|\int_{\mathbb{D}}f^{2}(z)d\mu(z)\right|\leq C\|f\|^{2}_{H^{2}}$
for all $f$ in $H^{2}$. It is clear that if $\mu$ is a Hankel measure, then
$|\mu(\mathbb{D})|<\infty$. The name Hankel measure has its roots in the
study of Hankel matrices (see also [11, 14]). Clearly, a Carleson measure must
be a Hankel measure. Answering a problem posed by J. Peetre, J. Xiao [15] gave
a series of complete characterizations of Hankel measures involving Carleson
measures, the duality between $H^{1}$ and $BMOA$, and Hankel operators.
We refer to [1, p. 11] and [16] for the study of complex measure inequalities
similar to (1.1) in the setting of the classical Dirichlet space and weighted
Bergman spaces, respectively.
From [15], a necessary condition for a complex measure $\mu$ on $\mathbb{D}$
to be a Hankel measure is
(1.2)
$\sup_{w\in\mathbb{D}}\left|\int_{\mathbb{D}}\frac{1-|w|^{2}}{(1-\overline{w}z)^{2}}d\mu(z)\right|<\infty.$
By this observation, J. Xiao [15, p. 139] mentioned the following problem: Is
condition (1.2) sufficient for $\mu$ to be a Hankel measure too? In this note,
we show that in general the answer to this problem is negative. We exhibit a
complex measure $\mu$ on $\mathbb{D}$ that satisfies condition (1.2) but is
not a Hankel measure. A positive Borel measure $\mu$ on [0, 1) can be seen as
a Borel measure on $\mathbb{D}$ by identifying it with the measure
$\tilde{\mu}$ defined by
$\tilde{\mu}(E)=\mu(E\cap[0,1)),$
for any Borel subset $E$ of $\mathbb{D}$. If $\mu$ is a positive Borel measure
on [0, 1), we obtain that condition (1.2) holds if and only if $\mu$ is a
Hankel measure, which also yields some new information about Hankel matrices
acting on Hardy spaces and Dirichlet type spaces.
## 2\. Condition (1.2) and Hankel measures
This section is devoted to considering whether condition (1.2) is an equivalent
description of Hankel measures. For a complex measure $\mu$ on $\mathbb{D}$,
the function $P_{\mu}$ is given by
$P_{\mu}(w)=\int_{\mathbb{D}}\frac{1}{1-w\overline{z}}d\mu(z),\ \
w\in\mathbb{D}.$
We will show that the problem considered in this note is equivalent to
asking whether every function $P_{\overline{\mu}}\in\mathcal{B}$ must actually
satisfy $P_{\overline{\mu}}\in BMOA$.
Several complete descriptions of Hankel measures are given in [15]; we cite
some of them as follows.
###### Theorem A.
Let $\mu$ be a complex measure on $\mathbb{D}$. Then the following conditions
are equivalent.
1. (1)
$\mu$ is a Hankel measure.
2. (2)
There exists a positive constant $C$ such that
$\left|\int_{\mathbb{D}}f(z)d\mu(z)\right|\leq C\|f\|_{H^{1}},$
for all $f\in H^{1}$.
3. (3)
$P_{\overline{\mu}}$ is in $BMOA$.
4. (4)
$\sup_{I\subseteq\mathbb{T}}\frac{1}{|I|}\int_{S(I)}\left|\int_{\mathbb{D}}\frac{\overline{z}d\overline{\mu}(z)}{(1-w\overline{z})^{2}}\right|^{2}(1-|w|^{2})dA(w)<\infty.$
The following theorem characterizes lacunary series in the Bloch space
$\mathcal{B}$ and $BMOA$ (see [9] and [12]).
###### Theorem B.
Let $f\in H(\mathbb{D})$ with the power series expansion
$f(z)=\sum_{k=1}^{\infty}a_{k}z^{n_{k}}$ and suppose there exists $\lambda>1$
such that $n_{k+1}\geq\lambda n_{k}$ for all $k$. Then the following
assertions hold.
1. (1)
$f\in\mathcal{B}$ if and only if the sequence $\\{a_{k}\\}$ is bounded.
2. (2)
$f\in BMOA$ if and only if
$\sum_{k=1}^{\infty}|a_{k}|^{2}<\infty.$
In 1995, R. Aulaskari, J. Xiao and R. Zhao [3] introduced the $\mathcal{Q}_{p}$
spaces, which have attracted a lot of attention in recent years. For $0<p<\infty$,
the space $\mathcal{Q}_{p}$ consists of functions $f\in H(\mathbb{D})$ with
$\|f\|_{\mathcal{Q}_{p}}^{2}=\sup_{a\in\mathbb{D}}\,\int_{\mathbb{D}}|f^{\prime}(z)|^{2}\left(1-|\sigma_{a}(z)|^{2}\right)^{p}dA(z)<\infty.$
Clearly, $\mathcal{Q}_{1}=BMOA$. By [2], we know that for $1<p<\infty$, all
$\mathcal{Q}_{p}$ spaces are the same and equal to the Bloch space
$\mathcal{B}$. See J. Xiao’s monographs [17, 18] for more results of
$\mathcal{Q}_{p}$ spaces.
The following result can be found in [17, p. 29].
###### Theorem C.
Suppose $0<p<\infty$. Let $f(z)=\sum_{n=0}^{\infty}a_{n}z^{n}$ with
$\\{a_{n}\\}$ being a decreasing sequence of nonnegative numbers. Then
$f\in\mathcal{Q}_{p}$ if and only if $a_{n}=O(\frac{1}{n})$.
We first show that the problem considered in this note is associated with a
certain self-improving property of the functions $P_{\overline{\mu}}$.
###### Theorem 2.1.
Let $\mu$ be a complex measure on $\mathbb{D}$. Then the following two
statements are equivalent.
1. (1)
If $\mu$ satisfies condition (1.2), then $\mu$ is a Hankel measure.
2. (2)
If $P_{\overline{\mu}}\in\mathcal{B}$, then $P_{\overline{\mu}}\in BMOA$.
###### Proof.
For $w\in\mathbb{D}$, one gets
$\displaystyle wP_{\overline{\mu}}(w)$ $\displaystyle=$
$\displaystyle\int_{\mathbb{D}}\frac{w}{1-w\overline{z}}d\overline{\mu}(z)$
$\displaystyle=$
$\displaystyle\sum_{n=0}^{\infty}\left(\int_{\mathbb{D}}\overline{z}^{n}d\overline{\mu}(z)\right)w^{n+1}.$
Then
$\displaystyle\sup_{w\in\mathbb{D}}(1-|w|^{2})|\left(wP_{\overline{\mu}}(w)\right)^{\prime}|$
$\displaystyle=$
$\displaystyle\sup_{w\in\mathbb{D}}(1-|w|^{2})\left|\sum_{n=0}^{\infty}(n+1)\left(\int_{\mathbb{D}}\overline{z}^{n}d\overline{\mu}(z)\right)w^{n}\right|$
$\displaystyle=$
$\displaystyle\sup_{w\in\mathbb{D}}\left|\int_{\mathbb{D}}\frac{1-|w|^{2}}{(1-w\overline{z})^{2}}d\overline{\mu}(z)\right|$
$\displaystyle=$
$\displaystyle\sup_{w\in\mathbb{D}}\left|\int_{\mathbb{D}}\frac{1-|w|^{2}}{(1-\overline{w}z)^{2}}d\mu(z)\right|.$
Consequently, the function $w\rightarrow wP_{\overline{\mu}}(w)$ belongs to
$\mathcal{B}$ if and only if condition (1.2) holds for $\mu$.
By the growth estimate for functions in $\mathcal{B}$ (see [17, p. 6] for
example), there exists a positive constant $C$ such that
$|f(z)-f(0)|\leq C\|f\|_{\mathcal{B}}\log\frac{1}{1-|z|}$
for all $f\in\mathcal{B}$. Bearing this estimate in mind, we see that
$P_{\overline{\mu}}\in\mathcal{B}$ if and only if the function $w\rightarrow
wP_{\overline{\mu}}(w)$ belongs to $\mathcal{B}$.
Thus we have proved that $P_{\overline{\mu}}\in\mathcal{B}$ if and only if $\mu$
satisfies condition (1.2). By Theorem A, $\mu$ is a Hankel measure if and only
if $P_{\overline{\mu}}\in BMOA$. These imply the desired result. ∎
The following result shows that in general the answer to J. Xiao’s problem is
negative.
###### Proposition 2.2.
There exists a complex measure $\mu$ on $\mathbb{D}$ such that condition (1.2)
holds but $\mu$ is not a Hankel measure.
###### Proof.
Set $d\overline{\mu}(z)=f(z)dA(z)$ for $z\in\mathbb{D}$, where
$f(z)=1+\sum_{k=1}^{\infty}(1+2^{k})z^{2^{k}}$. Then
$\overline{\mu}(\mathbb{D})=1$. By the Parseval formula, for any positive
integer $n$,
$\int_{\mathbb{D}}\overline{z}^{n}\,d\overline{\mu}(z)=\frac{1}{\pi}\int_{0}^{1}r^{n+1}\,dr\int_{0}^{2\pi}e^{-in\theta}\Big[1+\sum_{k=1}^{\infty}(1+2^{k})r^{2^{k}}e^{i2^{k}\theta}\Big]\,d\theta=\begin{cases}0,&\text{if}\ n\not\in\\{2^{k}:k=1,2,3,\cdots\\},\\\ 1,&\text{if}\ n\in\\{2^{k}:k=1,2,3,\cdots\\}.\end{cases}$
Hence, for any $w\in\mathbb{D}$,
$P_{\overline{\mu}}(w)=\sum_{n=0}^{\infty}\left(\int_{\mathbb{D}}\overline{z}^{n}\,d\overline{\mu}(z)\right)w^{n}=1+\sum_{k=1}^{\infty}w^{2^{k}}.$
By Theorem B, $P_{\overline{\mu}}\in\mathcal{B}$ but
$P_{\overline{\mu}}\not\in BMOA$. Via Theorem A and the proof of Theorem 2.1,
we see that the measure $\mu$ satisfies condition (1.2), but $\mu$ is not a
Hankel measure. ∎
For a positive Borel measure $\mu$ on [0, 1) and a nonnegative integer $n$,
denote by $\mu[n]$ the moment of order $n$ of $\mu$, that is,
$\mu[n]=\int_{0}^{1}t^{n}d\mu(t).$
The following result shows that there exist measures $\mu$ such that if
$P_{\overline{\mu}}\in\mathcal{B}$, then $P_{\overline{\mu}}\in BMOA$. It
follows from Theorem 2.1 that in this case $\mu$ is a Hankel measure if and
only if $\mu$ satisfies condition (1.2).
###### Theorem 2.3.
Let $\mu$ be a positive Borel measure on [0, 1). Then the following conditions
are equivalent.
1. (1)
$\mu$ is a Hankel measure.
2. (2)
$\mu$ satisfies condition (1.2), that is,
$\sup_{w\in\mathbb{D}}\left|\int_{0}^{1}\frac{1-|w|^{2}}{(1-\overline{w}t)^{2}}d\mu(t)\right|<\infty.$
3. (3)
$P_{\mu}\in\mathcal{Q}_{p}$ for some $0<p<\infty$.
4. (4)
$P_{\mu}\in\mathcal{Q}_{p}$ for all $0<p<\infty$.
5. (5)
$\mu[n]=O(\frac{1}{n})$.
###### Proof.
If $\mu$ is a positive Borel measure on [0, 1), then for $w\in\mathbb{D}$,
$P_{\overline{\mu}}(w)=P_{\mu}(w)=\sum_{n=0}^{\infty}\left(\int_{0}^{1}t^{n}d\mu(t)\right)w^{n}=\sum_{n=0}^{\infty}\mu[n]w^{n}.$
Clearly, $\left\\{\mu[n]\right\\}$ is a decreasing sequence of nonnegative
numbers. By Theorem C, we see that (3)$\Leftrightarrow$(4)$\Leftrightarrow$
(5). By the proof of Theorem 2.1, $P_{\overline{\mu}}\in\mathcal{B}$ if and
only if $\mu$ satisfies condition (1.2). From Theorem A,
$P_{\overline{\mu}}\in BMOA$ if and only if $\mu$ is a Hankel measure. Recall
that $\mathcal{B}=\mathcal{Q}_{2}$ and $BMOA=\mathcal{Q}_{1}$. Thus both
(1) and (2) are equivalent to (5). The proof is complete. ∎
Let $\mu$ be a positive Borel measure on [0, 1). Denote by $\mathcal{H}_{\mu}$
the Hankel matrix $(\mu[n+k])_{n,k\geq 0}$. This matrix induces formally an
operator, denoted also by $\mathcal{H}_{\mu}$, on $H(\mathbb{D})$ in the
following sense. For $f(z)=\sum_{n=0}^{\infty}a_{n}z^{n}\in H(\mathbb{D})$, by
multiplication of the matrix with the sequence of Taylor coefficients of the
function $f$,
$\\{a_{n}\\}_{n\geq
0}\mapsto\left\\{\sum_{k=0}^{\infty}\mu[n+k]a_{k}\right\\}_{n\geq 0},$
define
$\mathcal{H}_{\mu}(f)(z)=\sum_{n=0}^{\infty}\left(\sum_{k=0}^{\infty}\mu[n+k]a_{k}\right)z^{n}.$
If $\mu$ is the Lebesgue measure on [0, 1), the matrix $\mathcal{H}_{\mu}$
reduces to the classical Hilbert matrix $\mathcal{H}=((n+k+1)^{-1})_{n,k\geq
0}$, which induces the classical Hilbert operator $\mathcal{H}$. The Hankel
matrix $\mathcal{H}_{\mu}$ acting on $H^{2}$ was studied in [13, 14]. For all
Hardy spaces $H^{p}$, the operator $\mathcal{H}_{\mu}$ was investigated in [6,
10]. The Dirichlet type space $\mathcal{D}_{\alpha}$, $\alpha\in\mathbb{R}$,
consists of functions $f(z)=\sum_{n=0}^{\infty}a_{n}z^{n}\in H(\mathbb{D})$
for which
$\|f\|_{\mathcal{D}_{\alpha}}^{2}=\sum_{n=0}^{\infty}(n+1)^{1-\alpha}|a_{n}|^{2}<\infty.$
The Hankel matrix $\mathcal{H}_{\mu}$ acting on $\mathcal{D}_{\alpha}$ spaces
was considered in [5, 7]. We refer to [10] for recent developments on
$\mathcal{H}_{\mu}$ acting on spaces of analytic functions. The following
theorem collects results from the literature cited above.
###### Theorem D.
Let $\mu$ be a positive Borel measure on [0, 1). Suppose $1<p<\infty$ and
$0<\alpha<2$. Then the following conditions are equivalent.
1. (1)
$\mu$ is a Carleson measure.
2. (2)
$\sup_{w\in\mathbb{D}}\int_{0}^{1}\frac{1-|w|^{2}}{|1-\overline{w}t|^{2}}d\mu(t)<\infty.$
3. (3)
$\mu[n]=O(\frac{1}{n})$.
4. (4)
$\mathcal{H}_{\mu}$ is bounded on $H^{p}$.
5. (5)
$\mathcal{H}_{\mu}$ is bounded on $\mathcal{D}_{\alpha}$.
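For orientation, this numerical check (ours, not part of the paper) illustrates the Hilbert-matrix special case: taking $\mu$ to be the Lebesgue measure on $[0,1)$ gives $\mu[n]=1/(n+1)$, which satisfies condition (3) of Theorem D, and the induced Hankel matrix $(\mu[n+k])$ is exactly the Hilbert matrix. The midpoint-rule sampling below is our own choice of quadrature.

```python
from fractions import Fraction

def moment(n, samples=100000):
    """Approximate mu[n] = integral_0^1 t^n dt by the midpoint rule."""
    h = 1.0 / samples
    return sum(((i + 0.5) * h) ** n for i in range(samples)) * h

# Exact moments of Lebesgue measure on [0, 1): mu[n] = 1/(n+1) = O(1/n).
for n in range(5):
    assert abs(moment(n) - 1.0 / (n + 1)) < 1e-6

# The induced Hankel matrix (mu[n+k]) equals the Hilbert matrix ((n+k+1)^-1).
hilbert = [[Fraction(1, n + k + 1) for k in range(4)] for n in range(4)]
hankel = [[Fraction(1, (n + k) + 1) for k in range(4)] for n in range(4)]
assert hilbert == hankel
```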
Combining Theorem 2.3 with Theorem D, we obtain immediately new
characterizations of a Hankel matrix $\mathcal{H}_{\mu}$ acting on Hardy
spaces and Dirichlet type spaces in terms of Hankel measures and
$\mathcal{Q}_{p}$ functions.
## References
* [1] N. Arcozzi, R. Rochberg, E. Sawyer and B. Wick, Function spaces related to the Dirichlet space, J. Lond. Math. Soc., 83 (2011), 1-18.
* [2] R. Aulaskari and P. Lappan, Criteria for an analytic function to be Bloch and a harmonic or meromorphic function to be normal, Complex analysis and its applications, Pitman Res. Notes in Math., 305, Longman Sci. Tec., Harlow, 1994, 136-146.
* [3] R. Aulaskari, J. Xiao and R. Zhao, On subspaces and subsets of $BMOA$ and $UBC$, Analysis, 15 (1995), 101-121.
* [4] A. Baernstein, Analytic functions of bounded mean oscillation, Aspects of Contemporary Complex Analysis, Academic Press, 1980, 3-36.
* [5] G. Bao and H. Wulan, Hankel matrices acting on Dirichlet spaces, J. Math. Anal. Appl., 409 (2014), 228-235.
* [6] C. Chatzifountas, D. Girela and J. Peláez, A generalized Hilbert matrix acting on Hardy spaces, J. Math. Anal. Appl., 413 (2014), 154-168.
* [7] E. Diamantopoulos, Operators induced by Hankel matrices on Dirichlet spaces, Analysis (Munich), 24 (2004), 345-360.
* [8] P. Duren, Theory of $H^{p}$ Spaces, Academic Press, New York, 1970.
* [9] D. Girela, Analytic functions of bounded mean oscillation, in: Complex Function Spaces (Mekrijärvi 1999), R. Aulaskari (ed.), Univ. Joensuu Dept. Math. Rep. Ser. 4, Univ. Joensuu, Joensuu, 2001, pp. 61-170.
* [10] D. Girela and N. Merchán, A Hankel matrix acting on spaces of analytic functions, Integral Equations Operator Theory, 89 (2017), 581-594.
* [11] S. Janson and J. Peetre, A new generalization of Hankel operators (the case of higher weights), Math. Nachr., 132 (1987), 313-328.
* [12] Ch. Pommerenke, On Bloch functions, J. London Math. Soc., 2 (1970), 689-695.
* [13] S. Power, Vanishing Carleson measures, Bull. London Math. Soc., 12 (1980), 207-210.
* [14] H. Widom, Hankel matrices, Trans. Amer. Math. Soc., 121 (1966), 1-35.
* [15] J. Xiao, Hankel measures on Hardy space, Bull. Austral. Math. Soc., 62 (2000), 135-140.
* [16] J. Xiao, Pseudo-Carleson measures for weighted Bergman spaces, Michigan Math. J., 47 (2000), 447-452.
* [17] J. Xiao, Holomorphic $\mathcal{Q}$ Classes, Springer, LNM 1767, Berlin, 2001.
* [18] J. Xiao, Geometric $\mathcal{Q}_{p}$ Functions, Birkhäuser Verlag, Basel-Boston-Berlin, 2006.
Huib Donkers and Bart M. P. Jansen, Eindhoven University of Technology, The
Netherlands. This project has received funding from the European Research
Council (ERC) under the European Union’s Horizon 2020 research and innovation
programme (grant agreement No 803421, ReduceSearch). An extended abstract of
this work appeared in the Proceedings of the 47th Workshop on Graph-Theoretical
Concepts in Computer Science, WG 2021.
# Preprocessing to Reduce the Search Space: Antler Structures for Feedback
Vertex Set
Huib Donkers Bart M. P. Jansen
###### Abstract
The goal of this paper is to open up a new research direction aimed at
understanding the power of preprocessing in speeding up algorithms that solve
NP-hard problems exactly. We explore this direction for the classic Feedback
Vertex Set problem on undirected graphs, leading to a new type of graph
structure called _antler decomposition_ , which identifies vertices that
belong to an optimal solution. It is an analogue of the celebrated _crown
decomposition_ which has been used for Vertex Cover. We develop the graph
structure theory around such decompositions and develop fixed-parameter
tractable algorithms to find them, parameterized by the number of vertices for
which they witness presence in an optimal solution. This reduces the search
space of fixed-parameter tractable algorithms parameterized by the solution
size that solve Feedback Vertex Set.
###### keywords:
kernelization, preprocessing, feedback vertex set, graph decomposition
## 1 Introduction
The goal of this paper is to open up a new research direction aimed at
understanding the power of preprocessing in speeding up algorithms that solve
NP-hard problems exactly [25, 30]. In a nutshell, this new direction can be
summarized as: how can an algorithm identify part of an optimal solution in an
efficient preprocessing phase? We explore this direction for the classic [37]
Feedback Vertex Set problem on undirected graphs, leading to a new graph
structure called _antler_ which reveals vertices that belong to an optimal
feedback vertex set.
We start by motivating the need for a new direction in the theoretical
analysis of preprocessing. The use of preprocessing, often via the repeated
application of reduction rules, has long been known [3, 4, 43] to speed up the
solution of algorithmic tasks in practice. The introduction of the framework
of parameterized complexity [20] in the 1990s made it possible to also analyze
the power of preprocessing _theoretically_ , through the notion of
kernelization. It applies to _parameterized decision problems_
$\Pi\subseteq\Sigma^{*}\times\mathbb{N}$, in which every instance
$x\in\Sigma^{*}$ has an associated integer parameter $k$ which captures one
dimension of its complexity. For Feedback Vertex Set, typical choices for the
parameter include the size of the desired solution or structural measures of
the complexity of the input graph. A kernelization for a parameterized problem
$\Pi$ is then a polynomial-time algorithm that reduces any instance with
parameter value $k$ to an equivalent instance, of the same problem, whose
total size is bounded by $f(k)$ for some computable function $f$ of the
parameter alone. The function $f$ is the _size_ of the kernelization.
A substantial theoretical framework has been built around the definition of
kernelization [16, 21, 26, 28, 30]. It includes deep techniques for obtaining
kernelization algorithms [10, 27, 38, 42], as well as tools for ruling out the
existence of small kernelizations [11, 18, 22, 29, 31] under complexity-
theoretic hypotheses. This body of work gives a good theoretical understanding
of polynomial-time data compression for NP-hard problems.
However, we argue that these results on kernelization _do not_ explain the
often exponential speed-ups (e.g. [3], [5, Table 6]) caused by applying
effective preprocessing steps to non-trivial algorithms. Why not? A
kernelization algorithm guarantees that the input _size_ is reduced to a
function of the parameter $k$; but the running time of modern parameterized
algorithms for NP-hard problems is not exponential in the total input size.
Instead, fixed-parameter tractable (FPT) algorithms have a running time that
scales polynomially with the input size, and which only depends exponentially
on a problem parameter such as the solution size or treewidth. Hence an
exponential speed-up of such algorithms cannot be explained by merely a
decrease in input size, but only by a decrease in the _parameter_!
We therefore propose the following novel research direction: to investigate
how preprocessing algorithms can decrease the parameter value (and hence
search space) of FPT algorithms, in a theoretically sound way. It is
nontrivial to phrase meaningful formal questions in this direction. To
illustrate this difficulty, note that strengthening the definition of
kernelization to “a preprocessing algorithm that is guaranteed to always
output an equivalent instance of the same problem with a strictly smaller
parameter” is useless. Under minor technical assumptions, such an algorithm
would allow the problem to be solved in polynomial time by repeatedly reducing
the parameter, and solving the problem using an FPT or XP algorithm once the
parameter value becomes constant. Hence NP-hard problems do not admit such
parameter-decreasing algorithms. To formalize a meaningful line of inquiry, we
take our inspiration from the Vertex Cover problem, the fruit fly of
parameterized algorithms.
A rich body of theoretical and applied algorithmic research has been devoted
to the exact solution of the Vertex Cover problem [5, 23, 32, 33]. A standard
2-way branching algorithm can test whether a graph $G$ has a vertex cover of
size $k$ in time $\mathcal{O}(2^{k}(n+m))$, which can be improved by more
sophisticated techniques [14]. The running time of the algorithm scales
linearly with the input size, and exponentially with the size $k$ of the
desired solution. This running time suggests that to speed up the algorithm by
a factor $1000$, one either has to decrease the input size by a factor $1000$,
or decrease $k$ by $\log_{2}(1000)\approx 10$.
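The $\mathcal{O}(2^{k}(n+m))$ branching algorithm mentioned above can be sketched as follows (our own minimal illustration, with edges given as a plain list of pairs): pick any uncovered edge; at least one of its endpoints must be in the cover, so branch on both choices with budget $k-1$.

```python
def has_vertex_cover(edges, k):
    """Classic 2-way branching: does the graph given by `edges`
    have a vertex cover of size at most k?  Depth is at most k,
    so the search tree has O(2^k) nodes."""
    edge = next(iter(edges), None)
    if edge is None:          # no edge left to cover
        return True
    if k == 0:                # an edge remains but the budget is spent
        return False
    u, v = edge
    for w in (u, v):          # one endpoint of `edge` must be in the cover
        rest = [e for e in edges if w not in e]
        if has_vertex_cover(rest, k - 1):
            return True
    return False

# A 4-cycle needs 2 vertices in any vertex cover.
square = [(1, 2), (2, 3), (3, 4), (4, 1)]
assert not has_vertex_cover(square, 1) and has_vertex_cover(square, 2)
```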
It turns out that state-of-the-art preprocessing strategies for Vertex Cover
indeed often _succeed_ in decreasing the size of the solution that the follow-
up algorithm has to find, by means of crown-reduction [2, 15, 24], or the
intimately related Nemhauser-Trotter reduction based on the linear-programming
relaxation [41]. Recall that a vertex cover in a graph $G$ is a set
$S\subseteq V(G)$ such that each edge has at least one endpoint in $S$.
Observe that if $H\subseteq V(G)$ is a set of vertices with the property that
there exists a minimum vertex cover of $G$ containing all of $H$, then $G$ has
a vertex cover of size $k$ if and only if $G-H$ has a vertex cover of size
$k-|H|$. Therefore, if a preprocessing algorithm can identify a set of
vertices $H$ which are guaranteed to belong to an optimal solution, then it
can effectively reduce the parameter of the problem by restricting to a search
for a solution of size $k-|H|$ in $G-H$.
A _crown decomposition_ (cf. [1, 15, 24], [16, §2.3], [28, §4]) of a graph $G$
serves exactly this purpose. It consists of two disjoint vertex sets
$(\mathsf{head},\mathsf{crown})$, such that $\mathsf{crown}$ is a non-empty
independent set whose neighborhood is contained in $\mathsf{head}$, and such
that the graph $G[\mathsf{head}\cup\mathsf{crown}]$ has a matching $M$ of size
$|\mathsf{head}|$. As $\mathsf{crown}$ is an independent set, the matching $M$
assigns to each vertex of $\mathsf{head}$ a private neighbor in
$\mathsf{crown}$. It certifies that any vertex cover in $G$ contains at least
$|\mathsf{head}|$ vertices from $\mathsf{head}\cup\mathsf{crown}$, and as
$\mathsf{crown}$ is an independent set with
$N_{G}(\mathsf{crown})\subseteq\mathsf{head}$, a simple exchange argument
shows there is indeed an optimal vertex cover in $G$ containing all of
$\mathsf{head}$ and none of $\mathsf{crown}$. Since there is a polynomial-time
algorithm to find a crown decomposition if one exists [2, Thm. 11–12], this
yields the following preprocessing guarantee for Vertex Cover: if the input
instance $(G,k)$ has a crown decomposition $(\mathsf{head},\mathsf{crown})$,
then a polynomial-time algorithm can reduce the problem to an equivalent one
with parameter at most $k-|\mathsf{head}|$, thereby giving a formal guarantee
on reduction in the parameter based on the structure of the input.111The
effect of the crown reduction rule can also be theoretically explained by the
fact that interleaving basic 2-way branching with exhaustive crown reduction
yields an algorithm whose running time is only exponential in the _gap_
between the size of a minimum vertex cover and the cost of an optimal solution
to its linear-programming relaxation [39]. However, this type of result cannot
be generalized to Feedback Vertex Set since it is already NP-complete to
determine whether there is a feedback vertex set whose size matches the cost
of the linear-programming relaxation (Corollary A.6).
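The defining conditions of a crown decomposition can be verified directly. The sketch below is our own illustration (graphs are dicts mapping vertices to neighbour sets); it uses Kuhn's augmenting-path algorithm to test whether $\mathsf{head}$ can be matched into $\mathsf{crown}$, which, since $\mathsf{crown}$ is independent, is equivalent to the matching condition in the definition.

```python
def is_crown(G, head, crown):
    """Check that (head, crown) is a crown decomposition of the simple
    graph G: crown is a non-empty independent set whose neighbourhood is
    contained in head, and head can be matched into crown."""
    head, crown = set(head), set(crown)
    if not crown or head & crown:
        return False
    for c in crown:
        if any(x in crown for x in G[c]):   # crown must be independent
            return False
        if not G[c] <= head | crown:        # N(crown) must lie in head
            return False
    # Kuhn's algorithm: give every head vertex a private crown neighbour.
    match = {}                              # crown vertex -> head vertex
    def augment(h, seen):
        for c in G[h] & crown:
            if c in seen:
                continue
            seen.add(c)
            if c not in match or augment(match[c], seen):
                match[c] = h
                return True
        return False
    return all(augment(h, set()) for h in head)

# Star K_{1,3}: centre 0 is the head, the leaves form the crown.
G = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
assert is_crown(G, {0}, {1, 2, 3})
assert not is_crown(G, {1}, {0, 2, 3})      # {0,2,3} is not independent
```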
As the first step of our proposed research program into parameter reduction
(and thereby, search space reduction) by a preprocessing phase, we present a
graph decomposition for Feedback Vertex Set which can identify vertices $S$
that belong to an optimal solution; and which therefore facilitate a reduction
from finding a solution of size $k$ in graph $G$, to finding a solution of
size $k-|S|$ in $G-S$. While there has been a significant amount of work on
kernelization for Feedback Vertex Set [12, 13, 34, 36, 45], the corresponding
preprocessing algorithms do not succeed in finding vertices that belong to an
optimal solution, other than those for which there is a self-loop or those
which form the center of a flower (consisting of $k+1$ otherwise vertex-disjoint
cycles [12, 13, 45], or a technical relaxation of this notion [34]). In
particular, apart from the trivial self-loop rule, earlier preprocessing
algorithms can only conclude a vertex $v$ belongs to all optimal solutions (of
a size $k$ which must be given in advance) if they find a suitable packing of
cycles witnessing that solutions without $v$ must have size larger than $k$.
In contrast, our argumentation will be based on _local_ exchange arguments,
which can be applied independently of the global solution size $k$.
We therefore introduce a new graph decomposition for preprocessing Feedback
Vertex Set. To motivate it, we distill the essential features of a crown
decomposition. Effectively, a crown decomposition of $G$ certifies that $G$
has a minimum vertex cover containing all of $\mathsf{head}$, because (i) any
vertex cover has to pick at least $|\mathsf{head}|$ vertices from
$\mathsf{head}\cup\mathsf{crown}$, as the matching $M$ certifies that
$\mathrm{\textsc{vc}}(G[\mathsf{head}\cup\mathsf{crown}])\geq|\mathsf{head}|$,
while (ii) any minimum vertex cover $S^{\prime}$ in
$G-(\mathsf{head}\cup\mathsf{crown})$ yields a minimum vertex cover
$S^{\prime}\cup\mathsf{head}$ in $G$, since
$N_{G}(\mathsf{crown})\subseteq\mathsf{head}$ and $\mathsf{crown}$ is an
independent set. To obtain similar guarantees for Feedback Vertex Set, we need
a decomposition to supply disjoint vertex sets
$(\mathsf{head},\mathsf{antler})$ such that (i) any minimum feedback vertex
set contains at least $|\mathsf{head}|$ vertices from
$\mathsf{head}\cup\mathsf{antler}$, and (ii) any minimum feedback vertex set
$S^{\prime}$ in $G-(\mathsf{head}\cup\mathsf{antler})$ yields a minimum
feedback vertex set $S^{\prime}\cup\mathsf{head}$ in $G$. To achieve (i), it
suffices for $G[\mathsf{head}\cup\mathsf{antler}]$ to contain a set of
$|\mathsf{head}|$ vertex-disjoint cycles (implying that each cycle contains
exactly one vertex of $\mathsf{head}$); to achieve (ii), it suffices for
$G[\mathsf{antler}]$ to be acyclic, with each tree $T$ of the forest
$G[\mathsf{antler}]$ connected to the remainder
$V(G)\setminus(\mathsf{head}\cup\mathsf{antler})$ by at most one edge
(implying that all cycles through $\mathsf{antler}$ intersect
$\mathsf{head}$). We call such a decomposition a 1-antler. Here _antler_
refers to the shape of the forest $G[\mathsf{antler}]$, which no longer
consists of isolated spikes of a crown (see Figure 1). The prefix $1$
indicates it is the simplest type of antler; we present a generalization
later. An antler is _non-empty_ if
$\mathsf{head}\cup\mathsf{antler}\neq\emptyset$, and the _width_ of the antler
is defined to be $|\mathsf{head}|$.
Figure 1: Graph structures showing there is an optimal solution containing all
black vertices and no gray vertices, certified by the bold subgraph. Left:
Crown decomposition for Vertex Cover. Right: Antler for Feedback Vertex Set.
For legibility, the number of edges in the drawing has been restricted. It
therefore has degree-2 vertices which make it reducible by standard reduction
rules; but adding all possible edges between gray and black vertices leads to
a structure of minimum degree at least three which is still a 1-antler.
Unfortunately, assuming $\mathsf{P}$ $\neq$ $\mathsf{NP}$ there is no
_polynomial-time_ algorithm that always outputs a non-empty 1-antler if one
exists. We prove this in Appendix A. However, for the purpose of making a
preprocessing algorithm that reduces the search space, we can allow FPT time
in a parameter such as $|\mathsf{head}|$ to find a decomposition. Each fixed
choice of $|\mathsf{head}|$ would then correspond to a reduction rule which
identifies a small ($|\mathsf{head}|$-sized) part of an optimal feedback
vertex set, for which there is a simple certificate for it being part of an
optimal solution. Such a reduction rule can then be iterated in the
preprocessing phase, thereby potentially decreasing the target solution size
(and search space) by an arbitrarily large amount. Hence we consider the
parameterized complexity of testing whether a graph admits a non-empty
1-antler with $|\mathsf{head}|\leq k$, parameterized by $k$. On the one hand,
we show this problem to be W[1]-hard in Appendix B. This hardness turns out to
be a technicality based on the forced bound on $|\mathsf{head}|$, though. We
provide the following FPT algorithm which yields a search-space reducing
preprocessing step.
###### Theorem 1.1.
There is an algorithm that runs in $2^{\mathcal{O}(k^{5})}\cdot
n^{\mathcal{O}(1)}$ time that, given an undirected multigraph $G$ on $n$
vertices and integer $k$, either correctly determines that $G$ does not admit
a non-empty 1-antler of width at most $k$, or outputs a set $S$ of at least
$k$ vertices such that there exists an optimal feedback vertex set in $G$
containing all vertices of $S$.
Hence if the input graph admits a non-empty 1-antler of width at most $k$, the
algorithm is guaranteed to find at least $k$ vertices that belong to an
optimal feedback vertex set, thereby reducing the search space.
Based on this positive result, we go further and generalize our approach
beyond 1-antlers. For a 1-antler $(\mathsf{head},\mathsf{antler})$ in $G$, the
set of $|\mathsf{head}|$ vertex-disjoint cycles in
$G[\mathsf{head}\cup\mathsf{antler}]$ forms a very simple certificate that any
feedback vertex set of $G$ contains at least $|\mathsf{head}|$ vertices from
$\mathsf{head}\cup\mathsf{antler}$. We can generalize our approach to identify
part of an optimal solution, by allowing more complex certificates of
optimality. The following interpretation of a 1-antler is the basis of the
generalization: for a 1-antler $(\mathsf{head},\mathsf{antler})$ in $G$, there
is a subgraph $G^{\prime}$ of $G[\mathsf{head}\cup\mathsf{antler}]$ (formed by
the $|\mathsf{head}|$ vertex-disjoint cycles) such that
$V(G^{\prime})\supseteq\mathsf{head}$ and $\mathsf{head}$ is an optimal
feedback vertex set of $G^{\prime}$; and furthermore this subgraph
$G^{\prime}$ is simple because all its connected components, being cycles,
have a feedback vertex set of size $1$. For an arbitrary integer $z$, we
therefore define a $z$-antler in an undirected multigraph $G$ as a pair
of disjoint vertex sets $(\mathsf{head},\mathsf{antler})$ such that (i) any
minimum feedback vertex set in $G$ contains at least $|\mathsf{head}|$
vertices from $\mathsf{head}\cup\mathsf{antler}$, as witnessed by the fact
that $G[\mathsf{head}\cup\mathsf{antler}]$ has a subgraph $G^{\prime}$ for
which $\mathsf{head}$ is an optimal feedback vertex set and with each
component of $G^{\prime}$ having a feedback vertex set of size at most $z$;
and (ii) the graph $G[\mathsf{antler}]$ is acyclic, with each tree $T$ of the
forest $G[\mathsf{antler}]$ connected to the remainder
$V(G)\setminus(\mathsf{head}\cup\mathsf{antler})$ by at most one edge. (So
condition (ii) is not changed compared to a 1-antler.) Our main result is the
following.
###### Theorem 1.2.
There is an algorithm that runs in $2^{\mathcal{O}(k^{5}z^{2})}\cdot
n^{\mathcal{O}(z)}$ time that, given an undirected multigraph $G$ on $n$
vertices and integers $k\geq z\geq 0$, either correctly determines that $G$
does not admit a non-empty $z$-antler of width at most $k$, or outputs a set
$S$ of at least $k$ vertices such that there exists an optimal feedback vertex
set in $G$ containing all vertices of $S$.
In fact, we prove a slightly stronger statement. If a graph $G$ can be reduced
to a graph $G^{\prime}$ by iteratively removing $z$-antlers, each of width at
most $k$, and the sum of the widths of this sequence of antlers is $t$, then
we can find in time $f(k,z)\cdot n^{\mathcal{O}(z)}$ a subset of at least $t$
vertices of $G$ that belong to an optimal feedback vertex set. This implies
that if a complete solution to Feedback Vertex Set can be assembled by
iteratively combining $\mathcal{O}(1)$-antlers of width at most $k$, then the
entire solution can be found in time $f^{\prime}(k)\cdot n^{\mathcal{O}(1)}$.
Hence our work uncovers a new parameterization in terms of the complexity of
the solution structure, rather than its size, in which Feedback Vertex Set is
fixed-parameter tractable.
Our algorithmic results are based on a combination of graph reduction and
color coding. We use reduction steps inspired by the kernelization algorithms
[12, 45] for Feedback Vertex Set to bound the size of $\mathsf{antler}$ in the
size of $\mathsf{head}$. After such reduction steps, we use color coding [6]
to help identify antler structures. A significant amount of effort goes into
proving that the reduction steps preserve antler structures and the optimal
solution size.
## 2 Preliminaries
For any family of sets $X_{1},\ldots,X_{\ell}$ indexed by
$\\{1,\ldots,\ell\\}$ we define for all $1\leq i\leq\ell$ the following
$X_{<i}:=\bigcup_{1\leq j<i}X_{j}$, $X_{>i}:=\bigcup_{i<j\leq\ell}X_{j}$,
$X_{\leq i}:=X_{i}\cup X_{<i}$ and $X_{\geq i}:=X_{i}\cup X_{>i}$. For a
function $f\colon A\to B$, let $f^{-1}\colon B\to 2^{A}$ denote the _preimage
function of $f$_, that is $f^{-1}(b)=\\{a\in A\mid f(a)=b\\}$. For some set
$D$ of size $n$ and integer $k$ with $n\geq k$ an _$(n,k)$ -universal set for
$D$_ is a family $\mathcal{U}$ of subsets of $D$ such that for all $S\subseteq
D$ of size at most $k$ we have $\\{S\cap U\mid U\in\mathcal{U}\\}=2^{S}$.
###### Theorem 2.1 ([40, Thm. 6], cf. [16, Thm. 5.20]).
For any set $D$ and integers $n$ and $k$ with $|D|=n\geq k$, an
$(n,k)$-universal set $\mathcal{U}$ for $D$ with
$|\mathcal{U}|=2^{\mathcal{O}(k)}\log n$ can be created in
$2^{\mathcal{O}(k)}n\log n$ time.
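The defining property of an $(n,k)$-universal set can be checked by brute force on tiny ground sets. The sketch below is our own illustration and does not reproduce the construction behind Theorem 2.1; it simply verifies the trace condition, using the full power set as a trivially universal family.

```python
from itertools import combinations

def is_universal(D, k, family):
    """Brute-force check of the (n, k)-universal set property: for every
    S subseteq D with |S| <= k, the traces {S & U : U in family} must
    realise all 2^|S| subsets of S."""
    D = list(D)
    for r in range(k + 1):
        for combo in combinations(D, r):
            S = frozenset(combo)
            traces = {S & U for U in family}
            if len(traces) != 2 ** len(S):
                return False
    return True

D = range(4)
power_set = [frozenset(c) for r in range(5) for c in combinations(D, r)]
assert is_universal(D, 2, power_set)        # the power set is universal
assert not is_universal(D, 2, [frozenset(), frozenset(D)])  # too few traces
```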
All graphs considered in this paper are undirected multigraphs, which may have
loops. Based on the incidence representation of multigraphs (cf. [26, Example
4.9]) we represent a multigraph $G$ by a vertex set $V(G)$, an edge set
$E(G)$, and a function $\iota\colon E(G)\to 2^{V(G)}$ where $\iota(e)$ is the
set of one or two vertices incident to $e$ for all $e\in E(G)$. In the context
of an algorithm with input graph $G$ we use $n=|V(G)|$ and $m=|E(G)|$. We
assume we can retrieve and update the number of edges between two vertices in
constant time, hence we can ensure in $\mathcal{O}(n^{2})$ time that there are
at most two edges between any two vertices, meaning $m\in\mathcal{O}(n^{2})$.
For a vertex set $X\subseteq V(G)$ let $G[X]$ denote the subgraph of $G$
induced by $X$. For a set of vertices and edges $Y\subseteq V(G)\cup E(G)$ let
$G-Y$ denote the graph obtained from $G[V(G)\setminus Y]$ by removing all
edges in $Y$. For a singleton set $\\{v\\}$ we write $G-v$ instead of
$G-\\{v\\}$. For two graphs $G$ and $H$ the graph $G\cap H$ is the graph on
vertex set $V(G)\cap V(H)$ and edge set $E(G)\cap E(H)$. For $v\in V(G)$ the
open neighborhood of $v$ in $G$ is $N_{G}(v):=\\{u\in V(G)\mid\exists e\in
E(G)\colon\\{u,v\\}=\iota(e)\\}$. For $X\subseteq V(G)$ let
$N_{G}(X):=\bigcup_{v\in X}N_{G}(v)\setminus X$. The degree $\deg_{G}(v)$ of a
vertex $v$ in $G$ is the number of edge-endpoints incident to $v$, where a
self-loop contributes two endpoints. For two disjoint vertex sets
$X,Y\subseteq V(G)$ the number of edges in $G$ with one endpoint in $X$ and
another in $Y$ is denoted by $e(X,Y)$. To simplify the presentation, in
expressions involving $N_{G}(..)$ and $e(..,..)$ we sometimes use a subgraph
$H$ as argument as a shorthand for the vertex set $V(H)$ that is formally
needed. For a vertex $v\in V(G)$ and an integer $k$, a _$v$ -flower of order
$k$_ is a collection of $k$ cycles in $G$ whose vertex sets only intersect in
$v$.
###### Lemma 2.2 (Cf. [RaymondT17, Lemma 3.9]).
If $v$ is a vertex in an undirected multigraph $G$ such that $v$
does not have a self-loop and $G-v$ is acyclic, then we can find in
$\mathcal{O}(n)$ time a set $X\subseteq V(G)\setminus\\{v\\}$ such that $G-X$
is acyclic and $G$ contains a $v$-flower of order $|X|$.
###### Proof 2.3.
We prove the existence of such a set $X$ and $v$-flower by induction on
$|V(G)|$. The inductive proof can easily be translated into a linear-time
algorithm. If $G$ is acyclic, output $X=\emptyset$ and a $v$-flower of order
$0$. Otherwise, since $v$ does not have a self-loop there is a tree $T$ of the
forest $G-v$ such that $G[V(T)\cup\\{v\\}]$ contains a cycle. Root $T$ at an
arbitrary vertex and consider a deepest node $x$ in $T$ for which the graph
$G[V(T_{x})\cup\\{v\\}]$ contains a cycle $C$. Then any feedback vertex set of
$G$ that does not contain $v$, has to contain at least one vertex of $T_{x}$;
and the choice of $x$ as a deepest vertex implies that $x$ lies on all cycles
of $G$ that intersect $T_{x}$. By induction on $G^{\prime}:=G-V(T_{x})$ and
$v$, there is a feedback vertex set $X^{\prime}\subseteq
V(G^{\prime})\setminus\\{v\\}$ of $G^{\prime}$ and a $v$-flower in
$G^{\prime}$ of order $|X^{\prime}|$. We obtain a $v$-flower of order
$|X^{\prime}|+1$ in $G$ by adding $C$, while
$X:=X^{\prime}\cup\\{x\\}\subseteq V(G)\setminus\\{v\\}$ is a feedback vertex
set of size $|X^{\prime}|+1$.
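The inductive proof above translates into a single bottom-up traversal. The sketch below is our own illustration: $G-v$ is passed as an adjacency dict, and parallel edges incident to $v$ are modelled by repeating an endpoint in `v_edges`. A post-order pass selects the deepest vertex whose subtree contains at least two endpoints of edges to $v$.

```python
from collections import Counter

def flower_and_fvs(forest, v_edges):
    """Lemma 2.2 as an algorithm.  `forest` is the acyclic graph G - v as
    an adjacency dict; `v_edges` lists the endpoints of edges incident to
    v, with multiplicity.  Returns a set X such that G - X is acyclic and
    G contains a v-flower of order |X|."""
    hits = Counter(v_edges)     # v-edge endpoints per vertex
    X, seen = set(), set()

    def dfs(x, parent):
        # number of still-usable v-edge endpoints in the subtree of x
        seen.add(x)
        count = hits[x]
        for y in forest.get(x, ()):
            if y != parent and y not in seen:
                count += dfs(y, x)
        if count >= 2:          # deepest such node: two endpoints give one
            X.add(x)            # new petal, and every cycle meeting this
            return 0            # subtree passes through x
        return count

    for root in forest:
        if root not in seen:
            dfs(root, None)
    return X

# v is adjacent once each to 1, 2, 4, 5 in the star tree centred at 3.
forest = {1: [3], 2: [3], 4: [3], 5: [3], 3: [1, 2, 4, 5]}
assert flower_and_fvs(forest, [1, 2, 4, 5]) == {3}
```

Selecting the deepest qualifying vertex guarantees that no single child subtree holds two unused endpoints, so each chosen cycle really passes through its selected vertex and distinct cycles intersect only in $v$.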
## 3 Feedback Vertex Cuts and Antlers
In this section we present properties of antlers and related structures. A
Feedback Vertex Set (FVS) in a graph $G$ is a vertex set $X\subseteq V(G)$
such that $G-X$ is acyclic. The feedback vertex number of a graph $G$, denoted
by $\operatorname{\textsc{fvs}}(G)$, is the minimum size of a FVS in $G$. A
_Feedback Vertex Cut_ (FVC) in a graph $G$ is a pair of disjoint vertex sets
$(C,F)$ such that $C,F\subseteq V(G)$, $G[F]$ is a forest, and for each tree
$T$ in $G[F]$ we have $e(T,G-(C\cup F))\leq 1$. The _width_ of a FVC $(C,F)$
is $|C|$, and $(C,F)$ is _empty_ if $|C\cup F|=0$.
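The two defining conditions of a FVC can be checked mechanically. The sketch below is our own illustration for simple graphs given as neighbour dicts: union-find verifies that $G[F]$ is a forest, and a per-tree counter verifies that each tree sends at most one edge to $V(G)\setminus(C\cup F)$.

```python
def is_fvc(G, C, F):
    """Check that (C, F) is a feedback vertex cut of the simple graph G:
    C and F are disjoint, G[F] is a forest, and each tree of G[F] sends
    at most one edge to the rest of the graph."""
    C, F = set(C), set(F)
    if C & F:
        return False
    rest = set(G) - C - F
    parent = {v: v for v in F}            # union-find over F
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u in F:                           # G[F] must be acyclic
        for w in G[u]:
            if w in F and u < w:          # each edge considered once
                ru, rw = find(u), find(w)
                if ru == rw:
                    return False          # edge closes a cycle in G[F]
                parent[ru] = rw
    out_edges = {}                        # tree root -> #edges to rest
    for u in F:
        for w in G[u]:
            if w in rest:
                r = find(u)
                out_edges[r] = out_edges.get(r, 0) + 1
                if out_edges[r] > 1:
                    return False
    return True

# Cycle a-b-c-x-a: with C = {c}, the tree {a, b} sends one edge to x,
# and indeed every cycle through F = {a, b} meets C (Observation 3.1).
G = {'a': {'b', 'x'}, 'b': {'a', 'c'}, 'c': {'b', 'x'}, 'x': {'a', 'c'}}
assert is_fvc(G, {'c'}, {'a', 'b'})
```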
###### Observation 3.1.
If $(C,F)$ is a FVC in $G$ then any cycle in $G$ containing a vertex from $F$
also contains a vertex from $C$. The set $C$ is a FVS in $G[C\cup F]$, hence
$|C|\geq\operatorname{\textsc{fvs}}(G[C\cup F])$.
###### Observation 3.2.
If $(C,F)$ is a FVC in $G$ then for any $X\subseteq V(G)$ we have that
$(C\setminus X,F\setminus X)$ is a FVC in $G-X$. Additionally, for any
$Y\subseteq E(G)$ we have that $(C,F)$ is a FVC in $G-Y$.
We now present one of the main concepts for this work. An _antler_ in a graph
$G$ is a FVC $(C,F)$ in $G$ such that
$|C|\leq\operatorname{\textsc{fvs}}(G[C\cup F])$. Then by 3.1 the set $C$ is a
minimum FVS in $G[C\cup F]$ and no cycle in $G-C$ contains a vertex from $F$.
We observe:
###### Observation 3.3.
If $(C,F)$ is an antler in $G$, then
$\operatorname{\textsc{fvs}}(G)=|C|+\operatorname{\textsc{fvs}}(G-(C\cup F))$.
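Observation 3.3 can be verified by brute force on a small instance. The sketch below is our own illustration; the six-vertex graph and the antler (C, F) = ({c}, {f1, f2}) are made up for the test, and fvs is computed by trying all vertex subsets.

```python
from itertools import combinations

def is_acyclic(G, removed=frozenset()):
    """DFS cycle test on the simple graph G (dict: vertex -> neighbour
    set) with the vertices in `removed` deleted."""
    seen = set()
    def dfs(u, par):
        seen.add(u)
        for w in G[u]:
            if w in removed or w == par:
                continue
            if w in seen or not dfs(w, u):
                return False            # back edge: a cycle
        return True
    return all(dfs(s, None) for s in G if s not in removed and s not in seen)

def fvs(G, removed=frozenset()):
    """Minimum size of a feedback vertex set of G - removed, brute force."""
    rest = [v for v in G if v not in removed]
    for k in range(len(rest) + 1):
        if any(is_acyclic(G, set(removed) | set(S))
               for S in combinations(rest, k)):
            return k

# Two triangles c-f1-f2 and x-y-z joined by the edge c-x; ({c}, {f1, f2})
# is an antler: f1-f2 is a tree with no edge leaving to {x, y, z}.
G = {'c': {'f1', 'f2', 'x'}, 'f1': {'c', 'f2'}, 'f2': {'c', 'f1'},
     'x': {'c', 'y', 'z'}, 'y': {'x', 'z'}, 'z': {'x', 'y'}}
C, F = {'c'}, {'f1', 'f2'}
# Observation 3.3: fvs(G) = |C| + fvs(G - (C u F)).
assert fvs(G) == len(C) + fvs(G, removed=C | F) == 2
```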
For a graph $G$ and vertex set $C\subseteq V(G)$, a _$C$ -certificate_ is a
subgraph $H$ of $G$ such that $C$ is a minimum FVS in $H$. We say a
$C$-certificate has _order_ $z$ if for each component $H^{\prime}$ of $H$ we
have $\operatorname{\textsc{fvs}}(H^{\prime})=|C\cap V(H^{\prime})|\leq z$.
For an integer $z\geq 0$, a $z$-antler in $G$ is an antler $(C,F)$ in $G$ such
that $G[C\cup F]$ contains a $C$-certificate of order $z$. Note that a
$0$-antler has width $0$.
###### Observation 3.4.
If $(C,F)$ is a $z$-antler in $G$ for some $z\geq 0$, then for any $X\subseteq
C$, we have that $(C\setminus X,F)$ is a $z$-antler in $G-X$.
While antlers may intersect in non-trivial ways, the following proposition
relates the sizes of the cross-intersections.
###### Proposition 3.5.
If $(C_{1},F_{1})$ and $(C_{2},F_{2})$ are antlers in $G$, then $|C_{1}\cap
F_{2}|=|C_{2}\cap F_{1}|$.
###### Proof 3.6.
We show $\operatorname{\textsc{fvs}}(G[F_{1}\cup F_{2}])=|C_{1}\cap F_{2}|$.
First we show $\operatorname{\textsc{fvs}}(G[F_{1}\cup F_{2}])\geq|C_{1}\cap
F_{2}|$ by showing $(C_{1}\cap F_{2},F_{1})$ is an antler in $G[F_{1}\cup
F_{2}]$. Clearly $(C_{1},F_{1})$ is an antler in $G[F_{1}\cup F_{2}\cup
C_{1}]$, so then by 3.4 $(C_{1}\cap F_{2},F_{1})$ is an antler in $G[F_{1}\cup
F_{2}\cup C_{1}]-(C_{1}\setminus F_{2})=G[F_{1}\cup F_{2}]$.
Second we show $\operatorname{\textsc{fvs}}(G[F_{1}\cup F_{2}])\leq|C_{1}\cap
F_{2}|$ by showing $G[F_{1}\cup F_{2}]-(C_{1}\cap F_{2})$ is acyclic. Note
that $G[F_{1}\cup F_{2}]-(C_{1}\cap F_{2})=G[F_{1}\cup F_{2}]-C_{1}$. Suppose
$G[F_{1}\cup F_{2}]-C_{1}$ contains a cycle. We know this cycle does not
contain a vertex from $C_{1}$, however it does contain at least one vertex
from $F_{1}$ since otherwise this cycle exists in $G[F_{2}]$ which is a
forest. We know from 3.1 that any cycle in $G$ containing a vertex from
$F_{1}$ also contains a vertex from $C_{1}$. Contradiction. The proof for
$\operatorname{\textsc{fvs}}(G[F_{1}\cup F_{2}])=|C_{2}\cap F_{1}|$ is
symmetric. It follows that $|C_{1}\cap
F_{2}|=\operatorname{\textsc{fvs}}(G[F_{1}\cup F_{2}])=|C_{2}\cap F_{1}|$.
Lemma 3.7 shows that what remains of a $z$-antler $(C_{1},F_{1})$ when
removing a different antler $(C_{2},F_{2})$, again forms a smaller $z$-antler.
We will rely on this lemma repeatedly to ensure that after having found and
removed an incomplete fragment of a width-$k$ $z$-antler, the remainder of
that antler persists as a $z$-antler to be found later.
###### Lemma 3.7.
For any integer $z\geq 0$, if a graph $G$ has a $z$-antler $(C_{1},F_{1})$ and
another antler $(C_{2},F_{2})$, then $(C_{1}\setminus(C_{2}\cup
F_{2}),F_{1}\setminus(C_{2}\cup F_{2}))$ is a $z$-antler in $G-(C_{2}\cup
F_{2})$.
Before we prove Lemma 3.7, we prove a weaker claim:
###### Proposition 3.8.
If $(C_{1},F_{1})$ and $(C_{2},F_{2})$ are antlers in $G$, then
$(C_{1}\setminus(C_{2}\cup F_{2}),F_{1}\setminus(C_{2}\cup F_{2}))$ is an
antler in $G-(C_{2}\cup F_{2})$.
###### Proof 3.9.
For brevity let $C_{1}^{\prime}:=C_{1}\setminus(C_{2}\cup F_{2})$ and
$F_{1}^{\prime}:=F_{1}\setminus(C_{2}\cup F_{2})$ and
$G^{\prime}:=G-(C_{2}\cup F_{2})$. First note that
$(C_{1}^{\prime},F_{1}^{\prime})$ is a FVC in $G^{\prime}$ by 3.2. We proceed
to show that $\operatorname{\textsc{fvs}}(G^{\prime}[C_{1}^{\prime}\cup
F_{1}^{\prime}])\geq|C_{1}^{\prime}|$. By 3.4 $(\emptyset,F_{2})$ is an antler
in $G-C_{2}$, so then by 3.2 we have $(\emptyset,F_{2}\cap(C_{1}\cup F_{1}))$
is a FVC in $G[C_{1}\cup F_{1}]-C_{2}$. Since a FVC of width $0$ is an antler
we can apply 3.3 and obtain $\operatorname{\textsc{fvs}}(G[C_{1}\cup
F_{1}]-C_{2})=\operatorname{\textsc{fvs}}(G[C_{1}\cup F_{1}]-(C_{2}\cup
F_{2}))=\operatorname{\textsc{fvs}}(G^{\prime}[C_{1}^{\prime}\cup
F_{1}^{\prime}])$. We derive
$$\begin{aligned}
\operatorname{\textsc{fvs}}(G^{\prime}[C_{1}^{\prime}\cup F_{1}^{\prime}])&=\operatorname{\textsc{fvs}}(G[C_{1}\cup F_{1}]-C_{2})\\
&\geq\operatorname{\textsc{fvs}}(G[C_{1}\cup F_{1}])-|C_{2}\cap(C_{1}\cup F_{1})|\\
&=|C_{1}|-|C_{2}\cap C_{1}|-|C_{2}\cap F_{1}|&&\text{since $C_{1}\cap F_{1}=\emptyset$}\\
&=|C_{1}|-|C_{2}\cap C_{1}|-|C_{1}\cap F_{2}|&&\text{by Proposition 3.5}\\
&=|C_{1}|-|(C_{2}\cap C_{1})\cup(C_{1}\cap F_{2})|&&\text{since $C_{2}\cap F_{2}=\emptyset$}\\
&=|C_{1}\setminus(C_{2}\cup F_{2})|=|C_{1}^{\prime}|.
\end{aligned}$$
We can now prove Lemma 3.7.
###### Proof 3.10.
For brevity let $C_{1}^{\prime}:=C_{1}\setminus(C_{2}\cup F_{2})$ and
$F_{1}^{\prime}:=F_{1}\setminus(C_{2}\cup F_{2})$ and
$G^{\prime}:=G-(C_{2}\cup F_{2})$. By Proposition 3.8 we know
$(C_{1}^{\prime},F_{1}^{\prime})$ is an antler, so it remains to show that
$G^{\prime}[C_{1}^{\prime}\cup F_{1}^{\prime}]$ contains a
$C_{1}^{\prime}$-certificate of order $z$. Since $(C_{1},F_{1})$ is a
$z$-antler in $G$, we have that $G[C_{1}\cup F_{1}]$ contains a
$C_{1}$-certificate of order $z$. Let $H$ denote this $C_{1}$-certificate and
let $\overline{H}$ be the set of all edges and vertices in
$G^{\prime}[C_{1}^{\prime}\cup F_{1}^{\prime}]$ that are not in $H$. Now
$(C_{1},F_{1})$ is a $z$-antler in $G^{\prime\prime}:=G-\overline{H}$ since it
is a FVC by 3.2 and $G^{\prime\prime}[C_{1}\cup F_{1}]$ contains a
$C_{1}$-certificate of order $z$ since $H$ is a subgraph of
$G^{\prime\prime}$. Note that $(C_{2},F_{2})$ is also an antler in
$G^{\prime\prime}$ since $\overline{H}$ does not contain vertices or edges
from $G[C_{2}\cup F_{2}]$. It follows that $(C_{1}^{\prime},F_{1}^{\prime})$
is an antler in $G^{\prime\prime}$ by Proposition 3.8, so
$G^{\prime\prime}[C_{1}^{\prime}\cup F_{1}^{\prime}]$ is a
$C_{1}^{\prime}$-certificate in $G^{\prime\prime}$. Clearly this is a
$C_{1}^{\prime}$-certificate of order $z$ since
$G^{\prime\prime}[C_{1}^{\prime}\cup F_{1}^{\prime}]$ is a subgraph of $H$.
Since $G^{\prime\prime}[C_{1}^{\prime}\cup F_{1}^{\prime}]$ is a subgraph of
$G^{\prime}[C_{1}^{\prime}\cup F_{1}^{\prime}]$ it follows that
$G^{\prime}[C_{1}^{\prime}\cup F_{1}^{\prime}]$ contains a
$C_{1}^{\prime}$-certificate of order $z$.
Lemma 3.11 shows that we can consider consecutive removal of two $z$-antlers
as the removal of a single $z$-antler.
###### Lemma 3.11.
For any integer $z\geq 0$, if a graph $G$ has a $z$-antler $(C_{1},F_{1})$ and
$G-(C_{1}\cup F_{1})$ has a $z$-antler $(C_{2},F_{2})$ then $(C_{1}\cup
C_{2},F_{1}\cup F_{2})$ is a $z$-antler in $G$.
###### Proof 3.12.
Since $(C_{1},F_{1})$ is a $z$-antler in $G$ we know $G[C_{1}\cup F_{1}]$
contains a $C_{1}$-certificate of order $z$, similarly $(G-(C_{1}\cup
F_{1}))[C_{2}\cup F_{2}]$ contains a $C_{2}$-certificate of order $z$. The
union of these certificates forms a $(C_{1}\cup C_{2})$-certificate of order
$z$ in $G[C_{1}\cup C_{2}\cup F_{1}\cup F_{2}]$. It remains to show that
$(C_{1}\cup C_{2},F_{1}\cup F_{2})$ is a FVC in $G$.
First we show $G[F_{1}\cup F_{2}]$ is acyclic. Suppose for contradiction that
$G[F_{1}\cup F_{2}]$ contains a cycle $\mathcal{C}$. Since $(C_{1},F_{1})$ is
a FVC in $G$, any cycle containing a vertex from $F_{1}$ also contains a
vertex from $C_{1}$, hence $\mathcal{C}$ does not contain vertices from
$F_{1}$. Therefore $\mathcal{C}$ can only contain vertices from $F_{2}$. This
is a contradiction with the fact that $G[F_{2}]$ is acyclic.
Finally we show that for each tree $T$ in $G[F_{1}\cup F_{2}]$ we have
$e(T,G-(C_{1}\cup C_{2}\cup F_{1}\cup F_{2}))\leq 1$. If $V(T)\subseteq F_{2}$
this follows directly from the fact that $(C_{2},F_{2})$ is a FVC in
$G-(C_{1}\cup F_{1})$. Similarly if $V(T)\subseteq F_{1}$ this follows
directly from the fact that $(C_{1},F_{1})$ is a FVC in $G$. So suppose $T$ is
a tree that contains vertices from both $F_{1}$ and $F_{2}$. Since $T$ is
connected, each tree in $T[F_{1}]$ contains a neighbor of a vertex in a tree
in $T[F_{2}]$. Hence no tree in $T[F_{1}]$ contains a neighbor of
$V(G-(C_{1}\cup C_{2}\cup F_{1}\cup F_{2}))$, so $e(V(T)\cap
F_{1},G-(C_{1}\cup C_{2}\cup F_{1}\cup F_{2}))=0$. To complete the proof we
show $e(V(T)\cap F_{2},G-(C_{1}\cup C_{2}\cup F_{1}\cup F_{2}))\leq 1$. Recall
each tree in $G[F_{2}]$ has at most $1$ edge to $G-(C_{1}\cup C_{2}\cup
F_{1}\cup F_{2})$, so it suffices to show that $T[F_{2}]$ is connected.
Suppose $T[F_{2}]$ is not connected, then let $u,v\in F_{2}$ be vertices from
different components of $T[F_{2}]$. Since $T$ is connected, there is a path
from $u$ to $v$. This path must use a vertex $w\in V(T-F_{2})\subseteq F_{1}$.
Let $T^{\prime}$ denote the tree in $T[F_{1}]$ that contains this vertex.
Since $(C_{1},F_{1})$ is a FVC in $G$ we have that $e(T^{\prime},F_{2})\leq
e(T^{\prime},G-(C_{1}\cup F_{1}))\leq 1$ hence no vertex in $T^{\prime}$ can
be part of a path from $u$ to $v$ in $T$. This contradicts our choice of
$T^{\prime}$.
The last structural property of antlers, given in Lemma 3.15, derives a bound
on the number of trees of a forest $G[F]$ needed to witness that $C$ is an
optimal FVS of $G[C\cup F]$. Lemma 3.15 is a corollary to the following lemma.
###### Lemma 3.13.
If a graph $G$ contains a $C$-certificate $H$ of order $z\geq 0$ for some
$C\subseteq V(G)$, then $H$ contains a $C$-certificate $\hat{H}$ of order $z$
such that $\hat{H}-C$ has at most $\frac{|C|}{2}(z^{2}+2z-1)$ trees.
###### Proof 3.14.
Consider a tree $T$ in $H-C$, we show that
$\operatorname{\textsc{fvs}}(H-V(T))=\operatorname{\textsc{fvs}}(H)$ if
1. 1.
for all $v\in C$ such that $H[V(T)\cup\\{v\\}]$ has a cycle, $H-V(T)$ contains
an order-$z$ $v$-flower, and
2. 2.
for all $\\{u,v\\}\in\binom{N_{H}(T)}{2}$ there are at least $z+1$ other trees
in $H-C$ adjacent to $u$ and $v$.
Consider the component $H^{\prime}$ of $H$ that contains $T$. It suffices to
show that
$\operatorname{\textsc{fvs}}(H^{\prime}-V(T))=\operatorname{\textsc{fvs}}(H^{\prime})$.
Clearly
$\operatorname{\textsc{fvs}}(H^{\prime}-V(T))\leq\operatorname{\textsc{fvs}}(H^{\prime})$
so it remains to show that
$\operatorname{\textsc{fvs}}(H^{\prime}-V(T))\geq\operatorname{\textsc{fvs}}(H^{\prime})$.
Assume
$\operatorname{\textsc{fvs}}(H^{\prime}-V(T))<\operatorname{\textsc{fvs}}(H^{\prime})$,
then let $X$ be a FVS in $H^{\prime}-V(T)$ with
$|X|<\operatorname{\textsc{fvs}}(H^{\prime})=|C\cap V(H^{\prime})|\leq z$. For
any $v\in C\cap V(H^{\prime})$ such that $H[V(T)\cup\\{v\\}]$ has a cycle we know
from condition 1 that $H^{\prime}-V(T)$ has $z>|X|$ cycles that intersect only
in $v$, hence $v\in X$. By condition 2, all but possibly one vertex in $N_{H}(T)$ must be contained in $X$: if there are two vertices $x,y\in N_{H}(T)\setminus X$ then $H-V(T)-X$ has at least $z+1-|X|\geq 2$ internally vertex-disjoint paths between $x$ and $y$, forming a cycle and contradicting our choice of $X$. Since there is at most one vertex $v\in N_{H}(T)\setminus X$ and $H[V(T)\cup\\{v\\}]$ does not have a cycle, we have that $H^{\prime}-X$ is acyclic, a contradiction since $|X|<\operatorname{\textsc{fvs}}(H^{\prime})$.
The desired $C$-certificate $\hat{H}$ can be obtained from $H$ by iteratively
removing trees from $H-C$ for which both conditions hold. We show that if no
such tree exists, then $H-C$ has at most $\frac{|C|}{2}(z^{2}+2z-1)$ trees.
Each tree $T$ for which condition 1 fails can be charged to a vertex $v\in C$ that witnesses this, i.e., $H[V(T)\cup\\{v\\}]$ has a cycle and there are at most $z$ trees $T^{\prime}$ in $H-C$ such that $H[V(T^{\prime})\cup\\{v\\}]$ has a cycle. Clearly each vertex $v\in C$ can be charged at most $z$ times, hence there are at most $z\cdot|C|$ trees violating condition 1. Similarly each tree $T$ for which condition 2 fails can be charged to a pair of vertices $\\{u,v\\}\in\binom{N_{H}(T)}{2}$ for which at most $z+1$ trees in $H-C$ (including $T$ itself) are adjacent to both $u$ and $v$. Clearly each pair of vertices can be charged at most
$z+1$ times. Additionally each pair consists of vertices from the same
component of $H$. Let $H_{1},\ldots,H_{\ell}$ be the components in $H$, then
there are at most $\sum_{1\leq i\leq\ell}\binom{|C\cap
V(H_{i})|}{2}=\sum_{1\leq i\leq\ell}\frac{1}{2}|C\cap V(H_{i})|(|C\cap
V(H_{i})|-1)\leq\sum_{1\leq i\leq\ell}\frac{1}{2}|C\cap
V(H_{i})|(z-1)=\frac{|C|}{2}(z-1)$ such pairs. Thus $H-C$ has at most $z\cdot|C|+(z+1)\cdot\frac{|C|}{2}(z-1)=\frac{|C|}{2}(z^{2}+2z-1)$ trees violating condition 1 or condition 2.
We can now give an upper bound on the number of trees in $G[F]$ required for a
$z$-antler $(C,F)$.
###### Lemma 3.15.
Let $(C,F)$ be a $z$-antler in a graph $G$ for some $z\geq 0$. There exists an
$F^{\prime}\subseteq F$ such that $(C,F^{\prime})$ is a $z$-antler in $G$ and
$G[F^{\prime}]$ has at most $\frac{|C|}{2}(z^{2}+2z-1)$ trees.
###### Proof 3.16.
Since $(C,F)$ is a $z$-antler, $G[C\cup F]$ contains a $C$-certificate $H$ of
order $z$ and by Lemma 3.13 we know $H$ contains a $C$-certificate $\hat{H}$
of order $z$ such that $\hat{H}-C$ has at most $\frac{|C|}{2}(z^{2}+2z-1)$
components. Take $F^{\prime}:=V(\hat{H}-C)$; then $G[F^{\prime}]$ has at most $\frac{|C|}{2}(z^{2}+2z-1)$ trees and $\hat{H}$ is a subgraph of $G[C\cup F^{\prime}]$, meaning $(C,F^{\prime})$ is a $z$-antler.
## 4 Finding Feedback Vertex Cuts
As described in Section 1, our algorithm to identify vertices in antlers uses
color coding. To allow a relatively small family of colorings to identify an
entire antler structure $(C,F)$ with $|C|\leq k$, we need to bound $|F|$ in
terms of $k$ as well. We therefore use several graph reduction steps. In this
section, we show that if there is a width-$k$ antler whose forest $F$ is
significantly larger than $k$, then we can identify a reducible structure in
the graph. To identify a reducible structure we will also use color coding. In
Section 5 we show how to reduce such a structure while preserving antlers and
optimal feedback vertex sets.
Define the function $f_{r}\colon\mathbb{N}\to\mathbb{N}$ as
$f_{r}(x)=2x^{3}+3x^{2}-x$. We say a FVC $(C,F)$ is _reducible_ if
$|F|>f_{r}(|C|)$, and we say $(C,F)$ is a _single-tree_ FVC if $G[F]$ is
connected.
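For illustration (not from the paper), the threshold and the reducibility test translate directly to code:

```python
def f_r(x):
    """The threshold function from Section 4: f_r(x) = 2x^3 + 3x^2 - x."""
    return 2 * x ** 3 + 3 * x ** 2 - x

def is_reducible(C, F):
    """A FVC (C, F) is reducible when |F| > f_r(|C|)."""
    return len(F) > f_r(len(C))
```

For example $f_{r}(1)=4$, so a width-$1$ FVC is reducible only once its forest has at least $5$ vertices.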
###### Definition 4.1.
A FVC $(C,F)$ is _simple_ if $|F|\leq 2f_{r}(|C|)$ and one of the following
holds: (a) $G[F]$ is connected, or (b) all trees in $G[F]$ have a common
neighbor $v$ and there exists a single-tree FVC $(C,F_{2})$ with $v\in
F_{2}\setminus F$ and $F\subseteq F_{2}$.
The algorithm we will present can identify a reducible FVC when it is also
simple. First we show that such a FVC always exists when the graph contains a
single-tree reducible FVC.
###### Lemma 4.2.
If a graph $G$ contains a reducible single-tree FVC $(C,F)$ there exists a
simple reducible FVC $(C,F^{\prime})$ with $F^{\prime}\subseteq F$.
###### Proof 4.3.
We use induction on $|F|$. If $|F|\leq 2f_{r}(|C|)$ then $(C,F)$ is simple by
condition (a). Assume $|F|>2f_{r}(|C|)$. Since $(C,F)$ is a FVC and $G[F]$ is
connected there is at most one vertex $v\in F$ that has a neighbor in
$V(G)\setminus(C\cup F)$. If no such vertex exists, let $v\in F$ be an arbitrary vertex. Observe that $(C,F\setminus\\{v\\})$ is a FVC. Consider the
following cases:
* •
All trees in $G[F]-v$ contain at most $f_{r}(|C|)$ vertices. Let $F^{\prime}$
be the vertices of an inclusion minimal set of trees of $G[F]-v$ such that
$|F^{\prime}|>f_{r}(|C|)$. Clearly $|F^{\prime}|\leq 2f_{r}(|C|)$ since
otherwise the set is not inclusion minimal. Each tree in $G[F^{\prime}]$
contains a neighbor of $v$ and $F^{\prime}\subseteq F$, hence $(C,F^{\prime})$
is simple by condition (b), and $(C,F^{\prime})$ is reducible since
$|F^{\prime}|>f_{r}(|C|)$.
* •
There is a tree $T$ in $G[F]-v$ that contains more than $f_{r}(|C|)$ vertices.
Now $(C,V(T))$ is a single-tree reducible FVC with $|V(T)|<|F|$, so the
induction hypothesis applies.
We proceed to show how a simple reducible FVC can be found using color coding.
A vertex coloring of $G$ is a function $\chi\colon
V(G)\to\\{\mathsf{\dot{C}},\mathsf{\dot{F}}\\}$. We say a simple FVC $(C,F)$
is _properly colored_ by a coloring $\chi$ if
$F\subseteq\chi^{-1}(\mathsf{\dot{F}})$ and $C\cup
N_{G}(F)\subseteq\chi^{-1}(\mathsf{\dot{C}})$.
###### Lemma 4.4.
Given a graph $G$ and coloring $\chi$ of $G$ that properly colors a simple
reducible FVC $(C,F)$, a reducible FVC $(C^{\prime},F^{\prime})$ can be found
in $\mathcal{O}(n^{3})$ time.
###### Proof 4.5.
If $(C,F)$ is simple by condition (a), i.e., $G[F]$ is connected, it is easily
verified that we can find a reducible FVC in $\mathcal{O}(n^{3})$ time as
follows: Consider the set $\mathcal{T}$ of all trees in
$G[\chi^{-1}(\mathsf{\dot{F}})]$. For each tree $T\in\mathcal{T}$, if there is
a vertex $u\in N_{G}(T)$ such that $e(\\{u\\},T)=1$ take
$C^{\prime}:=N_{G}(T)\setminus\\{u\\}$, otherwise take $C^{\prime}:=N_{G}(T)$.
If $|V(T)|>f_{r}(|C^{\prime}|)$ return $(C^{\prime},V(T))$.
In the remainder of the proof we assume that $(C,F)$ is simple by condition
(b).
### Algorithm
For each $u\in\chi^{-1}_{V}(\mathsf{\dot{C}})$ consider the set $\mathcal{T}$
of all trees $T$ in $G[\chi^{-1}_{V}(\mathsf{\dot{F}})]$ such that
$e(\\{u\\},T)=1$. Let
$C^{\prime}\subseteq\chi^{-1}_{V}(\mathsf{\dot{C}})\setminus\\{u\\}$ be the
set of vertices (excluding $u$) with a neighbor in at least two trees in
$\mathcal{T}$ and let $\mathcal{T}_{1}$ be the set of trees $T\in\mathcal{T}$
for which $N_{G}(T)\subseteq C^{\prime}\cup\\{u\\}$. Now consider the set of
trees $\mathcal{T}_{2}=\mathcal{T}\setminus\mathcal{T}_{1}$ as a set of
objects for a 0-1 knapsack problem where we define for each
$T\in\mathcal{T}_{2}$ its weight as
$|N_{G}(T)\setminus(C^{\prime}\cup\\{u\\})|$ and its value as $|V(T)|$. Using
the dynamic programming algorithm [46] for the 0-1 knapsack problem we compute
for all $0\leq
b\leq|N_{G}(V(\mathcal{T}_{2}))\setminus(C^{\prime}\cup\\{u\\})|$ a set of
trees $\mathcal{T}_{2}^{b}\subseteq\mathcal{T}_{2}$ with a combined weight
$\sum_{T\in\mathcal{T}_{2}^{b}}|N_{G}(T)\setminus(C^{\prime}\cup\\{u\\})|\leq
b$ such that the combined value $\sum_{T\in\mathcal{T}_{2}^{b}}|V(T)|$ is
maximized. If for any such $b$ we have
$|V(\mathcal{T}_{1})|+|V(\mathcal{T}_{2}^{b})|>f_{r}(|C^{\prime}|+b)$ then
take $\hat{C}:=C^{\prime}\cup N_{G}(V(\mathcal{T}_{2}^{b}))\setminus\\{u\\}$
and $\hat{F}:=V(\mathcal{T}_{1})\cup V(\mathcal{T}_{2}^{b})$ and return
$(\hat{C},\hat{F})$.
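The tree-selection step is a textbook 0-1 knapsack dynamic program. A minimal sketch with hypothetical names, where `items` holds one `(weight, value)` pair per tree $T\in\mathcal{T}_{2}$ (weight $|N_{G}(T)\setminus(C^{\prime}\cup\\{u\\})|$, value $|V(T)|$):

```python
def best_trees_per_capacity(items):
    """0-1 knapsack table over the trees of T_2.  Returns best[b] = maximum
    total value with total weight at most b, for every 0 <= b <= total
    weight, together with one optimal subset (list of item indices) per b."""
    W = sum(w for w, _ in items)
    best = [0] * (W + 1)
    choice = [[] for _ in range(W + 1)]
    for i, (w, v) in enumerate(items):
        # Iterate capacities downwards so each item is used at most once (0-1,
        # not unbounded, knapsack).
        for b in range(W, w - 1, -1):
            if best[b - w] + v > best[b]:
                best[b] = best[b - w] + v
                choice[b] = choice[b - w] + [i]
    return best, choice

# Hypothetical weights/values for three trees.
best, choice = best_trees_per_capacity([(1, 4), (2, 7), (2, 5)])
```

The algorithm then scans all capacities $b$ and returns as soon as $|V(\mathcal{T}_{1})|$ plus the table value at $b$ exceeds $f_{r}(|C^{\prime}|+b)$.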
### Correctness
To show that $(\hat{C},\hat{F})$ is a FVC, first note that $G[\hat{F}]$ is a
forest. For each tree $T$ in this forest we have $e(T,\\{u\\})=1$ and
$N_{G}(T)\subseteq C^{\prime}\cup\\{u\\}\cup
N_{G}(V(\mathcal{T}_{2}^{b}))=\hat{C}\cup\\{u\\}$. It follows that
$e(T,G-(\hat{C}\cup\hat{F}))=e(T,\\{u\\})=1$. To show that $(\hat{C},\hat{F})$
is indeed reducible, observe that
$\sum_{T\in\mathcal{T}_{2}^{b}}|N_{G}(T)\setminus(C^{\prime}\cup\\{u\\})|=|\bigcup_{T\in\mathcal{T}_{2}^{b}}N_{G}(T)\setminus(C^{\prime}\cup\\{u\\})|$
since if two trees $T_{1},T_{2}\in\mathcal{T}_{2}^{b}$ have a common neighbor
that is not $u$, it must be in $C^{\prime}$ by definition, hence the
neighborhoods only intersect on $C^{\prime}\cup\\{u\\}$. We can now deduce
$|\hat{F}|=|V(\mathcal{T}_{1})|+|V(\mathcal{T}_{2}^{b})|>f_{r}(|C^{\prime}|+b)\geq
f_{r}(|C^{\prime}|+\sum_{T\in\mathcal{T}_{2}^{b}}|N_{G}(T)\setminus(C^{\prime}\cup\\{u\\})|)=f_{r}(|C^{\prime}\cup
N_{G}(V(\mathcal{T}_{2}^{b}))\setminus\\{u\\}|)=f_{r}(|\hat{C}|)$.
It remains to show that if $\chi$ properly colors a simple reducible FVC
$(C,F)$ then for some $u\in\chi^{-1}_{V}(\mathsf{\dot{C}})$ there exists a $b$
such that $|V(\mathcal{T}_{1})|+|V(\mathcal{T}_{2}^{b})|>f_{r}(|C^{\prime}|+b)$. Recall that we assumed $(C,F)$ is simple by condition
(b), i.e., all trees in $G[F]$ have a common neighbor $v$ and there exists a
single-tree FVC $(C,F_{2})$ with $v\in F_{2}\setminus F$ and $F\subseteq
F_{2}$. Since $(C,F)$ is properly colored we know
$v\in\chi^{-1}(\mathsf{\dot{C}})$, so in some iteration we will have $u=v$.
Consider the sets $\mathcal{T}$, $\mathcal{T}_{1}$, $\mathcal{T}_{2}$, and
$C^{\prime}$ as defined in this iteration. We first show $C^{\prime}\subseteq
C$. If $w\in C^{\prime}$ then $w$ has a neighbor in two trees in
$\mathcal{T}$. This means there are two internally vertex disjoint paths
between $v$ and $w$, forming a cycle. Since $v\in F_{2}$ we have by 3.1 for
the FVC $(C,F_{2})$ that this cycle must contain a vertex in $C$ which is
therefore different from $v$. Recall that $(C,F)$ is properly colored, hence
all vertices in $C$ have color $\mathsf{\dot{C}}$. Note that the internal
vertices of these paths all have color $\mathsf{\dot{F}}$ because they are
vertices from trees in $G[\chi^{-1}_{V}(\mathsf{\dot{F}})]$. Hence $w\in C$
and therefore $C^{\prime}\subseteq C$. To complete the proof we show
###### Claim 1.
There exists a value $b$ such that
$|V(\mathcal{T}_{1})|+|V(\mathcal{T}_{2}^{b})|>f_{r}(|C^{\prime}|+b)$.
Recall that we assumed existence of a properly colored FVC $(C,F)$ that is
reducible and simple by condition (b) witnessed by the FVC $(C,F_{2})$.
Consider the set $\mathcal{T}^{\prime}$ of trees in $G[F]$. Note that any tree
$T^{\prime}$ in $\mathcal{T}^{\prime}$ is a tree in
$G[\chi^{-1}(\mathsf{\dot{F}})]$ since $(C,F)$ is properly colored and note
that $T^{\prime}$ contains a neighbor of $v$. If $e(T^{\prime},\\{v\\})>1$
then $G[F_{2}]$ contains a cycle, contradicting that $(C,F_{2})$ is a FVC in
$G$, hence $e(T^{\prime},\\{v\\})=1$. It follows that
$T^{\prime}\in\mathcal{T}$, meaning
$\mathcal{T^{\prime}}\subseteq\mathcal{T}$. Take
$\mathcal{T}^{\prime}_{2}=\mathcal{T}^{\prime}\setminus\mathcal{T}_{1}=\mathcal{T}^{\prime}\cap\mathcal{T}_{2}$
and
$b=\sum_{T\in\mathcal{T}^{\prime}_{2}}|N_{G}(V(T))\setminus(C^{\prime}\cup\\{v\\})|$.
Clearly $\mathcal{T}^{\prime}_{2}$ is a candidate solution for the 0-1
knapsack problem with capacity $b$, hence
$|V(\mathcal{T}_{2}^{b})|\geq|V(\mathcal{T}^{\prime}_{2})|$. We deduce
$$\begin{aligned}
|V(\mathcal{T}_{1})|+|V(\mathcal{T}_{2}^{b})|&\geq|V(\mathcal{T}_{1})|+|V(\mathcal{T}^{\prime}_{2})|\geq|V(\mathcal{T}^{\prime})|=|F|\\
&>f_{r}(|C|)&&\text{since $(C,F)$ is reducible}\\
&=f_{r}(|C^{\prime}\cup C|)&&\text{since $C^{\prime}\subseteq C$}\\
&=f_{r}(|C^{\prime}\cup(N_{G}(\mathcal{T}^{\prime}_{2})\setminus\\{v\\})\cup C|)&&\text{since $N_{G}(\mathcal{T}^{\prime}_{2})\setminus\\{v\\}\subseteq C$}\\
&\geq f_{r}(|C^{\prime}\cup(N_{G}(\mathcal{T}^{\prime}_{2})\setminus\\{v\\})|)&&\text{since $f_{r}$ is non-decreasing}\\
&=f_{r}(|C^{\prime}|+|N_{G}(\mathcal{T}^{\prime}_{2})\setminus(C^{\prime}\cup\\{v\\})|)&&\text{since $|A\cup B|=|A|+|B\setminus A|$}\\
&=f_{r}(|C^{\prime}|+b)&&\text{since distinct trees in $\mathcal{T}^{\prime}_{2}$ share no neighbors outside $C^{\prime}\cup\\{v\\}$.}
\end{aligned}$$
### Running time
For each $u\in\chi^{-1}_{V}(\mathsf{\dot{C}})$ we perform a number of
$\mathcal{O}(n+m)$ time operations and run the dynamic programming algorithm
for a problem with $\mathcal{O}(n)$ objects and a capacity of $\mathcal{O}(n)$
yielding a run time of $\mathcal{O}(n^{2})$ for each $u$ or
$\mathcal{O}(n^{3})$ for the algorithm as a whole.
It can be shown that whether a FVC of width $k$ is properly colored is
determined by at most $1+k+2f_{r}(k)=\mathcal{O}(k^{3})$ relevant vertices. By
creating an $(n,\mathcal{O}(k^{3}))$-universal set for $V(G)$ using Theorem
2.1, we can obtain in $2^{\mathcal{O}(k^{3})}\cdot n\log n$ time a set of
$2^{\mathcal{O}(k^{3})}\cdot\log n$ colorings that contains a coloring for
each possible assignment of colors for these relevant vertices. By applying
Lemma 4.4 for each coloring we obtain the following lemma:
###### Lemma 4.6.
There exists an algorithm that, given a graph $G$ and an integer $k$, outputs
a (possibly empty) FVC $(C,F)$ in $G$. If $G$ contains a reducible single-tree
FVC of width at most $k$ then $(C,F)$ is reducible. The algorithm runs in time
$2^{\mathcal{O}(k^{3})}\cdot n^{3}\log n$.
###### Proof 4.7.
Take $s=2f_{r}(k)+k+1$. By Theorem 2.1 an $(n,s)$-universal set $\mathcal{U}$
for $V(G)$ of size $2^{\mathcal{O}(s)}\log n$ can be created in
$2^{\mathcal{O}(s)}n\log n$ time. For each $Q\in\mathcal{U}$ let $\chi_{Q}$ be
the coloring of $G$ with $\chi_{Q}^{-1}(\mathsf{\dot{C}})=Q$. Run the algorithm
from Lemma 4.4 on $\chi_{Q}$ for every $Q\in\mathcal{U}$ and return the first
reducible FVC. If no reducible FVC was found return $(\emptyset,\emptyset)$.
We obtain an overall run time of $2^{\mathcal{O}(s)}\cdot n^{3}\log
n=2^{\mathcal{O}(k^{3})}\cdot n^{3}\log n$.
To prove correctness assume $G$ contains a reducible single-tree FVC $(C,F)$
with $|C|\leq k$. By Lemma 4.2 we know $G$ contains a simple reducible FVC
$(C,F^{\prime})$. Coloring $\chi$ properly colors $(C,F^{\prime})$ if all
vertices in $F^{\prime}\cup C\cup N_{G}(F^{\prime})$ are assigned the correct
color. Hence at most $|F^{\prime}|+|C\cup N_{G}(F^{\prime})|\leq 2f_{r}(k)+k+1=s$
vertices need to have the correct color. By construction of $\mathcal{U}$,
there is a $Q\in\mathcal{U}$ such that $\chi_{Q}$ assigns the correct colors
to these vertices. Hence $\chi_{Q}$ properly colors $(C,F^{\prime})$ and by
Lemma 4.4 a reducible FVC is returned.
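For intuition, the driver in the proof above can be sketched as follows. This illustration enumerates all $2^{n}$ colorings, which is feasible only for tiny graphs; the actual algorithm draws colorings from an $(n,s)$-universal set instead. The subroutine `check` stands in for the algorithm of Lemma 4.4, and the names are ours:

```python
from itertools import product

def colorings(vertices):
    """Every assignment of the two colors 'C' and 'F' to the vertex set.
    An (n, s)-universal set as in Theorem 2.1 would replace this by only
    2^{O(s)} * log n colorings."""
    vertices = sorted(vertices)
    for colors in product("CF", repeat=len(vertices)):
        yield dict(zip(vertices, colors))

def first_reducible_fvc(adj, check):
    """Run the Lemma-4.4-style subroutine check(adj, chi) on every coloring;
    return the first reducible FVC found, else the empty FVC."""
    for chi in colorings(adj):
        result = check(adj, chi)
        if result is not None:
            return result
    return (set(), set())
```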
## 5 Reducing Feedback Vertex Cuts
We apply reduction operations inspired by [12, 45] on the subgraph $G[C\cup
F]$ for a FVC $(C,F)$ in $G$. We give five reduction operations and show at least
one is applicable if $|F|>f_{r}(|C|)$. The operations reduce the number of
vertices $v\in F$ with $\deg_{G}(v)<3$ or reduce $e(C,F)$. The following lemma
shows that this is sufficient to reduce the size of $F$.
###### Lemma 5.1.
Let $G$ be a multigraph with minimum degree at least $3$ and let $(C,F)$ be a
FVC in $G$. We have $|F|\leq e(C,F)$.
###### Proof 5.2.
We first show that the claim holds if $G[F]$ is a tree. For all $i\geq 0$ let
$V_{i}:=\\{v\in F\mid\operatorname{deg}_{G[F]}(v)=i\\}$. Note that since
$G[F]$ is connected, $V_{0}\neq\emptyset$ if and only if $|F|=1$, in which case the claim trivially holds, so suppose $V_{0}=\emptyset$. We first show $|V_{\geq 3}|<|V_{1}|$.
$$\begin{aligned}
2|E(G[F])|&=\sum_{v\in F}\operatorname{deg}_{G[F]}(v)\geq|V_{1}|+2|V_{2}|+3|V_{\geq 3}|\\
2|E(G[F])|&=2(|V(G[F])|-1)=2|V_{1}|+2|V_{2}|+2|V_{\geq 3}|-2
\end{aligned}$$
We obtain $|V_{1}|+2|V_{2}|+3|V_{\geq 3}|\leq 2|V_{1}|+2|V_{2}|+2|V_{\geq 3}|-2$, hence $|V_{\geq 3}|<|V_{1}|$. Since all vertices in $F$ have degree at least $3$ in $G$, we get $e(V(G)\setminus F,F)\geq 2|V_{1}|+|V_{2}|>|V_{1}|+|V_{2}|+|V_{\geq 3}|=|F|$. By definition of a FVC there is at most one vertex in $F$ that has an edge to $V(G)\setminus(C\cup F)$; all other edges leaving $F$ go to $C$. We obtain $1+e(C,F)>|F|$, hence $e(C,F)\geq|F|$.
If $G[F]$ is a forest, then let $F_{1},\ldots,F_{\ell}$ be the vertex sets for
each tree in $G[F]$. Since $(C,F_{i})$ is a FVC in $G$ for all $1\leq
i\leq\ell$, we know $e(C,F_{i})\geq|F_{i}|$ for all $1\leq i\leq\ell$, and
since $F_{1},\ldots,F_{\ell}$ is a partition of $F$ we conclude
$e(C,F)=\sum_{1\leq i\leq\ell}e(C,F_{i})\geq\sum_{1\leq
i\leq\ell}|F_{i}|=|F|$.
Next, we give the reduction operations. These operations apply to a graph $G$
and yield a new graph $G^{\prime}$ and vertex set $S\subseteq V(G)\setminus
V(G^{\prime})$. An operation is _FVS-safe_ if for any minimum feedback vertex
set $S^{\prime}$ of $G^{\prime}$, the set $S\cup S^{\prime}$ is a minimum
feedback vertex set of $G$. An operation is _antler-safe_ if for all $z\geq 0$
and any $z$-antler $(C,F)$ in $G$, there exists a $z$-antler
$(C^{\prime},F^{\prime})$ in $G^{\prime}$ with $C^{\prime}\cup
F^{\prime}=(C\cup F)\cap V(G^{\prime})$ and $|C^{\prime}|=|C|-|(C\cup F)\cap
S|$.
###### Operation 1.
If $u,v\in V(G)$ are connected by more than two edges, remove all but two of
these edges to obtain $G^{\prime}$ and take $S:=\emptyset$.
###### Operation 2.
If $v\in V(G)$ has degree exactly $2$ and no self-loop, obtain $G^{\prime}$ by
removing $v$ from $G$ and adding an edge $e$ with $\iota(e)=N_{G}(v)$. Take
$S:=\emptyset$.
Operations 1 and 2 are well established and FVS-safe. Additionally Operation 1
can easily be seen to be antler-safe. To see that Operation 2 is antler-safe,
consider a $z$-antler $(C,F)$ in $G$ for some $z\geq 0$. If $v\not\in C$ it is
easily verified that $(C,F\setminus\\{v\\})$ is a $z$-antler in $G^{\prime}$.
If $v\in C$ pick a vertex $u\in N_{G}(v)\cap F$ and observe that $(\\{u\\}\cup
C\setminus\\{v\\},F\setminus\\{u\\})$ is a $z$-antler in $G^{\prime}$.
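Both operations are easy to state on an explicit multigraph representation. In the sketch below (illustrative; the encoding is ours, not the paper's) a multigraph is a `Counter` mapping each edge, encoded as a `frozenset` of its endpoints, to its multiplicity; a self-loop is a singleton `frozenset`.

```python
from collections import Counter

def op1_trim_parallel_edges(edges):
    """Operation 1: keep at most two copies of every parallel edge."""
    return Counter({e: min(m, 2) for e, m in edges.items()})

def op2_smooth_degree_two(edges, v):
    """Operation 2: remove a vertex v of degree exactly 2 without a
    self-loop and join its two neighbors by a new edge (which may become
    a parallel edge, or a self-loop if both neighbors coincide)."""
    incident = [e for e in edges.elements() if v in e]
    # Preconditions: exactly two incident edge instances, none a loop at v.
    assert len(incident) == 2 and all(len(e) == 2 for e in incident)
    (a,) = set(incident[0]) - {v}
    (b,) = set(incident[1]) - {v}
    out = edges.copy()
    for e in incident:
        out[e] -= 1
    out += Counter()                 # drop zero-multiplicity entries
    out[frozenset({a, b})] += 1      # iota(new edge) = N_G(v)
    return out
```

For example, smoothing vertex $3$ of the cycle $0$-$1$-$2$-$3$-$0$ yields the triangle on $\\{0,1,2\\}$.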
###### Operation 3.
If $(C,F)$ is an antler in $G$, then $G^{\prime}:=G-(C\cup F)$ and $S:=C$.
###### Lemma 5.3.
Operation 3 is FVS-safe and antler-safe.
###### Proof 5.4.
To show Operation 3 is FVS-safe, let $Z$ be a minimum FVS of $G^{\prime}$. Now
$(Z,V(G^{\prime}-Z))$ is an antler in $G^{\prime}=G-(C\cup F)$ so then $G$
contains the antler $(Z\cup C,V(G^{\prime}-Z)\cup F)=(Z\cup S,V(G-(Z\cup S)))$
by Lemma 3.11. It follows that $Z\cup S$ is a minimum FVS of $G$.
To show Operation 3 is antler-safe, let $z\geq 0$ and let $(\hat{C},\hat{F})$
be an arbitrary $z$-antler in $G$; then by Lemma 3.7 $(\hat{C}\setminus(C\cup F),\hat{F}\setminus(C\cup F))$ is a $z$-antler in $G^{\prime}=G-(C\cup F)$. We
deduce:
$$\begin{aligned}
|\hat{C}\setminus(C\cup F)|&=|\hat{C}|-|\hat{C}\cap C|-|\hat{C}\cap F|&&\text{since $C\cap F=\emptyset$}\\
&=|\hat{C}|-|\hat{C}\cap C|-|C\cap\hat{F}|&&\text{by Proposition 3.5}\\
&=|\hat{C}|-|(\hat{C}\cap C)\cup(C\cap\hat{F})|&&\text{since $\hat{C}\cap\hat{F}=\emptyset$}\\
&=|\hat{C}|-|(\hat{C}\cup\hat{F})\cap C|=|\hat{C}|-|(\hat{C}\cup\hat{F})\cap S|.
\end{aligned}$$
###### Operation 4.
If $(C,F)$ is a FVC in $G$ and for some $v\in C$ the graph $G[F\cup\\{v\\}]$
contains a $v$-flower of order $|C|+1$, then $G^{\prime}:=G-v$ and
$S:=\\{v\\}$.
###### Lemma 5.5.
Operation 4 is FVS-safe and antler-safe.
###### Proof 5.6.
We first show that any minimum FVS in $G$ contains $v$. Let $X$ be a minimum
FVS in $G$. If $v\not\in X$ then $|F\cap X|>|C|$ since $G[F\cup\\{v\\}]$
contains a $v$-flower of order $|C|+1$. Take $X^{\prime}:=C\cup(X\setminus
F)$, clearly $|X^{\prime}|<|X|$ so $G-X^{\prime}$ must contain a cycle since
$X$ was minimum. This cycle must contain a vertex from $X\setminus
X^{\prime}\subseteq F$, so by 3.1 this cycle must contain a vertex from $C$,
but $C\subseteq X^{\prime}$. Contradiction.
To show Operation 4 is FVS-safe, suppose $Z$ is a minimum FVS of
$G^{\prime}=G-v$. Clearly $Z\cup\\{v\\}$ is a FVS in $G$. To show that
$Z\cup\\{v\\}$ is minimum suppose $Z^{\prime}$ is a smaller FVS in $G$. We
know $v\in Z$ so $Z^{\prime}\setminus\\{v\\}$ is a FVS in $G-v$, but
$|Z^{\prime}\setminus\\{v\\}|<|Z|$ contradicting optimality of $Z$.
To show Operation 4 is antler-safe, suppose $(\hat{C},\hat{F})$ is a
$z$-antler in $G$ for some $z\geq 0$. We show
$(\hat{C}\setminus\\{v\\},\hat{F})$ is a $z$-antler in $G^{\prime}$. If
$v\in\hat{C}$ then this follows directly from 3.4, so suppose
$v\not\in\hat{C}$. Note that $v\in\hat{F}$ would contradict that any minimum
FVS in $G$ contains $v$ by 3.3. So
$G[\hat{C}\cup\hat{F}]=G^{\prime}[\hat{C}\cup\hat{F}]$ and
$(\hat{C}\setminus\\{v\\},\hat{F})=(\hat{C},\hat{F})$ is a FVC in
$G^{\prime}=G-v$ by 3.2, hence $(\hat{C}\setminus\\{v\\},\hat{F})$ is a
$z$-antler in $G^{\prime}$.
###### Operation 5.
If $(C,F)$ is a FVC in $G$, $v\in C$, and $X\subseteq F$ such that
$G[F\cup\\{v\\}]-X$ is acyclic, and if $T$ is a tree in $G[F]-X$ containing a
vertex $w\in N_{G}(v)$ such that for each $u\in N_{G}(T)\setminus\\{v\\}$
there are more than $|C|$ other trees $T^{\prime}\neq T$ in $G[F]-X$ for which
$\\{u,v\\}\subseteq N_{G}(T^{\prime})$, then take $S:=\emptyset$ and obtain
$G^{\prime}$ by removing the unique edge between $v$ and $w$, and adding
double-edges between $v$ and $u$ for all $u\in N_{G}(V(T))\setminus\\{v\\}$.
###### Lemma 5.7.
Operation 5 is FVS-safe and antler-safe.
###### Proof 5.8.
Suppose $(\hat{C},\hat{F})$ is a $z$-antler in $G$ for some $z\geq 0$. We
first prove the following claim:
###### Claim 2.
For all $u\in N_{G}(T)\setminus\\{v\\}$ we have $v\in\hat{F}\Rightarrow
u\in\hat{C}$ and $u\in\hat{F}\Rightarrow v\in\hat{C}$.
Each tree of $G[F]-X$ supplies a path between $u$ and $v$, hence there are
more than $|C|+1$ internally vertex-disjoint paths between $u$ and $v$.
Suppose $v\in\hat{F}$, we show $u\in\hat{C}$. The proof of the second
implication is symmetric. Suppose for contradiction that $u\not\in\hat{C}$.
All except possibly one of the disjoint paths between $u$ and $v$ must contain
a vertex in $\hat{C}$ by 3.1 since any two disjoint paths form a cycle
containing a vertex from $\hat{F}$. Let $Y\subseteq\hat{C}$ be the set of
vertices in $\hat{C}$ that are in a tree of $G[F]-X$ with neighbors of $u$ and
$v$, so $|Y|>|C|$. Then $|C\cup\hat{C}\setminus Y|<|\hat{C}|$, and we derive a
contradiction by showing $G[\hat{C}\cup\hat{F}]-(C\cup\hat{C}\setminus Y)$ is
acyclic. We know $Y\subseteq F$, so any cycle in $G$ containing a vertex from
$Y$ also contains a vertex from $C$ by 3.1. So if
$G[\hat{C}\cup\hat{F}]-(C\cup\hat{C}\setminus Y)$ contains a cycle, then so
does $G[\hat{C}\cup\hat{F}]-(C\cup\hat{C})$ which contradicts that $\hat{C}$
is a (minimum) FVS in $G[\hat{C}\cup\hat{F}]$ since $(\hat{C},\hat{F})$ is an
antler in $G$.
To prove Operation 5 is antler-safe, we show that $(\hat{C},\hat{F})$ is also
a $z$-antler in $G^{\prime}$. Suppose $v\not\in\hat{C}\cup\hat{F}$, then
$G[\hat{C}\cup\hat{F}]=G^{\prime}[\hat{C}\cup\hat{F}]$ as $G$ and $G^{\prime}$
only differ on edges incident to $v$. It remains to show that for each tree
$T^{\prime}$ in $G^{\prime}[\hat{F}]$ we have
$e(T^{\prime},G^{\prime}-(\hat{C}\cup\hat{F}))\leq 1$. Suppose $T^{\prime}$ is
a tree in $G^{\prime}[\hat{F}]$ with
$e(T^{\prime},G^{\prime}-(\hat{C}\cup\hat{F}))>1$. Since
$e(T^{\prime},G-(\hat{C}\cup\hat{F}))\leq 1$ we know that at least one of the
edges added between $v$ and some $u\in N_{G}(T)$ has an endpoint in
$V(T^{\prime})\subseteq\hat{F}$. Since $v\not\in\hat{F}$ we have
$u\in\hat{F}$, so $v\in\hat{C}$ by Claim 2, contradicting our assumption
$v\not\in\hat{C}\cup\hat{F}$.
Suppose $v\in\hat{C}\cup\hat{F}$, we first show that $(\hat{C},\hat{F})$ is a
FVC in $G^{\prime}$. If $v\in\hat{C}$ this is clearly the case, so suppose
$v\in\hat{F}$. From Claim 2 it follows that
$N_{G}(T)\setminus\\{v\\}\subseteq\hat{C}$, so all edges added in $G^{\prime}$
are incident to vertices in $\hat{C}$ hence $(\hat{C},\hat{F})$ is still a FVC
in $G^{\prime}$. We now show that $G^{\prime}[\hat{C}\cup\hat{F}]$ contains a
$\hat{C}$-certificate of order $z$. We know $G[\hat{C}\cup\hat{F}]$ contains
a $\hat{C}$-certificate of order $z$. Let $H$ be an arbitrary component of
this certificate. Take $Y:=V(H)\cap\hat{C}$, so $Y$ is a minimum FVS in $H$.
It suffices to show that $Y$ is also a minimum FVS of $G^{\prime}\cap H$. This
is easily seen to be true when $v\not\in V(H)$, so suppose $v\in V(H)$. First
we argue that $(G^{\prime}\cap H)-Y$ is acyclic. This is easily seen to be
true when $v\in Y$ since $G$ and $G^{\prime}$ only differ in edges incident to
$v$, so suppose $v\not\in Y$. Then $v\not\in\hat{C}$ hence $v\in\hat{F}$ and
by Claim 2 we have $N_{G}(T)\setminus\\{v\\}\subseteq\hat{C}$. It follows that
$V(H)\cap(N_{G}(T)\setminus\\{v\\})\subseteq Y$ so clearly $(G^{\prime}\cap
H)-Y$ is acyclic since all edges in $G^{\prime}\cap H$ that are not in $H$ are
incident to a vertex in $Y$.
To show $Y$ is a minimum FVS, suppose there is a FVS $Y^{\prime}$ of
$G^{\prime}\cap H$ with $|Y^{\prime}|<|Y|$. Since $H-v$ is a subgraph of
$G^{\prime}\cap H$ we know $H-(Y^{\prime}\cup\\{v\\})$ is acyclic, but since
$|Y^{\prime}|<|Y|=\operatorname{\textsc{fvs}}(H)$ we also know $H-Y^{\prime}$
contains a cycle. This cycle must contain the edge $\\{v,w\\}$ since otherwise
this cycle is also present in $(G^{\prime}\cap H)-Y^{\prime}$. Then there must
be some $u\in N_{G}(T)\setminus\\{v\\}$ on the cycle, so $u,v\not\in
Y^{\prime}$. But $G^{\prime}$ contains a double-edge between $u$ and $v$ so
$(G^{\prime}\cap H)-Y^{\prime}$ contains a cycle, contradicting that
$Y^{\prime}$ is a FVS in $G^{\prime}\cap H$.
We finally show Operation 5 is FVS-safe. Let $Z^{\prime}$ be a minimum FVS in
$G^{\prime}$, and suppose $Z^{\prime}$ is not a FVS in $G$. Then
$G-Z^{\prime}$ contains a cycle. This cycle contains the edge $\\{v,w\\}$
since otherwise $G^{\prime}-Z^{\prime}$ also contains this cycle. Since
$G^{\prime}$ contains double-edges between $v$ and all $u\in
N_{G}(T)\setminus\\{v\\}$ and $v\not\in Z^{\prime}$, it follows that
$N_{G}(T)\setminus\\{v\\}\subseteq Z^{\prime}$, but then no cycle in
$G-Z^{\prime}$ can intersect $T$ and $\\{v,w\\}$ is not part of a cycle in
$G-Z^{\prime}$. We conclude by contradiction that $Z^{\prime}$ is a FVS in
$G$. To prove optimality, consider a minimum FVS $Z$ in $G$ and observe that
$(Z,V(G-Z))$ is an antler in $G$. Since Operation 5 is antler-safe we know
$G^{\prime}$ contains an antler $(C^{\prime},F^{\prime})$ with $C^{\prime}\cup
F^{\prime}=(Z\cup V(G-Z))\cap V(G^{\prime})=V(G^{\prime})$ and
$|C^{\prime}|=|Z|-|(Z\cup V(G-Z))\cap S|=|Z|$. Since $C^{\prime}\cup
F^{\prime}=V(G^{\prime})$ we know $C^{\prime}$ is a FVS in $G^{\prime}$, hence
$\operatorname{\textsc{fvs}}(G^{\prime})\leq|C^{\prime}|=|Z|$, hence
$\operatorname{\textsc{fvs}}(G)=|Z|\geq\operatorname{\textsc{fvs}}(G^{\prime})=|Z^{\prime}|$.
Finally we show that when we are given a reducible FVC $(C,F)$ in $G$, then we
can find and apply an operation in $\mathcal{O}(n^{2})$ time. With a more
careful analysis better running time bounds can be shown, but this does not
affect the final running time of the main algorithm.
###### Lemma 5.9.
Given a graph $G$ and a reducible FVC $(C,F)$ in $G$, we can find and apply an
operation in $\mathcal{O}(n^{2})$ time.
###### Proof 5.10.
Note that if a vertex $v\in V(G)$ has a self-loop then $(\\{v\\},\emptyset)$
is an antler in $G$ and we can apply Operation 3. If a vertex $v$ has degree 0
or 1 then $(\emptyset,\\{v\\})$ is an antler in $G$. Hence Operations 1, 2,
and 3 can always be applied if the graph contains a self-loop, a vertex with
degree less than 3, or more than 2 edges between two
vertices. So assume $G$ is a graph with no self-loops, minimum degree at least
3, and at most two edges between any pair of vertices.
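The case analysis above can be checked mechanically. The following sketch detects which of the trivial situations applies; the edge-multiset representation and the function name are our assumptions for illustration, not notation from the paper.

```python
from collections import Counter

def trivial_reduction_witness(vertices, edges):
    """Return a witness for the easy cases above, or None.

    `edges` is a Counter over unordered pairs (a, b) with a <= b; a
    self-loop is stored as (v, v).  A self-loop v yields the antler
    ({v}, {}), a vertex of degree less than 3 is handled by the basic
    operations, and more than 2 parallel edges can be cut down to 2.
    """
    degree = Counter()
    for (a, b), mult in edges.items():
        if a == b:
            return ("self-loop", a)
        degree[a] += mult
        degree[b] += mult
    for v in sorted(vertices):
        if degree[v] < 3:
            return ("low-degree", v)
    for pair, mult in sorted(edges.items()):
        if mult > 2:
            return ("multi-edge", pair)
    return None
```

When this returns `None`, the graph has the normalized form assumed in the remainder of the proof: no self-loops, minimum degree at least 3, and edge multiplicity at most 2.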
By Lemma 5.1 we have $e(C,F)\geq|F|>2|C|^{3}+3|C|^{2}-|C|$ so then there must
be a vertex $v$ in $C$ with more than
$\frac{1}{|C|}\cdot(2|C|^{3}+3|C|^{2}-|C|)=2|C|^{2}+3|C|-1$ edges to $F$. By
Lemma 2.2 we can find a set $X\subseteq F$ such that $G[F\cup\\{v\\}]-X$ is
acyclic and $G[F\cup\\{v\\}]$ contains a $v$-flower of order $|X|$. Hence if
$|X|\geq|C|+1$ Operation 4 can be applied, so assume $|X|\leq|C|$. For each
$u\in X\cup C\setminus\\{v\\}$ that is not connected to $v$ by a double-edge,
mark up to $|C|+1$ trees $T^{\prime}$ in $G[F]-X$ for which
$\\{u,v\\}\subseteq N_{G}(T^{\prime})$. Note that we marked at most $(|C|+1)\cdot|X\cup
C\setminus\\{v\\}|\leq(|C|+1)\cdot(2|C|-1)=2|C|^{2}+|C|-1$ trees. Since $v$
has exactly one edge to each marked tree ($G[F\cup\\{v\\}]-X$ is acyclic) and
at most two edges to each vertex in $X$, this accounts for at most
$2|C|^{2}+3|C|-1$ edges from $v$ to $F$, so there must be at least one more
edge from $v$ to a vertex $w\in F$, hence Operation 5 applies.
It can easily be verified that all operations described can be performed in
$\mathcal{O}(n^{2})$ time.
## 6 Finding and Removing Antlers
We will find antlers using color coding, using coloring functions of the form
$\chi\colon V(G)\cup
E(G)\to\\{\mathsf{\dot{F}},\mathsf{\dot{C}},\mathsf{\dot{R}}\\}$. For all
$c\in\\{\mathsf{\dot{F}},\mathsf{\dot{C}},\mathsf{\dot{R}}\\}$ let
$\chi^{-1}_{V}(c)=\chi^{-1}(c)\cap V(G)$. For any integer $z\geq 0$, a
$z$-antler $(C,F)$ in a graph $G$ is _$z$ -properly colored_ by a coloring
$\chi$ if all of the following hold: (i)
$F\subseteq\chi^{-1}_{V}(\mathsf{\dot{F}})$, (ii)
$C\subseteq\chi^{-1}_{V}(\mathsf{\dot{C}})$, (iii) $N_{G}(F)\setminus
C\subseteq\chi^{-1}_{V}(\mathsf{\dot{R}})$, and (iv) $G[C\cup
F]-\chi^{-1}(\mathsf{\dot{R}})$ is a $C$-certificate of order $z$. Recall that
$\chi^{-1}(\mathsf{\dot{R}})$ can contain edges as well as vertices so for any
subgraph $H$ of $G$ the graph $H-\chi^{-1}(\mathsf{\dot{R}})$ is obtained from
$H$ by removing both vertices and edges. It can be seen that if $(C,F)$ is a
$z$-antler, then there exists a coloring that $z$-properly colors it. Consider
for example a coloring where a vertex $v$ is colored $\mathsf{\dot{C}}$ (resp.
$\mathsf{\dot{F}}$) if $v\in C$ (resp. $v\in F$), all other vertices are
colored $\mathsf{\dot{R}}$, and for some $C$-certificate $H$ of order $z$ in
$G[C\cup F]$ all edges in $H$ have color $\mathsf{\dot{F}}$ and all other
edges have color $\mathsf{\dot{R}}$. The property of a $z$-properly colored
antler described in Lemma 6.1 will be useful to prove correctness of the
color coding algorithm.
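For concreteness, conditions (i)-(iii) of a $z$-proper coloring can be verified directly from the vertex colors; the sketch below does so, deliberately omitting condition (iv) (the $C$-certificate, which requires computing feedback vertex numbers). The dict-of-sets graph encoding is our assumption for the example.

```python
def vertex_conditions_hold(G, vcol, C, F):
    """Check conditions (i)-(iii) of a z-properly colored antler (C, F):
    F is F-colored, C is C-colored, and N_G(F) minus C is R-colored.
    Condition (iv), the C-certificate, is deliberately not checked here.
    G maps each vertex to its set of neighbors; vcol maps vertices to
    'F', 'C' or 'R'."""
    if any(vcol[v] != 'F' for v in F):   # condition (i)
        return False
    if any(vcol[v] != 'C' for v in C):   # condition (ii)
        return False
    boundary = {w for v in F for w in G[v]} - F - C
    return all(vcol[w] == 'R' for w in boundary)   # condition (iii)
```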
###### Lemma 6.1.
For any $z\geq 0$, if a $z$-antler $(C,F)$ in graph $G$ is $z$-properly
colored by a coloring $\chi$ and $H$ is a component of $G[C\cup
F]-\chi^{-1}(\mathsf{\dot{R}})$ then each component $H^{\prime}$ of $H-C$ is a
component of $G[\chi^{-1}_{V}(\mathsf{\dot{F}})]-\chi^{-1}(\mathsf{\dot{R}})$
with $N_{G-\chi^{-1}(\mathsf{\dot{R}})}(H^{\prime})\subseteq C\cap V(H)$.
###### Proof 6.2.
Note that since $C\cap\chi^{-1}_{V}(\mathsf{\dot{F}})=\emptyset$ we have that
the statement $N_{G-\chi^{-1}(\mathsf{\dot{R}})}(H^{\prime})\subseteq C\cap
V(H)$ implies that
$N_{G[\chi^{-1}_{V}(\mathsf{\dot{F}})]-\chi^{-1}(\mathsf{\dot{R}})}(H^{\prime})=\emptyset$
and hence that $H^{\prime}$ is a component of
$G[\chi^{-1}_{V}(\mathsf{\dot{F}})]-\chi^{-1}(\mathsf{\dot{R}})$. We show
$N_{G-\chi^{-1}(\mathsf{\dot{R}})}(H^{\prime})\subseteq C\cap V(H)$.
Suppose $v\in N_{G-\chi^{-1}(\mathsf{\dot{R}})}(H^{\prime})$ and let $u\in
V(H^{\prime})$ be a neighbor of $v$ in $G-\chi^{-1}(\mathsf{\dot{R}})$. Since
$V(H^{\prime})\subseteq F$ we know $u\in F$. Since $(C,F)$ is $z$-properly
colored we also have $N_{G}(F)\setminus C\subseteq\chi^{-1}_{V}(\mathsf{\dot{R}})$, hence
$N_{G}(u)\subseteq C\cup F\cup\chi^{-1}(\mathsf{\dot{R}})$ so then
$N_{G-\chi^{-1}(\mathsf{\dot{R}})}(u)\subseteq C\cup F$. By choice of $u$ we have $v\in
N_{G-\chi^{-1}(\mathsf{\dot{R}})}(u)\subseteq C\cup F$. So since $u,v\in C\cup F$, and $u$ and
$v$ are neighbors in $G-\chi^{-1}(\mathsf{\dot{R}})$ we know $u$ and $v$ are
in the same component of $G[C\cup F]-\chi^{-1}(\mathsf{\dot{R}})$, hence $v\in
V(H)$.
Suppose $v\not\in C$, so $v\in F$. Since also $u\in F$ we know that $u$ and
$v$ are in the same component of $G[F]-\chi^{-1}(\mathsf{\dot{R}})$, so $v\in
H^{\prime}$, but then $v\not\in N_{G-\chi^{-1}(\mathsf{\dot{R}})}(H^{\prime})$
contradicting our choice of $v$. It follows that $v\in C$ hence $v\in C\cap
V(H)$. Since $v\in N_{G-\chi^{-1}(\mathsf{\dot{R}})}(H^{\prime})$ was
arbitrary $N_{G-\chi^{-1}(\mathsf{\dot{R}})}(H^{\prime})\subseteq C\cap V(H)$.
We now show that a $z$-antler can be obtained from a suitable coloring of the
graph.
###### Lemma 6.3 ($\bigstar$).
An $n^{\mathcal{O}(z)}$ time algorithm exists taking as input an integer $z\geq
0$, a graph $G$, and a coloring $\chi$ and producing as output a $z$-antler
$(C,F)$ in $G$, such that for any $z$-antler $(\hat{C},\hat{F})$ that is
$z$-properly colored by $\chi$ we have $\hat{C}\subseteq C$ and
$\hat{F}\subseteq F$.
###### Proof 6.4.
We define a function $W_{\chi}\colon 2^{\chi^{-1}_{V}(\mathsf{\dot{C}})}\to
2^{\chi^{-1}_{V}(\mathsf{\dot{F}})}$ as follows: for any
$C\subseteq\chi^{-1}_{V}(\mathsf{\dot{C}})$ let $W_{\chi}(C)$ denote the set
of all vertices that are in a component $H$ of
$G[\chi^{-1}_{V}(\mathsf{\dot{F}})]-\chi^{-1}(\mathsf{\dot{R}})$ for which
$N_{G-\chi^{-1}(\mathsf{\dot{R}})}(H)\subseteq C$. The algorithm we describe
updates the coloring $\chi$ and recolors any vertex or edge that is not part
of a $z$-properly colored antler to color $\mathsf{\dot{R}}$.
1.
Recolor all edges to color $\mathsf{\dot{R}}$ when one of its endpoints has
color $\mathsf{\dot{R}}$.
2.
For each component $H$ of $G[\chi^{-1}_{V}(\mathsf{\dot{F}})]$ we recolor all
vertices of $H$ and their incident edges to color $\mathsf{\dot{R}}$ if $H$ is
not a tree or $e(H,\chi^{-1}_{V}(\mathsf{\dot{R}}))>1$.
3.
For each subset $C\subseteq\chi^{-1}_{V}(\mathsf{\dot{C}})$ of size at most
$z$, mark all vertices in $C$ if $\operatorname{\textsc{fvs}}(G[C\cup
W_{\chi}(C)]-\chi^{-1}(\mathsf{\dot{R}}))=|C|$.
4.
If $\chi^{-1}_{V}(\mathsf{\dot{C}})$ contains unmarked vertices we recolor
them to color $\mathsf{\dot{R}}$, remove markings made in step 3 and repeat
from step 1.
5.
If all vertices in $\chi^{-1}_{V}(\mathsf{\dot{C}})$ are marked in step 3,
return $(\chi^{-1}_{V}(\mathsf{\dot{C}}),\chi^{-1}_{V}(\mathsf{\dot{F}}))$.
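Steps 3-5 revolve around the set $W_{\chi}(C)$. The following sketch computes it by scanning the components of the $\mathsf{\dot{F}}$-colored subgraph; the encoding of the graph and coloring (dicts with color strings `'F'`, `'C'`, `'R'`) is an assumption made for illustration.

```python
def W_chi(G, vcol, ecol, C):
    """Union of the components H of G[F-colored vertices] minus R-colored
    edges whose neighborhood in G - R (R-colored vertices and edges
    removed) is contained in C.  G maps vertices to neighbor sets; vcol
    and ecol assign 'F'/'C'/'R' to vertices and frozenset edges."""
    def alive(u, w):
        return ecol[frozenset((u, w))] != 'R'   # edge survives the coloring
    Fv = {v for v in G if vcol[v] == 'F'}
    seen, result = set(), set()
    for s in Fv:
        if s in seen:
            continue
        comp, stack, inside_C = {s}, [s], True
        while stack:                     # DFS inside the F-colored subgraph
            u = stack.pop()
            for w in G[u]:
                if not alive(u, w):
                    continue
                if w in Fv:
                    if w not in comp:
                        comp.add(w)
                        stack.append(w)
                elif vcol[w] != 'R' and w not in C:
                    inside_C = False     # neighbor in G - R outside C
        seen |= comp
        if inside_C:
            result |= comp
    return result
```

A component whose only outside attachments are $\mathsf{\dot{R}}$-colored (and hence deleted) trivially qualifies, matching the definition of $W_{\chi}$.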
### Running time
The algorithm will terminate after at most $n$ iterations since in every
iteration the number of vertices in $\chi^{-1}_{V}(\mathsf{\dot{R}})$
increases. Steps 1, 2, 4, and 5 can easily be seen to take no more than
$\mathcal{O}(n^{2})$ time. Step 3 can be performed in $\mathcal{O}(4^{z}\cdot
n^{z+1})$ time by checking for all $\mathcal{O}(n^{z})$ subsets
$C\subseteq\chi^{-1}_{V}(\mathsf{\dot{C}})$ of size at most $z$ whether the graph
$G[C\cup W_{\chi}(C)]-\chi^{-1}(\mathsf{\dot{R}})$ has feedback vertex number
exactly $|C|$. This can be done in time $\mathcal{O}(4^{z}\cdot n)$ [35]. Hence the
algorithm runs in time $n^{\mathcal{O}(z)}$.
### Correctness
We show that any $z$-properly colored antler prior to executing the algorithm
remains $z$-properly colored after termination and that in step 5,
$(\chi^{-1}_{V}(\mathsf{\dot{C}}),\chi^{-1}_{V}(\mathsf{\dot{F}}))$ is a
$z$-antler in $G$. Since
$(\chi^{-1}_{V}(\mathsf{\dot{C}}),\chi^{-1}_{V}(\mathsf{\dot{F}}))$ contains
all properly colored antlers this proves correctness.
###### Claim 3 ($\bigstar$).
All $z$-antlers $(\hat{C},\hat{F})$ that are $z$-properly colored by $\chi$
prior to executing the algorithm are also $z$-properly colored by $\chi$ after
termination of the algorithm.
To show the algorithm preserves properness of the coloring, we show that every
individual recoloring preserves properness, that is, if an arbitrary
$z$-antler is $z$-properly colored prior to the recoloring, it is also
$z$-properly colored after the recoloring.
Suppose an arbitrary $z$-antler $(\hat{C},\hat{F})$ is $z$-properly colored by
$\chi$. An edge is only recolored when one of its endpoints has color
$\mathsf{\dot{R}}$. Since such edges are not in $G[\hat{C}\cup\hat{F}]$, their
color does not affect whether $(\hat{C},\hat{F})$ is colored $z$-properly. All
other operations done by the algorithm are recolorings of vertices to color
$\mathsf{\dot{R}}$. We show that any time a vertex $v$ is recolored we have
that $v\not\in\hat{C}\cup\hat{F}$, meaning $(\hat{C},\hat{F})$ remains colored
$z$-properly.
Suppose $v$ is recolored in step 2, then we know $\chi(v)=\mathsf{\dot{F}}$,
and $v$ is part of a component $H$ of $G[\chi^{-1}_{V}(\mathsf{\dot{F}})]$.
Since $\chi$ $z$-properly colors $(\hat{C},\hat{F})$ we have
$\hat{F}\subseteq\chi^{-1}_{V}(\mathsf{\dot{F}})$ but
$N_{G}(\hat{F})\cap\chi^{-1}_{V}(\mathsf{\dot{F}})=\emptyset$, so since $H$ is
a component of $G[\chi^{-1}_{V}(\mathsf{\dot{F}})]$ we know either
$V(H)\subseteq\hat{F}$ or $V(H)\cap\hat{F}=\emptyset$. If
$V(H)\cap\hat{F}=\emptyset$ then clearly $v\not\in\hat{C}\cup\hat{F}$. So
suppose $V(H)\subseteq\hat{F}$, then $H$ is a tree in $G[\hat{F}]$. Since $v$
was recolored and $H$ is a tree it must be that
$e(H,\chi^{-1}_{V}(\mathsf{\dot{R}}))>1$, but this contradicts that
$(\hat{C},\hat{F})$ is a FVC.
Suppose $v$ is recolored in step 4, then we know $v$ was not marked during
step 3 and $\chi(v)=\mathsf{\dot{C}}$, so $v\not\in\hat{F}$. Suppose that
$v\in\hat{C}$. We derive a contradiction by showing that $v$ was marked in
step 3. Since $(\hat{C},\hat{F})$ is $z$-properly colored, we know that
$G[\hat{C}\cup\hat{F}]-\chi^{-1}(\mathsf{\dot{R}})$ is a $\hat{C}$-certificate
of order $z$, so if $H$ is the component of
$G[\hat{C}\cup\hat{F}]-\chi^{-1}(\mathsf{\dot{R}})$ containing $v$ then
$\operatorname{\textsc{fvs}}(H)=|\hat{C}\cap V(H)|\leq z$. Since $\hat{C}\cap
V(H)\subseteq\hat{C}\subseteq\chi^{-1}_{V}(\mathsf{\dot{C}})$ we know that in
some iteration in step 3 we have $C=\hat{C}\cap V(H)$. To show that $v$ was
marked, we show that $\operatorname{\textsc{fvs}}(G[C\cup
W_{\chi}(C)]-\chi^{-1}(\mathsf{\dot{R}}))=|C|$. We know
$G[W_{\chi}(C)]-\chi^{-1}(\mathsf{\dot{R}})$ is a forest since it is a
subgraph of $G[\chi^{-1}_{V}(\mathsf{\dot{F}})]$ which is a forest by step 2,
so we have that $\operatorname{\textsc{fvs}}(G[C\cup
W_{\chi}(C)]-\chi^{-1}(\mathsf{\dot{R}}))\leq|C|$. To show
$\operatorname{\textsc{fvs}}(G[C\cup
W_{\chi}(C)]-\chi^{-1}(\mathsf{\dot{R}}))\geq|C|$ we show that $H$ is a
subgraph of $G[C\cup W_{\chi}(C)]-\chi^{-1}(\mathsf{\dot{R}})$. By Lemma 6.1
we have that each component $H^{\prime}$ of $H-\hat{C}$ is also a component of
$G[\chi^{-1}_{V}(\mathsf{\dot{F}})]-\chi^{-1}(\mathsf{\dot{R}})$ with
$N_{G-\chi^{-1}(\mathsf{\dot{R}})}(H^{\prime})\subseteq\hat{C}\cap V(H)=C$.
Hence $V(H-\hat{C})=V(H-C)\subseteq W_{\chi}(C)$ so $H$ is a subgraph of
$G[C\cup W_{\chi}(C)]$. Since $H$ is also a subgraph of
$G[\hat{C}\cup\hat{F}]-\chi^{-1}(\mathsf{\dot{R}})$ we conclude that $H$ is a
subgraph of $G[C\cup W_{\chi}(C)]-\chi^{-1}(\mathsf{\dot{R}})$ and therefore
$\operatorname{\textsc{fvs}}(G[C\cup
W_{\chi}(C)]-\chi^{-1}(\mathsf{\dot{R}}))\geq\operatorname{\textsc{fvs}}(H)=|C|$.
###### Claim 4 ($\bigstar$).
In step 5, $(\chi^{-1}_{V}(\mathsf{\dot{C}}),\chi^{-1}_{V}(\mathsf{\dot{F}}))$
is a $z$-antler in $G$.
We know $(\chi^{-1}_{V}(\mathsf{\dot{C}}),\chi^{-1}_{V}(\mathsf{\dot{F}}))$ is
a FVC in $G$ because each component of $G[\chi^{-1}_{V}(\mathsf{\dot{F}})]$ is
a tree and has at most one edge to a vertex not in
$\chi^{-1}_{V}(\mathsf{\dot{C}})$ by step 2. It remains to show that
$G[\chi^{-1}_{V}(\mathsf{\dot{C}})\cup\chi^{-1}_{V}(\mathsf{\dot{F}})]$
contains a $\chi^{-1}_{V}(\mathsf{\dot{C}})$-certificate of order $z$. Note
that in step 5 the coloring $\chi$ is the same as in the last execution of
step 3. Let $\mathcal{C}\subseteq 2^{\chi^{-1}_{V}(\mathsf{\dot{C}})}$ be the
family of all subsets $C\subseteq\chi^{-1}_{V}(\mathsf{\dot{C}})$ that have
been considered in step 3 and met the conditions for marking all vertices in
$C$, i.e., $\operatorname{\textsc{fvs}}(G[C\cup
W_{\chi}(C)]-\chi^{-1}(\mathsf{\dot{R}}))=|C|\leq z$. Since all vertices in
$\chi^{-1}_{V}(\mathsf{\dot{C}})$ have been marked during the last execution
of step 3 we know
$\bigcup_{C\in\mathcal{C}}C=\chi^{-1}_{V}(\mathsf{\dot{C}})$.
Let $C_{1},\ldots,C_{|\mathcal{C}|}$ be the sets in $\mathcal{C}$ in an
arbitrary order and define $D_{i}:=C_{i}\setminus C_{<i}$ for all $1\leq
i\leq|\mathcal{C}|$. Observe that $D_{1},\ldots,D_{|\mathcal{C}|}$ is a
partition of $\chi^{-1}_{V}(\mathsf{\dot{C}})$ with $|D_{i}|\leq z$ and
$C_{i}\subseteq D_{\leq i}$ for all $1\leq i\leq|\mathcal{C}|$. Note that
$D_{i}$ may be empty for some $i$. We now show that
$G[\chi^{-1}_{V}(\mathsf{\dot{C}})\cup\chi^{-1}_{V}(\mathsf{\dot{F}})]$
contains a $\chi^{-1}_{V}(\mathsf{\dot{C}})$-certificate of order $z$. We do
this by showing there are $|\mathcal{C}|$ vertex disjoint subgraphs of
$G[\chi^{-1}_{V}(\mathsf{\dot{C}})\cup\chi^{-1}_{V}(\mathsf{\dot{F}})]$, call
them $G_{1},\ldots,G_{|\mathcal{C}|}$, such that
$\operatorname{\textsc{fvs}}(G_{i})=|D_{i}|\leq z$ for each $1\leq
i\leq|\mathcal{C}|$. Take $G_{i}:=G[D_{i}\cup(W_{\chi}(D_{\leq i})\setminus
W_{\chi}(D_{<i}))]-\chi^{-1}(\mathsf{\dot{R}})$ for all $1\leq
i\leq|\mathcal{C}|$. First we show that for any $i\neq j$ the graphs $G_{i}$
and $G_{j}$ are vertex disjoint. Clearly $D_{i}\cap D_{j}=\emptyset$. We can
assume $i<j$, so $D_{\leq i}\subseteq D_{<j}$ and then $W_{\chi}(D_{\leq
i})\subseteq W_{\chi}(D_{<j})$. By successively dropping two terms, we deduce
$\displaystyle(W_{\chi}(D_{\leq i})\setminus
W_{\chi}(D_{<i}))\cap(W_{\chi}(D_{\leq j})\setminus W_{\chi}(D_{<j}))$
$\displaystyle\subseteq W_{\chi}(D_{\leq i})\cap(W_{\chi}(D_{\leq j})\setminus
W_{\chi}(D_{<j}))$ $\displaystyle\subseteq W_{\chi}(D_{\leq i})\setminus
W_{\chi}(D_{<j})=\emptyset.$
We complete the proof by showing $\operatorname{\textsc{fvs}}(G_{i})=|D_{i}|$
for all $1\leq i\leq|\mathcal{C}|$. Recall that $D_{i}=C_{i}\setminus C_{<i}$. Since
$C_{i}\in\mathcal{C}$ we know $C_{i}$ is an optimal FVS in $G[C_{i}\cup
W_{\chi}(C_{i})]-\chi^{-1}(\mathsf{\dot{R}})$, so then clearly $D_{i}$ is an
optimal FVS in $G[C_{i}\cup
W_{\chi}(C_{i})]-\chi^{-1}(\mathsf{\dot{R}})-C_{<i}=G[D_{i}\cup
W_{\chi}(C_{i})]-\chi^{-1}(\mathsf{\dot{R}})$. We know that $C_{i}\subseteq
D_{\leq i}$ so then also $W_{\chi}(C_{i})\subseteq W_{\chi}(D_{\leq i})$. It
follows that $D_{i}$ is an optimal FVS in $G[D_{i}\cup W_{\chi}(D_{\leq
i})]-\chi^{-1}(\mathsf{\dot{R}})$. In this graph, all vertices in
$W_{\chi}(D_{<i})$ must be in a component that does not contain any vertices
from $D_{i}$, so each such component is a tree and we obtain
$\displaystyle|D_{i}|=\operatorname{\textsc{fvs}}(G[D_{i}\cup W_{\chi}(D_{\leq i})]-\chi^{-1}(\mathsf{\dot{R}}))=\operatorname{\textsc{fvs}}(G[D_{i}\cup W_{\chi}(D_{\leq i})]-\chi^{-1}(\mathsf{\dot{R}})-W_{\chi}(D_{<i}))$
$\displaystyle=\operatorname{\textsc{fvs}}(G[D_{i}\cup(W_{\chi}(D_{\leq i})\setminus W_{\chi}(D_{<i}))]-\chi^{-1}(\mathsf{\dot{R}}))=\operatorname{\textsc{fvs}}(G_{i}).$
It can be seen from Claim 3 that for any $z$-properly colored antler
$(\hat{C},\hat{F})$ we have $\hat{C}\subseteq\chi^{-1}_{V}(\mathsf{\dot{C}})$
and $\hat{F}\subseteq\chi^{-1}_{V}(\mathsf{\dot{F}})$. Claim 4 completes the
correctness argument.
If a graph $G$ contains a reducible single-tree FVC of width at most $k$ then
we can find and apply an operation by Lemma 4.6 and Lemma 5.9. If $G$ does not
contain such a FVC, but $G$ does contain a non-empty $z$-antler $(C,F)$ of
width at most $k$, then using Lemma 3.15 we can prove that whether $(C,F)$ is
$z$-properly colored is determined by the color of at most $26k^{5}z^{2}$
relevant vertices and edges. Using two $(n+m,26k^{5}z^{2})$-universal sets, we
can create a set of colorings that is guaranteed to contain a coloring that
$z$-properly colors $(C,F)$. Using Lemma 6.3 we find a non-empty $z$-antler
and apply Operation 3. We obtain the following:
###### Lemma 6.5 ($\bigstar$).
Given a graph $G$ and integers $k\geq z\geq 0$. If $G$ contains a non-empty
$z$-antler of width at most $k$ we can find and apply an operation in
$2^{\mathcal{O}(k^{5}z^{2})}\cdot n^{\mathcal{O}(z)}$ time.
###### Proof 6.6.
Consider the following algorithm: Use Lemma 4.6 to obtain a FVC
$(C_{1},F_{1})$ in $2^{\mathcal{O}(k^{3})}\cdot n^{3}\log n$ time. If
$(C_{1},F_{1})$ is reducible we can find and apply an operation in
$\mathcal{O}(n^{2})$ time by Lemma 5.9 so assume $(C_{1},F_{1})$ is not
reducible. Create two $(n+m,26k^{5}z^{2})$-universal sets $\mathcal{U}_{1}$
and $\mathcal{U}_{2}$ for $V(G)\cup E(G)$ using Theorem 2.1. Define for each
pair $(Q_{1},Q_{2})\in\mathcal{U}_{1}\times\mathcal{U}_{2}$ the coloring
$\chi_{Q_{1},Q_{2}}$ of $G$ that assigns all vertices and edges in $Q_{1}$
color $\mathsf{\dot{C}}$, all vertices and edges in $Q_{2}\setminus Q_{1}$
color $\mathsf{\dot{F}}$, and all vertices and edges not in $Q_{1}\cup Q_{2}$
color $\mathsf{\dot{R}}$. For each
$(Q_{1},Q_{2})\in\mathcal{U}_{1}\times\mathcal{U}_{2}$ obtain in
$n^{\mathcal{O}(z)}$ time a $z$-antler $(C_{2},F_{2})$ by running the
algorithm from Lemma 6.3 on $G$ and $\chi_{Q_{1},Q_{2}}$. If $(C_{2},F_{2})$
is not empty, apply Operation 3 to remove $(C_{2},F_{2})$, otherwise report
$G$ does not contain a $z$-antler of width at most $k$.
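Once the pair $(Q_{1},Q_{2})$ is fixed, building $\chi_{Q_{1},Q_{2}}$ is a direct case split; the sketch below shows it (the universal-set construction of Theorem 2.1 is not reproduced, and plain Python sets stand in for its output).

```python
def coloring_from_pair(objects, Q1, Q2):
    """Build the coloring chi_{Q1,Q2} described above: objects (vertices
    and edges alike) in Q1 get color 'C', those in Q2 but not Q1 get 'F',
    and everything else gets 'R'.  Q1 and Q2 would come from the two
    universal sets; here they are ordinary sets, an assumption for the
    sketch."""
    chi = {}
    for x in objects:
        if x in Q1:
            chi[x] = 'C'
        elif x in Q2:
            chi[x] = 'F'
        else:
            chi[x] = 'R'
    return chi
```

The algorithm of Lemma 6.3 would then be run once per pair, on the coloring produced here.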
### Running time
By Theorem 2.1, the sets $\mathcal{U}_{1}$ and $\mathcal{U}_{2}$ have size
$2^{\mathcal{O}(k^{5}z^{2})}\log n$ and can be created in
$2^{\mathcal{O}(k^{5}z^{2})}\cdot n\log n$ time. It follows that there are
$|\mathcal{U}_{1}\times\mathcal{U}_{2}|=2^{\mathcal{O}(k^{5}z^{2})}\log^{2}n$
colorings for which we apply the $n^{\mathcal{O}(z)}$ time algorithm from
Lemma 6.3. We obtain an overall running time of
$2^{\mathcal{O}(k^{5}z^{2})}\cdot n^{\mathcal{O}(z)}$. Since a $z$-antler has
width at least $z$, we can assume $k\geq z$, hence
$2^{\mathcal{O}(k^{5}z^{2})}\cdot n^{\mathcal{O}(z)}\leq
2^{\mathcal{O}(k^{7})}\cdot n^{\mathcal{O}(z)}$.
### Correctness
Suppose $G$ contains a $z$-antler $(C,F)$ of width at most $k$; we show the
algorithm finds an operation to apply. By Lemma 3.15 we know that there exists
an $F^{\prime}\subseteq F$ such that $(C,F^{\prime})$ is a $z$-antler where
$G[F^{\prime}]$ has at most $\frac{|C|}{2}(z^{2}+2z-1)$ trees. For each tree
$T$ in $G[F^{\prime}]$ note that $(C,V(T))$ is a single-tree FVC of width
$|C|\leq k$. If for some tree $T$ in $G[F^{\prime}]$ the FVC $(C,V(T))$ is reducible, then
$(C_{1},F_{1})$ is reducible by Lemma 4.6 and we find an operation using Lemma
5.9, so suppose for all trees $T$ in $G[F^{\prime}]$ that $|V(T)|\leq
f_{r}(|C|)$. So then $|F^{\prime}|\leq\frac{|C|}{2}(z^{2}+2z-1)\cdot
f_{r}(|C|)$. We show that in this case there exists a pair
$(Q_{1},Q_{2})\in\mathcal{U}_{1}\times\mathcal{U}_{2}$ such that
$\chi_{Q_{1},Q_{2}}$ $z$-properly colors $(C,F^{\prime})$.
Whether a coloring $z$-properly colors $(C,F^{\prime})$ is only determined by
the colors of $C\cup F^{\prime}\cup N_{G}(F^{\prime})\cup E(G[C\cup
F^{\prime}])$.
###### Claim 5.
$|C\cup F^{\prime}\cup N_{G}(F^{\prime})\cup E(G[C\cup F^{\prime}])|\leq
26k^{5}z^{2}$.
Note that $|N_{G}(F^{\prime})\setminus C|\leq\frac{|C|}{2}(z^{2}+2z-1)$ since
no tree in $G[F^{\prime}]$ can have more than one neighbor outside $C$.
Additionally we have
$\displaystyle|E(G[C\cup F^{\prime}])|$
$\displaystyle\leq|E(G[C])|+|E(G[F^{\prime}])|+e(C,F^{\prime})$
$\displaystyle\leq|E(G[C])|+|F^{\prime}|+|C|\cdot|F^{\prime}|$ since
$G[F^{\prime}]$ is a forest
$\displaystyle\leq|C|^{2}+(|C|+1)\cdot|F^{\prime}|$
$\displaystyle\leq|C|^{2}+(|C|+1)\cdot\frac{|C|}{2}(z^{2}+2z-1)\cdot
f_{r}(|C|)$ $\displaystyle\leq
k^{2}+(k+1)\cdot\frac{k}{2}(z^{2}+2z-1)\cdot(2k^{3}+3k^{2}-k)$
$\displaystyle\leq
k^{2}+\frac{z^{2}+2z-1}{2}\cdot(k^{2}+k)\cdot(2k^{3}+3k^{2}-k)$
$\displaystyle\leq k^{2}+\frac{z^{2}+2z-1}{2}\cdot 2k^{2}\cdot 5k^{3}$ since
$k=0$ or $k\geq 1$ $\displaystyle\leq k^{2}+\frac{3z^{2}}{2}\cdot 10k^{5}$
since $z=0$ or $z\geq 1$ $\displaystyle\leq k^{2}+15k^{5}z^{2}\leq
16k^{5}z^{2},$
hence
$\displaystyle|C\cup F^{\prime}\cup N_{G}(F^{\prime})\cup E(G[C\cup
F^{\prime}])|$ $\displaystyle=|C|+|F^{\prime}|+|N_{G}(F^{\prime})\setminus
C|+|E(G[C\cup F^{\prime}])|$
$\displaystyle\leq|C|+\frac{|C|}{2}(z^{2}+2z-1)\cdot f_{r}(|C|)+\frac{|C|}{2}(z^{2}+2z-1)+16k^{5}z^{2}$
$\displaystyle\leq k+\frac{3z^{2}}{2}\cdot k\cdot f_{r}(k)+\frac{3z^{2}}{2}\cdot k+16k^{5}z^{2}$ since $z=0$ or $z\geq 1$
$\displaystyle\leq k+\frac{3z^{2}}{2}\cdot k\cdot 5k^{3}+\frac{3z^{2}}{2}\cdot k+16k^{5}z^{2}$ since $k=0$ or $k\geq 1$
$\displaystyle\leq k+\frac{15}{2}k^{4}z^{2}+\frac{3}{2}kz^{2}+16k^{5}z^{2}\leq 26k^{5}z^{2}.$
By construction of $\mathcal{U}_{1}$ and $\mathcal{U}_{2}$ there exist
$Q_{1}\in\mathcal{U}_{1}$ and $Q_{2}\in\mathcal{U}_{2}$ such that
$\chi_{Q_{1},Q_{2}}$ $z$-properly colors $(C,F^{\prime})$. Therefore the
algorithm from Lemma 6.3 returns a non-empty $z$-antler for
$\chi_{Q_{1},Q_{2}}$ and Operation 3 can be executed.
Note that applying an operation reduces the number of vertices or increases
the number of double-edges. Hence by repeatedly using Lemma 6.5 to apply an
operation we obtain, after at most $\mathcal{O}(n^{2})$ iterations, a graph in
which no operation applies. By Lemma 6.5 this graph does not contain a non-
empty $z$-antler of width at most $k$. We show that this method reduces the
solution size at least as much as iteratively removing $z$-antlers of width at
most $k$. We first describe the behavior of such a sequence of antlers. For
integer $k\geq 0$ and $z\geq 0$, we say a sequence of disjoint vertex sets
$C_{1},F_{1},\ldots,C_{\ell},F_{\ell}$ is a _$z$ -antler-sequence_ for a graph
$G$ if for all $1\leq i\leq\ell$ the pair $(C_{i},F_{i})$ is a $z$-antler in
$G-(C_{<i}\cup F_{<i})$. The _width_ of a $z$-antler-sequence is defined as
$\max_{1\leq i\leq\ell}|C_{i}|$.
###### Proposition 6.7.
If $C_{1},F_{1},\ldots,C_{\ell},F_{\ell}$ is a $z$-antler-sequence for some
graph $G$, then the pair $(C_{\leq i},F_{\leq i})$ is a $z$-antler in $G$ for
any $1\leq i\leq\ell$.
###### Proof 6.8.
We use induction on $i$. Clearly the statement holds for $i=1$, so suppose
$i>1$. By induction $(C_{<i},F_{<i})$ is a $z$-antler in $G$, and since
$(C_{i},F_{i})$ is a $z$-antler in $G-(C_{<i}\cup F_{<i})$ we have by Lemma
3.11 that $(C_{<i}\cup C_{i},F_{<i}\cup F_{i})=(C_{\leq i},F_{\leq i})$ is a
$z$-antler in $G$.
The following theorem describes that repeatedly applying Lemma 6.5 reduces the
solution size at least as much as repeatedly removing $z$-antlers of width at
most $k$. By taking $t=1$ we obtain Theorem 1.2.
###### Theorem 6.9.
Given as input a graph $G$ and integers $k\geq z\geq 0$ we can find in
$2^{\mathcal{O}(k^{5}z^{2})}\cdot n^{\mathcal{O}(z)}$ time a vertex set
$S\subseteq V(G)$ such that
1.
there is a minimum FVS in $G$ containing all vertices of $S$, and
2.
if $C_{1},F_{1},\ldots,C_{t},F_{t}$ is a $z$-antler-sequence of width at most
$k$, then $|S|\geq|C_{\leq t}|$.
###### Proof 6.10.
We first describe the algorithm.
### Algorithm
We use Lemma 6.5 to find and apply an operation in $G$, obtaining the
resulting graph $G^{\prime}$ and vertex set $S$. If no applicable operation is
found, we return the empty vertex set $S:=\emptyset$. Otherwise we recursively call our
algorithm on $G^{\prime}$ with integers $z$ and $k$ to obtain a vertex set
$S^{\prime}$ and return the vertex set $S\cup S^{\prime}$.
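Since the recursive call is a tail call, the accumulation of $S$ can equivalently be written as a loop. The sketch below shows this structure; the callback `find_and_apply_operation` is a hypothetical stand-in for the procedure of Lemma 6.5, not an implementation of it.

```python
def accumulate_S(G, k, z, find_and_apply_operation):
    """The recursive algorithm above, written iteratively.  The callback
    stands in for Lemma 6.5: it returns (G', S_step) when some operation
    applies to G, and None otherwise.  The result is the final graph and
    the union of the per-operation sets S."""
    S = set()
    while True:
        step = find_and_apply_operation(G, k, z)
        if step is None:
            return G, S
        G, S_step = step
        S |= S_step
```

A toy callback that peels one vertex per call illustrates how the sets $S$ from successive operations are merged.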
### Running time
Note that since every operation reduces the number of vertices or increases
the number of double-edges, after at most $\mathcal{O}(n^{2})$ operations we
obtain a graph where no operation can be applied. Therefore after at most
$\mathcal{O}(n^{2})$ recursive calls the algorithm terminates. We obtain a
running time of $2^{\mathcal{O}(k^{5}z^{2})}\cdot n^{\mathcal{O}(z)}$.
### Correctness
We prove correctness by induction on the recursion depth, which is shown to
be finite by the running time analysis.
First consider the case that no operation was found. Clearly condition 1 holds
for $G^{\prime}:=G$ and $S:=\emptyset$. To show condition 2 suppose
$C_{1},F_{1},\ldots,C_{t},F_{t}$ is a $z$-antler-sequence of width at most $k$
for $G$. The first non-empty antler in this sequence is a $z$-antler of width
at most $k$ in $G$. Since no operation was found using Lemma 6.5 it follows
that $G$ does not contain a non-empty $z$-antler of width at most $k$. Hence
all antlers in the sequence must be empty and $|C_{\leq t}|=0$, so condition 2
holds for $G^{\prime}:=G$ and $S:=\emptyset$.
For the other case, suppose $G^{\prime}$ and $S$ are obtained by applying an
operation, then since this operation is FVS-safe we know for any minimum FVS
$S^{\prime\prime}$ of $G^{\prime}$ that $S\cup S^{\prime\prime}$ is a minimum
FVS in $G$. Since $S^{\prime}$ is obtained from a recursive call there is a
minimum FVS in $G^{\prime}$ containing all vertices of $S^{\prime}$. Let
$S^{\prime\prime}$ be such a FVS in $G^{\prime}$, so $S^{\prime}\subseteq
S^{\prime\prime}$; then $S\cup S^{\prime\prime}$ is a minimum FVS in
$G$. It follows that there is a minimum FVS in $G$ containing all vertices of
$S\cup S^{\prime}$, proving condition 1.
To prove condition 2 suppose $C_{1},F_{1},\ldots,C_{t},F_{t}$ is a $z$-antler-
sequence of width at most $k$ for $G$. We first prove the following:
###### Claim 6.
There exists a $z$-antler-sequence
$C_{1}^{\prime},F_{1}^{\prime},\ldots,C_{t}^{\prime},F_{t}^{\prime}$ of width
at most $k$ for $G^{\prime}$ such that
1.
$C_{\leq t}^{\prime}\cup F_{\leq t}^{\prime}=V(G^{\prime})\cap(C_{\leq t}\cup
F_{\leq t})$ and
2.
$|C_{\leq t}^{\prime}|=\sum_{1\leq i\leq t}\left(|C_{i}|-|(C_{i}\cup F_{i})\cap S|\right)$.
We use induction on $t$. Since $G^{\prime}$ and $S$ are obtained through an
antler-safe operation and $(C_{1},F_{1})$ is a $z$-antler in $G$, we know that
$G^{\prime}$ contains a $z$-antler $(C_{1}^{\prime},F_{1}^{\prime})$ such that
$C_{1}^{\prime}\cup F_{1}^{\prime}=(C_{1}\cup F_{1})\cap V(G^{\prime})$ and
$|C_{1}^{\prime}|=|C_{1}|-|(C_{1}\cup F_{1})\cap S|$. The claim holds for
$t=1$.
For the induction step, consider $t>1$. By applying induction to the
length-$(t-1)$ prefix of the sequence, there is a $z$-antler sequence
$C_{1}^{\prime},F_{1}^{\prime},\ldots,C_{t-1}^{\prime},F_{t-1}^{\prime}$ of
width at most $k$ for $G^{\prime}$ such that both conditions hold. We have by
Proposition 6.7 that $(C_{\leq t},F_{\leq t})$ is a $z$-antler in $G$. Since
$G^{\prime}$ and $S$ are obtained through an antler-safe operation from $G$
there is a $z$-antler $(C^{\prime},F^{\prime})$ in $G^{\prime}$ such that
$C^{\prime}\cup F^{\prime}=V(G^{\prime})\cap(C_{\leq t}\cup F_{\leq t})$ and
$|C^{\prime}|=|C_{\leq t}|-|S\cap(C_{\leq t}\cup F_{\leq t})|$. Take
$C_{t}^{\prime}:=C^{\prime}\setminus(C_{<t}^{\prime}\cup F_{<t}^{\prime})$ and
$F_{t}^{\prime}:=F^{\prime}\setminus(C_{<t}^{\prime}\cup F_{<t}^{\prime})$. By
Lemma 3.7 we have that $(C_{t}^{\prime},F_{t}^{\prime})$ is a $z$-antler in
$G^{\prime}-(C_{<t}^{\prime}\cup F_{<t}^{\prime})$; it follows that
$C_{1}^{\prime},F_{1}^{\prime},\ldots,C_{t}^{\prime},F_{t}^{\prime}$ is a
$z$-antler-sequence for $G^{\prime}$. We first show condition 1.
$\displaystyle C_{\leq t}^{\prime}\cup F_{\leq t}^{\prime}$
$\displaystyle=C_{t}^{\prime}\cup F_{t}^{\prime}\cup C_{<t}^{\prime}\cup
F_{<t}^{\prime}$ $\displaystyle=C^{\prime}\cup F^{\prime}$ by choice of
$C_{t}^{\prime}$ and $F_{t}^{\prime}$ $\displaystyle=V(G^{\prime})\cap(C_{\leq
t}\cup F_{\leq t})$ by choice of $C^{\prime}$ and $F^{\prime}$.
To prove condition 2 and that the $z$-antler-sequence
$C_{1}^{\prime},F_{1}^{\prime},\ldots,C_{t}^{\prime},F_{t}^{\prime}$ has width
at most $k$, we first show $|C_{t}^{\prime}|=|C_{t}|-|(C_{t}\cup F_{t})\cap
S|$. For this observe that $(C_{\leq t}^{\prime},F_{\leq t}^{\prime})$ is an
antler in $G^{\prime}$ by Proposition 6.7.
$\displaystyle|C_{t}^{\prime}|$ $\displaystyle=|C_{\leq
t}^{\prime}|-|C_{<t}^{\prime}|$ since $C_{i}^{\prime}\cap
C_{j}^{\prime}=\emptyset$ for all $i\neq j$
$\displaystyle=\operatorname{\textsc{fvs}}(G^{\prime}[C_{\leq t}^{\prime}\cup
F_{\leq t}^{\prime}])-|C_{<t}^{\prime}|$ by the above
$\displaystyle=\operatorname{\textsc{fvs}}(G^{\prime}[V(G^{\prime})\cap(C_{\leq
t}\cup F_{\leq t})])-|C_{<t}^{\prime}|$ by condition 1
$\displaystyle=\operatorname{\textsc{fvs}}(G^{\prime}[C^{\prime}\cup
F^{\prime}])-|C_{<t}^{\prime}|$ by choice of $C^{\prime}$ and $F^{\prime}$
$\displaystyle=|C^{\prime}|-|C_{<t}^{\prime}|$ since $(C^{\prime},F^{\prime})$
is an antler in $G^{\prime}$ $\displaystyle=|C^{\prime}|-\sum_{1\leq
i<t}(|C_{i}|-|(C_{i}\cup F_{i})\cap S|)$ by induction $\displaystyle=|C_{\leq
t}|-|S\cap(C_{\leq t}\cup F_{\leq t})|-\sum_{1\leq i<t}(|C_{i}|-|(C_{i}\cup
F_{i})\cap S|)$ $\displaystyle=\sum_{1\leq i\leq t}(|C_{i}|-|S\cap(C_{i}\cup
F_{i})|)-\sum_{1\leq i<t}(|C_{i}|-|(C_{i}\cup F_{i})\cap S|)$ since
$C_{1},F_{1},\ldots,C_{t},F_{t}$ are pairwise disjoint
$\displaystyle=|C_{t}|-|(C_{t}\cup F_{t})\cap S|$
We know the $z$-antler-sequence
$C_{1}^{\prime},F_{1}^{\prime},\ldots,C_{t-1}^{\prime},F_{t-1}^{\prime}$ has width at most $k$,
so to show that the full sequence
$C_{1}^{\prime},F_{1}^{\prime},\ldots,C_{t}^{\prime},F_{t}^{\prime}$ has width at most $k$ it suffices to
prove that $|C_{t}^{\prime}|\leq k$. Indeed,
$|C_{t}^{\prime}|=|C_{t}|-|(C_{t}\cup F_{t})\cap S|\leq|C_{t}|\leq k$.
To complete the proof of Claim 6 we now derive condition 2:
$\displaystyle|C_{\leq t}^{\prime}|$
$\displaystyle=|C_{t}^{\prime}|+|C_{<t}^{\prime}|$ since $C_{t}^{\prime}\cap
C_{<t}^{\prime}=\emptyset$ $\displaystyle=|C_{t}|-|(C_{t}\cup F_{t})\cap
S|+|C_{<t}^{\prime}|$ $\displaystyle=|C_{t}|-|(C_{t}\cup F_{t})\cap
S|+\sum_{1\leq i\leq t-1}(|C_{i}|-|(C_{i}\cup F_{i})\cap S|)$ by induction
$\displaystyle=\sum_{1\leq i\leq t}(|C_{i}|-|(C_{i}\cup F_{i})\cap S|).$
To complete the proof of condition 2 from Theorem 6.9 we show $|S\cup
S^{\prime}|\geq|C_{\leq t}|$. By Claim 6 we know a $z$-antler-sequence
$C_{1}^{\prime},F_{1}^{\prime},\ldots,C_{t}^{\prime},F_{t}^{\prime}$ of width
at most $k$ for $G^{\prime}$ exists. Since $S^{\prime}$ is obtained from a
recursive call we have $|S^{\prime}|\geq|C_{\leq t}^{\prime}|$, so then
$\displaystyle|S\cup S^{\prime}|$ $\displaystyle=|S|+|S^{\prime}|$
$\displaystyle\geq|S|+|C_{\leq t}^{\prime}|$ $\displaystyle=|S|+\sum_{1\leq
i\leq t}(|C_{i}|-|(C_{i}\cup F_{i})\cap S|)$ by Claim 6
$\displaystyle=|S|+\sum_{1\leq i\leq t}|C_{i}|-\sum_{1\leq i\leq t}|(C_{i}\cup
F_{i})\cap S|$ $\displaystyle=|S|+|C_{\leq t}|-|S\cap(C_{\leq t}\cup F_{\leq
t})|$ since
$C_{1},F_{1},\ldots,C_{t},F_{t}$ are pairwise
disjoint $\displaystyle\geq|S|+|C_{\leq t}|-|S|$ $\displaystyle=|C_{\leq t}|.$
As a corollary to this theorem, we obtain a new type of parameterized-
tractability result for Feedback Vertex Set. For an integer $z$, let the
$z$-antler complexity of fvs on $G$ be the minimum number $k$ for which there
exists a (potentially long) sequence $C_{1},F_{1},\ldots,C_{t},F_{t}$ of
disjoint vertex sets such that for all $1\leq i\leq t$, the pair
$(C_{i},F_{i})$ is a $z$-antler of width at most $k$ in $G-(C_{<i}\cup
F_{<i})$, and such that $G-(C_{\leq t}\cup F_{\leq t})$ is acyclic (implying
that $C_{\leq t}$ is a feedback vertex set in $G$). If no such sequence
exists, the $z$-antler complexity of $G$ is $+\infty$.
Intuitively, Corollary 6.11 states that optimal solutions can be found
efficiently when they are composed of small pieces, each of which has a
low-complexity certificate for belonging to some optimal solution.
###### Corollary 6.11.
There is an algorithm that, given a graph $G$, returns an optimal feedback
vertex set in time $f(k^{*})\cdot n^{\mathcal{O}(z^{*})}$, where
$(k^{*},z^{*})$ is any pair of integers such that the $z^{*}$-antler
complexity of $G$ is at most $k^{*}$.
###### Proof 6.12.
Let $(k^{*},z^{*})$ be such that the $z^{*}$-antler complexity of $G$ is at
most $k^{*}$. Let $p_{1}\in\mathcal{O}(k^{5}z^{2}),p_{2}\in\mathcal{O}(z)$ be
concrete functions such that the running time of Theorem 6.9 is bounded by
$2^{p_{1}(k,z)}\cdot n^{p_{2}(z)}$. Consider the pairs
$\\{(k^{\prime},z^{\prime})\in\mathbb{N}^{2}\mid 1\leq z^{\prime}\leq
k^{\prime}\leq n\\}$ in order of increasing value of the running-time
guarantee $2^{p_{1}(k^{\prime},z^{\prime})}\cdot n^{p_{2}(z^{\prime})}$. For each such pair
$(k^{\prime},z^{\prime})$, start from the graph $G$ and invoke Theorem 6.9 to
obtain a vertex set $S$ which is guaranteed to be contained in an optimal
solution. If $G-S$ is acyclic, then $S$ itself is an optimal solution and we
return $S$. Otherwise we proceed to the next pair $(k^{\prime},z^{\prime})$.
### Correctness
The correctness of Theorem 6.9 and the definition of $z$-antler complexity
ensure that for $(k^{\prime},z^{\prime})=(k^{*},z^{*})$, the set $S$ is an
optimal solution. In particular, if $C_{1},F_{1},\ldots,C_{t},F_{t}$ is a
sequence of vertex sets witnessing that the $z^{*}$-antler complexity of $G$
is at most $k^{*}$, then condition 2 of Theorem 6.9 guarantees an output set $S$
of size at least $\sum_{1\leq i\leq t}|C_{i}|$, which is equal to the size of
an optimal solution on $G$ by definition.
### Running time
For a fixed choice of $(k^{\prime},z^{\prime})$ the algorithm from Theorem 6.9
runs in time $2^{\mathcal{O}((k^{\prime})^{5}(z^{\prime})^{2})}\cdot
n^{\mathcal{O}(z^{\prime})}\leq 2^{\mathcal{O}((k^{*})^{5}(z^{*})^{2})}\cdot
n^{\mathcal{O}(z^{*})}$ because we try pairs $(k^{\prime},z^{\prime})$ in
order of increasing running time. As we try at most $n^{2}$ pairs before
finding the solution, the corollary follows.
To conclude, we reflect on the running time of Corollary 6.11 compared to
running times of the form
$2^{\mathcal{O}(\operatorname{\textsc{fvs}}(G))}\cdot n^{\mathcal{O}(1)}$
obtained by FPT algorithms for the parameterization by solution size. If we
exhaustively apply Lemma 5.9 with the FVC $(C,V(G)\setminus C)$, where $C$ is
obtained from a 2-approximation algorithm [9], then this gives an _antler-
safe_ kernelization: it reduces the graph as long as the graph is larger than
$f_{r}(|C|)$. This opening step reduces the instance size to
$\mathcal{O}(\operatorname{\textsc{fvs}}(G)^{3})$ without increasing the
antler complexity. As observed before, after applying $\mathcal{O}(n^{2})$
operations we obtain a graph in which no operations can be applied. This leads
to a running time of $\mathcal{O}(n^{4})$ for the kernelization. Running
Theorem 6.9 to solve the reduced instance yields a total running time of
$2^{\mathcal{O}(k^{5}z^{2})}\operatorname{\textsc{fvs}}(G)^{\mathcal{O}(z)}+\mathcal{O}(n^{4})$.
This is asymptotically faster than
$2^{\mathcal{O}(\operatorname{\textsc{fvs}}(G))}$ when $z\leq
k=o(\sqrt[7]{\operatorname{\textsc{fvs}}(G)})$ and
$\operatorname{\textsc{fvs}}(G)=\omega(\log n)$, which captures the intuitive
idea sketched above that our algorithmic approach has an advantage when there
is an optimal solution that is large but composed of small pieces for which
there are low-complexity certificates.
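To spell out this comparison (a sketch using the stated bounds): writing $\operatorname{\textsc{fvs}}$ for $\operatorname{\textsc{fvs}}(G)$, the condition $z\leq k=o(\sqrt[7]{\operatorname{\textsc{fvs}}})$ gives
$\displaystyle k^{5}z^{2}\leq k^{7}=o(\operatorname{\textsc{fvs}})\qquad\text{and}\qquad\operatorname{\textsc{fvs}}^{\mathcal{O}(z)}=2^{\mathcal{O}(z\log\operatorname{\textsc{fvs}})}=2^{o(\operatorname{\textsc{fvs}})},$
so the first term of the running time is $2^{o(\operatorname{\textsc{fvs}})}$, while the additive $\mathcal{O}(n^{4})=2^{\mathcal{O}(\log n)}$ term is $2^{o(\operatorname{\textsc{fvs}})}$ precisely because $\operatorname{\textsc{fvs}}=\omega(\log n)$.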
## 7 Conclusion
We have taken the first steps of a research program to investigate how and
when a preprocessing phase can guarantee to identify parts of an optimal
solution to an NP-hard problem, thereby reducing the search space of the
follow-up algorithm. Aside from the technical results concerning antler
structures for Feedback Vertex Set and their algorithmic properties, we
consider the conceptual message of this research program an important
contribution of our theoretical work on understanding the power of
preprocessing and the structure of solutions to NP-hard problems.
This line of investigation opens up a host of opportunities for future
research. For combinatorial problems such as Vertex Planarization, Odd Cycle
Transversal, and Directed Feedback Vertex Set, which kinds of substructures in
inputs allow parts of an optimal solution to be identified by an efficient
preprocessing phase? Is it possible to give preprocessing guarantees not in
terms of the size of an optimal solution, but in terms of measures of the
stability [7, 8, 17] of optimal solutions under small perturbations? Some
questions also remain open concerning the concrete technical results in the
paper. Can the running time of Theorem 1.2 be improved to $f(k)\cdot
n^{\mathcal{O}(1)}$? We conjecture that it cannot, but have not been able to
prove this. A related question applies to Vertex Cover: Is there an algorithm
running in time $f(k)\cdot n^{\mathcal{O}(1)}$ that, given a graph $G$ which
has disjoint vertex sets $(C,H)$ such that $N_{G}(C)\subseteq H$ and $H$ of
size $k$ is an optimal vertex cover in $G[C\cup H]$, outputs a set of size at
least $k$ that is part of an optimal vertex cover in $G$? (Note that this is
an easier target than computing such a decomposition of width $k$ if one
exists, which can be shown to be W[1]-hard.)
To apply the theoretical ideas on antlers in the practical settings that
motivated their investigation, it would be interesting to determine which
types of antler can be found in _linear_ time. One can show that a slight
extension of the standard reduction rules [16, FVS.1–FVS.5] for Feedback
Vertex Set can be used to detect 1-antlers of width $1$ in linear time. Can
the running time of Theorem 1.1 be improved to $f(k)\cdot(n+m)$? It would also
be interesting to investigate which types of antlers are present in practical
inputs.
## References
* [1] Faisal N. Abu-Khzam, Rebecca L. Collins, Michael R. Fellows, Michael A. Langston, W. Henry Suters, and Christopher T. Symons. Kernelization algorithms for the vertex cover problem: Theory and experiments. In Proc. 6th ALENEX/ANALC, pages 62–69, 2004.
* [2] Faisal N. Abu-Khzam, Michael R. Fellows, Michael A. Langston, and W. Henry Suters. Crown structures for vertex cover kernelization. Theory Comput. Syst., 41(3):411–430, 2007. doi:10.1007/s00224-007-1328-0.
* [3] Tobias Achterberg, Robert E. Bixby, Zonghao Gu, Edward Rothberg, and Dieter Weninger. Presolve reductions in mixed integer programming. Technical Report 16-44, ZIB, Takustr.7, 14195 Berlin, 2016. URL: http://nbn-resolving.de/urn:nbn:de:0297-zib-60370.
* [4] Tobias Achterberg and Roland Wunderling. Mixed Integer Programming: Analyzing 12 Years of Progress, pages 449–481. Springer Berlin Heidelberg, 2013. doi:10.1007/978-3-642-38189-8_18.
* [5] Takuya Akiba and Yoichi Iwata. Branch-and-reduce exponential/FPT algorithms in practice: A case study of vertex cover. Theor. Comput. Sci., 609:211–225, 2016. doi:10.1016/j.tcs.2015.09.023.
* [6] Noga Alon, Raphael Yuster, and Uri Zwick. Color-coding. J. ACM, 42(4):844–856, 1995.
* [7] Haris Angelidakis, Pranjal Awasthi, Avrim Blum, Vaggos Chatziafratis, and Chen Dan. Bilu-Linial stability, certified algorithms and the independent set problem. In Michael A. Bender, Ola Svensson, and Grzegorz Herman, editors, Proc. 27th ESA, volume 144 of LIPIcs, pages 7:1–7:16. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPIcs.ESA.2019.7.
* [8] Pranjal Awasthi, Avrim Blum, and Or Sheffet. Center-based clustering under perturbation stability. Inf. Process. Lett., 112(1-2):49–54, 2012. doi:10.1016/j.ipl.2011.10.006.
* [9] Vineet Bafna, Piotr Berman, and Toshihiro Fujito. A 2-approximation algorithm for the undirected feedback vertex set problem. SIAM Journal on Discrete Mathematics, 12(3):289–297, 1999. doi:10.1137/S0895480196305124.
* [10] Hans L. Bodlaender, Fedor V. Fomin, Daniel Lokshtanov, Eelko Penninkx, Saket Saurabh, and Dimitrios M. Thilikos. (meta) kernelization. J. ACM, 63(5):44:1–44:69, 2016. doi:10.1145/2973749.
* [11] Hans L. Bodlaender, Bart M. P. Jansen, and Stefan Kratsch. Kernelization lower bounds by cross-composition. SIAM J. Discrete Math., 28(1):277–305, 2014. doi:10.1137/120880240.
* [12] Hans L. Bodlaender and Thomas C. van Dijk. A cubic kernel for feedback vertex set and loop cutset. Theory Comput. Syst., 46(3):566–597, 2010. doi:10.1007/s00224-009-9234-2.
* [13] Kevin Burrage, Vladimir Estivill-Castro, Michael R. Fellows, Michael A. Langston, Shev Mac, and Frances A. Rosamond. The undirected feedback vertex set problem has a poly($k$) kernel. In Hans L. Bodlaender and Michael A. Langston, editors, Proc. 2nd IWPEC, volume 4169 of Lecture Notes in Computer Science, pages 192–202. Springer, 2006. doi:10.1007/11847250_18.
* [14] Jianer Chen, Iyad A. Kanj, and Ge Xia. Improved upper bounds for vertex cover. Theor. Comput. Sci., 411(40-42):3736 – 3756, 2010. doi:10.1016/j.tcs.2010.06.026.
* [15] Benny Chor, Mike Fellows, and David W. Juedes. Linear kernels in linear time, or how to save $k$ colors in $O(n^{2})$ steps. In Proc. 30th WG, pages 257–269, 2004. doi:10.1007/978-3-540-30559-0_22.
* [16] Marek Cygan, Fedor V. Fomin, Lukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michal Pilipczuk, and Saket Saurabh. Parameterized Algorithms. Springer, 2015. doi:10.1007/978-3-319-21275-3.
* [17] Amit Daniely, Nati Linial, and Michael E. Saks. Clustering is difficult only when it does not matter. CoRR, abs/1205.4891, 2012. URL: http://arxiv.org/abs/1205.4891.
* [18] Holger Dell and Dieter van Melkebeek. Satisfiability allows no nontrivial sparsification unless the polynomial-time hierarchy collapses. J. ACM, 61(4):23:1–23:27, 2014. doi:10.1145/2629620.
* [19] Huib Donkers and Bart M. P. Jansen. A Turing kernelization dichotomy for structural parameterizations of $\mathcal{F}$-minor-free deletion. In Ignasi Sau and Dimitrios M. Thilikos, editors, Proc. 45th WG, volume 11789 of Lecture Notes in Computer Science, pages 106–119. Springer, 2019. doi:10.1007/978-3-030-30786-8_9.
* [20] Rod Downey and Michael R. Fellows. Parameterized Complexity. Monographs in Computer Science. Springer, New York, 1999.
* [21] Rodney G. Downey and Michael R. Fellows. Fundamentals of Parameterized Complexity. Texts in Computer Science. Springer, 2013. doi:10.1007/978-1-4471-5559-1.
* [22] Andrew Drucker. New limits to classical and quantum instance compression. SIAM J. Comput., 44(5):1443–1479, 2015. doi:10.1137/130927115.
* [23] M. Ayaz Dzulfikar, Johannes Klaus Fichte, and Markus Hecher. The PACE 2019 parameterized algorithms and computational experiments challenge: The fourth iteration (invited paper). In Bart M. P. Jansen and Jan Arne Telle, editors, Proc. 14th IPEC, volume 148 of LIPIcs, pages 25:1–25:23. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPIcs.IPEC.2019.25.
* [24] Michael R. Fellows. Blow-ups, win/win’s, and crown rules: Some new directions in FPT. In Hans L. Bodlaender, editor, Proc. 29th WG, volume 2880 of Lecture Notes in Computer Science, pages 1–12. Springer, 2003. doi:10.1007/978-3-540-39890-5_1.
* [25] Michael R. Fellows. The lost continent of polynomial time: Preprocessing and kernelization. In Hans L. Bodlaender and Michael A. Langston, editors, Proc. 2nd IWPEC, volume 4169 of Lecture Notes in Computer Science, pages 276–277. Springer, 2006. doi:10.1007/11847250_25.
* [26] J. Flum and M. Grohe. Parameterized Complexity Theory. Springer-Verlag, 2006. doi:10.1007/3-540-29953-X.
* [27] Fedor V. Fomin, Daniel Lokshtanov, Neeldhara Misra, and Saket Saurabh. Planar $\mathcal{F}$-Deletion: Approximation, kernelization and optimal FPT algorithms. In Proc. 53rd FOCS, pages 470–479, 2012. doi:10.1109/FOCS.2012.62.
* [28] Fedor V. Fomin, Daniel Lokshtanov, Saket Saurabh, and Meirav Zehavi. Kernelization: Theory of Parameterized Preprocessing. Cambridge University Press, 2019. doi:10.1017/9781107415157.
* [29] Lance Fortnow and Rahul Santhanam. Infeasibility of instance compression and succinct PCPs for NP. J. Comput. Syst. Sci., 77(1):91–106, 2011. doi:10.1016/j.jcss.2010.06.007.
* [30] Jiong Guo and Rolf Niedermeier. Invitation to data reduction and problem kernelization. SIGACT News, 38(1):31–45, 2007. doi:10.1145/1233481.1233493.
* [31] Danny Hermelin, Stefan Kratsch, Karolina Soltys, Magnus Wahlström, and Xi Wu. A completeness theory for polynomial (Turing) kernelization. Algorithmica, 71(3):702–730, 2015. doi:10.1007/s00453-014-9910-8.
* [32] Demian Hespe, Sebastian Lamm, Christian Schulz, and Darren Strash. Wegotyoucovered: The winning solver from the PACE 2019 implementation challenge, vertex cover track. CoRR, abs/1908.06795, 2019. arXiv:1908.06795.
* [33] Demian Hespe, Christian Schulz, and Darren Strash. Scalable kernelization for maximum independent sets. ACM Journal of Experimental Algorithmics, 24(1):1.16:1–1.16:22, 2019. doi:10.1145/3355502.
* [34] Yoichi Iwata. Linear-time kernelization for feedback vertex set. In Proc. 44th ICALP, volume 80 of LIPIcs, pages 68:1–68:14, 2017. doi:10.4230/LIPIcs.ICALP.2017.68.
* [35] Yoichi Iwata and Yusuke Kobayashi. Improved analysis of highest-degree branching for feedback vertex set. In Bart M. P. Jansen and Jan Arne Telle, editors, Proc. 14th IPEC, volume 148 of LIPIcs, pages 22:1–22:11. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPIcs.IPEC.2019.22.
* [36] Bart M. P. Jansen, Venkatesh Raman, and Martin Vatshelle. Parameter ecology for feedback vertex set. Tsinghua Science and Technology, 19(4):387–409, 2014. doi:10.1109/TST.2014.6867520.
* [37] R. M. Karp. Reducibility Among Combinatorial Problems. In Complexity of Computer Computations, pages 85–103. Plenum Press, 1972.
* [38] Stefan Kratsch and Magnus Wahlström. Representative sets and irrelevant vertices: New tools for kernelization. In Proc. 53rd FOCS, pages 450–459, 2012. doi:10.1109/FOCS.2012.46.
* [39] Daniel Lokshtanov, N. S. Narayanaswamy, Venkatesh Raman, M. S. Ramanujan, and Saket Saurabh. Faster parameterized algorithms using linear programming. ACM Trans. Algorithms, 11(2):15:1–15:31, 2014. doi:10.1145/2566616.
* [40] Moni Naor, Leonard J. Schulman, and Aravind Srinivasan. Splitters and near-optimal derandomization. In 36th Annual Symposium on Foundations of Computer Science, Milwaukee, Wisconsin, USA, 23-25 October 1995, pages 182–191. IEEE Computer Society, 1995. doi:10.1109/SFCS.1995.492475.
* [41] G. L. Nemhauser and L. E. Trotter, Jr. Vertex packings: structural properties and algorithms. Math. Program., 8:232–248, 1975. doi:10.1007/BF01580444.
* [42] Marcin Pilipczuk, Michal Pilipczuk, Piotr Sankowski, and Erik Jan van Leeuwen. Network sparsification for steiner problems on planar and bounded-genus graphs. ACM Trans. Algorithms, 14(4):53:1–53:73, 2018. doi:10.1145/3239560.
* [43] W. V. Quine. The problem of simplifying truth functions. The American Mathematical Monthly, 59(8):521–531, 1952. URL: http://www.jstor.org/stable/2308219, doi:10.2307/2308219.
* [44] Jean-Florent Raymond and Dimitrios M. Thilikos. Recent techniques and results on the Erdős–Pósa property. Discret. Appl. Math., 231:25–43, 2017. doi:10.1016/j.dam.2016.12.025.
* [45] Stéphan Thomassé. A $4k^{2}$ kernel for feedback vertex set. ACM Trans. Algorithms, 6(2), 2010. doi:10.1145/1721837.1721848.
* [46] P. Toth. Dynamic programming algorithms for the zero-one knapsack problem. Computing, 25:29–45, 1980. doi:10.1007/BF02243880.
## Appendix A NP-hardness of finding 1-antlers
###### Lemma A.1 ([19, Lemma 2 for $H=K_{3}$]).
There is a polynomial-time algorithm that, given a CNF formula $\Phi$, outputs
a graph $G$ and a collection $\mathcal{H}=\\{H_{1},\ldots,H_{\ell}\\}$ of
vertex-disjoint cycles in $G$, such that $\Phi$ is satisfiable if and only if
$G$ has a feedback vertex set of size $\ell$.
###### Definition A.2.
A 1-antler in an undirected multigraph $G$ is a pair of disjoint vertex sets
$(C,F)$ such that:
1. 1.
$G[F]$ is acyclic,
2. 2.
each tree $T$ of the forest $G[F]$ is connected to $V(G)\setminus(C\cup F)$ by
at most one edge, and
3. 3.
the graph $G[C\cup F]$ contains $|C|$ vertex-disjoint cycles.
A 1-antler is called _non-empty_ if $C\cup F\neq\emptyset$.
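Conditions 1 and 2 of this definition are easy to verify mechanically. The following sketch (our illustration, not part of the paper; condition 3 is omitted, since certifying the cycle packing needs more machinery) checks them with a union-find pass over an edge-list multigraph:

```python
def check_conditions_1_and_2(vertices, edges, C, F):
    """Check conditions 1 and 2 of Definition A.2 for disjoint sets C, F.
    `edges` is a list of (u, v) pairs; parallel edges and self-loops
    are allowed and count as cycles."""
    rest = vertices - C - F
    parent = {v: v for v in F}

    def find(x):  # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    # Condition 1: G[F] is acyclic.
    for u, v in edges:
        if u in F and v in F:
            if u == v:
                return False  # self-loop inside F
            ru, rv = find(u), find(v)
            if ru == rv:
                return False  # parallel edge or longer cycle in G[F]
            parent[ru] = rv

    # Condition 2: each tree of G[F] sends at most one edge
    # to V(G) \ (C u F).
    out_count = {}
    for u, v in edges:
        for a, b in ((u, v), (v, u)):
            if a in F and b in rest:
                root = find(a)
                out_count[root] = out_count.get(root, 0) + 1
    return all(c <= 1 for c in out_count.values())
```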
We will use the following easily verified consequence of this definition.
###### Observation A.3.
If $(C,F)$ is a 1-antler in an undirected multigraph $G$, then for each vertex
$c\in C$ the graph $G[\\{c\\}\cup F]$ contains a cycle.
In the terminology of Section 1, the set $C$ corresponds to $\mathsf{head}$
and the forest $F$ corresponds to $\mathsf{antler}$. Observe that this self-
contained definition of a 1-antler is equivalent to the general definition of
$z$-antler from Section 3 for $z=1$.
###### Corollary A.4.
Assuming $\mathsf{P}$ $\neq$ $\mathsf{NP}$, there is no polynomial-time
algorithm that, given a graph $G$, outputs a non-empty 1-antler in $G$ or
concludes that no non-empty 1-antler exists.
###### Proof A.5.
We need the following simple claim.
###### Claim 7.
If $G$ contains a packing of $\ell\geq 1$ vertex-disjoint cycles and a
feedback vertex set of size $\ell$, then $G$ admits a non-empty 1-antler.
Let $C$ be a feedback vertex set in $G$ of size $\ell$ and let
$F:=V(G)\setminus C$. Then $G[F]$ is acyclic since $C$ is a feedback vertex
set, condition 2 of Definition A.2 holds trivially since
$V(G)\setminus(C\cup F)=\emptyset$, and $G[C\cup F]=G$ contains
$|C|=\ell\geq 1$ vertex-disjoint cycles. Hence $(C,F)$ is a non-empty
1-antler in $G$.
Now suppose there is a polynomial-time algorithm to find a non-empty 1-antler
decomposition, if one exists. We use it to solve CNF-SAT in polynomial time.
Given an input formula $\Phi$ for CNF-SAT, use Lemma A.1 to produce in
polynomial time a graph $G$ and a packing $\mathcal{H}$ of $\ell$ vertex-
disjoint cycles in $G$, such that $\Phi$ is satisfiable if and only if
$\operatorname{\textsc{fvs}}(G)=\ell$. The following recursive polynomial-time
algorithm correctly tests, given a graph $G$ and a packing $\mathcal{H}$ of
some $\ell\geq 0$ vertex-disjoint cycles in $G$, whether
$\operatorname{\textsc{fvs}}(G)=\ell$.
1. 1.
If $\ell=0$, then output yes if and only if $G$ is acyclic.
2. 2.
If $\ell>0$, run the hypothetical algorithm to find a non-empty 1-antler
$(C,F)$.
1. (a)
If a non-empty 1-antler $(C,F)$ is returned, then let $\mathcal{H^{\prime}}$
consist of those cycles in the packing not intersected by $C$, and let
$\ell^{\prime}:=|\mathcal{H^{\prime}}|$. Return the result of recursively
running the algorithm on $G^{\prime}:=G-(C\cup F)$ and $\mathcal{H^{\prime}}$
to test whether $G^{\prime}$ has a feedback vertex set of size
$|\mathcal{H^{\prime}}|$.
2. (b)
Otherwise, return no.
The claim shows that the algorithm is correct when it returns no. Observation
3.3 shows that if we recurse, we have $\operatorname{\textsc{fvs}}(G)=\ell$ if
and only if $\operatorname{\textsc{fvs}}(G-(C\cup F))=\ell^{\prime}$; hence
the result of the recursion is the correct output. Since the number of
vertices in the graph reduces by at least one in each iteration, the overall
running time is polynomial assuming the hypothetical algorithm to compute a
non-empty 1-antler. Hence using Lemma A.1 we can decide CNF-SAT in polynomial
time.
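The recursive test above can be sketched as follows (our illustration; `find_antler` is a placeholder for the hypothetical polynomial-time 1-antler finder, and graphs are given as a vertex set plus an edge list):

```python
def is_acyclic(vertices, edges):
    """Undirected multigraph acyclicity via union-find: a self-loop, a
    parallel edge, or any edge within one component closes a cycle."""
    parent = {v: v for v in vertices}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in edges:
        if u == v:
            return False
        ru, rv = find(u), find(v)
        if ru == rv:
            return False
        parent[ru] = rv
    return True

def fvs_equals_packing(vertices, edges, cycles, find_antler):
    """Decide fvs(G) == len(cycles), given a packing `cycles` of
    vertex-disjoint cycles (as vertex tuples) and an oracle
    find_antler(vertices, edges) returning a non-empty 1-antler (C, F)
    or None if none exists."""
    if not cycles:                       # step 1: ell = 0
        return is_acyclic(vertices, edges)
    ant = find_antler(vertices, edges)   # step 2
    if ant is None:                      # step 2(b)
        return False
    C, F = ant                           # step 2(a): recurse on G - (C u F)
    gone = C | F
    vertices2 = vertices - gone
    edges2 = [(u, v) for u, v in edges if u not in gone and v not in gone]
    cycles2 = [cyc for cyc in cycles if not (set(cyc) & C)]
    return fvs_equals_packing(vertices2, edges2, cycles2, find_antler)
```

Each recursive call removes at least one vertex, matching the polynomial bound argued in the proof.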
###### Corollary A.6.
It is NP-complete to determine whether
$\operatorname{\textsc{fvs}}(G)=\operatorname{\textsc{fvs}}_{LP}(G)$. Here
$\operatorname{\textsc{fvs}}(G)$ denotes the minimum size of a feedback vertex
set in $G$, and $\operatorname{\textsc{fvs}}_{LP}(G)$ denotes the minimum cost of
a solution to the linear programming relaxation of Feedback Vertex Set on $G$.
###### Proof A.7.
Membership in NP is trivial; we prove hardness. Suppose a polynomial-time
algorithm for this decision problem exists. As above, we use it to solve
CNF-SAT in polynomial time. Given an
input formula $\Phi$ for CNF-SAT, use Lemma A.1 to produce in polynomial time a graph $G$
and packing $\mathcal{H}$ of $\ell$ vertex-disjoint cycles in $G$, such that
$\Phi$ is satisfiable if and only if $\operatorname{\textsc{fvs}}(G)=\ell$.
Compute the cost $c$ of an optimal solution to the linear programming
relaxation of Feedback Vertex Set on $G$, using the ellipsoid method. By the
properties of a relaxation, if $c>\ell$ then
$\operatorname{\textsc{fvs}}(G)>\ell$, and hence we can safely report that
$\Phi$ is unsatisfiable. If $c\leq\ell$, then the existence of $\ell$ vertex-
disjoint cycles in $G$ implies that $c=\ell$. Run the hypothetical algorithm
to test whether
$\operatorname{\textsc{fvs}}(G)=\operatorname{\textsc{fvs}}_{LP}(G)$. If the
answer is yes, then $G$ has a feedback vertex set of size $\ell$ and hence
$\Phi$ is satisfiable; if not, then $\Phi$ is unsatisfiable.
## Appendix B W[1]-hardness of finding bounded-width 1-antlers
We consider the following parameterized problem.
Bounded-Width 1-Antler Detection
Input: An undirected multigraph $G$ and an integer $k$.
Parameter: $k$.
Question: Does $G$ admit a non-empty 1-antler $(C,F)$ with $|C|\leq k$?
We prove that Bounded-Width 1-Antler Detection is W[1]-hard by a reduction
from Multicolored Clique, which is defined as follows.
Multicolored Clique
Input: An undirected simple graph $G$, an integer $k$, and a partition of $V(G)$ into sets $V_{1},\ldots,V_{k}$.
Parameter: $k$.
Question: Is there a clique $S$ in $G$ such that for each $1\leq i\leq k$ we have $|S\cap V_{i}|=1$?
The sets $V_{i}$ are referred to as _color classes_ , and a solution clique
$S$ is called a _multicolored clique_. It is well-known that Multicolored
Clique is W[1]-hard (cf. [16, Thm. 13.25]). Our reduction is inspired by the
W[1]-hardness of detecting a Hall set [16, Exercise 13.28]. In the proof, we
use the following shorthand notation: for a positive integer $n$, we denote by
$[n]$ the set $\\{1,\ldots,n\\}$.
###### Theorem B.1.
Bounded-Width 1-Antler Detection is W[1]-hard.
###### Proof B.2.
We give a parameterized reduction [16, Def. 13.1] from the Multicolored Clique
problem. By inserting isolated vertices if needed, we may assume without loss
of generality that for the input instance $(G,k,V_{1},\ldots,V_{k})$ we have
$|V_{1}|=|V_{2}|=\ldots=|V_{k}|=n$ for some $n\geq k(k-1)+4$. For each
$i\in[k]$, fix an arbitrary labeling of the vertices in $V_{i}$ as $v_{i,j}$
for $j\in[n]$. Given this instance, we construct an input
$(G^{\prime},k^{\prime})$ for Bounded-Width 1-Antler Detection as follows.
1. 1.
For each $i\in[k]$, for each $j\in[n]$, create a set
$U_{i,j}=\\{u_{i,j,\ell}\mid\ell\in[k]\setminus\\{i\\}\\}$ of vertices in
$G^{\prime}$ to represent $v_{i,j}$. Intuitively, vertex $u_{i,j,\ell}$
represents the connection that the $j$th vertex from the $i$th color class
should have to the neighbor in the $\ell$th color class chosen in the solution
clique.
2. 2.
Define $\mathcal{U}:=\bigcup_{i\in[k]}\bigcup_{j\in[n]}U_{i,j}$. Insert
(single) edges to turn $\mathcal{U}$ into a clique in $G^{\prime}$.
3. 3.
For each edge $e$ in $G$ between vertices of different color classes, let
$e=\\{v_{i,j},v_{i^{\prime},j^{\prime}}\\}$ with $i<i^{\prime}$, and insert
two vertices into $G^{\prime}$ to represent $e$:
* •
Insert a vertex $w_{e}$, add an edge from $w_{e}$ to each vertex in
$U_{i,j}\cup U_{i^{\prime},j^{\prime}}$, and then add a second edge between
$w_{e}$ and $u_{i,j,i^{\prime}}$.
* •
Insert a vertex $w_{e^{\prime}}$, add an edge from $w_{e^{\prime}}$ to each
vertex in $U_{i,j}\cup U_{i^{\prime},j^{\prime}}$, and then add a second edge
between $w_{e^{\prime}}$ and $u_{i^{\prime},j^{\prime},i}$.
Let $W$ denote the set of vertices of the form $w_{e},w_{e^{\prime}}$ inserted
to represent an edge of $G$. Observe that $W$ is an independent set in $G^{\prime}$.
4. 4.
Finally, insert a vertex $u^{*}$ into $G^{\prime}$. Add a single edge from
$u^{*}$ to all other vertices of $G^{\prime}$, to make $u^{*}$ into a
universal vertex.
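The four construction steps can be sketched in code (our illustration; vertex $v_{i,j}$ of $G$ is encoded as the pair $(i,j)$, and $G^{\prime}$ is returned as a vertex list plus an edge list in which a double edge appears as two copies of the same pair):

```python
def build_g_prime(k, n, g_edges):
    """Steps 1-4 of the reduction: g_edges are pairs ((i, j), (i2, j2))
    of G-vertices from different color classes (i != i2)."""
    # Step 1: a set U_{i,j} of k-1 vertices per vertex v_{i,j} of G.
    U = {(i, j): [("u", i, j, l) for l in range(1, k + 1) if l != i]
         for i in range(1, k + 1) for j in range(1, n + 1)}
    cal_u = [u for group in U.values() for u in group]
    E = []
    # Step 2: turn the union of all U_{i,j} into a clique (single edges).
    for a in range(len(cal_u)):
        for b in range(a + 1, len(cal_u)):
            E.append((cal_u[a], cal_u[b]))
    # Step 3: two vertices w_e, w_e' per edge e of G, each with one double edge.
    W = []
    for e in g_edges:
        (i, j), (i2, j2) = sorted(e)  # lexicographic order gives i < i'
        w_e, w_e2 = ("w", (i, j), (i2, j2), 0), ("w", (i, j), (i2, j2), 1)
        W += [w_e, w_e2]
        for u in U[(i, j)] + U[(i2, j2)]:
            E.append((w_e, u))
            E.append((w_e2, u))
        E.append((w_e, ("u", i, j, i2)))    # second copy: double edge
        E.append((w_e2, ("u", i2, j2, i)))  # second copy: double edge
    # Step 4: universal vertex u* with a single edge to everything.
    star = ("u*",)
    for v in cal_u + W:
        E.append((star, v))
    return cal_u + W + [star], E
```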
This concludes the construction of $G^{\prime}$. Note that $G^{\prime}$
contains double-edges, but no self-loops. We set $k^{\prime}:=k(k-1)$, which
is appropriately bounded for a parameterized reduction. It is easy to see that
the reduction can be performed in polynomial time. It remains to show that $G$
has a multicolored $k$-clique if and only if $G^{\prime}$ has a non-empty
1-antler of width at most $k^{\prime}$. To illustrate the intended behavior of the
reduction, we first prove the forward implication.
###### Claim 8.
If $G$ has a multicolored clique of size $k$, then $G^{\prime}$ has a non-
empty 1-antler of width $k^{\prime}$.
Suppose $S$ is a multicolored clique of size $k$ in $G$. Choose indices
$j_{1},\ldots,j_{k}$ such that $S\cap V_{i}=\\{v_{i,j_{i}}\\}$ for all
$i\in[k]$. Define a 1-antler $(C,F)$ in $G^{\prime}$ as follows:
* •
$C=\bigcup_{i\in[k]}U_{i,j_{i}}$.
* •
$F=\\{w_{e},w_{e^{\prime}}\mid e\text{\leavevmode\nobreak\ is an edge
in\leavevmode\nobreak\ $G$ between distinct vertices of\leavevmode\nobreak\
$S$}\\}$.
Since each set $U_{i,j_{i}}$ has size $k-1$, it follows that
$|C|=k(k-1)=k^{\prime}$. Since $F\subseteq W$ is an independent set in
$G^{\prime}$, it also follows that $G^{\prime}[F]$ is acyclic. Each tree $T$
in the forest $G^{\prime}[F]$ consists of a single vertex $w_{e}$ or
$w_{e^{\prime}}$. By construction, there is exactly one edge between $T$ and
$V(G^{\prime})\setminus(C\cup F)$; this is the edge to the universal vertex
$u^{*}$. It remains to verify that $G^{\prime}[C\cup F]$ contains $|C|$
vertex-disjoint cycles, each containing exactly one vertex of $C$. Consider an
arbitrary vertex $u_{i,j_{i},\ell}$ in $C$; we show we can assign it a cycle
in $G^{\prime}[C\cup F]$ so that all assigned cycles are vertex-disjoint.
Since $S$ is a clique, there is an edge $e$ in $G$ between $v_{i,j_{i}}$ and
$v_{\ell,j_{\ell}}$, and the corresponding vertices $w_{e},w_{e^{\prime}}$ are
in $F$. If $i<\ell$, then $w_{e}\in F$ and there are two edges between
$u_{i,j_{i},\ell}$ and $w_{e}$, forming a cycle on two vertices. If $i>\ell$,
then there is a cycle on two vertices $u_{i,j_{i},\ell}$ and $w_{e^{\prime}}$.
Since for any vertex of the form $w_{e}$ or $w_{e^{\prime}}$ there is a unique
vertex of $C$ that it has a double-edge to, the resulting cycles are indeed
vertex-disjoint. This proves that $(C,F)$ is a 1-antler of width $k^{\prime}$.
Before proving the reverse implication, we establish some structural claims
about 1-antlers in $G^{\prime}$.
###### Claim 9.
If $(C,F)$ is a non-empty 1-antler in $G^{\prime}$ with $|C|\leq k^{\prime}$,
then the following holds:
1. 1.
$\mathcal{U}\cap F=\emptyset$.
2. 2.
$u^{*}\notin C\cup F$.
3. 3.
$W\cap C=\emptyset$.
4. 4.
$C\subseteq\mathcal{U}$, $F\subseteq W$, and each tree of the forest
$G^{\prime}[F]$ consists of a single vertex.
5. 5.
For each vertex $w\in F$ we have $N_{G^{\prime}}(w)\cap\mathcal{U}\subseteq
C$.
6. 6.
$F\neq\emptyset$.
(1) Assume for a contradiction that there is a vertex
$u_{i,j,\ell}\in\mathcal{U}\cap F$. Since $G^{\prime}[F]$ is a forest by
Property (1) of Definition A.2, while $\mathcal{U}$ is a clique in
$G^{\prime}$, it follows that $|F\cap\mathcal{U}|\leq 2$. By Property (2), for
a vertex in $F$, there is at most one of its neighbors that belongs to neither
$F$ nor $C$. Since $|\mathcal{U}|\geq n\geq k(k-1)+4$, and $u_{i,j,\ell}\in F$
is adjacent to all other vertices of $\mathcal{U}$ since that set forms a
clique, the fact that $|F\cap\mathcal{U}|\leq 2$ implies that
$|C\cap\mathcal{U}|\geq|\mathcal{U}|-2-1\geq k(k-1)+4-3>k^{\prime}$. So
$|C|>k^{\prime}$, which contradicts that $(C,F)$ is a 1-antler with $|C|\leq
k^{\prime}$.
(2) Since $u^{*}$ is a universal vertex in $G^{\prime}$, the set
$\mathcal{U}\cup\\{u^{*}\\}$ is a clique and the preceding argument shows that
$u^{*}\notin F$. To prove the claim we show that additionally, $u^{*}\notin
C$. Assume for a contradiction that $u^{*}\in C$. By Observation A.3, the
graph $G^{\prime}[\\{u^{*}\\}\cup F]$ contains a cycle. Since $W$ is an
independent set in $G^{\prime}$ and $u^{*}$ is not incident on any double-
edges, the graph $G^{\prime}[\\{u^{*}\\}\cup W]$ is acyclic. Hence to get a
cycle in $G^{\prime}[\\{u^{*}\\}\cup F]$, the set $F$ contains at least one
vertex that is not in $W$ and not $u^{*}$; hence this vertex belongs to
$\mathcal{U}$. So $\mathcal{U}\cap F\neq\emptyset$; but this contradicts Claim
9(1).
(3) Assume for a contradiction that $w\in W\cap C$. Again by Observation A.3,
there is a cycle in $G^{\prime}[\\{w\\}\cup F]$, and since $G^{\prime}$ does
not have any self-loops this implies $N_{G^{\prime}}(w)\cap F\neq\emptyset$.
But by construction of $G^{\prime}$ we have
$N_{G^{\prime}}(w)\subseteq\mathcal{U}\cup\\{u^{*}\\}$, so $F$ contains a
vertex of $\mathcal{U}$ or the vertex $u^{*}$. But this contradicts Claim
9(1) or Claim 9(2), respectively.
(4) Since the sets $\mathcal{U},W,\\{u^{*}\\}$ form a partition of
$V(G^{\prime})$, the preceding subclaims imply $C\subseteq\mathcal{U}$ and
$F\subseteq W$. Since $W$ is an independent set in $G^{\prime}$, this implies
that each tree of the forest $G^{\prime}[F]$ consists of a single vertex.
(5) Consider a vertex $w\in F$, which by itself forms a tree $T$ in
$G^{\prime}[F]$. Since $u^{*}\notin C\cup F$, the edge between $T$ and $u^{*}$
is the unique edge connecting $T$ to a vertex of $V(G^{\prime})\setminus(C\cup
F)$, and therefore all neighbors of $T$ other than $u^{*}$ belong to $C\cup
F$. Since a vertex $w\in W$ has
$N_{G^{\prime}}(w)\subseteq\mathcal{U}\cup\\{u^{*}\\}$, it follows that
$N_{G^{\prime}}(w)\cap\mathcal{U}\subseteq C$.
(6) By the assumption that $(C,F)$ is non-empty, we have $C\cup
F\neq\emptyset$. This implies that $F\neq\emptyset$: if $C$ contained a
vertex $c$ while $F=\emptyset$, then by Observation A.3 the graph
$G^{\prime}[\\{c\\}\cup F]=G^{\prime}[\\{c\\}]$ would contain a cycle, which
is not the case since $G^{\prime}$ has no self-loops. Hence $F\neq\emptyset$.
With these structural insights, we can prove the remaining implication.
###### Claim 10.
If $G^{\prime}$ has a non-empty 1-antler $(C,F)$ with $|C|\leq k^{\prime}$,
then $G$ has a multicolored clique of size $k$.
Let $(C,F)$ be a non-empty 1-antler in $G^{\prime}$ with $|C|\leq k^{\prime}$.
By Claim 9 we have $C\subseteq\mathcal{U}$, while $F\subseteq W$ and
$F\neq\emptyset$. Consider a fixed vertex $w\in F$. Since $F\subseteq W$,
vertex $w$ is of the form $w_{e}$ or $w_{e^{\prime}}$ constructed in Step 3 to
represent some edge $e$ of $G$. Choose $i^{*}\in[k],j^{*}\in[n]$ such that
$v_{i^{*},j^{*}}\in V_{i^{*}}$ is an endpoint of edge $e$ in $G$. By
construction we have $U_{i^{*},j^{*}}\subseteq N_{G^{\prime}}(w)$ and
therefore Claim 9(5) implies $U_{i^{*},j^{*}}\subseteq C$.
Consider an arbitrary $\ell\in[k]\setminus\\{i^{*}\\}$. Then
$u_{i^{*},j^{*},\ell}\in U_{i^{*},j^{*}}\subseteq C$, so by Observation A.3
the graph $G^{\prime}[\\{u_{i^{*},j^{*},\ell}\\}\cup F]$ contains a cycle.
Since $F$ is an independent set in $G^{\prime}$ and $G^{\prime}$ has no self-
loops, this cycle consists of two vertices joined by a double-edge. By
construction of $G^{\prime}$, such a cycle involving $u_{i^{*},j^{*},\ell}$
exists only through vertices $w_{e}$ or $w_{e^{\prime}}$ where $e$ is an edge
of $G$ connecting $v_{i^{*},j^{*}}$ to a neighbor in class $V_{\ell}$.
Consequently, $F$ contains a vertex $w$ that represents such an edge $e$. Let
$v_{\ell,j_{\ell}}$ denote the other endpoint of $e$. Then
$N_{G^{\prime}}(w)\supseteq U_{i^{*},j^{*}}\cup U_{\ell,j_{\ell}}$, and by
Claim 9(5) we therefore have $U_{\ell,j_{\ell}}\subseteq C$.
Applying the previous argument for all $\ell\in[k]\setminus\\{i^{*}\\}$,
together with the fact that $U_{i^{*},j^{*}}\subseteq C$, we find that for
each $i\in[k]$ there exists a value $j_{i}$ such that $U_{i,j_{i}}\subseteq
C$. Since $|C|\leq k(k-1)$ while each such set $U_{i,j_{i}}$ has size $k-1$,
it follows that the choice of $j_{i}$ is uniquely determined for each
$i\in[k]$, and that there are no other vertices in $C$. To complete the proof,
we argue that the set $S=\\{v_{i,j_{i}}\mid i\in[k]\\}$ is a clique in $G$.
Consider an arbitrary pair of distinct vertices
$v_{i,j_{i}},v_{i^{\prime},j_{i^{\prime}}}$ in $S$, and choose the indices
such that $i<i^{\prime}$. We argue that $G$ contains an edge $e$ between these
vertices, as follows. Since $u_{i,j_{i},i^{\prime}}\in U_{i,j_{i}}\subseteq
C$, by Observation A.3 the graph $G^{\prime}[\\{u_{i,j_{i},i^{\prime}}\\}\cup
F]$ contains a cycle. As argued above, the construction of $G^{\prime}$ and
the fact that $F\subseteq W$ ensure that this cycle consists of
$u_{i,j_{i},i^{\prime}}$ joined to a vertex in $F$ by a double-edge. By Step 3
and the fact that $i<i^{\prime}$, this vertex is of the form $w_{e}$ for an
edge $e$ in $G$ connecting $v_{i,j_{i}}$ to a vertex
$v_{i^{\prime},j^{\prime}}$ in $V_{i^{\prime}}$. By construction of
$G^{\prime}$ we have $U_{i^{\prime},j^{\prime}}\subseteq
N_{G^{\prime}}(w_{e})$, and then $w_{e}\in F$ implies by Claim 9(5) that
$U_{i^{\prime},j^{\prime}}\subseteq C$. Since we argued above that for index
$i^{\prime}$ there is a unique choice $j_{i^{\prime}}$ with
$U_{i^{\prime},j_{i^{\prime}}}\subseteq C$, we must have
$j^{\prime}=j_{i^{\prime}}$. Hence the vertex $w_{e}$ contained in $F$
represents the edge of $G$ between $v_{i,j_{i}}$ and
$v_{i^{\prime},j_{i^{\prime}}}$ in $G$, which proves in particular that the
edge exists. As the choice of vertices was arbitrary, this shows that $S$ is a
clique in $G$. As it contains exactly one vertex from each color class, graph
$G$ has a multicolored clique of size $k$.
# Learned Construction Grammars Converge Across Registers
Given Increased Exposure
Jonathan Dunn
University of Canterbury
Christchurch, NZ
<EMAIL_ADDRESS>
Harish Tayyar Madabushi
University of Sheffield
Sheffield, UK
<EMAIL_ADDRESS>
###### Abstract
This paper measures the impact of increased exposure on whether learned
construction grammars converge onto shared representations when trained on
data from different registers. Register influences the frequency of
constructions, with some structures common in formal but not informal usage.
We expect that a grammar induction algorithm exposed to different registers
will acquire different constructions. To what degree does increased exposure
lead to the convergence of register-specific grammars? The experiments in this
paper simulate language learning in 12 languages (half Germanic and half
Romance) with corpora representing three registers (Twitter, Wikipedia, Web).
These simulations are repeated with increasing amounts of exposure, from 100k
to 2 million words, to measure the impact of exposure on the convergence of
grammars. The results show that increased exposure does lead to converging
grammars across all languages. In addition, a shared core of register-
universal constructions remains constant across increasing amounts of
exposure.
## 1 Exposure and Convergence
The central question that this work aims to answer is whether register-
specific grammars converge onto shared representations when exposed to more
training data. Variation in the context of production, called register
variation, has a significant impact on the frequency of constructions. For
example, imperative and wh-question constructions are much more frequent in
informal or conversational speech, while declarative constructions are much
more frequent in formal written usage Fodor and Crowther (2002); Sampson
(2002).
At the same time, usage-based grammar views language as a complex adaptive
system that emerges given exposure to usage Bybee (2006); Beckner et al.
(2009). Thus, a language learner is expected to be strongly influenced by the
observed frequency of constructions. Given this wide variance in the frequency
of constructions across registers, it is conceivable that learners exposed to
different registers learn different constructions.
This paper simulates the language acquisition process from a usage-based
perspective by learning Construction Grammars, called CxGs Goldberg (1995,
2006); Langacker (2008). A constructional approach to language focuses on
symbolic form-meaning mappings that are potentially idiomatic. Previous work
on computational CxG has explored how to discover potential constructions
Wible and Tsao (2010); Forsberg et al. (2014); Dunn (2017), the process of
construction learning Barak and Goldberg (2017); Barak et al. (2017), and
whether constructional information is implicitly encoded in models like BERT
Tayyar Madabushi et al. (2020).
A commonly discussed example of a construction is the ditransitive in (a1).
CxGs use a constraint-based formalism in which each slot in the construction
is defined by a particular slot-constraint; in (a1) these are syntactic
constraints. One of the important ideas in CxG is that constructions
themselves carry a meaning. For example, the ditransitive construction carries
a meaning of transfer regardless of the meaning of the particular verb that is
used in the ditransitive. In some cases, this notion of transfer also follows
from the meaning of the verb, as in (a2) with sold. But, in other cases,
utterances like (a3) can take on a meaning of transfer that is not present in
the verb smile. Constructions can also have idiomatic children, such as (a4),
in which an item-specific slot-constraint defines a sub-class of the
construction, like (a5), whose meaning is not entirely predictable or
transparent given the parent construction.
(a1) [ Syn: np – Syn: vp – Syn: np – Syn: np ]
(a2) “He sold them a car.”
(a3) “He smiled himself an upgrade.”
(a4) “He gave them a hand.”
(a5) [ Syn: np – Lex: give – Syn: np – Lex: hand ]
Language | Code | Family
---|---|---
Danish | dan | Germanic
Dutch | nld | Germanic
English | eng | Germanic
German | deu | Germanic
Norwegian | nor | Germanic
Swedish | swe | Germanic
Catalan | cat | Romance
French | fra | Romance
Italian | ita | Romance
Portuguese | por | Romance
Romanian | ron | Romance
Spanish | spa | Romance
Table 1: Sources of Language Data, with 2 million words each for the tw, wk,
and cc registers
Register is a distinct pattern of usage that is associated with the context of
production. A substantial body of research has shown that register is a major
source of linguistic variation (Biber, 2012). Recent work has shown that the
impact of register variation exceeds the impact of geographic variation in
many cases (Dunn, 2021). The result of register variation is that large
corpora often contain a number of distinct sub-corpora, each with their own
unique patterns of usage (Sardinha, 2018; Cvrček et al., 2020). In other
words, a gigaword web-crawled corpus is not simply a flat collection of many
written documents: there is, instead, a register-based grouping of sub-corpora
which often contain significantly different linguistic forms.
This paper simulates the acquisition of constructions by incrementally
increasing the amount of exposure: 100k words, 200k words, 300k words and so
on up to 2 million words Alishahi and Stevenson (2008); Matusevych et al.
(2013); Beekhuizen et al. (2015). This provides a series of grammars, each
representing a different state in the learning process. This experiment is
repeated across three registers, each with a unique set of constructional
frequencies: Wikipedia (formal), Twitter (informal), and Web (mixed). Each
register has a progression of 20 register-specific grammars, with each grammar
representing different levels of exposure.
Is there a level of exposure at which register-specific grammars reach a
stable shared representation of the language? On the one hand, it is possible
that register-specific grammars are maintained as unique sub-sets of
linguistic behaviour. In this case, grammars would not converge given
increased exposure. On the other hand, it is possible that register-specific
constructions fade away as increased exposure leads to more generalized
grammars. In this case, grammars would converge, becoming more similar and
less register-specific as they are learned through exposure to more training
data.
To avoid language-specific generalizations, this experiment is repeated across
six Germanic languages (Danish, Dutch, English, German, Norwegian, Swedish)
and six Romance languages (Catalan, French, Italian, Portuguese, Romanian,
Spanish). Each grammar contains a set of constructions that have been learned
to best represent the training data. Thus, for each stage in the learning
process, we can measure the overlap between register-specific grammars
(Twitter to Web, Twitter to Wikipedia, and Wikipedia to Web). When register-
specific grammars have a higher overlap this means that they share more of
their constructional representations. In other words, higher overlap means
that the grammars are more similar.
We say that grammars converge when they have a higher degree of overlap or
similarity. This paper develops two measures of grammar similarity to capture
different aspects of convergence: a fuzzy Jaccard similarity that captures
convergence across even rare constructions and a frequency-weighted Jaccard
similarity that focuses on the core constructions. These two measures of
convergence allow us to model the degree to which construction grammars learn
register-specific representations.
| Construction (Type) | | Construction (Type)
---|---|---|---
(b) | [ Syn: n – Lex: of – Syn: det – Sem:<587> ] | (c) | [ Lex: while – Sem:<113> – Syn: adp ]
| Examples (Tokens) | | Examples (Tokens)
(b1) | ‘spirit of the alchemist’ | (c1) | ‘while working out’
(b2) | ‘provinces of the empire’ | (c2) | ‘while sitting by’
(b3) | ‘raft of the medusa’ | (c3) | ‘while going through’
(b4) | ‘constellations of the zodiac’ | (c4) | ‘while carrying out’
(b5) | ‘myth of the anaconda’ | (c5) | ‘while sticking around’
Table 2: Examples of Constructions (Types) and Instances of Constructions
(Tokens) for English
There are three possible outcomes for this experimental framework: First, it
is possible that grammars converge as exposure increases. This convergence
would indicate that register variation becomes less important given more data:
the grammars contain the same constructions regardless of the input register.
In other words, this would indicate that more data overall is able to
compensate for register variation. Second, it is possible that grammars do not
converge as the amount of exposure increases. This outcome would indicate that
each register represents a unique sub-grammar, a distinct set of linguistic
behaviours. In this case, more data overall would never compensate for
register variation. These two competing hypotheses are tested in Experiment 1
(Section 5). This experiment finds that increased exposure does, in fact, lead
to converging grammars. This finding supports the first hypothesis that is
described above.
Third, another possible outcome is that not all languages pattern together. In
other words, it may be the case that some Germanic languages converge given
increased exposure but some Romance languages retain register-specific
constructions. Part of the experimental design is to repeat the same framework
across many languages to determine whether the outcome is specific to one or
another language. We will see, in Experiment 1, that there is variation across
languages in both (i) the rate of convergence and (ii) the upper limit when
grammars stop converging. However, it remains the case that all languages show
increased convergence given increased exposure.
Given that the first experiments show that grammars do converge given
increased exposure, we undertake an additional experiment: is there a core
construction grammar for each language? Construction grammars have a
prototype structure, meaning that some representations are central (and thus very
frequent) while others are peripheral (and thus somewhat rare). We use the
frequency-weighted Jaccard similarity in Experiment 2 (Section 6) to determine
whether the overall rate of convergence changes when we focus on the core of
the grammar rather than the periphery. These experiments show that for most
languages the core grammar is acquired very early with little change in
convergence given increased exposure.
## 2 Experimental Design
The basic experimental approach is to learn grammars over increasing amounts
of exposure: from 100k words to 2 million words in increments of 100k words
(thus creating 20 grammars per condition). This series of grammars simulates
the accumulation of grammatical knowledge as the amount of exposure increases.
This approach is repeated across each of the three registers that represent
different contexts of production.
The register-specific data used to progressively learn grammars is collected
from three sets of corpora: tweets (tw), Wikipedia articles (wk), and web
pages (cc for Common Crawl). This dataset is summarized in Table 1. The corpus
contains the same amount of data per register per language Dunn (2020); Dunn
and Adams (2020).
The pairwise similarity relationships between grammars differ, in part,
because some sources of data are more similar to one another. For example, cc
and wk are more similar registers and thus their grammars have a baseline
similarity that is higher than wk and tw. In other words, the Wikipedia
grammars (CxG-wk) are more similar to the web grammars (CxG-cc) than to the
Twitter grammars (CxG-tw). What matters, then, is the degree to which the
relative similarity between grammars changes as the amount of exposure
increases. This approach controls for the underlying similarity between
registers.
## 3 Learning Constructions
The grammar induction algorithm used to learn constructions is taken from
previous work Dunn (2017). At its core, this model of CxGs has three main
components: First, a psychologically-plausible measure of association, the
$\Delta P$, is used to measure the entrenchment of particular representations
Ellis (2007); Gries (2013); Dunn (2018c). Second, an association-based beam
search is used to identify constructions of arbitrary length by finding the
most entrenched representation for each training sentence in reference to a
matrix of $\Delta P$ values Dunn (2019a). Third, a Minimum Description Length
measure is used as an optimization function, balancing the trade-off between
increased storage of item-specific constructions and increased computation of
generalized constructions Dunn (2018b). We briefly review each of these
components of the algorithm in this section.
Constructions are constraint-based syntactic representations in which
individual slots are limited to particular syntactic, semantic, or lexical
items Goldberg (1995). Construction Grammars are usage-based in the sense that
constructions range from very idiomatic phrases (like “give me a hand”) to
very abstract sequences (like np -> det adj np). One of the many advantages of
CxGs is that they represent actual usage, which means that they are capable of
identifying syntactic variation across dialects Dunn (2018a, 2019c, 2019b) and
even across individuals Dunn and Nini (2021). But the disadvantage is that the
induction algorithm must learn both the units of representation (i.e.,
semantic categories) as well as these multi-dimensional slot-constraints. For
example, the constructions in (b) and (c) in Table 2 include all three types
of representation (lexical, syntactic, semantic) so that the algorithm must be
able to navigate across these three representations during the learning
process.
The grammar induction algorithm starts by defining the types of
representation. Lexical constraints are based on word-forms, without
lemmatization; these are the simplest and most idiomatic types of constraints.
Syntactic constraints are formulated using the universal part-of-speech
tagset Petrov et al. (2012) and implemented using the Ripple Down Rules
algorithm Nguyen et al. (2016). Semantic constraints are based on
distributional semantics, with k-means clustering applied to discretize pre-
trained fastText embeddings Grave et al. (2019). The semantic constraints in
(b) and (c) are formulated using the index of the corresponding clusters. A
complete list of semantic domains used in this paper, along with a grammar for
English, are available in the supplementary material.
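The discretization step for semantic constraints can be sketched with a tiny pure-Python k-means; the toy 2-dimensional vectors, cluster count, and function names below are illustrative stand-ins for clustering pre-trained fastText embeddings, not the paper's actual setup.

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Tiny k-means used to discretize embedding vectors into semantic
    categories. A stand-in for clustering fastText vectors; the data and
    cluster count here are illustrative assumptions."""
    rng = random.Random(seed)
    centroids = [list(v) for v in rng.sample(vectors, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            clusters[semantic_index(v, centroids)].append(v)
        for i, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster empties
                centroids[i] = [sum(dim) / len(members) for dim in zip(*members)]
    return centroids

def semantic_index(vector, centroids):
    # A slot-constraint like Sem:<587> refers to the index of the
    # nearest semantic cluster for the word's embedding.
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(vector, centroids[c]))
    return min(range(len(centroids)), key=sq_dist)

# Toy 2-dimensional "embeddings" forming two obvious semantic groups.
vecs = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 4.9]]
cents = kmeans(vecs, k=2)
```

Words whose vectors fall near the same centroid receive the same semantic index and therefore satisfy the same semantic slot-constraint.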
Each sentence in the input corpus is transformed into these three parallel
dimensions of representation (lexical, syntactic, semantic). In the first
stage of the algorithm, a co-occurrence matrix is produced that represents the
association between all pairs of representations using the $\Delta P$ measure,
shown below. What distinguishes the $\Delta P$ from more common measures like
PPMI Church and Hanks (1989); Dagan et al. (1993) is that it has direction-
specific variants that take ordering into account, thus helping to capture
syntactic patterns. The measure, calculated left-to-right, is the probability
that two units occur together ($X$ and $Y$) adjusted by the probability that
$X$ occurs alone. In this notation, $Y_{P}$ indicates that unit $Y$ is present
and $Y_{A}$ that unit $Y$ is absent.
$\Delta P_{LR}=p(X_{P}|Y_{P})-p(X_{P}|Y_{A})$ (1)
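As a sketch, the directional $\Delta P$ in (1) can be computed from a 2x2 contingency table over observed unit bigrams; the toy corpus and function name below are illustrative, not the paper's implementation.

```python
def delta_p_lr(bigrams, x, y):
    """Left-to-right ΔP for the ordered pair (x, y), from a 2x2
    contingency table over observed bigrams:
    p(x in slot 1 | y in slot 2) - p(x in slot 1 | y absent)."""
    a = b = c = d = 0
    for left, right in bigrams:
        if left == x and right == y:
            a += 1  # x followed by y
        elif left == x:
            b += 1  # x followed by something else
        elif right == y:
            c += 1  # y preceded by something else
        else:
            d += 1  # neither unit in its slot
    return a / (a + c) - b / (b + d)

# Toy bigram corpus; in the real algorithm this fills a full
# co-occurrence matrix over lexical, syntactic, and semantic units.
corpus = [("give", "me"), ("give", "me"), ("give", "up"),
          ("tell", "me"), ("look", "up"), ("look", "out")]
score = delta_p_lr(corpus, "give", "me")
```

Because the measure conditions on slot order, `delta_p_lr(corpus, "me", "give")` would generally differ, which is what lets the direction-specific variants capture syntactic patterns.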
Given (i) a three-dimensional representation of each sentence and (ii) a co-
occurrence matrix with the directional association for each pair of
representations, a beam search algorithm is used to find the most entrenched
sequence of constraints for each sentence in the training corpus. The basic
idea behind this search is to traverse all possible paths of constraints,
ending each path when the cumulative $\Delta P$ falls below a threshold Dunn
(2019a). For each sentence, the sequence of slot-constraints with the highest
cumulative association is added to a provisional grammar. In CxG, some
representations are very entrenched (grammaticalized) and others are only
slightly entrenched Goldberg (2011, 2016). The optimum sub-set of
constructions is then selected from this provisional grammar.
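The idea of extending constraint paths while the cumulative association stays high can be sketched as follows; this is a simplification of the published algorithm, and the candidate sets, threshold value, and scoring details are assumptions.

```python
def beam_search(sentence, assoc, beam_width=3, threshold=0.1):
    """Association-based beam search over slot-constraints (a
    simplification of the published algorithm). `sentence` is a list of
    positions, each a list of candidate constraints for that slot;
    `assoc` maps (previous, next) constraint pairs to ΔP values.
    Paths whose cumulative association falls below `threshold` end."""
    beams = [([c], 0.0) for c in sentence[0]]
    for position in sentence[1:]:
        extended = []
        for path, score in beams:
            for c in position:
                new_score = score + assoc.get((path[-1], c), 0.0)
                if new_score >= threshold:
                    extended.append((path + [c], new_score))
        if not extended:  # every extension fell below the threshold
            break
        extended.sort(key=lambda t: t[1], reverse=True)
        beams = extended[:beam_width]
    # The most entrenched constraint sequence for this sentence.
    return max(beams, key=lambda t: t[1])

# Each position offers a lexical and a syntactic candidate constraint.
sent = [["Lex:give", "Syn:v"], ["Syn:np", "Lex:me"]]
assoc = {("Lex:give", "Lex:me"): 0.6, ("Syn:v", "Syn:np"): 0.3}
path, score = beam_search(sent, assoc)
```

The winning path for each training sentence is what gets added to the provisional grammar.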
The grammar induction model itself is based on the Minimum Description Length
paradigm Grünwald and Rissanen (2007); Goldsmith (2001, 2006). In this kind of
model, observed probabilities are used to calculate the encoding size of a
grammar ($L_{1}$) as well as the encoding size of a test corpus given that
grammar ($L_{2}$). Usage-based grammar posits a trade-off between memory and
computation; this is modelled by MDL’s combination of $L_{1}$ and $L_{2}$
encoding size Dunn (2018b).
The best grammar is the one which minimizes this metric on a test corpus. In
(2), $G$ refers to the grammar being evaluated and $D$ refers to the test
corpus. For example, this is used to choose the parameters of the beam search
algorithm described above. In practice, the use of MDL to evaluate grammars is
quite similar to the use of perplexity to evaluate language models; for
example, the MDL metric is specific to each test corpus. The advantage of the
MDL metric for usage-based grammar is that it distinguishes between the
complexity of the grammar ($L_{1}$) and the fit between the grammar and the
test corpus ($L_{2}$).
$MDL=\min\limits_{G}\\{{L_{1}(G)+L_{2}(D\mid G)}\\}$ (2)
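A minimal two-part MDL score in the spirit of (2) might look like the sketch below, assuming a simple code where each construction and each corpus event costs $-\log_2$ of its probability; the actual coding scheme used for CxG induction is more involved.

```python
import math

def mdl_score(grammar_probs, corpus_events):
    """Toy two-part MDL in the spirit of Equation (2): L1 encodes the
    grammar (each construction costs -log2 of its probability); L2
    encodes the test corpus given the grammar."""
    l1 = sum(-math.log2(p) for p in grammar_probs.values())
    l2 = sum(-math.log2(grammar_probs[e]) for e in corpus_events)
    return l1 + l2

# Of two candidate grammars, the one with the smaller score is kept.
g = {"cxn_a": 0.5, "cxn_b": 0.5}
score = mdl_score(g, ["cxn_a", "cxn_a", "cxn_b"])
```

A larger grammar inflates $L_{1}$ but can shrink $L_{2}$ by fitting the corpus better, which is exactly the memory-computation trade-off the paper describes.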
This induction algorithm provides a grammar of constructions to describe the
training corpus, where the grammar is chosen to minimize the MDL metric. The
grammar is a set of constructions, each of which is a sequence of slot-
constraints. And each slot-constraint is formulated using the basic inventory
of lexical, syntactic, and semantic fillers.
In previous work, the induction algorithm used alternating training and
testing sets to refine grammars. A large background corpus was used to
estimate the $\Delta P$ matrix that guides the selection of slot-constraints.
The experiments here, however, depend on limiting the amount of training data
as a means of controlling for different levels of exposure. In each condition,
then, the same training data is used for each stage in the algorithm. In other
words, the 200k word exposure condition has access only to the 200k word
training corpus (with the implicit exception of the pre-trained embeddings).
Each model is trained on the same underlying corpus, so that the 500k word
condition is given the same data as the 400k word condition plus an additional
100k words of new data.
Figure 1: Overlap between pairs of register-specific grammars by amount of
exposure, using Fuzzy Jaccard Similarity for Germanic languages to measure
Constructional Overlap (Experiment 1). Each line represents the similarity
between two grammars learned from different registers.

Figure 2: Overlap between pairs of register-specific grammars by amount of
exposure, using Fuzzy Jaccard Similarity for Romance languages to measure
Constructional Overlap (Experiment 1). Each line represents the similarity
between two grammars learned from different registers.
## 4 Measuring Grammar Similarity
A grammar in this context is a set of constructions, where each construction
is a sequence of slot-constraints. Our central measure of overlap between
grammars, then, is based on the Jaccard similarity, where values close to 1
represent very similar grammars and values close to 0 represent very different
grammars.
$J(A,B)=\frac{|A\cap B|}{|A\cup B|}$ (3)
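Applied to grammars, Equation (3) is direct; the sketch below treats each grammar as a set of constructions and each construction as a hashable tuple of slot-constraints.

```python
def jaccard(g1, g2):
    """Plain Jaccard similarity (Equation 3) between two grammars,
    each a set of constructions represented as tuples of
    slot-constraints."""
    g1, g2 = set(g1), set(g2)
    if not g1 and not g2:
        return 1.0
    return len(g1 & g2) / len(g1 | g2)

# Two register-specific grammars sharing one construction.
wk = {("Syn: np", "Syn: vp"), ("Lex: while", "Syn: adp")}
tw = {("Syn: np", "Syn: vp"), ("Syn: det", "Syn: n")}
overlap = jaccard(wk, tw)  # 1 shared construction out of 3 total
```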
One of the challenges with CxGs is that two different representations could
use different slot-constraints to capture a similar set of utterances,
essentially providing two versions of the same construction. Consider the
construction in (d), a part of the English web-based grammar. The tokens in
(d1) through (d4) are tokens of this construction. An alternate constraint
that specifies the rather than det might be chosen, leading to a different
representation for the same underlying construction.
(d) [ Lex: how to – Syn: v – Syn: det – Syn: n ]
(d1) ‘how to get the job’
(d2) ‘how to track a phone’
(d3) ‘how to improve the system’
(d4) ‘how to start a blog’
The challenge for calculating convergence using the Jaccard similarity between
grammars is that similar constructions could capture the same set of tokens.
For example, the syntactic constraint (det) could be replaced with a lexical
constraint (the) in the construction in (d). The Jaccard similarity on its own
would not capture the similarity between these two alternate formulations of
what is ultimately the same underlying construction.
Our first measure is thus a fuzzy Jaccard similarity, in which the definition
of set membership is extended to very similar constructions. In this measure,
a sub-sequence matching algorithm is used to find how many slot-constraints
are shared between two constructions, taking order into account. Any two
constructions above a threshold of 0.71 shared sub-sequences are considered a
match. This threshold is chosen because it allows one slot-constraint to
differ between most constructions while still considering them to be similar
representations. For example, two six-slot constructions must share five
constraints in order to count as a match at this threshold. This measure thus
provides a better approximation of construction similarity, focusing on
constructions with slightly different internal constraints or with an added
slot-constraint in one position.
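One plausible reading of this matching criterion uses the ratio of order-preserving shared slots to combined length, as computed by `difflib.SequenceMatcher`; the greedy one-to-one pairing below is an assumption, since the paper does not spell out its exact matching procedure.

```python
from difflib import SequenceMatcher

def is_match(c1, c2, threshold=0.71):
    # ratio = 2*M / (len(c1) + len(c2)), where M counts slots shared in
    # order; at 0.71, two six-slot constructions must share five slots.
    return SequenceMatcher(None, list(c1), list(c2)).ratio() >= threshold

def fuzzy_jaccard(g1, g2, threshold=0.71):
    """Jaccard with set membership relaxed to near-duplicate
    constructions, paired greedily one-to-one."""
    unmatched = list(g2)
    inter = 0
    for c1 in g1:
        for c2 in unmatched:
            if is_match(c1, c2, threshold):
                unmatched.remove(c2)
                inter += 1
                break
    union = len(g1) + len(g2) - inter
    return inter / union if union else 1.0

# Construction (d) with a syntactic det slot vs. the same construction
# with a lexical "the" slot: 3 of 4 slots shared in order, ratio 0.75.
g1 = [("Lex: how to", "Syn: v", "Syn: det", "Syn: n")]
g2 = [("Lex: how to", "Syn: v", "Lex: the", "Syn: n")]
```

Under this reading the two alternate formulations of (d) count as the same construction, which is the behaviour the fuzzy measure is designed to capture.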
Our second measure is a frequency-weighted Jaccard similarity, in which the
importance of each construction is weighted relative to its frequency in an
independent corpus. For each language, a background corpus is created from a
mix of different registers: Open Subtitles and Global Voices (news articles)
Tiedemann (2012) and Bible translations Christodoulopoulos and Steedman
(2015). This background corpus represents usage external to the three main
registers used in the experiments.
The frequency of each construction is derived from 500k words of this
background corpus, so that very common constructions are given more weight in
the similarity measure. The basic idea is that some constructions are part of
the core grammar, thus being frequent in all registers. This weighted measure
captures convergence within this core grammar by focusing on those
constructions which are most frequent in an independent corpus.
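A standard way to realize this weighting is an intersection-over-union of background-corpus frequencies; the exact formula is an assumption, since the paper does not spell it out.

```python
def weighted_jaccard(g1, g2, freq):
    """Frequency-weighted Jaccard: each construction contributes its
    frequency in the background corpus instead of 1, so the core
    (frequent) constructions dominate the similarity score."""
    g1, g2 = set(g1), set(g2)
    shared = sum(freq.get(c, 0) for c in g1 & g2)
    total = sum(freq.get(c, 0) for c in g1 | g2)
    return shared / total if total else 1.0

# A frequent core construction dominates two rare peripheral ones, so
# the weighted similarity is far higher than the unweighted 1/3.
freq = {"cxn_a": 1, "cxn_b": 8, "cxn_c": 1}
sim = weighted_jaccard({"cxn_a", "cxn_b"}, {"cxn_b", "cxn_c"}, freq)
```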
These two measures based on Jaccard similarity provide three values for each
condition: cc-wk, cc-tw, and wk-tw. Each of the values represents a pairwise
similarity between two register-specific grammars. The higher these values,
the more the learner is converging onto a shared grammar.
Figure 3: Overlap between pairs of register-specific grammars by amount of
exposure, using Weighted Jaccard Similarity for Germanic languages to measure
Core Grammatical Overlap (Experiment 2). Each line represents the similarity
between two grammars learned from different registers.

Figure 4: Overlap between pairs of register-specific grammars by amount of
exposure, using Weighted Jaccard Similarity for Romance languages to measure
Core Grammatical Overlap (Experiment 2). Each line represents the similarity
between two grammars learned from different registers.
We have thus formulated two measures of grammatical overlap. The first, fuzzy
Jaccard, captures the overall similarity between grammars. The second,
frequency-weighted Jaccard, captures the similarity between grammars with a
focus on the core constructions that are frequent across registers (ignoring
the long tail of register-specific forms). The following sections apply these
measures of grammatical overlap to the CxGs exposed to increasing amounts of
usage. In each case, higher values represent increased convergence between
grammars.
## 5 Experiment 1: Constructional Overlap
To what degree do grammars converge onto shared representations as they are
exposed to increasing amounts of data from different registers? The first
experiment uses the fuzzy Jaccard similarity to model convergence. The results
are shown in Figure 1 and Figure 2. In both cases, the y axis represents the
similarity between register-specific grammars, with higher values representing
more convergent sets of constructions. Note that each line represents the
similarity between two register-specific grammars. And the x axis represents
the amount of exposure, moving from 100k words up to 2 million words.
Languages are labelled using their language code, as listed in Table 1.
We notice, first, that there is a baseline difference in the similarity
between registers. In every language, for example, wk and tw are the least
similar. And, in every language, cc and wk are the most similar. This pattern
is shared across all 12 languages because the underlying contexts of
production have this similarity, with Wikipedia the most formal and Twitter
the least formal. The distance between registers does not matter here. What
does matter is the relative change in distance as the algorithm is exposed to
more data (i.e., cc-tw at 200k words and cc-tw at 2 million words).
We notice, second, that in every language grammars converge with increased
exposure. The overall similarity between constructions increases as more
training data is observed. This is true for both Germanic and Romance
languages, showing this to be a robust generalization.
Languages do differ, however, in (i) the overall amount of similarity and (ii)
the rate of convergence. In the first case, within Germanic languages the
range of grammar similarity is generally comparable, starting at approximately
0.2 and ending at approximately 0.4. Some languages (like English) reach a
higher level of convergence. Other languages have register-specific patterns:
in Norwegian the similarity between cc-wk is quite high throughout, but in
Swedish the similarity between cc-wk is the same as the similarity between cc-
tw. In other words, the ordering of register similarity is constant across
languages but the distance is not. Romance languages have a generally higher
overall rate of similarity, although there is a wide gap between French (the
highest) and Romanian (the lowest). This means that there is some variation
across languages in terms of how similar the register-specific grammars
become.
There is also variation in the rate of convergence. Some languages (like
Swedish) have a somewhat flat rate of convergence while other languages (like
English) have a steeper curve. A flat curve represents a slow convergence
while a steep curve represents a rapid convergence as exposure increases. Some
languages have bursts of convergence at specific amounts of exposure. For
example, Norwegian has a steep increase until about 300k words, then remains
flat until about 1 million words before beginning to converge again. In other
words, if we think about the growth curve of similarity, the pattern of
convergence differs across languages. The cause of these differences is a
matter for future research. The basic conclusion here is that all languages
show increasing convergence with increasing exposure. In other words,
register-specific syntactic patterns generalize as the induction algorithm
encounters more training data.
## 6 Experiment 2: The Core Constructions
Although many constructions have varying frequency across different registers,
we would expect that the grammar of each language also has a core set of very
frequent constructions which are shared across all registers. In other words,
register variation itself should not be strong enough to erase the syntactic
generalizations provided by a grammar. The frequency-weighted Jaccard
similarity measure is used to find this core set of constructions: how much do
grammars change with increasing exposure when we focus on the most frequent
constructions? As explained above, the frequency weighting is derived from
independent corpora that represent a different set of registers.
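A frequency-weighted Jaccard can be sketched as the ratio of element-wise minimum to maximum frequencies, so that a differing low-frequency tail contributes little to the score (the weighting scheme and the toy frequencies here are illustrative, not the paper's exact formulation):

```python
def weighted_jaccard(freq_a, freq_b):
    """Weighted Jaccard: sum of min frequencies over sum of max frequencies."""
    keys = set(freq_a) | set(freq_b)
    num = sum(min(freq_a.get(k, 0), freq_b.get(k, 0)) for k in keys)
    den = sum(max(freq_a.get(k, 0), freq_b.get(k, 0)) for k in keys)
    return num / den if den else 0.0

# Core constructions carry most of the weight, so the score is high even
# though the register-specific tails differ.
cc = {"NP-V-NP": 900, "P-NP": 800, "cc-only": 30}
tw = {"NP-V-NP": 850, "P-NP": 820, "tw-only": 50}
print(round(weighted_jaccard(cc, tw), 3))  # 0.917
```

This is why the weighted curves sit much higher than the unweighted ones: frequent shared constructions dominate the numerator and denominator alike.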
The weighted similarity measures are shown in Figure 3 and Figure 4, again
with the y axis showing similarity (higher values mean more overlap between
grammars) and the x axis showing increasing amounts of exposure. Each line
represents the similarity between two register-specific grammars. The overall
level of similarity is significantly higher here. The Germanic languages range
from approximately 0.7 (Dutch) to approximately 0.8 (Swedish). The Romance
languages have the same range, with Spanish the lowest and Portuguese the
highest. This overall increase in similarity shows that, when focusing on the
core constructions, the register-specific grammars converge quickly.
We notice, second, that the growth curve for the frequency-weighted measure
shows very little change after a certain point. For many languages, like
Portuguese and Italian, there is a sharp increase after several hundred
thousand words of exposure. This indicates that the initial grammars (based on
low exposure) are not adequate. After more exposure, however, the stable core
of constructions is acquired. Once that initial burst of acquisition is
complete, there are no significant changes. This is not true for Norwegian
(which continues to show steady growth in convergence) or for Romanian
(which has an initial decline and a much later burst of similarity). But the
overall pattern across all languages is that the core set of constructions
remains stable after a small amount of exposure.
## 7 Conclusions
These experiments show that register-specific grammars converge onto shared
constructions as they are exposed to more training data. This is observed
across 12 languages and three registers. At the same time, each language has a
core set of frequent constructions which is not influenced by register
variation. This core CxG is acquired for each language given a limited amount
of exposure and does not change significantly as exposure increases. These
results are important for describing the interaction between syntactic
generalizations (the core grammars) and syntactic variation (the register-
specific grammars).
# Vikhr: The Family of Open-Source Instruction-Tuned Large Language Models for
Russian
Aleksandr Nikolich
ITMO University
<EMAIL_ADDRESS>
Konstantin Korolev
HSE University
<EMAIL_ADDRESS>
Artem Shelmanov
MBZUAI
<EMAIL_ADDRESS>
###### Abstract
There has been a surge in the development of various Large Language Models
(LLMs). However, text generation for languages other than English often faces
significant challenges, including poor generation quality and reduced
computational performance due to the disproportionate representation of tokens
in the model's vocabulary. In this work, we address these issues and introduce
Vikhr, a new state-of-the-art open-source instruction-tuned LLM designed
specifically for the Russian language. “Vikhr” refers to the name of the
Mistral LLM series and means “strong gust of wind.” Unlike previous efforts
for Russian that utilize computationally inexpensive LoRA adapters on top of
English-oriented models, Vikhr features an adapted tokenizer vocabulary and
undergoes the continued pre-training and instruction tuning of all weights.
This approach not only enhances the model’s performance but also significantly
improves its computational and contextual efficiency. The remarkable
performance of Vikhr across various Russian-language benchmarks can also be
attributed to our efforts in expanding instruction datasets and corpora for
continued pre-training. Vikhr not only sets the new state of the art among
open-source LLMs for Russian, but even outperforms some proprietary closed-
source models on certain benchmarks. The model weights, instruction sets, and
code are publicly available at https://huggingface.co/Vikhrmodels.
## 1 Introduction
Instruction tuning has unlocked vast zero-shot capabilities in Large Language
Models (LLMs) without the need for careful prompt engineering Ouyang et al.
(2022). The most rapid research and development efforts are currently devoted
to English LLMs. There has been a surge in English open-source models: Llama
series Touvron et al. (2023a, b), Mistral series Jiang et al. (2023), Vicuna
series Chiang et al. (2023), etc. This growth is driven by the abundance of
raw training data in English and dedicated efforts to create comprehensive
sets of instruction-output pairs. Despite the fact that LLMs oriented on
English have some multilingual capabilities Zhao et al. (2024) due to small
portions of texts in various languages leaked into their training datasets
Touvron et al. (2023a), their overall performance in these languages remains
relatively low. Although they can usually generate portions of coherent text,
these models struggle with reasoning in non-English languages, lack culture-
specific knowledge, and are highly inefficient in terms of tokenization. This
inefficiency arises from the way byte-pair tokenization algorithms work: they
split infrequent words into multiple tokens. Since multilingual data typically
represents a small portion of the training dataset, non-English words are
often split into many pieces. This leads to more steps during prompt
processing and text generation, shorter effective context windows, and
ultimately lower quality Tikhomirov and Chernyshev (2023); Petrov et al.
(2024). This disparity places non-English languages at a disadvantage.
There is a research direction focused on developing multilingual LLMs that
work well for multiple popular languages: BLOOMz Muennighoff et al. (2023),
mGPT Shliazhko et al. (2022), Bactrian-X Li et al. (2023), PALO Maaz et al.
(2024), Aya101 from CohereAI Üstün et al. (2024), etc. These models are
typically trained on rich multilingual datasets and are less skewed towards
English. However, when aiming to perform well across multiple languages
simultaneously, these models must still share their vocabulary and parameters.
This often hinders their performance for each particular language in
isolation, especially for the popular smaller model sizes, such as 7B and 13B.
The goal of maximizing the LLM performance for a specific language within a
certain number of parameters has led researchers to develop bi-lingual LLMs.
For example, Jais Sengupta et al. (2023) focuses only on English and Arabic. The
inclusion of English data in pre-training alongside Arabic data is motivated
by the significantly larger volume of English data available. This helps LLMs
substantially enhance skills such as logical and common sense reasoning, which
are also applied when generating text in Arabic.
Russian is one of the high-resource languages and is typically represented in
multilingual LLMs. Additionally, there are several proprietary closed-source
LLMs, such as MTS AI, GigaChat, and YandexGPT, that meet or even surpass their
English-oriented flagship competitors when it comes to text processing and
generation in Russian. However, controllable research often requires white-box
access to LLM logits and layer outputs, the ability to modify weights and the
model architecture, and consistent answers for reproducibility, which is often
impossible in closed-source LLMs due to their constant development and
retirement. There are only a few open-source LLMs designed for Russian (such
as Saiga Gusev (2023), ruGPT AI Forever (2022), ruadapt Tikhomirov and
Chernyshev (2023)). Of these, only Saiga and ruadapt are instruction-tuned.
This work aims to build an efficient and effective open-source instruction-
following LLM for Russian, facilitating multilingual natural language
processing research. Building even a small LLM that targets a particular
language from scratch requires a lot of computational resources. Consequently,
many researchers simply fine-tune LoRA adapters Hu et al. (2021) for English-
oriented LLMs on some language-specific data. While this approach can improve
model generation quality, it does not address computational inefficiency
because the tokenizer and model vocabulary remain unchanged. In contrast, our
approach not only fine-tunes a base LLM on Russian language data but also
reconstructs its underlying tokenizer and vocabulary, alongside suggesting an
improved method for continued pre-training. Additionally, we have
significantly expanded the available Russian datasets for instruction tuning.
The developed LLM achieves state-of-the-art results for the Russian language
among other open-source counterparts across a wide range of benchmarks.
Contributions of the paper are the following:
* •
We have constructed Vikhr – a state-of-the-art open-source instruction-
following LLM oriented on the Russian language. In addition to its high
generation quality, Vikhr features an efficient tokenizer that enables rapid
text generation and good context utilization.
* •
We have developed a pipeline for adapting English-oriented LLMs to the Russian
language. The pipeline implements vocabulary adaptation, continued pre-
training with regularization to prevent “catastrophic forgetting”, and
instruction tuning.
* •
We have expanded the datasets for continued pre-training of Russian language
models and previously available instruction datasets.
* •
We conducted an extensive evaluation of several open-source LLMs on evaluation
benchmarks for Russian, demonstrating that Vikhr achieves new state-of-the-art
results.
## 2 Related Work
One of the first notable series of generative LLMs for Russian is ruGPT AI
Forever (2022); Zmitrovich et al. (2023). The authors created several models
trained for the vanilla language modelling task, with sizes of up to 13B. The
models were created from scratch and trained on large Russian corpora. They
are able to handle the linguistic nuances of Russian more effectively than
multilingual models Muennighoff et al. (2022). Since the training data was
mostly in Russian, these models have efficient tokenization, but the lack of
multilingual data (e.g., in English) limits their performance. The ruGPT
models are not instruction tuned.
Gusev (2023) suggests leveraging the reasoning capabilities of existing
English-oriented LLMs and adapting them to the Russian language by training
LoRA adapters. They also create an Alpaca-like set of Russian
instruction-output pairs and perform instruction tuning. They have established
the Saiga model series, which has competitive performance and has been a
reasonable off-the-shelf choice for an open-source Russian LLM over the past
year. However, the tokenizer in these models is not adapted, so they
experience issues with context length and computational efficiency.
Tikhomirov and Chernyshev (2023) address these issues in Saiga. In addition to
tuning the model on Russian data, they also adapt the model tokenizer. They
note that improving tokenization helps to improve both the efficiency and the
performance of the model while reducing memory consumption. However, during
continued pre-training, the authors freeze the model weights except for the LM
heads and token embeddings, which probably results in suboptimal performance.
In this work, we take advantage of pre-trained English-oriented LLMs, adapt
the LLM tokenizer for better computational efficiency, leverage continued pre-
training on vast Russian-language corpora with regularization for preventing
“catastrophic forgetting”, construct a novel extended set of Russian
instruction-output pairs, and perform instruction tuning. The created LLM
adaptation pipeline along with the data for continued pre-training and
instruction tuning enables Vikhr to achieve new state-of-the-art results for
Russian, maintain high performance for English, and demonstrate high
computational efficiency.
## 3 LLM Construction Pipeline
The construction of Vikhr starts from one of English-oriented LLMs. In this
work, we discuss the Vikhr model based on Mistral 7B. The strong logical and
common sense reasoning capabilities, as well as the extensive world knowledge,
present in these LLMs provide an excellent starting point for our model. These
features partially transfer to Vikhr, enhancing its performance in generating
text in Russian. The process of LLM adaptation to Russian starts with
vocabulary adaptation. Then we perform continued pre-training of the LLM on
large Russian datasets to mitigate the vocabulary shift and introduce culture-
specific knowledge. Finally, we perform fine-tuning of Vikhr on a set of
instruction-output pairs in Russian.
### 3.1 Vocabulary Adaptation
Content | Length | Tokenization Result
---|---|---
Original Sentence | 31 | Машинное обучение изменяет мир (“Machine learning is changing the world”)
Mistral Tokenizer | 13 | [‘Ма’, ‘шин’, ‘ное’, ‘об’, ‘у’, ‘чение’, ‘из’, ‘мен’, ‘я’, ‘ет’, ‘ми’, ‘р’ ]
Vikhr Tokenizer | 7 | [‘Ма’, ‘шин’, ‘ное’, ‘обучение’, ‘изменяет’, ‘мир’]
Table 1: Tokenizer comparisons between the original Mistral model and Vikhr
Figure 1: The Vikhr tokenizer efficiency in comparison to tokenizers of other
models.
The big drawback of English-oriented LLMs is that each Russian word is split
into multiple tokens: in a common case, individual symbols in the word become
separate tokens (see the example in Table 1). This slows down generation by
multiple times, reduces the amount of information that can be stored in the
context, and drastically hurts the generation quality.
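The fragmentation effect can be illustrated with a toy greedy longest-match segmenter (real BPE merges pairs iteratively, and the vocabularies below are invented, but the effect on token counts is the same):

```python
def greedy_tokenize(word, vocab):
    """Segment a word greedily, always taking the longest vocabulary match."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(word[i:j])
                i = j
                break
        else:                      # no match: fall back to a single character
            tokens.append(word[i])
            i += 1
    return tokens

english_vocab = {"ма", "шин", "ное"}                # Russian only as fragments
russian_vocab = english_vocab | {"машинное"}        # whole word in vocabulary
print(greedy_tokenize("машинное", english_vocab))   # ['ма', 'шин', 'ное']
print(greedy_tokenize("машинное", russian_vocab))   # ['машинное']
```

Each extra fragment is one more decoding step and one more slot consumed in the context window, which is why an adapted vocabulary pays off in both speed and effective context length.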
To mitigate this problem in Vikhr, we adopt the approach suggested in Cui et
al. (2023); Tikhomirov and Chernyshev (2023), where authors rebuild the
tokenizer using a language-specific corpus. In particular, we trained a
SentencePiece tokenizer Kudo and Richardson (2018) with a 40k vocabulary on
the RuLM dataset Gusev (2023). As can be seen from Figure 1, the resulting
tokenizer for Russian is much more efficient than the tokenizer of the
original English-oriented model.
Data Source | Approx. size (GB) | Tokens (Billion)
---|---|---
Scientific papers | 20 | 2.5
News articles | 4 | 1
Wikipedia | 25 | 4
Habr | 6 | 1
Other sources | 20 | 2.5
Table 2: The statistics of the Russian-language datasets for continued pre-
training.
### 3.2 Continued Pre-training
The new vocabulary also requires new embedding matrices and LM heads. Tokens
that were present in the original vocabulary are initialized with their old
embeddings; new tokens are initialized by averaging the embeddings of their
pieces in the original embedding matrix Hewitt (2021). A similar approach is
applied to the LM heads. Training a model with these modifications requires
much more computational resources than the mainstream technique for adapting
LLMs to new languages based on LoRA adapters Hu et al. (2021), as it involves
continued pre-training of the whole model on much more language-specific data
to mitigate the shift in the vocabulary.
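The initialization of the new embedding matrix can be sketched in NumPy (the tiny vocabularies and the piece function here are toy stand-ins for the real tokenizers):

```python
import numpy as np

def init_new_embeddings(old_emb, old_vocab, new_vocab, pieces_fn):
    """Build the new embedding matrix: shared tokens copy their old vectors,
    new tokens take the mean of their old-tokenizer pieces' embeddings."""
    new_emb = np.zeros((len(new_vocab), old_emb.shape[1]))
    for tok, idx in new_vocab.items():
        if tok in old_vocab:
            new_emb[idx] = old_emb[old_vocab[tok]]
        else:
            rows = [old_vocab[p] for p in pieces_fn(tok) if p in old_vocab]
            if rows:
                new_emb[idx] = old_emb[rows].mean(axis=0)
    return new_emb

old_vocab = {"об": 0, "у": 1, "чение": 2}
old_emb = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new_vocab = {"об": 0, "обучение": 1}
emb = init_new_embeddings(old_emb, old_vocab, new_vocab,
                          lambda t: ["об", "у", "чение"])
print(emb[1])  # mean of the three piece vectors, roughly [0.667, 0.667]
```

The same averaging logic applies to the rows of the LM head.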
The dataset for continued pre-training is constructed from Russian Wikipedia,
news articles, scientific papers, top 100k up-voted posts on Habr, and some
other sources. The statistics of these datasets are presented in Table 2. The
total number of tokens used for this step is 11 billion.
We note that the continued pre-training of an LLM might partially eliminate
the reasoning capabilities present in the original English-oriented model,
which drastically affects model performance. In our preliminary experiments,
continued pre-training could even result in worse performance on Russian
benchmarks compared to the original model. To alleviate this “catastrophic
forgetting”, we regularize the loss with a KL penalty between the probability
distributions of Vikhr and the reference English-oriented original LLM:
$L_{\text{Vikhr}}=L_{\text{CE}}+\mathrm{KL}\left(P_{\text{Vikhr}}\,\|\,P_{\text{Ref}}\right).$ (1)
In practice, we implement this approach using the SLERP interpolation of model
losses Goddard et al. (2024).
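Equation 1 can be sketched per token with plain NumPy (the logits are random stand-ins, and the actual training interpolates the losses via SLERP rather than taking this literal sum):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def regularized_loss(student_logits, ref_logits, target_id):
    """Cross-entropy on the gold token plus KL(P_student || P_ref),
    penalizing drift away from the reference model's distribution."""
    p_s = softmax(student_logits)
    p_r = softmax(ref_logits)
    ce = -np.log(p_s[target_id])
    kl = float(np.sum(p_s * np.log(p_s / p_r)))
    return ce + kl

logits = np.array([2.0, 0.5, -1.0])
# With identical student and reference logits the KL term vanishes,
# leaving only the cross-entropy.
print(regularized_loss(logits, logits, target_id=0))
```

The KL term is zero exactly when the adapted model reproduces the reference distribution, so the penalty only activates when continued pre-training starts to overwrite the original model's behavior.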
Hyperparam. | Value
---|---
LR | $1\times 10^{-3}$
AdamW eps | $1\times 10^{-8}$
Num warmup steps | 10
AdamW betas | $0.99$, $0.95$
Accumulation steps | $128$
Batch size | $3$
Epochs | $1$
Sequence length | $1024$
Table 3: The hyperparameters for continued pre-training.
To speed up the process of continued pre-training, we use an optimized Flash
attention implementation
(https://huggingface.co/docs/optimum/bettertransformer/tutorials/convert).
As an optimization algorithm, we leverage AdamW as it trades some memory
efficiency in favor of robustness to the hyperparameter choice. The
hyperparameters used for continued pre-training are presented in Table 3.
### 3.3 Instruction Tuning
Instruction tuning is an essential step in reaching high zero-shot performance
with LLMs. It also allows for more natural interaction with the model without
complex prompting. Further fine-tuning techniques such as RLHF Ouyang et al.
(2022), which require input from assessors, are also crucial for tasks such as
multi-criteria alignment. However, the most significant performance gains are
still achieved through instruction tuning Jha et al. (2023).
Previously, Gusev (2023) constructed an open-source set of instruction-output
pairs for the Russian language (Saiga). The core Saiga dataset was created
similarly to Alpaca, by querying ChatGPT (gpt-3.5-turbo) Taori et al. (2023).
In this work, we extend this set by translating two English instruction
datasets. First, we translated instructions for the FLAN model Wei et al.
(2021) and generated answers in Russian using ChatGPT. Originally, FLAN
instructions were constructed automatically from annotated datasets using
templates to facilitate the multitask and zero-shot capabilities of seq2seq
models; later, it was shown that this data also helps to improve decoder-only
chat-oriented models. Second, we construct Veles
(https://huggingface.co/datasets/Vikhrmodels/Veles-2.5) by translating the
English OpenHermes Teknium (2023) instruction dataset. We also include,
without translation, Nectar (https://huggingface.co/datasets/berkeley-
nest/Nectar) Zhu et al. (2023), an English instruction dataset, which helps to
keep the performance of Vikhr high for English as well. Since the majority of
the outputs were machine-generated, many are of low quality. To mitigate this
problem, we filtered out low-quality pairs using a reward model trained on
human data. The statistics of the Vikhr instruction datasets are presented
in Table 4.
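The filtering step amounts to thresholding on a reward score; a minimal sketch, where the scoring function is a placeholder for the trained reward model and the threshold is invented:

```python
def filter_pairs(pairs, score_fn, threshold=0.5):
    """Keep only instruction-output pairs the reward model scores highly."""
    return [p for p in pairs if score_fn(p) >= threshold]

# Placeholder scorer: a real reward model returns a learned quality score.
def toy_score(pair):
    instruction, output = pair
    return min(len(output) / 100.0, 1.0)   # longer answers score higher here

pairs = [
    ("Переведи слово 'кот'", "cat"),
    ("Объясни градиентный спуск", "Градиентный спуск - это " + "метод " * 20),
]
print(len(filter_pairs(pairs, toy_score)))  # 1
```

In practice the threshold trades dataset size against quality: a higher cutoff discards more machine-generated noise at the cost of fewer training pairs.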
Instruction Set | Language | # instances
---|---|---
Veles | Russian | 30k
Nectar | English | 50k
Saiga | Russian | 100k
ruFLAN | Russian | 500k
Table 4: The statistics of instruction datasets.
Hyperparam. | Value
---|---
LR | $1\times 10^{-5}$
AdamW, eps | $1\times 10^{-8}$
Num warmup steps | 10
AdamW, betas | $0.99$, $0.95$
Accumulation steps | $64$
Batch size | $3$
Num epochs | $3$
Sequence length | $1024$
Table 5: The hyperparameters for instruction tuning.
LLM | Pre-train on Russian | Training Method | En-MMLU | Ru-MMLU | CheGeKa | Russian SuperGLUE | MERA
---|---|---|---|---|---|---|---
MTS AI Chat 7B (closed-source) $\diamondsuit$ | false | sft+dpo | - | 0.689 | 0.083 | 0.56 | 0.479
GigaChat-7B (closed-source) $\diamondsuit$ | true | sft+dpo | - | 0.67 | 0.451* | 0.71* | 0.479
aya101 | false | pt+sft | 0.41 | 0.37 | 0.005 | 0.36 | 0.320
Mistral-7B-Instruct-v0.2 | false | none | 0.60 | 0.78 | 0.005 | 0.57 | 0.400
rccmsu/ruadapt-mistral-7b-v0.1 | false | pt+sft | 0.61 | 0.72 | 0.005 | 0.64 | 0.421
rugpt13b | true | none | 0.25 | 0.25 | 0.132 | 0.52 | 0.208
saiga-mistral-7b-lora | false | sft | 0.60 | 0.76 | 0.223 | 0.64 | 0.442
saiga-llama3-8b | false | sft | 0.59 | 0.78 | 0.225 | 0.66 | 0.476
Vikhr-7B-instruct_0.2 | true | pt+sft | 0.62 | 0.80 | 0.231 | 0.67 | 0.485
Table 6: Evaluation results for Russian and multilingual LLMs. Pre-train on
Russian means that the model underwent (continued) pre-training on Russian
data. The following abbreviations are used: sft – instruction tuning, pt –
(continued) pre-training; dpo – direct preference optimization.
$\diamondsuit$ The results for GigaChat and MTS AI are taken from the
leaderboards. The best result among open-source models is highlighted in
bold; the second best is underlined. The best result among closed-source
proprietary models is marked with *.
Unlike Saiga, we do not use LoRA adapters; just as in the continued pre-
training phase, we update all model parameters. The hyperparameters for the
instruction tuning phase are presented in Table 5.
### 3.4 Hardware
Vikhr was trained on eight NVIDIA A100 80GB GPUs. We spent approximately 1,000
GPU hours on the continued pre-training phase and 60 hours on instruction
tuning.
## 4 Experiments
### 4.1 Experimental Setup
#### Benchmarks.
The evaluation was performed on MMLU Hendrycks et al. (2021), Ru-MMLU
(https://github.com/NLP-Core-Team/mmlu_ru), CheGeKa, Russian SuperGLUE
Shavrina et al. (2020), and MERA Fenogenova et al. (2024). MMLU (En-MMLU)
evaluates LLMs across 57 subjects with multiple-choice questions, assessing a
model’s broad knowledge and reasoning abilities. We use this benchmark to
verify that the model retains bi-lingual capabilities. In the results, we
report the accuracy@1 score. RuMMLU is a translation of MMLU with GPT-3.5 to
Russian. Just as for MMLU, we report the accuracy@1 score. CheGeKa is based on
questions from the game “What? Where? When?”. This benchmark contains
challenging open-ended questions, requiring logical reasoning and world
knowledge. It includes 29,376 training and 416 test instances. The reported
evaluation metric is the F1 score. Russian SuperGLUE is a benchmark similar to
well-known English SuperGLUE Wang et al. (2019). It tests LLMs on various
natural language understanding tasks like reading comprehension and textual
entailment. The metric reported in the results is accuracy@1. The MERA
benchmark encompasses 21 evaluation tasks for generative LLMs in 11 skill
domains. Note that among other tasks MERA also includes CheGeKa, RuMMLU, and
one of the subtasks of SuperGLUE (RWSD). The reported evaluation metric is the
total score, which is the average of scores across all non-diagnostic tasks.
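As an aside, the accuracy@1 metric reported for MMLU, RuMMLU, and Russian SuperGLUE can be sketched as follows; the predictions and gold labels below are hypothetical, invented purely for illustration:

```python
# A toy sketch of accuracy@1 scoring for a multiple-choice benchmark such
# as MMLU. Predictions and gold labels are hypothetical example data.

def accuracy_at_1(predictions, gold):
    """Fraction of questions whose top-1 predicted choice is correct."""
    assert len(predictions) == len(gold)
    return sum(p == g for p, g in zip(predictions, gold)) / len(gold)

predictions = ["A", "C", "B", "D", "A"]
gold        = ["A", "C", "D", "D", "B"]
print(accuracy_at_1(predictions, gold))  # → 0.6
```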
#### Baselines.
We compare Vikhr to six open-source and two proprietary closed-source
competitors of similar size. Open-source models: aya101 – a massively
multilingual LLM from CohereAI that follows instructions in 101
languages666https://huggingface.co/CohereForAI/aya-101, it shows state-of-the-
art results among massively multilingual LLMs; Mistral-7B-0.2-instruct – an
English-oriented LLM that was used as the base model for Vikhr;
rccmsu/ruadapt_mistral_saiga_7b_v0.1 – a Russian-oriented LLM that was
constructed from the Mistral model using similar adaptations of the tokenizer,
token embeddings, and the LM head Tikhomirov and Chernyshev (2023); saiga-
mistral-7b-lora and saiga-llama3-8b – two versions of the Saiga models based
on English-oriented LLMs and obtained by fine-tuning LoRA adapters on the
Saiga instruction dataset777https://huggingface.co/collections/IlyaGusev.
Closed-source proprietary models for Russian: MTS AI
Chat888https://huggingface.co/MTSAIR/multi_verse_model and GigaChat-7b. The
access to GigaChat weights is closed, so the reported results are taken from
the leaderboards999https://mera.a-ai.ru/ru/submits/10257. The results of MTS
AI Chat are also taken from the
leaderboard101010https://mera.a-ai.ru/ru/submits/10290.
### 4.2 Results
The evaluation results are presented in Table 6. As we can see, Vikhr
outperforms all open-source models, including the ones that were built
specifically for Russian. It also slightly outperforms its parent model
Mistral on the En-MMLU benchmark, which might be the result of longer pre-training.
The second place, with close scores across all four Russian-language
benchmarks, is taken by the Saiga model based on the recently released Llama-3.
Its high scores likely reflect the transfer of the outstanding performance of
the underlying Llama-3. Since Saiga based on Llama-3 outperforms Saiga based on
Mistral, we expect that applying our adaptation pipeline to Llama-3 would
further improve the state of the art.
We note that the original Mistral-7B-0.2-instruct, despite being an English-
oriented model, demonstrates competitive performance on 3 out of 4 Russian
benchmarks. This indicates that such models could be viable alternatives. The
only dataset where its performance is very low is CheGeKa, which involves
open-ended question answering. This may be due to a lack of culture-specific
knowledge, as the English-oriented model has not seen many Russian texts. Note
that the MTS AI Chat also shows very low results on CheGeKa, which might also
indicate a lack of culture-specific knowledge.
The proprietary model GigaChat substantially outperforms Vikhr on CheGeKa and
notably on Russian SuperGLUE. We assume this is due to the use of much larger
Russian datasets for pre-training. However, surprisingly, it falls behind
Vikhr on Ru-MMLU. On all benchmarks, Vikhr outperforms the proprietary
competitor from MTS AI.
## 5 Conclusion
We have presented Vikhr – a new state-of-the-art open-source instruction-
following LLM oriented on the Russian language. To create Vikhr, we developed
a comprehensive pipeline for adapting English-oriented LLMs to Russian. The
pipeline includes the adaptation of the tokenizer vocabulary, continued pre-
training of the entire model, and instruction tuning. We have also constructed
a new dataset for instruction tuning by expanding the Saiga dataset with
automatically translated and cleaned English instruction datasets. Our
extensive work enabled Vikhr to outperform the known baselines, while
maintaining computational efficiency.
We hope that the published models will foster research on LLMs and enhance
the diversity of languages incorporated into research agendas.
## Limitations
We do not introduce additional restrictions to the usage of our models.
However, the users must comply with the license of the base model and
instruction datasets.
We do not implement RLHF / DPO fine-tuning of Vikhr due to the lack of
resources for human annotation. We expect further performance improvements
from these techniques.
We do not introduce additional instruction-output pairs to facilitate LLM
alignment. However, we note that the majority of the data for supervised fine-tuning
of Vikhr is obtained from the ChatGPT model series, so our model
partially inherits its alignment.
## References
* AI Forever (2022) AI Forever. 2022. ru-gpts: Generative pre-trained transformer models for russian. https://github.com/ai-forever/ru-gpts.
* Chiang et al. (2023) Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality.
* Cui et al. (2023) Yiming Cui, Ziqing Yang, and Xin Yao. 2023. Efficient and effective text encoding for chinese llama and alpaca. _arXiv preprint arXiv:2304.08177_.
* Fenogenova et al. (2024) Alena Fenogenova, Artem Chervyakov, Nikita Martynov, Anastasia Kozlova, Maria Tikhonova, Albina Akhmetgareeva, Anton Emelyanov, Denis Shevelev, Pavel Lebedev, Leonid Sinev, et al. 2024. Mera: A comprehensive llm evaluation in russian. _arXiv preprint arXiv:2401.04531_.
* Goddard et al. (2024) Charles Goddard, Shamane Siriwardhana, Malikeh Ehghaghi, Luke Meyers, Vlad Karpukhin, Brian Benedict, Mark McQuade, and Jacob Solawetz. 2024. Arcee’s mergekit: A toolkit for merging large language models. _arXiv preprint arXiv:2403.13257_.
* Gusev (2023) Ilya Gusev. 2023. rulm: A toolkit for training neural language models. https://github.com/IlyaGusev/rulm.
* Hendrycks et al. (2021) Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. _Proceedings of the International Conference on Learning Representations (ICLR)_.
* Hewitt (2021) John Hewitt. 2021. Initializing new word embeddings for pretrained language models.
* Hu et al. (2021) Edward J Hu, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen, et al. 2021. Lora: Low-rank adaptation of large language models. In _International Conference on Learning Representations_.
* Jha et al. (2023) Aditi Jha, Sam Havens, Jeremy Dohmann, Alex Trott, and Jacob Portes. 2023. Limit: Less is more for instruction tuning across evaluation paradigms. _arXiv preprint arXiv:2311.13133_.
* Jiang et al. (2023) Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. 2023. Mistral 7b. _arXiv preprint arXiv:2310.06825_.
* Kudo and Richardson (2018) Taku Kudo and John Richardson. 2018. Sentencepiece: A simple and language independent subword tokenizer and detokenizer for neural text processing. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 66–71.
* Li et al. (2023) Haonan Li, Fajri Koto, Minghao Wu, Alham Fikri Aji, and Timothy Baldwin. 2023. Bactrian-x: A multilingual replicable instruction-following model with low-rank adaptation. _arXiv preprint arXiv:2305.15011_.
* Maaz et al. (2024) Muhammad Maaz, Hanoona Rasheed, Abdelrahman Shaker, Salman Khan, Hisham Cholakal, Rao M Anwer, Tim Baldwin, Michael Felsberg, and Fahad S Khan. 2024. Palo: A polyglot large multimodal model for 5b people. _arXiv preprint arXiv:2402.14818_.
* Muennighoff et al. (2022) Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng-Xin Yong, Hailey Schoelkopf, et al. 2022. Crosslingual generalization through multitask finetuning. _arXiv preprint arXiv:2211.01786_.
* Muennighoff et al. (2023) Niklas Muennighoff, Thomas Wang, Lintang Sutawika, Adam Roberts, Stella Biderman, Teven Le Scao, M Saiful Bari, Sheng Shen, Zheng Xin Yong, Hailey Schoelkopf, et al. 2023. Crosslingual generalization through multitask finetuning. In _The 61st Annual Meeting Of The Association For Computational Linguistics_.
* Ouyang et al. (2022) Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. _Advances in neural information processing systems_ , 35:27730–27744.
* Petrov et al. (2024) Aleksandar Petrov, Emanuele La Malfa, Philip Torr, and Adel Bibi. 2024. Language model tokenizers introduce unfairness between languages. _Advances in Neural Information Processing Systems_ , 36.
* Sengupta et al. (2023) Neha Sengupta, Sunil Kumar Sahu, Bokang Jia, Satheesh Katipomu, Haonan Li, Fajri Koto, Osama Mohammed Afzal, Samta Kamboj, Onkar Pandit, Rahul Pal, et al. 2023. Jais and jais-chat: Arabic-centric foundation and instruction-tuned open generative large language models. _arXiv preprint arXiv:2308.16149_.
* Shavrina et al. (2020) Tatiana Shavrina, Alena Fenogenova, Emelyanov Anton, Denis Shevelev, Ekaterina Artemova, Valentin Malykh, Vladislav Mikhailov, Maria Tikhonova, Andrey Chertok, and Andrey Evlampiev. 2020. Russiansuperglue: A russian language understanding evaluation benchmark. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 4717–4726.
* Shliazhko et al. (2022) Oleh Shliazhko, Alena Fenogenova, Maria Tikhonova, Vladislav Mikhailov, Anastasia Kozlova, and Tatiana Shavrina. 2022. mgpt: Few-shot learners go multilingual. _arXiv preprint arXiv:2204.07580_.
* Taori et al. (2023) Rohan Taori, Ishaan Shum, Pieter Abbeel, Carlos Guestrin, and Percy Liang. 2023. Stanford alpaca: An instruction-following language model. _GitHub_.
* Teknium (2023) Teknium. 2023. Openhermes 2.5: An open dataset of synthetic data for generalist llm assistants.
* Tikhomirov and Chernyshev (2023) Mikhail Tikhomirov and Daniil Chernyshev. 2023. Impact of tokenization on llama russian adaptation. _arXiv preprint arXiv:2312.02598_.
* Touvron et al. (2023a) Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023a. Llama: Open and efficient foundation language models. _arXiv preprint arXiv:2302.13971_.
* Touvron et al. (2023b) Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_.
* Wang et al. (2019) Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. Superglue: A stickier benchmark for general-purpose language understanding systems. _Advances in neural information processing systems_ , 32.
* Wei et al. (2021) Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. In _International Conference on Learning Representations_.
* Zhao et al. (2024) Jun Zhao, Zhihao Zhang, Qi Zhang, Tao Gui, and Xuanjing Huang. 2024. Llama beyond english: An empirical study on language capability transfer. _arXiv preprint arXiv:2401.01055_.
* Zhu et al. (2023) Banghua Zhu, Evan Frick, Tianhao Wu, Hanlin Zhu, and Jiantao Jiao. 2023. Starling-7b: Improving llm helpfulness & harmlessness with rlaif.
* Zmitrovich et al. (2023) Dmitry Zmitrovich, Alexander Abramov, Andrey Kalmykov, Maria Tikhonova, Ekaterina Taktasheva, Danil Astafurov, Mark Baushenko, Artem Snegirev, Tatiana Shavrina, Sergey Markov, et al. 2023. A family of pretrained transformer language models for russian. _arXiv preprint arXiv:2309.10931_.
* Üstün et al. (2024) Ahmet Üstün, Viraat Aryabumi, Zheng-Xin Yong, Wei-Yin Ko, Daniel D’souza, Gbemileke Onilude, Neel Bhandari, Shivalika Singh, Hui-Lee Ooi, Amr Kayid, Freddie Vargus, Phil Blunsom, Shayne Longpre, Niklas Muennighoff, Marzieh Fadaee, Julia Kreutzer, and Sara Hooker. 2024. Aya model: An instruction finetuned open-access multilingual language model. _arXiv preprint arXiv:2402.07827_.
# Quantum information theory and free semialgebraic geometry: one wonderland through two looking glasses

Gemma De las Cuevas (Institut für Theoretische Physik, Technikerstr. 21a,
A6020 Innsbruck, Austria) and Tim Netzer (Institut für Mathematik,
Technikerstr. 13, A6020 Innsbruck, Austria)
###### Abstract.
We illustrate how quantum information theory and free (i.e. noncommutative)
semialgebraic geometry often study similar objects from different
perspectives. We give examples in the context of positivity and separability,
quantum magic squares, quantum correlations in non-local games, and positivity
in tensor networks, and we show the benefits of combining the two
perspectives. This paper is an invitation to consider the intersection of the
two fields, and should be accessible for researchers from either field.
Live free or die.
Motto of New Hampshire
## 1\. Introduction
The ties between physics, computer science and mathematics are historically
strong and multidimensional. It has often happened that mathematical
inventions which were mere products of imagination (and thus thought to be
useless for applications) have later played a crucial role in physics or
computer science. A superb example is that of imaginary numbers and their use
in complex Hilbert spaces in quantum mechanics.111Who would have thought that
the square root of $-1$ would have any physical relevance? See [Re21]. Other
examples include number theory and its use in cryptography, or Riemannian
geometry and its role in General Relativity. It is also true that physicists
tend to be aware only of the mathematical tools useful to them — so there are
many branches of mathematics which have not found an outlet in physics.
The relevance of a statement depends on the glass through which we look at it.
There are statements which are mathematically unimpressive but physically very
impressive. A good example is entanglement. Mathematically, the statement that
the positivity cone of the tensor product space is larger than the tensor
product of the local cones is interesting, but not particularly wild or
surprising. Yet, the physical existence of entangled particles is, from our
perspective, truly remarkable. In other words, while the mathematics is easy
to understand, the physics is mind-blowing. This is particularly true
regarding Bell’s Theorem: while it is mathematically not especially deep, we
regard the experimental violation of Bell inequalities [Gi15, He15, Sh15] as
very deep indeed. Another example is the no-cloning theorem — it is
mathematically trivial, yet it has very far-reaching physical consequences. On
the other hand, there are many mathematically impressive statements which are
— so far — physically irrelevant. Finally, there are statements which can be
both mathematically deep and central for quantum information theory, such as
Stinespring’s Dilation Theorem.
Figure 1. Free semialgebraic geometry and quantum information often look at
similar landscapes from different perspectives, as in the fantastic world of
this woodcut by M. C. Escher.
The goal of this paper is to illustrate how two relatively new disciplines in
physics and mathematics — quantum information theory and free semialgebraic
geometry — have a lot in common (Fig. 1). ‘Free’ means noncommutative, because
it is _free of the commutation relation_. So free semialgebraic geometry
studies noncommutative versions of semialgebraic sets. On the other hand,
quantum information theory is (mathematically) a noncommutative generalisation
of classical information theory. So, intuitively, ‘free’ is naturally linked
to ‘quantum’. Moreover, in both fields, positivity plays a very important
role. Semialgebraic geometry examines questions arising from nonnegativity,
like polynomial inequalities. In quantum information theory, quantum states
are represented by positive semidefinite matrices. Positivity also gives rise
to convexity, which is central in both fields, as we will see.
So the two disciplines often study the same mathematical objects from
different perspectives. As a consequence, they often ask different questions.
For example, in quantum information theory, given an element of a tensor
product space, one wants to know whether it is positive semidefinite, and how
this can be efficiently represented and manipulated. In free semialgebraic
geometry, the attention is focused on the geometry of the set of all such
elements (see Table 1).
Table 1. Examples of the different approaches of quantum information theory and free semialgebraic geometry in studying essentially the same mathematical objects. The various notions will be explained throughout the paper.

| Quantum information theory | Free semialgebraic geometry |
|---|---|
| _Emphasis on the element_ | _Emphasis on the set_ |
| Given $\rho=\sum_{\alpha}A_{\alpha}\otimes B_{\alpha}$, is it positive semidefinite (psd)? | Given $\\{A_{\alpha}\\}$, characterise the set of $\\{B_{\alpha}\\}$ such that $\sum_{\alpha}A_{\alpha}\otimes B_{\alpha}$ is psd. |
| Block positive matrices / Separable psd matrices. | Largest / smallest operator system over the psd cone. |
| Every POVM can be dilated to a PVM. | The free convex hull of the set of PVMs is the set of POVMs. |
| Can a correlation matrix $p$ be realised by a quantum strategy? | Is $p$ in the free convex hull of the free independence model? |
We believe that much is to be learnt by bridging the gap between the two
communities — in knowledge, notation and perspective. In this paper we hope to
illustrate this point. This paper is thus meant to be accessible to both
physicists and mathematicians.
Obviously we are not the first or the only ones to notice these similarities.
In this paper, however, we will mainly concentrate on what we have learnt from
collaborating in recent years, and we will review a few other works. The
selection of other works is not comprehensive and reflects our partial
knowledge. We also remark that quantum information theory and free
semialgebraic geometry are not the only fields studying positivity in tensor
product spaces. _Compositional distributional semantics_, for example,
represents the meaning of words by positive semidefinite matrices, and the
composition of meanings is thus given by positivity preserving maps — see e.g.
[De20c, Co20].
This article is organised as follows. We will first explain basic concepts in
quantum information theory and free semialgebraic geometry (Section 2) — the
reader familiar with them can skip the corresponding section. Then we will
explain how they are related (Section 3), and we will end with some closing
words (Section 4).
## 2\. Some basic concepts
Here we present some basic concepts in quantum information theory (Section
2.1) and free semialgebraic geometry (Section 2.2).
Throughout the paper we denote the set of $r\times s$ complex matrices by
${\rm Mat}_{r,s}$, the set of $r\times r$ complex matrices by ${\rm Mat}_{r}$,
and we use the identification of ${\rm Mat}_{r}\otimes{\rm Mat}_{s}$ with
${\rm Mat}_{rs}$. We will also often use the real subspace of ${\rm Mat}_{r}$
containing the Hermitian elements, called ${\rm Her}_{r}$, and use that ${\rm
Her}_{r}\otimes{\rm Her}_{s}$ is identified with ${\rm Her}_{rs}$. The
$d$-fold cartesian product of ${\rm Her}_{r}$ is denoted ${\rm Her}_{r}^{d}$.
### 2.1. Basic concepts from quantum information theory
Here we briefly introduce some concepts from quantum information theory. We
focus on finite-dimensional quantum systems of which we do not assume
perfect knowledge (in the language of quantum information, these are called
_mixed states_). See, e.g., [wi, Ni00] for a more general overview.
The state of a quantum system is modelled by a normalized positive
semidefinite matrix, i.e. a
$\rho\in{\rm Mat}_{d}\mbox{ with }\rho\succcurlyeq 0\mbox{ and
}\mathrm{tr}(\rho)=1,$
where $\succcurlyeq 0$ denotes positive semidefinite (psd), i.e. Hermitian
with nonnegative eigenvalues, and the trace, $\mathrm{tr}$, is the sum of the
diagonal elements. We reserve the symbol $\geqslant 0$ for nonnegative
numbers. A measurement on the system is modelled by a _positive operator
valued measure_ (POVM), i.e. a set of psd matrices $\tau_{i}$ that sum to the
identity:
$\tau_{1},\ldots,\tau_{n}\in{\rm Mat}_{d}\mbox{ with all }\tau_{i}\succcurlyeq
0\mbox{ and }\sum_{i}\tau_{i}=I_{d}.$
The probability to obtain outcome $i$ on state $\rho$ is given by
(1) $\displaystyle\mathrm{tr}(\rho\tau_{i}).$
Note that these probabilities sum to 1 because of the normalisation condition
on the $\tau_{i}$’s and $\rho$.
When the system is composed of several subsystems, the global state space is
modelled as a tensor product of the local spaces,
(2) $\displaystyle{\rm Mat}_{d}={\rm Mat}_{d_{1}}\otimes\cdots\otimes{\rm
Mat}_{d_{n}},$
where $d=d_{1}\cdots d_{n}$.
A state $\rho$ is called separable (w.r.t. a given tensor product structure)
if it can be written as
$\rho=\sum_{i=1}^{r}\rho_{i}^{(1)}\otimes\cdots\otimes\rho_{i}^{(n)}\quad\mbox{with
all }\rho_{i}^{(j)}\succcurlyeq 0.$
This is obviously a stronger requirement than $\rho$ being psd — not every
$\rho$ is separable. Separable states are not too interesting from a quantum
information perspective: non-separable states are called entangled, and
entanglement is necessary for many quantum information tasks.
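The definition of separability is easy to illustrate numerically. The following sketch, a construction of our own, builds a separable state on ${\rm Mat}_{2}\otimes{\rm Mat}_{3}$ and confirms that it is psd with unit trace:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_psd(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return A @ A.conj().T

# A separable state on Mat_2 ⊗ Mat_3: a sum of tensor products of local
# psd factors, normalised to unit trace.
rho = sum(np.kron(random_psd(2), random_psd(3)) for _ in range(4))
rho = rho / np.trace(rho).real

# Separability implies psd, since each term kron(P, Q) with P, Q psd is psd.
print(np.all(np.linalg.eigvalsh(rho) >= -1e-10))  # → True
```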
A quantum channel is the most general transformation on quantum states.
Mathematically, it is modelled by a linear trace-preserving map
$T\colon{\rm Mat}_{d}\to{\rm Mat}_{s}$
that is completely positive. Complete positivity means that the maps
${\rm id}_{n}\otimes T\colon{\rm Mat}_{nd}\to{\rm Mat}_{ns}$
are positive (i.e. map psd matrices to psd matrices) for all $n$, where ${\rm
id}_{n}$ is the identity map on ${\rm Mat}_{n}$.
Any linear map $T\colon{\rm Mat}_{d}\to{\rm Mat}_{s}$ is uniquely determined
by its Choi matrix
$C_{T}:=\sum_{i,j=1}^{d}E_{ij}\otimes
T\mathopen{}\mathclose{{}\left(E_{ij}}\right)\in{\rm Mat}_{ds},$
where $E_{ij}$ is the matrix with a 1 in the $(i,j)$-position and 0
elsewhere.222If this is expressed in the so-called computational basis (which
is one specific orthonormal basis), this is written $E_{ij}=|i\rangle\langle
j|$ in quantum information. It is a basic fact that $T$ is completely positive
if and only if $C_{T}$ is psd (see for example [Pau02, Wo11]). Moreover, a
completely positive map $T$ is _entanglement-breaking_ [Ho03] if and only if
$C_{T}$ is a separable matrix, and $T$ is a positive map if and only if
$C_{T}$ is _block positive_ , i.e.
$\mathrm{tr}((\sigma\otimes\tau)C_{T})\geqslant 0\quad\textrm{for all
}\sigma\succcurlyeq 0,\tau\succcurlyeq 0.$
Note that this is weaker than $C_{T}$ being psd, in which case
$\mathrm{tr}(\chi C_{T})\geqslant 0$ for all $\chi\succcurlyeq 0$ (see Table
2). We also remark that this link between positivity notions of linear maps
and their Choi matrices does not involve the normalisation conditions on the
maps (e.g. preserving the trace) or the matrices (e.g. having a given trace).
| Linear map | Element in tensor product space |
|---|---|
| $T:{\rm Mat}_{d}\to{\rm Mat}_{s}$ | $\rho\in{\rm Mat}_{d}\otimes{\rm Mat}_{s}$ |
| Entanglement-breaking map | Separable matrix |
| Completely positive map | Positive semidefinite matrix |
| Positive map | Block positive matrix |

Table 2. Correspondence between notions of positivity for linear maps and
their Choi matrices. Entanglement-breaking maps are a subset of completely
positive maps, which are a subset of positive maps. The same is true for the
right column, of course.
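The Choi matrix and the psd criterion for complete positivity can be checked numerically. In the following sketch of our own, the transpose map (the standard example of a positive but not completely positive map) has a Choi matrix with a negative eigenvalue, while the identity map's Choi matrix is psd:

```python
import numpy as np

d = 2

def choi(T, d):
    """Choi matrix C_T = sum_ij E_ij ⊗ T(E_ij) of a linear map T on Mat_d."""
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            E = np.zeros((d, d), dtype=complex)
            E[i, j] = 1.0
            C += np.kron(E, T(E))
    return C

# Transpose map: positive but not completely positive. Its Choi matrix is
# the (unnormalised) swap operator, which has a negative eigenvalue.
C_transpose = choi(lambda X: X.T, d)
print(np.linalg.eigvalsh(C_transpose))  # smallest eigenvalue is -1

# Identity map: completely positive. Its Choi matrix (an unnormalised
# maximally entangled state) is psd.
C_id = choi(lambda X: X, d)
print(np.all(np.linalg.eigvalsh(C_id) >= -1e-12))  # → True
```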
### 2.2. Basic concepts from (free) semialgebraic geometry
We now introduce some basic concepts from free (i.e. noncommutative)
semialgebraic geometry. For a slightly more detailed introduction, see [Ne19]
and references therein.
Our setup starts by considering a $\mathbb{C}$-vector space $V$ with an
involution $*$. The two relevant examples are, first, the case where $V$ is
the space of matrices and $*$ is transposition combined with complex conjugation —
denoted $\dagger$ in quantum information — and, second, $\mathbb{C}^{d}$ with
entrywise complex conjugation.
The fixed points of the involution are called self-adjoint, or Hermitian,
elements. We denote the set of Hermitian elements of $V$ by $V_{\rm her}$.
This is an $\mathbb{R}$-subspace of $V$, in which the real things
happen.333Because it is where positivity and other interesting phenomena
happen.
In the _free_ setup, we do not only consider $V$ but also higher levels
thereof. Namely, for any $s\in\mathbb{N}$, we consider the space of $s\times
s$-matrices with entries over $V$,
${\rm Mat}_{s}(V)=V\otimes{\rm Mat}_{s}.$
Recall that ${\rm Mat}_{s}$ refers to $s\times s$-matrices with entries over
$\mathbb{C}$. ${\rm Mat}_{s}(V)$ is a $\mathbb{C}$-vector space with a
‘natural’ involution, consisting of transposing and applying $*$ entrywise.
This thus promotes $V$ and $*$ to an entire hierarchy of levels, namely ${\rm
Mat}_{s}(V)$ for all $s\in\mathbb{N}$ with the just described involution.
We are now ready to define the most general notion of a free real set. This is
nothing but a collection
$\mathcal{C}=\mathopen{}\mathclose{{}\left(C_{s}}\right)_{s\in\mathbb{N}}$
where each $C_{s}\subseteq{\rm Mat}_{s}(V)_{\rm her}=:{\rm Her}_{s}(V)$. We
call $C_{s}$ the set at level $s$.
To make things more interesting, one often imposes conditions that connect the
levels. One important example is free convexity, which is defined as follows.
For any
$\tau_{i}\in C_{t_{i}}\quad\mbox{with}\quad i=1,\ldots,n,$
and
(3) $\displaystyle v_{i}\in{\rm
Mat}_{t_{i},s}\quad\mbox{with}\quad\sum_{i=1}^{n}v_{i}^{*}v_{i}=I_{s},$
it holds that
(4) $\displaystyle\sum_{i=1}^{n}v_{i}^{*}\tau_{i}v_{i}\in C_{s}.$
Note that in (4) matrices over the complex numbers (namely $v_{i}$) are
multiplied with matrices over $V$ (namely $\tau_{i}$). This is defined as
matrix multiplication in the usual way for $v_{i}^{*}\tau_{i}v_{i}$, and using
that elements of $V$ can be multiplied with complex numbers and added. For
example, for $n=1$, $t=2$, $s=1$, $\tau=(\mu_{i,j})$ with $\mu_{i,j}\in V$ for
$i,j=1,2$, and $v=(\lambda_{1},\lambda_{2})^{t}$ with
$\lambda_{i}\in\mathbb{C}$, we have
$v^{*}\tau v=\sum_{i,j}\bar{\lambda}_{i}\lambda_{j}\mu_{i,j}.$
Note that if free convexity holds, then every $C_{s}$ is a convex set in the
real vector space ${\rm Her}_{s}(V)$. But free convexity is generally a
stronger condition than ‘classical’ convexity, as we will see.
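Free convexity can be illustrated with the psd cones themselves (this is the free positive orthant introduced later in the paper): a free convex combination of psd matrices of different sizes is again psd. The following NumPy sketch, a construction of our own, verifies Eq. (4) in that case:

```python
import numpy as np

rng = np.random.default_rng(2)

def random_psd(d):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return A @ A.conj().T

# Psd matrices tau_1 (2x2) and tau_2 (3x3), living at levels 2 and 3.
taus = [random_psd(2), random_psd(3)]

# Matrices v_1 (2x2) and v_2 (3x2) with v_1† v_1 + v_2† v_2 = I_2, built by
# stacking a random 5x2 matrix V and normalising it via V (V† V)^{-1/2}.
s = 2
V = rng.normal(size=(5, s)) + 1j * rng.normal(size=(5, s))
w, U = np.linalg.eigh(V.conj().T @ V)
V = V @ U @ np.diag(1.0 / np.sqrt(w)) @ U.conj().T
v = [V[:2, :], V[2:, :]]

# The free convex combination sum_i v_i† tau_i v_i lands at level 2
# and is again psd.
combo = sum(vi.conj().T @ ti @ vi for vi, ti in zip(v, taus))
print(np.all(np.linalg.eigvalsh(combo) >= -1e-10))  # → True
```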
In addition, a conic version of free convexity is obtained when giving up the
normalization condition on the $v_{i}$, i.e. the right hand side of Eq. (3).
In this case, $\mathcal{C}$ is called an abstract operator system (usually
with the additional assumption that every $C_{s}$ is a proper convex cone).
Now, free semialgebraic sets are free sets arising from polynomial
inequalities. This will be particularly important for the connection we hope
to illustrate in this paper. In order to define these, take $V=\mathbb{C}^{d}$
with the involution provided by entrywise conjugation, so that $V_{\rm
her}=\mathbb{R}^{d}$. Let $z_{1},\ldots,z_{d}$ denote free variables, that is,
noncommuting variables. We can imagine each $z_{i}$ to represent a matrix of
_arbitrary size_ — later we will substitute $z_{i}$ by a matrix of a given
size, and this size will correspond to the level of the free semialgebraic
set.
Now let $\omega$ be a finite _word_ in the letters $z_{1},\ldots,z_{d}$, that
is, an ordered tuple of these letters. For example, $\omega$ could be
$z_{1}z_{1}z_{4}$ or $z_{4}z_{5}z_{4}$. In addition, let
$\sigma_{\omega}\in{\rm Mat}_{m}$ be a matrix (of some fixed size $m$) that
specifies the coefficients of word $\omega$; this is called the coefficient
matrix. A matrix polynomial in the free variables $z_{1},\ldots,z_{d}$ is an
expression
$p=\sum_{\omega}\sigma_{\omega}\otimes\omega,$
where the sum is over all finite words $\omega$, and where only finitely many
coefficient matrices $\sigma_{\omega}$ are nonzero.
We denote the reverse of word $\omega$ by $\omega^{*}$. For example, if
$\omega=z_{1}z_{2}z_{3}$ then $\omega^{*}=z_{3}z_{2}z_{1}$. In addition,
$(\sigma_{\omega})^{*}$ is obtained by transposition and complex conjugation
of $\sigma_{\omega}.$ If the coefficient matrices fulfill
(5) $\displaystyle(\sigma_{\omega})^{*}=\sigma_{\omega^{*}},$
then for any tuple of Hermitian matrices $(\tau_{1},\ldots,\tau_{d})\in{\rm
Her}_{s}^{d}$ we have that
$p(\tau_{1},\ldots,\tau_{d})=\sum_{\omega}\sigma_{\omega}\otimes\omega(\tau_{1},\ldots,\tau_{d})\in{\rm
Her}_{ms}.$
That is, $p$ evaluated at the Hermitian matrices $\tau_{1},\ldots,\tau_{d}$ is
a Hermitian matrix itself.
So, for a given matrix polynomial $p$ satisfying condition (5), we define the
free semialgebraic set at level $s$ as the set of Hermitian matrices of size
$s$ such that $p$ evaluated at them is psd:
$C_{s}(p):=\mathopen{}\mathclose{{}\left\\{(\tau_{1},\ldots,\tau_{d})\in{\rm
Her}_{s}^{d}\mid p(\tau_{1},\ldots,\tau_{d})\succcurlyeq 0}\right\\}.$
Finally we define the free semialgebraic set as the collection of all such
levels:
$\mathcal{C}(p):=\mathopen{}\mathclose{{}\left(C_{s}(p)}\right)_{s\in\mathbb{N}}.$
For example, let $E_{ii}$ denote the matrix with a 1 in entry $(i,i)$ and 0
elsewhere. Then the matrix polynomial
$p=\sum_{i=1}^{d}E_{ii}\otimes z_{i}$
defines the following free semialgebraic set at level $s$
$C_{s}(p)=\mathopen{}\mathclose{{}\left\\{(\tau_{1},\ldots,\tau_{d})\in{\rm
Her}_{s}^{d}\mid
E_{11}\otimes\tau_{1}+\ldots+E_{dd}\otimes\tau_{d}\succcurlyeq 0}\right\\}.$
The positivity condition is equivalent to $\tau_{i}\succcurlyeq 0$ for all
$i$, which gives this free set the name free positive orthant. Note that for
$s=1$, the ‘free’ variables become real numbers,
$C_{1}(p)=\mathopen{}\mathclose{{}\left\\{(a_{1},\ldots,a_{d})\in\mathbb{R}^{d}\mid
a_{i}\geqslant 0\>\>\forall i}\right\\},$
which defines the positive orthant in $d$ dimensions.
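A minimal sketch of our own, evaluating this matrix polynomial at Hermitian matrices, confirms that membership at level $s$ amounts to each $\tau_{i}$ being psd:

```python
import numpy as np

def eval_free_orthant_poly(taus):
    """Evaluate p = sum_i E_ii ⊗ z_i at Hermitian matrices tau_1, ..., tau_d."""
    d, s = len(taus), taus[0].shape[0]
    P = np.zeros((d * s, d * s), dtype=complex)
    for i, tau in enumerate(taus):
        E = np.zeros((d, d))
        E[i, i] = 1.0
        P += np.kron(E, tau)
    return P  # block-diagonal with blocks tau_1, ..., tau_d

# p(tau_1, tau_2) ⪰ 0 iff each tau_i ⪰ 0: the free positive orthant.
tau_psd = np.array([[2.0, 1.0], [1.0, 2.0]])  # eigenvalues 1 and 3: psd
tau_not = np.array([[1.0, 2.0], [2.0, 1.0]])  # eigenvalues -1 and 3: not psd

in_set = np.all(np.linalg.eigvalsh(eval_free_orthant_poly([tau_psd, tau_psd])) >= -1e-12)
out_set = np.all(np.linalg.eigvalsh(eval_free_orthant_poly([tau_psd, tau_not])) >= -1e-12)
print(in_set, out_set)  # → True False
```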
It is easy to see that any free semialgebraic set is closed under direct sums,
meaning that if $(\tau_{1},\ldots,\tau_{d})\in C_{s}(p)$,
$(\chi_{1},\ldots,\chi_{d})\in C_{r}(p)$ then
$(\tau_{1}\oplus\chi_{1},\ldots,\tau_{d}\oplus\chi_{d})\in C_{r+s}(p),$
where $\tau_{i}\oplus\chi_{i}$ denotes the block diagonal sum of two Hermitian
matrices. This is because
$p(\tau_{1}\oplus\chi_{1},\ldots\tau_{d}\oplus\chi_{d})=p(\tau_{1},\ldots\tau_{d})\oplus
p(\chi_{1},\ldots\chi_{d}),$ which is psd if and only if each of the terms is
psd.
Note also that a semialgebraic set is a Boolean combination of $C_{1}(p_{i})$
for a finite set of polynomials $p_{i}$. A ‘free semialgebraic set’ is thus a
noncommutative generalisation thereof, with the difference that usually a
single polynomial $p$ is considered.
A very special case of free semialgebraic sets are _free spectrahedra_ , which
arise from linear matrix polynomials. A linear matrix polynomial is a matrix
polynomial where every word $\omega$ depends only on one variable, i.e.
$\ell=\sigma_{0}\otimes 1+\sum_{i=1}^{d}\sigma_{i}\otimes z_{i}$
with 1 being the empty word, and all $\sigma_{i}\in{\rm Her}_{m}$. The
corresponding free set at level $s$ is given by
$C_{s}(\ell)=\left\{(\tau_{1},\ldots,\tau_{d})\in{\rm Her}_{s}^{d}\mid\sigma_{0}\otimes I_{s}+\sum_{i=1}^{d}\sigma_{i}\otimes\tau_{i}\succcurlyeq 0\right\}$
and $\mathcal{C}(\ell)$ is called a free spectrahedron. The first level set,
$C_{1}(\ell)=\left\{(a_{1},\ldots,a_{d})\in\mathbb{R}^{d}\mid\sigma_{0}+a_{1}\sigma_{1}+\cdots+a_{d}\sigma_{d}\succcurlyeq 0\right\},$
is known as a classical spectrahedron, or simply, a spectrahedron (see Fig. 2
for some three-dimensional spectrahedra). If all $\sigma_{i}$ are diagonal in
the same basis, then the spectrahedron $C_{1}(\ell)$ becomes a polyhedron.
(Intuitively, polyhedra have flat facets whereas the borders of spectrahedra
can be round, as in Fig. 2.) Thus, every polyhedron is a spectrahedron, but
not vice versa.
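A tiny numerical sketch of such a 'round' spectrahedron (the Pauli-matrix choice of $\sigma_{1},\sigma_{2}$ and the helper name are our own illustration, not from the text): with $\sigma_{0}=I_{2}$, $\sigma_{1}=\sigma_{x}$, $\sigma_{2}=\sigma_{z}$, the eigenvalues of $I+a\sigma_{x}+b\sigma_{z}$ are $1\pm\sqrt{a^{2}+b^{2}}$, so $C_{1}(\ell)$ is the closed unit disk — a spectrahedron that is not a polyhedron:

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])   # Pauli x
sz = np.array([[1., 0.], [0., -1.]])  # Pauli z

def in_spectrahedron(a, b, tol=1e-10):
    """Membership in C_1(l) for l = I (x) 1 + sx (x) z_1 + sz (x) z_2,
    i.e. the linear matrix inequality I + a*sx + b*sz >= 0."""
    M = np.eye(2) + a * sx + b * sz
    return np.min(np.linalg.eigvalsh(M)) >= -tol
```

Membership is equivalent to $a^{2}+b^{2}\leqslant 1$, so the boundary is a circle rather than a union of flat facets.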
While the linear image (i.e. the _shadow_) of a polyhedron is a polyhedron, the shadow of a spectrahedron need not be a spectrahedron. The forthcoming book
[NP] presents a comprehensive treatment of spectrahedra and their
shadows.444Shadows can be very different from the actual thing, as this shadow
art by Kumi Yamashita shows.
Figure 2. Some three-dimensional spectrahedra taken from [NP]. Spectrahedra
are convex sets described by a linear matrix inequality, and polyhedra are
particular cases of spectrahedra.
## 3\. One wonderland through two looking glasses
Let us now explain some recent results that illustrate how concepts and
methods from the two disciplines interact. We will focus on positivity and
separability (Section 3.1), quantum magic squares (Section 3.2), non-local
games (Section 3.3), and positivity in tensor networks (Section 3.4).
### 3.1. Positivity and separability
For fixed $d,s\in\mathbb{N}$ consider the set of states and separable states
in ${\rm Mat}_{d}\otimes{\rm Mat}_{s}$, namely ${\rm State}_{d,s}$ and ${\rm
Sep}_{d,s}$, respectively. Both sets are closed in the real vector space ${\rm
Her}_{d}\otimes{\rm Her}_{s}$. Moreover, both are semialgebraic, since ${\rm
State}_{d,s}$ is a classical spectrahedron, and ${\rm Sep}_{d,s}$ can be
proven to be semialgebraic using the projection theorem/quantifier elimination
in the theory of real closed fields (see, e.g., [PD01]).
It has long been known that ${\rm Sep}_{d,s}$ is a strict subset of ${\rm
State}_{d,s}$ whenever $d,s>1$. A recent work by Fawzi [Faw19], building on
Scheiderer’s [Sc18], strengthens this result, by showing that the geometry of
these two sets is significantly different:
###### Theorem 1 ([Faw19]).
If $d+s>5$ then ${\rm Sep}_{d,s}$ is not a spectrahedral shadow.
Recall that a spectrahedral shadow is the linear image of a spectrahedron.
Together with the relations of Table 2, it follows from the previous result
that the corresponding sets of linear maps $T:{\rm Mat}_{d}\to{\rm Mat}_{s}$
satisfy that:
* (i)
Entanglement-breaking maps form a convex semialgebraic set which is not a
spectrahedral shadow,
* (ii)
Completely positive maps form a spectrahedron, and
* (iii)
Positive maps form a convex semialgebraic set which is not a spectrahedral
shadow. This follows from (i), the duality of positive maps and entanglement-
breaking maps, and the fact that duals of spectrahedral shadows are also
spectrahedral shadows [NP].
Let us now consider the set of states and separable states as free sets.
Namely, for fixed $d\geqslant 1$ let
${\rm State}_{d}:=\left({\rm State}_{d,s}\right)_{s\in\mathbb{N}}\quad\mbox{ and }\quad{\rm Sep}_{d}:=\left({\rm Sep}_{d,s}\right)_{s\in\mathbb{N}}.$
This is a particular case of the setup described above, where $V={\rm
Mat}_{d}$ and the involution is provided by $\dagger$. Moreover, both sets
satisfy the condition of free convexity (Eq. (4)). In addition, ${\rm
State}_{d}$ is a free spectrahedron, whereas ${\rm Sep}_{d}$ is not, since for
fixed $s$ it is not even a classical spectrahedral shadow at level $s$ due to
Theorem 1.
Viewing states as free sets also leads to an easy conceptual proof of the
following result [D19], which was first proven by Cariello [Car].
###### Theorem 2 ([D19, Car]).
For arbitrary $d,s\in\mathbb{N}$, if $\rho\in{\rm State}_{d,s}$ is of tensor
rank $2$, i.e. it can be written as
(6) $\displaystyle\rho=\sigma_{1}\otimes\tau_{1}+\sigma_{2}\otimes\tau_{2},$
where $\sigma_{i}$ and $\tau_{i}$ are Hermitian, then it is separable.
Note that $\sigma_{i}$ and $\tau_{i}$ need not be psd. Let us sketch the proof
of [D19] to illustrate the method.
###### Proof.
Consider the linear matrix polynomial $\ell=\sigma_{1}\otimes
z_{1}+\sigma_{2}\otimes z_{2}$, where $\sigma_{1},\sigma_{2}$ are given in Eq.
(6). The fact that $\rho$ is a state means that the corresponding free set of
level $s$ contains $(\tau_{1},\tau_{2})$:
$(\tau_{1},\tau_{2})\in C_{s}(\ell).$
At level one, the spectrahedron $C_{1}(\ell)$ is a convex cone in
$\mathbb{R}^{2}$. A convex cone in the plane must be a _simplex cone_ , i.e. a
cone whose number of extreme rays equals the dimension of the space. In
$\mathbb{R}^{2}$ this means that the cone is spanned by two vectors,
$C_{1}(\ell)={\rm cone}\{v_{1},v_{2}\},$
where $v_{1},v_{2}\in\mathbb{R}^{2}$. When the cone at level one is a simplex
cone, the free convex cone is fully determined [ev, FNT].
In addition, the sets
$T_{s}:=\left\{v_{1}\otimes\eta_{1}+v_{2}\otimes\eta_{2}\mid 0\preccurlyeq\eta_{i}\in{\rm Her}_{s}\right\}$
also give rise to a free convex cone $\left(T_{s}\right)_{s\in\mathbb{N}}$, and we have that $T_{1}=C_{1}(\ell)$.
These two facts imply that $T_{s}=C_{s}(\ell)$ for all $s\in\mathbb{N}$. Using
a representation for $(\tau_{1},\tau_{2})$ in $T_{s}$, and substituting into
Eq. (6) results in a separable decomposition of $\rho$. ∎
The crucial point in the proof is that when the cone at level one is a simplex
cone, the free convex cone is fully determined. This is not a very deep
insight — it can easily be reduced to the case of the positive orthant, where
it is obvious.
Note that the separable decomposition of $\rho$ obtained in the above proof
contains only two terms — in the language of [D19, DN], $\rho$ has separable
rank 2.
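Theorem 2 can be probed numerically. The example below (our own construction, not from [D19] or [Car]) builds a tensor-rank-2 state whose factors are Hermitian but not psd, verifies that it is psd, and checks that its partial transpose is psd — consistent with separability, since for $2\otimes 2$ systems the PPT criterion is equivalent to separability by the Horodecki criterion:

```python
import numpy as np

sy = np.array([[0., -1j], [1j, 0.]])  # Hermitian, NOT psd (eigenvalues +/-1)
I2 = np.eye(2)

# tensor rank 2, with non-psd Hermitian factors, as in Eq. (6)
rho = np.kron(I2, I2) + 0.5 * np.kron(sy, sy)

def partial_transpose(rho, d=2, s=2):
    """Transpose the second tensor factor of a (d*s) x (d*s) matrix."""
    r = rho.reshape(d, s, d, s)
    return r.transpose(0, 3, 2, 1).reshape(d * s, d * s)
```

Both `rho` and `partial_transpose(rho)` have smallest eigenvalue $0.5$, so this rank-2 state is PPT and hence separable, as Theorem 2 predicts.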
References [BN1, BN2] also propose to use free spectrahedra to study some
problems in quantum information theory, but from a different perspective.
Given $d$ Hermitian matrices $\sigma_{1},\ldots,\sigma_{d}\in{\rm Her}_{m}$,
one would like to know whether they fulfill
$0\preccurlyeq\sigma_{i}\preccurlyeq I_{m},$
because this implies that each $\sigma_{i}$ gives rise to the binary POVM
consisting of $\sigma_{i},I_{m}-\sigma_{i}$. In addition, one would like to
know whether $\sigma_{1},\ldots,\sigma_{d}$ are jointly measurable, meaning
that these POVMs are the marginals of one POVM (see [BN1] for an exact
definition).
Now use $\sigma_{1},\ldots,\sigma_{d}$ to construct the linear matrix
polynomial
$\ell:=I_{m}\otimes 1-\sum_{i=1}^{d}(2\sigma_{i}-I_{m})\otimes z_{i}$
and consider its free spectrahedron
$\mathcal{C}(\ell)=\left(C_{s}(\ell)\right)_{s\in\mathbb{N}}.$
Define the matrix diamond as the free spectrahedron $\mathcal{D}=\left(D_{s}\right)_{s\in\mathbb{N}}$ with
$D_{s}:=\left\{(\tau_{1},\ldots,\tau_{d})\in{\rm Her}_{s}^{d}\mid I_{s}-\sum_{i=1}^{d}\pm\tau_{i}\succcurlyeq 0\right\},$
where all possible choices of signs $\pm$ are taken into account. Note that
$D_{1}$ is just the unit ball of $\mathbb{R}^{d}$ in $1$-norm, which explains
the name diamond. Note also that $D_{1}\subseteq C_{1}(\ell)$ is equivalent to $0\preccurlyeq\sigma_{i}\preccurlyeq I_{m}$ for all $i=1,\ldots,d$. Since
these finitely many conditions can be combined into a single linear matrix
inequality (using diagonal blocks of matrix polynomials), $\mathcal{D}$ is
indeed a free spectrahedron. The following result translates the joint
measurability to the containment of free spectrahedra:
###### Theorem 3 ([BN1]).
$\sigma_{1},\ldots,\sigma_{d}$ are jointly measurable if and only if
$\mathcal{D}\subseteq\mathcal{C}(\ell)$.
That one free spectrahedron is contained in another, $\mathcal{D}\subseteq\mathcal{C}(\ell)$, means that each of the corresponding levels satisfies the same containment, i.e. $D_{s}\subseteq C_{s}(\ell)$ for all $s\in\mathbb{N}$.
The containment of spectrahedra and free spectrahedra has received
considerable attention recently [bental, hecp, hedi, FNT, PSS]. One often
studies inclusion constants for containment, which determine how much the
small spectrahedron needs to be shrunk in order to obtain inclusion. In [BN1,
BN2] this is used to quantify the degree of incompatibility, and to obtain
lower bounds on the joint measurability of quantum measurements.
### 3.2. Quantum magic squares
Let us now look at magic squares and their quantum cousins.
A magic square is a $d\times d$-matrix with positive entries such that every
row and column sums to the same number (see Fig. 3 for two beautiful examples). A doubly stochastic matrix is a $d\times d$-matrix with real
nonnegative entries, in which each row and each column sums to $1$. So doubly
stochastic matrices contain a probability measure in each row and each column.
For example, dividing every entry of Dürer’s magic square by 34 results in a
doubly stochastic matrix. Now, the set of doubly stochastic matrices forms a
polytope, whose vertices consist of the permutation matrices, i.e. doubly
stochastic matrices with a single 1 in every row and column and 0 elsewhere
(that is, permutations of the identity matrix). This is the content of the
famous Birkhoff–von Neumann Theorem.
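The following sketch checks both facts numerically: Dürer's magic square divided by 34 is doubly stochastic, and (the easy direction of Birkhoff–von Neumann) any convex combination of permutation matrices is doubly stochastic. The helper name is ours:

```python
import numpy as np

def is_doubly_stochastic(M, tol=1e-10):
    """Nonnegative entries, every row and every column sums to 1."""
    return bool((M >= -tol).all()
                and np.allclose(M.sum(axis=0), 1.0)
                and np.allclose(M.sum(axis=1), 1.0))

# Duerer's magic square: every row and column sums to 34
durer = np.array([[16,  3,  2, 13],
                  [ 5, 10, 11,  8],
                  [ 9,  6,  7, 12],
                  [ 4, 15, 14,  1]], dtype=float)
D = durer / 34.0

# a random convex combination of permutation matrices
rng = np.random.default_rng(0)
perms = [np.eye(4)[rng.permutation(4)] for _ in range(5)]
w = rng.random(5)
w /= w.sum()
M = sum(wi * P for wi, P in zip(w, perms))
```

The converse direction — that every doubly stochastic matrix is such a convex combination — is the nontrivial content of the Birkhoff–von Neumann theorem.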
Figure 3. (Left) The magic square on the façade of the Sagrada Família in
Barcelona, where every row and column adds to 33. (Right) The magic square in
Albrecht Dürer’s lithograph _Melencolia I_ , where every row and column adds
to 34.
A ‘quantum’ generalization of a doubly stochastic matrix is obtained by
putting a POVM (defined in Section 2.1) in each row and each column of a
$d\times d$-matrix. This defines a quantum magic square [D20]. That is, in
passing from doubly stochastic matrices to quantum magic squares, we promote
the nonnegative numbers to psd matrices. The normalisation conditions on the
numbers (that they sum to 1) become the normalisations of the POVM (that they
sum to the identity matrix).
What is a quantum generalisation of a permutation matrix? Permutation matrices
only contain 0s and 1s, so in passing to the quantum version, we promote 0 and
1 to orthogonal projectors (given that 0 and 1 are the only numbers that
square to themselves). The relevant notion is thus that of a _projection
valued measure_ (PVM), in which each measurement operator
$\tau_{1},\ldots,\tau_{d}$ is an orthogonal projection,
$\tau_{i}^{2}=\tau_{i}$. Quantum permutation matrices are magic squares
containing a PVM in each row and column [Ba].555 See the closely related
notion of _quantum Latin squares_ [Mu16, Ho20], which in essence are quantum
permutation matrices with rank 1 projectors.
While PVMs are a special case of POVMs, every POVM dilates to a PVM (see,
e.g., [Pau02]):
###### Theorem 4 (Naimark’s Dilation Theorem).
Let $\tau_{1},\ldots,\tau_{d}$ (of size $m\times m$) form a POVM. Then there
exists a PVM $\sigma_{1},\ldots,\sigma_{d}$ (of size $n\times n$, for some
$n$) and a matrix $v\in{\rm Mat}_{n,m}$ such that
$v^{*}\sigma_{i}v=\tau_{i}\mbox{ for all }i=1,\ldots,d.$
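One standard way to realise Naimark's dilation explicitly (a textbook construction, see e.g. [Pau02]; the helper names below are our own) is to stack the square roots $\sqrt{\tau_{i}}$ into an isometry $v$ and take the PVM of block projections:

```python
import numpy as np

def psd_sqrt(M):
    """Hermitian square root of a psd matrix via eigendecomposition."""
    w, U = np.linalg.eigh(M)
    return U @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ U.conj().T

def naimark(povm):
    """Dilate a POVM (tau_1, ..., tau_d) on C^m to a PVM on C^(d*m).
    v stacks sqrt(tau_i); since sum_i tau_i = I_m, v is an isometry."""
    d, m = len(povm), povm[0].shape[0]
    v = np.vstack([psd_sqrt(t) for t in povm])      # (d*m) x m, v* v = I_m
    Ps = []
    for i in range(d):
        E = np.zeros((d, d))
        E[i, i] = 1.0
        Ps.append(np.kron(E, np.eye(m)))            # projection onto block i
    return v, Ps
```

Then $v^{*}P_{i}v=\sqrt{\tau_{i}}\,\sqrt{\tau_{i}}=\tau_{i}$, exactly as the theorem requires.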
In terms of free sets, this theorem states that _the free convex hull of the
set of PVMs is precisely the set of POVMs_. Both sets are free semialgebraic,
and the POVMs even form a free spectrahedron.
Through the glass of free semialgebraic geometry, quantum magic squares form a
free spectrahedron over the space $V={\rm Mat}_{d}$, equipped with entrywise
complex conjugation as an involution. Level $s$ corresponds to POVMs with
matrices of size $s\times s$, and thus level 1 corresponds to doubly
stochastic matrices. We thus recover the magic in the classical world at level
1, and we have an infinite tower of levels on top of that expressing the
quantum case.
Furthermore, quantum permutation matrices form a free semialgebraic set whose
first level consists of permutation matrices. The ‘classical magic’ is thus
again found at level 1, and the quantum magic is expressed in an infinite
tower on top of it.
Now, recall that the Birkhoff–von Neumann theorem says that the convex hull of
the set of permutation matrices is the set of doubly stochastic matrices. So
the permutation matrices are the vertices of the polytope of doubly stochastic
matrices. In the light of the towers of quantum magic squares and quantum
permutation matrices, this theorem fully characterises what happens at level
one. We ask whether a similar characterisation is possible for the quantum
levels: _Is the free convex hull of quantum permutation matrices equal to the
set of quantum magic squares?_
This question can be phrased in terms of dilations as follows. By Naimark’s
Dilation Theorem we know that every POVM dilates to a PVM. The question is
whether this also holds for a two-dimensional array of POVMs, i.e. whether
every square of POVMs can be dilated to a square of PVMs. The non-trivial part is
that the dilation must work simultaneously for all POVMs in the rows and
columns. The two-dimensional version of Naimark’s Dilation Theorem can thus be
phrased as: _Does every quantum magic square dilate to a quantum permutation
matrix?_
The answer to these questions is ‘no’: these quantum generalisations already fail in the simplest nontrivial case. This means that there must exist very
strange (and thus very interesting) quantum magic squares:
###### Theorem 5 ([D20]).
For each $d\geqslant 3$, the free convex hull of the free semialgebraic set of
$d\times d$ quantum permutation matrices is strictly contained in the free
spectrahedron of quantum magic squares. This strict containment already
appears at level $s=2$.
The latter statement means that there is a $d\times d$-matrix with POVMs of
size $2\times 2$ in each row and column which does not dilate to a matrix with
a PVM in each row and column.
In words, the tower of quantum levels does not admit the same kind of ‘easy’
characterisation as level one or the case of a single POVM — at least not the
natural generalisations we have considered here. This is yet another sign of
the richer structure of the quantum world compared to the classical one.
### 3.3. Non-local games and quantum correlations
Consider a game with two players, Alice and Bob, and a referee. The referee
chooses a question randomly from finite sets $\mathcal{Q}_{A}$ and
$\mathcal{Q}_{B}$ for Alice and Bob, respectively, and sends them to Alice and
Bob. Upon receiving her question, Alice chooses from a finite set
$\mathcal{A}_{A}$ of answers, and similarly Bob chooses his answer from the
finite set $\mathcal{A}_{B}$. They send their answers to the referee, who
computes a winning function
$w\colon\mathcal{Q}_{A}\times\mathcal{Q}_{B}\times\mathcal{A}_{A}\times\mathcal{A}_{B}\to\{0,1\}$
to determine whether they win or lose the game (value of $w$ being 1 or 0,
respectively).
During the game, Alice and Bob know both the winning function $w$ and the
probability measure on $\mathcal{Q}_{A}\times\mathcal{Q}_{B}$ used by the
referee to choose the questions. So before the game starts Alice and Bob agree
on a joint strategy. However, during the game Alice and Bob are ‘in separate
rooms’ (or in separate galaxies) so they cannot communicate. In particular,
Alice will not know Bob’s question and vice versa. In order to find the
strategy that maximises the winning probability, Alice and Bob have to solve
an optimisation problem.666Thus, strictly speaking, this is not a game in the
game-theoretic sense, but (just) an optimisation problem.
What kind of strategies may Alice and Bob choose? It depends on the resources
they have. First, in a classical deterministic strategy, both Alice and Bob
reply deterministically to each of their questions, and they do so
independently of each other. This is described by two functions
$c_{A}\colon\mathcal{Q}_{A}\to\mathcal{A}_{A}\quad\mbox{ and }\quad
c_{B}\colon\mathcal{Q}_{B}\to\mathcal{A}_{B},$
which specify which answer Alice and Bob give to each question.
Slightly more generally, in a classical randomised strategy, Alice and Bob’s
answers are probabilistic, but still independent of each other. This is
described by
$r_{A}\colon\mathcal{Q}_{A}\to{\rm Pr}(\mathcal{A}_{A})\quad\mbox{ and }\quad r_{B}\colon\mathcal{Q}_{B}\to{\rm Pr}(\mathcal{A}_{B}),$
where ${\rm Pr}(S)$ denotes the set of probability measures on the set $S$.
Namely, if Alice receives question $a$, the probability that she answers $x$
is given by $r_{A}(a)(x)$, where $r_{A}(a)$ is the probability measure on
$\mathcal{A}_{A}$ corresponding to question $a$. Similarly, Bob answers $y$ to
$b$ with probability $r_{B}(b)(y)$. Since Alice and Bob answer independently
of each other, the joint probability of answering $x,y$ upon questions $a,b$
is the product of the two,
(7) $\displaystyle p(x,y\mid a,b)=r_{A}(a)(x)\cdot r_{B}(b)(y).$
Finally, a quantum strategy allows them to share a bipartite state
$\rho\in{\rm State}_{d,s}$. The questions determine which measurement to apply
to their part of the state, and the measurement outcomes determine the
answers. This is described by functions
(8) $\displaystyle q_{A}\colon\mathcal{Q}_{A}\to{\rm
POVM}_{d}(\mathcal{A}_{A})\quad\mbox{ and }\quad
q_{B}\colon\mathcal{Q}_{B}\to{\rm POVM}_{s}(\mathcal{A}_{B})$
whose image is the set of POVMs with matrices of size $d\times d$ and $s\times
s$, respectively, on the respective sets of answers. The probability that
Alice answers $x$ upon receiving $a$ is described by $q_{A}(a)(x)$, which is
the psd matrix that the POVM $q_{A}(a)$ assigns to answer $x$. Similarly,
Bob’s behaviour is modelled by $q_{B}(b)(y)$. Since they act independently of
each other, this is described by the tensor product of the two. Using rule
(1), we obtain that their joint probability is given by
(9) $\displaystyle p(x,y\mid a,b)=\mathrm{tr}\left(\rho\left(q_{A}(a)(x)\otimes q_{B}(b)(y)\right)\right).$
Now, the table of conditional probabilities $\left(p(x,y\mid a,b)\right)_{(a,b,x,y)\in\mathcal{Q}_{A}\times\mathcal{Q}_{B}\times\mathcal{A}_{A}\times\mathcal{A}_{B}}$
is called the correlation matrix of the respective strategy. For any given
kind of strategy, the set of correlation matrices is the feasible set of the
optimisation problem that Alice and Bob have to solve. The objective function
of this optimisation problem is given by the winning probability. Since this
objective function is linear in the correlation matrix entries, one can
replace the feasible set by its convex hull.
The important fact is that quantum strategies cannot be reproduced by
classical randomised strategies:
###### Theorem 6 ([Be, Cl69]).
If at least $2$ questions and $2$ answers exist for both Alice and Bob, the
convex hull of correlation matrices of classical randomised strategies is
strictly contained in the set of correlation matrices of quantum strategies.
For classical randomised strategies, passing to the convex hull has the
physical interpretation of including a _hidden variable_. The latter is a
variable whose value is unknown to us, who are describing the system, and it
is usually denoted $\lambda$. However, this mysterious variable $\lambda$ is
shared between Alice and Bob, and it will determine the choice of their POVMs
together with their respective questions $a,b$. This is the physical
interpretation of the convex hull
$p(x,y\mid a,b)=\sum_{\lambda}q_{\lambda}\>r_{A}(a,\lambda)(x)\cdot
r_{B}(b,\lambda)(y),$
where $q_{\lambda}$ is the probability of the hidden variable taking the value
$\lambda$. For example, we can imagine that Alice and Bob are listening to a
radio station which plays songs from a certain list, but this is a ‘private’
radio station to which we have no access. The song at the moment of playing
the game (i.e. receiving the questions) will determine the value of $\lambda$
(i.e. $\lambda$ is an index of that list).
Theorem 6 thus states that quantum strategies cannot be emulated by classical
strategies, even if we take into account ‘mysterious’ hidden variables.
Let us now approach these results from the perspective of free sets. Assume
for simplicity that all four sets
$\mathcal{Q}_{A},\mathcal{Q}_{B},\mathcal{A}_{A},\mathcal{A}_{B}$ have two
elements. A quantum strategy consists of a state $\rho\in{\rm State}_{d,s}$
and the following psd matrices for Alice and Bob, respectively, satisfying
this normalisation condition:
(10) $\displaystyle\begin{split}&\sigma_{j}^{(i)}\succcurlyeq 0\mbox{ and }\tau_{j}^{(i)}\succcurlyeq 0\\ &\mbox{such that }\sum_{j}\sigma_{j}^{(i)}=I_{d}\mbox{ and }\sum_{j}\tau_{j}^{(i)}=I_{s},\end{split}$
where $i,j=1,2$. The superscript refers to the questions and the subscript to
the answers. The correlation matrix is given by
$\left(\mathrm{tr}\left(\rho\>\left(\sigma_{k}^{(i)}\otimes\tau_{l}^{(j)}\right)\right)\right)_{i,j,k,l}.$
Using the spectral decomposition of $\rho=\sum_{r}v_{r}v_{r}^{*}$, it can be
written as
(11) $\displaystyle\left(\sum_{r}v_{r}^{*}\left(\sigma_{k}^{(i)}\otimes\tau_{l}^{(j)}\right)v_{r}\right)_{i,j,k,l},$
where $v_{r}\in\mathbb{C}^{d}\otimes\mathbb{C}^{s}$ and
$\sum_{r}v_{r}^{*}v_{r}=1$. Through the looking glass of free semialgebraic geometry, this is the first level of a free convex hull. To see this, define the
free set $\mathcal{I}$ as
(12) $\displaystyle\mathcal{I}=\bigcup_{d,s\geq 1}\left\{\left(\sigma_{k}^{(i)}\otimes\tau_{l}^{(j)}\right)_{i,j,k,l}\in{\rm Mat}_{4}({\rm Mat}_{d}\otimes{\rm Mat}_{s})\mid\sigma_{k}^{(i)}\mbox{ and }\tau_{l}^{(j)}\mbox{ satisfy (10)}\right\}$
(Note that the 4 is due to the fact that we have 2 questions and 2 answers; more
generally we would have a matrix of size
$|\mathcal{Q}_{A}||\mathcal{Q}_{B}|\times|\mathcal{A}_{A}||\mathcal{A}_{B}|$.
Note also that the ordering of questions and answers of Alice and Bob is
irrelevant for the following discussion.)
If we look at level 1 of this free set, we find that $I_{1}$ is the subset of ${\rm Mat}_{4}(\mathbb{R})$ consisting precisely of the correlation matrices of classical randomised strategies. In other words, when $d=s=1$, the
formula coincides with that of (7). Furthermore, higher levels of this free
set contain the tensor products of POVMs of Alice and Bob in the corresponding
space ${\rm Mat}_{d}$ and ${\rm Mat}_{s}$. Since $I_{1}$ is called the independence model in algebraic statistics [Dr], we call $\mathcal{I}$ the free independence model: it is the natural noncommutative generalisation of independent strategies.
Let us now consider the free convex hull of $\mathcal{I}$. First of all,
computing the conditional probabilities of a pair of POVMs with a given state
$\rho$ corresponds to compressing to level 1 with the vectors $\{v_{r}\}$
given by the spectral decomposition of $\rho$, as in (11). So _the set of
quantum correlations is the first level of the free convex hull of the free
independence model._
We thus encounter an interesting phenomenon: the free convex hull of a free
set can be larger than the classical convex hull at a fixed level.
Specifically, the convex hull of $I_{1}$ is the set of classical correlations,
whereas the free convex hull of $\mathcal{I}$ at level 1 is the set of quantum
correlations, which are different by Theorem 6. In fact, wilder things can
happen: fractal sets can arise in the free convex hull of free semialgebraic
sets [al]. We wonder what these results imply for the corresponding quantum
information setup.
Now, in the free convex hull of $\mathcal{I}$, what do higher levels
correspond to? Compressing to lower levels (i.e. with smaller $ds$)
corresponds to taking the partial trace with a psd matrix of size smaller than
$ds$. This results in 4 psd matrices (one for each $i,j,k,l$), each of size
$<ds$, and which need not be an elementary tensor product.
What about ‘compressing’ to higher levels? Any compression to a higher level
can be achieved by direct sums of the POVMs of Alice and Bob and a compression
to a lower level as we just described. The number of elements in this direct
sum is precisely $n$ in (4). Another way of seeing that the direct sum is
needed is by noting that, if $n=1$, the matrices $v_{i}$ cannot fulfill the
normalisation condition on the right hand side of (4). In quantum information
terms, this says that a POVM in a given dimension cannot be transformed to a
POVM in a larger dimension by means of an isometry, because the terms will sum
to a projector instead of the identity.
Let us make two final remarks. The first one is that $\mathcal{I}$ is not a
free semialgebraic set, for the simple reason that it is not closed under
direct sums (which is a property of these sets, as we saw in Section 2.2), as
is easily checked.
The second remark is that the free convex hull of the free independence model
is not closed. This follows from the fact that, at level 1, this free convex
hull fails to be closed, as shown in [sl] and for smaller sizes in [dy].
###### Theorem 7 ([sl, dy]).
For at least $5$ questions and $2$ answers, the set of quantum correlation
matrices is not closed.
In our language, this implies that the level $ds$ — which is compressed to level 1 in the construction of the free convex hull — cannot be upper bounded. That is, the larger $ds$, the more correlation matrices we obtain in the compression to level 1.
In the recent preprint [Fu21] the membership problem in the closure of the set
of quantum correlations is shown to be undecidable, for a fixed (and large
enough) size of the sets of questions and answers.
A computational approach to quantum correlations, comparable to sums-of-
squares and moment relaxation approaches in polynomial optimisation [blek,
NP], is the NPA-hierarchy [NPA1, NPA2, NPA3]. We briefly describe the approach
here, omitting technical details. Assume one is given a table
$p=\left(p(x,y\mid a,b)\right)_{(a,b,x,y)\in\mathcal{Q}_{A}\times\mathcal{Q}_{B}\times\mathcal{A}_{A}\times\mathcal{A}_{B}},$
and the task is to check whether it is the correlation matrix of a quantum
strategy. The NPA hierarchy provides a family of necessary conditions, each
more stringent than the previous one, for $p$ to be such a correlation matrix.
In order to understand the NPA hierarchy, we will first _assume_ that $p$ is a
correlation matrix, i.e. there is a state $\rho$ and strategies such that (9)
holds. We will use this state and strategies to define a positive functional
on a certain algebra. Namely, we consider the game $*$-algebra
$\mathcal{G}:=\mathbb{C}\langle\mathcal{Q}_{A}\times\mathcal{A}_{A},\mathcal{Q}_{B}\times\mathcal{A}_{B}\rangle.$
This is an algebra of polynomials in certain noncommuting variables.
Explicitly, for each question and answer pair from Alice and Bob, $(a,x)$ and
$(b,y)$, there is an associated self-adjoint variable, $z_{(a,x)}$ and
$z_{(b,y)}$, respectively. $\mathcal{G}$ consists of all polynomials with
complex coefficients in these variables; for example, the monomial
$z_{(a,x)}z_{(b,y)}\in\mathcal{G}$. Now, _if we had the strategy
$\rho,q_{A},q_{B}$_ we could construct a linear functional
$\varphi\colon\mathcal{G}\to\mathbb{C}$
by evaluating the variables $z_{(a,x)}$ and $z_{(b,y)}$ at the psd matrices
$q_{A}(a)(x)\otimes I_{s}$ and $I_{d}\otimes q_{B}(b)(y)$, respectively, and
computing the trace inner product with the state $\rho$. So, in particular,
evaluating $\varphi$ at the monomial $z_{(a,x)}z_{(b,y)}$ would yield
(13) $\displaystyle\varphi(z_{(a,x)}z_{(b,y)})$ $\displaystyle={\rm tr}\left(\rho\left(\left(q_{A}(a)(x)\otimes I_{s}\right)\cdot\left(I_{d}\otimes q_{B}(b)(y)\right)\right)\right)$ (14) $\displaystyle=p(x,y\mid a,b).$
The crucial point is that $\varphi$ evaluated at this monomial needs to have
the value $p(x,y\mid a,b)$ _for any strategy_ realising $p$. In other words,
the linear constraint on $\varphi$ expressed in Equation (14) must hold even
if we do not know the strategy. This functional must satisfy other nice
properties independently of the strategy too, such as being positive.
This perspective is precisely the one we now take. Namely, we assume that the
strategy $\rho,q_{A},q_{B}$ is not given (since our question is whether $p$ is
a quantum strategy at all), and we search for a functional on $\mathcal{G}$
that has the stated properties (or other properties, depending on the kind of
strategies one is looking for). When restricted to a finite-dimensional
subspace of $\mathcal{G}$, this becomes a semidefinite optimisation problem,
as can be easily checked. The dimension of this subspace will be the parameter
indicating the level of the hierarchy, which is gradually increased.
Solvability of all these semidefinite problems is thus a necessary condition
for $p$ to be a quantum correlation matrix. In words, the levels of the NPA
hierarchy form an outer approximation to the set of correlations. Conversely,
if all/many of these problems are feasible, one can apply a (truncated)
Gelfand-Naimark-Segal (GNS) construction (see for example [Pau02]) to the
obtained functional, and thereby try to construct a quantum strategy that
realises $p$. This is the content of the NPA hierarchy from the perspective of
free semialgebraic geometry.
### 3.4. Positivity in tensor networks
Let us finally explain some results about positivity in tensor networks. The
results are not as much related to free semialgebraic geometry as to
positivity and sums of squares, as we will see.
Since the state space of a composite quantum system is given by the tensor
product of smaller state spaces (Eq. (2)), the global dimension $d$ grows
exponentially with the number of subsystems $n$. Very soon it becomes
infeasible to work with the entire space — to describe $n=270$ qubits ($d_{i}=2$), we would need to deal with a space dimension $d\sim 2^{270}\sim
10^{80}$, the estimated number of atoms in the Universe. To describe anything
at the macro-scale involving a mole of particles, $\sim 10^{23}$, we would
need a space dimension of $\sim 2^{{10}^{23}}$, which is much larger than a
googol ($10^{100}$), but smaller than a googolplex ($10^{10^{100}}$). These
absurd numbers illustrate how quickly the Hilbert space description becomes
impractical — in practice, it works well for a few tens of qubits.777The lack
of scalability of this description is far from being a unique case in physics
— most theories are not scalable. One needs to find the new relevant degrees
of freedom at the new scale, which will define an emergent theory.
Fortunately, many physically relevant states admit an efficient description.
The ultimate reason is that physical interactions are local (w.r.t. a
particular tensor product decomposition; this decomposition typically reflects
spatial locality). The resulting relevant states admit a description using
only a few terms for every local Hilbert space. The main idea of tensor
networks is precisely to use a few matrices for every local Hilbert space
${\rm Mat}_{d_{i}}$ (Eq. (2); see, e.g., [Or18, Ci20]).
Now, this idea interacts with _positivity_ in a very interesting way (Fig. 4).
Positivity is a property in the global space ${\rm Mat}_{d}$ which cannot be
easily translated to positivity properties in the local spaces. As we will
see, there is a ‘tension’ between using a few matrices for each local Hilbert
space and representing the positivity locally. This mathematical interplay has
implications for the description of quantum many-body systems, among others.
Figure 4. The notion of positivity gives rise to convexity, which gives rise
to many surprising effects when interacting with the multiplicity of systems,
as in this lithograph by M. C. Escher.
Let us see one example of a tensor network decomposition where this
_positivity problem_ appears. To describe a mixed state in one spatial
dimension with periodic boundary conditions we use the matrix product density
operator form (MPDO) of $\rho$,
$\rho=\sum_{i_{1},\ldots,i_{n}=1}^{r}\rho_{i_{1},i_{2}}^{(1)}\otimes\rho_{i_{2},i_{3}}^{(2)}\otimes\cdots\otimes\rho_{i_{n},i_{1}}^{(n)}.$
The smallest such $r$ is called the operator Schmidt rank of $\rho$ [Ve04b,
Zw04]. Clearly, every state admits an MPDO form, and the ones with small $r$
can be handled efficiently. But how is the positivity of $\rho$ reflected in
the local matrices? Clearly, if all local matrices are psd (i.e.
$\rho_{ij}^{(k)}\succcurlyeq 0$) then $\rho$ will be psd. But some sums of
non-psd matrices will also give rise to a global psd matrix, since negative
subspaces may cancel in the sum. Can one easily characterise the set of local
matrices whose sum is psd? The short answer is ‘no’.
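The cancellation of negative subspaces can be seen in a two-site toy example. The following sketch (assuming NumPy) uses the Pauli matrix $X$, which is not psd, yet the resulting sum is:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])  # Pauli X: eigenvalues +1 and -1, not psd

# Two-site sum with non-psd local terms:  rho = I (x) I + X (x) X
rho = np.kron(I, I) + np.kron(X, X)

print(np.linalg.eigvalsh(X))    # contains -1: the local term is not psd
print(np.linalg.eigvalsh(rho))  # eigenvalues 0 and 2 (up to numerical noise): rho is psd
```

Since $I\otimes I$ and $X\otimes X$ commute, the global eigenvalues are $1\pm 1\in\{0,2\}$, so the negative subspace of $X\otimes X$ is exactly cancelled.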
For further reference, if all local matrices are psd, so that $\rho$ is
separable, the corresponding $r$ is called the separable rank of $\rho$ [DN,
DHN].
To obtain a local certificate of positivity, we first express
$\rho=\xi\xi^{*}$ (which is possible if and only if $\rho$ is psd) and then
apply the tensor network ‘philosophy’ to $\xi$, i.e. express $\xi$ in matrix
product form:
$\rho=\xi\xi^{*}\quad\mbox{ with
}\quad\xi=\sum_{i_{1},\ldots,i_{n}=1}^{r}\xi_{i_{1},i_{2}}^{(1)}\otimes\xi_{i_{2},i_{3}}^{(2)}\otimes\cdots\otimes\xi_{i_{n},i_{1}}^{(n)}.$
This is the local purification form of $\rho$. Note that there are many $\xi$
that satisfy $\rho=\xi\xi^{*}$, as $\xi$ need not be Hermitian or a square
matrix (it could be a column vector). The smallest $r$ among all such $\xi$ is
called the purification rank of $\rho$.
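One concrete way to obtain a factorisation $\rho=\xi\xi^{*}$ is via the eigendecomposition of $\rho$; a sketch assuming NumPy (the random psd $\rho$ here is an illustrative stand-in for an actual state):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
# Random psd matrix (a density-matrix-like object, unnormalised)
G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = G @ G.conj().T

# One choice of xi with rho = xi xi^*: the principal (Hermitian psd) square root
vals, vecs = np.linalg.eigh(rho)
xi = vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T

assert np.allclose(xi @ xi.conj().T, rho)
# xi is far from unique: xi @ U for any unitary (or isometry) U also works,
# which is why minimising r over all xi is a nontrivial optimisation.
```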
The interesting point for the purposes of this paper is that the purification
rank is a _noncommutative generalisation_ of the positive semidefinite rank of
a nonnegative matrix. There are many more such connections: the separable
rank, the translational invariant (t.i.) purification rank, and the t.i.
separable rank are noncommutative generalisations of the nonnegative rank, the
cpsd rank and the cp rank of nonnegative matrices, respectively [DN]. As a
matter of fact, this connection holds in much greater generality, as we will
explain below. In all of these cases, the ranks coincide for quantum states
that are diagonal in the computational basis.
From our perspective, this connection is beneficial for both sides. For
example, for quantum many-body systems, this insight together with the results
by [GPT] leads to the following result:
###### Theorem 8 ([DSPC, DN]).
The purification rank cannot be upper bounded by a function of the operator
Schmidt rank only. The separable rank cannot be upper bounded by a function of
the purification rank only.
(It is worth noting that these separations are not robust, as they disappear in the
approximate case for certain norms [De20].)
Conversely, the quantum perspective provides a natural and well-motivated path
for generalisation of the ‘commutative’ results about cpsd rank, cp rank, etc.
For example, in [GPT] it is shown that the extension complexity of a polytope
w.r.t. a given cone is given by the rank of the slack matrix of that polytope
w.r.t. that cone. We wonder whether this result could be generalised to the
noncommutative world. This would give a _geometric_ interpretation of the
purification rank, the separable rank and their symmetric versions, perhaps as
extension complexities of some objects.
Symmetry is a central property in physics, both conceptually and practically.
Conceptually, symmetry is the other side of the coin of a conserved quantity
(by Noether’s theorem). Practically, it allows for more efficient mathematical
descriptions, as symmetric objects have fewer degrees of freedom. For example,
in the above context, $\rho$ is translationally invariant if it remains
unchanged under cyclic permutations of the local systems. This raises the
question: is there an MPDO form that explicitly expresses this symmetry? For
example, the following form does,
$\rho=\sum_{i_{1},\ldots,i_{n}=1}^{r}\rho_{i_{1},i_{2}}\otimes\rho_{i_{2},i_{3}}\otimes\cdots\otimes\rho_{i_{n},i_{1}},$
because it uses the same matrices on every site, and the arrangement of
indices is such that a cyclic permutation of the local systems does not change
$\rho$. But does this hold for other symmetries too?
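For $n=2$ this invariance is easy to verify numerically. The following sketch (assuming NumPy, with randomly chosen local matrices $\rho_{i,j}$) checks that swapping the two sites leaves $\rho$ unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 2, 3
# Local matrices rho_{i,j}: one d x d block per index pair (i, j)
A = rng.standard_normal((r, r, d, d))

# Translationally invariant form for n = 2:
# rho = sum_{i,j} rho_{i,j} (x) rho_{j,i}
rho = sum(np.kron(A[i, j], A[j, i]) for i in range(r) for j in range(r))

# SWAP operator exchanging the two tensor factors: |a,b> -> |b,a>
S = np.zeros((d * d, d * d))
for a in range(d):
    for b in range(d):
        S[a * d + b, b * d + a] = 1.0

# A cyclic permutation of the two sites leaves rho invariant
assert np.allclose(S @ rho @ S.T, rho)
```

The check works because conjugation by SWAP turns each term $\rho_{i,j}\otimes\rho_{j,i}$ into $\rho_{j,i}\otimes\rho_{i,j}$, which merely relabels the summation indices.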
The existence of such _invariant decompositions_ and their corresponding ranks
has been studied in a very general framework [DHN]. Explicitly, every tensor
decomposition is represented as a simplicial complex, where the individual
tensor product spaces are associated to the vertices, and the summation
indices to the facets. The symmetry is modelled by a group action on the
simplicial complex. The central result is that an invariant decomposition
exists if the group action is free on the simplicial complex [DHN]. Just to
give one example, if $\rho\in{\rm Mat}_{d}\otimes{\rm Mat}_{d}$ is separable
and symmetric, it will in general not admit a decomposition of the type
$\rho=\sum_{\alpha}\rho_{\alpha}\otimes\rho_{\alpha}\quad\mbox{with all
}\rho_{\alpha}\mbox{ psd,}$
but it will have one of the type
$\rho=\sum_{\alpha,\beta}\rho_{\alpha,\beta}\otimes\rho_{\beta,\alpha}\quad\mbox{with
all }\rho_{\alpha,\beta}\mbox{ psd}.$
From the perspective of our framework, this is due to the fact that the group
permuting the two end points of an edge does not act freely on the edge
connecting them. But this group action can be made free if the two points are
connected by two edges, leading to the two indices $\alpha,\beta$ in the above
sum. This is one example of a _refinement_ of a simplicial complex, which
makes the action of the group free [DHN].
Finally, we remark that this framework of tensor decompositions with
invariance applies not only to quantum many-body systems, but to any
object in a tensor product space. One example is multivariate symmetric
polynomials with positivity conditions [Kl21].
A related question is the existence of invariant decompositions _uniform_ in
the system size. Namely, given a tensor
$\rho=\left(\rho_{\alpha,\beta}\right)_{\alpha,\beta=1,\ldots,r}$
with all $\rho_{\alpha,\beta}\in{\rm Mat}_{d}$, define
$\tau_{n}(\rho):=\sum_{\alpha_{1},\ldots,\alpha_{n}=1}^{r}\rho_{\alpha_{1},\alpha_{2}}\otimes\rho_{\alpha_{2},\alpha_{3}}\otimes\cdots\otimes\rho_{\alpha_{n},\alpha_{1}}\in{\rm
Mat}_{d^{n}}$
for all $n\in\mathbb{N}$. The result, in this case, is very different from the
fixed $n$ case:
###### Theorem 9 ([DCCW]).
Let $d,r\geqslant 7$. Then it is undecidable whether
$\tau_{n}(\rho)\succcurlyeq 0$ for all $n\in\mathbb{N}$.
Using this result it can be shown that a translationally invariant local
purification of $\tau_{n}(\rho)$ _uniform in the system size_ need not exist
[DCCW].
The proof of this theorem uses a reduction from the matrix mortality problem.
In the latter, given a finite set of matrices $M_{\alpha}\in{\rm
Mat}_{d}(\mathbb{Z})$, one is asked whether there is a word $w$ such that
$0=M_{w_{1}}\cdots M_{w_{n}}\in{\rm Mat}_{d}(\mathbb{Z})$. While this problem
is noncommutative (because matrix multiplication is), the problem about
$\tau_{n}(\rho)$ is ‘more’ noncommutative. Intuitively, if all
$\rho_{\alpha,\beta}$ are diagonal, we recover a version of the matrix
mortality problem. Note also that the space where $\tau_{n}(\rho)$ lives grows
with $n$, in contrast to the matrix mortality problem.
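Although matrix mortality is undecidable in general, any single instance can be searched by brute force over words of bounded length. A minimal sketch (assuming NumPy; the particular matrices are an illustrative choice, not taken from the source):

```python
import itertools
import numpy as np

# Illustrative instance: does some product of these matrices vanish?
M = [np.array([[0, 1], [0, 0]]),   # nilpotent: M[0] @ M[0] = 0
     np.array([[1, 0], [0, 2]])]

def find_mortal_word(mats, max_len=4):
    """Search all words up to max_len for a zero product; return the word or None."""
    for n in range(1, max_len + 1):
        for word in itertools.product(range(len(mats)), repeat=n):
            prod = np.eye(mats[0].shape[0], dtype=int)
            for idx in word:
                prod = prod @ mats[idx]
            if not prod.any():
                return word
    return None

print(find_mortal_word(M))  # (0, 0): M[0] @ M[0] is the zero matrix
```

The undecidability of the full problem means no such bounded search can succeed uniformly: there is no computable bound on the length of the shortest mortal word.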
The decidability of a similar problem can be studied for more general
algebras. In that case, $\rho_{\alpha,\beta}$ is in a certain algebra, and
$\tau_{n}(\rho)$ is asked to be in a certain cone [Gr21].
Let us finally explain a computational approach for the finite case. So
consider $n$ fixed and recall that after specifying some local matrices
$\rho_{i}^{(j)}$, one wants to know whether
$\rho=\sum_{i}\rho_{i}^{(1)}\otimes\cdots\otimes\rho_{i}^{(n)}$
is psd. Since $n$ is fixed, this problem is decidable, but computing and
diagonalising $\rho$ is infeasible for large values of $n$ (in fact, the
problem is NP-hard [Kl14]). So one has to come up with a different idea. What can be
computed are certain moments of $\rho$, i.e. the numbers
$\mathrm{tr}(\rho^{k})$ for small enough $k$. This follows from the
observation that the moments only require local matrix products,
$\mathrm{tr}(\rho^{k})=\sum_{i_{1},\ldots,i_{k}=1}^{r}\mathrm{tr}\left(\rho_{i_{1}}^{(1)}\cdots\rho_{i_{k}}^{(1)}\right)\cdots\mathrm{tr}\left(\rho_{i_{1}}^{(n)}\cdots\rho_{i_{k}}^{(n)}\right).$
These few moments can then be used to compute optimal upper and lower bounds
on the distance of $\rho$ to the cone of psd matrices [D20/2]. Specifically,
to compute this distance it suffices to compare $\rho$ with $f(\rho)$, where
$f\colon\mathbb{R}\to\mathbb{R}$ is the function that leaves the positive
numbers unchanged, and sets the negative numbers to zero. We then approximate
$f$ by polynomial functions $q$ of low degree, so that $\mathrm{tr}(q(\rho))$
only uses a few moments of $\rho$. The best results were obtained with certain
sums of squares approximations, which can be computed with a linear or
semidefinite optimisation.
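The moment identity above can be checked numerically in a small instance. A sketch assuming NumPy, with $\rho=\sum_i A_i\otimes B_i$ on two sites and randomly chosen local matrices:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
d, r, k = 3, 2, 4
A = rng.standard_normal((r, d, d))
B = rng.standard_normal((r, d, d))

# Global operator on two sites
rho = sum(np.kron(A[i], B[i]) for i in range(r))

# Direct moment: tr(rho^k), requires the full d^2 x d^2 matrix
direct = np.trace(np.linalg.matrix_power(rho, k))

# Local computation: sum over index tuples of products of local traces,
# which never forms the global matrix
local = sum(
    np.trace(np.linalg.multi_dot([A[i] for i in w])) *
    np.trace(np.linalg.multi_dot([B[i] for i in w]))
    for w in itertools.product(range(r), repeat=k)
)

assert np.isclose(direct, local)
```

The local route costs $O(r^{k})$ small matrix products instead of powers of a $d^{n}\times d^{n}$ matrix, which is what makes low moments accessible even when $\rho$ itself is out of reach.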
## 4\. Closing words
We have illustrated how quantum information theory and free semialgebraic
geometry often study very similar mathematical objects from different
perspectives. We have given the examples of positivity and separability
(Section 3.1), quantum magic squares (Section 3.2), non-local games (Section
3.3), and positivity in tensor networks (Section 3.4). In all of these cases,
we have tried to illustrate how results can be transferred among the two
fields, and how this can be beneficial for the two perspectives. As mentioned
in the introduction, there are many similar such connections which have not
been covered here.
Going back to New Hampshire’s motto, we conclude that it is undecidable to
determine whether to live free or die, because both the question of whether
matrices generate a free semigroup and the matrix mortality problem are
undecidable.
_Acknowledgements_.— GDLC acknowledges support from the Austrian Science Fund
(FWF) through the START Prize Y1261-N and the Stand-Alone project P33122-N.
TN acknowledges support from the Austrian Science Fund (FWF) with Stand Alone
project P29496-N35.
## References
* [al] Alekseev, V., Netzer, T., Thom, A., "Quadratic modules, $C^{*}$-algebras, and free convexity", Trans. Amer. Math. Soc. 372(11), 7525–7539, 2019.
* [arv] Arveson, W., "An Invitation to $C^{*}$-Algebras", Graduate Texts in Mathematics 39, Springer-Verlag, New York-Heidelberg, 1976.
* [Ba] Banica, T., Bichon, J., Collins, B., "Quantum permutation groups: a survey", in Noncommutative Harmonic Analysis with Applications to Probability, Banach Center Publ. 78, Polish Acad. Sci. Inst. Math., Warsaw, 13–34, 2007.
* [Be] Bell, J. S., "On the Einstein Podolsky Rosen paradox", Phys. Phys. Fiz. 1(3), 195–200, 1964.
* [bental] Ben-Tal, A., Nemirovski, A., "On tractable approximations of uncertain linear matrix inequalities affected by interval uncertainty", SIAM J. Optim. 12(3), 811–833, 2002.
* [blek] Blekherman, G., Parrilo, P. A., Thomas, R. R. (eds.), "Semidefinite Optimization and Convex Algebraic Geometry", MOS-SIAM Series on Optimization 13, SIAM, Philadelphia, PA, 2013.
* [BN1] Bluhm, A., Nechita, I., "Joint measurability of quantum effects and the matrix diamond", J. Math. Phys. 59(11), 112202, 2018.
* [BN2] Bluhm, A., Nechita, I., "Compatibility of quantum measurements and inclusion constants for the matrix jewel", SIAM J. Appl. Algebra Geom. 4(2), 255, 2020.
* [Car] Cariello, D., "Separability for weak irreducible matrices", Quantum Inf. Comput. 14, 1308, 2014.
* [Ch75] Choi, M. D., "Positive semidefinite biquadratic forms", Linear Algebra Appl. 12(2), 95–100, 1975.
* [Ci20] Cirac, I., Perez-Garcia, D., Schuch, N., Verstraete, F., "Matrix product states and projected entangled pair states: concepts, symmetries, and theorems", arXiv:2011.12127, 2020.
* [Cl69] Clauser, J. F., Horne, M. A., Shimony, A., Holt, R. A., "Proposed experiment to test local hidden-variable theories", Phys. Rev. Lett. 23, 880, 1969.
* [Co20] Coecke, B., Meichanetzidis, K., "Meaning updating of density matrices", arXiv:2001.00862, 2020.
* [DSPC] De las Cuevas, G., Schuch, N., Perez-Garcia, D., Cirac, J. I., "Purifications of multipartite states: limitations and constructive methods", New J. Phys. 15, 123021, 2013.
* [DCCW] De las Cuevas, G., Cubitt, T. S., Cirac, J. I., Wolf, M. M., Pérez-García, D., "Fundamental limitations in the purifications of tensor networks", J. Math. Phys. 57(7), 071902, 2016.
* [D19] De las Cuevas, G., Drescher, T., Netzer, T., "Separability for mixed states with operator Schmidt rank two", Quantum 3, 203, 2019.
* [D20] De las Cuevas, G., Drescher, T., Netzer, T., "Quantum magic squares: dilations and their limitations", J. Math. Phys. 61(11), 111704, 2020.
* [D20/2] De las Cuevas, G., Fritz, T., Netzer, T., "Optimal bounds on the positivity of a matrix from a few moments", Comm. Math. Phys. 375(1), 105–126, 2020.
* [Gr21] De las Cuevas, G., Graf, J., Netzer, T., "Computational complexity of algebras in a chain", in preparation, 2021.
* [DHN] De las Cuevas, G., Hoogsteder Riera, M., Netzer, T., "Tensor decompositions on simplicial complexes with invariance", arXiv:1909.01737, 2019.
* [De20c] De las Cuevas, G., Klingler, A., Lewis, M., Netzer, T., "Cats climb entail mammals move: preserving hyponymy in distributional semantics", arXiv:2005.14134, 2020.
* [De20] De las Cuevas, G., Klingler, A., Netzer, T., "Approximate tensor decompositions: disappearance of many separations", arXiv:2004.10219, 2020.
* [Kl21] De las Cuevas, G., Klingler, A., Netzer, T., "General polynomial decompositions with invariance and positivity", in preparation, 2021.
* [DN] De las Cuevas, G., Netzer, T., "Mixed states in one spatial dimension: decompositions and correspondence with nonnegative matrices", J. Math. Phys. 61(4), 041901, 2020.
* [Dr] Drton, M., Sturmfels, B., Sullivant, S., "Lectures on Algebraic Statistics", Oberwolfach Seminars 39, Birkhäuser Verlag, Basel, 2009.
* [dy] Dykema, K., Paulsen, V. I., Prakash, J., "Non-closure of the set of quantum correlations via graphs", Comm. Math. Phys. 365(3), 1125–1142, 2019.
* [ev] Evert, E., Helton, J. W., Klep, I., McCullough, S., "Extreme points of matrix convex sets, free spectrahedra, and dilation theory", J. Geom. Anal. 28(2), 1373–1408, 2018.
* [Faw19] Fawzi, H., "The set of separable states has no finite semidefinite representation except in dimension 3x2", arXiv:1905.02575, 2019.
* [FNT] Fritz, T., Netzer, T., Thom, A., "Spectrahedral containment and operator systems with finite-dimensional realization", SIAM J. Appl. Algebra Geom. 1(1), 556–574, 2017.
* [Fu21] Fu, H., Miller, C. A., Slofstra, W., "The membership problem for constant-sized quantum correlations is undecidable", arXiv:2101.11087, 2021.
* [Gi15] Giustina, M., Versteegh, M. A. M., Wengerowsky, S., et al., "Significant-loophole-free test of Bell's theorem with entangled photons", Phys. Rev. Lett. 115, 250401, 2015.
* [GPT] Gouveia, J., Parrilo, P. A., Thomas, R. R., "Lifts of convex sets and cone factorizations", Math. Oper. Res. 38(2), 248–264, 2013.
* [hecp] Helton, J. W., Klep, I., McCullough, S., "The matricial relaxation of a linear matrix inequality", Math. Program. 138(1–2, Ser. A), 401–445, 2013.
* [hedi] Helton, J. W., Klep, I., McCullough, S., Schweighofer, M., "Dilations, linear matrix inequalities, the matrix cube problem and beta distributions", Mem. Amer. Math. Soc. 257(1232), 2019.
* [He15] Hensen, B., Bernien, H., Dréau, A. E., et al., "Loophole-free Bell inequality violation using electron spins separated by 1.3 kilometres", Nature 526, 682, 2015.
* [Ho03] Horodecki, M., Shor, P. W., Ruskai, M. B., "Entanglement breaking channels", Rev. Math. Phys. 15, 629, 2003.
* [Ho20] Horodecki, P., Rudnicki, Ł., Życzkowski, K., "Five open problems in quantum information", arXiv:2002.03233, 2020.
* [Kl14] Kliesch, M., Gross, D., Eisert, J., "Matrix product operators and states: NP-hardness and undecidability", Phys. Rev. Lett. 113, 160503, 2014.
* [Mu16] Musto, B., Vicary, J., "Quantum Latin squares and unitary error bases", Quantum Inf. Comput. 16, 1318, 2016.
* [NPA1] Navascués, M., Pironio, S., Acín, A., "Bounding the set of quantum correlations", Phys. Rev. Lett. 98, 010401, 2007.
* [NPA2] Navascués, M., Pironio, S., Acín, A., "A convergent hierarchy of semidefinite programs characterizing the set of quantum correlations", New J. Phys. 10, 073013, 2008.
* [NPA3] Navascués, M., Vértesi, T., "Bounding the set of finite dimensional quantum correlations", Phys. Rev. Lett. 115, 020501, 2015.
* [Ne19] Netzer, T., "Free semialgebraic geometry", Internat. Math. Nachrichten 240, 31–41, 2019.
* [NP] Netzer, T., Plaumann, D., "Geometry of Linear Matrix Inequalities", forthcoming, 2021.
* [Ni00] Nielsen, M. A., Chuang, I. L., "Quantum Computation and Quantum Information", Cambridge University Press, 2000.
* [Or18] Orús, R., "Tensor networks for complex systems", Nat. Rev. Phys. 1, 538–550, 2019.
* [PSS] Passer, B., Shalit, O. M., Solel, B., "Minimal and maximal matrix convex sets", J. Funct. Anal. 274(11), 3197–3253, 2018.
* [Pau02] Paulsen, V., "Completely Bounded Maps and Operator Algebras", Cambridge Studies in Advanced Mathematics 78, Cambridge University Press, 2002.
* [PD01] Prestel, A., Delzell, C. N., "Positive Polynomials", Springer Monographs in Mathematics, Springer-Verlag, Berlin, 2001.
* [Re21] Renou, M.-O., Trillo, D., Weilenmann, M., et al., "Quantum physics needs complex numbers", arXiv:2101.10873, 2021.
* [Sc18] Scheiderer, C., "Spectrahedral shadows", SIAM J. Appl. Algebra Geom. 2(1), 26–44, 2018.
* [Sh15] Shalm, L. K., Meyer-Scott, E., Christensen, B. G., et al., "Strong loophole-free test of local realism", Phys. Rev. Lett. 115, 250402, 2015.
* [sl] Slofstra, W., "The set of quantum correlations is not closed", Forum Math. Pi 7, e1, 2019.
* [Ve04b] Verstraete, F., Porras, D., Cirac, J. I., "Density matrix renormalization group and periodic boundary conditions: a quantum information perspective", Phys. Rev. Lett. 93, 227205, 2004.
* [wi] Wilde, M. M., "Quantum Information Theory", Cambridge University Press, 2017.
* [Wo11] Wolf, M. M., "Quantum Channels & Operations: A Guided Tour", lecture notes, 2011. https://www-m5.ma.tum.de/foswiki/pub/M5/Allgemeines/MichaelWolf/QChannelLecture.pdf
* [Zw04] Zwolak, M., Vidal, G., "Mixed-state dynamics in one-dimensional quantum lattice systems: a time-dependent superoperator renormalization algorithm", Phys. Rev. Lett. 93, 207205, 2004.
# An Incremental Gray-box Physical Adversarial Attack on Neural Network
Training
Rabiah Al-qudah1, Moayad Aloqaily1, Bassem Ouni2, Mohsen Guizani1, Thierry
Lestable2
1Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI), UAE
2Technology Innovation Institute, Abu Dhabi, UAE
E-mails: 1{rabiah.alqudah; moayad.aloqaily<EMAIL_ADDRESS>, 2{bassem.ouni<EMAIL_ADDRESS>
###### Abstract
Neural networks have demonstrated remarkable success in learning and solving
complex tasks in a variety of fields. Nevertheless, the rise of those networks
in modern computing has been accompanied by concerns regarding their
vulnerability to adversarial attacks. In this work, we propose a novel
gradient-free, gray box, incremental attack that targets the training process
of neural networks. The proposed attack, which implicitly poisons the
intermediate data structures that retain the training instances between
training epochs acquires its high-risk property from attacking data structures
that are typically unobserved by professionals. Hence, the attack goes
unnoticed despite the damage it can cause. Moreover, the attack can be
executed without the attackers’ knowledge of the neural network structure or
training data making it more dangerous. The attack was tested under a
sensitive application of secure cognitive cities, namely, biometric
authentication. The conducted experiments showed that the proposed attack is
effective and stealthy. Finally, the attack effectiveness property was
concluded from the fact that it was able to flip the sign of the loss gradient
in the conducted experiments to become positive, which indicated noisy and
unstable training. Moreover, the attack was able to decrease the inference
probability in the poisoned networks compared to their unpoisoned counterparts
by 15.37%, 14.68%, and 24.88% for the Densenet, VGG, and Xception,
respectively. Finally, the attack retained its stealthiness despite its high
effectiveness. This was demonstrated by the fact that the attack did not cause
a notable increase in the training time, in addition, the Fscore values only
dropped by an average of 1.2%, 1.9%, and 1.5% for the poisoned Densenet, VGG,
and Xception, respectively.
###### Index Terms:
Adversarial Attacks, Data Poisoning, Neural Networks, Iris Recognition.
## I Introduction
Cognitive cities [1] are proactive, hyper-connected, and citizen-driven cities
that are designed to minimize resource consumption, in order to achieve
sustainability. In addition, the vast advancement in Artificial Intelligence
(AI) and Internet of Things (IoT) technologies has enhanced the evolution of
research that integrates both technologies to deliver and automate services
for cognitive cities’ residents. However, the developments that emerged
from the integration of those technologies have brought unforeseen exposures
to cybersecurity, in addition to novel attacks that must be addressed in
order to deliver secure automation to cognitive cities.
Securing access to different services and facilities, such as connected
buildings and data centers, and managing the flow of foot traffic are crucial
requirements when adopting the cognitive city paradigm. Those requirements can
be implemented using biometric authentication such as fingerprint recognition
and iris recognition. Despite the benefits of biometric authentication,
privacy concerns and security attacks pose serious challenges to this
technology after deployment. Attacks that target biometric recognition systems
typically include presenting human characteristics or artifacts directly to a
biometric system to interfere with or bias its standard operation. Such
attacks can result in granting access to unauthorized individuals into secured
premises, allowing tailgating, or triggering denial of service by rejecting
the biometrics of authorized individuals. For instance, in 2017, the Chaos
Computer Club executed a successful attack on the Samsung Galaxy S8 iris
scanner using a simple photograph and a contact lens [2].
On a different note, neural networks have gained wide popularity in the past
decade due to their supremacy in terms of accuracy and minimal need for human
intervention. Moreover, those networks are data hungry and are very sensitive
to patterns they are exposed to during the training phase. On the other hand,
neural networks are vulnerable and can be biased even with the introduction of
simple adversarial attacks. For example, altering a single pixel in the data
fed to an image classifier can disrupt the learning experience and result in a
biased model [3].
Adversarial attacks are considered white box when the attacker has full access
to the neural network and data, while gray box attacks assume having access to
either and black box attacks assume access to neither. Those attacks can be
categorized into digital attacks and physical attacks. Digital attacks
engineer pixel values of input images, whereas physical attacks insert pixel
patches that represent real world objects into the input image instance.
Attacker goals also vary: a “targeted” attack aims to fault the predictions
of a certain class, whereas a “non-targeted” attack aims to fault the model
in general.
Furthermore, attacks that target faulting the inference phase have been
extensively studied in the literature. On the contrary, only a handful of
papers focused on faulting the training phase and the intermediate values
related to its computations. In 2022, Breier et al. introduced the first
attack that directly targets the training phase by perturbing the ReLu values
while training [4]. In fact, the lack of research attention on attacks that
target the training phase puts many applications that rely on neural networks
in jeopardy. In this work, we propose and test a novel attack that focuses on
faulting the training process of neural networks in the domain of biometric
authentication through iris recognition. The contributions of this work can be
summarized as follows:
1. 1.
We introduce a novel gradient-free, data poisoning attack that incrementally
and directly targets the training set during the training process of a neural
network with minimal knowledge by the attacker. To the best of our knowledge,
this is the first attack that executes between training epochs and targets the
intermediate data structures of the training phase.
2. 2.
We conduct extensive experimental verification on the proposed attack to test
its effectiveness and stealthiness. We define four evaluation criteria to
quantify the effect of the attack, namely, the average of the loss change, the
average inference probability, the training time difference, and the
performance degradation measure.
3. 3.
We test the proposed attack on an important aspect of a cognitive city,
namely, iris recognition. To the best of our knowledge, this is the first
attempt to test the effect of an adversarial attack that occurs during
training on the domain of iris recognition.
The rest of this paper is organized as follows: the most recent literature on
the domain of physical attacks and iris recognition is presented in Section
II. The proposed methods are outlined in Section III. The results are
described and discussed in Section IV. Finally, Section V concludes and
summarizes the main highlights and observations of this work.
## II Related Work
### II-A Attacks on Neural Networks
Patch attacks are physical attacks that replace a subset of pixels in an image
with pixels from adversarial patches to bias a model [5]. While many studies
have proposed attacks that target faulting the inference phase [6, 7], only a
handful of papers focused on faulting the training phase and the intermediate
values related to its computations [4]. For example, Zhao et al. [6] applied
the alternating direction method of multipliers at the inference time to solve
the optimization problem of the targeted fault sneaking attack. The results
showed that the attack was successful and stealthy; the success rate was
approximately 100% when the number of targeted images was less than 10,
whereas it decreased as the number of fooled images increased.
Furthermore, the work in [7] studied the effects of bitwise perturbations at
inference time on 19 deep networks. The vulnerable parameters of the
experimented networks were identified using heuristic functions. The results
showed that most deep architectures have at least one parameter that causes an
accuracy loss of over 90% when a bit-flip is executed on their bitwise
representation.
In addition, the Fast Gradient Sign Method (FGSM) has been widely used in the
literature as an attacking strategy [8]. This method includes adding noise
whose direction is the same as the gradient of the cost function with respect
to the data using a trained model. The work in [4], proposed the first attack
that targets the training phase by changing the values of the ReLu function to
bias the neural network. The novel attack was proven to be effective and
stealthy.
### II-B Attacks on Iris Recognition Systems
The crucial role iris recognition has played in securing premises, in addition
to the threatening effects of breaching such authentication systems, have made
iris biometric authentication systems an active target for adversarial
attacks. A novel morph attack on iris recognition systems was tackled in [9].
Sharma et al. generated morphed iris images using the Indian Institute of
Technology Delhi (IITD) Iris Database and West Virginia University (WVU)
multi-modal datasets. The morph attack achieved a success rate higher than 90%
on two state-of-the-art iris recognition methods, which indicates the
vulnerability of iris recognition systems.
In order to protect against the increasing attacks, researchers have also
focused on studying countermeasures and detection mechanisms for iris
recognition attacks. For example, Thukral et al. [10] proposed an iris
spoofing detection system that utilized Gabor filters and Histogram of
Gradient (HOG) bins to extract features. Next, a Support Vector Machine (SVM)
was used to detect whether the extracted features represented a fake or a real iris.
The proposed system was able to detect spoofing attacks with an accuracy of
98%. Finally, Tapia et al. [11] tackled testing the liveness of the scanned
iris to protect the system from being fooled by printed images or artificial
eyes. The proposed work utilized a MobileNetV2 network, which was trained from
scratch. Moreover, the authors increased the number of filters and weighted
each class based on the number of its instances. The proposed method was able
to accurately classify irises with competitive Bona Fide Presentation
Classification Error Rates (BPCER) of less than 4% in all experiments.
## III Physical Gray-box Adversarial Attacks
A labeled training set of size $s$ can be represented as $DS=\\{(x^{i}$,
$y^{i})\\}^{s}_{i=1}$, where $y^{i}\in\mathcal{Y}$ and $\mathcal{Y}$ is the
set of all possible output classes for an image classification problem. When
training a deep classifier, we aim to optimize a discriminant function
$\mathcal{F}$ that maps each instance, $x^{i}$, to the class associated with
the highest class probability, as can be seen in Equation 1. This optimization
process takes place during the training process by passing $DS$ to a deep
classifier for a number of training rounds. The number of training rounds will
be referred to as $Epochs$ throughout the rest of this paper. The
aforementioned setting of training $\mathcal{F}$ without any attacks will be
referred to as the base model throughout this work.
$\mathcal{F}(x^{i})=argmax_{y\in\mathcal{Y}}\ P(y\mid x^{i})$ (1)
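The decision rule in Equation 1 reduces to picking the class of maximum predicted probability. A minimal sketch (all names here are illustrative, not from the authors' code):

```python
def predict(class_probabilities):
    """Return the class label with the highest predicted probability,
    i.e. the argmax in Equation 1."""
    return max(class_probabilities, key=class_probabilities.get)

# Hypothetical probability distribution over three enrolled identities.
probs = {"user_a": 0.10, "user_b": 0.70, "user_c": 0.20}
print(predict(probs))  # -> user_b
```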
### III-A Attack Definition
In our proposed attack, an attacker aims to corrupt the training process by
perturbing the training instances incrementally between training epochs in
order to optimize a corrupted poisoned discriminant function
$\mathcal{F^{\prime}}$ that produces faulty probability distributions over the
possible output classes. The attack is executed implicitly in multiple rounds.
In each poisoning round, a poisoning procedure that selects $X\subseteq DS$ of
size $|X|=\alpha*s$ is executed, where $\alpha\in(0\%,100\%]$ is the poisoning
percentage coefficient. The attacker’s goal is to replace $X=\\{(x^{i}$,
$y^{i})\\}^{|X|}_{i=1}$ with a poisoned set $X^{\prime}=\\{(g(x^{i}),$
$y^{i})\\}^{|X|}_{i=1}$, where $g(.)$ is the poisoning function that modifies
$x^{i}$ at a pixel level. The poisoning function replaces the pixels that fall
within a selected area, namely $Patch_{Area}$, with faulty pixels,
$x^{\prime}$, in order to corrupt the image representation and result in a
faulty training process. The poisoning function can be seen in Equation 2,
where $W$ and $H$ are the width and height of the training image instance
$x^{i}$.
$g(x)_{u,v}=\begin{cases}x^{\prime}_{u,v}&\text{if }(u,v)\in Patch_{Area}\\\ x_{u,v}&\text{otherwise}\end{cases},\quad u\in[0,W),\ v\in[0,H)$ (2)
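Equation 2 amounts to overwriting a rectangular region of the image with adversarial pixels and leaving everything else untouched. The sketch below is one possible implementation on a toy nested-list image; the function and variable names are assumptions, not the paper's actual injection code:

```python
def poison(image, patch, top, left):
    """Sketch of g(.) in Equation 2: pixels inside Patch_Area (a rectangle of
    the patch's size anchored at (top, left)) are overwritten with the patch's
    pixels; every pixel outside the area is returned unchanged."""
    poisoned = [row[:] for row in image]              # copy; original untouched
    for du in range(len(patch)):
        for dv in range(len(patch[0])):
            u, v = top + du, left + dv
            if u < len(image) and v < len(image[0]):  # stay inside [0,W)x[0,H)
                poisoned[u][v] = patch[du][dv]
    return poisoned

clean = [[0] * 4 for _ in range(4)]   # toy 4x4 grayscale training instance
patch = [[9, 9], [9, 9]]              # toy 2x2 "physical patch"
dirty = poison(clean, patch, top=1, left=1)
```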
The attack targets the intermediate data structures, where the training
instances are saved. In addition, it is executed incrementally between
training epochs, such that a different $X$ is selected every poisoning round
in order to accumulate the poisoned training instances and increase the
effectiveness of the attack.
The attack frequency coefficient, $\beta\in[1,Epochs]$, determines the number
of poisoning rounds. When the value of $\beta$ is chosen to be 1,
then the attack will be executed after each training epoch causing an
increased risk of damage. On the contrary, if the value is chosen to be
$Epochs$, then the poisoning process will only happen once after the first
training epoch.
### III-B Poisoning Strategy
Function $g(.)$ in Equation 2 replaces pixels in a training instance within
the defined poisoning area, $Patch_{Area}$. This poisoning procedure can be
implemented in multiple ways. In this work, we opted to implement $g(.)$ to
execute local perturbations and global perturbations [12]. It is worth
mentioning that only one type of perturbation was considered in each of the
conducted experiments in this work.
In the local perturbations setting, a small area of the training instance,
called the physical patch, is replaced with pixels from another image. In
this work, the physical patch was chosen to be close to the training set
domain, hence it was an image of a human eye. It is worth mentioning that the
size of the $Patch_{Area}$ and its location are randomized, and optimizing
them is out of the scope of this work [5].
On the other hand, in the global perturbations setting, all the instances in
$X$ are replaced with another randomly selected image from the training set.
This way, the classifier is exposed to a highly redundant training set,
which corrupts the training process by increasing the risk of overfitting.
Both poisoning strategies are not easy to blacklist, since the local setting
only alters a small area of each instance and the global perturbation setting
uses an image from within the training instances in a manner that imitates
image augmentation, which is a benign, widely used technique in training
neural networks.
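Under one reading of the global perturbations setting, a fraction $\alpha$ of the instances is overwritten with a copy of a randomly chosen training image while labels are kept, which injects heavy redundancy. A sketch under that assumption (names are illustrative):

```python
import random

def poison_global(train_set, alpha, rng=None):
    """Global-perturbation sketch: replace a fraction alpha of the training
    images with a copy of one randomly selected training image, keeping every
    label unchanged (the attack is non-targeted)."""
    rng = rng or random.Random(0)
    n = max(1, int(alpha * len(train_set)))
    chosen = set(rng.sample(range(len(train_set)), n))   # no repeats within X
    source_img, _ = train_set[rng.randrange(len(train_set))]
    return [(source_img if i in chosen else img, label)
            for i, (img, label) in enumerate(train_set)]

ds = [("img%d" % i, "user%d" % (i % 3)) for i in range(10)]
poisoned = poison_global(ds, alpha=0.2)
```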
### III-C Attack Characteristics
The attack specifications can be summarized as:
1. 1.
The attack is non-targeted: the attack definition in Section III-A shows that
no restrictions apply to the choice of $y^{i}$ in $X$. Moreover, the value of
$y^{i}$ remains unchanged after poisoning takes place in $X^{\prime}$.
2. 2.
The attack does not affect packet delay: the attack only targets the training
phase, whereas the inference phase is executed in the usual manner. Hence, the
attack is stealthy in the sense that it does not affect the packet delay when
the deep classifier is deployed on the cloud.
3. 3.
The attack samples without replacement: to guarantee faster and stealthier
execution, $X$ is sampled every poisoning round without replacement; that is,
an instance can be included only once in $X$ in a given poisoning round,
although it can be included in multiple poisoning rounds. This implies
that the network will be exposed to a different training set after every
poisoning round, which results in a higher training instability.
4. 4.
The attack is incremental for increased effectiveness: the poisoned instances
in $X^{\prime}$ accumulate in the training set after each poisoning round and
throughout the training phase, which in turn intensifies the effect of
poisoning even at a low value of $\alpha$.
5. 5.
The attack is gradient-free [13] and is gray box: the attack is gray box since
we assume that the attacker only has access to the intermediate data
structures of the training process without the need to access the physical
path of the training instances or the neural network architecture. In other
words, the attack is agnostic to the neural network architecture. The attack
is also gradient-free since it perturbs the training data between epochs
without the need to access the gradients of the attacked neural network.
6. 6.
The attack targets intermediate data structures: typically developers’ efforts
are more focused on preparing and preprocessing the training set before
training. On the other hand, what happens during training and the values of
the intermediate data structures that keep the training instances are
overlooked, especially since training is usually conducted on powerful
servers with limited physical access. Hence, this attack, which poisons the
data implicitly between training epochs, acquires its high-risk property from
attacking data structures that are typically not monitored by professionals,
and hence the attack goes unnoticed despite the damage it causes.
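Several of the characteristics above (incremental accumulation, per-round sampling without replacement, the frequency coefficient $\beta$) can be sketched as a schedule around the training loop. This is one plausible placement of the poisoning rounds, with assumed names, not the authors' actual injection code:

```python
import random

def poisoning_schedule(epochs, dataset_size, alpha, beta, rng=None):
    """Sketch of the attack schedule: after every beta-th training epoch, a
    fresh subset X of size alpha*s is sampled without replacement within that
    round, and the poisoned indices accumulate across rounds."""
    rng = rng or random.Random(1)
    size = int(alpha * dataset_size)
    poisoned_so_far, rounds = set(), []
    for epoch in range(1, epochs + 1):
        # ... one normal training epoch would run here ...
        if epoch % beta == 0:
            x = rng.sample(range(dataset_size), size)  # no repeats in this X
            poisoned_so_far.update(x)                  # incremental build-up
            rounds.append(x)
    return rounds, poisoned_so_far

rounds, total = poisoning_schedule(epochs=10, dataset_size=100,
                                   alpha=0.10, beta=1)
```

With $\beta=1$ the sketch poisons after every epoch (ten rounds here); with $\beta=Epochs$ it poisons exactly once.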
### III-D Evaluation Metrics
In each experiment, the neural networks will be evaluated and compared in
terms of the following evaluation measures:
1. 1.
Attack effectiveness measures: an attack is called effective if it achieves
its intended goals. In our proposed attack, the goal is to expose the deep
classifier to an unstable training process, which in turn, will result in
faulty probability distributions produced by the network at the inference
stage.
1. (a)
Average of Loss Change $(ALC)$: the loss function is typically expected to
decrease as the training process progresses. This is due to backpropagation,
which reflects what the network learned during each training epoch. The $ALC$
measures the average change in the loss value over the training epochs, and
the sign of this evaluation metric is a leading element, as it reflects
whether the loss was decreasing or increasing throughout training. Executing
the attack is expected to cause instability in the training process due to the
noisy poisoned data and, hence, increase the $ALC$ value. The $ALC$ can be
defined as follows, where $\ell_{i}$ is the loss at the end of epoch $i$ and
$Epochs$ is the number of training epochs:
$ALC=\frac{\sum_{i=2}^{Epochs}(\ell_{i}-\ell_{i-1})}{Epochs-1}$ (3)
2. (b)
Average Inference Probability (AIP): the softmax function is typically used in
the last layer of deep classifiers to normalize the output to a probability
distribution over the possible output classes. Each test instance is
classified as the class of the highest probability. In this evaluation
criterion, we assess the effect of the attack on the probabilities produced by
the model at the inference stage, as typically higher probabilities imply more
confidence about the selected class. As a result, a decreased average
probability reflects the effectiveness of the attack on the final output of
the model. $AIP$ can be calculated using Equation 4, where $t^{i}$ is a test
instance.
$AIP=Average(max_{y\in\mathcal{Y}}\ P(y\mid t^{i}))$ (4)
2. 2.
Attack stealthiness measures: an attack is called stealthy if the evaluation
metrics of the corrupted classifier $\mathcal{F^{\prime}}$ are close to the
metrics of the base model $\mathcal{F}$ [4].
1. (a)
Training Time Difference $(TTD)$: training a neural network can be a lengthy
process, especially when the training instances are large. Hence, it is
crucial to ensure that executing the attack will not cause an observable added
amount of time to the training phase, in order to keep the attack unnoticed.
The $TTD$ measure can be defined as follows:
$TTD=TrainingTime^{\prime}-TrainingTime_{base}$ (5)
where $TrainingTime_{base}$ is the time taken to train the base model, and
$TrainingTime^{\prime}$ is the training time when the neural network is
trained with poisoned data.
2. (b)
Performance Degradation Measure (PDM): in order to confirm the attack
stealthiness, the metrics of the poisoned classifier need to be reasonably
close to the metrics of the base classifier. In this evaluation criterion, the
difference between the macro Fscore of the base model and each poisoned model
is calculated, as described in Equation 6, where $Fscore^{\prime}$ is the
Fscore of a poisoned model.
$PDM={Fscore}_{base}-Fscore^{\prime}$ (6)
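The four evaluation measures above reduce to simple arithmetic over recorded loss curves, softmax outputs, timings, and Fscores. The sketch below computes each one; the function names and sample values are illustrative, not the authors' code:

```python
def average_loss_change(losses):
    """ALC (Equation 3): mean of the successive loss differences; a positive
    value means the loss was, on average, increasing during training."""
    return sum(losses[i] - losses[i - 1]
               for i in range(1, len(losses))) / (len(losses) - 1)

def average_inference_probability(prob_rows):
    """AIP (Equation 4): mean, over test instances, of the probability
    assigned to the predicted (highest-probability) class."""
    return sum(max(row) for row in prob_rows) / len(prob_rows)

def ttd(poisoned_seconds, base_seconds):
    """TTD (Equation 5): extra training time introduced by the attack."""
    return poisoned_seconds - base_seconds

def pdm(base_fscore, poisoned_fscore):
    """PDM (Equation 6): drop in macro Fscore caused by the attack."""
    return base_fscore - poisoned_fscore

stable_run = [1.00, 0.80, 0.65, 0.55]   # loss decreasing -> negative ALC
noisy_run  = [1.00, 1.20, 1.50, 1.90]   # loss increasing -> positive ALC
```

Note that the ALC sum telescopes to $(\ell_{Epochs}-\ell_{1})/(Epochs-1)$, so its sign is decided by the first and last loss values.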
### III-E Datasets
The proposed attack perturbs images and hence can target any computer vision
application. Nevertheless, we opted to apply it to an iris recognition
dataset, due to the significance of this domain. The CASIA Iris Subject Ageing
dataset [14] was considered in our experiments. This dataset was collected by
the National Laboratory of Pattern Recognition (NLPR) in China in April 2009
and April 2013. In this work, the subset of CASIA Iris Subject Ageing which
was collected in 2009 using the H100 sensor was chosen due to its high
diversity and good size. The subset comprises 37912 instances of the left and
right eyes of 48 individuals. The dataset instances pose some challenging
scenarios, such as glasses and partially closed eyes; moreover, some instances have
very low brightness. The cross-validation method was used to train and
evaluate the neural networks, and 100 images from each subject were
randomly selected for the test dataset.
### III-F Technical and Experimental Setup
Three state-of-the-art deep classifiers, namely, Densenet, VGG, and Xception
were considered for this work. Moreover, the number of epochs was set to 10,
the cross-entropy loss function was used, and the networks were trained with a
learning rate of 0.01 on the Google Colab Pro platform, which utilizes NVIDIA
GPUs. It is worth mentioning that the code of this work is available on Github
[15].
Each of the 3 considered deep classifiers was experimented with $\alpha$
values of 5%, 10%, 15%, and 20%, as 20% is typically the maximum poisoning
percentage considered in the literature [16]. In the experiments description
and results, the local perturbations poisoning strategy will be referred to as
$P$, and the global perturbations strategy will be referred to as $R$.
## IV Results and Discussion
In this section, the results of evaluating the proposed attack will be
presented in detail. Figures 1, 2, and 3 depict the results of the evaluation
metrics described in III-D. In all figures, the result of the base model is
depicted as the origin point ($\alpha$ = 0%).
Figure 1: Experimental results of the Average of Loss Change (ALC) values
Figure 2: Experimental results of the Average Inference Probability (AIP)
values Figure 3: Experimental results of the Performance Degradation Measure
(PDM) values
### IV-A Analysis of Attack Effectiveness
Figure 1 depicts the $ALC$ results for the 3 considered deep classifiers. A
positive $ALC$ value indicates increasing loss values and poor training,
whereas a low negative value indicates a more stable training process. From
Figure 1, it can be noted that increasing $\alpha$ was always associated with
increased $ALC$ values, hence, it can be concluded that increasing the
poisoning percentage increases the effectiveness of the attack.
On a different note, increasing the attack frequency (i.e., a lower $\beta$)
resulted in increased effectiveness in all experiments. In the experiments
where $\beta$’s value was set to 1, the $ALC$ kept increasing as the value of
$\alpha$ increased, and the value was positive in all experiments where
$\alpha\geq 10\%$. On the other hand, when $\beta=Epochs$, the $ALC$ results
were increasing but negative in all experiments, which means that the loss
values were still decreasing but at a lower rate compared to the base model
and the experiments of higher frequency.
The $AIP$ results are depicted in Figure 2, where it can be seen that
increasing the value of $\alpha$ resulted in decreasing the $AIP$ in all
experiments. However, this decrease varied in the experiments; for example,
the decrease was slight, even when $\alpha$ increased, in the experiments
where $\beta$=$Epochs$. On the other hand, increasing $\alpha$ with a higher
frequency ($\beta=1$) resulted in a more noticeable drop in the $AIP$ values.
For example, it can be seen in Figure 2(c) that the $AIP$ value dropped by
24.88% when $\alpha=20\%$ and $\beta=1$ in the random poisoning experiment,
$R$, whereas the $AIP$ value only dropped by 5% when we changed only the
value of $\beta$ to be equal to the number of $Epochs$. Furthermore, the
highest drop in the $AIP$ in the poisoned networks compared to their
unpoisoned counterparts at inference time was 15.37%, 14.68%, and 24.88% for
the Densenet, VGG, and Xception, respectively. Overall, we can conclude that
the attack was effective in all conducted experiments. Moreover, the attack
effectiveness has a positive correlation with the percentage $\alpha$ and
frequency $\beta$.
### IV-B Analysis of Attack Stealthiness
It is crucial to keep the proposed attack undetected. The attack can be
easily noticed if it takes too long to execute; thus, to verify its
stealthiness, the $TTD$ measure was monitored in all experiments. Among all conducted
experiments, the maximum $TTD$ value was 63 seconds. Hence, the attack did not
add a noticeable period of time to the training time of the base model.
Moreover, to monitor the stealthiness of the attack, the $PDM$ values were
recorded as can be seen in Figure 3. The maximum $PDM$ value was recorded for
the VGG network with $\alpha=20\%$ and $\beta=1$ in the random poisoning
experiment, $R$. Overall, the average $PDM$ values were 1.2%, 1.9%, and 1.5%
for the Densenet, VGG, and Xception, respectively. Hence, it can be concluded
that the attack demonstrated a stealthy behavior.
### IV-C Analysis of Poisoning Strategy
As explained in Section III-B, the attack was experimented under local
perturbations setting ($P$) and global perturbations setting ($R$). The
influence of the perturbation type was highly associated with the value of
$\beta$. It can be seen in Figures 1, 2 and 3 that in the experiments of low
frequency, where $\beta=Epochs$, both perturbation types achieved comparable
results. On the other hand, when the poisoning rounds were executed after
every epoch, where $\beta$=1, the attack showed the highest effectiveness in
the global perturbations setting, $R$.
Finally, the results showed that the proposed attack is effective and
stealthy. Its effectiveness increases when the attack is intensified by
increasing the value of $\alpha$, increasing the number of affected pixels,
similar to the case of global perturbations, and decreasing $\beta$ for higher
execution frequency. Moreover, the proposed attack inherits its riskiness from
attacking unobserved data structures that usually reside on powerful servers
with limited physical access. The attack is also incremental and accumulates
poisoned data gradually to intensify its effectiveness across the training
epochs. In addition, the attack requires no knowledge about the neural network
structure, as all experiments in this work were conducted using the same
injection code.
## V Conclusion and Future work
Neural networks are vulnerable to adversarial attacks. Moreover, the digital
transformation adopted worldwide implies continuous acquisition and analytics
of big streams of data, which has brought novel digital threats and unforeseen
exposures to cybersecurity. In this work, we propose a novel gradient-free,
gray box, incremental attack that targets the intermediate data structures of
the training phase of neural networks. The attack has 3 main parameters: the
attack percentage coefficient, the attack frequency coefficient, and the
poisoning strategy. In all conducted experiments, it was noted that the
attack’s effectiveness had a positive correlation with the aforementioned
parameters, while the attack remained stealthy.
The attack resulted in unstable training, as it made the loss values
increase, which in turn indicates poor learning and generalization. Moreover,
the attack was able to decrease the probability of the output class ($AIP$) in
the poisoned networks compared to their unpoisoned counterparts at inference
time by 15.37%, 14.68%, and 24.88% for the Densenet, VGG, and Xception,
respectively. Despite its effectiveness, the attack remained stealthy as it
only dropped the Fscore values by 1.2%, 1.9%, and 1.5% for the poisoned
Densenet, VGG, and Xception, respectively.
In future works, further sensitivity analyses will be conducted on existing
and new parameters, such as the type of communication protocol, and the area
and size of the patch area. Moreover, the attack will be compared to other
iris recognition attacks.
## Acknowledgements
This research was supported by the Technology Innovation Institute (TII), Abu
Dhabi, UAE, under the CyberAI project (grant number: TII/DSRC/2022/3036).
## References
* [1] J. Machin, E. Batista, A. Ballesté, and A. Solanas, “Privacy and security in cognitive cities: A systematic review,” _Applied Sciences_ , vol. 11, p. 4471, 05 2021.
* [2] A. Morales, J. Fierrez, J. Galbally, and M. Gomez-Barrero, “Introduction to iris presentation attack detection,” in _Handbook of Biometric Anti-Spoofing, 2nd Ed._ , 2019.
* [3] J. Su, D. V. Vargas, and K. Sakurai, “One pixel attack for fooling deep neural networks,” _IEEE Transactions on Evolutionary Computation_ , vol. 23, no. 5, pp. 828–841, 2019.
* [4] J. Breier, X. Hou, M. Ochoa, and J. Solano, “Foobar: Fault fooling backdoor attack on neural network training,” _IEEE Transactions on Dependable and Secure Computing_ , pp. 1–1, 2022.
* [5] X. Li and S. Ji, “Generative dynamic patch attack,” 2021.
* [6] P. Zhao, S. Wang, C. Gongye, Y. Wang, Y. Fei, and X. Lin, “Fault sneaking attack: a stealthy framework for misleading deep neural networks,” _2019 56th ACM/IEEE Design Automation Conference (DAC)_ , pp. 1–6, 2019.
* [7] S. Hong, P. Frigo, Y. Kaya, C. Giuffrida, and T. Dumitraş, “Terminal brain damage: Exposing the graceless degradation in deep neural networks under hardware fault attacks,” in _Proceedings of the 28th USENIX Conference on Security Symposium_ , ser. SEC’19. USA: USENIX Association, 2019, p. 497–514.
* [8] V. W. Anelli, A. Bellogín, Y. Deldjoo, T. Di Noia, and F. A. Merra, “Msap: Multi-step adversarial perturbations on recommender systems embeddings,” _The International FLAIRS Conference Proceedings_ , vol. 34, April 2021.
* [9] R. Sharma and A. Ross, “Image-level iris morph attack,” in _2021 IEEE International Conference on Image Processing (ICIP)_ , 2021, pp. 3013–3017.
* [10] A. Thukral, Jyoti, and M. Kumar, “Iris spoofing through print attack using svm classification with gabor and hog features,” in _2022 International Conference for Advancement in Technology (ICONAT)_ , 2022, pp. 1–6.
* [11] J. E. Tapia, S. Gonzalez, and C. Busch, “Iris liveness detection using a cascade of dedicated deep learning networks,” _IEEE Transactions on Information Forensics and Security_ , vol. 17, pp. 42–52, 2022.
* [12] C. Yang, A. Kortylewski, C. Xie, Y. Cao, and A. Yuille, “Patchattack: A black-box texture-based attack with reinforcement learning,” in _Computer Vision – ECCV 2020_ , A. Vedaldi, H. Bischof, T. Brox, and J.-M. Frahm, Eds. Cham: Springer International Publishing, 2020, pp. 681–698.
* [13] M. Alzantot, Y. Sharma, S. Chakraborty, H. Zhang, C.-J. Hsieh, and M. Srivastava, “Genattack: Practical black-box attacks with gradient-free optimization,” 2018.
* [14] National Laboratory of Pattern Recognition (NLPR) - Institute of Automation, Chinese Academy of Sciences (CASIA), “CASIA Iris Subject Ageing,” http://biometrics.idealtest.org/, accessed October 4, 2022.
* [15] Artifitialleap-MBZUAI, “Incremental Training Data Attack,” https://github.com/Artifitialleap-MBZUAI/IncrementalTrainingDataPoisoning, October 2022.
* [16] M. Jagielski, A. Oprea, B. Biggio, C. Liu, C. Nita-Rotaru, and B. Li, “Manipulating machine learning: Poisoning attacks and countermeasures for regression learning,” _2018 IEEE Symposium on Security and Privacy (SP)_ , pp. 19–35, 2018.
|
* Rogers et al. (2013) Rogers, T. M., Lin, D. N. C., McElwaine, J. N., & Lau, H. H. B. 2013, ApJ, 772, 21
* Sadeghi Ardestani et al. (2017) Sadeghi Ardestani, L., Guillot, T., & Morel, P. 2017, MNRAS, 472, 2590
* Santos et al. (2003) Santos, N. C., Israelian, G., Mayor, M., Rebolo, R., & Udry, S. 2003, A&A, 398, 363
* Savonije & Papaloizou (1983) Savonije, G. J. & Papaloizou, J. C. B. 1983, MNRAS, 203, 581
* Savonije & Papaloizou (1984) Savonije, G. J. & Papaloizou, J. C. B. 1984, MNRAS, 207, 685
* Savonije & Papaloizou (1997) Savonije, G. J. & Papaloizou, J. C. B. 1997, MNRAS, 291, 633
* Savonije, Papaloizou & Alberts (1995) Savonije, G. J., Papaloizou, J. C. B. & Alberts, F. 1995, MNRAS, 277, 471
* Schlaufman & Winn (2013) Schlaufman K. C. & Winn J. N., 2013, ApJ, 772, 143
* Schneider et al. (2011) Schneider, J., Dedieu, C., Le Sidaner, P., Savalle, R., & Zolotukhin, I. 2011, A&A, 532, A79
* Skumanich (1972) Skumanich, A. 1972, ApJ, 171, 565
* Siess et al. (2000) Siess, L., Dufour, E. & Forestini, M. 2000, A&A, 358, 593
* Spiegel & Veronis (1960) Spiegel, E. A., & Veronis, G. 1960, ApJ, 131, 442
* Stevenson (1979) Stevenson, D. J. 1979, GApFD, 12, 139
* Strugarek et al. (2017) Strugarek, A., Bolmont, E., Mathis, S., Brun, A.S., Réville, V., Gallet, F., Charbonnel, C. 2017, ApJL, 847, 2
* Strugarek et al. (2014) Strugarek A., Brun A. S., Matt S. P. & Réville V., 2014, ApJ, 795, 86
* Strugarek et al. (2015) Strugarek, A., Brun, A. S., Matt, S. P., & Réville, V. 2015, ApJ, 815, 111
* Terquem et al. (1998) Terquem C., Papaloizou J. C. B., Nelson R. P. & Lin D. N. C. 1998, ApJ, 502, 788
* Valdettaro et al. (2007) Valdettaro, L., Rieutord, M., Braconnier, T., & Fraysse, V. 2007, Journal of Computational and Applied Mathematics, 205, 382
* Vick & Lai (2020) Vick, M. & Lai, D. 2020, MNRAS, 496, 3
* Weinberg et al. (2017) Weinberg, N. N., Sun, M., Arras, P., & Essick, R. 2017, ApJ, 849, L11
* Witte & Savonije (2002) Witte, M.G. & Savonije, G.J. 2002, A&A, 386, 222
* Zahn (1966) Zahn, J. P. 1966, AnAp, 29, 489
* Zahn (1970) Zahn, J. P. 1970, A&A, 4, 452
* Zahn (1975) Zahn, J. P. 1975, A&A, 41, 329
* Zahn (1977) Zahn, J.-P. 1977, A&A, 57, 383
* Zahn, Talon & Matias (1997) Zahn, J.-P., Talon, S. & Matias, J. 1997, A&A, 322, 320
## Appendix A Computation of a particular solution in the radiative zone near
the interface
The goal of this section is to compute a particular solution of Eq. (20) in
the radiative zone, near the interface. In such a configuration, Eq. (20)
becomes
$\frac{d^{2}\psi}{d\eta^{2}}+v^{2}\eta\psi=v^{2}\eta Z.$ (156)
For a given function $f$, we will write in this section
$\frac{df}{d\eta}=f^{\prime}$. We choose
$\psi_{1}(\eta)=\text{Ai}[v^{\frac{2}{3}}(-\eta)]$ and
$\psi_{2}(\eta)=\text{Bi}[v^{\frac{2}{3}}(-\eta)]$ as the two basis solutions
of the corresponding homogeneous equation. Their Wronskian $\Lambda_{A}$ can
be written as
$\Lambda_{A}=\psi_{1}\psi_{2}^{\prime}-\psi_{2}\psi_{1}^{\prime}=-\frac{v^{\frac{2}{3}}}{\pi}.$
(157)
Then a particular solution of the inhomogeneous Airy equation, vanishing at
the interface, can be expressed as
$\begin{split}\psi_{\text{p}}(\eta)=&-\left(\int_{0}^{\eta}{\Lambda_{A}^{-1}v^{2}\eta
Z\psi_{2}(\eta)d\eta}\right)\psi_{1}(\eta)\\\
&+\left(\int_{0}^{\eta}{\Lambda_{A}^{-1}v^{2}\eta
Z\psi_{1}(\eta)d\eta}\right)\psi_{2}(\eta).\end{split}$ (158)
From Eq. (156) we obtain
$\frac{\psi_{\text{p}}(\eta)}{v^{-\frac{2}{3}}\pi}=\left(\int_{0}^{\eta}{\\!\\!\\!-Z\psi_{2}^{\prime\prime}(\eta)d\eta}\right)\psi_{1}(\eta)+\left(\int_{0}^{\eta}{Z\psi_{1}^{\prime\prime}(\eta)d\eta}\right)\psi_{2}(\eta),$
(159)
which leads to
$\begin{split}\frac{\psi_{\text{p}}(\eta)}{v^{-\frac{2}{3}}\pi}=&-Z(\eta)\Lambda_{A}+Z(0)\left[\psi_{2}^{\prime}(0)\psi_{1}(\eta)-\psi_{1}^{\prime}(0)\psi_{2}(\eta)\right]\\\
&+\left[Z^{\prime}\psi_{2}\right]_{0}^{\eta}\psi_{1}-\left[Z^{\prime}\psi_{1}\right]_{0}^{\eta}\psi_{2}+\mathcal{I},\end{split}$
(160)
with
$\mathcal{I}=-\left(\int_{0}^{\eta}{Z^{\prime\prime}\psi_{2}(\eta)d\eta}\right)\psi_{1}+\left(\int_{0}^{\eta}{Z^{\prime\prime}\psi_{1}(\eta)d\eta}\right)\psi_{2}$.
Such a term will be neglected from now on by assuming that the equilibrium
tide varies slowly compared to the dynamical tide in the radiative zone. Since
$\psi_{1}^{\prime}(0)=-v^{\frac{2}{3}}\text{Ai}^{\prime}(0)$ and
$\psi_{2}^{\prime}(0)=-v^{\frac{2}{3}}\text{Bi}^{\prime}(0)$, we obtain:
$\begin{split}\psi_{\text{p}}(\eta)=&Z(\eta)-Z(0)\pi\left[\text{Bi}^{\prime}(0)\psi_{1}(\eta)-\text{Ai}^{\prime}(0)\psi_{2}(\eta)\right]\\\
&-v^{-\frac{2}{3}}\pi
Z^{\prime}(0)\left[\text{Bi}(0)\psi_{1}(\eta)-\text{Ai}(0)\psi_{2}(\eta)\right].\end{split}$
(161)
Knowing that
$\pi=[\text{Ai}(0)\text{Bi}^{\prime}(0)-\text{Bi}(0)\text{Ai}^{\prime}(0)]^{-1}=\frac{1}{2}3^{\frac{3}{2}}\Gamma\left(\frac{4}{3}\right)\Gamma\left(\frac{2}{3}\right),$
the particular solution becomes
$\begin{split}\psi_{\text{p}}(\eta)=Z(\eta)+\left(\frac{\tau}{2}\right)^{\frac{1}{3}}\left[\alpha_{\text{rad,p}}J_{\frac{1}{3}}(\tau)+\beta_{\text{rad,p}}J_{-\frac{1}{3}}(\tau)\right],\end{split}$
(162)
where
$\displaystyle\alpha_{\text{rad,p}}=-\frac{dZ}{d\eta}(0)\left(\frac{v}{3}\right)^{-\frac{2}{3}}\Gamma\left(\frac{4}{3}\right)$
and $\beta_{\text{rad,p}}=-Z(0)\Gamma\left(\frac{2}{3}\right)$.
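The constants used above can be checked numerically: the identity $\pi=[\text{Ai}(0)\text{Bi}^{\prime}(0)-\text{Bi}(0)\text{Ai}^{\prime}(0)]^{-1}=\frac{1}{2}3^{\frac{3}{2}}\Gamma(\frac{4}{3})\Gamma(\frac{2}{3})$, which links the Wronskian of Eq. (157) to the Gamma-function coefficients of Eq. (162), follows directly from the Airy-function values at the origin. A minimal check with SciPy:

```python
import numpy as np
from scipy.special import airy, gamma

# Airy functions and derivatives at the origin: airy(0) -> (Ai, Ai', Bi, Bi')
ai0, aip0, bi0, bip0 = airy(0.0)

# Wronskian-type combination used to invert the boundary terms in Eq. (160)
w = ai0 * bip0 - bi0 * aip0
assert np.isclose(1.0 / w, np.pi)

# Gamma-function form quoted before Eq. (162)
assert np.isclose(np.pi, 0.5 * 3**1.5 * gamma(4.0 / 3.0) * gamma(2.0 / 3.0))
```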
## Appendix B Forcing term and radial displacement for a low-mass star
In this section we link the forcing term to the radial displacement associated
with the dynamical tide, thereby comparing the formulations of Zahn (1975) and
Goodman & Dickson (1998). One can express the derivative at the interface of
the radial displacement associated with the dynamical tide, with the notations
of Sect. 2:
$\left.\partial_{r}{\xi_{r}^{\text{dyn}}}\right|_{\text{int}}=\partial_{r}\left[{\rho_{0}^{-\frac{1}{2}}r^{-2}(\rho_{0}^{-\frac{1}{2}}X-Z)}\right]_{\text{int}}.$
(163)
By expressing the radial displacement $\xi_{r}$ in the ($S_{+},S_{-}$) basis,
knowing that $\mathcal{T}_{1}=0$ we obtain from Eqs. (44), (60) and (63):
$\begin{split}&\left.\partial_{r}{\xi_{r}^{\text{dyn}}}\right|_{\text{int}}=\left.\partial_{r}\left(\rho_{0}^{-\frac{1}{2}}r^{-2}\right)\right|_{\text{int}}\left[\frac{\beta_{\text{conv}}+\beta_{\text{conv,p}}}{\Gamma\left(\frac{2}{3}\right)}-Z(0)\right]\\\
&+\rho_{0}^{-\frac{1}{2}}(r_{\text{int}})r_{\text{int}}^{-2}\left[\frac{\beta_{\text{conv}}+\beta_{\text{conv,p}}}{\Gamma(\frac{2}{3})}\frac{\frac{d}{dr}(\rho_{0}^{-\frac{1}{2}}X_{h})_{\text{int}}}{(\rho_{0}^{-\frac{1}{2}}X_{h})_{\text{int}}}+\mathcal{T}-\partial_{r}Z(0)\right]\end{split}$
(164)
with
$\begin{split}\mathcal{T}=&\rho_{0}^{\frac{1}{2}}(r_{\text{int}})r_{\text{int}}^{2}\left\\{-\frac{\rho_{0}^{\prime}(r_{\text{int}})}{\rho_{0}(r_{\text{int}})}-\frac{\left(\rho_{0}^{-\frac{1}{2}}Z\right)_{\text{int}}^{\prime}}{\left(\rho_{0}^{-\frac{1}{2}}Z\right)_{\text{int}}}+\frac{X_{1}^{\prime}(r_{\text{int}})}{X_{1}(r_{\text{int}})}\right\\}\left(\frac{\varphi_{T}}{g}\right)_{\text{int}}\\\
&+\rho_{0}^{\frac{1}{2}}(r_{\text{int}})\mathcal{F}.\end{split}$ (165)
Since $C_{2}=0$, the homogeneous solution $X_{h}$ in the convective zone is
proportional to the basis solution $X_{1}$, which leads to
$\partial_{r}{\xi_{r}^{\text{dyn}}}=\left.\frac{\partial_{r}(\rho_{0}^{-1}r^{-2}X_{h})}{\rho_{0}^{-1}r^{-2}X_{h}}\right|_{\text{int}}\xi_{r}^{\text{dyn}}(r_{\text{int}})+r_{\text{int}}^{-2}\mathcal{F},$
(166)
where
$\displaystyle\xi_{r}^{\text{dyn}}(r_{\text{int}})=\rho_{0}^{-\frac{1}{2}}(r_{\text{int}})r_{\text{int}}^{-2}\left(\frac{\beta_{\text{conv}}+\beta_{\text{conv,p}}}{\Gamma\left(\frac{2}{3}\right)}-Z(0)\right)$.
By assuming slow variations of the density and the radius compared to the
characteristic length of variation of the gravity waves in the radial
direction
$\lambda=v^{-\frac{2}{3}}=\omega^{\frac{2}{3}}\left[l(l+1)\right]^{-\frac{1}{3}}\left|\frac{dN^{2}}{d\ln
r}\right|^{-\frac{1}{3}}_{r_{\text{int}}}r_{\text{int}}$ (Goodman & Dickson,
1998; Kushnir et al., 2017), we obtain from Eq. (44)
$\left.\partial_{r}{\xi_{r}^{\text{dyn}}}\right|_{\text{int}}\sim\frac{\Gamma(\frac{2}{3})}{3^{\frac{2}{3}}\Gamma(\frac{4}{3})}\frac{\alpha_{\text{conv}}}{\beta_{\text{conv}}}\frac{\xi_{r}^{\text{dyn}}(r_{\text{int}})}{\lambda}+r_{\text{int}}^{-2}\mathcal{F}.$
(167)
Since
$\xi_{r}^{\text{dyn}}(r_{\text{int}})\sim\lambda\left.\partial_{r}\xi_{r}^{\text{dyn}}\right|_{\text{int}}$
(Goodman & Dickson, 1998) and $\alpha_{\text{conv}}\ll\beta_{\text{conv}}$, we
have
$\left.\partial_{r}\xi_{r}^{\text{dyn}}\right|_{\text{int}}\sim
r_{\text{int}}^{-2}\mathcal{F},$ (168)
which ensures the equivalence between the formulations from Zahn (1975) and
Goodman & Dickson (1998).
## Appendix C Stress-free condition at the stellar surface
The goal of this section is to investigate the consequences, for the tidal
dissipation of low-mass stars, of a stress-free boundary condition coupled with
a nonzero density at the stellar surface. In this case, the Lagrangian
perturbation of pressure vanishes at $r=R_{\star}$:
$\delta
P=\rho_{0}(R_{\star})\left[y-g_{0}(R_{\star})\xi_{r}(R_{\star})\right]=0.$
(169)
We will consider the case where $\rho_{0}(R_{\star})>0$ and
$\frac{\rho_{0}^{\prime}(R_{\star})}{\rho_{0}(R_{\star})}$ is the only large
parameter involved, to account for the density behavior near the photosphere.
From Eqs. (12) in the anelastic approximation, we obtain
$\xi_{r}(R_{\star})=-\frac{\varphi_{T}(R_{\star})}{g_{0}(R_{\star})}+\frac{\omega^{2}}{\omega_{\text{dyn}}^{2}}\frac{1}{l(l+1)R_{\star}}\partial_{r}(r^{2}\xi_{r}).$
(170)
By assuming low-frequency waves, i.e. $\omega\ll\omega_{\text{dyn}}$, the
stress-free condition becomes
$X(R_{\star})=-\rho_{0}(R_{\star})R_{\star}^{2}\frac{\varphi_{T}(R_{\star})}{g_{0}(R_{\star})},$
(171)
which accounts for the equilibrium tide. In the convective zone, we choose the
basis solution $X_{1}$ as the solution of the homogeneous equation associated
to Eq. (35) verifying $X_{1}(R_{\star})=X(R_{\star})$ and
$X_{1}^{\prime}(R_{\star})=X^{\prime}(R_{\star})$. This way, we fix
$C_{2}=\mathcal{T}_{0}=\mathcal{T}_{2}=0$. As in section 5.1, near the
surface, $X_{1}$ is a solution of the following equation:
$X^{\prime\prime}-\frac{\rho_{0}^{\prime}}{\rho_{0}}X^{\prime}=0,$ (172)
which leads to $X_{1}^{\prime}\propto\rho_{0}$. We now impose that
$X^{\prime}(R_{\star})=X_{1}^{\prime}(R_{\star})=R_{\star}^{2}\rho_{0}(R_{\star})$.
If the interface between the convective and radiative zones is close to the
stellar surface, one can write
$\begin{split}X_{1}^{\prime}(r)&=R_{\star}^{2}\left[\rho_{0}(R_{\star})+\rho_{0}^{\prime}(R_{\star})(r-R_{\star})\right]\\\
X_{1}(r)&=X_{1}(R_{\star})\\!+\\!R_{\star}^{2}\\!\left[\rho_{0}(R_{\star})(r-R_{\star})\\!+\\!\frac{1}{2}\rho_{0}^{\prime}(R_{\star})(r-R_{\star})^{2}\\!\right].\end{split}$
(173)
Such expressions are adopted as a boundary condition in our numerical
treatment of tidal dissipation to deal with the surface singularity in the
case of solar-type stars. Furthermore, if we assume that
$\displaystyle\frac{\rho_{0}(R_{\star})}{\rho_{0}^{\prime}(R_{\star})}\ll
R_{\star}^{2}(\alpha-1)^{2}$ we obtain
$\displaystyle\frac{X_{1}^{\prime}(R_{\star})}{X_{1}(r_{\text{int}})}=\frac{1}{-\frac{\varphi_{T}(R_{\star})}{g(R_{\star})}+R_{\star}(\alpha-1)+\frac{1}{2}\frac{\rho_{0}^{\prime}(R_{\star})}{\rho_{0}(R_{\star})}R_{\star}^{2}(\alpha-1)^{2}}\ll
1,$ (174)
$\displaystyle\frac{X_{1}(R_{\star})}{X_{1}(r_{\text{int}})}=\frac{-\frac{\varphi_{T}(R_{\star})}{g(R_{\star})}}{-\frac{\varphi_{T}(R_{\star})}{g(R_{\star})}+R_{\star}(\alpha-1)+\frac{1}{2}\frac{\rho_{0}^{\prime}(R_{\star})}{\rho_{0}(R_{\star})}R_{\star}^{2}(\alpha-1)^{2}}\ll
1.$ (175)
The $\mathcal{T}_{1}$ term then becomes
$\begin{split}\mathcal{T}_{1}=&\frac{\varphi_{T}(R_{\star})}{g(R_{\star})}\alpha^{-2}\frac{\rho_{0}^{\prime}(R_{\star})}{\rho_{0}(R_{\star})}\frac{X_{1}(R_{\star})}{X_{1}(r_{\text{int}})}\\\
&\sim-2\left(\frac{\varphi_{T}(R_{\star})}{g(R_{\star})}\right)^{2}\alpha^{-2}(1-\alpha)^{-2}R_{\star}^{-2}.\end{split}$
(176)
Furthermore, in the case of a thin convective layer, from Eq. (127) we have
$r_{\text{int}}^{-2}\mathcal{F}=3\frac{1-\gamma}{1-\alpha}\frac{\alpha^{5}}{\beta}\left(\frac{2\alpha}{3}-1\right)\alpha^{-2}\frac{\varphi_{T}(R_{\star})}{g(R_{\star})}R_{\star}^{-1}.$
(177)
Then we obtain
$\frac{\mathcal{T}_{1}}{r_{\text{int}}^{-2}\mathcal{F}}=-\frac{2}{3}\frac{\beta}{\alpha^{5}(1-\gamma)(1-\alpha)\left(\displaystyle\frac{2\alpha}{3}-1\right)}\frac{\varphi_{T}(R_{\star})}{g(R_{\star})}R_{\star}^{-1}.$
(178)
Since $\gamma=\frac{\alpha^{3}(1-\beta)}{\beta(1-\alpha^{3})}$ we have
$\frac{\mathcal{T}_{1}}{r_{\text{int}}^{-2}\mathcal{F}}=-\frac{2}{3}\frac{\beta^{2}(1+\alpha+\alpha^{2})}{\alpha^{5}\left(\displaystyle\frac{2\alpha}{3}-1\right)(\beta-\alpha^{3})}\frac{\varphi_{T}(R_{\star})}{g(R_{\star})}R_{\star}^{-1}.$
(179)
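The passage from Eq. (178) to Eq. (179) is pure rational algebra in $\alpha$ and $\beta$; it can be verified symbolically (a quick SymPy check of the identity, keeping only the $\alpha$- and $\beta$-dependent factor common to both equations):

```python
import sympy as sp

a, b = sp.symbols('alpha beta', positive=True)

# gamma as defined in the text
g = a**3 * (1 - b) / (b * (1 - a**3))

# alpha/beta-dependent parts of Eqs. (178) and (179)
r178 = b / ((1 - g) * (1 - a))
r179 = b**2 * (1 + a + a**2) / (b - a**3)

# substituting gamma into Eq. (178) reproduces Eq. (179)
assert sp.cancel(r178 - r179) == 0
```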
The values of such a ratio for all the possible values of
$\alpha=R_{r}/R_{\star}$ and $\beta=M_{r}/M_{\star}$ are represented in Fig.
13. As we assumed that the convective zone is sufficiently thin to linearize
the density profile, only values of $\alpha$ close to 1 are relevant in this
analysis. Therefore, we can assume that the $\mathcal{T}_{1}$ term only
marginally affects our prescription for tidal dissipation in the radiative
zone, provided the surface density is sufficiently weak. A more detailed study
of surface boundary conditions is left for future work.
Figure 13: Value of $\mathcal{T}_{1}/r_{\text{int}}^{-2}\mathcal{F}$ for all
the possible values of $\alpha=R_{r}/R_{\star}$ and $\beta=M_{r}/M_{\star}$.
In black: values of $\alpha$ and $\beta$ for which the ratio is equal to 1.
## Appendix D A wave breaking criterion for solar-type stars
To provide a general criterion for wave breaking in the case of solar-type
stars, we rely on the non-linearity factor $\epsilon_{nl}$, which is the ratio
of the amplitude of the radial displacement to the radial wavelength (Press,
1981; Barker & Ogilvie, 2010; Barker, 2020)
$\epsilon_{nl}=|k_{r}\xi_{r}|.$ (180)
Non-linearities become important when $\epsilon_{nl}\geq 1$; in particular,
wave breaking is then likely to occur. One can assess this non-linearity factor
through the energy luminosity of the tidal gravity wave. Indeed, from Eq.
(75), we have
$|L_{E}|=\frac{\omega^{3}}{2l(l+1)}|C_{W}|^{2}.$ (181)
Furthermore, by defining $\xi_{r}^{\text{dyn}}$ the radial displacement linked
to the dynamical tide, we obtain in the WKBJ approximation:
$\xi_{r}^{\text{dyn}}(r)=\rho_{0}^{-\frac{1}{2}}r^{-2}C_{W}\frac{1}{\sqrt{k_{r}}}e^{\epsilon
i(\tau_{W}-\tau_{0})},$ (182)
where $k_{r}\approx\sqrt{\frac{N^{2}}{\omega^{2}}\frac{l(l+1)}{r^{2}}}$ is the
radial wavenumber. This leads to
$|C_{W}|^{2}=\rho_{0}r^{3}\frac{N}{\omega}\sqrt{l(l+1)}\left|\xi_{r}^{\text{dyn}}\right|^{2}.$
(183)
The energy luminosity then becomes:
$|L_{E}|=\frac{N\omega^{2}\rho_{0}r^{3}}{2\sqrt{l(l+1)}}\left|\xi_{r}^{\text{dyn}}\right|^{2}.$
(184)
Hence, one can assess $\epsilon_{nl}$ as
$\epsilon_{nl}=\sqrt{\frac{2[l(l+1)]^{\frac{3}{2}}N|L_{E}|}{\rho_{0}r^{5}\omega^{4}}},$
(185)
which is similar to the expression obtained in Eq. (53) in Barker (2020).
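Eq. (185) is straightforward to evaluate; the sketch below (all numerical values are illustrative placeholders, not taken from the stellar models of this work) emphasizes in particular that $\epsilon_{nl}$ scales as the square root of the wave energy luminosity:

```python
import math

def epsilon_nl(L_E, N, rho0, r, omega, l=2):
    """Non-linearity factor of Eq. (185):
    eps = sqrt(2 [l(l+1)]^{3/2} N |L_E| / (rho0 r^5 omega^4))."""
    return math.sqrt(
        2.0 * (l * (l + 1))**1.5 * N * abs(L_E) / (rho0 * r**5 * omega**4)
    )

# illustrative (placeholder) values near a solar-type star's inner turning point
args = dict(N=1e-4, rho0=1.5e5, r=2e6, omega=1.5e-4)
e1 = epsilon_nl(1e22, **args)

# quadrupling |L_E| doubles the non-linearity factor
assert abs(epsilon_nl(4e22, **args) / e1 - 2.0) < 1e-9
```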
Furthermore, in the case of a solar-type star, we find that the energy
luminosity $L_{E}$ is equal to
$L_{E}=-\frac{3^{\frac{2}{3}}\Gamma^{2}\left(\displaystyle\frac{1}{3}\right)}{8\pi}\omega^{\frac{11}{3}}\left[l(l+1)\right]^{-\frac{4}{3}}\rho_{0}(r_{\text{int}})r_{\text{int}}\left|\frac{dN^{2}}{d\ln
r}\right|_{r_{\text{int}}}^{-\frac{1}{3}}\mathcal{F}^{2}.$ (186)
with
$\mathcal{F}=\int_{r_{\text{int}}}^{R_{\star}}{\left[\left(\frac{r^{2}\varphi_{T}}{g_{0}}\right)^{\prime\prime}-\frac{l(l+1)}{r^{2}}\left(\frac{r^{2}\varphi_{T}}{g_{0}}\right)\right]\frac{X_{1}}{X_{1}(r_{\text{int}})}}dr.$
(187)
As $L_{E}\propto\mathcal{F}^{2}$, we introduce $|L_{E}|=L_{E,0}\
m_{p}^{2}n^{4}\omega^{\frac{11}{3}}$, where $L_{E,0}$ is independent of the
tidal frequency and planetary properties, $m_{p}$ is the planetary mass and
$n$ the mean motion of its orbit. Hence, wave breaking may occur if
$\epsilon_{nl}\geq 1$, which leads to
$\frac{2[l(l+1)]^{\frac{3}{2}}NL_{E,0}m_{p}^{2}n^{4}}{\rho_{0}r^{5}}\omega^{-\frac{1}{3}}\geq
1$ (188)
In the absence of stellar rotation, we have $\omega=2n$. Furthermore, near the
stellar center, the radial profile of the Brunt-Väisälä frequency is
approximately linear. We then assume that $N\approx Cr$, where $C\approx 8\times 10^{-11}\
\text{m}^{-1}.\text{s}^{-1}$ for the current Sun (Barker, 2020). Such a
quantity is here estimated at a given stellar mass and stellar age by relying
on STAREVOL grids. Following Goodman & Dickson (1998), one can estimate the
location $r_{\text{inner}}$ of the inner turning point, defined as $N=\omega$,
as follows:
$r_{\text{inner}}=\frac{2n}{C}.$ (189)
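As a quick worked example of Eq. (189), using the quoted present-day solar value of $C$ and a hypothetical one-day orbit (both values illustrative):

```python
import math

C = 8e-11            # m^-1 s^-1, approximate value for the current Sun (Barker 2020)
R_sun = 6.957e8      # m, solar radius
P_orb = 86400.0      # s, hypothetical one-day orbit
n = 2.0 * math.pi / P_orb      # mean motion of the orbit
r_inner = 2.0 * n / C          # inner turning point, Eq. (189)

# the turning point sits deep in the core, at a few 1e-3 stellar radii
assert 1e-3 < r_inner / R_sun < 1e-2
```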
Then, estimating the non-linearity factor at the inner turning point gives
$\frac{2^{-\frac{10}{3}}[l(l+1)]^{\frac{3}{2}}C^{5}L_{E,0}}{\rho_{0}(r_{\text{inner}})}m_{p}^{2}n^{-\frac{1}{3}}\geq
1$ (190)
which, by introducing the orbital period $P_{\text{orb}}$, gives the following
criterion on the planetary mass
$m_{p}\geq(2\pi)^{\frac{1}{6}}\frac{2^{\frac{5}{3}}\rho_{0}(r_{\text{inner}})}{[l(l+1)]^{\frac{3}{4}}C^{\frac{5}{2}}L_{E,0}^{\frac{1}{2}}}P_{\text{orb}}^{-\frac{1}{6}}\equiv
M_{cr}.$ (191)
We then find a similar result as the Barker & Ogilvie (2010) criterion (see
also Barker 2011, 2020), which is based on an overturning of the
stratification. Such a condition depends only weakly on the orbital period (a
decrease of one order of magnitude in $P_{\text{orb}}$ leads to an increase of
$M_{\text{cr}}$ of around 31.9 %). If a given planet has a mass higher than
$M_{\text{cr}}$, then wave breaking may occur in the star, and the tidal
quality factor is expected to behave according to the results of our work.
Otherwise, other dissipation processes, like radiative damping in the case of
progressive waves or critical layers, may lead to a similar tidal dissipation.
We present in Fig. 14 the evolution of $M_{\text{cr}}$ as a function of the
age of the system, for stellar masses ($M_{\star}$) between 0.4 and 1.4
$M_{\odot}$. As already pointed out in Barker (2020), for stellar masses
larger than 0.9 $M_{\odot}$, the critical planetary mass may fall below 1
Jupiter mass for all ages greater than 10 Gyr. This allows super-Earths and
hot Neptunes to trigger wave breaking in their host stars during the sub-giant
phase and the RGB.
Figure 14: Evolution of the critical planetary mass $M_{\text{cr}}$ as a
function of the age of the system, for stellar masses ($M_{\star}$) between
0.4 and 1.4 $M_{\odot}$. Wave breaking may occur for planetary masses higher
than $M_{\text{cr}}$.
## Appendix E Angular momentum transport and tidal torque
The goal of this section is to clarify the relationship between the angular
momentum transport and the net torque applied to the radiative zone. To this
end, we consider a radiative zone between $r=r_{0}$ and $r=r_{1}>r_{0}$. The
equation for the transport of angular momentum, horizontally averaged and
focusing only on waves, is given by Mathis (2009):
$\rho_{0}\frac{d}{dt}\left(r^{2}\int_{\theta=0}^{\theta=\pi}\sin^{3}\theta\
\Omega d\theta\right)=-\frac{1}{2\pi
r^{2}}\partial_{r}\left(r^{2}\int_{\theta=0}^{\theta=\pi}F_{J}\sin\theta
d\theta\right),$ (192)
where $\Omega$ is the angular velocity of the radiative zone and $F_{J}$ is
the radial component of the flux of angular momentum transported by the gravity waves’
Reynolds stresses, whose expression is given in Eq. (78). By integrating along
the radial and latitudinal directions, this leads to
$\frac{dJ_{RZ}}{dt}=-\int_{r_{0}}^{r_{1}}(\partial_{r}L_{J})dr,$ (193)
where $\displaystyle
J_{RZ}=\rho_{0}\int_{r=r_{0}}^{r=r_{1}}\int_{\theta=0}^{\theta=\pi}\int_{\varphi=0}^{\varphi=2\pi}r^{4}\sin^{3}\theta\
\Omega\ d\varphi d\theta dr$ is the total angular momentum of the radiative
zone and $\displaystyle
L_{J}=2\pi\int_{\theta=0}^{\theta=\pi}r^{2}F_{J}\sin\theta d\theta$ is the
luminosity of angular momentum. Hence we obtain
$\frac{dJ_{RZ}}{dt}=L_{J}(r_{0})-L_{J}(r_{1}).$ (194)
In the case of an inward energy transport ($\epsilon=1$, corresponding to the
configuration of solar-type stars), tidal gravity waves are excited at
$r=r_{1}$ and are totally dissipated before reaching the radius $r=r_{0}$.
Hence we obtain
$\frac{dJ_{RZ}}{dt}=-L_{J}(r_{1}).$ (195)
In the case of an outward energy transport ($\epsilon=-1$, corresponding to
the massive and intermediate-mass stars’ configuration), tidal gravity waves
are excited at $r=r_{0}$ and are totally dissipated before reaching the radius
$r=r_{1}$. In this configuration, we have
$\frac{dJ_{RZ}}{dt}=L_{J}(r_{0}).$ (196)
Hence, one can compute the torque $T$ applied to the whole radiative zone with
a single expression:
$T=-\epsilon L_{J,\text{exc}},$ (197)
where $L_{J,\text{exc}}$ is the luminosity of angular momentum estimated at
the region of excitation of tidal gravity waves.
# Surface Quasigeostrophic Turbulence in Variable Stratification
###### Abstract
Numerical and observational evidence indicates that, in regions where mixed-
layer instability is active, the surface geostrophic velocity is largely
induced by surface buoyancy anomalies. Yet, in these regions, the observed
surface kinetic energy spectrum is steeper than predicted by uniformly
stratified surface quasigeostrophic theory. By generalizing surface
quasigeostrophic theory to account for variable stratification, we show that
surface buoyancy anomalies can generate a variety of dynamical regimes
depending on the stratification’s vertical structure. Buoyancy anomalies
generate longer range velocity fields over decreasing stratification and
shorter range velocity fields over increasing stratification. As a result, the
surface kinetic energy spectrum is steeper over decreasing stratification than
over increasing stratification. An exception occurs if the near surface
stratification is much larger than the deep ocean stratification. In this
case, we find an extremely local turbulent regime with surface buoyancy
homogenization and a steep surface kinetic energy spectrum, similar to
equivalent barotropic turbulence. By applying the variable stratification
theory to the wintertime North Atlantic, and assuming that mixed-layer
instability acts as a narrowband small-scale surface buoyancy forcing, we
obtain a predicted surface kinetic energy spectrum between $k^{-4/3}$ and
$k^{-7/3}$, which is consistent with the observed wintertime $k^{-2}$
spectrum. We conclude by suggesting a method of measuring the buoyancy
frequency’s vertical structure using satellite observations.
## 1 Introduction
### 1.1 Geostrophic flow induced by surface buoyancy
Geostrophic flow in the upper ocean is induced by either interior potential
vorticity anomalies, $q$, or surface buoyancy anomalies, $b|_{z=0}$. At first,
it was assumed that the surface geostrophic flow observed by satellite
altimeters is due to interior potential vorticity (Stammer, 1997; Wunsch,
1997). It was later realized, however, that under certain conditions, upper
ocean geostrophic flow can be inferred using the surface buoyancy anomaly
alone (Lapeyre and Klein, 2006; LaCasce and Mahadevan, 2006). Subsequently,
Lapeyre (2009) used a numerical ocean model to show that the surface buoyancy
induced geostrophic flow dominates the $q$-induced geostrophic flow over a
large fraction of the North Atlantic in January. Lapeyre then concluded that
the geostrophic velocity inferred from satellite altimeters in the North
Atlantic must usually be due to surface buoyancy anomalies instead of interior
potential vorticity.
Similar conclusions have been reached in later numerical studies using the
effective surface quasigeostrophic (eSQG, Lapeyre and Klein, 2006) method. The
eSQG method aims to reconstruct three-dimensional velocity fields in the upper
ocean: it assumes that surface buoyancy anomalies generate an exponentially
decaying streamfunction with a vertical attenuation determined by the buoyancy
frequency, as in the uniformly stratified surface quasigeostrophic model (Held
et al., 1995). Because the upper ocean does not typically have uniform
stratification, an “effective” buoyancy frequency is used, which is also
intended to account for interior potential vorticity anomalies (Lapeyre and
Klein, 2006). In practice, however, this effective buoyancy frequency is
chosen to be the vertical average of the buoyancy frequency in the upper
ocean. Qiu et al. (2016) derived the surface streamfunction from sea surface
height in a $\nicefrac{{1}}{{30}}^{\circ}$ model of the Kuroshio Extension
region in the North Pacific and used the eSQG method to reconstruct the three-
dimensional vorticity field. They found correlations of 0.7-0.9 in the upper
1000 m between the reconstructed and model vorticity throughout the year. This
result was also found to hold in a $\nicefrac{{1}}{{48}}^{\circ}$ model with
tidal forcing (Qiu et al., 2020).
A clearer test of whether the surface flow is induced by surface buoyancy is
to reconstruct the geostrophic flow directly using the sea surface buoyancy or
temperature (Isern‐Fontanet et al., 2006). This approach was taken by
Isern‐Fontanet et al. (2008) in the context of a
$\nicefrac{{1}}{{10}}^{\circ}$ numerical simulation of the North Atlantic.
When the geostrophic velocity is reconstructed using sea surface temperature,
correlations between the reconstructed velocity and the model velocity
exceeded 0.7 over most of the North Atlantic in January. Subsequently,
Miracca-Lage et al. (2022) used a reanalysis product with a grid spacing of 10
km to reconstruct the geostrophic velocity using both sea surface buoyancy and
temperature over certain regions in the South Atlantic. The correlations
between the reconstructed streamfunctions and the model streamfunction had a
seasonal dependence, with correlations of 0.7-0.8 in winter and 0.2-0.4 in
summer.
Observations also support the conclusion that a significant portion of the
surface geostrophic flow may be due to surface buoyancy anomalies over a
substantial fraction of the World Ocean. González-Haro and Isern-Fontanet
(2014) reconstructed the surface streamfunction using
$\nicefrac{{1}}{{3}}^{\circ}$ satellite altimeter data (for sea surface
height) and $\nicefrac{{1}}{{4}}^{\circ}$ microwave radiometer data (for sea
surface temperature). If the surface geostrophic velocity is due to sea
surface temperature alone, then the streamfunction constructed from sea
surface temperature should be perfectly correlated with the streamfunction
constructed from sea surface height. The spatial correlations between the two
streamfunctions were found to be seasonal. For the wintertime northern
hemisphere, high correlations (exceeding 0.7-0.8) are observed near the Gulf
Stream and Kuroshio whereas lower correlations (0.5-0.6) are seen in the
eastern recirculating branch of the North Atlantic and North Pacific gyres [a
similar pattern was found by Isern‐Fontanet et al. (2008) and Lapeyre (2009)].
In summer, correlations over the North Atlantic and North Pacific plummet to
0.2-0.5, again with lower correlations in the recirculating branch of the
gyres. In contrast to the strong seasonality observed in the northern
hemisphere, correlations over the Southern Ocean typically remain larger than
0.8 throughout the year.
Another finding is that more of the surface geostrophic flow is due to surface
buoyancy anomalies in regions with high eddy kinetic energy, strong thermal
gradients, and deep mixed layers (Isern‐Fontanet et al., 2008; González-Haro
and Isern-Fontanet, 2014; Miracca-Lage et al., 2022). These are the same
conditions under which we expect mixed-layer baroclinic instability to be
active (Boccaletti et al., 2007; Mensa et al., 2013; Sasaki et al., 2014;
Callies et al., 2015). Indeed, one model of mixed-layer instability consists
of surface buoyancy anomalies interacting with interior potential vorticity
anomalies at the base of the mixed-layer (Callies et al., 2016). The dominance
of the surface buoyancy induced velocity in regions of mixed-layer instability
suggests that, to a first approximation, we can think of mixed-layer
instability as energizing the surface buoyancy induced part of the flow
through vertical buoyancy fluxes and the concomitant release of kinetic energy
at smaller scales.
### 1.2 Surface quasigeostrophy in uniform stratification
The dominance of the surface buoyancy induced velocity suggests that a useful
model for upper ocean geostrophic dynamics is the surface quasigeostrophic
model (Held et al., 1995; Lapeyre, 2017), which describes the dynamics induced
by surface buoyancy anomalies over uniform stratification. The primary
difference between surface quasigeostrophic dynamics and two-dimensional
barotropic dynamics (Kraichnan, 1967) is that surface quasigeostrophic eddies
have a shorter interaction range than their two-dimensional barotropic
counterparts. One consequence of this shorter interaction range is a flatter
kinetic energy spectrum (Pierrehumbert et al., 1994). Letting $k$ be the
horizontal wavenumber, two-dimensional barotropic turbulence theory
predicts a kinetic energy spectrum of $k^{-5/3}$ upscale of small-scale
forcing and a $k^{-3}$ spectrum downscale of large-scale forcing (Kraichnan,
1967). If both types of forcing are present, then we expect a spectrum between
$k^{-5/3}$ and $k^{-3}$, with the realized spectrum depending on the relative
magnitude of small-scale to large-scale forcing (Lilly, 1989; Maltrud and
Vallis, 1991). In contrast, the corresponding spectra for surface
quasigeostrophic turbulence are $k^{-1}$ (upscale of small-scale forcing) and
$k^{-5/3}$ (downscale of large-scale forcing) (Blumen, 1978), both of which
are flatter than the corresponding two-dimensional barotropic spectra. (The
uniformly stratified geostrophic turbulence theory of Charney (1971) provides
spectral predictions similar to the two-dimensional barotropic theory; see
Callies and Ferrari, 2013.)
The above considerations lead to the first discrepancy between the surface
quasigeostrophic model and ocean observations. As we have seen, we expect
wintertime surface geostrophic velocities near major extratropical currents to
be primarily due to surface buoyancy anomalies. Therefore, the predictions of
surface quasigeostrophic theory should hold. If we assume that mesoscale
baroclinic instability acts as a large-scale forcing and that mixed-layer
baroclinic instability acts as a small-scale forcing to the upper ocean (we
assume a narrowband forcing in both cases, although this may not be the case,
see Khatri et al., 2021), then we expect a surface kinetic energy spectrum
between $k^{-1}$ and $k^{-5/3}$. However, both observations and numerical
simulations of the Gulf Stream and Kuroshio find kinetic energy spectra close
to $k^{-2}$ in winter (Sasaki et al., 2014; Callies et al., 2015; Vergara et
al., 2019), which is steeper than predicted.
The second discrepancy relates to the surface transfer function implied by
uniformly stratified surface quasigeostrophic theory. The surface transfer
function, $\mathcal{F}(\bm{k})$, is defined as (Isern-Fontanet et al., 2014)
$\mathcal{F}(\bm{k})=\frac{\hat{\psi}_{\bm{k}}}{\hat{b}_{\bm{k}}},$ (1)
where $\hat{\psi}_{\bm{k}}$ and $\hat{b}_{\bm{k}}$ are the Fourier amplitudes
of the geostrophic streamfunction, $\psi$, and the buoyancy, $b$, at the
ocean’s surface, and $\bm{k}$ is the horizontal wavevector. Uniformly
stratified surface quasigeostrophic theory predicts an isotropic transfer
function $\mathcal{F}(k)\sim k^{-1}$ (Held et al., 1995). Using a
$\nicefrac{{1}}{{12}}^{\circ}$ ocean model and focusing on the western coast
of Australia, González-Haro et al. (2020) confirmed that the transfer function
between sea surface temperature and sea surface height is indeed isotropic but
found that the transfer function is generally steeper than $k^{-1}$. In
another study using a $\nicefrac{{1}}{{16}}^{\circ}$ model of the
Mediterranean Sea, Isern-Fontanet et al. (2014) found that the transfer
function below 100 km has seasonal dependence closely related to mixed-layer
depth, fluctuating between $k^{-1}$ and $k^{-2}$.
In the remainder of this article, we account for these discrepancies by
generalizing the uniformly stratified surface quasigeostrophic model (Held et
al., 1995) to account for variable stratification (section 2). Generally, we
find that the surface kinetic energy spectrum in surface quasigeostrophic
turbulence depends on the stratification’s vertical structure (section 3); we
recover the Blumen (1978) spectral predictions only in the limit of uniform
stratification. Stratification controls the kinetic energy spectrum by
modifying the interaction range of surface quasigeostrophic eddies, and we
illustrate this dependence by examining the turbulence under various idealized
stratification profiles (section 4). We then apply the theory to the North
Atlantic in both winter and summer, and find that the surface transfer
function is seasonal, with a $\mathcal{F}(k)\sim k^{-3/2}$ dependence in
winter and a $\mathcal{F}(k)\sim k^{-1/2}$ dependence in summer. Moreover, in
the wintertime North Atlantic, the theory predicts a surface kinetic energy
spectrum between $k^{-4/3}$ and $k^{-7/3}$, which is consistent with both
observations and numerical simulations (section 5). Finally, in section 6, we
discuss the validity of the theory at other times and locations.
## 2 The inversion function
### 2.1 Physical space equations
Consider an ocean of depth $H$ with zero interior potential vorticity $(q=0)$
so that the geostrophic streamfunction satisfies
$\nabla^{2}\psi+\frac{\partial}{\partial
z}\left(\frac{1}{\sigma^{2}}\frac{\partial\psi}{\partial
z}\right)=0\quad\text{for }z\in(-H,0).$ (2)
In this equation, $\nabla^{2}$ is the horizontal Laplacian, $\psi$ is the
geostrophic streamfunction, and
$\sigma(z)=N(z)/f,$ (3)
where $N(z)$ is the depth-dependent buoyancy frequency and $f$ is the constant
local value of the Coriolis frequency. We refer to $\sigma(z)$ as the
_stratification_ for the remainder of this article. The horizontal geostrophic
velocity is obtained from $\bm{u}=\hat{\bm{z}}\times\bm{\nabla}\psi$,
where $\hat{\bm{z}}$ is the vertical unit vector.
The upper surface potential vorticity is given by (Bretherton, 1966)
$\theta=-\frac{1}{\sigma_{0}^{2}}\frac{\partial\psi}{\partial
z}\bigg{|}_{z=0},$ (4)
where $\sigma_{0}=\sigma(0)$. The surface potential vorticity is related to
the surface buoyancy anomaly through
$b|_{z=0}=-f\,\sigma_{0}^{2}\,\theta.$ (5)
The time-evolution equation at the upper boundary is given by
$\frac{\partial\theta}{\partial
t}+\mathrm{J}\left(\psi,\theta\right)=F-D\quad\text{at }z=0,$ (6)
where
$\mathrm{J}\left(\psi,\theta\right)=\partial_{x}\psi\,\partial_{y}\theta-\partial_{y}\psi\,\partial_{x}\theta$
represents the advection of $\theta$ by the horizontal geostrophic velocity
$\bm{u}$, $F$ is the buoyancy forcing at the upper boundary, and $D$ is the
dissipation.
We assume a bottom boundary condition of
$\psi\rightarrow 0\text{ as }z\rightarrow-\infty,$ (7)
which is equivalent to assuming the bottom boundary, $z=-H$, is irrelevant to
the dynamics. In section 5, we find that this assumption is valid in the mid-
latitude North Atlantic open ocean at horizontal scales smaller than $\approx
500$ km. We consider alternative boundary conditions in appendix A.
### 2.2 Fourier space equations
Assuming a doubly periodic domain in the horizontal prompts us to consider the
Fourier expansion of $\psi$,
$\psi(\bm{r},z,t)=\sum_{\bm{k}}\hat{\psi}_{\bm{k}}(t)\,\Psi_{k}(z)\,\mathrm{e}^{\mathrm{i}\bm{k}\cdot\bm{r}},$
(8)
where $\bm{r}=(x,y)$ is the horizontal position vector, $\bm{k}=(k_{x},k_{y})$
is the horizontal wavevector, and $k=\left\lvert\bm{k}\right\rvert$ is the
horizontal wavenumber. The wavenumber-dependent, non-dimensional vertical
structure, $\Psi_{k}(z)$, is determined by the boundary-value problem (to
derive the vertical structure equation (9), substitute the Fourier
representation (8) into the vanishing potential vorticity condition (2),
multiply by $\mathrm{e}^{-\mathrm{i}\bm{l}\cdot\bm{r}}$, take an area average, and use
the identity
$\frac{1}{A}\,\int_{A}\mathrm{e}^{\mathrm{i}\left(\bm{k}-\bm{l}\right)\cdot\bm{r}}\,\mathrm{d}\bm{r}=\delta_{\bm{k},\bm{l}}$,
where $\delta_{\bm{k},\bm{l}}$ is the Kronecker delta and $A$ is the
horizontal area of the periodic domain)
$-\frac{\mathrm{d}}{\mathrm{d}z}\left(\frac{1}{\sigma^{2}}\frac{\mathrm{d}\Psi_{k}}{\mathrm{d}z}\right)+k^{2}\,\Psi_{k}=0,$
(9)
with upper boundary condition
$\Psi_{k}(0)=1$ (10)
and bottom boundary condition
$\Psi_{k}\rightarrow 0\text{ as }z\rightarrow-\infty.$ (11)
The upper boundary condition (10) is a normalization for the vertical
structure, $\Psi_{k}(z)$, chosen so that
$\psi(\bm{r},z=0,t)=\sum_{\bm{k}}\hat{\psi}_{\bm{k}}(t)\,\mathrm{e}^{\mathrm{i}\bm{k}\cdot\bm{r}}.$
(12)
Consequently, the surface potential vorticity (4) is given by
$\theta(\bm{r},t)=\sum_{\bm{k}}\hat{\theta}_{\bm{k}}(t)\,\mathrm{e}^{\mathrm{i}\bm{k}\cdot\bm{r}},$
(13)
where
$\hat{\theta}_{\bm{k}}=-m(k)\,\hat{\psi}_{\bm{k}},$ (14)
and the inversion function $m(k)$ (with dimensions of inverse length) is
defined as
$m(k)=\frac{1}{\sigma_{0}^{2}}\,\frac{\mathrm{d}\Psi_{k}(0)}{\mathrm{d}z}.$
(15)
In all our applications, we find the inversion function to be a positive
monotonically increasing function of $k$ [i.e., $m(k)>0$ and
$\mathrm{d}m/\mathrm{d}k\geq 0$]. The inversion function is related to the
transfer function (1) through
$\mathcal{F}(k)=\frac{1}{f\,\sigma_{0}^{2}\,m(k)}=\left[f\,\frac{\mathrm{d}\Psi_{k}(0)}{\mathrm{d}z}\right]^{-1},$
(16)
which shows that the transfer function, evaluated at a wavenumber $k$, is
related to the characteristic vertical scale of $\Psi_{k}(z)$.
### 2.3 The case of constant stratification
To recover the well-known case of the uniformly stratified surface
quasigeostrophic model (Held et al., 1995), set $\sigma=\sigma_{0}$. Then
solving the vertical structure equation (9) along with boundary conditions
(10) and (11) yields the exponentially decaying vertical structure,
$\Psi_{k}(z)=\mathrm{e}^{\sigma_{0}\,k\,z}.$ (17)
Substituting $\Psi_{k}(z)$ into the definition of the inversion function (15),
we obtain
$m(k)=k/\sigma_{0},$ (18)
and hence [through the inversion relation (14)] a linear-in-wavenumber
inversion relation of
$\hat{\theta}_{\bm{k}}=-(k/\sigma_{0})\,\hat{\psi}_{\bm{k}}.$ (19)
In appendix A, we show that $m(k)\rightarrow k/\sigma_{0}$ as
$k\rightarrow\infty$ for arbitrary stratification $\sigma(z)$. Therefore, at
sufficiently small horizontal scales, surface quasigeostrophic dynamics
behaves as in constant stratification regardless of the functional form of
$\sigma(z)$.
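For arbitrary $\sigma(z)$, the boundary-value problem (9)-(11) must in general be solved numerically. The following is a minimal finite-difference sketch (not from the original; nondimensional units, grid parameters, and the function name are our own illustrative choices), which recovers the constant-stratification result $m(k)=k/\sigma_{0}$:

```python
import numpy as np
from scipy.linalg import solve_banded

def inversion_function(k, sigma, H=10.0, n=2000):
    """Estimate m(k) of equation (15) by solving the vertical-structure
    problem (9)-(11) with finite differences.  `sigma` is a callable
    sigma(z); the decay condition (11) is imposed as Psi(-H) = 0, which
    is adequate when sigma(0)*k*H >> 1."""
    dz = H / n
    z = np.linspace(-H, 0.0, n + 1)
    # 1/sigma^2 on half levels, for the conservative discretization of
    # d/dz (sigma^{-2} dPsi/dz)
    s2i = 1.0 / sigma(0.5 * (z[:-1] + z[1:])) ** 2
    # tridiagonal system for the interior unknowns Psi_1 .. Psi_{n-1}
    diag = (s2i[:-1] + s2i[1:]) / dz**2 + k**2
    off = -s2i[1:-1] / dz**2                  # symmetric off-diagonals
    ab = np.zeros((3, n - 1))
    ab[0, 1:] = off                           # superdiagonal
    ab[1, :] = diag
    ab[2, :-1] = off                          # subdiagonal
    rhs = np.zeros(n - 1)
    rhs[-1] = s2i[-1] / dz**2                 # from Psi(0) = 1
    psi = np.concatenate(([0.0], solve_banded((1, 1), ab, rhs), [1.0]))
    # one-sided second-order surface derivative, definition (15)
    dpsi_dz = (3.0 * psi[-1] - 4.0 * psi[-2] + psi[-3]) / (2.0 * dz)
    return dpsi_dz / sigma(np.array([0.0]))[0] ** 2

# constant stratification: m(k) should approach k / sigma0
sigma0 = 1.0
m = inversion_function(5.0, lambda z: np.full_like(z, sigma0))
```

The same routine accepts any array-valued $\sigma(z)$ callable, so it can be reused for the idealized profiles of section 4.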
## 3 Surface quasigeostrophic turbulence
Suppose a two-dimensional barotropic fluid is forced in the wavenumber
interval $[k_{1},k_{2}]$. In such a fluid, Kraichnan (1967) argued that two
inertial ranges will form: one inertial range for $k<k_{1}$ where kinetic
energy cascades to larger scales and another inertial range for $k>k_{2}$
where enstrophy cascades to smaller scales. Kraichnan’s argument depends on
three properties of two-dimensional vorticity dynamics. First, that there are
two independent conserved quantities; namely, the kinetic energy and the
enstrophy. Second, that turbulence is sufficiently local in wavenumber space
so that the only available length scale is $k^{-1}$. Third, that the inversion
relation between the vorticity and the streamfunction is scale invariant.
There are two independent conserved quantities in surface quasigeostrophic
dynamics, as in Kraichnan’s two-dimensional fluid; namely the total energy,
$E$, and the surface potential enstrophy, $P$. However, the second and third
properties of two-dimensional vorticity dynamics do not hold for surface
quasigeostrophic dynamics. Even if the turbulence is local in wavenumber
space, there are two available length scales at each wavenumber $k$; namely,
$k^{-1}$ and $[m(k)]^{-1}$. Moreover, the inversion relation (14) is generally
not scale invariant (a function $m(k)$ is scale invariant if $m(\lambda
k)=\lambda^{s}\,m(k)$ for all $\lambda$, where $s$ is a real number; in
particular, power laws, $m(k)=k^{\alpha}$, are scale invariant).
Therefore, the arguments in Kraichnan (1967) do not hold in general for
surface quasigeostrophic dynamics.
Even so, in the remainder of this section we show that there is a net inverse
cascade of total energy and a net forward cascade of surface potential
enstrophy even if there are no inertial ranges in the turbulence. Then we
consider the circumstances under which we expect inertial ranges to form.
Finally, assuming the existence of an inertial range, we derive the spectra
for the cascading quantities. We begin, however, with some definitions.
### 3.1 Quadratic quantities
The two quadratic quantities needed for the cascade argument are the
volume-integrated total mechanical energy per unit mass per unit horizontal area,
$\displaystyle E$
$\displaystyle=\frac{1}{2\,A}\int_{V}\left(\left\lvert\bm{\nabla}\psi\right\rvert^{2}+\frac{1}{\sigma^{2}}\left\lvert\frac{\partial\psi}{\partial
z}\right\rvert^{2}\right)\mathrm{d}V$ (20)
$\displaystyle=-\frac{1}{2}\,\overline{\psi|_{z=0}\,\theta}=\frac{1}{2}\sum_{\bm{k}}m(k)\,\left\lvert\hat{\psi}_{\bm{k}}\right\rvert^{2},$
and the upper surface potential enstrophy,
$P=\frac{1}{2}\,\overline{\theta^{2}}=\frac{1}{2}\sum_{\bm{k}}\left[m(k)\right]^{2}\left\lvert\hat{\psi}_{\bm{k}}\right\rvert^{2},$
(21)
where the overline denotes an area average over the periodic domain. Both
quantities are time-independent in the absence of forcing and dissipation, as
can be seen by multiplying the time-evolution equation (6) by either
$-\psi|_{z=0}$ or $\theta$ and taking an area average.
Two other quadratics we use are the surface kinetic energy
$K=\frac{1}{2}\overline{\left\lvert\bm{\nabla}\psi\right\rvert_{z=0}^{2}}=\frac{1}{2}\sum_{\bm{k}}k^{2}\,\left\lvert\hat{\psi}_{\bm{k}}\right\rvert^{2}$
(22)
and the surface streamfunction variance
$S=\frac{1}{2}\overline{\left(\psi|_{z=0}\right)^{2}}=\frac{1}{2}\sum_{\bm{k}}\left\lvert\hat{\psi}_{\bm{k}}\right\rvert^{2}.$
(23)
Moreover, the isotropic spectrum $\mathscr{A}(k)$ of a quadratic quantity $A$
is defined by
$A=\int_{0}^{\infty}\mathscr{A}(k)\,\mathrm{d}k,$ (24)
so that the isotropic spectra of $E,P,K,$ and $S$ are given by
$\mathscr{E}(k),\mathscr{P}(k),\mathscr{K}(k),$ and $\mathscr{S}(k)$. The
isotropic spectra are then related by
$\mathscr{P}(k)=m(k)\,\mathscr{E}(k)=\left[m(k)\right]^{2}\,\mathscr{S}(k)$
(25)
and
$\mathscr{K}(k)=k^{2}\,\mathscr{S}(k).$ (26)
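Isotropic spectra of this kind are standard diagnostics for the simulations discussed later; a minimal binning sketch (our own helper, not from the paper) for the streamfunction-variance spectrum $\mathscr{S}(k)$, checked against Parseval's theorem so that the integral in (24) recovers the area-averaged variance:

```python
import numpy as np

def isotropic_spectrum(psi, L):
    """Isotropic spectral density of (1/2) <psi^2> on a doubly periodic
    square of side L.  Modes are binned into shells of width dk = 2*pi/L
    so that integrating the density over k recovers the area-averaged
    variance, consistent with definition (24)."""
    n = psi.shape[0]
    psih = np.fft.fft2(psi) / n**2                  # Fourier amplitudes
    kx = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
    kmag = np.hypot(*np.meshgrid(kx, kx, indexing="ij"))
    dk = 2.0 * np.pi / L
    shell = np.rint(kmag / dk).astype(int)          # shell index per mode
    density = np.bincount(shell.ravel(),
                          weights=0.5 * np.abs(psih.ravel())**2) / dk
    return dk * np.arange(density.size), density    # (k, S(k))

rng = np.random.default_rng(0)
psi = rng.standard_normal((64, 64))
k, S = isotropic_spectrum(psi, L=2.0 * np.pi)
```

The spectra $\mathscr{E}(k)$, $\mathscr{P}(k)$, and $\mathscr{K}(k)$ then follow from $\mathscr{S}(k)$ through relations (25) and (26).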
For $\mathscr{A}(k)=\mathscr{E}(k)$ or $\mathscr{A}(k)=\mathscr{P}(k)$, there
is a time-evolution equation of the form (Gkioulekas and Tung, 2007)
$\frac{\partial\mathscr{A}(k)}{\partial t}+\frac{\partial\Pi_{A}(k)}{\partial
k}=F_{A}(k)-D_{A}(k),$ (27)
where $\Pi_{A}(k)$ is the transfer of the spectral density $\mathscr{A}(k)$
from $(0,k)$ to $(k,\infty)$, and $D_{A}(k)$ and $F_{A}(k)$ are the
dissipation and forcing spectra of $A$, respectively. In an inertial range
where $A$ is the cascading quantity, $\Pi_{A}(k)=\varepsilon_{A}$, where
$\varepsilon_{A}$ is a constant, and thus $\partial\Pi_{A}(k)/\partial k=0$.
### 3.2 The inverse and forward cascade
For a fluid with the variable stratification inversion relation (14) that is
forced in the wavenumber interval $[k_{1},k_{2}]$, Gkioulekas and Tung (2007)
prove the following two inequalities for stationary turbulence,
$\displaystyle\int_{0}^{k}\frac{\mathrm{d}m(k^{\prime})}{\mathrm{d}k^{\prime}}\,\Pi_{E}(k^{\prime})\,\mathrm{d}k^{\prime}<0,\,\,\text{for
all }k>k_{2},$ (28)
$\displaystyle\int_{k}^{\infty}\frac{\mathrm{d}m(k^{\prime})}{\mathrm{d}k^{\prime}}\,\frac{\Pi_{P}(k^{\prime})}{[m(k^{\prime})]^{2}}\,\mathrm{d}k^{\prime}>0,\,\,\text{for
all }k<k_{1}.$ (29)
These two inequalities do not require the existence of inertial ranges, only
that the inversion function $m(k)$ is an increasing function of $k$.
Therefore, if $\mathrm{d}m(k)/\mathrm{d}k>0$, then there is a net inverse
cascade of total energy and a net forward cascade of surface potential
enstrophy.
### 3.3 When do inertial ranges form?
The lack of scale invariance along with the presence of two length scales,
$k^{-1}$ and $[m(k)]^{-1}$, prevents the use of the Kraichnan (1967) argument
to establish the existence of an inertial range. However, suppose that in a
wavenumber interval, $[k_{a},k_{b}]$, the inversion function takes the power
law form
$m(k)\approx m_{\alpha}\,k^{\alpha},$ (30)
where $m_{\alpha}>0$ and $\alpha>0$. Then, in this wavenumber interval, the
inversion relation takes the form of the $\alpha$-turbulence inversion
relation (Pierrehumbert et al., 1994),
$\hat{\xi}_{\bm{k}}=-k^{\alpha}\,\hat{\psi}_{\bm{k}},$ (31)
with $\xi=\theta/m_{\alpha}$. The inversion relation (31) is then scale
invariant in the wavenumber interval $[k_{a},k_{b}]$. Moreover, $k^{-1}$ is
the only available length scale if the turbulence is sufficiently local in
wavenumber space. It follows that if the wavenumber interval $[k_{a},k_{b}]$
is sufficiently wide (i.e., $k_{a}\ll k_{b}$), then Kraichnan’s argument
applies to the turbulence over this wavenumber interval and inertial ranges
are expected to form.
### 3.4 The Tulloch and Smith (2006) argument
If we assume the existence of inertial ranges, then we can adapt the cascade
argument of Tulloch and Smith (2006) to general surface quasigeostrophic
fluids to obtain predictions for the cascade spectra.
In the inverse cascade inertial range, we must have
$\Pi_{E}(k)=\varepsilon_{E}$ where $\varepsilon_{E}$ is a constant. Assuming
locality in wavenumber space, we have
$\varepsilon_{E}\sim\frac{k\,\mathscr{E}(k)}{\tau(k)},$ (32)
where $\tau(k)$ is a spectrally local timescale (a spectrally local
timescale is appropriate so long as $m(k)$ grows more slowly than $k^{2}$;
otherwise, a non-local timescale must be used; Kraichnan, 1971; Watanabe and Iwayama,
2004). If we further assume that the timescale $\tau(k)$ is determined by the
kinetic energy spectrum, $\mathscr{K}(k)$, then dimensional consistency
requires
$\tau(k)\sim\left[k^{3}\,\mathscr{K}(k)\right]^{-1/2}.$ (33)
Substituting this timescale into equation (32) and using the relationship
between the energy spectrum, $\mathscr{E}(k)$, and the streamfunction variance
spectrum, $\mathscr{S}(k)$, in equations (25) and (26), we obtain the total
energy spectrum in the inverse cascade inertial range,
$\mathscr{E}(k)\sim\varepsilon_{E}^{2/3}\,k^{-7/3}\,\left[m(k)\right]^{1/3}.$
(34)
Analogously, in the forward cascade inertial range, we must have
$\Pi_{P}(k)=\varepsilon_{P}$ where $\varepsilon_{P}$ is a constant. A similar
argument yields the surface potential enstrophy spectrum in the forward
cascade inertial range,
$\mathscr{P}(k)\sim\varepsilon_{P}^{2/3}\,k^{-7/3}\,\left[m(k)\right]^{2/3}.$
(35)
The predicted spectra (34) and (35) are not uniquely determined by dimensional
analysis. Rather than assuming that the spectrally local timescale $\tau(k)$
is determined by the kinetic energy spectrum, $\mathscr{K}(k)$, we can assume
that $\tau(k)$ is determined by the total energy spectrum, $\mathscr{E}(k)$,
or the surface potential enstrophy spectrum, $\mathscr{P}(k)$ (these
assumptions lead to timescales of
$\tau(k)\sim\left[k^{4}\,\mathscr{E}(k)\right]^{-1/2}$ and
$\tau(k)\sim\left[k^{3}\,\mathscr{P}(k)\right]^{-1/2}$, respectively). Either
choice will result in cascade spectra distinct from (34) and (35). However, by
assuming that the timescale $\tau(k)$ is determined by the kinetic energy
spectrum, the resulting cascade spectra agree with the $\alpha$-turbulence
predictions of Pierrehumbert et al. (1994) when the inversion function takes
the power law form (30).
For later reference, we provide the expressions for the inverse and forward
cascade surface kinetic energy spectra. Using either the inverse cascade
spectrum (34) or forward cascade spectrum (35) along with the relations
between the various spectra [equations (25) and (26)], we obtain
$\mathscr{K}(k)\sim\varepsilon_{E}^{2/3}\,k^{-1/3}\,\left[m(k)\right]^{-2/3}$
(36)
in the inverse cascade and
$\mathscr{K}(k)\sim\varepsilon_{P}^{2/3}\,k^{-1/3}\,\left[m(k)\right]^{-4/3}$
(37)
in the forward cascade.
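For a power-law inversion function $m(k)\sim k^{\alpha}$, spectra (36) and (37) reduce to logarithmic slopes of $-(1+2\alpha)/3$ and $-(1+4\alpha)/3$, respectively. A quick numerical check (a sketch with our own helper names, not from the original):

```python
import numpy as np

def ke_log_slope(m, k, cascade="forward"):
    """Local logarithmic slope of the predicted surface kinetic energy
    spectrum: (36) gives K ~ k^{-1/3} m^{-2/3} in the inverse cascade,
    (37) gives K ~ k^{-1/3} m^{-4/3} in the forward cascade."""
    p = 2.0 / 3.0 if cascade == "inverse" else 4.0 / 3.0
    logK = -np.log(k) / 3.0 - p * np.log(m(k))
    return np.gradient(logK, np.log(k))

k = np.logspace(-2, 2, 200)
# uniform stratification, m(k) = k/sigma0: Blumen's k^{-5/3} forward slope
s_uniform = ke_log_slope(lambda k: k / 14.0, k, "forward")
# mixed-layer-like range, m(k) ~ k^{3/2}: k^{-4/3} (inverse), k^{-7/3} (forward)
s_inv = ke_log_slope(lambda k: k**1.5, k, "inverse")
s_fwd = ke_log_slope(lambda k: k**1.5, k, "forward")
```

The $\alpha=3/2$ slopes are the wintertime $k^{-4/3}$ to $k^{-7/3}$ range quoted in the introduction.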
Finally, we note that the vorticity spectrum,
$\mathscr{Z}(k)=k^{2}\,\mathscr{K}(k),$ (38)
is an increasing function of $k$ if $m(k)$ is flatter than $k^{5/4}$. In
particular, at small scales, we expect $m(k)\sim k$ (section 2.3), implying a
vorticity spectrum of $\mathscr{Z}(k)\sim k^{1/3}$. Such an increasing
vorticity spectrum implies high Rossby numbers and the breakdown of
geostrophic balance at small scales.
## 4 Idealized stratification profiles
In this section we provide analytical solutions for $m(k)$ in the cases of
increasing and decreasing piecewise-constant stratification profiles, as well
as in the case of exponential stratification. These idealized stratification
profiles then provide intuition for the inversion function’s functional form
in the case of an arbitrary stratification profile, $\sigma(z)$.
### 4.1 Piecewise constant stratification
Figure 1: Log-log plots of the inversion function, $m(k)$ [panels (a), (d),
and (g)], for three stratification profiles [panels (b), (e), and (h)] and the
resulting streamfunctions at the two horizontal length scales of 50 km
(dashed) and 100 km (solid) [for panels (c) and (i)] or 2 km and 10 km [panel
(f)]. In the first two inversion function plots [panels (a) and (d)], the thin
solid diagonal lines represent the two linear asymptotic states of
$k/\sigma_{0}$ and $k/\sigma_{\mathrm{pyc}}$. The vertical solid line is the
mixed-layer length scale $L_{\mathrm{mix}}$, given by equation (41), whereas
the vertical dotted line is the pycnocline length scale $L_{\mathrm{pyc}}$,
given by equation (42). The power $\alpha$, where
$m(k)/k^{\alpha}\approx\mathrm{constant}$, is computed by fitting a straight
line to the log-log plot of $m(k)$ between $2\pi/L_{\mathrm{mix}}$ and
$2\pi/L_{\mathrm{pyc}}$. This straight line is shown as a grey dashed line in
panels (a) and (d). In panel (g), the thin diagonal line is the linear small-
scale limit, $m(k)\approx k/\sigma_{0}$, whereas the thin horizontal line is
the constant large-scale limit, $m(k)=2/(\sigma_{0}^{2}\,h_{\mathrm{exp}})$.
Finally, the solid vertical lines in panel (g) indicate the horizontal length
scale $L_{\mathrm{exp}}=2\pi/k_{\mathrm{exp}}$ [equation (48)] induced by the
exponential stratification. Further details on the stratification profiles are
in the text.
Consider the piecewise constant stratification profile, given by
$\sigma(z)=\begin{cases}\sigma_{0}\,&\text{for }-h<z\leq 0\\\
\sigma_{\mathrm{pyc}}\,&\text{for }-\infty<z\leq-h.\end{cases}$ (39)
This stratification profile consists of an upper layer of thickness $h$ with
constant stratification $\sigma_{0}$ overlying an infinitely deep layer with
constant stratification $\sigma_{\mathrm{pyc}}$. If
$\sigma_{0}<\sigma_{\mathrm{pyc}}$, then this stratification profile is an
idealization of a weakly stratified mixed-layer overlying an ocean of stronger
stratification. See panels (b) and (e) in figure 1 for an illustration.
For this stratification profile, an analytical solution is possible, with the
solution provided in appendix B. The resulting inversion function is
$m(k)=\frac{k}{\sigma_{0}}\left[\frac{\cosh\left(\sigma_{0}hk\right)+\left(\frac{\sigma_{\mathrm{pyc}}}{\sigma_{0}}\right)\sinh\left(\sigma_{0}hk\right)}{\sinh\left(\sigma_{0}hk\right)+\left(\frac{\sigma_{\mathrm{pyc}}}{\sigma_{0}}\right)\cosh\left(\sigma_{0}hk\right)}\right].$
(40)
At small horizontal scales, $2\pi/k\ll L_{\mathrm{mix}}$, where
$L_{\mathrm{mix}}=2\,\pi\,\sigma_{0}\,h,$ (41)
the inversion function takes the form $m(k)\approx k/\sigma_{0}$, as expected
from the uniformly stratified theory (Held et al., 1995). At large horizontal
scales, $2\pi/k\gg L_{\mathrm{pyc}}$, where
$L_{\mathrm{pyc}}=2\,\pi\begin{cases}\,\sigma_{\mathrm{pyc}}\,h\,&\text{if
}\sigma_{0}\leq\sigma_{\mathrm{pyc}}\\\
\sigma_{\mathrm{0}}^{2}\,h/\sigma_{\mathrm{pyc}}\,&\text{if
}\sigma_{0}>\sigma_{\mathrm{pyc}},\end{cases}$ (42)
then the inversion function takes the form $m(k)\approx
k/\sigma_{\mathrm{pyc}}$, because at large horizontal scales, the ocean will
seem to have constant stratification $\sigma_{\mathrm{pyc}}$.
The functional form of the inversion function at horizontal scales between
$L_{\mathrm{mix}}$ and $L_{\mathrm{pyc}}$ depends on whether $\sigma(z)$ is a
decreasing or increasing function. If $\sigma(z)$ is a decreasing function,
with $\sigma_{0}<\sigma_{\mathrm{pyc}}$, then we obtain a mixed-layer like
stratification profile and the inversion function steepens to a superlinear
wavenumber dependence at these scales. An example is shown in figure 1(a)-(b).
Here, the stratification abruptly jumps from a value of $\sigma_{0}\approx 14$
to $\sigma_{\mathrm{pyc}}=100$ at $z\approx-79$ m. Consequently, the inversion
function takes the form $m(k)\sim k^{1.57}$ between $2\pi/L_{\mathrm{pyc}}$
and $2\pi/L_{\mathrm{mix}}$. In contrast, if
$\sigma_{0}>\sigma_{\mathrm{pyc}}$ then the inversion function flattens to a
sublinear wavenumber dependence for horizontal scales between
$L_{\mathrm{mix}}$ and $L_{\mathrm{pyc}}$. An example is shown in figure
1(d)-(e), where the stratification abruptly jumps from $\sigma_{0}\approx 14$
to $\sigma_{\mathrm{pyc}}\approx 2$ at $z\approx-79$ m. In this case, the
inversion function has a sublinear wavenumber dependence, $m(k)\sim k^{0.43}$,
between $2\pi/L_{\mathrm{pyc}}$ and $2\pi/L_{\mathrm{mix}}$.
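This behaviour can be reproduced directly from (40). The sketch below (parameter values read off figure 1, so approximate, and the fitting recipe follows the figure caption) checks both asymptotic limits and recovers an exponent near the quoted $m(k)\sim k^{1.57}$:

```python
import numpy as np

def m_piecewise(k, sigma0, sigma_pyc, h):
    """Inversion function (40) for piecewise-constant stratification:
    sigma0 for -h < z <= 0, sigma_pyc for z <= -h."""
    r = sigma_pyc / sigma0
    t = sigma0 * h * k
    return (k / sigma0) * (np.cosh(t) + r * np.sinh(t)) \
                        / (np.sinh(t) + r * np.cosh(t))

# mixed-layer-like parameters, read off figure 1(a)-(b) (approximate)
sigma0, sigma_pyc, h = 14.0, 100.0, 79.0
L_mix = 2.0 * np.pi * sigma0 * h        # mixed-layer scale, equation (41)
L_pyc = 2.0 * np.pi * sigma_pyc * h     # pycnocline scale, equation (42)

# asymptotic regimes: m(k) ~ k/sigma0 at small scales, k/sigma_pyc at large
k_large = 30.0 / (sigma0 * h)           # well inside the small-scale regime
k_small = 1.0e-3 / (sigma_pyc * h)      # well inside the large-scale regime

# exponent alpha of m(k) ~ k^alpha between 2*pi/L_pyc and 2*pi/L_mix,
# fitted on a log-log plot as described in the figure 1 caption
kk = np.logspace(np.log10(2.0 * np.pi / L_pyc),
                 np.log10(2.0 * np.pi / L_mix), 100)
alpha = np.polyfit(np.log(kk),
                   np.log(m_piecewise(kk, sigma0, sigma_pyc, h)), 1)[0]
```

Swapping the roles of $\sigma_{0}$ and $\sigma_{\mathrm{pyc}}$ gives the sublinear branch of the $\sigma_{0}>\sigma_{\mathrm{pyc}}$ case.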
Figure 2: Results of three pseudo-spectral simulations, forced at
approximately 100 km, with $1024^{2}$ horizontal grid points. See appendix C
for a description of the numerical model. The first simulation [panels (a),
(d), and (g)] corresponds to the stratification profile and inversion function
shown in figure 1(a)-(b), the second simulation [panels (b), (e), and (h)]
corresponds to the stratification profile and inversion function shown in
figure 1(d)-(e), and the third simulation corresponds to the stratification
profile and inversion function shown in figure 1(g)-(h). Plots (a), (b), and
(c) are snapshots of the surface potential vorticity, $\theta$, normalized by
its maximum value in the snapshot. Plots (d), (e), and (f) are snapshots of
the horizontal speed $\left\lvert\bm{u}\right\rvert$ normalized by its maximum
value in the snapshot. Plots (g), (h), and (i) show the model kinetic energy
spectrum (solid black line) along with the prediction given by equation (37)
(dashed black line). We also provide linear fits to the model kinetic energy
spectrum (dash-dotted red line) and to the predicted spectrum (dotted blue
line).
By fitting a power law, $k^{\alpha}$, to the inversion function, we do not
mean to imply that $m(k)$ indeed takes the form of a power law. Instead, the
purpose of obtaining the estimated power $\alpha$ is to apply the intuition
gained from $\alpha$-turbulence (Pierrehumbert et al., 1994; Smith et al.,
2002; Sukhatme and Smith, 2009; Burgess et al., 2015; Foussard et al., 2017)
to surface quasigeostrophic turbulence. In $\alpha$-turbulence, an active
scalar $\xi$, defined by the power law inversion relation (31), is materially
conserved in the absence of forcing and dissipation [that is, $\xi$ satisfies
the time-evolution equation (6) with $\theta$ replaced by $\xi$]. The scalar
$\xi$ can be thought of as a generalized vorticity; if $\alpha=2$ we recover
the vorticity of the two-dimensional barotropic model. If $\alpha=1$, $\xi$ is
proportional to the surface buoyancy in the uniformly stratified surface
quasigeostrophic model. To discern how $\alpha$ modifies the dynamics, we
consider a point vortex $\xi\sim\delta(r)$, where $r$ is the horizontal
distance from the vortex and $\delta(r)$ is the Dirac delta. If $\alpha=2$, we
obtain $\psi(r)\sim\log(r)/2\pi$; otherwise, if $0<\alpha<2$, we obtain
$\psi(r)\sim-C_{\alpha}/r^{2-\alpha}$ where $C_{\alpha}>0$ (Iwayama and
Watanabe, 2010). Therefore, larger $\alpha$ leads to vortices with a longer
interaction range whereas smaller $\alpha$ leads to a shorter interaction
range.
More generally, $\alpha$ controls the spatial locality of the resulting
turbulence. In two-dimensional turbulence ($\alpha=2$), vortices induce flows
reaching far from the vortex core and the combined contributions of distant
vortices dominate the local fluid velocity. These flows are characterized by
thin filamentary $\xi$-structures due to the dominance of large-scale strain
(Watanabe and Iwayama, 2004). As we decrease $\alpha$, the turbulence becomes
more spatially local, the dominance of large-scale strain weakens, and a
secondary instability becomes possible in which filaments roll up into small
vortices; the resulting turbulence is distinguished by vortices spanning a
wide range of horizontal scales, as in uniform stratification surface
quasigeostrophic turbulence (Pierrehumbert et al., 1994; Held et al., 1995).
As $\alpha$ is decreased further the $\xi$ field becomes spatially diffuse
because the induced velocity, which now has small spatial scales, is more
effective at mixing small-scale inhomogeneities in $\xi$ (Sukhatme and Smith,
2009).
These expectations are confirmed in the simulations shown in figure 2. The
simulations are set in a doubly periodic square with side length 400 km and
are forced at a horizontal scale of 100 km. Large-scale dissipation is
achieved through a linear surface buoyancy damping whereas an exponential
filter is applied at small scales. In the case of a mixed-layer like
stratification, with $\sigma_{0}<\sigma_{\mathrm{pyc}}$, the $\theta$ field
exhibits thin filamentary structures (characteristic of the $\alpha=2$ case)
as well as vortices spanning a wide range of horizontal scales (characteristic
of the $\alpha=1$ case). In contrast, although the
$\sigma_{0}>\sigma_{\mathrm{pyc}}$ simulation exhibits vortices spanning a
wide range of scales, no large-scale filaments are evident. Instead, we see
that the surface potential vorticity is spatially diffuse. These contrasting
features are consequences of the induced horizontal velocity field. The mixed-
layer like case has a velocity field dominated by large-scale strain, which is
effective at producing thin filamentary structures. In contrast the velocity
field in the $\sigma_{0}>\sigma_{\mathrm{pyc}}$ case consists of narrow
meandering currents, which are effective at mixing away small-scale
inhomogeneities.
Both the predicted [equation (37)] and diagnosed surface kinetic energy
spectra are plotted in figure 2. In the mixed-layer like case, with
$\sigma_{0}<\sigma_{\mathrm{pyc}}$, the predicted and diagnosed spectra are
close, although the diagnosed spectrum is steeper at large scales (a too steep
spectrum is also observed in the $\alpha=1$ and $\alpha=2$ cases, see
Schorghofer, 2000). In the $\sigma_{0}>\sigma_{\mathrm{pyc}}$ case, the large-
scale spectrum agrees with the predicted spectrum. However, at smaller scales,
the model spectrum is significantly steeper.
Figure 3: The spectral density transfer functions for surface potential
enstrophy (a) and surface kinetic energy (b) normalized by their absolute
maximum for the three simulations in figure 2.
The derivation of the predicted spectra in section 3 assumed the existence of
an inertial range, which in this case means $\Pi_{P}(k)=$ constant. To verify
whether this assumption holds, we show in figure 3(a) the spectral transfer of
surface potential enstrophy for both the $\sigma_{0}<\sigma_{\mathrm{pyc}}$
and the $\sigma_{0}>\sigma_{\mathrm{pyc}}$ cases. In the mixed-layer like
case, with $\sigma_{0}<\sigma_{\mathrm{pyc}}$, an approximate inertial range
forms with some deviations at larger scales. However, in the
$\sigma_{0}>\sigma_{\mathrm{pyc}}$ case, $\Pi_{P}$ is an increasing function
at small scales, which indicates a divergent spectral flux of surface
potential enstrophy at these scales. That is, at small scales there is a
depletion of $\mathscr{P}(k)$, and this depletion causes the steepening of the
kinetic energy spectrum at small scales in figure 2.
The steepening of the model spectrum at small scales may be a consequence of
the inversion function being steeper at small scales than at large scales. We
tested this hypothesis with two additional simulations having a prescribed
inversion function of the form
$m(k)=\begin{cases}k^{\alpha_{1}}\quad\text{for }k<k_{0}\\\
C\,k^{\alpha_{2}}\quad\text{for }k>k_{0},\end{cases}$ (43)
where $\alpha_{1}$, $\alpha_{2}$, and $k_{0}$ are positive numbers, and $C$ is
chosen to ensure that $m(k)$ is continuous. In the first simulation, we chose
$\alpha_{1}=3/2$ and $\alpha_{2}=1/2$ so that $m(k)$ is flatter at small
scales than at large scales (as in the mixed-layer like case). We obtained an
approximate inertial range and the model spectrum is close to the predicted
surface kinetic energy spectrum (37). In the second simulation, we chose
$\alpha_{1}=1/2$ and $\alpha_{2}=3/2$ so that $m(k)$ is steeper at small
scales than at large scales (as in the $\sigma_{0}>\sigma_{\mathrm{pyc}}$
case). We found that $\Pi_{P}(k)$ is an increasing function (so that no
inertial range exists) and we obtained a model surface kinetic energy spectrum
that is much steeper than predicted. It is not clear why the inertial range
theory fails in this case, and the failure may be a consequence of the finite
model resolution.
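A prescribed inversion function of the form (43) is straightforward to construct; a minimal sketch (function name ours), with the continuity constant written out explicitly:

```python
import numpy as np

def m_prescribed(k, alpha1, alpha2, k0):
    """Piecewise power-law inversion function (43).  Choosing
    C = k0**(alpha1 - alpha2) makes m(k) continuous at the transition
    wavenumber k0."""
    C = k0 ** (alpha1 - alpha2)
    return np.where(k < k0, k ** alpha1, C * k ** alpha2)

k0 = 10.0
# flatter at small scales than at large scales (mixed-layer-like case):
# alpha1 = 3/2 below k0, alpha2 = 1/2 above
m_across = m_prescribed(np.array([0.999 * k0, k0, 1.001 * k0]), 1.5, 0.5, k0)
```

The steep-at-small-scales case of the second simulation is obtained by exchanging the two exponents.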
### 4.2 An exponentially stratified ocean
Now consider the exponential stratification profile
$\sigma=\sigma_{0}\,\mathrm{e}^{z/h_{\mathrm{exp}}}.$ (44)
Substituting the stratification profile (44) into the vertical structure
equation (9) with boundary conditions (10) and (11) yields the vertical
structure
$\Psi_{k}(z)=\mathrm{e}^{z/h_{\mathrm{exp}}}\,\frac{I_{1}\left(\mathrm{e}^{z/h_{\mathrm{exp}}}\sigma_{0}\,h_{\mathrm{exp}}\,k\right)}{I_{1}\left(\sigma_{0}\,h_{\mathrm{exp}}\,k\right)},$
(45)
where $I_{n}(z)$ is the modified Bessel function of the first kind of order
$n$.
To obtain the inversion function, we substitute the vertical structure (45)
into the definition of the inversion function (15) to obtain
$\displaystyle m(k)=$
$\displaystyle\frac{1}{\sigma_{0}^{2}h_{\mathrm{exp}}}\,+$ (46)
$\displaystyle\frac{k}{2\sigma_{0}}\left[\frac{I_{0}\left(\sigma_{0}h_{\mathrm{exp}}k\right)}{I_{1}\left(\sigma_{0}h_{\mathrm{exp}}k\right)}+\frac{I_{2}\left(\sigma_{0}h_{\mathrm{exp}}k\right)}{I_{1}\left(\sigma_{0}h_{\mathrm{exp}}k\right)}\right].$
In the small-scale limit $k\gg 1/\left(\sigma_{0}\,h_{\mathrm{exp}}\right)$,
the inversion function becomes $m(k)\approx k/\sigma_{0}$ as in constant
stratification surface quasigeostrophic theory. In contrast, the large-scale
limit $k\ll 1/\left(\sigma_{0}\,h_{\mathrm{exp}}\right)$ gives
$m(k)\approx\frac{h_{\mathrm{exp}}}{4}\left(k_{\mathrm{exp}}^{2}+k^{2}\right),$
(47)
where $k_{\mathrm{exp}}$ is given by
$k_{\mathrm{exp}}=\frac{2\,\sqrt{2}}{\sigma_{0}\,h_{\mathrm{exp}}}.$ (48)
As $k/k_{\mathrm{exp}}\rightarrow 0$, the inversion function asymptotes to a
constant value and the vertical structure becomes independent of the
horizontal scale $2\pi/k$, with $\Psi_{k}\rightarrow\Psi_{0}$ where
$\Psi_{0}(z)=\mathrm{e}^{2z/h_{\mathrm{exp}}}.$ (49)
Further increasing the horizontal scale no longer modifies $\Psi_{k}(z)$, and
so the vertical structure is arrested at $\Psi_{0}$.
An example with $h_{\mathrm{exp}}=300$ m and $\sigma_{0}=100$ is shown in
figure 1(g)-(i). At horizontal scales smaller than
$L_{\mathrm{exp}}=2\pi/k_{\mathrm{exp}}$, the inversion function rapidly
transitions to the linear small-scale limit of $m(k)\approx k/\sigma_{0}$. In
contrast, at horizontal scales larger than $L_{\mathrm{exp}}$, the large-scale
approximation (47) holds, and at sufficiently large horizontal scales, the
inversion function asymptotes to the constant value
$m(k)=h_{\mathrm{exp}}\,k_{\mathrm{exp}}^{2}/4$.
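The exponential-stratification inversion function (46) and its two limits can be checked numerically with `scipy.special.iv` (the modified Bessel function of the first kind); the sketch below uses the parameter values of the figure 1(g)-(i) example:

```python
import numpy as np
from scipy.special import iv

def m_exponential(k, sigma0, h_exp):
    """Inversion function (46) for sigma(z) = sigma0 * exp(z / h_exp)."""
    x = sigma0 * h_exp * k
    return 1.0 / (sigma0**2 * h_exp) \
        + (k / (2.0 * sigma0)) * (iv(0, x) + iv(2, x)) / iv(1, x)

sigma0, h_exp = 100.0, 300.0                     # figure 1(g)-(i) values
k_exp = 2.0 * np.sqrt(2.0) / (sigma0 * h_exp)    # equation (48)

# small-scale regime, k >> 1/(sigma0*h_exp): m(k) -> k / sigma0
k_small_scale = 100.0 / (sigma0 * h_exp)
# large-scale regime, k << 1/(sigma0*h_exp):
# m(k) -> (h_exp/4) * (k_exp**2 + k**2), equation (47)
k_large_scale = 0.01 * k_exp
```

As $k\rightarrow 0$ the large-scale expression reduces to the constant asymptote $h_{\mathrm{exp}}\,k_{\mathrm{exp}}^{2}/4=2/(\sigma_{0}^{2}\,h_{\mathrm{exp}})$.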
The inversion relation implied by the inversion function (47) is
$\hat{\theta}_{\bm{k}}\approx-\frac{h_{\mathrm{exp}}}{4}\left(k_{\mathrm{exp}}^{2}+k^{2}\right)\hat{\psi}_{\bm{k}},$
(50)
which is isomorphic to the inversion relation in the equivalent barotropic
model (Larichev and McWilliams, 1991), with $k_{\mathrm{exp}}$ assuming the
role of the deformation wavenumber. In this limit, the total energy and the
surface potential enstrophy are not independent quantities to leading order in
$k_{\mathrm{exp}}^{2}$. Using the relations between the various spectra
[equations (25) and (26)] with an inversion function of the form $m(k)\approx
m_{0}+m_{1}k^{2}$, we obtain $\mathscr{E}(k)\approx
m_{0}\,\mathscr{S}(k)+m_{1}\,\mathscr{K}(k)$ and $\mathscr{P}(k)\approx
m_{0}^{2}\,\mathscr{S}(k)+2\,m_{0}\,m_{1}\mathscr{K}(k)$; solving for
$\mathscr{S}(k)$ and $\mathscr{K}(k)$ then yields
$\mathscr{S}(k)\approx\frac{2\,m_{0}\mathscr{E}(k)-\mathscr{P}(k)}{m_{0}^{2}},$
(51)
and
$\mathscr{K}(k)\approx\frac{\mathscr{P}(k)-m_{0}\,\mathscr{E}(k)}{m_{0}\,m_{1}},$
(52)
which are now the two independent quantities. Then, using an argument
analogous to that in Larichev and McWilliams (1991), we find that
$\mathscr{S}(k)\sim k^{-11/3}$ (53)
in the inverse cascade inertial range whereas
$\mathscr{K}(k)\sim k^{-3}$ (54)
in the forward cascade inertial range.
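The algebra behind equations (51) and (52) amounts to inverting a $2\times 2$ linear system, which can be sanity-checked numerically; in the sketch below the values of $m_{0}$, $m_{1}$, $\mathscr{S}$, and $\mathscr{K}$ are arbitrary illustrative numbers:

```python
import numpy as np

# Check that (51) and (52) invert the pair
#   E = m0*S + m1*K,   P = m0^2*S + 2*m0*m1*K
# for arbitrary (illustrative) spectral values.
m0, m1 = 0.3, 2.0
S_true, K_true = 1.7, 0.9
E = m0 * S_true + m1 * K_true              # total energy spectrum
P = m0**2 * S_true + 2 * m0 * m1 * K_true  # surface potential enstrophy spectrum
S = (2 * m0 * E - P) / m0**2               # equation (51)
K = (P - m0 * E) / (m0 * m1)               # equation (52)
assert np.isclose(S, S_true) and np.isclose(K, K_true)
```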
The implied dynamics are extremely local; a point vortex,
$\theta(r)\sim\delta(r)$, leads to an exponentially decaying streamfunction,
$\psi(r)\sim\exp(-k_{\mathrm{exp}}r)/\sqrt{k_{\mathrm{exp}}r}$ (Polvani et
al., 1989). Therefore, as for the $\sigma_{0}>\sigma_{\mathrm{pyc}}$ case
above, we expect a spatially diffuse surface potential vorticity field and no
large-scale strain. However, unlike the $\sigma_{0}>\sigma_{\mathrm{pyc}}$
case, the presence of a distinguished length scale, $L_{\mathrm{exp}}$, leads
to the emergence of plateaus of homogenized surface potential vorticity
surrounded by kinetic energy ribbons (Arbic and Flierl, 2003). Both of these
features can be seen in figure 2.
The $k^{-3}$ surface kinetic energy spectrum (54) is only expected to hold at
horizontal scales larger than $2\,\pi\,\sigma_{0}\,h_{\mathrm{exp}}$; at
smaller scales we should recover the $k^{-5/3}$ spectrum expected from
uniformly stratified surface quasigeostrophic theory. Figure 2(i) shows that
there is indeed a steepening of the kinetic energy spectrum at horizontal
scales larger than 20 km, although the model spectrum is somewhat steeper than
the predicted $k^{-3}$. Similarly, although the spectrum flattens at smaller
scales, the small-scale spectrum is also slightly steeper than the predicted
$k^{-5/3}$.
We can also examine the spectral transfer functions of $\mathscr{P}(k)$ and
$\mathscr{K}(k)$. At large scales, we expect an inertial range in surface
kinetic energy, so $\Pi_{K}(k)=$ constant, whereas at small scales, we expect
an inertial range in surface potential enstrophy, so $\Pi_{P}(k)=$ constant.
However, figure 3 shows that although both $\Pi_{K}(k)$ and $\Pi_{P}(k)$
become approximately flat at small scales, we observe significant deviations
at larger scales.
### 4.3 More general stratification profiles
These three idealized cases provide intuition for how the inversion function
behaves for an arbitrary stratification profile, $\sigma(z)$. Generally, if
$\sigma(z)$ is decreasing over some depth, then the inversion function will
steepen to a superlinear wavenumber dependence over the range of horizontal
wavenumbers whose vertical structure functions reach down to these
depths. A larger difference in stratification between these depths leads to a
steeper inversion function. Analogously, if $\sigma(z)$ is increasing over
some depth, then the inversion function will flatten to a sublinear wavenumber
dependence, with a larger difference in stratification leading to a flatter
inversion function. Finally, if $\sigma(z)$ is much smaller at depth than near
the surface, the inversion function will flatten to become approximately
constant, and we recover an equivalent-barotropic-like regime, similar to the
exponentially stratified example.
## 5 Application to the ECCOv4 ocean state estimate
We now show that, over the mid-latitude North Atlantic, the inversion function
is seasonal at horizontal scales between 1-100 km, transitioning from
$m(k)\sim k^{3/2}$ in winter to $m(k)\sim k^{1/2}$ in summer. To compute the
inversion function $m(k)$, we obtain the stratification profile
$\sigma(z)=N(z)/f$ at each location from the Estimating the Circulation and
Climate of the Ocean version 4 release 4 (ECCOv4, Forget et al., 2015) state
estimate. We then compute $\Psi_{k}(z)$ from the vertical structure equation
(9) and use the definition of the inversion function (15) to obtain
$m(k)$ at each wavenumber $k$.
### 5.1 The three horizontal length-scales
In addition to $L_{\mathrm{mix}}$ and $L_{\mathrm{pyc}}$ [defined in equations
(41) and (42)], we introduce the horizontal length scale, $L_{H}$, the full-
depth horizontal scale, defined by
$L_{H}=2\,\pi\,\sigma_{\mathrm{ave}}\,H,$ (55)
where $\sigma_{\mathrm{ave}}$ is the vertical average of $\sigma$ and $H$ is
the local ocean depth. The bottom boundary condition becomes important to the
dynamics at horizontal scales larger than $\approx L_{H}$.
Figure 4: Panels (a), (b), and (c) show the horizontal length scales $L_{H}$,
$L_{\textrm{mix}}$, and $L_{\textrm{pyc}}$ as computed from 2017 January mean
ECCOv4 stratification profiles, $\sigma(z)=N(z)/f$, over the North Atlantic.
The green ‘x’ in panel (a) shows the location chosen for the inversion
functions of figure 6 and the model simulations of figure 7. Panel (d) shows
$\alpha$, defined by $m(k)/k^{\alpha}\approx\mathrm{constant}$, over the North
Atlantic. We compute $\alpha$ by fitting a straight line to a log-log plot of
$m(k)$ between $2\pi/L_{\mathrm{mix}}$ and $2\pi/L_{\mathrm{pyc}}$. Panel (e)
is a histogram of the computed values of $\alpha$ over the North Atlantic. We
exclude from this histogram grid cells with $L_{H}<150$ km; these are
primarily continental shelves and high-latitude regions. In these excluded
regions, our chosen bottom boundary condition (56) may be influencing the
computed value of $\alpha$.
Figure 5: Panels (a), (b), (c), and (e) are as in figure 4(a)-(d), but
computed from 2017 July mean stratification profiles. The calculation of
$L_{0}$ in panel (d) is explained in the text. In panel (f), we show $\alpha$
but measured between $2\pi/(50\,\mathrm{km})$ and $2\pi/L_{0}$.
We compute all three length scales using ECCOv4 stratification profiles over
the North Atlantic, with results displayed in figures 4(a)-(c) and 5(a)-(c)
for January and July, respectively. To compute the mixed-layer horizontal
scale, $L_{\mathrm{mix}}=2\,\pi\sigma_{0}\,h_{\mathrm{mix}}$, we set
$\sigma_{0}$ equal to the stratification at the uppermost grid cell. The
mixed-layer depth, $h_{\mathrm{mix}}$, is then defined as follows. We first
define the pycnocline stratification, $\sigma_{\mathrm{pyc}}$, to be the
maximum of $\sigma(z)$. The mixed-layer depth, $h_{\mathrm{mix}}$, is then the
depth at which
$\sigma(-h_{\mathrm{mix}})=\sigma_{0}+\left(\sigma_{\mathrm{pyc}}-\sigma_{0}\right)/4$.
Finally, the pycnocline horizontal scale, $L_{\mathrm{pyc}}$, is computed as
$L_{\mathrm{pyc}}=2\,\pi\,\sigma_{\mathrm{pyc}}\,h_{\mathrm{pyc}}$, where
$h_{\mathrm{pyc}}$ is the depth of the stratification maximum
$\sigma_{\mathrm{pyc}}$.
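The three length-scale diagnostics above can be sketched in a few lines of Python. The shallowest-crossing search for $h_{\mathrm{mix}}$ and the trapezoidal vertical average are our own numerical choices, not details specified in the text:

```python
import numpy as np

def horizontal_scales(z, sigma):
    """Diagnose L_H, L_mix, and L_pyc from a stratification profile
    sigma(z) = N(z)/f sampled on a grid z (increasing upward, z[0] = -H,
    z[-1] = 0).  The mixed-layer search is a simple shallowest-crossing
    scan; the paper does not specify these numerical details."""
    H = -z[0]
    sigma0 = sigma[-1]                           # uppermost grid value
    i_pyc = np.argmax(sigma)                     # stratification maximum
    sigma_pyc, h_pyc = sigma[i_pyc], -z[i_pyc]
    # h_mix: shallowest depth where sigma reaches sigma0 + (sigma_pyc - sigma0)/4
    target = sigma0 + 0.25 * (sigma_pyc - sigma0)
    h_mix = -z[np.where(sigma >= target)[0][-1]]
    # vertical average of sigma by the trapezoidal rule
    sigma_ave = 0.5 * np.sum((sigma[1:] + sigma[:-1]) * np.diff(z)) / H
    L_H = 2 * np.pi * sigma_ave * H              # equation (55)
    L_mix = 2 * np.pi * sigma0 * h_mix
    L_pyc = 2 * np.pi * sigma_pyc * h_pyc
    return L_H, L_mix, L_pyc
```

For a synthetic profile increasing from $\sigma_{0}=50$ at the surface to a maximum of 200 at 100 m depth, this returns $h_{\mathrm{mix}}=25$ m (where $\sigma$ first reaches $87.5$) and $h_{\mathrm{pyc}}=100$ m, as expected from the definitions above.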
Figures 4(a) and 5(a) show that $L_{H}$ is not seasonal, with typical mid-
latitude open ocean values between $400-700$ km. On continental shelves, as
well as at high latitudes, $L_{H}$ decreases to values smaller than $200$ km. As
we approach the equator, the full-depth horizontal scale $L_{H}$ becomes large
due to the smallness of the Coriolis parameter.
Constant stratification surface quasigeostrophic theory is only valid at
horizontal scales smaller than $L_{\mathrm{mix}}$. Figure 4(b) shows that the
wintertime $L_{\mathrm{mix}}$ is spatially variable with values ranging
between $1-15$ km. In contrast, figure 5(b) shows that the summertime
$L_{\mathrm{mix}}$ is less than 2 km over most of the midlatitude North
Atlantic.
Finally, we expect to observe a superlinear inversion function for horizontal
scales between $L_{\mathrm{mix}}$ and $L_{\mathrm{pyc}}$. The latter,
$L_{\mathrm{pyc}}$, is shown in figures 4(c) and 5(c). Typical mid-latitude
values range between $70-110$ km in winter but decrease to $15-30$ km in
summer. In figure 4(c), there is a region close to the Gulf Stream with
anomalously high values of $L_{\mathrm{pyc}}$. In this region, the
stratification maximum, $\sigma_{\mathrm{pyc}}$, is approximately half as large
as the surrounding region, but its depth, $h_{\mathrm{pyc}}$, is about five
times deeper, resulting in larger values of
$L_{\mathrm{pyc}}=2\,\pi\,\sigma_{\mathrm{pyc}}\,h_{\mathrm{pyc}}$.
### 5.2 The inversion function at a single location
Before computing the inversion function over the North Atlantic, we focus on a
single location. However, we must first address what boundary conditions to
use in solving the vertical structure equation (9) for
$\Psi_{k}(z)$. We cannot use the infinite bottom boundary condition (11)
because the ocean has a finite depth. However, given that figures 4(a) and
5(a) show that the bottom boundary condition should not affect the inversion
function at horizontal scales smaller than 400 km in the mid-latitude North
Atlantic open ocean, we choose to use the no-slip bottom boundary condition
$\displaystyle\Psi_{k}(-H)=0.$ (56)
The alternate free-slip boundary condition
$\displaystyle\frac{\mathrm{d}\Psi_{k}(-H)}{\mathrm{d}z}=0$ (57)
gives qualitatively identical results for horizontal scales smaller than 400
km, which are the scales of interest in this study [see appendix A for the
large-scale limit of $m(k)$ under these boundary conditions]. The no-slip
boundary condition (56) is appropriate over strong bottom friction (Arbic and
Flierl, 2004) or steep topography (LaCasce, 2017), whereas the free-slip
boundary condition (57) is appropriate over a flat bottom.
Figure 6: As in figure 1 but for the mid-latitude North Atlantic location
($38^{\circ}$ N, $45^{\circ}$ W) in January [(a)-(c)] and July [(d)-(f)]. This
location is marked by a green ‘x’ in figure 4(a). Only the upper 750 m of the
stratification profiles and vertical structures are shown in panels (b), (c),
(e) and (f).
Figure 7: Two pseudo-spectral simulations differing only in the chosen
stratification profile $\sigma(z)=N(z)/f$. Both simulations use a monthly
averaged 2017 stratification at the mid-latitude North Atlantic location
($38^{\circ}$ N, $45^{\circ}$ W) [see the green ‘x’ in figure 4(a)] in January
[(a), (c), (e)] and July [(b), (d), (f)]. The stratification profiles are
obtained from the Estimating the Circulation and Climate of the Ocean version
4 release 4 (ECCOv4, Forget et al., 2015) state estimate. Otherwise as in
figure 2.
Figure 6 shows the computed inversion function in the mid-latitude North
Atlantic at ($38^{\circ}$ N, $45^{\circ}$ W) [see the green ‘x’ in figure
4(a)]. In winter, at horizontal scales smaller than $L_{\mathrm{mix}}\approx
5$ km, we recover the linear $m(k)\approx k/\sigma_{0}$ expected from constant
stratification surface quasigeostrophic theory. However, for horizontal scales
between $L_{\mathrm{mix}}\approx 5$ km and $L_{\mathrm{pyc}}\approx 70$ km,
the inversion function, $m(k)$, becomes as steep as a $k^{3/2}$ power law.
Figure 7 shows a snapshot of the surface potential vorticity and the
geostrophic velocity from a surface quasigeostrophic model using the
wintertime inversion function. The surface potential vorticity snapshot is
similar to the idealized mixed-layer snapshot of figure 2(a), which is also
characterized by $\alpha\approx 3/2$ (but at horizontal scales between 7-50
km). Both simulations exhibit a preponderance of small-scale vortices as well
as thin filaments of surface potential vorticity. As expected, the kinetic
energy spectrum [figure 7(e)] transitions from an $\alpha\approx 3/2$ regime
to an $\alpha=1$ regime near $L_{\mathrm{mix}}=5$ km. Moreover, as shown in
figure 8, an approximate inertial range is evident between the forcing and
dissipation scales.
Figure 8: The spectral density transfer functions for surface potential
enstrophy (a) and surface kinetic energy (b) normalized by their absolute
maximum for the two simulations in figure 7.
In summer, the mixed-layer horizontal scale, $L_{\mathrm{mix}}$, becomes
smaller than 1 km and the pycnocline horizontal scale, $L_{\mathrm{pyc}}$,
decreases to 20 km. We therefore obtain a superlinear regime, with $m(k)$ as
steep as $k^{1.2}$, but only for horizontal scales between 1-20 km. Thus,
although there is a range of wavenumbers for which $m(k)$ steepens to a
superlinear wavenumber dependence in summer, this range of wavenumbers is narrow,
only found at small horizontal scales, and the steepening is much less
pronounced than in winter. At horizontal scales larger than
$L_{\mathrm{pyc}}$, the summertime inversion function flattens, with
$m(k)$ increasing like a $k^{1/2}$ power law between 50-400 km. This
flattening is due to the largely decaying nature of ocean stratification below
the stratification maximum.
As expected from a simulation with a sublinear inversion function at large
scales, the surface potential vorticity appears spatially diffuse [figure
7(d)] and comparable to the $\sigma_{0}>\sigma_{\mathrm{pyc}}$ and the
exponential simulations [figure 2(b)-(c)]. However, despite having a sublinear
inversion function, the July simulation is dynamically more similar to the
exponential simulation than to the $\sigma_{0}>\sigma_{\mathrm{pyc}}$
simulation. The July simulation displays approximately homogenized regions of
surface potential vorticity surrounded by surface kinetic energy ribbons. As a
result, the surface kinetic energy spectrum does not follow the predicted
spectrum (37).
### 5.3 The inversion function over the North Atlantic
We now present power law approximations to the inversion function $m(k)$ over
the North Atlantic in winter and summer. In winter, we obtain the power
$\alpha$, where $m(k)/k^{\alpha}\approx\mathrm{constant}$, by fitting a
straight line to $m(k)$ on a log-log plot between $2\pi/L_{\mathrm{mix}}$ and
$2\pi/L_{\mathrm{pyc}}$. A value of $\alpha=1$ is expected for constant
stratification surface quasigeostrophic theory. A value of $\alpha=2$ leads to
an inversion relation similar to two-dimensional barotropic dynamics. However,
in general, we emphasize that $\alpha$ is simply a crude measure of how
quickly $m(k)$ is increasing; we do not mean to imply that $m(k)$ in fact
takes the form of a power law. Nevertheless, the power $\alpha$ is useful
because, as $\alpha$-turbulence suggests (and the simulations in section 4
confirm), the rate of increase of the inversion function measures the spatial
locality of the resulting flow.
Figure 9: Panel (a) is as in figure 4(e), but with the additional restriction
that $L_{H}<750$ km to filter out the non-seasonal equatorial region. In panel
(b), we instead plot $\alpha$ as obtained by fitting a straight line to a log-
log plot of $m(k)$ between $2\pi/(50\,\mathrm{km})$ and $2\pi/L_{0}$ with the
same restrictions as in panel (a).
Figure 4(d) shows that we generally have $\alpha\approx 3/2$ in the wintertime
open ocean. Deviations appear at high-latitudes (e.g., the Labrador sea and
southeast of Greenland) and on continental shelves where we find regions of
low $\alpha$. However, both of these regions have small values of $L_{H}$ so
that our chosen no-slip bottom boundary condition (56) may be influencing the
computed $\alpha$ there.
A histogram of the computed values of $\alpha$ [figure 4(e)] confirms that
$\alpha\approx 1.53\pm 0.08$ in the wintertime mid-latitude open ocean. This
histogram only includes grid cells with $L_{H}>150$ km, which ensures that the
no-slip bottom boundary condition (56) is not influencing the computed
distribution.
An inversion function of $m(k)\sim k^{3/2}$ implies a surface kinetic energy
spectrum of $k^{-4/3}$ upscale of small-scale forcing [equation (36)] and a
spectrum of $k^{-7/3}$ downscale of large-scale forcing [equation (37)]. As we
expect wintertime surface buoyancy anomalies to be forced both by large-scale
baroclinic instability and by small-scale mixed-layer baroclinic instability,
the realized surface kinetic energy spectrum should be between $k^{-4/3}$ and
$k^{-7/3}$. Such a prediction is consistent with the finding that wintertime
North Atlantic geostrophic surface velocities are mainly due to surface
buoyancy anomalies (Lapeyre, 2009; González-Haro and Isern-Fontanet, 2014) and
observational evidence of a $k^{-2}$ wintertime spectrum (Callies et al.,
2015).
The universality of the $m(k)\sim k^{3/2}$ regime over the mid-latitudes is
expected because it arises from a mechanism universally present over the mid-
latitude ocean in winter; namely, the deepening of the mixed-layer. However, a
comment is required on why this regime also appears at low latitudes where we
do not observe deep wintertime mixed-layers. At low latitudes, the $m(k)\sim
k^{3/2}$ regime emerges because there is a large scale-separation between
$L_{\mathrm{mix}}$ and $L_{\mathrm{pyc}}$. The smallness of the low latitude
Coriolis parameter $f$ cancels out the shallowness of the low-latitude
mixed-layer depth, resulting in values of $L_{\mathrm{mix}}$ comparable to the
remainder of the mid-latitude North Atlantic, as seen in figure 4(b). However,
no similar cancellation occurs for $L_{\mathrm{pyc}}$ which reaches values of
$\approx 500$ km due to the smallness of the Coriolis parameter $f$ at low
latitudes. As a consequence, there is a non-seasonal $m(k)\sim k^{3/2}$ regime
at low latitudes for horizontal scales between $10-500$ km.
The analogous summertime results are presented in figure 5(e) and figure 9(a).
Near the equator, we obtain values close to $\alpha\approx 3/2$, as expected
from the weak seasonality there. In contrast, the midlatitudes generally
display $\alpha\approx 1.2-1.3$ but this superlinear regime is only present at
horizontal scales smaller than $L_{\mathrm{pyc}}\approx 20-30$ km. Figure 9(a)
shows a histogram of the measured $\alpha$ values but with the additional
restriction that $L_{H}<750$ km to filter out the near equatorial region
(where $\alpha\approx 3/2$).
The summertime inversion function shown in figure 6(d) suggests that the
inversion function flattens at horizontal scales larger than 50 km, with
$m(k)$ increasing like a $k^{1/2}$ power law. We now generalize this
calculation to the summertime midlatitude North Atlantic by fitting a straight
line to $m(k)$ on a log-log plot between $2\pi/(50\,\mathrm{km})$ and
$2\pi/L_{0}$ where $L_{0}$ is defined by
$m\left(\frac{2\,\pi}{L_{0}}\right)=m_{0}=\left[\int_{-H}^{0}\sigma^{2}(s)\mathrm{d}s\right]^{-1}$
(58)
and $m_{0}$ is defined by the second equality. In this case, we solve for
$m(k)$ using the free-slip boundary condition (57). We made this choice
because $m(k)$ must cross $m_{0}$ in the large-scale limit if we apply the
free-slip boundary condition (57). In contrast, $m(k)$ asymptotes to $m_{0}$
from above if we apply the no-slip boundary condition (56). See appendix A for
more details. In any case, if we use the free-slip boundary condition (57),
then $L_{0}$ is a horizontal length scale at which the flattening of $m(k)$
ceases and $m(k)$ instead begins to steepen in order to attain the required
$H\,k^{2}$ dependence at large horizontal scales [see equation (65)]. Over the
mid-latitude North Atlantic, $L_{0}$ has typical values of 200-500 km [figure
5(d)].
When $\alpha$ is measured between 50 km and $L_{0}$, we find typical
midlatitude values close to $\alpha\approx 1/2$ [figure 5(f)]. A histogram of
these $\alpha$ values is provided in figure 9(b), where we only consider grid
cells satisfying $L_{H}>150$ km and $L_{H}<750$ km (the latter condition
filters out near equatorial grid cells). The distribution is broad with a mean
of $\alpha=0.56\pm 0.15$ and a long tail of $\alpha>0.8$ values. Therefore,
$m(k)$ flattens considerably in response to the decaying nature of summertime
upper ocean stratification. It is not clear, however, whether the resulting
dynamics will be similar to the $\sigma_{0}>\sigma_{\mathrm{pyc}}$ case or the
exponentially stratified case in section 4. As we have seen, the summertime
simulation (in figure 7) displayed characteristics closer to the idealized
exponential case than the $\sigma_{0}>\sigma_{\mathrm{pyc}}$ case.
Nevertheless, the low summertime values of $\alpha$ indicate that buoyancy
anomalies generate shorter range velocity fields in summer than in winter.
Isern-Fontanet et al. (2014) and González-Haro et al. (2020) measured the
inversion function empirically, through equation (1), and found that the
inversion function asymptotes to a constant at large horizontal scales (270 km
near the western coast of Australia and 100 km in the Mediterranean Sea). They
suggested this flattening is due to the dominance of the interior
quasigeostrophic solution at large scales (a consequence of equation 29 in
Lapeyre and Klein, 2006). We instead suggest this flattening is intrinsic to
surface quasigeostrophy. In our calculation, the inversion function does not
become constant at horizontal scales smaller than 400 km. However, if the
appropriate bottom boundary condition is the no-slip boundary condition (56),
then the inversion function asymptotes to a constant value at horizontal scales larger
than $L_{H}$ (appendix A).
## 6 Discussion and conclusion
As reviewed in the introduction, surface geostrophic velocities over the Gulf
Stream, the Kuroshio, and the Southern Ocean are primarily induced by surface
buoyancy anomalies in winter (Lapeyre, 2009; Isern-Fontanet and Hascoët, 2014;
González-Haro and Isern-Fontanet, 2014; Qiu et al., 2016; Miracca-Lage et al.,
2022). However, the kinetic energy spectra found in observations and numerical
models are too steep to be consistent with uniformly stratified surface
quasigeostrophic theory (Blumen, 1978; Callies and Ferrari, 2013). By
generalizing surface quasigeostrophic theory to account for variable
stratification, we have shown that surface buoyancy anomalies can generate a
variety of dynamical regimes depending on the stratification’s vertical
structure. Buoyancy anomalies generate longer range velocity fields over
decreasing stratification [$\sigma^{\prime}(z)\leq 0$] and shorter range
velocity fields over increasing stratification [$\sigma^{\prime}(z)\geq 0$].
As a result, the surface kinetic energy spectrum is steeper over decreasing
stratification than over increasing stratification. An exception occurs if
there is a large difference between the surface stratification and the deep
ocean stratification (as in the exponentially stratified example of section 4).
In this case, we find regions of approximately homogenized surface buoyancy
surrounded by kinetic energy ribbons (similar to Arbic and Flierl, 2003) and
this spatial reorganization of the flow results in a steep kinetic energy
spectrum. By applying the variable stratification theory to the wintertime
North Atlantic and assuming that mixed-layer instability acts as a narrowband
small-scale surface buoyancy forcing, we find that the theory predicts a
surface kinetic energy spectrum between $k^{-4/3}$ and $k^{-7/3}$, which is
consistent with the observed wintertime $k^{-2}$ spectrum (Sasaki et al.,
2014; Callies et al., 2015; Vergara et al., 2019). There remains the problem
that mixed-layer instability may not be localized at a particular horizontal
scale but instead forces the surface flow over a wide range of scales (Khatri et
al., 2021). In this case, we suggest that the main consequence of this
broadband forcing is again to flatten the $k^{-7/3}$ spectrum.
Over the summertime North Atlantic, buoyancy anomalies generate a more local
velocity field and the surface kinetic energy spectrum is flatter than in
winter. This contradicts the $k^{-3}$ spectrum found in observations and
numerical models (Sasaki et al., 2014; Callies et al., 2015). However,
observations also suggest that the surface geostrophic velocity is no longer
dominated by the surface buoyancy induced contribution, suggesting the
importance of interior potential vorticity for the summertime surface velocity
(González-Haro and Isern-Fontanet, 2014; Miracca-Lage et al., 2022). As such,
the surface kinetic energy predictions of the present model, which neglects
interior potential vorticity, are not valid over the summertime North
Atlantic.
The situation in the North Pacific is broadly similar to that in the North
Atlantic. In the Southern Ocean, however, the weak depth-averaged
stratification leads to values of $L_{H}$ close to 150-200 km. As such, the
bottom boundary becomes important at smaller horizontal scales than in the
North Atlantic. Regardless of whether the appropriate bottom boundary
condition is no-slip (56) or free-slip (57), in both cases, the resulting
inversion function implies a steepening to a $k^{-3}$ surface kinetic energy
spectrum (appendix A). The importance of the bottom boundary in the Southern
Ocean may explain the observed steepness of the surface kinetic energy spectra
[between $k^{-2.5}$ to $k^{-3}$ (Vergara et al., 2019)] even though the
surface geostrophic velocity seems to be largely due to surface buoyancy
anomalies throughout the year (González-Haro and Isern-Fontanet, 2014).
The claims made in this article can be explicitly tested in a realistic high-
resolution ocean model; this can be done by finding regions where the surface
streamfunction as reconstructed from sea surface height is highly correlated
to the surface streamfunction as reconstructed from sea surface buoyancy (or
temperature, as in González-Haro and Isern-Fontanet, 2014). Then, in regions
where both streamfunctions are highly correlated, the theory predicts that the
inversion function, as computed from the stratification [equation (15)],
should be identical to the inversion function computed through the surface
streamfunction and buoyancy fields [equations (1) and (16)]. Moreover, in
these regions, the model surface kinetic energy spectrum must be between the
inverse cascade and forward cascade kinetic energy spectra [equations (36) and
(37)].
Finally, the vertical structure equation (9), along with the inversion relation
(13) between $\hat{\theta}_{\bm{k}}$ and $\hat{\psi}_{\bm{k}}$, suggests the
possibility of measuring the buoyancy frequency’s vertical structure, $N(z)$,
using satellite observations. This approach, however, is limited to regions
where the surface geostrophic velocity is largely due to surface buoyancy
anomalies. By combining satellite measurements of sea surface temperature and
sea surface height, we can use the inversion relation (13) to solve for the
inversion function. Then we obtain $N(z)$ by solving the inverse problem for
the vertical structure equation (9). How practical this approach is to
measuring the buoyancy frequency’s vertical structure remains to be seen.
###### Acknowledgements.
We sincerely thank Bob Hallberg, Isaac Held, and Sonya Legg for useful
comments on earlier drafts of this manuscript. We also thank Guillaume Lapeyre
and an anonymous reviewer whose comments and recommendations greatly helped
with our presentation. This report was prepared by Houssam Yassin under award
NA18OAR4320123 from the National Oceanic and Atmospheric Administration, U.S.
Department of Commerce. The statements, findings, conclusions, and
recommendations are those of the authors and do not necessarily reflect the
views of the National Oceanic and Atmospheric Administration, or the U.S.
Department of Commerce. The ECCO data (ECCO Consortium et al., 2021a, b) is
available from NASA’s Physical Oceanography Distributed Active Archive Center
(https://podaac.jpl.nasa.gov/ECCO). The pyqg model is available on Github
(https://github.com/pyqg/pyqg).
## Appendix A: The small- and large-scale limits
### A.1 The small-scale limit
Let $h$ be a characteristic vertical length scale associated with $\sigma(z)$
near $z=0$. Then, in the small-scale limit, $k\,\sigma_{0}\,h\gg 1$, the
infinite bottom boundary condition (11) is appropriate. With the substitution
$\displaystyle\Psi(z)=\sigma(z)\,P(z),$ (59)
we transform the vertical structure equation (9) into a Schrödinger equation,
$\frac{\mathrm{d}^{2}P}{\mathrm{d}z^{2}}=\left[-\frac{1}{\sigma}\frac{\mathrm{d}^{2}\sigma}{\mathrm{d}z^{2}}+2\left(\frac{1}{\sigma}\frac{\mathrm{d}\sigma}{\mathrm{d}z}\right)^{2}+k^{2}\,\sigma^{2}\right]P,$
(60)
with a lower boundary condition
$\sigma\,P\rightarrow 0\quad\text{ as }\quad z\rightarrow-\infty.$ (61)
In the limit $k\,\sigma_{0}\,h\gg 1$, the solution to the Schrödinger
equation (60) is given by
$\Psi_{k}(z)\approx\sqrt{\frac{\sigma(z)}{\sigma_{0}}}\,\exp\left({k\,\int_{0}^{z}\sigma(s)\mathrm{d}s}\right).$
(62)
On substituting $\Psi_{k}(z)$ into the definition of the inversion function
(15), we obtain $m(k)\approx k/\sigma_{0}$ to leading order in
$(k\sigma_{0}h)^{-1}$. Therefore, the inversion relation in the small-scale
limit coincides with the familiar inversion relation of constant
stratification surface quasigeostrophic theory (Blumen, 1978; Held et al.,
1995).
### A.2 The large-scale free-slip limit
Let $k_{H}=2\pi/L_{H}$, where the horizontal length scale $L_{H}$ is defined
in equation (55). Then, in the large-scale limit, $k/k_{H}\ll 1$, we assume a
solution of the form
$\Psi_{k}(z)=\Psi_{k}^{(0)}(z)+\left(\frac{k}{k_{H}}\right)^{2}\,\Psi_{k}^{(1)}(z)+\cdots.$
(63)
Substituting the series expansion (63) into the vertical structure equation
(9) and applying the free-slip bottom boundary condition (57) yields
$\Psi_{k}(z)\approx A\left[1+k^{2}\int_{-H}^{z}\sigma^{2}(s)\left(s+H\right)\mathrm{d}s+\cdots\right],$
(64)
where $A$ is a constant determined by the upper boundary condition (10). To
leading order in $k/k_{H}$, the large-scale vertical structure is independent
of depth.
Substituting the solution (64) into the definition of the inversion function
(15) gives
$m(k)\approx H\,k^{2}.$ (65)
Therefore, over a free-slip bottom boundary, the large-scale dynamics resemble
two-dimensional vorticity dynamics, generalizing the result of Tulloch and
Smith (2006) to arbitrary stratification $\sigma(z)$.
### A.3 The large-scale no-slip limit
Substituting the expansion (63) into the vertical structure equation (9) and
applying the no-slip lower boundary condition (56) yields
$\Psi_{k}(z)\approx B\left[\int_{-H}^{z}\sigma^{2}(s)\,\mathrm{d}s+k^{2}\int_{-H}^{z}\sigma^{2}(s_{3})\int_{-H}^{s_{3}}\int_{-H}^{s_{2}}\sigma^{2}(s_{1})\,\mathrm{d}s_{1}\,\mathrm{d}s_{2}\,\mathrm{d}s_{3}\right],$
(66)
where $B$ is a constant determined by the upper boundary condition (10).
Substituting the solution (66) into the definition of the inversion function
(15) gives
$m(k)\approx m_{1}\left(k_{\sigma}^{2}+k^{2}\right),$ (67)
where $k_{\sigma}=\sqrt{m_{0}/m_{1}}$ is analogous to the deformation
wavenumber, the constant $m_{0}$ is given by
$m_{0}=\left[\int_{-H}^{0}\sigma^{2}(s)\mathrm{d}s\right]^{-1},$ (68)
and $m_{1}$ is some constant determined by integrals of $\sigma(z)$. If
$\sigma(z)$ is positive then both $m_{0}$ and $m_{1}$ are also positive.
Therefore, over a no-slip bottom boundary, the large-scale dynamics resemble
those of the equivalent barotropic model.
## Appendix B: Inversion function for piecewise constant stratification
We seek a solution to the vertical structure equation (9) for the piecewise
constant stratification (39) with upper boundary condition (10) and the
infinite lower boundary condition (11). The solution has the form
$\Psi_{k}(z)=\cosh\left(\sigma_{0}\,k\,z\right)+a_{2}\sinh\left(\sigma_{0}\,k\,z\right)$
(69)
for $-h\leq z\leq 0$, and
$\Psi_{k}(z)=a_{3}\,e^{\sigma_{\mathrm{pyc}}k(z+h)}$ (70)
for $-\infty<z<-h$. To determine $a_{2}$ and $a_{3}$, we require $\Psi_{k}(z)$
to be continuous across $z=-h$ and that its derivative satisfy
$\frac{1}{\sigma_{0}^{2}}\,\frac{\mathrm{d}\Psi_{k}(-h^{+})}{\mathrm{d}z}=\frac{1}{\sigma_{\mathrm{pyc}}^{2}}\,\frac{\mathrm{d}\Psi_{k}(-h^{-})}{\mathrm{d}z},$
(71)
where the $-$ and $+$ superscripts indicate limits from below and from above,
respectively. Solving for $a_{2}$ and substituting the vertical structure (69)
into the definition of the inversion function (15) then yields $m(k)$.
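Carrying out this algebra (our own derivation, sketched here rather than quoted from the text) gives $a_{2}=\left[\sinh(\sigma_{0}kh)+r\cosh(\sigma_{0}kh)\right]/\left[\cosh(\sigma_{0}kh)+r\sinh(\sigma_{0}kh)\right]$ with $r=\sigma_{0}/\sigma_{\mathrm{pyc}}$. Assuming $m(k)=\sigma_{0}^{-2}\,\mathrm{d}\Psi_{k}(0)/\mathrm{d}z$ with $\Psi_{k}(0)=1$, this yields $m(k)=a_{2}\,k/\sigma_{0}$, which recovers both limits:

```python
import numpy as np

def m_piecewise(k, sigma0, sigma_pyc, h):
    """Inversion function for two-layer piecewise constant stratification,
    from our own solution of the matching conditions (71).  The surface form
    m(k) = a2 * k / sigma0 assumes Psi_k(0) = 1 and m(k) = Psi_k'(0)/sigma0^2."""
    x = sigma0 * k * h
    r = sigma0 / sigma_pyc
    a2 = (np.sinh(x) + r * np.cosh(x)) / (np.cosh(x) + r * np.sinh(x))
    return a2 * k / sigma0

# small-scale limit: m(k) -> k/sigma0; large-scale limit: m(k) -> k/sigma_pyc
sigma0, sigma_pyc, h = 5.0, 50.0, 100.0
assert np.isclose(m_piecewise(1.0, sigma0, sigma_pyc, h), 1.0 / sigma0)
assert np.isclose(m_piecewise(1e-7, sigma0, sigma_pyc, h), 1e-7 / sigma_pyc, rtol=1e-2)
```

For $\sigma_{\mathrm{pyc}}>\sigma_{0}$, $m(k)$ transitions from $k/\sigma_{0}$ at small scales to the shallower $k/\sigma_{\mathrm{pyc}}$ at large scales, consistent with the flattening described in section 4.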
## Appendix C: The numerical model
We solve the time-evolution equation (6) using the pseudo-spectral pyqg model
(Abernathey et al., 2019). To take stratification into account, we use the
inversion relation (14). Given a stratification profile $\sigma(z)$ from ECCOv4, we first interpolate it with a cubic spline onto a vertical grid of 350 points. We then numerically
solve the vertical structure equation (9), along with boundary conditions (10)
and either (56) or (57), and obtain the vertical structure at each wavevector
$\bm{k}$. Using the definition of the inversion function (15) then gives
$m(k)$.
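The interpolation step above can be sketched as follows; the coarse profile values here are made up for illustration and do not come from ECCOv4.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical coarse stratification profile: depth levels (m, negative down)
# and sigma values at those levels.
z_coarse = np.array([-4000.0, -3000.0, -2000.0, -1000.0, -500.0, -100.0, 0.0])
sigma_coarse = np.array([0.2, 0.25, 0.35, 0.6, 1.0, 1.8, 2.0])

# Cubic-spline interpolation onto a vertical grid with 350 points,
# as described in the text.
spline = CubicSpline(z_coarse, sigma_coarse)
z_fine = np.linspace(-4000.0, 0.0, 350)
sigma_fine = spline(z_fine)
print(sigma_fine.shape)
```

The fine-grid profile then enters the discretized vertical structure equation (9), whose solution at each wavevector gives $m(k)$ via (15).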
We apply a large-scale forcing, $F$, between the (non-dimensional) wavenumbers
$3.5<k<4.5$ in all our simulations, corresponding to horizontal length scales of 88–114 km. Otherwise, the forcing $F$ is as in Smith et al. (2002). The
dissipation term can be written as
$D=r_{d}\,\theta+\mathrm{ssd},$ (72)
where $r_{d}$ is a damping rate and $\mathrm{ssd}$ is small-scale dissipation.
Small-scale dissipation is through an exponential surface potential enstrophy
filter as in Arbic and Flierl (2003). Simulations have a horizontal resolution of $1024^{2}$ grid points.
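An exponential spectral filter of the kind used for small-scale dissipation can be sketched as below; the functional form and the parameter values (`a`, `p`, cutoff) are assumptions for illustration, not the paper's exact filter.

```python
import numpy as np

def exp_filter(k, k_cut, a=23.6, p=4):
    """Multiplicative spectral filter: unity below k_cut, exponential
    roll-off above it. a and p are assumed steepness parameters."""
    f = np.ones_like(k, dtype=float)
    mask = k > k_cut
    f[mask] = np.exp(-a * (k[mask] - k_cut) ** p)
    return f

# Normalized wavenumbers from 0 to 1; modes below the cutoff are untouched.
k = np.linspace(0.0, 1.0, 11)
f = exp_filter(k, k_cut=0.65)
print(f)
```

In practice such a filter is applied multiplicatively to the spectral potential enstrophy field at each time step, removing variance only near the grid scale.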
## References
* Abernathey et al. (2019) Abernathey, R., and Coauthors, 2019: pyqg/pyqg: v0.3.0. Zenodo, https://doi.org/10.5281/zenodo.3551326.
* Arbic and Flierl (2003) Arbic, B. K., and G. R. Flierl, 2003: Coherent vortices and kinetic energy ribbons in asymptotic, quasi two-dimensional f-plane turbulence. Physics of Fluids, 15, 2177–2189, https://doi.org/10.1063/1.1582183.
* Arbic and Flierl (2004) Arbic, B. K., and G. R. Flierl, 2004: Baroclinically Unstable Geostrophic Turbulence in the Limits of Strong and Weak Bottom Ekman Friction: Application to Midocean Eddies. J. Phys. Oceanogr., 34, 2257–2273, https://doi.org/10.1175/1520-0485(2004)034<2257:BUGTIT>2.0.CO;2.
* Blumen (1978) Blumen, W., 1978: Uniform Potential Vorticity Flow: Part I. Theory of Wave Interactions and Two-Dimensional Turbulence. J. Atmos. Sci., 35, 774–783, https://doi.org/10.1175/1520-0469(1978)035<0774:UPVFPI>2.0.CO;2.
* Boccaletti et al. (2007) Boccaletti, G., R. Ferrari, and B. Fox-Kemper, 2007: Mixed Layer Instabilities and Restratification. J. Phys. Oceanogr., 37, 2228–2250, https://doi.org/10.1175/JPO3101.1.
* Bretherton (1966) Bretherton, F. P., 1966: Critical layer instability in baroclinic flows. Quart. J. Roy. Meteor. Soc., 92, 325–334, https://doi.org/10.1002/qj.49709239302.
* Burgess et al. (2015) Burgess, B. H., R. K. Scott, and T. G. Shepherd, 2015: Kraichnan–Leith–Batchelor similarity theory and two-dimensional inverse cascades. J. Fluid Mech., 767, 467–496, https://doi.org/10.1017/jfm.2015.26.
* Callies and Ferrari (2013) Callies, J., and R. Ferrari, 2013: Interpreting Energy and Tracer Spectra of Upper-Ocean Turbulence in the Submesoscale Range (1–200 km). J. Phys. Oceanogr., 43, 2456–2474, https://doi.org/10.1175/JPO-D-13-063.1.
* Callies et al. (2015) Callies, J., R. Ferrari, J. M. Klymak, and J. Gula, 2015: Seasonality in submesoscale turbulence. Nature Communications, 6, https://doi.org/10.1038/ncomms7862.
* Callies et al. (2016) Callies, J., G. Flierl, R. Ferrari, and B. Fox-Kemper, 2016: The role of mixed-layer instabilities in submesoscale turbulence. J. Fluid Mech., 788, 5–41, https://doi.org/10.1017/jfm.2015.700.
* Charney (1971) Charney, J. G., 1971: Geostrophic Turbulence. J. Atmos. Sci., 28, 1087–1095, https://doi.org/10.1175/1520-0469(1971)028<1087:GT>2.0.CO;2.
* ECCO Consortium et al. (2021a) ECCO Consortium, I. Fukumori, O. Wang, I. Fenty, G. Forget, P. Heimbach, and R. M. Ponte, 2021a: ECCO Central Estimate (Version 4 Release 4). Date accessed 28 January 2022, https://ecco.jpl.nasa.gov/drive/files.
* ECCO Consortium et al. (2021b) ECCO Consortium, I. Fukumori, O. Wang, I. Fenty, G. Forget, P. Heimbach, and R. M. Ponte, 2021b: Synopsis of the ECCO Central Production Global Ocean and Sea-Ice State Estimate (Version 4 Release 4). https://doi.org/10.5281/zenodo.4533349.
* Forget et al. (2015) Forget, G., J.-M. Campin, P. Heimbach, C. N. Hill, R. M. Ponte, and C. Wunsch, 2015: ECCO version 4: an integrated framework for non-linear inverse modeling and global ocean state estimation. Geoscientific Model Development, 8, 3071–3104, https://doi.org/10.5194/gmd-8-3071-2015.
* Foussard et al. (2017) Foussard, A., S. Berti, X. Perrot, and G. Lapeyre, 2017: Relative dispersion in generalized two-dimensional turbulence. J. Fluid Mech., 821, 358–383, https://doi.org/10.1017/jfm.2017.253.
* Gkioulekas and Tung (2007) Gkioulekas, E., and K. K. Tung, 2007: A new proof on net upscale energy cascade in two-dimensional and quasi-geostrophic turbulence. J. Fluid Mech., 576, 173–189, https://doi.org/10.1017/S0022112006003934.
* González-Haro and Isern-Fontanet (2014) González-Haro, C., and J. Isern-Fontanet, 2014: Global ocean current reconstruction from altimetric and microwave SST measurements. J. Geophys. Res.: Oceans, 119, 3378–3391, https://doi.org/10.1002/2013JC009728.
* González-Haro et al. (2020) González-Haro, C., J. Isern-Fontanet, P. Tandeo, and R. Garello, 2020: Ocean Surface Currents Reconstruction: Spectral Characterization of the Transfer Function Between SST and SSH. J. Geophys. Res.: Oceans, 125, https://doi.org/10.1029/2019JC015958.
* Held et al. (1995) Held, I. M., R. T. Pierrehumbert, S. T. Garner, and K. L. Swanson, 1995: Surface quasi-geostrophic dynamics. J. Fluid Mech., 282, 1–20, https://doi.org/10.1017/S0022112095000012.
* Isern-Fontanet and Hascoët (2014) Isern-Fontanet, J., and E. Hascoët, 2014: Diagnosis of high-resolution upper ocean dynamics from noisy sea surface temperatures. J. Geophys. Res.: Oceans, 119, 121–132, https://doi.org/10.1002/2013JC009176.
* Isern-Fontanet et al. (2014) Isern-Fontanet, J., M. Shinde, and C. González-Haro, 2014: On the Transfer Function between Surface Fields and the Geostrophic Stream Function in the Mediterranean Sea. J. Phys. Oceanogr., 44, 1406–1423, https://doi.org/10.1175/JPO-D-13-0186.1.
* Isern‐Fontanet et al. (2006) Isern‐Fontanet, J., B. Chapron, G. Lapeyre, and P. Klein, 2006: Potential use of microwave sea surface temperatures for the estimation of ocean currents. Geophys. Res. Lett., 33, https://doi.org/10.1029/2006GL027801.
* Isern‐Fontanet et al. (2008) Isern‐Fontanet, J., G. Lapeyre, P. Klein, B. Chapron, and M. W. Hecht, 2008: Three-dimensional reconstruction of oceanic mesoscale currents from surface information. J. Geophys. Res.: Oceans, 113, https://doi.org/10.1029/2007JC004692.
* Iwayama and Watanabe (2010) Iwayama, T., and T. Watanabe, 2010: Green’s function for a generalized two-dimensional fluid. Physical Review E, 82, https://doi.org/10.1103/PhysRevE.82.036307.
* Khatri et al. (2021) Khatri, H., S. M. Griffies, T. Uchida, H. Wang, and D. Menemenlis, 2021: Role of mixed-layer instabilities in the seasonal evolution of eddy kinetic energy spectra in a global submesoscale permitting simulation. Geophys. Res. Lett., 48, e2021GL094777, https://doi.org/10.1029/2021GL094777.
* Kraichnan (1967) Kraichnan, R. H., 1967: Inertial Ranges in Two‐Dimensional Turbulence. Physics of Fluids, 10, 1417–1423, https://doi.org/10.1063/1.1762301.
* Kraichnan (1971) Kraichnan, R. H., 1971: Inertial-range transfer in two and three‐dimensional turbulence. J. Fluid Mech., 47, 525–535, https://doi.org/10.1017/S0022112071001216.
* LaCasce (2017) LaCasce, J. H., 2017: The Prevalence of Oceanic Surface Modes. Geophys. Res. Lett., 44, 11,097–11,105, https://doi.org/10.1002/2017GL075430.
* LaCasce and Mahadevan (2006) LaCasce, J. H., and A. Mahadevan, 2006: Estimating subsurface horizontal and vertical velocities from sea-surface temperature. J. Mar. Res., 64, 695–721, https://doi.org/10.1357/002224006779367267.
* Lapeyre (2009) Lapeyre, G., 2009: What Vertical Mode Does the Altimeter Reflect? On the Decomposition in Baroclinic Modes and on a Surface-Trapped Mode. J. Phys. Oceanogr., 39, 2857–2874, https://doi.org/10.1175/2009JPO3968.1.
* Lapeyre (2017) Lapeyre, G., 2017: Surface Quasi-Geostrophy. Fluids, 2, 7, https://doi.org/10.3390/fluids2010007.
* Lapeyre and Klein (2006) Lapeyre, G., and P. Klein, 2006: Dynamics of the Upper Oceanic Layers in Terms of Surface Quasigeostrophy Theory. J. Phys. Oceanogr., 36, 165–176, https://doi.org/10.1175/JPO2840.1.
* Larichev and McWilliams (1991) Larichev, V. D., and J. C. McWilliams, 1991: Weakly decaying turbulence in an equivalent‐barotropic fluid. Physics of Fluids A: Fluid Dynamics, 3, 938–950, https://doi.org/10.1063/1.857970.
* Lilly (1989) Lilly, D. K., 1989: Two-Dimensional Turbulence Generated by Energy Sources at Two Scales. J. Atmos. Sci., 46, 2026–2030, https://doi.org/10.1175/1520-0469(1989)046<2026:TDTGBE>2.0.CO;2.
* Maltrud and Vallis (1991) Maltrud, M. E., and G. K. Vallis, 1991: Energy spectra and coherent structures in forced two-dimensional and beta-plane turbulence. J. Fluid Mech., 228, 321–342, https://doi.org/10.1017/S0022112091002720.
* Mensa et al. (2013) Mensa, J. A., Z. Garraffo, A. Griffa, T. M. Özgökmen, A. Haza, and M. Veneziani, 2013: Seasonality of the submesoscale dynamics in the Gulf Stream region. Ocean Dynamics, 63, 923–941, https://doi.org/10.1007/s10236-013-0633-1.
* Miracca-Lage et al. (2022) Miracca-Lage, M., C. González-Haro, D. C. Napolitano, J. Isern-Fontanet, and P. S. Polito, 2022: Can the Surface Quasi-Geostrophic (SQG) Theory Explain Upper Ocean Dynamics in the South Atlantic? J. Geophys. Res.: Oceans, 127, e2021JC018001, https://doi.org/10.1029/2021JC018001.
* Pierrehumbert et al. (1994) Pierrehumbert, R. T., I. M. Held, and K. L. Swanson, 1994: Spectra of local and nonlocal two-dimensional turbulence. Chaos, Solitons & Fractals, 4, 1111–1116, https://doi.org/10.1016/0960-0779(94)90140-6.
* Polvani et al. (1989) Polvani, L. M., N. J. Zabusky, and G. R. Flierl, 1989: Two-layer geostrophic vortex dynamics. Part 1. Upper-layer V-states and merger. J. Fluid Mech., 205, 215–242, https://doi.org/10.1017/S0022112089002016.
* Qiu et al. (2020) Qiu, B., S. Chen, P. Klein, H. Torres, J. Wang, L.-L. Fu, and D. Menemenlis, 2020: Reconstructing Upper-Ocean Vertical Velocity Field from Sea Surface Height in the Presence of Unbalanced Motion. J. Phys. Oceanogr., 50, 55–79, https://doi.org/10.1175/JPO-D-19-0172.1.
* Qiu et al. (2016) Qiu, B., S. Chen, P. Klein, C. Ubelmann, L.-L. Fu, and H. Sasaki, 2016: Reconstructability of Three-Dimensional Upper-Ocean Circulation from SWOT Sea Surface Height Measurements. J. Phys. Oceanogr., 46, 947–963, https://doi.org/10.1175/JPO-D-15-0188.1.
* Sasaki et al. (2014) Sasaki, H., P. Klein, B. Qiu, and Y. Sasai, 2014: Impact of oceanic-scale interactions on the seasonal modulation of ocean dynamics by the atmosphere. Nature Communications, 5, https://doi.org/10.1038/ncomms6636.
* Schorghofer (2000) Schorghofer, N., 2000: Energy spectra of steady two-dimensional turbulent flows. Physical Review E, 61, 6572–6577, https://doi.org/10.1103/PhysRevE.61.6572.
* Smith et al. (2002) Smith, K. S., G. Boccaletti, C. C. Henning, I. Marinov, C. Y. Tam, I. M. Held, and G. K. Vallis, 2002: Turbulent diffusion in the geostrophic inverse cascade. J. Fluid Mech., 469, 13–48, https://doi.org/10.1017/S0022112002001763.
* Stammer (1997) Stammer, D., 1997: Global Characteristics of Ocean Variability Estimated from Regional TOPEX/POSEIDON Altimeter Measurements. J. Phys. Oceanogr., 27, 1743–1769, https://doi.org/10.1175/1520-0485(1997)027<1743:GCOOVE>2.0.CO;2.
* Sukhatme and Smith (2009) Sukhatme, J., and L. M. Smith, 2009: Local and nonlocal dispersive turbulence. Phys. Fluids, 21, 056603, https://doi.org/10.1063/1.3141499.
* Tulloch and Smith (2006) Tulloch, R., and K. S. Smith, 2006: A theory for the atmospheric energy spectrum: Depth-limited temperature anomalies at the tropopause. Proc. Natl. Acad. Sci. (USA), 103, 14,690–14,694, https://doi.org/10.1073/pnas.0605494103.
* Vergara et al. (2019) Vergara, O., R. Morrow, I. Pujol, G. Dibarboure, and C. Ubelmann, 2019: Revised Global Wave Number Spectra From Recent Altimeter Observations. J. Geophys. Res.: Oceans, 124, 3523–3537, https://doi.org/10.1029/2018JC014844.
* Watanabe and Iwayama (2004) Watanabe, T., and T. Iwayama, 2004: Unified Scaling Theory for Local and Non-local Transfers in Generalized Two-dimensional Turbulence. Journal of the Physical Society of Japan, 73, 3319–3330, https://doi.org/10.1143/JPSJ.73.3319.
* Wunsch (1997) Wunsch, C., 1997: The Vertical Partition of Oceanic Horizontal Kinetic Energy. J. Phys. Oceanogr., 27, 1770–1794, https://doi.org/10.1175/1520-0485(1997)027<1770:TVPOOH>2.0.CO;2.
Representations of Kronecker Quivers and Steiner Bundles on Grassmannians
Daniel Bissinger and Rolf Farnsteiner
Mathematisches Seminar, Christian-Albrechts-Universität zu Kiel, Heinrich-Hecht-Platz 6, 24118 Kiel, Germany
Let $\KK$ be an algebraically closed field. Connections between representations of the generalized Kronecker quivers $K_r$ and vector bundles on $\PP^{r-1}$ have been known for quite some time. This
article is concerned with a particular aspect of this correspondence, involving more generally Steiner bundles on Grassmannians $\Gr_d(\KK^r)$ and certain full subcategories $\repp(K_r,d)$ of relative projective
$K_r$-representations. Building on a categorical equivalence first explicitly established by Jardim and Prata [40], we employ representation-theoretic techniques provided by Auslander-Reiten theory and reflection
functors to organize indecomposable Steiner bundles in a manner that facilitates the study of bundles enjoying certain properties such as uniformity and homogeneity. Conversely, computational results on Steiner bundles
motivate investigations in $\repp(K_r,d)$, which elicit the conceptual sources of some recent work on the subject.
From a purely representation-theoretic vantage point, our paper initiates the investigation of certain full subcategories of the category of $K_r$-representations, which is wild for $r\!\ge\!3$. These subcategories may be characterized as being right Hom-orthogonal to certain algebraic families of elementary test modules.
§ INTRODUCTION
Let $k$ be an algebraically closed field of characteristic $p\!>\!0$. In their groundbreaking article [32], Friedlander and Pevtsova associated vector bundles to representations of infinitesimal group schemes by
means of the so-called universal nilpotent operators. In subsequent work [18], Carlson-Friedlander-Suslin computed some of these bundles for the second Frobenius kernel $\GG_{a(2)}$ of the additive group
$\GG_a$. The bundles considered in [18] are kernels of the nilpotent operators associated to the so-called $W$-modules. These modules are graded with respect to the standard grading of the “group algebra” $k\GG_{a(2)}$, and the grading induces a decomposition of the associated vector bundles.
The vector bundles studied in [18] are defined on the projective line $\PP^1$ and hence decompose into a direct sum of Serre shifts of the structure sheaf. For group schemes of type $\GG_{a(r)}$ there are natural
generalizations of $W$-modules, whose bundles have base space $\PP^{r-1}$. When studying their graded pieces as mentioned above, one is naturally led to bundles on $\PP^{r-1}$ that are associated to certain
representations of the $r$-Kronecker quiver $K_r$. The observation that their duals are examples of the so-called Steiner bundles motivated the investigations of this paper.
Since the introduction of the notion of Steiner bundles by Dolgachev-Kapranov [23], the original definition has been generalized to include vector bundles over projective varieties with short resolutions or
coresolutions given by certain exceptional pairs. We will confine our attention to Steiner bundles of Grassmannians, whose defining resolution is given by the universal bundle and trivial bundles, cf. [1]. In the
aforementioned context of group schemes, the study of the interplay between representations of elementary abelian $p$-groups and vector bundles on Grassmannians was initiated in [16]. Being based on
morphisms between Serre shifts of trivial vector bundles, the approach in loc. cit. differs from the point of view taken here, which can also be adapted to representations of elementary abelian groups.
The by now numerous connections between vector bundles on $\PP^{r-1}$ and representations of the generalized Kronecker quiver $K_r$ were apparently first systematically exploited in Hulek's article [39].[Hulek considers representations of dimension vectors $(n,n)$, whereas our approach is based on dimension vectors $(m,n)$ with $m\!<\!n$.] While in most cases the authors work over the complex numbers, the examples alluded to above guide us to work in greater generality: throughout, $\KK$ is assumed to be an algebraically closed field of arbitrary characteristic. All vector spaces are assumed to be
finite-dimensional over $\KK$.
Our paper is organized as follows. After recalling basic features of representations of Kronecker quivers, we introduce in Section <ref> the full subcategories $\repp(K_r,d)$ of the category $\rep(K_r)$ of
representations of the Kronecker quiver that turn out to correspond to Steiner bundles on the Grassmannian of $d$-planes of $\KK^r$. Their objects may be described as those having trivial rank varieties; they may also be
thought of as relative projective modules of certain ring extensions. The category $\repp(K_r,1)$ coincides with the category $\EKP(K_r)$ of equal kernels representations, whose definition was inspired by [18], and
which was studied in our context in a number of papers including [64, 8]. Our first main result, Theorem <ref>, describes the objects $M \in \repp(K_r,d)$ via a family $(E(\fv))_{\fv \in \Gr_d(\KK^r)}$ of
elementary test modules. This observation is the gateway for our applications of the shift functor $\sigma_{K_r} : \rep(K_r) \lra \rep(K_r)$ and Auslander-Reiten theory. We summarize some of our findings as follows:
Suppose that $r\!\ge\!3$ and $d \in \{1,\ldots, r\!-\!1\}$. Then the following statements hold:
* A representation $M \in \rep(K_r)$ belongs to $\repp(K_r,d)$ if and only if $\Hom_{K_r}(E(\fv),M)\!=\!(0)$ for every $\fv \in \Gr_d(\KK^r)$.
* $\repp(K_r,d) \subseteq \EKP(K_r)$ is a torsion-free class containing all preprojective representations.
* We have $\repp(K_r,r\!-\!1) \subseteq \repp(K_r,r\!-\!2) \subseteq \cdots \subseteq \repp(K_r,2) \subseteq \repp(K_r,1)\!=\!\EKP(K_r)$.
* We have $\sigma_{K_r}^{-1}(\repp(K_r,d)) \subseteq \repp(K_r,r\!-\!1)$.
Generalizing a consequence of Westwick's fundamental result [63], we introduce, for a given $M\!=\!(M_1,M_2, (M(\gamma_i))_{1\le i \le r}) \in \rep(K_r)$ and $d \in \{1,\ldots, r\}$, the invariant $\Delta_M(d)\!:=\!
\dim_\KK M_2\!-\!d\dim_\KK M_1 \in \ZZ$ and show that $\Delta_M(d)\!\ge \min\{\dim_\KK M_1,d\}(r\!-\!d)$ whenever $M \in \repp(K_r,d)$. Moreover, if $(V_1,V_2)$ is a pair of vector spaces such that
$\Delta_{(V_1,V_2)}(d)\!\ge\!d(r\!-\!d)$, then there always exists $M \in \repp(K_r,d)$ such that $(M_1,M_2)\!=\!(V_1,V_2)$.
Given $d \in \{1,\ldots, r\!-\!1\},$ we introduce in Section <ref> the functor $\TilTheta_d : \rep(K_r) \lra \Coh(\Gr_d(\KK^r))$ which assigns to a representation of $K_r$ a coherent sheaf on $\Gr_d(\KK^r)$. The essential
image of $\repp(K_r,d)$ under $\TilTheta_d$ coincides with the full subcategory $\StVect(\Gr_d(\KK^r))$ of Steiner bundles. By definition (see [1]), each Steiner bundle $\cF$ affords a short resolution
\[ (0) \lra \cU_{(r,d)}^s \lra \cO_{\Gr_d(\KK^r)}^t \lra \cF \lra (0),\]
where $\cU_{(r,d)}$ denotes the universal subbundle on $\Gr_d(\KK^r)$. Since $\repp(K_r,d) \subseteq \repp(K_r,e)$ for $e\!\le\!d$, one can in fact associate Steiner bundles $\TilTheta_e(M) \in
\StVect(\Gr_e(\KK^r))$ to each $M \in \repp(K_r,d)$ ($e\!\le\!d$). The functor $\TilTheta_e : \repp(K_r,d) \lra \StVect(\Gr_e(\KK^r))$ is full and faithful and $\TilTheta_d : \repp(K_r,d) \lra \StVect(\Gr_d(\KK^r))$ is an
equivalence, cf. [40]. We record a few basic properties of $\TilTheta_d$ and, by way of illustration, interpret the results of Section <ref> in the context of Steiner bundles.
The present understanding of the wild category $\rep(K_r)$ mainly rests on Kac's Theorem [43] concerning the distribution of the dimension vectors of indecomposable representations and the Auslander-Reiten
quiver $\Gamma(K_r)$, which describes $\rep(K_r)$ modulo the square of its radical. In Section <ref> we introduce these tools and provide first applications concerning exceptional Steiner bundles, which have also
been referred to as Fibonacci bundles, cf. [14]. Much of our subsequent work on non-exceptional Steiner bundles builds on Ringel's determination [56] of the regular Auslander-Reiten
components of $\Gamma(K_r)$. This enables us to identify the parts of AR-components belonging to $\repp(K_r,d)$ and thus yields a way to efficiently organize the corresponding Steiner bundles of $\Gr_d(\KK^r)$, see
Section <ref>.[In case $r\!=\!2$, it is well-known that $\Coh(\PP^1)$ is a $\Hom$-finite abelian $\KK$-category with Serre duality. It therefore affords almost split sequences, see <cit.>.]
In the next two sections, we apply the aforementioned concepts in the context of homogeneity, uniformity and stability, with the latter two only being discussed in the classical setting of projective spaces. Since the
inhomogeneity of an indecomposable Steiner bundle is inherited by all other members of its component, one can easily construct families of inhomogeneous bundles that are uniform of certain type. There are various
notions of equivariance for modules and vector bundles, which are equivalent to homogeneity in some cases, such as modules with trivial space endomorphisms or over fields of characteristic $0$. We intend to address
these connections along with those involving rational representations of parabolic subgroups of $\SL(A_r)$ in future work, where it turns out to be expedient to interpret vector bundles as locally trivial vector space fibrations.
Given $M \in \EKP(K_r)\!\smallsetminus\!\{(0)\}$ one defines the slope $\mu(M)\!:=\!\frac{\dim_\KK M_1}{\Delta_M(1)}$ and observes that $\mu(\TilTheta_1(M))\!=\!\mu(M)$ for the standard slope on $\Vect(\PP^{r-1})$. This
motivates the investigation of stability for modules and Steiner bundles and our main result in Section <ref> shows that modules corresponding to stable Steiner bundles are stable. Non-projective modules $M \in
\repp(K_r,d)$ of minimal type are stable and our results of Section <ref> concerning the distribution of slopes relative to AR-components imply that elementary $\EKP$-modules also enjoy this property. This is applied in
Section <ref>, where we provide a sufficient condition on a pair $(a,b) \in \NN^2$ involving the Tits form for $\EKP$-modules of dimension vector $(a,b)$ to afford a filtration by elementary $\EKP$-modules. Specializing to $r\!=\!3$, we use methods by Drezet [24] to give a sufficient condition for the stability of a bundle corresponding to a stable Kronecker representation. In particular, we verify:
Let $r\!\ge\!3$.
* Suppose that $M \in \EKP(K_r)\!\smallsetminus\!\{(0)\}$.
* If $\TilTheta_1(M)$ is (semi)stable, so is $M$.
* Let $r\!=\!3$ and $\Delta_M(2)\!\ge\!0$ ($\Delta_M(2)\!>\!0$). If $M$ is semistable (stable), so is $\TilTheta_1(M)$.
* Let $r\!=\!3$. If $M$ is elementary, then $\TilTheta_1(M)$ is stable.
* Let $\cC \subseteq \Gamma(K_3)$ be an Auslander-Reiten component containing an elementary module. Then every Steiner bundle $\cF \in \TilTheta_1(\cC\cap\EKP(K_3))$ possesses a Harder-Narasimhan filtration
by Steiner bundles, whose filtration factors are stable.
The final section is concerned with an application pertaining to restrictions of Steiner bundles on $\PP^{r-1}(\CC)$ to linear hyperplanes $H \subseteq \PP^{r-1}(\CC)$. We provide conditions on the rank and the first Chern
class ensuring that a generic Steiner bundle $\cF$ with these data is stable with none of its restrictions $\cF|_H$ being semistable.
The authors would like to thank Chiara Brambilla for patiently explaining her work to them, Adrian Langer for bringing reference [11] to their attention, and Enrique Arrondo and Simone Marchesi for correspondence.
§ PRELIMINARIES
§.§ Representations of $K_r$
Let $r\!\ge\!2$ be a natural number. We denote by $K_r$ the (generalized) Kronecker quiver with two vertices $1$ and $2$ and $r$ arrows:
\[ \begin{tikzcd} 1 \arrow[r, draw=none, "\raisebox{+1.5ex}{\vdots}" description] \arrow[r, bend left, "\gamma_1"] \arrow[r, bend right, swap, "\gamma_r"] & 2. \end{tikzcd} \]
(For $r\!=\!1$ one obtains the quiver $K_1$ with underlying Dynkin diagram $A_1$.)
It will be convenient to view representations of $K_r$ in the following equivalent ways:
(i) A representation $M$ of the quiver $K_r$ is a triple $(M_1,M_2,(M(\gamma_i))_{1\le i \le r})$, where $M_1, M_2$ are
finite-dimensional vector spaces and each $M(\gamma_i) : M_1 \lra M_2$ is a $\KK$-linear map. A homomorphism $f : M \lra N$ is a pair $(f_1,f_2)$ of $\KK$-linear maps $f_j : M_j \lra N_j \ (j \in \{1,2\})$ such that,
for each $i \in \{1,\ldots,r\}$, the diagram
\[ \begin{tikzcd} M_1 \arrow[r, "M(\gamma_i)"] \arrow[d,"f_1"] & M_2 \arrow[d,"f_2"] \\
N_1 \arrow[r,"N(\gamma_i)"] & N_2
\end{tikzcd} \]
commutes.
(ii) Let $A_r\!:=\!\bigoplus_{i=1}^r\KK\gamma_i$ be the arrow space of $K_r$. We consider the triangular matrix algebra
\[ \KK K_r := \begin{pmatrix} \KK & 0 \\ A_r & \KK \end{pmatrix},\]
which is isomorphic to the path algebra of $K_r$. Accordingly, $\rep(K_r)$ is equivalent to the category $\modd \KK K_r$ of finite-dimensional $\KK K_r$-modules. In view of <cit.>, the category $\modd
\KK K_r$ is equivalent to the category $\cK_r$, whose objects are triples $M\!=\!(M_1,M_2, \psi_M)$, where $M_1,M_2$ are finite-dimensional $\KK$-vector spaces and $\psi_M : A_r\!\otimes_\KK\!M_1 \lra M_2$ is $\KK$-
linear. A morphism $M \lra N$ is a pair $(f_1,f_2)$ of $\KK$-linear maps $f_i: M_i \lra N_i$ such that the diagram
\[ \begin{tikzcd} A_r\!\otimes_\KK\!M_1 \arrow[r, "\psi_M"] \arrow[d,"\id_{A_r}\otimes f_1"] & M_2 \arrow[d,"f_2"] \\
A_r\!\otimes_\KK\!N_1 \arrow[r,"\psi_N"] & N_2
\end{tikzcd} \]
commutes.
(iii) One can equally well consider triples $(M_1,M_2,\psi_M)$, where $\psi_M : A_r \lra \Hom_\KK(M_1,M_2)$ is $\KK$-linear. The condition for a morphism $(f_1,f_2)$ then reads
\[ \psi_N(a)\circ f_1 = f_2\circ \psi_M(a) \ \ \ \ \forall \ a \in A_r.\]
(iv) For future reference we also record the interpretation as triples $(M_1,M_2,\varphi_M)$, where $\varphi_M : M_1 \lra \Hom_\KK(A_r,M_2)$ is given by $\varphi_M(m)(a)\!:=\!\psi_M(a \otimes m)$ for all
$m \in M_1$ and $a \in A_r$.
We denote by $\rep(K_r)$ the category of all (finite-dimensional) representations of $K_r$ and refer the reader to [4] and [2] for further details.
Every subspace $\fv \subseteq A_r$ gives rise to a subalgebra
\[ \KK.\fv := \begin{pmatrix} \KK & 0 \\ \fv & \KK \end{pmatrix}\]
of $\KK K_r$. If $M$ is a $\KK K_r$-module, the restriction $M|_{\KK.\fv}$ corresponds to $(M_1,M_2,\psi_{M,\fv})$, where $\psi_{M,\fv}\!=\!\psi_M|_{\fv\otimes_\KK M_1}$. By choosing a basis of $\fv$ one obtains an
equivalence $\modd \KK.\fv \cong \rep(K_{\dim_\KK\!\fv})$, which depends on the choice of the basis.
Given a pair $(V_1,V_2)$ of $\KK$-vector spaces, we let $\udim(V_1,V_2)\!:=\!(\dim_\KK V_1,\dim_\KK V_2)$ be its dimension vector. For $M \in \rep(K_r)$, we write $\udim M\!:=\!\udim(M_1,M_2)$. The category
$\rep(K_r)$ has two simple objects (representations) $S_1,S_2$ of dimension vectors $(1,0)$ and $(0,1)$, respectively. The object $S_1$ is injective, while $S_2$ is projective. The projective cover $P_1(r)$ of
$S_1$ has dimension vector $\udim P_1(r)\!=\!(1,r)$. We will also write $P_0(r)\!:=\!S_2$.
Note that
\[ \omega_r : \KK K_r \lra \KK K_r \ \ ; \ \ \left(\begin{smallmatrix} \alpha & 0 \\ a & \beta \end{smallmatrix}\right) \mapsto \left(\begin{smallmatrix} \beta & 0 \\ a & \alpha \end{smallmatrix}\right)\]
is an anti-automorphism of order $2$. There results a duality $D_{K_r}: \modd \KK K_r \lra \modd \KK K_r \ \ ; \ \ M \mapsto M^\ast$ given by
\[ (x.f)(m) := f(\omega_r(x).m) \ \ \ \ \ \ \forall \ f \in M^\ast, m \in M, x \in \KK K_r.\]
Let $M \in \rep(K_r)$. For $a \in A_r$, we denote by
\[ a_M : M_1 \lra M_2 \ \ ; \ \ m \mapsto a.m\]
the multiplication effected by $a$. Then the dual $D_{K_r}(M)$ of $M \in \modd \KK K_r$ corresponds to $D_{K_r}(M)\!=\!(M_2^\ast, M_1^\ast, \psi_{D_{K_r}(M)})$, where
\[ \psi_{D_{K_r}(M)}(a\otimes f) = f\circ a_M \ \ \ \ \ \forall \ a \in A_r, f \in M_2^\ast.\]
We denote by
\[ \langle \, , \, \rangle_r : \ZZ^2\!\times\!\ZZ^2 \lra \ZZ \ \ ; \ \ (x,y) \mapsto x_1y_1\!+\!x_2y_2\!-\!rx_1y_2\]
the Euler-Ringel bilinear form. The corresponding quadratic form
\[ q_r : \ZZ^2 \lra \ZZ \ \ ; \ \ x \mapsto \langle x,x\rangle_r\]
is referred to as the Tits quadratic form of $K_r$.
The dimension vectors give rise to an isomorphism
\[ \udim : K_0(\rep(K_r)) \lra \ZZ^2,\]
which identifies the Grothendieck group $K_0(\rep(K_r))$ of $\rep(K_r)$ with $\ZZ^2$. Recall that
\[\langle \udim M,\udim N\rangle_r\!=\!\dim_\KK\Hom_{K_r}(M,N)\!-\!\dim_\KK\Ext^1_{K_r}(M,N)\]
for all $M,N \in \rep(K_r)$. Moreover, as the path algebra $\KK K_r$ is hereditary, we have $\Ext^i_{K_r}(-,-)\!=\!(0)$ for all $i\!\ge\!2$.
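The Euler–Ringel form and the Tits form are elementary to compute; a minimal sketch (the function names are mine):

```python
def euler_form(x, y, r):
    # <x, y>_r = x1*y1 + x2*y2 - r*x1*y2, the Euler-Ringel bilinear form of K_r.
    return x[0] * y[0] + x[1] * y[1] - r * x[0] * y[1]

def tits_form(x, r):
    # q_r(x) = <x, x>_r, the Tits quadratic form of K_r.
    return euler_form(x, x, r)

# Example: the projective cover P_1(r) has dimension vector (1, r), and
# q_r((1, r)) = 1 + r^2 - r^2 = 1, as expected for an indecomposable
# representation without self-extensions.
print(tits_form((1, 3), 3))  # → 1
```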
Let $M \in \modd \KK K_r$. Then $\Ext^1_{K_r}(M,\KK K_r)$ is a right $\KK K_r$-module, so that
\[ \tau_{K_r}(M)\!:=\!\Ext^1_{K_r}(M,\KK K_r)^\ast\]
is a left $\KK K_r$-module. There results a (covariant) endofunctor
\[ \tau_{K_r} : \modd \KK K_r \lra \modd \KK K_r\]
which, by virtue of $\KK K_r$ being hereditary, is left exact. By the same token, $\tau_{K_r}$ is exact when restricted to the full subcategory of those modules which have no non-zero projective direct summands.
The functor $\tau_{K_r}$ induces an endofunctor on $\rep(K_r)$, which we will also denote by $\tau_{K_r}$. General theory <cit.> implies that
\[ \udim \tau_{K_r}(M) = \Phi_r.\udim M\]
for every non-projective indecomposable $M \in \rep(K_r)$,[When multiplying matrices with vectors, we will always implicitly assume that the vector has the correct shape and thus dispense with transposing vectors.] where
\[ \Phi_r\!=\! \begin{pmatrix} r^2\!-\!1 & -r \\ r & -1 \end{pmatrix} \in \GL_2(\ZZ)\]
is the Coxeter matrix of the quiver $K_r$. An indecomposable representation $M \in \rep(K_r)$ is called preprojective, provided $\tau^n_{K_r}(M)\!=\!(0)$ for some $n \in \NN$.
Similarly, we consider the functor
\[ \tau_{K_r}^{-1} : \modd \KK K_r \lra \modd \KK K_r \ \ ; \ \ M \mapsto \Ext^1_{K_r}(M^\ast, \KK K_r).\]
Then we have
\[ \udim \tau^{-1}_{K_r}(M) = \Phi^{-1}_r.\udim M\]
for every non-injective indecomposable representation $M \in \rep(K_r)$. An indecomposable representation $M \in \rep(K_r)$ is called preinjective, provided $\tau^{-n}_{K_r}(M)\!=\!(0)$ for some $n \in \NN$.
Indecomposable representations that are neither preprojective nor preinjective are said to be regular.
The preprojective $\KK K_r$-modules are well understood. For $r\!\ge\!3$, we put $L_r\!:=\!\frac{r+\sqrt{r^2-4}}{2}$ and set
\[ a_i(r) := \left\{ \begin{array}{cc} \frac{(L_r)^i-(\frac{1}{L_r})^i}{\sqrt{r^2-4}} & r\!\ge\!3 \\ i & r\!=\!2 \end{array} \right. \ \ \ \ \forall\ i \in \NN_0.\]
For each $i \in \NN_0$ there is, up to isomorphism, exactly one preprojective module $P_i(r)$ such that
(a) $P_0(r)\!=\!S_2$; $P_1(r)$ is the projective cover of $S_1$, and
(b) $\udim P_i(r)\!=\!(a_i(r),a_{i+1}(r))$ for all $i \in \NN_0$.
For every $i \in \NN_0$ there is an almost split sequence
\[ (0) \lra P_i(r) \lra rP_{i+1}(r) \lra P_{i+2}(r) \lra (0).\]
In particular, $\tau^{-1}_{K_r}(P_i(r))\cong P_{i+2}(r)$.
In case the module category to be considered is clear from the context, we will write $P_i$ instead of $P_i(r)$.
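The dimension-vector bookkeeping above is easy to verify numerically: the numbers $a_i(r)$ satisfy the Chebyshev-type recursion $a_{i+1}\!=\!r\,a_i\!-\!a_{i-1}$, and applying $\Phi_r^{-1}$ to $\udim P_i\!=\!(a_i,a_{i+1})$ yields $(a_{i+2},a_{i+3})\!=\!\udim P_{i+2}$, consistent with $\tau^{-1}_{K_r}(P_i)\cong P_{i+2}$. A sketch (rounding the closed form to integers is my implementation choice):

```python
import math

def a(i, r):
    """Closed form a_i(r) for r >= 3, rounded to the nearest integer;
    a_i(2) = i by definition."""
    if r == 2:
        return i
    L = (r + math.sqrt(r * r - 4)) / 2  # L_r
    return round((L ** i - L ** (-i)) / math.sqrt(r * r - 4))

r = 3
phi_inv = [[-1, r], [-r, r * r - 1]]  # inverse of the Coxeter matrix Phi_r

for i in range(6):
    v = (a(i, r), a(i + 1, r))  # udim P_i(r)
    w = (phi_inv[0][0] * v[0] + phi_inv[0][1] * v[1],
         phi_inv[1][0] * v[0] + phi_inv[1][1] * v[1])
    assert w == (a(i + 2, r), a(i + 3, r))

print([a(i, 3) for i in range(6)])  # → [0, 1, 3, 8, 21, 55]
```

For $r\!=\!3$ the sequence $0,1,3,8,21,55,\ldots$ consists of odd-indexed Fibonacci numbers, in line with the terminology "Fibonacci bundles" mentioned later.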
§.§ Module varieties
In this section, we recall a number of general results from the representation theory of quivers, most of which can be found in [43].
Let $A$ be an $n$-dimensional associative $\KK$-algebra, $V$ be a $\KK$-vector space. We denote by $\modd(A;V)$ the affine variety of $A$-module structures on $V$. When
convenient we will identify $M \in \modd(A;V)$ with its representation $\varrho_M : A \lra \End_\KK(V)$. If $\{a_1,\ldots, a_n\}$ is a basis of $A$, then
\[ \iota_V : \modd(A;V) \lra \End_\KK(V)^n \ \ ; \ \ M \mapsto (\varrho_M(a_1), \ldots, \varrho_M(a_n))\]
is a closed embedding.
If $X$ is a variety and $x \in X$, then $\dim_xX$ denotes the local dimension of $X$ at $x$.
Let $V$ and $W$ be $\KK$-vector spaces. The map
\[ d : \modd(A;V)\!\times\!\modd(A;W) \lra \NN_0 \ \ ; \ \ (M,N) \mapsto \dim_\KK\Hom_A(M,N)\]
is upper semicontinuous. $\square$
In analogy with the above, one can consider the variety $\rep(K_r;V_1,V_2)$ of representations of $K_r$ on a pair $(V_1,V_2)$ of vector spaces. There is an isomorphism
\[ \rep(K_r;V_1,V_2) \stackrel{\sim}{\lra} \modd(\KK K_r;V_1\!\oplus\!V_2)^{\rightarrow}\]
between $\rep(K_r;V_1,V_2)$ and the closed subset $\modd(\KK K_r;V_1\!\oplus\!V_2)^{\rightarrow}$ of the module variety of the path algebra $\KK K_r$, given by conditions $(\begin{smallmatrix} 1 & 0 \\ 0 & 0
\end{smallmatrix})\dact (v_1\!\oplus\!v_2)\!=\!v_1$ and $(\begin{smallmatrix} 0 & 0 \\ 0 & 1\end{smallmatrix})\dact (v_1\!\oplus\!v_2)\!=\!v_2.$
Recall that $M \in \rep(K_r)$ is referred to as a brick, provided $\End_{K_r}(M)\cong \KK$. Preprojective and preinjective representations are known to be bricks.
Suppose that $r\!\ge\!3$ and let $V_1,V_2$ be vector spaces with $V_1\!\oplus\!V_2\!\ne\!(0)$ and such that $q_r(\udim (V_1,V_2))\!\le\!1$. Then
\[ \cB_{(V_1,V_2)} := \{ M \in \rep(K_r;V_1,V_2) \ ; \ M \ \text{is a brick} \}\]
is a dense open subset of $\rep(K_r;V_1,V_2)$.
Since $r\!\ge\!3$, the assumption $q_r(a,b)\!=\!0$ implies $a\!=\!0\!=\!b$.
If $q_r(\udim(V_1,V_2))\!\le\!0$, we may apply <cit.> (see also <cit.> for an elementary proof valid in our context) in conjunction with the observation above to see that the set
$\cB_{(V_1,V_2)}$ is not empty. Alternatively, general theory provides $M \in \rep(K_r;V_1,V_2)$ that is preprojective or preinjective (see Theorem <ref> below). In particular, $M$ is a brick.
Lemma <ref> implies that the map $\rep(K_r;V_1,V_2) \lra \NN_0 \ ; \ M \mapsto \dim_\KK\End_{K_r}(M)$ is upper semicontinuous, so that $\cB_{(V_1,V_2)}$ is open. Since the variety $\rep(K_r;V_1,V_2)
\cong \Hom_\KK(V_1,V_2)^r$ is irreducible, $\cB_{(V_1,V_2)}$ thus lies dense in $\rep(K_r;V_1,V_2)$.
Let $V_1,V_2$ be non-zero $\KK$-vector spaces. The algebraic group $G\!:=\!\GL(V_2)\!\times\!\GL(V_1)$ acts on the irreducible variety $\Hom_\KK(V_1,V_2)^r$ via
\[ (g_2,g_1)\dact (f_i)_{1\le i\le r} := (g_2\circ f_i\circ g_1^{-1})_{1\le i\le r}.\]
Upper semicontinuity of fiber dimension implies that the map
\[ \Hom_\KK(V_1,V_2)^r \lra \NN_0 \ \ ; \ \ x \mapsto \dim G\dact x\]
is lower semicontinuous. Thus, if $s\!:=\!\max\{ \dim G\dact x \ ; \ x \in \Hom_\KK(V_1,V_2)^r\}$, then the open sheet
\[ \cO_{(V_1,V_2)} := \{ x \in \Hom_\KK(V_1,V_2)^r \ ; \ \dim G\dact x\!=\!s\}\]
for the action of $G$ on $\Hom_\KK(V_1,V_2)^r$ is a non-empty open subset of $\Hom_\KK(V_1,V_2)^r\!=\!\rep(K_r;V_1,V_2)$.
The following result, which was first proved by Kac <cit.>, actually shows that for those dimension vectors for which $\rep(K_r;V_1,V_2)$ contains indecomposable modules, the modules belonging to the open
sheet are just the bricks.
Let $r\!\ge\!3$. Suppose that $V_1,V_2$ are vector spaces with $V_1\!\oplus\!V_2\!\ne\!(0)$ and such that $q_r(\udim(V_1,V_2))\!\le\!1$. Then we have $\cB_{(V_1,V_2)}\!=\!\cO_{(V_1,V_2)}$.
Given any $M \in \rep(K_r;V_1,V_2)$, the stabilizer $G_M$ of $M$ coincides with $G\cap\End_{K_r}(M)$, and thus is a dense open subset of $\End_{K_r}(M)$.
By Corollary <ref>, there is $M_0 \in \cB_{(V_1,V_2)}\cap\cO_{(V_1,V_2)}$, so that
\[ s = \dim G\dact M_0 = \dim G\!-\!\dim G_{M_0} = \dim G\!-\!1.\]
Let $M \in \cB_{(V_1,V_2)}$. Then $\dim G_M\!=\!1$, so that $\dim G\dact M\!=\!s$. This implies $\cB_{(V_1,V_2)}\! \subseteq \!\cO_{(V_1,V_2)}$.
On the other hand, if $M \in \cO_{(V_1,V_2)}$, then $\dim_\KK\End_{K_r}(M)\!=\!\dim G_M\!=\!\dim G\!-\!s\!=\!1$, whence $M \in \cB_{(V_1,V_2)}$.
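To illustrate the orbit-dimension count in a small case (a worked instance of our own, under the assumption that $q_r(a,b)\!=\!a^2\!+\!b^2\!-\!rab$ is the quadratic form used throughout):

```latex
% r = 3 and \udim(V_1,V_2) = (1,2), so q_3(1,2) = 1 + 4 - 6 = -1 \le 1.
\dim \rep(K_3;V_1,V_2) = \dim_\KK \Hom_\KK(V_1,V_2)^3 = 6, \qquad
\dim G = \dim \GL(V_2) + \dim \GL(V_1) = 4 + 1 = 5,
% whence s = \dim G - 1 = 4, and the bricks form the dense open union
% of the 4-dimensional G-orbits inside the 6-dimensional variety.
```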
§.§ Restrictions
For $d \in \{1,\ldots, r\}$, we let $\Gr_d(A_r)$ be the Grassmannian of $d$-planes of $A_r$. This is an irreducible projective variety of dimension $\dim \Gr_d(A_r)\!=\!d(r\!-\!d)$.
For $M \in \modd \KK K_r$ and $\fv \in \Gr_d(A_r)$, we denote by $M|_\fv$ the restriction of $M$ to $\KK.\fv \subseteq \KK K_r$.
Given $\KK$-vector spaces $V_1,V_2$, we consider the set
\[ \Iso(K_d;V_1,V_2)\!:=\!\rep(K_d;V_1,V_2)/(\GL(V_2)\!\times\!\GL(V_1))\]
of isomorphism classes together with the projection map $\pi : \rep(K_d;V_1,V_2) \lra \Iso(K_d;V_1,V_2)$. We equip $\Iso(K_d;V_1,V_2)$ with the final topology relative to $\pi$. This renders $\pi$ a continuous map
such that $U \subseteq \Iso(K_d;V_1,V_2)$ is open if and only if $\pi^{-1}(U) \subseteq \rep(K_d;V_1,V_2)$ is open.
We further consider
\[ \Inj_\KK(A_d,A_r) := \{ \alpha \in \Hom_\KK(A_d,A_r) \ ; \ \alpha \ \text{is injective}\}.\]
This is a conical, open subset of the affine space $\Hom_\KK(A_d,A_r)$. In particular, $\Inj_\KK(A_d,A_r)$ is a quasi-affine variety.
The algebraic groups $\GL(A_d)$ and $\GL(A_r)$ act on the affine variety $\Hom_\KK(A_d,A_r)$ via
\[ g.\alpha := \alpha\circ g^{-1} \ \ \ \forall \ g \in \GL(A_d) \ \ \text{and} \ \ h.\alpha := h\circ \alpha \ \ \ \forall \ h \in \GL(A_r),\]
respectively. The subvariety $\Inj_\KK(A_d,A_r) \subseteq \Hom_\KK(A_d,A_r)$ is stable with regard to these commuting actions.
Every $\alpha \in \Inj_\KK(A_d,A_r)$ defines an injective homomorphism
\[ \alpha : \KK K_d \lra \KK K_r \ \ ; \ \ \left(\begin{smallmatrix} x & 0 \\ a & y \end{smallmatrix} \right) \mapsto \left(\begin{smallmatrix} x & 0 \\ \alpha(a) & y \end{smallmatrix}\right) \]
of $\KK$-algebras. In particular, $\GL(A_r)$ can be considered as a group of automorphisms of $\KK K_r$.
Let $M \in \modd \KK K_r$. Given $\alpha \in \Inj_\KK(A_d,A_r)$, we let $\alpha^\ast(M) \in \modd \KK K_d$ be the pull-back of $M$ along $\alpha$. Thus, we have
\[ u \dact m := \alpha(u)m \ \ \ \ \ \forall \ u \in \KK K_d, m \in M.\]
In particular, $\GL(A_r) \subseteq \Aut_\KK(\KK K_r)$ acts on $\modd \KK K_r$ via pull-back: If $M \in \modd \KK K_r$, and $g \in \GL(A_r)$, then
\[M^{(g)} := (g^{-1})^\ast(M).\]
We say that $M$ is homogeneous, provided $M^{(g)} \cong M$ for every $g \in \GL(A_r)$.
Let $P_i\!=\!(\KK[X,Y]_{i-1},\KK[X,Y]_i, X\cdot, Y\cdot)$ be the preprojective $\KK K_2$-module of dimension vector $(i,i\!+\!1)$. Then $\GL(A_2)\!=\!\GL(\KK X\!\oplus\!\KK Y)$
acts on $\KK[X,Y]$ via algebra homomorphisms of degree zero in such a way that
\[ g(\gamma f) = g(\gamma) g(f) \ \ \ \ \ \ \ \ \ \forall \ g \in \GL(A_2), \gamma \in A_2, f \in \KK[X,Y]_{i-1}.\]
Consequently, $g$ induces a $\KK$-linear isomorphism $P_i^{(g)} \stackrel{\sim}{\lra} P_i$ such that
\[ g(\gamma \dact f) = g(g^{-1}(\gamma)f) = \gamma g(f).\]
As a result, the $\KK K_2$-module $P_i$ is homogeneous. (In fact, the $P_i$ are examples of equivariant $K_r$-representations.)
We let $\GL(A_r)$ act on $\Gr_d(A_r)$ via $h\dact \fv\!:=\!h(\fv)$ for all $h \in \GL(A_r)$ and $\fv \in \Gr_d(A_r)$. This action is transitive.
Following [64], we denote by $\EKP(K_r)$ the full subcategory of $\rep(K_r)$ whose objects $M$ are such that $a_M$ is injective for every $a \in A_r\!\smallsetminus\!\{0\}$.[The notation derives from the
correspondence with the category of equal kernels modules studied in [18].] Note that $\EKP(K_r)$ is closed under taking subobjects.
The classification of indecomposable objects in $\rep(K_2)$ implies that every $N \in \EKP(K_2)$ decomposes into a direct sum
\[ N \cong \bigoplus_{i\ge 0} n_iP_i.\]
Let $M \in \EKP(K_r)$. Then we have
\[ \alpha^\ast(M) \cong \beta^\ast(M)\]
for all $\alpha,\beta \in \Inj_\KK(A_2,A_r)$ such that $\im\alpha\!=\!\im\beta$.
Let $\alpha \in \Inj_\KK(A_2,A_r)$. Since $\alpha$ is injective, it follows that $\alpha^\ast(M) \in \EKP(K_2)$. By virtue of the foregoing remark and the example above, we conclude that $\alpha^\ast(M)$
is homogeneous.
If $\beta \in \Inj_\KK(A_2,A_r)$ is such that $\im \alpha\!=\!\im \beta$, then $g\!:=\!\beta^{-1}\circ \alpha \in \GL(A_2)$, so that for $m \in (\alpha^\ast(M))^{(g)}$ and $\gamma \in A_2$, we obtain
\[ \gamma\dact m = g^{-1}(\gamma).m = \alpha(g^{-1}(\gamma))m = \beta(\gamma)m.\]
By the example above, this shows that $\beta^\ast(M)\cong (\alpha^\ast(M))^{(g)} \cong \alpha^\ast(M)$.
Let $M \in \EKP(K_r)$, $\fv \in \Gr_2(A_r)$. In view of the foregoing result, we may define the isomorphism class $[M|_\fv]$ via
\[ [M|_\fv] := [\alpha^\ast(M)] \ \ \ \ \ \alpha \in \Inj_\KK(A_2,A_r)\ ; \ \im \alpha\!=\!\fv.\]
Less formally, we will write
\[ M|_\fv \cong \bigoplus_{i\ge 0} n_i(M,\fv) P_i(2)\]
to indicate that the above isomorphism holds for $\alpha^\ast(M)$, where $\alpha \in \Inj_\KK(A_2,A_r)$ is such that $\im\alpha\!=\!\fv$.
Given $X \in \rep(K_2;V_1,V_2)$, we write
\[ \dim_\KK\Hom_{K_2}(X,M|_\fv) := \dim_\KK\Hom_{K_2}(X,\alpha^\ast(M))\]
for some $\alpha \in \Inj_\KK(A_2,A_r)$ with $\im\alpha\!=\!\fv$.
Let $r\!\ge\!2$, $M \in \EKP(K_r)\cap \rep(K_r;V_1,V_2)$. Then the following statements hold:
* The map
\[ \res_M : \Gr_2(A_r) \lra \Iso(K_2;V_1,V_2) \ \ ; \ \ \fv \mapsto [M|_\fv]\]
is continuous.
* If $X \in \rep(K_2;U_1,U_2)$, then the map
\[ d_{X,M} : \Gr_2(A_r) \lra \NN_0 \ \ ; \ \ \fv \mapsto \dim_\KK\Hom_{K_2}(X,M|_\fv)\]
is upper semicontinuous.
(1) It follows from <cit.> that the map
\[ \msim : \Inj_\KK(A_2,A_r) \lra \Gr_2(A_r) \ \ ; \ \ \alpha \mapsto \im\alpha\]
is a principal $\GL(A_2)$-bundle (a $\GL(A_2)$-torsor). In particular,
\[ \overline{\msim} : \Inj_\KK(A_2,A_r)/\GL(A_2) \lra \Gr_2(A_r) \ \ ; \ \ \bar{\alpha} \mapsto \im\alpha\]
is a homeomorphism.
Consider the map
\[ \widetilde{\res}_M : \Inj_\KK(A_2,A_r) \lra \rep(K_2;V_1,V_2) \ \ ; \ \ \alpha \mapsto \alpha^\ast(M).\]
We identify $\rep(K_2;V_1,V_2)$ with $\Hom_\KK(V_1,V_2)^2$ and let $\{\gamma_1,\gamma_2\}$ be a basis of $A_2$. Then
\[ \widehat{\res}_M : \Hom_\KK(A_2,A_r) \lra \Hom_\KK(V_1,V_2)^2 \ \ ; \ \ \alpha \mapsto (M(\alpha(\gamma_1)),M(\alpha(\gamma_2)))\]
is a linear map and hence in particular a morphism. (Here we have written $M(\alpha(\gamma_i))\!:=\!\psi_M(\alpha(\gamma_i)\!\otimes -)$.) Consequently,
$\widetilde{\res}_M\!=\!\widehat{\res}_M|_{\Inj_\KK(A_2,A_r)}$ is also a morphism.
In view of Lemma <ref>, $\widetilde{\res}_M$ gives rise to a continuous map
\[ \iso_M : \Inj_\KK(A_2,A_r)/\GL(A_2) \lra \Iso(K_2;V_1,V_2) \ \ ; \ \ \bar{\alpha} \mapsto [\alpha^\ast(M)].\]
Since
\[ \res_M \circ \overline{\msim} = \iso_M,\]
we obtain (1).
(2) Thanks to Lemma <ref>, the map
\[ d_X : \rep(K_2;V_1,V_2) \lra \NN_0 \ \ ; \ \ Y \mapsto \dim_\KK\Hom_{K_2}(X,Y)\]
is upper semicontinuous. Since images of $(\GL(V_2)\!\times\!\GL(V_1))$-stable open sets under $\pi : \rep(K_2;V_1,V_2) \lra \Iso(K_2;V_1,V_2)$ are open, the map
\[ \bar{d}_X : \Iso(K_2;V_1,V_2) \lra \NN_0 \ \ ; \ \ [Y] \mapsto \dim_\KK\Hom_{K_2}(X,Y)\]
enjoys the same property. In view of (1), $d_{X,M}\!=\! \bar{d}_X\circ \res_M$ is upper semicontinuous as well.
Let $M \in \EKP(K_r)$. Then there exists a sequence $(n_i(M))_{i \in \NN_0} \in \NN_0^{\NN_0}$ such that
\[ O_M := \{ \fv \in \Gr_2(A_r) \ ; \ n_i(M,\fv)\!=\!n_i(M) \ \ \ \forall \ i \in \NN_0\}\]
is a dense open subset of $\Gr_2(A_r)$.
For $i \in \NN_0$ we consider the map $d_i\!:=\!d_{P_i(2),M}$. Proposition <ref> provides $c_i \in \NN_0$ such that
\[ O_i := \{ \fv \in \Gr_2(A_r) \ ; \ d_i(\fv) = c_i\}\]
is a dense open subset of $\Gr_2(A_r)$. Since $\dim_\KK\Hom_{K_2}(P_i(2),P_j(2))\!=\!\max\{j\!-\!i\!+\!1,0\}$, it follows that
\[ n_i(M,\fv) = d_i(\fv)\!-\!2d_{i+1}(\fv)\!+\!d_{i+2}(\fv)\]
for every $\fv \in \Gr_2(A_r)$ and $i \in \NN_0$. Consequently,
\[ n_i(M,\fv) = c_i\!-\!2c_{i+1}\!+\!c_{i+2}\]
for every $\fv \in O_i\cap O_{i+1}\cap O_{i+2}$. There exists $\ell \in \NN_0$ such that $n_i(M,\fv)\!=\!0\!=\! d_i(\fv)$ for all $i\!\ge\!\ell$ and $\fv \in \Gr_2(A_r)$. We define
\[ n_i(M):= c_i\!-\!2c_{i+1}\!+\!c_{i+2}\]
for every $i \in \NN_0$ and obtain $n_i(M)\!=\!0\!=\!n_i(M,\fv)$ for all $i\!\ge\!\ell$. Consequently, $\bigcap_{i=0}^{\ell-1} O_i \subseteq O_M$.
Suppose that $\fv \in O_M$. Then we have
\[ d_i(\fv)\!-\!2d_{i+1}(\fv)\!+\!d_{i+2}(\fv) = n_i(M,\fv) = c_i\!-\!2c_{i+1}\!+\!c_{i+2} \ \ \ \ 0\!\le\!i\!\le\!\ell\!-\!1.\]
Since the $(\ell\!\times\!\ell)$-matrix
\[ A = \begin{pmatrix} 1 & -2 & 1 & 0 & \cdots & 0 \\
0 & 1 & -2 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
0 & 0 & \cdots & 1 & -2 & 1 \\
0 & 0 & \cdots & 0 & 1 & -2\\
0 & 0 & \cdots & 0 & 0 & 1 \end{pmatrix}\]
is invertible, it follows that $d_i(\fv)\!=\!c_i$ for all $i \in \NN_0$. Consequently, $\fv \in \bigcap_{i=0}^{\ell-1}O_i$, so that $O_M\!=\!\bigcap_{i=0}^{\ell-1}O_i$ is a dense, open subset of $\Gr_2(A_r)$.
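The second-difference formula used above can be checked numerically. In the sketch below (not part of the argument; the multiplicity list `n` is invented), the identity $n_i\!=\!d_i\!-\!2d_{i+1}\!+\!d_{i+2}$ is recovered from $\dim_\KK\Hom_{K_2}(P_i(2),P_j(2))\!=\!\max\{j\!-\!i\!+\!1,0\}$:

```python
# Hypothetical multiplicities n_j of P_j(2) in a decomposition of M|_v
n = [2, 0, 3, 1, 0, 4]

def d(i):
    # d_i = dim Hom(P_i(2), M|_v), computed summand-wise via
    # dim Hom(P_i(2), P_j(2)) = max(j - i + 1, 0)
    return sum(nj * max(j - i + 1, 0) for j, nj in enumerate(n))

# second differences of the Hom-dimensions return the multiplicities
for i in range(len(n)):
    assert n[i] == d(i) - 2 * d(i + 1) + d(i + 2)
```

This mirrors the proof: the function $j \mapsto \max\{j\!-\!i\!+\!1,0\}$ has second difference equal to the indicator of $j\!=\!i$, so taking second differences of the $d_i$ isolates each multiplicity.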
Let $M \in \EKP(K_r)\!\smallsetminus\!\{(0)\}$. We call
\[ M_{\gen} := \bigoplus_{i\ge 0} n_i(M)P_i\]
the generic decomposition of $M$.
We let $\PP(A_r)$ be the projective space of lines in $A_r$. The basis $\{\gamma_1,\ldots, \gamma_r\}$ defines an isomorphism $\PP(A_r) \cong \PP^{r-1}$ of projective varieties. For $x \in \PP(A_r)$ the number
$\rk(x_M)\!:=\!\rk(a_M)$ (for some $a \in x\!\smallsetminus\!\{0\}$) is well-defined and we refer to
\[ \rk(M) := \max\{\rk(x_M) \ ; \ x \in \PP(A_r)\}\]
as the generic rank of $M$. We say that $M$ has constant rank, provided $\rk(x_M)\!=\!\rk(M)$ for all $x \in \PP(A_r)$. The full subcategory of $\rep(K_r)$, whose objects have constant rank, will be denoted
$\CR(K_r)$. It contains the full subcategory $\EIP(K_r)$, whose objects $M$ satisfy the condition $\rk(M)\!=\!\dim_\KK M_2$. We have a duality $D_{K_r} : \CR(K_r) \lra \CR(K_r)$ that sends $\EKP(K_r)$ to $\EIP(K_r)$ and
vice versa. The aforementioned classification of indecomposable $K_2$-representations implies that $\CR(K_2)\!=\!\add(\EKP(K_2)\cup\EIP(K_2))$, so that every $M \in \CR(K_r)$ affords a generic decomposition
\[ M_{\gen} = \bigoplus_{i\ge 0} n_i(M)P_i(2)\!\oplus\!\bigoplus_{j\ge 0} m_j(M) I_j(2),\]
where $I_j(2)\!:=\!D_{K_r}(P_j(2))$ is indecomposable preinjective.
§ CATEGORIES OF RELATIVE PROJECTIVE MODULES
Let $d \in \{1,\ldots, r\!-\!1\}$. In this section we introduce the full subcategory $\repp(K_r,d)$ of $\EKP(K_r)$, which will turn out to be equivalent to the category of Steiner bundles on the Grassmannian $\Gr_d(A_r)$.
The objects of $\repp(K_r,d)$ may be characterized via rank varieties, whose definition is analogous to those investigated in [16]. In contrast to the setting of [16], the objects whose rank variety is empty may also be described as the objects of the right $\Hom$-orthogonal category of a class of elementary $K_r$-modules.
§.§ Rank varieties and families of elementary modules
As noted above, the map
\[ \msim : \Inj_\KK(A_d,A_r) \lra \Gr_d(A_r) \ \ ; \ \ \alpha \mapsto \im \alpha\]
is a $\GL(A_d)$-torsor with respect to the canonical right $\GL(A_d)$-action $\alpha.h\!:=\!\alpha\circ h$, cf. <cit.>.
Let $M \in \rep(K_r)$. Then the following statements hold:
* The subset
\[ \cP(K_r,d)_M := \{ \alpha \in \Inj_\KK(A_d,A_r) \ ; \ \alpha^\ast(M) \ \text{is not projective}\}\]
is closed and $\GL(A_d)$-stable.
* The set
\[\cV(K_r,d)_M := \{ \fv \in \Gr_d(A_r) \ ; \ \exists \ \alpha \in \msim^{-1}(\fv) \ \text{such that} \ \alpha^\ast(M) \ \text{is not projective}\}\]
is a closed subset of $\Gr_d(A_r)$.
(1) Recall that a $\KK K_d$-module $N$ is projective if and only if $\dim_\KK\Ext^1_{K_d}(N,S_2)\!=\!0$. In view of the upper semicontinuity of the map $\rep(K_d;M_1,M_2) \lra \NN_0 \ ; \ N \mapsto
\dim_\KK\Ext^1_{K_d}(N,S_2)$,[This follows for instance from Lemma <ref> and $\dim_\KK\Ext^1_{K_d}(N,S_2)\!=\!\dim_\KK \Hom_{K_d}(N,S_2)\!-\!\langle \udim N,\udim S_2\rangle_d$.] we
conclude that
\[ C_d := \{ X \in \rep(K_d;M_1,M_2) \ ; \ X \ \text{is not projective}\}\]
is a closed subset of $\rep(K_d;M_1,M_2)$. The arguments of the proof of Proposition <ref> ensure the continuity of the map
\[ \Inj_\KK(A_d,A_r) \lra \rep(K_d;M_1,M_2) \ \ ; \ \ \alpha \mapsto \alpha^\ast(M).\]
Consequently, $\cP(K_r,d)_M$ is a closed subset of $\Inj_\KK(A_d,A_r)$.
Recall that $P_1$ and $S_2$ are representatives of the isomorphism classes of the projective indecomposable $\KK K_d$-modules. Since their dimensions pairwise differ, it follows that $Q^{(h)} \cong Q$ for
$Q \in \{P_1,S_2\}$ and $h \in \GL(A_d)$ and hence $P^{(h)} \cong P$ for every projective $\KK K_d$-module $P$. Let $\alpha \not \in \cP(K_r,d)_M$ and $h \in \GL(A_d)$. Then
\[ (h\dact\alpha)^\ast(M) \cong (\alpha^\ast(M))^{(h)},\]
is projective, so that $h\dact\alpha \not\in \cP(K_r,d)_M$. As a result, the variety $\cP(K_r,d)_M$ is $\GL(A_d)$-stable.
(2) As before, we consider the space $\Inj_\KK(A_d,A_r)/\GL(A_d)$ together with the topology induced by the canonical projection
\[ \pi : \Inj_\KK(A_d,A_r) \lra \Inj_\KK(A_d,A_r)/\GL(A_d).\]
As observed in the proof of Proposition <ref>, the map
\[ \msim : \Inj_\KK(A_d,A_r) \lra \Gr_d(A_r) \ \ ; \ \ \alpha \mapsto \im \alpha \]
induces a homeomorphism
\[ \overline{\msim} : \Inj_\KK(A_d,A_r)/\GL(A_d) \lra \Gr_d(A_r) \ \ ; \ \ [\alpha] \mapsto \im \alpha.\]
In view of (1), the open set $U_M\!:=\!\Inj_\KK(A_d,A_r)\!\smallsetminus\!\cP(K_r,d)_M$ is $\GL(A_d)$-stable. Thus, $\pi^{-1}(\pi(U_M))\!=\!U_M$, so that $\pi(U_M)\!=\!(\Inj_\KK(A_d,A_r)/\GL(A_d))\!\smallsetminus\!
\pi(\cP(K_r,d)_M)$ is an open subset of $\Inj_\KK(A_d,A_r)/\GL(A_d)$. It follows that $\cV(K_r,d)_M\!=\!\msim(\cP(K_r,d)_M)\!=\!\overline{\msim}(\pi(\cP(K_r,d)_M))$ is closed.
Since $\GL(A_d)$ acts transitively on $\msim^{-1}(\fv)$, it follows from (1) that
\[ \cV(K_r,d)_M = \{ \fv \in \Gr_d(A_r) \ ; \ \alpha^\ast(M) \ \text{is not projective for all} \ \alpha \in \msim^{-1}(\fv) \}.\]
Let $(0) \lra M' \lra M \lra M'' \lra (0)$ be an exact sequence in $\rep(K_r)$.
* We have
\[ \cV(K_r,d)_{M'} \subseteq \cV(K_r,d)_M \subseteq \cV(K_r,d)_{M'}\cup\cV(K_r,d)_{M''}.\]
* If the sequence splits, then
\[ \cV(K_r,d)_M = \cV(K_r,d)_{M'}\cup\cV(K_r,d)_{M''}. \]
(1) Since the algebra $\KK K_d$ is hereditary, submodules of projective modules are projective; this yields $\cV(K_r,d)_{M'} \subseteq \cV(K_r,d)_M$. If $\alpha^\ast(M')$ and $\alpha^\ast(M'')$ are both projective, then the restricted sequence splits because $\alpha^\ast(M'')$ is projective, so that $\alpha^\ast(M)$ is projective as well; this gives the second inclusion.
(2) This is a direct consequence of (1).
We consider the projective $K_r$-representations $(0,A_d)\cong P_0(r)^d$ and $(\KK,A_r)\cong P_1(r)$, where we have $\psi_{P_1(r)}(a\!\otimes\!t)\!=\!t a$. An element $\alpha \in \Hom_\KK(A_d,A_r)$ can be viewed as a morphism of representations
\[ \bar{\alpha} : (0,A_d) \lra (\KK,A_r) \ \ ; \ \ \bar{\alpha}_1\!:=\!0 \ \ \text{and} \ \ \bar{\alpha}_2\!:=\!\alpha.\]
Let $M \in \rep(K_r)$. As noted in Section <ref>, $M$ may be interpreted as a linear map
\[ \varphi_M : M_1 \lra \Hom_\KK(A_r,M_2) \ \ ; \ \ \varphi_M(m)(a)\!:=\!\psi_M(a\!\otimes m).\]
Given $\alpha \in \Hom_\KK(A_d,A_r)$, we obtain
\[ \varphi_{\alpha^\ast(M)} : M_1 \lra \Hom_\KK(A_d,M_2) \ \ ; \ \ m \mapsto \varphi_M(m)\circ \alpha.\]
A representation $M \in \rep(K_r)$ is called regular, if all its indecomposable constituents are regular. We recall the notion of elementary representations, which are just the simple objects in the full subcategory $\reg(K_r) \subseteq \rep(K_r)$ of regular representations.
A non-zero regular representation $E \in \rep(K_r)$ is referred to as elementary, provided there does not exist a short exact sequence
\[ (0) \lra A \lra E \lra B \lra (0)\]
with $A,B$ non-zero and regular.
Let $\alpha \in \Inj_\KK(A_d,A_r)$. Then the following statements hold:
* The module $\coker\bar{\alpha}$ is elementary with dimension vector $\udim \coker\bar{\alpha}\!=\!(1,r\!-\!d)$.
* Let $\beta \in \Inj_\KK(A_d,A_r)$. Then $\coker\bar{\alpha}\cong \coker\bar{\beta}$ if and only if $\im \alpha\!=\!\im\beta$.
* If $N\in \rep(K_r)$ is indecomposable with $\udim N\!=\!(1,r\!-\!d)$, then there is $\zeta \in \Inj_\KK(A_d,A_r)$ such that $N \cong \coker\bar{\zeta}$.
* Given $M \in \rep(K_r)$, we have an isomorphism of vector spaces $\coker\varphi_{\alpha^\ast(M)} \cong \Ext^1_{K_r}(\coker\bar{\alpha},M)$.
* Given $c \in \{1,\ldots, d\}$ and $\beta \in \Inj_\KK(A_c,A_d)$, there exists an epimorphism $\coker\overline{\alpha\circ \beta} \lra \coker\bar{\alpha}$.
(1)-(3) Recall that the projective cover $P_1(r)$ of the simple representation $S_1\!=\!(\KK,0)$ is given by $P_1(r)\!=\!(\KK,A_r,(P_1(r)(\gamma_i))_{1 \leq i \leq r})$ with $P_1(r)(\gamma_i)(\lambda)\!=\!\lambda
\gamma_i$ for all $i \in \{1,\ldots,r\}$. For $\alpha \in \Inj_\KK(A_d,A_r)$ we consider the monomorphism of representations $x_\alpha\!=\!(0,(x_\alpha)_2) \colon P_0(r)^d \lra P_1(r)$, given by
\[(x_\alpha)_2 \colon \KK^d \lra A_r \ \ ; \ \ \lambda \mapsto \sum^d_{i=1} \lambda_i \alpha(\gamma_i).\]
We get a commutative diagram
\[ \xymatrix{
(0) \ar[r] & P_0(r)^d \ar^{x_\alpha}[r] \ar^{\cong}[d] & P_1(r) \ar[r] \ar^{\cong}[d]& \coker x_\alpha \ar[r] \ar[d] & (0) \\
(0) \ar[r] & (0,A_d) \ar^{\overline{\alpha}}[r] & (\KK,A_r) \ar[r] & \coker \overline{\alpha} \ar[r] & (0)
}\]
with exact rows. In particular, $\udim\coker \overline{\alpha}\!=\!(1,r\!-\!d)$ and the 5-Lemma yields $\coker x_\alpha \cong \coker \overline{\alpha}$. Hence (1), (2) and (3) follow from <cit.>.
(4) The statement follows from a small modification of the proof of <cit.>. For the benefit of the reader we outline the changes. Since $\Ext^1_{K_r}((\KK,A_r),-)\!=\!(0)$, application of $\Hom_{K_r}(-,M)$ to the
short exact sequence
\[(0) \lra (0,A_d) \stackrel{\overline{\alpha}}{\lra} (\KK,A_r) \lra \coker \overline{\alpha} \lra (0)\]
yields the following diagram with exact rows
\[
\xymatrix{
\Hom_{K_r}((\KK,A_r),M) \ar^{\overline{\alpha}^\ast}[r] \ar^{f}[d]& \Hom_{K_r}((0,A_d),M) \ar[r] \ar^{g}[d] & \Ext^1_{K_r}(\coker \overline{\alpha},M) \ar[r] & (0) \\
M_1 \ar^{\varphi_{\alpha^{\ast}(M)}}[r] & \Hom_\KK(A_d,M_2) \ar[r] & \coker \varphi_{\alpha^{\ast}(M)} \ar[r] & (0),
}\]
where $f(h)\!:=\!h_1(1_\KK)$ for all $h \in \Hom_{K_r}((\KK,A_r),M)$ and $g(\eta)\!:=\!\eta_2$ for all $\eta \colon (0,A_d) \lra M$. Let $(h_1,h_2)\!=\!h \in \Hom_{K_r}((\KK,A_r),M)$. We recall that $\psi_M \circ (\id_{A_r}
\otimes h_1)\! =\! h_2 \circ \psi_{(\KK,A_r)}$ and $\psi_{(\KK,A_r)}(a \otimes \lambda)\! =\! \lambda a$ for all $\lambda \in \KK$ and $a \in A_r$. For $a \in A_d$ we obtain
\begin{align*}
[(\varphi_{\alpha^\ast(M)} \circ f)(h)](a) &= [\varphi_M(h_1(1_\KK)) \circ \alpha](a) = [\psi_M \circ (\id_{A_r} \otimes h_1)](\alpha(a) \otimes 1_\KK)\\
&= [h_2 \circ \psi_{(\KK,A_r)}](\alpha(a) \otimes 1_\KK) = h_2(\alpha(a)) = (h \circ \overline{\alpha})_2(a) \\
&= [(g \circ \overline{\alpha}^{\ast})(h)](a).
\end{align*}
Hence the left-hand square of the diagram commutes; since $f$ and $g$ are isomorphisms, we obtain
\[\dim_\KK \coker \varphi_{\alpha^{\ast}(M)} = \dim_\KK \Ext^1_{K_r}(\coker \overline{\alpha},M).\]
(5) This follows from the proof of <cit.>.
Let $\fv \in \Gr_d(A_r)$. Thanks to Proposition <ref>(2), we may define
\[ \coker \fv := \coker \bar{\alpha} \ \ \ \ \ \ \alpha \in \msim^{-1}(\fv).\]
For $\fv \in \Gr_d(A_r)$, we put
\[ E(\fv) := D_{K_r}(\tau_{K_r}(\coker \bar{\alpha})) \ \ \ \ \ \ \alpha \in \msim^{-1}(\fv).\]
In view of <cit.>, $E(\fv)$ is an elementary representation.
Given $M \in \rep(K_r)$ and $\fv \in \Gr_d(A_r)$, we recall the notation
\[ \psi_{M,\fv} := (\psi_M)|_{\fv\otimes_\KK M_1}.\]
Similarly, we define $\varphi_{M,\fv} : M_1\lra \Hom_\KK(\fv,M_2)$ via
\[ \varphi_{M,\fv}(m) := \varphi_M(m)|_\fv \ \ \ \ \forall \ m \in M_1.\]
Let $M \in \rep(K_r)$, $\fv \in \Gr_d(A_r)$.
* Let $\alpha \in \msim^{-1}(\fv)$. Then
\[\dim_\KK M_2\!-\!\rk(\psi_{M,\fv}) = \dim_\KK M_2\!-\!\dim_\KK\Rad(\alpha^{\ast}(M))\]
is the multiplicity of $P_0(d)$ in the decomposition of $\alpha^{\ast}(M) \in \rep(K_d)$ into indecomposable direct summands.
* We have $\dim_\KK \ker \psi_{M,\fv}\!=\! \dim_\KK \Hom_{K_r}(E(\fv),M)$.
(1) This follows immediately from the definition.
(2) We consider the dual representation $D_{K_r}(M) = (M_2^{\ast},M_1^{\ast},\psi_{D_{K_r}(M)} )$ with structure map
\[\psi_{D_{K_r}(M)} \colon A_r\!\otimes_\KK\!M_2^{\ast} \lra M_1^{\ast} \ \ ; \ \ a \otimes f \mapsto f \circ a_M.\]
There results a commutative diagram
\[\xymatrixcolsep{6pc} \xymatrix{
M_2^{\ast} \ar^{(\psi_{M,\fv})^{\ast}}[r] \ar@{=}[d]& (\fv\!\otimes_\KK\!M_1)^{\ast} \ar_{\cong}^{\eta}[d]\\
M_2^{\ast} \ar^{\varphi_{D_{K_r}(M),\fv}}[r] & \Hom_\KK(\fv,M_1^{\ast}),
}\]
with $\eta(f)(a)(m_1)\!=\!f(a \otimes m_1)$ for all $a \in \fv, m_1 \in M_1$ and $f \in (\fv\!\otimes_\KK\!M_1)^{\ast}$: For $h \in M_2^\ast, a \in \fv$ and $m \in M_1$ we have
\begin{align*}
[ \varphi_{D_{K_r}(M),\fv}(h)(a)](m) &= [ \psi_{D_{K_r}(M)}(a \otimes h)](m) = (h \circ a_M)(m)\\
&= (h \circ \psi_M)(a \otimes m) = (h \circ \psi_{M,\fv})(a \otimes m) \\
&= [\eta(h \circ \psi_{M,\fv})(a)](m) = [(\eta \circ (\psi_{M,\fv})^{\ast})(h)(a)](m).
\end{align*}
Hence $\varphi_{D_{K_r}(M),\fv}\!=\!\eta \circ (\psi_{M,\fv})^{\ast}$. Consequently, $\coker \varphi_{D_{K_r}(M),\fv} \cong \ker \psi_{M,\fv}$.
Let $\alpha \in \msim^{-1}(\fv)$. We conclude with Proposition <ref>(2),(4) that
\[\dim_\KK\coker\varphi_{\alpha^\ast(D_{K_r}(M))} =\dim_\KK\coker\varphi_{D_{K_r}(M),\fv} = \dim_\KK\ker\psi_{M,\fv}.\]
Since $\coker\overline{\alpha}$ is regular, the Auslander-Reiten formula <cit.> in conjunction with Proposition <ref>(4) now yields
\begin{align*}
\dim_\KK\ker\psi_{M,\fv} & = \dim_\KK \coker \varphi_{\alpha^{\ast}(D_{K_r}(M))} = \dim_\KK \Ext^1_{K_r}(\coker \overline{\alpha},D_{K_r}(M)) \\
&= \dim_\KK \Hom_{K_r}(D_{K_r}(M),\tau_{K_r}(\coker \overline{\alpha})) \\
&=\dim_\KK \Hom_{K_r}(E(\fv),M) \hfill \qedhere.
\end{align*}
For a pair $(V_1,V_2)$ of $\KK$-vector spaces and $d\in \{1,\ldots,r\}$, we set
\[ \Delta_{(V_1,V_2)}(d) := \dim_\KK V_2\!-\!d\dim_\KK V_1.\]
If $M \in \rep(K_r)$, we write
\[\Delta_M(d) := \Delta_{(M_1,M_2)}(d).\]
Let $M \in \rep(K_r)$. Given $\fv \in \Gr_d(A_r)$, the following statements are equivalent:
* $\Hom_{K_r}(E(\fv),M)\!=\!(0)$.
* For every $\alpha \in \msim^{-1}(\fv)$, the map $\psi_{\alpha^\ast(M)}$ is injective.
* There is $\alpha \in \msim^{-1}(\fv)$ such that $\psi_{\alpha^\ast(M)}$ is injective.
* There is $\alpha \in \msim^{-1}(\fv)$ such that $\alpha^\ast(M) \cong \Delta_M(d)P_0(d)\!\oplus\!(\dim_\KK M_1)P_1(d)$.
* $\fv \not\in \cV(K_r,d)_M$.
(1) $\Rightarrow$ (2). Let $\alpha \in \msim^{-1}(\fv)$. Proposition <ref>(2) yields $0\!=\!\dim_\KK\Hom_{K_r}(E(\fv),M)\!=\!\dim_\KK \ker \psi_{\alpha^\ast(M)}$.
(2) $\Rightarrow$ (3). Trivial.
(3) $\Rightarrow$ (4). Let $\alpha \in \msim^{-1}(\fv)$ be such that $\psi_{\alpha^\ast(M)}$ is injective. Then we have
\begin{align*}
\dim_\KK M_2\! -\! \rk(\psi_{M,\fv}) & = \dim_\KK M_2\!-\!\dim_\KK (\fv\!\otimes_\KK\!M_1) \\
&= \dim_\KK M_2\! -\! d \dim_\KK M_1 = \Delta_M(d).
\end{align*}
By Proposition <ref>(1), we may write $\alpha^{\ast}(M)\!=\!{\Delta_M(d)} P_0(d)\!\oplus\!N$, where $N \in \rep(K_d)$ does not have $P_0(d)$ as a direct summand. Therefore, a projective cover of $N$ is given by
$(\dim_\KK N_1)P_1(d) \twoheadrightarrow N$. Since $\dim_\KK N_1\!=\!\dim_\KK M_1$ and
\[ \dim_\KK N_2 = \dim_\KK M_2\!-\! \Delta_M(d) = d \dim_\KK M_1, \]
we conclude $\udim N\! =\! \udim [(\dim_\KK N_1) P_1(d)]$. Hence $\alpha^{\ast}(M) \cong \Delta_M(d) P_0(d)\!\oplus\! (\dim_\KK M_1)\!P_1(d)$.
(4) $\Rightarrow$ (5). This is a direct consequence of the remark following Lemma <ref>.
(5) $\Rightarrow$ (1). Let $\alpha \in \msim^{-1}(\fv)$. We write $\alpha^{\ast}(M) \cong a P_0(d)\! \oplus\!b P_1(d)$, so that
\[ \dim_\KK \Rad(\alpha^{\ast}(M)) = \dim_\KK \Rad(a P_0(d)\!\oplus\! b P_1(d)) = b \dim_\KK \Rad(P_1(d)) = b d.\]
Since $\udim P_0(d)$ and $\udim P_1(d)$ are linearly independent, it follows that $a\!=\!\Delta_M(d)$ and $b\!=\! \dim_\KK M_1$. We obtain
$\dim_\KK \ker \psi_{M,\fv}\! = \! d \dim_\KK M_1\! -\! \rk(\psi_{M,\fv})\! =\! d \dim_\KK M_1\! -\! \dim_\KK\Rad(\alpha^{\ast}(M))\!=\!0$, so that Proposition <ref> yields (1).
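The bookkeeping in the implications (3) $\Rightarrow$ (4) and (5) $\Rightarrow$ (1) reduces to dimension-vector arithmetic with $\udim P_0(d)\!=\!(0,1)$ and $\udim P_1(d)\!=\!(1,d)$. A quick sketch with invented dimensions (not taken from the text):

```python
def udim_sum(a, b, d):
    # udim of a*P_0(d) + b*P_1(d), using udim P_0(d) = (0,1), udim P_1(d) = (1,d)
    return (b, a + b * d)

d, m1, m2 = 3, 4, 20                 # invented values of d, dim M_1, dim M_2
delta = m2 - d * m1                  # Delta_M(d)
assert udim_sum(delta, m1, d) == (m1, m2)
# linear independence of (0,1) and (1,d): the coefficients are unique
assert all(udim_sum(a, b, d) != (m1, m2)
           for a in range(30) for b in range(30)
           if (a, b) != (delta, m1))
```

The uniqueness check is exactly the linear-independence argument of the proof: the first coordinate forces $b\!=\!\dim_\KK M_1$, and the second then forces $a\!=\!\Delta_M(d)$.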
Let $M \in \rep(K_r)$. Theorem <ref> implies
\[ \cV(K_r,d)_M = \{\fv \in \Gr_d(A_r) \ ; \ \rk(\psi_{M,\fv})\!<\!d\dim_\KK M_1\},\]
so that we refer to $\cV(K_r,d)_M$ as the $d$-th rank variety of $M$.
(1) For $d\!=\!1$, we have $\Gr_1(A_r)\!=\!\PP(A_r)$ and
\[ \cV(K_r,1)_M := \{ x \in \PP(A_r) \ ; \ \rk(x_M)\!<\!\dim_\KK M_1\}.\]
In particular, $\cV(K_r,1)_M\!=\!\emptyset$ if and only if $M \in \EKP(K_r)$.
(2) Let $\alpha \in \Inj_\KK(A_d,A_r)$. The pull-back functor $\alpha^\ast : \rep(K_r) \lra \rep(K_d)$ takes projectives to projectives. Theorem <ref> implies that
$\beta \in \GL(A_d).\alpha$ if and only if for every $M \in \rep(K_r)$ we have $\alpha^\ast(M) \ \text{projective} \ \Leftrightarrow \ \beta^\ast(M) \ \text{projective}$. Hence
$\Inj_\KK(A_d,A_r)/\GL(A_d)$ may be viewed as an analogue of the space of equivalence classes of $p$-points (in the context of abelian unipotent group schemes), cf. [31].
§.§ The categories $\repp(K_r,d)$
As before, we fix $d \in \{1,\ldots, r\!-\!1\}$. In this section, we introduce the subcategory $\repp(K_r,d)$ of $\rep(K_r)$ that will be instrumental in our study of Steiner bundles on the Grassmannian $\Gr_d(A_r)$.
We let $\repp(K_r,d)$ be the full subcategory of $\rep(K_r)$, whose objects $M$ are given by $\cV(K_r,d)_M\!=\!\emptyset$.
(1) Note that $\repp(K_r,1)\!=\!\EKP(K_r)$ is just the category of equal kernels representations, which usually has wild representation type, cf. <cit.>. The roughly analogous concept of shifted
cyclic subgroups for elementary abelian $p$-groups yields projective modules, see <cit.>.
(2) The category $\repp(K_r,d)$ is closed under taking submodules, cf. Lemma <ref>.
(3) Suppose that $M \in \repp(K_r,d)$ is such that $\dim_\KK M_1\!\le\!d$. Then $A_r\!\otimes_\KK\!M_1\!=\!\bigcup_{\fv \in \Gr_d(A_r)}\fv\otimes_\KK\!M_1$, so that $\psi_M$ is injective. Consequently, the direct
summand $(M_1,\psi_M(M_1),\psi_M)$ of $M$ is isomorphic to $(\dim_\KK M_1)P_1(r)$ and $M\cong \Delta_M(r)P_0(r)\!\oplus\!(\dim_\KK M_1)P_1(r)$ is projective.
(4) An object $M \in \repp(K_r,d)$ can be characterized by saying that $\alpha^\ast(M)$ is projective for every $\alpha \in \Inj_\KK(A_d,A_r)$. Equivalently, for $M \in \modd \KK K_r$ the restrictions
$M|_{\KK .\fv} \in \modd \KK .\fv$ are projective for every $\fv \in \Gr_d(A_r)$. The latter condition can be shown to be equivalent to $M$ being $(\KK K_r, \KK .\fv)$-projective in the sense of relative homological
algebra, cf. [38].
The following statements hold:
* $\repp(K_r,d) \subseteq \EKP(K_r)$ is a torsion free class that is closed under $\tau^{-1}_{K_r}$.
* $\repp(K_r,d)$ contains all preprojective representations.
* We have $\repp(K_r,r\!-\!1) \subseteq \repp(K_r,r\!-\!2) \subseteq \cdots \subseteq \repp(K_r,1) = \EKP(K_r)$.
(1) and (2) follow from Theorem <ref> and <cit.>.
(3) Let $M \in \repp(K_r,b)$ for some $1\! <\! b\!<\! r$ and consider $\fv \in \Gr_{b-1}(A_r)$. Then there is $\fw \in \Gr_b(A_r)$ such that $\fv \subseteq \fw$. Theorem <ref> yields
\[ \ker\psi_{M,\fv} \subseteq \ker\psi_{M,\fw} = (0),\]
so that $M \in \repp(K_r,b\!-\!1)$.
Let $X \subseteq \Gr_d(A_r)$ be a subset. Using Theorem <ref> and <cit.>, one can show that (1) and (2) of the foregoing result hold for the full subcategory $\rep_X(K_r,d)$, whose objects
$M$ satisfy $\cV(K_r,d)_M \subseteq X$.
In the sequel the shift functors $\sigma_{K_r},\sigma^{-1}_{K_r}: \rep(K_r) \lra \rep(K_r)$ will be of major importance. These functors correspond to the BGP-reflection functors but take into account that the opposite
quiver of $K_r$ is isomorphic to $K_r$, i.e. $D_{K_r} \circ \sigma_{K_r} \cong \sigma^{-1}_{K_r} \circ D_{K_r}$, where $D_{K_r} \colon \rep(K_r) \lra \rep(K_r)$ denotes the standard duality.
Given a representation $M \in \rep(K_r)$, $\sigma_{K_r}(M)$ is by definition the representation
\[ (\sigma_{K_r}(M)_1,\sigma_{K_r}(M)_2) = (\ker \psi_M,M_1), \]
where we identify $\psi_M$ by means of the basis $\{\gamma_1,\ldots, \gamma_r\}$ with the map $\psi_M : (M_1)^r \lra M_2, (m_i) \mapsto \sum^{r}_{i=1} M(\gamma_i)(m_i)$. By definition, $[\sigma_{K_r}(M)](\gamma_i)
\colon \sigma_{K_r}(M)_1 \lra \sigma_{K_r}(M)_2$ is given by $\pi_{i}|_{\ker \psi_M}$, with $\pi_i \colon (M_1)^r \lra M_1$ being the projection onto the $i$-th component. If $f \in \Hom_{K_r}(M,N)$, then $\sigma_{K_r}(f)_1 : \sigma_{K_r}(M)_1 \lra \sigma_{K_r}(N)_1$ is the restriction to $\ker\psi_M$ of the map
\[ (m_i)_{1\le i\le r} \mapsto (f_1(m_i))_{1\le i \le r},\]
while $\sigma_{K_r}(f)_2\!:=\!f_1$.
According to <cit.>, $\sigma_{K_r}$ is left exact, while $\sigma_{K_r}^{-1}$ is right exact and the functor $\sigma_{K_r}$ induces an equivalence
\[ \sigma_{K_r} : \rep_2(K_r) \lra \rep_1(K_r)\]
between the full subcategories $\rep_i(K_r)$ of $\rep(K_r)$, whose objects don't have any direct summands isomorphic to $S_i$. By the same token, $\sigma_{K_r}^{-1}$ is a quasi-inverse of $\sigma_{K_r}$.
Moreover, $\sigma_{K_r}$ and $\sigma^{-1}_{K_r}$ induce quasi-inverse equivalences on the full subcategory $\reg(K_r) \subseteq \rep(K_r)$ of regular representations. We have $\sigma_{K_r} \circ \sigma_{K_r} \cong
\tau_{K_r}$ as well as $\sigma_{K_r}(P_{i+1}(r)) \cong P_{i}(r)$ for all $i\!\geq\!0$.
Recall that $\{ \udim P_i(r) \ ; \ i \in \NN_0 \}$ consists exactly of those tuples $(a,b) \in \NN^2_0$ that satisfy $a\!<\!b$ and $q_r(a,b)\!=\!1$. Since all irreducible morphisms between preprojective representations in
$\rep(K_r)$ are injective, it follows that $P_i(r)$ is isomorphic to a subrepresentation of $P_j(r)$ if and only if $i\! \leq\!j$. We
also note that
\[\udim \sigma_{K_r}(M)\! =\!(r \dim_\KK M_1\! -\! \dim_\KK M_2, \dim_\KK M_1)\]
for $M$ indecomposable and not isomorphic to $P_0(r)$, while $\sigma_{K_r}(P_0(r))\!=\!(0)$. In conjunction with the left exactness of $\sigma_{K_r}$ this implies that $\sigma_{K_r} \colon \rep_2(K_r) \lra \rep_1(K_r)$ is exact. By the same token, $\sigma_{K_r}^{-1} \colon \rep_1(K_r) \lra \rep_2(K_r)$ is exact.
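To illustrate the dimension formula, note that the relation $\sigma_{K_r}(P_{i+1}(r)) \cong P_i(r)$ forces $\udim P_{i+1}(r)\!=\!(b,rb\!-\!a)$ whenever $\udim P_i(r)\!=\!(a,b)$. For $r\!=\!3$ this yields
\[ \udim P_0(3)=(0,1), \ \ \udim P_1(3)=(1,3), \ \ \udim P_2(3)=(3,8), \ \ \udim P_3(3)=(8,21), \ldots,\]
and each of these tuples $(a,b)$ satisfies $a\!<\!b$ as well as $q_3(a,b)\!=\!a^2\!+\!b^2\!-\!3ab\!=\!1$; for instance, $q_3(3,8)\!=\!9\!+\!64\!-\!72\!=\!1$.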
Let $1\! \leq \!d \!<\! r$. We have
\[\{ [\sigma_{K_r}(E(\fv))] \ ; \ \fv \in \Gr_d(A_r) \} = \{ [\coker \fw] \ ; \ \fw \in \Gr_{r-d}(A_r) \}.\]
Let $\fv \in \Gr_d(A_r)$. Since $E(\fv)$ is regular indecomposable, the same holds for $\sigma_{K_r}(E(\fv))$. As $\udim \sigma_{K_r}(E(\fv))\!=\! \udim \sigma^{-1}_{K_r}(D_{K_r}(\coker \fv))\!=\!(1,d)$, it
follows from Proposition <ref>(3) that $\sigma_{K_r}(E(\fv)) \cong \coker \fw$ for some $\fw \in \Gr_{r-d}(A_r)$.
If $\fw \in \Gr_{r-d}(A_r)$, then the preceding argument provides $\fv \in \Gr_d(A_r)$ such that $\sigma_{K_r}(E(\fw)) \cong \coker \fv$. We therefore obtain
\[ \sigma_{K_r}(E(\fv)) \cong \sigma_{K_r}(D_{K_r}(\tau_{K_r}(\coker \fv))) \cong D_{K_r}(\sigma_{K_r}(\coker\fv)) \cong D_{K_r}(\tau_{K_r}(E(\fw))) \cong \coker \fw. \qedhere\]
Let $M,E \in \rep(K_r)$ with $E$ regular. We claim that
(i) $\Hom_{K_r}(E,M) \cong \Hom_{K_r}(\sigma_{K_r}(E),\sigma_{K_r}(M))$, and
(ii) $\Hom_{K_r}(E,M) \cong \Hom_{K_r}(\tau_{K_r}(E),\tau_{K_r}(M))$.
In order to verify (i), we write $M\!=\!S_2^l\!\oplus\!N$, with $S_2$ not being a direct summand of $N$. Since there are no non-zero homomorphisms from non-zero regular representations to
projective representations (see <cit.>), we have $\Hom_{K_r}(E,S_2)\!=\!(0)$. Moreover, $\sigma_{K_r}(S_2)\!=\!(0)$ implies $\sigma_{K_r}(M) \cong \sigma_{K_r}(N)$ and we conclude
\begin{eqnarray*} \Hom_{K_r}(E,M) & \cong & \Hom_{K_r}(E,S_2^l)\!\oplus\! \Hom_{K_r}(E,N) \cong \Hom_{K_r}(\sigma_{K_r}(E),\sigma_{K_r}(N))\\
& \cong & \Hom_{K_r}(\sigma_{K_r}(E),\sigma_{K_r}(M)).
\end{eqnarray*}
Since $\sigma_{K_r}^2 \cong \tau_{K_r}$, we also get (ii).
Let $1\! \leq\! d\! <\! r$ and $M \in \rep(K_r)$.
* $M \in \repp(K_r,d)$ if and only if $\Hom_{K_r}(\coker\fw,\sigma_{K_r}(M))\!=\!(0)$ for all $\fw \in \Gr_{r-d}(A_r)$.
* $M \in \repp(K_r,r\!-\!1)$ if and only if $\sigma_{K_r}(M) \in \EKP(K_r)$.
* We have $\sigma_{K_r}^{-1}(\repp(K_r,d)) \subseteq \repp(K_r,r\!-\!1)$. In particular, $\repp(K_r,d)$ is $\sigma^{-1}_{K_r}$-stable.
(1) This is a direct consequence of Theorem <ref>, Lemma <ref>, and (i).
(2) In view of (i), we have $M \in \repp(K_r,r\!-\!1)$ if and only if
\[ (0) = \Hom_{K_r}(E(\fv),M) \cong \Hom_{K_r}(\sigma_{K_r}(E(\fv)),\sigma_{K_r}(M)) \cong \Hom_{K_r}(D_{K_r}(\sigma_{K_r}(\coker \fv)),\sigma_{K_r}(M))\]
for all $\fv \in \Gr_{r-1}(A_r)$. Since $\udim D_{K_r}(\sigma_{K_r}(\coker \fv)) = (1,r\!-\!1)$ for all $\fv \in \Gr_{r-1}(A_r)$, the statement follows from Lemma <ref> and Proposition <ref>(3).
(3) Let $M\!=\!\sigma_{K_r}^{-1}(N)$ for some $N \in \repp(K_r,d)$. Let $\fv \in \Gr_d(A_r)$. Since $\dim_\KK \Hom_{K_r}(E(\fv),S_1)\!=\!\dim_\KK E(\fv)_1\!\neq\!0$, it follows that $N \in \rep_1(K_r)$.
Then Proposition <ref>(3) in conjunction with the equivalence $\rep_2(K_r) \lra \rep_1(K_r)$ yields $\sigma_{K_r}(M) \cong N \in \EKP(K_r)$, so that (2) ensures that $M \in \repp(K_r,r\!-\!1)$.
By Proposition <ref>(3), this also implies that $\repp(K_r,d)$ is $\sigma^{-1}_{K_r}$-stable.
In view of $\sigma_{K_r}(P_{i+1}(r)) \cong P_{i}(r)$ for all $i\!\geq\!0$, parts (2) and (3) of Theorem <ref> imply inductively that $P_i(r) \in \repp(K_r,r\!-\!1)$ for all $i\!\ge\!0$.
We finally record a topological property of the set of relative projective modules of fixed dimension vector. The relevance of the technical condition (2) will be clarified in the following section.
Let $V_1,V_2$ be $\KK$-vector spaces, $d\in \{1,\ldots,r\!-\!1\}$. Then the following statements hold:
* The set $\repp(K_r,d)\cap\rep(K_r;V_1,V_2) \!=\!\{M \in \rep(K_r;V_1,V_2) \ ; \ \cV(K_r,d)_M\!=\!\emptyset\}$ is open.
* If $\Delta_{(V_1,V_2)}(d)\!\ge\!d(r\!-\!d)$, then $\repp(K_r,d)\cap\rep(K_r;V_1,V_2)$ lies dense in $\rep(K_r;V_1,V_2)$.
(1) We interpret $\rep(K_r;V_1,V_2)$ as $\Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)$. In view of Theorem <ref>, the relevant set is given by
\[ \cO_{(r,d)} := \{ \psi \in \Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2) \ ; \ \psi\circ (\alpha\!\otimes \id_{V_1}) \in \Inj_\KK(A_d\!\otimes_\KK\!V_1,V_2) \ \ \ \forall \ \alpha \in \Inj_\KK(A_d,A_r)\} .\]
We consider the canonical map
\[ \msim : \Inj_\KK(A_d,A_r) \lra \Gr_d(A_r) \ \ ; \ \ \alpha \mapsto \im\alpha.\]
As noted before, this map defines a principal $\GL(A_d)$-bundle, and hence so does
\[ \kappa : \Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)\!\times\!\Inj_\KK(A_d,A_r) \lra \Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)\!\times\!\Gr_d(A_r) \ \ ; \ \ (\psi, \alpha) \mapsto (\psi,\im\alpha).\]
In particular, $\kappa$ is an open morphism.
For $\psi \in \Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)$ and $\fv \in \Gr_d(A_r)$, we put $\psi_\fv\!:=\!\psi|_{\fv\otimes_\KK V_1}$. We consider the sets
\[ \cO_1\!:=\!\{(\psi, \alpha) \in \Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)\!\times\!\Inj_\KK(A_d,A_r) \ ; \ \rk(\psi\circ (\alpha\otimes\id_{V_1}))\!=\!d\dim_\KK V_1\}\]
\[ \cO_2 \!:= \! \{(\psi,\fv) \in \Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)\!\times\!\Gr_d(A_r) \ ; \ \rk(\psi_\fv)\!=\!d\dim_\KK V_1\}.\]
Since $\Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)\!\times\!\Inj_\KK(A_d,A_r) \lra \Hom_\KK(A_d\!\otimes_\KK\!V_1,V_2) \ ; \ (\psi,\alpha) \mapsto \psi\circ(\alpha\otimes\id_{V_1})$ is a morphism, lower semicontinuity of ranks
ensures that $\cO_1$ is an open subset, so that
\[\cO_2 = \kappa(\cO_1)\]
is open as well. As a result,
\[ \cC_{(r,d)}\!:=\!\{(\psi,\fv) \in \Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)\!\times\!\Gr_d(A_r) \ ; \ \rk(\psi_\fv)\!<\!d\dim_\KK V_1\}\]
is closed.
As the projective variety $\Gr_d(A_r)$ is complete, the morphism $\pr : \Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)\!\times\!\Gr_d(A_r) \lra \Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2) \ ; \ (\psi,\fv) \mapsto \psi$ is closed. Consequently,
\[\cX_{(r,d)}:=\pr(\cC_{(r,d)})\]
is a closed subset of the affine space $\Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)$.
Suppose that $\psi \not \in \cO_{(r,d)}$. Then there is $\fv \in \Gr_d(A_r)$ such that $\rk(\psi_\fv)\!<\!d\dim_\KK V_1$, so that $(\psi,\fv) \in \cC_{(r,d)}$. It follows that $\psi \in \cX_{(r,d)}$. Conversely, if $\psi \in \cX_{(r,d)}$,
then there is $\fv \in \Gr_d(A_r)$ such that $(\psi,\fv) \in \cC_{(r,d)}$. Thus, there is $\alpha \in \Inj_\KK(A_d,A_r)$ such that $\rk(\psi \circ (\alpha\otimes\id_{V_1}))\!<\!d\dim_\KK V_1$, whence $\psi\not \in \cO_{(r,d)}$.
As an upshot of the above, we obtain that
\[ \cO_{(r,d)} = \Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)\!\smallsetminus\!\cX_{(r,d)}\]
is open.
(2) In view of (1), it suffices to show that $\repp(K_r,d)\cap\rep(K_r;V_1,V_2)\!\ne\!\emptyset$. Setting $n\!:=\!\dim_\KK V_1$ and $m\!:=\!\dim_\KK V_2$, we define for $\ell \in \{0,\ldots,n\}$
\[ \Hom_\KK(V_1,V_2)_{ \le \ell} := \{ f \in \Hom_\KK(V_1,V_2) \ ; \ \rk(f)\!\le\!\ell\}.\]
We proceed in several steps, beginning by recalling that (see <cit.>)
(i) $\Hom_\KK (V_1,V_2)_{\le \ell}$ is a closed, irreducible subvariety of dimension $\ell(n\!+\!m\!-\!\ell)$.
Next, we verify the following claim
(ii) The closed subset $\cC_{(r,d)}$ defined in (1) is an irreducible variety of dimension
\[ \dim \cC_{(r,d)} = rmn\!-\!1\!-\!\Delta_{(V_1,V_2)}(d)\!+\!d(r\!-\!d).\]
In what follows, we let $\ell\!:=\!dn\!-\!1$. The algebraic group $\KK^\times$ acts on $\Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)\!\times\!\Gr_d(A_r)$ via
\[ \alpha\dact(\psi,\fv) := (\alpha\psi,\fv) \ \ \ \ \forall \ (\psi,\fv) \in \Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)\!\times\!\Gr_d(A_r).\]
Note that $\cC_{(r,d)}$ is $\KK^\times$-stable, so that every irreducible component $Z$ of $\cC_{(r,d)}$ enjoys the same property.
We consider the surjective morphism
\[ q : \cC_{(r,d)} \lra \Gr_d(A_r) \ \ ; \ \ (\psi,\fv) \mapsto \fv\]
as well as
\[ \iota_0 : \Gr_d(A_r) \lra \Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)\!\times\!\Gr_d(A_r) \ \ ; \ \ \fv \mapsto (0,\fv).\]
Let $Z$ be an irreducible component of $\cC_{(r,d)}$. Then $Z$ is a closed subset of $\Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)\!\times\!\Gr_d(A_r)$. If $\fv \in q(Z)$, then there is $\psi \in
\Hom_\KK(A_r\!\otimes_\KK\!V_1,V_2)$ such that $(\psi,\fv) \in Z$, and the $\KK^\times$-stability mentioned above implies $\iota_0(\fv)\!=\!(0,\fv) \in \overline{\KK^\times\dact(\psi,\fv)} \subseteq Z$. Consequently, $q(Z)\!=\!
\iota_0^{-1}(Z)$ is closed.
Let $\fv \in \Gr_d(A_r)$. Writing $A_r\!=\!\fv\!\oplus\!\fw$, we obtain
\[ q^{-1}(\fv) \cong\Hom_\KK(\fv\!\otimes_\KK\!V_1,V_2)_{\le \ell}\!\times\! \Hom_\KK(\fw\!\otimes_\KK\!V_1,V_2),\]
so that (i) ensures that $q^{-1}(\fv)$ is irreducible of dimension
\[ \dim q^{-1}(\fv) = \ell(m\!+\!dn\!-\!\ell)\!+\!mn(r\!-\!d) = \ell(m\!+\!1)\!+\!mn(r\!-\!d).\]
In view of <cit.>, the variety $\cC_{(r,d)}$ is irreducible and the fiber dimension theorem <cit.> yields
\begin{eqnarray*}
\dim \cC_{(r,d)} & = & \ell(m\!+\!1)\!+\!mn(r\!-\!d) \!+\!d(r\!-\!d)\\
& = & (dn\!-\!1)(m\!+\!1)\!+\!mn(r\!-\!d)\!+\!d(r\!-\!d)\\
& = & dmn\!+\!dn\!-\!m\!-\!1\!+\!mn(r\!-\!d)\!+\!d(r\!-\!d)\\
& = & rmn\!-\!1\!-\!\Delta_{(V_1,V_2)}(d)\!+\!d(r\!-\!d),
\end{eqnarray*}
as desired. $\diamond$
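As a consistency check of this dimension formula (a routine verification), take $r\!=\!3$, $d\!=\!1$, $n\!=\!2$ and $m\!=\!5$, so that $\ell\!=\!dn\!-\!1\!=\!1$ and $\Delta_{(V_1,V_2)}(d)\!=\!m\!-\!dn\!=\!3$. Then
\[ \ell(m\!+\!1)\!+\!mn(r\!-\!d)\!+\!d(r\!-\!d) = 6\!+\!20\!+\!2 = 28 = 30\!-\!1\!-\!3\!+\!2 = rmn\!-\!1\!-\!\Delta_{(V_1,V_2)}(d)\!+\!d(r\!-\!d).\]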
By virtue of (ii) and our current assumption, the fiber dimension theorem implies
\[ \dim \cX_{(r,d)} \le \dim \cC_{(r,d)} \le rmn\!-\!1 < \dim \Hom_\KK(A_r\!\otimes_\KK V_1,V_2),\]
so that $\cO_{(r,d)}\!=\!\Hom_\KK(A_r\!\otimes_\KK\! V_1,V_2)\!\smallsetminus\!\cX_{(r,d)}\!\ne\!\emptyset$. Hence there is $M \in \rep(K_r;V_1,V_2)$ such that $\cV(K_r,d)_M\!=\!\emptyset$.
§.§ Modules of minimal type and applications
It is a consequence of work of Westwick [63] that for “most” objects $M \in \EKP(K_r)$ the inequality
\[ \Delta_M(1) \ge r\!-\!1\]
holds. As we shall show below, the following definition naturally generalizes those $\EKP$-representations for which we have equality.
Let $d \in \{1,\ldots, r\!-\!1\}$. We say that $M \in \repp(K_r,d)$ has minimal type, provided $\Delta_M(d)\!=\!d(r\!-\!d)$.
Our approach rests on the following technical Lemma:
Let $M \in \repp(K_r,d)$. Then
\[ C_M :=\{(m,\fv) \in M_2\!\times\!\Gr_d(A_r) \ ; \ m \in \im \psi_{M,\fv}\} \subseteq M_2\!\times\!\Gr_d(A_r)\]
is a closed, irreducible subset of dimension $\dim C_M\!=\!d(r\!-\!d)\!+\!d\dim_\KK M_1$.
We proceed in several steps.
(a) Let $f : V \lra W$ be a $\KK$-linear map, $C \subseteq \Gr_q(V)$ be closed such that $\dim_\KK f(\fv)\!=\!q$ for all $\fv \in C$. Then
\[ \msim_{f,C} : C \lra \Gr_q(W) \ \ ; \ \ \fv \mapsto f(\fv)\]
is a morphism of projective varieties.
We denote by $\mspl_W : \Gr_q(W) \lra \PP(\bigwedge^q(W))$ the Plücker embedding and observe that $(\mspl_W\circ\msim_{f,C})(\fv)\!=\!\bigwedge^q(f(\fv))\!=\!\bigwedge^q(f)(\bigwedge^q(\fv))$ for every $\fv \in C$. The
map $\bigwedge^q(f) : \bigwedge^q(V) \lra \bigwedge^q(W)$ is $\KK$-linear and
$O_f\!:=\!\{ [x] \in \PP(\bigwedge^q(V)) \ ; \ \bigwedge^q(f)(x)\!\ne\!0\}$ is open and such that
\[ \varphi : O_f \lra \PP(\bigwedge^q(W)) \ ; \ [x] \mapsto [\bigwedge^q(f)(x)]\]
is a morphism. Since $\mspl_V(C) \subseteq O_f$, it follows that $\mspl_W\circ\msim_{f,C}\!=\!\varphi\circ \mspl_V|_C$ is a morphism. Hence the same holds for $\msim_{f,C}$. $\diamond$
(b) The map
\[ \zeta_M : \Gr_d(A_r) \lra \Gr_{d\dim_\KK M_1}(M_2) \ \ ; \ \ \fv \mapsto \psi_M(\fv\!\otimes_\KK\!M_1)\]
is a morphism.
We first consider the map
\[ \eta_M : \Gr_d(A_r) \lra \Gr_{d\dim_\KK M_1}(A_r\!\otimes_\KK\!M_1) \ \ ; \ \ \fv \mapsto \fv\!\otimes_\KK\!M_1.\]
Then we have
\[ (\mspl_{A_r\otimes_\KK M_1}\circ \eta_M)(\fv) = \bigwedge^{d\dim_\KK M_1}(\fv\!\otimes_\KK\!M_1) \cong \bigwedge^d(\fv)\!\otimes_\KK\!\bigwedge^{\dim_\KK M_1}(M_1) \cong \mspl_{A_r}(\fv)\!\otimes_\KK\!
\bigwedge^{\dim_\KK M_1}(M_1),\]
where the second to last isomorphism holds for dimension reasons. Hence $\mspl_{A_r\otimes_\KK M_1}\circ \eta_M$ is a morphism, so that $\eta_M$ enjoys the same property.
Consequently, $C\!:=\!\im \eta_M$ is closed. Since $\zeta_M\!=\!\msim_{\psi_M,C}\circ \eta_M$, our assertion follows from (a). $\diamond$
The incidence variety $\cI_M\!:=\!\{(m,\fw) \in M_2\!\times\!\Gr_{d\dim_\KK M_1}(M_2) \ ; \ m \in \fw\}$ is known to be closed. By (b), the map
\[ \id_{M_2}\!\times\zeta_M : M_2\!\times\!\Gr_d(A_r) \lra M_2\!\times\!\Gr_{d\dim_\KK M_1}(M_2)\]
is a morphism. Hence $C_M\!=\!(\id_{M_2}\!\times \zeta_M)^{-1}(\cI_M)$ is closed as well.
We consider the projection
\[ \pr_2 : C_M \lra \Gr_d(A_r) \ \ ; \ \ (m,\fv) \mapsto \fv.\]
For every $\fv \in \Gr_d(A_r)$, the fiber $\pr_2^{-1}(\fv)\!\cong\!\im \psi_{M,\fv}$ is irreducible of dimension $d\dim_\KK M_1$. Note that $\KK^\times$ acts on $C_M$ via
\[ \alpha. (m,\fv) = (\alpha m,\fv) \ \ \ \ \forall \ \alpha \in \KK^\times, (m,\fv) \in C_M.\]
It follows that every irreducible component $C \subseteq C_M$ is $\KK^\times$-stable, so that $(m,\fv) \in C$ implies $(0,\fv) \in C$. The morphism
\[ \iota : \Gr_d(A_r) \lra C_M \ \ ; \ \ \fv \mapsto (0,\fv)\]
thus yields $\pr_2(C)\!=\!\iota^{-1}(C)$, showing that $\pr_2(C)$ is closed. The irreducibility of $C_M$ now follows from <cit.>, while its dimension is given by the fiber dimension theorem.
For $M \in \rep(K_r)$ we put
\[ I_M := \bigcup_{\fv \in \Gr_d(A_r)}\im\psi_{M,\fv} \subseteq M_2.\]
Let $M \in \repp(K_r,d)$. Then we have
\[ \dim_\KK \ker\psi_M \le (r\!-\!d)(\dim_\KK M_1\!-\!\min\{\dim_\KK M_1,d\}).\]
In particular,
\[ \Delta_M(d) \ge (r\!-\!d)\min\{\dim_\KK M_1,d\}.\]
We consider the representation
\[ P(M)\!:=\!(M_1,A_r\!\otimes_\KK\!M_1,\id_{A_r\otimes_\KK M_1}).\]
Since $P(M)\cong (\dim_\KK M_1)P_1(r)$ is projective, Lemma <ref> shows that the set
\[ C_{P(M)} = \{ (x,\fv) \in (A_r\!\otimes_\KK\!M_1)\!\times\!\Gr_d(A_r) \ ; \ x \in \fv\!\otimes_\KK\!M_1\}\]
is closed, irreducible and of dimension $\dim C_{P(M)}\!=\!d\dim_\KK M_1\!+\!d(r\!-\!d)$. The projection
\[ \pr_1 : C_{P(M)} \lra A_r\!\otimes_\KK\!M_1\]
has image $I_{P(M)}\!=\!\bigcup_{\fv \in \Gr_d(A_r)}\fv\!\otimes_\KK\!M_1 \subseteq A_r\!\otimes_\KK\!M_1$. Since $\Gr_d(A_r)$ is complete and $C_{P(M)}$ is closed and irreducible, $I_{P(M)}$ is a closed, irreducible
subset of $A_r\!\otimes_\KK\!M_1$. Moreover, given $x \in I_{P(M)}$, we have $\pr_1^{-1}(x)\cong \{\fv \in \Gr_d(A_r) \ ; \ x \in \fv\!\otimes_\KK\!M_1\}$.
Let $\{m_1,\ldots, m_s\}$ be a basis of $M_1$, so that $s\!=\!\dim_\KK M_1$. Every $x \in A_r\!\otimes_\KK\!M_1$ is of the form $x\!=\!\sum_{i=1}^s v_i(x)\otimes m_i$. We let $\fc(x)\!:=\!\langle v_i(x) \ , \ i \in \{1,\ldots, s\}
\rangle \subseteq A_r$ be the coefficient space of $x$ and put $\ell(x)\!=\!\dim_\KK\fc(x)$. Writing $v_i(x)\!:=\!\sum_{j=1}^ra_{ij}(x)\gamma_j$ and $A(x)\!:=\!(a_{ij}(x)) \in \Mat_{s\times r}(\KK)$, we obtain $\dim_\KK\fc(x)\!=\!
\rk(A(x))$. Since $\rk(A(x))\!\le\!\min\{s,d\}$ for all $x \in I_{P(M)}$, it follows from $\sum_{i=1}^{\min\{s,d\}}\gamma_i\otimes m_i \in I_{P(M)}$ and the lower semicontinuity of $x \mapsto \rk(A(x))$ that
\[ O_M := \{ x \in I_{P(M)} \ ; \ \ell(x)\!=\!\min\{s,d\}\}\]
is a dense open subset of $I_{P(M)}$.
Let $x \in O_M$. Then $(x,\fv) \in \pr_1^{-1}(x)$ if and only if $\fc(x) \subseteq \fv$. If $s\!\ge\!d$, then $\pr^{-1}_1(x)\!=\!\{(x,\fc(x))\}$. Otherwise, $\pr_1^{-1}(x) \cong \Gr_{d-s}(A_r/\fc(x))$,
so that $\dim \pr^{-1}_1(x)\!=\!(d\!-\!s)(r\!-\!d)$. The fiber dimension theorem now shows that
\[\dim I_{P(M)} = ds\!+\!(r\!-\!d)\min\{s,d\}.\]
Note that $M \in \repp(K_r,d)$ implies that $\ker\psi_M\cap I_{P(M)}\!= \{0\}$. The affine dimension theorem <cit.> now ensures that
\[ 0 = \dim \ker\psi_M\cap I_{P(M)} \ge \dim_\KK \ker \psi_M\!+\!\dim I_{P(M)}\!-\!\dim_\KK(A_r\!\otimes_\KK\!M_1),\]
so that
\[ \dim_\KK\ker\psi_M\le rs\!-\!\dim I_{P(M)} = (r\!-\!d)(s\!-\!\min\{s,d\}).\]
Consequently,
\[ \dim_\KK M_2 \ge \rk(\psi_M) = rs\!-\!\dim_\KK\ker \psi_M \ge rs\!-\!(r\!-\!d)(s\!-\!\min\{s,d\}) = ds\!+\!(r\!-\!d)\min\{s,d\},\]
so that $\Delta_M(d)\!\ge\!(r\!-\!d)\min\{\dim_\KK M_1,d\}$.
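We note that for $\dim_\KK M_1\!\ge\!d$, the estimate of the preceding result specializes to
\[ \Delta_M(d) \ge d(r\!-\!d),\]
so that the representations of minimal type with $\dim_\KK M_1\!\ge\!d$ are exactly those attaining this lower bound.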
Theorem <ref> has been used by the first named author [10] to determine the orbit representatives of the dimension vectors of the elementary $K_r$-modules relative to the actions given by
$\sigma_{K_r}$ and the duality. This extends work by Ringel [57] concerning $r\!=\!3$, where module representatives were also identified. In this regard, the case $r\!=\!3$ appears to be special.
We shall see later that the following result reflects the scarcity of indecomposable vector bundles $\cF$ on $\Gr_d(A_r)$ of rank $\rk(\cF)\!<\!d(r\!-\!d)$.
Let $M,N \in \repp(K_r,d)$ be representations.
* Suppose that $\Delta_M(d)\!<\!d(r\!-\!d)$.
* If $M$ is indecomposable, then $M\cong P_0(r)$, or $d\!\ne\!1$ and $M\!\cong P_1(r)$.
* $M \cong \Delta_M(r)P_0(r)\!\oplus\!(\dim_\KK M_1)P_1(r)$ is projective.
* Suppose that $\Delta_M(d)\!=\!d(r\!-\!d)$.
* If $M$ is not projective, then every $f \in \Hom_{K_r}(M,N)\!\smallsetminus\!\{0\}$ is injective.
* $M$ is a brick or projective.
* If $M$ is not projective, then $\dim_\KK M_1\!\ge\!d\!+\!1$.
* If $\dim_\KK M_1\!\ge\!d\!+\!1$, then $M$ is not projective.
(1a) Suppose first that $M\not\cong P_0(r)$ is indecomposable. Then $\Delta_M(r)\!\le\!0$ and
\[ (\ast) \ \ \ \ (0) \lra -\Delta_M(r)P_0(r) \lra (\dim_\KK M_1)P_1(r) \lra M \lra (0)\]
is a projective resolution of $M$. In view of
\[ (\dim_\KK M_1)(r\!-\!d) = \Delta_M(d)\!-\!\Delta_M(r),\]
our current assumption in conjunction with Theorem <ref> implies $d\!>\!\dim_\KK M_1$ and $\Delta_M(d)\!\ge\!(\dim_\KK M_1)(r\!-\!d)$, whence $\Delta_M(r)\!\ge\!0$ and $d\!\ne\!1$. Consequently,
$\Delta_M(r)\!=\!0$, so that ($\ast$) and the indecomposability of $M$ show that $M\cong P_1(r)$.
(1b) Since $\repp(K_r,d)$ is closed under taking subrepresentations, the assertion follows from the Theorem of Krull-Remak-Schmidt.
(2a) Let $f \in \Hom_{K_r}(M,N)\!\smallsetminus\!\{0\}$. Then the terms of the short exact sequence
\[ (0) \lra \ker f \lra M \stackrel{f}{\lra} \im f \lra (0)\]
belong to $\repp(K_r,d)$ and
\[ \Delta_M(d) = \Delta_{\ker f}(d)\!+\!\Delta_{\im f}(d).\]
If $\Delta_{\ker f}(d)\!\ne\!0$, then $\Delta_{\im f}(d)\!<\!d(r\!-\!d)$, so that (1) shows that $\im f$ is projective. In view of Theorem <ref>, the assumption $\Delta_{\im f}(d)\!=\!0$ implies $(\im f)_1\!=\!(0)$, whence
$0\!=\!\Delta_{\im f}(d)\!=\!\dim_\KK (\im f)_2$, which contradicts $f\!\ne\!0$. Hence $\Delta_{\ker f}(d)\!<\!d(r\!-\!d)$, so that $\ker f$ is also projective, and so is $M\cong \ker f\!\oplus\!\im f$. As $M$ is not projective, we conclude that $\Delta_{\ker f}(d)\!=\!0$, whence
$\ker f\!=\!(0)$.
(2b) Suppose that $M$ is not projective and let $f \in \End_{K_r}(M)\!\smallsetminus\!\{0\}$. In view of (2a), $f$ is bijective. Hence $f$ has a non-zero eigenvalue $\alpha \in \KK$, and another application of (2a) yields $f\!=\!\alpha\id_M$. Consequently, $M$ is a brick.
(2c) As $M$ is a non-projective brick, the exact sequence ($\ast$) yields $\Delta_M(r)\!<\!0$ and
\[ (\dim_\KK M_1)(r\!-\! d) = \Delta_{(\dim_\KK M_1) P_1(r)}(d) > \Delta_M(d) = d(r\!-\!d),\]
so that $\dim_\KK M_1\!\ge\!d\!+\!1$.
(2d) Suppose that $M$ is projective. Then $M \cong \Delta_M(r)P_0(r)\!\oplus\!(\dim_\KK M_1)P_1(r)$, and we have
\[ 0 \le \Delta_M(r) = \Delta_M(d)\!-\!(\dim_\KK M_1)(r\!-\!d) \le -(r\!-\!d),\]
a contradiction. Hence $M$ is not projective.
Suppose that $M \in \repp(K_r,d)$ is non-projective and of minimal type. Then $M \not \in \repp(K_r,d\!+\!1)$.
Since $M$ is non-projective and of minimal type, Corollary <ref>(2b),(2c) provides $x\!\ge\!d\!+\!1$ such that $\udim M\!=\!(x,d(r\!-\!d\!+\!x))$. We thus have
\[ (\ast) \ \ \ \ \Delta_M(d\!+\!1) = d(r\!-\!d)\!-\!x.\]
Suppose that $M \in \repp(K_r,d\!+\!1)$. Since $x\!\ge\!d\!+\!1$, Theorem <ref> in conjunction with ($\ast$) gives $d(r\!-\!d)\!-\!x\!\ge\!(d\!+\!1)(r\!-\!d\!-\!1)$, whence $d\!+\!1\! \le\! x\!\le\! 2d\!+\!1\!-\!r$. As this
contradicts $d\!\le\!r\!-\!1$, we conclude that $M \not \in \repp(K_r,d\!+\!1)$.
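For the reader's convenience, we make the two computations above explicit: since $\udim M\!=\!(x,d(r\!-\!d\!+\!x))$, we have
\[ \Delta_M(d\!+\!1) = d(r\!-\!d\!+\!x)\!-\!(d\!+\!1)x = d(r\!-\!d)\!-\!x,\]
which is ($\ast$), while $(d\!+\!1)(r\!-\!d\!-\!1)\!=\!d(r\!-\!d)\!+\!r\!-\!2d\!-\!1$, so that the inequality $d(r\!-\!d)\!-\!x\!\ge\!(d\!+\!1)(r\!-\!d\!-\!1)$ amounts to $x\!\le\!2d\!+\!1\!-\!r$.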
(1) It follows that every regular $M \in \repp(K_r,d)$ of minimal type is a brick.
(2) Let $(V_1,V_2)$ be a pair of $\KK$-vector spaces. If $\Delta_{(V_1,V_2)}(d)\!\ge\!d(r\!-\!d)$, then Proposition <ref> ensures that $\repp(K_r,d)\cap\rep(K_r;V_1,V_2)$ is not empty. Otherwise, this set is either empty
(namely, if and only if $\Delta_{(V_1,V_2)}(r)\!<\!0$), or it consists of one $(\GL(V_2)\!\times\!\GL(V_1))$-orbit of projective representations.
Since the category $\repp(K_r,d)$ is closed under taking extensions (see Theorem <ref>), our next result provides a first characterization of objects in $\repp(K_r,d)$ in terms of those of minimal type.
Let $M \in \repp(K_r,d)$ be such that $\Delta_M(d)\!>\!d(r\!-\!d)$. There exists an exact sequence
\[ (0) \lra a P_0(r) \lra M \stackrel{\pi}{\lra} M_{\min} \lra (0),\]
where $M_{\min}$ has minimal type and $a \in \NN$.
By Lemma <ref>, the set
\[ C_M =\{(m,\fv) \in M_2\!\times\!\Gr_d(A_r) \ ; \ m \in \im \psi_{M,\fv}\} \subseteq M_2\!\times\!\Gr_d(A_r)\]
is a closed, irreducible subset of dimension $\dim C_M\!=\!d(r\!-\!d)\!+\!d\dim_\KK M_1$. Note that the morphism
\[ \pr_1 : C_M \lra M_2 \ \ ; \ \ (m,\fv) \mapsto m\]
has image $I_M\! =\!\bigcup_{\fv \in \Gr_d(A_r)}\im\psi_{M,\fv} \subseteq M_2$, so that $\Gr_d(A_r)$ being complete implies that $I_M$ is closed. (Observe that $\pr_1$ is the restriction of the projection $M_2\!\times\!\Gr_d(A_r)\lra M_2$, which is closed.) Moreover, $I_M$ is irreducible, conical and of dimension
\[ \dim I_M \le d(r\!-\!d)\!+\!d\dim_\KK M_1\!<\!\dim_\KK M_2.\]
The Noether Normalization Theorem thus implies the existence of a linear subspace $X \subseteq M_2$ of dimension $\dim_\KK M_2\!-\!\dim I_M$ and such that
\[ X\cap I_M = \{0\},\]
cf. <cit.>. Since $\dim_\KK X\!\ge\!\dim_\KK M_2\!-\!d\dim_\KK M_1\!-\!d(r\!-\!d)\!=\!\Delta_M(d)\!-\!d(r\!-\!d)\!>\!0$, we can find a subspace $(0) \subsetneq Y \subseteq X$ such that
(i) $\dim_\KK Y\!=\!\Delta_M(d)\!-\!d(r\!-\!d)$, and
(ii) $Y\cap I_M\!=\!\{0\}$.
We now consider $M_{\min}\!:=\!N\!:=\!(M_1,M_2/Y)$ together with the canonical projection $\pi_2 : M_2 \lra N_2$ and put $\psi_N\!:=\!\pi_2\circ \psi_M$. Then $\pi\!:=\!(\id_{M_1},\pi_2)$ defines a surjective morphism
$\pi : M \lra N$ such that $\ker\pi \cong (\dim_\KK Y)P_0(r)$. Given $\fv \in \Gr_d(A_r)$, (ii) yields $\im\psi_{M,\fv}\cap Y\!=\!(0)$, whence
\[ \ker \psi_{N,\fv} = \psi_{M,\fv}^{-1}(Y) = (0).\]
Consequently, $M_{\min} \in \repp(K_r,d)$, while (i) yields $\Delta_{M_{\min}}(d)\!=\!\Delta_M(d)\!-\!\Delta_{(\dim_\KK Y)P_0(r)}(d)\!=\!\Delta_M(d)\!-\!\dim_\KK Y\!=\!d(r\!-\!d)$.
We turn to the problem of embedding $K_r$-representations of minimal type into $K_{r+s}$-representations of the same type.
Given $M \in \rep(K_r)$ such that $\dim_\KK M_1\!\ge\!d$, we have
\[ I_M = \bigcup_{W \in \Gr_d(M_1)} \psi_M(A_r\!\otimes_\KK\!W)\!=:\!J_M.\]
Recall that $P(M)\!=\!(M_1,A_r\!\otimes_\KK\!M_1,\id_{A_r\!\otimes_\KK M_1})$. We first show that $I_{P(M)}\!=\!J_{P(M)}$. Let $x \in I_{P(M)}$. Then there is $\fv \in \Gr_d(A_r)$ such that
$x \in \fv\!\otimes_\KK\!M_1$. Given a basis $\{a_1,\ldots, a_d\} \subseteq \fv$, we write $x\!=\!\sum_{i=1}^d a_i\otimes m_i$, so that there is $W \in \Gr_d(M_1)$ such that $\langle\{m_1,\ldots, m_d\}\rangle \subseteq W$.
Consequently, $x \in A_r\!\otimes_\KK\!W$. As a result, $I_{P(M)} \subseteq J_{P(M)}$. The proof of the reverse inclusion is analogous.
For general $M$, we note that $I_M\!=\!\psi_M(I_{P(M)})\!=\!\psi_M(J_{P(M)})\!=\!J_M$.
Let $M \in \repp(K_r,d)$ be of minimal type such that $\dim_\KK M_1\!\ge\!d\!+\!1$. Then there exists $N \in \repp(K_{r+1},d)$ of minimal type such that
\[ N|_{K_r} \cong M\!\oplus\!dP_0(r).\]
Writing $A_{r+1}\!=\!A_r\!\oplus\!\KK a$, we first construct $Q \in \repp(K_{r+1},d)$ such that
\[ Q|_{K_r} \cong M\!\oplus (\dim_\KK M_1)P_0(r).\]
We put $Q_1\!:=\!M_1$, $Q_2\!:=\!M_2\!\oplus\! Y_2$, with $\dim_\KK Y_2\!=\!\dim_\KK M_1$ and pick an isomorphism
\[ \lambda : \KK a\!\otimes_\KK\!M_1 \stackrel{\sim}{\lra} Y_2.\]
Interpreting $\psi_M$ and $\lambda$ as elements of $\Hom_\KK(A_{r+1}\!\otimes_\KK\!Q_1,Q_2)$, we have
\[ (\ast) \ \ \ \ \ \ \ \ \ \psi_M(A_r\!\otimes_\KK\!W)\cap \lambda(\KK a\!\otimes_\KK\!W) = (0) \ \ \text{for all} \ \ W \in \Gr_d(M_1).\]
We consider the representation $Q\!:=\!(M_1,Q_2,\psi_M\!+\!\lambda) \in \rep(K_{r+1})$. Let $x \in \ker\psi_Q\cap I_{P(Q)}$. Lemma <ref> provides $W \in \Gr_d(M_1)$ such that
$x \in \ker\psi_Q\cap (A_{r+1}\!\otimes_\KK W)$. Writing $x\!=\!y\!+\!z$, with $y \in A_r\!\otimes_\KK\!W$ and $z \in \KK a\!\otimes_\KK\!W$ we obtain $\lambda(z)\!=\!-\psi_M(y) \in \psi_M(A_r\!\otimes_\KK\!W)\cap
\lambda(\KK a\!\otimes_\KK\!W)$, so that ($\ast$) implies $\lambda(z)\!=\!0\!=\!\psi_M(y)$. As $\lambda$ is injective, we conclude that $z\!=\!0$, while Lemma <ref> in conjunction with $M \in \repp(K_r,d)$ yields
$y \in \ker\psi_M\cap J_{P(M)}\!=\!\ker\psi_M\cap I_{P(M)}\!=\!(0)$. As a result, $x\!=\!0$, so that $Q \in \repp(K_{r+1},d)$. By construction, we have $Q|_{K_r} \cong M\!\oplus (\dim_\KK M_1)P_0(r)$.
Since $\Delta_Q(d)\!=\!\Delta_M(d)\!+\!\dim_\KK M_1\!=\!\dim_\KK M_1\!-\!d\!+\!d(r\!+\!1\!-\!d)$ and $\dim_\KK M_1\!>\!d$, Proposition <ref> provides a short exact sequence
\[ (0) \lra (\dim_\KK M_1\!-\!d)P_0(r\!+\!1) \lra Q \stackrel{\pi}{\lra} Q_{\min} \lra (0),\]
where $N\!:=\!Q_{\min}$ has minimal type. Let $\iota : M \lra Q|_{K_r}$ be the given injection. Since $\dim_\KK M_1\!\ge\!d\!+\!1$, Corollary <ref>(2d) ensures that $M$ is not projective. As $\ker \pi|_{K_r} \cong
(\dim_\KK M_1\!-\!d)P_0(r)$, the map $\pi \circ \iota : M \lra N|_{K_r}$ is not zero, and hence by Corollary <ref>(2a) injective. The restriction of the sequence above yields
\[ (0) \lra (\dim_\KK M_1\!-\!d)P_0(r) \stackrel{\binom{\alpha}{\beta}}{\lra} (\dim_\KK M_1)P_0(r)\!\oplus\!M \stackrel{(\zeta,\pi\circ \iota)}{\lra} N|_{K_r} \lra (0),\]
and thanks to <cit.> there results a push-out and pull-back diagram
\[ \begin{tikzcd} (\dim_\KK M_1\!-\!d)P_0(r) \arrow[r, "\alpha"] \arrow[d,"\beta"] & (\dim_\KK M_1)P_0(r) \arrow[d,"-\zeta"] \\
M \arrow[r,"\pi\circ \iota"] & N|_{K_r}.
\end{tikzcd} \]
Since $\pi\circ\iota$ is injective, so is $\alpha$. It follows that $dP_0(r) \cong \coker\alpha \cong \coker(\pi\circ \iota)$ (cf. <cit.>), so that there is an exact sequence
\[ (0) \lra M \stackrel{\pi\circ \iota}{\lra} N|_{K_r} \lra dP_0(r) \lra (0).\]
Consequently, $N|_{K_r} \cong M\!\oplus\! dP_0(r)$.
Suppose that $M \in \repp(K_r,d)$ is projective and of minimal type. Since $\Delta_{P_1(r)}(d)\!=\!(r\!-\!d)$, there is $\ell \in \{0,\ldots, d\}$ such that $M \cong \ell P_1(r)\!\oplus\!(d\!-\!\ell)(r\!-\!d)P_0(r)$.
We consider $N\!:= \ell P_1(r\!+\!1)\!\oplus\!(d\!-\!\ell)(r\!+\!1\!-\!d)P_0(r\!+\!1) \in \repp(K_{r+1},d)$. Then we have
\[ \Delta_N(d) = \ell (r\!+\!1\!-\!d)\!+\!(d\!-\!\ell)(r\!+\!1\!-\!d) = d(r\!+\!1\!-\!d),\]
so that $N$ has minimal type. Moreover,
\[ N|_{K_r} \cong \ell P_1(r)\!\oplus\!\ell P_0(r)\!\oplus\!(d\!-\!\ell)(r\!+\!1\!-\!d)P_0(r),\]
showing that $N|_{K_r} \cong M\!\oplus\! dP_0(r)$. By the remark at the beginning of Section <ref>, Proposition <ref> thus also holds in case $\dim_\KK M_1\!\le\!d$.
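We record the dimension count underlying the restriction statements above: restriction along $K_r \subseteq K_{r+1}$ leaves both underlying vector spaces unchanged and only forgets the operator corresponding to $\gamma_{r+1}$, so that $\udim (N|_{K_r})\!=\!\udim N$. For instance,
\[ \udim (P_1(r\!+\!1)|_{K_r}) = (1,r\!+\!1) = (1,r)\!+\!(0,1) = \udim (P_1(r)\!\oplus\!P_0(r)),\]
in accordance with $P_1(r\!+\!1)|_{K_r} \cong P_1(r)\!\oplus\!P_0(r)$.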
§.§ Examples: Representations and hyperplane arrangements
The representations of $\EKP(K_r)\!=\!\repp(K_r,1)$ discussed in this section correspond to the logarithmic bundles [23] and
Schwarzenberger bundles [60]. Our notation is meant to reflect this relationship.
Let $r\!\ge\!2$, $V$ be an $r$-dimensional vector space and $m\!\ge\!r$. We say that $\ff\!:=\!(f_1,\ldots, f_m) \in (V^\ast)^m$ is in general position, provided $V^\ast\!=\!\langle f_i \ ; \ i \in J \rangle$ for every subset $J
\subseteq \{1,\ldots, m\}$ such that $|J|\!=\!r$. We consider the projective space $\PP^{r-1}\!=\!\PP(V)$. A hyperplane arrangement of $V$ is an $m$-tuple $\cH\!:=\!(H_1,\ldots, H_m)$ of linear hyperplanes of
$\PP(V)$. We say that $\cH$ is in general position, provided for every $\ell\! \le\! r$ and every $\ell$-element subset $J_\ell\subseteq \{1,\ldots, m\}$, we have $\dim \bigcap_{i\in J_\ell}H_i\!=\!r\!-\!1\!-\!\ell$.
(Here we put $\dim \emptyset\!=\!-1$.) By definition, each $H_i\!:=\!Z(f_i)$ is the zero set of some $f_i \in V^\ast\!\smallsetminus\!\{0\}$. It follows that $\cH$ is in general position if and only if $(f_1,\ldots, f_m)$ enjoys this property.
Let $\ff\!:=\!(f_1,\ldots, f_m) \in (V^\ast)^m$. Following <cit.>, we set
\[ I_\ff\!:=\{\lambda \in \KK^m \ ; \ \sum_{i=1}^m\lambda_if_i\!=\!0\} \ \ \text{and} \ \ W_m\!:=\!\{\lambda \in \KK^m \ ; \ \sum_{i=1}^m\lambda_i\!=\!0\},\]
and refer to the linear map
\[ t_\ff : V\!\otimes_\KK\!I_\ff \lra W_m \ \ ; \ \ v \otimes \lambda \mapsto (\lambda_1f_1(v), \ldots, \lambda_mf_m(v))\]
as the fundamental tensor of $\ff$ (viewed as an element of $V^\ast\!\otimes_\KK\!I_\ff^\ast\!\otimes_\KK\!W_m$).
Given $\ff\!:=\!(f_1,\ldots, f_m) \in (A_r^\ast)^m$, we define $Q_\ff\!:=\!(\KK^m,\KK^m,(Q_\ff(\gamma_i))_{1\le i \le r}) \in \rep(K_r)$ via
\[ Q_\ff(\gamma_i)(\lambda) = (\lambda_1f_1(\gamma_i), \ldots, \lambda_mf_m(\gamma_i)) \ \ \ \ \ \forall \ i \in \{1,\ldots, r\}, \lambda \in \KK^m.\]
Then, setting $V\!:=\!A_r$, we see that
\[ M_\ff\!:=\!(I_\ff,W_m, (Q_\ff(\gamma_i)|_{I_\ff})_{1\le i \le r})\]
is a subrepresentation of $Q_\ff$ such that $\psi_{M_\ff}\!=\!t_\ff$. If $\cH$ is a hyperplane arrangement given by $\ff \in ((A_r)^\ast\!\smallsetminus\!\{0\})^m$, we abuse notation and write $M_\cH\!=\!M_\ff$
as well as $t_\cH\!=\!t_\ff$.
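To make these constructions concrete, consider the following small example (the data is chosen for illustration): let $r\!=\!3$, $m\!=\!4$, $f_i\!:=\!\gamma_i^\ast$ for $i \in \{1,2,3\}$ and $f_4\!:=\!f_1\!+\!f_2\!+\!f_3$, a tuple in general position. A relation $\sum_{i=1}^4\lambda_if_i\!=\!0$ forces $\lambda_1\!=\!\lambda_2\!=\!\lambda_3\!=\!-\lambda_4$, so that
\[ I_\ff = \KK(1,1,1,-1) \ \ \text{and} \ \ \udim M_\ff = (\dim_\KK I_\ff,\dim_\KK W_4) = (1,3) = (m\!-\!r,m\!-\!1),\]
in accordance with the dimension vector computed below.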
Suppose that $m\!\ge\!r\!\ge\!3$, and let $\cH\!=\!(H_1,\ldots, H_m)$ be a hyperplane arrangement in general position.
* $M_\cH \in \EKP(K_r)$ has dimension vector $\udim M_\cH\!=\!(m\!-\!r,m\!-\!1)$. In particular, $M_\cH$ has minimal type.
* If $m\!\ge\! r\!+\!2$, then $M_\cH \not \in \repp(K_r,2)$.
(1) Let $a \in A_r\!\smallsetminus\!\{0\}$. Then we have
\[ a_{M_\cH}(\lambda) = \psi_{M_\cH}(a\otimes \lambda) = t_\cH(a\!\otimes \lambda)\]
for all $\lambda \in I_\ff$. We may now apply <cit.> (which holds for arbitrary ground fields) to see that the linear map $a_{M_\cH}$ is injective. Consequently, $M_\cH \in \EKP(K_r)$.
We write $\cH\!:=\!(Z(f_1),\ldots, Z(f_m))$, $\ff\!:=\!(f_1,\ldots, f_m)$. By construction, we have $\dim_\KK(M_\cH)_2\!=\!\dim_\KK W_m\!=\!m\!-\!1$. Since $m\!\ge\!r$ and $\cH$ is in general position, we have
$A_r^\ast\!=\!\langle f_1,\ldots, f_r\rangle$, so that $\{f_1,\ldots, f_r\} \subseteq A_r^\ast$ is a basis of $A_r^\ast$. Hence the linear map
\[\alpha_\ff : \KK^m \lra A_r^\ast \ \ ; \ \ \lambda \mapsto \sum_{i=1}^m\lambda_if_i\]
is surjective, whence $\dim_\KK I_\ff\!=\!\dim_\KK\ker\alpha_\ff\!=\!m\!-\!r$.
(2) This is a direct consequence of (1), Corollary <ref>(2d) and Corollary <ref>.
A subclass of the modules $M_\cH$ considered above is given by the family $(M_\cS[m])_{m\ge r+1}$ of Schwarzenberger modules. Let $X_1,X_2$ be indeterminates over $\KK$. For $m\!\ge\!r\!+\!1$, we consider the
representation $M_\cS[m] \in \rep(K_r)$, defined via
\[ M_\cS[m]_1 := \KK[X_1,X_2]_{m-r-1} \ \ ; \ \ M_\cS[m]_2 := \KK[X_1,X_2]_{m-2} \ \ ; \ \ M_\cS[m](\gamma_i)(a) := X_1^{i-1}X_2^{r-i}a \ \ \ \forall \ a \in M_\cS[m]_1\]
for $1\!\le\!i\!\le\!r$. Then we have $M_\cS[m] \in \EKP(K_r)$, while $\udim M_\cS[m]\!=\!(m\!-\!r,m\!-\!1)$. The modules $M_\cS[m]$ turn out to correspond to vector bundles on $\PP^{r-1}$ that were introduced by
Schwarzenberger, cf. [60].
§.§ The generic canonical decomposition
Our first result provides the canonical decomposition <cit.> of certain dimension vectors. For ease of notation, we write $\Delta_{(V_1,V_2)}\!:=\!\Delta_{(V_1,V_2)}(1)$.
Suppose that $V_1,V_2$ are vector spaces such that $\repp(K_r,d)\cap\rep(K_r;V_1,V_2)\!\ne\!\emptyset$. According to Theorem <ref>, the assumption $\Delta_{(V_1,V_2)}(d)\!=\!0$ implies
$\dim_\KK V_1\!=\!0$ and hence $\dim_\KK V_2\!=\!\Delta_{(V_1,V_2)}(d)\!=\!0$. Hence the technical condition $\Delta_{(V_1,V_2)}\!\ge\!1$ of the ensuing results only excludes the trivial case.
Let $V_1,V_2$ be vector spaces such that $\Delta_{(V_1,V_2)}\!\ge\!1$ and write $\dim_\KK V_1\!=\!j\Delta_{(V_1,V_2)}\!+\!b$ for $b \in \{0,\ldots,\Delta_{(V_1,V_2)}\!-\!1\}$.
Then the set
\[ \rep(K_2;V_1,V_2)_0 := \{M \in \rep(K_2;V_1,V_2) \ ; \ M \cong (\Delta_{(V_1,V_2)}\!-\!b)P_j(2)\!\oplus\! bP_{j+1}(2)\}\]
is $(\GL(V_2)\!\times\!\GL(V_1))$-stable, dense and open.
By definition, $ \rep(K_2;V_1,V_2)_0$ is $(\GL(V_2)\!\times\!\GL(V_1))$-stable.
Since $P_j(2),P_{j+1}(2)$ are bricks, Corollary <ref> implies that these representations belong to their respective open sheets. By virtue of $\dim_\KK V_i\!=\!(j\!+\!i\!-\!1)\Delta_{(V_1,V_2)}\!+\!b$ and
$\Ext^1_{K_2}(P_\ell(2),P_k(2))\!=\!(0)$ for $\{\ell,k\}\!=\!\{j,j\!+\!1\}$, the conditions of <cit.> are met, so that our assertion follows from Kac's Theorem.
Let $r\!\ge\!3$. We consider the map
\[ \Res : \rep(K_r;V_1,V_2)\!\times\!\Inj_\KK(A_2,A_r) \lra \rep(K_2;V_1,V_2) \ \ ; \ \ (M,\alpha) \mapsto \alpha^\ast(M).\]
Let $V_1,V_2$ be $\KK$-vector spaces such that $\Delta_{(V_1,V_2)}\!\ge\!1$, and write
\[\dim_\KK V_1= j\Delta_{(V_1,V_2)}\!+\!b\]
for some $b \in \{0,\ldots,\Delta_{(V_1,V_2)}\!-\!1\}$. The following statements hold:
* $\Res$ is a morphism.
* The set $\Res((\EKP(K_r)\cap\rep(K_r;V_1,V_2))\!\times\!\Inj_\KK(A_2,A_r))$ is open.
* Suppose that $\EKP(K_r)\cap \rep(K_r;V_1,V_2)\!\ne\!\emptyset$. Then
\[ U := \{(M,\alpha) \in \rep(K_r;V_1,V_2)\!\times\! \Inj_\KK(A_2,A_r) \ ; \ \alpha^\ast(M) \cong (\Delta_{(V_1,V_2)}\!-\!b)P_j(2)\!\oplus\!bP_{j+1}(2)\}\]
is a dense, open subset of $\rep(K_r;V_1,V_2)\!\times\! \Inj_\KK(A_2,A_r)$.
(1) We interpret $\rep(K_r;V_1,V_2)$ as $\Hom_\KK(A_r, \Hom_\KK(V_1,V_2))$, so that
\[ \Res(\varrho,\alpha) = \varrho\circ \alpha\]
corresponds to matrix multiplication. Consequently, $\Res$ is a morphism.
(2) Note that
\[ \Res((\EKP(K_r)\cap\rep(K_r;V_1,V_2))\!\times\!\Inj_\KK(A_2,A_r)) = \bigcup_{\alpha \in \Inj_\KK(A_2,A_r)} \alpha^\ast(\EKP(K_r)\cap \rep(K_r;V_1,V_2)).\]
Given $\alpha \in \Inj_\KK(A_2,A_r)$, we write $A_r\!=\!\im\alpha\!\oplus\!\fw \cong A_2\!\oplus\!\fw$ for some $\fw \in \Gr_{r-2}(A_r)$. Then we have
\[ \Hom_\KK(A_r,\Hom_\KK(V_1,V_2)) \cong \Hom_\KK(A_2,\Hom_\KK(V_1,V_2))\!\oplus \Hom_\KK(\fw,\Hom_\KK(V_1,V_2)),\]
and $\alpha^\ast$ is the projection onto the first summand.
For varieties $X,Y,$ the projection $\pr_X : X\!\times\!Y \lra X$ is easily seen to be open. We apply this to the situation above to see that $\alpha^\ast : \rep(K_r;V_1,V_2) \lra \rep(K_2;V_1,V_2)$ is open.
Since Proposition <ref> ensures that $\EKP(K_r)\cap \rep(K_r;V_1,V_2)$ is open, our assertion follows.
(3) In view of (2) and Lemma <ref>, our current assumption implies that
\[ \Res((\EKP(K_r)\cap\rep(K_r;V_1,V_2))\!\times\!\Inj_\KK(A_2,A_r))\cap \rep(K_2;V_1,V_2)_0\]
is a non-empty, open subset of $\rep(K_2;V_1,V_2)$. By (1),
\[ U = \Res^{-1}(\rep(K_2;V_1,V_2)_0)\]
is thus a non-empty, open subset of $\rep(K_r;V_1,V_2)\!\times\! \Inj_\KK(A_2,A_r)$. Being an open subset of the irreducible variety $\rep(K_r;V_1,V_2)\!\times\! \Hom_\KK(A_2,A_r)$, the variety $\rep(K_r;V_1,V_2)\!\times\!
\Inj_\KK(A_2,A_r)$ is irreducible, so that $U$ lies dense in $\rep(K_r;V_1,V_2)\!\times\! \Inj_\KK(A_2,A_r)$.
Recall the notion of the generic decomposition $M_{\gen}$ of $M \in \repp(K_r,d)$, which we have discussed at the end of Section <ref>.
Suppose that $V_1,V_2$ are $\KK$-vector spaces such that $\Delta_{(V_1,V_2)}\!\ge\!1$ and $\EKP(K_r)\cap\rep(K_r;V_1,V_2)\!\ne\!\emptyset$. Let $(j,b) \in \NN_0\!\times\!\{0,\ldots,
\Delta_{(V_1,V_2)}\!-\!1\}$ be given by $\dim_\KK V_1\!=\!j\Delta_{(V_1,V_2)}\!+\!b$. Then
\[ O_{\gen} := \{ M \in \EKP(K_r)\cap\rep(K_r;V_1,V_2) \ ; \ M_{\gen}\! =\! (\Delta_{(V_1,V_2)}\!-\!b)P_j(2)\!\oplus\!bP_{j+1}(2)\}\]
is a dense open subset of $\rep(K_r;V_1,V_2)$.
According to Proposition <ref>
\begin{eqnarray*}
\{(M,\alpha) \in \rep(K_r;V_1,V_2)\!\times\! \Inj_\KK(A_2,A_r) \ ; \ \alpha^\ast(M)\! & \cong & \!(\Delta_{(V_1,V_2)}\!-\!b)P_j(2)\!\oplus\!bP_{j+1}(2)\} \\
& = & U= \Res^{-1}(\rep(K_2;V_1,V_2)_0)
\end{eqnarray*}
is a dense, open subset of $\rep(K_r;V_1,V_2)\!\times\!\Inj_\KK(A_2,A_r)$. The projection $\pr_1 : \rep(K_r;V_1,V_2)\!\times\! \Inj_\KK(A_2,A_r) \lra \rep(K_r;V_1,V_2)$ is open, so that $\pr_1(U)$ is open and
dense in $\rep(K_r;V_1,V_2)$. In view of Proposition <ref>(1), the subset $\pr_1(U)\cap\EKP(K_r)$ enjoys the same properties.
If $M \in O_{\gen}$, then there is $\alpha \in \Inj_\KK(A_2,A_r)$ such that
\[ \alpha^\ast(M) \cong (\Delta_{(V_1,V_2)}\!-\!b)P_j(2)\!\oplus\!bP_{j+1}(2),\]
whence $(M,\alpha) \in U$ and $M \in \pr_1(U)\cap\EKP(K_r)$.
Conversely, let $M \in \pr_1(U)\cap\EKP(K_r)$. Then there is $\alpha \in \Inj_\KK(A_2,A_r)$ such that
\[ (\ast) \ \ \ \ \alpha^\ast(M) \cong (\Delta_{(V_1,V_2)}\!-\!b)P_j(2)\!\oplus\!bP_{j+1}(2).\]
Let $\pi : \rep(K_2;V_1,V_2) \lra \Iso(K_2;V_1,V_2)$ be the canonical projection. Thanks to Lemma <ref>, $\Iso(K_2;V_1,V_2)_0\!:=\!\pi(\rep(K_2;V_1,V_2)_0)$ is a non-empty open subset of $\Iso(K_2;V_1,V_2)$.
Owing to ($\ast$), Proposition <ref> implies that
\[ U_M\!:=\! \res_M^{-1}(\Iso(K_2;V_1,V_2)_0) = \{\fv \in \Gr_2(A_r) \ ; \ [M|_\fv]\!=\![(\Delta_{(V_1,V_2)}\!-\!b)P_j(2)\!\oplus\!bP_{j+1}(2)]\}\]
is a dense open subset of $\Gr_2(A_r)$. In view of Corollary <ref>, $U_M$ therefore intersects the set
\[ O_M := \{ \fv \in \Gr_2(A_r) \ ; \ n_i(M,\fv) = n_i(M) \ \ \ \forall \ i \in \NN_0\}.\]
For $\fv \in U_M\cap O_M$, we thus obtain
\[ n_i(M) = \left\{ \begin{array}{cc} 0 & i \not\in \{j,j\!+\!1\} \\ \Delta_{(V_1,V_2)}\!-\!b & i\!=\!j \\ b & i\!=\!j\!+\!1, \end{array} \right.\]
so that $M \in O_{\gen}$. It follows that $O_{\gen}\!=\!\pr_1(U)\cap\EKP(K_r)$ has the asserted properties.
Let $V_1,V_2$ be $\KK$-vector spaces such that $\Delta_{(V_1,V_2)}\!\ge\!\max\{\dim_\KK V_1,r\!-\!1\}$. Then we have
\[ O_{\gen} = \{ M \in \EKP(K_r)\cap\rep(K_r;V_1,V_2) \ ; \ M_{\gen}\! =\! (\Delta_{(V_1,V_2)}\!-\!\dim_\KK V_1)P_0(2)\!\oplus\!(\dim_\KK V_1)P_1(2)\}.\]
Since $\Delta_{(V_1,V_2)}\!\ge\!r\!-\!1$, Proposition <ref> shows that $\EKP(K_r)\cap\rep(K_r;V_1,V_2)\!\ne\!\emptyset$. If $\Delta_{(V_1,V_2)}\!\ge\!\dim_\KK V_1\!+\!1$, we obtain
\[ \dim_\KK V_1 = 0\Delta_{(V_1,V_2)}\!+\!\dim_\KK V_1,\]
where $\dim_\KK V_1\!\le\!\Delta_{(V_1,V_2)}\!-\!1$. The result thus follows from Corollary <ref>. Alternatively, $\dim_\KK V_1\!=\!\Delta_{(V_1,V_2)}$ and Corollary <ref> implies
$M_{\gen}\!=\!(\dim_\KK V_1)P_1(2)$.
Let $V_1,V_2$ be $\KK$-vector spaces such that $\Delta_{(V_1,V_2)}\!\ge\!1$, $M \in \EKP(K_r)\cap\rep(K_r;V_1,V_2)$. If there exist $\fv_0 \in \Gr_2(A_r)$ and $c,d,j \in \NN_0$ such that
\[ [M|_{\fv_0}] = cP_j(2)\!\oplus\!dP_{j+1}(2),\]
then $M_{\gen}\!=\!cP_j(2)\!\oplus\!dP_{j+1}(2)$ and $M \in O_{\gen}$.
By assumption, we have $\Delta_{(V_1,V_2)}\!=\!\Delta_M\!=\!c\Delta_{P_j(2)}\!+\!d\Delta_{P_{j+1}(2)}\!=\!c\!+\!d$, while $\dim_\KK V_1\!=\!cj\!+\!d(j\!+\!1)\!=\!j\Delta_{(V_1,V_2)}\!+\!d$. If $c\!\ne\!0$,
then $0\!\le\!d\!\le\!\Delta_{(V_1,V_2)}\!-\!1$ and Proposition <ref>(3) implies that
\[ U := \{(N,\alpha) \in \rep(K_r;V_1,V_2)\!\times\! \Inj_\KK(A_2,A_r) \ ; \ \alpha^\ast(N) \cong cP_j(2)\!\oplus\!dP_{j+1}(2)\}\]
is a dense open subset of $\rep(K_r;V_1,V_2)\!\times\! \Inj_\KK(A_2,A_r)$. Alternatively, $\dim_\KK V_1\!=\!(j\!+\!1)\Delta_{(V_1,V_2)}$ and we arrive at the same conclusion. In view of Proposition <ref>(1),
we conclude that
\[ \tilde{U} := U \cap ((\EKP(K_r)\cap\rep(K_r;V_1,V_2))\!\times\! \Inj_\KK(A_2,A_r))\]
is a dense open subset of $\rep(K_r;V_1,V_2)\!\times\! \Inj_\KK(A_2,A_r)$.
Since the map
\[ \iota_M : \Inj_{\KK}(A_2,A_r) \lra \rep(K_r;V_1,V_2)\!\times\!\Inj_{\KK}(A_2,A_r) \ \ ; \ \ \alpha \mapsto (M,\alpha)\]
is continuous, our current assumption ensures that $\tilde{O}\!:=\! \iota_M^{-1}(\tilde{U})$ is a dense open subset of $\Inj_{\KK}(A_2,A_r)$. As the morphism $\msim : \Inj_\KK(A_2,A_r) \lra \Gr_2(A_r)$ is open
(see the proof of Proposition <ref>),
\[ \msim(\tilde{O})\!=\!\{ \fv \in \Gr_2(A_r) \ ; \ M|_\fv \cong cP_j(2)\!\oplus\!dP_{j+1}(2)\}\]
is a dense open subset of $\Gr_2(A_r)$ and Corollary <ref> gives
\[ M_{\gen} = cP_j(2)\!\oplus\!dP_{j+1}(2).\]
As a result, $M \in O_{\gen}$.
By way of example, we briefly discuss Schwarzenberger modules. Our result and its succeeding remark will provide the generic splitting type of the associated vector bundles and
show that these are $\GL(A_2)$-homogeneous, but not homogeneous (cf. Section <ref> below).
Given $\ell \in \NN$, we consider the Schwarzenberger module $M_\cS[\ell\!+\!r]$ of dimension vector $\udim M_\cS[\ell\!+\!r]\!=\!(\ell, \ell\!+\!r\!-\!1)$. Then we have $(M_\cS[\ell\!+\!r])_1\!=\!\KK[X_1,X_2]_{\ell-1}$ and
$(M_\cS[\ell\!+\!r])_2\!=\!\KK[X_1,X_2]_{\ell+r-2}$. Note that these spaces are $\ZZ^2$-graded. Without loss of generality, we may assume that $M_\cS[\ell\!+\!r](\gamma_i)(f)\!=\!X_i^{r-1}f$ for $i \in \{1,2\}$ and $f \in
\KK[X_1,X_2]_{\ell-1}$. In particular, we have $\deg(M_\cS[\ell\!+\!r](\gamma_1))\!=\!(r\!-\!1,0)$ and $\deg(M_\cS[\ell\!+\!r](\gamma_2))\!=\!(0,r\!-\!1)$.
If $\ell\!-\!1\!=\!a_\ell(r\!-\!1)\!+\!q_\ell$ for $q_\ell \in \{0,\ldots, r\!-\!2\}$, then
\[ M_\cS[\ell\!+\!r]_{\gen} = (r\!-\!2\!-\!q_\ell)P_{a_\ell}(2)\!\oplus\!(q_\ell\!+\!1)P_{a_\ell+1}(2).\]
In particular, $M_{\cS}[\ell\!+\!r] \in O_{\gen}$.
Let $\fv_0\!:=\!\KK\gamma_1\!\oplus\!\KK\gamma_2$. Writing $M\!:=\!M_\cS[\ell\!+\!r]$, we shall first find the decomposition of $M|_{\fv_0}$.
Given $(a,b) \in \NN_0^2$, we write $X^{(a,b)}\!:=\!X_1^aX_2^b$. For $q \in \{0,\ldots, r\!-\!2\}$ we consider the vector spaces
\[ M_{1,q} := \langle\{ X^{(a,b)} \in \KK[X_1,X_2]_{\ell-1} \ ; \ a\equiv q \ \modd(r\!-\!1)\} \rangle\]
as well as
\[M_{2,q} := \langle\{ X^{(a,b)} \in \KK[X_1,X_2]_{\ell+r-2} \ ; \ a\equiv q \ \modd(r\!-\!1)\}\rangle.\]
Then we have $M(\gamma_i)(M_{1,q}) \subseteq M_{2,q}$ for $i \in \{1,2\}$. Defining $M_q\!:=\!(M_{1,q},M_{2,q})$, we obtain a decomposition
\[ (\ast) \ \ \ \ \ \ \ \ M|_{\fv_0} = \bigoplus_{q=0}^{r-2} M_q\]
of $\KK.\fv_0$-modules. Moreover, setting $t_q\!:=\!|\{ n \in \NN_0 \ ; \ q\!+\!n(r\!-\!1)\!\le\!\ell\!-\!1\}|$, we have $\udim M_q\!=\!(t_q,t_q\!+\!1)$, so that $M_q \in \EKP(K_2)$ is of minimal type. Since $\EKP(K_2)$ is
closed under taking direct summands and $\Delta_X\!>\!0$ for all $X \in \EKP(K_2)\!\smallsetminus\!\{(0)\}$, this renders $M_q$ indecomposable. We conclude $M_q\!\cong\!P_{t_q}(2)$, so that ($\ast$) actually is the
decomposition of the $\KK K_2$-module $M|_{\fv_0}$ into indecomposables. For $0\!\le\!q\!\le\!q_{\ell}$, we have $t_q\!=\!a_\ell\!+\!1$, while $t_q\!=\!a_\ell$ for $q_\ell\!+\!1\!\le\!q\!\le\!r\!-\!2$. As a result, ($\ast$) yields
\[ M|_{\fv_0} \cong (r\!-\!2\!-\!q_\ell)P_{a_\ell}(2)\!\oplus\!(q_\ell\!+\!1)P_{a_\ell+1}(2).\]
Corollary <ref> implies
\[ (M_{\cS}[\ell\!+\!r])_{\gen} = (r\!-\!2\!-\!q_\ell)P_{a_\ell}(2)\!\oplus\!(q_\ell\!+\!1)P_{a_\ell+1}(2),\]
and $M_{\cS}[\ell\!+\!r] \in O_{\gen}$.
Let $M\!:=\!M_\cS[\ell\!+\!r]$ and consider
\[ \fv_1 := \KK\gamma_1\!\oplus\!\KK\gamma_3,\]
where
\[ M(\gamma_1)(X_1^aX_2^b) = X_1^{a+r-1}X_2^b \ \ ; \ \ M(\gamma_3)(X_1^aX_2^b) = X_1^{a+r-2}X_2^{b+1} \ \ \ \ (a\!+\!b\!=\!\ell\!-\!1).\]
Then $(\id_{\KK [X_1,X_2]_{\ell-1}}, X_1^{r-2}\cdot)$ defines an embedding $P_\ell(2) \hookrightarrow M|_{\fv_1}$ such that
\[ M|_{\fv_1} \cong (r\!-\!2)P_0(2)\!\oplus\!P_{\ell}(2).\]
Thus, for $\ell\!\ge\!2$, the dense open subset $O_M$ defined in Corollary <ref> is properly contained in $\Gr_2(A_r)$.
§ COHERENT SHEAVES ON $\GR_D(A_R)$ AND REPRESENTATIONS OF $K_R$
This section is concerned with the equivalence between the category $\repp(K_r,d)$ of relative projective representations and the category of Steiner bundles on the Grassmannian $\Gr_d(A_r)$, along with some consequences thereof.
Our account builds on work by Jardim-Prata [40], where vector bundles over fields of characteristic $0$ were considered. For fields of characteristic $p\!>\!0$, Friedlander-Pevtsova [32] earlier employed
universal nilpotent operators to define functors between module categories of infinitesimal group schemes and vector bundles on their support spaces. In the context of elementary abelian $p$-groups (or equivalently
elementary abelian restricted Lie algebras), the two approaches are related via the functorial correspondence between Kronecker representations and modules of Loewy length $\le\!2$.
§.§ Conventions
Let $r\!\ge\!2$, $d \in \{1,\ldots, r\!-\!1\}$. We denote by $\Coh(\Gr_d(A_r))$ and $\Vect(\Gr_d(A_r))$ the categories of coherent sheaves and vector bundles (locally free coherent
sheaves) on $\Gr_d(A_r)$, respectively. Note that $\Coh(\Gr_d(A_r))$ is an abelian category, cf. <cit.>. Let $\cO_{\Gr_d(A_r)}$ be the structure sheaf of $\Gr_d(A_r)$, so that every $\cF \in \Coh(\Gr_d(A_r))$ is
an $\cO_{\Gr_d(A_r)}$-module. In view of [3], the category $\Vect(\Gr_d(A_r))$ is a Krull-Schmidt category. Thus, the indecomposable objects of any full subcategory $\cC \subseteq \Vect(\Gr_d(A_r))$ that is closed
under taking direct summands are just the indecomposable vector bundles belonging to $\cC$.
Given $\fv \in \Gr_d(A_r)$, we let $\cO_{\Gr_d(A_r),\fv}$ be the local ring of $\Gr_d(A_r)$ at $\fv$, whose maximal ideal we denote by $\fm_\fv$. Let $\cF \in \Coh(\Gr_d(A_r))$. For $\fv \in \Gr_d(A_r)$, the stalk $
\cF_\fv$ of $\cF$ at $\fv$ is an $\cO_{\Gr_d(A_r),\fv}$-module, and the finite-dimensional $\KK$-vector space $\cF(\fv)\!=\!\cF_\fv/\fm_\fv\cF_\fv$ is called the fiber of $\cF$ at $\fv$. If $\cF \in \Vect(\Gr_d(A_r))$, then $
\cF_\fv$ is a free $\cO_{\Gr_d(A_r),\fv}$-module of rank $\rk(\cF_\fv)\!=\!\dim_\KK\cF(\fv)$.
If $\cF,\cG \in \Coh(\Gr_d(A_r))$ are sheaves, we let $\msHom_{\Gr_d(A_r)}(\cF,\cG)$ be the sheaf of $\cO_{\Gr_d(A_r)}$-homomorphisms from $\cF$ to $\cG$. By definition, we have $\msHom_{\Gr_d(A_r)}(\cF,\cG)
(\Gr_d(A_r))\!=\!\Hom_{\Gr_d(A_r)}(\cF,\cG)$, the space of homomorphisms from $\cF$ to $\cG$, cf. <cit.>. For $i \in \NN_0$, we let
\[ \Ext^i_{\Gr_d(A_r)}(\cF,-) : \Coh(\Gr_d(A_r)) \lra \modd \KK\]
be the $i$-th right derived functor of $\Hom_{\Gr_d(A_r)}(\cF,-) \cong \Ext^0_{\Gr_d(A_r)}(\cF,-)$. Setting
\[ \HH^i(\Gr_d(A_r),\cF) := \Ext^i_{\Gr_d(A_r)}(\cO_{\Gr_d(A_r)},\cF),\]
we recall that $\Ext^i_{\Gr_d(A_r)}(\cF,\cG) \cong \HH^i(\Gr_d(A_r),\msHom_{\Gr_d(A_r)}(\cF,\cG))$ for every $\cG \in \Coh(\Gr_d(A_r))$.
We denote by
\[ \chi(\cF) = \sum_{i=0}^{d(r-d)}(-1)^i \dim_\KK\HH^i(\Gr_d(A_r),\cF)\]
the Euler characteristic of $\cF$ (see <cit.>). We refer the reader to <cit.> for more details.
The following subsidiary result for ``special exceptional pairs'' of coherent sheaves on an arbitrary variety $X$ is inspired by <cit.>. We begin by recalling the relevant terminology:
A vector bundle $\cF \in \Vect(X)$ is referred to as exceptional if $\dim_\KK \Ext^i_X(\cF,\cF)\!=\!\delta_{i,0}$ for all $i\!\ge\!0$. We say that a pair $(E_0,E_1)$ of coherent sheaves on $X$ is special
exceptional, provided
(a) $\dim_\KK\Ext^n_X(E_i,E_i)\!=\!\delta_{n,0}$ for all $n\!\ge\!0$, $i \in \{0,1\}$.
(b) $\dim_\KK\Ext^n_X(E_0,E_1)\!=\!0$ for all $n\!\ge\!1$.
(c) $\dim_\KK\Ext^n_X(E_1,E_0)\!=\!0$ for all $n\!\ge\!0$.[Our choice of terminology derives from Rudakov's more general notion of an exceptional pair, cf. [58]. Pairs satisfying the above conditions are
referred to as ``strongly exceptional'' in [1], which differs from the definition employed in [15].]
We put $r\!:=\!\dim_\KK\Hom_X(E_0,E_1)$. Suppose there are exact sequences
\[ (\ast) \ \ \ \ (0) \lra E_0^m \lra E_1^n \lra \cF \lra (0)\]
as well as
\[ (\ast\ast) \ \ \ (0) \lra E_0^s \lra E_1^t \lra \cG \lra (0).\]
Recall from Section <ref> the definition of the Euler-Ringel bilinear form:
\[ \langle(x_1,x_2),(y_1,y_2)\rangle_r = x_1y_1\!+\!x_2y_2\!-\!rx_1y_2.\]
We have
\[\chi(\msHom_X(\cF,\cG))=\!\langle (m,n),(s,t)\rangle_r.\]
We proceed in several steps:
(i) We have an exact sequence
\[ (0) \lra \Hom_X(\cF,E_1^t) \lra \Hom_X(\cF,\cG) \lra \Ext^1_X(\cF,E_0^s) \lra \Ext^1_X(\cF,E_1^t) \lra \Ext^1_X(\cF,\cG) \lra (0).\]
The assertion follows by applying $\Hom_X(\cF, -)$ to ($\ast\ast$), once we know that $\Hom_X(\cF,E_0^s)\!=\!(0)\!=\!\Ext^2_X(\cF,E_0^s)$.
By applying $\Hom_X(-,E_0^s)$ to ($\ast$) while observing (c), we obtain a sequence $(0)\!\lra\!\Hom_X(\cF,E_0^s)$ $\lra\!\Hom_X(E_1^n,E_0^s)\!=\!(0)$, so that $\Hom_X(\cF,E_0^s)\!=\!(0)$. To show exactness at
$\Ext^1_X(\cF,\cG)$, we apply $\Ext^1_X(-,E_0^s)$ to $(\ast)$. Then (a) and (c) yield
\[ (0) = \Ext^1_X(E_0^m,E_0^s) \lra \Ext^2_X(\cF,E_0^s) \lra \Ext^2_X(E_1^n,E_0^s) = (0),\]
whence $\Ext^2_X(\cF,E_0^s)\!=\!(0)$. $\diamond$
(ii) We have $\dim_\KK\Hom_X(\cF,E_1^t)\!-\!\dim_\KK\Ext^1_X(\cF,E_1^t)\!=\!nt\!-\!rmt$.
We apply $\Hom_X(-,E_1^t)$ to ($\ast$) and obtain
\[ (0) \lra \Hom_X(\cF,E_1^t) \lra \Hom_X(E_1^n,E_1^t) \lra \Hom_X(E_0^m,E_1^t) \lra \Ext^1_X(\cF,E_1^t) \lra \Ext^1_X(E_1^n,E_1^t)\!=\!(0).\]
As $\dim_\KK\Hom_X(E_1^n,E_1^t)\!=\!nt$, while $\dim_\KK\Hom_X(E_0^m,E_1^t)\!=\!rmt$, our assertion follows. $\diamond$
(iii) We have $\Ext^\ell_X(\cF,\cG)\!=\!(0)$ for $\ell\!\ge\!2$.
Let $\ell\!\ge\!2$. Application of $\Ext^{\ell-1}_X(-,E_0^s)$ to $(\ast)$ in conjunction with (a) and (c) yields
\[(0) = \Ext^{\ell-1}_X(E_0^m,E_0^s) \lra \Ext^\ell_X(\cF,E_0^s) \lra \Ext^\ell_X(E_1^n,E_0^s) = (0),\]
so that $\Ext^\ell_X(\cF,E_0^s)\!=\!(0)$. In view of (b) and (a), application of $\Ext^{\ell-1}_X(-,E_1^t)$ to ($\ast$) gives
\[(0) = \Ext^{\ell-1}_X(E_0^m,E_1^t) \lra \Ext^\ell_X(\cF,E_1^t) \lra \Ext^\ell_X(E_1^n,E_1^t) = (0),\]
so that $\Ext^\ell_X(\cF,E_1^t)\!=\!(0)$.
Applying $\Ext^\ell_X(\cF,-)$ to ($\ast\ast$), we finally arrive at
\[ (0) = \Ext^\ell_X(\cF,E_1^t) \lra \Ext^\ell_X(\cF,\cG) \lra \Ext^{\ell+1}_X(\cF,E_0^s)=(0),\]
so that $\Ext^\ell_X(\cF,\cG)\!=\!(0)$. $\diamond$
By applying (iii), (i) and (ii) consecutively, we obtain
\begin{eqnarray*}
\chi(\msHom_X(\cF,\cG)) & = & \dim_\KK\Hom_X(\cF,\cG)\!-\!\dim_\KK\Ext^1_X(\cF,\cG)\\
& = & \dim_\KK\Hom_X(\cF,E_1^t)\!-\!\dim_\KK\Ext^1_X(\cF,E_1^t)\!+\!\dim_\KK\Ext^1_X(\cF,E_0^s)\\
& = & nt\!-\!rmt\!+\!\!\dim_\KK\Ext^1_X(\cF,E_0^s).
\end{eqnarray*}
Application of $\Hom_X(-,E_0^s)$ to ($\ast$) gives
\[ (0)= \Hom_X(E_1^n,E_0^s) \lra \Hom_X(E_0^m,E_0^s) \lra \Ext^1_X(\cF,E_0^s) \lra \Ext^1_X(E_1^n,E_0^s)=(0),\]
so that
\[ \chi(\msHom_X(\cF,\cG)) = nt\!-\!rmt\!+\!ms = \langle (m,n),(s,t)\rangle_r,\]
as desired.
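By way of example, for $r\!=\!3$, $(m,n)\!=\!(1,2)$ and $(s,t)\!=\!(1,2)$, the formula yields
\[ \chi(\msHom_X(\cF,\cG)) = \langle (1,2),(1,2)\rangle_3 = 1\!\cdot\!1\!+\!2\!\cdot\!2\!-\!3\!\cdot\!1\!\cdot\!2 = -1,\]
so that $\dim_\KK\Ext^1_X(\cF,\cG)\!=\!\dim_\KK\Hom_X(\cF,\cG)\!+\!1$ in this case; by (iii), only $\Hom$ and $\Ext^1$ contribute to $\chi$.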
§.§ The functor $\TilTheta_d$
For $d \in \{1,\ldots,r\!-\!1\}$ we consider the universal vector bundle $\cU_{(r,d)}$ of $\Gr_d(A_r)$. By definition, $\cU_{(r,d)}$ is the locally free sheaf corresponding to the locally trivial vector space fibration $(E,p)$
over $\Gr_d(A_r)$, where
\[ E\!:=\!\{(\fv,a) \in \Gr_d(A_r)\!\times\!A_r \ ; \ a \in \fv\} \ \ \text{and} \ \ p : E \lra \Gr_d(A_r) \ ; \ (\fv,a) \mapsto \fv\]
denote the incidence variety and the canonical projection, respectively. Consequently,
\[ \cU_{(r,d)}(U) = \{s : U \lra E \ ; \ p\circ s\!=\!\id_U\} \cong \{ t : U \lra A_r \ ; \ t(\fv) \in \fv \ \ \ \forall \ \fv \in U\}\]
for every open subset $U \subseteq \Gr_d(A_r)$. The vector bundle $\cU_{(r,d)}$ is known to be simple and the canonical map $\cU_{(r,d)} \lra \cO^r_{\Gr_d(A_r)}$ is locally split injective. Hence its cokernel $\cQ_{(r,d)}$
is a vector bundle (cf. <cit.>), referred to as the universal quotient bundle, cf. <cit.>.
Given a pair $(V_1,V_2)$ of $\KK$-vector spaces, we put
\[ \widetilde{V_1} := V_1\!\otimes_\KK\!\cU_{(r,d)} \ \ \text{and} \ \ \widetilde{V_2} := V_2\!\otimes_\KK\!\cO_{\Gr_d(A_r)}.\]
Let $f_i : V_i \lra W_i$ ($i \in \{1,2\}$) be linear maps between $\KK$-vector spaces. These give rise to morphisms $\tilde{f_i} : \widetilde{V_i} \lra \widetilde{W_i}$, where
\[ \tilde{f_1} := f_1\!\otimes\id_{\cU_{(r,d)}} \ \ \ \text{and} \ \ \ \tilde{f_2} := f_2\otimes\id_{\cO_{\Gr_d(A_r)}}.\]
For $\fv \in \Gr_d(A_r)$, the standard isomorphisms $\ev_{1,\fv} : \cU_{(r,d)}(\fv) \lra \fv$ and $\ev_{2,\fv} : \cO_{\Gr_d(A_r)}(\fv) \lra \KK$ induce commutative diagrams
\begin{equation}\label{diagram1} \begin{tikzcd} \widetilde{V_i}(\fv) \arrow[d, "\id_{V_i}\otimes \ev_{i,\fv}"] \arrow[r,"\tilde{f_i}(\fv)"] & \widetilde{W_i}(\fv) \arrow[d,"\id_{W_i}\otimes \ev_{i,\fv}"]\\
V_i\!\otimes_\KK\!\fu_i \arrow[r,"f_i\otimes\id_{\fu_i}"] & W_i\!\otimes_\KK\!\fu_i,
\end{tikzcd} \end{equation}
where
\[ \fu_i := \left\{\begin{array}{cc} \fv & i\!=\!1 \\ \KK & i\!=\!2.\end{array} \right.\]
Similarly, the isomorphism $\cO_{\Gr_d(A_r)}(\Gr_d(A_r)) \cong \KK$ allows us to identify $(\tilde{f_2})_{\Gr_d(A_r)}$ and $f_2$.
Let $M \in \rep(K_r)$. In analogy with <cit.> (see also [32]), we consider for every open subset $U \subseteq \Gr_d(A_r)$, the morphism
\[ \TilTheta_{M,d}(U) : \widetilde{M_1}(U) \lra \widetilde{M_2}(U) \ \ ; \ \ m\otimes t \mapsto \sum_{i=1}^r \gamma_i\dact m\otimes \gamma_i^\ast\circ t\]
of $\cO_{\Gr_d(A_r)}(U)$-modules.[Here $\{\gamma_1^\ast,\ldots, \gamma_r^\ast\} \subseteq A_r^\ast$ denotes the dual basis of $\{\gamma_1,\ldots, \gamma_r\}.$] Then $\TilTheta_{M,d} : \widetilde{M_1} \lra
\widetilde{M_2}$ is a morphism of vector bundles, and we define
\[ \TilTheta_d(M) := \msCoker\TilTheta_{M,d} \in \Coh(\Gr_d(A_r)).\]
The definition of $\TilTheta_d(M)$ does not depend on the choice of the dual bases $\{\gamma_1,\ldots, \gamma_r\} \subseteq A_r$, $\{\gamma^\ast_1,\ldots, \gamma^\ast_r\} \subseteq A^\ast_r$. In fact, one can define
sheaves $\widetilde{M}'_1$ and $\widetilde{M}'_2$ via
\[ \widetilde{M}'_1(U) := \{ s : U \lra A_r\!\otimes_\KK\!M_1 \ ; \ s(\fv) \subseteq \fv\!\otimes_\KK\!M_1 \ \ \ \ \forall \ \fv \in U\} \subseteq \Mor(U,A_r\!\otimes_\KK\!M_1)\]
and
\[ \widetilde{M}'_2(U) := \Mor(U,M_2).\]
Setting
\[ \TilTheta'_{M,d}(U)(s) = \psi_M\circ s \ \ \ \ \forall \ s \in \widetilde{M}'_1(U),\]
we obtain
\[ \TilTheta_d(M) \cong \coker\TilTheta'_{M,d}.\]
If $f : M \lra N$ is a morphism in $\rep(K_r)$ with components $f_i: M_i \lra N_i \ \ \ (i \in \{1,2\})$, then
\[ \tilde{f}_i : \widetilde{M_i} \! \lra \widetilde{N_i} \ \ ; \ \ m\otimes g \mapsto f_i(m)\otimes g \ \ \ \ \ \ \ (i \in \{1,2\})\]
are morphisms of vector bundles and there results a commutative diagram
\[ \begin{tikzcd} \widetilde{M_1} \arrow[d, "\tilde{f_1}"] \arrow[r,"\TilTheta_{M,d}"] & \widetilde{M_2} \arrow[d,"\tilde{f_2}"] \arrow[r] & \TilTheta_d(M) \arrow[r] & (0)\\
\widetilde{N_1} \arrow[r,"\TilTheta_{N,d}"] & \widetilde{N_2} \arrow[r] & \TilTheta_d(N) \arrow[r] & (0).
\end{tikzcd} \]
Consequently, there is a unique morphism
\[ \TilTheta_d(f) : \TilTheta_d(M) \lra \TilTheta_d(N),\]
which completes the diagram above. We thus obtain a functor
\[ \TilTheta _d: \rep(K_r) \lra \Coh(\Gr_d(A_r)).\]
We denote by $\reppi(K_r,d)$ the full subcategory of $\rep(K_r)$, whose objects $M$ have the property that $\psi_{M,\fv}\!=\!\psi_M|_{\fv\otimes_\KK M_1}$ is surjective for every $\fv \in \Gr_d(A_r)$. (This category
coincides with the essential image of $\repp(K_r,d)$ under the standard duality.)
Let $M \in \rep(K_r)$. We say that $M$ has constant $d$-rank, provided there is $\rk_d(M) \in \NN_0$ such that $\rk(\psi_{M,\fv})\!=\!\rk_d(M)$ for all $\fv \in \Gr_d(A_r)$.
We let $\CR(K_r,d)$ be the full subcategory of $\rep(K_r)$ whose objects have constant $d$-rank.
The category $\CR(K_r,d)$ is the analog of the category of modules of constant $(d,1)$-radical rank that was considered in <cit.>.
We record a few basic properties:
The following statements hold:
* The functor $\TilTheta_d$ is right exact.
* Let $M \in \rep(K_r)$. Then $M \in \CR(K_r,d)$ if and only if $\TilTheta_d(M) \in \Vect(\Gr_d(A_r))$. In that case, we have $\rk(\TilTheta_d(M))\!=\!\dim_\KK M_2\!-\!\rk_d(M)$.
* We have $\ker\TilTheta_d\cap\CR(K_r,d)\!=\!\reppi(K_r,d)$.
(1) The right exactness of $\TilTheta_d$ is a direct consequence of (the proof of) the Snake Lemma.
(2) Suppose that $M \in \CR(K_r,d)$. Given $\fv \in \Gr_d(A_r)$, the commutative diagram
\[ \begin{tikzcd} \widetilde{M_1}(\fv) \arrow[d, "\id_{M_1}\otimes \ev_{1,\fv}"] \arrow[r,"\TilTheta_{M,d}(\fv)"] & \widetilde{M_2}(\fv) \arrow[d,"\id_{M_2}\otimes \ev_{2,\fv}"]\\
M_1\!\otimes_\KK\!\fv \arrow[r,"\psi_{M,\fv}\circ \omega"] & M_2,
\end{tikzcd} \]
where $\omega$ flips the tensor factors, yields
\[\rk(\TilTheta_{M,d}(\fv)) = \rk(\psi_{M,\fv}) = \rk_d(M).\]
It now follows from <cit.> that $\TilTheta_d(M)$ is a vector bundle of rank $\rk(\TilTheta_d(M))\!=\!\dim_\KK M_2\!-\!\rk_d(M)$.
Conversely, assume that $\TilTheta_d(M)$ is a vector bundle. Then <cit.> implies that there is $n \in \NN_0$ such that
\[ \rk(\TilTheta_{M,d}(\fv)) = n \ \ \ \ \ \forall \ \fv \in \Gr_d(A_r).\]
The observation above thus implies that $\rk(\psi_{M,\fv})\!=\!n$ for all $\fv \in \Gr_d(A_r)$, so that $M \in \CR(K_r,d)$.
(3) Let $M \in \CR(K_r,d)$. By definition, we have $M \in \reppi(K_r,d)$ if and only if $\rk_d(M)\!=\!\dim_\KK M_2$. In view of (2), this is equivalent to $\TilTheta_d(M)\!=\!(0)$.
Steiner bundles on projective space were first systematically studied by Dolgachev-Kapranov [23]. The following definition for Grassmannians is taken from [1].
Let $(s,t) \in \NN_0^2$. A vector bundle $\cF \in \Vect(\Gr_d(A_r))$ is referred to as an $(s,t)$-Steiner bundle if there exists an exact sequence
\[ (0) \lra \cU^s_{(r,d)} \lra \cO^t_{\Gr_d(A_r)} \lra \cF \lra (0).\]
We denote by $\StVect(\Gr_d(A_r))$ the full subcategory of $\Vect(\Gr_d(A_r))$, whose objects are Steiner bundles (for some $(s,t) \in \NN_0^2$).
In view of $\cU_{(r,1)}\cong \cO_{\PP(A_r)}(-1)$ (cf. <cit.>), we retrieve the original definition of [23].
The relationship between $\repp(K_r)$ and $\StVect(\Gr_d(A_r))$, which for $d\!=\!1$ is implicit in Brambilla's work [13, 14] on Steiner bundles (for $\KK\!=\!\CC$), was investigated by Jardim and
Prata [40] for $\Char(\KK)\!=\!0$ in the more general context of cokernel bundles. In our context, the category of ``globally injective representations'' defined in [40] coincides with $\repp(K_r,d)$.
Since we will employ the functors $\TilTheta_d$ extensively, we recall the basic arguments of the proof of <cit.>, thereby hopefully convincing the reader that the result does
not necessitate any assumption on the characteristic of the base field. To that end we require the following subsidiary result.
The pair $(\cU_{(r,d)},\cO_{\Gr_d(A_r)})$ is special exceptional.
Given a partition $\alpha$, we let $|\alpha|\!=\!\sum_i\alpha_i$ be its degree, and $\alpha'$ be its transpose partition. Let $\cB_{r-d,d}$ be the set of those partitions, whose corresponding Young tableaux
have at most $r\!-\!d$ rows and at most $d$ columns. We pick a total ordering $\prec$ on $\cB_{r-d,d}$ such that $|\alpha| < |\beta|$ implies $\alpha \prec \beta$.
According to <cit.>, the vector bundle
\[ \cT := \bigoplus_{\alpha \in \cB_{r-d,d}}\bigwedge^{\alpha'}(\cU_{(r,d)})\]
is a tilting object in the bounded derived category $\msD^b(\Gr_d(A_r))$ of $\Coh(\Gr_d(A_r))$. In particular, $\Ext^i_{\Gr_d(A_r)}(\cT,\cT)\!=\!(0)$ for all $i\!>\!0$. We consider the partitions $\alpha\!:=\!0$ and
$\beta\!:=\!1$. Then $\alpha\!\prec\!\beta \in \cB_{r-d,d}$, so that $\cO_{\Gr_d(A_r)}\!=\!\bigwedge^\alpha(\cU_{(r,d)})$ and $\cU_{(r,d)}\!=\!\bigwedge^\beta(\cU_{(r,d)})$ are direct summands of $\cT$. Consequently,
$\Ext^i_{\Gr_d(A_r)}(\cX,\cY)\!=\!(0)$ for $\cX,\cY \in \{\cU_{(r,d)},\cO_{\Gr_d(A_r)}\}$ and $i\!>\!0$. Since both bundles are simple, our assertion follows from the fact that $\Hom_{\Gr_d(A_r)}(\cO_{\Gr_d(A_r)}, \cU_{(r,d)})\!
=\!\cU_{(r,d)}(\Gr_d(A_r))\!=\!(0)$, cf. <cit.>.
The following statements hold:
* $\cF \in \Vect(\Gr_d(A_r))$ is a Steiner bundle if and only if there is $M \in \repp(K_r,d)$ such that $\cF \cong \TilTheta_d(M)$.
* The functor $\TilTheta_d : \repp(K_r,d) \lra \StVect(\Gr_d(A_r))$ is an equivalence.
* Let $M \in \repp(K_r,d)$. Then we have $\rk(\TilTheta_d(M))\!=\!\Delta_M(d)$.
Let $M \in \CR(K_r,d)$. The proof of Lemma <ref> yields $\ker \TilTheta_{M,d}(\fv) \cong \ker \psi_{M,\fv}$ for every $\fv \in \Gr_d(A_r)$.
(1) Suppose that $M \in \repp(K_r,d)$. By virtue of Theorem <ref> and the above we have
\[ \ker \TilTheta_{M,d}(\fv) \cong \ker \psi_{M,\fv}\!=\!(0)\]
for all $\fv \in \Gr_d(A_r)$. According to <cit.>, the morphism $\TilTheta_{M,d}$ is locally split injective, whence $\msKer\TilTheta_{M,d}\!=\!(0)$. Consequently, $\TilTheta_d(M)$ is a Steiner bundle.
Conversely, suppose that $\cF$ is a Steiner bundle. Then there exists a pair $(M_1,M_2)$ of vector spaces and an exact sequence
\[ (0) \lra \widetilde{M_1} \stackrel{\Psi}{\lra} \widetilde{M_2} \lra \cF \lra (0)\]
of vector bundles. It follows from <cit.>, which holds for fields of arbitrary characteristic, that
\[ \dim_\KK\Hom_{\Gr_d(A_r)}(\cU_{(r,d)},\cO_{\Gr_d(A_r)}) = \dim_\KK\HH^0(\Gr_d(A_r),\cU_{(r,d)}^\vee) = r.\]
Hence the injective canonical map
\[ \Hom_\KK(M_1,M_2)\!\otimes_\KK\!A_r^\ast \lra \Hom_{\Gr_d(A_r)}(\widetilde{M_1},\widetilde{M_2})\]
sending $\zeta\otimes f$ to the map $m\otimes t \mapsto \zeta(m)\otimes f\circ t$ is also surjective and thus an isomorphism. There thus exist $\KK$-linear maps $M(\gamma_i) : M_1 \lra M_2$ such that
\[ \Psi(m\otimes t) = \sum_{i=1}^rM(\gamma_i)(m)\otimes \gamma_i^\ast\circ t \ \ \ \ \ \ \forall \ m \in M_1, t \in \cU_{(r,d)}.\]
Letting $M\!:=\!(M_1,M_2,(M(\gamma_i))_{1\le i\le r}) \in \rep(K_r)$, we see that $\cF\!\cong\!\TilTheta_d(M)$, while $\msKer\TilTheta_{M,d}\!=\!(0)$. Consequently, $M \in \repp(K_r,d)$, as $\ker \psi_{M,\fv} \cong
(\msKer\TilTheta_{M,d})(\fv)\! =\! (0)$ for every $\fv \in \Gr_d(A_r)$.
(2) Lemma <ref> ensures that $\Ext^1_{\Gr_d(A_r)}(\cO_{\Gr_d(A_r)},\cU_{(r,d)})\!=\!(0)$ and we may now proceed as in <cit.>, observing that the assumption $\Char(\KK)\!=\!0$ is not needed.
(3) This follows directly from Lemma <ref>.
For $d\!=\!1$, the bundles $\TilTheta_1(M_{\cS}[m])$, $m\!\ge\!r\!+\!1$, are those introduced by Schwarzenberger, cf. <cit.>. Using Corollary <ref> one obtains an explicit formula for
the generic splitting types of these bundles.
Let $(0) \lra A \stackrel{f}{\lra} B \stackrel{g}{\lra} C \lra (0)$ be an exact sequence in $\rep(K_r)$ such that $C \in \repp(K_r,d)$. Then the sequence $(0) \lra \TilTheta_d(A) \lra \TilTheta_d(B) \lra
\TilTheta_d(C) \lra (0)$ is exact.
We have exact sequences $(0) \lra A_i \stackrel{f_i}{\lra} B_i \stackrel{g_i}{\lra} C_i \lra (0)$ for $i \in \{1,2\}$. Tensoring with $\cU_{(r,d)}$ and $\cO_{\Gr_d(A_r)}$ yields a commutative diagram
\[ \begin{tikzcd}
(0) \arrow[r] & \widetilde{A_1} \arrow[r] \arrow[d,"\TilTheta_{A,d}"] & \widetilde{B_1} \arrow[r] \arrow[d,"\TilTheta_{B,d}"] & \widetilde{C_1} \arrow[d,"\TilTheta_{C,d}"] \arrow[r] & (0)\\
(0) \arrow[r] & \widetilde{A_2} \arrow[r] & \widetilde{B_2} \arrow[r] & \widetilde{C_2} \arrow[r] & (0)
\end{tikzcd} \]
with exact rows. The arguments of the proof of Theorem <ref> show that
\[ 0 = \dim_\KK \ker \psi_{C,\fv} = \dim_\KK\ker(\TilTheta_{C,d}(\fv))\]
for all $\fv \in \Gr_d(A_r)$. By virtue of <cit.>, we obtain $\msKer \TilTheta_{C,d}\!=\!(0)$. The assertion now follows by applying the Snake Lemma.
For $M,N \in \rep(K_r,d)$ the following statements hold:
* We have
\[ \chi(\msHom_{\Gr_d(A_r)}(\TilTheta_d(M),\TilTheta_d(N))) = \langle\udim M, \udim N\rangle_r.\]
* We have
\[ \dim_\KK\Ext^1_{K_r}(M,N) = \dim_\KK\Ext^1_{\Gr_d(A_r)}(\TilTheta_d(M),\TilTheta_d(N)).\]
(1) This is a direct consequence of Theorem <ref>, Proposition <ref> and Lemma <ref>.
(2) Proposition <ref> and its proof yield
\begin{eqnarray*}
\dim_\KK\Ext^1_{\Gr_d(A_r)}(\TilTheta_d(M),\TilTheta_d(N)) &= & \dim_\KK \Hom_{\Gr_d(A_r)}(\TilTheta_d(M),\TilTheta_d(N))\\
&- & \chi(\msHom_{\Gr_d(A_r)}(\TilTheta_d(M),\TilTheta_d(N)))\\
& = & \dim_\KK \Hom_{K_r}(M,N)\!-\!\langle\udim M,\udim N\rangle_r\\
& = &\dim_\KK\Ext^1_{K_r}(M,N). \ \ \ \ \ \ \qedhere
\end{eqnarray*}
§.§ Direct consequences
Following [26], we begin by collecting a few facts concerning Chow rings of Grassmannians. We emphasize that our statements are valid for the field $\KK$, which has arbitrary characteristic.
For $\ell \in \{0,\ldots, d(r\!-\!d)\}$, we let $Z_\ell(\Gr_d(A_r))$ be the free abelian group with basis given by the $\ell$-dimensional irreducible closed subsets of $\Gr_d(A_r)$. The elements
of $Z_\ell(\Gr_d(A_r))$ are called $\ell$-cycles, those of
\[ Z(\Gr_d(A_r)) := \bigoplus_{\ell=0}^{d(r\!-\!d)} Z_\ell(\Gr_d(A_r))\]
are referred to as cycles.
The Chow group of $\Gr_d(A_r)$ is the factor group
\[ A(\Gr_d(A_r)) := Z(\Gr_d(A_r))/\Rat(\Gr_d(A_r))\]
given by the subgroup $\Rat(\Gr_d(A_r))$ defined in <cit.>. If $C$ is a cycle, we write $[C]$ for its residue class. The class $[\Gr_d(A_r)] \in A(\Gr_d(A_r))$ is called the fundamental class of $\Gr_d(A_r)$.
Given $\ell \in \{0,\ldots, d(r\!-\!d)\}$, we put
\[ A^\ell(\Gr_d(A_r)) := \{ [C] \ ; \ C \in Z_{d(r-d)-\ell}(\Gr_d(A_r))\}.\]
We have
\[ A(\Gr_d(A_r)) = \bigoplus_{\ell=0}^{d(r-d)} A^\ell(\Gr_d(A_r)).\]
The group $A(\Gr_d(A_r))$ has the structure of a $\ZZ$-graded, commutative ring, see <cit.>. One can show that $A^0(\Gr_d(A_r))\!=\!\ZZ\cdot [\Gr_d(A_r)] \cong \ZZ$.
The ring $A(\Gr_d(A_r))$ is called the Chow ring of $\Gr_d(A_r)$.
Let
\[ \cV : \ \ \ \ \ \ (0) \subseteq V_1 \subseteq V_2 \subseteq \cdots \subseteq V_{r-1} \subseteq V_r\!=\!A_r\]
be a complete flag in $A_r$, so that $\dim_\KK V_i\!=\!i$.
We let $\NN_0^d(r)\!:=\!\{ a \in \NN_0^d \ ; \ r\!-\!d\!\ge\! a_1\! \ge\! \cdots\! \ge\! a_d\! \ge\! 0\}$. For $a \in \NN_0^d(r)$ we define the Schubert variety $\Sigma_a(\cV)$ via
\[ \Sigma_a(\cV) := \{\fv \in \Gr_d(A_r) \ ; \ \dim_\KK(\fv\cap V_{r-d+i-a_i})\!\ge\!i \ \ \ \forall \ i \in \{1,\ldots, d\}\}.\]
Then $\Sigma_a(\cV)$ is smooth, closed, irreducible and of codimension $|a|\!:=\!\sum_{i=1}^da_i$ in $\Gr_d(A_r)$, cf. <cit.>. Its class
\[ \sigma_a := [\Sigma_a(\cV)]\]
is the Schubert class of $a$. It does not depend on the choice of $\cV$.
Given $\ell \in \{0,\ldots, r\!-\!d\}$, we write
\[ \Sigma_\ell(\cV) := \Sigma_{(\ell,0,\ldots,0)}(\cV),\]
so that
\[\Sigma_\ell(\cV) = \{ \fv \in \Gr_d(A_r) \ ; \ \fv\cap V_{r-d+1-\ell} \!\ne\!(0)\}.\]
The classes $\sigma_\ell\!:=[\Sigma_\ell(\cV)]$ are referred to as special Schubert classes.
We have
\[ \sigma_\ell \in A^\ell(\Gr_d(A_r))\]
for all $\ell \in \{0,\ldots,r\!-\!d\}$.
(cf. <cit.>) The Schubert classes $\{\sigma_a \ ; \ a \in \NN_0^d(r)\}$ form a basis of the $\ZZ$-module $A(\Gr_d(A_r))$.
It follows that
\[\{\sigma_a \ ; \ a \in \NN_0^d(r), \ |a|\!=\!i\}\]
is a basis of $A^i(\Gr_d(A_r))$ for every $i \in \{0,\ldots,d(r\!-\!d)\}$.
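For orientation, the following standard example (ours, added for illustration) spells this out in the smallest case:

```latex
For $d\!=\!1$ and $r\!=\!3$, the Grassmannian $\Gr_1(A_3)$ is the projective
plane and $\NN_0^1(3)\!=\!\{0,1,2\}$, so that
\[ A(\Gr_1(A_3)) = \ZZ\sigma_0\oplus\ZZ\sigma_1\oplus\ZZ\sigma_2,\]
where $\sigma_0\!=\![\Gr_1(A_3)]$ is the fundamental class, while $\sigma_1$
and $\sigma_2$ are the classes of a line and of a point, respectively.
Pieri's rule gives $\sigma_1\cdot\sigma_1\!=\!\sigma_2$: two general lines
meet in exactly one point.
```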
Let $\cF$ be a vector bundle on $\Gr_d(A_r)$. We denote by
\[ c(\cF) = 1\!+\!\sum_{i=1}^{\rk(\cF)} c_i(\cF)t^i \in A(\Gr_d(A_r))[t]\]
the Chern polynomial of $\cF$. The coefficient $c_i(\cF) \in A^i(\Gr_d(A_r))$ is called the $i$-th Chern class of $\cF$, cf. <cit.>.
The Chern polynomial of the universal quotient bundle $\cQ_{(r,d)}$ on $\Gr_d(A_r)$ is given by
\[ c(\cQ_{(r,d)}) = 1\!+\!\sum_{\ell=1}^{r-d} \sigma_\ell t^{\ell},\]
see <cit.>.
Let $M \in \repp(K_r,d)$. Then we have
\[ c_1(\TilTheta_d(M)) = (\dim_\KK M_1)\sigma_1.\]
Hence the dimension vector of $M$ can be recovered from $c_1(\TilTheta_d(M))$ and $\rk(\TilTheta_d(M))$ (cf. Theorem <ref>(3)).
By definition, there is an exact sequence
\[ (0) \lra \cU_{(r,d)} \lra A_r\!\otimes_\KK\! \cO_{\Gr_d(A_r)} \lra \cQ_{(r,d)} \lra (0), \]
so that the Whitney sum formula yields $c_1(\cU_{(r,d)})\!=\!-c_1(\cQ_{(r,d)})\!=\!-\sigma_1$. By the same token, the defining sequence
\[ (0) \lra \widetilde{M}_1 \lra \widetilde{M}_2 \lra \TilTheta_d(M) \lra (0)\]
implies $c_1(\TilTheta_d(M))\!=\!-c_1(\widetilde{M}_1)\!=\!-(\dim_\KK M_1)c_1(\cU_{(r,d)})\!=\!(\dim_\KK M_1)\sigma_1$.
The foregoing result allows us to view the first Chern class $c_1(\cF)$ of a Steiner bundle $\cF$ as a number $c_1(\cF) \in \NN_0$. In the sequel, we will freely use this interpretation without further notice.
We next employ the functor $\TilTheta_d$ to translate some results of Section <ref> to the context of Steiner bundles.
Let $\cF \in \StVect(\Gr_d(A_r))$ be a Steiner bundle.
* We have $\rk(\cF)\!\ge\!\min\{c_1(\cF),d\}(r\!-\!d)$.
* If $c_1(\cF)\!\le\!d$ or $\rk(\cF)\!<\!d(r\!-\!d)$, then $\cF \cong \cO_{\Gr_d(A_r)}^{\rk(\cF)-(r-d)c_1(\cF)}\!\oplus\!\cQ_{(r,d)}^{c_1(\cF)}$.
* If $\rk(\cF)\!=\!d(r\!-\!d)$, then $\cF$ is either as in (2), or $\cF$ is simple.
* If $\rk(\cF)\!=\!d(r\!-\!d)$ and $c_1(\cF)\!\ge\!d\!+\!1$, then $\cF$ is simple.
* Suppose that $\rk(\cF)\!>\!d(r\!-\!d)$. Then there exists a Steiner bundle $\cG$ such that $\rk(\cG)\!=\!d(r\!-\!d)$ and a short exact sequence
\[ (0) \lra \cO_{\Gr_d(A_r)}^{\rk(\cF)-d(r-d)} \lra \cF \lra \cG \lra (0).\]
Theorem <ref> provides $M \in \repp(K_r,d)$ such that $\TilTheta_d(M) \cong \cF$ and $\rk(\cF)\!=\!\Delta_M(d)$. In addition, Corollary <ref> gives $\dim_\KK M_1\!=\!c_1(\cF)$.
(1) This is a direct consequence of Theorem <ref>.
(2) In view of Corollary <ref> and the remark at the beginning of Section <ref>, the representation $M$ is projective, whence $M \cong \Delta_M(r)P_0(r)\!\oplus\!(\dim_\KK M_1)P_1(r)$.
We have $\Delta_M(r)\!=\!\Delta_M(d)\!-\!(r\!-\!d)\dim_\KK M_1 = \rk(\cF)\!-\!(r\!-\!d)c_1(\cF)$. Moreover, $\TilTheta_d(P_0(r))\cong \cO_{\Gr_d(A_r)}$ and since $\TilTheta_{P_1(r),d}(t)\!=\!\sum_{i=1}^r \gamma_i\otimes
\gamma_i^\ast\circ t$ for all $t \in \cU_{(r,d)}(U)$ and $U \subseteq \Gr_d(A_r)$ open, we obtain $\TilTheta_d(P_1(r))\cong \cQ_{(r,d)}$.
(3) In view of Corollary <ref>, $M$ is projective or a brick. In the former case, $\cF$ is as in (2), in the latter case, Theorem <ref> gives $\End_{\Gr_d(A_r)}(\cF) \cong \End_{K_r}(M) \cong \KK$, so that $\cF$
is simple.
(4) This follows as in (3), using Corollary <ref>(2d,2b).
(5) Let $n\!:=\!\Delta_M(d)\!-\!(r\!-\!d)$. Proposition <ref> provides a short exact sequence
\[ (0) \lra nP_0(r) \lra M \lra N \lra (0),\]
where $N \in \repp(K_r,d)$ has minimal type. Observing Corollary <ref>, we may apply $\TilTheta_d$ to obtain the asserted sequence.
Suppose that $\Char(\KK)\!=\!0$.
(1) In this case, part (1) was proved in <cit.>, using Porteous' formula.
(2) For $d\!=\!1$, part (5) was obtained in <cit.>.
§ INDECOMPOSABLE STEINER BUNDLES: KAC'S THEOREM AND THE AUSLANDER-REITEN QUIVER
In this section, we introduce our two main representation-theoretic tools: Kac's Theorem concerning dimension vectors of indecomposable representations of $K_r$ and the Auslander-Reiten quiver $\Gamma(K_r)$ of
$\rep(K_r)$. In combination with the functors $\TilTheta_d$ the latter will be used to endow the set of isoclasses of indecomposable Steiner bundles with the structure of a quiver, whose connected components can be
studied in terms of those of $\Gamma(K_r)$.
§.§ Kac's Theorem
Recall that an indecomposable module $M \in \rep(K_r)$ is called regular, if it is neither preinjective nor preprojective. For future reference it will be convenient to have the following version of <cit.>, which also builds on <cit.>, at our disposal:
[Kac's Theorem for $K_r$] Let $r\!\ge\!2$ and $\delta \in \NN^2_0\!\smallsetminus\!\{0\}$.
* If $\delta\!=\!\udim M$ for some indecomposable $M \in \rep(K_r)$, then $q_r(\delta)\!\le\!1$.
* If $q_r(\delta)\!=\!1$, then there is a unique (up to isomorphism) indecomposable module $M \in \rep(K_r)$ such that $\udim M\!=\!\delta$. The module $M$ is preprojective or preinjective.
* If $q_r(\delta)\!\le\!0$, then every indecomposable module $M \in \rep(K_r)$ such that $\udim M\!=\!\delta$ is regular. Moreover, there are infinitely many isoclasses of these modules.
§.§ Exceptional Steiner bundles
In this section we shall classify exceptional Steiner bundles. Throughout, we assume that $r\!\ge\!2$ and $d \in \{1,\ldots, r\!-\!1\}$. Our first result characterizes the
modules corresponding to exceptional Steiner bundles.
Suppose that $r\!\ge\!3$, and let $\cF \in \StVect(\Gr_d(A_r))$ be a Steiner bundle. Then the following
statements are equivalent:
* $\cF$ is exceptional.
* There is an indecomposable preprojective $M \in \rep(K_r)$ such that $\cF \cong \TilTheta_d(M)$.
(1) $\Rightarrow$ (2). By definition, we have $\End_{\Gr_d(A_r)}(\cF)\cong \KK$, while $\Ext^i_{\Gr_d(A_r)}(\cF,\cF)\!=\!(0)$ for all $i\!\ge\!1$. Theorem <ref> provides $M \in
\repp(K_r,d) \subseteq \EKP(K_r)$ such that $\TilTheta_d(M)\cong \cF$. In particular, $M$ is a brick and hence indecomposable. In view of <cit.>, the module $M$ is not preinjective. Corollary
<ref> yields
\[ \Ext^1_{K_r}(M,M) \cong \Ext^1_{\Gr_d(A_r)}(\TilTheta_d(M),\TilTheta_d(M)) = (0),\]
so that Theorem <ref> in conjunction with the Euler-Ringel form (cf. Section <ref>) ensures that $M$ is preprojective.
(2) $\Rightarrow$ (1). Since $M$ is preprojective, an application of <cit.> in conjunction with Corollary <ref> implies
\[\Ext^1_{\Gr_d(A_r)}(\cF,\cF) \cong \Ext^1_{K_r}(M,M)\!=\!(0).\]
Now Theorem <ref> ensures that $M$ is a brick, so that the $\KK$-space $\End_{\Gr_d(A_r)}(\cF) \cong \End_{K_r}(M)$ is one-dimensional. In view of Lemma <ref>, part (iii) of the proof of Proposition <ref>
yields $\Ext^i_{\Gr_d(A_r)}(\cF,\cF)\!=\!(0)$ for all $i\!\ge\!2$. As a result, the Steiner bundle $\cF$ is exceptional.
Part (3) of our next result extends <cit.>, where the bundles involved were assumed to be generic. Part (5) shows that <cit.> holds for Grassmannians over fields of arbitrary characteristic.
Recall from Section <ref> that $(P_n(r))_{n\ge 0}$ denotes the family of preprojective $K_r$-representations (of dimension vectors $\udim P_n(r)\!=\!(a_n(r),a_{n+1}(r))$).
Suppose that $r\!\ge\!3$. Then the following statements hold:
* For each $n \in \NN_0$ there exists an exceptional Steiner bundle $\cE_{n,d} \in \StVect(\Gr_d(A_r))$ such that $\rk(\cE_{n,d})\!=\! a_{n+1}(r)\! -\!da_n(r)$ and $c_1(\cE_{n,d})\! =\! a_n(r)\sigma_1$.
* If $\cE \in \StVect(\Gr_d(A_r))$ is an exceptional Steiner bundle, then $\cE \cong \cE_{n,d}$ for some $n \in \NN_0$.
* An indecomposable Steiner bundle $\cF \in \StVect(\Gr_d(A_r))$ is exceptional if and only if $\rk(\cF)\!=\!a_{n+1}(r)\!-\!da_n(r)$ and $c_1(\cF)\! =\! a_n(r)\sigma_1$ for some $n \in \NN_0$.
* A non-zero Steiner bundle $\cF \in \StVect(\Gr_d(A_r))$ satisfies $\Ext^1_{\Gr_d(A_r)}(\cF,\cF)\!=\!(0)$ if and only if there exist $(a,b) \in \NN\!\times\!\NN_0$ and $n \in \NN_0$ such that $\cF \cong \cE_{n,d}^a\!
|
Department of Cybernetics,
Faculty of Electrical Engineering,
Czech Technical University in Prague
# Learning to segment from object sizes
Denis Baručić, Jan Kybic
###### Abstract
Deep learning has proved particularly useful for semantic segmentation, a
fundamental image analysis task. However, the standard deep learning methods
need many training images with ground-truth pixel-wise annotations, which are
usually laborious to obtain and, in some cases (e.g., medical images), require
domain expertise. Therefore, instead of pixel-wise annotations, we focus on
image annotations that are significantly easier to acquire but still
informative, namely the size of foreground objects. We define the object size
as the maximum Chebyshev distance between a foreground and the nearest
background pixel. We propose an algorithm for training a deep segmentation
network from a dataset of a few pixel-wise annotated images and many images
with known object sizes. The algorithm minimizes a discrete (non-
differentiable) loss function defined over the object sizes by sampling the
gradient and then using the standard back-propagation algorithm. Experiments
show that the new approach improves the segmentation performance.
###### keywords:
semantic segmentation, weakly-supervised learning, deep learning, distance
transform
Copyright ©2022 for this paper by its authors. Use permitted
under Creative Commons License Attribution 4.0 International (CC BY 4.0).
## 1 Introduction
Semantic segmentation is the process of associating a class label to each
pixel of an image. With the advent of deep learning, deep networks have
achieved incredible performance on many image processing tasks, including
semantic segmentation. Deep learning for semantic segmentation has many
benefits; for example, it is flexible w.r.t. the model architecture and scales
particularly well [5, 6]. On the contrary, the standard deep learning demands
many ground-truth (GT) pixel-wise annotations to prevent overfitting. Since a
human expert annotator must usually provide the GT annotations, acquiring a
good-quality training dataset can be difficult. To combat this issue, we focus
on learning from GT image annotations that are easier to produce but still
informative enough, namely the sizes of foreground objects. In practice, our
approach assumes a training dataset that consists of relatively few pixel-wise
annotated images and many images with known object sizes. We present a work-
in-progress solution.
### 1.1 Proposed approach
Suppose a standard convolutional network for image segmentation (e.g., a U-Net
[10]). Given an input image, we feed it to the network and collect the output
prediction. The prediction is then thresholded to obtain a binary mask, which
is processed by a distance transform, assigning to each foreground pixel the
shortest distance to the background. Finally, the object size is defined as
double the maximum of the computed distances.
Due to the thresholding, the cost function is not differentiable and it is
therefore not possible to use the standard gradient descent for learning. We
overcome this obstacle by adding random noise to the output of our network.
The predicted binary masks then become stochastic and the gradient can be
sampled. A detailed description of our method is given later in Sec. 2 and 3.
### 1.2 Related work
Cano-Espinosa et al. [1] considered a similar learning problem. They proposed
a network architecture that performs a biomarker (fat contents) regression and
image segmentation after being trained directly on images annotated by
biomarker values only. Similarly to ours, their method derives the biomarker
value from the predicted segmentation deterministically. The difference is
that their biomarker, equivalent to the foreground area, can be obtained by a
simple summation. Furthermore, the method assumes that the foreground objects
can be roughly segmented using thresholding. Pérez-Pelegrí et al. [7] took a
similar approach. Although their method does not involve thresholding to
produce approximate segmentation, it was tailored explicitly for learning from
images annotated by the foreground volume (as their images are 3D).
Karam et al. [4] implemented a differentiable distance transform via a
combination of the convolution operations. The method is fast but exhibits
numerical instabilities for bigger images. Resolving the numerical
instabilities, Pham et al. [8] later proposed a cascaded procedure with
locally restricted convolutional distance transforms. Nonetheless, both
methods substitute the minimum function with the log-sum-exp operation, which
leads to inaccurate results.
The way our method deals with a non-differentiable cost function is borrowed
from stochastic binary networks [9]. In a stochastic binary network, one needs
to deal with zero gradient after each layer of the network. However, methods
such as ARM [13] or PSA [11] are unnecessarily complex. Instead, we employ a
single sample estimation, which has been discussed in [2].
## 2 Model
The proposed model consists of (1) a segmentation network, $f_{\bm{\theta}}$,
parametrized by $\bm{\theta}$, and (2) a deterministic algorithm to derive the
object size based on distance transform, denoted as $g$.
Given an input image $\bm{x}=(x_{1},\ldots,x_{V})$, the network produces a
pixel-wise segmentation
$\bm{a}=f_{\bm{\theta}}(\bm{x}),$ (1)
such that $a_{i}\in{\mathbb{R}},\,1\leq i\leq V$, where $V$ is the number of
pixels. The method does not make any assumptions about the network’s technical
details, except that it can be trained using the standard back-propagation
algorithm and gradient descent. In our experiments, we always employed a U-Net
[10] with a residual network encoder [3] and a mirroring decoder.
To obtain a binary mask $\bm{\hat{y}}\in\\{\pm 1\\}^{V}$, the network response
$\bm{a}$ is thresholded,
$\hat{y}_{i}=\operatorname{sign}a_{i}.$ (2)
### 2.1 Object size
We use a distance transform of the binary mask to define the object size (see
Fig. 1). Distance transform assigns to each pixel the shortest distance to the
background, i.e.,
$d_{i}=\min_{j,\hat{y}_{j}=-1}\delta(i,j),\quad i=1,\ldots,V,$ (3)
where $\delta(i,j)$ is the Chebyshev $\ell_{\infty}$ distance. After that, we
take double the maximum distance to define the object size,
$\hat{s}=2\,\max_{i}\,d_{i}.$ (4)
The composition of the distance transform and the maximum aggregation is the
object size, denoted as $g\colon\\{\pm 1\\}^{V}\to\mathbb{R}$,
$g(\hat{\bm{y}})=2\,\max_{i}\min_{j,\hat{y}_{j}=-1}\delta(i,j).$ (5)
Figure 1: Illustrative example of an object and its derived size. The object
is outlined by the thick boundary line. The point $i$ denotes the foreground
pixel whose shortest distance to the background, $d_{i}$, is the highest among
the pixels. The derived object size is $\hat{s}=2d_{i}$.
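The size derivation (3)–(5) can be sketched with an off-the-shelf Chebyshev distance transform (SciPy's `distance_transform_cdt` with the 'chessboard' metric). This is an illustrative re-implementation, not the authors' PyTorch code:

```python
import numpy as np
from scipy.ndimage import distance_transform_cdt

def object_size(mask):
    """Object size (5): twice the largest Chebyshev distance from a
    foreground pixel to the nearest background pixel."""
    # 'chessboard' is the Chebyshev (l-infinity) chamfer metric;
    # background pixels (zeros) receive distance 0.
    d = distance_transform_cdt(mask.astype(np.uint8), metric='chessboard')
    return 2 * int(d.max())

mask = np.zeros((8, 8), dtype=bool)
mask[2:7, 2:7] = True        # a 5x5 square object
print(object_size(mask))     # centre pixel is 3 pixels from background -> 6
```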
#### 2.1.1 Implementation details
There is an efficient, two-pass algorithm that computes the distance transform
in $\Theta(V)$ time. Furthermore, when evaluating a batch of images, it is
possible to compute the distance transform on all images in parallel.
We have implemented a CPU version (https://github.com/barucden/chdt) of this
algorithm that works with PyTorch tensors and is faster than, e.g., the SciPy
implementation.
Figure 2: An overview of the proposed probabilistic model.
## 3 Learning
Suppose a training dataset $\mathcal{D}=\mathcal{D}_{f}\cup\mathcal{D}_{w}$
consists of fully- and weakly-annotated subsets $\mathcal{D}_{f}$ and
$\mathcal{D}_{w}$. The fully-annotated subset $\mathcal{D}_{f}$ contains pairs
$(\bm{x},\bm{y})$, where $\bm{x}$ is an input image and $\bm{y}$ the
corresponding GT pixel-wise segmentation, while $\mathcal{D}_{w}$ comprises of
pairs $(\bm{x},s)$, where $s$ is the size of the object present in the image
$\bm{x}$. We focus on situations when
$\lvert\mathcal{D}_{f}\rvert\ll\lvert\mathcal{D}_{w}\rvert$.
### 3.1 Supervised pre-training
Our method starts by optimizing a pixel-wise loss w.r.t. the network
parameters $\bm{\theta}$ on the small subset $\mathcal{D}_{f}$, as in the
standard supervised learning. For a particular training pair
$(\bm{x},\bm{y})\in\mathcal{D}_{f}$ and the corresponding prediction
$\bm{a}\in\mathbb{R}^{V}$, the loss function reads
$\sum_{i=1}^{V}\left(a_{i}(1-y_{i})+\log(1+\exp(-a_{i}))\right),$ (6)
which is sometimes referred to as the binary cross-entropy with logits loss.
The optimization continues until convergence.
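As a sanity check (ours, not from the paper), expression (6) coincides with the textbook binary cross-entropy evaluated at the sigmoid of the logit $a_i$, reading the GT label $y_i$ in $\{0,1\}$. A minimal NumPy sketch:

```python
import numpy as np

def loss_paper(a, y):
    """Pixel-wise loss (6): a*(1-y) + log(1 + exp(-a)), with y in {0,1}."""
    return a * (1 - y) + np.log1p(np.exp(-a))

def bce_with_logits(a, y):
    """Textbook binary cross-entropy evaluated at p = sigmoid(a)."""
    p = 1.0 / (1.0 + np.exp(-a))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

a = np.linspace(-3.0, 3.0, 7)
for y in (0.0, 1.0):
    # Identical up to floating-point error, for both label values.
    assert np.allclose(loss_paper(a, y), bce_with_logits(a, y))
```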
With proper data augmentation extending the training dataset, the network
tends to learn useful features and produces decent predictions after this
initial stage (see Sec. 4.2).
### 3.2 Weakly-supervised training
Consider a training pair $(\bm{x},s)\in\mathcal{D}_{w}$. As described in Sec.
2, one can obtain a prediction of the object size, $\hat{s}=g(\hat{\bm{y}})$,
from the thresholded network response $\hat{\bm{y}}$. We penalize the
prediction error by the square loss
$l(s,\hat{s})=(s-\hat{s})^{2}.$ (7)
We propose to follow an approach similar to those used in binary neural
networks [11] and subtract random noise $Z$ from the real predictions $a_{i}$
before thresholding. Consequently, the binary segmentation becomes a
collection $\bm{Y}=(Y_{1},\ldots,Y_{V})$ of $V$ independent Bernoulli
variables,
$Y_{i}=\operatorname{sign}(a_{i}-Z),$ (8)
with
$\Pr(Y_{i}=+1\mid\bm{x};\bm{\theta})=\Pr(Z\leq a_{i})=F_{Z}(a_{i}),$ (9)
where $F_{Z}$ is the cumulative distribution function (CDF) of the noise $Z$
(see Fig. 2).
Then, instead of minimizing the loss $l$ (7), we minimize the expected loss
$\mathcal{L}=\operatorname{\mathbb{E}}_{\bm{Y}}[l(s,g(\bm{Y}))]$,
$\mathcal{L}=\sum_{\bm{y}\in\\{\pm
1\\}^{V}}\Pr(\bm{Y}=\bm{y}\mid\bm{x};\bm{\theta})l(s,g(\bm{y})).$ (10)
Contrary to (7), the expected loss (10) is differentiable, assuming a smooth
$F_{Z}$.
#### 3.2.1 Noise distribution
Following [11], we sample the noise $Z$ from the standard logistic
distribution (mean $0$, unit scale). Hence, the CDF of $Z$ is the smooth
sigmoid function,
$F_{Z}(a)=\frac{1}{1+\exp(-a)}.$ (11)
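A minimal sketch (ours) of this stochastic thresholding: subtracting logistic noise from the logits and taking the sign yields independent $\{\pm 1\}$ Bernoulli pixels with success probability $F_{Z}(a_{i})$, which the empirical frequencies confirm:

```python
import numpy as np

def sample_mask(a, rng):
    """Stochastic thresholding (8): Y_i = sign(a_i - Z_i), Z_i ~ logistic(0, 1)."""
    z = rng.logistic(loc=0.0, scale=1.0, size=a.shape)
    return np.where(a - z >= 0, 1, -1)

rng = np.random.default_rng(0)
a = np.array([[-2.0, 0.0, 2.0]])        # three pixels with different logits
samples = [sample_mask(a, rng) for _ in range(20000)]
freq = np.mean([s == 1 for s in samples], axis=0)
sigmoid = 1.0 / (1.0 + np.exp(-a))      # F_Z(a) for standard logistic noise
print(freq)                             # roughly [[0.12 0.5 0.88]], matching sigmoid(a)
assert np.allclose(freq, sigmoid, atol=0.02)
```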
#### 3.2.2 Exact gradient
To compute the gradient $\nabla_{\bm{\theta}}\mathcal{L}$, we need to evaluate
the derivative
$\frac{\partial\operatorname{\mathbb{E}}_{\bm{Y}}[l(s,g(\bm{Y}))]}{\partial
F_{Z}(a_{i})}$ (12)
for each pixel $i=1,\ldots,V$. The gradient can be then computed automatically
by the back-propagation algorithm. However, an exact computation of (12) leads
to
$\sum_{\bm{y}\in\\{\pm
1\\}^{V}}\frac{\Pr(\bm{Y}=\bm{y}\mid\bm{x};\bm{\theta})}{\Pr(Y_{i}=y_{i}\mid\bm{x};\bm{\theta})}l(s,g(\bm{y}))y_{i},$
(13)
which involves summing $2^{V}$ terms and is thus tractable only for very small
images. Instead, we resort to a single sample estimator.
#### 3.2.3 Single sample estimator
The single sample estimator is based on Lemma 3.1, which is, in fact, a
specific form of [11, Lemma B.1].
###### Lemma 3.1.
Let $\bm{Y}=(Y_{1},\ldots,Y_{V})$ be a collection of $V$ independent $\\{\pm
1\\}$-valued Bernoulli variables with probabilities $\Pr(Y_{i}=+1)=p_{i}$. Let
$h$ be a function $h\colon\\{\pm 1\\}^{V}\to\mathbb{R}$. Let
$\bm{y}=(y_{1},\ldots,y_{V})$ denote a random sample of $\bm{Y}$ and
$\bm{y}_{\downarrow i}=(y_{1},\ldots,y_{i-1},-y_{i},y_{i+1},\ldots,y_{V})$.
Then
$y_{i}\left(h(\bm{y})-h(\bm{y}_{\downarrow i})\right)$ (14)
is an unbiased estimate of $\frac{\partial}{\partial
p_{i}}\operatorname{\mathbb{E}}_{\bm{y}\sim\bm{Y}}[h(\bm{y})]$.
###### Proof 3.2.
We take the derivative of the expectation,
$\frac{\partial}{\partial
p_{i}}\operatorname{\mathbb{E}}_{\bm{y}\sim\bm{Y}}[h(\bm{y})]=\sum_{\bm{y}}\frac{\Pr(\bm{y})}{\Pr(y_{i})}h(\bm{y})y_{i},$
(15)
and write out the sum over $y_{i}$,
$\sum_{\bm{y}_{\neg i}}\sum_{y_{i}}\Pr(\bm{y}_{\neg
i})h(\bm{y})y_{i}=\sum_{\bm{y}_{\neg i}}\Pr(\bm{y}_{\neg
i})\sum_{y_{i}}h(\bm{y})y_{i}$ (16)
where $\bm{y}_{\neg i}$ denotes vector $\bm{y}$ with the $i$-th component
omitted. Notice that the inner sum simplifies and no longer depends on
$y_{i}$,
$\sum_{\bm{y}_{\neg i}}\Pr(\bm{y}_{\neg
i})(h(\bm{y}_{i=+1})-h(\bm{y}_{i=-1})),$ (17)
where $\bm{y}_{i=z}$ is the vector $\bm{y}$ with the $i$-th component set to
$z$. Then, we multiply the inner subtraction by the constant factor
$1=p_{i}+(1-p_{i})=\sum_{y_{i}}\Pr(y_{i})$,
$\sum_{\bm{y}_{\neg i}}\Pr(\bm{y}_{\neg
i})\sum_{y_{i}}\Pr(y_{i})(h(\bm{y}_{i=+1})-h(\bm{y}_{i=-1})),$ (18)
ultimately leading to the following expression for (15):
$\sum_{\bm{y}}\Pr(\bm{y})(h(\bm{y}_{i=+1})-h(\bm{y}_{i=-1})),$ (19)
which can be written as
$\sum_{\bm{y}}\Pr(\bm{y})y_{i}\left[h(\bm{y})-h(\bm{y}_{\downarrow
i})\right].$ (20)
Thus, (14) is a single sample unbiased estimate of (15).
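Lemma 3.1 can be checked numerically by brute-force enumeration for a tiny $V$ (the function names below are ours): the expected value of the estimate (14) is compared against a finite-difference derivative of $\operatorname{\mathbb{E}}[h(\bm{Y})]$.

```python
import itertools
import numpy as np

def expectation(h, p):
    """E[h(Y)] for independent {+1,-1} Bernoulli Y with Pr(Y_i = +1) = p_i,
    by enumerating all 2^V outcomes (feasible only for tiny V)."""
    V = len(p)
    total = 0.0
    for y in itertools.product([1, -1], repeat=V):
        pr = np.prod([p[i] if y[i] == 1 else 1 - p[i] for i in range(V)])
        total += pr * h(np.array(y))
    return total

def estimator_mean(h, p, i):
    """Expected value of the estimate (14): E[ y_i * (h(y) - h(y with i flipped)) ]."""
    def flipped_diff(y):
        yf = y.copy()
        yf[i] = -yf[i]
        return y[i] * (h(y) - h(yf))
    return expectation(flipped_diff, p)

p = np.array([0.3, 0.6, 0.8])
h = lambda y: float(y[0] + 2 * y[1] * y[2])   # an arbitrary test function
eps = 1e-6
for i in range(3):
    p_hi, p_lo = p.copy(), p.copy()
    p_hi[i] += eps
    p_lo[i] -= eps
    deriv = (expectation(h, p_hi) - expectation(h, p_lo)) / (2 * eps)
    # Lemma 3.1: the estimator is unbiased for d/dp_i E[h(Y)].
    assert abs(estimator_mean(h, p, i) - deriv) < 1e-4
```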
Figure 3: Examples of derivatives (12) computed according to (21) for
different numbers of samples $n\in\\{1,8,64,512\\}$, given the output of
$F_{Z}$, for a small, $6\times 6$ image. The red frame outlines the object.
According to Lemma 3.1, an unbiased estimate of the derivative (12) is
$\frac{\partial\operatorname{\mathbb{E}}_{Y}[l(s,g(Y))]}{\partial
F_{Z}(a_{i})}\approx y_{i}\left[l(s,g(\bm{y}))-l(s,g(\bm{y}_{\downarrow
i}))\right],$ (21)
where $\bm{y}$ is a random sample of Bernoulli variables with probabilities
(9) (see a few examples of sampled derivatives in Fig. 3).
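A direct (and deliberately naive) NumPy sketch of the estimator (21): sample one mask, then flip each pixel in turn and record the induced change of the size loss. The function names and the degenerate-case convention in `g` are ours:

```python
import numpy as np
from scipy.ndimage import distance_transform_cdt

def g(mask):
    """Object size (5) of a {+1,-1} mask."""
    fg = (mask == 1).astype(np.uint8)
    if fg.all():                  # no background pixel: cap at image extent (a convention)
        return 2 * max(fg.shape)
    d = distance_transform_cdt(fg, metric='chessboard')
    return 2 * int(d.max())

def sampled_derivatives(a, s, rng):
    """Single-sample estimate (21) of dE[l(s, g(Y))]/dF_Z(a_i) for every pixel i."""
    z = rng.logistic(size=a.shape)
    y = np.where(a - z >= 0, 1, -1)      # one sample of the stochastic mask
    base = (s - g(y)) ** 2               # l(s, g(y))
    grad = np.zeros(a.shape)
    for i in np.ndindex(a.shape):
        y_flip = y.copy()
        y_flip[i] = -y_flip[i]           # flip the prediction at pixel i
        grad[i] = y[i] * (base - (s - g(y_flip)) ** 2)
    return grad

rng = np.random.default_rng(1)
a = rng.normal(size=(6, 6))              # fake network logits
grad = sampled_derivatives(a, s=4, rng=rng)
```

Flipping a pixel far from the object leaves $g$ unchanged, so most entries of `grad` are zero; exploiting this sparsity is exactly the speed-up suggested in Sec. 5.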
## 4 Experiments
The proposed method was implemented in the PyTorch Lightning framework
(https://github.com/Lightning-AI/lightning) using a ResNet implementation
from the Segmentation Models PyTorch library
(https://github.com/qubvel/segmentation_models.pytorch). The presented
experiments were performed on a server equipped with an Intel Xeon Silver
4214R (2.40GHz) and an NVIDIA GeForce RTX 2080 Ti.
Figure 4: Example of a hippocampus image [12] with the object outlined in red.
The data for our experiments was based on a dataset of 3D MRI images of the
hippocampus [12]. The dataset consists of 394 volumes provided with GT
segmentation of classes hippocampus head, hippocampus body, and background. We
decomposed the volumes into individual 2D slices of size $48\times 32$ pixels
and kept only those with at least 1% foreground, obtaining a total of 6093
images. Next, we merged the hippocampus classes to get a binary segmentation
problem (see Fig. 4). Afterward, we derived the object sizes from the GT
pixel-wise annotations to use in training. Finally, we randomly split the data
into training, validation, and testing subsets containing 70%, 10%, and 20% of
the images.
Given a GT segmentation $\bm{y}$ and a predicted segmentation $\hat{\bm{y}}$,
we evaluate two metrics, the squared size prediction error $E$ and the
intersection-over-union $IoU$,
$\displaystyle E(\bm{y},\hat{\bm{y}})=l(g(\bm{y}),g(\hat{\bm{y}})),$ (22)

$\displaystyle IoU(\bm{y},\hat{\bm{y}})=\frac{\sum_{i=1}^{V}(1+y_{i}+\hat{y}_{i}+y_{i}\hat{y}_{i})}{\sum_{i=1}^{V}(3+y_{i}+\hat{y}_{i}-y_{i}\hat{y}_{i})}.$ (23)
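A quick check (ours) that (23), written directly in $\{\pm 1\}$ labels, agrees with the usual intersection-over-union of the boolean masks:

```python
import numpy as np

def iou_pm1(y, yhat):
    """IoU (23) written directly in {+1,-1} labels."""
    num = np.sum(1 + y + yhat + y * yhat)   # 4 x |intersection|
    den = np.sum(3 + y + yhat - y * yhat)   # 4 x |union|
    return num / den

y    = np.array([ 1,  1, -1, -1,  1])
yhat = np.array([ 1, -1, -1,  1,  1])
inter = np.sum((y == 1) & (yhat == 1))      # = 2
union = np.sum((y == 1) | (yhat == 1))      # = 4
assert np.isclose(iou_pm1(y, yhat), inter / union)   # both give 0.5
```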
In the case of the standard supervised method, vertical and horizontal flipping
was randomly applied to augment the training dataset. The proposed method did
not apply any augmentation.
### 4.1 Number of derivative samples
Figure 5: Average epoch duration for the proposed method with different
numbers of gradient samples. The duration of the standard method is given as a
reference.
Figure 6: Development of the squared size prediction error $E$ and the
intersection-over-union $IoU$ on the validation images over the course of
learning for different numbers of derivative samples $n$.
A toy example (see Fig. 3) indicated that taking more samples of the
derivatives (21) might lead to better results than taking just one. This
experiment investigates how the number of derivative samples $n$ impacts
learning speed and prediction quality.
We considered four different numbers of samples $n$, $n\in\\{1,2,4,8\\}$. For
each $n$, the other parameters (such as the batch size or the learning rate)
were the same, and the learning began with the same segmentation network
$f_{\bm{\theta}}$ that was pre-trained in the standard way on $85$ pixel-wise
annotated images from the training subset. The proposed method always ran
until the squared error $E$ on the validation data stopped improving.
To assess the learning speed, we measured the duration of one learning epoch.
For $n=1$, an epoch took $\approx 10\times$ longer than the standard
supervised learning. Generally, the duration grew roughly exponentially with
$n$ (see Fig. 5).
Higher values of $n$ did not lead to a lower $E$ or a faster convergence speed
(see Fig. 6). In fact, $n=1$ and $n=2$ achieved the lowest $E$, but not by a
large margin. Given the speed benefits, we use $n=1$ always. Interestingly,
even though $E$ kept decreasing over the course of learning for all $n$, $IoU$
improved only slightly and started declining after $\approx 20$ epochs. This
observation suggests that the squared error of the object size is not a
sufficient objective for learning the segmentation.
### 4.2 Pre-training impact
This experiment tests the essential question: given a segmentation model
trained on a few pixel-level annotated images, can we improve its testing
performance by further learning from size annotations?
We trained different segmentation networks until convergence on randomly
selected training subsets of size $m$. Then, we fine-tuned these networks on
the whole training dataset using the proposed method. We measured the test
performance in terms of $IoU$.
The proposed method led to a $\approx 5\%$ increase of $IoU$ for small $m<100$
(see Fig. 7), improving the segmentation quality. For higher $m$, the effect
was negligible, which complements the observation from the previous experiment
that improving the size estimate does not necessarily improve the segmentation
quality.
Figure 7: $IoU$ on the test data for different sizes $m$ of the pre-training
dataset. The plot shows results achieved by a network after pre-training and
after subsequent fine-tuning by the proposed method.
## 5 Discussion
The method is promising but there is definitely potential for improvement in
both speed and prediction performance.
The proposed method samples the derivatives according to (21) for each pixel
$i$. Flipping the prediction, $y_{i}\mapsto-y_{i}$, changes the derived size
only for some $i$; particularly those within and on the border of the
predicted object. Therefore, given a sample $\bm{y}$,
$l(s,g(\bm{y}))=l(s,g(\bm{y}_{\downarrow i}))$ for many pixels $i$, and the
sampled derivatives (21) are sparse. The method might sample only those
derivatives that are potentially non-zero and set the rest to zero directly,
which would save much computational time.
We have seen in the experiments that lower size prediction error does not
strictly imply better segmentation. We need to closely investigate in what
cases the size prediction loss is insufficient and adjust the objective. The
adjustment might involve adding an L1 regularization (as in [1]) or drawing
inspiration from unsupervised methods (e.g., demand for the segmentation to
respect edges in images, etc.).
The proposed approach entails some principled limitations. For example, it
allows only a single object in an image. We also expect the method to be ill-
suited for complex object shapes, but we have not performed any experiments in
that regard yet.
## 6 Conclusion
We proposed a weakly-supervised method for training a segmentation network
from a few pixel-wise annotated images and many images annotated by the object
size. The key ingredients are a method for evaluating the object size from a
probabilistic segmentation and a method for optimizing a deep network using a
non-differentiable objective.
The achieved results seem promising. We believe the improvements suggested in
the discussion will improve performance, rendering the method valuable for
training segmentation models for biomedical images.
### Acknowledgments
The authors acknowledge the support of the OP VVV funded project
“CZ.02.1.01/0.0/0.0/16_019/0000765 Research Center for Informatics”, the Czech
Science Foundation project 20-08452S, and the Grant Agency of the Czech
Technical University in Prague, grant No. SGS20/170/OHK3/3T/13.
## References
* [1] C. Cano-Espinosa et al. Biomarker localization from deep learning regression networks. IEEE Transactions on Medical Imaging, 39(6):2121–2132, 2020.
* [2] Y. Cong, M. Zhao, K. Bai, and L. Carin. GO gradient for expectation-based objectives. In 7th International Conference on Learning Representations, 2019.
* [3] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 770–778, 2016.
* [4] C. Karam, K. Sugimoto, and K. Hirakawa. Fast convolutional distance transform. IEEE Signal Processing Letters, 26(6):853–857, 2019.
* [5] X. Liu et al. A review of deep-learning-based medical image segmentation methods. Sustainability, 13(3):1224, 2021.
  * [6] S. Minaee et al. Image segmentation using deep learning: A survey. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
* [7] M. Pérez-Pelegrí et al. Automatic left ventricle volume calculation with explainability through a deep learning weak-supervision methodology. Computer Methods and Programs in Biomedicine, 208:106275, 2021.
* [8] D. D. Pham, G. Dovletov, and J. Pauli. A differentiable convolutional distance transform layer for improved image segmentation. In DAGM German Conference on Pattern Recognition, pages 432–444. Springer, 2020.
  * [9] T. Raiko, M. Berglund, G. Alain, and L. Dinh. Techniques for learning binary stochastic feedforward neural networks. In 3rd International Conference on Learning Representations, 2015.
* [10] O. Ronneberger, P. Fischer, and T. Brox. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention, pages 234–241. Springer, 2015.
* [11] A. Shekhovtsov, V. Yanush, and B. Flach. Path sample-analytic gradient estimators for stochastic binary networks. Advances in Neural Information Processing Systems, 33:12884–12894, 2020.
  * [12] A. L. Simpson et al. A large annotated medical image dataset for the development and evaluation of segmentation algorithms. arXiv preprint arXiv:1902.09063, 2019.
  * [13] M. Yin and M. Zhou. ARM: augment-REINFORCE-merge gradient for stochastic binary networks. In 7th International Conference on Learning Representations, 2019.
# Adaptive Partially-Observed Sequential Change Detection and Isolation
Xinyu Zhao1, Jiuyun Hu1, Yajun Mei2, Hao Yan1
1School of Computing and Augmented Intelligence
Arizona State University,
2School of Industrial and Systems Engineering,
Georgia Institute of Technology
The authors gratefully acknowledge the support from NSF DMS 1830363 and CMMI 1922739.
###### Abstract
High-dimensional data has become prevalent due to the easy accessibility of sensors in modern industrial applications. However, one specific challenge is
that it is often not easy to obtain complete measurements due to limited
sensing powers and resource constraints. Furthermore, distinct failure
patterns may exist in the systems, and it is necessary to identify the true
failure pattern. This work focuses on the online adaptive monitoring of high-
dimensional data in resource-constrained environments with multiple potential
failure modes. To achieve this, we propose to apply the Shiryaev–Roberts
procedure on the failure mode level and utilize the multi-arm bandit to
balance exploration and exploitation. We further discuss the theoretical properties of the proposed algorithm to show that the proposed method can correctly isolate the failure mode. Finally, extensive simulations and two
case studies demonstrate that the change point detection performance and the
failure mode isolation accuracy can be greatly improved.
Keywords: Shiryaev–Roberts procedure, multi-arm bandit, sequential change-
point detection, adaptive sampling, multiple failure modes
## 1 Introduction
Nowadays, most industrial applications are instrumented with hundreds or
thousands of sensors due to the advancement in sensing technology. Real-time
process monitoring and fault diagnosis are among the benefits that can be
gained from effective modeling and analysis of the produced high-dimensional
streaming data. Classical research on process monitoring of high-dimensional streaming data focuses on a fully observable process, which means that at each sampling time point, all the variables can be observed for analysis (Yan et al., 2018). However, it is often infeasible to acquire measurements of all
these sensing variables in real time due to limited sensing resources, sensing
capacity, sensor battery, or other constraints such as system transmission
bandwidth, memory, storage space, and processing speed in modern industrial
applications (Liu et al., 2015). Furthermore, under the change detection and isolation setting, we assume that the engineered systems being studied have several distinct failure modes and patterns, but we do not know beforehand which failure mode may occur. Overall, this paper focuses on change point
detection under resource-constrained environments with multiple potential
failure modes.
The first motivating example is in the hot forming process (Li and Jin, 2010)
as shown in Fig. 1(a). There are five sensing variables in the system: the
final dimension of workpiece ${\bf X}_{1}$, the tension in workpiece ${\bf
X}_{2}$, material flow stress ${\bf X}_{3}$, temperature ${\bf X}_{4}$, and
blank holding force ${\bf X}_{5}$. These five variables can be represented as
a Bayesian network, as shown in Fig. 1(a). For example, suppose we know that changes in ${\bf X}_{4}$ and ${\bf X}_{5}$ are the two major failure sources in the system. If ${\bf X}_{4}$ changes, $({\bf X}_{1},{\bf X}_{2},{\bf X}_{3},{\bf X}_{4})$ will also change; if ${\bf X}_{5}$ changes, only $({\bf X}_{1},{\bf X}_{2},{\bf X}_{5})$ will change. Therefore, different failure modes may affect different subsets of sensors differently.
(a) Hot forming process
(b) 3D printing example
Figure 1: Examples of complex data in various industrial applications. The left figure shows an example of a hot forming process; the right figure shows an example of monitoring the thermal images in additive manufacturing.
Another example comes from in-situ hot-spot detection in the laser powder bed fusion (LPBF) process in metal additive manufacturing. A thermal
camera is often used to monitor the stability of the process while the product
is being produced on a layer-by-layer basis. Here, detecting the hot-spots
early is crucial for further product quality control. Fig. 1(b) shows an
example of such hot-spots from the thermal camera. Given that the anomaly or
hot-spots can only occur on the edge/corner of the scanning path, multiple
failure modes can be defined.
There are a few challenges of sequential change-point detection under the
sampling constraint: 1) From the previous examples, the failure mode
distribution can be quite complicated. For example, in the hot forming process, as shown in Fig. 1(a), we aim to detect a failure mode with weak conditional dependencies on the graph; in the laser powder bed fusion
process, as shown in Fig. 1(b), we aim to detect the spatially clustered hot-
spots. 2) Even though we assume that we have prior knowledge of different
potential failure modes, we do not know which failure mode may occur in the
system. The main challenge is to balance the exploration of all potential
failure modes and the exploitation to focus on the most probable failure mode.
A conceptual illustration of the proposed algorithm is provided in Fig. 2. The
illustration shows an example in which sampling is performed on a 2D spatial domain. The sampling patterns at time $t_{1}$ and
$t_{2}$ focus on exploration for all failure modes and the sampling patterns
at $t_{3},\cdots,t_{n}$ focus on exploitation for failure mode 3. In general,
it is hard to decide when the algorithms should change to exploitation or
which failure mode they should focus on. Finally, given that the multiple
failure modes have quite complex shapes and distributions, the exploration and
exploitation among these modes are often quite challenging.
Figure 2: Conceptual Illustration of the Balance of Exploration and
Exploitation; The sampling patterns at $t_{1},t_{2}$ focus on the exploration
of all failure modes. The sampling pattern at $t_{3},\cdots,t_{n}$ focuses on the exploitation of failure mode 3.
There are also many works focusing on change-point detection under resource constraints. Most of the existing works are based on the “local monitoring and global decision” framework, which focuses on monitoring each
data stream independently using local monitoring statistics and then fusing
these local monitoring statistics together via a global decision framework.
For example, Liu et al. (2015) proposed scalable and efficient algorithms for
adaptive sampling for online monitoring. The method introduced a compensation
parameter for the unobserved variables to increase the chance of exploring
them. Recently, Zhang and Mei (2020) proposed to combine the powerful tools of
the multi-arm bandit problem for efficient real-time monitoring of HD
streaming data. However, these works either assume the data streams are independent or cannot take advantage of the failure mode information in some systems, and thus fail to monitor and identify the correct failure pattern. For
a complete literature review of monitoring of high-dimensional streaming data,
please see Section 2.
To generalize the sequential change-point detection framework to both detect and identify the correct failure mode, a line of change detection and isolation literature has been developed, which also inspires this research. Change-point detection and isolation methods often assume that there is
a set of pre-defined post-change distributions. The goal is not only to detect
the change with the shortest detection delay but also to identify which change
mode occurs in the system. For example, Chen et al. (2020) proposed a Bayesian procedure to identify both the change point and the correct change mode. For a complete literature review of change detection
and isolation, please see Section 4.3. However, these works typically assume
that the data is fully observed, which cannot be applied to partially observed
data.
To address the challenge of multiple failure modes and partially observed
data, we propose a novel Multiple Thompson Sampling Shiryaev-Roberts-Pollak
(MTSSRP) method based on a modified “local monitoring and global decision” framework. To the best of our knowledge, this is the first work that discusses an adaptive sampling framework for failure mode detection and isolation.
Unlike the literature on the monitoring of HD streaming data, where the local
monitoring statistics are defined at each individual sensor, we propose to
define the local statistics for each individual failure mode. This enables the
proposed MTSSRP to take advantage of the failure mode information, which is
very important in the high-dimensional space, given that failure mode
information can significantly reduce the search space since there are
unlimited ways that change may occur in the high-dimensional space. To
quantify the uncertainty of unobserved sensing variables for different failure
modes, we propose to apply the Shiryaev-Roberts (SR) procedure for sequential
change point detection on the failure mode level.
Furthermore, to balance exploration and exploitation, we will borrow ideas from the multi-arm bandit (MAB) literature. MAB methods aim to sequentially allocate a limited set of resources among competing “arms” to maximize the expected gain, where the reward function for each arm is not known. MAB provides a principled
way to balance exploration and exploitation. To apply MAB for change point
detection under the sampling constraint, we propose to use the SR statistics
of the selected failure modes as the reward function in the Multi-arm Bandit
(MAB) problem (Zhang and Mei, 2020). However, different from Zhang and Mei (2020), the selection of arms is at the sensor level, whereas the SR statistics are defined at the failure-mode level. For high-dimensional data, specifying the joint distribution can be very challenging. Therefore, this paper will explore spatial structures for defining the failure mode distributions, as shown in Section 4.7. This paper also discusses that, under an independence assumption on the variables, the computational efficiency can be greatly improved.
The paper is organized as follows. In Section 2, we will review the existing
literature on change detection and isolation. We will also discuss works on
process monitoring with resource constraints. We further introduce our
proposed method and then discuss its properties in Section 3. Then, we apply the proposed approach to simulated data, evaluate its performance, and compare it with existing methods in Section 5. Furthermore, we apply the proposed method to two real cases in Section 6. Concluding remarks are given in Section 7.
## 2 Literature Review
In this section, we will provide a more detailed review of statistical process
control or sequential change point detection methods. We will briefly classify the methods into the following four categories: monitoring of independent HD streaming data, monitoring of functional data or profile monitoring, process monitoring under resource constraints, and change-point detection and isolation.
In the first category, monitoring the HD streaming data has often been treated
as monitoring the multiple independent univariate data streams. There are two
distinct frameworks for monitoring independent data streams in recent years.
First, the “global monitoring” framework focuses on directly designing the
global monitoring statistics for the process monitoring or change point
detection for high-dimensional data (Xie and Siegmund, 2013; Wang and Mei,
2015; Cho and Fryzlewicz, 2015; Chan et al., 2017). However, the global
monitoring framework is typically computationally inefficient for high-
dimensional data. Second, the “local monitoring and global decision” framework focuses on monitoring each data stream independently using local monitoring
statistics and then fusing these monitoring statistics together via the global
statistics (Mei, 2010, 2011). The benefit is that these methods are typically
computationally efficient and can be scalable to high-dimensional data.
However, these methods are often limited to independent data streams. Finally, this framework targets the case of fully observed data, which may not be applicable under resource constraints.
In the second category, profile monitoring techniques have been proposed to
tackle the complex spatial correlation structures. Dimensionality reduction
techniques, such as principal component analysis (PCA), are widely used.
Various types of alternatives such as multivariate functional PCA (Paynabar et
al., 2016), multi-linear PCA (Grasso et al., 2014), and tensor-based PCA (Yan
et al., 2015) are proposed. On the other hand, non-parametric methods based on
local kernel regression (Qiu et al., 2010) and splines (Chang and Yadama,
2010) are developed. To monitor the non-smooth waveform signals, a wavelet-
based mixed effect model is proposed in (Paynabar and Jin, 2011). However, both PCA-based methods and non-parametric methods typically assume that the change alternative is not known. To utilize the anomaly structures, smooth
sparse decomposition methods have been proposed and utilize two sets of basis
functions, the background basis and anomaly basis, to represent the spatial
structures of the background and anomaly, which have been applied to smooth
profiles (Yan et al., 2017, 2018) and waveform profiles (Yue et al., 2017).
However, all the profile monitoring techniques assume that the complete
measurements are given and cannot be applied for HD data with partial
observations.
In the third category, many existing works focus on the change point detection
with the sampling constraint. Here, we will briefly classify the existing monitoring methods with the sampling constraint into two categories: monitoring i.i.d. data streams and monitoring correlated data streams. For monitoring i.i.d. data streams with the sampling constraint, Liu et al.
(2015) proposed a top-R-based adaptive sampling strategy as a combination of
random sampling in the in-control state and fixed sampling in the out-of-
control state. Another work by (Zhang and Mei, 2020) converts the problem into
a MAB framework and adaptively selects the sensors with Thompson Sampling.
Recent work by (Gopalan et al., 2021) provides an information-theoretic lower
bound for the detection delay. While this method can be applied to multi-dimensional problems, it cannot handle the case with multiple complex failure-mode (i.e., after-change) distributions.
However, due to the i.i.d. assumption, these methods might not be suitable for
data with complex distributions in reality. To deal with this problem, Xian et
al. (2018) proposed an adaptive sampling strategy that can handle the
correlated data generated from a multinomial distribution. For monitoring
correlated data streams with the sampling constraint, these methods can be
classified into monitoring data generated from the Bayesian Network and
spatial profile. For example, Liu et al. (2013) and Liu and Shi (2013)
proposed a sensor allocation strategy according to a Bayesian Network to
detect changes with multivariate $T^{2}$ control chart. Another work discussed
the problem when there is a spatial correlation among sensors and proposed a
spatial-adaptive sampling strategy to focus on suspicious spatial clusters (Wang et al., 2018). However, these methods consider either data streams with spatial correlation (Wang et al., 2018; Ren et al., 2020; Gómez et al., 2022) or data modeled by Bayesian network structures (Liu et al., 2013; Liu and Shi, 2013), and thus fail to apply to problems with general failure mode distributions as discussed in this paper.
Finally, there is a large amount of work focused on the case when there are
multiple failure modes, and it is necessary to identify the true failure mode while detecting the change. The problem of sequential change detection with
multiple failure modes is usually called change detection and isolation. The
goal is to find the best decision procedure that can control the false alarm
rate as well as the false isolation probability. The problem is of importance
since it is common in different applications like fault diagnosis, process
monitoring, and object identification (Nikiforov et al., 1993; Willsky, 1976;
Malladi and Speyer, 1999). The major works in change detection and isolation
can be categorized into Bayesian and non-Bayesian directions. Nikiforov (1995)
proposed a change detection/isolation framework as an extension of Lorden’s
results (Lorden, 1971), which follows a non-Bayesian scheme. Two other works
formulate the problem into a Bayesian version, which considers the change
point as a random variable (Chen et al., 2020; Malladi and Speyer, 1999).
## 3 Proposed Methodology
In this section, we will first describe the problem formulation of partially
observed multi-mode change detection based on high-dimensional (HD) streaming
data with sampling control in Section 3.1. We will review some relevant
methodology in Section 4. We will describe the proposed MTSSRP methodology in
Section 4.4. We will prove important properties of the proposed algorithms
about the average run length and failure mode isolation guarantee in Section
4.5. We will give the guidelines to select the tuning parameters of the
proposed MTSSRP method in Section 4.6. Finally, we will give a discussion and
several guidelines on selecting the failure mode distributions in Section 4.7.
### 3.1 Problem Formulation and Background
Suppose we are monitoring data stream $X_{j,t}$ for $j=1,\cdots,p$ and
$t=1,2,\cdots,T$, where $p$ is the dimensionality of the system and $T$ is the monitored time length. We assume that the data streams follow the joint
distribution $f_{0}$ before change as ${\bf X}_{t}\sim f_{0}$ for $t<\nu$. At
some unknown change time $\nu\in\\{1,2,\cdots\\}$, an undesirable event occurs and causes an abrupt change of the data stream into one of a few failure
modes. For example, the after-change distribution $f_{k}$ can be any one from a family of distributions $\mathcal{F}=\\{f_{1},\cdots,f_{K}\\}$; that is, after the change, ${\bf X}_{t}\sim f_{k}$ for some $k\in\\{1,\cdots,K\\}$ and $t>\nu$. In other words, we do not assume that we know which failure mode occurs in the system. Following the change detection and isolation framework, we do assume that a single failure mode $f_{k}$ may occur after the change; the case with multiple simultaneous failure modes is discussed later. Here, $f_{0},f_{1},\cdots,f_{K}$ are joint distributions over all sensing variables, and the sensing variables can be correlated. Finally, in practice, we can set the magnitude of the change in the joint distribution to the magnitude of the change we are interested in detecting.
Furthermore, we assume that given the resource constraint, it is not possible
to observe all the data streams. For the partially observed data with sampling
control, the set of the observed data is denoted by
$\mathbf{y}_{t}=\\{X_{j,t},j\in C_{t}\\}$. Here, $C_{t}$ is the set of
observed sensor indices at time $t$, which can be selected online. In other
words, we can define $a_{j,t}$ as the binary variable denoting whether the
variable $j$ is observed at time $t$, $C_{t}=\\{j:a_{j,t}=1\\}$. Finally, the
sampling constraint is represented by $\sum_{j=1}^{p}a_{j,t}=q$ at each time
$t=1,2\cdots$, which means at each time $t$, only $q$ sensing variables can be
observed from all $p$ variables.
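As a concrete illustration of this notation, the following minimal sketch shows the observed set $C_{t}$, the indicators $a_{j,t}$, and the sampling constraint (the variable names and toy values are ours, chosen for illustration):

```python
def observe(x_full, C_t):
    """Return y_t = {x_{j,t} : j in C_t}, the partially observed data."""
    return {j: x_full[j] for j in C_t}

def check_constraint(a_t, q):
    """The sampling constraint: exactly q of the p indicators a_{j,t} equal 1."""
    return sum(a_t) == q

# Toy example with p = 5 sensors and a sensing budget of q = 2.
x_t = [0.3, -1.2, 2.5, 0.1, 0.8]
a_t = [0, 0, 1, 0, 1]                         # a_{j,t}: observe sensor j or not
C_t = [j for j, a in enumerate(a_t) if a == 1]  # C_t = {j : a_{j,t} = 1}
y_t = observe(x_t, C_t)
```

Here `y_t` contains only the two measurements chosen by the sampling layout; the rest of ${\bf X}_{t}$ remains unobserved at time $t$.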
The objective of this paper is to design an efficient adaptive sampling
algorithm and the change point detection algorithm to automatically distribute
sensing resources according to the knowledge of the system failure modes such
that the change can be detected quickly as soon as it occurs and the
corresponding failure mode can be identified accurately while maintaining the
false alarm constraint.
## 4 Review of Relevant Methodology
In Section 4, we will review the formulations of the relevant methodology in
detail. Section 4.1 reviews the Bayesian decision framework, namely the
Shiryaev-Roberts (SR) procedure for sequential change-point detection.
Section 4.2 reviews the extension of the SR procedure for HD data monitoring.
### 4.1 Review of Shiryaev-Roberts Statistics for Univariate Sequential Change Point Detection
We will first review the Bayesian decision approach for the sequential change-
point detection by Shiryaev (1963). Consider univariate streaming data ${\bf X}_{1},{\bf X}_{2},\cdots,$ where the distribution of the data changes from
$f_{0}$ to $f_{1}$ at some unknown time $\nu$. In other words, we assume that
${\bf X}_{1},\ldots,{\bf X}_{\nu}\overset{i.i.d}{\sim}f_{0}$ and ${\bf
X}_{\nu+1},\ldots\overset{i.i.d}{\sim}f_{1}$. The goal is to propose a
statistical decision policy to determine the stopping time $T$, where $T=t$
implies that a change has happened at time $t$.
Here, the goal is to find a decision policy such that the detection delay can
be minimized under the false alarm constraint. A Bayesian formulation of the statistical decision policy has been proposed, where the true change point $\nu$ is assumed to have a geometric distribution, $P(\nu=t)=p(1-p)^{t-1}$. In the limit $p\rightarrow 0$, Shiryaev (1963) proposed the statistic $R_{t}=\sum_{j=1}^{t}\prod_{i=j}^{t}\frac{f_{1}({\bf X}_{i})}{f_{0}({\bf X}_{i})}$, which can be computed recursively as
$R_{t}=(R_{t-1}+1)\frac{f_{1}({\bf X}_{t})}{f_{0}({\bf X}_{t})}.$ (1)
For some pre-specified constant $A$, the decision to detect a change point in
distribution can be made as $T_{A}=\inf_{t}\\{R_{t}\geq A\\}.$ Pollak (1985, 1987) proved that this change point detection rule has an asymptotic minimax property: it minimizes
$\sup_{1\leq\nu<\infty}\mathbb{E}(T_{A}-\nu|T_{A}\geq\nu)$ under the
constraint that $\mathbb{E}(T_{A}|\nu=\infty)\geq B$ and $B\to\infty$.
Despite its statistical efficiency, the SR procedure is defined for change point detection on univariate data and cannot be used directly to monitor high-dimensional data.
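The SR recursion in (1) and the stopping rule $T_{A}$ can be sketched as follows; the Gaussian pre/post-change densities, the toy data stream, and the control limit are illustrative assumptions, not part of the original procedure:

```python
import math

def norm_pdf(x, mu):
    """Density of N(mu, 1); an assumed pre/post-change model for illustration."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

f0 = lambda x: norm_pdf(x, 0.0)   # pre-change density
f1 = lambda x: norm_pdf(x, 1.0)   # post-change density (unit mean shift)

def sr_update(R_prev, x):
    """One step of Eq. (1): R_t = (R_{t-1} + 1) * f1(x_t) / f0(x_t)."""
    return (R_prev + 1.0) * f1(x) / f0(x)

A = 25.0                          # illustrative control limit
R, T = 0.0, None
stream = [0.1, -0.2, 1.1, 0.9, 1.3, 1.2, 0.8]  # toy data, change mid-stream
for t, x in enumerate(stream, start=1):
    R = sr_update(R, x)
    if R >= A:                    # stopping time T_A = inf{t : R_t >= A}
        T = t
        break
```

For a unit mean shift, the likelihood ratio is $e^{x-1/2}$, so $R_t$ shrinks on pre-change samples and grows geometrically once the shifted samples arrive, which is what makes the threshold rule effective.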
### 4.2 Review of Thompson Sampling with the SR statistics
Zhang and Mei (2020) combined Thompson Sampling with the SR statistics to monitor high-dimensional data with adaptive sampling control. Suppose that for any time $t$, the data is $p$-dimensional. Before the change time $\nu$, each dimension is $\overset{i.i.d}{\sim}f_{0}$, and after the change time $\nu$, each dimension is $\overset{i.i.d}{\sim}f_{1}$. Zhang and Mei (2020) proposed a statistic for each dimension:
$R^{\prime}_{j,t}=\begin{cases}(R^{\prime}_{j,t-1}+1)\frac{f_{1}(X_{j,t})}{f_{0}(X_{j,t})}&j~{}{\rm
observed}\\\ R^{\prime}_{j,t-1}+1&j~{}{\rm unobserved}\end{cases}$ (2)
The decision of a change point of distribution is a threshold of top $r$
statistics, which is
$T=\inf\\{t\geq 1:\sum_{j=1}^{r}R^{\prime}_{j,t}\geq A\\}$
To determine which dimensions to observe, Zhang and Mei (2020) implemented
Thompson Sampling, where a random variable $R^{\prime}_{j,0}$ is sampled from
a prior distribution $G$ and added to $R^{\prime}_{j,t}$ with some
coefficients. The $q$ dimensions with the largest result are observed at time
$t+1$. Even though this algorithm provides a theoretical understanding of the compensation terms in (Liu et al., 2015) from a Bayesian perspective, it still focuses on the case where failures occur at the individual sensor level and cannot be applied to HD data with complex failure mode distributions.
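A sketch of the per-stream update in Eq. (2) and the top-$r$ stopping statistic; the compensation term for unobserved streams follows the equation above, while the constant likelihood ratio and toy values are illustrative:

```python
def stream_update(R_prev, x, observed, lr):
    """Eq. (2): full SR update when stream j is observed, and the
    compensation R'_{j,t-1} + 1 when it is not. `lr` is f1/f0."""
    if observed:
        return (R_prev + 1.0) * lr(x)
    return R_prev + 1.0

def top_r_stat(R, r):
    """Global statistic: sum of the r largest per-stream SR statistics."""
    return sum(sorted(R, reverse=True)[:r])

# One toy step: 4 streams, only streams 0 and 2 observed, and a constant
# likelihood ratio so the arithmetic is easy to follow.
R = [1.0, 2.0, 0.5, 3.0]
lr = lambda x: 1.5
obs = [True, False, True, False]
x = [0.2, None, -0.1, None]
R = [stream_update(R[j], x[j], obs[j], lr) for j in range(4)]
alarm = top_r_stat(R, 2) >= 10.0   # stop when the top-r sum crosses A
```

Note that unobserved streams still accumulate the `+1` compensation, which is exactly what encourages the sampler to revisit them later.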
### 4.3 Review of Change Point Detection and Isolation
We will briefly review the change-point detection and isolation framework by
Chen et al. (2020). Consider a streaming data ${\bf X}_{1},{\bf X}_{2}\cdots,$
where the distribution of the data changed from $f_{0}$ to $f_{k}$ at some
unknown time $\nu$. In other words, we assume that ${\bf X}_{1},\ldots,{\bf
X}_{\nu}\overset{i.i.d}{\sim}f_{0}$ and ${\bf
X}_{\nu+1},\ldots\overset{i.i.d}{\sim}f_{k}$, where
$f_{k}\in\mathcal{F}=\\{f_{1},\cdots,f_{K}\\}$ is a known pre-defined set of
after-change distributions.
The goal of a change-point detection and isolation algorithm is to compute a
terminal pair $\delta=(T,\hat{k})$ online based on the observations ${\bf
X}_{1},{\bf X}_{2},\cdots$, where $T$ is the alarm time at which
$\hat{k}$-type change is identified. The goal is to propose a statistical decision policy to determine the stopping time $T_{A}$, where $T_{A}=t$ implies that a change has happened at time $t$. Most change-point detection frameworks use a pre-specified constant $A$, so that the decision to detect a change point in distribution can be made as $T_{A}=\inf_{t}\\{R_{t}\geq A\\}.$
Similar to the change point detection problem, we would like to minimize the
detection delay as $\sup_{1\leq\nu<\infty}\mathbb{E}(T_{A}-\nu|T_{A}\geq\nu)$.
Besides, there are typically two types of constraints: 1) under the case that
there is no change, the average detection time should be larger than a
threshold $B$ as $\mathbb{E}(T_{A}|\nu=\infty)\geq B$. 2) If a true change
mode $k$ happens, the false isolation rate, which is defined as:
$\max\limits_{1\leq k\leq K}\mathbb{P}^{k}\\{\hat{k}\neq k|T\geq\nu\\}$ should
also be small. However, these methods assume that the data is fully observable and thus cannot be applied to partially observed data.
### 4.4 Proposed Algorithm
In Section 4.4, we will introduce the proposed methodology with the following major steps: monitoring statistics update, change point detection decision, failure mode isolation, and planning for adaptive sampling. The overall
framework is shown in Fig. 3, and the detailed steps are as follows:
  1.
Monitoring statistics update: We first construct the monitoring statistics of
partially observed HD streaming data for each failure mode based on the SR
procedure. The detailed step is discussed in Section 4.4.1.
  2.
Change point detection decision: According to the updated monitoring
statistics for each failure mode, a top-R statistic is used to conduct the
global decision. We will then raise a global alarm if the process has gone out
of control and decide which failure mode has occurred. The detailed step is
discussed in Section 4.4.2.
  3.
Planning for adaptive sampling: If the change is not detected, we will update
the sampling layout dynamically according to the historical observations. To
achieve this, we propose to borrow the Thompson sampling idea to decide the
next sampling layout, where the data is randomly sampled from the identified
failure mode distribution. The detailed step is discussed in Section 4.4.3.
Furthermore, the optimization algorithm to solve this planning and optimal
sampling decision is discussed in Section 4.4.4. The selected sampling
patterns will be used to update the monitoring statistics recursively.
  4.
Failure mode isolation: Finally, if the change is detected, we will isolate
and identify the true failure mode in the system.
Figure 3: Procedure of the proposed method
#### 4.4.1 Recursive Monitoring Statistics Update
In this subsection, we will discuss the proposed method of constructing the Shiryaev-Roberts (SR) statistics for each failure mode $k\in\\{1,\cdots,K\\}$ with missing observations. We denote the local SR statistic for mode $k$ at time $t$ as $R_{k,t}$, where $\mathbf{y}_{t}$ is the set of observed data streams. The local statistic is updated recursively as
$R_{k,t}=(R_{k,t-1}+1)\frac{\tilde{f}_{C_{t},k}(\mathbf{y}_{t})}{\tilde{f}_{C_{t},0}(\mathbf{y}_{t})}.$
Here, $\tilde{f}_{C_{t},k}$ is the joint distribution of the observed data
$\mathbf{y}_{t}$ at the index set $C_{t}$. For computational efficiency and stability, it is recommended to use $r_{k,t}=\log R_{k,t}$, updated as
$r_{k,t}=\log(\exp(r_{k,t-1})+1)+\log\frac{\tilde{f}_{C_{t},k}(\mathbf{y}_{t})}{\tilde{f}_{C_{t},0}(\mathbf{y}_{t})}.$
We set $R_{k,0}=0$ initially and update the statistics accordingly.
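The log-domain update of $r_{k,t}$ above can be sketched as follows. For this sketch we assume the restricted densities factorize over the observed sensors (one of the settings discussed later) and use illustrative Gaussian per-sensor models; the initialization $R_{k,0}=0$ corresponds to $r_{k,0}=-\infty$:

```python
import math

def log_sr_update(r_prev, y_t, C_t, log_f_k, log_f_0):
    """r_{k,t} = log(exp(r_{k,t-1}) + 1) + log likelihood ratio of the
    observed data y_t under mode k versus the in-control model (sketch)."""
    llr = sum(log_f_k(j, y_t[j]) - log_f_0(j, y_t[j]) for j in C_t)
    return math.log(math.exp(r_prev) + 1.0) + llr

# Assumed per-sensor models (up to a shared constant that cancels in the
# ratio): N(0,1) in control, N(1,1) under mode k.
log_f_0 = lambda j, x: -0.5 * x * x
log_f_k = lambda j, x: -0.5 * (x - 1.0) ** 2

r = -math.inf                    # R_{k,0} = 0, i.e. r_{k,0} = log 0
y = {0: 1.2, 1: 0.8, 2: -0.3}    # current data (sensor 1 not observed)
C = [0, 2]                       # observed index set C_t
r = log_sr_update(r, y, C, log_f_k, log_f_0)
```

Only the observed coordinates enter the likelihood ratio, so sensors outside $C_{t}$ contribute nothing to the update of mode $k$ at this step.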
#### 4.4.2 Detection Decision and Failure Mode Isolation
We will combine the local statistics for each failure mode to construct a
global stopping time. If we know that the system contains only one failure mode, we propose to use the largest of the monitoring statistics $r_{k,t}$ to trigger the alarm; that is, the alarm is raised once $r_{k,t}$ exceeds a control limit $A$ for some $k$:
$T=\inf\\{t\geq 1:\max\limits_{k}r_{k,t}\geq A\\}$ (3)
However, if there may be multiple failure modes in the system, the summation of the top $K_{s}$ statistics $r_{(k),t}$ can be used to trigger the alarm:
$T=\inf\\{t\geq 1:\sum_{k=1}^{K_{s}}r_{(k),t}\geq A\\}$ (4)
Finally, to isolate the most probable failure mode when the change is detected, we propose to report the index $\hat{k}$ of the largest monitoring statistic, computed as
$\hat{k}=\arg\max\limits_{k}r_{k,T}.$ (5)
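The detection and isolation rules in (3)-(5) can be sketched jointly; the mode statistics and control limit below are illustrative values:

```python
def detect_and_isolate(r_stats, A, K_s=1):
    """Alarm when the sum of the top-K_s mode statistics reaches A
    (Eqs. (3)-(4)); on alarm, isolate the argmax mode (Eq. (5))."""
    order = sorted(range(len(r_stats)), key=lambda k: r_stats[k], reverse=True)
    alarm = sum(r_stats[k] for k in order[:K_s]) >= A
    k_hat = order[0] if alarm else None
    return alarm, k_hat

# Illustrative statistics r_{k,t} for K = 3 failure modes at the current time.
alarm, k_hat = detect_and_isolate([0.4, 3.1, 1.2], A=3.0)
```

With `K_s=1` this reduces to the max rule (3); with `K_s>1` it implements the top-$K_{s}$ sum in (4), and the isolated mode is always the one with the largest statistic at the alarm time, as in (5).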
#### 4.4.3 Planning for Adaptive Sampling
In this section, we present an efficient method to plan and select the best
sampling pattern to observe at the next time point. Suppose that we have
observed $\mathbf{y}_{1},\cdots,\mathbf{y}_{t-1}$ and the goal is to determine $\\{a_{j,t}\\}$ for the next time $t$, where $a_{j,t}$ is a binary variable denoting whether variable $j$ is observed at time $t$. Inspired by the
MAB, we propose to maximize the reward function, defined by the monitoring
statistics of the top few selected failure modes. More specifically, we
propose to use the summation of the SR statistics of the top-$K_{s}$ failure
modes as the reward function, where $K_{s}$ is a pre-defined parameter to
balance the exploration and exploitation. In other words, we can define the
reward function as $S_{t}=\sum_{k=1}^{K_{s}}r_{(k),t}$, where $r_{(k),t}$ denotes the $k$th largest statistic, so that $r_{(1),t}\geq\cdots\geq r_{(K_{s}),t}$.
One specific challenge is that to compute the reward function $S_{t}$ for
planning, we need to compute $r_{(k),t}$, which requires $\mathbf{x}_{t}$ to
be fully observed. However, given that we are still at time $t-1$ and the data $\mathbf{x}_{t}$ has not been observed yet, it is impossible to compute and optimize $S_{t}$ for the planning problem.
To solve this planning problem, we propose to optimize a sampled version of the monitoring statistic $S_{t}$, defined as $\tilde{S}_{t}=\sum_{k=1}^{K_{s}}\tilde{r}_{(k),t}$. Borrowing from the Thompson sampling algorithm, we use a sampled version of $\mathbf{x}_{t}$, denoted as $\mathbf{\tilde{x}}^{k}_{t}$, to compute
$\tilde{r}_{k,t}=\log(\exp{(r_{k,t-1})}+1)+\log\frac{f_{k}(\tilde{\mathbf{x}}^{k}_{t})}{f_{0}(\tilde{\mathbf{x}}^{k}_{t})},$
(6)
where $\tilde{\mathbf{x}}^{k}_{t}\sim f_{k}$ is sampled from the $k^{th}$
failure mode. Finally, one can optimize $a_{j,t}$ by maximizing the sampled
version of $S_{t}$, denoted as $\tilde{S}_{t}$ as
$\displaystyle\max_{a_{j,t}}\tilde{S}_{t}\quad\text{subject to}$
$\displaystyle\sum_{j}a_{j,t}=q,\quad a_{j,t}\in\\{0,1\\}$ (7)
We will discuss how to solve the optimization problem (7) in Section 4.4.4.
#### 4.4.4 Optimization for Planning
Optimizing (7) is challenging in general. If the problem dimension $p$
and the number of selected sensing variables $q$ are small, we can enumerate
all $\tbinom{p}{q}$ possible combinations of the sampling layouts.
However, given that the time complexity is $O\left(\tbinom{p}{q}\right)$,
enumerating all possible combinations is not feasible for large $p$ and $q$.
Here, we will first present the closed-form solution for a special case of the
proposed algorithm, where the joint failure mode distribution
$f_{k}(\mathbf{x}_{t})=\prod_{j}f_{j,k}(x_{t,j})$ can be approximated by
independent but not necessarily identical distributions in each dimension $j$.
Notice that if the number of possible failure modes in each dimension $j$ is
finite, the total number of possible failure modes over all dimensions is
also finite. We now derive the analytical solution to optimize (7) under this
setting in Proposition 1.
###### Proposition 1.
If $f_{k}(\mathbf{x}_{t})=\prod_{j}f_{j,k}(x_{t,j})$ for $k=0,\cdots,K$,
problem (7) is solved by choosing the observation set $C_{t}$ as the indices
of the largest $q$ values of $s_{t,j}$, denoted as
$s_{t,(1)},s_{t,(2)},\cdots,s_{t,(q)}$. Here $s_{t,j}$ is defined as
$s_{t,j}=\sum_{k=1}^{K_{s}}\log\frac{f_{j,(k)}(\tilde{x}^{k}_{j,t})}{f_{j,0}(\tilde{x}^{k}_{j,t})}$
(8)
where $s_{t,(j)}$ denotes the order statistics, sorted such that $s_{t,(1)}\geq
s_{t,(2)}\geq\cdots\geq s_{t,(q)}\geq s_{t,(q+1)}\geq\cdots\geq s_{t,(p)}$.
We would like to mention that the computation of (8) in Proposition 1 is
very efficient. Each $s_{t,j}$ requires the summation of $K_{s}$ terms, which
is of $O(K_{s})$ complexity, so deciding the best sampling layout over all
$p$ sensing variables requires only $O(pK_{s})$ complexity at each time $t$.
The limitation is that $f_{k,j}$ is assumed to be independent over the
different data dimensions $j$. However, we find that even when the
distribution of each failure mode is not independent across dimensions, this
approximation can still achieve a reasonably good solution.
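The planning step of Proposition 1 can be sketched as below for Gaussian marginals (function names and the toy setup are ours; the Thompson draws and per-dimension scores follow (6) and (8)):

```python
import numpy as np

def norm_logpdf(x, mu, sigma):
    # Elementwise univariate Gaussian log-density.
    return -0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2)

def plan_sampling(mu0, sigma0, mus, sigmas, top_modes, q, rng):
    """For each of the top-K_s failure modes, draw x~^k ~ f_k, accumulate the
    per-dimension scores s_{t,j} of (8), and observe the q highest-scoring
    dimensions (Proposition 1)."""
    s = np.zeros_like(mu0)
    for k in top_modes:
        x = rng.normal(mus[k], sigmas[k])            # Thompson draw from f_k
        s += norm_logpdf(x, mus[k], sigmas[k]) - norm_logpdf(x, mu0, sigma0)
    return np.argsort(s)[::-1][:q]                   # top-q dimensions

# Toy usage: one failure mode shifting the first 5 of 20 dimensions.
rng = np.random.default_rng(0)
mu0, sigma0 = np.zeros(20), np.ones(20)
mus = np.zeros((1, 20)); mus[0, :5] = 10.0
selected = plan_sampling(mu0, sigma0, mus, np.ones((1, 20)), [0], 5, rng)
```

With such a large mean shift, the selected dimensions concentrate on the shifted coordinates, illustrating the exploitation side of the rule.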
In this paper, we will only focus on the monitoring of continuous variables
and assume that the data follow a normal distribution $f_{k}\sim
N(\boldsymbol{\mu}_{k},\Sigma_{k})$. However, as derived in the proposed
framework, this method can be generalized to other distributions quite
easily. Finally, as mentioned in Proposition 1, if we further assume that
$\Sigma_{k}$ is diagonal, i.e.,
$\Sigma_{k}=\mathrm{diag}(\sigma_{k,1}^{2},\cdots,\sigma_{k,p}^{2})$, we can
derive a simpler formula for $s_{t,j}$ in Proposition 2.
###### Proposition 2.
Given that $f_{k,j}=N(\mu_{k,j},\sigma_{k,j}^{2})$, i.e.,
$\Sigma_{k}=\mathrm{diag}(\sigma_{k,1}^{2},\cdots,\sigma_{k,p}^{2})$, the
score in (8) becomes
$s_{t,j}=\sum_{k=1}^{K_{s}}\left(\log\frac{\sigma_{0,j}}{\sigma_{k,j}}+\frac{1}{2\sigma_{0,j}^{2}}(\tilde{x}^{k}_{j,t}-\mu_{0,j})^{2}-\frac{1}{2\sigma_{k,j}^{2}}(\tilde{x}^{k}_{j,t}-\mu_{k,j})^{2}\right)$,
where $\tilde{x}^{k}_{j,t}\sim f_{k,j}$.
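The closed form is just the univariate Gaussian log-likelihood ratio expanded term by term; a quick numerical cross-check of this reconstruction against the log-density definition:

```python
import numpy as np

def gaussian_llr(x, mu_k, s_k, mu_0, s_0):
    # Closed-form log f_k(x) - log f_0(x) for univariate Gaussians,
    # matching the summand of s_{t,j} in Proposition 2 (our reconstruction).
    return (np.log(s_0 / s_k)
            + (x - mu_0)**2 / (2 * s_0**2)
            - (x - mu_k)**2 / (2 * s_k**2))

def logpdf(x, mu, s):
    return -0.5 * np.log(2 * np.pi * s**2) - (x - mu)**2 / (2 * s**2)

x = 1.3
direct = logpdf(x, 2.0, 1.5) - logpdf(x, 0.0, 1.0)
closed = gaussian_llr(x, 2.0, 1.5, 0.0, 1.0)
```

The normalization terms cancel into the single $\log(\sigma_{0,j}/\sigma_{k,j})$ factor, which vanishes when all variances are equal.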
We would like to emphasize that the independence assumption on each failure
mode distribution is not actually required by the proposed algorithm; it is
only used to derive the closed-form solution of (7). If the spatial
dimensions are not independent for different failure modes, the proposed
planning procedure can still be optimized without this assumption by
approximating the optimal solution. Here, we propose a greedy algorithm to
select the $a_{j,t}$ sequentially. The detailed steps are as follows.
First, we select the first sensing variable $j_{1}$ to optimize the
sampled version $\tilde{S}_{t}$ by
$j_{1}=\arg\max\tilde{S}_{t},\sum_{j}a_{j,t}=1,a_{j,t}\in\\{0,1\\}.$ After
$j_{1}$ is decided, we choose the second sensing variable
$j_{2}$ by $j_{2}=\arg\max\tilde{S}_{t},\sum_{j}
a_{j,t}=2,a_{j_{1},t}=1,a_{j,t}\in\\{0,1\\}$. We continue the procedure
until $j_{q}$ is selected. In conclusion, the set of observed sensing
indices is given as $C_{t}=\\{j_{1},\cdots,j_{q}\\}$. Given that we only need
to enumerate at most $p$ dimensions in each of the $q$ iterations, the time
complexity is reduced to $O(pq)$. Despite its efficiency, the greedy
forward selection strategy usually does not produce a globally optimal
solution.
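The greedy forward selection above can be sketched generically; `score_fn` stands for evaluating the sampled objective $\tilde{S}_{t}$ on a candidate observation set (the name and interface are ours):

```python
def greedy_select(score_fn, p, q):
    """Select q sensing indices one at a time, each round adding the index
    that most increases the sampled objective; O(pq) score evaluations."""
    chosen = []
    for _ in range(q):
        candidates = [j for j in range(p) if j not in chosen]
        chosen.append(max(candidates, key=lambda j: score_fn(chosen + [j])))
    return chosen
```

For an additive objective, the greedy choice coincides with simply picking the top-$q$ scores, as in Proposition 1.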
Here, we would like to highlight the major differences of the proposed method
compared to the existing literature on monitoring i.i.d. data streams,
such as (Zhang and Mei, 2020): 1) the number of failure modes $K$ does not
need to be the same as the dimensionality $p$ of the data stream; 2) the
normal data distribution $f_{0}$ does not need to be i.i.d. across
dimensions, i.e., $x_{j}\overset{i.i.d}{\sim}f(x)\text{ for all }j$; 3) each
failure mode can include overlapping sets of sensing variables.
Finally, we would like to point out a special version of the proposed method
and how it links to (Zhang and Mei, 2020) when we are interested in
monitoring i.i.d. data streams with the focus of detecting changes in each
individual sensing variable.
###### Proposition 3.
Before the change, $H_{0}:x_{j}\overset{i.i.d}{\sim}f(x)\text{ for all }j$.
After the change, under the $j^{th}$ failure mode, where
$j\in\\{1,\cdots,p\\}$, only the $j^{th}$ data stream changes its
distribution to $g(x)$, i.e., $x_{j}{\sim}g(x)$, while the rest
$x_{j^{\prime}}\overset{i.i.d}{\sim}f(x),j^{\prime}\neq j$ still follow the
pre-change distribution. The proposed algorithm then reduces to the sampled
updating rule in (Zhang and Mei, 2020):
$R_{j,t}=\begin{cases}\frac{g(\tilde{x}_{j,t})}{f(\tilde{x}_{j,t})}(R_{j,t-1}+1)&a_{j,t}=1\\\
R_{j,t-1}+1&a_{j,t}=0\end{cases}$ (9)
Proposition 3 shows a special case of the proposed algorithm, which assumes
that the change only affects a few data streams and that the algorithm tries
to identify the change under the resource constraint. Under this setting, the
proposed algorithm becomes another sampled version of the TSSRP algorithm.
Many previous works, including (Liu et al., 2015; Zhang and Mei, 2020), have
studied this setting. However, the proposed algorithm can be generalized to
arbitrary joint distributions of different failure modes.
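The case-wise recursion in (9) is a one-liner; here `lr` stands for the likelihood ratio $g(\tilde{x}_{j,t})/f(\tilde{x}_{j,t})$ (a sketch under our naming):

```python
def update_sr_sampled(R_prev, observed, lr=1.0):
    """Eq. (9): if stream j is observed (a_{j,t}=1), the SR statistic is
    multiplied by the likelihood ratio; otherwise only `+1` is applied."""
    return lr * (R_prev + 1.0) if observed else R_prev + 1.0
```

Note that unobserved streams still accumulate through the `+1` term, which is what drives exploration toward neglected streams.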
### 4.5 Properties of the Proposed Algorithm
Here, we will prove two important properties of the algorithm: a bound on
the average run length and the consistency of failure mode isolation, in
Theorem 4 and Theorem 5, respectively.
###### Theorem 4 (Average Run Length).
Let $T=\inf\\{t\geq 1:r_{(1),t}\geq A_{2}\\}$. Then we have that under the
null hypothesis where no changes occur, $\mathbb{E}T\geq A_{1}/K$,
$\mathbb{E}T=O(A_{1})$, where $A_{1}=e^{A_{2}}$.
Theorem 4 provides lower and upper bounds for the average run length when no
change occurs, and thus gives guidance for selecting conservative bounds on
the control limit $A$. Specifically, since $A_{1}=e^{A_{2}}$ and
$\mathbb{E}T\geq A_{1}/K$, the value $K\cdot ARL_{0}$ upper-bounds $A_{1}$
and can serve as the upper end of the bisection search, which speeds up the
threshold-choosing procedure.
###### Theorem 5 (Failure Mode Isolation).
Assume ${\bf X}_{1},\ldots,{\bf X}_{\nu}\sim f_{0}$ and ${\bf
X}_{\nu+1},\ldots\sim f_{k}$. Suppose all failure mode distributions are
continuous, the KL divergence between the distribution of failure mode $l$
and that of the true failure mode $k$ satisfies
$0<KL(f_{k}\|f_{l})<\infty$, and $\mathrm{Var}_{x\sim
f_{k}}[\log\frac{f_{k}}{f_{l}}]<\infty$ for all $l\neq k$. Then
$P(r_{k,t}>r_{l,t})\rightarrow 1$ as $t\to\infty$.
Theorem 5 characterizes the behavior of the largest SRP statistic as time
goes to infinity under the alternative hypothesis (where failure mode $k$
occurs). It shows that the adaptive sampling algorithm will always be able to
isolate the true failure mode $k$ as $t\rightarrow\infty$.
In some cases, multiple failure modes may happen at different
time points after the change point $\nu$, i.e., for some $t_{1}>\nu$, ${\bf
X}_{t_{1}}\sim f_{k}$, and for some $t_{2}>\nu$, ${\bf X}_{t_{2}}\sim f_{l}$.
For this case, we prove Corollary 6.
###### Corollary 6.
Assume ${\bf X}_{1}\ldots{\bf X}_{\nu}\sim f_{0}$. Let $\mathcal{K}=\\{k:{\bf
X}_{t}\sim f_{k}\textit{ for some }t>\nu\\}$ be the set of true failure modes.
If we further assume that the support of different failure modes are non-
overlapping, then for any $k\in\mathcal{K}$ and $l\notin\mathcal{K}$, we have
$\lim_{t\to\infty}P(r_{k,t}>r_{l,t})=1$.
Corollary 6 assumes that different failure modes do not overlap with each
other; under this assumption, the SRP statistics of the true failure
modes will be larger than those of the other potential failure modes.
We would like to point out that Corollary 6 is not always true for failure
mode distributions that overlap with each other. For
example, if half of the data after the change follow $f_{1}$ and the other
half follow $f_{2}$, the failure mode identified might be
$f_{3}=\frac{1}{2}(f_{1}+f_{2})$.
### 4.6 Choice of Parameters
Here, we will present practical guidelines for tuning parameter selection.
Given that the number of sensing variables $q$ typically depends on the
available sensing resources in the particular application, we only need to
select the following parameters: the number of top selected failure modes
$K_{s}$, the control limit threshold $A$, and the failure mode distributions
$f_{k}$ and $f_{0}$.
Choice of the number of observed failure modes $K_{s}$: First, the number of
selected failure modes for the monitoring statistics should be smaller than
the total number of potential failure modes. Ideally, $K_{s}$ should be chosen
as large as the total number of true failure modes in the system. In practice,
we found that increasing $K_{s}$ to be more than the true number of failure
modes in the system would lead the algorithm to explore more potential failure
modes or increase the exploration power. However, if $K_{s}$ is too large, the
algorithm is not able to focus on the actual failure modes, which decreases
the exploitation power.
Choice of threshold $A$: The choice of control limit $A$ can be determined by
the in-control ARL (or $ARL_{0}$). If $A$ is large, $ARL_{0}$ will also
increase. In practice, we can set an upper bound on $A$ by utilizing
Theorem 4 and then use the binary search algorithm to find the best $A$ for a
fixed $ARL_{0}$.
### 4.7 Selection of failure mode distribution $f_{k}$
Finally, the complex joint distributions of different failure modes also
bring significant computational complexity, which is addressed in this paper.
Selecting the failure mode distribution is very important for achieving good
change detection and isolation performance. However, it is very challenging
to provide accurate failure mode definitions for high-dimensional data
without any domain knowledge. In this work, we mainly focus on detecting
mean shifts in high-dimensional data and discuss how to define the failure
modes based on our knowledge of the data. Overall, there are two strategies
for choosing the most appropriate failure mode distribution $f_{k}$. 1) If
there is prior knowledge about the failure mode distributions, we can set the
distribution according to the prior knowledge. For example, if we know that
the hot spots are clustered, each failure mode distribution can be assumed as
the mean-shift of the IC distribution $f_{0}$ with an individual B-spline or
Gaussian kernel basis. If we know that the post-change distribution is sparse,
a simple way is to set the failure mode distribution as the mean shift of each
individual sensor. 2) If we do not know the failure mode distributions, we can
collect some samples for each failure mode and use these samples to estimate
the failure mode distribution.
## 5 Simulation Study
Here, we will evaluate the proposed method in a simulation study. We will
start with the simulation setup for a single failure mode in Section 5.1 with
two different scenarios: the non-overlapping case and the overlapping case.
Then, we will evaluate the proposed algorithm in these two scenarios. To test
the robustness of the proposed algorithm in the case when multiple failure
modes coexist, we also perform a sensitivity analysis to evaluate the
performance in Section 5.2.
### 5.1 Simulation Study for a Single Failure Mode
Here, we will discuss the two scenarios for the simulation setup. The
scenarios are distinguished by whether the failure modes have overlapping
support. For example, in the first, "non-overlapping" case, we assume that
different failure modes $f_{i}$ and $f_{j}$ have non-overlapping support.
#### 5.1.1 Scenario 1: the non-overlapping case
We will first discuss the non-overlapping case. In the simulation, we let
the data dimension $p=1000$ and the number of failure modes $K=50$, and the
algorithm selects $q=10$ sensors at each time. Here, we assume the normal or
in-control (IC) data follow $f_{0}=N(0,I)$. After the change, the
out-of-control (OC) data have $K$ possible failure modes, where
$f_{k}=N(\boldsymbol{\mu}_{k},I)$ and
$\boldsymbol{\mu}_{k}=\delta\sum_{j=20(k-1)+1}^{20k}e_{j}$, with
$e_{j}=(0,\cdots,1,\cdots,0)$ whose $j^{th}$ element is the only 1. One
failure mode is randomly selected as the true failure mode after the change.
In other words, if we organize the 1000 data streams into a $50\times 20$
image, each failure pattern corresponds to one row of pixels, as visualized
in Fig. 4(a). Therefore, different failure modes do not overlap, given that
they contain different sensing variables.
(a) All 50 Non-overlapping potential failure modes
(b) All 49 overlapping potential failure modes
Figure 4: Failure Modes for Overlapping and Non-overlapping Cases
Finally, we assume that we can only observe $10$ out of the $p=1000$ data
streams. At each time step, we adaptively select which data streams to
observe in order to monitor the whole process, with the goal of detecting the
change and isolating the correct failure mode as soon as possible.
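The non-overlapping mean shifts can be constructed in a few lines; the exact stream indexing (streams $20(k-1)+1,\dots,20k$ for mode $k$) is our reading of the $50\times 20$ layout described above:

```python
import numpy as np

p, K, width, delta = 1000, 50, 20, 0.5
mus = np.zeros((K, p))
for k in range(K):
    # Mode k shifts the contiguous block of 20 streams forming row k.
    mus[k, k * width:(k + 1) * width] = delta

# Reshaping any mode into the 50x20 image lights up exactly one row.
image = mus[3].reshape(50, 20)
```

This makes the non-overlap property explicit: the support sets of distinct modes are disjoint blocks of streams.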
#### 5.1.2 Scenario 2: the overlapping case
We now discuss the second scenario, where the failure patterns are generated
as small spatial clusters. In this case, we treat the data as 2-D images of
size $30\times 30$, with total dimension $p=900$. Here, the failure modes are
generated using a B-spline basis with $7$ knots in both the $x$ and $y$
directions. As shown in Fig. 4(b), we end up with $7^{2}=49$ potential
failure modes. After the change happens, we randomly select a failure mode as
the true failure mode. In this situation, some failure modes may overlap with
each other, which makes it more challenging for the algorithm to isolate the
real changes. Finally, during the monitoring process, we can adaptively
select 10 out of the $p=900$ data streams to observe online.
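The $7\times 7$ tensor-product failure patterns can be sketched as follows; we use Gaussian bumps as a simple stand-in for the B-spline basis functions (the bump centers and width are our choices):

```python
import numpy as np

grid = np.arange(30, dtype=float)
centers = np.linspace(2.0, 27.0, 7)   # 7 "knot" locations per axis
# 1-D basis evaluated on the grid: shape (30, 7).
basis = np.exp(-0.5 * ((grid[:, None] - centers[None, :]) / 3.0) ** 2)
# Tensor products of the 1-D bases give 7*7 = 49 spatial failure patterns.
modes = np.einsum("xi,yj->ijxy", basis, basis).reshape(49, 30, 30)
```

Neighboring patterns share support, which reproduces the overlapping structure that makes isolation harder in this scenario.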
#### 5.1.3 Simulation Result
Here, we will compare the proposed MTSSRP with the following benchmark
methods: 1) the TSSRP method (Zhang and Mei, 2020), which is introduced in
detail in the Appendix; 2) the TRAS method (Liu et al., 2015), where local
CUSUM statistics are computed for each individual data stream and later fused
together via the Top-R rule. To show the upper-bound and lower-bound
performance, we also add three simple alternatives: 1) Random, where we
randomly select $q$ sensors at each time step and use the top-r statistics
obtained by monitoring each sensor individually; 2) Oracle, which uses the
same monitoring statistics as MTSSRP but has access to all the data streams
as well as the failure mode distribution information; 3) MRandom, which
applies the same monitoring statistics as MTSSRP, taking the failure mode
distribution information into account, but randomly selects the sensors at
each time step. We evaluate the proposed method with two metrics: detection
delay and failure isolation accuracy.
First, we compare the detection delay of the proposed method and
all the benchmark methods. Here, we set the in-control average run length
(denoted as $ARL_{0}$) for all methods to $200$ and compare their out-
of-control ARL, or average detection delay (denoted as $ARL_{1}$), over
1000 replications. We compute the $ARL_{1}$ for different change
magnitudes ($\delta=0.5$, $0.8$) in Table 1, and compare the $ARL_{1}$ and
isolation accuracy across magnitudes $\delta$ from $0.1$ to $0.8$ in Fig. 5.
From the results, we can see that the proposed MTSSRP has better $ARL_{1}$
than the other benchmark methods. MTSSRP performs much better than MRandom,
which validates the efficiency of the proposed sampling strategy. The
advantage of MTSSRP over TSSRP shows that considering the failure mode
information can greatly improve the performance. We further compare the
isolation accuracy in Fig. 5(c) and Fig. 5(d). It can be seen that the
proposed MTSSRP achieves better performance than the others, and reaches
high accuracy when $\delta$ is greater than 0.6.
Table 1: Average Run Length and Failure Mode Isolation Accuracy for a single
failure mode
_Case_ | nonoverlap | | | | overlap | | |
---|---|---|---|---|---|---|---|---
_Change Magnitude_ | $\delta=0.5$ | | $\delta=0.8$ | | $\delta=0.5$ | | $\delta=0.8$ |
_Metrics_ | $ARL_{1}$ | Accuracy | $ARL_{1}$ | Accuracy | $ARL_{1}$ | Accuracy | $ARL_{1}$ | Accuracy
_Oracle_ | 13.71(11.56) | 0.99(0.04) | 2.52(1.08) | 1.0(0.0) | 13.68(12.05) | 1.0(0.05) | 2.57(1.43) | 1.0(0.03)
_Competing Methods_ | | | | | | | |
MTSSRP | 112.05(66.96) | 0.75(0.43) | 26.49(17.09) | 0.99(0.03) | 111.16(66.01) | 0.75(0.43) | 26.66(17.71) | 0.98(0.14)
TSSRP | 124.66(58.2) | 0.55(0.50) | 61.19(30.67) | 0.81(0.39) | 134.51(58.38) | 0.48(0.5) | 60.5(35.92) | 0.81(0.39)
TRAS | 164.84(49.07) | 0.46(0.50) | 75.53(42.51) | 0.98(0.13) | 161.75(51.48) | 0.47(0.5) | 84.08(50.73) | 0.93(0.25)
MRandom | 178.1(41.49) | 0.28(0.45) | 104.55(48.71) | 0.92(0.28) | 181.67(39.91) | 0.23(0.42) | 121.16(54.09) | 0.8(0.4)
Random | 199.97(1.07) | - | 199.66(5.08) | - | 200.0(0.0) | - | 199.95(1.61) | -
To understand how the proposed algorithm balances exploration and
exploitation automatically, Fig. 6 plots the SR statistics for each failure
mode (in red), together with markers (black dots) for the times at which that
failure mode has observed sensors, for both a potential failure mode (i.e., a
failure mode that does not happen in this run) and the true failure mode.
Here, a failure mode is said to have observed sensors when some observed
sensing variables are located at the non-zero locations of the mean shift of
that failure mode. From Fig. 6, it is clear that the statistics for the
potential failure mode are quite small compared to the statistics for the
true failure mode. From Fig. 6(a), we can also observe that the SR statistic
grows naturally when a failure mode is not observed, which encourages the
sensing variables to be allocated to that failure mode eventually.
Furthermore, if a failure mode is observed but no change has occurred, its
monitoring statistic drops significantly, indicating that this failure mode
is unlikely. On the other hand, from Fig. 6(b), we can clearly see that after
time $100$, when the change occurs, the true failure mode statistic grows
significantly as long as that failure mode is observed.
(a) $ARL_{1}$ in Non-overlapping Cases
(b) $ARL_{1}$ in Overlapping Cases
(c) Accuracy in Non-overlapping Cases
(d) Accuracy in Overlapping Cases
Figure 5: Out-of-control Average Run Length ($ARL_{1}$) and Failure isolation
accuracy for Different Change Magnitude $\delta$
(a) Potential Failure Mode
(b) True Failure Mode
Figure 6: SR Statistics for a Potential and the True Failure Mode. The change
happens at time $t=100$. The left figure shows the monitoring statistics for
a potential failure mode, which remain small; the right figure shows the
monitoring statistics for the true failure mode, which increase dramatically
at time $t=100$.
### 5.2 Simulation Study for Multiple Failure Modes
We further design another simulation study to evaluate the performance of the
proposed methods when multiple failure modes happen together. Fig. 7
illustrates the potential failure modes and generated multiple failure modes.
(a) All 50 Non-overlapping potential failure modes
(b) All 49 overlapping potential failure modes
(c) Non-overlapping true failure modes
(d) Overlapping true failure modes
Figure 7: Generated Potential and True Failure Mode Patterns for Overlapping
and Non-overlapping Cases
Table 2: Average Run Length and Failure Mode Isolation Accuracy for multiple
failure modes
_Case_ | nonoverlap | | | | overlap | | |
---|---|---|---|---|---|---|---|---
_Change Magnitude_ | $\delta=0.5$ | | $\delta=0.8$ | | $\delta=0.5$ | | $\delta=0.8$ |
_Metrics_ | $ARL_{1}$ | Accuracy | $ARL_{1}$ | Accuracy | $ARL_{1}$ | Accuracy | $ARL_{1}$ | Accuracy
_Oracle_ | 6.05(0.12) | 0.91(0.29) | 1.78(0.02) | 0.93(0.26) | 5.28(0.12) | 0.99(0.09) | 1.56(0.02) | 1.0(0.0)
_Competing Methods_ | | | | | | | |
MTSSRP | 57.26(1.32) | 0.91(0.29) | 14.27(0.22) | 0.93(0.26) | 53.73(1.36) | 0.93(0.25) | 14.38(0.23) | 0.97(0.16)
TSSRP | 77.78(1.26) | 0.85(0.36) | 36.32(0.53) | 0.88(0.26) | 79.15(1.35) | 0.89(0.31) | 32.12(0.54) | 0.92(0.26)
TRAS | 104.18(1.67) | 0.43(0.50) | 38.1(0.56) | 0.91(0.29) | 102.81(1.71) | 0.48(0.50) | 39.34(0.68) | 0.93(0.25)
MRandom | 146.09(1.64) | 0.59(0.49) | 64.18(0.82) | 0.92(0.26) | 149.29(1.69) | 0.57(0.50) | 69.63(1.06) | 0.98(0.15)
Random | 199.96(0.03) | - | 198.6(0.29) | - | 199.93(0.05) | - | 198.44(0.32) | -
(a) $ARL_{1}$ for the Non-overlapping Cases
(b) $ARL_{1}$ for Overlapping Cases
(c) Accuracy for the Non-overlapping Cases
(d) Accuracy for Overlapping Cases
Figure 8: Out-of-control Average Run Length ($ARL_{1}$) and Failure isolation
accuracy for Different Change Magnitude $\delta$ for multiple failure modes
Similar to the single failure mode experiment, we compare the proposed
MTSSRP with the same benchmark methods, namely TSSRP (Zhang and Mei, 2020)
and TRAS (Liu et al., 2015), as well as the three simple alternatives
(Random, Oracle, and MRandom) described in Section 5.1.3, and evaluate the
same two metrics: detection delay and failure isolation accuracy. We compute
the $ARL_{1}$ for different change magnitudes ($\delta=0.5$, $0.8$) in Table
2, and compare the $ARL_{1}$ and isolation accuracy across magnitudes
$\delta$ from $0.1$ to $0.8$ in Fig. 8.
We then show the sampling point distributions before and after the change
for both scenarios. To better visualize the sampling distribution, we
aggregate 1000 generated samples of IC and OC data in Fig. 9(a) and
Fig. 9(b) for the non-overlapping failure modes, and in Fig. 9(c)
and Fig. 9(d) for the overlapping failure modes. It is clear that before the
change, the sampling distribution is close to uniform. After the change, most
of the points gather around the three true failure modes.
Finally, we show the heatmap of the top failure modes identified by the
MTSSRP method at $\delta=4$ in Fig. 10. It shows the most likely failure
modes (columns) at different times (rows) from our proposed algorithm. The
actual change point of the data is at $t=100$. Column $0$ shows that we
identify a failure pattern at around time $100$, which is consistent with the
change time. Prior to time $100$, the failure mode patterns are actually
quite random. Interestingly, we find that after the algorithm finds the first
failure pattern, it continues to search for other underlying failure
patterns. At around time 120, we further detect the other two failure
patterns in both the overlapping and non-overlapping cases.
Another interesting behavior is that the other failure modes (except the top
3) are quite random in the non-overlapping case but less random in the
overlapping case. The reason is that in the non-overlapping case, given that
the potential failure modes do not overlap with the true failure modes,
there is no particular order among the potential failure modes. However, in
the overlapping case, besides the 3 true failure modes, some potential
failure modes overlap with the true failure modes and will therefore be
selected. In conclusion, this behavior shows that the proposed algorithm is
able to detect the true failure modes, as indicated by Theorem 5.
(a) IC Sampling Point Distribution for Non-overlapping Failure Modes
(b) OC Sampling Point Distribution for Non-overlapping Failure Modes
(c) IC Sampling Point Distribution for Overlapping Failure Modes
(d) OC Sampling Point Distribution for Overlapping Failure Modes
Figure 9: Examples of Simulated Data and Sampled Point Distributions for
Both the Overlapping and Non-overlapping Failure Modes. It is clear that the
OC sampling point distributions match the true failure mode patterns.
(a) Non-overlapping case
(b) Overlapping case
Figure 10: Failure Mode Selection history of MTSSRP. Here, X-axis refers to
the ranking of the failure mode, and Y-axis refers to the time. All the
changes happen at time 100.
## 6 Case Study
In this section, we will evaluate the proposed MTSSRP algorithm on laser
powder bed fusion process monitoring and compare its performance with
state-of-the-art benchmark methods.
Figure 11: Hot-spots Detection in LPBF
### 6.1 Hot-spots Detection in Laser Powder Bed Fusion Process
Here, we apply the proposed algorithm to hot-spot detection in the Laser
Powder Bed Fusion (LPBF) process. A $300$ fps video sequence was acquired
during the realization of one layer of the part using the setup shown in
Fig. 11, which consists of a thermal camera mounted outside the LPBF chamber
to monitor hot-spot events. The observed image is of size $121\times 71$
pixels. Previous studies showed that the occurrence of local over-heating
conditions may yield geometrical distortions (Yan et al., 2020; Colosimo and
Grasso, 2018). The hot-spots caused by the formation of solidified balls
cause local heat accumulation that grows from one layer to the next.
Therefore, the overall goal of this study is to detect such hot-spots
quickly. For more details about the setup of this experiment and some
preliminary works related to this dataset, please refer to (Grasso et al.,
2017; Colosimo and Grasso, 2018; Yan et al., 2020). The dataset is also
publicly available at http://doi.org/10.6084/m9.figshare.7092863.
In this example, it is not easy to obtain failure mode data beforehand, and
therefore we rely on domain knowledge to define the failure mode
distributions. First, we know that the hot-spots must lie on the scanning
path. Second, we know that the hot-spots must be locally clustered.
Therefore, we define the failure modes as the individual B-spline basis
functions that overlap the printing regions. In this dataset, there are four
different events starting at frames 77, 94, 150, and 162. From Table 3, the
proposed algorithm detects the changes at times 79, 95, 152, and 162 using
only 200 of the $8591$ sensing variables. In comparison, TSSRP detects all
four events but with a much larger delay, while TRAS detects only Event 4 and
fails to detect the first three events. The original image frames, the
sampling patterns, and the selected failure modes at the detected times for
these four events are shown in Fig. 12, from which we can observe that the
algorithm quickly concentrates the sampled points on the true hot-spot
location in the upper left corner.
Figure 12: Original Thermal Images, Sampling Patterns and Detected Failure Modes
Table 3: Detection Time in the LPBF Process
| | Time of first signal
---|---|---
_Event Times_ | Event 1 | Event 2 | Event 3 | Event 4
_Actual Change Time_ | 77 | 94 | 150 | 162
_Competing Methods_ | MTSSRP | 79 | 95 | 152 | 162
| TSSRP | 80 | 99 | 156 | 164
| TRAS | - | - | - | 165
### 6.2 Tonnage Signal Monitoring
We will evaluate the proposed methodology on monitoring the tonnage signals
collected in a multi-operation forging process, where four strain gauge
sensors on the four columns of the forging machine measure the tonnage
force exerted on the press uprights, as shown in Fig. 13. This yields a
tonnage profile in each cycle of operation.
Figure 13: Tonnage Signal Monitoring
The data contain 305 in-control profiles collected under the normal
production condition, and each abnormal production condition has 68 out-of-
control profiles, for three different failure modes. Ten samples from each
failure mode are shown in Fig. 14. The four-channel tonnage profiles result
in 4804 dimensions in total.
In our experiment, since we do not have prior knowledge about the failure
modes, we utilize 20% of the samples to train the failure mode
distributions, assuming that the data under each failure mode follow a
Gaussian distribution with a diagonal covariance matrix approximation. We
select only 500 out of the 4804 data streams for adaptive sampling. Here, we
conduct a simulation study where the normal samples are randomly drawn from
the $305$ in-control profiles. For the out-of-control profiles, we consider
three scenarios; in each scenario, the out-of-control profiles are randomly
sampled with replacement from the 68 out-of-control profiles of one failure
mode. For all the methods, we set the threshold at the $95\%$ quantile of the
monitoring statistics on the IC samples and compare the out-of-control ARL
(denoted as $ARL_{1}$) over $500$ replications for all three scenarios, using
the proposed MTSSRP and the benchmark methods TSSRP and TRAS.
Figure 14: Tonnage Data
The results are shown in Table 4, from which we can conclude that failure
mode 3 is fairly easy to detect: all methods achieve an $ARL_{1}$ around
$1$. Failure mode 1 is more similar to the normal dataset; the proposed
MTSSRP achieves an $ARL_{1}$ of $1.96$, less than half of the $ARL_{1}$ of
the other methods, TSSRP and TRAS. Finally, the most challenging case is
failure mode 2, where the difference between failure mode 2 and the normal
data is almost negligible, as seen from Fig. 14. The proposed MTSSRP achieves
an $ARL_{1}$ around $7.46$, much smaller than the $ARL_{1}$ of TSSRP and
TRAS. In Fig. 15, the monitoring statistics for the three failure production
conditions are shown, and our method is able to identify the true failure
mode correctly.
(a) Change of Failure mode 1
(b) Change of failure mode 2
(c) Change of failure mode 3
Figure 15: Monitoring Statistics for Each Failure Mode. The monitoring statistics of the true failure mode increase significantly after the change time $t=305$.
Table 4: Out-of-control Average Run Length for Tonnage Signals
| Out-of-control Average Run Length
---|---
_Failure Modes_ | Mode 1 | Mode 2 | Mode 3
_Competing Methods_ | MTSSRP | 1.96(0.05) | 7.46(0.26) | 1.02(0.01)
| TSSRP | 4.2(0.15) | 28.35(0.34) | 1.03(0.02)
| TRAS | 6.0(0.6) | 45.85(0.44) | 1.05(0.01)
### 6.3 COVID-19 Case Monitoring for Hotspot Detection
To better understand the COVID-19 status, different types of testing
resources are typically distributed to different regions. The Centers for
Disease Control and Prevention (CDC) has classified testing for COVID-19 into
the following two categories: 1) diagnostic testing, which is intended to
identify current infection in individuals and is performed when a person has
signs or symptoms consistent with COVID-19, or is asymptomatic but has
recently known or suspected exposure to COVID-19; 2) screening tests, which
are recommended for unvaccinated people to identify those who are
asymptomatic and do not have known, suspected, or reported exposure to
COVID-19. Screening helps to identify unknown cases so that measures can be
taken to prevent further transmission (CDC, 2022).
Overall, screening tests are very useful as randomly distributed tests in some
underdeveloped areas to identify unknown cases so that measures can be taken
to prevent further transmission. However, in screening tests, the decision
maker may have limited sampling resources. Therefore, adaptive sampling
techniques can be used to decide which region to sample at each time epoch
based on the testing results collected in previous iterations.
In this case study, we use the real COVID-19 test report data from the Johns
Hopkins University Center for Systems Science and Engineering (JHU CSSE) (Dong
et al. (2020)). The dataset is available at
https://github.com/CSSEGISandData/COVID-19. More specifically, we use the
confirmed COVID-19 cases from all 39 counties in Washington state. The source
of daily positive cases in Washington State is the Department of Health
(https://www.doh.wa.gov/Emergencies/COVID19). The time-series data is updated
daily around 23:59 (UTC). We use the data from Jan 23, 2020, to Sep 13, 2020,
a total of 235 days, as an illustration. On each day, the confirmed cases are
recorded in all counties in the United States. Overall, in the case study, we
assume that at each time epoch, the state government focuses on screening
tests in Yakima County and Okanogan County out of the $39$ counties.
In this example, it is important to detect the outbreak in each individual
county. Therefore, we set the failure mode as the mean shift of the infection
rate of each individual county. The outbreak times for Yakima and Okanogan are
118 and 169, as shown in Table 5. The proposed method is able to detect the
outbreak rapidly. The TSSRP method can also detect the outbreak with some
delay, while the TRAS method cannot detect the outbreak. We further present
the sampling pattern in Fig. 17 and Fig. 18. As shown in Fig. 17, the
algorithm starts to focus on certain counties around time 140. Fig. 18
presents the aggregated sampling frequency for each county. It can be seen
that the algorithm samples all the counties evenly during the in-control
phase, as shown in Fig. 18(a). During the out-of-control phase, as shown in
Fig. 18(b), it starts to focus on the true hotspot counties such as Yakima and
Okanogan, which helps us identify the outbreak quickly in those counties.
(a) Monitoring Statistics for Hotspot Detection
(b) Monitoring Statistics for different counties
Figure 16: Monitoring Statistics for COVID-19 Case
Figure 17: Sampling Pattern
(a) Sampling frequency during the in-control phase
(b) Sampling frequency during the out-of-control phase
Figure 18: Sampling Pattern for COVID-19 Case

Table 5: Detection Time for COVID-19 Hotspot Detection (time of first signal)

| | Yakima County | Okanogan County |
|---|---|---|
| Infection Rate $>$ 0.01 | 118 | 169 |
| MTSSRP | 138 | 170 |
| TSSRP | 173 | 183 |
| TRAS | - | - |
## 7 Conclusion
Online change detection of high-dimensional data under multiple failure modes
is an important problem in practice. In this paper, we propose to borrow
concepts from Bayesian change point detection and MAB to adaptively sample
useful local components, given the distributions of the multiple failure
modes. Our proposed algorithm can balance between exploration of all possible
system failure modes and exploitation of the most probable system failure
mode. Furthermore, we also studied the properties of the proposed methods and
showed that the proposed algorithm can isolate the correct failure mode. Our
simulation and case study show that the proposed algorithm, by considering the
failure mode information, can significantly reduce the detection delay.
## Appendix A Proof of Proposition 1
###### Proof.
It is worth noting that, under the assumption that the distributions are
independent, $\tilde{S}_{t}$ can be derived as
$\displaystyle\tilde{S}_{t}$
$\displaystyle=\sum_{k=1}^{K_{s}}\tilde{r}_{(k),t}$
$\displaystyle=\sum_{k=1}^{K_{s}}\left(\log(\exp(r_{(k),t-1})+1)+\sum_{j\in
C_{t}}\log\frac{f_{j,(k)}(\tilde{x}^{k}_{j,t})}{f_{j,0}(\tilde{x}^{k}_{j,t})}\right)$
$\displaystyle=\sum_{k=1}^{K_{s}}\log(\exp(r_{(k),t-1})+1)+\sum_{j\in
C_{t}}\sum_{k=1}^{K_{s}}\log\frac{f_{j,(k)}(\tilde{x}^{k}_{j,t})}{f_{j,0}(\tilde{x}^{k}_{j,t})}$
$\displaystyle=C_{0}+\sum_{j\in C_{t}}s_{j}.$
Here,
$s_{j}=\sum_{k=1}^{K_{s}}\log\frac{f_{j,(k)}(\tilde{x}^{k}_{j,t})}{f_{j,0}(\tilde{x}^{k}_{j,t})}$
and $C_{0}=\sum_{k=1}^{K_{s}}\log(\exp(r_{(k),t-1})+1)$ is a constant. To
maximize $\tilde{S}_{t}$, we can always select the variables with the $q$
largest values of $s_{j}$ after ranking $s_{(1)}\geq s_{(2)}\geq\cdots\geq
s_{(q)}\geq s_{(q+1)}\geq\cdots\geq s_{(p)}$.
∎
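The greedy selection in Proposition 1 can be sketched numerically: summing the per-variable log-likelihood ratios over the $K_{s}$ candidate failure modes gives the score $s_{j}$, and maximizing $\tilde{S}_{t}$ then reduces to picking the $q$ largest scores. The array shapes below are hypothetical and only for illustration.

```python
import numpy as np

def select_top_q(log_lrs, q):
    """Greedy selection from Proposition 1: s_j sums the log-likelihood
    ratios over the K_s most likely failure modes; maximizing
    S_t = C_0 + sum_{j in C_t} s_j reduces to taking the q largest s_j."""
    s = log_lrs.sum(axis=0)          # s_j, one score per variable j
    return np.argsort(s)[::-1][:q]   # indices of the q largest scores

# Toy example with hypothetical dimensions: K_s = 2 modes, p = 5 variables.
rng = np.random.default_rng(0)
log_lrs = rng.normal(size=(2, 5))
C_t = select_top_q(log_lrs, q=2)
```

Because the objective is additive over the sampled variables, this ranking step is exact rather than a heuristic.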
## Appendix B Proof of Proposition 2
###### Proof.
By direct computation,
$\log\frac{f_{k,j}(X_{j,t})}{f_{0,j}(X_{j,t})}=\frac{1}{\sigma_{k,j}^{2}}(\tilde{x}^{k}_{j,t}-\mu_{k,j})^{2}-\frac{1}{\sigma_{k,0}^{2}}(\tilde{x}^{k}_{j,t}-\mu_{k,j})^{2}.$
Therefore, we know that
$s_{j}=\sum_{k=1}^{K_{s}}\log\frac{f_{j,(k)}(\tilde{x}^{k}_{j,t})}{f_{j,0}(\tilde{x}^{k}_{j,t})}=\sum_{k=1}^{K_{s}}\left(\frac{1}{\sigma_{k,j}^{2}}(\tilde{x}^{k}_{j,t}-\mu_{k,j})^{2}-\frac{1}{\sigma_{k,0}^{2}}(\tilde{x}^{k}_{j,t}-\mu_{k,j})^{2}\right).$
∎
## Appendix C Proof of Proposition 3
###### Proof.
If sensor $k$ is not selected,
$\frac{{f}_{k}(y_{t})}{{f}_{0}(y_{t})}=1$, and therefore
$R_{k,t}=R_{k,t-1}+1$. This implies that if sensor $k$ is not observed, the
corresponding $R_{k,t}$ will increase by $1$. Furthermore, if sensor $k$ is
selected, we update $R_{k,t}$ as
$R_{k,t}=\frac{g(\tilde{x}_{k,t})}{f(\tilde{x}_{k,t})}(R_{k,t-1}+1)$. ∎
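The two-branch recursion in Proposition 3 is simple enough to state as code; a minimal sketch follows, where the likelihood ratio `lr` plays the role of $g(\tilde{x}_{k,t})/f(\tilde{x}_{k,t})$:

```python
def update_R(R_prev, lr=None):
    """Recursive update from Proposition 3. When sensor k is not
    selected, the likelihood ratio equals 1 and R_{k,t} = R_{k,t-1} + 1;
    when it is selected, R_{k,t} = (g(x)/f(x)) * (R_{k,t-1} + 1)."""
    if lr is None:                  # sensor not observed at this epoch
        return R_prev + 1.0
    return lr * (R_prev + 1.0)

R = 0.0
R = update_R(R)           # unobserved: 0 -> 1
R = update_R(R, lr=2.0)   # observed with ratio 2: 2 * (1 + 1) = 4
```

Note that passing `lr=1.0` is equivalent to the unobserved branch, consistent with the proposition.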
## Appendix D Proof of Theorem 4
###### Proof.
Let $T_{1}=\inf\\{t\geq 1:R_{(1),t}\geq A_{1}\\}$, $T_{2}=\inf\\{t\geq
1:r_{(1),t}\geq A_{2}\\}$. Since the event $\\{R_{(1),t}\geq A_{1}\\}$ is
equivalent to the event $\\{r_{(1),t}\geq\log A_{1}\\}$, we only need to
consider the behavior for $T_{1}$.
For an upper bound of $T_{1}$, we define $T^{\prime}=\inf\\{t:R_{k,t}\geq
A_{1}\\}$ for some $k$. The theorem from Pollak (1987) is valid in
multivariate cases and thus
$\mathbb{E}T_{1}\leq\mathbb{E}T^{\prime}=O(A_{1})$. We show that
$\sum_{k=1}^{K}R_{k,t}-Kt$ is a martingale under the null hypothesis.
$\displaystyle\mathbb{E}[\sum_{k=1}^{K}R_{k,t+1}-K(t+1)|\mathcal{F}_{t}]$
$\displaystyle=\mathbb{E}\left[\sum_{k=1}^{K}(R_{k,t}+1)\frac{\tilde{f}_{C_{t+1},k}({\bf
X}_{C_{t+1},t+1})}{\tilde{f}_{C_{t+1},0}({\bf
X}_{C_{t+1},t+1})}-K(t+1)|\mathcal{F}_{t}\right]$
$\displaystyle=\sum_{k=1}^{K}(R_{k,t}+1)\mathbb{E}\left[\frac{\tilde{f}_{C_{t+1},k}({\bf
X}_{C_{t+1},t+1})}{\tilde{f}_{C_{t+1},0}({\bf
X}_{C_{t+1},t+1})}|\mathcal{F}_{t}\right]-K(t+1)$
For any $k$, we have that
$\displaystyle\mathbb{E}\left[\frac{\tilde{f}_{C_{t+1},k}({\bf
X}_{C_{t+1},t+1})}{\tilde{f}_{C_{t+1},0}({\bf
X}_{C_{t+1},t+1})}\Big|\mathcal{F}_{t}\right]$
$\displaystyle=\int\frac{\tilde{f}_{C_{t+1},k}({\bf
X}_{C_{t+1},t+1})}{\tilde{f}_{C_{t+1},0}({\bf
X}_{C_{t+1},t+1})}f_{0}(x_{1},\ldots,x_{n})\,{\rm d}x_{1}\cdots{\rm d}x_{n}$
$\displaystyle=\int\tilde{f}_{C_{t+1},k}({\bf
X}_{C_{t+1},t+1})f_{C_{t+1},0}(x_{\text{unobserved}}\,|\,{\bf
X}_{C_{t+1},t+1})\,{\rm d}x_{1}\cdots{\rm d}x_{n}$
$\displaystyle=\int\tilde{f}_{C_{t+1},k}({\bf
X}_{C_{t+1},t+1})\left(\int f_{C_{t+1},0}(x_{\text{unobserved}}\,|\,{\bf
X}_{C_{t+1},t+1})\,{\rm d}x_{\text{unobserved}}\right){\rm d}x_{\text{observed}}$
$\displaystyle=1$
Thus,
$\mathbb{E}[\sum_{k=1}^{K}R_{k,t+1}-K(t+1)|\mathcal{F}_{t}]=\sum_{k=1}^{K}R_{k,t}-Kt$
and $\sum_{k=1}^{K}R_{k,t}-Kt$ is a martingale. Since we have that
$\liminf_{t\to\infty}\int_{T_{1}>t}|\sum_{k=1}^{K}R_{k,t}-Kt|{\rm
d}\mathbb{P}_{\infty}=0$, $\sum_{k=1}^{K}R_{k,t}-Kt$ is uniformly integrable.
Thus
$\mathbb{E}_{\infty}(\sum_{k=1}^{K}R_{k,T_{1}}-KT_{1})=\mathbb{E}_{\infty}(\sum_{k=1}^{K}R_{k,0})=0$.
Thus, $K\mathbb{E}_{\infty}T_{1}=\mathbb{E}_{\infty}\sum_{k=1}^{K}R_{k,T_{1}}\geq
A_{1}$.
This completes the part for $T_{1}$. When $A_{2}=\log A_{1}$, the events
$\\{R_{(1),t}\geq A_{1}\\}$ and $\\{r_{(1),t}\geq\log A_{1}\\}$ are
equivalent. Therefore, the behavior of $T_{2}$ follows the same rule.
∎
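The martingale step above can be illustrated numerically: under the null, each likelihood-ratio increment has conditional mean 1, so $\mathbb{E}[\sum_{k}R_{k,t}-Kt]=0$. The sketch below uses a hypothetical $N(\mu,1)$ alternative against an $N(0,1)$ null (not the paper's model), for which the likelihood ratio $\exp(\mu x-\mu^{2}/2)$ has mean 1 when $x\sim N(0,1)$, and, as a simplification, updates every sensor with the same observed ratio.

```python
import numpy as np

# Monte Carlo check of the martingale property in the proof of Theorem 4.
rng = np.random.default_rng(1)
K, T, mu, reps = 3, 10, 0.2, 50000
R = np.zeros((reps, K))
x = rng.normal(size=(reps, T))             # data generated under the null
for t in range(T):
    lr = np.exp(mu * x[:, t] - 0.5 * mu**2)   # E[lr] = 1 under the null
    R = lr[:, None] * (R + 1.0)               # same ratio for all sensors
gap = R.sum(axis=1).mean() - K * T            # should be close to zero
```

With these parameters the empirical gap is small relative to $KT=30$, consistent with the zero-drift property used in the proof.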
## Appendix E Proof of Theorem 5
###### Proof.
From (Pollak, 1987), we have
$\log R_{k,\nu+r}=\sum_{t=\nu+1}^{\nu+r}Z_{k,t}+\log
R_{k,\nu}+\sum_{i=0}^{r-1}\log[1+\frac{1}{R_{k,\nu+i}}]$
The increment of $R_{k,t}$ will be almost $Z_{k,t}$ where
$Z_{k,t}=\log(\frac{\tilde{f}_{C_{t},k}({\bf
X}_{C_{t},t})}{\tilde{f}_{C_{t},0}({\bf X}_{C_{t},t})})$. For any $l\neq k$,
$\lim_{t\to\infty}\mathbb{P}(R_{l,t}\leq
R_{k,t})=\lim_{T\to\infty}\mathbb{P}(\sum_{t=\nu+1}^{T}Z_{l,t}-Z_{k,t}\leq
b),$
where $b$ is some constant.
$\displaystyle\mathbb{P}(\sum_{t=\nu+1}^{T}Z_{l,t}-Z_{k,t}\leq b)$
$\displaystyle=\mathbb{P}\left(\sum_{t=\nu+1}^{T}\log(\frac{\tilde{f}_{C_{t},l}({\bf
X}_{C_{t},t})}{\tilde{f}_{C_{t},k}({\bf X}_{C_{t},t})})\leq b\right)$
${\bf X}_{C_{t}}$ is a sample of $q$ variables from $p$ dimensional data
$\mathbf{{\bf X}}_{t}$. There are $\binom{p}{q}$ kinds of different samples.
We divide the variables $\log(\frac{\tilde{f}_{C_{t},l}({\bf
X}_{C_{t},t})}{\tilde{f}_{C_{t},k}({\bf X}_{C_{t},t})})$ by the categories of
different sample results, and denote them by
$Y_{j,t}=\log(\frac{\tilde{f}_{C_{t},l}({\bf
X}_{C_{t},t})}{\tilde{f}_{C_{t},k}({\bf X}_{C_{t},t})})$,
$j=1,2,\ldots,\binom{p}{q}$, where different $j$ represent different
selections of $C_{t}$. We notice that for different $j$ or $t$, the samples
are taken at different times; thus, they are independent. Since we consider
the situation where $T$ goes to infinity, there exists at least one selection
that is observed infinitely many times. We first consider the case where only
one selection is observed infinitely many times. Without loss of generality,
let this selection be the first one. Let the numbers of times that the other
selections of data are observed be $c_{2},\ldots,c_{\binom{p}{q}}$,
respectively. When $j=1$, $t=1,2,\ldots$; when $j=2,\ldots,\binom{p}{q}$,
$t=1,2,\ldots,c_{j}$. It suffices to show
$\mathbb{P}\left(\sum_{t=1}^{\infty}Y_{1,t}+\sum_{t=1}^{c_{2}}Y_{2,t}+\cdots+\sum_{t=1}^{c_{\binom{p}{q}}}Y_{\binom{p}{q},t}\leq
b\right)=1$ for any constant $b$. Since the $Y_{1,t}$ are i.i.d. with negative
expectation, we have that for any constant $b$,
$\mathbb{P}\left(\sum_{t=1}^{\infty}Y_{1,t}\leq b\right)=1$
For any $\varepsilon>0$, let $d_{j}$ be the constant s.t.
$\mathbb{P}\left(\sum_{t=1}^{c_{j}}Y_{j,t}<d_{j}\right)>1-\frac{\varepsilon}{\binom{p}{q}-1}$.
$\displaystyle\mathbb{P}\left(\sum_{t=1}^{\infty}Y_{1,t}+\sum_{t=1}^{c_{2}}Y_{2,t}+\cdots+\sum_{t=1}^{c_{\binom{p}{q}}}Y_{\binom{p}{q},t}\leq
b\right)$
$\displaystyle>\mathbb{P}(\sum_{t=1}^{c_{2}}Y_{2,t}<d_{2})\mathbb{P}(\sum_{t=1}^{c_{3}}Y_{3,t}<d_{3})\cdots\mathbb{P}(\sum_{t=1}^{c_{\binom{p}{q}}}Y_{{\binom{p}{q}},t}<d_{\binom{p}{q}})\mathbb{P}\left(\sum_{t=1}^{\infty}Y_{1,t}\leq
b-d_{2}-\ldots-d_{\binom{p}{q}}\right)$
$\displaystyle>(1-\frac{\varepsilon}{\binom{p}{q}-1})^{\binom{p}{q}-1}\mathbb{P}\left(\sum_{t=1}^{\infty}Y_{1,t}\leq
b-d_{2}-\ldots-d_{\binom{p}{q}}\right)>1-\varepsilon$
Notice that this is true for any $\varepsilon>0$. We have
$\mathbb{P}\left(\sum_{t=1}^{\infty}Y_{1,t}+\sum_{t=1}^{c_{2}}Y_{2,t}+\cdots+\sum_{t=1}^{c_{\binom{p}{q}}}Y_{{\binom{p}{q}},t}\leq
b\right)=1.$
Using a similar method, we can also prove the case where more than one
selection is observed infinitely many times. Thus,
$\lim_{t\to\infty}\mathbb{P}(R_{l,t}\leq R_{k,t})=1$. Since $l$ is chosen
arbitrarily, we complete the proof. ∎
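The isolation property of Theorem 5 ($R_{k,t}$ for the true mode eventually dominating $R_{l,t}$ for any $l\neq k$) can be illustrated with a small simulation. The setup below is hypothetical and not the paper's model: two failure modes shift a univariate mean to $+\mu$ and $-\mu$ against an $N(0,1)$ null, and the data are generated from mode 1.

```python
import numpy as np

# Sketch of failure-mode isolation: the statistic of the true mode wins.
rng = np.random.default_rng(2)
T, mu = 200, 0.8
x = rng.normal(loc=mu, size=T)              # observations from mode 1
R = np.zeros(2)
for xt in x:
    lr1 = np.exp(mu * xt - 0.5 * mu**2)     # LR of mode 1 vs the null
    lr2 = np.exp(-mu * xt - 0.5 * mu**2)    # LR of mode 2 vs the null
    R = np.array([lr1, lr2]) * (R + 1.0)
isolated = R[0] > R[1]                      # true mode dominates
```

Because the mode-1 log-likelihood ratio has positive drift under mode-1 data while the mode-2 ratio has negative drift, $R_{1,t}$ grows exponentially and $R_{2,t}$ stays bounded, matching the limit in the theorem.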
## Appendix F Proof of Corollary 6
###### Proof.
We extend the notation from Appendix E. For any $k\in\mathcal{K}$ and
$l\notin\mathcal{K}$, consider
$\displaystyle\mathbb{P}(\sum_{t=\nu+1}^{T}Z_{l,t}-Z_{k,t}\leq b)$
$\displaystyle=\mathbb{P}\left(\sum_{t=\nu+1}^{T}\log(\frac{\tilde{f}_{C_{t},l}({\bf
X}_{C_{t},t})}{\tilde{f}_{C_{t},k}({\bf X}_{C_{t},t})})\leq b\right)$
Since $k\in\mathcal{K}$, there exists some $t_{0}$ s.t. ${\bf X}_{t_{0}}\sim
f_{k}$. Since the supports of $f_{k}$ and $f_{l}$ are different,
$\log(\frac{\tilde{f}_{C_{t_{0}},l}({\bf
X}_{C_{t_{0}},t_{0}})}{\tilde{f}_{C_{t_{0}},k}({\bf
X}_{C_{t_{0}},t_{0}})})=-\infty$. For other times where ${\bf X}_{t}\nsim
f_{k}$, we have an undefined term of the form $\log\frac{0}{0}$. Since this
term can be interpreted as the log-likelihood difference at time $t$, and at
this time neither $l$ nor $k$ is selected, we may set $\log\frac{0}{0}=0$
under this assumption. Therefore,
$\mathbb{P}(\sum_{t=\nu+1}^{T}Z_{l,t}-Z_{k,t}\leq b)=1.$
∎
## References
* CDC (2022) CDC. Overview of testing for sars-cov-2 (covid-19), 2022. URL https://www.cdc.gov/coronavirus/2019-ncov/hcp/testing-overview.html.
* Chan et al. (2017) Hock Peng Chan et al. Optimal sequential detection in multi-stream data. _The Annals of Statistics_ , 45(6):2736–2763, 2017.
* Chang and Yadama (2010) Shing I. Chang and Srikanth Yadama. Statistical process control for monitoring non-linear profiles using wavelet filtering and b-spline approximation. _International Journal of Production Research_ , 48(4):1049–1068, 2010.
* Chen et al. (2020) Jie Chen, Wenyi Zhang, and H Vincent Poor. A bayesian approach to sequential change detection and isolation problems. _IEEE Transactions on Information Theory_ , 67(3):1796–1803, 2020.
* Cho and Fryzlewicz (2015) Haeran Cho and Piotr Fryzlewicz. Multiple-change-point detection for high dimensional time series via sparsified binary segmentation. _Journal of the Royal Statistical Society: Series B: Statistical Methodology_ , pages 475–507, 2015.
* Colosimo and Grasso (2018) Bianca M Colosimo and Marco Grasso. Spatially weighted pca for monitoring video image data with application to additive manufacturing. _Journal of Quality Technology_ , 50(4):391–417, 2018.
* Dong et al. (2020) Ensheng Dong, Hongru Du, and Lauren Gardner. An interactive web-based dashboard to track covid-19 in real time. _The Lancet infectious diseases_ , 20(5):533–534, 2020.
* Gómez et al. (2022) Ana María Estrada Gómez, Dan Li, and Kamran Paynabar. An adaptive sampling strategy for online monitoring and diagnosis of high-dimensional streaming data. _Technometrics_ , 64(2):253–269, 2022.
* Gopalan et al. (2021) Aditya Gopalan, Braghadeesh Lakshminarayanan, and Venkatesh Saligrama. Bandit quickest changepoint detection. _Advances in Neural Information Processing Systems_ , 34:29064–29073, 2021.
* Grasso et al. (2014) M Grasso, BM Colosimo, and M Pacella. Profile monitoring via sensor fusion: the use of pca methods for multi-channel data. _International Journal of Production Research_ , 52(20):6110–6135, 2014.
* Grasso et al. (2017) Marco Grasso, Vittorio Laguzza, Quirico Semeraro, and Bianca Maria Colosimo. In-process monitoring of selective laser melting: spatial detection of defects via image data analysis. _Journal of Manufacturing Science and Engineering_ , 139(5), 2017.
  * Li and Jin (2010) Jing Li and Jionghua Jin. Optimal sensor allocation by integrating causal models and set-covering algorithms. _IIE Transactions_ , 42(8):564–576, May 2010.
  * Liu and Shi (2013) Kaibo Liu and Jianjun Shi. Objective-oriented optimal sensor allocation strategy for process monitoring and diagnosis by multivariate analysis in a Bayesian network. _IIE Transactions_ , 45(6):630–643, June 2013.
* Liu et al. (2013) Kaibo Liu, Xi Zhang, and Jianjun Shi. Adaptive sensor allocation strategy for process monitoring and diagnosis in a bayesian network. _IEEE Transactions on Automation Science and Engineering_ , 11(2):452–462, 2013.
* Liu et al. (2015) Kaibo Liu, Yajun Mei, and Jianjun Shi. An Adaptive Sampling Strategy for Online High-Dimensional Process Monitoring. _Technometrics_ , 57(3):305–319, July 2015.
* Lorden (1971) Gary Lorden. Procedures for reacting to a change in distribution. _The Annals of Mathematical Statistics_ , pages 1897–1908, 1971.
* Malladi and Speyer (1999) Durga P Malladi and Jason L Speyer. A generalized shiryayev sequential probability ratio test for change detection and isolation. _IEEE Transactions on Automatic Control_ , 44(8):1522–1534, 1999.
* Mei (2010) Yajun Mei. Efficient scalable schemes for monitoring a large number of data streams. _Biometrika_ , 97(2):419–433, 2010.
* Mei (2011) Yajun Mei. Quickest detection in censoring sensor networks. In _2011 IEEE International Symposium on Information Theory Proceedings_ , pages 2148–2152. IEEE, 2011.
* Nikiforov et al. (1993) I Nikiforov, V Varavva, and V Kireichikov. Application of statistical fault detection algorithms to navigation systems monitoring. _Automatica_ , 29(5):1275–1290, 1993.
* Nikiforov (1995) Igor V Nikiforov. A generalized change detection problem. _IEEE Transactions on Information theory_ , 41(1):171–187, 1995.
* Paynabar and Jin (2011) Kamran Paynabar and Jionghua Jin. Characterization of non-linear profiles variations using mixed-effect models and wavelets. _IIE transactions_ , 43(4):275–290, 2011.
* Paynabar et al. (2016) Kamran Paynabar, Changliang Zou, and Peihua Qiu. A change-point approach for phase-i analysis in multivariate profile monitoring and diagnosis. _Technometrics_ , 58(2):191–204, 2016.
* Pollak (1985) Moshe Pollak. Optimal detection of a change in distribution. _The Annals of Statistics_ , pages 206–227, 1985.
* Pollak (1987) Moshe Pollak. Average run lengths of an optimal method of detecting a change in distribution. _The Annals of Statistics_ , pages 749–779, 1987.
* Qiu et al. (2010) Peihua Qiu, Changliang Zou, and Zhaojun Wang. Nonparametric profile monitoring by mixed effects modeling. _Technometrics_ , 52(3), 2010.
  * Ren et al. (2020) Haojie Ren, Changliang Zou, Nan Chen, and Runze Li. Large-scale datastreams surveillance via pattern-oriented-sampling. _Journal of the American Statistical Association_ , pages 1–15, 2020.
* Shiryaev (1963) Albert N Shiryaev. On optimum methods in quickest detection problems. _Theory of Probability & Its Applications_, 8(1):22–46, 1963.
* Wang et al. (2018) Andi Wang, Xiaochen Xian, Fugee Tsung, and Kaibo Liu. A spatial-adaptive sampling procedure for online monitoring of big data streams. _Journal of Quality Technology_ , 50(4):329–343, October 2018.
* Wang and Mei (2015) Yuan Wang and Yajun Mei. Large-scale multi-stream quickest change detection via shrinkage post-change estimation. _IEEE Transactions on Information Theory_ , 61(12):6926–6938, 2015.
* Willsky (1976) Alan S Willsky. A survey of design methods for failure detection in dynamic systems. _Automatica_ , 12(6):601–611, 1976.
* Xian et al. (2018) Xiaochen Xian, Andi Wang, and Kaibo Liu. A Nonparametric Adaptive Sampling Strategy for Online Monitoring of Big Data Streams. _Technometrics_ , 60(1):14–25, January 2018.
* Xie and Siegmund (2013) Yao Xie and David Siegmund. Sequential multi-sensor change-point detection. In _2013 Information Theory and Applications Workshop (ITA)_ , pages 1–20. IEEE, 2013.
* Yan et al. (2015) H. Yan, K. Paynabar, and J. Shi. Image-based process monitoring using low-rank tensor decomposition. _IEEE Transactions on Automation Science and Engineering_ , 12(1):216–227, 2015.
* Yan et al. (2017) Hao Yan, Kamran Paynabar, and Jianjun Shi. Anomaly detection in images with smooth background via smooth-sparse decomposition. _Technometrics_ , 59(1):102–114, 2017.
* Yan et al. (2018) Hao Yan, Kamran Paynabar, and Jianjun Shi. Real-Time Monitoring of High-Dimensional Functional Data Streams via Spatio-Temporal Smooth Sparse Decomposition. _Technometrics_ , 60(2):181–197, April 2018.
* Yan et al. (2020) Hao Yan, Marco Grasso, Kamran Paynabar, and Bianca Maria Colosimo. Real-time detection of clustered events in video-imaging data with applications to additive manufacturing. _arXiv preprint arXiv:2004.10977_ , 2020.
* Yue et al. (2017) Xiaowei Yue, Hao Yan, Jin Gyu Park, Zhiyong Liang, and Jianjun Shi. A wavelet-based penalized mixed-effects decomposition for multichannel profile detection of in-line raman spectroscopy. _IEEE Transactions on Automation Science and Engineering_ , 15(3):1258–1271, 2017.
* Zhang and Mei (2020) Wanrong Zhang and Yajun Mei. Bandit change-point detection for real-time monitoring high-dimensional data under sampling control. _arXiv preprint arXiv:2009.11891_ , 2020.
# Stress index strategy enhanced with financial news sentiment analysis for
the equity markets
Baptiste Lefort (CentraleSupélec, Paris-Saclay University; Ai For Alpha), Eric
Benhamou (Paris-Dauphine PSL; Ai For Alpha), Jean-Jacques Ohana (Ai For
Alpha), David Saltiel (Ai For Alpha), Beatrice Guez (Ai For Alpha), Thomas
Jacquot (Ai For Alpha)
This paper introduces a new risk-on risk-off strategy for the stock market,
which combines a financial stress indicator with a sentiment analysis done by
ChatGPT reading and interpreting Bloomberg daily market summaries. Forecasts
of market stress derived from volatility and credit spreads are enhanced when
combined with the financial news sentiment derived from GPT-4. As a result,
the strategy shows improved performance, evidenced by a higher Sharpe ratio
and reduced maximum drawdowns. The improved performance is consistent across
the NASDAQ, the S&P 500, and the six major equity markets, indicating that the
method generalises across equity markets.
Keywords: Market stress, Volatility, News sentiment, Investment strategy
JEL Codes: G14, D81, C55, G17
## 1 Introduction
Recent advancements in Natural Language Processing (NLP) with Large Language
Models (LLMs) have made the sentiment analysis of financial news by machines a
practical achievement and no longer just a dream. More precisely, Large
Language Models (LLMs) have marked a major step forward in processing large
contexts, exhibiting human-level performance on various professional and
academic benchmarks, although they still have limitations such as reliability
issues and limited context windows (OpenAI, 2023). Their ability to process
more context has shown particularly interesting applications in many business
areas (George and George, 2023). Hence exploring the potential to extract
either weak or strong signals from financial news to enhance a risk-on risk-
off investment strategy becomes highly pertinent.
Indeed, extracting sentiment from financial news is not new (Tetlock, 2007;
Schumaker and Chen, 2009), and finance has a longstanding tradition of
exploiting textual data (Kearney and Liu, 2014). More precisely, sentiment
analysis is a task where we aim at identifying the underlying sentiment in a
text. Many different types of models and methods can be employed for
identifying sentiment (Baccianella et al., 2010; Tang et al., 2023). Models
like BERT and its finance-focused version FinBERT have demonstrated their
applicability in the financial industry (Devlin et al., 2018; Araci, 2019).
These models have significantly increased the precision of sentiment analysis
(Kant et al., 2018), giving a new opening to using news for financial
decision-making. Likewise, SentiWordNet 3.0 provides an enhanced lexical
resource for sentiment analysis, showing about a 20% accuracy improvement over
its earlier version (Baccianella et al., 2010). Recent advancements like
FinEntity focus on entity-level sentiment classification in financial texts,
demonstrating its utility in investment and regulatory applications (Tang et
al., 2023). Deep learning applications have also shown clear improvement and
proved their ability to assign consistently reliable sentiment to a complex
text (Zhang et al., 2018).
However, when scrutinized across various equity markets and extended out-of-
sample backtesting, these efforts proved to be less than compelling (Xing et
al., 2018). Interpreting financial news has long been a complex task (Loughran
and McDonald, 2011), as it involves intricate concepts open to various
interpretations and contains time-sensitive information with an expiration
date of relevance. Moreover, the sentiment conveyed in financial news is often
influenced by human perception, and there are numerous underlying implications
(Ikizlerli et al., 2019; Elliott et al., 2018).
We hypothesize two things that can help solve the seminal problem of
interpreting news in order to forecast equity market regimes. First, the
emergence of Large Language Models (LLMs) could bring fresh perspectives to
this longstanding issue, potentially enhancing the interpretation of
ambiguities in financial news. Second, news sentiments should be combined with
other signals in order to have a robust risk-on risk-off strategy across
major equity markets.
Hence, we present here a new approach to tackle risk-on, risk-off strategy for
the stock market. This approach integrates a financial stress indicator with
sentiment analysis performed by ChatGPT, which reads and interprets daily
market summaries from Bloomberg. Additionally, we present a strategy selection
method, alternating between the hybrid strategy that combines news signals and
stress index and another based only on the conventional stress index
indicator.
The rest of the paper is organized as follows. Section 2 provides a review of
existing literature on the application of ChatGPT in formulating financial
strategies. In section 3, we explain how our methodology differs from existing
studies and outline our principal contributions. Section 4 presents the
different types of data used: namely the news, the stress index, and the
investment universe over which this new strategy is tested. Section 5 details
the comprehensive set of experiments conducted, highlighting the execution of
18 distinct tests. These tests are designed to ascertain the most effective
strategy from a range of alternatives, namely whether the news signal and the
stress index signal are effective or not. In total, six varied strategies are
evaluated across three disparate financial markets to assess whether the
results remain similar across various equity markets. In Section 6, we
provide an in-depth discussion of the results, with a particular focus on the
dynamic strategy that alternates between the combination of the stress index
and the news sentiment and the stress index alone. The findings reveal that
this combined method consistently outperforms other strategies in all three
equity markets evaluated. The superiority of this approach is highlighted by
its higher Sharpe and Calmar ratios (the latter being the ratio of return to
maximum drawdown), when compared with the other strategies that rely on the
stress index alone, news alone, the volatility index (VIX) based strategy, and
a static combination of stress with news. Section 7 concludes and suggests
future works.
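The two performance metrics named here, the Sharpe ratio and the Calmar ratio (return divided by maximum drawdown), can be computed from a daily return series as in the sketch below. The annualization convention (252 trading days, compounded equity curve) is an assumption for illustration, not taken from the paper.

```python
import numpy as np

def sharpe_and_calmar(daily_returns, periods_per_year=252):
    """Annualized Sharpe ratio and Calmar ratio (annualized return
    divided by maximum drawdown) for a series of simple daily returns."""
    r = np.asarray(daily_returns, dtype=float)
    sharpe = np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1)
    equity = np.cumprod(1.0 + r)                        # compounded wealth
    drawdown = 1.0 - equity / np.maximum.accumulate(equity)
    ann_return = equity[-1] ** (periods_per_year / len(r)) - 1.0
    return sharpe, ann_return / drawdown.max()

sharpe, calmar = sharpe_and_calmar([0.01, -0.02, 0.01, 0.03])
```

The drawdown line tracks the running peak of the equity curve, so `drawdown.max()` is the maximum peak-to-trough loss over the sample.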
## 2 Related Works
In the field of finance and economics, numerous recent academic studies have
used ChatGPT, including (Hansen and Kazinnik, 2023), (Cowen and Tabarrok,
2023), (Korinek, 2023; Lopez-Lira and Tang, 2023), and (Noy and Zhang, 2023).
The capabilities of ChatGPT in various economic and professional contexts are
explored in several studies. (Cowen and Tabarrok, 2023) and (Korinek, 2023)
discuss the model’s role in economics education and research. (Hansen and
Kazinnik, 2023) focuses on how ChatGPT interprets Fedspeak, the complex
language used by the Federal Reserve. In a similar vein, (Lopez-Lira and Tang,
2023) delves into how to effectively prompt the model for stock return
forecasts. Additionally, (Noy and Zhang, 2023) highlights the model’s
enhancement of productivity in professional writing, while (Yang and Menczer,
2023) examines its ability to identify credible news sources. Besides the
increasing use of LLMs for sentiment analysis, using news sentiment in
investment strategies is not new. Many studies involve more classical methods,
such as lexical approaches, for developing news sentiment indicators
(Januário et al., 2022).
The development and utilization of sentiment indicators for trading strategies
have also seen significant progress. The study A Deep Learning Approach with
Extensive Sentiment Analysis for Quantitative Investment by (Li et al., 2023)
introduces an approach that incorporates news content for sentiment analysis
in quantitative investment, achieving interesting annualized returns. In the
realm of cryptocurrency, (Yeoh et al., 2023) analyze the impact of news
sentiment on the prices of cryptocurrencies using the FinBERT model for
financial sentiment analysis. Their findings indicate an influence of
sentiment on price movements. (Nakagawa et al., 2022) also contribute to this
field with their investment strategy that exploits the lead-lag effect in
company relationships using sentiment analysis. Furthermore, (Yang, 2023)
demonstrates the profitability of investment strategies based on analyst
opinions extracted from text analysis, suggesting improved daily returns.
Market stress is also a widely studied indicator to combine with a news
sentiment indicator, and such combinations show promising results (Smales,
2015, 2016; Lin et al., 2023). These studies collectively highlight the
growing importance and effectiveness of sentiment analysis as a tool for
developing investment strategies.
## 3 Key contributions
In this paper, we present a strategy that combines a stress index with news
sentiment analysis in a way that has never been addressed. We built a crisis-
resistant strategy that yields a constant performance over a long period. Our
approach involves several key steps. Firstly, we generate a signal by
analyzing news sentiment, utilizing Bloomberg’s extensive daily market
summaries. This signal provides a broad, market-wide perspective, focusing on
impactful financial news. Our findings reveal that using this signal alone is
not effective. However, integrating it with the stress index significantly
enhances our strategy, making it more consistent and robust. This approach not
only improves performance but also demonstrates the broader applicability of
the stress index strategy on a macroeconomic level.
The principal contributions of the paper are the following:
1. 1.
News alone is not enough: Although we use premium and heavily scrutinized
data, namely Bloomberg's daily market reports, to create a news sentiment-
based macro indicator via a multiple-prompt approach using ChatGPT that has
demonstrated a statistical correlation with equity market returns, we
experienced poor results when using the news sentiment indicator alone.
However, when combined with a stress index built from other data types like
volatility and credit spreads, we obtain improved performance over our
benchmark of a naive long-only strategy as well as over other strategies based
purely on the stress index or the news.
2. 2.
It is crucial to have an alternative method when the news-based signal becomes
inefficient: We present a method to switch between the stress index alone and
the combination of the stress index and the news sentiment, based on the
persistence of the overall performance of one strategy over the other. This
helps us mitigate periods where the news sentiment indicator becomes
inefficient.
3. 3.
The method works on various equity markets: Through empirical validation, we
confirm that the integration of the stress index and news signal strategy,
coupled with the transition to a pure risk index in the event of prolonged
underperformance, consistently improves outcomes in various equity markets.
This observation underscores the strategy’s ability to demonstrate persistent
promising results, indicating its capacity to generalize across major equity
markets.
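The switching rule described in contribution 2 can be sketched as follows: follow whichever strategy (hybrid stress+news or stress-only) has the better trailing performance. The trailing-sum criterion and the lookback window are hypothetical simplifications of the paper's persistence-based rule.

```python
import numpy as np

def pick_strategy(returns_hybrid, returns_stress, window=60):
    """Sketch of the regime-switching rule: compare trailing cumulative
    returns of the hybrid (stress + news) and stress-only strategies,
    and follow the one that has performed better recently."""
    h = float(np.sum(returns_hybrid[-window:]))
    s = float(np.sum(returns_stress[-window:]))
    return "hybrid" if h >= s else "stress_only"

choice = pick_strategy([0.001] * 100, [0.0] * 100)
```

The tie-breaking in favor of the hybrid strategy is an arbitrary design choice for this sketch.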
## 4 Data
To create an investment strategy on various equity markets leveraging
financial news and a stress index indicator, we need to collect several types of data: reliable financial news, a stress index, and an investment universe. Let us introduce each of these in turn.
### 4.1 News Data
We use professional news from Bloomberg. Bloomberg provides daily market
wraps, that are daily summaries of the most important news. They are available
since 2010, with growing frequency after 2011. These texts contain both numerical and textual information and gather the most informative content and news of the day. Daily market wraps are produced multiple times during the
day and specialized for the three regions: the US, Europe and Asia. In order
to have a consistent data set, we rely on the latest available news before the
European market opening, that is the end of day Asian market wraps. Hence, we
collect 3627 daily market wraps. Each wrap is about 7000 characters long, which with line breaks amounts to roughly 140 lines, or about 5 pages. Below is an excerpt from a Bloomberg Daily Market Wrap. The contents cover any news that may impact financial markets, including equities, currencies, cryptocurrencies, bonds and commodities.
\- Investor sentiment remains quite negative in China despite a rally in
global stocks during the past two months of 2023, Nomura Group analysts
including Chetan Seth in Singapore wrote in a client note. In China, there have
been more signs of support for the economy, but equity investors still do not
appear convinced.
\- Bond Traders Seize on 4% Yields, Confident Fed Rate Cuts Coming
\- The Australian dollar fell 0.2% to $0.6701 …
#### 4.1.1 News signal
We follow the same methodology as in (Lefort et al., 2024), and use a two-step
approach to break the sentiment analysis into simpler subtasks for which
ChatGPT is better suited, namely text summarization and keyword identification.
Here are the steps we follow:
1.
First, we collect the daily market summaries from Bloomberg to stay updated on
financial trends.
2.
Next, we ask ChatGPT to generate 15 notable headlines every day, ensuring we
capture the most significant events.
3.
Once we have our headlines, we take a moment to assess their tone, deciding
whether they convey positive, negative, or indecisive sentiment. Hence, we
calculate the daily count of news evaluated in each category, resulting in a
numerical score for each day. The scale is simple: -1 for a negative sentiment, 0 for indecisive, and +1 for days that are positively charged.
4.
With these scores in hand, we don’t just look at one day in isolation; we add
them up across a 10-day period to get a broader view of the market’s mood. By
averaging these daily scores over 10 days, we’re able to smooth out the ups
and downs and pinpoint the general trend in news sentiment.
5.
We then apply a statistical method called z-scoring to this 10-day sentiment
average, which helps us understand how strongly the news is leaning compared
to the norm.
6.
The final transformation involves calculating the mean of the z-scored signal
over the preceding 10 days.
7.
Finally, we obtain a simple binary news indicator. If the z-score shows a positive trend, we set the indicator to 1, a thumbs-up for market sentiment. If the trend dips negative, we set it to 0, signaling a more
cautious outlook. This binary indicator serves as a quick reference to gauge
the overall sentiment in the financial news, allowing us to make more informed
decisions based on the prevailing mood in the market.
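The steps above can be sketched in a few lines of Python. The function name, the signed daily-count input convention, and the use of full-history z-scoring are our assumptions, since the paper does not fix these implementation details:

```python
import numpy as np

def news_signal(daily_scores, window=10):
    """Turn daily net sentiment scores (sum of +1/0/-1 headline labels)
    into the binary news indicator described in steps 4 to 7."""
    s = np.asarray(daily_scores, dtype=float)
    kernel = np.ones(window) / window
    # Step 4: rolling 10-day average of the daily scores.
    avg = np.convolve(s, kernel, mode="valid")
    # Step 5: z-score the rolling average (here over its full history).
    z = (avg - avg.mean()) / avg.std(ddof=0)
    # Step 6: mean of the z-scored signal over the preceding 10 days.
    smooth = np.convolve(z, kernel, mode="valid")
    # Step 7: binary indicator -- 1 if the trend is positive, else 0.
    return (smooth > 0).astype(int)
```

A series that turns from persistently negative to persistently positive sentiment produces an indicator that flips from 0 to 1 after the smoothing windows catch up.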
### 4.2 Stress Index
We rely on the stress index as presented in (Guilleminot et al., 2014) to
capture contagion effects in the market and anticipate future crises. The indicator is documented to be predictive since it aims at detecting periods of
high volatility. We prefer this financial stress indicator to the VIX as it
has numerous advantages in detecting market contagion:
*
Comprehensive Risk Analysis: The stress index combines a broad range of market
prices of risk, including CDS contracts, providing a more detailed view of
market stress compared to the VIX, which focuses mainly on S&P 500 index
options.
*
Enhanced Normalization Techniques: It employs z-score based normalization,
facilitating more effective comparability across various markets, thereby
allowing for a nuanced understanding of stress levels in different market
segments.
*
Detection of Market Contagion: It is specially designed to capture
interconnectedness and contagion in financial markets, reflecting the complex
dynamics often missed by other indicators like the VIX.
*
Detection of Crises: Incorporating CDS contracts of the main banks, insurers and governments enables it to capture signals related to crises.
#### 4.2.1 Stress Signal computation
The steps to compute the stress index are the following:
1.
We start by gathering market data for a variety of assets, which includes
indicators like the VIX index, TED spread, and CDS index, along with the
volatility data for major equity, bond, and commodity markets.
2.
Next, we calculate the price of risk for each asset by standardizing the data
with z-scoring, which adjusts for the variability and scales the risk price in
terms of its deviation from the mean.
3.
Then, we aggregate these z-scores by their respective categories such as
equities, emerging bonds, government bonds, financial stocks, foreign
exchange, commodities, interest rates, and corporate credit to form category-
specific stress indicators.
4.
We take the average of these category-specific stress indicators to compile a
comprehensive stress index that reflects the overall market conditions.
5.
Finally, we scale the resulting average to fall between 0 and 1 by applying
the cumulative distribution function (norm.cdf) to the computed stress index,
which normalizes our final index value.
Because the stress index is by construction a number between 0 and 1, thanks to the cumulative distribution function of the normal distribution, it directly provides a stress index signal.
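Assuming each category's inputs are arranged as a (T, n_assets) array, the five steps can be sketched as follows; the data layout and the per-asset z-scoring over the full history are our assumptions, and the normal CDF (scipy's `norm.cdf` in the paper) is written here with the error function to stay dependency-light:

```python
import numpy as np
from math import erf, sqrt

def stress_index(category_data):
    """category_data maps a category name (equities, credit, ...) to a
    (T, n_assets) array of risk prices (vols, spreads, CDS levels)."""
    cat_indicators = []
    for data in category_data.values():
        x = np.asarray(data, dtype=float)
        # Step 2: z-score each asset's risk price over its history.
        z = (x - x.mean(axis=0)) / x.std(axis=0, ddof=0)
        # Step 3: category-specific stress indicator.
        cat_indicators.append(z.mean(axis=1))
    # Step 4: average the category indicators.
    avg = np.mean(cat_indicators, axis=0)
    # Step 5: map to [0, 1] via the normal CDF.
    return np.array([0.5 * (1.0 + erf(v / sqrt(2.0))) for v in avg])
```

Values near 1 indicate stress levels far above the historical norm, values near 0.5 indicate normal conditions.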
### 4.3 Markets Tested
We test the strategy on six different equity markets, including the NASDAQ, S&P 500, Nikkei, Euro Stoxx, and Emerging Markets, from January 2005 to 2024.
We compare the strategy on three cases:
1.
The S&P500 alone.
2.
The NASDAQ alone.
3.
An equally weighted basket of the aforementioned six equity markets.
## 5 Experiments
### 5.1 Transaction Costs
To make the simulations realistic, we incorporate linear transaction costs, denoted by $b$ and set to 2 basis points for all our strategies ($b=0.0002$). In mathematical notation, assuming we have $n$ assets whose weights are denoted by $(w_{t}^{i})_{i=1,...,n}$ and whose daily returns are denoted by $(r_{t}^{i})_{i=1,...,n}$, we compute our strategy value recursively, with the convention that the strategy starts at 1 at time 0 ($S_{0}=1$), as follows:
$S_{t}=S_{t-1}\times\Big(1+\sum_{i=1}^{n}w_{t-1}^{i}\,r_{t}^{i}-b\sum_{i=1}^{n}|w_{t}^{i}-w_{t-1}^{i}|\Big)$ (1)
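This recursion translates directly into code. A minimal sketch, where the (T, n) array layout and the flat starting position before inception are our assumptions:

```python
import numpy as np

def strategy_value(weights, returns, b=0.0002):
    """Recursion of equation (1): weights[t] and returns[t] are the
    day-t portfolio weights and asset returns; day t's P&L uses the
    weights set on day t-1, and rebalancing pays b per unit of
    traded notional."""
    T, n = returns.shape
    S = np.ones(T + 1)            # S_0 = 1 by convention
    w_prev = np.zeros(n)          # assume the strategy starts flat
    for t in range(T):
        gross = float(w_prev @ returns[t])
        cost = b * float(np.abs(weights[t] - w_prev).sum())
        S[t + 1] = S[t] * (1.0 + gross - cost)
        w_prev = weights[t]
    return S
```

With zero costs and a constant full allocation to one asset, the recursion reduces to ordinary return compounding, which is a quick sanity check.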
### 5.2 The six different strategies
In order to evaluate whether the news signal is predictive or not, we compare
six strategies and run in total eighteen experiments as we compare these
strategies over 3 different markets, namely the US market by testing these
strategies on the NASDAQ, the S&P 500, and an equally weighted strategy
combining the 6 major equity markets presented in section 4.3.
Here is the list of the 6 different strategies:
1.
Long only: also referred to as the Benchmark in our comparison figures, this
strategy takes a long only position on the tested markets. We term this the
Benchmark because our aim is to assess whether the other quantitative
strategies exhibit superior performance compared to this one.
2.
VIX: Weights are directly derived from the VIX signal, where we assume that periods of stress are identified by a VIX level above its 80th percentile, which is around 26 percent.
3.
SI for Stress Index: Weights are directly related to the stress index signal.
4.
News: Weights are directly related to the news signal.
5.
SI News: Weights are obtained by the straightforward multiplication of the two signals, namely the stress index times the news signal.
6.
Dynamic SI News: this strategy dynamically selects between the strategy combining the stress index with news and the strategy using the stress index alone. More details can be found in Section 5.3.
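As an illustration, the first five weight rules could be written as below. The signal conventions are our assumptions (the paper does not spell them out): a stress index signal in [0, 1] already oriented so that 1 means fully invested, a binary news signal, and the VIX quoted in percentage points.

```python
import numpy as np

def strategy_weights(si_signal, news_signal, vix, vix_q80=26.0):
    """Sketch of the weight rules for the first five strategies.
    Conventions are our assumptions: si_signal in [0, 1] oriented so
    that 1 means fully invested, news_signal binary in {0, 1}, and
    vix in percentage points."""
    si = np.asarray(si_signal, float)
    news = np.asarray(news_signal, float)
    vix = np.asarray(vix, float)
    return {
        "Long only": np.ones_like(si),
        # risk-off when the VIX exceeds its 80th percentile (~26%)
        "VIX": (vix < vix_q80).astype(float),
        "SI": si,
        "News": news,
        # straightforward product of the two signals
        "SI News": si * news,
    }
```

The sixth strategy (Dynamic SI News) then switches between the "SI" and "SI News" weight series as described in Section 5.3.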
In total we have 18 experiments as we compare these 6 strategies over the
three cases listed below:
1.
Test on the NASDAQ.
2.
Test on the S&P 500.
3.
Test on the 6 major world equity markets.
To compare these strategies with a naive long-only strategy in a volatility-consistent way, we also compute a long-only strategy that shares the same volatility as the best performing strategy: we calculate ex post the volatility of our best strategies and rescale the benchmark to the same volatility before comparing track records.
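This rescaling amounts to levering the benchmark by the ratio of ex-post volatilities; a minimal sketch (matching daily volatilities, since annualization factors cancel in the ratio):

```python
import numpy as np

def rescale_to_vol(benchmark_returns, strategy_returns):
    """Lever the long-only benchmark ex post so that its volatility
    matches the strategy's, for a volatility-consistent comparison."""
    bench = np.asarray(benchmark_returns, float)
    strat = np.asarray(strategy_returns, float)
    # annualization factors cancel in the ratio of daily vols
    leverage = strat.std(ddof=0) / bench.std(ddof=0)
    return leverage * bench
```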
### 5.3 Dynamic Strategy Selection
The primary goal involves dynamically shifting between two investment strategies: one reliant only on the stress index (SI) and another that incorporates news signals alongside the stress index (SI+News). This strategic
alternation aims to navigate efficiently through periods where the inclusion
of news signals doesn’t substantially improve the performance of the strategy
compared to using the stress index alone. Empirical observations reveal a
consistent pattern: the SI+News strategy either significantly outperforms or
underperforms the SI-only strategy. Consequently, a strategic selector
mechanism has been developed. This mechanism computes, at the end of each month, the Sharpe ratio (defined as the ratio of the excess return over the annualized volatility, as in (Sharpe, 1966, 1975)) over the trailing 250 observations, equivalent to an annual rolling period, and selects the strategy that demonstrated superior performance for the forthcoming month.
Notably, there are intervals where the SI-only strategy surpasses the combined
SI+News strategy. The news signals predominantly act as a performance enhancer
during specific scenarios, particularly in times of crisis.
To elucidate this strategic selection, refer to Table 1. For each month, the
Sharpe ratios of the two strategies are calculated. The strategy for the
subsequent month is chosen based on the higher Sharpe ratio. For example, in December 2022, the SI+News strategy, with a Sharpe ratio of 0.9, surpasses the SI-only strategy, which has a Sharpe ratio of 0.4; consequently, the SI+News strategy is selected for January, and it remains dominant through February and March. Conversely, in March and April, the SI-only strategy becomes dominant, leading to its selection in April and May.
Table 1: Illustration of the Method for Switching Between the SI and SI+News Strategies Month | Sharpe SI | Sharpe SI+News | Selected Strategy
---|---|---|---
Dec 2022 | 0.4 | 0.9 | …
Jan 2023 | -0.1 | 0.7 | SI+News
Feb 2023 | 0.2 | 0.5 | SI+News
Mar 2023 | 0.5 | 0.1 | SI+News
Apr 2023 | 1.2 | 0.6 | SI
May 2023 | … | … | SI
… | … | … | …
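The selector described above can be sketched as follows, under our assumptions of a zero risk-free rate and a square-root-of-252 annualization for the Sharpe ratio:

```python
import numpy as np

def rolling_sharpe(returns, window=250):
    """Annualized rolling Sharpe ratio of a daily return series
    (risk-free rate assumed zero for this sketch)."""
    r = np.asarray(returns, float)
    out = np.full(len(r), np.nan)
    for t in range(window - 1, len(r)):
        chunk = r[t - window + 1 : t + 1]
        out[t] = np.sqrt(252) * chunk.mean() / chunk.std(ddof=0)
    return out

def select_strategies(ret_si, ret_si_news, month_end_idx, window=250):
    """At each month-end index, select for the next month the strategy
    with the higher trailing Sharpe ratio."""
    sharpe_si = rolling_sharpe(ret_si, window)
    sharpe_mix = rolling_sharpe(ret_si_news, window)
    return ["SI+News" if sharpe_mix[t] > sharpe_si[t] else "SI"
            for t in month_end_idx]
```

When one return series dominates the other with the same volatility, the selector sticks to it at every month end, which is the persistence the method relies on.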
To validate the predictive efficacy of each strategy, it is necessary to
demonstrate that the strategy selection exhibits a degree of persistence. The
primary indicator of predictive capability is the frequency of selection for
each strategy. Over the period from 2011 to 2024, the SI-only strategy was
selected 71% of the time, while the SI+News strategy was chosen 29% of the
time. A random selection mechanism would yield a 50% selection rate for each
strategy. Additionally, a graphical representation of the strategy selection
demonstrates that the SI+News strategy is chosen primarily during specific
periods, suggesting that the news signal is particularly beneficial during
certain times.
## 6 The results
We test our six different strategies from January 2005 to January 2024, hence
testing them over 19 years. Our comparative analysis, detailed in Tables 2, 3, and 4, highlights a consistent pattern across the S&P 500, NASDAQ, and major equity markets: the uniform outperformance of the Dynamic SI+News strategy in terms of Sharpe and Calmar ratios, underscoring its superior risk-adjusted returns and drawdown management. In the three tables, to keep column names short, we write Sharpe for the Sharpe ratio, Calmar for the Calmar ratio, Vol for the annualized volatility and Max DD for the maximum drawdown. We recall that the Calmar ratio is defined as the annualized return over the maximum drawdown (see (Young, 1991)).
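The reported statistics can be reproduced from a daily return series along these lines; a zero risk-free rate and 252 trading days per year are our assumptions:

```python
import numpy as np

def performance_stats(daily_returns, periods=252):
    """Sharpe, Calmar, annualized vol and maximum drawdown of a daily
    return series (risk-free rate taken as zero in this sketch)."""
    r = np.asarray(daily_returns, float)
    equity = np.cumprod(1.0 + r)
    ann_return = equity[-1] ** (periods / len(r)) - 1.0
    ann_vol = np.sqrt(periods) * r.std(ddof=0)
    running_peak = np.maximum.accumulate(equity)
    max_dd = float(np.max(1.0 - equity / running_peak))
    return {
        "Sharpe": ann_return / ann_vol,
        "Calmar": ann_return / max_dd if max_dd > 0 else float("inf"),
        "Vol": ann_vol,
        "Max DD": max_dd,
    }
```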
Table 2: Comparative Analysis of Investment Strategies for the S&P 500 Strategy | Sharpe | Calmar | Vol | Max DD | Turnover
---|---|---|---|---|---
Dynamic SI+News | 0.81 | 0.56 | 7.5% | 11% | 8.6
SI | 0.70 | 0.51 | 8.0% | 11% | 7.7
SI+News | 0.53 | 0.30 | 6.2% | 11% | 13.4
Long Only | 0.45 | 0.13 | 7.5% | 27% | n.a.
VIX | 0.42 | 0.17 | 8.9% | 22% | 18.5
News | 0.42 | 0.15 | 10.6% | 29% | 17.9
In more detail, for the S&P 500 (Table 2), the Dynamic SI+News strategy achieves not only the highest Sharpe ratio (0.81) but also the highest Calmar ratio (0.56). The improvement is substantial, as the benchmark long-only strategy only achieves a Sharpe ratio of 0.45. Similarly, in the
NASDAQ market (Table 3), the same strategy shows a remarkable Sharpe ratio of
0.89, further reinforcing its efficacy in a tech-heavy index. The analysis for
the major equity markets (Table 4) also corroborates the superiority of the
Dynamic SI+News strategy, particularly in managing market downturns, as
evidenced by its lower maximum drawdown (Max DD) compared to other strategies
like News and VIX.
Focusing further on the Sharpe ratio, we also notice that the strategy based on the stress index alone always comes second, indicating that the signals emitted by the stress index are quite robust and more effective than those derived from the VIX index. Regarding turnover, which measures the frequency of trading within the portfolio, we observe notable differences across strategies. For instance, in the S&P 500 (Table 2), the ’SI+News’ strategy exhibits the highest turnover rate at 13.4, indicating a more active trading approach. This contrasts with strategies like ’SI’ and ’Dynamic SI+News’, which have lower turnover rates, suggesting a more passive stance. The buy-and-hold strategy has by definition no turnover, as we hold the position forever. It is interesting to note that the stress index based strategy turns over its position noticeably less than the VIX based strategy.
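The paper does not spell out its turnover definition; one plausible reading consistent with the reported magnitudes is the annualized average of daily absolute weight changes:

```python
import numpy as np

def annual_turnover(weights, periods=252):
    """Annualized traded notional: average daily sum of absolute
    weight changes, scaled to a year. The exact definition used in
    the tables is our assumption."""
    w = np.asarray(weights, float)
    daily_traded = np.abs(np.diff(w, axis=0)).sum(axis=1)
    return periods * float(daily_traded.mean())
```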
Table 3: Comparative Analysis of Investment Strategies for the NASDAQ Strategy | Sharpe | Calmar | Vol | Max DD | Turnover
---|---|---|---|---|---
Dynamic SI+News | 0.89 | 0.62 | 9.2% | 13% | 8.6
SI | 0.84 | 0.53 | 9.8% | 16% | 7.6
Long Only | 0.62 | 0.20 | 9.2% | 28% | n.a.
SI+News | 0.61 | 0.38 | 7.5% | 12% | 13.4
VIX | 0.58 | 0.25 | 11.3% | 27% | 18.5
News | 0.43 | 0.15 | 12.3% | 34% | 17.9
Table 4: Comparative Analysis of Investment Strategies for the major equity markets Strategy | Sharpe | Calmar | Vol | Max DD | Turnover
---|---|---|---|---|---
Dynamic SI+News | 0.85 | 0.44 | 6.8% | 13% | 8.5
SI | 0.71 | 0.39 | 7.2% | 13% | 7.7
SI+News | 0.61 | 0.20 | 5.5% | 17% | 13.4
Long Only | 0.52 | 0.13 | 6.8% | 28% | n.a.
VIX | 0.42 | 0.15 | 8.4% | 23% | 18.4
News | 0.32 | 0.09 | 8.1% | 29% | 16.2
Similarly, in the NASDAQ (Table 3) and major equity markets (Table 4), the
’VIX’ and ’News’ strategies are characterized by higher turnover rates (18.5
and 17.9 for NASDAQ, 18.4 and 16.2 for major equity markets, respectively),
again reflecting a more active management style. This frequent trading could
be indicative of an attempt to capitalize on short-term market movements.
In terms of the Calmar ratio, which assesses the risk-adjusted performance of
an investment by comparing the average annual compounded rate of return and
the maximum drawdown, the ’Dynamic SI+News’ strategy consistently outperforms
others across all markets. This is evident from its higher Calmar ratios (0.56
for S&P 500, 0.62 for NASDAQ, and 0.44 for major equity markets). The superior
Calmar ratios suggest that this strategy not only provides higher returns but
also does so with less risk, as indicated by lower maximum drawdowns.
In contrast, strategies like ’Long Only’ and ’News’ show lower Calmar ratios,
implying that they bear a higher risk (as seen in their higher maximum
drawdowns). This could be a critical factor for investors who are risk-averse
and prefer strategies that limit potential losses.
Figures 1, 2, and 3 provide a visual representation of these findings and
compare the best performing strategy namely the ’Dynamic SI+News’ strategy
versus the passive long-only benchmark. The subplots in Figures 1, 2, and 3 reveal how the Dynamic SI+News strategy dynamically adjusts its allocation, contributing to its robust performance.
Figure 1: Comparison of the dynamic strategy, which switches between the combined Stress Index and News strategy and the Stress Index alone based on the persistence of outperformance of one over the other, versus the naive long-only strategy (rescaled to the same volatility) for the S&P 500 universe. The first plot compares the two strategies over time, while the subplot provides the corresponding overall allocation. Figure 2: Same comparison as Figure 1, for the NASDAQ 100 universe. Figure 3: Same comparison as Figure 1, for the Major Equities Markets.
All in all, when compiling our 18 experiments, our analysis demonstrates the effectiveness of the Dynamic SI+News strategy across different markets, with
its ability to adapt to market conditions and maintain superior performance
metrics standing out as a key takeaway.
Additionally, we provide in the appendix Figures 4, 5, 6, 7, 8, 9, 10, 11, 12,
13, 14, 15 the comparative performance of other investment strategies over
time, alongside their corresponding allocations and over the 3 different
market universes, namely the S&P 500, the NASDAQ, and Major Equities Markets.
For each figure, we give as the top plot the temporal performance comparison
with the benchmark, the long only strategy, while the subplot details the
overall allocation.
Figure 4 illustrates the performance of a news-based strategy against a naive
long-only strategy within the S&P 500 universe. This figure highlights the
temporal evolution of the strategy’s effectiveness in comparison to the
benchmark. In Figure 5, we observe a similar analysis within the NASDAQ
universe. The subplot is particularly insightful for understanding the
allocation differences underlined by the news-based strategy. Figure 6 extends
this comparison to the Major Equities Markets, offering a broader view of the
strategy’s applicability in a global context.
Figures 7, 8, and 9 delve into strategies that amalgamate news with stress
index data. These figures offer an intriguing perspective on the synergistic
effects of combining these two data sources across different market universes.
Figures 10, 11, and 12 focus on strategies purely based on the stress index.
These plots are crucial for understanding the standalone impact of the stress
index on the investment strategy.
Finally, Figures 13, 14, and 15 compare the VIX-based strategy with the naive
long-only approach. The temporal comparison and allocation subplots underscore
the unique characteristics and performance of the VIX-based strategy in
different market settings.
### 6.1 Intuition of the results
The integration of news signals with the stress index in investment
strategies, as demonstrated for the S&P 500, NASDAQ, and major equity markets (Tables 2, 3, and 4), is intuitive and grounded in both empirical results and theoretical underpinnings from the financial literature.
#### 6.1.1 Added Values of News
Intuitively, incorporating machine-read news into an investment strategy based on the stress index can improve the strategy’s effectiveness for several reasons:
*
Signal Reactivity: News can detect new trends or shifts in market sentiment more quickly. This is effectively illustrated by the higher turnover of the pure news strategy.
*
Objectivity and Consistency: Unlike human analysts, a machine reading news is less prone to cognitive biases or emotional responses, so the news signal should be consistent and objective.
*
Quality of the data: Since we use Bloomberg market wraps, which are supposed to capture all major news, the news signal should pick up even weak signals in the news flow.
*
Hidden pattern detection: The news signal can capture more complex relationships between news items than a human can.
#### 6.1.2 Empirical Evidence
As already presented, the Dynamic SI+News strategy, which combines both news
signals and stress index data, shows a remarkable performance across various
markets. In the S&P 500, this strategy not only achieves the highest Sharpe
ratio (0.81) but also a significant Calmar ratio (0.56), surpassing the
benchmark long only strategy’s Sharpe ratio of 0.45.
#### 6.1.3 Theoretical Insights
Combining news signals with the stress index aligns with the principles of
Behavioral Finance, particularly as detailed in (Barber and Odean, 2008).
Their research demonstrates how news and media attention significantly
influence investor behavior, often leading to irrational decision-making based
on recent news rather than long-term trends. This indicates a potential edge
in combining news analysis with stress indices, which act as barometers for
market sentiment and stress. By integrating recent news, which potentially triggers overreactions or underreactions, with the steadier, sentiment-focused stress indices, a more balanced and informed investment strategy emerges, one that capitalizes on the quick, news-driven shifts in investor behavior while being anchored by the broader, more stable market insights provided by the stress index.
#### 6.1.4 Role of Stress Index
The stress index alone consistently ranks second in terms of Sharpe ratio
across different markets, underscoring its robustness. It acts as a gauge of
market tension and uncertainty, often signaling impending market corrections
or volatility, making it a crucial component of a holistic investment
strategy.
#### 6.1.5 Turnover Analysis
Analyzing turnover rates sheds light on strategy aggressiveness. The ’SI+News’
strategy exhibits the highest turnover (13.4 in the S&P 500), indicative of an
active trading approach. This is quite logical as the news based signals is
supposed to react very quickly. The same applies to the VIX strategy that
seems to overreact to any high volatility environement. In contrast, ’SI’ and
’Dynamic SI+News’ strategies demonstrate more moderation, suggesting a
balanced approach that leverages the predictive power of stress indices while
avoiding excessive trading.
#### 6.1.6 Impact on Maximum Drawdown and Calmar Ratio
The integration of news signals into the stress index strategy has a
significant impact on the maximum drawdown (Max DD) and, consequently, on the
Calmar ratio. This is evident in the comparative analysis of investment
strategies across various markets (Tables 2, 3, and 4).
##### Max DD Reduction
The Dynamic SI+News strategy consistently demonstrates a lower Max DD compared
to other strategies, especially the News-only strategy. For example, in the
S&P 500, the Max DD for Dynamic SI+News is 11%, markedly lower than the 29%
Max DD for the News strategy. This reduction in Max DD is crucial for risk-
averse investors and indicates enhanced strategy stability during market
downturns.
##### Enhanced Calmar Ratio
The Calmar ratio, which measures the return per unit of downside risk, is
notably higher in the Dynamic SI+News strategy across all markets. For
instance, in the S&P 500, this strategy achieves a Calmar ratio of 0.56,
surpassing the Stress Index (SI) alone at 0.51 and significantly outperforming
the News strategy at 0.15. This trend is consistent across the NASDAQ and
major equity markets.
##### Rationale for Improvement
Incorporating news into the stress index strategy enhances its ability to
adapt to market sentiments and trends rapidly. News provides real-time
insights and immediate market reactions, which, when combined with the stress
index’s broader market sentiment gauge, allows for a more dynamic and
responsive strategy. This combination effectively mitigates risks during
volatile periods, leading to a reduced Max DD and an improved Calmar ratio.
##### Literature Support
This finding aligns with the existing financial literature that emphasizes the
importance of combining various types of market information to achieve a more
comprehensive and robust investment strategy. For instance, (Tetlock, 2007)
highlights how media and news content significantly influence investor
sentiment and stock market dynamics, underscoring the value of incorporating
real-time news into investment strategies. Similarly, Baker et al. (2016) provide insights into how economic policy uncertainty, derived from news media coverage, impacts market conditions and investor behavior, further justifying the inclusion of news data alongside stress index information in strategic decision-making (Baker et al., 2016).
Additionally, Da et al. (2011) through their work emphasize the influence of
media attention on stock prices and trading volumes, suggesting that the
attention garnered by specific news can be pivotal in financial market
movements and should be integrated into comprehensive investment strategies
(Da et al., 2011). These studies collectively support the synergy between
real-time news and stress index data, enhancing the ability to capture and
react to market anomalies and stress conditions, thereby improving the overall
risk-adjusted performance of the strategy.
## 7 Conclusion
This paper introduces a novel risk-on, risk-off strategy for the stock market,
leveraging a financial stress indicator combined with sentiment analysis
performed by ChatGPT on Bloomberg’s daily market summaries. The strategy
enhances market stress forecasts, which are based on volatility and credit
spreads, using financial news sentiment derived from GPT-4. This results in a
significant improvement in performance, evidenced by a higher Sharpe ratio and
reduced maximum drawdowns. The method’s effectiveness is not limited to a
single market; it is consistent across various equity markets, including the
NASDAQ and S&P 500, as well as six other major equity markets, indicating its
broad applicability. There are many potential directions for future research.
Future works could investigate if the strategy can be applied to different
financial markets such as commodities, bonds, and foreign exchange markets, to
test its versatility and robustness. Additionally, it would be worthwhile to
investigate whether additional data sources, such as social media sentiment or
macroeconomic indicators, could further improve the strategy’s performance.
Identifying patterns in economic events where the news signal is most
effective is also a potential avenue for future research.
## 8 Bibliographical References
## References
* Araci [2019] Dogu Araci. Finbert: Financial sentiment analysis with pre-trained language models. _CoRR_ , abs/1908.10063, 2019. URL http://arxiv.org/abs/1908.10063.
* Baccianella et al. [2010] Stefano Baccianella, Andrea Esuli, and Fabrizio Sebastiani. SentiWordNet 3.0: An enhanced lexical resource for sentiment analysis and opinion mining. In Nicoletta Calzolari, Khalid Choukri, Bente Maegaard, Joseph Mariani, Jan Odijk, Stelios Piperidis, Mike Rosner, and Daniel Tapias, editors, _Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10)_ , Valletta, Malta, May 2010. European Language Resources Association (ELRA). URL http://www.lrec-conf.org/proceedings/lrec2010/pdf/769_Paper.pdf.
* Baker et al. [2016] Scott R Baker, Nicholas Bloom, and Steven J Davis. Measuring economic policy uncertainty. _The quarterly journal of economics_ , 131(4):1593–1636, 2016.
* Barber and Odean [2008] Brad M Barber and Terrance Odean. All that glitters: The effect of attention and news on the buying behavior of individual and institutional investors. _The review of financial studies_ , 21(2):785–818, 2008.
* Cowen and Tabarrok [2023] Tyler Cowen and Alexander T. Tabarrok. How to Learn and Teach Economics with Large Language Models, Including GPT. _SSRN Electronic Journal_ , XXX(XXX):0–0, 3 2023. ISSN 1556-5068. doi: 10.2139/SSRN.4391863. URL https://papers.ssrn.com/abstract=4391863.
* Da et al. [2011] Zhi Da, Joseph Engelberg, and Pengjie Gao. In search of attention. _The journal of finance_ , 66(5):1461–1499, 2011.
* Devlin et al. [2018] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. _CoRR_ , abs/1810.04805, 2018. URL http://arxiv.org/abs/1810.04805.
* Elliott et al. [2018] W Brooke Elliott, Stephanie M Grant, and Frank D Hodge. Negative news and investor trust: The role of $ firm and # ceo twitter use. _Journal of Accounting Research_ , 56(5):1483–1519, 2018.
* George and George [2023] A Shaji George and AS Hovan George. A review of ChatGPT AI’s impact on several business sectors. _Partners Universal International Innovation Journal_ , 1(1):9–23, 2023.
## Figures

Figure 4: Comparison of the strategy based only on news versus the naive long only strategy (rescaled at the same volatility) for the S&P 500 universe. The first plot compares the two strategies over time while the subplot provides the corresponding overall allocation.

Figure 5: Comparison of the strategy based only on news versus the naive long only strategy (rescaled at the same volatility) for the NASDAQ 100 universe. The first plot compares the two strategies over time while the subplot provides the corresponding overall allocation.

Figure 6: Comparison of the strategy based only on news versus the naive long only strategy (rescaled at the same volatility) for the Major Equities Markets. The first plot compares the two strategies over time while the subplot provides the corresponding overall allocation.

Figure 7: Comparison of the strategy combining Stress index and News versus the naive long only strategy (rescaled at the same volatility) for the S&P 500 universe. The first plot compares the two strategies over time while the subplot provides the corresponding overall allocation.

Figure 8: Comparison of the strategy combining Stress index and News versus the naive long only strategy (rescaled at the same volatility) for the NASDAQ 100 universe. The first plot compares the two strategies over time while the subplot provides the corresponding overall allocation.

Figure 9: Comparison of the strategy combining Stress index and News versus the naive long only strategy (rescaled at the same volatility) for the Major Equities Markets. The first plot compares the two strategies over time while the subplot provides the corresponding overall allocation.

Figure 10: Comparison of the Stress Index based strategy versus the naive long only strategy (rescaled at the same volatility) for the S&P 500 universe. The first plot compares the two strategies over time while the subplot provides the corresponding overall allocation.

Figure 11: Comparison of the Stress Index based strategy versus the naive long only strategy (rescaled at the same volatility) for the NASDAQ 100 universe. The first plot compares the two strategies over time while the subplot provides the corresponding overall allocation.

Figure 12: Comparison of the Stress Index based strategy versus the naive long only strategy (rescaled at the same volatility) for the Major Equities Markets. The first plot compares the two strategies over time while the subplot provides the corresponding overall allocation.

Figure 13: Comparison of the VIX based strategy versus the naive long only strategy (rescaled at the same volatility) for the S&P 500 universe. The first plot compares the two strategies over time while the subplot provides the corresponding overall allocation.

Figure 14: Comparison of the VIX based strategy versus the naive long only strategy (rescaled at the same volatility) for the NASDAQ 100 universe. The first plot compares the two strategies over time while the subplot provides the corresponding overall allocation.

Figure 15: Comparison of the VIX based strategy versus the naive long only strategy (rescaled at the same volatility) for the Major Equities Markets. The first plot compares the two strategies over time while the subplot provides the corresponding overall allocation.
$\displaystyle=\left(\begin{array}{c}\frac{2\left(2\left(D^{2}-5D+2\right)M_{3}^{2}+4(3D-2)M_{1}^{2}+2(2-3D)DM_{2}^{2}\right)}{(D-1)(3D-2)}\\
-\frac{2\left(2D^{2}-9D+6\right)}{3D^{2}-5D+2}\\
-\frac{2\left(2D^{2}-9D+6\right)}{3D^{2}-5D+2}\\
\frac{2\left(2D^{2}-9D+6\right)}{3D^{2}-5D+2}\end{array}\right)\,,$ (357)

$\displaystyle\left(\begin{array}{c}C^{(1,2)}_{1}\\ C^{(1,2)}_{5}\\ C^{(1,2)}_{6}\\ C^{(1,2)}_{7}\end{array}\right)\Bigg|_{s_{01}s_{0^{\prime}1}^{2}s_{11}^{-1}}$
$\displaystyle=\left(\begin{array}{c}\frac{2\left(2\left(D^{2}-5D+2\right)M_{3}^{2}+2(2-3D)DM_{1}^{2}+4(3D-2)M_{2}^{2}\right)}{(D-1)(3D-2)}\\
-\frac{2\left(2D^{2}-9D+6\right)}{3D^{2}-5D+2}\\
-\frac{2\left(2D^{2}-9D+6\right)}{3D^{2}-5D+2}\\
\frac{2\left(2D^{2}-9D+6\right)}{3D^{2}-5D+2}\end{array}\right)\,,$ (366)

$\displaystyle\left(\begin{array}{c}C^{(0,3)}_{1}\\ C^{(0,3)}_{5}\\ C^{(0,3)}_{6}\\ C^{(0,3)}_{7}\end{array}\right)\Bigg|_{s_{0^{\prime}1}^{3}s_{11}^{-1}}$
$\displaystyle=\left(\begin{array}{c}-\frac{8(D+2)\left((3D-2)M_{2}^{2}+(1-2D)M_{3}^{2}\right)}{(D-1)(3D-2)}\\
-\frac{2\left(5D^{2}+6D-8\right)}{3D^{2}-5D+2}\\
\frac{8D^{2}-60D+40}{3D^{2}-5D+2}\\
\frac{2\left(5D^{2}+6D-8\right)}{(D-1)(3D-2)}\end{array}\right)\,.$ (375)
# Operator Product States on Tensor Powers of $C^{\ast}$-Algebras
Emil Prodan Department of Physics and
Department of Mathematical Sciences
Yeshiva University
New York, NY 10016, USA
<EMAIL_ADDRESS>
###### Abstract.
The program of matrix product states on an infinite tensor product
${\mathcal{A}}^{\otimes{\mathbb{Z}}}$ of $C^{\ast}$-algebras, initiated by
Fannes, Nachtergaele and Werner in their seminal paper Commun. Math. Phys.
144, 443-490 (1992), is re-assessed in a generic context where ${\mathcal{A}}$
is an infinite nuclear $C^{\ast}$-algebra. This covers, for example, two and
higher dimensional spin lattices. The program consists of three major blocks:
reduction, factorization and reconstruction. We use the Archimedeanization
process of Kavruk, Paulsen, Todorov and Tomforde to produce an operator system
structure on the quotient of ${\mathcal{A}}^{\otimes{\mathbb{Z}}}$ by the so
called entanglement kernel. Then we argue that the Haagerup tensor product
accommodates naturally the factorization block and demonstrate how its
representations on free products of algebras lead to operator product
presentations of the states.
The author was supported by the U.S. National Science Foundation through the
grant DMR-1823800.
###### Contents
1. 1 Introduction and Main Statements
2. 2 Entanglement Kernel and the Reduced Space and State
1. 2.1 Background: Concrete and abstract operator spaces
2. 2.2 Background: Completely bounded linear maps
3. 2.3 Algebra of physical observables
4. 2.4 The entanglement kernel
5. 2.5 The reduced state
3. 3 Reduction Process
1. 3.1 Ordered vector spaces
2. 3.2 Entanglement kernel is an order ideal
3. 3.3 Operator systems and matrix ordered $\ast$-vector spaces
4. 3.4 Completing the reduction process
4. 4 Factorization Process
1. 4.1 Multi-linear maps and Haagerup tensor product
2. 4.2 Generating bi-linear map
3. 4.3 Generating multi-linear map
4. 4.4 Asymptotic constraints
5. 5 Reconstruction Algorithm
1. 5.1 Reconstruction: The abstract form
2. 5.2 Reconstruction: The concrete form
3. 5.3 Operator product states
## 1\. Introduction and Main Statements
The work [6], by Fannes, Nachtergaele and Werner, contains some of the most
influential results in the field of correlated quantum systems. Indeed, most theoretical physicists working in the field will agree that the
1-dimensional chains of quantum resonators, each carrying a finite local
algebra ${\mathcal{A}}$ of physical observables, are fully understood, in the
sense that any translational invariant state can be represented, in an exact
or approximate form, as a matrix product. The rigorous grounds for this
statement are contained in [6]. Once the matrix product presentation of a
state is derived, which is a standard exercise in many cases, the correlation
functions of the chain and many other physical observables can be computed
explicitly. Hence, the concepts and the tools introduced in [6] have both
fundamental and practical values.
At a more concrete level, let ${\mathcal{A}}^{\otimes{\mathbb{Z}}}$ be the
infinite tensor power of the local algebra ${\mathcal{A}}$, with the former
playing the role of the algebra of physical observables for the chain.
Embedded in ${\mathcal{A}}^{\otimes{\mathbb{Z}}}$ are two sub-algebras,
${\mathcal{A}}_{R}={\mathcal{A}}^{\otimes{\mathbb{N}}^{\times}}$ and
${\mathcal{A}}_{L}={\mathcal{A}}^{\otimes({\mathbb{Z}}\setminus{\mathbb{N}}^{\times})}$.
Recall that ${\mathcal{A}}^{\otimes{\mathbb{Z}}}$ comes equipped with a standard shift map $S$ and, if this map shifts to the right, then it leaves ${\mathcal{A}}_{R}$ invariant, hence it descends to this sub-algebra. Likewise, $S^{-1}$ descends to a map on ${\mathcal{A}}_{L}$. Now, given a
shift invariant state $\omega$ on ${\mathcal{A}}^{\otimes{\mathbb{Z}}}$, $\omega\circ
S=\omega$, Ref. [6] introduced the closed sub-space of ${\mathcal{A}}_{R}$
${\mathcal{K}}_{\omega}=\bigcap_{x\in{\mathcal{A}}_{L}}{\rm
Ker}\,\omega_{x},\quad\omega_{x}(a_{R}):=\omega(xa_{R}),$ (1.1)
which we refer to as the entanglement kernel of $\omega$. The attention then
shifts to the reduced space
${\mathcal{B}}_{\omega}={\mathcal{A}}_{R}/{\mathcal{K}}_{\omega}$ and to the
descent $\bar{\omega}$ of $\omega$ on this space (note that
${\mathcal{K}}_{\omega}$ is automatically in the kernel of $\omega$). The
authors of [6] observed that, if ${\mathcal{B}}_{\omega}$ happens to be a
$C^{\ast}$-algebra, then $\omega$ factorizes through a completely positive map
${\mathbb{E}}:{\mathcal{A}}\otimes{\mathcal{B}}_{\omega}\to{\mathcal{B}}_{\omega},\quad{\mathbb{E}}(a\otimes\lfloor
a_{R}\rfloor)=\lfloor a\otimes a_{R}\rfloor,$ (1.2)
in the sense that $\omega(a_{1}\otimes\cdots\otimes a_{n})$ can be computed by
applying $\bar{\omega}$ on certain iterates of ${\mathbb{E}}$ map. Here,
$\lfloor\cdot\rfloor$ indicates the classes of
${\mathcal{B}}_{\omega}={\mathcal{A}}_{R}/{\mathcal{K}}_{\omega}$. Hence, [6]
identified a reduction process as well as a factorization process which supply
the $\omega$-reduced and factorized data
$({\mathcal{A}},{\mathcal{B}}_{\omega},{\mathbb{E}},\bar{\omega})$.
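For intuition, the entanglement kernel (1.1) can be computed explicitly in a finite-dimensional truncation. The sketch below is a hypothetical toy example, not a construction from the paper: it truncates ${\mathcal{A}}_{L}$ and ${\mathcal{A}}_{R}$ to single copies of $M_{2}({\mathbb{C}})$, encodes $\omega$ by a density matrix $\rho$, and extracts ${\mathcal{K}}_{\omega}$ as the joint null space of the functionals $\omega_{x}$:

```python
import numpy as np

d = 2  # truncated local dimension (hypothetical)

# Hypothetical state omega on M_d (x) M_d, encoded by a density matrix rho;
# the product state below makes the entanglement kernel as large as possible.
sigma = np.diag([0.7, 0.3])
rho = np.kron(sigma, sigma)

# Basis of matrix units E_{kl} for M_d.
units = []
for k in range(d):
    for l in range(d):
        e_kl = np.zeros((d, d))
        e_kl[k, l] = 1.0
        units.append(e_kl)

# Correlation matrix M[x, a] = omega_x(a) = omega(x (x) a) = Tr(rho (x kron a)).
M = np.array([[np.trace(rho @ np.kron(x, a)) for a in units] for x in units])

# K_omega = joint null space of the omega_x = right null space of M.
svals = np.linalg.svd(M, compute_uv=False)
kernel_dim = int(np.sum(svals < 1e-12))
print("dim K_omega =", kernel_dim, "| dim B_omega =", d * d - kernel_dim)
```

For the product state chosen here the correlation matrix has rank one, so the reduced space ${\mathcal{B}}_{\omega}$ is one-dimensional; replacing $\rho$ by an entangled density matrix shrinks the kernel and enlarges ${\mathcal{B}}_{\omega}$.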
The re-construction problem addresses the reversed process: Given the
$\omega$-reduced and factorized data
$({\mathcal{A}},{\mathcal{B}}_{\omega},{\mathbb{E}},\bar{\omega})$, can one
reproduce the state $\omega$ that generated this data in the first place? It
was shown in [6] that, given a set of data
$({\mathcal{A}},{\mathcal{B}},{\mathbb{E}},\xi)$ with ${\mathcal{A}}$ and
${\mathcal{B}}$ finite dimensional $C^{\ast}$-algebras,
${\mathbb{E}}:{\mathcal{A}}\otimes{\mathcal{B}}\to{\mathcal{B}}$ a completely
positive map and $\xi:{\mathcal{B}}\to{\mathbb{C}}$ a state, there is a
standard algorithm which generates a state on
${\mathcal{A}}^{\otimes{\mathbb{Z}}}$. This state is necessarily finitely
correlated, but states of this type densely sample the set of physically interesting states. Furthermore, this algorithm automatically leads to a
presentation of the state in the form of a matrix product.
The present work re-assesses the above program in the more general context
when the local algebra ${\mathcal{A}}$ is an infinite nuclear
$C^{\ast}$-algebra. Examples of interest that are covered by the new setting are
${\mathcal{M}}^{\otimes{\mathbb{Z}}^{d}}\simeq\big{(}{\mathcal{M}}^{\otimes{\mathbb{Z}}^{d-1}}\big{)}^{\otimes{\mathbb{Z}}}$,
where ${\mathcal{M}}$ is a finite $C^{\ast}$-algebra. As we shall see,
${\mathcal{B}}_{\omega}$ has a natural structure of an operator space but, in
general, it fails to be an operator system. Hence, a priori,
${\mathcal{B}}_{\omega}$ does not have a good order structure. Nevertheless,
using relatively recent works by Kavruk, Paulsen, Todorov and Tomforde [10,
11, 7, 8], we find that the reduced space ${\mathcal{B}}_{\omega}$ always
accepts a canonical operator system structure, which can be put in place via
the Archimedeanization process developed in [10, 11]. Since this requires no
additional data besides $({\mathcal{A}}^{\otimes{\mathbb{Z}}},\omega)$, our
conclusion is that the $\omega$-reduced data
$({\mathcal{B}}_{\omega},\bar{\omega})$ always comes in the form of an
operator system and a state (see section 3).
Focusing on the factorization process, we define a bi-linear map on
${\mathcal{A}}\times{\mathcal{B}}_{\omega}$ using the same expression as in
Eq. (1.2), which we show to be a completely contractive bi-linear map, when
${\mathcal{B}}_{\omega}$ is equipped with its operator system norms. As such,
this bi-linear map extends to a completely positive unital map on the Haagerup
tensor product ${\mathcal{A}}\otimes_{\rm h}{\mathcal{B}}_{\omega}$, hence
supplying the map ${\mathbb{E}}$ in this more general context. Furthermore,
the iterates of ${\mathbb{E}}$ continue to display the Markov property
discovered in [6] and used in an essential way in the re-construction process
(see section 4).
The above led us to our first important conclusion that the $\omega$-reduced
and factorized data always come in the form
$({\mathcal{A}},{\mathcal{S}},{\mathbb{E}},\xi)$, where ${\mathcal{A}}$ is a
nuclear $C^{\ast}$-algebra, ${\mathcal{S}}$ is an operator system,
${\mathbb{E}}:{\mathcal{A}}\otimes_{\rm h}{\mathcal{S}}\to{\mathcal{S}}$ is a
completely positive map and $\xi:{\mathcal{S}}\to{\mathbb{C}}$ is a state. The
data is determined entirely and canonically by the original state $\omega$.
Focusing on the re-construction process, we were able to prove the following
statements underlining an abstract re-construction algorithm:
###### Theorem 1.1.
Assume $({\mathcal{A}},{\mathcal{S}},{\mathbb{E}},\xi)$ as already described.
Let
${\mathbb{E}}_{(n)}:{\mathcal{A}}^{\otimes n}\otimes_{\rm
h}{\mathcal{S}}\rightarrow{\mathcal{S}}$
be the maps defined iteratively as
${\mathbb{E}}_{(1)}={\mathbb{E}},\quad{\mathbb{E}}_{(n+1)}={\mathbb{E}}\circ({\rm
id}\otimes_{\rm h}{\mathbb{E}}_{(n)}),\quad n\geq 1.$ (1.3)
Then:
1. i)
The tower of linear functionals
$\omega_{(n)}:{\mathcal{A}}^{\otimes n}\rightarrow{\mathbb{C}},\quad\omega_{(n)}=\xi\circ{\mathbb{E}}_{(n)}\circ J_{n},\quad n\geq 1,$ (1.4)
where the $J_{n}$’s are the isometric embeddings
$J_{n}:{\mathcal{A}}^{\otimes n}\rightarrow{\mathcal{A}}^{\otimes
n}\otimes_{\rm h}{\mathcal{S}},\quad J_{n}\big{(}a_{1}\otimes\cdots\otimes
a_{n}\big{)}=a_{1}\otimes\cdots\otimes a_{n}\otimes e,$ (1.5)
with $e$ the identity of ${\mathcal{S}}$, defines a state $\omega_{R}$ on
${\mathcal{A}}_{R}$.
2. ii)
Let $\bar{S}$ be defined as
$\bar{S}:{\mathcal{S}}\rightarrow{\mathcal{S}},\quad\bar{S}={\mathbb{E}}\circ\bar{L},$
(1.6)
with $\bar{L}$ being the unital and isometric embedding:
$\bar{L}:{\mathcal{S}}\rightarrow{\mathcal{A}}\otimes_{\rm
h}{\mathcal{S}},\quad\bar{L}(s)=1\otimes s.$ (1.7)
Then the reconstructed state is shift invariant, $\omega_{R}\circ
S=\omega_{R}$, provided $\xi\circ\bar{S}=\xi$.
3. iii)
Under the conditions of point ii), there exists a unique shift invariant state $\omega$ on ${\mathcal{A}}^{\otimes{\mathbb{Z}}}$, which coincides with $\omega_{R}$ when restricted from ${\mathcal{A}}^{\otimes{\mathbb{Z}}}$ to ${\mathcal{A}}_{R}$.
4. iv)
If the reduced data has the following asymptotic clustering property
$\lim_{r\rightarrow\infty}\sup\Big\{\Big|\xi\Big({\mathbb{E}}_{(n)}\big(a_{(n)}\otimes\bar{S}^{\circ r}(s-\xi(s)\,e)\big)\Big)\Big|\Big\}=0,$ (1.8)
where the supremum is over all $n\geq 0$, $a_{(n)}\in{\mathcal{A}}^{\otimes
n}$ with $\|a_{(n)}\|=1$, and $s\in{\mathcal{S}}$ with $\|s\|\leq 1$, then
$\omega$ also has the asymptotic clustering property.
5. v)
If ${\mathbb{E}}$ is full, in the sense that
$\overline{\bigcup_{n\geq 1}\bigcup_{x\in{\mathcal{A}}^{\otimes n}}{\mathbb{E}}_{(n)}(x\otimes s)}={\mathcal{S}},\quad\forall\ s\in{\mathcal{S}},\ s\neq 0,$ (1.9)
then the data $({\mathcal{A}}^{\otimes{\mathbb{Z}}},\omega)$ reduces back to
$({\mathcal{A}},{\mathcal{S}},{\mathbb{E}},\xi)$.
It follows from the last statement that any full, shift invariant state over ${\mathcal{A}}^{\otimes{\mathbb{Z}}}$ can be generated from a set
of data $({\mathcal{A}},{\mathcal{S}},{\mathbb{E}},\xi)$ via the algorithm
described at point i).
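In the finitely correlated setting of [6], the reconstruction algorithm of point i) can be run directly. The sketch below is a hypothetical finite-dimensional example (the dimensions, the isometry $V$, and hence ${\mathbb{E}}$ and $\xi$ are arbitrary choices, not data from the paper): ${\mathcal{A}}=M_{d}$, ${\mathcal{S}}=M_{k}$, ${\mathbb{E}}(a\otimes b)=V^{\ast}(a\otimes b)V$ for an isometry $V:{\mathbb{C}}^{k}\to{\mathbb{C}}^{d}\otimes{\mathbb{C}}^{k}$, and $\xi={\rm Tr}(\sigma\,\cdot)$ with $\sigma$ a fixed point of the dual of $\bar{S}$, which enforces the condition $\xi\circ\bar{S}=\xi$ of point ii):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k = 2, 3  # hypothetical local and memory dimensions

# Isometry V: C^k -> C^d (x) C^k; E(a (x) b) = V* (a kron b) V is then
# completely positive and unital, since E(1 (x) 1) = V* V = 1.
X = rng.normal(size=(d * k, k)) + 1j * rng.normal(size=(d * k, k))
V, _ = np.linalg.qr(X)

def E(a, b):
    return V.conj().T @ np.kron(a, b) @ V

# Matrix of the transfer map b -> E(1 (x) b) in the matrix-unit basis.
T = np.zeros((k * k, k * k), dtype=complex)
for col in range(k * k):
    basis = np.zeros(k * k)
    basis[col] = 1.0
    T[:, col] = E(np.eye(d), basis.reshape(k, k)).reshape(-1)

# xi(b) = Tr(sigma b) with sigma a fixed point of the dual transfer map,
# so that xi(E(1 (x) b)) = xi(b), i.e. xi o S-bar = xi.
w, vecs = np.linalg.eig(T.T)
sigma = vecs[:, np.argmin(np.abs(w - 1))].reshape(k, k).T
sigma = sigma / np.trace(sigma)

def xi(b):
    return np.trace(sigma @ b)

def omega(ops):
    # omega_(n)(a_1 (x) ... (x) a_n) via the iteration E_(n+1) = E o (id (x) E_(n)).
    b = np.eye(k, dtype=complex)
    for op in reversed(ops):
        b = E(op, b)
    return xi(b)

a = np.array([[1.0, 0.0], [0.0, -1.0]])  # a local observable
print(abs(omega([np.eye(d)])))                             # ~ 1 (normalization)
print(abs(omega([a, np.eye(d)]) - omega([np.eye(d), a])))  # ~ 0 (shift invariance)
```

The two printed checks mirror points i) and ii): the tower $\omega_{(n)}$ is normalized, and one-site expectations do not depend on the site.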
Focusing now on the operator product presentation of the reconstructed states,
we note that, since we are dealing with operator systems, there always exist
two Hilbert spaces $H_{1}$ and $H_{2}$, which enter the following commutative
diagram:
[commutative diagram of isometric embeddings] (1.10)
Here, all the embeddings are isometric. We recall that the Haagerup tensor
product $B(H_{1})\otimes_{\rm h}B(H_{2})$ embeds isometrically into the free
product $B(H_{1})\star B(H_{2})$ of the algebras (see [12, p. 98]). At its
turn, this free product embeds isometrically into the algebra of bounded
operators over the free product $H_{1}\star H_{2}$ of Hilbert spaces. The
latter requires choices of distinguished unit vectors [16, p. 3] and a natural
way to introduce them is to expand the Hilbert spaces to
$\widetilde{H}_{i}={\mathbb{C}}\cdot\xi_{i}\oplus H_{i}$, $i=1,2$. We then redraw the diagram of embeddings as follows:
[commutative diagram of embeddings, with the enlarged Hilbert spaces $\widetilde{H}_{i}$] (1.11)
The projections onto the sub-spaces ${\mathbb{C}}\cdot\xi_{i}$ of
$\widetilde{H}_{i}$ will be denoted by $P_{\xi_{i}}$, $i=1,2$.
From Arveson's extension theorem [3, p. 18], we know that the completely positive unital map ${\mathbb{E}}$ extends over $B(\widetilde{H}_{1}\star\widetilde{H}_{2})$ and, from Stinespring's theorem [15], we know that this extension has the generic structure
$B(\widetilde{H}_{1}\star\widetilde{H}_{2})\ni
G\mapsto{\mathbb{E}}(G)=V^{\ast}\hat{\pi}(G)V\in B(\widetilde{H}_{2}),$ (1.12)
where $\hat{\pi}$ is a representation of
$B(\widetilde{H}_{1}\star\widetilde{H}_{2})$ into the algebra of bounded
operators over a Hilbert space $\widehat{H}$ and
$V:\widetilde{H}_{2}\rightarrow\widehat{H}$ is an isometry. Since
$B(\widetilde{H}_{1})$ and $B(\widetilde{H}_{2})$ are canonically embedded
into their free product, the restrictions of $\hat{\pi}$ generate
representations $\hat{\pi}_{1}$ and $\hat{\pi}_{2}$ of $B(\widetilde{H}_{1})$
and $B(\widetilde{H}_{2})$ on $\widehat{H}$, respectively. Now, for $\psi\in
H_{1}$, we introduce the pair of conjugate operators from
$B(\widetilde{H}_{1})$,
$Z_{\psi}(\alpha\,\xi_{1}+\psi^{\prime})=\alpha\psi,\quad
Z^{\ast}_{\psi}(\alpha\,\xi_{1}+\psi^{\prime})=\langle\psi,\psi^{\prime}\rangle\,\xi_{1},$
(1.13)
and we model the separable Hilbert space $H_{1}$ as $\ell^{2}({\mathcal{I}})$
with ${\mathcal{I}}$ a countable set. Then, if
$\big{\\{}\delta_{i}\big{\\}}_{i\in{\mathcal{I}}}$ denotes the standard basis
of $\ell^{2}({\mathcal{I}})$, we let $Z_{i}=Z_{\delta_{i}}\in
B(\widetilde{H}_{1})$. In this context, the following statement holds:
###### Theorem 1.2.
Let $\delta_{\bm{i}}=\delta_{i_{1}}\otimes\ldots\otimes\delta_{i_{n}}$,
$\bm{i}=(i_{1},\ldots,i_{n})\in{\mathcal{I}}^{n}$, be a basis of
$H_{1}^{\otimes n}$ and assume the following:
1. 1)
The representations $\hat{\pi}_{1}$ and $\hat{\pi}_{2}$ commute;
2. 2)
There exists a unitary map
$U:\widetilde{H}_{2}\to\hat{\pi}_{1}\big{(}P_{\xi_{1}}\big{)}\widehat{H}$.
3. 3)
The representation
$\hat{\pi}_{1}\big{(}P_{\xi_{1}}\big{)}\hat{\pi}_{2}(\cdot)\hat{\pi}_{1}\big{(}P_{\xi_{1}}\big{)}$
of $B(\widetilde{H}_{2})$ on
$\hat{\pi}_{1}\big{(}P_{\xi_{1}}\big{)}\widehat{H}$ is unitarily isomorphic to
the identity representation.
Then the maps ${\mathbb{E}}_{(n)}$ accept presentations as operator products:
${\mathbb{E}}_{(n)}(a_{(n)}\otimes t)=\sum_{\bm{i},\bm{j}\in{\mathcal{I}}^{n}}\big\langle\delta_{\bm{i}},A_{(n)}\delta_{\bm{j}}\big\rangle\,\Sigma_{i_{1}}\cdots\Sigma_{i_{n}}T\,\Sigma^{\ast}_{j_{n}}\cdots\Sigma^{\ast}_{j_{1}},$
(1.14)
where $A_{(n)}=\rho^{\otimes_{\rm h}n}(a_{(n)})\in B(H_{1})^{\otimes_{\rm
h}n}\subset B(\widetilde{H}_{1})^{\otimes_{\rm h}n}$, $T=\sigma(t)\in
B(H_{2})\subset B(\widetilde{H}_{2})$ and
$\Sigma_{i}\in
B(\widetilde{H}_{2}),\quad\Sigma_{i}=V^{\ast}\hat{\pi}_{1}(Z_{i})U.$ (1.15)
The sum in (1.14) converges in the weak topology of $B(\widetilde{H}_{2})$.
This statement covers Eq. 5.4 from [6], which encodes the matrix product
presentation of the state, as a particular case (see section 5.3). A large class of additional situations where the conditions of Theorem 1.2 are satisfied is presented in Example 5.14.
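In a finite-dimensional model, the operator product (1.14) reduces to the familiar Kraus presentation of matrix product states. The sketch below is a hypothetical example, not the free-product construction of Theorem 1.2: for ${\mathbb{E}}(a\otimes b)=V^{\ast}(a\otimes b)V$ with an isometry $V:{\mathbb{C}}^{k}\to{\mathbb{C}}^{d}\otimes{\mathbb{C}}^{k}$, the block rows $K_{i}=(\langle\delta_{i}|\otimes 1)V$ play the role of the $\Sigma_{i}$, and ${\mathbb{E}}(a\otimes b)=\sum_{i,j}\langle\delta_{i},a\,\delta_{j}\rangle\,K_{i}^{\ast}\,b\,K_{j}$:

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 2, 3  # hypothetical local and memory dimensions

X = rng.normal(size=(d * k, k)) + 1j * rng.normal(size=(d * k, k))
V, _ = np.linalg.qr(X)  # isometry C^k -> C^d (x) C^k

# K_i = (<delta_i| (x) 1) V: finite-dimensional stand-ins for the Sigma_i.
K = [V[i * k:(i + 1) * k, :] for i in range(d)]

a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
b = rng.normal(size=(k, k)) + 1j * rng.normal(size=(k, k))

lhs = V.conj().T @ np.kron(a, b) @ V           # E(a (x) b)
rhs = sum(a[i, j] * K[i].conj().T @ b @ K[j]   # operator product form
          for i in range(d) for j in range(d))
print(np.allclose(lhs, rhs))  # True
```

Iterating this identity $n$ times reproduces the $n$-fold products $\Sigma_{i_{1}}\cdots\Sigma_{i_{n}}T\,\Sigma^{\ast}_{j_{n}}\cdots\Sigma^{\ast}_{j_{1}}$ appearing in Eq. (1.14).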
## 2\. Entanglement Kernel and the Reduced Space and State
Throughout our presentation, we will oscillate between the categories of
operator spaces and of operator systems. In this section, we first provide the
minimal context needed to formulate the problem, which is entirely on operator
spaces. The material is mostly compiled from the textbook by Blecher and Le Merdy [3] and the textbook by Pisier [12], and it contains relevant
fundamental statements that will be referenced throughout our presentation.
Hence, our goal here was to make the presentation self-contained rather than
give an overview of the field.
In the second part, we introduce and exemplify the main objects to be studied,
namely, the algebra of physical observables, which is the infinite tensor
product ${\mathcal{A}}^{\otimes{\mathbb{Z}}}$ of a nuclear $C^{\ast}$-algebra
${\mathcal{A}}$, the entanglement kernel of a state $\omega$ over this algebra
and the quotient space ${\mathcal{B}}_{\omega}$ of ${\mathcal{A}}_{R}$ by this kernel. As we shall see, the
latter has the structure of an operator space and $\omega$ descends to a
completely contractive functional $\bar{\omega}$ over
${\mathcal{B}}_{\omega}$. At this point, we formulate the reduction process in
precise terms and point out the main challenges for moving the program
forward.
Before we start, let us lay out our conventions for the notation. The letters
$H,K,L,\ldots$, will be designated for Hilbert spaces. The symbol $H^{(n)}$
will stand for the direct sum of $n$ identical copies of $H$,
$H^{(n)}=H\oplus\ldots\oplus H$. The $C^{\ast}$-algebra of bounded linear maps
between two Hilbert spaces will be denoted by $B(H,K)$. The letters $E$, $F$, $G$, etc., will be designated for operator spaces. The matrix amplification of a linear space will be denoted by $M_{n}(E)$. The elements of the operator spaces and algebras will be denoted by lowercase letters $e$, $f$, $g$, etc.
All norms are automatically assumed to be complete. The symbol $\otimes$ will
mostly refer to the algebraic tensor product. Different completions of the
algebraic tensor product will carry sub-scripts, such as $\otimes_{\rm min}$
or $\otimes_{\rm h}$ for the minimal and Haagerup tensor products,
respectively. For nuclear algebras, for which the various tensor product
completions all coincide, the tensor product will be again denoted by
$\otimes$. The elements of matrix amplifications will be specified by
$[a_{ij}]$.
### 2.1. Background: Concrete and abstract operator spaces
###### Definition 2.1 ([3], p. 5).
A concrete operator space is a closed linear sub-space of $B(H,K)$, for some
Hilbert spaces $H$ and $K$. Since $B(H,K)$ can be canonically embedded in
$B(H\oplus K)$, one can equivalently define a concrete operator space as a
closed linear sub-space of $B(H)$, for some Hilbert space $H$.
The following fundamental result, due to Ruan [14], shows that the operator
spaces are intrinsic structures that are intimately related to matrix
amplifications of normed linear spaces:
###### Theorem 2.2 (Ruan, [14]).
Let $M_{n}(E)$ be the linear space of $n\times n$ matrices with entries from
the linear space $E$. Suppose that each $M_{n}(E)$, $n\in{\mathbb{N}}$, can be
equipped with a norm $\|\ \|_{n}$ such that:
1. R1)
$\|aeb\|_{n}\leq\|a\|\|e\|_{n}\|b\|$ for any $a,b\in M_{n}({\mathbb{C}})$ and
$e\in M_{n}(E)$.
2. R2)
For all $e\in M_{m}(E)$ and $e^{\prime}\in M_{n}(E)$:
$\left\|\begin{pmatrix}e&0\\\
0&e^{\prime}\end{pmatrix}\right\|_{m+n}\leq{\max}\big{\\{}\|e\|_{m},\,\|e^{\prime}\|_{n}\big{\\}},\quad\forall\
m,n=1,2,\ldots.$ (2.1)
Then $E$ can be isometrically embedded in $B(H)$ for some Hilbert space $H$
and, as such, $E$ is an operator space. Conversely, if $E$ can be
isometrically embedded in $B(H)$, then the norms $\|\ \|_{n}$ inherited by
$M_{n}(E)$ from $M_{n}(B(H))\simeq B(H^{(n)},H^{(n)})$ satisfy R1-2.
The system of norms $\\{\|\ \|_{n}\\}_{n\geq 1}$ should be seen as encoding
the extra-structure on the normed space $(E,\|\ \|)$, namely, the fact that
$E$ accepts an isometric embedding in $B(H)$ for some Hilbert space $H$. In
fact, Theorem 2.2 tells us that there is a one-to-one correspondence between
such embeddings and the systems of norms $\\{\|\ \|_{n}\\}_{n\geq 1}$
satisfying R1-2. An important consequence is that the operator spaces are
intrinsic constructs:
###### Definition 2.3.
An abstract operator space is a pair $\big{(}E,\\{\|\ \|_{n}\\}_{n\geq
1}\big{)}$ of a vector space $E$ and a sequence of norms on $M_{n}(E)$
satisfying R1-2.
Closed linear sub-spaces of an operator space are again operator spaces with
the $n$-norms induced from the parent operator space. More important for us
is a fundamental result by Ruan stating that quotients of operator spaces by
closed linear sub-spaces are also operator spaces:
###### Proposition 2.4 ([3] p. 8).
If $E$ is an operator space and $F$ is one of its closed linear subspaces,
then $E/F$ is an operator space with norms induced by the identification
$M_{n}(E/F)\simeq M_{n}(E)/M_{n}(F)$. Explicitly, these norms are given by the
formula
$\|[e_{ij}+F]\|_{n}=\inf\\{\|[e_{ij}+f_{ij}]\|_{n},\ [f_{ij}]\in M_{n}(F)\\},$
(2.2)
for any $[e_{ij}]\in M_{n}(E)$.
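As a quick sanity check, added here for the reader, one can verify Ruan's axiom R1 directly for the quotient norms (2.2): since $F$ is a linear subspace, $a\,[f_{ij}]\,b\in M_{n}(F)$ for any scalar matrices $a,b\in M_{n}({\mathbb{C}})$ and $[f_{ij}]\in M_{n}(F)$, hence

```latex
\big\|a\,[e_{ij}+F]\,b\big\|_{n}
   =\inf_{[f_{ij}]\in M_{n}(F)}\big\|a[e_{ij}]b+[f_{ij}]\big\|_{n}
   \leq\inf_{[f_{ij}]\in M_{n}(F)}\big\|a\big([e_{ij}]+[f_{ij}]\big)b\big\|_{n}
   \leq\|a\|\;\|[e_{ij}+F]\|_{n}\;\|b\|.
```

The axiom R2 is checked in the same manner, using block-diagonal representatives.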
Concrete presentations of quotient operator spaces were supplied by Rieffel in
[13]. Unfortunately, we were not able to take advantage of them at this point.
### 2.2. Background: Completely bounded linear maps
The morphisms in the category of operator spaces are supplied by the
completely bounded (c.b.) linear maps:
###### Definition 2.5 ([12] p. 19; [3] p. 4).
A linear map $u:E\rightarrow F$ between two operator spaces can be amplified
to a linear map
$u_{n}:M_{n}(E)\rightarrow M_{n}(F),\quad u_{n}([e_{ij}])=[u(e_{ij})],$ (2.3)
for all $n\geq 1$. The map $u$ is called:
1. 1)
Completely bounded if
$\sup_{n\geq 1}\,\|u_{n}\|_{M_{n}(E)\rightarrow M_{n}(F)}<\infty.$ (2.4)
2. 2)
A complete isometry if all $u_{n}$’s are isometries.
3. 3)
A complete quotient if each $u_{n}$ sends the unit ball of $M_{n}(E)$ onto the
unit ball of $M_{n}(F)$.
The set of c.b. maps ${\rm CB}(E,F)$ is closed under addition and becomes a
Banach linear space when equipped with the norm
$\|u\|_{\rm cb}=\sup_{n\in{\mathbb{N}}}\,\|u_{n}\|_{M_{n}(E)\rightarrow
M_{n}(F)}.$ (2.5)
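A standard illustration, which we add here for orientation, is the transpose map: it is bounded, in fact isometric, yet its amplified norms grow with the matrix size. This is a well-known fact in the operator space literature.

```latex
% Transposition on N-by-N matrices, viewed as the operator space M_N(C):
t:M_{N}(\mathbb{C})\rightarrow M_{N}(\mathbb{C}),\qquad t(x)=x^{\mathrm{t}}.
% At the first level t is an isometry, \|t_1\|=1, yet the amplified norms
% grow with the matrix size; in fact \|t\|_{\rm cb}=N.
% Consequently, transposition with respect to a fixed orthonormal basis of an
% infinite-dimensional Hilbert space H gives an isometry of B(H) that is not
% completely bounded.
```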
As expected, c.b. linear maps behave well under composition:
###### Proposition 2.6 ([12] p. 19).
If $E$, $F$ and $G$ are operator spaces and $u:E\rightarrow F$ and
$v:F\rightarrow G$ are completely bounded linear maps, then $v\circ
u:E\rightarrow G$ is a completely bounded map and $\|v\circ u\|_{\rm
cb}\leq\|v\|_{\rm cb}\,\|u\|_{\rm cb}$.
C.b. linear maps also behave quite naturally under taking quotients:
###### Proposition 2.7 ([12] p. 42).
Let $E$, $F$ and $G$ be operator spaces such that $F\subset E$, and let
$q:E\rightarrow E/F$ be the canonical surjection. Then, a linear map
$u:E/F\rightarrow G$ is completely bounded if and only if $u\circ q$ is
completely bounded and $\|u\|_{\rm cb}=\|u\circ q\|_{\rm cb}$.
###### Proposition 2.8 ([3] p. 8).
If $u:E\rightarrow G$ is completely bounded and if $F$ is a closed subspace of
$E$ contained in ${\rm Ker}\,u$, then the canonical map
$\tilde{u}:E/F\rightarrow G$ induced by $u$ is also completely bounded, with
$\|\tilde{u}\|_{\rm c.b.}=\|u\|_{\rm c.b.}$. If $F={\rm Ker}\,u$, then $u$ is
a complete quotient map if and only if $\tilde{u}$ is a complete isometric
isomorphism.
###### Proposition 2.9.
Let $E$ and $F$ be operator spaces with $F\subset E$. Then the canonical
surjection $q:E\rightarrow E/F$ is a complete quotient map and ${\rm
Ker}\,q=F$.
The following statement is known as the fundamental factorization/extension of
c.b. linear maps.
###### Theorem 2.10 ([12] p. 23).
Consider a completely bounded map
$u:E\rightarrow B(K),\qquad E\subset B(H)\ \mbox{an operator space}.$ (2.6)
Then there exists a Hilbert space $\widehat{H}$, a representation
$\pi:B(H)\rightarrow B(\widehat{H})$ and operators
$V_{1}:K\rightarrow\widehat{H}$, $V_{2}:\widehat{H}\rightarrow K$ such that
$\|V_{1}\|\,\|V_{2}\|=\|u\|_{\rm cb}$ and
$u(e)=V_{2}\pi(e)V_{1},\quad\forall\ e\in E.$ (2.7)
Conversely, if (2.7) holds, then $u$ is completely bounded and $\|u\|_{\rm
cb}\leq\|V_{1}\|\,\|V_{2}\|$. Moreover, $u$ admits a completely bounded
extension $\tilde{u}$ such that
$\tilde{u}:B(H)\rightarrow B(K),\qquad\tilde{u}|_{E}=u,$ (2.8)
and $\|\tilde{u}\|_{\rm cb}=\|u\|_{\rm cb}$.
### 2.3. Algebra of physical observables
Let ${\mathcal{A}}_{j}$, $j\in{\mathbb{Z}}$, be $C^{\ast}$-algebras
canonically isomorphic to a fixed unital $C^{\ast}$-algebra ${\mathcal{A}}$,
referred to as the local algebra. We denote by $\alpha_{j}$ the isomorphism
between ${\mathcal{A}}_{j}$ and ${\mathcal{A}}$. To avoid unnecessary
complications, we assume that ${\mathcal{A}}$ is nuclear, so that all the
possible ways to complete its tensor powers coincide. We introduce the
notation
${\mathcal{A}}_{(m,n)}={\mathcal{A}}_{m}\otimes{\mathcal{A}}_{m+1}\otimes\ldots\otimes{\mathcal{A}}_{n},\quad
m\leq n\in{\mathbb{Z}},$ (2.9)
and we will use the symbols $1_{(m,n)}$ and ${\rm id}_{(m,n)}$ for the
identity element and identity automorphism of ${\mathcal{A}}_{(m,n)}$,
respectively. When $m=1$, we will simplify and write ${\mathcal{A}}_{(1,n)}$
as ${\mathcal{A}}_{(n)}$, etc. Clearly, there are canonical unital embedding
maps
$\mathfrak{i}_{nm}:{\mathcal{A}}_{(n)}\rightarrowtail{\mathcal{A}}_{(m)}$, for
any pair $m>n\geq 1$. Similarly, the notation ${\mathcal{A}}_{(-n,0)}$ will be
simplified to ${\mathcal{A}}_{(-n)}$ for $n\geq 0$. Now, the natural unital
embeddings
${\mathcal{A}}_{(-n,n)}\rightarrowtail{\mathcal{A}}_{(-n-1,n+1)},\quad
a_{(-n,n)}\mapsto 1_{(-n-1)}\otimes a_{(-n,n)}\otimes 1_{(n+1)},$ (2.10)
supply an inductive tower of unital $C^{\ast}$-algebras
${\mathcal{A}}\rightarrowtail{\mathcal{A}}_{(-1,1)}\cdots\rightarrowtail{\mathcal{A}}_{(-n,n)}\rightarrowtail\cdots,$
(2.11)
whose direct limit is a $C^{\ast}$-algebra denoted here by
${\mathcal{A}}_{\mathbb{Z}}$ and sometimes as
${\mathcal{A}}^{\otimes{\mathbb{Z}}}$. This is the algebra of physical
observables considered in this work.
As is the case for any $C^{\ast}$-algebra, ${\mathcal{A}}_{\mathbb{Z}}$ comes
equipped with a $C^{\ast}$-norm that enjoys the special property
$\|a^{\ast}a\|=\|a^{\ast}\|\,\|a\|=\|a\|^{2}$, for any
$a\in{\mathcal{A}}_{\mathbb{Z}}$. Among many other things, this property
enables one to define a special positive cone
${\mathcal{A}}^{+}_{\mathbb{Z}}=\\{a^{\ast}a,\
a\in{\mathcal{A}}_{\mathbb{Z}}\\},$ (2.12)
whose order semi-norm (see 3.8) is a norm and coincides with the
$C^{\ast}$-norm. The state space of ${\mathcal{A}}_{{\mathbb{Z}}}$ consists of
all bounded linear functionals $\omega$ which map
${\mathcal{A}}^{+}_{\mathbb{Z}}$ into ${\mathbb{R}}_{+}$, the positive cone of
${\mathbb{C}}$, and are normalized as $\omega(1)=1$.
Embedded in ${\mathcal{A}}_{\mathbb{Z}}$ are the $C^{\ast}$-algebras
${\mathcal{A}}_{(n,\infty)}$ and ${\mathcal{A}}_{(-\infty,n)}$ defined by the
inductive towers of ${\mathcal{A}}_{(n,m)}$ and ${\mathcal{A}}_{(m,n)}$
algebras, $m\to\pm\infty$, respectively. There are canonical
$C^{\ast}$-algebra isomorphisms
${\mathcal{A}}_{(n,m-1)}\otimes{\mathcal{A}}_{(m,\infty)}\simeq{\mathcal{A}}_{(n,\infty)}$
(2.13)
and
${\mathcal{A}}_{(-\infty,m-1)}\otimes{\mathcal{A}}_{(m,n)}\simeq{\mathcal{A}}_{(-\infty,n)}$
(2.14)
and we will simply identify these algebras throughout. Special symbols will be
used for ${\mathcal{A}}_{R}:={\mathcal{A}}_{(1,\infty)}$ and
${\mathcal{A}}_{L}:={\mathcal{A}}_{(-\infty,0)}$. We note that
${\mathcal{A}}_{\mathbb{Z}}={\mathcal{A}}_{L}\otimes{\mathcal{A}}_{R}$ and
that ${\mathcal{A}}_{R,L}$ belongs to the relative commutant of
${\mathcal{A}}_{L,R}$, respectively. We will denote by $\mathfrak{i}_{R}$ and
$\mathfrak{i}_{L}$ the unital embeddings of ${\mathcal{A}}_{R}$ and
${\mathcal{A}}_{L}$ into ${\mathcal{A}}_{\mathbb{Z}}$, respectively. Also, we
will denote by ${\mathcal{A}}_{R}^{+}$ and ${\mathcal{A}}_{L}^{+}$ the
positive cones of the corresponding $C^{\ast}$-algebras. Lastly, we will
denote by
$\mathfrak{i}_{n}:{\mathcal{A}}_{(n)}\rightarrowtail{\mathcal{A}}_{R}$ and
$\mathfrak{i}_{-n}:{\mathcal{A}}_{(-n)}\rightarrowtail{\mathcal{A}}_{L}$ the
obvious canonical unital embeddings.
###### Remark 2.11.
On several occasions, we will define maps on ${\mathcal{A}}_{\mathbb{Z}}$
using actions on monomials $\otimes_{n\in{\mathbb{Z}}}\,a_{n}$. Our convention
for such expressions is that all but a finite number of the $a_{n}$’s are equal
to the unit element. In other words, these monomials are images of elements
from ${\mathcal{A}}_{(m,n)}$ for some finite $m$ and $n$. Similar conventions
are adopted for expressions such as $\otimes_{n\geq m}\,a_{n}$ or
$\otimes_{n\leq m}\,a_{n}$. $\Diamond$
The algebra ${\mathcal{A}}_{\mathbb{Z}}$ has a special automorphism
$S:{\mathcal{A}}_{\mathbb{Z}}\mathrel{\rightarrowtail\kern-5.72635pt\twoheadrightarrow}{\mathcal{A}}_{\mathbb{Z}}$,
which is the shift acting on monomials as
$S\big{(}\otimes_{n\in{\mathbb{Z}}}a_{n}\big{)}=\otimes_{n\in{\mathbb{Z}}}\,(\alpha_{n}^{-1}\circ\alpha_{n-1})(a_{n-1}).$
(2.15)
Note that we shift the entries from left to right. Then, by restriction to
sub-algebras, $S$ generates $C^{\ast}$-algebra isomorphisms
${\mathcal{A}}_{(m,n)}\mathrel{\rightarrowtail\kern-5.72635pt\twoheadrightarrow}{\mathcal{A}}_{(m+1,n+1)}$
as well as
${\mathcal{A}}_{(n,\infty)}\mathrel{\rightarrowtail\kern-5.72635pt\twoheadrightarrow}{\mathcal{A}}_{(n+1,\infty)}$,
which will be denoted by the same symbol $S$. Likewise, the $C^{\ast}$-algebra
isomorphisms
${\mathcal{A}}_{(m,n)}\mathrel{\rightarrowtail\kern-5.72635pt\twoheadrightarrow}{\mathcal{A}}_{(m-1,n-1)}$
and
${\mathcal{A}}_{(-\infty,n)}\mathrel{\rightarrowtail\kern-5.72635pt\twoheadrightarrow}{\mathcal{A}}_{(-\infty,n-1)}$
will be denoted by the single symbol $S^{-1}$. Furthermore, because
${\mathcal{A}}_{(2,\infty)}$ is canonically embedded in ${\mathcal{A}}_{R}$,
we can define the important shift map on ${\mathcal{A}}_{R}$ by the action
$S_{R}\big{(}\otimes_{n\geq 1}a_{n}\big{)}=\otimes_{n\geq
1}\,\,(\alpha_{n}^{-1}\circ\alpha_{n-1})(a_{n-1}),\quad a_{0}=1,$ (2.16)
on the monomials. A similar $C^{\ast}$-algebra morphism $S_{L}^{-1}$ can be
defined on ${\mathcal{A}}_{L}$. The goal of our work is to explore the states
$\omega$ on ${\mathcal{A}}_{\mathbb{Z}}$ that are shift invariant,
$\omega=\omega\circ S$, using the strategy developed in [6].
### 2.4. The entanglement kernel
Let $\omega$ be a state on ${\mathcal{A}}_{\mathbb{Z}}$, not necessarily shift
invariant. Then, for any $x\in{\mathcal{A}}_{L}$, define a bounded linear
functional
$\omega_{x}:{\mathcal{A}}_{R}\rightarrow{\mathbb{C}},\quad\omega_{x}(a_{R})=\omega(x\,a_{R}),$
(2.17)
which is not positive, in general. Following [6], we introduce the following
object:
###### Definition 2.12.
The following sub-set of ${\mathcal{A}}_{R}$,
${\mathcal{K}}_{\omega}=\bigcap_{x\in{\mathcal{A}}_{L}}{\rm Ker}\,\omega_{x},$
(2.18)
will be referred to as the entanglement kernel of $\omega$.
${\mathcal{K}}_{\omega}$ is an intersection of closed linear sub-spaces, hence
it is a closed linear sub-space of the $C^{\ast}$-algebra ${\mathcal{A}}_{R}$.
As such, it is automatically an operator space. An important observation for
later is:
###### Proposition 2.13.
${\mathcal{K}}_{\omega}$ does not contain the unit $1_{R}$ of
${\mathcal{A}}_{R}$.
###### Proof.
We need to find one element $x$ of ${\mathcal{A}}_{L}$ for which
$\omega_{x}(1_{R})=\omega(x)\neq 0$. This element is the identity of
${\mathcal{A}}_{L}$. ∎
${\mathcal{K}}_{\omega}$ enters the exact sequence of operator spaces
${\mathcal{K}}_{\omega}\rightarrowtail{\mathcal{A}}_{R}\twoheadrightarrow{\mathcal{B}}_{\omega}={\mathcal{A}}_{R}/{\mathcal{K}}_{\omega}\
.$ (2.19)
Indeed, from Proposition 2.4, we know that the quotient space
${\mathcal{B}}_{\omega}$ inherits a natural operator space structure.
Furthermore, the second map in (2.19) is the canonical surjection
$q:{\mathcal{A}}_{R}\to{\mathcal{A}}_{R}/{\mathcal{K}}_{\omega}$, which is a
completely bounded map of norm one, as we learned from Proposition 2.9. Let us
recall that the operator space norms $\|\cdot\|^{\rm osp}_{n}$ on
${\mathcal{B}}_{\omega}$ and its matrix amplifications are supplied by
Proposition 2.4. We will refer to ${\mathcal{B}}_{\omega}$ as the
$\omega$-reduced operator space. By Ruan’s theorem, it can be entirely and
abstractly described by the data
$\big{(}{\mathcal{B}}_{\omega},\\{\|\cdot\|^{\rm osp}_{n}\\}_{n\geq
1}\big{)}$. Its elements will be specified by $b$, $b^{\prime}$ and so on.
Also, the matrix amplifications of $q$, which are all contractions, will be
denoted by $q_{n}$.
###### Remark 2.14.
The class of an element $a_{R}\in{\mathcal{A}}_{R}$ in
${\mathcal{B}}_{\omega}={\mathcal{A}}_{R}/{\mathcal{K}}_{\omega}$ will be
indicated by several symbols, such as
$q(a_{R})=\hat{a}_{R}=\lfloor a_{R}\rfloor.$ (2.20)
The second notation $\hat{a}_{R}$ is useful when considering matrix
amplifications of ${\mathcal{B}}_{\omega}$. The third notation will be used
when $a_{R}$ is given as a long expression. $\Diamond$
The algebra ${\mathcal{A}}_{R}$ can be reduced (quotiented) in many different
ways, but one of the practical values of the above particular reduction, which
is the great insight supplied by [6], is that ${\mathcal{B}}_{\omega}$ can be
computed for a large class of interesting physical models.
###### Example 2.15.
Let $\omega_{0}$ be a state on ${\mathcal{A}}$ and let
$\omega=\omega_{0}^{\otimes{\mathbb{Z}}}$ be the product state on
${\mathcal{A}}_{\mathbb{Z}}$. In this case,
$\omega_{x}(a_{R})=\omega(x)\,\omega(a_{R})=\omega(x)\,\omega_{R}(a_{R})$
(2.21)
for any $x\in{\mathcal{A}}_{L}$ and $a_{R}\in{\mathcal{A}}_{R}$, hence,
${\mathcal{K}}_{\omega}={\rm Ker}\,\omega_{R}$. Furthermore,
$\mathfrak{i}_{n}(a_{1}\otimes a_{2}\otimes\ldots\otimes
a_{n})-\omega_{R}\big{(}\mathfrak{i}_{n-1}(a_{1}\otimes a_{2}\otimes\ldots\otimes
a_{n-1})\big{)}\mathfrak{i}_{1}(a_{n})\in{\mathcal{K}}_{\omega},$ (2.22)
hence ${\mathcal{B}}_{\omega}$ is a sub-space of ${\mathcal{A}}$. In fact,
${\mathcal{B}}_{\omega}={\mathcal{A}}/{\rm Ker}\,\omega_{0}$. $\Diamond$
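The membership claimed in (2.22) can be checked in one line, which we add for completeness: apply $\omega_{x}$, $x\in{\mathcal{A}}_{L}$, to the difference and use the product structure (2.21) of the state:

```latex
\omega_{x}\Big(\mathfrak{i}_{n}(a_{1}\otimes\ldots\otimes a_{n})
  -\omega_{R}\big(\mathfrak{i}_{n-1}(a_{1}\otimes\ldots\otimes a_{n-1})\big)\,
   \mathfrak{i}_{1}(a_{n})\Big)
 =\omega(x)\,\Big(\prod_{i=1}^{n}\omega_{0}(a_{i})
  -\prod_{i=1}^{n-1}\omega_{0}(a_{i})\;\omega_{0}(a_{n})\Big)=0,
```

for every $x\in{\mathcal{A}}_{L}$, hence the difference indeed lies in ${\mathcal{K}}_{\omega}={\rm Ker}\,\omega_{R}$.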
Any product state has zero correlation length, in the language introduced in
[6]. Of course, the main interest of the physics community is in correlated
states. The following example supplies a large class of such states for which
${\mathcal{B}}_{\omega}$ is again computable.
Figure 2.1. Spin-1 particles, shown by red bubbles, are arranged in a closed
chain of length $N$. The algebra of observables is $[1]^{\otimes N}$, where
$[j]\simeq{\mathcal{M}}_{2j+1}$ denotes the spin-j algebra. The local algebra
$[1]$ is embedded via a map $\mathfrak{j}$ into the algebra
$\big{[}\tfrac{1}{2}\big{]}\otimes\big{[}\tfrac{1}{2}\big{]}\simeq{\mathcal{M}}_{2}\otimes{\mathcal{M}}_{2}$
of two spin-$\frac{1}{2}$ particles, shown as black dots. The tensor power
$P_{0}^{\otimes N}$ of the projection $P_{0}$ onto the one-dimensional
subspace of the decomposition
$\big{[}\tfrac{1}{2}\big{]}\otimes\big{[}\tfrac{1}{2}\big{]}\simeq[0]\oplus[1]$,
shown by the segments, generates a rank-one projection in
${\mathcal{M}}_{3}^{\otimes N}$ and a state ${\mathcal{M}}_{3}^{\otimes N}\ni
M\mapsto{\rm Tr}\big{(}\mathfrak{j}^{\otimes N}(M)P_{0}^{\otimes N}\big{)}$.
Note that the projections are applied on the “bonds” shown by the sticks and
this is why the half-shift appears in (2.23).
###### Example 2.16.
The product state in conjunction with the shift map can be used in creative
ways to generate states with finite correlation length. The following class of
states is modeled after the so-called AKLT state for spin-1 systems [1], whose
construction is sketched in Fig. 2.1. In fact, the construction given here
covers all dimerized states introduced in [1][p. 523]. For this reason, we
refer to the states covered by this example as AKLT type. The construction
involves two (nuclear) $C^{\ast}$-algebras $\hat{\mathcal{A}}$ and
$\tilde{\mathcal{A}}$, related by
$\hat{\mathcal{A}}\simeq\tilde{\mathcal{A}}\otimes\tilde{\mathcal{A}}$, and a
projection $p\in\hat{\mathcal{A}}$. Then the local algebra ${\mathcal{A}}$ is
defined as the unital $C^{\ast}$-algebra ${\mathcal{A}}=p\hat{\mathcal{A}}p$,
with $p$ playing the role of the unit. We generate a state $\omega$ on
${\mathcal{A}}_{\mathbb{Z}}$ via the thermodynamic limit $N\rightarrow\infty$
of the states $\omega_{N}$ on ${\mathcal{A}}_{{\mathbb{Z}}_{N}}$,
${\mathbb{Z}}_{N}={\mathbb{Z}}/\big{(}(2N+1){\mathbb{Z}}\big{)}$, supplied by
the following sequence of maps
${\mathcal{A}}_{{\mathbb{Z}}_{N}}\xrightarrow{\ \mathfrak{j}^{\otimes{\mathbb{Z}}_{N}}\ }\hat{\mathcal{A}}^{\otimes{\mathbb{Z}}_{N}}\simeq(\tilde{\mathcal{A}}\otimes\tilde{\mathcal{A}})^{\otimes{\mathbb{Z}}_{N}}\xrightarrow{\ S_{\frac{1}{2}}\ }(\tilde{\mathcal{A}}\otimes\tilde{\mathcal{A}})^{\otimes{\mathbb{Z}}_{N}}\xrightarrow{\ \xi_{0}^{\otimes{\mathbb{Z}}_{N}}\ }{\mathbb{C}},$ (2.23)
where $\xi_{0}$ is a positive contractive functional on
$\tilde{\mathcal{A}}\otimes\tilde{\mathcal{A}}$. The first map is the power of
the non-unital inclusion
${\mathcal{A}}\ni
p\hat{a}p\mapsto\mathfrak{j}(p\hat{a}p):=p\hat{a}p\in\hat{\mathcal{A}},$
(2.24)
and $S_{\frac{1}{2}}$ is the circular right half-shift
$S_{\frac{1}{2}}\Big{(}\bigotimes_{n\in{\mathbb{Z}}_{N}}\big{(}\tilde{a}_{n,1}\otimes\tilde{a}_{n,2}\big{)}\Big{)}=\bigotimes_{n\in{\mathbb{Z}}_{N}}\big{(}\tilde{a}_{n-1,2}\otimes\tilde{a}_{n,1}\big{)}.$
(2.25)
The set ${\mathbb{Z}}_{N}={\mathbb{Z}}/\big{(}(2N+1){\mathbb{Z}}\big{)}$ is
considered ordered, such that $-N$ is the first element in this order. The
tensor products are then taken accordingly.
If Eq. (2.23) is to supply a state on ${\mathcal{A}}_{{\mathbb{Z}}}$, the
above data must obey the constraint
$\lim_{N\rightarrow\infty}\xi_{0}^{\otimes{\mathbb{Z}}_{N}}\Big{(}S_{\frac{1}{2}}\big{(}p^{\otimes{\mathbb{Z}}_{N}}\big{)}\Big{)}=1.$
(2.26)
For example, this is indeed satisfied for the particular case described in
Fig. 2.1. The limiting procedure is required because $p^{\otimes{\mathbb{Z}}}$
is not an element of $\hat{\mathcal{A}}^{\otimes{\mathbb{Z}}}$, hence the non-
unital inclusion $\mathfrak{j}^{\otimes{\mathbb{Z}}_{N}}$ does not make sense
in the thermodynamic limit. Let us specify that, if a unital inclusion is used
instead of $\mathfrak{j}$ and $\xi_{0}$ is a state, then all the maps in the
sequence (2.23) are well defined in the thermodynamic limit, but then the
state on ${\mathcal{A}}_{\mathbb{Z}}$ is a trivial product state. We will not
investigate the thermodynamic limiting process here because we will reconstruct this
class of states via a different path in Example 5.7. We only mention here that
these issues were fully resolved in [1] for the particular case illustrated in
Fig. 2.1. Now, assuming that the thermodynamic limit of the state exists, we
compute the corresponding operator spaces ${\mathcal{K}}_{\omega}$ and
${\mathcal{B}}_{\omega}$. For this, we first note the obvious isomorphisms
$\chi_{R}:\hat{\mathcal{A}}_{R}\mathrel{\rightarrowtail\kern-5.72635pt\twoheadrightarrow}\tilde{\mathcal{A}}\otimes\hat{\mathcal{A}}_{R},\quad\chi_{L}:\hat{\mathcal{A}}_{L}\mathrel{\rightarrowtail\kern-5.72635pt\twoheadrightarrow}\hat{\mathcal{A}}_{L}\otimes\tilde{\mathcal{A}}$
(2.27)
and then we use them to define the maps
$\Gamma_{R}=({\rm
id}\otimes\xi_{0}^{\otimes{\mathbb{N}}^{\times}})\circ\chi_{R}\circ\mathfrak{j}^{\otimes{\mathbb{N}}^{\times}},\quad\Gamma_{L}=(\xi_{0}^{\otimes({\mathbb{Z}}\setminus{\mathbb{N}}^{\times})}\otimes{\rm id})\circ\chi_{L}\circ\mathfrak{j}^{\otimes({\mathbb{Z}}\setminus{\mathbb{N}}^{\times})},$
(2.28)
from ${\mathcal{A}}_{R}$ to $\tilde{\mathcal{A}}$ and from ${\mathcal{A}}_{L}$
to $\tilde{\mathcal{A}}$, respectively. These maps are well defined because of
our assumption that the map $\xi_{0}^{\otimes{\mathbb{Z}}}\circ
S_{\frac{1}{2}}\circ\mathfrak{j}^{\otimes{\mathbb{Z}}}$ exists as the
thermodynamic limit of chain (2.23). Then
$\omega(a_{L}a_{R})=\xi_{0}(\tilde{a}_{L}\otimes\tilde{a}_{R}),\quad\tilde{a}_{R,L}=\Gamma_{R,L}(a_{R,L}).$
(2.29)
Furthermore, if we choose $\xi_{0}$ such that
$\xi_{0}(\tilde{a}\otimes\tilde{a})\neq 0$ when $\tilde{a}$ samples a dense
sub-set of $\tilde{\mathcal{A}}$, then we must conclude that
${\mathcal{B}}_{\omega}$ and ${\mathcal{K}}_{\omega}$ coincide with the range
and the kernel of $\Gamma_{R}$, respectively, which are both sub-spaces of
$\tilde{\mathcal{A}}$. This is certainly true for the particular case
illustrated in Fig. 2.1 (see Example 5.7 for additional details). $\Diamond$
### 2.5. The reduced state
###### Proposition 2.17.
Let $\omega_{R}$ be the state on ${\mathcal{A}}_{R}$ supplied by the
restriction of $\omega$. Then $\omega_{R}$ descends to a completely bounded
linear functional $\bar{\omega}:{\mathcal{B}}_{\omega}\rightarrow{\mathbb{C}}$
with $\omega_{R}=\bar{\omega}\circ q$ and $\|\bar{\omega}\|_{\rm cb}=1$.
###### Proof.
Taking $x$ to be the unit of the ${\mathcal{A}}_{L}$ sub-algebra, we see from the
definition (2.18) of ${\mathcal{K}}_{\omega}$ that
${\mathcal{K}}_{\omega}\subset{\rm Ker}\,\omega_{R}$. As such, the map:
$\bar{\omega}:{\mathcal{B}}_{\omega}\rightarrow{\mathbb{C}},\quad\bar{\omega}(a_{R}+{\mathcal{K}}_{\omega})=\omega_{R}(a_{R}),$
(2.30)
is well defined. Furthermore, $\bar{\omega}\circ q=\omega_{R}$ and the latter
is a completely bounded functional with c.b. norm 1. According to Proposition
2.7, this can be true if and only if $\bar{\omega}$ is completely contractive.
∎
We will denote by $\bar{\omega}_{n}$ the matrix amplifications of
$\bar{\omega}$. We can deduce some straightforward properties of these
maps. In particular, let
${\mathcal{D}}_{n}=M_{n}({\mathcal{A}}_{R})^{+}/M_{n}({\mathcal{K}}_{\omega})=q_{n}\big{(}M_{n}({\mathcal{A}}_{R})^{+}\big{)},$
(2.31)
which can be characterized more explicitly as
${\mathcal{D}}_{n}=\big{\\{}[a_{ij}+{\mathcal{K}}_{\omega}]\in
M_{n}({\mathcal{B}}_{\omega})\ |\ \exists\,k_{ij}\in{\mathcal{K}}_{\omega}\
s.t.\ [a_{ij}+k_{ij}]\in M_{n}({\mathcal{A}}_{R})^{+}\big{\\}}.$ (2.32)
Evidently, we have:
$\bar{\omega}_{n}({\mathcal{D}}_{n})\subset M_{n}({\mathbb{C}})^{+},\quad
n=1,2,\ldots.$ (2.33)
Hence, the space ${\mathcal{B}}_{\omega}$ carries more structure than its
operator space norms, and we present the $\omega$-reduced data as
$\big{(}{\mathcal{B}}_{\omega},\\{\|\cdot\|^{\rm osp}_{n}\\}_{n\geq
1},\\{{\mathcal{D}}_{n}\\}_{n\geq 1},\bar{\omega}\big{)}.$ (2.34)
Without extracting more information about ${\mathcal{K}}_{\omega}$, this is as
far as one can go with the reduction process, and things are far from
satisfactory. Indeed, the ${\mathcal{D}}_{n}$’s do not generate a matrix order
structure, in general, and such structure is essential for defining a
factorizing map ${\mathbb{E}}$ with correct properties and, as such, it is
also essential for the reconstruction block of the program. Here are a few
things that can go wrong:
* •
${\mathcal{D}}_{n}$’s may fail to be closed spaces;
* •
${\mathcal{D}}_{n}$’s may not be compatible, in the sense that the relation
$A^{\ast}{\mathcal{D}}_{n}A\subseteq{\mathcal{D}}_{m}$ may fail for $A$ an
ordinary $n\times m$ matrix;
* •
The intersections ${\mathcal{D}}_{n}\cap(-{\mathcal{D}}_{n})$ might contain
elements other than $0$.
Of course, there are states for which ${\mathcal{D}}_{n}$’s do supply matrix
order structures and explicit sufficient conditions will be supplied in the
next section. We, however, plan to develop our analysis in the most general
conditions possible.
## 3\. Reduction Process
In this section, we identify the space of the admissible input data
$({\mathcal{B}}_{\omega},\bar{\omega})$ for the reconstruction process.
Specifically, we demonstrate that the reduced space ${\mathcal{B}}_{\omega}$
always accepts the structure of an operator system and that the reduced map
$\bar{\omega}$ is a completely positive functional relative to this structure.
Since any operator system accepts a concrete representation as a linear sub-
space of a $C^{\ast}$-algebra and any completely positive functional can be
extended to this $C^{\ast}$-algebra, the assumption in [6] that
${\mathcal{B}}_{\omega}$ is a $C^{\ast}$-algebra is now fully justified.
The proof consists of tying together several concepts and results from the
literature, due to Kavruk, Paulsen, Todorov and Tomforde [10, 11, 7, 8]. We
will take this opportunity to give the reader a brisk walk through the space of
these ideas, which supply the natural framework and the right tools for the
problem at hand, something that we still contemplate with amazement. In the
process, one will hear about ordered $\ast$-vector spaces, (Archimedean) order
units, order semi-norms and topologies, as well as order ideals and their
quotients [10]. One will also hear about matrix ordered $\ast$-vector spaces,
(Archimedean) matrix order units and a conceptual refinement of the order
ideal, which is the kernel introduced in [8]. The latter has the remarkable
property that any quotient by it is automatically a matrix ordered
$\ast$-vector space with an Archimedean matrix order unit. This, together with
the abstract characterization of operator systems by Choi and Effros [4],
reduces our proof to demonstrating that ${\mathcal{K}}_{\omega}$ is a kernel.
### 3.1. Ordered vector spaces
This material is entirely collected from [10].
###### Definition 3.1 ([10], p. 1322).
If ${\mathcal{V}}$ is a real vector space, a cone in ${\mathcal{V}}$ is a
nonempty subset ${\mathcal{C}}\subseteq{\mathcal{V}}$ with the following two properties:
1. 1)
$av\in{\mathcal{C}}$ whenever $a\in[0,\infty)$ and $v\in{\mathcal{C}}$;
2. 2)
$v+w\in{\mathcal{C}}$ whenever $v,w\in{\mathcal{C}}$.
An ordered vector space is a pair $({\mathcal{V}},{\mathcal{V}}^{+})$
consisting of a real vector space ${\mathcal{V}}$ and a cone
${\mathcal{V}}^{+}\subseteq{\mathcal{V}}$ satisfying
${\mathcal{V}}^{+}\cap(-{\mathcal{V}}^{+})=\\{0\\}$.
###### Remark 3.2.
If $({\mathcal{V}},{\mathcal{V}}^{+})$ is an ordered real vector space, one
writes $v\geq v^{\prime}$ if $v-v^{\prime}\in{\mathcal{V}}^{+}$. $\Diamond$
###### Definition 3.3 ([10], pp. 1323-1324).
If $({\mathcal{V}},{\mathcal{V}}^{+})$ is an ordered real vector space, an
element $e\in{\mathcal{V}}$ is called an order unit for ${\mathcal{V}}$ if,
for each $v\in{\mathcal{V}}$, there exists a real number $r>0$ such that
$re\geq v$. The order unit $e$ is called Archimedean if whenever
$v\in{\mathcal{V}}$ with $re+v\geq 0$ for all real $r>0$, then
$v\in{\mathcal{V}}^{+}$.
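To see that the Archimedean condition in Definition 3.3 is a genuine restriction, consider the following standard example, added here for orientation: on ${\mathcal{V}}={\mathbb{R}}^{2}$ take

```latex
\mathcal{V}^{+}=\{(x,y)\,:\,x>0\}\,\cup\,\{(0,y)\,:\,y\geq 0\},
   \qquad e=(1,0).
% e is an order unit: for v=(a,b), any r>a gives re-v=(r-a,-b)\in\mathcal{V}^{+}.
% e fails to be Archimedean: for v=(0,-1) one has re+v=(r,-1)\in\mathcal{V}^{+}
% for every r>0, and yet v\notin\mathcal{V}^{+}.
```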
###### Example 3.4 ([10], p. 1353).
The real vector space of self-adjoint elements of any unital
$C^{\ast}$-algebra is an ordered vector space with the unit playing the role
of Archimedean order unit. $\Diamond$
Of course, our interest is in order structures on complex vector spaces. In
this case, an extra structure is required.
###### Definition 3.5 ([10], p. 1337).
A $\ast$-vector space consists of a complex vector space ${\mathcal{V}}$
together with a map $\ast:{\mathcal{V}}\rightarrow{\mathcal{V}}$ that is
involutive, $(v^{\ast})^{\ast}=v$ for all $v\in{\mathcal{V}}$, and conjugate
linear. If ${\mathcal{V}}$ is a $\ast$-vector space, then ${\mathcal{V}}_{\rm
h}=\\{v\in{\mathcal{V}}\ |\ v^{\ast}=v\\}$ represents the set of hermitean
elements of ${\mathcal{V}}$.
###### Definition 3.6 ([10], p. 1337).
If ${\mathcal{V}}$ is a $\ast$-vector space, one says that
$({\mathcal{V}},{\mathcal{V}}^{+})$ is an ordered $\ast$-vector space if
$({\mathcal{V}}_{\rm h},{\mathcal{V}}^{+})$ is an ordered real vector space.
Furthermore, $e\in{\mathcal{V}}$ is an (Archimedean) order unit for
$({\mathcal{V}},{\mathcal{V}}^{+})$ if it is an (Archimedean) order unit for
$({\mathcal{V}}_{\rm h},{\mathcal{V}}^{+})$.
###### Definition 3.7 ([10], p. 1337).
Let $({\mathcal{V}},{\mathcal{V}}^{+})$ be an ordered $\ast$-vector space with
order unit $e$ and let $({\mathcal{W}},{\mathcal{W}}^{+})$ be an ordered
$\ast$-vector space with order unit $e^{\prime}$. A linear map
$\varphi:{\mathcal{V}}\rightarrow{\mathcal{W}}$ is positive if
$v\in{\mathcal{V}}^{+}$ implies $\varphi(v)\in{\mathcal{W}}^{+}$, and unital
if $\varphi(e)=e^{\prime}$.
Order structures can be used to generate topologies:
###### Definition 3.8 ([10], p. 1327).
Let $({\mathcal{V}},{\mathcal{V}}^{+})$ be an ordered real vector space with
order unit $e$. The order semi-norm on ${\mathcal{V}}$ determined by $e$ is
defined as:
$\llbracket v\rrbracket=\inf\\{r\in{\mathbb{R}}\,|\,re+v\geq 0\ \mbox{and}\
re-v\geq 0\\}.$ (3.1)
The order topology on ${\mathcal{V}}$ is the topology induced by the order
semi-norm.
The following statement supplies a complete characterization of the order
seminorm:
###### Theorem 3.9 ([10], p. 1330).
Let $({\mathcal{V}},{\mathcal{V}}^{+})$ be an ordered real vector space with
order unit $e$. Then the order seminorm $\llbracket\cdot\rrbracket$ is the
unique seminorm on ${\mathcal{V}}$ satisfying simultaneously the following
three conditions:
1. 1)
$\llbracket e\rrbracket=1$;
2. 2)
If $-v^{\prime}\leq v\leq v^{\prime}$, then $\llbracket
v\rrbracket\leq\llbracket v^{\prime}\rrbracket$;
3. 3)
If $f:{\mathcal{V}}\rightarrow{\mathbb{R}}$ is a state, then
$|f(v)|\leq\llbracket v\rrbracket$.
When the order unit is Archimedean, then $\llbracket\cdot\rrbracket$ is
actually a norm and the order topology is Hausdorff. The converse, however,
is not necessarily true (see [10][p. 1328]). Nevertheless, the Archimedean
case can be characterized as follows:
###### Theorem 3.10 ([10], p. 1330).
Let $({\mathcal{V}},{\mathcal{V}}^{+})$ be an ordered real vector space with
order unit $e$, and let $\llbracket\cdot\rrbracket$ be the order semi-norm
determined by $e$. Then the following are equivalent:
1. i)
$e$ is Archimedean;
2. ii)
${\mathcal{V}}^{+}$ is a closed subset of ${\mathcal{V}}$ in the order
topology induced by $\llbracket\cdot\rrbracket$;
3. iii)
$-\llbracket v\rrbracket\,e\leq v\leq\llbracket v\rrbracket\,e$ for all
$v\in{\mathcal{V}}$.
###### Remark 3.11 ([10], Sec. 4).
The order semi-norm on the hermitean sub-space of an ordered $\ast$-vector
space with unit can be extended over the entire complex space, in an essentially
unique way. $\Diamond$
###### Remark 3.12.
$C^{\ast}$-algebras are extremely special cases where the $C^{\ast}$-norm
coincides with the order norm. $\Diamond$
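The coincidence mentioned in Remark 3.12 follows from a short spectral argument, which we record for the reader's convenience: if $a=a^{\ast}$ is an element of a unital $C^{\ast}$-algebra, then $\sigma(a)\subseteq[-\|a\|,\|a\|]$ and positivity can be read off the spectrum, so

```latex
\|a\|\,1\pm a\;\geq\;0
   \quad\Longrightarrow\quad \llbracket a\rrbracket\leq\|a\|,
\qquad
r\,1\pm a\;\geq\;0
   \quad\Longrightarrow\quad \sigma(a)\subseteq[-r,r]
   \quad\Longrightarrow\quad r\geq\|a\|,
```

hence $\llbracket a\rrbracket=\|a\|$ on the hermitean part.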
###### Definition 3.13 ([10], p. 1341).
If $({\mathcal{V}},{\mathcal{V}}^{+})$ is an ordered $\ast$-vector space, then
a subspace ${\mathcal{J}}\subseteq{\mathcal{V}}$ is called an order ideal if
${\mathcal{J}}$ is self-adjoint (${\mathcal{J}}^{\ast}={\mathcal{J}})$ and,
furthermore, $v\in{\mathcal{J}}\cap{\mathcal{V}}^{+}$ and $0\leq
v^{\prime}\leq v$ implies that $v^{\prime}\in{\mathcal{J}}$.
###### Proposition 3.14 ([10], p. 1342).
Let $({\mathcal{V}},{\mathcal{V}}^{+})$ be an ordered $\ast$-vector space with
order unit $e$ and let ${\mathcal{J}}\subset{\mathcal{V}}$ be an order ideal. Then
$({\mathcal{V}}/{\mathcal{J}},{\mathcal{V}}^{+}/{\mathcal{J}})$ is an ordered
$\ast$-vector space with order unit $e+{\mathcal{J}}$.
### 3.2. Entanglement kernel is an order ideal
###### Proposition 3.15.
The entanglement kernel is self-adjoint:
${\mathcal{K}}_{\omega}^{\ast}={\mathcal{K}}_{\omega}$.
###### Proof.
We have:
$\omega_{x}(a_{R}^{\ast})=\omega(x\,a_{R}^{\ast})=\omega(a_{R}x^{\ast})^{\ast}=\omega(x^{\ast}a_{R})^{\ast},$
(3.2)
for all $\ x\in{\mathcal{A}}_{L}$. Hence, if $a_{R}\in{\mathcal{K}}_{\omega}$,
then $\omega(xa_{R}^{\ast})=0$ for any $x\in{\mathcal{A}}_{L}$. As a
consequence, $a_{R}^{\ast}\in{\mathcal{K}}_{\omega}$.∎
###### Proposition 3.16.
The entanglement kernel can be equivalently defined as
${\mathcal{K}}_{\omega}=\bigcap_{x\in{\mathcal{A}}_{L}^{+}}{\rm
Ker}\,\omega_{x}.$ (3.3)
Compared to (2.18), the intersection in (3.3) runs over the smaller set of
positive elements.
###### Proof.
Let ${\mathcal{K}}^{\prime}_{\omega}$ denote the set on the right-hand side of
(3.3). Clearly,
${\mathcal{K}}_{\omega}\subseteq{\mathcal{K}}^{\prime}_{\omega}$. Consider now
an $a_{R}$ from ${\mathcal{K}}^{\prime}_{\omega}$ and let $x$ be an arbitrary
element from ${\mathcal{A}}_{L}$. The latter can always be decomposed as
$x=(x_{1}^{+}-x_{2}^{+})+\imath(x_{3}^{+}-x_{4}^{+})$ in terms of positive
elements $x_{i}^{+}\in{\mathcal{A}}_{L}^{+}$, though this decomposition is not
unique. Nevertheless, from (3.3), we know that $\omega(x_{i}^{+}a_{R})=0$ for
all $i=\overline{1,4}$. Since $\omega$ is a linear map, we can conclude that
$\omega(x\,a_{R})=0$, hence $a_{R}$ also belongs to the set defined in (2.18).∎
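The decomposition invoked in the proof is the standard one and can be made explicit; the following is a routine sketch, with the positive and negative parts $h_{k}^{\pm}$ supplied by the continuous functional calculus:

```latex
% Split x into hermitean components:
x = h_1 + \imath\, h_2, \qquad
h_1 = \tfrac{1}{2}(x + x^{\ast}), \qquad
h_2 = \tfrac{1}{2\imath}(x - x^{\ast}),
% then split each hermitean component into positive and negative parts:
h_k = h_k^{+} - h_k^{-}, \qquad
h_k^{\pm} = \tfrac{1}{2}\big(|h_k| \pm h_k\big) \in {\mathcal{A}}_L^{+},
% so that, taking x_1^+ = h_1^+, x_2^+ = h_1^-, x_3^+ = h_2^+, x_4^+ = h_2^-:
x = (x_1^{+} - x_2^{+}) + \imath\,(x_3^{+} - x_4^{+}).
```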
###### Proposition 3.17.
The entanglement kernel is an order ideal.
###### Proof.
Consider $k_{R}^{+}$ from ${\mathcal{K}}_{\omega}\cap{\mathcal{A}}_{R}^{+}$
and $a_{R}^{+}$ from ${\mathcal{A}}_{R}^{+}$, such that $a_{R}^{+}\leq
k_{R}^{+}$. Our task is to show that $a_{R}^{+}$ is automatically in
${\mathcal{K}}_{\omega}\cap{\mathcal{A}}_{R}^{+}$. For this, consider
$x^{+}\in{\mathcal{A}}_{L}^{+}$ and note that $x^{+}a_{R}^{+}$ is a positive
element of ${\mathcal{A}}_{\mathbb{Z}}$ because the two terms commute. Then:
$0\leq\omega(x^{+}a_{R}^{+})\leq\omega(x^{+}k_{R}^{+})=0,\quad\forall\
x^{+}\in{\mathcal{A}}_{L}^{+}.$ (3.4)
Proposition 3.16 then assures us that $a_{R}^{+}$ belongs to
${\mathcal{K}}_{\omega}\cap{\mathcal{A}}_{R}^{+}$.∎
From the above and Proposition 3.14, we can conclude that
$({\mathcal{B}}_{\omega},{\mathcal{D}}_{1})$ is an ordered space with order unit
$1+{\mathcal{K}}_{\omega}$. As such, ${\mathcal{B}}_{\omega}$ can be endowed
with an order seminorm $\llbracket\cdot\rrbracket$. The order unit, however,
is not Archimedean, in general. Nevertheless, we can show that the order
seminorm is in fact a norm.
###### Proposition 3.18.
Let $x\in{\mathcal{A}}_{L}^{+}$ such that $\omega(x)=0$. Then ${\rm
Ker}\,\omega_{x}={\mathcal{A}}_{R}$.
###### Proof.
By renormalizing $x$ by its norm, we can assume $\|x\|=1$. From the Cauchy-
Schwarz inequality, we have:
$|\omega(xa_{R})|^{2}\leq\omega(x^{2})\,\omega(a_{R}^{\ast}a_{R}),\quad\forall\
a_{R}\in{\mathcal{A}}_{R}.$ (3.5)
Since $\|x\|\leq 1$, we have $x^{2}\leq x$, hence
$0\leq\omega(x^{2})\leq\omega(x)=0$. Then (3.5) assures us that
$\omega_{x}(a_{R})=0$ for all $a_{R}\in{\mathcal{A}}_{R}$.∎
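The step $x^{2}\leq x$ used above is a standard consequence of the continuous functional calculus; a short justification, under the normalization $\|x\|=1$ assumed in the proof:

```latex
% x is positive with spectrum contained in [0, ||x||] ⊆ [0, 1], and
\sigma(x) \subseteq [0,1], \qquad f(t) := t - t^{2} \geq 0 \ \text{ on } [0,1],
% hence, by the continuous functional calculus:
x - x^{2} = f(x) \geq 0, \qquad \text{i.e.} \quad x^{2} \leq x.
```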
###### Corollary 3.19.
The entanglement kernel can be equivalently defined as
${\mathcal{K}}_{\omega}=\bigcap_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)\neq
0}{\rm Ker}\,\omega_{x}.$ (3.6)
###### Proof.
Indeed,
$\displaystyle\bigcap_{x\in{\mathcal{A}}_{L}^{+}}{\rm Ker}\,\omega_{x}$
$\displaystyle=\Big{(}\bigcap_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)\neq
0}{\rm
Ker}\,\omega_{x}\Big{)}\bigcap\Big{(}\bigcap_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)=0}{\rm
Ker}\,\omega_{x}\Big{)}$ (3.7)
$\displaystyle=\Big{(}\bigcap_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)\neq
0}{\rm Ker}\,\omega_{x}\Big{)}\bigcap{\mathcal{A}}_{R}$
and the statement follows.∎
The value of the last statement rests in the observation that all positive
functionals $\omega_{x}$ entering the new definition (3.6) of
${\mathcal{K}}_{\omega}$ can be normalized by $\omega_{x}(1)$ and, hence,
transformed into states. Then:
###### Proposition 3.20.
The entanglement kernel is a kernel in the sense of Definition 3.2 in [8].
Explicitly, the entanglement kernel is the intersection of the kernels of a
family of states:
${\mathcal{K}}_{\omega}=\bigcap_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)=1}{\rm
Ker}\,\omega_{x}.$ (3.8)
Note that $\omega_{x}$ is a state if $\omega(x)=1$.
###### Proof.
We have
$\displaystyle\bigcap_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)\neq 0}{\rm
Ker}\,\omega_{x}=\bigcap_{\alpha\in(0,\infty)}\Big{(}\,\bigcap_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)=\alpha}{\rm
Ker}\,\omega_{x}\Big{)}$ (3.9)
Obviously, ${\rm Ker}\,\omega_{x}={\rm Ker}\,\omega_{\alpha x}$ for all
$\alpha\in(0,\infty)$, hence,
$\displaystyle\bigcap_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)=\alpha}{\rm
Ker}\,\omega_{x}=\bigcap_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)=1}{\rm
Ker}\,\omega_{x},\quad\forall\ \alpha\in(0,\infty),$ (3.10)
because both the left and right sides sample the same subsets of
${\mathcal{A}}_{R}$. ∎
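A short verification of the claim that $\omega_{x}$ is a state whenever $x\in{\mathcal{A}}_{L}^{+}$ and $\omega(x)=1$, using the fact that the elements of ${\mathcal{A}}_{L}$ and ${\mathcal{A}}_{R}$ commute:

```latex
% Unitality:
\omega_x(1) = \omega(x\,1) = \omega(x) = 1.
% Positivity: for a_R ≥ 0, since x^{1/2} commutes with a_R^{1/2},
x\, a_R = x^{1/2}\, a_R\, x^{1/2}
        = \big(a_R^{1/2} x^{1/2}\big)^{\ast}\big(a_R^{1/2} x^{1/2}\big) \geq 0,
% hence
\omega_x(a_R) = \omega(x\, a_R) \geq 0.
```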
In our opinion, it is quite remarkable that the entanglement kernel introduced
in [6] can be connected to a concept introduced two decades later. This
connection is even more remarkable, given that:
###### Corollary 3.21 ([8], Lemma 3.3).
Since ${\mathcal{K}}_{\omega}$ is a kernel, the order seminorm on
${\mathcal{B}}_{\omega}$ is actually a norm.
We now want to spell out a simple and explicit condition that ensures that the
order structure $({\mathcal{B}}_{\omega},{\mathcal{D}}_{1})$ is Archimedean.
###### Proposition 3.22 ([8], Prop. 4.1).
The operator space and the order norms on ${\mathcal{B}}_{\omega}$ can be
characterized as
$\displaystyle\|a_{R}+{\mathcal{K}}_{\omega}\|^{\rm
osp}_{1}=\sup\big{\\{}\|\phi(a_{R})\|\ |$ $\displaystyle\
\phi:{\mathcal{A}}_{R}\rightarrow B(H),\
\phi({\mathcal{K}}_{\omega})=\\{0\\},$ (3.11) $\displaystyle\quad\phi\
\mbox{completely contractive}\big{\\}}$
and
$\displaystyle\llbracket
a_{R}+{\mathcal{K}}_{\omega}\rrbracket=\sup\big{\\{}\|\phi(a_{R})\|\ |$
$\displaystyle\ \phi:{\mathcal{A}}_{R}\rightarrow B(H),\
\phi({\mathcal{K}}_{\omega})=\\{0\\},$ (3.12) $\displaystyle\quad\phi\
\mbox{unital, completely positive}\big{\\}}$
where in each case $H$ runs through all Hilbert spaces.
As pointed out in Corollary 4.2 of [8], $\|\cdot\|^{\rm
osp}_{1}\geq\llbracket\cdot\rrbracket$ because every unital completely
positive map over a $C^{\ast}$-algebra is automatically completely contractive.
Now, since the $\omega_{x}$ are states whenever $\omega(x)=1$, they are
completely contractive as well as unital and completely positive and, as a result,
$\|a_{R}+{\mathcal{K}}_{\omega}\|^{\rm
osp}_{1}\geq\sup_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)=1}|\omega_{x}(a_{R})|$
(3.13)
and
$\llbracket
a_{R}+{\mathcal{K}}_{\omega}\rrbracket\geq\sup_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)=1}|\omega_{x}(a_{R})|.$
(3.14)
This leads us to proceed as follows.
###### Proposition 3.23.
The right-hand sides of these two inequalities define a map
$\Gamma:{\mathcal{B}}_{\omega}\rightarrow[0,\infty),\quad\Gamma(a_{R}+{\mathcal{K}}_{\omega})=\sup_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)=1}|\omega_{x}(a_{R})|,$
(3.15)
which is continuous and sub-linear
$\Gamma(\alpha b+\beta
b^{\prime})\leq\alpha\Gamma(b)+\beta\Gamma(b^{\prime}),\quad\forall\
b,b^{\prime}\in{\mathcal{B}}_{\omega},\ \ \alpha,\beta\in(0,\infty),$ (3.16)
as well as homogeneous
$\Gamma(\alpha b)=\alpha\Gamma(b),\quad\forall\ b\in{\mathcal{B}}_{\omega},\ \
\alpha\in(0,\infty).$ (3.17)
Furthermore $\Gamma({\mathcal{B}}_{\omega}\setminus\\{0\\})\subset(0,\infty)$.
###### Proof.
The map is well defined: if $a^{\prime}_{R}$ is another element from the
class of $a_{R}$, then $a^{\prime}_{R}-a_{R}\in{\rm Ker}\ \omega_{x}$ and
$\omega_{x}(a^{\prime}_{R})=\omega_{x}(a_{R})$ for all
$x\in{\mathcal{A}}_{L}^{+}$ with $\omega(x)=1$. Sub-linearity follows from the
linearity of each $\omega_{x}$ and from the “sub-linearity” of the sup
process. If $\alpha\in(0,\infty)$, then
$\sup_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)=1}|\omega_{x}(\alpha
a_{R})|=\sup_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)=1}\alpha|\omega_{x}(a_{R})|=\alpha\sup_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)=1}|\omega_{x}(a_{R})|$
(3.18)
and homogeneity follows. Lastly,
$a_{R}+{\mathcal{K}}_{\omega}\in{\mathcal{B}}_{\omega}\setminus\\{0\\}$ implies
that $a_{R}\notin{\mathcal{K}}_{\omega}$, hence, there exists at least one
$x\in{\mathcal{A}}_{L}^{+}$ with $\omega(x)=1$ such that
$\omega_{x}(a_{R})\neq 0$. The last statement then follows. ∎
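The “sub-linearity” of the sup process invoked in the proof amounts to the elementary estimate below, with all suprema taken over $x\in{\mathcal{A}}_{L}^{+}$ with $\omega(x)=1$:

```latex
\sup_x \big|\omega_x(\alpha\, a_R + \beta\, a_R')\big|
  \leq \sup_x \Big( \alpha\,|\omega_x(a_R)| + \beta\,|\omega_x(a_R')| \Big)
  \leq \alpha \sup_x |\omega_x(a_R)| + \beta \sup_x |\omega_x(a_R')|,
% which is precisely (3.16) after passing to the classes in B_ω.
```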
###### Proposition 3.24.
If the closure of the image of the unit ball of
$({\mathcal{B}}_{\omega},\|\cdot\|^{\rm osp}_{1})$ through $\Gamma$ does not
contain the origin of the real axis, then the order unit of
${\mathcal{B}}_{\omega}$ is Archimedean.
###### Proof.
With the stated assumption, there must exist a strictly positive constant $c$
such that $\Gamma(b)\geq c$, for all $b\in{\mathcal{B}}_{\omega}$ with
$\|b\|^{\rm osp}_{1}=1$. Since $\Gamma$ is homogeneous, this is the same as
$\Gamma(b)\geq c\|b\|^{\rm osp}_{1}$ for all $b\in{\mathcal{B}}_{\omega}$.
This together with (3.14) and Corollary 4.2 of [8] give
$\|b\|^{\rm osp}_{1}\geq\llbracket b\rrbracket\geq c\,\|b\|^{\rm osp}_{1}.$ (3.19)
We conclude that the operator space norm and the order norm are equivalent,
hence the two topologies coincide. Since ${\mathcal{D}}_{1}$ is closed in the
operator space topology, it is also closed in the order topology, and Theorem
3.10 assures us that the order unit is Archimedean.∎
###### Remark 3.25.
If ${\mathcal{B}}_{\omega}$ is finite dimensional, then all norms are
equivalent and the above statement holds without any assumption on $\Gamma$.
Nevertheless, let us acknowledge that, in this case, the stated condition on
$\Gamma$ is automatically satisfied. Indeed, the unit ball is compact if
${\mathcal{B}}_{\omega}$ is finite dimensional, hence, its image through
$\Gamma$ is a closed interval. Since this interval is contained in the open
interval $(0,\infty)$, the strictly positive constant $c$ mentioned in the
proof exists. $\Diamond$
In the light of the above observation, it is reasonable to claim that the
scenario described in Proposition 3.24 is the level immediately above that of
finite dimensional ${\mathcal{B}}_{\omega}$’s studied in [6] and, as such, it
is an interesting case to investigate. We will proceed, however, with the most
general case possible. It is interesting to note, though, that the condition
spelled out in Proposition 3.24 will appear again when we study certain asymptotic
properties under the shift map.
### 3.3. Operator systems and matrix ordered $\ast$-vector spaces
We collect here a number of fundamental concepts and statements related to
order structures on matrix amplifications.
###### Definition 3.26 ([9] p. 9).
A concrete operator system is a self-adjoint closed linear sub-space of a
unital $C^{\ast}$-algebra containing the unit.
A concrete operator system inherits a full order structure from the embedding
unital $C^{\ast}$-algebra. Indeed, if ${\mathcal{S}}\subseteq{\mathcal{A}}$ is
an operator system, then
${\mathcal{S}}^{+}={\mathcal{S}}\cap{\mathcal{A}}^{+}$ supplies a positive
cone. The matrix amplification of an operator system is again a linear
subspace of a $C^{\ast}$-algebra, which is the matrix amplification of the
embedding $C^{\ast}$-algebra. As such, the matrix amplifications of an
operator system come equipped with order structures too. This tower of order
structures puts a sharp distinction between the linear spaces that can be
isometrically embedded in $C^{\ast}$-algebras such that they contain the unit
or they do not. The extra structures can be described abstractly and
intrinsically.
###### Definition 3.27 ([9] p. 176).
Given a $\ast$-vector space ${\mathcal{V}}$, one says that ${\mathcal{V}}$ is
matrix-ordered provided that:
1. i)
For each $n$, we are given a cone ${\mathcal{C}}_{n}$ in
$M_{n}({\mathcal{V}})_{\rm h}$;
2. ii)
${\mathcal{C}}_{n}\cap(-{\mathcal{C}}_{n})=\\{0\\}$ for all $n$;
3. iii)
For every $n$ and $m$ and $A$ an $n\times m$ matrix, we have
$A^{\ast}{\mathcal{C}}_{n}A\subseteq{\mathcal{C}}_{m}$.
###### Definition 3.28.
Let $({\mathcal{V}},{\mathcal{V}}^{+})$ be a matrix-ordered $\ast$-vector
space with order unit $e$. Then $e$ is called a matrix order unit provided
$I_{n}\otimes e$ is an order unit for $M_{n}({\mathcal{V}})$, for each $n$.
Furthermore, $e$ is called Archimedean matrix order unit if $I_{n}\otimes e$
is an Archimedean order unit for $M_{n}({\mathcal{V}})$, for each $n$.
###### Definition 3.29 ([9] p. 176).
Given two matrix-ordered $\ast$-vector spaces ${\mathcal{V}}$ and
${\mathcal{V}}^{\prime}$ with cones ${\mathcal{C}}_{n}$ and
${\mathcal{C}}^{\prime}_{n}$, one calls a linear map
$\varphi:{\mathcal{V}}\rightarrow{\mathcal{V}}^{\prime}$ completely positive
provided that $[v_{ij}]\in{\mathcal{C}}_{n}$ implies that
$[\varphi(v_{ij})]\in{\mathcal{C}}^{\prime}_{n}$. One calls $\varphi$ a
complete order isomorphism if $\varphi$ is completely positive and it has an
inverse which is also completely positive.
The following result, due to Choi and Effros [4], supplies the abstract
characterization of operator systems.
###### Theorem 3.30 ([9] p. 177).
If ${\mathcal{V}}$ is a matrix-ordered $\ast$-vector space with an Archimedean
matrix order unit $e$, then there exists a Hilbert space $H$, an operator
system ${\mathcal{S}}\subseteq B(H)$, and a complete order isomorphism
$\varphi:{\mathcal{V}}\rightarrow{\mathcal{S}}$ with $\varphi(e)=I_{H}$.
Conversely, every concrete operator system ${\mathcal{S}}$ is a matrix-ordered
$\ast$-vector space with Archimedean matrix order unit, when equipped with the
matrix order inherited from the embedding $C^{\ast}$-algebra and with the
Archimedean matrix order unit $e=1$.
###### Definition 3.31.
A linear map between two abstract operator systems is called unital if it maps
the order unit into the order unit.
The following supplies an effective criterion for a map to be completely
positive. It will be used here very often.
###### Proposition 3.32 ([3], p. 18).
Let ${\mathcal{S}}$ and ${\mathcal{S}}^{\prime}$ be two operator systems and
$\varphi:{\mathcal{S}}\rightarrow{\mathcal{S}}^{\prime}$ be a linear unital
map. Then $\varphi$ is completely positive if and only if $\varphi$ is
completely contractive.
Stinespring's theorem [15], formulated below, supplies the structure of the
completely positive maps when the domain is a $C^{\ast}$-algebra.
###### Theorem 3.33 ([3], p. 18).
Let ${\mathcal{A}}$ be a unital $C^{\ast}$-algebra. A linear map
$\varphi:{\mathcal{A}}\rightarrow B(H)$ is completely positive if and only if
there is a Hilbert space $K$, a unital $\ast$-homomorphism
$\pi:{\mathcal{A}}\rightarrow B(K)$, and a bounded linear map $V:H\rightarrow
K$ such that $\varphi(a)=V^{\ast}\pi(a)V$ for all $a\in{\mathcal{A}}$. This
can be accomplished with $\|\varphi\|_{\rm cb}=\|V\|^{2}$, which also equals
$\|\varphi\|$. If $\varphi$ is unital, then we may take $V$ to be an isometry;
in this case, we may view $H\subseteq K$ and have $\varphi(a)=P_{H}\pi(a)|_{H}$.
Arveson's extension theorem [2], formulated below, tells us, among many other
things, that the above factorization also applies when the domain is an
operator system.
###### Theorem 3.34 ([3], p. 18).
If ${\mathcal{S}}$ is an operator subsystem of a unital $C^{\ast}$-algebra
${\mathcal{A}}$, and if $\varphi:{\mathcal{S}}\rightarrow B(H)$ is completely
positive, then there exists a completely positive map
$\hat{\varphi}:{\mathcal{A}}\rightarrow B(H)$ extending $\varphi$.
### 3.4. Completing the reduction process
The starting point here is the $\omega$-reduced data
$\big{(}{\mathcal{B}}_{\omega},\\{\|\cdot\|^{\rm osp}_{n}\\}_{n\geq
1},\\{{\mathcal{D}}_{n}\\}_{n\geq 1},\bar{\omega}\big{)}$ compiled in section
2.5. Along the way, we have discovered several important facts about its
structure. Indeed, since ${\mathcal{K}}_{\omega}$ is an order ideal, this data
already describes a matrix-ordered space with matrix order unit
$1+{\mathcal{K}}_{\omega}$ (see [8, p. 328]). As we have seen, the
matrix-order structure is not Archimedean, in general. There is, however, a
well understood Archimedeanization process that can be used to transform
${\mathcal{B}}_{\omega}$ into an operator system. It was developed in [10] for
ordered vector spaces and in [11] for matrix amplifications.
The Archimedeanization process involves two stages. In the first stage, one
quotients out closed linear spaces in order to transform the order seminorms
$\llbracket\cdot\rrbracket_{n}$ into norms. It is enough to run this process
for the ground level $n=1$, because that fixes the seminorm issue for all the
matrix amplifications (see [11, Lemma 3.14]). Given Corollary 3.21, we can
bypass this first stage and complete the Archimedeanization process without
reducing ${\mathcal{B}}_{\omega}$ and its matrix amplifications. This is
significant! In the second stage, one generates the smallest possible
expansions of ${\mathcal{D}}_{n}$’s to positive cones that ensure an
Archimedean structure. For spaces quotiented by kernels, this process was
described in [8, p. 328]:
###### Proposition 3.35 ([8], Prop. 3.4; [11] Prop. 3.16).
Let
$\displaystyle{\mathcal{C}}_{n}=\big{\\{}[a^{ij}_{R}+{\mathcal{K}}_{\omega}]\in
M_{n}({\mathcal{B}}_{\omega})\ |$ $\displaystyle\ \forall\,\epsilon>0\
\exists\,k_{ij}\in{\mathcal{K}}_{\omega}\ \mbox{such that}\ $ (3.20)
$\displaystyle\quad\epsilon 1\otimes I_{n}+[a^{ij}_{R}+k_{ij}]\in
M_{n}({\mathcal{A}}_{R})^{+}\big{\\}}.$
Then $({\mathcal{B}}_{\omega},\\{{\mathcal{C}}_{n}\\}_{n\geq 1})$ is a matrix-
ordered $\ast$-vector space with an Archimedean matrix order unit
$e=1+{\mathcal{K}}_{\omega}$. Furthermore, the quotient map
$q:{\mathcal{A}}_{R}\rightarrow{\mathcal{B}}_{\omega}$ is unital and
completely positive.
###### Remark 3.36.
Recall that each matrix space $M_{n}({\mathcal{B}}_{\omega})$ comes equipped
with an order norm, which we denote by $\|\cdot\|_{n}^{\rm osy}$ (as in [8]).
From the discussion in [11, p. 37], one learns that the ${\mathcal{C}}_{n}$’s
defined in Proposition 3.35 are just the closures of ${\mathcal{D}}_{n}$ in
the topology induced by these norms. Furthermore, the quotient map
$q:{\mathcal{A}}_{R}\rightarrow{\mathcal{B}}_{\omega}$ is unital and
completely contractive for $\|\cdot\|_{n}^{\rm osy}$. $\Diamond$
Using the Choi-Effros Theorem 3.30, we can now state that:
###### Corollary 3.37.
The quotient space with its Archimedean matrix-order structure
$({\mathcal{B}}_{\omega},\\{{\mathcal{C}}_{n}\\}_{n\geq 1},e)$ admits a
concrete representation as an operator system. In particular,
$({\mathcal{B}}_{\omega},\|\cdot\|_{1}^{\rm osy})$ can be unitally and
isometrically embedded in a $C^{\ast}$-algebra.
###### Remark 3.38.
Corollary 3.37 justifies the assumption made in [6] that
${\mathcal{B}}_{\omega}$ is a $C^{\ast}$-algebra. $\Diamond$
We now turn our attention to the reduced map $\bar{\omega}$. The
Archimedeanization described above enjoys a certain universal property, which
is described in [8, p. 329]. We can use this universal property to deduce that
the reduced map $\bar{\omega}$ enjoys the desired properties:
###### Proposition 3.39 ([8] Prop. 3.16).
Let ${\mathcal{T}}$ be an operator system and
$\varphi:{\mathcal{A}}_{R}\rightarrow{\mathcal{T}}$ be a unital and completely
positive map such that ${\mathcal{K}}_{\omega}\subset{\rm Ker}\,\varphi$. Then
the map $\bar{\varphi}:{\mathcal{B}}_{\omega}\rightarrow{\mathcal{T}}$ given
by $\bar{\varphi}(a_{R}+{\mathcal{K}}_{\omega})=\varphi(a_{R})$ is unital and
completely positive. In particular, $\bar{\omega}$ is a unital and completely
positive map from ${\mathcal{B}}_{\omega}$ to ${\mathbb{C}}$, hence a state.
From Stinespring's theorem [15] and Arveson's theorem (Theorems 3.33 and 3.34,
respectively), we can now spell out the structure of the reduced map $\bar{\omega}$:
###### Corollary 3.40.
There is a Hilbert space $H$ and an isometric embedding
$\rho:{\mathcal{B}}_{\omega}\rightarrow B(H)$, as well as a Hilbert space $K$,
a representation $\pi:B(H)\rightarrow B(K)$ and a vector $\zeta\in K$ such
that:
$\bar{\omega}(b)=\big{\langle}\zeta,\pi\big{(}\rho(b)\big{)}\zeta\big{\rangle},\quad
b\in{\mathcal{B}}_{\omega}.$ (3.21)
Furthermore, the map extends over the entire $B(H)$. Here,
${\mathcal{B}}_{\omega}$ is again equipped with the operator system norm.
###### Remark 3.41.
Such concrete representations were exploited to their full extent in [6]. We
want to point out, however, that many concrete examples, including the ones
analyzed in [6], can be worked out using standard maps such as embeddings,
quotients and shifts. $\Diamond$
At this point, we have found the right candidate for the $\omega$-reduced
data, which, as we shall see, encodes the minimal information needed to
reproduce $\omega$:
###### Definition 3.42.
The $\omega$-reduced data consists of the operator system
$({\mathcal{B}}_{\omega},\\{{\mathcal{C}}_{n}\\}_{n\geq 1},e)$ and the state
$\bar{\omega}:{\mathcal{B}}_{\omega}\to{\mathbb{C}}$. This data is entirely
and canonically determined by the original state $\omega$.
In Proposition 3.24, we supplied a simple condition that ensures that $\|\cdot\|^{\rm
osy}_{1}$ and $\|\cdot\|^{\rm osp}_{1}$ are equivalent. Similar conditions can be
found for the matrix amplifications. Indeed, Proposition 3.22 admits the
following generalization:
###### Proposition 3.43 ([8], Prop. 4.1).
The operator space and the order norms on ${\mathcal{B}}_{\omega}$ can be
characterized as
$\displaystyle\|[a_{R}^{ij}+{\mathcal{K}}_{\omega}]\|^{\rm
osp}_{n}=\sup\big{\\{}\|[\phi(a_{R}^{ij})]\|\ |$ $\displaystyle\
\phi:{\mathcal{A}}_{R}\rightarrow B(H),\
\phi({\mathcal{K}}_{\omega})=\\{0\\},$ (3.22) $\displaystyle\quad\phi\
\mbox{completely contractive}\big{\\}}$
and
$\displaystyle\|[a_{R}^{ij}+{\mathcal{K}}_{\omega}]\|^{\rm
osy}_{n}=\sup\big{\\{}\|[\phi(a_{R}^{ij})]\|\ |$ $\displaystyle\
\phi:{\mathcal{A}}_{R}\rightarrow B(H),\
\phi({\mathcal{K}}_{\omega})=\\{0\\},$ (3.23) $\displaystyle\quad\phi\
\mbox{unital, completely positive}\big{\\}}$
where in each case $H$ runs through all Hilbert spaces.
As before, a direct consequence of the above is that $\|\cdot\|^{\rm
osy}_{n}\leq\|\cdot\|^{\rm osp}_{n}$. Another consequence is that:
$\|[a_{R}^{ij}+{\mathcal{K}}_{\omega}]\|^{\rm
osy}_{n}\geq\sup_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)=1}\|[\omega_{x}(a_{R}^{ij})]\|_{M_{n}}$
(3.24)
for any $[a_{R}^{ij}+{\mathcal{K}}_{\omega}]\in
M_{n}({\mathcal{B}}_{\omega})$. This prompts us to define:
###### Proposition 3.44.
The map
$\Gamma_{n}:M_{n}({\mathcal{B}}_{\omega})\rightarrow[0,\infty),\quad\Gamma_{n}([a_{R}^{ij}+{\mathcal{K}}_{\omega}])=\sup_{x\in{\mathcal{A}}_{L}^{+}}^{\omega(x)=1}\|[\omega_{x}(a_{R}^{ij})]\|_{M_{n}},$
(3.25)
is continuous, sub-linear and homogeneous. Furthermore,
$\Gamma_{n}(M_{n}({\mathcal{B}}_{\omega})\setminus\\{0\\})\subset(0,\infty)$.
###### Proposition 3.45.
If the closure of the image of the unit ball of
$(M_{n}({\mathcal{B}}_{\omega}),\|\cdot\|^{\rm osp}_{n})$ through $\Gamma_{n}$
does not contain the origin of the real axis, then $\|\cdot\|^{\rm osp}_{n}$
and $\|\cdot\|^{\rm osy}_{n}$ are equivalent and ${\mathcal{D}}_{n}$ coincides
with ${\mathcal{C}}_{n}$.
###### Remark 3.46.
In the language introduced in [8, p. 334], the entanglement kernel
${\mathcal{K}}_{\omega}$ becomes completely order proximinal under the
conditions of Propositions 3.24 and 3.45. It would be interesting to establish
whether these conditions are optimal for ${\mathcal{K}}_{\omega}$ to enjoy
this property. $\Diamond$
Let us point out that, if ${\mathcal{B}}_{\omega}$ is finite dimensional, then
any two norms are equivalent and the entanglement kernel is automatically
completely order proximinal.
## 4\. Factorization Process
The main conclusions of the previous section apply to generic states $\omega$
on the algebra of physical observables ${\mathcal{A}}_{\mathbb{Z}}$. In this
section, however, we start by assuming that the state is shift invariant,
$\omega\circ S=\omega$. Under these conditions, [6] defined a bi-linear map from
${\mathcal{A}}_{\mathbb{Z}}\times{\mathcal{B}}_{\omega}$ to
${\mathcal{B}}_{\omega}$, which proved to be of fundamental importance in the
analysis of one dimensional spin systems. In this section, we investigate the
properties of this bi-linear map and of its extensions to tensor products. In
a generic setting such as ours, such extensions exist canonically only for the
Haagerup tensor product of operator spaces. As such, we start with a brief
introduction of the latter.
### 4.1. Multi-linear maps and Haagerup tensor product
We review here a few fundamental concepts about completely bounded multi-
linear maps and their extensions to tensor products of operator spaces.
###### Definition 4.1 ([12] p. 28).
Consider two concrete operator spaces $E\subset B(H)$ and $F\subset B(K)$.
Then their minimal tensor product $E\otimes_{\rm min}F$ is defined as the
completion of the algebraic tensor product $E\otimes F$ with respect to the
norm induced by $B(H\otimes K)$ via the canonical embedding
$E\otimes F\subset B(H\otimes K).$ (4.1)
###### Example 4.2.
There is a completely isometric isomorphism $M_{n}(E)\simeq
M_{n}({\mathbb{C}})\otimes_{\rm min}E$, for any operator space $E$. $\Diamond$
Completely bounded linear maps behave naturally w.r.t. the minimal tensor
product.
###### Proposition 4.3 ([12], p. 30).
If $E_{i}$ and $F_{i}$ are operator spaces and if $u_{i}:E_{i}\rightarrow
F_{i}$ are completely bounded maps, then their algebraic tensor product
$u_{1}\otimes u_{2}$ extends to a completely bounded map
$u_{1}\otimes_{\rm min}u_{2}:E_{1}\otimes_{\rm min}E_{2}\rightarrow
F_{1}\otimes_{\rm min}F_{2},$ (4.2)
and
$\|u_{1}\otimes_{\rm min}u_{2}\|_{\rm cb}=\|u_{1}\|_{\rm cb}\ \|u_{2}\|_{\rm
cb}.$ (4.3)
The minimal tensor product was used exclusively in [6], but it will play no
role here. This is because the maps involved in the factorization process are
induced by multi-linear maps, which, as opposed to linear maps, do not behave
naturally under the minimal tensor product.
###### Definition 4.4 ([3], p. 362).
Let $E_{i}$ and $F$ be operator spaces. A bounded multi-linear map
$\tilde{\varphi}:E_{1}\times\ldots\times E_{m}\rightarrow F$ is a map that is
linear in each of its arguments and for which there exists a constant $C$ with
the property $\|\tilde{\varphi}(e_{1},\ldots,e_{m})\|\leq
C\|e_{1}\|\ldots\|e_{m}\|$ for all $e_{i}\in E_{i}$. The smallest constant $C$
with this property is denoted by $\|\tilde{\varphi}\|$ and defines the norm of
the multi-linear map.
###### Definition 4.5 ([3], p. 30).
Let $\tilde{\varphi}:E_{1}\times\ldots\times E_{m}\rightarrow F$ be a bounded
multi-linear map between operator spaces. Any such multi-linear map can be
amplified to a multi-linear map
$\tilde{\varphi}_{n}:M_{n}(E_{1})\times\ldots\times M_{n}(E_{m})\rightarrow
M_{n}(F)$ as
$\tilde{\varphi}_{n}([e^{1}_{ij}],\ldots,[e^{m}_{ij}])=\Big{[}\sum_{k_{1},\ldots,k_{m-1}=1}^{n}\tilde{\varphi}\big{(}e^{1}_{ik_{1}},e^{2}_{k_{1}k_{2}},\ldots,e^{m}_{k_{m-1}j}\big{)}\Big{]}.$
(4.4)
The norm of this map is computed as in Definition 4.4 using $\|\cdot\|_{n}$
norms on the matrix amplifications. A multi-linear map is called completely
bounded if $\|\tilde{\varphi}\|_{\rm cb}:=\sup_{n}\|\tilde{\varphi}_{n}\|<\infty$.
The c.b. multi-linear maps are naturally associated with the Haagerup tensor
product of operator spaces:
###### Definition 4.6 ([12], p. 86).
Let $E_{i}$ be operator spaces and ${\mathcal{K}}$ be the $C^{\ast}$-algebra
of compact operators over a separable Hilbert space. Then the rule
$(k_{1}\otimes e_{1})\odot(k_{2}\otimes
e_{2})=(k_{1}k_{2})\otimes(e_{1}\otimes e_{2})$ (4.5)
defines a unique bi-linear map from $({\mathcal{K}}\otimes
E_{1})\times({\mathcal{K}}\otimes E_{2})$ to
${\mathcal{K}}\otimes(E_{1}\otimes E_{2})$. Furthermore, for each
$x_{i}\in{\mathcal{K}}\otimes E_{i}$, one defines
$\alpha_{i}(x_{i})=\|x_{i}\|_{{\mathcal{K}}\otimes_{\rm min}E_{i}},\quad
i=1,2,$ (4.6)
and, for each $x\in{\mathcal{K}}\otimes(E_{1}\otimes E_{2})$,
$\alpha_{\rm
h}(x)=\inf\Big{\\{}\sum_{j=1}^{m}\alpha_{1}(x_{1}^{j})\,\alpha_{2}(x_{2}^{j})\Big{\\}},$
(4.7)
where the infimum is taken over all possible decompositions of $x$ as a finite
sum
$x=\sum_{j=1}^{m}x_{1}^{j}\odot x_{2}^{j},\quad m\geq 1.$ (4.8)
When ${\mathcal{K}}$ is restricted to $M_{n}({\mathbb{C}})$, $n\geq 1$,
$\alpha_{\rm h}$ supplies norms on the matrix amplifications that satisfy
Ruan’s conditions. The completion w.r.t. these norms supplies the Haagerup
tensor product $E_{1}\otimes_{\rm h}E_{2}$.
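At the ground level $n=1$, the Haagerup norm admits a well-known equivalent description, which we record here for orientation; for operator spaces $E_{i}\subset B(H_{i})$ and $u\in E_{1}\otimes E_{2}$:

```latex
\|u\|_{\rm h} \;=\; \inf \Big\{ \Big\| \sum_{j=1}^{m} e_1^{j}\, (e_1^{j})^{\ast} \Big\|^{1/2}
  \Big\| \sum_{j=1}^{m} (e_2^{j})^{\ast}\, e_2^{j} \Big\|^{1/2} \Big\},
% the infimum being taken over all finite representations
u = \sum_{j=1}^{m} e_1^{j} \otimes e_2^{j}, \qquad e_i^{j} \in E_i,
% with products and adjoints computed in the embedding B(H_i).
```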
The c.b. linear maps behave well w.r.t. Haagerup tensor product:
###### Proposition 4.7 ([12], p. 88).
If $E_{i}$ and $F_{i}$ are operator spaces and if $u_{i}:E_{i}\rightarrow
F_{i}$ are completely bounded maps, then their algebraic tensor product
$u_{1}\otimes u_{2}:E_{1}\otimes E_{2}\rightarrow F_{1}\otimes F_{2}$ (4.9)
extends to a c.b. linear map $u_{1}\otimes_{\rm h}u_{2}$ between
$E_{1}\otimes_{\rm h}E_{2}$ and $F_{1}\otimes_{\rm h}F_{2}$ and:
$\|u_{1}\otimes_{\rm h}u_{2}\|_{\rm cb}\leq\|u_{1}\|_{\rm cb}\ \|u_{2}\|_{\rm
cb}.$ (4.10)
The Haagerup tensor product enjoys several unique properties among the tensor
products, which makes it extremely suitable for the applications considered in
this work. The first one relates to extensions of multi-linear maps.
###### Proposition 4.8 ([3] p. 31).
Let $\tilde{\varphi}:E\times F\rightarrow G$ be bilinear map between operator
spaces and $\varphi:E\otimes F\rightarrow G$ be its corresponding linear map
on the algebraic tensor product. Then $\tilde{\varphi}$ is a completely
bounded bilinear map if and only if $\varphi$ extends to a completely bounded
linear map on $E\otimes_{\rm h}F$. Furthermore, if that is indeed the case,
then
$\|\tilde{\varphi}:E\times F\rightarrow G\|_{\rm cb}=\|\varphi:E\otimes_{\rm
h}F\rightarrow G\|_{\rm cb}.$ (4.11)
The second property relates to its unique behavior relative to taking sub-
spaces and quotients:
###### Proposition 4.9 ([12] p. 93; [3] p. 31).
Let $E_{i}$ and $F_{i}$ be operator spaces.
1. i)
Injectivity: If $E_{i}\subset F_{i}$ completely isometrically, then
$E_{1}\otimes_{\rm h}E_{2}\subset F_{1}\otimes_{\rm h}F_{2}$ completely
isometrically.
2. ii)
Projectivity: If $q_{i}:E_{i}\rightarrow F_{i}$ are complete quotient maps,
then $q_{1}\otimes q_{2}$ extends to a complete quotient map
$q_{1}\otimes_{\rm h}q_{2}:E_{1}\otimes_{\rm h}E_{2}\rightarrow
F_{1}\otimes_{\rm h}F_{2}$.
The following is a very useful characterization of the kernel of product maps,
with an important corollary stated below:
###### Proposition 4.10 ([5] Prop. 3).
Let $q_{i}:E_{i}\rightarrow F_{i}$ be complete quotient maps. Then
${\rm Ker}\,q_{1}\otimes_{\rm h}q_{2}=\overline{{\rm Ker}\,q_{1}\otimes_{\rm
h}E_{2}+E_{1}\otimes_{\rm h}{\rm Ker}\,q_{2}}.$ (4.12)
###### Corollary 4.11.
Let $E$ and $F$ be operator spaces and $G\subset F$ an operator sub-space.
Then $E\otimes_{\rm h}G$ embeds completely isometrically in $E\otimes_{\rm
h}F$, and $(E\otimes_{\rm h}F)/(E\otimes_{\rm h}G)$ and $E\otimes_{\rm h}(F/G)$
are completely isometrically isomorphic.
The last property that we will exploit is the natural embedding of the
Haagerup product into free products of $C^{\ast}$-algebras:
###### Proposition 4.12 ([12] p. 98).
Let $E_{i}$ be operator spaces with completely isometric embeddings
$E_{i}\subset{\mathcal{A}}_{i}$ into unital $C^{\ast}$-algebras. Then
$E_{1}\otimes_{\rm h}\ldots\otimes_{\rm h}E_{N}$ admits a completely isometric
embedding into the $C^{\ast}$-algebra supplied by the free product
${\mathcal{A}}_{1}\star\ldots\star{\mathcal{A}}_{N}$.
This explicit representation of the Haagerup tensor product will be essential
for the presentation of the states as operator products. The following direct
consequence of Proposition 4.12 will also play an essential role:
###### Corollary 4.13.
If $E_{i}$ are operator systems, then $E_{1}\otimes_{\rm h}\ldots\otimes_{\rm
h}E_{N}$ is an operator system too.
### 4.2. Generating bi-linear map
The setting here is the same as in section 2.3 and $\omega$ is assumed shift
invariant. In this section, the class $a_{R}+{\mathcal{K}}_{\omega}$ in
${\mathcal{B}}_{\omega}$ is denoted by $\hat{a}_{R}$ and
${\mathcal{B}}_{\omega}$ is considered equipped with its canonical operator
space structure $({\mathcal{B}}_{\omega},\{\|\cdot\|^{\rm osy}_{n}\}_{n\geq 1})$. We will continue to denote the quotient map from ${\mathcal{A}}_{R}$ to
${\mathcal{B}}_{\omega}$ by $q$. We recall that $q$ is unital and completely
contractive.
In the following, we are going to exploit the canonical $C^{\ast}$-algebra
isomorphisms $\alpha_{i}:{\mathcal{A}}_{i}\to{\mathcal{A}}$ and define two
additional $C^{\ast}$-algebra isomorphisms:
$\xi_{R}:{\mathcal{A}}\otimes{\mathcal{A}}_{R}\mathrel{\rightarrowtail\kern-5.72635pt\twoheadrightarrow}{\mathcal{A}}_{R},\quad\xi_{R}(a\otimes
a_{R}):=S(\alpha_{0}^{-1}(a)\otimes a_{R})$ (4.13)
and
$\xi_{L}:{\mathcal{A}}_{L}\otimes{\mathcal{A}}\mathrel{\rightarrowtail\kern-5.72635pt\twoheadrightarrow}{\mathcal{A}}_{L},\quad\xi_{L}(a_{L}\otimes
a):=S^{-1}\big{(}a_{L}\otimes\alpha_{1}^{-1}(a)\big{)},$ (4.14)
where the conventions set in section 2.3 were used to full extent. We recall
that, for $C^{\ast}$-algebras, the isomorphisms are automatically isometric.
The following identity will come in handy:
$a_{L}\otimes\xi_{R}(a\otimes a_{R})=S\big{(}\xi_{L}(a_{L}\otimes a)\otimes
a_{R}\big{)},$ (4.15)
for any $a_{L,R}\in{\mathcal{A}}_{L,R}$ and $a\in{\mathcal{A}}$. Here, $S$ is
the shift automorphism on ${\mathcal{A}}_{\mathbb{Z}}$.
We now proceed to define what we call the generating map. As in [6], it
is introduced here by extending a bi-linear map:
###### Proposition 4.14.
The relation
${\mathbb{E}}:{\mathcal{A}}\times{\mathcal{B}}_{\omega}\rightarrow{\mathcal{B}}_{\omega},\quad{\mathbb{E}}\big{(}a,\hat{a}_{R}\big{)}:=(q\circ\xi_{R})(a\otimes
a_{R}),$ (4.16)
is a well defined bi-linear map.
###### Remark 4.15.
The map will be written formally as ${\mathbb{E}}(a,\hat{a}_{R})=\lfloor
a\otimes a_{R}\rfloor$. $\Diamond$
###### Proof.
Consider $a_{R}$ and $a^{\prime}_{R}$ from the same class of
${\mathcal{B}}_{\omega}$. We need to show that $\xi_{R}(a\otimes a_{R})$ and
$\xi_{R}(a\otimes a^{\prime}_{R})$ belong to the same class of
${\mathcal{B}}_{\omega}$, for all $a\in{\mathcal{A}}$, more precisely, that
$\omega_{x}\big{(}\xi_{R}(a\otimes(a_{R}-a^{\prime}_{R}))\big{)}=0\ \ \forall\
x\in{\mathcal{A}}_{L}.$ (4.17)
It follows from identity (4.15) and shift invariance of the state $\omega$
that
$\omega_{x}\big{(}\xi_{R}(a\otimes(a_{R}-a^{\prime}_{R}))\big{)}=\omega_{\xi_{L}(x\otimes
a)}(a_{R}-a^{\prime}_{R}),$ (4.18)
which is indeed identically zero for all $x\in{\mathcal{A}}_{L}$. Bi-linearity
is evident.∎
###### Remark 4.16.
It is important to acknowledge that ${\mathbb{E}}$ is entirely determined by
the parent state $\omega$ and that the existence of such a map depends
crucially on the shift invariance of the state. $\Diamond$
###### Proposition 4.17.
${\mathbb{E}}$ is a completely contractive bi-linear map.
###### Proof.
We recall that the bi-linear map
${\mathcal{A}}\times{\mathcal{A}}_{R}\ni(a,a_{R})\mapsto a\otimes
a_{R}\in{\mathcal{A}}\otimes_{\rm h}{\mathcal{A}}_{R}$ (4.19)
is completely contractive [3][p. 30]. Furthermore, the identity map on the
algebraic tensor product extends to a completely contractive map
$j:{\mathcal{A}}\otimes_{\rm
h}{\mathcal{A}}_{R}\rightarrowtail{\mathcal{A}}\otimes_{\rm
min}{\mathcal{A}}_{R}$ [12][p. 88]. This enables us to define the completely
contractive bi-linear map
$\phi:{\mathcal{A}}\times{\mathcal{A}}_{R}\rightarrow{\mathcal{B}}_{\omega},\quad\phi=q\circ\xi_{R}\circ j\circ\otimes,$ (4.20)
which satisfies the relation ${\mathbb{E}}\circ({\rm id}\times q)=\phi$. Indeed, note that $j$ acts as the identity on monomials. A similar relation holds between the matrix amplifications,
relation holds between the matrix amplifications,
${\mathbb{E}}_{n}\circ({\rm id}_{n}\times q_{n})=\phi_{n}.$ (4.21)
Indeed, the matrix amplification of (4.19) is given by
$\big{(}[a_{ij}],[a_{R}^{ij}]\big{)}\mapsto\Big{[}\sum_{k}a_{ik}\otimes
a_{R}^{kj}\Big{]}$ (4.22)
and
${\mathbb{E}}_{n}\big{(}[a_{ij}],[\hat{a}_{R}^{ij}]\big{)}=\Big{[}\sum_{k}a_{ik}\otimes\hat{a}_{R}^{kj}\Big{]}.$
(4.23)
Then the relation in (4.21) is evident. Based on these identities,
$\displaystyle{\mathbb{E}}_{n}\big{(}[a_{ij}],[\hat{a}_{R}^{ij}]\big{)}={\mathbb{E}}_{n}\big{(}[a_{ij}],q_{n}\big{(}[a_{R}^{ij}]+[k_{ij}]\big{)}\big{)}=\phi_{n}\big{(}[a_{ij}],[a_{R}^{ij}]+[k_{ij}]\big{)},$
(4.24)
where $[k_{ij}]$ is an arbitrary element from $M_{n}({\mathcal{K}}_{\omega})$.
Finally, we use the fact that each $\phi_{n}$ is a contraction,
$\|{\mathbb{E}}_{n}\big{(}[a_{ij}],[a_{R}^{ij}]\big{)}\|^{\rm
osy}_{n}\leq\|[a_{ij}]\|_{n}\,\|[a_{R}^{ij}]+[k_{ij}]\|_{n},$ (4.25)
where $\|\cdot\|_{n}$ are the matrix amplifications of the $C^{\ast}$-norms.
We conclude that
$\|{\mathbb{E}}_{n}\big{(}[a_{ij}],[a_{R}^{ij}]\big{)}\|^{\rm osy}_{n}\leq\|[a_{ij}]\|_{n}\,\inf_{[k_{ij}]\in M_{n}({\mathcal{K}}_{\omega})}\|[a_{R}^{ij}]+[k_{ij}]\|_{n},$ (4.26)
and the statement follows.∎
A direct consequence of the above and Proposition 4.8 is:
###### Corollary 4.18.
${\mathbb{E}}$ extends to a unital completely contractive linear map from
${\mathcal{A}}\otimes_{\rm h}{\mathcal{B}}_{\omega}$ to
${\mathcal{B}}_{\omega}$.
The extension mentioned above will be denoted by the same symbol
${\mathbb{E}}$ and be called the generating map. We recall Corollary 4.13,
which assures us that ${\mathcal{A}}\otimes_{\rm h}{\mathcal{B}}_{\omega}$ has
the structure of an operator system. Then, according to Proposition 3.32,
${\mathbb{E}}$ is a completely positive map of operator systems. The
Stinespring [15] and Arveson theorems (Theorems 3.33 and 3.34, respectively)
then enable us to pin down the general structure of ${\mathbb{E}}$. This will
be discussed in great detail in sub-section 5.2.
###### Example 4.19.
Let us consider the product state discussed in Example 2.15, where we found
${\mathcal{B}}_{\omega}={\mathcal{A}}/{\rm Ker}\,\omega_{0}$. Since
$a\otimes a^{\prime}=\omega_{0}(a)\,a^{\prime}+\big{(}a\otimes
a^{\prime}-\omega_{0}(a)\,a^{\prime}\big{)},\quad
a,a^{\prime}\in{\mathcal{A}},$ (4.27)
and the last term belongs to ${\rm Ker}\,\omega_{0}$, we have
${\mathbb{E}}(a\otimes\lfloor a^{\prime}\rfloor)=\lfloor a\otimes
a^{\prime}\rfloor=\omega_{0}(a)\lfloor a^{\prime}\rfloor.$ (4.28)
If ${\mathcal{B}}_{\omega}\subset B(H_{2})$ and $H_{1}$ is the Hilbert space
associated to the representation $\pi$ of ${\mathcal{A}}$ generated by
$\omega_{0}$, then we can take
$V:H_{2}\rightarrow H_{1}\otimes H_{2},\quad V(\chi)=\xi\otimes\chi,$ (4.29)
with $\xi\in H_{1}$ such that $\omega_{0}(a)=\langle\xi,\pi(a)\xi\rangle$, in
which case
$V^{\ast}\big{(}\pi(a)\otimes B\big{)}V=\langle\xi,\pi(a)\xi\rangle\,B=\omega_{0}(a)B.$ (4.30)
This is the Stinespring representation of ${\mathbb{E}}$ for this particular
case. $\Diamond$
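The compression identity (4.30) can be checked numerically with small matrices. A minimal sketch, where the dimensions, the vector $\xi$, and the test matrices are arbitrary placeholder choices and $\pi$ is taken to be the identity representation:

```python
import numpy as np

# Numerical check of Eq. (4.30): with V(chi) = xi (x) chi, one has
# V* (pi(a) (x) B) V = omega_0(a) B.  All dimensions and entries below
# are placeholder choices; pi is the identity representation.

d1, d2 = 2, 3
rng = np.random.default_rng(0)

xi = np.array([1.0, 0.0])                   # unit vector, omega_0(a) = <xi, a xi>
V = np.kron(xi.reshape(-1, 1), np.eye(d2))  # V: H2 -> H1 (x) H2, V chi = xi (x) chi

a = rng.standard_normal((d1, d1))           # element of A (via pi = id)
B = rng.standard_normal((d2, d2))           # element of B_omega in B(H2)

lhs = V.conj().T @ np.kron(a, B) @ V        # compression V* (a (x) B) V
rhs = (xi.conj() @ a @ xi) * B              # omega_0(a) B

print(np.allclose(lhs, rhs))  # True
```

The first tensor factor of `np.kron` carries $H_{1}$, matching the convention $V(\chi)=\xi\otimes\chi$ of Eq. (4.29).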
### 4.3. Generating multi-linear map
We consider now the multi-linear map
$\displaystyle{\mathbb{E}}_{(n)}$ $\displaystyle:{\mathcal{A}}^{\times
n}\times{\mathcal{B}}_{\omega}\rightarrow{\mathcal{B}}_{\omega},$ (4.31)
$\displaystyle{\mathbb{E}}_{(n)}\big{(}a_{1},\ldots,a_{n},\lfloor
a_{R}\rfloor\big{)}$
$\displaystyle:=\Big{\lfloor}S^{n}\Big{(}\alpha_{-n+1}^{-1}(a_{1})\otimes\ldots\otimes\alpha_{0}^{-1}(a_{n})\otimes
a_{R}\Big{)}\Big{\rfloor},$
which is again well defined by the same arguments used in Proposition 4.14.
Furthermore, using the fact that $\otimes^{n}_{\rm h}$ is a completely
contractive multi-linear map, we can repeat the arguments in Proposition 4.17
and conclude that ${\mathbb{E}}_{(n)}$ are completely contractive multi-linear
maps. As a consequence, they can be extended to completely contractive linear
maps on ${\mathcal{A}}^{\otimes n}\otimes_{\rm
h}{\mathcal{B}}_{\omega}\simeq{\mathcal{A}}_{(n)}\otimes_{\rm
h}{\mathcal{B}}_{\omega}$ and, since these maps are also unital,
${\mathbb{E}}_{(n)}$ are automatically completely positive. In the following,
we establish useful relations between these maps and also supply an
alternative and more direct proof of their complete positivity.
###### Proposition 4.20.
The maps ${\mathbb{E}}_{(n)}:{\mathcal{A}}_{(n)}\otimes_{\rm
h}{\mathcal{B}}_{\omega}\rightarrow{\mathcal{B}}_{\omega}$ satisfy the
recursive relations
${\mathbb{E}}_{(1)}={\mathbb{E}},\quad{\mathbb{E}}_{(n+1)}={\mathbb{E}}\circ({\rm id}\otimes_{\rm h}{\mathbb{E}}_{(n)}),\quad n\geq 1.$ (4.32)
###### Proof.
It is enough to verify the identities on monomials and, if
$a\in{\mathcal{A}}$, $a_{(n)}=a_{1}\otimes\cdots\otimes
a_{n}\in{\mathcal{A}}_{(n)}$ and $a_{R}\in{\mathcal{A}}_{R}$, we have
$\displaystyle\big{(}{\mathbb{E}}\circ({\rm id}\otimes_{\rm h}$
$\displaystyle{\mathbb{E}}_{(n)})\big{)}\big{(}a\otimes a_{(n)}\otimes\lfloor
a_{R}\rfloor\big{)}$
$\displaystyle\quad={\mathbb{E}}\Big{(}a\otimes{\mathbb{E}}_{(n)}\big{(}a_{(n)}\otimes\lfloor
a_{R}\rfloor\big{)}\Big{)}$
$\displaystyle\quad={\mathbb{E}}\Big{(}a\otimes\big{\lfloor}S^{n}\big{(}\alpha_{-n+1}^{-1}(a_{1})\otimes\ldots\otimes\alpha_{0}^{-1}(a_{n})\otimes
a_{R}\big{)}\big{\rfloor}\Big{)}$
$\displaystyle\quad=\Big{\lfloor}S\Big{(}\alpha_{0}^{-1}(a)\otimes
S^{n}\big{(}\alpha_{-n+1}^{-1}(a_{1})\otimes\ldots\otimes\alpha_{0}^{-1}(a_{n})\otimes
a_{R}\big{)}\Big{)}\Big{\rfloor}$
$\displaystyle\quad=\Big{\lfloor}S^{n+1}\Big{(}\alpha_{-n}^{-1}(a)\otimes\alpha_{-n+1}^{-1}(a_{1})\otimes\ldots\otimes\alpha_{0}^{-1}(a_{n})\otimes
a_{R}\Big{)}\Big{\rfloor}$
and the last line coincides with ${\mathbb{E}}_{(n+1)}(a\otimes
a_{(n)}\otimes\lfloor a_{R}\rfloor)$.∎
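For orientation, here is the first step of the recursion (4.32) written out on monomials; it is obtained by combining the definitions (4.16) and (4.31):

```latex
{\mathbb{E}}_{(2)}\big(a_{1}\otimes a_{2}\otimes\lfloor a_{R}\rfloor\big)
  ={\mathbb{E}}\Big(a_{1}\otimes\big\lfloor S\big(\alpha_{0}^{-1}(a_{2})\otimes a_{R}\big)\big\rfloor\Big)
  =\Big\lfloor S^{2}\big(\alpha_{-1}^{-1}(a_{1})\otimes\alpha_{0}^{-1}(a_{2})\otimes a_{R}\big)\Big\rfloor,
```

which is precisely the $n=2$ instance of the defining formula (4.31).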
###### Proposition 4.21.
The maps ${\mathbb{E}}_{(n)}$, $n\geq 1$, are completely positive maps.
###### Proof.
We will use an inductive argument. We already know that
${\mathbb{E}}_{(1)}={\mathbb{E}}$ is unital and completely positive on
${\mathcal{A}}\otimes_{\rm h}{\mathcal{B}}_{\omega}$. Next, assume that
${\mathbb{E}}_{(s)}$ is unital and completely positive. Then, from Proposition
3.32, ${\mathbb{E}}_{(s)}$ is necessarily a unital complete contraction. From
Proposition 4.7, the map:
${\rm id}\otimes_{\rm h}{\mathbb{E}}_{(s)}:{\mathcal{A}}\otimes_{\rm
h}\big{(}{\mathcal{A}}_{(s)}\otimes_{\rm
h}{\mathcal{B}}_{\omega}\big{)}\rightarrow{\mathcal{A}}\otimes_{\rm
h}{\mathcal{B}}_{\omega}$ (4.33)
is a complete contraction and, evidently, it is also a unital map. Now,
${\mathbb{E}}\circ({\rm id}\otimes_{\rm h}{\mathbb{E}}_{(s)})$ is a
composition of unital completely contractive maps, hence Proposition 2.6
assures us that ${\mathbb{E}}_{(s+1)}$ is a unital and completely contractive
map. Then Proposition 3.32 says that ${\mathbb{E}}_{(s+1)}$ is a completely
positive map, and this completes the induction. ∎
###### Proposition 4.22.
The linear maps ${\mathbb{E}}_{(n)}$ satisfy the Markov property: for any
$n,m\geq 1$,
${\mathbb{E}}_{(m+n)}={\mathbb{E}}_{(m)}\circ({\rm id}_{(m)}\otimes_{\rm
h}{\mathbb{E}}_{(n)}).$ (4.34)
###### Proof.
This is a direct consequence of the recursive relations (4.32). ∎
###### Proposition 4.23.
Recall the unital isometric embeddings
$\mathfrak{i}_{mn}:{\mathcal{A}}_{(n)}\rightarrow{\mathcal{A}}_{(m)},\quad\mathfrak{i}_{mn}\big{(}a_{(n)}\big{)}=a_{(n)}\otimes
1_{(m-n)},\quad(m>n),$ (4.35)
and let $J_{n}$, $n\geq 1$, be the unital isometric embeddings
$J_{n}:{\mathcal{A}}_{(n)}\rightarrow{\mathcal{A}}_{(n)}\otimes_{\rm
h}{\mathcal{B}}_{\omega},\quad J_{n}\big{(}a_{(n)}\big{)}=a_{(n)}\otimes e.$
(4.36)
Then
${\mathbb{E}}_{(m)}\circ J_{m}\circ\mathfrak{i}_{mn}={\mathbb{E}}_{(n)}\circ
J_{n},\quad m\geq n.$ (4.37)
As such, the tower of maps
${\mathbb{E}}_{(n)}\circ
J_{n}:{\mathcal{A}}_{(n)}\to{\mathcal{B}}_{\omega},\quad(n\geq 1),$ (4.38)
has a direct limit ${\mathbb{E}}_{\infty}\circ
J_{\infty}:{\mathcal{A}}_{R}\to{\mathcal{B}}_{\omega}$, which coincides with
the quotient map $q:{\mathcal{A}}_{R}\to{\mathcal{B}}_{\omega}$.
###### Proof.
The relations (4.37) are direct consequences of the Markov property. The
other statement follows from the observation that
${\mathbb{E}}_{(n)}(a_{(n)}\otimes e)=q(a_{(n)})$, for all $n\geq 1$.∎
### 4.4. Asymptotic constraints
So far, we have not imposed any asymptotic constraints on the state $\omega$.
Here we introduce the so-called asymptotic clustering property, which is often
assumed based on physical arguments. We also point out finer asymptotic
properties, which emerge from a dynamical system canonically associated to
${\mathcal{B}}_{\omega}$.
Let us recall the shift map (2.16),
$S_{R}:{\mathcal{A}}_{R}\rightarrow{\mathcal{A}}_{R},\quad
S_{R}(a_{R})=S(\alpha_{0}^{-1}(1)\otimes a_{R}).$ (4.39)
The descent of this map to ${\mathcal{B}}_{\omega}$ can be defined as follows:
###### Definition 4.24.
We call the shift map on ${\mathcal{B}}_{\omega}$ the map
$\bar{S}:{\mathcal{B}}_{\omega}\rightarrow{\mathcal{B}}_{\omega},\quad\bar{S}={\mathbb{E}}\circ\bar{L},$
(4.40)
with $\bar{L}$ being the unital and isometric embedding
$\bar{L}:{\mathcal{B}}_{\omega}\rightarrow{\mathcal{A}}\otimes_{\rm
h}{\mathcal{B}}_{\omega},\quad\bar{L}(b)=1\otimes b.$ (4.41)
###### Proposition 4.25.
$\bar{S}\big{(}\lfloor a_{R}\rfloor\big{)}=\lfloor S_{R}(a_{R})\rfloor$, for
all $a_{R}\in{\mathcal{A}}_{R}$.
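Proposition 4.25 follows by unwinding the definitions on a monomial class; schematically:

```latex
\bar{S}\big(\lfloor a_{R}\rfloor\big)
  ={\mathbb{E}}\big(1\otimes\lfloor a_{R}\rfloor\big)
  =\big\lfloor S\big(\alpha_{0}^{-1}(1)\otimes a_{R}\big)\big\rfloor
  =\lfloor S_{R}(a_{R})\rfloor ,
```

where the second equality is the definition (4.16) of ${\mathbb{E}}$ and the last one is the definition (4.39) of $S_{R}$.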
###### Remark 4.26.
Without doubt, the dynamical system $({\mathcal{B}}_{\omega},\bar{S})$ is
essential for understanding the shift invariant states on
${\mathcal{A}}^{\otimes{\mathbb{Z}}}$. Unfortunately, here, we can only point
to some far-reaching consequences once certain dynamical behavior is assumed.
Specifically, note that, as a composition of completely contractive maps,
$\bar{S}$ is a contractive map, $\|\bar{S}(b)\|\leq\|b\|$. Furthermore,
$\bar{S}(e)=e$. One of the key questions that emerged during our study was
whether $({\mathcal{B}}_{\omega},\bar{S})$ has an attractor that is larger than
the sub-space generated by $e$. We believe that this is a very important question
which deserves a full investigation. $\Diamond$
###### Definition 4.27.
We say that the shift-invariant state $\omega$ on ${\mathcal{A}}_{\mathbb{Z}}$
has the asymptotic clustering property if the sequence
$\sup\Big\{\big{|}\omega\big{(}a_{L}\cdot S_{R}^{\circ
r}(a_{R})\big{)}-\omega(a_{L})\omega(a_{R})\big{|},\
a_{R/L}\in{\mathcal{A}}_{R/L},\ \|a_{R/L}\|=1\Big\}$ (4.42)
converges to zero as $r\to\infty$.
###### Remark 4.28.
The asymptotic clustering property can be reformulated in a more effective but
less familiar way as
$\lim_{r\to\infty}\sup\Big\{\Big{|}\omega\Big{(}a_{L}\cdot S_{R}^{\circ
r}\big{(}a_{R}-\omega(a_{R})\,1\big{)}\Big{)}\Big{|},\
a_{R/L}\in{\mathcal{A}}_{R/L},\ \|a_{R/L}\|=1\Big\}=0.$ (4.43)
Both forms will be used in the following. $\Diamond$
The clustering property descends to $\bar{\omega}$ in the following way:
###### Proposition 4.29.
Assume that $\omega$ displays the asymptotic clustering property. Then the
reduced state also displays a similar property,
$\lim_{r\rightarrow\infty}\sup\Big\{\Big{|}\bar{\omega}\Big{(}{\mathbb{E}}_{(n)}\big{(}a_{(n)}\otimes\bar{S}^{\circ
r}(b)\big{)}\Big{)}-\bar{\omega}\Big{(}{\mathbb{E}}_{(n)}\big{(}a_{(n)}\otimes
e\big{)}\Big{)}\bar{\omega}(b)\Big{|}\Big\}=0,$ (4.44)
where the supremum is over all $n\geq 0$, $a_{(n)}\in{\mathcal{A}}_{(n)}$ with
$\|a_{(n)}\|=1$, and $b\in{\mathcal{B}}_{\omega}$ with $\|b\|\leq 1$.
###### Proof.
Let $a_{R}\in{\mathcal{A}}_{R}$ be such that $b=\lfloor a_{R}\rfloor$ and
$a_{(n)}\in{\mathcal{A}}_{(n)}$. Then
${\mathbb{E}}_{(n)}\big{(}a_{(n)}\otimes\bar{S}^{\circ
r}(b)\big{)}=\big{\lfloor}S^{n}\big{(}a_{(n)}\otimes S_{R}^{\circ
r}(a_{R})\big{)}\big{\rfloor}.$ (4.45)
Therefore, taking into account the shift invariance of $\omega$,
$\bar{\omega}\Big{(}{\mathbb{E}}_{(n)}\big{(}a_{(n)}\otimes\bar{S}^{\circ
r}(b)\big{)}\Big{)}=\omega\big{(}a_{(n)}\otimes S_{R}^{\circ r}(a_{R})\big{)}.$
(4.46)
Then
$\displaystyle\bar{\omega}\Big{(}{\mathbb{E}}_{(n)}\big{(}a_{(n)}\otimes\bar{S}^{\circ
r}(b)\big{)}\Big{)}-\bar{\omega}\Big{(}{\mathbb{E}}_{(n)}\big{(}a_{(n)}\otimes
e\big{)}\Big{)}\bar{\omega}(b)$ (4.47)
$\displaystyle\qquad=\omega\Big{(}a_{(n)}\otimes S_{R}^{\circ
r}(a_{R})\Big{)}-\omega\big{(}a_{(n)}\big{)}\omega\big{(}a_{R}\big{)}$
$\displaystyle\qquad\qquad=\omega\Big{(}\mathfrak{i}_{L}\big{(}a_{(n)}\big{)}\cdot
S^{\circ
r}\big{(}a_{R}\big{)}\Big{)}-\omega\Big{(}\mathfrak{i}_{L}\big{(}a_{(n)}\big{)}\Big{)}\omega\big{(}a_{R}\big{)},$
and the statement follows from the clustering property of $\omega$. ∎
We recall the functional $\Gamma:{\mathcal{B}}_{\omega}\to[0,\infty)$ defined
in Proposition 3.23. Then:
###### Proposition 4.30.
Assume $\omega$ to have the asymptotic clustering property and, furthermore,
that
$\Gamma(b)\geq c\|b\|,\quad\ \forall\ b\in{\mathcal{B}}_{\omega},$ (4.48)
for some strictly positive constant $c$. Then the linear sub-space generated
by $e$ is the only attractor of the map $\bar{S}$ and, furthermore,
$\lim_{r\rightarrow\infty}\bar{S}^{\circ
r}(b)=\bar{\omega}(b)\,e,\quad\forall\ b\in{\mathcal{B}}_{\omega}.$ (4.49)
###### Proof.
From the asymptotic clustering property (4.43), we have
$\lim_{r\rightarrow\infty}\sup\Big\{\Big{|}\omega\Big{(}x\cdot S^{\circ
r}\big{(}a_{R}-\omega(a_{R})\,1_{R}\big{)}\Big{)}\Big{|},\
x\in{\mathcal{A}}_{L},\ \|x\|=1\Big\}=0,$ (4.50)
for all $a_{R}\in{\mathcal{A}}_{R}$, which translates to
$\Gamma\Big{(}\bar{S}^{\circ r}\big{(}\lfloor
a_{R}-\omega(a_{R})\,1_{R}\rfloor\big{)}\Big{)}\rightarrow 0\ \ {\rm as}\ \
r\rightarrow\infty.$ (4.51)
Since $\lfloor 1_{R}\rfloor=e$ and the unit is invariant for $\bar{S}$,
condition (4.48) implies
$\big{\|}\bar{S}^{\circ r}(\lfloor
a_{R}\rfloor)-\omega(a_{R})\,e\big{\|}\rightarrow 0\ \ {\rm as}\ \
r\rightarrow\infty,$ (4.52)
which proves the claim. ∎
###### Corollary 4.31.
In the conditions of Proposition 4.30, there is one and only one shift
invariant state, $\bar{\omega}\circ\bar{S}=\bar{\omega}$, on
${\mathcal{B}}_{\omega}$.
###### Remark 4.32.
Note that (4.48) is exactly the same condition that ensures that the operator
space and operator system structures coincide on ${\mathcal{B}}_{\omega}$
(see Proposition 3.24). $\Diamond$
###### Corollary 4.33.
Assume that the statement in Eq. (4.49) holds. Then the map ${\mathbb{E}}$ is
full, in the sense that
$\overline{\bigcup_{n\geq
1}\bigcup_{x\in{\mathcal{A}}_{(n)}}{\mathbb{E}}_{(n)}(x\otimes
b)}={\mathcal{B}}_{\omega},\quad\forall\ b\in{\mathcal{B}}_{\omega}.$ (4.53)
###### Proof.
The statement is true for $b=e$ because ${\mathbb{E}}_{\infty}\circ
J_{\infty}=q$. For a generic element, note that
$\displaystyle{\mathbb{E}}_{(m+r)}\big{(}(x\otimes 1_{(m+1,m+r)})\otimes
b\big{)}$ $\displaystyle={\mathbb{E}}_{(m)}\big{(}x\otimes\bar{S}^{\circ
r}(b)\big{)}$ (4.54)
$\displaystyle=\bar{\omega}(b)\,{\mathbb{E}}_{(m)}(x\otimes
e)+{\mathbb{E}}_{(m)}\big{(}x\otimes(\bar{S}^{\circ
r}(b)-\bar{\omega}(b)e)\big{)},$
for any $x\in{\mathcal{A}}_{(m)}$. If $b^{\prime}$ is an arbitrary element of
${\mathcal{B}}_{\omega}$, then there exists a uniformly bounded sequence
$\{x_{(m)}\in{\mathcal{A}}_{(m)}\}_{m\in{\mathbb{N}}^{\times}}$ such that
$\{\lfloor x_{(m)}\rfloor\}_{m\in{\mathbb{N}}^{\times}}$ converges to
$b^{\prime}$ in ${\mathcal{B}}_{\omega}$. If
$y_{(2m)}=\bar{\omega}(b)^{-1}x_{(m)}\otimes 1_{(m)},$ (4.55)
then we can see from Eq. (4.54) that the sequence
$\{{\mathbb{E}}_{(2m)}(y_{(2m)})\}_{m\in{\mathbb{N}}^{\times}}$ converges to
$b^{\prime}$, because the second term in the last line of Eq. (4.54) vanishes
in the limit, since the maps ${\mathbb{E}}_{(n)}$ are all contractions.∎
###### Remark 4.34.
Our investigation of the asymptotic behavior led us naturally to the concept
of a full ${\mathbb{E}}$ map. It will be this property that we will use in our
reconstruction process. As we have seen, certain asymptotic behavior of the
map $\bar{S}$ implies the fullness of ${\mathbb{E}}$. It will be interesting
to investigate the reverse problem, namely, what kind of constraints are
imposed on the dynamical system $({\mathcal{B}}_{\omega},\bar{S})$ by the
fullness of ${\mathbb{E}}$; in particular, whether fullness always requires (4.49).
$\Diamond$
## 5. Reconstruction Algorithm
The reduction and factorization processes described in the previous sections
produced the data
$({\mathcal{A}},{\mathcal{B}}_{\omega},{\mathbb{E}},\bar{\omega})$ consisting
of the following:
1. (1)
The local nuclear $C^{\ast}$-algebra ${\mathcal{A}}$.
2. (2)
The reduced space ${\mathcal{B}}_{\omega}$ with the structure of an operator
system.
3. (3)
The surjective, completely positive map
${\mathbb{E}}:{\mathcal{A}}\otimes_{\rm
h}{\mathcal{B}}_{\omega}\rightarrow{\mathcal{B}}_{\omega}$.
4. (4)
The completely positive functional
$\bar{\omega}:{\mathcal{B}}_{\omega}\rightarrow{\mathbb{C}}$.
It is certainly appropriate to say that the initial data
$({\mathcal{A}}_{\mathbb{Z}},\omega)$ was reduced and factorized to the data
$({\mathcal{A}},{\mathcal{B}}_{\omega},{\mathbb{E}},\bar{\omega})$. We enforce
this statement by noting that:
###### Proposition 5.1.
If $\omega$ and $\omega^{\prime}$ are shift invariant states and the data
$({\mathcal{A}}_{\mathbb{Z}},\omega)$ and
$({\mathcal{A}}_{\mathbb{Z}},\omega^{\prime})$ both reduce to the same data
$({\mathcal{A}},{\mathcal{B}},{\mathbb{E}},\bar{\xi})$, then
$\omega=\omega^{\prime}$.
###### Proof.
We concentrate on the induced states $\omega_{R}$ and $\omega^{\prime}_{R}$ on
${\mathcal{A}}_{R}$. They must coincide because, for any $n\geq 1$ and
$a_{(n)}\in{\mathcal{A}}_{(n)}$, the reduction and factorization process for
the two initial sets of data assures us that
$\bar{\xi}\Big{(}{\mathbb{E}}_{(n)}\big{(}a_{(n)}\otimes
e\big{)}\Big{)}=\omega_{R}\Big{(}\mathfrak{i}_{n}\big{(}a_{(n)}\big{)}\Big{)}=\omega^{\prime}_{R}\Big{(}\mathfrak{i}_{n}\big{(}a_{(n)}\big{)}\Big{)},$
(5.1)
where the ${\mathbb{E}}_{(n)}$ maps are entirely determined by the reduced
data, namely, by ${\mathbb{E}}$ via the recursion relations (4.32). The two
states coincide on dense sub-spaces, hence they must be identical. Due to the
assumed shift invariance, this automatically implies that
$\omega=\omega^{\prime}$.∎
The goal of this section is to complete the reverse process, namely, to
reconstruct $({\mathcal{A}}_{\mathbb{Z}},\omega)$ from the data
$({\mathcal{A}},{\mathcal{B}}_{\omega},{\mathbb{E}},\bar{\omega})$.
### 5.1. Reconstruction: The abstract form
Here we prove Theorem 1.1, which is reproduced below for the reader's convenience.
###### Theorem 5.2.
Assume:
* •
A nuclear $C^{\ast}$-algebra ${\mathcal{A}}$;
* •
An Archimedean matrix-ordered space $({\mathcal{S}},e)$;
* •
A surjective, unital and completely positive map
${\mathbb{E}}:{\mathcal{A}}\otimes_{\rm
h}{\mathcal{S}}\rightarrow{\mathcal{S}}$;
* •
A unital and completely positive functional $\bar{\omega}$ on ${\mathcal{S}}$.
Furthermore, let ${\mathbb{E}}_{(n)}:{\mathcal{A}}_{(n)}\otimes_{\rm
h}{\mathcal{S}}\rightarrow{\mathcal{S}}$ be the maps defined iteratively as in
Proposition 4.20, specifically,
${\mathbb{E}}_{(1)}={\mathbb{E}},\quad{\mathbb{E}}_{(n+1)}={\mathbb{E}}\circ({\rm
id}\otimes_{\rm h}{\mathbb{E}}_{(n)}),\quad n\geq 1.$ (5.2)
Then:
1. i)
The tower of linear functionals
$\omega_{(n)}:{\mathcal{A}}_{(n)}\rightarrow{\mathbb{C}},\quad\omega_{(n)}=\bar{\omega}\circ{\mathbb{E}}_{(n)}\circ
J_{n},\quad n\geq 1,$ (5.3)
where $J_{n}$’s were defined in Eq. (4.36), defines a state $\omega_{R}$ on
${\mathcal{A}}_{R}$.
2. ii)
Let $\bar{S}$ be defined as in Definition 4.24, specifically,
$\bar{S}:{\mathcal{S}}\rightarrow{\mathcal{S}},\quad\bar{S}={\mathbb{E}}\circ\bar{L},$
(5.4)
with $\bar{L}$ being the unital and isometric embedding:
$\bar{L}:{\mathcal{S}}\rightarrow{\mathcal{A}}\otimes_{\rm
h}{\mathcal{S}},\quad\bar{L}(s)=1\otimes s.$ (5.5)
Then the reconstructed state is shift invariant, $\omega_{R}\circ
S_{R}=\omega_{R}$, provided $\bar{\omega}\circ\bar{S}=\bar{\omega}$.
3. iii)
In the conditions from point ii), there exists a unique shift invariant state
$\omega$ on ${\mathcal{A}}_{\mathbb{Z}}$, which coincides with $\omega_{R}$
when restricted from ${\mathcal{A}}_{\mathbb{Z}}$ to ${\mathcal{A}}_{R}$.
4. iv)
If the data has the asymptotic clustering property
$\lim_{r\rightarrow\infty}\sup\Big\{\Big{|}\bar{\omega}\Big{(}{\mathbb{E}}_{(n)}\big{(}a_{(n)}\otimes\bar{S}^{\circ
r}(s-\bar{\omega}(s)\,e)\big{)}\Big{)}\Big{|}\Big\}=0,$ (5.6)
where the supremum is over all $n\geq 0$, $a_{(n)}\in{\mathcal{A}}_{(n)}$ with
$\|a_{(n)}\|=1$, and $s\in{\mathcal{S}}$ with $\|s\|\leq 1$, then $\omega$
also has the asymptotic clustering property.
5. v)
If $\bar{\omega}$ is shift invariant and ${\mathbb{E}}$ is full,
$\overline{\bigcup_{n\geq
1}\bigcup_{x\in{\mathcal{A}}_{(n)}}{\mathbb{E}}_{(n)}(x\otimes
s)}={\mathcal{S}},\quad\forall\ s\in{\mathcal{S}},\ s\neq 0,$ (5.7)
then the data $({\mathcal{A}}_{\mathbb{Z}},\omega)$ reduces back to
$({\mathcal{A}},{\mathcal{S}},{\mathbb{E}},\bar{\omega})$.
###### Proof.
i) By repeating the arguments from Proposition 4.21, we can conclude that the
maps ${\mathbb{E}}_{(n)}$ are completely positive. The maps $J_{n}$ are
completely positive as well. As such, $\omega_{(n)}$ are completely positive,
hence states on ${\mathcal{A}}_{(n)}$. Furthermore,
$\omega_{(m)}\circ\mathfrak{i}_{mn}=\omega_{(n)}$ for any $m\geq n$ and, as
such, the tower of states defines a state on the inductive limit of the tower
of ${\mathcal{A}}_{(n)}$ algebras, which is ${\mathcal{A}}_{R}$ itself.
ii) We denote by $\omega_{R}$ the inductive limit of states from point i).
Then, for any $n\geq 1$ and $a_{(n)}\in{\mathcal{A}}_{(n)}$, we have
$(\omega_{R}\circ
S_{R})\Big{(}\mathfrak{i}_{n}\big{(}a_{(n)}\big{)}\Big{)}=\omega_{(n+1)}\big{(}1\otimes
a_{(n)}\big{)}=(\bar{\omega}\circ{\mathbb{E}}_{(n+1)})\Big{(}\big{(}1\otimes
a_{(n)}\big{)}\otimes e\Big{)}.$ (5.8)
Using Markov’s property, we find
$(\omega_{R}\circ
S_{R})\Big{(}\mathfrak{i}_{n}\big{(}a_{(n)}\big{)}\Big{)}=(\bar{\omega}\circ{\mathbb{E}})\Big{(}1\otimes{\mathbb{E}}_{(n)}\big{(}a_{(n)}\otimes
e\big{)}\Big{)},$ (5.9)
and, after invoking the definition of $\bar{L}$,
$(\omega_{R}\circ
S_{R})\Big{(}\mathfrak{i}_{n}\big{(}a_{(n)}\big{)}\Big{)}=(\bar{\omega}\circ{\mathbb{E}}\circ\bar{L})\Big{(}{\mathbb{E}}_{(n)}\big{(}a_{(n)}\otimes
e\big{)}\Big{)}.$ (5.10)
Since ${\mathbb{E}}\circ\bar{L}=\bar{S}$ and
$\bar{\omega}\circ\bar{S}=\bar{\omega}$, the statement follows.
iii) Obvious.
iv) If $\omega$ is the reconstructed shift invariant state, then from the
definition (5.3) of the tower of states and the assumed shift invariance, for any
$a_{(-m)}\in{\mathcal{A}}_{(-m)}$ and $a_{(n)}\in{\mathcal{A}}_{(n)}$, we have
$\displaystyle\omega\Big{(}\mathfrak{i}_{-m}\big{(}a_{(-m)}\big{)}\cdot(S_{R}^{\circ
r}\circ\mathfrak{i}_{n})\big{(}a_{(n)}-\omega(a_{(n)})\,1_{(n)}\big{)}\Big{)}$
(5.11)
$\displaystyle\qquad=\bar{\omega}\Big{(}{\mathbb{E}}_{(m)}\Big{(}S^{\circ
m}\big{(}a_{(-m)}\big{)}\otimes\bar{S}^{\circ
r}\big{(}s-\bar{\omega}(s)\,e\big{)}\Big{)}\Big{)},$
with $s=({\mathbb{E}}_{(n)}\circ J_{n})\big{(}a_{(n)}\big{)}$. As such, the
asymptotic clustering property for $\omega$ holds on dense subsets of
${\mathcal{A}}_{L}$ and ${\mathcal{A}}_{R}$, hence on the whole sub-algebras.
v) Recall Proposition 4.23, which prompts us to define the closed subspace
${\mathcal{K}}={\rm Ker}\big{(}{\mathbb{E}}_{\infty}\circ J_{\infty}\big{)}$.
We will show that ${\mathcal{K}}$ coincides with the entanglement kernel
${\mathcal{K}}_{\omega}$ of the state $\omega$ constructed at points i) and
iii). By definition of $\omega$, we have
$\omega\Big{(}\mathfrak{i}_{-m}\big{(}S^{-m}\big{(}\mathfrak{i}_{m}(x)\big{)}\big{)}\cdot
a_{R}\Big{)}=\bar{\omega}\Big{(}{\mathbb{E}}_{(m)}\big{(}x\otimes({\mathbb{E}}_{\infty}\circ
J_{\infty})(a_{R})\big{)}\Big{)},$ (5.12)
for any $x\in{\mathcal{A}}_{(m)}$. As such, if $a_{R}\in{\mathcal{K}}$, then
$\omega_{x}(a_{R})=0$ for $x$ in a dense subset of ${\mathcal{A}}_{L}$, hence
for all $x$ in ${\mathcal{A}}_{L}$ because of the continuity of the state.
This proves that ${\mathcal{K}}\subset{\mathcal{K}}_{\omega}$. The reverse
inclusion also holds. Indeed, if $a_{R}\in{\mathcal{K}}_{\omega}$, then necessarily
$\bar{\omega}\Big{(}{\mathbb{E}}_{(m)}\big{(}x\otimes({\mathbb{E}}_{\infty}\circ
J_{\infty})(a_{R})\big{)}\Big{)}=0$ (5.13)
for any $x\in{\mathcal{A}}_{(m)}$, which is a direct consequence of the
identity (5.12). Let $s=({\mathbb{E}}_{\infty}\circ
J_{\infty})(a_{R})\in{\mathcal{S}}$ and assume first that $s\neq 0$. In this
case, since ${\mathbb{E}}$ is full,
$\overline{\bigcup_{m\geq
1}\bigcup_{x\in{\mathcal{A}}_{(m)}}{\mathbb{E}}_{(m)}(x\otimes
s)}={\mathcal{S}},$ (5.14)
and (5.13) can be true for any $m\geq 1$ and $x\in{\mathcal{A}}_{(m)}$ only if
$\bar{\omega}=0$. This cannot be, hence the only alternative is that $s=0$ or,
in other words, that $a_{R}\in{\mathcal{K}}$.
We have established that the entanglement kernel ${\mathcal{K}}_{\omega}$ of
$\omega$ coincides with ${\rm Ker}({\mathbb{E}}_{\infty}\circ J_{\infty})$, and
we consider now the exact sequence of Banach spaces
$0\rightarrow{\mathcal{K}}\rightarrow{\mathcal{A}}_{R}\xrightarrow{\ {\mathbb{E}}_{\infty}\circ J_{\infty}\ }{\mathcal{S}}\rightarrow 0.$ (5.15)
From (5.14), we can then identify ${\mathcal{A}}_{R}/{\mathcal{K}}$ with
${\mathcal{S}}$ and we denote the class of $a_{R}$ in
${\mathcal{A}}_{R}/{\mathcal{K}}$ as $\lfloor a_{R}\rfloor$. Our last task is
to show that
${\mathbb{E}}(a\otimes s)=\lfloor a\otimes
a_{s}\rfloor\in{\mathcal{A}}_{R}/{\mathcal{K}}={\mathcal{S}},\quad\forall\
a\in{\mathcal{A}},\ s\in{\mathcal{S}},$ (5.16)
where $a_{s}$ is an element from ${\mathcal{A}}_{R}$ such that $\lfloor
a_{s}\rfloor=s$. From Eq. (5.15), the class $\lfloor a_{s}\rfloor$ can also be
computed as $({\mathbb{E}}_{\infty}\circ J_{\infty})(a_{s})$. Then, for any
$a_{(n)}\in{\mathcal{A}}_{(n)}$,
${\mathbb{E}}\big{(}a\otimes\lfloor\mathfrak{i}_{n}(a_{(n)})\rfloor\big{)}={\mathbb{E}}\Big{(}a\otimes{\mathbb{E}}_{(n)}\big{(}a_{(n)}\otimes
e\big{)}\Big{)}={\mathbb{E}}_{(n+1)}\Big{(}\big{(}a\otimes
a_{(n)}\big{)}\otimes e\Big{)}.$ (5.17)
The last line can be cast as $({\mathbb{E}}_{(n+1)}\circ
J_{n+1})\big{(}a\otimes a_{(n)}\big{)}$, which we know from (5.15) returns
the class of $a\otimes a_{(n)}$ in ${\mathcal{A}}_{R}/{\mathcal{K}}$. As such,
the relation (5.16) holds for a dense subset of ${\mathcal{S}}$, hence on the
whole ${\mathcal{S}}$.∎
###### Definition 5.3.
We call a state $\omega$ on ${\mathcal{A}}_{\mathbb{Z}}$ full if its
entanglement kernel is such that
$\overline{\bigcup_{n\geq 1}\bigcup_{x\in{\mathcal{A}}_{(n)}}\lfloor x\otimes
a_{R}\rfloor}={\mathcal{A}}_{R}/{\mathcal{K}}_{\omega},$ (5.18)
for any $a_{R}\in{\mathcal{A}}_{R}$.
###### Remark 5.4.
Sub-section 4.4 supplied conditions that are sufficient for a state to be
full. For example, the asymptotic clustering property together with condition
(4.48) is sufficient. $\Diamond$
###### Corollary 5.5.
Any full, shift invariant state on ${\mathcal{A}}_{\mathbb{Z}}$ can be
generated from a set of data
$({\mathcal{A}},{\mathcal{S}},{\mathbb{E}},\bar{\omega})$ as in Theorem 5.2.
###### Proof.
Starting from a full, translation invariant state $\omega$, the reduction
and factorization processes generate a set of data
$({\mathcal{A}},{\mathcal{S}},{\mathbb{E}},\bar{\omega})$ with ${\mathbb{E}}$
full. A translation invariant state $\omega^{\prime}$ can be reconstructed
from this set of data and, since ${\mathbb{E}}$ is full, the data
$({\mathcal{A}}_{\mathbb{Z}},\omega^{\prime})$ reduces back to the same
$({\mathcal{A}},{\mathcal{S}},{\mathbb{E}},\bar{\omega})$. Then Proposition
5.1 assures us that $\omega=\omega^{\prime}$ and, as such, $\omega$ is indeed
reproduced by the reconstruction process from the set of reduced data
$({\mathcal{A}},{\mathcal{S}},{\mathbb{E}},\bar{\omega})$. ∎
###### Example 5.6.
Let us consider the product state discussed in Examples 2.15 and 4.19, where
we found ${\mathcal{B}}_{\omega}={\mathcal{A}}/{\rm Ker}\,\omega_{0}$ and
${\mathbb{E}}(a\otimes\lfloor a^{\prime}\rfloor)=\omega_{0}(a)\lfloor
a^{\prime}\rfloor.$ (5.19)
Then, ${\mathbb{E}}$ is full if and only if ${\mathcal{B}}_{\omega}$ is one
dimensional. Suppose that this is indeed the case. Then the only state on
${\mathcal{B}}_{\omega}$ is the one entirely determined by $\bar{\xi}(e)=1$.
As such,
$\omega_{1}(a)=\bar{\xi}\big{(}{\mathbb{E}}(a\otimes e)\big{)}=\omega_{0}(a)$
(5.20)
and
$\omega_{2}(a_{1}\otimes
a_{2})=\bar{\xi}\big{(}{\mathbb{E}}_{(2)}(a_{1}\otimes a_{2}\otimes
e)\big{)}=\bar{\xi}\big{(}{\mathbb{E}}\big{(}a_{1}\otimes{\mathbb{E}}(a_{2})\big{)}\big{)}=\omega_{0}(a_{1})\omega_{0}(a_{2}).$
(5.21)
Iterating further, one finds
$\omega_{n}(a_{1}\otimes\cdots\otimes
a_{n})=\omega_{0}(a_{1})\cdots\omega_{0}(a_{n})$ (5.22)
and the product state $\omega_{0}^{\otimes{\mathbb{Z}}}$ is indeed reproduced
by the reconstruction algorithm. $\Diamond$
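The iteration (5.20)-(5.22) can be checked numerically. The sketch below assumes a concrete choice that is not in the text: ${\mathcal{A}}={\mathcal{M}}_{2}$ with $\omega_{0}(a)={\rm Tr}(\rho a)$ for a density matrix $\rho$, and it identifies the one-dimensional ${\mathcal{B}}_{\omega}$ with ${\mathbb{C}}$, so that (5.19) reads ${\mathbb{E}}(a\otimes z)=\omega_{0}(a)\,z$:

```python
import numpy as np

# Hypothetical concrete choice: A = M_2, omega_0(a) = Tr(rho a) for a density matrix rho
rho = np.diag([0.7, 0.3])
omega0 = lambda a: np.trace(rho @ a)

# B_omega is one dimensional, so (5.19) becomes E(a ⊗ z) = omega0(a) * z with z in C
E = lambda a, z: omega0(a) * z

def omega_n(ops):
    """Iterate E as in (5.21): omega_n(a_1 ⊗ ... ⊗ a_n) = E(a_1 ⊗ E(a_2 ⊗ ... E(a_n ⊗ 1)))."""
    z = 1.0
    for a in reversed(ops):
        z = E(a, z)
    return z

a1 = np.array([[1.0, 0.2], [0.2, 0.5]])
a2 = np.array([[0.4, 0.1j], [-0.1j, 1.0]])
# The reconstruction reproduces the product state (5.22)
assert np.isclose(omega_n([a1, a2]), omega0(a1) * omega0(a2))
```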
###### Example 5.7.
We consider here the class of AKLT states introduced in Example 2.16. We
recall that the setting was that of two nuclear $C^{\ast}$-algebras
$\hat{\mathcal{A}}$ and $\tilde{\mathcal{A}}$, such that
$\hat{\mathcal{A}}=\tilde{\mathcal{A}}\otimes\tilde{\mathcal{A}}$. Both
operator systems ${\mathcal{A}}$ and ${\mathcal{B}}_{\omega}$ are defined in
terms of these two algebras, namely ${\mathcal{A}}=p\hat{\mathcal{A}}p$, with
$p$ a projection from $\hat{\mathcal{A}}$ that also stands for the unit of
${\mathcal{A}}$, and ${\mathcal{B}}_{\omega}=\tilde{\mathcal{A}}$. Here we
derive the map ${\mathbb{E}}$ corresponding to the AKLT-type states $\omega$
over ${\mathcal{A}}$. For this, we define first
$\widetilde{\mathbb{E}}:\hat{\mathcal{A}}\otimes\tilde{\mathcal{A}}\simeq\tilde{\mathcal{A}}\otimes\tilde{\mathcal{A}}\otimes\tilde{\mathcal{A}}\rightarrow\tilde{\mathcal{A}},\quad\widetilde{\mathbb{E}}={\rm
id}\otimes\xi_{0},$ (5.23)
where
$\xi_{0}:\tilde{\mathcal{A}}\otimes\tilde{\mathcal{A}}\rightarrow{\mathbb{C}}$
is a complete contraction. Then $\widetilde{\mathbb{E}}$ is a complete
contraction. Next, we let
$\mathfrak{j}:p\hat{\mathcal{A}}p\rightarrow\hat{\mathcal{A}}$ be the non-
unital embedding that takes $p$ into $p$ rather than into the unit of
$\hat{\mathcal{A}}$. This map is evidently a complete contraction. Lastly, we
define ${\mathbb{E}}$ as:
${\mathbb{E}}:p\hat{\mathcal{A}}p\otimes\tilde{\mathcal{A}}\rightarrow\tilde{\mathcal{A}},\quad{\mathbb{E}}=\widetilde{\mathbb{E}}\circ(\mathfrak{j}\otimes{\rm
id}),$ (5.24)
which, as a composition of two completely contractive maps, is a completely
contractive map. This map is also unital, provided
${\mathbb{E}}(p\,\otimes\tilde{1})=\tilde{1}$. Therefore, whenever $p$ and
$\xi_{0}$ fulfill this constraint, an AKLT-type state can be reconstructed
from $({\mathcal{A}},\tilde{\mathcal{A}},{\mathbb{E}},\bar{\xi})$, where
$\bar{\xi}$ is a shift invariant state on $\tilde{\mathcal{A}}$. We recall
that this machinery now works for infinite dimensional algebras. $\Diamond$
###### Remark 5.8.
The particular case studied in [1], illustrated in Fig. 2.1 and partially
analyzed in Example 2.16, corresponds to
$\tilde{\mathcal{A}}={\mathcal{M}}_{2}$, hence
$\hat{\mathcal{A}}={\mathcal{M}}_{4}\simeq{\mathcal{M}}_{2}\otimes{\mathcal{M}}_{2}$,
with $p$ being the rank-3 projection
$P=\tfrac{3}{4}\,I_{2}\otimes
I_{2}+\tfrac{1}{4}\sum_{i=1}^{3}\sigma^{i}\otimes\sigma^{i}\in{\mathcal{M}}_{2}\otimes{\mathcal{M}}_{2}\simeq{\mathcal{M}}_{4},$
(5.25)
such that ${\mathcal{A}}=P{\mathcal{M}}_{4}P\simeq{\mathcal{M}}_{3}$.
Furthermore,
$\xi_{0}(M)=\tfrac{4}{3}\,{\rm Tr}\big{(}MP_{0}\big{)},\quad
M\in{\mathcal{M}}_{2}\otimes{\mathcal{M}}_{2}\simeq{\mathcal{M}}_{4},$ (5.26)
with $P_{0}$ being the rank-1 projection
$P_{0}=\tfrac{1}{4}\,I_{2}\otimes
I_{2}-\tfrac{1}{4}\sum_{i=1}^{3}\sigma^{i}\otimes\sigma^{i}\in{\mathcal{M}}_{2}\otimes{\mathcal{M}}_{2}\simeq{\mathcal{M}}_{4}.$
(5.27)
In this case, (5.24) takes the following concrete form:
$\displaystyle{\mathbb{E}}\Big{(}P\big{(}\sigma^{u}\otimes\sigma^{v}\big{)}P\otimes\sigma^{w}\Big{)}=\tfrac{4}{3}\sum_{\alpha,\beta=0}^{3}g_{\alpha}g_{\beta}{\rm
Tr}\Big{(}\big{(}\sigma^{\alpha}\sigma^{v}\sigma^{\beta}\otimes\sigma^{w}\big{)}P_{0}\Big{)}\,\sigma^{\alpha}\sigma^{u}\sigma^{\beta},$
(5.28)
where $g_{0}=\tfrac{3}{4}$ and $g_{i}=\tfrac{1}{4}$ for $i=\overline{1,3}$.
Then we find
${\mathbb{E}}(P\otimes
I_{2})=I_{2},\quad{\mathbb{E}}(P\otimes\vec{\alpha}\cdot\vec{\sigma})=-\tfrac{1}{3}\,\vec{\alpha}\cdot\vec{\sigma},$
(5.29)
where $\vec{\alpha}\in{\mathbb{C}}^{3}$ and
$\vec{\alpha}\cdot\vec{\sigma}=\sum_{i=1}^{3}\alpha_{i}\,\sigma^{i}$. Hence,
the data satisfy the constraint mentioned in Example 5.7. Furthermore, since
any $B\in{\mathcal{M}}_{2}$ can be written uniquely as
$B=\alpha_{0}\,I_{2}+\vec{\alpha}\cdot\vec{\sigma}$, we have
$\bar{S}^{\circ
r}(B)={\mathbb{E}}(P\otimes{\mathbb{E}}(\ldots{\mathbb{E}}(P\otimes
B)\ldots))=\alpha_{0}\,I_{2}+\big{(}-\tfrac{1}{3}\big{)}^{r}\,\vec{\alpha}\cdot\vec{\sigma},$
(5.30)
hence $\bar{S}$ has a unique fixed point. In this case, the only compatible
$\bar{S}$-invariant functional $\bar{\xi}$ on ${\mathcal{M}}_{2}$ is
$\bar{\xi}(\alpha_{0}\,I_{2}+\vec{\alpha}\cdot\vec{\sigma})=\alpha_{0}.$
(5.31)
As pointed out in [6], there are many other possible choices for $P_{0}$.
$\Diamond$
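The identities (5.29) and (5.30) can be verified numerically from the explicit form (5.28). The sketch below builds $P$, $P_{0}$, and ${\mathbb{E}}$ from the Pauli matrices and checks unitality, the $-1/3$ contraction, and the fixed-point iteration; the helper `Ebar`, extending $b\mapsto{\mathbb{E}}(P\otimes b)$ by linearity, is ours and not part of the text:

```python
import numpy as np

# Pauli matrices sigma^0, ..., sigma^3
s = [np.eye(2, dtype=complex),
     np.array([[0, 1], [1, 0]], complex),
     np.array([[0, -1j], [1j, 0]], complex),
     np.array([[1, 0], [0, -1]], complex)]

P  = 0.75 * np.kron(s[0], s[0]) + 0.25 * sum(np.kron(s[i], s[i]) for i in (1, 2, 3))
P0 = 0.25 * np.kron(s[0], s[0]) - 0.25 * sum(np.kron(s[i], s[i]) for i in (1, 2, 3))
g = [0.75, 0.25, 0.25, 0.25]  # g_0 = 3/4, g_i = 1/4

def E(u, v, w):
    """E(P (sigma^u ⊗ sigma^v) P ⊗ sigma^w), evaluated via (5.28)."""
    out = np.zeros((2, 2), complex)
    for a in range(4):
        for b in range(4):
            weight = (4 / 3) * g[a] * g[b] * np.trace(np.kron(s[a] @ s[v] @ s[b], s[w]) @ P0)
            out += weight * (s[a] @ s[u] @ s[b])
    return out

# (5.29): unitality on P ⊗ I_2 and the -1/3 contraction of the Pauli components
assert np.allclose(E(0, 0, 0), np.eye(2))
for w in (1, 2, 3):
    assert np.allclose(E(0, 0, w), -s[w] / 3)

def Ebar(B):
    """b -> E(P ⊗ b), extended by linearity via B = sum_w (Tr(sigma^w B)/2) sigma^w."""
    return sum(0.5 * np.trace(s[w] @ B) * E(0, 0, w) for w in range(4))

# (5.30): r iterations contract the Pauli part by (-1/3)^r
B = 2.0 * np.eye(2) + 0.5 * s[3]
for _ in range(3):
    B = Ebar(B)
assert np.allclose(B, 2.0 * np.eye(2) + 0.5 * (-1 / 3) ** 3 * s[3])
```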
### 5.2. Reconstruction: The concrete form
Diagram (5.32) collects all isometric and unital embeddings between the
operator systems occurring so far:
[Diagram (5.32): the isometric and unital embeddings between the operator systems introduced so far]
Note that all the isometric inclusions result directly from the injectivity of
the Haagerup tensor product, stated in Proposition 4.9. This is another reason
why the Haagerup tensor product appears naturally in the present context.
However, diagram (5.32) is not complete, because we do not yet have a concrete
isometric embedding of ${\mathcal{A}}\otimes_{\rm h}{\mathcal{S}}$ into a
$C^{\ast}$-algebra. To solve this issue, we recall that the Haagerup tensor
product $B(H_{1})\otimes_{\rm h}B(H_{2})$ embeds isometrically into the free
product $B(H_{1})\star B(H_{2})$ of the algebras (see [12, p. 98]). In turn,
this free product embeds isometrically into the algebra of bounded operators
over the free product $H_{1}\star H_{2}$ of the Hilbert spaces. The latter,
however, requires choices of distinguished unit vectors [16, p. 3].
A natural way to introduce such distinguished vectors is to expand the Hilbert
spaces to $\widetilde{H}_{i}={\mathbb{C}}\cdot\xi_{i}\oplus H_{i}$, $i=1,2,$
and re-draw the diagram of embeddings as follows:
[Diagram (5.33): the embeddings re-drawn with the extended Hilbert spaces $\widetilde{H}_{i}$]
As we shall see, these Hilbert space expansions will prove very convenient
below, hence their role is far more than a formal one. We denote the maps
supplying the isometric embeddings in $B(\widetilde{H}_{i})$, $i=1,2$, of
${\mathcal{A}}$ and ${\mathcal{S}}$ by $\tilde{\rho}$ and $\tilde{\sigma}$,
respectively. Furthermore, to ease the notation, we will write $A$ for
$\rho(a)$ and $\tilde{A}$ for $\tilde{\rho}(a)$, as well as $T$ for
$\sigma(t)$ and $\tilde{T}$ for $\tilde{\sigma}(t)$. The projections onto the
sub-spaces ${\mathbb{C}}\cdot\xi_{i}$ of $\widetilde{H}_{i}$ will be denoted
by $P_{\xi_{i}}$, $i=1,2$.
Now, from Arveson's Theorem 3.34, we know that the completely positive map
${\mathbb{E}}$ extends over $B(\widetilde{H}_{1}\star\widetilde{H}_{2})$ and,
from Stinespring's theorem [15], we know that this extension has the generic
structure
$B(\widetilde{H}_{1}\star\widetilde{H}_{2})\ni
G\mapsto{\mathbb{E}}(G)=V^{\ast}\hat{\pi}(G)V\in B(\widetilde{H}_{2}),$ (5.34)
where $\hat{\pi}$ is a representation of
$B(\widetilde{H}_{1}\star\widetilde{H}_{2})$ into the algebra of bounded
operators over a Hilbert space $\widehat{H}$ and
$V:\widetilde{H}_{2}\rightarrow\widehat{H}$ is a contraction. Recall that
$B(\widetilde{H}_{1})$ and $B(\widetilde{H}_{2})$ are canonically embedded
into their free product, hence the restriction of $\hat{\pi}$ generates
representations $\hat{\pi}_{1}$ and $\hat{\pi}_{2}$ of $B(\widetilde{H}_{1})$
and $B(\widetilde{H}_{2})$ on $\widehat{H}$, respectively. Reciprocally, from
the universal property of the free product of $C^{\ast}$-algebras, we know
that any representation $\hat{\pi}$ is the unique extension of two such
representations. More concretely, an element $\tilde{A}\otimes\tilde{T}\in
B(\widetilde{H}_{1})\otimes_{\rm h}B(\widetilde{H}_{2})$ embeds as
$\tilde{A}\tilde{T}$ in $B(\widetilde{H}_{1})\star B(\widetilde{H}_{2})$,
hence
$\hat{\pi}(\tilde{A}\otimes\tilde{T})=\hat{\pi}(\tilde{A})\hat{\pi}(\tilde{T})$.
The conclusion is that ${\mathbb{E}}$, as extended to a completely positive
map from $B(\widetilde{H}_{1})\otimes_{\rm h}B(\widetilde{H}_{2})$ to
$B(\widetilde{H}_{2})$, has the following generic action on monomials
${\mathbb{E}}(\tilde{A}\otimes\tilde{T})=V^{\ast}\hat{\pi}_{1}(\tilde{A})\hat{\pi}_{2}(\tilde{T})V.$
(5.35)
This is also known from the factorization of completely bounded multilinear
maps [12, Lemma 5.14]. It is instructive to iterate ${\mathbb{E}}$ once:
$\displaystyle{\mathbb{E}}\big{(}a_{1}\otimes{\mathbb{E}}(a_{2}\otimes
t)\big{)}=V^{\ast}\hat{\pi}_{1}(A_{1})\hat{\pi}_{2}\Big{(}V^{\ast}\hat{\pi}_{1}(A_{2})\hat{\pi}_{2}(T)V\Big{)}V,$
(5.36)
where $A_{i}=\rho(a_{i})\in B(H_{1})$ and $T=\sigma(t)\in B(H_{2})$. The
concrete action of a generic ${\mathbb{E}}_{(n)}$ now becomes apparent:
###### Proposition 5.9.
The iterated maps accept the concrete presentation
$\displaystyle{\mathbb{E}}_{(n)}\big{(}(a_{1}\otimes\ldots\otimes
a_{n})\otimes t\big{)}$ (5.37)
$\displaystyle\quad=V^{\ast}\hat{\pi}_{1}(A_{1})\hat{\pi}_{2}\Big{(}V^{\ast}\hat{\pi}_{1}(A_{2})\ldots\hat{\pi}_{2}\Big{(}V^{\ast}\hat{\pi}_{1}(A_{n})\hat{\pi}_{2}(T)V\Big{)}\ldots\Big{)}V\Big{)}V.$
### 5.3. Operator product states
We model the separable Hilbert space $H_{1}$ as $\ell^{2}({\mathcal{I}})$,
with ${\mathcal{I}}$ a countable set; this is always possible, e.g., by fixing
a basis and having ${\mathcal{I}}$ serve as the set of labels for this basis.
For
$\psi\in H_{1}$, we introduce the pair of conjugate operators from
$B(\widetilde{H}_{1})$,
$Z_{\psi}(\alpha\,\xi_{1}+\psi^{\prime})=\alpha\psi,\quad
Z^{\ast}_{\psi}(\alpha\,\xi_{1}+\psi^{\prime})=\langle\psi,\psi^{\prime}\rangle\,\xi_{1},$
(5.38)
and, if $\big\{\delta_{i}\big\}_{i\in{\mathcal{I}}}$ denotes the standard
basis of $\ell^{2}({\mathcal{I}})$, we let $Z_{i}=Z_{\delta_{i}}\in
B(\widetilde{H}_{1})$.
###### Proposition 5.10.
For $\psi,\psi^{\prime}\in H_{1}$,
$Z_{\psi}Z^{\ast}_{\psi^{\prime}}=|\psi\rangle\langle\psi^{\prime}|\in
B(H_{1})\subset B(\widetilde{H}_{1}).$ (5.39)
Furthermore, if $A\in B(H_{1})\subset B(\widetilde{H}_{1})$, then
$A=\sum_{i,j\in{\mathcal{I}}}\langle\delta_{i},A\delta_{j}\rangle\,Z_{i}Z_{j}^{\ast},$
(5.40)
with the sum converging in the weak topology of $B(\widetilde{H}_{1})$.
###### Proof.
We have
$Z_{\psi}Z^{\ast}_{\psi^{\prime}}(\alpha\,\xi_{1}+\psi^{\prime\prime})=Z_{\psi}\big{(}\langle\psi^{\prime},\psi^{\prime\prime}\rangle\,\xi_{1}\big{)}=\langle\psi^{\prime},\psi^{\prime\prime}\rangle\,\psi,$
(5.41)
hence, the first statement follows. The second statement follows directly from
the expansion of the operator in terms of its matrix elements. ∎
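Both statements of Proposition 5.10 can be checked in a finite-dimensional toy model, with $H_{1}\simeq{\mathbb{C}}^{d}$ and $\widetilde{H}_{1}={\mathbb{C}}\cdot\xi_{1}\oplus H_{1}$ realized as ${\mathbb{C}}^{d+1}$; the choice $d=3$ below is arbitrary:

```python
import numpy as np

d = 3  # a finite-dimensional stand-in for H_1
# Vectors in H~ = C·xi ⊕ H_1 have length d+1; index 0 carries the xi-component.
def Z(psi):
    """Z_psi(alpha xi + psi') = alpha psi, as a (d+1)x(d+1) matrix per (5.38)."""
    M = np.zeros((d + 1, d + 1), complex)
    M[1:, 0] = psi
    return M

psi = np.array([1.0, 2j, 0.5])
phi = np.array([0.3, 1.0, -1j])

# (5.39): Z_psi Z*_phi = |psi><phi|, supported on the H_1 block
lhs = Z(psi) @ Z(phi).conj().T
rhs = np.zeros((d + 1, d + 1), complex)
rhs[1:, 1:] = np.outer(psi, phi.conj())
assert np.allclose(lhs, rhs)

# (5.40): A = sum_{i,j} <delta_i, A delta_j> Z_i Z_j*
A = np.random.default_rng(0).standard_normal((d, d))
recon = sum(A[i, j] * Z(np.eye(d)[i]) @ Z(np.eye(d)[j]).conj().T
            for i in range(d) for j in range(d))
assert np.allclose(recon[1:, 1:], A) and np.allclose(recon[0, :], 0)
```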
###### Remark 5.11.
Eq. (5.40) replaces the expansion of $A$ in terms of its matrix elements,
$A=\sum_{i,j\in{\mathcal{I}}}A_{ij}|\delta_{i}\rangle\langle\delta_{j}|$. In
this expansion, the bra and the ket can be interpreted as maps between $H_{1}$
and ${\mathbb{C}}$. However, with the Hilbert space extension trick, the bra
and the ket become elements of $B(\widetilde{H}_{1})$, hence they can be acted
on by $\hat{\pi}_{1}$. This is significant! $\Diamond$
We are now in a position where we can supply the proof of Theorem 1.2,
reproduced below for convenience:
###### Theorem 5.12.
Let $\delta_{\bm{i}}=\delta_{i_{1}}\otimes\ldots\otimes\delta_{i_{n}}$,
$\bm{i}=(i_{1},\ldots,i_{n})\in{\mathcal{I}}^{n}$, be a basis of
$H_{1}^{\otimes n}$ and assume the following:
* •
The representations $\hat{\pi}_{1}$ and $\hat{\pi}_{2}$ commute;
* •
There exists a unitary map
$U:\widetilde{H}_{2}\to\hat{\pi}_{1}\big{(}P_{\xi_{1}}\big{)}\widehat{H}$;
* •
The representation
$\hat{\pi}_{1}\big{(}P_{\xi_{1}}\big{)}\hat{\pi}_{2}(\cdot)\hat{\pi}_{1}\big{(}P_{\xi_{1}}\big{)}$
of $B(\widetilde{H}_{2})$ on
$\hat{\pi}_{1}\big{(}P_{\xi_{1}}\big{)}\widehat{H}$ is unitarily isomorphic to
the identity representation.
Then the maps ${\mathbb{E}}_{n}$ accept presentations as operator products:
${\mathbb{E}}_{n}(a_{(n)}\otimes
t)=\sum_{\bm{i},\bm{j}\in{\mathcal{I}}^{n}}\big{\langle}\delta_{\bm{i}},A_{(n)}\delta_{\bm{j}}\big{\rangle}\,\Sigma_{i_{1}}\cdots\Sigma_{i_{n}}T\,\Sigma^{\ast}_{j_{n}}\cdots\Sigma^{\ast}_{j_{1}},$
(5.42)
where $A_{(n)}\in B(H_{1})^{\otimes_{\rm h}n}\subset
B(\widetilde{H}_{1})^{\otimes_{\rm h}n}$, $T\in B(H_{2})\subset
B(\widetilde{H}_{2})$ and
$\Sigma_{i}\in
B(\widetilde{H}_{2}),\quad\Sigma_{i}=V^{\ast}\hat{\pi}_{1}(Z_{i})U.$ (5.43)
The sum in (5.42) converges in the weak topology of $B(\widetilde{H}_{2})$.
###### Proof.
We start our proof from ${\mathbb{E}}_{(1)}={\mathbb{E}}$. Using its
expression from Eq. (5.35) and the decomposition of $A$ from Eq. (5.40), we
have
$\displaystyle{\mathbb{E}}(a\otimes t)$
$\displaystyle=\sum_{i,j\in{\mathcal{I}}}\langle\delta_{i},A\delta_{j}\rangle
V^{\ast}\hat{\pi}_{1}(Z_{i}Z^{\ast}_{j})\hat{\pi}_{2}(T)V$ (5.44)
$\displaystyle=\sum_{i,j\in{\mathcal{I}}}\langle\delta_{i},A\delta_{j}\rangle
V^{\ast}\hat{\pi}_{1}(Z_{i})\hat{\pi}_{2}(T)\hat{\pi}_{1}(Z^{\ast}_{j})V,$
ARTORG, University of Bern, Switzerland
# Tune without Validation: Searching for Learning Rate and Weight Decay on
Training Sets
Lorenzo Brigato Stavroula Mougiakakou
###### Abstract
We introduce Tune without Validation (Twin), a pipeline for tuning learning
rate and weight decay without validation sets. We leverage a recent
theoretical framework concerning learning phases in hypothesis space to devise
a heuristic that predicts what hyper-parameter (HP) combinations yield better
generalization. Twin performs a grid search of trials according to an
early-/non-early-stopping scheduler and then segments the region that provides
the best results in terms of training loss. Among these trials, the weight
norm strongly correlates with generalization. To assess the
effectiveness of Twin, we run extensive experiments on 20 image classification
datasets and train several families of deep networks, including convolutional,
transformer, and feed-forward models. We demonstrate proper HP selection when
training from scratch and fine-tuning, emphasizing small-sample scenarios.
## 1 Introduction
Figure 1: Overview. While traditional pipelines need a validation set to tune
learning rate and weight decay, Twin performs the search directly on the
training set, simplifying the process or saving additional data-collection
costs.
Like most machine learning models, deep networks are configured by a set of
hyper-parameters (HPs) whose values must be carefully chosen and which often
considerably impact the final outcome [26, 17, 60]. Setting up wrong
configurations translates into bad performance, particularly in the most
difficult optimization scenarios, e.g., large models that overfit on small
datasets [35, 4, 5].
Traditionally, HP search is performed in two ways, as exemplified in Fig. 1
(top). Although more comprehensive methodologies, such as multi-fold or multi-
round cross-validation [44], exist, they are scarcely employed in training
deep networks due to their significant computational overhead. When no
validation set is available, the training set is split into two unbalanced
subsets to perform the HP search. Then, the “optimal” HP configuration
initializes the final training on the original collection. In contrast, if the
validation set is accessible, ML professionals can expedite the HP search and
simplify the model selection process since the two-step training pipeline is
avoided. However, this comes at the expense of collecting an additional 10-30%
of samples. Use cases where acquiring additional data is either expensive or
logistically unfeasible, such as medical imaging [50] or federated learning
[37], challenge the traditional HP selection pipeline. HP optimization on
small validation sets may have inherent noise [35, 7].
Motivated by these challenges, we introduce Tune without Validation (Twin), an
innovative HP selection approach inspired by a theoretical framework which
aims at explaining representation learning across HPs from phase diagrams
[33]. Twin obviates the need for validation sets when tuning optimizer
parameters. In particular, Twin enables practitioners to directly select the
learning rate (LR) and weight decay (WD) from the training set, as sketched in
Fig. 1 (bottom). Twin performs a grid search over a hypothesis space using an
early-/non-early-stopping scheduler and successfully predicts generalizing HP
configurations by monitoring only the training loss, as a proxy for task
performance, and the weight norm, to measure regularization strength.
We perform an extensive empirical analysis to demonstrate Twin’s practical
versatility by training 4,000+ deep models involving 20 datasets and several
architectures. On a suite of 34 different dataset-architecture configurations
with networks trained from scratch and without early stopping, Twin scores a
mean absolute error (MAE) of 1.3% against an Oracle pipeline, the ideal yet
unrealistic scenario that selects HPs directly on the test sets. We summarize
the contributions of our
paper:
* •
We introduce Twin, a simple but effective HP selection pipeline that optimizes
LR and WD directly on training sets (Sec. 3).
* •
We showcase the effectiveness of Twin across a wide spectrum of experimental
scenarios (Sec. 4.1, Sec. 4.2, Sec. 4.3, Sec. 4.4), encompassing datasets from
different domains (e.g., natural, medical imagery) and scales (hundreds to
thousands of samples) as well as models with various architectures (e.g.,
ResNet, MLP, CVT) and sizes (from $\sim$0.2M to $\sim$90M params.).
* •
We ablate on the different components and parameters of Twin to provide
additional insights regarding the working mechanisms of our pipeline (Sec.
4.5).
## 2 Related Work
#### Image classification.
Since the introduction of AlexNet [26], image classification has witnessed
remarkable advances, propelled by the development of novel neural
architectures (e.g., ResNet [16], Vision Transformer (ViT) [11]), and large
datasets [42]. Large-scale pre-training has favored the application of
transfer learning to tackle small-sample scenarios [54, 24]. More recent work
provides insights regarding the training of deep models from scratch on
limited datasets [56, 1, 6, 28, 5, 8]. Motivated by the medium-to-small size
of datasets explored in this work, we mostly train convolutional networks
(ConvNets) but also experiment with ViTs and feed-forward networks.
#### Hyper-parameter tuning.
There is a vast literature tackling the problem of HP tuning for deep networks
[60], including works on implicit differentiation [35], data augmentation [10,
31], neural-architecture search [12], invariance learning [55, 3, 21], and
general-purpose schedulers [30, 29]. Concerning optimization-related HPs, the
seminal work of Goyal et al. [14] popularized the linear scaling rule for
learning rate and batch size. Yang et al. [58] proposed a parameterization to
zero-shot transfer LRs to larger model sizes. Recent work studied HP
selection as data scales by exploiting SGD symmetries [61, 62]. However, only
a few studies explore HP optimization without employing validation sets,
mainly focusing on learning invariances. When employing Bayesian inference,
methods either fail to scale to relatively simple tasks (e.g., CIFAR-10) [43]
or larger network sizes (e.g., ResNet-14) [21]. Benton et al. [3] make strong
assumptions about knowing what HPs help learning invariances in advance. A
recent method improves scalability issues but still introduces complexity by
needing data and model partitioning and an additional backward-forward pass
[38]. Unlike such methods, Twin focuses on LR and WD, easily scales to
increased model and data sizes, and simplifies the HP optimization pipeline.
## 3 Tune without Validation
### 3.1 Preliminaries
#### Problem settings.
Image classification tasks present a training set
$\mathcal{D}_{train}=\{x_{i},y_{i}\}$ and a testing set
$\mathcal{D}_{test}=\{x_{i},y_{i}\}$ sampled from a distribution $P(X,Y)$.
The learners, in our case, deep neural networks $f_{\theta}$ parameterized by
parameters $\theta$, are trained via SGD optimization to minimize the cross-
entropy loss over the training set,
$\min_{\theta}(\mathcal{L}_{\theta}=\mathcal{L}(f_{\theta},\mathcal{D}_{train}))$.
A popular regularization technique to avoid over-fitting and improve
generalization is $L_{2}$ regularization, i.e.,
$\min_{\theta}\mathcal{\hat{L}}_{\theta}$ with
$\mathcal{\hat{L}}_{\theta}=\mathcal{L}(f_{\theta},\mathcal{D}_{train})+\lambda\cdot\frac{{||\theta||}^{2}}{2}$
that features a penalty over the norm of the weights controlled by the
parameter $\lambda$, widely known as WD. For modern scale-invariant
architectures111Up to 90% of the parameters in ResNets tend to be scale-
invariant, primarily due to the influence of Batch Normalization [20]., WD
does not reduce the complexity of the model but rather increases the effective
learning rate by reducing the weight norm [48, 64], and hence can indirectly
exert a regularizing effect by means of larger gradient noise [39, 22, 32].
When optimized with momentum SGD, the parameters follow the update rule
$\theta_{t+1}=\theta_{t}-\mu
v_{t}-\alpha_{t}(\nabla\mathcal{L}_{\theta}+\lambda\theta_{t})$, where
$\alpha_{t}$ is the LR adjusted at each iteration according to a LR schedule
and $\mu$ is the momentum coefficient.
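A minimal sketch of this kind of update, in the common PyTorch-style parameterization where the velocity accumulates the $L_{2}$-regularized gradient and the LR scales the step; this illustrates momentum SGD with WD, not the paper's exact optimizer:

```python
import numpy as np

def sgd_momentum_step(theta, v, grad, lr, wd, mu=0.9):
    """One momentum-SGD step with L2 regularization (WD folded into the gradient):
    v <- mu * v + (grad + wd * theta);  theta <- theta - lr * v."""
    v = mu * v + grad + wd * theta
    theta = theta - lr * v
    return theta, v

# Toy check: minimizing L(theta) = 0.5 * ||theta||^2 (so grad = theta) drives theta to 0
theta, v = np.ones(4), np.zeros(4)
for _ in range(200):
    theta, v = sgd_momentum_step(theta, v, grad=theta, lr=0.1, wd=1e-2)
assert np.linalg.norm(theta) < 1e-3
```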
#### Cross-validation.
To estimate the HPs controlling the model, specifically the regularization
parameter $\lambda$ and the learning speed $\alpha$, we ideally need a
surrogate of the test set, which identifies the right model complexity that
minimizes the test error. As anticipated previously, the simplest (and most
popular) operation is to split the training set into two sub-datasets to
deliver a smaller training set and a validation set
$\mathcal{D}_{val}=\{x_{i},y_{i}\}$, as shown in Fig. 1 (top). The
cardinality of the sub-sampled training and validation sets are respectively
$|\hat{D}_{train}|=n-m$ and $|\hat{D}_{val}|=m$. Now, we define the expected
value and variance of the prediction error over unseen points respectively as
$\delta=\mathrm{E}[\mathcal{L}(f_{\theta}(x),y)]$ and
$\sigma^{2}=\mathrm{Var}[\mathcal{L}(f_{\theta}(x),y)]$. We also refer to the
mean loss computed over the validation set as
$\mathcal{L}(f_{\theta},\mathcal{D}_{val})=\frac{1}{m}\sum^{m}_{i=1}\mathcal{L}(f_{\theta}(x_{i}),y_{i})$.
By applying the linearity property of the expectation and the previously
defined relationships, we derive that
$\mathrm{E}[\mathcal{L}(f_{\theta},\mathcal{D}_{val})]=\delta$,
$\mathrm{Var}[\mathcal{L}(f_{\theta},\mathcal{D}_{val})]=\frac{\sigma^{2}}{m}$,
and $\mathrm{Std}=\mathcal{O}(\frac{1}{\sqrt{m}})$. Finally, we can express
that the expected error on the validation set is proportional to the error
scored over an unseen test sample plus a term depending on the number of
samples in the validation split, or more formally
$\mathrm{E}[\mathcal{L}(f_{\theta},\mathcal{D}_{val})]=\delta\pm\mathcal{O}(\frac{1}{\sqrt{m}})$.
The straightforward consequence is that a small validation set does not
provide a good estimate of the test error; hence, it is unreliable for HP
selection.
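A quick Monte Carlo illustration of the $\mathcal{O}(1/\sqrt{m})$ scaling, with per-sample losses drawn from a toy distribution of mean $\delta$ and standard deviation $\sigma$; the Gaussian choice is ours, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
delta, sigma = 1.0, 0.5  # toy mean and std of the per-sample loss

def val_loss_std(m, trials=20000):
    """Std of the mean validation loss over m samples, estimated by simulation."""
    losses = rng.normal(delta, sigma, size=(trials, m)).mean(axis=1)
    return losses.std()

s100, s400 = val_loss_std(100), val_loss_std(400)
# Quadrupling m roughly halves the std, i.e., Std = O(1/sqrt(m))
assert abs(s100 / s400 - 2.0) < 0.2
```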
#### Motivation.
In classification problems where the independent and identically distributed
(IID) assumption holds, the search for HPs is less challenging since the
overlapping among training and testing sets makes the prediction of
generalization relatively easy. In other words, if $\mathcal{D}_{train}$, and
$\mathcal{D}_{test}$ are sampled from the same distribution, the expected
prediction error $\delta$ on unseen test points is going to be proportional to
the prediction error over the training set, i.e.,
$\delta\approx\mathcal{L}(f_{\theta},\mathcal{D}_{train})$. In Sec. 4.1, we
further discuss this claim and empirically validate it in Fig. 3. On the
contrary, distribution shifts, possibly caused by a plethora of
factors such as corruptions [19] or lack of data [52], induce the so-called
out-of-distribution (OOD) learning problems. In this work, we focus on OOD
scenarios caused by sample scarcity. As demonstrated in the previous
paragraph, the number of samples available in the validation set strongly
impacts the expected prediction error and can bias the search for proper HPs.
The significance of this observation is the main motivation guiding our work,
which aims to eliminate the dependency on validation sets and make the search
for LR and WD more robust. Our paper aims to derive a robust recipe for
practically predicting HP generalization in IID and OOD settings.
### 3.2 Working Principle
#### Phases of learning.
We rely on a recently introduced theoretical framework that explains
representation learning and the recently observed behavior of grokking [41], a
phenomenon where models generalize long after overfitting their training set.
Precisely, Liu et al. [33] observe, via a macroscopic analysis of phase
diagrams describing learning performance across HPs, four learning phases: i)
comprehension, ii) grokking, iii) memorization, and iv) confusion. The last
two phases correspond to the well-known overfitting and underfitting
scenarios, while comprehension to standard representation learning and
grokking to delayed generalization.222Note that grokking is mostly observed in
algorithmic tasks, but it can also be induced in image classification
problems [33, 34]. We leverage the following observations to design Twin [33]:
* •
Observation 1 (O1): The comprehension, grokking, and memorization phases all
require the learner to reach a sufficiently low training error.
* •
Observation 2 (O2): Out of the three phases of O1, only the comprehension and
grokking configurations manage to reach a sufficiently low testing error.
* •
Observation 3 (O3): Representation learning (comprehension and grokking)
occurs only in a “Goldilocks zone” between memorization and confusion.
#### Intuition.
Our goal is to predict generalizing configurations of LR and WD from a defined
hypothesis space. Configurations that lie in the comprehension or grokking
areas provide the best generalization performance. However, the definitions to
classify learning phases, as proposed in [33], leverage the validation error,
a metric that we aim to avoid. According to O1, a low training error excludes
the confusion (underfitting) phase. Monitoring the training loss can hence
identify configurations that underfit the training objective. However, O2
predicts that only comprehension and grokking reach low testing errors.
Reasonably, the training loss alone cannot discern overfitting from
generalizing solutions. To identify memorization within the hypothesis space,
we leverage O3 and recognize that the norm of the network’s parameters
provides a suitable metric for assessing the transition from confusion to
memorization, passing through comprehension. High WD strongly penalizes the
parameter’s norm, leading to confusion, while low WD causes memorization since
the model is poorly regularized. In summary, we employ the training loss as a
proxy to identify underfitting configurations. Out of the remaining models, we
expect the generalizing configurations to be the ones with the lowest norm of
the parameters. Fig. 4 strongly supports our hypothesis and shows the
predictive power of the parameter norm related to model generalization. We
next describe Twin’s pipeline in practice.
Figure 2: Twin overview. Twin employs a gradient-based optimizer and a trial
scheduler to perform a grid search across the LR-WD space. Twin logs train-
loss and parameter-norm matrices to identify the network with the lowest norm
within the fitting region. The parameter norm within this region is a good
predictor of generalization (right plot). In this figure, we show as an
example a WRN-16-10 trained on ciFAIR-10.
### 3.3 Pipeline
We show the overview of Twin in Fig. 2. Twin performs a grid search over the
LR-WD space and optimizes deep networks via gradient-based methods. Trials are
scheduled according to a first-in-first-out scheduler (no early stopping) or a
successive-halving-based scheduler that reduces the computational
burden. Then, a simple cascade of segmentation and filtering modules
identifies the training loss region where generalization or overfitting
happens. Within this area, the network with the smallest parameter norm is
selected.
#### Grid search and trial scheduler.
Twin runs $N_{\alpha}\cdot N_{\lambda}$ trials sampled by default from a grid
of equally spaced points in logarithmic space for both LR and WD. However,
Twin also supports different grid densities as ablated in Sec. 4.5.
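A sketch of such a grid; the ranges and $N_{\alpha}=N_{\lambda}=10$ below are hypothetical, not the paper's actual search space:

```python
import numpy as np
from itertools import product

# Hypothetical search ranges; 10 points each, equally spaced in log space
lrs = np.logspace(-4, 0, num=10)    # learning rates 1e-4 .. 1e0
wds = np.logspace(-6, -2, num=10)   # weight decays  1e-6 .. 1e-2

trials = list(product(lrs, wds))    # N_alpha * N_lambda trials
assert len(trials) == 100
# Consecutive grid points are equally spaced in log10
assert np.allclose(np.diff(np.log10(lrs)), np.log10(lrs[1]) - np.log10(lrs[0]))
```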
We experiment with two types of trial schedulers: 1) a default
first-in-first-out (FIFO) scheduler without any form of automated trial
stopping and 2) HyperBand (HB) [30]. Other trial schedulers (e.g., the median
stopping rule [13])
and search strategies (e.g., random search) are left for future exploration.
Default HP search over validation sets usually runs only a few HP
configurations to completion, out of which the optimal one is picked.
Conversely, as anticipated in Sec. 3.2, Twin needs to detect the region in the
LR-WD space with low training loss. Stopping trials too aggressively would
make the loss landscape noisy and challenging to segment. To this end, we
adapted the HB scheduler to
our needs. More precisely, we run the original HB algorithm until only a
certain percentage $X$ of trials is still alive and continue such trials
without further stopping. In other words, we systematically pass the most
promising trials to the bottom rung. The asynchronous version of HB,
Asynchronous Successive Halving Algorithm (ASHA) [29], employs a similar
mechanism to exploit parallelism. We opt for this solution rather than
increasing the minimum resources per trial to save compute resources. We refer
to our adapted HB as HB$X\%$, where $X\%$ of the trials run to completion. In
Sec. 4.5, we ablate on the impact of different levels of early stopping, i.e.,
$X\%\in\{25\%,12\%\}$, and show that Twin supports aggressive stopping well.
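A minimal sketch of this adapted scheduler: successive halving by training loss until only a chosen percentage of trials survives, after which the survivors run to the full budget without further stopping. All interfaces (`trials`, `step`) and the rung sizes below are hypothetical:

```python
import math

def hb_until_x_percent(trials, step, budget, eta=3, survivors_pct=0.25):
    """Successively halve (keep the best 1/eta by training loss) until only
    survivors_pct of the trials remain, then run the survivors to the full
    budget. `trials` maps a trial id to its HP config; `step(cfg, epochs)`
    returns the training loss after `epochs` (hypothetical interfaces)."""
    alive = list(trials)
    target = max(1, math.ceil(len(trials) * survivors_pct))
    epochs = max(1, budget // 8)  # initial rung (arbitrary small resource)
    while len(alive) > target and epochs < budget:
        losses = {t: step(trials[t], epochs) for t in alive}
        alive = sorted(alive, key=losses.get)[:max(target, len(alive) // eta)]
        epochs *= eta
    return {t: step(trials[t], budget) for t in alive}  # finish survivors

# Toy check: trial quality is fixed; lower id -> lower loss at every budget
trials = {i: {"lr": 10 ** -i} for i in range(8)}
step = lambda cfg, ep: -math.log10(cfg["lr"]) / ep
final = hb_until_x_percent(trials, step, budget=64, survivors_pct=0.25)
assert set(final) == {0, 1}
```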
#### Region segmentation and selection.
The grid search guided by the trial scheduler yields two log matrices needed
to find the optimal configuration: training losses and parameter norms. We
refer to the first matrix as $\Psi$ and the second as $\Theta$. Note that the
matrix elements are logged at different epochs when early stopping is
employed. In the case of the training losses, we average the values of the
last five training epochs to have a more reliable estimate. To identify the
area of the LR-WD space where the best fitting happens, we treat the loss
matrix as an image and apply the popular Quickshift image segmentation
algorithm [51]. Quickshift proved to be more robust than hand-crafted
thresholds that fail to generalize across diverse datasets and model
configurations. Quickshift needs two input parameters, namely kernel_size and
max_dist, which control the segmentation granularity. We set both parameters
to the square root of the largest grid side to have a more fine-grained
segmentation. We ablate on this choice in Sec. 4.5. Practically, we first
filter possible outliers, e.g., exorbitant loss values or NaN, stemming from
instabilities during training caused by extreme parameter configurations. We
consider as outliers all points having a z-score larger than two. We then scale
the values of $\Psi$ to 0-1 range by minmax normalization, make the lowest
loss the highest value ($|1-\texttt{minmax}(\Psi)|$), and run Quickshift
segmentation. We finally take the mean for each predicted region and apply the
argmax to retrieve the best cluster index.
#### Output configuration.
The selected grid region is converted into a binary mask and applied to the
logged $\Theta$ matrix. Out of this sub-region, the argmin function returns
the final output configuration with the lowest norm.
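This final step reduces to a masked argmin over the norm matrix; a sketch with hypothetical names:

```python
def pick_configuration(theta, region_mask, lrs, wds):
    """Within the selected region, return the (LR, WD) pair whose logged
    parameter norm is lowest. All names are illustrative."""
    best, best_norm = None, float("inf")
    for i, row in enumerate(theta):
        for j, norm in enumerate(row):
            if region_mask[i][j] and norm < best_norm:
                best, best_norm = (lrs[i], wds[j]), norm
    return best
```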
Figure 3: Overview of quantitative results. Twin scores an overall 1.3% MAE
against the Oracle pipeline across 34 different dataset-model configurations
when using a FIFO scheduler. Twin closely matches the Oracle in IID and OOD
scenarios, while SelTS fails to correctly predict HPs that generalize in OOD
cases.
[Figure 4 panel labels: dataset columns flowers, CUB, ciFAIR-10, CLaMM, EuroSAT, ISIC 2018, RetinaMNIST, CIFAR-100, OrganSMNIST, CIFAR-10, PathMNIST; rows Loss, R. Segm., R. Sel., $||\theta||$, Test loss.]
Figure 4: Qualitative results. Visualization of the various steps of Twin in
the LR-WD space (first four rows) and the relationship between the selected
parameter norms and test loss (bottom row). The dashed green line represents
the lowest achievable test loss.
## 4 Experiments
### 4.1 Overview
#### Experimented domains.
In selecting datasets for experimentation, we aimed to cover diverse domains
to thoroughly assess Twin’s capabilities. Firstly, our evaluation encompasses
small datasets, a scenario particularly suitable for Twin, given that
traditional pipelines struggle with HP selection due to the limited dimensions
of validation sets, as explained in the paragraph dedicated to cross-
validation of Sec. 3.1. Additionally, we explore Twin’s potential in the
medical field, where its ability to mitigate the need for validation sets is
particularly valuable, considering the complexities and regulations inherent
in healthcare settings. Finally, we examine Twin’s versatility in dealing with
natural images, arguably the most widely studied domain within the computer
vision community.
#### Baselines.
To assess the relative effectiveness of Twin, we introduce three different
baselines. The Selection from Training Set (SelTS) baseline selects the HP
configuration, which scores the lowest loss on the training set. The Selection
from Validation Set (SelVS) is the more traditional reference point, where HP
optimization is conducted exclusively on the validation set. The validation
set can either be subsampled from the training set, as will be the case
for small and natural image datasets, or collected externally, like the case
of medical data. These two cases indeed correspond to the upper two schemes of
Fig. 1. SelVS can also be performed with different trial schedulers, e.g.,
with early stopping. Lastly, the Oracle represents the ideal but unrealistic
scenario of selecting HPs directly from the test set. The Oracle always runs a
FIFO scheduler. When Twin performs at its absolute best, it yields an MAE of
0% vs the Oracle. With FIFO schedulers, all baselines select HPs according to
the relevant metric at the last epoch; with early stopping, according to its
average over the last five epochs.
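This selection rule can be sketched as (hypothetical helper; lower is better for a loss metric):

```python
def selection_metric(epoch_losses, early_stopped):
    """Last-epoch value under a FIFO scheduler; mean of the last five
    logged epochs under early stopping. Name and signature illustrative."""
    if early_stopped:
        tail = epoch_losses[-5:]
        return sum(tail) / len(tail)
    return epoch_losses[-1]
```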
Method | Model | CUB | ISIC 2018 | EuroSAT | CLaMM | Average
---|---|---|---|---|---|---
Oracle | EN-B0 | 67.2 | 66.8 | 91.0 | 65.3 | 72.6
SelTS | EN-B0 | 62.4 | 64.0 | 79.8 | 56.8 | 65.8
SelVS | EN-B0 | 67.0 | 65.1 | 86.6 | 58.0 | 69.2
Twin | EN-B0 | 66.2 | 66.4 | 89.8 | 62.6 | 71.3
Oracle | RN50 | 72.0 | 69.4 | 90.2 | 74.6 | 76.6
SelTS | RN50 | 57.3 | 65.4 | 83.8 | 68.2 | 68.7
SelVS† | RN50 | 70.8 | 64.5 | 90.6 | 70.2 | 74.0
Twin | RN50 | 70.1 | 68.8 | 89.2 | 73.8 | 75.5
Oracle | RNX101 | 73.0 | 66.7 | 88.6 | 75.2 | 75.9
SelTS | RNX101 | 54.2 | 59.8 | 86.8 | 62.5 | 65.8
SelVS | RNX101 | 72.1 | 62.4 | 90.0 | 70.1 | 73.7
Twin | RNX101 | 73.0 | 65.8 | 87.6 | 75.2 | 75.4
Table 1: Small datasets. The evaluation metric is the balanced test accuracy
(%) [5]. We allocate 100 and 36 trials for EN-B0/RN50 and RNX101,
respectively. Twin and SelVS respectively employ the HB25% and ASHA
schedulers. †Values are taken from [5].
#### Quantitative results.
To provide an overview of the quantitative results, we compare Twin against
our lower (SelTS) and upper (Oracle) bounds in Fig. 3. In particular, we show
the performance per dataset with FIFO schedulers averaged across all
architectures. The order along the x-axis represents the number of images per
class per dataset, which increases from left to right. As anticipated in Sec.
3.1, the training loss alone as an HP-selection metric (SelTS) falls short as
the alignment between training and testing distributions decreases, i.e., when
the IID assumption does not hold because the training set does not fully
represent the broader testing set [49, 52]. On the other hand, we find nearly
optimal LR-WD configurations across all dataset scales by considering the
regularization strength and the task-learning performance, as done with Twin
(Sec. 3.2).
#### Qualitative results.
In Fig. 4, we present qualitative results and empirical evidence supporting
Twin’s ability to predict generalizing HP configurations. We demonstrate
Twin’s consistency across 11 cases involving different models (ConvNets, MLPs,
and transformers), optimizers (SGD, Adam), and grid densities (100 and 49
trials). In Fig. 4, the first row displays the training loss at the last
epoch. Region segmentation and selection steps using Quickshift are shown in
the second and third rows, respectively. The fourth row illustrates the mask
application to the parameter norm matrix, while the last row depicts the
relationship between the parameter norm of the selected sub-region and the
test loss. The dashed green line represents the lowest loss achieved by the
Oracle baseline. Indeed, despite the best-fitting region showcasing various
patterns and positions in the LR-WD space depending on the dataset-
architecture configuration, Twin can identify it robustly. Furthermore, it is
visible that a strong (almost linear) relationship exists between the
parameter norms extracted from the identified region and the test loss.
### 4.2 Small Datasets
#### Datasets.
We select the benchmark introduced in [5], which contains five different
datasets spanning various domains and data types. In particular, the benchmark
contains sub-sampled versions of ciFAIR-10 [2], EuroSAT [18], CLaMM [45], all
with 50 samples per class, and ISIC 2018, with 80 samples per class [9]. Also,
the widely known CUB dataset with 30 images per category is included [53]. The
spanned image domains of this benchmark hence include RGB natural images
(ciFAIR-10, CUB), multi-spectral satellite data (EuroSAT), RGB skin medical
imagery (ISIC 2018), and grayscale hand-written documents (CLaMM). For
EuroSAT, an RGB version is also available. Finally, we include the Oxford
Flowers dataset in our setup, which comprises 102 categories with 20 images
[40].
#### Implementation details.
Along with the popular ResNet-50 (RN50), which was originally evaluated on the
benchmark [5], we also employ EfficientNet-B0 (EN-B0) [46] and ResNeXt-101 (32
$\times$ 8d) (RNX101) [57] to cover three classes of model scales,
respectively tiny, small, and base [47], with 5.3M, 25.6M, and 88.8M
parameters. For the low-resolution images of ciFAIR-10, we employ a Wide
ResNet 16-10 (WRN-16-10). We refer the reader to the Appendix for all the
details regarding training-related parameters. We perform square grid
searches of 100 trials for RN50 and EN-B0 and 36 trials for RNX101. We set the
LR and WD intervals for the grid search to $[5\cdot 10^{-5},5\cdot 10^{-1}]$
to span four orders of magnitude. When training from scratch, we report
results for Twin and SelVS with early stopping, which respectively employ
HB25% and ASHA as schedulers, with the same number of trials. For the
parameters, we follow [5] and keep a halving rate of two and a grace period of
5% of the total epoch budget.
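Constructing such a grid is straightforward; a sketch, assuming a 10$\times$10 layout for the 100-trial searches:

```python
import math

def log_grid(lo, hi, n):
    """n log-spaced values from lo to hi inclusive."""
    step = (math.log10(hi) - math.log10(lo)) / (n - 1)
    return [10 ** (math.log10(lo) + k * step) for k in range(n)]

# 10x10 = 100 trials spanning four orders of magnitude, as for RN50/EN-B0;
# a 6x6 grid (36 trials) would correspond to the RNX101 budget.
lrs = log_grid(5e-5, 5e-1, 10)
wds = log_grid(5e-5, 5e-1, 10)
trials = [(lr, wd) for lr in lrs for wd in wds]
```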
[Figure 5 panels: (left) normalized Oracle balanced accuracy (0.0-1.0), Transfer vs. Scratch, on CUB, Flowers, EuroSAT-RGB, ISIC 2018, CLaMM; (right) MAE vs Oracle ($\downarrow$) for Twin (Transfer) with the FIFO, HB25%, and HB12% schedulers.]
Figure 5: Transfer learning. (Left) Normalized balanced accuracy of the Oracle
with ImageNet pre-trained (top) or from-scratch RN50 (bottom). Feature overlap
makes the best generalization appear with lower regularization, and Twin (with
EN-B0, RN50, RNX101) plus early stopping identifies this region by scoring a
low MAE (right).
#### Results from scratch.
As visible in Tab. 1, Twin nearly matches the Oracle balanced accuracy by
scoring an MAE of less than 1.5% across different datasets and networks. Twin
outperforms the traditional HP selection from the validation sets (SelVS) by
scoring 71.3% vs 69.2% with EN-B0, 75.5% vs 74.0% with RN50, and 75.4% vs
73.7% with RNX101 when averaging performance across the CUB, ISIC 2018,
EuroSAT, and CLaMM datasets. Indeed, SelVS relies on a small validation set,
which may lead to sub-optimal HPs given the higher variability of the
prediction error. Furthermore, despite aggressive trial stopping making the
optimal region-segmentation step more challenging, Twin still finds semi-
optimal LR-WD configurations and hence is scalable to computationally heavy
search tasks that would be prohibitive without early stopping strategies.
#### Results with transfer learning.
When dealing with small datasets, it is common practice to start from a
network pre-trained on a larger amount of data (e.g., ImageNet [42]).
Therefore, we also experiment with transfer learning and repeat the
optimization runs with checkpoints initialized from ImageNet. In Fig. 5
(left), we notice that the generalization of networks as a function of the LR-
WD space may differ from when training from scratch, the main cause being the
degree of overlap between the source and target domains. As expected, with
strong class (CUB and Flowers) or feature (EuroSAT RGB, ISIC 2018) overlap,
the best-generalizing region shifts towards weaker regularization. In these
cases Twin struggles, as visible from the higher MAE ($\sim$5%) on CUB,
EuroSAT RGB, and ISIC 2018. As a solution, we employ early stopping to
terminate the most regularized trials, whose training loss decays more
slowly. In Fig. 5 (right), it is indeed visible that Twin with
HB12% reduces the MAE vs the Oracle to $\leq$ 1% for CUB, EuroSAT RGB, and
ISIC 2018. Conversely, in the case of CLaMM, which has no class and poor
feature overlap (hand-written documents), the pre-trained checkpoints do not
alter the LR-WD landscape and enable Twin to find good HP configurations (< 2%
MAE) with the FIFO and HB25% schedulers. In summary, when applying transfer
learning, it is critical to consider the level of domain overlap to select the
more suitable Twin configuration.
### 4.3 Medical Images
Method | Path | Derma | OCT | Pneum. | Retina | Breast | Blood | Tissue | OrganA | OrganC | OrganS | Avg.
---|---|---|---|---|---|---|---|---|---|---|---|---
Oracle | 91.9 | 80.8 | 79.8 | 92.8 | 52.5 | 89.7 | 97.8 | 73.2 | 95.9 | 94.5 | 84.4 | 84.8
SelTS | 91.9 | 79.7 | 77.3 | 90.7 | 47.8 | 85.9 | 97.2 | 72.8 | 95.0 | 94.0 | 82.8 | 83.2
SelVS | 90.5 | 80.3 | 78.0 | 92.5 | 46.0 | 85.3 | 96.9 | 72.8 | 94.9 | 94.4 | 83.5 | 83.2
Twin | 88.5 | 80.8 | 78.6 | 88.9 | 46.7 | 86.5 | 97.3 | 72.7 | 95.3 | 93.7 | 83.4 | 82.9
Table 2: Medical images. The performance is the test accuracy (%). We allocate
a 100-trial budget per dataset. Both Twin and SelVS employ the FIFO scheduler.
#### Datasets.
We leverage the MedMNIST v2 benchmark [59] to test Twin on medical imaging
tasks. We focus on 2D classification and select 11 out of 12 binary/multi-
class or ordinal regression tasks of the MedMNIST2D sub-collection, which
covers primary data modalities (e.g., X-ray, OCT, Ultrasound, CT, Electron
Microscope) and data scales (from 800 to 100,000 samples). The MedMNIST2D
benchmark provides held-out validation sets to allow HP tuning. The data
diversity of this benchmark presents a significant challenge. We select the
testbed with the images pre-processed to $28\times 28$ resolution out of the
full benchmark to maintain the total computational load under a reasonable
budget.
#### Implementation details.
We use the ResNet-18 (RN18) originally employed in the benchmark [59], which
consists of four stages, as the version developed for ImageNet classification
[16], but with a modified stem more suitable for low-resolution images. We
keep the same Twin configurations tested on small datasets (Sec. 4.2), except
for the trial schedulers that we default to FIFO for both Twin and SelVS.
Refer to the Appendix for additional details on the implementation.
#### Results.
We summarize the empirical results over MedMNIST2D in Tab. 2. The Oracle
scores an upper-bound test accuracy of 84.8% averaged across the 11 tasks. Twin
is comparable to the traditional SelVS (82.9% vs 83.2%). Note also that Twin
slightly improves its performance in this domain when early stopping is
employed (Sec. 4.5). Twin finds proper HPs and simultaneously leads to a cost-
effective solution by reducing data collection and labeling expenses
associated with the $\sim$10% of samples per dataset originally allocated for
validation.
### 4.4 Natural Images
Dataset | Model | # Trials | Aug. | Oracle | SelTS | SelVS | Twin
---|---|---|---|---|---|---|---
C10 | MLP-4-256 | 100 | + | 66.1 | 65.1 | 65.9 | 65.4
C10 | CCT-2/3×2 | 49 | +++ | 87.3 | 87.3 | 87.3 | 86.7
C10 | RNX11 | 100 | + | 90.7 | 90.2 | 90.6 | 89.8
C10 | RN20 | 100 | + | 92.7 | 91.8 | 92.4 | 90.5
C10 | WRN-40-2 | 49 | ++ | 94.0 | 93.7 | 93.3 | 93.6
C100 | MLP-4-512 | 100 | ++ | 35.4 | 35.0 | 35.1 | 34.9
C100 | CCT-2/3×2 | 49 | +++ | 65.0 | 64.0 | 65.0 | 65.0
C100 | RNX11 | 100 | + | 68.8 | 67.7 | 68.6 | 66.8
C100 | RN20 | 100 | + | 69.8 | 67.6 | 69.0 | 68.2
C100 | WRN-40-2 | 49 | ++ | 74.2 | 74.2 | 72.8 | 72.8
TinyIN | CVT-7/8 | 36 | +++ | 58.0 | 58.0 | 58.0 | 58.0
TinyIN | WRN-16-4 | 49 | ++ | 61.8 | 60.8 | 61.8 | 61.3
Average | | | | 72.0 | 71.3 | 71.6 | 71.1
Table 3: Natural Images. The performance is reported in test accuracy (%).
FIFO scheduler is employed. For the strength of data augmentation, refer to
the Appendix.
#### Datasets.
Finally, we test Twin on popular natural-image datasets such as CIFAR-10/100
[25] and Tiny Imagenet [27]. These datasets contain 50,000/100,000 training
samples from 10 to 200 classes, with an image resolution of $32\times 32$ for
CIFAR and $64\times 64$ for TinyImagenet.
#### Implementation details.
For CIFAR datasets, we employ ConvNets, transformers, and feed-forward
networks. As ConvNets, we select ResNet-20 (RN20) [16], ResNeXt-11 (4 x 16d)
(RNX11) [57], and a Wide ResNet of depth 40 and width two (WRN-40-2) [63]. As
transformers, we train architectures specifically designed for CIFAR, such as
the Compact Convolutional Transformer with two encoder and convolutional-stem
layers (CCT-2/3$\times$2) [15]. We employ a Multi-Layer Perceptron (MLP) with
batch normalization, ReLU activations, and hidden layers of constant width as
feed-forward networks. We set the depth of the MLP to four layers and the
width to 256 units for CIFAR-10 and 512 units for CIFAR-100 (MLP-4-256 and
MLP-4-512). On TinyImagenet, we train a WRN-16-4 [63] and a Compact Vision
Transformer [15] with seven encoder layers and a patch size of 8 (CVT-7/8). We
vary the data augmentation strength from base to medium to strong {+, ++,
+++}. Refer to the Appendix for additional details.
#### Results.
As shown in Tab. 3, Twin is, on average, comparable to SelVS (71.1% vs 71.6%),
despite not having access to the validation set, and SelTS (71.1% vs 71.3%),
while considering the regularization strength along with the training loss.
Remarkably, we also notice that Twin works properly for transformers and MLPs,
confirming that the intuition behind Twin translates well to various network
architectures. Similarly, Twin is agnostic to the data-augmentation strength.
kernel_size | max_dist | MAE
---|---|---
$N_{\alpha}$ | $N_{\alpha}$ | 9.4
$\sqrt{N_{\alpha}}$ | $N_{\alpha}$ | 4.2
$N_{\alpha}$ | $\sqrt{N_{\alpha}}$ | 1.4
$\sqrt{N_{\alpha}}$ | $\sqrt{N_{\alpha}}$ | 1.3
(a) Quickshift
$\alpha$ | $\lambda$ | MAE
---|---|---
[::1] | [::1] | 1.3
[::1] | [::2] | 1.4
[::2] | [::1] | 1.8
[::2] | [::2] | 1.5
[::1] | [::3] | 1.5
[::3] | [::1] | 1.2
[::3] | [::3] | 1.2
(b) Grid density
Table 4: Ablation studies concerning: (a) Quickshift controlling the
segmentation density and (b) the robustness of Twin against the grid density.
The MAE $(\downarrow)$ is computed against the Oracle baseline.
| Small Datasets | Medical Datasets
---|---|---
Scheduler | EN-B0 | RN50 | RNX101 | RN18
FIFO | 71.3 | 75.2 | 75.4 | 82.9
HB25% | 71.3 | 75.5 | 75.4 | 83.8
HB12% | 71.5 | 74.9 | 69.8 | 83.2
Table 5: Ablation on early stopping for Twin. The performance is the average
balanced test accuracy (%) on small datasets and test accuracy (%) on medical
datasets.
### 4.5 Ablations
#### Quickshift.
We ablate on the impact of kernel_size and max_dist from Quickshift in Tab.
4(a). In this analysis, we assume a square grid ($N_{\alpha}=N_{\lambda}$)
and hence report the parameters as a function of $N_{\alpha}$ only. We
consider max($N_{\alpha}$, $N_{\lambda}$) when the grid is not square. In
Tab. 4(a), it is visible that setting max_dist as large as the size of the
grid side (i.e., $N_{\alpha}$) leads to poor results because Quickshift tends
to segment the loss matrix into an insufficient number of regions. By fixing
kernel_size and max_dist to $\sqrt{N_{\alpha}}$, we increase the segmentation
density which is necessary to identify the best region in terms of task
performance.
#### Grid density.
We systematically test the robustness of Twin against different grid
densities. We slice the log-spaced intervals of LR ($\alpha$) and/or WD
($\lambda$) by sampling values every one, two, or three steps and refer to
such operations with the python slicing notation [::x]. We average the error
of Twin (FIFO scheduler) against the Oracle across 34 different configurations
(as Fig. 3) and show the analysis in Tab. 4(b). The MAE remains almost
unaffected and close to the 1.3% obtained with the default settings, showing
that Twin supports various grid densities.
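The [::x] notation corresponds directly to Python list slicing; a sketch, assuming a 10-value log-spaced axis as in the default grids:

```python
# Sampling every x-th value from a full log-spaced axis, as in Tab. 4(b).
# The axis construction here is an illustrative assumption.
full_lrs = [5e-5 * (10 ** (k * 4 / 9)) for k in range(10)]  # 10 values

sparse_2 = full_lrs[::2]  # every second value -> 5 trials per axis
sparse_3 = full_lrs[::3]  # every third value  -> 4 trials per axis
```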
#### Early stopping.
In Tab. 5, we ablate the impact of the early-stopping scheduler. As
visible, Twin effectively accommodates HB25% or HB12%. Practitioners could
safely default to either of the two, with HB25% slightly ahead. The drop in
performance for RNX101 with HB12% is due to the small $7\times 7$ grid
employed. We refer to the Appendix for additional comments and guidance.
#### Optimizers and schedulers.
In all experiments throughout the paper, we employed SGD with momentum (SGDM)
and cosine scheduler as standard practice in deep learning. In this paragraph,
we ablate on the possibility of using different optimization setups. In
particular, we test plain SGD in six configurations involving RN20 on
CIFAR-10/100, WRN-16-10 on ciFAIR-10, and RN50 on EuroSAT, ISIC 2018, and
CLaMM. We also test RN20 with a piece-wise LR scheduler. Finally, we train
ConvNets, MLPs, and transformers with either Adam [23] or AdamW [36], two
popular choices when training such models. In Tab. 6, we notably observe that
Twin also closely follows the Oracle in terms of (mean) absolute error (M)AE
in such alternative optimization setups.
Dataset | Model | Optim. Setup | Performance | (M)AE
---|---|---|---|---
{C10, C100, ES, I2018, c10, CM} | {WRN-16-10, RN20, RN50} | SGD | 75.0 | 1.8
{C10, C100, ES, I2018, c10, CM} | {WRN-16-10, RN20, RN50} | SGDM | 75.6 | 1.2
C100 | CVT-7/8 | AdamW | 68.8 | 0.0
C10 | MLP-4-256 | Adam | 65.5 | 0.6
c10 | WRN-16-10 | Adam | 54.5 | 0.6
C10 | RN20 | SGDM (piece-wise) | 91.0 | 1.7
C10 | RN20 | SGDM (cosine) | 90.5 | 2.2
Table 6: Ablation concerning the usage of different optimization setups.
Performance is the (balanced) test accuracy. The (M)AE $(\downarrow)$ is
computed against the Oracle baseline.
## 5 Conclusions
We introduced Twin, a simple yet effective HP tuning approach that reliably
predicts learning rate and weight decay without using validation sets. Twin is
not only beneficial in practice for simplifying model selection pipelines but
also provides additional insights into the predictability of generalization
for deep networks. Twin showed robust performance across a broad suite of
experimental scenarios, including varying dataset sizes, imaging domains,
architectures, model scales, and training setups. In this paper, we
exclusively focused on image classification problems. However, future work
could explore the application of Twin to other computer vision tasks and beyond.
Moreover, there is potential for extending Twin to alternative regularization
strategies beyond L2 penalty.
## References
* [1] Barz, B., Denzler, J.: Deep learning on small datasets without pre-training using cosine loss. In: IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) (2020)
* [2] Barz, B., Denzler, J.: Do we train on test data? purging CIFAR of near-duplicates. Journal of Imaging 6(6) (2020). https://doi.org/10.3390/jimaging6060041, https://www.mdpi.com/2313-433X/6/6/41
* [3] Benton, G., Finzi, M., Izmailov, P., Wilson, A.G.: Learning invariances in neural networks from training data. Advances in neural information processing systems (2020)
* [4] Brigato, L., Barz, B., Iocchi, L., Denzler, J.: Tune it or don’t use it: Benchmarking data-efficient image classification. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 1071–1080 (2021)
* [5] Brigato, L., Barz, B., Iocchi, L., Denzler, J.: Image classification with small datasets: overview and benchmark. IEEE Access (2022)
* [6] Brigato, L., Iocchi, L.: A close look at deep learning with small data. In: 2020 25th International Conference on Pattern Recognition (ICPR) (2021)
* [7] Brigato, L., Mougiakakou, S.: No data augmentation? alternative regularizations for effective training on small datasets. In: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops (2023)
* [8] Bruintjes, R.J., Lengyel, A., Rios, M.B., Kayhan, O.S., Zambrano, D., Tomen, N., van Gemert, J.: Vipriors 3: Visual inductive priors for data-efficient deep learning challenges. arXiv preprint arXiv:2305.19688 (2023)
* [9] Codella, N., Rotemberg, V., Tschandl, P., Celebi, M.E., Dusza, S., Gutman, D., Helba, B., Kalloo, A., Liopyris, K., Marchetti, M., et al.: Skin lesion analysis toward melanoma detection 2018: A challenge hosted by the international skin imaging collaboration (ISIC). arXiv preprint arXiv:1902.03368 (2019)
* [10] Cubuk, E.D., Zoph, B., Mane, D., Vasudevan, V., Le, Q.V.: Autoaugment: Learning augmentation strategies from data. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (2019)
* [11] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al.: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020)
* [12] Elsken, T., Metzen, J.H., Hutter, F.: Neural architecture search: A survey. The Journal of Machine Learning Research (2019)
* [13] Golovin, D., Solnik, B., Moitra, S., Kochanski, G., Karro, J.E., Sculley, D. (eds.): Google Vizier: A Service for Black-Box Optimization (2017)
* [14] Goyal, P., Dollár, P., Girshick, R., Noordhuis, P., Wesolowski, L., Kyrola, A., Tulloch, A., Jia, Y., He, K.: Accurate, large minibatch sgd: Training imagenet in 1 hour. arXiv preprint arXiv:1706.02677 (2017)
* [15] Hassani, A., Walton, S., Shah, N., Abuduweili, A., Li, J., Shi, H.: Escaping the big data paradigm with compact transformers. arXiv preprint arXiv:2104.05704 (2021)
* [16] He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016)
* [17] He, T., Zhang, Z., Zhang, H., Zhang, Z., Xie, J., Li, M.: Bag of tricks for image classification with convolutional neural networks. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (2019)
* [18] Helber, P., Bischke, B., Dengel, A., Borth, D.: EuroSAT: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 12(7), 2217–2226 (2019). https://doi.org/10.1109/JSTARS.2019.2918242
* [19] Hendrycks, D., Dietterich, T.G.: Benchmarking neural network robustness to common corruptions and perturbations. In: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019 (2019)
* [20] Heo, B., Chun, S., Oh, S.J., Han, D., Yun, S., Kim, G., Uh, Y., Ha, J.W.: Adamp: Slowing down the slowdown for momentum optimizers on scale-invariant weights. arXiv preprint arXiv:2006.08217 (2020)
* [21] Immer, A., van der Ouderaa, T., Rätsch, G., Fortuin, V., van der Wilk, M.: Invariance learning in deep neural networks with differentiable laplace approximations. Advances in Neural Information Processing Systems (2022)
* [22] Keskar, N.S., Mudigere, D., Nocedal, J., Smelyanskiy, M., Tang, P.T.P.: On large-batch training for deep learning: Generalization gap and sharp minima. arXiv preprint arXiv:1609.04836 (2016)
* [23] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
* [24] Kornblith, S., Shlens, J., Le, Q.V.: Do better imagenet models transfer better? In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 2661–2671 (2019)
* [25] Krizhevsky, A.: Learning multiple layers of features from tiny images (2009)
* [26] Krizhevsky, A., Sutskever, I., Hinton, G.E.: Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems 25 (2012)
* [27] Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N (2015)
* [28] Lengyel, A., Bruintjes, R.J., Rios, M.B., Kayhan, O.S., Zambrano, D., Tomen, N., van Gemert, J.: Vipriors 2: visual inductive priors for data-efficient deep learning challenges. arXiv preprint arXiv:2201.08625 (2022)
* [29] Li, L., Jamieson, K., Rostamizadeh, A., Gonina, E., Ben-tzur, J., Hardt, M., Recht, B., Talwalkar, A.: A system for massively parallel hyperparameter tuning. Conference of Machine Learning and Systems (2020)
* [30] Li, L., Jamieson, K., DeSalvo, G., Rostamizadeh, A., Talwalkar, A.: Hyperband: A novel bandit-based approach to hyperparameter optimization. The journal of machine learning research (2017)
* [31] Li, Y., Hu, G., Wang, Y., Hospedales, T., Robertson, N.M., Yang, Y.: Dada: Differentiable automatic data augmentation. arXiv preprint arXiv:2003.03780 (2020)
* [32] Li, Y., Wei, C., Ma, T.: Towards explaining the regularization effect of initial large learning rate in training neural networks. Advances in Neural Information Processing Systems (2019)
* [33] Liu, Z., Kitouni, O., Nolte, N.S., Michaud, E., Tegmark, M., Williams, M.: Towards understanding grokking: An effective theory of representation learning. Advances in Neural Information Processing Systems 35, 34651–34663 (2022)
* [34] Liu, Z., Michaud, E.J., Tegmark, M.: Omnigrok: Grokking beyond algorithmic data. In: The Eleventh International Conference on Learning Representations, ICLR 2023 (2023)
* [35] Lorraine, J., Vicol, P., Duvenaud, D.: Optimizing millions of hyperparameters by implicit differentiation. In: International conference on artificial intelligence and statistics (2020)
* [36] Loshchilov, I., Hutter, F.: Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101 (2017)
* [37] McMahan, B., Moore, E., Ramage, D., Hampson, S., y Arcas, B.A.: Communication-efficient learning of deep networks from decentralized data. In: Artificial intelligence and statistics (2017)
* [38] Mlodozeniec, B., Reisser, M., Louizos, C.: Hyperparameter optimization through neural network partitioning. arXiv preprint arXiv:2304.14766 (2023)
* [39] Neelakantan, A., Vilnis, L., Le, Q.V., Sutskever, I., Kaiser, L., Kurach, K., Martens, J.: Adding gradient noise improves learning for very deep networks. arXiv preprint arXiv:1511.06807 (2015)
* [40] Nilsback, M.E., Zisserman, A.: Automated flower classification over a large number of classes. In: 2008 Sixth Indian conference on computer vision, graphics & image processing. IEEE (2008)
* [41] Power, A., Burda, Y., Edwards, H., Babuschkin, I., Misra, V.: Grokking: Generalization beyond overfitting on small algorithmic datasets. arXiv preprint arXiv:2201.02177 (2022)
* [42] Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. International Journal of Computer Vision (IJCV) 115(3), 211–252 (2015)
* [43] Schwöbel, P., Jørgensen, M., Ober, S.W., Van Der Wilk, M.: Last layer marginal likelihood for invariance learning. In: International Conference on Artificial Intelligence and Statistics (2022)
* [44] Stone, M.: Cross-validatory choice and assessment of statistical predictions. Journal of the royal statistical society: Series B (Methodological) (1974)
* [45] Stutzmann, D.: Clustering of medieval scripts through computer image analysis: towards an evaluation protocol. Digital Medievalist 10 (2016)
* [46] Tan, M., Le, Q.: Efficientnet: Rethinking model scaling for convolutional neural networks. In: International conference on machine learning (2019)
* [47] Touvron, H., Cord, M., Douze, M., Massa, F., Sablayrolles, A., Jégou, H.: Training data-efficient image transformers & distillation through attention. In: International conference on machine learning (2021)
* [48] Van Laarhoven, T.: L2 regularization versus batch and weight normalization. arXiv preprint arXiv:1706.05350 (2017)
* [49] Vapnik, V.: Principles of risk minimization for learning theory. Advances in neural information processing systems (1991)
* [50] Varoquaux, G., Cheplygina, V.: Machine learning for medical imaging: methodological failures and recommendations for the future. NPJ digital medicine (2022)
|
# Extension of the correspondence principle in relativistic quantum mechanics
K. G. Hernández<EMAIL_ADDRESS>Facultad de Ciencias Naturales y
Matemática, Universidad de El Salvador, Ciudad Universitaria, Av. Mártires y
Héroes del 30 julio, San Salvador, El Salvador, América Central. S. E.
Aguilar<EMAIL_ADDRESS>Facultad de Ciencias Naturales y Matemática,
Universidad de El Salvador, Ciudad Universitaria, Av. Mártires y Héroes del 30
julio, San Salvador, El Salvador, América Central. Instituto de Física
Teórica, Universidade Estadual Paulista (UNESP), R. Dr. Bento Teobaldo Ferraz
271, Barra Funda São Paulo- SP, 01140-070, Brazil. J. Bernal
<EMAIL_ADDRESS>División Académica De Ciencias Básicas,
Universidad Juárez Autónoma de Tabasco, Carretera Cunduacán-Jalpa Km 1,
Cunduacán, Tabasco, México. A.P. 24 C.P. 86690
(August 27, 2024)
###### Abstract
In this paper we apply Bohr’s correspondence principle to analyze the
asymptotic behavior of the Klein-Gordon and Dirac probability densities. It is
found that in the non-relativistic limit, the densities reduce to their
respective classical single-particle probability distributions plus a series
of quantum corrections. The procedure is applied in two basic problems, the
relativistic quantum oscillator and the relativistic particle in a box. The
particle and antiparticle solutions are found to yield the same classical
distribution plus quantum correction terms for the proposed limit. In the
quantum oscillator case, a $\kappa$ parameter modifies the probability
distribution. Its origin is briefly discussed in terms of energy.
Keywords: Correspondence principle, Klein-Gordon oscillator, Dirac oscillator,
Particle in a box, Non-relativistic limit.
## 1 Introduction
The classical limit of quantum mechanics is a fundamental problem. There are
different methodologies that have been proposed to derive classical
observables from quantum observables involving constraints. The limit of
quantum mechanics as the Planck constant approaches zero, $\hbar\to 0$, is one
of the first methods that explore the classical limit of quantum physics [1].
However, the limit $\hbar\to 0$ is misleading, since Planck’s constant is a
nonzero universal constant; this limit should instead be understood as the
regime in which $\hbar$ is negligible with respect to the other physical
parameters, as when the Rayleigh–Jeans law is obtained from Planck’s law. A
second link between quantum and classical mechanics is Ehrenfest’s theorem, which
states that the quantum mechanical expectation values of the position and
momentum operators satisfy the classical equations of motion [2, 3]. However,
this theorem is neither necessary nor sufficient for classicality, since
the classical limit of a quantum system is an ensemble of classical orbits,
whose mean position $\expectationvalue{x}$ does not necessarily follow the
corresponding classical orbit [4]. Consequently, a quantum system may behave
classically even when Ehrenfest’s theorem is not applicable.
Another example is the Wigner distribution function, which yields consistent
results when studying quantum corrections to classical statistical
mechanics [5, 6]. Nevertheless, the Wigner function requires restrictions in
order to be interpreted as a probability distribution [1]. On the other hand,
Bohm’s potential formulation of quantum mechanics states that the Hamilton-
Jacobi equation can be recovered from the Schrödinger equation, resulting in
the WKB method (semiclassical regime), which yields consistent results
subject to constraints [7, 8].
The partition function of a system of quantum particles can be used to derive
the classical partition function of the system and its equations of state [9].
Nonetheless, this correspondence should also be studied from quantum to
classical single-particle configurations. Chaotic and regular motions of
quantum and classical probability distributions in the Hénon–Heiles model
have been studied in phase space; the difference between the centroids of the
respective quantum and classical distributions has been calculated and
compared with the prediction of Ehrenfest’s theorem [10]. Consistency between classical and
quantum models of a localized spin driven by a polarization requires the
correspondence of the classical and quantum auto-correlation functions of the
spin components [11]. Some authors compare classical and quantum probability
densities of single particles [12, 13], but they do not establish a
correspondence between their distributions.
Niels Bohr established a correspondence principle whereby classical behavior
is recovered when the principal quantum number, $n$, is large. Bohr applied
this correspondence to the frequencies and orbits of quantum systems [1, 14],
and it has been applied successfully in atomic physics [15, 16, 17]. However,
Ketchum contradicts the notion that classical variables can only be obtained
from large principal quantum numbers, suggesting that classical frequencies
can be recovered from small quantum numbers [18]. An example of “breakdown” of
Bohr’s correspondence principle has been found in the semiclassical regime
[19], although another study argued that this conclusion does not hold, since
the regime of validity of the WKB approximation is limited [20].
Previous studies have been conducted on the correspondence between classical
mechanics and quantum field theory. Earlier studies applied the limit
$\hbar\to 0$ to quantum field theory with arbitrary interactions and
demonstrated that semiclassical limits are always reached [21]. Staunton and
Browne suggested that the Poisson brackets in quantum states can be used to
derive a classical trajectory [22]. Shin and Rafelski discussed the classical
limit of the relativistic quantum transport theory [23]. Brown and Goble
explored the correspondence between quantum and classical electrodynamics and
concluded that the bremsstrahlung amplitude has a correspondence with the
classical radiation [24]. It has been proven that vacuum classical radiation
occurs in the limit of low frequencies and high photon density [25]. The
classical limit of the quantum electrodynamics was found when Dente used the
path integral formulation in order to eliminate photon coordinates [26].
Refs. [1, 27, 28] propose a simple mathematical method to derive the classical
probability distribution of a system from its quantum mechanical analogue.
This method has been successfully applied to the harmonic oscillator, the
infinite square well, and the Kepler problem. Additionally, the authors show
that Bohr’s correspondence principle is applicable to the quantum mechanical
probability densities, and that its application can be performed for periodic
systems (such as atomic orbits).
Here, we propose that the correspondence principle as formulated by Bernal et
al. [1] can also be applied and extended to relativistic quantum systems.
Our results show that, in the high-energy regime (large principal quantum
numbers), the quantum distribution can be written as a power series in
$\hbar$, whose zeroth order corresponds exactly to the classical
single-particle probability distribution. In this paper, we focus on
particular relativistic quantum systems as described by the Klein-Gordon and
the Dirac equations, namely, the infinite square well and the harmonic
oscillator.
## 2 General procedure
The methodology of our research is based on the one described in Refs. [1, 27,
28], but we include the non-relativistic limit. The formulation of this
article is purely in first quantization and we only consider single
eigenstates of the 1-dimensional Klein-Gordon and Dirac equations.
The relativistic probability density of the $n$-th energy state is given by
$\rho_{n}^{RQM}(x)=\psi_{n}^{\dagger}(x)\psi_{n}(x)$, where $\psi_{n}$ is the
corresponding four-component Dirac spinor and the symbol $\dagger$ stands for
the conjugate transpose. In the Klein-Gordon case,
the probability density is given by
$\rho^{RQM}_{n}(x)=\frac{E_{n}}{m}\phi_{n}^{*}(x)\phi_{n}(x)$, where
$\phi_{n}(x)$ is the wavefunction of the $n$-th eigenstate with energy
$E_{n}$, which follows from
$\rho(x,\>t)=\frac{i\hbar}{2mc^{2}}\left[\phi^{*}(x,\>t)\frac{\partial}{\partial
t}\phi(x,\>t)-\phi(x,\>t)\frac{\partial}{\partial t}\phi^{*}(x,\>t)\right]$
[29] when the time-dependent wavefunction is written as
$\phi(x,\>t)=\phi_{n}(x)e^{-\frac{iE_{n}t}{\hbar}}$. The procedure can be
applied for either the particle or antiparticle solutions, with $E_{n}>0$ or
$E_{n}<0$ respectively. We denote by $f_{n}^{RQM}(p)$ the Fourier transform of
the probability density $\rho_{n}^{RQM}(x)$ from position to momentum space.
The extension of the correspondence principle requires the calculation of the
Fourier coefficient $f_{n}^{RQM}(p)$, such that by inverse Fourier
transforming this result we obtain, in the zeroth order of approximation, the
probability density in coordinate representation. The non-relativistic limit
(where the speed of light is much larger than the typical velocities of the
system) fixes the relativistic quantum energy $E_{n}^{RQM}$ to a classical
value, and also affects the resulting probability distribution, denoted by
$\rho^{QM}_{n}(x)$, so that it becomes a classical probability density
plus a series of quantum correction terms, which appear as a power series in
$\hbar$, as will be seen in the relativistic quantum oscillator cases.
## 3 Examples
### 3.1 Klein-Gordon oscillator
According to Ref. [30], the wave function of the one-dimensional Klein-Gordon
oscillator is exactly the same as that of the one-dimensional Schrödinger
oscillator; however, the energy spectrum is modified by the rest energy of the
system. For either a particle or an antiparticle stationary state, the
probability density is given by
$\rho^{RQM}_{n}(x)=\frac{E_{n}}{mc^{2}}\sqrt{\frac{\alpha}{\pi}}\frac{1}{2^{n}n!}H^{2}_{n}(\sqrt{\alpha}x)e^{-\alpha
x^{2}},$ (1)
where $H_{n}(x)$ are the Hermite polynomials, $n$ is a non-negative integer,
and $\alpha\equiv\frac{m\omega}{\hbar}$, where $m$ stands for the mass of the
particle or antiparticle and $\omega$ for the frequency of the oscillator; the
energies are given by
$E_{n}^{2}=m^{2}c^{4}+2\quantity(n+\frac{1}{2})mc^{2}\hbar\omega$ [30]. The
resulting Fourier transform of Eq. (1) can be found in the literature [31]
$f^{RQM}_{n}(p)=\frac{E_{n}}{mc^{2}}e^{-\frac{p^{2}}{4m\omega\hbar}}L_{n}\quantity(\frac{p^{2}}{2m\omega\hbar}),$
(2)
where $L_{n}(x)$ is a Laguerre polynomial of degree $n$. The asymptotic limit
of the Fourier transform can be evaluated in a similar way to the non-
relativistic case by means of Bessel functions [1]. The inverse Fourier
transform in the asymptotic limit is
$\displaystyle\rho^{RQM}_{n}(x)\sim$
$\displaystyle\frac{E_{n}}{mc^{2}}\frac{1}{\pi}\frac{1}{\sqrt{\kappa_{n}^{2}-x^{2}}}$
(3)
$\displaystyle+\frac{E_{n}}{mc^{2}}\frac{1}{2\pi\kappa_{n}}\sum_{j=1}^{\infty}\quantity(\frac{-\hbar^{2}}{S_{n}^{2}})^{j}i_{j}(x,\kappa_{n}),$
where
$\kappa_{n}\equiv\sqrt{\frac{2\hbar(n+\frac{1}{2})}{m\omega}}=\sqrt{\frac{E_{n}^{2}-m^{2}c^{4}}{m^{2}\omega^{2}c^{2}}},$
(4)
is a parameter found in the relativistic quantum mechanical case;
$S_{n}=4\sqrt{2\pi}m\omega\kappa_{n}^{2}$ and $i_{j}(x,\kappa_{n})$ is the
$j$-th dimensionless integral, defined in Ref. [1]. The factor
$\frac{E_{n}}{mc^{2}}$ is a dilation factor, which arises because the
probability density is inversely proportional to a length.
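The Gaussian–Laguerre form of the Fourier transform above can be checked numerically. The following sketch (our own illustration, not part of the original derivation) works in natural units $\hbar=m=\omega=1$, so $\alpha=1$, and compares a direct numerical transform of the oscillator density with $e^{-p^{2}/4}L_{n}(p^{2}/2)$:

```python
import numpy as np
from scipy.special import eval_hermite, eval_laguerre, factorial

# Natural units hbar = m = omega = 1, so alpha = 1 and the n-th density is
#   rho_n(x) = H_n(x)^2 exp(-x^2) / (2^n n! sqrt(pi)).
# Its Fourier transform should equal exp(-p^2/4) * L_n(p^2/2).
x = np.linspace(-15.0, 15.0, 150001)
dx = x[1] - x[0]
for n in (0, 1, 5):
    norm = 2.0**n * factorial(n) * np.sqrt(np.pi)
    rho = eval_hermite(n, x)**2 * np.exp(-x**2) / norm
    for p in (0.0, 0.7, 2.3):
        f_num = np.sum(rho * np.exp(-1j * p * x)).real * dx
        f_ref = np.exp(-p**2 / 4.0) * eval_laguerre(n, p**2 / 2.0)
        assert abs(f_num - f_ref) < 1e-6
print("transform of rho_n matches exp(-p^2/4) L_n(p^2/2)")
```

The relativistic prefactor $\frac{E_{n}}{mc^{2}}$ factors out of the transform and is omitted here.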
According to Ref. [1], the principal quantum number is fixed by the classical
energy of the system, and this is achieved by equating the expressions for the
classical and quantum energies. This allows us to express the value of the
principal quantum number in terms of the classical amplitude of the
oscillator. In the present case, this procedure yields
$\absolutevalue{E_{n}}\rightarrow mc^{2}+\frac{1}{2}m\omega^{2}x_{0}^{2}$,
where $x_{0}$ is the amplitude of the oscillator.
It is easy to verify that in the non-relativistic case $\kappa_{n}\rightarrow
x_{0}$, and therefore
$\displaystyle\rho^{QM}_{n}(x)\sim$
$\displaystyle\frac{1}{\pi}\frac{1}{\sqrt{x_{0}^{2}-x^{2}}}$ (5)
$\displaystyle+\frac{1}{2\pi
x_{0}}\sum_{j=1}^{\infty}\quantity(\frac{-\hbar^{2}}{S^{2}})^{j}i_{j}(x,x_{0}),$
where $S=4\sqrt{2\pi}m\omega x_{0}^{2}$ is the classical action of the
particle up to a constant factor.
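The statement that $\kappa_{n}\rightarrow x_{0}$ can be verified symbolically. A minimal SymPy sketch (our own check, using the energy fixing $\absolutevalue{E_{n}}\rightarrow mc^{2}+\frac{1}{2}m\omega^{2}x_{0}^{2}$ and Eq. (4)):

```python
import sympy as sp

m, c, w, x0 = sp.symbols('m c omega x_0', positive=True)
# Fix the energy to its classical value: |E_n| -> m c^2 + (1/2) m w^2 x0^2
E = m*c**2 + sp.Rational(1, 2)*m*w**2*x0**2
# kappa_n^2 from Eq. (4), written in terms of the energy
kappa2 = sp.simplify((E**2 - m**2*c**4) / (m**2*w**2*c**2))
# Non-relativistic limit c -> oo recovers the classical amplitude squared
assert sp.limit(kappa2, c, sp.oo) == x0**2
print(kappa2)  # x_0^2 plus a correction of order w^2 x_0^4 / c^2
```

The finite-$c$ correction to $\kappa_{n}^{2}$ is of order $\omega^{2}x_{0}^{2}/c^{2}$ relative to $x_{0}^{2}$, vanishing in the limit.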
As we can see in Eq. (5), the zeroth order result, which is
$\hbar$-independent, corresponds to the classical probability distribution of
a single harmonic oscillator, while the higher order terms can be interpreted
as quantum corrections. This is the same result derived in non-relativistic
quantum mechanics [1].
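The zeroth-order correspondence in Eq. (5) can also be illustrated numerically: coarse-grained over its oscillations, the quantum density for large $n$ should carry the same probability as the classical arcsine distribution over any fixed interval. A sketch in natural units $\hbar=m=\omega=1$ (our own illustration):

```python
import numpy as np
from scipy.special import eval_hermite, factorial

# Natural units hbar = m = omega = 1. For the n-th oscillator eigenstate the
# classical turning point is x0 = sqrt(2n+1). Coarse-grained, the quantum
# density should carry the classical probability
#   P(0 < x < x0/2) = arcsin(1/2)/pi = 1/6
# once n is large, as the zeroth order of Eq. (5) predicts.
n = 100
x0 = np.sqrt(2*n + 1)
x = np.linspace(0.0, x0/2, 100001)
dx = x[1] - x[0]
rho = eval_hermite(n, x)**2 * np.exp(-x**2) / (2.0**n * factorial(n) * np.sqrt(np.pi))
prob = np.sum(rho) * dx
assert abs(prob - 1.0/6.0) < 0.01
print(f"quantum {prob:.4f} vs classical {1/6:.4f}")
```

The residual discrepancy comes from the boundary oscillations, i.e. the quantum correction terms in Eq. (5), and shrinks as $n$ grows.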
Note that we have not specified whether the wavefunction describes particles
or antiparticles; it follows that stationary states of either particles or
antiparticles yield the classical probability distribution in the limit that
has been considered.
### 3.2 Klein-Gordon particle in a box
Consider a Klein-Gordon stationary state for either particle or antiparticle
trapped in an infinite square well of length $0\leq x\leq L$. The probability
density is almost the same as the one that would be found from the Schrödinger
equation [29]
$\rho^{RQM}_{n}(x)=\frac{E_{n}}{mc^{2}}\frac{2}{L}\sin^{2}\quantity(\frac{n\pi
x}{L}).$ (6)
A difference with respect to the non-relativistic case is the energy spectrum,
which is given by
$\displaystyle E_{n}^{2}=m^{2}c^{4}+c^{2}\hbar^{2}\frac{n^{2}\pi^{2}}{L^{2}}.$
(7)
According to Ref. [1], by writing the probability density as a Fourier
expansion, the asymptotic behavior of the corresponding Fourier coefficients
is
$f^{RQM}_{n}(p)\sim\frac{E_{n}}{mc^{2}}\frac{i\hbar}{pL}\quantity(e^{-i\frac{Lp}{\hbar}}-1),$
(8)
such that by inverse Fourier transforming we find
$\rho^{RQM}_{n}(x)\sim\frac{E_{n}}{mc^{2}}\frac{1}{L}\quantity[H(L-x)-H(-x)],$
(9)
where $H(x)$ is the Heaviside step function.
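The step toward the uniform classical density can be checked directly: for large $n$, the probability that the density of Eq. (6) assigns to any subinterval approaches the classical value $(b-a)/L$. A short numeric sketch (our own illustration, omitting the dilation factor $E_{n}/mc^{2}$):

```python
import numpy as np

# Sanity check of Eq. (9): up to the dilation factor E_n/(m c^2), the
# coarse-grained density (2/L) sin^2(n pi x / L) tends to the uniform 1/L.
# The probability in any subinterval [a, b] approaches (b - a)/L for large n.
L, n = 1.0, 200
x = np.linspace(0.0, L, 400001)
dx = x[1] - x[0]
rho = (2.0/L) * np.sin(n*np.pi*x/L)**2
mask = (x >= 0.2) & (x <= 0.5)
prob = np.sum(rho[mask]) * dx
assert abs(prob - 0.3) < 1e-3
```

The $\sin^{2}$ oscillations average to $1/2$ over any interval much wider than $L/n$, which is why no $\hbar$-dependent correction series survives here.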
We observe that the Klein-Gordon solution yields the same classical limit as
that obtained from the nonrelativistic Schrödinger equation, up to a Lorentz
dilation factor given by the ratio $\frac{E_{n}}{mc^{2}}$, which is expected
from the $1/L$ dependence of the probability density. In contrast with the
Klein-Gordon oscillator, no quantum correction terms appear; this happens
because the wavefunctions and the asymptotic condition are the same as in the
non-relativistic particle in a box [1], where there is no need to explicitly
fix the energies to the classical value. The leading term of the probability
density thus becomes independent of relativistic quantum corrections (which is
later contrasted with the Dirac version of this problem).
### 3.3 Dirac oscillator
The Dirac oscillator was originally proposed by Moshinsky and Szczepaniak
[32], and its eigenfunctions in the 1-dimensional case can be found in Ref.
[33]. For either particles or antiparticles, the probability density for
single eigenstates (either spin up or down), has the form
$\displaystyle\rho^{RQM}_{n}(x)=$ $\displaystyle e^{-\alpha
x^{2}}\absolutevalue{a_{n}}^{2}H_{n}^{2}(\sqrt{\alpha}x)$ (10)
$\displaystyle+e^{-\alpha
x^{2}}\absolutevalue{a^{\prime}_{n}}^{2}H_{n-1}^{2}(\sqrt{\alpha}x),$
where $\alpha\equiv\frac{m\omega}{\hbar}$,
$\absolutevalue{a_{n}}^{2}=\frac{\sqrt{\alpha}(E_{n}+mc^{2})}{2^{n+1}n!E_{n}\sqrt{\pi}}$,
$\absolutevalue{a^{\prime}_{n}}^{2}=\frac{\sqrt{\alpha}(E_{n}-mc^{2})}{2^{n}(n-1)!E_{n}\sqrt{\pi}}$
and $E_{n}^{2}=m^{2}c^{4}+2n\hbar\omega mc^{2}$ are the energy values for this
model, denoting $E_{n}>0$ for the particle solution, $E_{n}<0$ for the
antiparticle solution.
The Fourier transforms for the relevant terms of the probability density can
be found analogously to the Klein-Gordon case, which leads to the expression
$\displaystyle f^{RQM}_{n}(p)\sim$ $\displaystyle
e^{-\frac{p^{2}}{4m\omega\hbar}}\absolutevalue{\frac{a_{n}}{A_{n}}}^{2}L_{n}\quantity(\frac{p}{2m\omega\hbar})$
(11)
$\displaystyle+e^{-\frac{p^{2}}{4m\omega\hbar}}\absolutevalue{\frac{a^{\prime}_{n}}{A_{n-1}}}^{2}L_{n-1}\quantity(\frac{p}{2m\omega\hbar}),$
where $\absolutevalue{A_{n}}^{2}=\sqrt{\frac{\alpha}{\pi}}\frac{1}{2^{n}n!}$.
As in the Klein-Gordon case, now we compute the Fourier transform of the
probability density Eq. (11) and next we calculate its asymptotic behavior.
The result is
$\displaystyle\rho^{RQM}_{n}(x)\sim$
$\displaystyle\absolutevalue{\frac{a^{\prime}_{n}}{A_{n-1}}}^{2}\frac{1}{2\pi\kappa_{n-1}}\sum_{j=1}^{\infty}\quantity(\frac{-\hbar^{2}}{S_{n-1}^{2}})^{j}i_{j}(x,\kappa_{n-1})$
(12)
$\displaystyle+\absolutevalue{\frac{a_{n}}{A_{n}}}^{2}\frac{1}{2\pi\kappa_{n}}\sum_{j=1}^{\infty}\quantity(\frac{-\hbar^{2}}{S_{n}^{2}})^{j}i_{j}(x,\kappa_{n})$
$\displaystyle+\frac{1}{\pi}\absolutevalue{\frac{a^{\prime}_{n}}{A_{n-1}}}^{2}\frac{1}{\sqrt{\kappa_{n-1}^{2}-x^{2}}}$
$\displaystyle+\frac{1}{\pi}\absolutevalue{\frac{a_{n}}{A_{n}}}^{2}\frac{1}{\sqrt{\kappa_{n}^{2}-x^{2}}},$
where
$\kappa_{n}=\sqrt{\frac{2\hbar(n+\frac{1}{2})}{m\omega}},\quad
S_{n}=4\sqrt{2\pi}m\omega\kappa_{n}^{2}.$
The non-relativistic limit with quantum corrections is found, as in the
Klein-Gordon case, by fixing
$E_{N}\rightarrow\pm\quantity(mc^{2}+\frac{1}{2}m\omega^{2}x_{0}^{2})$ and
choosing $N=n\pm\frac{1}{2}$, where the plus and minus signs stand for
particles and antiparticles, respectively. In the particle case we find that
$\absolutevalue{a_{n}}^{2}\rightarrow\absolutevalue{A_{n}}^{2}$,
$a^{\prime}_{n}\rightarrow 0$ and $\kappa_{n}\rightarrow x_{0}$. In the
antiparticle case $a_{n}\rightarrow 0$,
$\absolutevalue{a^{\prime}_{n}}^{2}\rightarrow\absolutevalue{A_{n-1}}^{2}$ and
$\kappa_{n-1}\rightarrow x_{0}$. Thus for either case Eq. (12) becomes
$\rho^{QM}_{n}(x)\sim\frac{1}{\pi}\frac{1}{\sqrt{x_{0}^{2}-x^{2}}}+\frac{1}{2\pi
x_{0}}\sum_{j=1}^{\infty}\quantity(\frac{-\hbar^{2}}{S^{2}})^{j}i_{j}(x,x_{0}),$
(13)
where $S=4\sqrt{2\pi}m\omega x_{0}^{2}$. It should be emphasized that the
system can be in a spin-up or spin-down state, or a linear combination of
both, but in either case the probability density adopts the form shown in Eq.
(10), whose limit corresponds to the classical expression dictated by the
correspondence principle. We also observe that in the fermionic case two
$\kappa_{n}$ parameters appear, and both reduce in the classical limit to the
same amplitude $x_{0}$, although only one of them may contribute to the
probability density, depending on whether the state considered describes
particles, antiparticles, or both in the case of a superposition of states.
Another remark is that the energy was fixed to $E_{N}$, with
$N=n\pm\frac{1}{2}$ for particles or antiparticles, because of a particular
feature of the Moshinsky model for the Dirac oscillator. The Moshinsky model
does not reproduce the non-relativistic energy values of the quantum harmonic
oscillator obtained from the Schrödinger equation, because of how the
harmonic term is added in the Hamiltonian [32]; thus the non-relativistic
limit of the energy spectrum yields a factor $n\hbar\omega$ instead of
$(n+\frac{1}{2})\hbar\omega$ [33].
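The $n\hbar\omega$ behavior of the Moshinsky spectrum in the non-relativistic limit can be confirmed symbolically. A SymPy sketch (our own check of the spectrum $E_{n}^{2}=m^{2}c^{4}+2n\hbar\omega mc^{2}$ quoted above):

```python
import sympy as sp

n, hbar, w, m, c = sp.symbols('n hbar omega m c', positive=True)
# Moshinsky-model spectrum: E_n^2 = m^2 c^4 + 2 n hbar omega m c^2
E = sp.sqrt(m**2*c**4 + 2*n*hbar*w*m*c**2)
# Kinetic part in the non-relativistic limit: n*hbar*omega,
# not the Schroedinger-oscillator value (n + 1/2)*hbar*omega
kin = sp.limit(E - m*c**2, c, sp.oo)
assert sp.simplify(kin - n*hbar*w) == 0
```

The missing zero-point term is exactly what motivates the shifted choice $N=n\pm\frac{1}{2}$ when matching to the classical energy.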
### 3.4 Dirac particle in a box
Let’s consider the Dirac particle wavefunction, $\psi^{(+)}_{k}(x)$, for a
1-dimensional infinite square Lorentz scalar potential, which does not require
considering the Klein paradox, found explicitly in Ref. [34]. The
corresponding antiparticle solution, $\psi^{(-)}_{k}(x)$, can be calculated as
$\psi^{(-)}_{k}(x)=\gamma^{5}\psi^{(+)}_{-k}(-x)$, where we follow the gamma
matrix representation of Ref. [29] where
$\gamma^{5}=i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}$. Either particle or
antiparticle solutions, with spin up or down, produce a probability density of
the form
$\displaystyle\rho^{RQM}_{k}(x)=$
$\displaystyle\absolutevalue{B_{k}}^{2}\cos^{2}\quantity(kx-\frac{\delta_{k}}{2})$
(14)
$\displaystyle+\absolutevalue{B_{k}}^{2}\Phi_{k}^{2}\sin^{2}\quantity(kx-\frac{\delta_{k}}{2}),$
for the interval $0\leq x\leq L$, and the probability density vanishes outside
this interval. We denote
$\absolutevalue{B_{k}}^{2}\equiv\quantity[\frac{\Phi_{k}^{2}-1}{4k}(2kL-\sin(kL+\delta_{k})-\sin\delta_{k})+L]^{-1}$,
$k\equiv\frac{1}{\hbar}\sqrt{\frac{E_{k}^{2}}{c^{2}}-m^{2}c^{2}}$,
$\Phi_{k}\equiv\frac{\hbar kc}{E_{k}+mc^{2}}$ and
$\delta_{k}\equiv\arctan\quantity(\frac{2\Phi_{k}}{\Phi_{k}^{2}-1})$. The
energy of the particle $E_{k}$ can be found from the condition
$\tan(kL)=-\frac{\hbar k}{mc}$ [34]. As in Eq. (8) for the Klein-Gordon
infinite well, in the asymptotic limit the Fourier transform of the
probability density reduces to
$f^{RQM}_{k}(p)\sim(1+\Phi_{k}^{2})\absolutevalue{B_{k}}^{2}\frac{i\hbar}{2p}\quantity(e^{-i\frac{Lp}{\hbar}}-1).$
(15)
The inverse Fourier transform results into
$\rho^{RQM}_{k}(x)\sim\frac{1+\Phi_{k}^{2}}{2}\absolutevalue{B_{k}}^{2}\quantity[H(L-x)-H(-x)].$
(16)
It should be noticed that the terms $\Phi_{k}^{2}$ and
$\absolutevalue{B_{k}}^{2}$ of the probability density appear as fermionic
quantum parameters, given that Eq. (9) for the Klein-Gordon system does not
contain similar factors.
The non-relativistic limit can be stated as the condition such that $\hbar
k\ll mc$, which implies that $\Phi_{k}\rightarrow 0$ and
$\absolutevalue{B_{k}}^{2}\rightarrow 2/L$. The probability density becomes
$\rho^{QM}_{k}(x)\simeq\frac{1}{L}\quantity[H(L-x)-H(-x)].$ (17)
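The approach to the Schrödinger box values can be seen by solving the quantization condition numerically. A sketch (our own illustration, assuming the condition takes the dimensionless form $\tan(k)=-k/M$ with $M$ playing the role of $mcL/\hbar$ in units $\hbar=L=1$):

```python
import numpy as np
from scipy.optimize import brentq

# Units hbar = L = 1; M plays the role of mc in these units, and the
# quantization condition is read as tan(k) = -k/M. For M -> oo the roots
# approach the Schroedinger-box values k = n*pi, consistent with Eq. (17).
def box_root(n, M):
    f = lambda k: np.tan(k) + k/M
    # each root lies between the pole at (n - 1/2)*pi and n*pi
    return brentq(f, (n - 0.5)*np.pi + 1e-9, n*np.pi)

k_rel = box_root(3, 5.0)   # mildly relativistic: root shifted below 3*pi
k_nr = box_root(3, 1e6)    # deep non-relativistic regime
assert k_rel < 3*np.pi
assert abs(k_nr - 3*np.pi) < 1e-4
```

The relativistic shift of $k$ below $n\pi/L$ is of order $1/M$, consistent with $\Phi_{k}\rightarrow 0$ and $\absolutevalue{B_{k}}^{2}\rightarrow 2/L$ in the limit $\hbar k\ll mc$.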
The resulting probability distribution is invariant under a speed boost of the
particle, as was already discussed for the Klein-Gordon particle in a box. It
should be observed that the non-relativistic asymptotic probability densities
of the Klein-Gordon and Dirac particles do not yield a series of quantum
corrections, because the asymptotic term has no dependence on $\hbar$, as is
also found in the non-relativistic case of Ref. [1]. As a remark, the analytic
solutions of the Dirac equation for a box potential that is the time component
of a vector, which involve the Klein paradox, are given in Ref. [35] and can
also be treated with the previous procedure.
## 4 General remarks
The proposed extension of Bohr’s correspondence principle allowed us to
derive particular classical distributions from relativistic quantum mechanical
ones. The procedure applied to the Klein-Gordon equation (a relativistic
theory) was very similar to that of Ref. [1], where the classical probability
distribution is recovered from the Schrödinger equation with infinite-well
and harmonic-oscillator potentials, with the difference lying in the energy
spectrum, which in turn modifies the probability distribution when it depends
on the particle’s velocity. Our results show that the mathematical procedure
proposed in this letter is applicable to the Dirac equation, and that the
large component of the Dirac spinor is the only contribution that leads to the
non-relativistic probability density after implementing the respective limit.
Whether we considered particle or antiparticle solutions, the same classical
single-particle picture plus a series of quantum corrections is found after
applying the non-relativistic limit, with results that coincide with those of
Ref. [1]. However, if this limit is not taken, the relativistic asymptotic
probability density shows a manifest difference between the particle and
antiparticle probability distributions, and the dependence on the constant $c$
leads to relativistic corrections, which should be verified experimentally in
future research.
We thank Mauricio Paulin and Alberto Ruiz for their useful comments and
suggestions. S.E. Aguilar acknowledges the financial support by FAPESP under
funding Grant No. 2018/10788-8.
## References
* [1] J. Bernal, A. Martín-Ruíz, and J. C. García-Melgarejo, “A simple mathematical formulation of the correspondence principle,” Journal of Modern Physics, vol. 4, no. 1, pp. 108–112, 2013.
* [2] A. J. Makowski and K. J. Górska, “Bohr’s correspondence principle: The cases for which it is exact,” Phys. Rev. A, vol. 66, p. 062103, Dec 2002.
* [3] L. E. Ballentine, “Quantum-to-classical limit in a hamiltonian system,” Phys. Rev. A, vol. 70, p. 032111, Sep 2004.
* [4] L. E. Ballentine, Y. Yang, and J. P. Zibin, “Inadequacy of ehrenfest’s theorem to characterize the classical regime,” Phys. Rev. A, vol. 50, pp. 2854–2859, Oct 1994.
* [5] X. Y. Huang, “Correspondence between quantum and classical descriptions for free particles,” Phys. Rev. A, vol. 78, p. 022109, Aug 2008.
* [6] A. J. Bracken and J. G. Wood, “Semiquantum versus semiclassical mechanics for simple nonlinear systems,” Phys. Rev. A, vol. 73, p. 012104, Jan 2006.
* [7] A. J. Makowski, “Exact classical limit of quantum mechanics: Central potentials and specific states,” Phys. Rev. A, vol. 65, p. 032103, Jan 2002.
* [8] A. J. Makowski, “Exact classical limit of quantum mechanics: Noncentral potentials and ermakov-type invariants,” Phys. Rev. A, vol. 68, p. 022102, Aug 2003.
* [9] R. W. Zwanzig, “Transition from quantum to “classical” partition function,” Phys. Rev., vol. 106, pp. 13–15, Apr 1957.
* [10] L. E. Ballentine and S. M. McRae, “Moment equations for probability distributions in classical and quantum mechanics,” Phys. Rev. A, vol. 58, pp. 1799–1809, Sep 1998.
* [11] L. E. Ballentine, “Quantum-to-classical limit of a dynamically driven spin,” Phys. Rev. A, vol. 47, pp. 2592–2600, Apr 1993.
* [12] M. A. Doncheski and R. W. Robinett, “Comparing classical and quantum probability distributions for an asymmetric infinite well,” European Journal of Physics, vol. 21, no. 3, pp. 227–251, 2000.
* [13] M. N. Berberan-Santos, E. N. Bodunov, and L. Pogliani, “Classical and quantum study of the motion of a particle in a gravitational field,” Journal of Mathematical Chemistry, vol. 37, no. 2, pp. 101–115, 2005.
* [14] R. L. Liboff, “The correspondence principle revisited,” Physics Today, vol. 37, no. 2, pp. 50–55, 1984.
* [15] J. H. van Vleck, “The absorption of radiation by multiply periodic orbits, and its relation to the correspondence principle and the rayleigh-jeans law. part ii. calculation of absorption by multiply periodic orbits,” Phys. Rev., vol. 24, pp. 347–365, Oct 1924.
* [16] F. C. Hoyt, “Application of the correspondence principle to relative intensities in series spectra,” Phys. Rev., vol. 26, pp. 749–760, Dec 1925.
* [17] J. A. West, Z. D. Gaeta, and C. R. Stroud, “Classical limit states of the helium atom,” Phys. Rev. A, vol. 58, pp. 186–195, Jul 1998.
* [18] P. W. Ketchum, “An extension of bohr’s correspondence principle to apply to small quantum numbers,” Phys. Rev., vol. 24, pp. 463–465, Nov 1924.
* [19] B. Gao, “Breakdown of bohr’s correspondence principle,” Phys. Rev. Lett., vol. 83, pp. 4225–4228, Nov 1999.
* [20] C. Eltschka, H. Friedrich, and M. J. Moritz, “Comment on “breakdown of bohr’s correspondence principle”,” Phys. Rev. Lett., vol. 86, pp. 2693–2693, Mar 2001.
* [21] L. V. Prokhorov, “Limit $\hbar\rightarrow 0$ in quantum field theory,” Phys. Rev., vol. 183, pp. 1515–1517, Jul 1969.
* [22] L. P. Staunton and S. Browne, “Classical limit of relativistic positive-energy theories with intrinsic spin,” Phys. Rev. D, vol. 12, pp. 1026–1037, Aug 1975.
* [23] G. R. Shin and J. Rafelski, “Relativistic classical limit of quantum theory,” Phys. Rev. A, vol. 48, pp. 1869–1874, Sep 1993.
* [24] L. S. Brown and R. L. Goble, “Soft photons and the classical limit,” Phys. Rev., vol. 173, pp. 1505–1516, Sep 1968.
* [25] P. Stehle and P. G. DeBaryshe, “Quantum electrodynamics and the correspondence principle,” Phys. Rev., vol. 152, pp. 1135–1139, Dec 1966.
* [26] G. C. Dente, “Classical limit of quantum electrodynamics,” Phys. Rev. D, vol. 12, pp. 1733–1738, Sep 1975.
* [27] A. Martín-Ruíz, J. Bernal, A. Frank, and A. Carbajal-Domínguez, “The classical limit of the quantum kepler problem,” Journal of Modern Physics, vol. 4, no. 6, pp. 818–822, 2013.
* [28] A. Martín-Ruíz, J. Bernal, and A. Carbajal-Domínguez, “Macroscopic quantum behavior of periodic quantum systems,” Journal of Modern Physics, vol. 5, no. 1, pp. 44–50, 2014.
* [29] J. J. Sakurai and J. Napolitano, Modern quantum mechanics. Pearson Harlow, 2014.
* [30] N. A. Rao and B. A. Kagali, “Energy profile of the one-dimensional klein–gordon oscillator,” Physica Scripta, vol. 77, no. 1, p. 015003, 2008.
* [31] I. S. Gradshteyn and I. M. Ryzhik, Table of Integrals, Series and Products. Academic Press, 8 ed., 2015.
* [32] M. Moshinsky and A. Szczepaniak, “The dirac oscillator,” Journal of Physics A, vol. 22, no. 17, pp. 817–819, 1989.
* [33] R. Szmytkowski and M. Gruchowski, “Completeness of the dirac oscillator eigenfunctions,” Journal of Physics A: Mathematical and General, vol. 34, no. 23, pp. 4991–4997, 2001.
* [34] P. Alberto, C. Fiolhais, and V. M. S. Gil, “Relativistic particle in a box,” European Journal of Physics, vol. 17, no. 1, p. 19, 1996.
* [35] A. D. Alhaidari, “Dirac particle in a square well and in a box,” AIP Conf. Proc., vol. 1370, no. 1, pp. 21–25, 2011.
|
# On the Structure of the Sagittarius Spiral Arm in the Inner Milky Way
S. B. Bian Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing
210033, China<EMAIL_ADDRESS>School of Astronomy and Space Science,
University of Science and Technology of China, Hefei 230026, China Y. W. Wu
National Time Service Center, Key Laboratory of Precise Positioning and Timing
Technology, Chinese Academy of Sciences, Xi’an 710600, China Y. Xu Purple
Mountain Observatory, Chinese Academy of Sciences, Nanjing 210033, China;
<EMAIL_ADDRESS>School of Astronomy and Space Science, University of Science
and Technology of China, Hefei 230026, China M. J. Reid Center for
Astrophysics $|$ Harvard $\&$ Smithsonian, 60 Garden Street, Cambridge, MA
02138, USA J. J. Li Purple Mountain Observatory, Chinese Academy of
Sciences, Nanjing 210033, China<EMAIL_ADDRESS>B. Zhang Shanghai
Astronomical Observatory, Chinese Academy of Sciences, Shanghai 200030, China
K. M. Menten Max-Planck-Institut für Radioastronomie, Auf dem Hügel
69, 53121 Bonn, Germany L. Moscadelli INAF-Osservatorio Astrofisico di
Arcetri, Largo E. Fermi 5, 50125 Firenze, Italy A. Brunthaler Max-Planck-
Institut für Radioastronomie, Auf dem Hügel 69, 53121 Bonn, Germany
###### Abstract
We report measurements of trigonometric parallax and proper motion for two 6.7
GHz methanol and two 22 GHz water masers located in the far portion of the
Sagittarius spiral arm as part of the BeSSeL Survey. Distances for these
sources are estimated from parallax measurements combined with 3-dimensional
kinematic distances. The distances of G033.64$-$00.22, G035.57$-$00.03,
G041.15$-$00.20, and G043.89$-$00.78 are $9.9\pm 0.5$, $10.2\pm 0.6$, $7.6\pm
0.5$, and $7.5\pm 0.3$ kpc, respectively. Based on these measurements, we
suggest that the Sagittarius arm segment beyond about 8 kpc from the Sun in
the first Galactic quadrant should be adjusted radially outward relative to
previous models. This supports the suggestion of Xu et al. (2023) that the
Sagittarius and Perseus spiral arms might merge in the first quadrant before
spiraling inward to the far end of the Galactic bar.
Interstellar masers (846), Trigonometric parallax (1713), Star forming regions
(1565), Milky Way Galaxy (1054)
## 1 Introduction
Over the last 16 years, the parallaxes and proper motions of over 200 masers
associated with high-mass star-forming regions (HMSFRs) have been measured
(Reid et al. 2019, hereafter R19; VERA Collaboration et al. 2020), which trace
the spiral arms and three-dimensional (3D) motions of their young stars. While
the nearby spiral arms of the Milky Way have been mapped in detail (e.g., Xu
et al., 2016), there are few parallax measurements for sources with distances
beyond $\approx$10 kpc. In order to extend our mapping of the Milky Way, in
this work we report parallax measurements of four distant maser sources,
G033.64$-$00.22, G035.57$-$00.03, G041.15$-$00.20, and G043.89$-$00.78,
located past the tangent point of the Sagittarius spiral arm in the first
Galactic quadrant. In addition, in order to better constrain the spiral arm
structure in distant regions, we also calculate the 3D kinematic distances for
several sources whose distances are greater than about 8 kpc. Reid (2022) has
shown 3D kinematic distances to be generally superior to parallax distances
for sources well past the Galactic center.
## 2 Observations and Data Analysis
Table 1: Positions and Brightnesses
Source | R.A. (J2000) | Dec. (J2000) | $\Delta\Theta_{\rm E}$ | $\Delta\Theta_{\rm N}$ | Brightness | $V_{\rm LSR}$ | Beam
---|---|---|---|---|---|---|---
| $\mathrm{(^{h}\;\;\;{}^{m}\;\;\;{}^{s})}$ | $(\arcdeg\;\;\;\arcmin\;\;\;\arcsec)$ | (∘) | (∘) | (Jy/beam) | (km s-1) | (mas $\times$ mas @ $\arcdeg$)
G033.64$-$00.22(M) | 18 53 32.56656 | $+$00 31 39.1152 | … | … | 16.0 | 60.36 | 6.7 $\times$ 5.1 @ 165
J1848+0138 | 18 48 21.81041 | $+$01 38 26.6296 | $-$1.3 | 1.1 | 0.030 | | 5.1 $\times$ 3.6 @ 173
J1857$-$0048 | 18 57 51.35887 | $-$00 48 21.9369 | 1.1 | $-$1.3 | 0.053 | | 5.9 $\times$ 3.0 @ 3
G035.57$-$00.03(W) | 18 56 22.52577 | $+$02 20 27.5007 | … | … | 0.9 | 48.62 | 1.6 $\times$ 0.5 @ 164
J1855+0251 | 18 55 35.43649 | $+$02 51 19.5650 | $-$0.2 | 0.5 | 0.115 | | 1.4 $\times$ 0.5 @ 164
G041.15$-$00.20(M) | 19 07 14.36746 | $+$07 13 18.0190 | … | … | 3.2 | 56.00 | 3.7 $\times$ 1.5 @ 4
J1905+0652 | 19 05 21.21029 | $+$06 52 10.7830 | $-$0.5 | $-$0.4 | 0.058 | | 4.2 $\times$ 2.4 @ 6
J1907+0907 | 19 07 41.96333 | $+$09 07 12.3956 | 0.1 | 1.9 | 0.072 | | 3.9 $\times$ 2.0 @ 179
G043.89$-$00.78(W) | 19 14 26.39610 | $+$09 22 36.5926 | … | … | 8.6 | 59.18 | 1.4 $\times$ 0.4 @ 163
J1905+0952 | 19 05 39.89897 | $+$09 52 08.4075 | $-$2.2 | 0.5 | 0.072 | | 1.3 $\times$ 0.6 @ 163
J1913+0932 | 19 13 24.02535 | $+$09 32 45.3775 | $-$0.3 | 0.2 | 0.047 | | 1.5 $\times$ 0.7 @ 159
J1922+0841 | 19 22 18.63365 | $+$08 41 57.3753 | 1.9 | $-$0.7 | 0.013 | | 1.4 $\times$ 0.6 @ 164
Note. — Source names followed by M and W in parentheses indicate methanol and
water masers. $\Delta\Theta_{\rm E}$ and $\Delta\Theta_{\rm N}$ are the
angular offsets of the QSOs relative to the maser toward the East and North.
The absolute position, peak brightness, and (naturally weighted) beam size and
position angle (PA; east of north) are listed for the first epoch. Note that
the absolute position accuracies are limited to about $\pm 1$ mas from the
assumed values for the QSOs. The local standard of rest (LSR) velocities
($V_{\rm LSR}$) of the reference maser spots are listed for G033.64$-$00.22,
G041.15$-$00.20, and G043.89$-$00.78, whereas for G035.57$-$00.03, J1855+0251
was used as the phase reference.
We conducted parallax observations of two 6.7 GHz methanol (CH3OH) and two 22
GHz water (H2O) masers over 16 epochs spanning 1.2 yr with the National Radio
Astronomy Observatory’s (NRAO’s)111NRAO is a facility of the National Science
Foundation operated under cooperative agreement by Associated Universities,
Inc. Very Long Baseline Array (VLBA) under program BR210. Prior to our
program, G035.57$-$00.03 had not been observed for a parallax measurement.
Parallaxes for the other three sources had been previously published by Wu et
al. (2014, 2019). These were specifically chosen for re-observation, because
previous parallax measurements (with only 4 or 5 epochs per source) were not
accurate enough to provide strong constraints on spiral structure for distant
portions of the Sagittarius arm.
Details of the observations are listed in Table 5. We scheduled the
observations to occur near the extrema of the sinusoidal parallax signatures
in R.A., since the parallax amplitudes in decl. were considerably smaller. At
each epoch, three 1.7-hr blocks of phase-referenced observations were inserted
between four 0.5-hr geodetic blocks, used for clock and atmospheric delay
calibrations (see Reid et al., 2009, for details). Four adjacent dual-
polarized intermediate-frequency (IF) bands of 16 MHz bandwidth were used for
the phase-referenced observations. The maser emissions were centered in the
third IF.
The data were correlated in Socorro, New Mexico, with the DiFX222DiFX, a
software correlator for very long baseline interferometry (VLBI), is developed
as part of the Australian Major National Research Facilities Programme by the
Swinburne University of Technology and operated under licence. software
correlator (Deller et al., 2007). The IF bands containing the masers were
correlated with 4000 and 2000 spectral channels for the 6.7 GHz CH3OH and the
22 GHz H2O masers, yielding velocity channels of 0.18 and 0.11 km s-1,
respectively.
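The quoted velocity channel widths follow directly from the bandwidth, channel count, and rest frequency. The short sketch below (our own check, not part of the correlator software; the rest frequencies are the standard 6.7 GHz CH3OH and 22 GHz H2O maser line values) reproduces the numbers in the text:

```python
# Velocity width of one spectral channel: dv = c * (BW / N_channels) / nu_rest
C_KMS = 2.99792458e5   # speed of light, km/s
BW_HZ = 16e6           # IF bandwidth, Hz

def channel_width_kms(n_channels, rest_freq_hz):
    """Velocity width (km/s) of one spectral channel in a 16 MHz IF band."""
    return C_KMS * (BW_HZ / n_channels) / rest_freq_hz

dv_ch3oh = channel_width_kms(4000, 6.668518e9)   # 6.7 GHz methanol maser line
dv_h2o = channel_width_kms(2000, 22.235080e9)    # 22 GHz water maser line
print(round(dv_ch3oh, 2), round(dv_h2o, 2))      # 0.18 0.11
```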
We reduced the correlated data with the Astronomical Image Processing System
(AIPS; Greisen, 2003) and ParselTongue (Kettenis et al., 2006) scripts in five
steps, as described in Reid et al. (2009). In step 1, we corrected delays and
phases for feed rotations, updated Earth’s orientation parameters, ionospheric
delays using total electron content maps, residual clock errors, tropospheric
delays determined from the geodetic calibration blocks, and updated source
positions (when necessary). In step 2, we converted correlator amplitudes to
Jansky units by applying telescope gains and system temperatures. In step 3,
for the maser data we shifted frequencies to hold LSR velocities at a desired
value for all sources and observations. In step 4, one scan on a strong
calibrator was chosen to remove delay and phase differences among all bands.
In step 5, for all but one source, a channel with strong and compact maser
emission was used as the interferometer phase reference and applied to all
maser channels and background quasi-stellar objects (QSOs). For
G035.57$-$00.03, the bright background QSO (J1855+0251) was used as the phase
reference, because the peak brightness of the H2O maser (0.9 Jy beam-1)
provided insufficient signal-to-noise ratios on individual VLBA baselines in 8
kHz spectral channels and 20 s integrations.
The AIPS task IMAGR was used to image the spectral-line emission of the masers
and continuum emission of the QSOs. The positions of the maser spots and
background QSOs were determined by fitting elliptical Gaussian brightness
distributions to the images using the AIPS task JMFIT. Table 1 lists the
positions and brightnesses from the first epoch.
Table 2: Astrometric Results
Source | $\pi$ | $\mu_{x}$ | $\mu_{y}$ | $V_{\rm LSR}$ | Dπ | Dk | Dave
---|---|---|---|---|---|---|---
| (mas) | (mas y-1) | (mas y-1) | (km s-1) | (kpc) | (kpc) | (kpc)
G019.60$-$00.23 | $0.076\pm 0.011$ | $-3.11\pm 0.16$ | $-6.36\pm 0.17$ | $41\pm 3$ | $13.2^{+2.2}_{-1.7}$ | $12.5\pm 0.8$ | $12.6\pm 0.7$
G020.08$-$00.13 | $0.066\pm 0.010$ | $-3.14\pm 0.14$ | $-6.44\pm 0.16$ | $41\pm 3$ | $15.2^{+2.7}_{-2.0}$ | $12.4\pm 0.6$ | $12.7\pm 0.6$
G032.74$-$00.07‡ | $0.126\pm 0.016$ | $-3.15\pm 0.27$ | $-6.10\pm 0.29$ | $37\pm 10$ | $7.9_{-0.9}^{+1.2}$ | $11.4\pm 1.0$ | $10.6\pm 0.8$
G033.64$-$00.22∗†‡ | $0.103\pm 0.011$ | $-3.01\pm 0.07$ | $-6.30\pm 0.08$ | ${}^{a}61\pm 3$ | $9.7^{+1.2}_{-1.0}$ | $10.0\pm 0.6$ | $9.9\pm 0.5$
G035.57$-$00.03∗ | $0.098\pm 0.008$ | $-3.02\pm 0.13$ | $-6.08\pm 0.18$ | ${}^{a}53\pm 3$ | $10.2^{+0.9}_{-0.8}$ | $10.2\pm 0.7$ | $10.2\pm 0.6$
G035.79$-$00.17 | $0.113\pm 0.013$ | $-2.96\pm 0.12$ | $-6.23\pm 0.14$ | $61\pm 5$ | $8.8_{-0.9}^{+1.2}$ | $9.6\pm 0.8$ | $9.4\pm 0.6$
G037.47$-$00.10 | $0.088\pm 0.030$ | $-2.63\pm 0.07$ | $-6.19\pm 0.15$ | $58\pm 3$ | $11.4_{-2.9}^{+5.9}$ | $9.4\pm 1.0$ | $9.6\pm 0.9$
G038.03$-$00.30 | $0.095\pm 0.022$ | $-3.01\pm 0.06$ | $-6.20\pm 0.11$ | $60\pm 3$ | $10.5_{-2.0}^{+3.2}$ | $9.2\pm 0.6$ | $9.3\pm 0.6$
G041.15$-$00.20∗† | $0.137\pm 0.011$ | $-2.74\pm 0.15$ | $-6.03\pm 0.16$ | ${}^{b}60\pm 3$ | $7.3^{+0.6}_{-0.5}$ | $8.4\pm 1.2$ | $7.6\pm 0.5$
G041.22$-$00.19 | $0.113\pm 0.022$ | $-2.82\pm 0.13$ | $-5.89\pm 0.16$ | $59\pm 5$ | $8.8^{+2.1}_{-1.4}$ | $8.5\pm 1.4$ | $8.7\pm 1.1$
G043.03$-$00.45 | $0.130\pm 0.019$ | $-3.03\pm 0.15$ | $-6.56\pm 0.20$ | $56\pm 5$ | $7.7^{+1.3}_{-1.0}$ | $8.3\pm 0.7$ | $8.2\pm 0.6$
G043.89$-$00.78∗† | $0.136\pm 0.005$ | $-3.01\pm 0.16$ | $-6.03\pm 0.18$ | ${}^{c}54\pm 3$ | $7.3^{+0.3}_{-0.3}$ | $8.3\pm 0.8$ | $7.5\pm 0.3$
G045.07+00.13 | $0.129\pm 0.007$ | $-3.21\pm 0.26$ | $-6.11\pm 0.26$ | $59\pm 5$ | $7.8^{+0.4}_{-0.4}$ | $7.7\pm 1.0$ | $7.7\pm 0.4$
G045.45+00.06 | $0.119\pm 0.017$ | $-2.34\pm 0.38$ | $-6.00\pm 0.54$ | $55\pm 7$ | $8.4^{+1.4}_{-1.1}$ | $7.6\pm 1.4$ | $8.1\pm 0.9$
G045.49+00.12 | $0.144\pm 0.024$ | $-2.62\pm 0.17$ | $-5.61\pm 0.16$ | $58\pm 3$ | $6.9^{+1.4}_{-1.0}$ | $7.0\pm 1.6$ | $6.9\pm 0.9$
G045.80$-$00.35 | $0.137\pm 0.023$ | $-2.52\pm 0.17$ | $-6.08\pm 0.27$ | $64\pm 5$ | $7.3^{+1.5}_{-1.0}$ | $6.3\pm 1.5$ | $7.0\pm 1.0$
Note. — Astrometric results are listed for the four sources discussed in this
paper and 12 other sources in the Sagittarius arm at distances greater than
about 8 kpc. Column 1 lists Galactic source names. Columns 2, 3, and 4 give
parallaxes and proper motions in the eastward ($\mu_{x}$
=$\mu_{\alpha}\cos\delta$) and northward directions ($\mu_{y}$ =
$\mu_{\delta}$ ). Column 5 lists LSR velocities. Columns 6, 7, and 8 list the
parallax distances, 3D kinematic distances, and their variance-weighted
averages. We adopted these weighted averages in column 8 as the “final”
distance. Sources with a superscript (*) are those reported in this paper.
Their $V_{\rm LSR}$ are determined from NH${}_{3}(J,K)=(1,1)$ emission and the
references are (a) Wienen et al. (2012); (b) Pandian et al. (2012); and (c)
Olmi et al. (1993). For the other sources, parallaxes, proper motions, and
line-of-sight velocities are taken from Table 1 of R19 (along with their primary
references), except for G019.60$-$00.23 and G020.08$-$00.13, which are taken
from Xu et al. (2021). Sources with a superscript ($\dagger$) are those having
previously published parallaxes, and column 2 lists their weighted averages
with the parallax measurements in this paper, as described in Section 3.2.
Sources with a superscript ($\ddagger$) are those flagged as outliers in R19
but included in this paper when determining the characteristics of the
Sagittarius arm, as described in Section 4.2.
## 3 Parallax and Distance Estimates
### 3.1 Parallax and Proper Motion Fitting
We selected compact maser spots for parallax and proper motion fitting. For
the H2O masers G035.57$-$00.03 and G043.89$-$00.78, we first did the parallax
and proper motion fitting for each maser spot relative to each background
source in order to identify and remove outliers caused by the blending of
maser spots. We added “error floors” in the eastern and northern directions in
quadrature with the formal positional uncertainties from JMFIT and adjusted
them to achieve a reduced $\chi^{2}$ per degree of freedom near unity in each
coordinate. The error floors were used to capture systematic errors in
position measurements, usually owing to uncompensated atmospheric delays.
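The error-floor adjustment described above can be sketched as a simple one-dimensional scan: grow the floor, added in quadrature to the formal uncertainties, until the reduced $\chi^2$ falls to unity. This is an illustrative simplification of the procedure (the function name and scan grid are ours, not the BeSSeL pipeline):

```python
import numpy as np

def tune_error_floor(residuals, formal_sigmas, dof,
                     floors=np.linspace(0.0, 2.0, 2001)):
    """Return the smallest error floor (mas) that, added in quadrature to
    the formal position uncertainties, brings reduced chi^2 down to ~1."""
    for f in floors:
        sigma = np.hypot(formal_sigmas, f)          # quadrature sum
        chi2_red = np.sum((residuals / sigma) ** 2) / dof
        if chi2_red <= 1.0:
            return f
    return floors[-1]

# Toy check: constant 0.1 mas residuals with negligible formal errors
# should require a ~0.1 mas floor.
floor = tune_error_floor(np.full(10, 0.1), np.full(10, 1e-6), dof=10)
```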
At the lower frequency of the 6.7 GHz methanol maser sources G033.64$-$00.22
and G041.15$-$00.20, ionospheric “wedges” can cause systematic positional
shifts across the sky, significantly increasing astrometric error. In order to
mitigate this issue, we used an image-position-based method to generate
positional data relative to an “artificial quasar” at the target maser
position during each epoch. Detailed descriptions of this method can be found
in Reid et al. (2017) and Zhang et al. (2019). We adopted this approach (Zhang
et al., 2019) in the parallax fits.333A phase-calibration based procedure,
MultiView (Rioja et al., 2017) and inverse MultiView (Hyland et al., 2022),
has recently been shown superior to the image-based method, but could not be
used with the observing strategy adopted in program BR210.
Because all maser spots in a source should have the same parallax, we used all
(unblended) maser spots and background sources to carry out a combined
solution, which uses a single parallax parameter but allows each maser spot to
have a different proper motion. The fitting results for the H2O and CH3OH
masers are shown in Figures 4 and 5, respectively. Tables 6 and 7 provide
detailed results of the parallax and proper motion fits. Since systematic
errors caused by propagation delays are essentially the same for all maser
spots, we multiplied the formal parallax uncertainties by the square root of
the number of spots used for the parallax fitting (Reid et al., 2009).
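The combined solution can be written as a linear least-squares problem with one shared parallax column and per-spot offset and proper-motion columns, after which the formal parallax uncertainty is inflated by $\sqrt{N_{\rm spots}}$. The sketch below is our own reconstruction under those assumptions (interface and toy parallax factors are illustrative, not the actual fitting code):

```python
import numpy as np

def combined_parallax_fit(times, offsets, pfactors, spot_ids):
    """Fit one shared parallax plus an independent position offset and
    proper motion per maser spot. pfactors are the parallax factors at
    each epoch. Returns (parallax, formal sigma, sigma * sqrt(N_spots))."""
    spots = sorted(set(spot_ids))
    n = len(times)
    A = np.zeros((n, 1 + 2 * len(spots)))   # [parallax] + per-spot [x0, mu]
    A[:, 0] = pfactors
    for j, s in enumerate(spots):
        sel = np.asarray(spot_ids) == s
        A[sel, 1 + 2 * j] = 1.0
        A[sel, 2 + 2 * j] = np.asarray(times)[sel]
    p, res, _, _ = np.linalg.lstsq(A, offsets, rcond=None)
    dof = max(n - A.shape[1], 1)
    chi2 = float(res[0]) if res.size else 0.0
    cov = np.linalg.inv(A.T @ A) * (chi2 / dof)
    sigma = np.sqrt(cov[0, 0])
    return p[0], sigma, sigma * np.sqrt(len(spots))

# Toy data: 2 spots over 8 epochs sharing a 0.1 mas parallax but with
# different offsets and proper motions.
t = np.tile(np.linspace(0.0, 1.2, 8), 2)
pf = np.sin(2.0 * np.pi * t)                 # illustrative parallax factors
ids = np.repeat([0, 1], 8)
x = 0.1 * pf + np.where(ids == 0, 5.0 + 0.3 * t, -2.0 - 0.6 * t)
pi_fit, sig, sig_inflated = combined_parallax_fit(t, x, pf, ids)
```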
Individual fits for G043.89$-$00.78 with different QSOs revealed some
differences among the inferred error-floor values, suggesting different
contributions to the systematic error budget from uncompensated atmospheric
delays and/or variable unresolved QSO structure. After determining the
individual error floors, we added these in quadrature to their formal
uncertainties from JMFIT before doing a combined fit. Also, as seen in Figure
4, the decl. data for G043.89$-$00.78 show the presence of systematics in the
residuals, which might be attributed to proper-motion acceleration, possibly
owing to orbital acceleration in a long-period stellar binary. Such effects
have been seen in Very Long Baseline Interferometric data from Pleiades stars
(Melis et al., 2014). When adjustable acceleration parameters were included in
the fitting process, these systematic residuals disappeared, as shown by the
red dashed line in Figure 4. The estimated acceleration parameters are listed
in Table 6. We adopted the parallax results from this approach for
G043.89$-$00.78.
The internal motions of maser features persisting for at least five epochs
were averaged to estimate the motion of the central star. Considering typical
values of maser spot motions relative to the central star of $\sim 5$ km s-1
(Moscadelli et al., 2002) for 6.7 GHz CH3OH masers and $\sim 10$ km s-1 (Gwinn
et al., 1992) for 22 GHz H2O masers, for sources with multiple maser features
we inflated the formal uncertainties for each proper motion component of the
central star by adding $\pm 3$ km s-1 and $\pm 5$ km s-1 “error floors” in
quadrature, respectively. For sources with only one maser feature, we adopted
$\pm 5$ km s-1 and $\pm 10$ km s-1 error floors for the CH3OH and H2O masers.
We adopted an LSR velocity for the central star based on the centroid of the
NH3 molecular line emission (Wienen et al., 2012; Pandian et al., 2012; Olmi
et al., 1993).
### 3.2 Previous Measurements
Regarding G033.64$-$00.22, Reid et al. (2014) reported a parallax of 0.131
$\pm$ 0.020 mas, whereas our data yield a parallax of 0.090 $\pm$ 0.014 mas.
These two parallaxes are statistically consistent at the 1.7$\sigma$ level.
Combining these two measurements with variance weighting yields a parallax of
0.103 $\pm$ 0.011 mas.
For G041.15$-$00.20, our data yield a parallax of 0.144 $\pm$ 0.014 mas, and
combining this with the parallax of 0.125 $\pm$ 0.018 mas of Wu et al. (2019)
yields a variance-weighted average parallax of 0.137 $\pm$ 0.011 mas.
For G043.89$-$00.78, parallaxes of 0.121 $\pm$ 0.020 mas and 0.144 $\pm$ 0.014
mas were reported by Wu et al. (2014) and Wu et al. (2019), respectively.
Combining these with our measured parallax of 0.137 $\pm$ 0.006 mas yields
a parallax of 0.136 $\pm$ 0.005 mas.
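The variance-weighted averages quoted above can be checked with a few lines (a standard inverse-variance mean; our own helper, using the G033.64$-$00.22 values from the text):

```python
import numpy as np

def weighted_mean(values, sigmas):
    """Variance-weighted average and its 1-sigma uncertainty."""
    w = 1.0 / np.asarray(sigmas) ** 2
    return np.sum(w * np.asarray(values)) / np.sum(w), 1.0 / np.sqrt(np.sum(w))

# G033.64-00.22: Reid et al. (2014) vs. this work
pi, err = weighted_mean([0.131, 0.090], [0.020, 0.014])
print(f"{pi:.3f} +/- {err:.3f}")  # 0.103 +/- 0.011
```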
### 3.3 3D Kinematic Distances
The 3D kinematic distances provide an alternative method to estimate distance,
independent of parallax measurements. This technique combines likelihoods as a
function of distance for line-of-sight velocities and proper motion components
in Galactic longitude and latitude, assuming a rotation curve for the Galaxy.
Multiplying these three likelihoods yields an a posteriori distance estimate,
which is generally free of the two-fold ambiguity of standard (1D) kinematic
distances, which use only line-of-sight velocities, for sources within the
Solar circle (i.e., Galactocentric radii less than $R_{0}$).
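The likelihood product over a distance grid can be sketched as follows. This is a toy illustration of the technique only: the model here is an arbitrary linear stand-in, not the Reid (2022) rotation-curve predictions, and all numbers are invented.

```python
import numpy as np

def kinematic_distance(d_grid, predict, observed, sigmas):
    """Multiply Gaussian likelihoods for (V_LSR, mu_l, mu_b) over a grid
    of trial distances. predict(d) returns the model (v_los, mu_l, mu_b)
    at distance d. Returns the best distance and normalized posterior."""
    logL = np.zeros_like(d_grid)
    for k, d in enumerate(d_grid):
        model = predict(d)
        logL[k] = -0.5 * sum(((o - m) / s) ** 2
                             for o, m, s in zip(observed, model, sigmas))
    post = np.exp(logL - logL.max())
    return d_grid[np.argmax(post)], post / post.sum()

# Toy model whose observables are exactly matched at d = 10 kpc.
grid = np.linspace(1.0, 20.0, 1901)
best, posterior = kinematic_distance(
    grid,
    lambda d: (5.0 * d, -3.0 + 0.1 * d, 0.0),   # invented linear "model"
    observed=(50.0, -2.0, 0.0),
    sigmas=(3.0, 0.2, 0.5))
```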
For the four sources discussed in this paper and 12 other sources in the
Sagittarius arm with distances greater than about 8 kpc, 3D kinematic
distances were estimated using the proper motions, LSR velocities, and the
Galactic rotation curve of Reid (2022). Table 2 lists their
trigonometric parallaxes, proper motions, LSR velocities, the 3D kinematic
distances, and a variance-weighted average 444We first convert the measured
parallax ($\pi\pm\sigma_{\pi}$) to distance ($d\pm\sigma_{d}$), where
$d=1/\pi$ and $\sigma_{d}=d^{2}\sigma_{\pi}$, and then combine the parallax-
converted distance and the 3D kinematic distance by variance weighting. of the
two distance estimates for each source. We adopted these average distances in
the following discussion.
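The footnoted recipe (convert the parallax via $d = 1/\pi$, $\sigma_d = d^2\sigma_\pi$, then variance-weight against the 3D kinematic distance) can be sketched directly; using the symmetrized G043.89$-$00.78 values from Table 2 it lands near the adopted 7.5 kpc:

```python
import numpy as np

def combine_distances(parallax, sigma_pi, d_kin, sigma_kin):
    """Convert a parallax (mas) to a distance (kpc): d = 1/pi,
    sigma_d = d^2 * sigma_pi, then combine with a 3-D kinematic
    distance by variance weighting."""
    d = 1.0 / parallax
    sigma_d = d ** 2 * sigma_pi
    w = np.array([1.0 / sigma_d ** 2, 1.0 / sigma_kin ** 2])
    return (w[0] * d + w[1] * d_kin) / w.sum(), 1.0 / np.sqrt(w.sum())

# G043.89-00.78: parallax 0.136 +/- 0.005 mas, D_k = 8.3 +/- 0.8 kpc
d, sig = combine_distances(0.136, 0.005, 8.3, 0.8)  # ~7.5 kpc
```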
Figure 1: Locations of sources superposed on a CO $(l,v)$ diagram from the
Galactic Ring Survey (Jackson et al., 2006) integrated from b =
$-1{{}^{\circ}}$ to $+1{{}^{\circ}}$. Black dots with purple edges indicate
previously published sources assigned to the far portion of the Sagittarius
arm in Table 1 of R19 (along with their primary references). White crosses
indicate the targets for which we are performing VLBA parallax measurements
under program BL312, as described in Section 4.2. Red triangles and squares
indicate the $(l,v)$ positions of H II regions and 6.7 GHz CH3OH masers,
estimated to be at far kinematic distances in http://astro.phys.wvu.edu/wise/
(catalog WISE Catalog of Galactic H II Regions) (Anderson et al., 2014) and
the GLOSTAR Galactic plane survey 6.7 GHz methanol maser catalogue (Nguyen et
al., 2022), respectively. Traces of the Sagittarius and Perseus arms from Reid
et al. (2016) pass through the CO (Dame et al., 2001; Jackson et al., 2006)
and H I (Stil et al., 2006) emission features that define the arms in
longitude, latitude, and velocity. The solid and dashed lines correspond to
the far and near portions of the Sagittarius arm. The width of the shaded
region corresponds to a $\pm$ 7 km s-1 velocity dispersion.
## 4 Discussion
### 4.1 Spiral Arm Assignments
We now discuss the spiral arm assignments of the four sources from this paper
and the two sources reported by Xu et al. (2021). We assigned maser sources to
the spiral arms based on their locations in H I and/or CO longitude–velocity
$(l,v)$ diagrams. As shown in Figure 1, traces of the Sagittarius and Perseus
arms (Reid et al., 2016) roughly follow the 13CO ($J$= 1–0) Galactic Ring
Survey (Jackson et al., 2006) integrated from b = $-1{{}^{\circ}}$ to
$+1{{}^{\circ}}$. However, for Galactic longitudes lower than about
$30{{}^{\circ}}$, the near portion of the Sagittarius arm overlaps with the
velocities associated with the Perseus arm in $(l,v)$ plots. And, for
longitudes less than about $20{{}^{\circ}}$, it is difficult to distinguish
the far portion of the Sagittarius arm from the Perseus arm. This highlights
the need for accurate distance measurements to trace these arms in space.
The $(l,v)$ distribution of H II regions and 6.7 GHz CH3OH masers, placed at
their far kinematic distances by Anderson et al. (2014) and Nguyen et al.
(2022), follows the Sagittarius and Perseus arms reasonably well, at least
down to Galactic longitudes of about $25{{}^{\circ}}$.
As is evident in Figure 1, the $(l,v)$ positions of G033.64$-$00.22,
G035.57$-$00.03, G041.15$-$00.20, and G043.89$-$00.78 unambiguously indicate
an association with the far portion of the Sagittarius arm. Figure 2 shows the
same four sources (green pentagrams) superposed on a map containing $\sim$200
maser sources listed in R19, as well as sources from Xu et al. (2021), and
Bian et al. (2022). The locations of our four sources are also consistent with
the far portion of the Sagittarius arm. Therefore, we adopt that arm
assignment for these four newly measured maser sources.
The distances for G019.60$-$00.23 and G020.08$-$00.13 are consistent with
either the Sagittarius or Perseus arms. However, the far portion of the
Sagittarius arm is favored over the Perseus arm by their $(l,v)$ positions,
since, as shown in Figure 1, their LSR velocities are $\approx 20$ km s-1
offset from those expected for the Perseus arm. Typical LSR velocity
deviations, owing to internal virial motions, are $\approx 7$ km s-1 from the
average arm values (Reid et al., 2016). Also, at longitudes near 20∘, the
Galactic latitude of the far portion of the Sagittarius arm is
$\approx-0\fdg075$, while that of the Perseus arm is $\approx+0\fdg076$
(Reid et al., 2016, 2019). At distances of over 12 kpc, the Galactic
latitudes of G019.60$-$00.23 ($-0\fdg23$) and G020.08$-$00.13 ($-0\fdg13$)
would place them more than 200 pc below the center
of the Perseus arm. Comparing this offset to the (Gaussian $1\sigma$) vertical
dispersion of about 100 pc for the Perseus arm (see Fig. 2 of Reid et al.,
2016) also favors association with the Sagittarius over the Perseus arm.
Considering all the evidence, we confidently assign G019.60$-$00.23 and
G020.08$-$00.13 to the far portion of the Sagittarius arm.
Figure 2: Plan view of the Milky Way as seen from the north Galactic pole
following R19. The Galactic center is at (0,0) kpc and the Sun is at (0,8.15)
kpc. Solid dots indicate parallax locations of the maser sources. Distance
uncertainties are indicated by the inverse size of the symbols as given in the
legend at the lower left. Purple dots with 1$\sigma$ error bars indicate
sources described in Section 3 and listed in Table 2. Green stars indicate the
locations of the sources reported in this paper. The spiral arm models from
R19 are traced with solid lines and dotted lines enclose 90% of the sources.
The location of the kink in the Sagittarius arm is marked with a “K”. The long
Galactic bar is indicated with a shaded ellipse (Wegg et al., 2015). The pitch
angle fit for the Sagittarius arm determined in this paper is traced with a
thick purple line.
### 4.2 Structure of the Sagittarius Arm
R19 fitted log-period spiral functions to the locations of HMSFRs in the
Sagittarius arm with trigonometric parallax distances. They found that the
Sagittarius arm has a “kink” at Galactocentric azimuth $\beta\approx
24{{}^{\circ}}$ with pitch angles of $17\fdg1\pm 1\fdg6$ for
$2{{}^{\circ}}<\beta<24{{}^{\circ}}$ and $1\fdg0\pm 2\fdg1$ for
$24{{}^{\circ}}<\beta<97{{}^{\circ}}$. Beyond
the Galactic center (i.e., $\beta>90{{}^{\circ}}$), only two sources,
G037.47$-$00.10 and G038.03$-$00.30, with parallaxes reported by Wu et al.
(2019), were used by R19 to characterize the Sagittarius arm. Our results for
G033.64$-$00.22 and G035.57$-$00.03 together with the measurements of
G019.60$-$00.23 and G020.08$-$00.13 (Xu et al., 2021) add significant weight
in determining the arm structure in this distant region. Using these six
distant sources, adopting the averaged distances given in Table 2, and the
same methodology as R19, we redetermine the characteristics of the Sagittarius
arm over an extended range. It is worth noting that, although we employed the
same procedure as R19, the increased accuracy of the distances in Table 2
results in two R19 outliers (G032.74$-$00.07 and G033.64$-$00.22, with
$>3\sigma$ residuals) now falling within the acceptable range ($<3\sigma$
residuals).
We estimate the pitch angles of the two Sagittarius arm segments to be
$18\fdg6\pm 1\fdg4$ and $1\fdg5\pm 1\fdg1$ for segments between azimuths of
$-2{{}^{\circ}}<\beta<22{{}^{\circ}}$ and
$22{{}^{\circ}}<\beta<132{{}^{\circ}}$, respectively. Our best-fitting
parameters are consistent with the results of R19 ($17\fdg1\pm 1\fdg6$ and
$1\fdg0\pm 2\fdg1$ for the segments between azimuths of
$2{{}^{\circ}}<\beta<24{{}^{\circ}}$ and
$24{{}^{\circ}}<\beta<97{{}^{\circ}}$, respectively). Our results, based on
more parallax data, extend the azimuth range from 97∘ to 132∘ (see Table 3). In
addition, the significantly smaller uncertainty of our pitch angle for
$\beta>22{{}^{\circ}}$, compared with that of R19, suggests that the
pitch angle is nearly constant over the azimuth range
$22{{}^{\circ}}<\beta<132{{}^{\circ}}$. We plot our extended model as the
thick purple line in Figure 2.
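The kinked log-periodic spiral used to trace the arm can be sketched as $\ln(R/R_{\rm kink}) = -(\beta-\beta_{\rm kink})\tan\psi$, with a different pitch angle on either side of the kink. The snippet below is an illustrative reconstruction of that functional form (defaults taken from our Table 3 fit; treat it as a sketch, not the fitting code itself):

```python
import numpy as np

def sag_arm_radius(beta_deg, beta_kink=22.0, r_kink=6.06,
                   psi_lt=18.4, psi_gt=1.7):
    """Galactocentric radius (kpc) of the kinked log-periodic spiral:
    ln(R / R_kink) = -(beta - beta_kink) * tan(psi), with pitch angle
    psi_lt below the kink azimuth and psi_gt above it (all angles deg)."""
    beta = np.atleast_1d(np.asarray(beta_deg, dtype=float))
    psi = np.where(beta < beta_kink, psi_lt, psi_gt)
    return r_kink * np.exp(-np.radians(beta - beta_kink)
                           * np.tan(np.radians(psi)))

r_kink = sag_arm_radius(22.0)[0]    # 6.06 kpc at the kink, by construction
r_far = sag_arm_radius(132.0)[0]    # shallow segment, ~5.7 kpc
```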
While our fitted pitch angles are close to those of R19, our model for the
Sagittarius arm in Fig. 2 diverges radially outward from the R19 model
starting near $\beta=42{{}^{\circ}}$, and by $\beta=132{{}^{\circ}}$ it is at $\approx
1$ kpc greater radius. Why does this occur? When the R19 model was generated,
G019.60$-$00.23 and G020.08$-$00.13 had not yet been measured, and
G032.74$-$00.07 and G033.64$-$00.22 were flagged as outliers (and not used in
their arm fitting). Thus, the pitch angle for $\beta$ $>$ 42∘ was not well
constrained. In order to build a more complete spiral arm model for the Milky
Way, R19 adopted a pitch angle of $8{{}^{\circ}}$ for the distant portion of
the Sagittarius arm, closer to the average value determined for the Galaxy's
spiral arms, and extrapolated the arm to the far end of the Galactic bar at
approximately the same radius as that measured for the Norma arm at the near
end of the bar.
As seen in Figure 2, our updated model for the far portion of the Sagittarius
arm, based on new parallax and 3D kinematic distance measurements, approaches
the R19 model for the Perseus arm at $\beta\approx 130{{}^{\circ}}$. This
supports the suggestion (Xu et al., 2023) that the Sagittarius and Perseus arm
might merge. However, we note that the location of the Perseus arm for
azimuths greater than $\approx 90{{}^{\circ}}$ is currently not well
constrained, and the Perseus and Sagittarius arms might merge closer to the
far end of the bar, or they may not merge at all.
As can be seen in the shaded regions of Figure 1, there are dozens of 6.7 GHz
masers at $\ell\approx$ 20∘ to 35∘ that could be associated with the far
portion of the Sagittarius arm or with the Perseus arm. Parallax measurements for
these sources using the MultiView calibration technique (Rioja et al., 2017;
Hyland et al., 2022) could help us to test these possibilities. In fact, for
this purpose, under program BL312 (from March 2024 to March 2025), we are
measuring the parallaxes of eight maser sources. Six of the eight sources
fall within the range of Figure 1 and are shown as white crosses.
Table 3: Sagittarius Arm Fitting Results
Reference | $l$ Tangency | $\beta$ Range | $\beta_{\rm kink}$ | $R_{\rm kink}$ | $\psi_{<}$ | $\psi_{>}$ | Width
---|---|---|---|---|---|---|---
| (deg) | (deg) | (deg) | (kpc) | (deg) | (deg) | (kpc)
This paper | 284.4 | $-2\rightarrow 132$ | $22\pm 2$ | $6.06\pm 0.06$ | $\phantom{0}18.4\pm 1.4$ | $\phantom{0}1.7\pm 1.0$ | $0.18\pm 0.03$
R19 | 285.6 | $2\rightarrow\phantom{0}97$ | $24\pm 2$ | $6.04\pm 0.09$ | $\phantom{0}17.1\pm 1.6$ | $\phantom{0}1.0\pm 2.1$ | $0.27\pm 0.04$
Note. — Parameters estimated from fitting log-periodic spirals to the
Sagittarius arm based on data from this paper, Xu et al. (2021), and R19,
assuming a distance to the Galactic center of $R_{0}$ = 8.15 kpc. An arm
tangency prior of $283\pm 2{{}^{\circ}}$ (Bronfman et al., 2000) was used to
constrain the fit. Column 2 lists the fitted tangency. Column 3 lists the
azimuth range of the parallax data. Columns 4 and 5 list the Galactic azimuth
and radius of the arm kink, separating two arm segments. Columns 6 and 7 give
pitch angles for azimuths less than and greater than $\beta_{\rm kink}$.
Column 8 lists the intrinsic (Gaussian 1$\sigma$) arm width at $R_{\rm kink}$. Rows 1
and 2 give the best-fitting parameters in this paper and from R19 for
comparison.
### 4.3 Kinematics of the Sagittarius Arm
The peculiar motion components, $U_{s}$ (toward the Galactic center), $V_{s}$
(in the direction of Galactic rotation), and $W_{s}$ (toward the north
Galactic pole) listed in Table 8 for the maser sources in Table 2 were
calculated by adopting the distance to the Galactic center, ${R_{0}}$ = 8.15
kpc; the circular rotation speed at the Sun, $\Theta_{0}$ = 236 km s-1; the
solar motion values $U_{\odot}$ = 10.6 km s-1, $V_{\odot}$ = 10.7 km s-1, and
$W_{\odot}$ = 7.6 km s-1; and the Galactic rotation curve from R19.
Figure 3 shows the peculiar motion components ($U_{s}$, $V_{s}$, $W_{s}$) as
a function of Galactic azimuth ($\beta$) for sources in the Sagittarius arm
based on the data from this paper and Table 1 of R19. From Figure 3, one can
see that all values are consistent with 0 $\pm$ 20 km s-1. The variance-
weighted average peculiar motion components ($\overline{U_{s}}$,
$\overline{V_{s}}$, $\overline{W_{s}}$) for the Sagittarius arm sources are
listed in Table 4. For comparison with other spiral arms, the corresponding
values for the sources in the Norma, Scutum, Local, Perseus, and Outer arms
are also listed in Table 4 (based on data in Table 1 of R19). Among these
spiral arms, the Sagittarius arm has among the smallest $\overline{U_{s}}$
and $\overline{V_{s}}$ magnitudes, indicating nearly circular Galactic orbits.
However, the Sagittarius arm has the only statistically significant
$\overline{W_{s}}$ value, which comes from sources in the arm segment with
$\beta<22{{}^{\circ}}$ mostly moving toward the south Galactic pole.
Figure 3: Plots of the peculiar motion components vs. Galactic azimuth for
sources in the Sagittarius arm based on the data from this paper and Table 1
of R19. The top, middle, and bottom panels show $U_{s}$, $V_{s}$, and $W_{s}$,
respectively, calculated by adopting model A5 and the Galactic rotation curve
from R19. Table 4: Average Peculiar Motions of HMSFRs in Spiral Arms
Arm | N | $\overline{U_{s}}$ | $\overline{V_{s}}$ | $\overline{W_{s}}$
---|---|---|---|---
Name | | (km s-1) | (km s-1) | (km s-1)
Norma | 14 | $9.9\pm 3.2$ | $-7.7\pm 1.7$ | $0.2\pm 1.7$
Scutum | 40 | $10.8\pm 1.3$ | $-1.4\pm 1.1$ | $-0.7\pm 1.0$
Sagittarius | 42 | $2.2\pm 1.0$ | $-0.8\pm 1.0$ | $-3.0\pm 0.9$
Local | 28 | $0.1\pm 1.0$ | $-7.4\pm 0.9$ | $1.7\pm 1.0$
Perseus | 41 | $8.7\pm 0.9$ | $-5.9\pm 1.1$ | $0.1\pm 1.0$
Outer | 11 | $5.7\pm 1.9$ | $-6.7\pm 2.6$ | $0.7\pm 2.4$
Note. — Variance-weighted averages of peculiar motions were calculated from
data in this paper and Table 1 of R19. Columns 1 and 2 list the arm name and
the number of sources. Columns 3, 4, and 5 list the variance-weighted average
of the peculiar motion components $U_{s}$, $V_{s}$, and $W_{s}$.
## 5 Summary
We measured the parallaxes and proper motions of four masers in HMSFRs
associated with the distant portions of the Sagittarius spiral arm. The
results for G041.15$-$00.20 and G043.89$-$00.78 at Galactic azimuth $\beta\sim
80{{}^{\circ}}$ are consistent with the previous model for the arm by R19.
However, the more distant sources, G033.64$-$00.22 and G035.57$-$00.03, as
well as G019.60$-$00.23 and G020.08$-$00.13 from Xu et al. (2021), better
constrain the structure of the Sagittarius arm beyond the Galactic center
($\beta>90{{}^{\circ}}$) and suggest that the Sagittarius arm model of R19
should be moved outward by about 1 kpc at $\beta\approx 130{{}^{\circ}}$,
where it might merge with the Perseus arm.
This work was funded by the National SKA Program of China (grant No.
2022SKA0120103), NSFC grants 11933011 and 12303072, the Key Laboratory for Radio
Astronomy, the Jiangsu Funding Program for Excellent Postdoctoral Talent
(grant No. 2023ZB093), and the Natural Science Foundation of Jiangsu Province
(grant No. BK20210999).
Facility: VLBA
## Appendix A Observations
Here, we list the dates of the observed epochs in Table 5.
Table 5: Dates of VLBA Observations
G033.64$-$00.22 | G035.57$-$00.03 | G041.15$-$00.20 | G043.89$-$00.78
---|---|---|---
2015 Mar 01 | 2015 Mar 08 | 2015 Mar 01 | 2015 Mar 08
2015 Mar 28 | 2015 Mar 27 | 2015 Mar 28 | 2015 Mar 27
2015 Apr 22 | 2015 Apr 20 | 2015 Apr 22 | 2015 Apr 20
2015 May 20 | 2015 May 14 | 2015 May 20 | 2015 May 14
2015 Aug 28 | 2015 Aug 29 | 2015 Aug 28 | 2015 Aug 29
2015 Sep 10 | 2015 Sep 13 | 2015 Sep 10 | 2015 Sep 13
2015 Sep 21 | 2015 Sep 28 | 2015 Sep 21 | 2015 Sep 28
2015 Oct 03 | 2015 Oct 09 | 2015 Oct 03 | 2015 Oct 09
2015 Oct 16 | 2015 Oct 24 | 2015 Oct 16 | 2015 Oct 24
2015 Oct 27 | 2015 Nov 02 | 2015 Oct 27 | 2015 Nov 02
2015 Nov 06 | 2015 Nov 13 | 2015 Nov 06 | 2015 Nov 13
2015 Nov 16 | 2015 Nov 23 | 2015 Nov 16 | 2015 Nov 23
2016 Feb 26 | 2016 Feb 25 | 2016 Feb 26 | 2016 Feb 25
2016 Mar 25 | 2016 Mar 22 | 2016 Mar 25 | 2016 Mar 22
2016 Apr 18 | 2016 Apr 16 | 2016 Apr 18 | 2016 Apr 16
2016 May 26 | 2016 May 25 | 2016 May 26 | 2016 May 25
## Appendix B Parallax and Proper Motion Fits
Here, we list the parallax fits with formal uncertainties and the proper-motion
estimates in Table 6 and Table 7, respectively.
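The fits shown in Figures 4 and 5 decompose each position offset into an initial offset, a linear proper motion, and an annual parallax signature. A minimal one-dimensional (eastward) least-squares sketch, using a simplified sinusoidal parallax factor in place of the true parallax ellipse (an assumption for illustration; `fit_parallax_1d` is a hypothetical helper, not the authors' pipeline):

```python
import numpy as np

def fit_parallax_1d(t, x, sigma):
    """Weighted least-squares fit of x(t) = x0 + mu*t + pi*f(t).

    f(t) is a toy sinusoidal parallax factor (period 1 yr); a real
    fit would use the full parallax ellipse for the source's sky
    position. t in years, x and sigma in mas.
    """
    f = np.sin(2 * np.pi * t)                 # simplified parallax factor
    A = np.column_stack([np.ones_like(t), t, f])
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    # Weighted normal equations: (A^T W A) p = A^T W x
    AtW = A.T * w
    p, *_ = np.linalg.lstsq(AtW @ A, AtW @ x, rcond=None)
    return p  # [x0 (mas), mu (mas/yr), parallax (mas)]
```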
Figure 4: Parallax fitting results for the H2O masers G035.57$-$00.03 and
G043.89$-$00.78. The top panels give the eastward (solid line) and northward
(dashed line) position offsets vs. time. The middle and bottom panels display
the eastward and northward data with the fitted proper motion removed. See the
legend for the source names, the $V_{\rm LSR}$ of each maser spot, and the
background QSO(s) used for the position reference(s). The red dashed line
shows the northward position offsets when proper-motion acceleration was
included in the fitting process for G043.89$-$00.78.
Figure 5: Same as Figure 4, but for the CH3OH masers, G033.64$-$00.22 and G041.15$-$00.20. Their parallaxes are estimated using the approach of Zhang et al. (2019).
Table 6: Detailed Results of the Parallaxes
Target | Background | $V_{\rm LSR}$ | Detected | Parallax | Solved for | accx | accy
---|---|---|---|---|---|---|---
Source | Source | (km s-1) | epochs | (mas) | acceleration | (mas y-2) | (mas y-2)
H2O maser | | |
G035.57$-$00.03 | J1855+0251 | 48.62 | 1111 1111 1111 1111 | 0.098 $\pm$ 0.006 | No | |
| | 53.90 | 1111 1111 1111 1000 | 0.098 $\pm$ 0.013 | No | |
| Combined fit | | 0.098 $\pm$ 0.008 | No | |
G043.89$-$00.78 | J1905+0952 | 59.18 | 1111 11111111 1111 | 0.146 $\pm$ 0.006 | No | |
| | 60.47 | 1111 11111111 1111 | 0.142 $\pm$ 0.006 | No | |
| J1913+0932 | 59.18 | 1111 11111111 1111 | 0.143 $\pm$ 0.004 | No | |
| | 60.47 | 1111 11111111 1111 | 0.140 $\pm$ 0.004 | No | |
| J1922+0841 | 59.18 | 1111 11111111 1111 | 0.150 $\pm$ 0.003 | No | |
| | 60.47 | 1111 11111111 1111 | 0.145 $\pm$ 0.005 | No | |
| Combined fit | | 0.145 $\pm$ 0.003 | No | |
G043.89$-$00.78 | J1905+0952 | 59.18 | 1111 11111111 1111 | 0.153 $\pm$ 0.012 | Yes | $-$0.12 $\pm$ 0.16 | 0.62 $\pm$ 0.12
| | 60.47 | 1111 11111111 1111 | 0.129 $\pm$ 0.013 | Yes | 0.16 $\pm$ 0.18 | 0.48 $\pm$ 0.12
| J1913+0932 | 59.18 | 1111 11111111 1111 | 0.157 $\pm$ 0.007 | Yes | $-$0.21 $\pm$ 0.09 | 0.73 $\pm$ 0.05
| | 60.47 | 1111 11111111 1111 | 0.130 $\pm$ 0.009 | Yes | 0.11 $\pm$ 0.13 | 0.55 $\pm$ 0.09
| J1922+0841 | 59.18 | 1111 11111111 1111 | 0.140 $\pm$ 0.007 | Yes | 0.11 $\pm$ 0.09 | 0.43 $\pm$ 0.15
| | 60.47 | 1111 11111111 1111 | 0.119 $\pm$ 0.010 | Yes | 0.36 $\pm$ 0.14 | 0.26 $\pm$ 0.16
| Combined fit | | 0.137 $\pm$ 0.006 | Yes | 0.08 $\pm$ 0.06 | 0.54 $\pm$ 0.05
CH3OH maser | | |
G033.64$-$00.22 | J848 & J1857 | 60.36 | 1111 1111 1101 1111 | 0.081 $\pm$ 0.018 | No | |
| | 62.70 | 1111 1111 1101 1111 | 0.099 $\pm$ 0.019 | No | |
Combined fit | | | 0.090 $\pm$ 0.014 | No | |
G041.15$-$00.20 | J1905 & J1907 | 55.64 | 1111 1111 1111 1111 | 0.128 $\pm$ 0.026 | No | |
| | 55.82 | 1111 1111 1111 1111 | 0.140 $\pm$ 0.016 | No | |
| | 56.00 | 1111 1111 1111 1111 | 0.141 $\pm$ 0.015 | No | |
Combined fit | | | 0.144 $\pm$ 0.014 | No | |
Table 7: Detailed Results of the Proper Motions
Target | Feature | $V_{\rm LSR}$ | $\mu_{x}$ | $\mu_{y}$ | $\Delta x$ | $\Delta y$
---|---|---|---|---|---|---
Source | | (km s-1) | (mas y-1) | (mas y-1) | (mas) | (mas)
H2O maser
G035.57$-$00.03 | 1 | 48.19$\sim$49.05 | $-$3.11 $\pm$ 0.04 | $-$6.26 $\pm$ 0.07 | 0 | 0
| 2 | 64.50$\sim$68.71 | $-$3.23 $\pm$ 0.11 | $-$6.02 $\pm$ 0.21 | 624 | $-$1088
| 3 | 55.75$\sim$56.07 | $-$3.17 $\pm$ 0.07 | $-$6.16 $\pm$ 0.17 | $-$319 | $-$303
| 4 | 50.56$\sim$54.88 | $-$2.45 $\pm$ 0.06 | $-$5.93 $\pm$ 0.11 | 53 | 46
| 5 | 53.16$\sim$53.59 | $-$3.16 $\pm$ 0.12 | $-$6.04 $\pm$ 0.20 | 1428 | 596
| Average | $-$3.02 $\pm$ 0.08 | $-$6.08 $\pm$ 0.15 | |
Enlarge 5 km s-1 Error | $-$3.02 $\pm$ 0.13 | $-$6.08 $\pm$ 0.18 | |
G043.89$-$00.78 | 1 | 58.10$\sim$59.61 | $-$2.48 $\pm$ 0.03 | $-$6.08 $\pm$ 0.03 | 0 | 0
| 2 | 48.71$\sim$51.73 | $-$3.42 $\pm$ 0.04 | $-$5.02 $\pm$ 0.05 | $-$62 | $-$51
| 3 | 59.72$\sim$60.80 | $-$3.09 $\pm$ 0.03 | $-$5.81 $\pm$ 0.13 | $-$78 | $-$102
| 4 | 54.43$\sim$56.37 | $-$3.03 $\pm$ 0.10 | $-$7.22 $\pm$ 0.14 | $-$65 | $-$276
| Average | $-$3.01 $\pm$ 0.05 | $-$6.03 $\pm$ 0.09 | |
Enlarge 5 km s-1 Error | $-$3.01 $\pm$ 0.16 | $-$6.03 $\pm$ 0.18 | |
CH3OH maser
G033.64$-$00.22 | 1 | 60.18$\sim$60.54 | $-$3.01 $\pm$ 0.03 | $-$6.26 $\pm$ 0.05 | 0 | 0
| 2 | 62.52$\sim$63.24 | $-$3.00 $\pm$ 0.04 | $-$6.33 $\pm$ 0.06 | $-$28 | $-$2
| Average | $-$3.01 $\pm$ 0.04 | $-$6.30 $\pm$ 0.06 | |
Enlarge 3 km s-1 Error | $-$3.01 $\pm$ 0.07 | $-$6.30 $\pm$ 0.08 | |
G041.15$-$00.20 | 1 | 55.65$\sim$56.00 | $-$2.74 $\pm$ 0.03 | $-$6.03 $\pm$ 0.06 | 0 | 0
Enlarge 5 km s-1 Error | $-$2.74 $\pm$ 0.15 | $-$6.03 $\pm$ 0.16 | |
## Appendix C Peculiar Motions
Here, we list the peculiar motions of the sources located in the Sagittarius
arm in Table 8.
Table 8: Peculiar Motions
Source | $\beta$ | $U_{s}$ | $V_{s}$ | $W_{s}$ | Distance | $\mu_{x}$ | $\mu_{y}$ | $v_{\rm LSR}$
---|---|---|---|---|---|---|---|---
| (∘) | (km s-1) | (km s-1) | (km s-1) | (kpc) | (mas y-1) | (mas y-1) | (km s-1)
G045.49$+$00.12 | $56.0_{-8.7}^{+8.2}$ | $-8.5\pm 9.2$ | $-10.1\pm 5.4$ | $-1.6\pm 5.6$ | $6.9^{+0.9}_{-0.9}$ | $-2.62\pm 0.17$ | $-5.61\pm 0.16$ | $58\pm 3$
G045.80$-$00.35 | $56.9_{-9.6}^{+8.9}$ | $1.1\pm 11.3$ | $-0.3\pm 9.0$ | $-12.8\pm 7.2$ | $7.0^{+1.0}_{-1.0}$ | $-2.52\pm 0.17$ | $-6.08\pm 0.27$ | $64\pm 5$
G043.89$-$00.78 | $62.2_{-2.8}^{+2.8}$ | $7.0\pm 6.3$ | $-10.7\pm 4.4$ | $2.3\pm 5.9$ | $7.5^{+0.3}_{-0.3}$ | $-3.01\pm 0.16$ | $-6.03\pm 0.18$ | $54\pm 3$
G045.07$+$00.13 | $63.6_{-3.6}^{+3.5}$ | $9.5\pm 9.4$ | $2.0\pm 7.3$ | $8.1\pm 9.4$ | $7.7^{+0.4}_{-0.4}$ | $-3.21\pm 0.26$ | $-6.11\pm 0.26$ | $59\pm 5$
G041.15$-$00.20 | $64.1_{-5.1}^{+4.8}$ | $1.5\pm 6.8$ | $-14.5\pm 5.1$ | $-4.9\pm 5.5$ | $7.6^{+0.5}_{-0.5}$ | $-2.74\pm 0.15$ | $-6.03\pm 0.16$ | $60\pm 3$
G045.45$+$00.06 | $66.9_{-8.0}^{+7.2}$ | $-9.3\pm 19.0$ | $-2.7\pm 13.9$ | $-19.8\pm 16.4$ | $8.1^{+0.9}_{-0.9}$ | $-2.34\pm 0.38$ | $-6.00\pm 0.54$ | $55\pm 7$
G043.03$-$00.45 | $68.9_{-5.5}^{+5.1}$ | $19.6\pm 7.7$ | $2.4\pm 10.6$ | $-6.2\pm 6.4$ | $8.2^{+0.6}_{-0.6}$ | $-3.03\pm 0.15$ | $-6.56\pm 0.20$ | $56\pm 5$
G041.22$-$00.19 | $74.3_{-10.3}^{+8.8}$ | $-9.7\pm 9.7$ | $-6.2\pm 13.8$ | $-1.1\pm 5.8$ | $8.7^{+1.1}_{-1.1}$ | $-2.82\pm 0.13$ | $-5.89\pm 0.16$ | $59\pm 5$
G038.03$-$00.30 | $81.8_{-5.4}^{+4.9}$ | $-0.3\pm 5.8$ | $-2.3\pm 9.6$ | $0.2\pm 3.3$ | $9.3^{+0.6}_{-0.6}$ | $-3.01\pm 0.06$ | $-6.20\pm 0.11$ | $60\pm 3$
G035.79$-$00.17 | $84.5_{-5.7}^{+5.1}$ | $-1.5\pm 7.3$ | $-8.2\pm 10.2$ | $-1.9\pm 5.6$ | $9.4^{+0.6}_{-0.6}$ | $-2.96\pm 0.12$ | $-6.23\pm 0.14$ | $61\pm 5$
G037.47$-$00.10 | $84.8_{-8.0}^{+6.8}$ | $-8.3\pm 7.9$ | $-5.9\pm 14.1$ | $-14.8\pm 4.7$ | $9.6^{+0.9}_{-0.9}$ | $-2.63\pm 0.07$ | $-6.19\pm 0.15$ | $58\pm 3$
G033.64$-$00.22 | $91.0_{-4.5}^{+4.1}$ | $-2.2\pm 4.9$ | $-6.3\pm 8.8$ | $-1.7\pm 3.4$ | $9.9^{+0.5}_{-0.5}$ | $-3.01\pm 0.07$ | $-6.30\pm 0.08$ | $61\pm 3$
G035.57$-$00.03 | $91.4_{-4.9}^{+4.4}$ | $-6.2\pm 7.6$ | $-5.7\pm 11.8$ | $3.3\pm 6.8$ | $10.2^{+0.6}_{-0.6}$ | $-3.02\pm 0.13$ | $-6.08\pm 0.18$ | $53\pm 3$
G032.74$-$00.07 | $97.6_{-6.6}^{+5.5}$ | $3.9\pm 12.9$ | $-19.4\pm 20.2$ | $8.7\pm 13.9$ | $10.6^{+0.8}_{-0.8}$ | $-3.15\pm 0.27$ | $-6.10\pm 0.29$ | $37\pm 10$
G020.08$-$00.13 | $130.9_{-3.2}^{+2.6}$ | $-7.1\pm 5.7$ | $0.3\pm 17.9$ | $-5.3\pm 8.7$ | $12.7^{+0.6}_{-0.6}$ | $-3.14\pm 0.14$ | $-6.44\pm 0.16$ | $41\pm 3$
G019.60$-$00.23 | $131.3_{-3.9}^{+3.1}$ | $-9.8\pm 6.0$ | $-8.2\pm 19.8$ | $-4.8\pm 9.7$ | $12.6^{+0.7}_{-0.7}$ | $-3.11\pm 0.16$ | $-6.36\pm 0.17$ | $41\pm 3$
Note. — Peculiar motions for the sources listed in Table 2. Column 1 lists the
source names, designated by their Galactic coordinates. The sources are sorted
by increasing Galactic azimuth, as listed in Column 2. Columns 3, 4, and 5 list
the peculiar motions toward the Galactic center, in the direction of Galactic
rotation, and toward the north Galactic pole, respectively. Columns 6–9 list
the distances, proper motions in the eastward and northward directions, and
LSR velocities, respectively. The Galactic “Univ” model and solar motions
found by R19 were used to calculate the peculiar motions.
## References
* Anderson et al. (2014) Anderson, L. D., Bania, T. M., Balser, D. S., et al. 2014, ApJS, 212, 1, doi: 10.1088/0067-0049/212/1/1
* Bian et al. (2022) Bian, S. B., Xu, Y., Li, J. J., et al. 2022, AJ, 163, 54, doi: 10.3847/1538-3881/ac3d90
* Bronfman et al. (2000) Bronfman, L., Casassus, S., May, J., & Nyman, L. Å. 2000, A&A, 358, 521, doi: 10.48550/arXiv.astro-ph/0006104
* Dame et al. (2001) Dame, T. M., Hartmann, D., & Thaddeus, P. 2001, ApJ, 547, 792, doi: 10.1086/318388
* Deller et al. (2007) Deller, A. T., Tingay, S. J., Bailes, M., & West, C. 2007, PASP, 119, 318, doi: 10.1086/513572
* Greisen (2003) Greisen, E. W. 2003, in Astrophysics and Space Science Library, Vol. 285, Information Handling in Astronomy - Historical Vistas, ed. A. Heck, 109, doi: 10.1007/0-306-48080-8_7
* Gwinn et al. (1992) Gwinn, C. R., Moran, J. M., & Reid, M. J. 1992, ApJ, 393, 149, doi: 10.1086/171493
* Hyland et al. (2022) Hyland, L. J., Reid, M. J., Ellingsen, S. P., et al. 2022, ApJ, 932, 52, doi: 10.3847/1538-4357/ac6d5b
* Jackson et al. (2006) Jackson, J. M., Rathborne, J. M., Shah, R. Y., et al. 2006, ApJS, 163, 145, doi: 10.1086/500091
* Kettenis et al. (2006) Kettenis, M., van Langevelde, H. J., Reynolds, C., & Cotton, B. 2006, in Astronomical Society of the Pacific Conference Series, Vol. 351, Astronomical Data Analysis Software and Systems XV, ed. C. Gabriel, C. Arviset, D. Ponz, & S. Enrique, 497
* Melis et al. (2014) Melis, C., Reid, M. J., Mioduszewski, A. J., Stauffer, J. R., & Bower, G. C. 2014, Science, 345, 1029, doi: 10.1126/science.1256101
* Moscadelli et al. (2002) Moscadelli, L., Menten, K. M., Walmsley, C. M., & Reid, M. J. 2002, ApJ, 564, 813, doi: 10.1086/324304
* Nguyen et al. (2022) Nguyen, H., Rugel, M. R., Murugeshan, C., et al. 2022, A&A, 666, A59, doi: 10.1051/0004-6361/202244115
* Olmi et al. (1993) Olmi, L., Cesaroni, R., & Walmsley, C. M. 1993, A&A, 276, 489
* Pandian et al. (2012) Pandian, J. D., Wyrowski, F., & Menten, K. M. 2012, ApJ, 753, 50, doi: 10.1088/0004-637X/753/1/50
* Reid (2022) Reid, M. J. 2022, AJ, 164, 133, doi: 10.3847/1538-3881/ac80bb
* Reid et al. (2016) Reid, M. J., Dame, T. M., Menten, K. M., & Brunthaler, A. 2016, ApJ, 823, 77, doi: 10.3847/0004-637X/823/2/77
* Reid et al. (2009) Reid, M. J., Menten, K. M., Brunthaler, A., et al. 2009, ApJ, 693, 397, doi: 10.1088/0004-637X/693/1/397
* Reid et al. (2014) —. 2014, ApJ, 783, 130, doi: 10.1088/0004-637X/783/2/130
* Reid et al. (2017) Reid, M. J., Brunthaler, A., Menten, K. M., et al. 2017, AJ, 154, 63, doi: 10.3847/1538-3881/aa7850
* Reid et al. (2019) Reid, M. J., Menten, K. M., Brunthaler, A., et al. 2019, ApJ, 885, 131, doi: 10.3847/1538-4357/ab4a11
* Rioja et al. (2017) Rioja, M. J., Dodson, R., Orosz, G., Imai, H., & Frey, S. 2017, AJ, 153, 105, doi: 10.3847/1538-3881/153/3/105
* Stil et al. (2006) Stil, J. M., Taylor, A. R., Dickey, J. M., et al. 2006, AJ, 132, 1158, doi: 10.1086/505940
* VERA Collaboration et al. (2020) VERA Collaboration, Hirota, T., Nagayama, T., et al. 2020, PASJ, 72, 50, doi: 10.1093/pasj/psaa018
* Wegg et al. (2015) Wegg, C., Gerhard, O., & Portail, M. 2015, MNRAS, 450, 4050, doi: 10.1093/mnras/stv745
* Wienen et al. (2012) Wienen, M., Wyrowski, F., Schuller, F., et al. 2012, A&A, 544, A146, doi: 10.1051/0004-6361/201118107
* Wu et al. (2014) Wu, Y. W., Sato, M., Reid, M. J., et al. 2014, A&A, 566, A17, doi: 10.1051/0004-6361/201322765
* Wu et al. (2019) Wu, Y. W., Reid, M. J., Sakai, N., et al. 2019, ApJ, 874, 94, doi: 10.3847/1538-4357/ab001a
* Xu et al. (2023) Xu, Y., Hao, C. J., Liu, D. J., et al. 2023, ApJ, 947, 54, doi: 10.3847/1538-4357/acc45c
* Xu et al. (2016) Xu, Y., Reid, M., Dame, T., et al. 2016, Science Advances, 2, e1600878, doi: 10.1126/sciadv.1600878
* Xu et al. (2021) Xu, Y., Bian, S. B., Reid, M. J., et al. 2021, ApJS, 253, 1, doi: 10.3847/1538-4365/abd8cf
* Zhang et al. (2019) Zhang, B., Reid, M. J., Zhang, L., et al. 2019, AJ, 157, 200, doi: 10.3847/1538-3881/ab141d
Microsoft Research, Bengaluru <EMAIL_ADDRESS>
Indian Institute of Technology, Mandi, H.P. <EMAIL_ADDRESS>
CC-BY
CCS: Theory of Computation -> Randomness, Geometry and Discrete Structures -> Computational Geometry
###### Acknowledgements.
We would like to thank the anonymous reviewers for their careful reading of our
manuscript and their insightful comments and suggestions.
Editors: Mikołaj Bojańczyk, Emanuela Merelli, and David P. Woodruff
49th International Colloquium on Automata, Languages, and Programming (ICALP 2022), July 4–8, 2022, Paris, France
# One-pass additive-error subset selection for $\ell_{p}$ subspace
approximation
Amit Deshpande Rameshwar Pratap
###### Abstract
We consider the problem of subset selection for $\ell_{p}$ subspace
approximation, that is, to efficiently find a _small_ subset of data points
such that solving the problem optimally for this subset gives a good
approximation to solving the problem optimally for the original input.
Previously known subset selection algorithms based on volume sampling and
adaptive sampling [16], for the general case of $p\in[1,\infty)$, require
multiple passes over the data. In this paper, we give a one-pass subset
selection with an additive approximation guarantee for $\ell_{p}$ subspace
approximation, for any $p\in[1,\infty)$. Earlier subset selection algorithms
that give a one-pass multiplicative $(1+\epsilon)$ approximation work under
the special cases. Cohen et al. [11] gives a one-pass subset section that
offers multiplicative $(1+\epsilon)$ approximation guarantee for the special
case of $\ell_{2}$ subspace approximation. Mahabadi et al. [31] gives a one-
pass _noisy_ subset selection with $(1+\epsilon)$ approximation guarantee for
$\ell_{p}$ subspace approximation when $p\in\\{1,2\\}$. Our subset selection
algorithm gives a weaker, additive approximation guarantee, but it works for
any $p\in[1,\infty)$.
###### keywords:
Subspace approximation, streaming algorithms, low-rank approximation, adaptive
sampling, volume sampling, subset selection.
###### category:
## 1 Introduction
In subset selection problems, the objective is to pick a small subset of the
given data such that solving a problem optimally on this subset gives a good
approximation to solving it optimally on the entire data. Many coreset
constructions in computational geometry and clustering [22], sampling-based
algorithms for large matrices [24], algorithms for submodular optimization and
active learning [37] essentially perform subset selection. The main advantage
of subset selection lies in its interpretability, for example, in gene
expression analysis, we would like to find a representative subset of genes
from gene expression data rather than just fitting a subspace to the data [20,
33, 36, 32, 29]. In several machine learning applications such as document
classification and face recognition, it is desirable to go beyond dimension
reduction alone, and pick a subset of representative items or features [28,
33]. Subset selection has been well studied for many fundamental problems such
as $k$-means clustering [2, 14], low-rank approximation [24, 17, 15, 28] and
regression [13], to name a few. In low-rank and subspace approximation, the
subset selection approach leads to more interpretable solutions than using SVD
or random projections-based results. Therefore, subset selection has been a
separate and well-studied problem even within the low-rank approximation and
subspace approximation literature [28, 12].
In the following, we formally state the $\ell_{p}$ subspace approximation
problem for $p\in[1,\infty)$.
$\ell_{p}$ subspace approximation: In this problem, given a dataset
$\mathcal{X}=\\{x_{1},x_{2},\dotsc,x_{n}\\}$ of $n$ points in
$\mathbb{R}^{d}$, a positive integer $1\leq k\leq d$ and a real number
$p\in[1,\infty)$, the objective is to find a linear subspace $V$ in
$\mathbb{R}^{d}$ of dimension at most $k$ that minimizes the sum of $p$-th
powers of the Euclidean distances of all the points to the subspace $V$, that
is, to minimize
$\displaystyle\mathrm{err}_{p}(\mathcal{X},V):=\sum_{i=1}^{n}d(x_{i},V)^{p}.$
(1)
Throughout this paper, we use $V^{*}$ to denote the optimal subspace for
$\ell_{p}$ subspace approximation. The optimal solutions are different for
different values of $p$ but we do not include that in the notation to keep the
presentation simple, as our results hold for any $p\in[1,\infty)$.
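The objective in Equation (1) can be evaluated directly once a candidate subspace $V$ is represented by an orthonormal basis: the distance of a point to $V$ is the norm of its component orthogonal to that basis. A minimal sketch, assuming the points are rows of a NumPy array and `err_p` is a hypothetical helper:

```python
import numpy as np

def err_p(X, V, p):
    """Sum of p-th powers of Euclidean distances of the rows of X
    to the subspace spanned by the rows of V (orthonormal basis)."""
    proj = X @ V.T @ V                     # projection onto span(V)
    dists = np.linalg.norm(X - proj, axis=1)
    return np.sum(dists ** p)

# Points in R^3; V spans the xy-plane, so each distance is |z|.
X = np.array([[1.0, 0.0, 2.0], [0.0, 1.0, -3.0]])
V = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
# err_2 = 2^2 + 3^2 = 13, err_1 = 2 + 3 = 5
```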
Before stating our results, we first explain what a _small_ subset and a
_good_ approximation means in the context of subset selection for $\ell_{p}$
subspace approximation.
For $\ell_{p}$ subspace approximation, we consider $n$ and $d$ to be large,
$k\ll n,d$, and $p$ to be a small constant. Thus, a _small_ subset of
$\mathcal{X}$ desired in subset selection has size independent of $n$ and $d$,
and is bounded by $\text{poly}(k/\epsilon)$, where $\epsilon$ is a parameter
that controls the approximation guarantee (as explained later). Note that the
trivial solution $V=0$ gives
$\mathrm{err}_{p}(\mathcal{X},V)=\sum_{i=1}^{n}\left\|x_{i}\right\|^{p}$.
Using the standard terminology from previous work [24, 15, 16], an additive
approximation guarantee means outputting $V$ such that
$\mathrm{err}_{p}(\mathcal{X},V)\leq\mathrm{err}_{p}(\mathcal{X},V^{*})+\epsilon\leavevmode\nobreak\
\sum_{i=1}^{n}\left\|x_{i}\right\|^{p}$, whereas a multiplicative
approximation guarantee means
$\mathrm{err}_{p}(\mathcal{X},V)\leq(1+\epsilon)\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},V^{*})$. Most subset selection algorithms for
$\ell_{p}$ subspace approximation select a $\text{poly}(k/\epsilon)$-sized
subset of $\mathcal{X}$ such that its span contains a subspace $V$ of
dimension at most $k$ that is close enough to $V^{*}$ to obtain the above
approximation guarantees.
Our objective in this paper is to propose an efficient, one-pass sampling
algorithm that performs subset selection for $\ell_{p}$ subspace approximation
for $p\in[1,\infty)$ defined as above. We note that the problem of one-pass
subset selection for $\ell_{p}$ subspace approximation has been studied for
special values of $p$, for example, Cohen et. al. [11] gives one-pass subset
selection for $p=2$, Mahabadi et al. [31] suggest one-pass noisy subset
selection for $p=\\{1,2\\}$. To the best of our knowledge this problem has not
been studied in generality for $p\in[1,\infty)$. In this work, we consider
studying this problem. We state our results as follows.
### 1.1 Our results
Our main technical contribution is a one-pass MCMC-based sampling algorithm
that can approximately simulate multiple rounds of adaptive sampling. As a
direct application of the above, we get the following results for the
$\ell_{p}$ subspace approximation problem: For $p\in[1,\infty)$, our algorithm
makes only one pass over the given data and outputs a subset of
$\mathrm{poly}(k/\epsilon)^{p}$ points whose span contains a $k$ dimensional
subspace with an additive approximation guarantee for $\ell_{p}$ subspace
approximation. This generalizes the well-known squared-length sampling
algorithm of Frieze et al. [24] that gives additive approximation guarantee
for $\ell_{2}$ subspace approximation (or low-rank approximation under the
Frobenium norm). Even though stronger multiplicative $(1+\epsilon)$
approximation algorithms for $\ell_{p}$ subspace approximation are known in
the previous work, either they cannot do subset selection, or they are not
one-pass, or they do not work for all $p\in[1,\infty)$.
Organization of the paper: In Section 2, we compare and contrast our result
with the state-of-the-art algorithms, and explain the key technical
challenges, and workarounds. In Section 3, we state our MCMC based subset
selection algorithm for subset selection for $\ell_{p}$ subspace
approximation. In Section 4, we give theoretical bounds on the sample size and
approximation guarantee. Finally, in Section 5, we conclude our discussion and
state some potential open questions of the paper.
## 2 Related work
In this section, we discuss related work on sampling and sketching algorithms
for $\ell_{p}$ subspace approximation, and do a thorough comparison of our
results with the state of the art.
### 2.1 Sampling-based $\ell_{p}$ subspace approximation
Frieze et al. [24] show that selecting a subset of $O(k/\epsilon)$ data points
as an i.i.d. sample from $x_{1},x_{2},\dotsc,x_{n}$ picked by squared-length
sampling, i.e., $x_{i}$ is picked with probability proportional to
$\left\|x_{i}\right\|_{2}^{2}$, gives an additive approximation for $\ell_{2}$
subspace approximation (also known as low-rank approximation under the
Frobenius norm). Squared-length sampling can be implemented in one pass over
$\mathcal{X}$ using reservoir sampling [35, 21]. It is known how to improve
the additive approximation guarantee to a multiplicative approximation by
combining two generalizations of squared-length sampling, namely, adaptive
sampling and volume sampling [15, 16] but it requires $O(k\log k)$ passes over
the data. In adaptive sampling, we pick points with probability proportional
to the distance from the span of previously picked points, and in volume
sampling, we pick a subset of points with probability proportional to the
squared volume of the parallelepiped formed by them. Volume sampling a subset
of size $k$ can itself be simulated with an approximation factor $k!$ in $k$
rounds of adaptive sampling [15]. For $p=2$, it is also known that picking a
subset of $O(k/\epsilon)$ points by volume sampling gives a bi-criteria
$(1+\epsilon)$ approximation for $\ell_{2}$ subspace approximation [28]. For
general $p\in[1,\infty)$, it is known that subset selection based on adaptive
sampling and volume sampling can be generalized to get a $(1+\epsilon)$
multiplicative approximation for $\ell_{p}$ subspace approximation, for any
$p\in[1,\infty)$, where the subset is of size $O\left((k/\epsilon)^{p}\right)$
and it is picked in $O(k\log k)$ passes over the data [16]. The main
bottleneck for implementing this in one pass is the inability to simulate
multiple rounds of adaptive sampling in a single pass.
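The multi-round adaptive sampling just described (one pass per round) can be sketched as follows. This is an illustrative helper, not the algorithm of [15, 16] verbatim; the QR factorization is one standard way to compute distances to $\operatorname{span}(S)$:

```python
import numpy as np

def adaptive_sampling(X, num_rounds, p, rng=np.random.default_rng(0)):
    """Multi-round adaptive sampling: each round picks one row of X
    with probability proportional to d(x, span(S))^p, where S is the
    set of rows picked so far. Requires one pass over X per round."""
    S = []
    for _ in range(num_rounds):
        if S:
            # Orthonormal basis for span(S) via QR factorization
            Q, _ = np.linalg.qr(np.array(S).T)
            resid = X - X @ Q @ Q.T        # components orthogonal to span(S)
        else:
            resid = X                       # first round: distance to origin
        probs = np.linalg.norm(resid, axis=1) ** p
        probs /= probs.sum()
        S.append(X[rng.choice(len(X), p=probs)])
    return np.array(S)
```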
The only known workarounds to get one-pass subset selection for $\ell_{p}$
subspace approximation are known for the special cases $p=1$ and $p=2$. Cohen
et al. [11] give a one-pass subset selection algorithm with a multiplicative
$(1+\epsilon)$ approximation guarantee for $\ell_{2}$ subspace approximation
based on ridge leverage score sampling. Their one-pass implementation
crucially uses deterministic matrix sketching [25] to approximate the SVD and
ridge leverage scores, and works only for $p=2$, to the best of our knowledge.
Braverman et al. [6] give online algorithms for $\ell_{2}$ subspace
approximation (or low-rank approximation) via subset selection but their
subset size $O(\frac{k}{\epsilon}\log n\log\kappa)$ is not independent of $n$
and depends logarithmically on the number of points $n$ and the condition
number $\kappa$. Recent work by Mahabadi et al. [31] gives a one-pass
algorithm with a multiplicative $(1+\epsilon)$ approximation guarantee for
$\ell_{p}$ subspace approximation. However, their algorithm works only in the
special cases $p\in\\{1,2\\}$ and it outputs a subset of noisy data points
instead of the actual data points.
A different objective for $\ell_{p}$ subspace approximation has also been
studied in literature [5, 9], namely, minimizing the entry-wise
$\ell_{p}$-norm low-rank approximation error. To state it formally, given an
input matrix $A\in\mathbb{R}^{n\times d}$ and a real number $p\in[0,\infty)$,
their objective is to find a matrix $B$ of rank at most $k$ that minimizes
$\sum_{i,j}|A_{i,j}-B_{i,j}|^{p}$.
### 2.2 Sketching-based $\ell_{p}$ subspace approximation
Sketching-based algorithms compute a sketch of a given data in a single pass,
using which one can compute an approximately optimal solution to a given
problem on the original data. The problem of $\ell_{p}$ subspace approximation
has been well-studied in previous work on sketching algorithms. However, a
limitation of these results is that they do not directly perform subset
selection. We mention a few notable results as follows: For $p=2$, extending
deterministic matrix sketching of Liberty [30], Ghashami et al. [27, 26] give
a deterministic one-pass sketching algorithm that gives a multiplicative
$(1+\epsilon)$ approximation guarantee for $\ell_{2}$ subspace approximation
(or low-rank approximation under the Frobenius norm). Cormode et al. [19]
extend the above deterministic sketching idea to $p\neq 2$ and give a
$\mathrm{poly}(k)$ approximation for entry-wise $\ell_{1}$-norm low-rank
approximation and an additive $\epsilon\leavevmode\nobreak\
\left\|b\right\|_{\infty}$ approximation for $\ell_{\infty}$ regression. There
is another line of work based on sketching algorithms using random projection.
Random projection gives a multiplicative $(1+\epsilon)$ approximation for
$\ell_{2}$ subspace approximation in running time
$O(\mathrm{nnz}(X)\cdot\text{poly}(k/\epsilon))$, subsequently improved to a
running time of $O(\text{nnz}(X)+(n+d)\cdot\text{poly}(k/\epsilon))$ by
Clarkson and Woodruff [10]. Feldman et al. [23] also give a one-pass algorithm
for multiplicative $(1+\epsilon)$ approximation for $\ell_{p}$ subspace
approximation, for $p\in[1,2]$. However, these results do not provide a one-
pass subset selection.
### 2.3 Comparison with other MCMC-based sampling results
Theorem 4 of Anari et al. [1] gives an MCMC-based sampling algorithm to
approximate the volume sampling distribution. However, their algorithm requires
a greedy initialization of the subset, which itself takes $k$ passes over the
input.
MCMC sampling has also been explored in the context of $k$-means clustering.
The $D^{2}$-sampling proposed by Arthur and Vassilvitskii [2] adaptively
samples $k$ points, one point in each of $k$ passes over the input, and the
sampled points give an $O(\log k)$ approximation to the optimal clustering
solution. The results of [4, 3] show how to generate, in a single pass over
the input, an MCMC sampling distribution that closely approximates the
underlying $D^{2}$ sampling distribution and yields a near-optimal clustering
solution. Building on these MCMC-based sampling techniques, Pratap et al. [34]
give a one-pass subset selection for spherical $k$-means clustering [18].
## 3 MCMC sampling algorithm
In this section, we state our MCMC based sampling algorithm for subset
selection for $\ell_{p}$ subspace approximation. We first recall the adaptive
sampling algorithm [15, 16] for $\ell_{p}$ subspace approximation.
Adaptive sampling [15, 16] w.r.t. a subset $S\subseteq\mathcal{X}$ is defined
as picking points from $\mathcal{X}$ such that the probability of picking any
point $x\in\mathcal{X}$ is proportional to
$d(x,\operatorname{span}\left(S\right))^{p}$. We denote this probability by
$p_{S}(x)=\frac{d(x,\operatorname{span}\left(S\right))^{p}}{\mathrm{err}_{p}(\mathcal{X},S)},\quad\text{for
$x\in\mathcal{X}$}.$ (2)
For any subset $S$ whose $\mathrm{err}_{p}(\mathcal{X},S)$ is not too small,
we show that adaptive sampling w.r.t. $S$ can be approximately simulated by an
MCMC sampling algorithm that only has access to i.i.d. samples of points
$x\in\mathcal{X}$ picked from the following easier distribution:
$q(x)=\frac{d(x,\operatorname{span}\left(\tilde{S}\right))^{p}}{2\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})}+\frac{1}{2\left|\mathcal{X}\right|},$
(3)
for some initial subset $\tilde{S}$. We give the above definition of $q(x)$
using an arbitrary initial or _pivot_ subset $\tilde{S}$ because it will be
useful in our analysis of multiple rounds of adaptive sampling. However, our
final algorithm uses a fixed subset $\tilde{S}=\emptyset$ such that
$q(x)=\frac{\left\|x\right\|_{2}^{p}}{2\sum_{x\in\mathcal{X}}\left\|x\right\|_{2}^{p}}+\frac{1}{2\left|\mathcal{X}\right|}.$
(4)
Note that sampling from this easier distribution, namely, picking
$x\in\mathcal{X}$ with probability $q(x)$ (mentioned in Equation (4)), can be
done in only one pass over $\mathcal{X}$ using weighted reservoir sampling
[8]. Weighted reservoir sampling keeps a reservoir of finite items, and for
every new item, calculates its relative weight to randomly decide if the item
should be added to the reservoir. If the new item is selected, then one of the
existing items from the reservoir is picked uniformly and replaced with the
new item. Further, given any non-negative weights $w_{x}$, for each point
$x\in\mathcal{X}$, weighted reservoir sampling can pick an i.i.d. sample of
points, where $x$ is picked with probability proportional to its weight
$w_{x}$. Note that this does not require the knowledge of
$\sum_{x\in\mathcal{X}}w_{x}$. Thus, we can run two reservoir sampling
algorithms in parallel to maintain two samples, one that picks points with
probability proportional to $||x||_{2}^{p}$, and another that picks points
with uniform probability. Our actual sampling with probability proportional
$q(x)=\tfrac{\left\|x\right\|_{2}^{p}}{2\sum_{x\in\mathcal{X}}\left\|x\right\|_{2}^{p}}+\tfrac{1}{2\left|\mathcal{X}\right|}$
picks from one of the two reservoirs with probability $1/2$ each. Therefore,
our MCMC algorithm uses a single pass of $\mathcal{X}$ to pick a small sample
of i.i.d. random points from the probability distribution $q(\cdot)$, in
advance. Note that $q(\cdot)$ is an easier and fixed distribution compared to
$p_{S}(\cdot)$. The latter one depends on $S$ and could change over multiple
rounds of adaptive sampling.
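Weighted reservoir sampling as used above can be sketched with the key-based scheme of Efraimidis and Spirakis (one common implementation; the document's reference [8] may differ in details). The two-reservoir mixture in the text runs one such sampler with $w_x=\left\|x\right\|_{2}^{p}$ and one with uniform weights, then picks between them with probability $1/2$ each:

```python
import random

def weighted_reservoir_sample(stream, k, weight, rng=random.Random(0)):
    """One-pass weighted sample of size k, without replacement:
    each item gets key u**(1/w) with u ~ Uniform(0,1), and the k
    items with the largest keys are kept. Needs no knowledge of
    the total weight of the stream."""
    reservoir = []  # list of (key, item), kept sorted by key
    for item in stream:
        w = weight(item)
        if w <= 0:
            continue
        key = rng.random() ** (1.0 / w)
        if len(reservoir) < k:
            reservoir.append((key, item))
            reservoir.sort()
        elif key > reservoir[0][0]:       # beat the smallest kept key
            reservoir[0] = (key, item)
            reservoir.sort()
    return [item for _, item in reservoir]
```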
Let $x\in\mathcal{X}$ be a random point sampled with probability $q(x)$.
Consider a random walk whose single step is defined as follows: sample another
point $y\in\mathcal{X}$ independently with probability $q(y)$ and sample a
real number $r$ u.a.r. from the interval $(0,1)$, and if
$\frac{d(y,\operatorname{span}\left(S\right))^{p}\,q(x)}{d(x,\operatorname{span}\left(S\right))^{p}\,q(y)}=\frac{p_{S}(y)\,q(x)}{p_{S}(x)\,q(y)}>r,$
then move from $x$ to $y$; else, stay at $x$. Essentially, this does rejection
sampling using a simpler distribution $q(\cdot)$. Observe that the stationary
distribution of the above random walk is the adaptive sampling distribution
$p_{S}(\cdot)$. We use $\tilde{P}_{m}^{(1)}(\cdot\;|\;S)$ to denote the
resulting distribution on $\mathcal{X}$ after $m$ steps of the above random
walk. Note that $m$ steps of the above random walk can be simulated by
sampling $m$ i.i.d. points from the distribution $q(\cdot)$ in advance, and
representing them implicitly as an $m$-tuple of points.
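A minimal Python sketch of this simulation (ours, for illustration; `q_prob` is assumed to return the proposal probability $q(\cdot)$, e.g. from the norm total and $\left|\mathcal{X}\right|$ recorded during the same pass, and here the starting point and the $m$ proposals are all pre-drawn from $q$):

```python
import numpy as np

def dist_to_span(x, S_basis):
    """d(x, span(S)), given an orthonormal basis S_basis (rows; may be empty)."""
    if S_basis.shape[0] == 0:
        return float(np.linalg.norm(x))
    return float(np.linalg.norm(x - S_basis.T @ (S_basis @ x)))

def mcmc_adaptive_sample(q_draws, q_prob, S_basis, p, rng):
    """Simulate the m-step random walk on m+1 pre-drawn i.i.d. q-samples.
    Stationary distribution: p_S(x) proportional to d(x, span(S))^p."""
    x = q_draws[0]
    for y in q_draws[1:]:
        num = dist_to_span(y, S_basis) ** p * q_prob(x)
        den = dist_to_span(x, S_basis) ** p * q_prob(y)
        # move if p_S(y) q(x) / (p_S(x) q(y)) > r, with r uniform in (0, 1)
        if den == 0 or num > den * rng.random():
            x = y
    return x
```

Repeated calls with fresh $q$-draws give the i.i.d. points of each sample $A_{i}$ in the algorithm below.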
One-pass (approximate MCMC) adaptive sampling algorithm. Input: a discrete subset $\mathcal{X}\subseteq\mathbb{R}^{d}$ and integer parameters $t,l,m\in\mathbf{Z}_{\geq 0}$. Output: a subset $S\subseteq\mathcal{X}$.
1. Pick an i.i.d. sample $\mathcal{Y}$ of size $\left|\mathcal{Y}\right|=ltm$ from $\mathcal{X}$, with replacement, where the probability of picking $x\in\mathcal{X}$ is
$q(x)=\frac{d(x,\operatorname{span}\left(\tilde{S}\right))^{p}}{2\,\mathrm{err}_{p}(\mathcal{X},\tilde{S})}+\frac{1}{2\left|\mathcal{X}\right|}.$
We use the _pivot_ subset $\tilde{S}=\emptyset$, so the corresponding distribution is
$q(x)=\dfrac{1}{2}\dfrac{\left\|x\right\|_{2}^{p}}{\sum_{x\in\mathcal{X}}\left\|x\right\|_{2}^{p}}+\dfrac{1}{2\left|\mathcal{X}\right|}.$
This step can be implemented in one pass over $\mathcal{X}$ using weighted reservoir sampling [8], a weighted version of classical reservoir sampling in which the probability of including an item in the sample is proportional to the weight associated with that item.
2. Initialize $S\leftarrow\emptyset$.
3. For $i=1,2,\dotsc,l$ do:
(a) Pick an i.i.d. sample $A_{i}$ of size $\left|A_{i}\right|=t$ from $\mathcal{X}$ as follows. Each point in $A_{i}$ is sampled by taking $m$ steps of the following random walk, starting from a point $x$ picked with probability $q(x)$. In each step of the random walk, pick another point $y$ from $\mathcal{X}$ with probability $q(y)$ and pick a real number $r$ uniformly at random from the interval $(0,1)$. If
$\dfrac{d(y,\operatorname{span}\left(S\right))^{p}\,q(x)}{d(x,\operatorname{span}\left(S\right))^{p}\,q(y)}>r,$
then move to $y$; else, stay at the current point. Only the final point obtained after the $m$-step random walk is added to $A_{i}$.
Steps 1–3 can be simulated in a single pass over the input: given a single-pass algorithm $A$ for sampling from a particular distribution, we can design another algorithm $B$ that runs in parallel to $A$ and post-processes its sample. In our setting, once we know how to obtain an i.i.d. sample of points in which point $x$ is picked with probability $q(x)$, we can run a parallel thread that simulates the random walk, each step of which consumes one point picked with probability $q(x)$.
(b) $S\leftarrow S\cup A_{i}$.
4. Output $S$.
One-pass MCMC $\ell_{p}$ subspace approximation algorithm. Input: a discrete subset $\mathcal{X}\subseteq\mathbb{R}^{d}$, an integer parameter $k\in\mathbf{Z}_{\geq 0}$, and an error parameter $\delta\in\mathbb{R}_{\geq 0}$. Output: a subset $\mathcal{S}\subseteq\mathcal{X}$ of $\tilde{O}\left(k\cdot(k/\epsilon)^{p+1}\right)$ points, where $\epsilon$, $\epsilon_{1}$, and $\epsilon_{2}$ below are functions of $\delta$ (see Theorem 4.4 for the exact setting).
1. Repeat the following $O(k\log\frac{1}{\epsilon})$ times in parallel, and pick the best sample $\mathcal{S}$, i.e., the one that minimizes
$\sum_{x\in\mathcal{X}}d(x,\operatorname{span}\left(\mathcal{S}\right))^{p}.$
(a) Call the one-pass (approximate MCMC) adaptive sampling algorithm with $t=\tilde{O}((k/\epsilon)^{p+1})$, $l=k$, and $m=1+\frac{2}{\epsilon_{1}}\log\frac{1}{\epsilon_{2}}$.
2. Output $\mathcal{S}$.
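The selection criterion in step 1, $\sum_{x\in\mathcal{X}}d(x,\operatorname{span}\left(\mathcal{S}\right))^{p}$, can be evaluated with one orthogonal projection per point. A short illustrative sketch (ours; it assumes each candidate is a small list of $d$-dimensional vectors and uses a QR factorization for the projection):

```python
import numpy as np

def err_p(X, S, p):
    """err_p(X, S) = sum over x in X of d(x, span(S))^p."""
    if len(S) == 0:
        return float(sum(np.linalg.norm(x) ** p for x in X))
    Q, _ = np.linalg.qr(np.array(S).T)  # columns of Q: orthonormal basis of span(S)
    return float(sum(np.linalg.norm(x - Q @ (Q.T @ x)) ** p for x in X))

def pick_best(X, candidate_subsets, p):
    """Return the candidate subset minimizing err_p(X, .)."""
    return min(candidate_subsets, key=lambda S: err_p(X, S, p))
```

In a streaming implementation this evaluation would run in the same pass as the reservoir sampling, one running sum per candidate.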
Lemma 3.1 below shows that for any subsets $\tilde{S}\subseteq
S\subseteq\mathcal{X}$ (where $\tilde{S}$ is the initial subset, and $S$ is
the current subset), either $\mathrm{err}_{p}(\mathcal{X},S)$ is small
compared to $\mathrm{err}_{p}(\mathcal{X},\tilde{S})$, or our MCMC sampling
distribution closely approximates the adaptive sampling distribution
$p_{S}(\cdot)$ in total variation distance. The proof of Lemma 3.1 relies on
Corollary 1 of Cai [7], which gives an upper bound on the TV distance between
these two distributions in terms of (1) the length of the Markov chain, and (2)
an upper bound on the ratio between these two distributions at any input point.
###### Lemma 3.1.
Let $\epsilon_{1},\epsilon_{2}\in(0,1)$ and $\tilde{S}\subseteq
S\subseteq\mathcal{X}$. Let $P^{(1)}(\cdot\;|\;S)$ denote the distribution
over an i.i.d. sample of $t$ points picked from adaptive sampling w.r.t. $S$,
and let $\tilde{P}^{(1)}_{m}(\cdot\;|\;S)$ denote the distribution
over $t$ points picked by $t$ independent random walks of length $m$ each in
our one-pass adaptive sampling algorithm; see step 3(a). Then for $m\geq
1+\frac{2}{\epsilon_{1}}\log\tfrac{1}{\epsilon_{2}}$, either
$\mathrm{err}_{p}(\mathcal{X},S)\leq\epsilon_{1}\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})$ or
$\left\|P^{(1)}(\cdot\;|\;S)-\tilde{P}^{(1)}_{m}(\cdot\;|\;S)\right\|_{TV}\leq\epsilon_{2}t$.
###### Proof 3.2.
First, consider the $l=1,t=1$ case of the one-pass adaptive sampling algorithm
described above. In this case, the procedure outputs only one element of
$\mathcal{X}$. This random element is picked by $m$ steps of the following
random walk starting from an $x$ picked with probability $q(x)$. In each step,
we pick another point $y$ with probability $q(y)$ and sample a real number $r$
u.a.r. from the interval $(0,1)$, and if $p_{S}(y)q(x)/p_{S}(x)q(y)>r$, then
we move from $x$ to $y$, else, we stay at $x$. Observe that the stationary
distribution of the above random walk is the adaptive sampling distribution
w.r.t. $S$ given by
$p_{S}(x)=d(x,\operatorname{span}\left(S\right))^{p}/\mathrm{err}_{p}(\mathcal{X},S)$.
Using Corollary 1 of [7], the total variation distance after $m$ steps of
the random walk is bounded by
$\left(1-\frac{1}{\gamma}\right)^{m-1}\leq e^{-(m-1)/\gamma},\qquad\text{where }\gamma=\max_{x\in\mathcal{X}}\frac{p_{S}(x)}{q(x)}.$
This bound is at most $\epsilon_{2}$ if we run the random walk for $m\geq
1+\gamma\log\frac{1}{\epsilon_{2}}$ steps. Now suppose
$\mathrm{err}_{p}(\mathcal{X},S)>\epsilon_{1}\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})$. Then, for any $x\in\mathcal{X}$
$\displaystyle\frac{p_{S}(x)}{q(x)}$
$\displaystyle=\dfrac{\dfrac{d(x,\operatorname{span}\left(S\right))^{p}}{\mathrm{err}_{p}(\mathcal{X},S)}}{\dfrac{1}{2}\dfrac{d(x,\operatorname{span}\left(\tilde{S}\right))^{p}}{\mathrm{err}_{p}(\mathcal{X},\tilde{S})}+\dfrac{1}{2\left|\mathcal{X}\right|}}$
$\displaystyle\leq\dfrac{2\leavevmode\nobreak\
d(x,\operatorname{span}\left(S\right))^{p}\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})}{d(x,\operatorname{span}\left(\tilde{S}\right))^{p}\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},S)}\leq\frac{2}{\epsilon_{1}},$
using $d(x,\operatorname{span}\left(S\right))^{p}\leq
d(x,\operatorname{span}\left(\tilde{S}\right))^{p}$ because
$\tilde{S}\subseteq S$, and the above assumption
$\mathrm{err}_{p}(\mathcal{X},S)>\epsilon_{1}\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})$. Therefore, $\gamma\leq 2/\epsilon_{1}$, and choosing
$m\geq 1+\frac{2}{\epsilon_{1}}\log\frac{1}{\epsilon_{2}}$ ensures that $m$ steps of
the random walk give a distribution within total variation distance
$\epsilon_{2}$ of the adaptive sampling distribution for picking a single
point.
Note that for $t>1$ both the adaptive sampling and the MCMC sampling procedure
pick an i.i.d. sample of $t$ points, so the total variation distance is
additive in $t$, which means
$\left\|P^{(1)}(\cdot\;|\;S)-\tilde{P}^{(1)}_{m}(\cdot\;|\;S)\right\|_{TV}\leq\epsilon_{2}t,$
assuming $\mathrm{err}_{p}(\mathcal{X},S)>\epsilon_{1}\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})$. This completes the proof of the lemma.
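The contraction guarantee of Corollary 1 of [7] used in this proof is easy to verify numerically on a toy chain. The sketch below (ours, for illustration only) builds the exact transition matrix of the independence walk on a three-point set and checks that the total variation distance after $m$ steps stays below $(1-1/\gamma)^{m-1}$:

```python
import numpy as np

# Toy instance: scores stand in for d(x, span(S))^p on a 3-point set X.
scores = np.array([1.0, 4.0, 9.0])
p_S = scores / scores.sum()                          # target: adaptive sampling dist.
q = 0.5 * scores / scores.sum() + 0.5 / len(scores)  # proposal, as in Eq. (4)

# Exact transition matrix of the walk: propose y ~ q, accept iff
# p_S(y) q(x) / (p_S(x) q(y)) > r with r ~ U(0,1), i.e. with prob min(1, ratio).
n = len(q)
P = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if j != i:
            P[i, j] = q[j] * min(1.0, (p_S[j] * q[i]) / (p_S[i] * q[j]))
    P[i, i] = 1.0 - P[i].sum()

gamma = (p_S / q).max()
dist = q.copy()
for m in range(1, 30):
    dist = dist @ P                                  # distribution after m steps
    tv = 0.5 * np.abs(dist - p_S).sum()
    assert tv <= (1.0 - 1.0 / gamma) ** (m - 1) + 1e-12  # Corollary 1 of [7]
```

Here $\gamma\approx 1.32$, so the walk mixes in a handful of steps; in the algorithm, $\gamma\leq 2/\epsilon_{1}$ whenever $\mathrm{err}_{p}(\mathcal{X},S)>\epsilon_{1}\,\mathrm{err}_{p}(\mathcal{X},\tilde{S})$.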
## 4 $\ell_{p}$ subspace approximation
In this section, we give our result for one-pass subset selection for
$\ell_{p}$ subspace approximation. We first show (in Lemma 4.1) that true
adaptive sampling can be well approximated by our one-pass (approximate) MCMC-based
sampling algorithm. Building on this result, in Proposition 4.3 and
Theorem 4.4 we bound the number of steps taken by the Markov chain and the
sample size that gives an additive approximation for the $\ell_{p}$
subspace approximation. Our MCMC-based sampling satisfies the single-pass
subset selection requirement of our problem statement.
First, let us set up the notation required to analyze the true adaptive
sampling as well as our one-pass (approximate MCMC) adaptive sampling
algorithm. For any fixed subset $S\subseteq\mathcal{X}$, we define
$\displaystyle\mathrm{err}_{p}(\mathcal{X},S)$
$\displaystyle=\sum_{x\in\mathcal{X}}d(x,\operatorname{span}\left(S\right))^{p},$
(5) $\displaystyle P^{(1)}(T|S)$ $\displaystyle=\prod_{x\in
T}\frac{d(x,\operatorname{span}\left(S\right))^{p}}{\mathrm{err}_{p}(\mathcal{X},S)},$
(6) $\displaystyle\qquad\text{for any subset $T$ of size $t$},$
$\displaystyle\underset{T}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},S\cup
T)\right]$
$\displaystyle=\sum_{T\;:\;\left|T\right|=t}P^{(1)}(T\;|\;S)\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},S\cup T).$ (7)
Given a subset $S\subseteq\mathcal{X}$, $P^{(1)}(T\;|\;S)$ denotes the
probability of picking a subset $T\subseteq\mathcal{X}$ of i.i.d. $t$ points
by adaptive sampling w.r.t. $S$. We use $P^{(l)}(T_{1:l}|S)$ to denote the
probability of picking a subset $T_{1:l}=B_{1}\cup B_{2}\cup\dotsc\cup
B_{l}\subseteq\mathcal{X}$ of $tl$ points by $l$ iterative rounds of adaptive
sampling, where in the first round we sample a subset $B_{1}$ consisting of
i.i.d. $t$ points w.r.t. $S$, in the second round we sample a subset $B_{2}$
consisting of i.i.d. $t$ points w.r.t. $S\cup B_{1}$, and so on to pick
$T_{1:l}=B_{1}\cup B_{2}\cup\dotsc\cup B_{l}$ over $l$ iterations. Similarly,
in the context of adaptive sampling, we use $T_{2:l}$ to denote
$B_{2}\cup\dotsc\cup B_{l}$. We abuse the notation
$\underset{T_{1:l}\;|\;S}{\operatorname{E}}\left[\cdot\right]$ to denote the
expectation over $T_{1:l}$ picked in $l$ iterative rounds of adaptive sampling
starting from $S$.
Given a _pivot_ subset $\tilde{S}\subseteq\mathcal{X}$ and another subset
$S\subseteq\mathcal{X}$ such that $\tilde{S}\subseteq S$, consider the
following MCMC sampling with parameters $l,t,m$ that picks $l$ subsets
$A_{1},A_{2},\dotsc,A_{l}$ of $t$ points each, where $m$ denotes the number of
steps of a random walk used to pick these points. This sampling can be
implemented in a single pass over $\mathcal{X}$, for any $l,t,m$, and any
given subsets $\tilde{S}\subseteq S$. For $T_{1:l}=A_{1}\cup
A_{2}\cup\dotsc\cup A_{l}$, we use $\tilde{P}^{(l)}_{m}(T_{1:l}\;|\;S)$ to
denote the probability of picking $T_{1:l}$ as the output of the following
MCMC sampling procedure. Similarly, in the context of MCMC sampling, we use
$T_{2:l}$ to denote $A_{2}\cup\dotsc\cup A_{l}$. We abuse the notation
$\underset{T_{1:l}\;|\;S}{\operatorname{\tilde{E}}}\left[\cdot\right]$ to
denote the expectation over $T_{1:l}$ picked using the MCMC sampling procedure
starting from $S$ with a pivot subset $\tilde{S}\subseteq S$.
We require the following additional notation in our analysis of the above MCMC
sampling. We use $\tilde{P}^{(1)}_{m}(T\;|\;S)$ to denote the resulting
distribution over subsets $T$ of size $t$, when we use the above sampling
procedure with $l=1$. We define the following expressions:
$\displaystyle\mathrm{ind}_{p}(\mathcal{X},S)$
$\displaystyle=\mathbbm{1}\left(\mathrm{err}_{p}(\mathcal{X},S)\leq\epsilon_{1}\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})\right),$ (8)
$\displaystyle\underset{T}{\operatorname{\tilde{E}}}\left[\mathrm{err}_{p}(\mathcal{X},S\cup
T)\right]$
$\displaystyle=\sum_{T\;:\;\left|T\right|=t}\tilde{P}^{(1)}_{m}(T\;|\;S)\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},S\cup T),$ (9)
$\displaystyle\underset{T}{\operatorname{\tilde{E}}}\left[\mathrm{ind}_{p}(\mathcal{X},S\cup
T)\right]$
$\displaystyle=\sum_{T\;:\;\left|T\right|=t}\tilde{P}^{(1)}_{m}(T\;|\;S)\leavevmode\nobreak\
\mathrm{ind}_{p}(\mathcal{X},S\cup T).$ (10)
The expression $\mathrm{ind}_{p}(\mathcal{X},S)$ (in Equation (8)) denotes an
indicator random variable that takes value $1$ if the error w.r.t. subset $S$ is
at most $\epsilon_{1}$ times the error w.r.t. the pivot subset $\tilde{S}$, and $0$
otherwise. The expression
$\underset{T}{\operatorname{\tilde{E}}}\left[\mathrm{err}_{p}(\mathcal{X},S\cup
T)\right]$ (in Equation (9)) denotes the expected error over the subset $T$
picked using the MCMC sampling procedure starting from the set $S$ such that
the initial subset $\tilde{S}\subseteq S$.
Now Lemma 4.1 analyzes the effect of starting with an initial subset $S_{0}$
and using the same $S_{0}$ as a pivot subset for doing the MCMC sampling for
$l$ subsequent iterations of adaptive sampling, where we pick $t$ i.i.d.
points in each iteration using $t$ independent random walks of $m$ steps.
Lemma 4.1 shows that the expected error for subspace approximation after doing
the $l$ iterations of adaptive sampling is not too far from the expected error
for subspace approximation after replacing the $l$ iterations with MCMC
sampling.
###### Lemma 4.1.
For any subset $S_{0}\subseteq\mathcal{X}$, any
$\epsilon_{1},\epsilon_{2}\in(0,1)$ and any positive integers $t,l,m$ with
$m\geq 1+\frac{2}{\epsilon_{1}}\log\frac{1}{\epsilon_{2}}$,
$\displaystyle\underset{T_{1:l}\;|\;S_{0}}{\operatorname{\tilde{E}}}\left[\mathrm{err}_{p}(\mathcal{X},S_{0}\cup
T_{1:l})\right]\leq\underset{T_{1:l}\;|\;S_{0}}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},S_{0}\cup
T_{1:l})\right]+\left(\epsilon_{1}+\epsilon_{2}tl\right)\mathrm{err}_{p}(\mathcal{X},S_{0}).$
###### Proof 4.2.
We show a slightly stronger inequality than the one given above, i.e., for any
$S_{0}$ such that $\tilde{S}\subseteq S_{0}$,
$\displaystyle\underset{T_{1:l}\;|\;S_{0}}{\operatorname{\tilde{E}}}\left[\mathrm{err}_{p}(\mathcal{X},S_{0}\cup
T_{1:l})\right]$
$\displaystyle\leq\underset{T_{1:l}\;|\;S_{0}}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},S_{0}\cup
T_{1:l})\right]$
$\displaystyle\quad+\left(\epsilon_{1}\underset{T_{1:l}\;|\;S_{0}}{\operatorname{\tilde{E}}}\left[\mathrm{ind}_{p}(\mathcal{X},S_{0}\cup
T_{1:l})\right]+\epsilon_{2}tl\right)\mathrm{err}_{p}(\mathcal{X},\tilde{S}).$
The special case $S_{0}=\tilde{S}$ gives the lemma. We prove the above-mentioned
stronger statement by induction on $l$. For $l=0$, the
inequality holds trivially. By the induction hypothesis, it holds
for $l-1$ iterations (instead of $l$) starting from any subset
$S_{1}=S_{0}\cup A\subseteq\mathcal{X}$, because $\tilde{S}\subseteq
S_{0}\subseteq S_{1}$.
$\displaystyle\underset{T_{1:l}\;|\;S_{0}}{\operatorname{\tilde{E}}}\left[\mathrm{err}_{p}(\mathcal{X},S_{0}\cup
T_{1:l})\right]$
$\displaystyle=\underset{S_{1}\;|\;S_{0}}{\operatorname{\tilde{E}}}\left[\underset{T_{2:l}\;|\;S_{1}}{\operatorname{\tilde{E}}}\left[\mathrm{err}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right]\right]$
$\displaystyle=\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=1}\tilde{P}^{(1)}_{m}(S_{1}\;|\;S_{0})\leavevmode\nobreak\
\underset{T_{2:l}\;|\;S_{1}}{\operatorname{\tilde{E}}}\left[\mathrm{err}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right]$
$\displaystyle\qquad+\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=0}\tilde{P}^{(1)}_{m}(S_{1}\;|\;S_{0})\leavevmode\nobreak\
\underset{T_{2:l}\;|\;S_{1}}{\operatorname{\tilde{E}}}\left[\mathrm{err}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right].$ (11)
If $\mathrm{ind}_{p}(\mathcal{X},S_{1})=1$ then
$\mathrm{err}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\leq\mathrm{err}_{p}(\mathcal{X},S_{1})\leq\epsilon_{1}\,\mathrm{err}_{p}(\mathcal{X},\tilde{S})$
by the definition in Equation (8), so the first part of the above sum can
be bounded as follows.
$\displaystyle\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=1}\tilde{P}^{(1)}_{m}(S_{1}\;|\;S_{0})\,\underset{T_{2:l}\;|\;S_{1}}{\operatorname{\tilde{E}}}\left[\mathrm{err}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right]$ $\displaystyle\leq\epsilon_{1}\,\mathrm{err}_{p}(\mathcal{X},\tilde{S})\cdot\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=1}\tilde{P}^{(1)}_{m}(S_{1}\;|\;S_{0})\,\underset{T_{2:l}\;|\;S_{1}}{\operatorname{\tilde{E}}}\left[\mathrm{ind}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right].$ (12)
We give an upper bound on the second part as follows.
$\displaystyle\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=0}\tilde{P}^{(1)}_{m}(S_{1}\;|\;S_{0})\,\underset{T_{2:l}\;|\;S_{1}}{\operatorname{\tilde{E}}}\left[\mathrm{err}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right]$
$\displaystyle\leq\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=0}\tilde{P}^{(1)}_{m}(S_{1}\;|\;S_{0})\,\cdot$
$\displaystyle\quad\left(\underset{T_{2:l}\;|\;S_{1}}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right]+(\epsilon_{1}\leavevmode\nobreak\
\underset{T_{2:l}\;|\;S_{1}}{\operatorname{\tilde{E}}}\left[\mathrm{ind}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right]+\epsilon_{2}t(l-1))\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})\right).$ (13)
$\displaystyle\qquad\left(\text{by applying the induction hypothesis to
$(l-1)$ iterations starting from $S_{1}$}.\right)$
$\displaystyle\leq\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=0}P^{(1)}(S_{1}\;|\;S_{0})\leavevmode\nobreak\
\underset{T_{2:l}\;|\;S_{1}}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right]$ $\displaystyle\qquad+\epsilon_{1}\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})\leavevmode\nobreak\
\cdot\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=0}\tilde{P}^{(1)}_{m}(S_{1}\;|\;S_{0})\leavevmode\nobreak\
\underset{T_{2:l}\;|\;S_{1}}{\operatorname{\tilde{E}}}\left[\mathrm{ind}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right]$ $\displaystyle\qquad+\epsilon_{2}t(l-1)\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})\leavevmode\nobreak\
\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=0}\tilde{P}^{(1)}_{m}(S_{1}\;|\;S_{0})$
$\displaystyle\qquad+\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=0}\left|\tilde{P}^{(1)}_{m}(S_{1}\;|\;S_{0})-P^{(1)}(S_{1}\;|\;S_{0})\right|\leavevmode\nobreak\
\cdot\underset{T_{2:l}\;|\;S_{1}}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right].$ $\displaystyle\qquad\left(\text{by adding and subtracting
the term
$\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=0}P^{(1)}(S_{1}\;|\;S_{0})\,\underset{T_{2:l}\;|\;S_{1}}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right]$ in Eq. (13).
}\right)$
$\displaystyle\leq\sum_{S_{1}}P^{(1)}(S_{1}\;|\;S_{0})\leavevmode\nobreak\
\underset{T_{2:l}\;|\;S_{1}}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right]$ $\displaystyle\qquad+\epsilon_{1}\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})\leavevmode\nobreak\
\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=0}\tilde{P}^{(1)}_{m}(S_{1}\;|\;S_{0})\leavevmode\nobreak\
\cdot\underset{T_{2:l}\;|\;S_{1}}{\operatorname{\tilde{E}}}\left[\mathrm{ind}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right]$ $\displaystyle\qquad+\epsilon_{2}t(l-1)\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})+\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=0}\left|\tilde{P}^{(1)}_{m}(S_{1}\;|\;S_{0})-P^{(1)}(S_{1}\;|\;S_{0})\right|\leavevmode\nobreak\
\cdot\mathrm{err}_{p}(\mathcal{X},\tilde{S}).$
$\displaystyle\qquad\left(\text{by upper bounding the probability expression
$\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=0}\tilde{P}^{(1)}_{m}(S_{1}\;|\;S_{0})$
by $1$. }\right)$
$\displaystyle\leq\underset{T_{1:l}\;|\;S_{0}}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},S_{0}\cup
T_{1:l})\right]$ $\displaystyle\qquad+\epsilon_{1}\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})\leavevmode\nobreak\
\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=0}\tilde{P}^{(1)}_{m}(S_{1}\;|\;S_{0})\leavevmode\nobreak\
\cdot\underset{T_{2:l}\;|\;S_{1}}{\operatorname{\tilde{E}}}\left[\mathrm{ind}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right]$ $\displaystyle\qquad+\epsilon_{2}t(l-1)\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})+\left\|\tilde{P}^{(1)}_{m}(\cdot\;|\;S_{0})-P^{(1)}(\cdot\;|\;S_{0})\right\|_{TV}\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S}).$ $\displaystyle\qquad\left(\text{as
$\underset{T_{1:l}\;|\;S_{0}}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},S_{0}\cup
T_{1:l})\right]=\sum_{S_{1}}P^{(1)}(S_{1}\;|\;S_{0})\leavevmode\nobreak\
\underset{T_{2:l}\;|\;S_{1}}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right]$ by Eq. (7). }\right)$
$\displaystyle\leq\underset{T_{1:l}\;|\;S_{0}}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},S_{0}\cup
T_{1:l})\right]$ $\displaystyle\qquad+\epsilon_{1}\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})\leavevmode\nobreak\
\sum_{S_{1}\;:\;\mathrm{ind}_{p}(\mathcal{X},S_{1})=0}\tilde{P}^{(1)}_{m}(S_{1}\;|\;S_{0})\leavevmode\nobreak\
\cdot\underset{T_{2:l}\;|\;S_{1}}{\operatorname{\tilde{E}}}\left[\mathrm{ind}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right]$ $\displaystyle\qquad+\epsilon_{2}t(l-1)\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})+\epsilon_{2}t\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S}).$ (14)
Finally, Equation (14) holds by Lemma 3.1, which bounds the total variation
distance between the $P^{(1)}$ and $\tilde{P}^{(1)}_{m}$ distributions. Plugging the
bounds (12) and (14) into (11), we get
$\displaystyle\underset{T_{1:l}\;|\;S_{0}}{\operatorname{\tilde{E}}}\left[\mathrm{err}_{p}(\mathcal{X},S_{0}\cup
T_{1:l})\right]$
$\displaystyle\leq\underset{T_{1:l}\;|\;S_{0}}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},S_{0}\cup
T_{1:l})\right]+\epsilon_{1}\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})\leavevmode\nobreak\
\sum_{S_{1}}\tilde{P}^{(1)}_{m}(S_{1}\;|\;S_{0})\leavevmode\nobreak\
\cdot\underset{T_{2:l}\;|\;S_{1}}{\operatorname{\tilde{E}}}\left[\mathrm{ind}_{p}(\mathcal{X},S_{1}\cup
T_{2:l})\right]$ $\displaystyle\quad+\epsilon_{2}t(l-1)\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S})+\epsilon_{2}t\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S}).$
$\displaystyle=\underset{T_{1:l}\;|\;S_{0}}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},S_{0}\cup
T_{1:l})\right]+\left(\epsilon_{1}\underset{T_{1:l}\;|\;S_{0}}{\operatorname{\tilde{E}}}\left[\mathrm{ind}_{p}(\mathcal{X},S_{0}\cup
T_{1:l})\right]+\epsilon_{2}tl\right)\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S}).$
$\displaystyle\leq\underset{T_{1:l}\;|\;S_{0}}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},S_{0}\cup
T_{1:l})\right]+\left(\epsilon_{1}+\epsilon_{2}tl\right)\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\tilde{S}),$
which completes the proof of Lemma 4.1.
Theorem 5 from [16] shows that $l=k$ rounds of adaptive sampling, where in
each round we pick $t=\tilde{O}\left((k/\epsilon)^{p+1}\right)$ points and
take their union, give an additive approximation guarantee for $\ell_{p}$
subspace approximation with probability at least $1/2k$. Repeating this multiple
times and taking the best outcome boosts the probability further. We restate the
main part of this theorem below.
###### Proposition 4.3.
(Theorem 5, [16]) Let $k$ be any positive integer, let $\epsilon\in(0,1)$ and
$S_{0}=\emptyset$. Let $l=k$ and $t=\tilde{O}\left((k/\epsilon)^{p+1}\right)$.
If $S_{l}=S_{0}\cup T_{1:l}$ is obtained by starting from $S_{0}$ and doing
adaptive sampling according to the $p$-th power of distances in $l$
iterations, and in each iteration we add $t$ points from $\mathcal{X}$, then
we have $\left|S_{l}\right|=tl=\tilde{O}(k\cdot(k/\epsilon)^{p+1})$ such that
$\displaystyle\mathrm{err}_{p}(\mathcal{X},S_{0}\cup
T_{1:l})^{1/p}\leq\mathrm{err}_{p}(\mathcal{X},V^{*})^{1/p}+\epsilon\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\emptyset)^{1/p},$
with probability at least $1/2k$, and where $V^{*}$ minimizes
$\mathrm{err}_{p}(\mathcal{X},V)$ over all linear subspaces $V$ of dimension
at most $k$. If we repeat this $O(k\log\frac{1}{\epsilon})$ times then the
probability of success can be boosted to $1-\epsilon$.
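The boosting step is standard: since $(1-\frac{1}{2k})^{2k}\leq e^{-1}$, running $2k\ln\frac{1}{\epsilon}$ independent repetitions fails only with probability $(1-\frac{1}{2k})^{2k\ln(1/\epsilon)}\leq\epsilon$. A quick numeric check (our sketch):

```python
import math

def failure_prob(k, eps):
    """Probability that all ceil(2k ln(1/eps)) independent repetitions fail,
    when each repetition succeeds with probability 1/(2k)."""
    reps = math.ceil(2 * k * math.log(1 / eps))
    return (1 - 1 / (2 * k)) ** reps

for k in (2, 10, 100):
    for eps in (0.1, 0.01):
        assert failure_prob(k, eps) <= eps
```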
Combining Lemma 4.1 and Proposition 4.3, we get the following theorem.
###### Theorem 4.4.
For any positive integer $k$, any $p\in[1,\infty)$, and any
$\delta\in\mathbb{R}_{\geq 0}$, starting from $S_{0}=\emptyset$ and setting
the following parameters in the one-pass MCMC $\ell_{p}$ subspace approximation
algorithm (see Section 3)
$\displaystyle\epsilon=\delta/4,\qquad\epsilon_{1}=\delta^{p}/2^{p+1},\qquad\epsilon_{2}=\delta^{p}/(2^{p+1}tl),\qquad
m=1+\frac{2}{\delta^{p}}\log\frac{k}{\delta^{p}},\qquad
t=\tilde{O}((k/\epsilon)^{p+1}),\qquad l=k,$
we get a subset $\mathcal{S}$ of size $\tilde{O}(k\cdot(k/\delta)^{p+1})$ with
an additive approximation guarantee on its expected error of
$\mathrm{err}_{p}(\mathcal{X},V^{*})^{1/p}+\delta\,\mathrm{err}_{p}(\mathcal{X},\emptyset)^{1/p}$.
Further, the running time of the algorithm is
$nd+k\cdot\tilde{O}\left(\left(\frac{k}{\delta}\right)^{p+1}\right).$
###### Proof 4.5.
From Lemma 4.1 we know that
$\underset{T_{1:l}\;|\;\emptyset}{\operatorname{\tilde{E}}}\left[\mathrm{err}_{p}(\mathcal{X},T_{1:l})\right]\leq\underset{T_{1:l}\;|\;\emptyset}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},T_{1:l})\right]+\left(\epsilon_{1}+\epsilon_{2}tl\right)\mathrm{err}_{p}(\mathcal{X},\emptyset).$
Thus, for $p\in[1,\infty)$ we have
$\displaystyle\underset{T_{1:l}\;|\;\emptyset}{\operatorname{\tilde{E}}}\left[\mathrm{err}_{p}(\mathcal{X},T_{1:l})\right]^{1/p}$
$\displaystyle\leq\underset{T_{1:l}\;|\;\emptyset}{\operatorname{E}}\left[\mathrm{err}_{p}(\mathcal{X},T_{1:l})\right]^{1/p}+\left(\epsilon_{1}+\epsilon_{2}tl\right)^{1/p}\mathrm{err}_{p}(\mathcal{X},\emptyset)^{1/p}.$
$\displaystyle\leq(1-\epsilon)\left(\mathrm{err}_{p}(\mathcal{X},V^{*})^{1/p}+\epsilon\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\emptyset)^{1/p}\right)+\epsilon\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\emptyset)^{1/p}$
$\displaystyle\qquad\qquad+\left(\epsilon_{1}+\epsilon_{2}tl\right)^{1/p}\mathrm{err}_{p}(\mathcal{X},\emptyset)^{1/p}.$
$\displaystyle\qquad\qquad\left(\text{using Proposition 4.3}.\right)$
$\displaystyle\leq\mathrm{err}_{p}(\mathcal{X},V^{*})^{1/p}+\left(2\epsilon+\left(\epsilon_{1}+\epsilon_{2}tl\right)^{1/p}\right)\mathrm{err}_{p}(\mathcal{X},\emptyset)^{1/p}.$
$\displaystyle\leq\mathrm{err}_{p}(\mathcal{X},V^{*})^{1/p}+\delta\leavevmode\nobreak\
\mathrm{err}_{p}(\mathcal{X},\emptyset)^{1/p},$
using $\epsilon=\delta/4$, $\epsilon_{1}=\delta^{p}/2^{p+1}$, and
$\epsilon_{2}=\delta^{p}/(2^{p+1}tl)$.
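Indeed, $2\epsilon=\delta/2$ and $(\epsilon_{1}+\epsilon_{2}tl)^{1/p}=(\delta^{p}/2^{p})^{1/p}=\delta/2$, so the coefficient of $\mathrm{err}_{p}(\mathcal{X},\emptyset)^{1/p}$ is exactly $\delta$. A quick numeric sanity check (our sketch):

```python
def additive_coeff(delta, p, t, l):
    """Coefficient 2*eps + (eps1 + eps2*t*l)**(1/p) under the parameter
    settings of Theorem 4.4; it should equal delta exactly."""
    eps = delta / 4
    eps1 = delta ** p / 2 ** (p + 1)
    eps2 = delta ** p / (2 ** (p + 1) * t * l)
    return 2 * eps + (eps1 + eps2 * t * l) ** (1 / p)

for delta in (0.1, 0.3, 0.5):
    for p in (1, 2, 3):
        assert abs(additive_coeff(delta, p, t=100, l=7) - delta) < 1e-12
```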
We now bound the running time of our algorithm. We require $nd$ time
to generate the probability distribution $q(x)$, for $x\in\mathcal{X}$.
Further, the running time of the MCMC sampling step is $t\cdot m\cdot
l=k\cdot\tilde{O}\left(\left(\frac{k}{\delta}\right)^{p+1}\right)$. Therefore,
the overall running time of the algorithm is
$nd+k\cdot\tilde{O}\left(\left(\frac{k}{\delta}\right)^{p+1}\right)$.
## 5 Conclusion and open questions
In this work, we give an efficient one-pass MCMC algorithm that performs subset
selection with an additive approximation guarantee for $\ell_{p}$ subspace
approximation, for any $p\in[1,\infty)$. Previously this was known only for
the special case $p=2$ [11]. For the general case $p\in[1,\infty)$, the adaptive
sampling algorithm due to [16] requires multiple passes over the input.
Coming up with a one-pass subset selection algorithm that offers stronger
multiplicative guarantees for $p\in[1,\infty)$ remains an interesting open
problem.
## References
* [1] Nima Anari, Shayan Oveis Gharan, and Alireza Rezaei. Monte Carlo Markov chain algorithms for sampling strongly Rayleigh distributions and determinantal point processes. In 29th Annual Conference on Learning Theory (COLT), volume 49, pages 103–115. PMLR, 2016. URL: http://proceedings.mlr.press/v49/anari16.html.
* [2] David Arthur and Sergei Vassilvitskii. k-means++: the advantages of careful seeding. In Nikhil Bansal, Kirk Pruhs, and Clifford Stein, editors, Proceedings of the Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2007, New Orleans, Louisiana, USA, January 7-9, 2007, pages 1027–1035. SIAM, 2007. URL: http://dl.acm.org/citation.cfm?id=1283383.1283494.
* [3] Olivier Bachem, Mario Lucic, S. Hamed Hassani, and Andreas Krause. Approximate k-means++ in sublinear time. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, pages 1459–1467. AAAI Press, 2016.
* [4] Olivier Bachem, Mario Lucic, Seyed Hamed Hassani, and Andreas Krause. Fast and provably good seedings for k-means. In Daniel D. Lee, Masashi Sugiyama, Ulrike von Luxburg, Isabelle Guyon, and Roman Garnett, editors, Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pages 55–63, 2016. URL: https://proceedings.neurips.cc/paper/2016/hash/d67d8ab4f4c10bf22aa353e27879133c-Abstract.html.
* [5] Frank Ban, Vijay Bhattiprolu, Karl Bringmann, Pavel Kolev, Euiwoong Lee, and David P. Woodruff. A PTAS for $\ell_{p}$-low rank approximation. In Timothy M. Chan, editor, Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 747–766. SIAM, 2019. URL: https://doi.org/10.1137/1.9781611975482.47, doi:10.1137/1.9781611975482.47.
* [6] Vladimir Braverman, Petros Drineas, Cameron Musco, Christopher Musco, Jalaj Upadhyay, David P. Woodruff, and Samson Zhou. Near optimal linear algebra in the online and sliding window models. In 61st IEEE Annual Symposium on Foundations of Computer Science, FOCS 2020, Durham, NC, USA, November 16-19, 2020, pages 517–528. IEEE, 2020. URL: https://doi.org/10.1109/FOCS46700.2020.00055, doi:10.1109/FOCS46700.2020.00055.
* [7] Haiyan Cai. Exact bound for the convergence of Metropolis chains. Stochastic Analysis and Applications, 18(1):63–71, 2000. URL: https://doi.org/10.1080/07362990008809654, doi:10.1080/07362990008809654.
* [8] M. T. Chao. A general purpose unequal probability sampling plan. Biometrika, 69(3):653–656, 1982. URL: https://doi.org/10.1093/biomet/69.3.653, doi:10.1093/biomet/69.3.653.
* [9] Flavio Chierichetti, Sreenivas Gollapudi, Ravi Kumar, Silvio Lattanzi, Rina Panigrahy, and David P. Woodruff. Algorithms for $\ell_{p}$ low-rank approximation. In Doina Precup and Yee Whye Teh, editors, Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, pages 806–814. PMLR, 06–11 Aug 2017. URL: https://proceedings.mlr.press/v70/chierichetti17a.html.
* [10] Kenneth L. Clarkson and David P. Woodruff. Low-rank approximation and regression in input sparsity time. J. ACM, 63(6), jan 2017. URL: https://doi.org/10.1145/3019134, doi:10.1145/3019134.
* [11] Michael B. Cohen, Cameron Musco, and Christopher Musco. Input sparsity time low-rank approximation via ridge leverage score sampling. In Philip N. Klein, editor, Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16-19, pages 1758–1777. SIAM, 2017. URL: https://doi.org/10.1137/1.9781611974782.115, doi:10.1137/1.9781611974782.115.
* [12] Chen Dan, Hong Wang, Hongyang Zhang, Yuchen Zhou, and Pradeep K Ravikumar. Optimal analysis of subset-selection based l_p low-rank approximation. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019. URL: https://proceedings.neurips.cc/paper/2019/file/80a8155eb153025ea1d513d0b2c4b675-Paper.pdf.
* [13] Michal Derezinski and Manfred K. Warmuth. Unbiased estimates for linear regression via volume sampling. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 3084–3093, 2017. URL: https://proceedings.neurips.cc/paper/2017/hash/54e36c5ff5f6a1802925ca009f3ebb68-Abstract.html.
* [14] Amit Deshpande, Praneeth Kacham, and Rameshwar Pratap. Robust k-means++. In Ryan P. Adams and Vibhav Gogate, editors, Proceedings of the Thirty-Sixth Conference on Uncertainty in Artificial Intelligence, UAI 2020, virtual online, August 3-6, 2020, volume 124 of Proceedings of Machine Learning Research, pages 799–808. AUAI Press, 2020. URL: http://proceedings.mlr.press/v124/deshpande20a.html.
* [15] Amit Deshpande, Luis Rademacher, Santosh S. Vempala, and Grant Wang. Matrix approximation and projective clustering via volume sampling. Theory Comput., 2(12):225–247, 2006. URL: https://doi.org/10.4086/toc.2006.v002a012, doi:10.4086/toc.2006.v002a012.
* [16] Amit Deshpande and Kasturi Varadarajan. Sampling-based dimension reduction for subspace approximation. In Proceedings of the Thirty-Ninth Annual ACM Symposium on Theory of Computing, STOC ’07, page 641–650, New York, NY, USA, 2007. Association for Computing Machinery. URL: https://doi.org/10.1145/1250790.1250884, doi:10.1145/1250790.1250884.
* [17] Amit Deshpande and Santosh S. Vempala. Adaptive sampling and fast low-rank matrix approximation. In Josep Díaz, Klaus Jansen, José D. P. Rolim, and Uri Zwick, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, 9th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems, APPROX 2006 and 10th International Workshop on Randomization and Computation, RANDOM 2006, Barcelona, Spain, August 28-30 2006, Proceedings, volume 4110 of Lecture Notes in Computer Science, pages 292–303. Springer, 2006. URL: https://doi.org/10.1007/11830924_28, doi:10.1007/11830924\\_28.
* [18] Inderjit S. Dhillon and Dharmendra S. Modha. Concept decompositions for large sparse text data using clustering. Mach. Learn., 42(1/2):143–175, 2001. URL: https://doi.org/10.1023/A:1007612920971, doi:10.1023/A:1007612920971.
* [19] Charlie Dickens, Graham Cormode, and David Woodruff. Leveraging well-conditioned bases: Streaming and distributed summaries in Minkowski $p$-norms. In Jennifer Dy and Andreas Krause, editors, Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 1243–1251. PMLR, 10–15 Jul 2018\. URL: https://proceedings.mlr.press/v80/dickens18a.html.
* [20] Petros Drineas, Michael W. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM J. Matrix Anal. Appl., 30(2):844–881, 2008. URL: https://doi.org/10.1137/07070471X, doi:10.1137/07070471X.
* [21] Pavlos S. Efraimidis and Paul (Pavlos) Spirakis. Weighted random sampling. In Encyclopedia of Algorithms, pages 2365–2367. 2016. URL: https://doi.org/10.1007/978-1-4939-2864-4_478, doi:10.1007/978-1-4939-2864-4\\_478.
* [22] Dan Feldman. Introduction to core-sets: an updated survey, 2020. arXiv:2011.09384.
* [23] Dan Feldman, Morteza Monemizadeh, Christian Sohler, and David P. Woodruff. Coresets and sketches for high dimensional subspace approximation problems. In Moses Charikar, editor, Proceedings of the Twenty-First Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2010, Austin, Texas, USA, January 17-19, 2010, pages 630–649. SIAM, 2010. URL: https://doi.org/10.1137/1.9781611973075.53, doi:10.1137/1.9781611973075.53.
* [24] Alan M. Frieze, Ravi Kannan, and Santosh S. Vempala. Fast monte-carlo algorithms for finding low-rank approximations. J. ACM, 51(6):1025–1041, 2004. URL: https://doi.org/10.1145/1039488.1039494, doi:10.1145/1039488.1039494.
* [25] Mina Ghashami, Edo Liberty, Jeff M. Phillips, and David P. Woodruff. Frequent directions : Simple and deterministic matrix sketching. CoRR, abs/1501.01711, 2015. URL: http://arxiv.org/abs/1501.01711, arXiv:1501.01711.
* [26] Mina Ghashami, Edo Liberty, Jeff M. Phillips, and David P. Woodruff. Frequent directions: Simple and deterministic matrix sketching. SIAM J. Comput., 45(5):1762–1792, 2016. URL: https://doi.org/10.1137/15M1009718, doi:10.1137/15M1009718.
* [27] Mina Ghashami and Jeff M. Phillips. Relative errors for deterministic low-rank matrix approximations. In Proceedings of the Twenty-Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’14, page 707–717, USA, 2014. Society for Industrial and Applied Mathematics.
* [28] Venkatesan Guruswami and Ali Kemal Sinop. Optimal column-based low-rank matrix reconstruction. In Yuval Rabani, editor, Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012, Kyoto, Japan, January 17-19, 2012, pages 1207–1214. SIAM, 2012. URL: https://doi.org/10.1137/1.9781611973099.95, doi:10.1137/1.9781611973099.95.
* [29] Yasutoshi Ida, Sekitoshi Kanai, Yasuhiro Fujiwara, Tomoharu Iwata, Koh Takeuchi, and Hisashi Kashima. Fast deterministic CUR matrix decomposition with accuracy assurance. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 4594–4603. PMLR, 13–18 Jul 2020\. URL: http://proceedings.mlr.press/v119/ida20a.html.
* [30] Edo Liberty. Simple and deterministic matrix sketching. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’13, page 581–588, New York, NY, USA, 2013. Association for Computing Machinery. URL: https://doi.org/10.1145/2487575.2487623, doi:10.1145/2487575.2487623.
* [31] Sepideh Mahabadi, Ilya P. Razenshteyn, David P. Woodruff, and Samson Zhou. Non-adaptive adaptive sampling on turnstile streams. In Konstantin Makarychev, Yury Makarychev, Madhur Tulsiani, Gautam Kamath, and Julia Chuzhoy, editors, Proccedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2020, Chicago, IL, USA, June 22-26, 2020, pages 1251–1264. ACM, 2020. URL: https://doi.org/10.1145/3357713.3384331, doi:10.1145/3357713.3384331.
* [32] Michael W Mahoney. Randomized algorithms for matrices and data. arXiv preprint arXiv:1104.5557, 2011.
* [33] Michael W. Mahoney and Petros Drineas. CUR matrix decompositions for improved data analysis. Proc. Natl. Acad. Sci. USA, 106(3):697–702, 2009. URL: https://doi.org/10.1073/pnas.0803205106, doi:10.1073/pnas.0803205106.
* [34] Rameshwar Pratap, Anup Anand Deshmukh, Pratheeksha Nair, and Tarun Dutt. A faster sampling algorithm for spherical $k$-means. In Jun Zhu and Ichiro Takeuchi, editors, Proceedings of The 10th Asian Conference on Machine Learning, ACML 2018, Beijing, China, November 14-16, 2018, volume 95 of Proceedings of Machine Learning Research, pages 343–358. PMLR, 2018. URL: http://proceedings.mlr.press/v95/pratap18a.html.
* [35] Jeffrey Scott Vitter. Random sampling with a reservoir. ACM Trans. Math. Softw., 11(1):37–57, 1985. URL: https://doi.org/10.1145/3147.3165, doi:10.1145/3147.3165.
* [36] Shusen Wang and Zhihua Zhang. Improving CUR matrix decomposition and the nyström approximation via adaptive sampling. J. Mach. Learn. Res., 14(1):2729–2769, 2013. URL: http://dl.acm.org/citation.cfm?id=2567748.
* [37] Kai Wei, Rishabh Iyer, and Jeff Bilmes. Submodularity in data subset selection and active learning. In Proceedings of the 32nd International Conference on Machine Learning, volume 37, pages 1954–1963, 2015.
|
# EVE: Efficient Vision-Language Pre-training with Masked Prediction and
Modality-Aware MoE
Junyi Chen1, Longteng Guo2, Jia Sun3, Shuai Shao3, Zehuan Yuan3, Liang Lin1, Dongyu Zhang1 (corresponding author)
###### Abstract
Building scalable vision-language models to learn from diverse, multimodal
data remains an open challenge. In this paper, we introduce an Efficient
Vision-languagE foundation model, namely EVE, which is one unified multimodal
Transformer pre-trained solely by one unified pre-training task. Specifically,
EVE encodes both vision and language within a shared Transformer network
integrated with modality-aware sparse Mixture-of-Experts (MoE) modules, which
capture modality-specific information by selectively switching to different
experts. To unify pre-training tasks of vision and language, EVE performs
masked signal modeling on image-text pairs to reconstruct masked signals,
i.e., image pixels and text tokens, given visible signals. This simple yet
effective pre-training objective accelerates training by 3.5x compared to the
model pre-trained with Image-Text Contrastive and Image-Text Matching losses.
Owing to the combination of the unified architecture and pre-training task,
EVE is easy to scale up, enabling better downstream performance with fewer
resources and faster training speed. Despite its simplicity, EVE achieves
state-of-the-art performance on various vision-language downstream tasks,
including visual question answering, visual reasoning, and image-text
retrieval.
## Introduction
Vision-Language Pre-training aims to learn a general multimodal representation
that can be transferred to various vision-language downstream tasks, such as
vision-language understanding and image-text retrieval. A vision-language
foundation model should have excellent performance while being easy to train
and scale up, which can be achieved through the model architecture and the
pre-training tasks.
Figure 1: Performance of different models on VQA test-dev under different
training hours. Training hours of all models are reproduced by us on A100
GPUs.
The model architectures of recent methods can be roughly divided into two
categories: dual-encoder architecture and unified architecture. Dual-encoder
methods (Radford et al. 2021; Zeng, Zhang, and Li 2022) employ modality-
specific models (e.g. BERT (Devlin et al. 2019), ViT (Dosovitskiy et al.
2021)) to encode different modalities separately and a fusion module to
integrate them. As for the fusion module, some methods (Radford et al. 2021)
employ shallow fusion (e.g., dot product) for the interaction of vision and
language. Some alternative methods (Zeng, Zhang, and Li 2022) use deep neural
networks, such as Transformer Encoders, to perform deep fusion on modality
interaction, but lead to difficulties in scaling up and low efficiency.
Unified methods (Kim, Son, and Kim 2021; Wang et al. 2022b) use a modality-
shared Transformer to encode different modalities jointly. This approach
simplifies the framework and improves the speed, helping with model scaling
up. However, they overlook the inherent gap between modalities, leading to
lower overall performance. Raw image signals are continuous, redundant, and
low-level, while text is discrete, refined, and high-level. Directly using a
shared Transformer to encode modalities separated by such a semantic gap is
problematic, so the differences between modalities must be considered
carefully.
Previous methods also have explored numerous pre-training tasks for vision-
language pre-training, including Image-Text Contrastive Learning (Radford et
al. 2021), Image-Text Matching (Li et al. 2021), Word-Patch Alignment (Chen et
al. 2020), Masked Language Modeling (Su et al. 2020), Masked Image Modeling
(Bao et al. 2022b), and so on. They have been widely used to improve vision-
language pre-training. While incorporating more pre-training tasks can enhance
performance, adding too many tasks can also lead to some problems. Foremost,
it significantly prolongs the pre-training time and increases the
computational resources required. Additionally, it necessitates manual weight
adjustments for different objectives. Furthermore, excessive pre-training
objectives can result in a reduction in the model’s scalability, which is
crucial in designing pre-training models, as the recent success has shown in
large language models (Ouyang et al. 2022; Wei et al. 2022b). Therefore, it is
necessary to use effective and scalable pre-training tasks.
In this paper, we propose an Efficient Vision-languagE foundation model (EVE)
with a unified modality-aware Transformer pre-trained with a single unified
pretraining task, i.e., masked signal modeling.
In terms of model architecture, we use a unified modality-aware Transformer
and revisit the integration of Mixture-of-Experts in vision-language pre-
training. We employ a shared Multi-Head Self-Attention module and a Modality-
Aware MoE module for the modality-aware Transformer to encode and fuse various
modalities jointly. Using a unified shared Transformer is more concise and
flexible, which simplifies the extension to additional modalities and
facilitates cross-modal alignment. By incorporating MoE, we can take into
account the differences between modalities and capture more modality-specific
information. We also introduce a modality routing technique in MoE that
enables the router to select more appropriate experts for processing.
In terms of pre-training tasks, we propose a unified masked signal modeling
technique combining masked pixel and language modeling, which significantly
improves training speed and reduces scaling difficulty. Some methods (Wang et
al. 2023; Kwon et al. 2023; Zhao et al. 2022) have applied the generative
pre-training paradigm to vision-language pre-training, but they either combine
the generative objective with other complex objectives such as ITC and ITM
(Kwon et al. 2023), or employ more complicated targets such as visual tokens
(Wang et al. 2023) or momentum features (Zhao et al. 2022), which require a
nontrivial visual tokenizer or momentum model. All of these increase the
complexity of pre-training. In contrast, we utilize only the _raw signals_
from the image-text pairs themselves, minimizing pre-training complexity and
achieving better scalability. Pre-training is 3.5x faster than with ITC and
ITM included.
EVE can greatly enhance pre-training speed, as shown in Figure 1. It decreases
the demand for extensive computational resources while being easy to scale up.
We demonstrate the effectiveness of EVE on various vision-language downstream
tasks, including visual question answering, visual reasoning, and image-text
retrieval. EVE achieves state-of-the-art performance on Image-Text Retrieval
and Vision-Language Understanding (VQA and NLVR2) tasks.
Our contributions are summarized as follows:
* •
We introduce EVE, an efficient vision-language foundation model that achieves
state-of-the-art performance while improving training speed, with one unified
multimodal Transformer and one unified pre-training task.
* •
We integrate Modality-Aware MoE with a shared multimodal Transformer to
achieve a more profound fusion of different modalities and capture more
modality-specific information simultaneously, resulting in better performance
and faster inference speed within a unified architecture.
* •
We propose a unified masked signal modeling technique, simplifying vision-
language pre-training into a single unified objective, resulting in
significantly improved pre-training speed and competitive performance.
## Related Work
Model architecture and pre-training tasks are crucial factors in the
representation learning of vision-language.
### Model Architecture
Dual-encoder with a fusion module (Li et al. 2021; Zeng, Zhang, and Li 2022;
Dou et al. 2022b; Zhao et al. 2022) performs well on vision-language tasks but
with higher time and architecture complexity. Unified architecture methods
(Kim, Son, and Kim 2021; Wang et al. 2022b; Bao et al. 2022a, b) can flexibly
encode different modalities as a fusion encoder or process a single modality
as a unimodal encoder, demonstrating faster inference speed and promising
performance. Some of them (Kim, Son, and Kim 2021; Wang et al. 2022b) use a
shared standard Transformer (Vaswani et al. 2017) to jointly encode different
modalities, while they ignore the modality gap and lead to worse performance.
Others (Bao et al. 2022a, b) use the MoME Transformer instead and show that
shared attention is better for multimodal learning. However, the MoME
Transformer's modality-shared FFN in the deep layers may neglect some
modality-specific information.
Considering the simplicity, effectiveness, and flexibility of the unified
architecture, we adopt a unified architecture with Modality-Aware MoE to
better capture modality specifics during fusion for multimodal representation
learning. We achieve state-of-the-art performance with approximately the same
inference cost.
Figure 2: Overview of EVE and Masked Signal Modeling. We use a unified
architecture with shared attention and Modality-Aware MoE for EVE and a single
unified masked signal modeling for pre-training. We employ random masking on
both image and text. Masked image and complete text are used in masked image
modeling, and vice versa for masked language modeling.
### Masked Signal Modeling
Recently, several methods (Bao et al. 2022b; Zhao et al. 2022; He et al.
2022b; Diao et al. 2023; Geng et al. 2022) explore the "mask then predict"
paradigm on the vision side for vision-language pre-training. While VLBEiT (Bao et
al. 2022b) introduces training on the visual modality through masked image
modeling, their reconstruction target is the visual token, which may
significantly influence performance depending on the visual tokenizer
employed. DAVINCI (Diao et al. 2023) extends prefix language modeling further
to vision, but it also uses the discrete visual token as the target. MAMO
(Zhao et al. 2022) enriches multimodal representation by using momentum
features in masked representation modeling, which relies heavily on a momentum
teacher model to avoid divergence. Some methods (Kwon et al. 2023; He et al.
2022b; Gui et al. 2022) use masked pixel modeling, but they all require
additional costly pre-training tasks such as ITC (Radford et al. 2021) and ITM
(Li et al. 2019). Among these methods, VLMAE (He et al. 2022b) only applies
masked pixel modeling to the image encoder. M3AE (Geng et al. 2022) leverages
a unified Image-Language masking approach to mask and reconstruct both images
and text simultaneously, but it is not used in multimodal downstream tasks.
We unify masked pixel and language modeling into masked signal modeling,
reconstructing masked raw signals from visible signals. This simplifies and
accelerates training, achieving better performance and scalability.
### Mixture-of-Experts (MoE)
Mixture-of-Experts has been extensively explored in computer vision (Riquelme
et al. 2021) and natural language processing (Shazeer et al. 2017; Lepikhin et
al. 2021). These methods generally aim to improve performance by learning a
better routing using auxiliary losses (Lepikhin et al. 2021; Zoph et al.
2022), converting it into a linear assignment problem (Lewis et al. 2021), or
making it differentiable (Hazimeh et al. 2021). MoE seems well-suited for
multimodal learning, but the differences between modalities present some
challenges. LIMoE (Mustafa et al. 2022) involves more auxiliary losses to
balance different modalities, uni-perceiver-moe (Zhu et al. 2022) employs
conditional MoE, and VLMO (Bao et al. 2022a) uses a shared expert in the deep
layers.
However, existing methods increase complexity or limit performance due to
manual routing and ignoring modality information. Therefore, we propose
Modality-Aware MoE as a simple way to apply MoE to multimodal learning. We
simplify the auxiliary loss and capture more modality specifics by expert
switching.
## Methods
### Backbone Network
As shown in Figure 2, we adopt a unified multimodal Transformer with shared
attention and Modality-Aware Mixture-of-Experts as the backbone network, which
is capable of encoding different modalities. After pre-training, the model can
be utilized as either a fusion encoder or a unimodal encoder for various
downstream tasks through fine-tuning.
For image $\boldsymbol{I}$, following ViT (Dosovitskiy et al. 2021), we first
split $\boldsymbol{I}$ into $N$ patches with a patch size of $P$.
The resulting $N=HW/P^{2}$ patches are projected into a shared embedding space
using a linear projector. A special token $\boldsymbol{I}_{\text{cls}}$ is
added at the beginning of all visual tokens. We employ learnable visual
position embeddings $\boldsymbol{I}_{\text{pos}}$ and visual type embeddings
$\boldsymbol{I}_{\text{type}}$ on visual tokens. Image embedding can be
summarized as follows.
$\boldsymbol{I}_{\text{emb}}=[\boldsymbol{I}_{\text{cls}},\boldsymbol{I}_{1},\dots,\boldsymbol{I}_{N}]+\boldsymbol{I}_{\text{pos}}+\boldsymbol{I}_{\text{type}}$
(1)
For text $\boldsymbol{T}$, following BERT (Devlin et al. 2019), we tokenize the
text into discrete tokens with a maximum length of $n$ and project them into
the joint embedding space. We add a special token $\boldsymbol{T}_{\text{cls}}$
at the beginning of all text tokens and use learnable text position embeddings
$\boldsymbol{T}_{\text{pos}}$ and text type embeddings
$\boldsymbol{T}_{\text{type}}$ for text encoding. Text embedding can be
summarized as follows.
$\boldsymbol{T}_{\text{emb}}=[\boldsymbol{T}_{\text{cls}},\boldsymbol{T}_{1},\dots,\boldsymbol{T}_{n}]+\boldsymbol{T}_{\text{pos}}+\boldsymbol{T}_{\text{type}}$
(2)
We concatenate $\boldsymbol{I}_{\text{emb}}$ and $\boldsymbol{T}_{\text{emb}}$
as the input to the model:
$\mathbb{P}_{\text{emb}}=[\boldsymbol{I}_{\text{emb}},\boldsymbol{T}_{\text{emb}}]$
(3)
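The embedding pipeline of Eqs. (1)–(3) can be sketched in a few lines of numpy. The sizes below (patch size 16, embedding width 64, text length 8) and the random weights are illustrative stand-ins, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration (real models use wider embeddings).
H = W = 224      # image resolution
P = 16           # patch size
D = 64           # embedding dimension
n = 8            # text length
N = (H * W) // (P ** 2)                        # number of image patches

# Eq. (1): patchify, project, prepend [CLS], add position/type embeddings.
image = rng.normal(size=(H, W, 3))
patches = image.reshape(H // P, P, W // P, P, 3).transpose(0, 2, 1, 3, 4)
patches = patches.reshape(N, P * P * 3)
W_img = rng.normal(size=(P * P * 3, D)) * 0.02  # linear patch projector
I_emb = np.concatenate([np.zeros((1, D)), patches @ W_img], axis=0)
I_emb = I_emb + rng.normal(size=(N + 1, D)) * 0.02   # I_pos
I_emb = I_emb + rng.normal(size=(1, D)) * 0.02       # I_type (broadcast)

# Eq. (2): embed token ids, prepend [CLS], add position/type embeddings.
vocab = rng.normal(size=(1000, D)) * 0.02
token_ids = rng.integers(0, 1000, size=n)
T_emb = np.concatenate([np.zeros((1, D)), vocab[token_ids]], axis=0)
T_emb = T_emb + rng.normal(size=(n + 1, D)) * 0.02   # T_pos
T_emb = T_emb + rng.normal(size=(1, D)) * 0.02       # T_type

# Eq. (3): concatenate both modalities into one input sequence.
P_emb = np.concatenate([I_emb, T_emb], axis=0)
print(P_emb.shape)   # (N + 1 + n + 1, D) = (206, 64)
```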
### Modality-Aware Mixture-of-Experts
Multimodal learning differs significantly from unimodal learning, as the
differences between modalities cannot be ignored. Using the same Feed-Forward
Network for all modalities can lead to inappropriate fusion of modalities,
resulting in degraded performance. Conversely, using modality-specific MoE in
all layers may not benefit the alignment of different modalities. Therefore,
we propose the Modality-Aware Mixture-of-Experts (MoE), as shown in Figure 3,
which incorporates the modality routing technique on top of the general MoE to
capture modality-specific information while fusing by selectively switching to
different experts.
In the general MoE, each MoE block typically consists of $N$ experts, and each
input token is processed by $k$ experts selected from the $N$ experts. A
lightweight router $g$ is used to select the $k$ experts for each token, which
employs a simple linear-softmax predictor to calculate the routing weight.
This can be formulated as:
$g(\mathbf{x})=\text{softmax}\left(\mathbf{W}\cdot\mathbf{x}\right)$ (4)
$\mathbf{W}\in\mathbb{R}^{N\times D}$ is a learnable projector for input
$\mathbf{x}\in\mathbb{R}^{D}$.
The final output of the MoE block is the weighted average of the $k$ selected
experts, which can be formulated as:
$\text{MoE}(\mathbf{x})=\sum_{i=1}^{k}g(\mathbf{x})_{i}\cdot\text{FFN}_{i}(\mathbf{x})$
(5)
Figure 3: Architecture of Modality-Aware MoE.
#### Modality Routing
General MoE does not impose any restrictions on the router, which can easily
lead to unbalanced routing. LIMoE (Mustafa et al. 2022) points out that this
phenomenon can be exacerbated in multimodal learning due to the difference in
token count across different modalities.
To address this issue, we propose a modality-aware routing approach to enhance
the router. We adopt a best-effort strategy for routing to preserve all tokens
while explicitly providing modality information to the router by adding
modality-specific embeddings. The new routing function can be formulated as
follows:
$g(\mathbf{x})=\text{softmax}\left(\mathbf{W}\cdot(\mathbf{x}+\mathbf{b}_{m})\right)$
(6)
Here, we use modality-specific embeddings $\mathbf{b}_{m}\in\mathbb{R}^{D}$
for different modalities, i.e., $\mathbf{b}_{I}$ for images and
$\mathbf{b}_{T}$ for text.
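A minimal numpy sketch of routing per Eqs. (5)–(6). The dimensions, random weights, and single-linear-layer "experts" are illustrative stand-ins, not the trained FFN experts of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_exp, k = 16, 4, 2                        # hidden dim, experts, top-k

W_r = rng.normal(size=(N_exp, D)) * 0.1       # router projection W
b_img = rng.normal(size=D) * 0.1              # modality embedding b_I
b_txt = rng.normal(size=D) * 0.1              # modality embedding b_T
experts = [rng.normal(size=(D, D)) * 0.1 for _ in range(N_exp)]

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe(x, modality):
    # Eq. (6): add the modality-specific embedding before routing.
    b = b_img if modality == "image" else b_txt
    g = softmax(W_r @ (x + b))
    top = np.argsort(g)[-k:]                  # indices of the top-k experts
    # Eq. (5): weighted combination of the selected experts' outputs.
    return sum(g[i] * (x @ experts[i]) for i in top)

x = rng.normal(size=D)
y_img, y_txt = moe(x, "image"), moe(x, "text")
print(y_img.shape)   # same token routed differently per modality
```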
#### Auxiliary Loss
In addition to modality routing, we use a single simple auxiliary loss to
balance routing and avoid carefully tuning the weight. Following (Shazeer et
al. 2017), we add Load-Balancing Loss as the auxiliary loss to train the
router. It can be formulated as follows:
$\mathcal{L}_{aux}=\alpha\cdot N\sum_{i=1}^{N}f_{i}\times p_{i}$ (7)
This objective encourages uniform routing of tokens, where $N$ denotes the
number of experts, $f_{i}$ denotes the fraction of tokens dispatched to the
$i^{\text{th}}$ expert, and $p_{i}$ denotes the average routing weight for the
$i^{\text{th}}$ expert. The weight $\alpha$ is a hyperparameter that we set at
0.001 by default to avoid overwhelming other objectives.
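The load-balancing term of Eq. (7) reduces to a few lines. The function below is an illustrative sketch: $f_i$ is estimated from hard token-to-expert assignments and $p_i$ from the router's softmax outputs:

```python
import numpy as np

def load_balancing_loss(routing_weights, expert_assignment, n_experts, alpha=0.001):
    """Eq. (7): alpha * N * sum_i f_i * p_i.

    routing_weights: (tokens, n_experts) router softmax outputs.
    expert_assignment: (tokens,) index of the expert each token was sent to.
    """
    # f_i: fraction of tokens dispatched to expert i.
    f = np.bincount(expert_assignment, minlength=n_experts) / len(expert_assignment)
    # p_i: average routing weight for expert i.
    p = routing_weights.mean(axis=0)
    return alpha * n_experts * float(np.sum(f * p))

# Perfectly balanced routing attains the minimum value alpha of the loss.
uniform = np.full((8, 4), 0.25)
assign = np.array([0, 1, 2, 3, 0, 1, 2, 3])
print(load_balancing_loss(uniform, assign, 4))  # 0.001
```

Skewed routing (all tokens sent to one favored expert) raises the loss above this minimum, which is what pushes the router toward uniform dispatch.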
Considering efficiency, we use a soft router with top-$k=2$ in the deep layers
and a hard router in the shallow layers. An MoE module equipped with a hard
router has the same number of experts as the number of modalities. The hard
router directly selects the corresponding expert based on the modality of each
token.
### Pre-training Task: Masked Signal Modeling
Previous multimodal models (Li et al. 2021; Radford et al. 2021; Bao et al.
2022a; Li et al. 2019; Zhao et al. 2022) typically involve complex pre-
training tasks like Image-Text Contrastive Learning (ITC) (Radford et al.
2021), Image-Text Matching (ITM) (Li et al. 2019), and Masked Representation
Modeling (MRM) (Zhao et al. 2022). These methods have shown good performance,
but pre-training still requires significant computational resources, and is
challenging to scale up.
Table 1 shows the efficiency comparison between different pre-training tasks,
which indicates a significant difference in time consumption and batch size.
Compared to pre-training without ITC and ITM, including them requires four
times more computational resources to achieve a similar speed. Moreover, ITC
and ITM tasks are similar to other contrastive learning-based methods that
typically require a larger batch size to achieve better performance.
Incorporating additional pre-training tasks can significantly decrease
training speed, increase training difficulty, and have an impact on the
scalability of the model.
MLM | ITC | ITM | MIM Token | MIM Pixel | Batch size | Time
---|---|---|---|---|---|---
✓ | | | | ✓ | 224 | 2.14h
✓ | | | ✓ | | 152 | 3.09h
✓ | ✓ | | | ✓ | 132 | 3.26h
✓ | ✓ | ✓ | | | 80 | 6.88h
✓ | ✓ | ✓ | | ✓ | 64 | 7.73h
Table 1: Maximum batch size per GPU and pre-training time per epoch of
different pre-training tasks on 8 A100 GPUs with the same architecture as EVE-
Base. We add vision mask tokens in the encoder during masked token modeling.
Thus, we pre-train our model with only one unified masked signal modeling
objective on image-text pairs to reconstruct masked signals by visible signals
as shown in Figure 2. Specifically, masked signal modeling combines masked
image modeling and masked language modeling, and only utilizes the raw signals
from image-text pairs themselves without relying on any additional techniques.
We use masked image and complete text in masked image modeling, while complete
image and masked text in masked language modeling. Despite its simplicity, our
approach achieves competitive performance compared to previous methods and can
be easily scaled up.
In this section, we use $h(\cdot)$ and $\theta(\cdot)$ to denote the encoder
and the decoder. $\hat{I}$ and $\hat{T}$ denote the masked image and masked
text, respectively. $D$ indicates the dataset.
#### Masked Language Modeling (MLM)
Following BERT (Devlin et al. 2019), we randomly mask some of the text tokens
and predict them based on the information provided by the image and corrupted
text. The Masked Language Modeling (MLM) objective can be formulated as
follows:
$\mathcal{L}_{mlm}=\mathbb{E}_{(I,T)\sim
D}\ell_{mlm}\left(\theta_{t}\left(h(I,\hat{T})\right),T\right)$ (8)
$\ell_{mlm}$ computes the cross-entropy loss between the prediction
probability $P_{mlm}$, obtained from the text decoder $\theta_{t}$, and the
ground truth on each masked token. We use a two-layer MLP with a softmax layer
as the text decoder.
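A sketch of this loss under the definitions above, with random logits standing in for the text-decoder output and an arbitrary masking pattern; only masked positions contribute:

```python
import numpy as np

def mlm_loss(logits, targets, mask):
    """Cross-entropy between predictions and ground truth on masked tokens only.

    logits: (seq, vocab) decoder outputs; targets: (seq,) token ids;
    mask: (seq,) True where the input token was masked out.
    """
    # Numerically stable log-softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    log_p = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    nll = -log_p[np.arange(len(targets)), targets]
    return float(nll[mask].mean())

rng = np.random.default_rng(0)
logits = rng.normal(size=(6, 100))            # stand-in decoder output
targets = rng.integers(0, 100, size=6)
mask = np.array([False, True, False, True, False, False])
loss = mlm_loss(logits, targets, mask)
print(round(loss, 3))
```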
#### Masked Image Modeling (MIM)
Previous methods (Zhao et al. 2022; Zhang et al. 2023; Wang et al. 2023) have
typically employed semantically rich visual features obtained from the model
itself or discrete visual tokens obtained from visual tokenizers as the
targets for MIM. However, both approaches have their drawbacks. Training
visual tokenizers (Ramesh et al. 2021; Peng et al. 2022) is a challenging task
as different tokenizers can have varying impacts on performance and may lead
to error propagation. Meanwhile, using visual features (Zhao et al. 2022;
Zhang et al. 2023) requires either applying momentum distillation techniques
or employing other loss functions and techniques to prevent the model from
diverging during training. These MIM targets make the overall framework more
complex.
In visual self-supervised learning, some works use other information as the
MIM targets, such as RGB pixels (He et al. 2022a), scene depth (Bachmann et
al. 2022), HOG (Wei et al. 2022a), etc. However, using targets such as scene
depth and HOG requires additional techniques, which increases the complexity
of the training process. In order to maintain simplicity and effectiveness, we
choose to utilize the image pixels themselves as the reconstruction target.
Following MAE (He et al. 2022a), we adopt an asymmetric design for MIM, where
only observed image patches and all text tokens are fed into the encoder. A
lightweight decoder is used to reconstruct raw pixels on masked positions from
partial image representation and masked tokens, as shown in Figure 2. We use
multiple Transformer blocks with narrower hidden widths as the decoder. The
MIM objective can be formulated as:
$\mathcal{L}_{mim}=\mathbb{E}_{(I,T)\sim
D}\ell_{mim}\left(\theta_{i}\left(h(\hat{I},T)\right),I\right)$ (9)
$\ell_{mim}$ calculates the mean square error between the raw pixels and the
reconstructed result generated by the image decoder. We compute the loss on
masked image patches.
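This objective boils down to an MSE restricted to masked positions. Below is an illustrative sketch with random arrays standing in for the decoder output and raw pixels, using the paper's 75% masking ratio:

```python
import numpy as np

def mim_loss(pred_patches, target_patches, mask):
    """Mean squared error between reconstructed and raw pixels,
    computed on masked patches only (visible patches contribute nothing)."""
    diff = (pred_patches - target_patches) ** 2
    return float(diff[mask].mean())

rng = np.random.default_rng(0)
target = rng.normal(size=(196, 768))             # 196 patches of 16x16x3 pixels
pred = target + rng.normal(size=(196, 768)) * 0.1  # stand-in decoder output
mask = rng.random(196) < 0.75                    # 75% masking ratio, as in EVE

loss = mim_loss(pred, target, mask)
print(loss > 0.0)   # True
```

A perfect reconstruction drives the loss to zero, since only the squared residual on masked patches is averaged.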
The overall objective of masked signal modeling is:
$\mathcal{L}=\mathcal{L}_{mlm}+\mathcal{L}_{mim}$ (10)
## Experiments
### Pre-training Datasets
Following previous methods, we pre-train EVE on four widely used public
datasets: MSCOCO Captions (Lin et al. 2014), Visual Genome (Krishna et al.
2017), SBU Captions (Ordonez, Kulkarni, and Berg 2011) and Conceptual Captions
(Sharma et al. 2018). There are about 4M images and 10M image-text pairs in
all datasets. Since some downstream tasks are based on COCO, we exclude all
images in the test sets of downstream tasks from the pre-training data. We
also pre-train EVE-Large on a larger dataset with 21M image-text pairs by
adding CC12M (Changpinyo et al. 2021).
### Implementation Details
EVE-Base has 12 Transformer blocks and EVE-Large has 24 Transformer blocks. We
employ a soft router with 32 experts in the top 2 blocks of EVE-Base and the
top 3 blocks of EVE-Large, and a hard router in the other blocks. We pre-train
EVE-Base for
480k steps with a batch size of 2048 and EVE-Large with the same batch size
for 280k steps. We use AdamW (Loshchilov and Hutter 2019) optimizer. The peak
learning rate is 5e-4 for EVE-Base and 2e-4 for EVE-Large. During pre-
training, the image resolution is $224\times 224$. We use random resized
cropping and horizontal flipping for data augmentation. We mask 75% of image
patches in MIM and 50% of text tokens in MLM. EVE is initialized with BEiTv2.
More details are provided in the Appendix.
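The stated masking ratios can be implemented by MAE-style uniform random sampling. A minimal sketch follows; the patch and token counts are illustrative, not taken from the paper's code.

```python
import numpy as np

def random_mask(num_tokens, ratio, rng):
    """Return a boolean mask with `ratio` of positions set True (masked),
    chosen uniformly at random, as in MAE-style random sampling."""
    n_mask = int(num_tokens * ratio)
    idx = rng.permutation(num_tokens)[:n_mask]
    mask = np.zeros(num_tokens, dtype=bool)
    mask[idx] = True
    return mask

rng = np.random.default_rng(42)
img_mask = random_mask(196, 0.75, rng)   # 14x14 patches at 224/16, 75% masked
txt_mask = random_mask(40, 0.50, rng)    # hypothetical text length, 50% masked
```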
Model | #Images | VQA test-dev | VQA test-std | NLVR2 dev | NLVR2 test-P | COCO TR@1 | COCO IR@1 | Flickr30K TR@1 | Flickr30K IR@1
---|---|---|---|---|---|---|---|---|---
ALBEF (Li et al. 2021) | 4M | 74.54 | 74.70 | 80.24 | 80.50 | 73.1 | 56.8 | 94.3 | 82.8
Triple (Yang et al. 2022) | 4M | 74.90 | 74.92 | 80.54 | 81.33 | 75.6 | 59.0 | 94.9 | 84.0
Codebook (Duan et al. 2022) | 4M | 74.86 | 74.97 | 80.50 | 80.84 | 75.3 | 58.7 | 95.1 | 83.3
METER (Dou et al. 2022a) | 4M | 77.68 | 77.64 | 82.33 | 83.05 | 76.2 | 57.1 | 94.3 | 82.2
MAMO (Zhao et al. 2022) | 4M | 76.12 | 76.20 | 81.86 | 81.53 | 77.1 | 60.3 | 95.6 | 85.4
VLMO (Bao et al. 2022a) | 4M | 76.64 | 76.89 | 82.77 | 83.34 | 74.8 | 57.2 | 92.3 | 79.3
VL-BEiT (Bao et al. 2022b) | 4M | 77.53 | 77.75 | 81.93 | 82.66 | 79.5 | 61.5 | 95.8 | 83.9
VLMAE (He et al. 2022b) | 4M | 75.30 | 75.40 | 80.50 | 81.20 | 77.3 | 59.6 | 95.2 | 83.6
MaskVLM (Kwon et al. 2023) | 4M | 75.45 | 75.40 | 81.58 | 81.98 | 76.3 | 60.1 | 95.6 | 84.5
VLC-Base (Li et al. 2021) | 5.6M | 74.02 | 74.00 | 77.70 | 79.04 | 72.4 | 50.7 | 89.2 | 71.3
DAVINCI (Diao et al. 2023) | 631.8M | 76.32 | 76.44 | 80.03 | 80.25 | - | - | - | -
SimVLM-Base (Wang et al. 2022b) | 1.8B | 77.87 | 78.14 | 81.72 | 81.77 | - | - | - | -
BEiT3-Base (Wang et al. 2023) | 3.1B | 77.65 | - | 83.60 | 84.40 | 79.1 | 61.4 | 96.3 | 86.2
EVE-Base (Ours) | 4M | 78.00 | 78.02 | 83.34 | 83.93 | 79.6 | 62.0 | 95.6 | 84.1
Table 2: Comparison with state-of-the-art base-size models on VQA, NLVR2,
MSCOCO, and Flickr30K. Gray rows indicate models pre-trained with much more
data (more than 400M).
### Vision-Language Downstream Tasks
We evaluate our pre-trained model on three common vision-language tasks. More
implementation details and a comparison of inference speed are provided in the
Appendix.
#### Visual Question Answering (VQA)
VQA requires the model to predict an answer based on the given image and
question. We use VQA2.0 dataset (Goyal et al. 2017) to evaluate our model.
Following previous work (Bao et al. 2022a), we view the task as a
classification task.
#### Natural Language for Visual Reasoning (NLVR2)
Given a sentence and two images, NLVR2 asks the model to judge whether the
sentence accurately describes the relationship between the two images. We
evaluate our model on NLVR2 dataset (Suhr et al. 2019). Following (Chen et al.
2020), we convert the triplet input into two image-text pairs with the same
text description and different images.
#### Image-Text Retrieval
The retrieval task comprises two sub-tasks: Image-to-Text Retrieval (TR) and
Text-to-Image Retrieval (IR). We evaluate the model on the widely used
Flickr30K (Plummer et al. 2015) and MSCOCO (Lin et al. 2014) benchmarks
following the Karpathy split (Karpathy and Fei-Fei 2015). Following (Li et al.
2021), we apply ITC and ITM losses in the fine-tuning stage and use a rerank
strategy during inference.
MIM Target | NLVR2 dev | NLVR2 test-P | Flickr30K TR | Flickr30K IR | VQA
---|---|---|---|---|---
BEiTv2 Token | 78.0 | 78.5 | 92.6 | 78.3 | 76.6
DALL-E Token | $\times$ | $\times$ | 92.4 | 77.4 | 75.8
Pixel (Ours) | 79.7 | 80.1 | 93.9 | 80.7 | 77.3
Table 3: Ablation study on MIM target. $\times$ denotes divergence during
fine-tuning.
### Results on Downstream Tasks
We present the results of VQA, NLVR2, COCO, and Flickr30K with state-of-the-art
base-size models in Table 2 and large-size models in Table 4. We report
accuracy for VQA and NLVR2, and top-1 recall for TR and IR.
##### Results on Vision-Language Understanding
EVE-Base outperforms all previous methods on understanding tasks and even
marginally outperforms BEiT3-Base (Wang et al. 2023), which was pre-trained
with 3.1B data, on VQA. EVE-Base outperforms VLMO (Bao et al. 2022a), which
also employs a unified architecture but uses more pre-training objectives, by
1.77% on VQA test-dev and 0.70% on NLVR2 test-P. EVE-Large4M performs
similarly to SimVLM-Large (Wang et al. 2022b), whereas EVE-Large16M surpasses
SimVLM-Huge, which is larger and pre-trained on much more data.
##### Results on Image-Text Retrieval
EVE-Base achieves competitive results on Flickr and outperforms the previous
state-of-the-art methods on COCO. Compared to VLMO, EVE-Base achieves
improvements of 6.42% on COCO text retrieval R@1 and 8.39% on COCO image
retrieval R@1. In addition, EVE-Large outperforms other Large and even Huge
models on both COCO and Flickr30K despite using far less pre-training data.
Notably, Image-Text Contrastive Learning and Image-Text Matching are not
involved in the pre-training of EVE.
Model | #Images | VQA test-dev | VQA test-std | NLVR2 dev | NLVR2 test-P | COCO TR@1 | COCO IR@1 | Flickr30K TR@1 | Flickr30K IR@1
---|---|---|---|---|---|---|---|---|---
VinVL-Large (Zhang et al. 2021) | 8.9M | 76.52 | 76.60 | 82.67 | 83.98 | 75.4 | 58.8 | - | -
BLIP-CapFiltL (Li et al. 2022) | 129M | 78.25 | 78.32 | 82.15 | 82.24 | 81.2 | 64.1 | 97.2 | 87.5
BLIP-Large (Li et al. 2022) | 129M | - | - | - | - | 82.4 | 65.1 | 97.4 | 87.6
Uni-PerceiverMoE-L (Zhu et al. 2022) | 44.1M | - | - | - | - | 74.7 | 58.3 | 94.1 | 83.7
FILIP-Large (Yao et al. 2022) | 340M | - | - | - | - | 78.9 | 61.2 | 96.6 | 87.1
Prismer-Large (Liu et al. 2023) | 12.7M | 78.4 | 78.5 | - | - | - | - | - | -
GIT (Wang et al. 2022a) | 800M | 75.5 | - | - | - | - | - | - | -
ALIGN-Large (Jia et al. 2021) | 1.8B | - | - | - | - | 77.0 | 59.9 | 95.3 | 84.9
SimVLM-Large (Wang et al. 2022b) | 1.8B | 79.32 | 79.56 | 84.13 | 84.84 | - | - | - | -
SimVLM-Huge (Wang et al. 2022b) | 1.8B | 80.03 | 80.34 | 84.53 | 85.15 | - | - | - | -
Florence-Huge (Yuan et al. 2021) | 900M | 80.16 | 80.36 | - | - | 81.8 | 63.2 | 97.2 | 87.9
EVE-Large (Ours) | 4M | 79.25 | 79.20 | 84.03 | 84.69 | 82.5 | 65.2 | 96.3 | 86.3
EVE-Large (Ours) | 16M | 80.17 | 80.18 | 85.63 | 86.22 | 83.5 | 66.7 | 98.0 | 87.9
Table 4: Comparison with state-of-the-art large-size models on VQA, NLVR2,
MSCOCO, and Flickr30K. Gray rows indicate models pre-trained with much more
data (more than 400M).
### Ablation Studies
For all ablation studies, we pre-train the model for 25 epochs with a similar
architecture to EVE-Base and report accuracy on NLVR2, VQA dev set, and top-1
recall on Flickr30K. We use the soft router with top-$k=2$ by default. More
ablation studies are presented in the Appendix.
Figure 4: Ablation study on masking ratio.
#### MIM Target
We compare different MIM targets in Table 3, including image tokens and raw
pixels. We use the tokenizers from BEiT v2 (Peng et al. 2022) and DALL-E
(Ramesh et al. 2021). We observe that reconstructing pixels is better than
reconstructing image tokens on all tasks; using a more complex MIM target does
not bring the expected benefit.
##### Masking Ratio
In Figure 4, we investigate the impact of different masking ratios for both
vision and language. The results indicate that a higher vision masking ratio
leads to improved performance. We hypothesize that raw image signals are
highly redundant, so a higher masking ratio is needed to facilitate
representation learning. A noteworthy difference from previous work (Zhao et
al. 2022) is that we achieve better performance at a higher text masking
ratio. Our interpretation is that with a deeper integration of vision and
language, the model can more easily predict masked text tokens with the aid of
vision.
Figure 5: Ablation study on the number of experts and top-$k$ design. We use a
soft router in Transformer blocks [8, 10, 12].
##### Number of Experts and Top-K
The number of experts and the selection of top-$k$ are crucial aspects of MoE
design, as they determine the model’s parameters, computational complexity,
and performance. Figure 5 clearly demonstrates that performance deteriorates
as the number of selected experts decreases from 2 to 1. When $k=1$,
increasing the number of experts can actually lead to a decrease in
performance, which is more evident in retrieval tasks. When $k=2$, increasing
the number of experts leads to corresponding improvements in the performance
of both VQA and retrieval tasks, with a more significant improvement observed
in the retrieval task.
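For reference, the top-$k$ gating this ablation varies can be sketched in the standard sparse-MoE form (following common practice, e.g. Shazeer et al. 2017); the exact router parameterization in EVE is not restated here, so treat the details as assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def top_k_gate(router_logits, k=2):
    """Standard top-k MoE gating: keep the k largest router logits per
    token, renormalize them with a softmax, and zero out the rest."""
    T, E = router_logits.shape
    top = np.argsort(router_logits, axis=-1)[:, -k:]        # [T, k] expert ids
    sel = np.take_along_axis(router_logits, top, axis=-1)   # [T, k] logits
    gates = np.zeros((T, E))
    np.put_along_axis(gates, top, softmax(sel, axis=-1), axis=-1)
    return gates  # token t's output = sum_e gates[t, e] * expert_e(x_t)
```

With $k=1$ the gate degenerates to a hard argmax choice, which matches the ablation's observation that retrieval suffers most when only one expert is selected.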
MIM | MLM | NLVR2 dev | NLVR2 test-P | Flickr30K TR | Flickr30K IR | VQA
---|---|---|---|---|---|---
✓ | | 57.2 | 57.4 | 30.4 | 22.9 | 60.9
 | ✓ | 78.8 | 79.3 | 92.2 | 79.2 | 77.0
✓† | ✓† | 75.4 | 75.7 | 88.6 | 74.2 | 74.6
✓ | ✓ | 79.7 | 80.1 | 93.9 | 80.7 | 77.3
Table 5: Ablation study on MIM and MLM. $\dagger$ denotes the model is pre-trained by MIM and MLM simultaneously with masked image and text inputs. The masking ratio is set to 50% for both image and text in $\dagger$, but 75% for image in the others.

MIM | MLM | ITC | ITM | Flickr30K TR | Flickr30K IR | VQA
---|---|---|---|---|---|---
✓ | ✓ | ✓ | | 94.0 | 80.0 | 76.8
✓ | ✓ | | ✓ | 94.0 | 80.7 | 77.0
✓ | ✓ | ✓ | ✓ | 94.2 | 80.8 | 77.1
✓ | ✓ | | | 94.4 | 81.2 | 77.4
Table 6: Ablation study on more pre-training tasks. All models are pre-trained
with the same pre-training GPU hours.
##### Pre-training Tasks
We explore the use of different pre-training tasks for masked signal modeling
in Table 5. Experiments reveal that MLM with a high masking ratio is
sufficient for learning the interaction between vision and language. The
addition of MIM further improves the results by reducing bias, as observed in
(Kwon et al. 2023). Pre-training with MIM alone results in minimal fusion
between vision and language; we hypothesize that text descriptions are
typically coarse-grained and offer little help for fine-grained vision
reconstruction. Simultaneously masking both modalities and
performing MIM and MLM is not recommended. This task reduces the amount of
vision and language information available, which in turn increases the
difficulty of MLM and MIM, resulting in performance decline.
We further explore more pre-training tasks under the same pre-training GPU
hours in Table 6. Pre-training on MIM and MLM alone achieves better results on
both retrieval and understanding tasks, demonstrating the efficiency of Masked
Signal Modeling. Performance on the NLVR2 task is provided in the Appendix.
##### Deep FFN
We compare different designs of the FFN in the deep layers in Table 7. A
modality-shared FFN performs better than modality-specific MoE in the deep
layers, as deep features require more alignment between modalities. Using a
soft router can align modalities while retaining modality-specific
information, further enhancing performance compared to simply adding a deeper
architecture.
Deep FFN | NLVR2 dev | NLVR2 test-P | Flickr30K TR | Flickr30K IR | VQA
---|---|---|---|---|---
Shared FFN | 79.6 | 80.1 | 93.5 | 80.1 | 77.0
Shared FFN† | 80.1 | 80.2 | 93.9 | 80.6 | 77.1
Hard Router | 79.8 | 80.1 | 93.2 | 79.3 | 77.0
Soft Router | 80.3 | 80.7 | 94.4 | 81.2 | 77.4
Table 7: Ablation study on deep (top-2 layers) FFN design. Shared FFN indicates different modalities use the same FFN. For $\dagger$, we additionally add one more Transformer block to investigate the impact of parameters per token.

Modality Routing | NLVR2 dev | NLVR2 test-P | Flickr30K TR | Flickr30K IR | VQA
---|---|---|---|---|---
EVE-Base | 80.3 | 80.7 | 94.4 | 81.2 | 77.4
w/o MR | 79.7 | 80.0 | 93.7 | 80.8 | 77.3
Table 8: Ablation study on modality routing technique.
##### Modality Routing
We compare the model with and without modality routing in the soft router in
Table 8. The results show that the proposed modality routing helps the router
distinguish inputs of different modalities and thus achieves better
performance.
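This chunk does not restate how modality routing is implemented. One common design, which we sketch here as an assumption rather than EVE's exact formulation, offsets each token by a learned per-modality embedding before computing router logits, so the router can separate image tokens from text tokens:

```python
import numpy as np

def route_with_modality(x, modality_ids, W_router, modality_emb):
    """Hypothetical modality routing: add a learned per-modality
    embedding to each token before the router projection.

    x: [T, D] tokens; modality_ids: [T] with 0 = image, 1 = text;
    W_router: [D, E] router weights; modality_emb: [2, D]."""
    return (x + modality_emb[modality_ids]) @ W_router  # [T, E] logits
```

Under this reading, even identical token features produce different router logits for different modalities, which is consistent with the ablation's finding that the router benefits from being able to tell the modalities apart.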
## Conclusion
In this paper, we present EVE, a new multimodal foundation model pre-trained
solely by Masked Signal Modeling with a Modality-Aware MoE, which is flexible
and capable of encoding different modalities in a unified manner. Pre-training
is 3.5x faster than pre-training with ITC and ITM. Additionally, EVE is easy
to scale up with a larger model or more pre-training data. Extensive
experiments demonstrate that EVE outperforms existing methods on various
vision-language downstream tasks.
## References
* Bachmann et al. (2022) Bachmann, R.; Mizrahi, D.; Atanov, A.; and Zamir, A. 2022. MultiMAE: Multi-modal Multi-task Masked Autoencoders. _arXiv preprint arXiv:2204.01678_.
* Bao et al. (2022a) Bao, H.; Wang, W.; Dong, L.; Liu, Q.; Mohammed, O. K.; Aggarwal, K.; Som, S.; Piao, S.; and Wei, F. 2022a. VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts. In _Advances in Neural Information Processing Systems_.
* Bao et al. (2022b) Bao, H.; Wang, W.; Dong, L.; and Wei, F. 2022b. VL-BEiT: Generative Vision-Language Pretraining. _arXiv preprint arXiv:2206.01127_.
* Changpinyo et al. (2021) Changpinyo, S.; Sharma, P.; Ding, N.; and Soricut, R. 2021. Conceptual 12M: Pushing Web-Scale Image-Text Pre-Training To Recognize Long-Tail Visual Concepts. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 3558–3568.
* Chen et al. (2020) Chen, Y.; Li, L.; Yu, L.; Kholy, A. E.; Ahmed, F.; Gan, Z.; Cheng, Y.; and Liu, J. 2020. UNITER: UNiversal Image-TExt Representation Learning. In _European Conference on Computer Vision_ , 104–120.
* Devlin et al. (2019) Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics_ , 4171–4186.
* Diao et al. (2023) Diao, S.; Zhou, W.; Zhang, X.; and Wang, J. 2023. Write and Paint: Generative Vision-Language Models are Unified Modal Learners. In _International Conference on Learning Representations_.
* Dosovitskiy et al. (2021) Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In _International Conference on Learning Representations_.
* Dou et al. (2022a) Dou, Z.; Xu, Y.; Gan, Z.; Wang, J.; Wang, S.; Wang, L.; Zhu, C.; Zhang, P.; Yuan, L.; Peng, N.; Liu, Z.; and Zeng, M. 2022a. An Empirical Study of Training End-to-End Vision-and-Language Transformers. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 18145–18155.
* Dou et al. (2022b) Dou, Z.-Y.; Kamath, A.; Gan, Z.; Zhang, P.; Wang, J.; Li, L.; Liu, Z.; Liu, C.; LeCun, Y.; Peng, N.; Gao, J.; and Wang, L. 2022b. Coarse-to-Fine Vision-Language Pre-training with Fusion in the Backbone. In _Advances in Neural Information Processing Systems_.
* Duan et al. (2022) Duan, J.; Chen, L.; Tran, S.; Yang, J.; Xu, Y.; Zeng, B.; and Chilimbi, T. 2022. Multi-modal Alignment using Representation Codebook. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 15630–15639.
* Geng et al. (2022) Geng, X.; Liu, H.; Lee, L.; Schuurams, D.; Levine, S.; and Abbeel, P. 2022. Multimodal Masked Autoencoders Learn Transferable Representations. _arXiv preprint arXiv:2205.14204_.
* Goyal et al. (2017) Goyal, Y.; Khot, T.; Summers-Stay, D.; Batra, D.; and Parikh, D. 2017. Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 6325–6334.
* Gui et al. (2022) Gui, L.; Huang, Q.; Hauptmann, A.; Bisk, Y.; and Gao, J. 2022. Training Vision-Language Transformers from Captions Alone. _arXiv preprint arXiv:2205.09256_.
* Hazimeh et al. (2021) Hazimeh, H.; Zhao, Z.; Chowdhery, A.; Sathiamoorthy, M.; Chen, Y.; Mazumder, R.; Hong, L.; and Chi, E. H. 2021. DSelect-k: Differentiable Selection in the Mixture of Experts with Applications to Multi-Task Learning. In _Advances in Neural Information Processing Systems_ , 29335–29347.
* He et al. (2022a) He, K.; Chen, X.; Xie, S.; Li, Y.; Dollár, P.; and Girshick, R. B. 2022a. Masked Autoencoders Are Scalable Vision Learners. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 15979–15988.
* He et al. (2022b) He, S.; Guo, T.; Dai, T.; Qiao, R.; Wu, C.; Shu, X.; and Ren, B. 2022b. VLMAE: Vision-Language Masked Autoencoder. _arXiv preprint arXiv:2208.09374_.
* Jia et al. (2021) Jia, C.; Yang, Y.; Xia, Y.; Chen, Y.; Parekh, Z.; Pham, H.; Le, Q. V.; Sung, Y.; Li, Z.; and Duerig, T. 2021. Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision. In _Proceedings of the 38th International Conference on Machine Learning_ , volume 139, 4904–4916.
* Karpathy and Fei-Fei (2015) Karpathy, A.; and Fei-Fei, L. 2015. Deep visual-semantic alignments for generating image descriptions. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 3128–3137.
* Kim, Son, and Kim (2021) Kim, W.; Son, B.; and Kim, I. 2021. ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision. In _Proceedings of the 38th International Conference on Machine Learning_ , volume 139, 5583–5594.
* Krishna et al. (2017) Krishna, R.; Zhu, Y.; Groth, O.; Johnson, J.; Hata, K.; Kravitz, J.; Chen, S.; Kalantidis, Y.; Li, L.; Shamma, D. A.; Bernstein, M. S.; and Fei-Fei, L. 2017. Visual Genome: Connecting Language and Vision Using Crowdsourced Dense Image Annotations. _International Journal of Computer Vision_, 123(1): 32–73.
* Kwon et al. (2023) Kwon, G.; Cai, Z.; Ravichandran, A.; Bas, E.; Bhotika, R.; and Soatto, S. 2023. Masked Vision and Language Modeling for Multi-modal Representation Learning. In _International Conference on Learning Representations_.
* Lepikhin et al. (2021) Lepikhin, D.; Lee, H.; Xu, Y.; Chen, D.; Firat, O.; Huang, Y.; Krikun, M.; Shazeer, N.; and Chen, Z. 2021. GShard: Scaling Giant Models with Conditional Computation and Automatic Sharding. In _International Conference on Learning Representations_.
* Lewis et al. (2021) Lewis, M.; Bhosale, S.; Dettmers, T.; Goyal, N.; and Zettlemoyer, L. 2021. BASE Layers: Simplifying Training of Large, Sparse Models. In _Proceedings of the 38th International Conference on Machine Learning_ , volume 139, 6265–6274.
* Li et al. (2022) Li, J.; Li, D.; Xiong, C.; and Hoi, S. 2022. BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation.
* Li et al. (2021) Li, J.; Selvaraju, R. R.; Gotmare, A.; Joty, S. R.; Xiong, C.; and Hoi, S. C. 2021\. Align before Fuse: Vision and Language Representation Learning with Momentum Distillation. In _Advances in Neural Information Processing Systems_ , 9694–9705.
* Li et al. (2019) Li, L. H.; Yatskar, M.; Yin, D.; Hsieh, C.; and Chang, K. 2019. VisualBERT: A Simple and Performant Baseline for Vision and Language. _arXiv preprint arXiv:1908.03557_.
* Lin et al. (2014) Lin, T.; Maire, M.; Belongie, S. J.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common Objects in Context. In _Proceedings of the European Conference on Computer Vision_ , 740–755.
* Liu et al. (2023) Liu, S.; Fan, L.; Johns, E.; Yu, Z.; Xiao, C.; and Anandkumar, A. 2023. Prismer: A Vision-Language Model with An Ensemble of Experts. _arXiv preprint arXiv:2303.02506_.
* Loshchilov and Hutter (2019) Loshchilov, I.; and Hutter, F. 2019. Decoupled Weight Decay Regularization. In _International Conference on Learning Representations_.
* Mustafa et al. (2022) Mustafa, B.; Riquelme, C.; Puigcerver, J.; Jenatton, R.; and Houlsby, N. 2022. Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts. In _Advances in Neural Information Processing Systems_.
* Ordonez, Kulkarni, and Berg (2011) Ordonez, V.; Kulkarni, G.; and Berg, T. L. 2011. Im2Text: Describing Images Using 1 Million Captioned Photographs. In _Advances in Neural Information Processing Systems_ , 1143–1151.
* Ouyang et al. (2022) Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C. L.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; Schulman, J.; Hilton, J.; Kelton, F.; Miller, L.; Simens, M.; Askell, A.; Welinder, P.; Christiano, P. F.; Leike, J.; and Lowe, R. 2022. Training language models to follow instructions with human feedback. _arXiv preprint arXiv:2203.02155_.
* Peng et al. (2022) Peng, Z.; Dong, L.; Bao, H.; Ye, Q.; and Wei, F. 2022. BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers. _arXiv preprint arXiv:2208.06366_.
* Plummer et al. (2015) Plummer, B. A.; Wang, L.; Cervantes, C. M.; Caicedo, J. C.; Hockenmaier, J.; and Lazebnik, S. 2015. Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models. In _IEEE/CVF International Conference on Computer Vision_ , 2641–2649.
* Radford et al. (2021) Radford, A.; Kim, J. W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; Krueger, G.; and Sutskever, I. 2021. Learning Transferable Visual Models From Natural Language Supervision. In _Proceedings of the 38th International Conference on Machine Learning_ , volume 139, 8748–8763.
* Ramesh et al. (2021) Ramesh, A.; Pavlov, M.; Goh, G.; Gray, S.; Voss, C.; Radford, A.; Chen, M.; and Sutskever, I. 2021. Zero-Shot Text-to-Image Generation. In Meila, M.; and Zhang, T., eds., _Proceedings of the 38th International Conference on Machine Learning_ , volume 139, 8821–8831.
* Riquelme et al. (2021) Riquelme, C.; Puigcerver, J.; Mustafa, B.; Neumann, M.; Jenatton, R.; Pinto, A. S.; Keysers, D.; and Houlsby, N. 2021. Scaling Vision with Sparse Mixture of Experts. In _Advances in Neural Information Processing Systems_ , 8583–8595.
* Selvaraju et al. (2017) Selvaraju, R. R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; and Batra, D. 2017. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In _IEEE/CVF International Conference on Computer Vision_ , 618–626.
* Sharma et al. (2018) Sharma, P.; Ding, N.; Goodman, S.; and Soricut, R. 2018. Conceptual Captions: A Cleaned, Hypernymed, Image Alt-text Dataset For Automatic Image Captioning. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics_ , 2556–2565.
* Shazeer et al. (2017) Shazeer, N.; Mirhoseini, A.; Maziarz, K.; Davis, A.; Le, Q. V.; Hinton, G. E.; and Dean, J. 2017. Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer. In _International Conference on Learning Representations_.
* Su et al. (2020) Su, W.; Zhu, X.; Cao, Y.; Li, B.; Lu, L.; Wei, F.; and Dai, J. 2020. VL-BERT: Pre-training of Generic Visual-Linguistic Representations. In _International Conference on Learning Representations_.
* Suhr et al. (2019) Suhr, A.; Zhou, S.; Zhang, A.; Zhang, I.; Bai, H.; and Artzi, Y. 2019. A Corpus for Reasoning about Natural Language Grounded in Photographs. In _Proceedings of the 57th Conference of the Association for Computational Linguistics_ , 6418–6428.
* Vaswani et al. (2017) Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is All you Need. In _Advances in Neural Information Processing Systems_ , 5998–6008.
* Wang et al. (2022a) Wang, J.; Yang, Z.; Hu, X.; Li, L.; Lin, K.; Gan, Z.; Liu, Z.; Liu, C.; and Wang, L. 2022a. GIT: A Generative Image-to-text Transformer for Vision and Language. _Transactions on Machine Learning Research_.
* Wang et al. (2023) Wang, W.; Bao, H.; Dong, L.; Bjorck, J.; Peng, Z.; Liu, Q.; Aggarwal, K.; Mohammed, O. K.; Singhal, S.; Som, S.; and Wei, F. 2023. Image as a foreign language: BEiT pretraining for vision and vision-language tasks. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition_.
* Wang et al. (2022b) Wang, Z.; Yu, J.; Yu, A. W.; Dai, Z.; Tsvetkov, Y.; and Cao, Y. 2022b. SimVLM: Simple Visual Language Model Pretraining with Weak Supervision. In _International Conference on Learning Representations_.
* Wei et al. (2022a) Wei, C.; Fan, H.; Xie, S.; Wu, C.; Yuille, A. L.; and Feichtenhofer, C. 2022a. Masked Feature Prediction for Self-Supervised Visual Pre-Training. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 14648–14658.
* Wei et al. (2022b) Wei, J.; Tay, Y.; Bommasani, R.; Raffel, C.; Zoph, B.; Borgeaud, S.; Yogatama, D.; Bosma, M.; Zhou, D.; Metzler, D.; Chi, E. H.; Hashimoto, T.; Vinyals, O.; Liang, P.; Dean, J.; and Fedus, W. 2022b. Emergent Abilities of Large Language Models. _arXiv preprint arXiv:2206.07682_.
* Yang et al. (2022) Yang, J.; Duan, J.; Tran, S.; Xu, Y.; Chanda, S.; Chen, L.; Zeng, B.; Chilimbi, T.; and Huang, J. 2022. Vision-Language Pre-Training with Triple Contrastive Learning. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 15650–15659.
* Yao et al. (2022) Yao, L.; Huang, R.; Hou, L.; Lu, G.; Niu, M.; Xu, H.; Liang, X.; Li, Z.; Jiang, X.; and Xu, C. 2022. FILIP: Fine-grained Interactive Language-Image Pre-Training. In _International Conference on Learning Representations_.
* Yuan et al. (2021) Yuan, L.; Chen, D.; Chen, Y.; Codella, N.; Dai, X.; Gao, J.; Hu, H.; Huang, X.; Li, B.; Li, C.; Liu, C.; Liu, M.; Liu, Z.; Lu, Y.; Shi, Y.; Wang, L.; Wang, J.; Xiao, B.; Xiao, Z.; Yang, J.; Zeng, M.; Zhou, L.; and Zhang, P. 2021. Florence: A New Foundation Model for Computer Vision. _arXiv preprint arXiv:2111.11432_.
* Zeng, Zhang, and Li (2022) Zeng, Y.; Zhang, X.; and Li, H. 2022. Multi-Grained Vision Language Pre-Training: Aligning Texts with Visual Concepts. In _Proceedings of the 39th International Conference on Machine Learning_ , volume 162, 25994–26009.
* Zhang et al. (2021) Zhang, P.; Li, X.; Hu, X.; Yang, J.; Zhang, L.; Wang, L.; Choi, Y.; and Gao, J. 2021. VinVL: Revisiting Visual Representations in Vision-Language Models. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 5579–5588.
* Zhang et al. (2023) Zhang, X.; Zeng, Y.; Zhang, J.; and Li, H. 2023. Toward Building General Foundation Models for Language, Vision, and Vision-Language Understanding Tasks. _arXiv preprint arXiv:2301.05065_.
* Zhao et al. (2022) Zhao, Z.; Guo, L.; He, X.; Shao, S.; Yuan, Z.; and Liu, J. 2022. MAMO: Masked Multimodal Modeling for Fine-Grained Vision-Language Representation Learning. _arXiv preprint arXiv:2210.04183_.
* Zhu et al. (2022) Zhu, J.; Zhu, X.; Wang, W.; Wang, X.; Li, H.; Wang, X.; and Dai, J. 2022. Uni-Perceiver-MoE: Learning Sparse Generalist Models with Conditional MoEs. In _Advances in Neural Information Processing Systems_.
* Zoph et al. (2022) Zoph, B.; Bello, I.; Kumar, S.; Du, N.; Huang, Y.; Dean, J.; Shazeer, N.; and Fedus, W. 2022. ST-MoE: Designing Stable and Transferable Sparse Expert Models. _arXiv preprint arXiv:2202.08906_.
Method | Inference Time | #Params Per Token | VQA
---|---|---|---
ALBEF (Li et al. 2021) | 35.3 min | 210M | 74.54
VLMO (Bao et al. 2022a) | 21.6 min | 180M | 76.64
EVE-Base (Ours) | 24.8 min | 190M | 78.00
Table 9: VQA test set inference time, parameters per token, and VQA test-dev accuracy of different methods on 8 V100 GPUs. The inference time of other methods is reproduced by us.

MIM | MLM | ITC | ITM | NLVR2 dev | NLVR2 test-P
---|---|---|---|---|---
✓ | ✓ | ✓ | | 79.5 | 79.4
✓ | ✓ | | ✓ | 81.4 | 81.7
✓ | ✓ | ✓ | ✓ | 81.6 | 81.8
✓ | ✓ | | | 81.6 | 82.8
Table 10: Ablation study on more pre-training tasks. All models are pre-trained with the same pre-training GPU hours. We use the model fine-tuned on the retrieval task for initialization.

Decoder Depth | NLVR2 dev | NLVR2 test-P | Flickr30K TR | Flickr30K IR | VQA
---|---|---|---|---|---
2 | 79.7 | 80.2 | 93.6 | 80.1 | 77.2
4 | 79.3 | 80.4 | 93.5 | 80.4 | 77.1
8 | 79.7 | 80.1 | 93.9 | 80.7 | 77.3
12 | 79.2 | 80.1 | 93.0 | 79.8 | 77.1
Table 11: Ablation study on MIM decoder depth.

Position | NLVR2 dev | NLVR2 test-P | Flickr30K TR | Flickr30K IR | VQA
---|---|---|---|---|---
[11,12] | 79.7 | 80.1 | 93.9 | 80.7 | 77.3
[10,12] | 79.6 | 79.8 | 93.3 | 80.4 | 77.2
[6,7] | 78.6 | 79.2 | 92.3 | 78.4 | 76.7
[1,2] | 79.0 | 79.7 | 92.8 | 78.7 | 76.9
Table 12: Ablation study on the position of the soft router. Transformer blocks with a soft router are listed.

Shallow FFN | NLVR2 dev | NLVR2 test-P | Flickr30K TR | Flickr30K IR | VQA
---|---|---|---|---|---
Shared FFN | 79.3 | 79.6 | 93.7 | 80.5 | 76.6
Hard Router | 79.7 | 80.1 | 93.9 | 80.7 | 77.3
Soft Router | 80.8 | 80.5 | 95.0 | 81.1 | 77.5
Table 13: Ablation study on shallow (bottom-10 layers) FFN design. Shared FFN
indicates different modalities use the same FFN.
## Appendix A Pre-training and Inference Speed
We present pre-training time in Figure 1 and inference time in Table 9. We
exclude the data preprocessing time in inference for a fair comparison. Our
method surpasses other methods in pre-training speed, especially compared to
VL-BEiT (Bao et al. 2022b) and VLMO (Bao et al. 2022a) by a large margin. We
test inference speed on the VQA test set. After adding the MoE, EVE's
inference time does not increase significantly: EVE achieves the best
performance while being only slightly slower than VLMO and much faster than
ALBEF. This is consistent with the parameters per token of these models.
## Appendix B More Ablation Studies
##### MIM Decoder Depth
In Table 11, we compare different depths of the decoder in masked image
modeling. Because downstream tasks provide full supervision, the various
decoder designs show no noticeable difference, though performance drops
slightly with a very deep decoder.
##### Position of Soft Router
We also explore the impact of the position of the soft router in Table 12. The
experimental results indicate that performance is highest when the soft router
is placed in the top layers, followed by the bottom layers, and then the
middle layers. Using soft routers in non-adjacent layers yields slightly lower
performance than using them in consecutive layers. We argue that high-level
features are relatively uniform, while the modality information in low-level
features is more pronounced, making it easier for the router to process.
Additionally, high-level features require more fusion, resulting in better
performance when the soft router is placed in the top layers.
##### Shallow FFN
Table 13 presents the results of different designs of FFN in the shallow
layers. Experimental results show that using modality-specific MoE to obtain
modality information in shallow layers can achieve better results than using
modality-shared FFN, which emphasizes more on modality fusion. Using a soft
router can combine the advantages of both approaches and exhibit promising
performance, but this will lead to an increase in computational overhead.
## Appendix C Model Architecture Implementation Details
EVE-Base consists of 12 Transformer blocks with 12 attention heads and 768
hidden size. EVE-Large consists of 24 Transformer blocks with 16 attention
heads and 1024 hidden size. We employ a soft router with 32 experts in EVE-
Base on top-2 layers, EVE-Large on top-3 layers, and a hard router on the
other layers. We pre-train EVE-Base for 480k steps with a batch size of 2048
on 16 A100 GPUs and EVE-Large with the same batch size for 280k steps on 32
A100 GPUs. We use AdamW (Loshchilov and Hutter 2019) optimizer with
$\beta_{1}=0.9$, $\beta_{2}=0.98$ and a weight decay of 0.05. The peak
learning rate is 5e-4 for EVE-Base and 2e-4 for EVE-Large. We use linear
warmup for the first 10k steps and cosine learning rate decay. During pre-
training, the image resolution is $224\times 224$, and the patch size is
$16\times 16$. We use random resized cropping and horizontal flipping for data
augmentation. We mask 75% of image patches by random sampling in masked image
modeling and 50% of text in masked language modeling. EVE is initialized with
BEiTv2 (Peng et al. 2022). Mixed-precision is used for pre-training.
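The learning-rate schedule described above (linear warmup for the first 10k steps, then cosine decay) can be written out directly, here using EVE-Base's numbers; the final learning rate of 0 is an assumption:

```python
import math

def lr_at_step(step, peak_lr=5e-4, warmup=10_000, total=480_000, min_lr=0.0):
    """Linear warmup to peak_lr over `warmup` steps, then cosine decay
    to min_lr over the remaining steps, per the EVE-Base schedule."""
    if step < warmup:
        return peak_lr * step / warmup
    t = (step - warmup) / (total - warmup)   # progress through decay, 0..1
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * t))
```

For EVE-Large one would swap in `peak_lr=2e-4` and `total=280_000`.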
## Appendix D Downstream Tasks Fine-tuning Details
We use the model fine-tuned on MSCOCO Retrieval Task for other downstream
tasks. We use AdamW (Loshchilov and Hutter 2019) optimizer with a weight decay
of 0.01 and a cosine learning rate scheduler. We use linear warmup in the
first 10% steps during fine-tuning. The input image resolution is $480\times
480$ for VQA and $384\times 384$ for other tasks.
### Image-Text Understanding Task
##### Visual Question Answering (VQA)
Following previous methods (Li et al. 2021; Zhao et al. 2022), we use both
train and validation sets in VQA2.0 dataset (Goyal et al. 2017) for training,
and we do not use question-answer pairs from Visual Genome (Krishna et al.
2017) for augmentation. Following (Bao et al. 2022a; Wang et al. 2023), we
view the task as a classification task to choose the answer from a set of size
3129. $\boldsymbol{T}_{\text{cls}}$ token is used as the representation of the
image-question pair and fed into a two-layer classifier to predict the answer.
We fine-tune the models for 10 epochs with 128 batch size and a peak learning
rate of 3e-5.
##### Natural Language for Visual Reasoning (NLVR2)
Following (Li et al. 2021; Zhao et al. 2022), we convert the triplet input
into two image-text pairs with the same text description and different images
and extract their multimodal representation separately. We use a bi-attention
module to fuse two multimodal representations following (Zhao et al. 2022). We
concatenate fused representations and use a two-layer classifier to predict
the answer. We fine-tune the models for 10 epochs with 128 batch size. The
peak learning rate is 4e-5 for EVE-Base and 3e-5 for EVE-Large.
### Image-Text Retrieval Task
We split MSCOCO and Flickr30K into train, validation, and test sets following
widely used Karpathy split (Karpathy and Fei-Fei 2015). During inference, we
select the top-128 candidates using ITC scores and rerank them based on ITM
score following (Li et al. 2021; Zhao et al. 2022).
##### MSCOCO Image-Text Retrieval
We fine-tune the models with 256 batch size for 10 epochs. The peak learning
rate is 3.5e-5 for the base model and 5e-5 for the large model.
##### Flickr Image-Text Retrieval
We fine-tune the models with 128 batch size for 10 epochs and a peak learning
rate of 1.5e-5.
## Appendix E Visualization
We use Grad-CAM (Selvaraju et al. 2017) heatmap to visualize the self-
attention maps of EVE in masked signal modeling and VQA Task. We employ Grad-
CAM visualization for both masked image patches and masked text tokens on the
last layer of EVE. For MLM, we mask a keyword and take the Grad-CAM heatmap of
the corresponding image as the result. For MIM, we randomly mask 75% of
patches, as in pre-training, and take the Grad-CAM values of the corresponding
text as the result. It is obvious that EVE pays more attention to information from other
modalities that is semantically related to the masked portion. This reflects
that complementary information can be learned between different modalities
through simple masked signal modeling, without necessarily having to use
complex pre-training tasks.
### Visualization on Masked Language Modeling
In Figure 6 we present some examples of Grad-CAM (Selvaraju et al. 2017)
visualization on a masked keyword in Masked Language Modeling. The heatmaps on
images are from the last layer of EVE-Base.
Figure 6: More examples of Grad-CAM visualization of Masked Language Modeling
on masked text tokens. EVE pays more attention to the masked word in the image
to reconstruct the word.
### Visualization on Masked Image Modeling
In Figure 7 we present more examples of Grad-CAM (Selvaraju et al. 2017)
visualization on masked image patches in Masked Image Modeling. The weights on
texts are from the last layer of EVE-Base.
Figure 7: More examples of Grad-CAM visualization of Masked Image Modeling on
masked image patches. We present the Grad-CAM value of each word in the
histogram. Similar to MLM, EVE places more emphasis on the masked image
regions described by the text.
### Visualization on VQA
We show some examples of the Grad-CAM (Selvaraju et al. 2017) heatmap from the
last layer of EVE-Base on VQA in Figure 8.
Figure 8: Grad-CAM visualization on VQA. It is clear that EVE focuses on the
critical regions in the image that can answer the question.
# Fast Posterior Probability Sampling with Normalizing Flows and Its
Applicability in Bayesian analysis in Particle Physics
Mathias El Baz, Federico Sánchez
Département de Physique Nucléaire et Corpusculaire, Université de Genève
###### Abstract
In this study, we use Rational-Quadratic Neural Spline Flows, a sophisticated
parametrization of Normalizing Flows, for inferring posterior probability
distributions in scenarios where direct evaluation of the likelihood is
challenging at inference time. We exemplify this approach using the T2K near
detector as a working example, focusing on learning the posterior probability
distribution of neutrino flux binned in neutrino energy. The predictions of
the trained model are conditioned at inference time by the momentum and angle
of the outgoing muons released after neutrino-nuclei interaction. This
conditioning allows for the generation of personalized posterior
distributions, tailored to the muon observables, all without necessitating a
full retraining of the model for each new dataset. The performances of the
model are studied for different shapes of the posterior distributions.
††preprint: APS/123-QED
## Introduction
Particle physics experiments often face the challenge of deducing hidden
properties underlying the collected data. For instance, when studying
neutrinos, statistical methods are employed to estimate latent variables like
neutrino masses and mixing angles.
In this context, both frequentist and Bayesian methods are used and even
sometimes cross-validated. The frequentist approach focuses on deriving
estimates by analyzing the observed data, assuming that the latent variables
have fixed, but unknown, values. This implies searching for a single set of
latent variable values that is inferred with tools such as maximum likelihood
estimation. On the other hand, the Bayesian approach does not assume fixed
latent variables, but aims to estimate their posterior distribution. This is
particularly advantageous when dealing with complex parameter spaces, common
in particle physics, as it facilitates the quantification of uncertainty and
the exploration of parameter correlations.
However, the implementation of Bayesian methods is not without challenges,
especially in situations where sampling the posterior probability distribution
is required to be fast. Traditional Bayesian methods such as Variational
Inference (VI) or Markov Chain Monte-Carlo (MCMC) rely heavily on the
likelihood estimation, often rendering them ineffective when the density
estimation is slow. Moreover, these methods suffer from long “burn-in” or
“training” times, making the overall sampling process very slow.
The rapid advancements in Artificial Intelligence, particularly in Deep
Learning have unlocked new avenues for fast and flexible Bayesian inference.
In this context, our paper explores an alternative machine-learning-based
method for Bayesian inference, using a model based on Conditional Normalizing
Flows (CNF). Normalizing flows, the foundation of CNF, model complex
distributions by transforming a simple distribution like a normal distribution
into a more intricate one. This is achieved through a series of learnable
transformations that progressively shape the simple distribution into one that
closely resembles the target distribution. CNF extends this concept by
conditioning the flow transformation on the given dataset of observed
variables at inference time. This adaptability allows for the elimination of
the need for retraining with new datasets, making the sampling process more
effective. Furthermore, unlike traditional likelihood-based Bayesian methods
such as MCMC, CNF does not require the evaluation of the likelihood at
inference time, thereby making the sampling faster when the calculation of the
likelihood is time-consuming.
To illustrate this method, we will use the near detector fit of the T2K (Tokai
to Kamioka) experiment [1] as a working example for our CNF model. At the near
detector, muon neutrinos interact with nuclei releasing muons observed
further. In our exploratory approach, we aim to estimate the posterior
distributions of the neutrino energy bin values (the latent variables) from a
dataset of muon momenta and angles (the observed variables). The variance of
the posterior probability density reflects the uncertainty created by the
Poissonian statistics when inferring the latent variables. A detailed problem
description is given in Section I. Section II introduces the main concepts of
conditional probability estimation using normalizing flows. We apply this
concept to a simplified version of the near detector fit in Section III.
Finally, Section IV is an exploration of how CNFs behave when tasked to
predict a more complex posterior distribution.
## I Problem definition
The T2K experiment investigates neutrino oscillations. The principal challenge
encountered by the T2K collaboration is the complex parametrization of the
models of neutrino cross-section, neutrino flux as a function of the neutrino
energy and detector response used to infer the oscillation parameters. One
objective of the near detector data fit is to constrain the neutrino cross-
section and flux models. The near detector data fit involves searching for an
optimal set of parameters describing how systematic uncertainties change the
predictions on the event rate, given the data. So far, only the outgoing muon
observables $(p_{\mu},\theta_{\mu})$ after interaction between the neutrino
and the nuclei are used for the near detector fit at T2K. The near detector
data fit is achieved by studying a binned Barlow-Beeston[2] likelihood with
penalty terms corresponding to the Gaussian priors of the systematics
parameters of the flux, cross-section, and detector response models [3]
jointly noted $\vec{s}$:
$-\ln(L)=\sum_{i}\left[\beta_{i}N_{i}^{p}(\vec{s})-N_{i}^{d}+N_{i}^{d}\ln\left(\frac{N_{i}^{d}}{\beta_{i}N_{i}^{p}(\vec{s})}\right)+\frac{(\beta_{i}-1)^{2}}{2\sigma_{\beta_{i}}^{2}}\right]+\sum_{i,j}\Delta(\vec{s})_{i}\left(V_{\vec{s}}^{-1}\right)_{i,j}\Delta(\vec{s})_{j}$
where the sum runs over the energy bins, $N_{i}^{d}$ is the observed number of
events and $N_{i}^{p}(\vec{s})$ is the predicted number of events in the
$i$-th bin. The Barlow-Beeston likelihood introduces parameters $\beta_{i}$
that account for the Poisson fluctuations in the Monte-Carlo dataset that
yields $N_{i}^{p}$. The second term captures the Gaussian uncertainties
arising from the systematics $\vec{s}$ with covariance matrix $V_{\vec{s}}$.
T2K’s near detector fit uses a frequentist method called BANFF [4] and a
Bayesian method rooted in MCMC called Mach3 [5]. In our exploratory method, we
exclude the T2K systematics that account for cross-sections and detector
uncertainties. Furthermore, the remaining latent variable, represented by the
vector of Barlow-Beeston reweights $\beta$, only conditions the marginal
distribution of $E_{\nu}$. Therefore, the joint probability distribution of
seeing an event with $(p_{\mu},\theta_{\mu})$ is as follows:
$p(p_{\mu},\theta_{\mu}|\mathbf{\beta})=\int
p(p_{\mu},\theta_{\mu}|E_{\nu})\times p(E_{\nu}|\mathbf{\beta})dE_{\nu}$
Given a dataset denoted as $\mathbf{X}_{d}$, consisting of events sampled from
the distribution $p(p_{\mu},\theta_{\mu}|\mathbf{\beta})$, the objective of
our deep learning model is to predict the corresponding posterior distribution
$p(\beta|\mathbf{X}_{d})$ during the inference phase. The vector $\beta$,
representing Barlow-Beeston reweights, is the only latent variable in this
context. Hence, the spread of the posterior distribution is solely a
reflection of the Poisson statistics. Our goal is to design a model to perform
Bayesian inference automatically at inference time for a diverse range of
datasets $\mathbf{X}_{d}$.
As Papamakarios outlines in his thesis [6], Bayesian techniques can be divided
into two categories: “likelihood-based inference” methods, which require an
estimation of the likelihood density $p(\mathbf{X}_{d}|\beta)$ during
inference, and “likelihood-free inference” methods, which do not.
By binning the $(p_{\mu},\theta_{\mu})$ space, we can estimate the Poisson
negative log-likelihood given the observation of a dataset $\mathbf{X}_{d}$
with pixel counts $N_{\text{pix}}^{d}$, as follows:
$-\ln p(\mathbf{X}_{d}|\beta)=\sum_{\text{pix}}\left[N_{\text{pix}}^{p}(\beta)-N_{\text{pix}}^{d}+N_{\text{pix}}^{d}\ln\left(\frac{N_{\text{pix}}^{d}}{N_{\text{pix}}^{p}(\beta)}\right)\right]$
(1)
Here, the summation runs over the pixels, and $N_{\text{pix}}^{p}(\beta)$
denotes the predicted number of events in pixel $\text{pix}$ for the reweight $\beta$.
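The binned Poisson negative log-likelihood of Eq. (1) is straightforward to evaluate numerically. Below is a hedged sketch (function and argument names are ours); empty data pixels contribute only the predicted count, since $n\ln n \to 0$ as $n \to 0$.

```python
import numpy as np

def poisson_nll(n_pred, n_data):
    """Binned Poisson negative log-likelihood of Eq. (1).

    n_pred: predicted counts per (p_mu, theta_mu) pixel for a given beta.
    n_data: observed counts per pixel.
    """
    n_pred = np.asarray(n_pred, dtype=float)
    n_data = np.asarray(n_data, dtype=float)
    # Guard the log so that pixels with zero observed counts contribute
    # only the n_pred term (the n*log(n) term vanishes as n -> 0).
    safe_data = np.where(n_data > 0, n_data, 1.0)
    log_term = np.where(n_data > 0, n_data * np.log(safe_data / n_pred), 0.0)
    return float(np.sum(n_pred - n_data + log_term))
```

The likelihood vanishes when prediction matches data exactly, and grows with any mismatch, as expected for a goodness-of-fit statistic.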
Using the likelihood to train and test our Deep Learning model is therefore
feasible. However, this calculation is computationally intensive, especially
for a large number of pixels or in a high-dimensional latent space. We
therefore opted for an alternative method that does not require likelihood
estimation during training and inference. This method will be discussed
further in the following sections and in Appendix A.
## II Conditional density estimation with Normalizing Flows
Normalizing Flows (NF) are often used to estimate a target distribution. In
this particular context, the adoption of NF emerges organically as a means to
predict a posterior distribution $q_{\phi}(\beta|\mathbf{X}_{d})$ close to the
true posterior distribution $p(\beta|\mathbf{X}_{d})$, where $\phi$ are the
parameters of the NF model. The basic concepts of Normalizing Flows will be
explained in Section II.1. A more complete overview of Normalizing Flows can
be found in the review of Papamakarios et al [7]. In this work, the density
estimation is performed through Rational-Quadratic Neural Spline Flows, a
specific implementation of Normalizing Flows that will be presented in Section
II.2. Finally, Section II.3 describes the utilization of Normalizing Flows for
Bayesian inference, explaining the concept of “Conditional Normalizing Flows”
as a means to learn a posterior distribution without the need to re-train the
model for new datasets $\mathbf{X}_{d}$.
### II.1 Normalizing Flows
#### II.1.1 Definition
At its core, a Normalizing Flow is a diffeomorphism of the probabilistic space
that transforms a simple probability distribution into a more complex one. The
concept revolves around the simple change of variable rule in probability
theory.
For a diffeomorphism of random variables, from $\mathbf{x}$ to $\mathbf{z}$,
where $\mathbf{z}=T(\mathbf{x})$, the probability density function of
$\mathbf{z}$ is related to the one of $\mathbf{x}$ as follows:
$p_{\mathbf{z}}(\mathbf{z})=p_{\mathbf{x}}(\mathbf{x})\left|\det\left(\mathbf{J}_{T}(\mathbf{x})\right)\right|^{-1}$
with $\mathbf{J}_{T}(\mathbf{x})$ the Jacobian of the transformation. Here
$p_{\mathbf{x}}$ represents the base distribution (in general a normal
distribution) from which we model the target distribution $p_{\mathbf{z}}$ by
applying the diffeomorphism $T$ to the probabilistic space.
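As a concrete instance of the change-of-variable rule, the sketch below pushes a standard normal through a one-dimensional affine map, a toy stand-in for a learned flow (the names and the specific map are illustrative):

```python
import numpy as np

def base_density(x):
    """Standard normal base density p_x."""
    return np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

def pushforward_density(z, scale=2.0, shift=1.0):
    """Density of z = T(x) = scale*x + shift via the change-of-variable rule:
    p_z(z) = p_x(T^{-1}(z)) * |det J_T|^{-1}, with |det J_T| = |scale| here."""
    x = (z - shift) / scale
    return base_density(x) / abs(scale)
```

For this affine $T$, the pushforward is exactly the $\mathcal{N}(\text{shift},\text{scale}^{2})$ density, which gives an easy correctness check.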
The flow transformation can be composed of multiple NFs. Suppose we represent
the transformation $T$ as a composition of simpler $T_{k}$ transformations,
with $T=T_{K}\circ\cdots\circ T_{1}$. Starting with an initial value
$\mathbf{z}_{0}=\mathbf{x}$ and target value $\mathbf{z}_{K}=\mathbf{z}$, we
can evaluate the transformation and compute the Jacobian as follows:
$\mathbf{z}_{k}=T_{k}\left(\mathbf{z}_{k-1}\right),\quad k=1:K,$
$\left|J_{T}(\mathbf{z})\right|=\left|\prod_{k=1}^{K}J_{T_{k}}\left(\mathbf{z}_{k-1}\right)\right|,$
where $J_{T_{k}}\left(\mathbf{z}_{k-1}\right)$ represents the Jacobian
determinant of the $T_{k}$ transformation evaluated at $\mathbf{z}_{k-1}$. In
practical applications of Normalizing Flows, the transformations $T_{k}$ (or
$T_{k}^{-1}$) are often implemented using a neural network, which provides the
required flexibility to model complex mappings.
#### II.1.2 Loss function
The training of neural networks requires a loss function to estimate the
divergence between the predicted and true distributions. The Kullback-Leibler
(KL) divergence is a fundamental concept in statistics, widely used to compare
two probability distributions. The key intuition behind using KL-divergence
lies in its ability to quantify the information lost when one distribution is
used to approximate another. Formally, for two probability distributions P and
Q over the same space $\Omega$, the KL-divergence is defined as :
$D_{\text{KL}}(P||Q)=\sum_{x\in\Omega}P(x)\log\left(\frac{P(x)}{Q(x)}\right)$
An essential characteristic of the KL-divergence is that it is non-negative,
with $D_{\text{KL}}(P||Q)=0$ if and only if $P$ and $Q$ are identical
distributions. This property allows us to derive a loss function from the KL-
divergence as:
$\displaystyle L(\phi)=D_{\mathrm{KL}}\left(p_{\mathbf{z}}(\mathbf{z})\,\|\,q_{\phi}(\mathbf{z})\right)=\mathbb{E}_{\mathbf{z}\sim p_{\mathbf{z}}(\mathbf{z})}\left[\log p_{\mathbf{z}}(\mathbf{z})-\log q_{\phi}(\mathbf{z})\right]$
$\displaystyle=\mathbb{E}_{\mathbf{z}\sim p_{\mathbf{z}}(\mathbf{z})}\left[\log p_{\mathbf{z}}(\mathbf{z})\right]-\mathbb{E}_{\mathbf{z}\sim p_{\mathbf{z}}(\mathbf{z})}\left[\log p_{\mathbf{x}}\left(T^{-1}(\mathbf{z};\phi)\right)+\log\left|\operatorname{det}J_{T}^{-1}(\mathbf{z};\phi)\right|\right]$
where $\mathbb{E}_{\mathbf{z}\sim p_{\mathbf{z}}(\mathbf{z})}$ is the
expectation for samples of the target distribution $p_{\mathbf{z}}$, $\phi$
represents the parameters of the flow $T$ parametrized by the neural network,
$p_{\mathbf{x}}$ is the base distribution and $q_{\phi}$ is the predicted
distribution.
The loss function can be computed for target densities $p_{\mathbf{z}}$ from
which one can sample; the density evaluation at a specific point $\mathbf{z}$
is not required. When optimizing the transformation $T$, we estimate the
gradient of the KL-divergence by drawing samples
$\mathbf{z}_{n}\sim p_{\mathbf{z}}(\mathbf{z})$ from the target distribution:
$\displaystyle\nabla_{\phi}L(\phi)\approx-\frac{1}{N}\sum_{n=1}^{N}\left[\nabla_{\phi}\log p_{\mathbf{x}}\left(T^{-1}\left(\mathbf{z}_{n};\phi\right)\right)+\nabla_{\phi}\log\left|\operatorname{det}J_{T}^{-1}\left(\mathbf{z}_{n};\phi\right)\right|\right]$ (2)
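For a one-dimensional affine flow $T(u)=\sigma u+\mu$ with a standard-normal base, this sample-based loss takes the simple closed form below. This is an illustrative stand-in for the full model, useful to see that the loss is minimized when the flow matches the sampled target:

```python
import numpy as np

def forward_kl_loss(z_samples, mu, sigma):
    """Monte-Carlo estimate of the forward-KL loss (up to the constant
    E[log p_z]) for T(u) = sigma*u + mu with standard-normal base:
    -1/N sum[log p_x(T^{-1}(z_n)) + log|det J_{T^{-1}}(z_n)|]."""
    u = (z_samples - mu) / sigma
    log_px = -0.5 * u**2 - 0.5 * np.log(2.0 * np.pi)
    log_det_inv = -np.log(sigma)  # du/dz = 1/sigma for this affine map
    return float(-np.mean(log_px + log_det_inv))
```

With samples drawn from $\mathcal{N}(3,2^{2})$, the loss is lower at the matching parameters than at perturbed ones, confirming the estimator points the optimizer in the right direction.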
#### II.1.3 Autoregressive flows
For a transformation to be valid, it must be both invertible and possess a
tractable Jacobian determinant. However, even if a network ensures theoretical
invertibility, practical computations might still be expensive or infeasible.
The computation of the Jacobian determinant, required for both the loss
calculation and sampling, generally scales cubically with the dimension of the
probability space. A tractable Jacobian determinant implies a complexity that
scales at most as $\mathcal{O}(D)$, which is ensured by autoregressive flows [8].
In autoregressive flows, the transformation is structured such that each
output dimension depends only on the preceding dimensions. This involves
employing $D$ component-wise transformations, referred to as transformers, such that:
$z_{i}^{\prime}=\tau\left(z_{i};\mathbf{h}_{i}\right)\text{ with
}\mathbf{h}_{i}=\mathbf{h}_{i}\left(\mathbf{z}_{<i};\phi\right)$
where $z_{i}^{\prime}$ is the $i$-th component of $\mathbf{z}^{\prime}$ and
$z_{i}$ is the $i$-th component of $\mathbf{z}$. The transformer $\tau$ is a
one-dimensional diffeomorphism in $z_{i}$ whose parameters are given by the
conditioner $\mathbf{h}_{i}$, which takes
$\mathbf{z}_{<i}=\left(z_{1},z_{2},\ldots,z_{i-1}\right)$, i.e. the previous
components of $\mathbf{z}$, as input.
Due to this definition, the Jacobian matrix, denoted as $J_{T}(\mathbf{z})$,
is lower triangular. Consequently, the log-determinant of the Jacobian matrix
can be efficiently computed by taking the sum of the logarithm of its diagonal
elements:
$\log\left|\operatorname{det}J_{T}(\mathbf{z})\right|=\sum_{i=1}^{D}\log\left|\frac{\partial\tau}{\partial
z_{i}}\left(z_{i};\mathbf{h}_{i}\right)\right|$
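As a concrete instance, an affine autoregressive transformer makes this triangular structure explicit. The sketch below uses arbitrary placeholder functions as conditioners; the sum of the scale outputs is exactly the log-determinant of the lower-triangular Jacobian:

```python
import numpy as np

def autoregressive_affine(z, scale_fn, shift_fn):
    """Affine autoregressive transformer z'_i = exp(s_i)*z_i + t_i, where the
    conditioner (s_i, t_i) = (scale_fn, shift_fn)(z[:i]) sees only z_{<i}.
    Returns (z', log|det J|), with log|det J| = sum_i s_i, i.e. the log of the
    product of the diagonal entries of the lower-triangular Jacobian."""
    z_out = np.empty_like(z)
    log_det = 0.0
    for i in range(len(z)):
        s, t = scale_fn(z[:i]), shift_fn(z[:i])
        z_out[i] = np.exp(s) * z[i] + t
        log_det += s
    return z_out, log_det
```

A finite-difference Jacobian of the full map reproduces the same log-determinant, confirming the $\mathcal{O}(D)$ shortcut.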
However, the diffeomorphism constraint imposed on the transformer is highly
restrictive, requiring it to be a monotonic and $\mathcal{C}^{1}$ function. To
enhance expressiveness, various flows have been developed. In the upcoming
section, we will use Neural Spline Flows, specifically focusing on Rational-
Quadratic Neural Spline Flows, which are among the most expressive flows
developed to date [9].
### II.2 Neural Spline Flows
Significant efforts have been dedicated to a specific class of normalizing
flows known as Neural Spline Flows (NSF) [9]. Splines are piecewise-defined
differentiable functions. In the context of NSFs, each transformer is a
spline, with each piece serving as a bijective and differentiable function on
its defined segment. To guarantee that the transformer remains bijective and
differentiable, making it a $\mathcal{C}^{1}$ function, the overall spline
should not only be monotonic, but also maintain continuity along with its
derivative.
#### II.2.1 Rational-Quadratic Neural Spline Flows
Durkan et al. [9] advocated for the use of monotonic rational-quadratic spline
flows (RQ-NSF), where the transformers are rational-quadratic functions, i.e.
the ratio of two quadratic functions. Rational-quadratic splines have a
convenient flexibility due to their infinite Taylor-series expansion while
being defined by a small number of parameters. Additionally, these splines are
analytically invertible and differentiable.
The overall spline acts as a diffeomorphism, providing a smooth one-to-one
mapping within a specific region of interest, typically chosen as the segment
$[A,B]$. Within this segment, the transformer distorts the parameter space,
while beyond this interval, it is the identity. This transformation is
achieved through the use of the rational-quadratic splines parametrization
introduced by Gregory and Delbourgo [10]. The parametrization involves a total
of $N$ different rational-quadratic functions, with their boundaries
determined by pairs of coordinates
$\left\{\left(x^{(n)},y^{(n)}\right)\right\}_{n=0}^{N}$, known as knots,
through which the spline passes. To ensure continuity, the first knot is
$\left(x^{(0)},y^{(0)}\right)=(A,A)$ and the last knot is
$\left(x^{(N)},y^{(N)}\right)=(B,B)$. In order to parameterize a monotonic
spline, $N-1$ intermediate positive derivative values
$\left\{f^{(n)}\right\}_{n=1}^{N-1}$ need to be defined. The derivatives at
the boundary points are set to 1 to match the identity function (i.e.,
$f^{(0)}=f^{(N)}=1$).
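A sketch of how unconstrained network outputs can be turned into valid knot parameters follows; the softmax/softplus construction is the standard RQ-NSF recipe, but function and variable names here are ours:

```python
import numpy as np

def spline_knots(raw_widths, raw_heights, raw_derivs, A=-1.0, B=1.0):
    """Build monotonic spline knots from unconstrained network outputs.

    Softmax makes bin widths/heights positive and sum to B - A, so knots are
    strictly increasing from A to B; softplus makes the N-1 interior
    derivatives positive; boundary derivatives are pinned to 1 to match the
    identity tails outside [A, B]."""
    widths = (B - A) * np.exp(raw_widths) / np.sum(np.exp(raw_widths))
    heights = (B - A) * np.exp(raw_heights) / np.sum(np.exp(raw_heights))
    x_knots = A + np.concatenate([[0.0], np.cumsum(widths)])
    y_knots = A + np.concatenate([[0.0], np.cumsum(heights)])
    derivs = np.concatenate([[1.0], np.log1p(np.exp(raw_derivs)), [1.0]])
    return x_knots, y_knots, derivs
```

Whatever raw values the masked autoregressive network emits, the resulting knots satisfy the monotonicity and boundary conditions by construction.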
#### II.2.2 Masked Autoregressive Network
The conditioners $\mathbf{h}_{i}$ associated with the transformers,
responsible for providing the parameters of the RQ-NSF are expressed as
functions of the autoregressive input features $\mathbf{z}_{<i}$. However,
implementing these conditioners as separate neural networks for each dimension
is computationally inefficient, particularly for high-dimensional data.
To overcome this issue, a solution known as masked autoregressive network
(MAN) [11] is adopted in this work. The masked autoregressive network takes
the entire vector $\mathbf{z}$ as input and directly outputs all the
parameters of the $D$ conditioners $\left(h_{1},h_{2},\ldots,h_{D}\right)$
simultaneously. This is achieved by modifying a standard feed-forward network,
to ensure that no connections exist from the input $z_{\geq i}$ to the outputs
$\left(h_{1},\ldots,h_{i}\right)$. This connection cut is implemented by
element-wise multiplication of the weight matrices connecting the neurons of
the network with a binary matrix of the same size. The binary matrix “masks
out” the undesired connections by setting them to zero, preserving all other
connections. This network was used by Papamakarios et al. [12] to parameterize
flows, leading to masked autoregressive flows (MAF).
However, the fixed autoregressive ordering can lead to richer dependencies for
the later components and poorer ones for the first dimensions. To ensure that
all input variables interact with each other, random permutations of the
dimensions are commonly introduced between successive autoregressive flows.
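A minimal construction of such binary masks for a single hidden layer, with degrees assigned as in MADE, could look like the following (the degree assignment and layer shape are illustrative):

```python
import numpy as np

def made_masks(n_in, n_hidden):
    """Binary masks for a one-hidden-layer masked autoregressive network.

    Input/output degrees are 1..D; hidden degrees cycle through 1..D-1.
    mask_h[j, i] = 1 iff hidden unit j (degree m_j) may see input i (i <= m_j);
    mask_out[k, j] = 1 iff output k may see hidden degrees < k. Composed,
    output h_k depends only on inputs z_{<k}."""
    deg_in = np.arange(1, n_in + 1)
    deg_hidden = 1 + np.arange(n_hidden) % (n_in - 1)
    mask_h = (deg_hidden[:, None] >= deg_in[None, :]).astype(float)
    mask_out = (deg_in[:, None] > deg_hidden[None, :]).astype(float)
    return mask_h, mask_out
```

Multiplying the two masks gives the end-to-end connectivity matrix, which is strictly lower triangular: the $k$-th output has no path from inputs $z_{\geq k}$, exactly the autoregressive property.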
### II.3 Normalizing Flows for Amortized Bayesian Inference
The previous section highlighted the advantages of NFs for density estimation
due to their expressive nature. Using the previous definition, NFs are
effective at estimating a density $p_{\theta}(\mathbf{x})$. Building on this,
a natural question arises: can we extend NFs to learn conditional
probabilities $p_{\theta}(\mathbf{x}|\mathbf{z})$ in a similar manner, where
$\mathbf{z}$ is a condition provided by the user at inference time? We will
keep the notation of Section I with the input dataset $\mathbf{X}_{d}$ from
where we want to infer the target posterior distribution,
$p(\beta|\mathbf{X}_{d})$.
A commonly used approach in this context is to apply Normalizing Flows to
Variational Inference (VI), as noted in previous works [13, 14]. VI requires
evaluating the likelihood. In our study, the computation of the Poisson
likelihood is feasible, but extremely time and computationally expensive at
training time and sampling time, especially for high dimensional
$\mathbf{X}_{d}$ or $\beta$. Addressing this limitation, Likelihood-free
Inference (LFI) emerges as an alternative. LFI refers to a set of methods used
for statistical inference when the likelihood function (or equivalently the
target posterior distribution) is either unavailable or too computationally
intensive to calculate. The potential of Normalizing Flows for LFI has been
previously explored in works like [15, 16].
In this work, we investigate a method similar to that of [15], utilizing
Normalizing Flows. This method bypasses the need for direct likelihood
evaluation during training and sampling, and instead trains the model using
samples of the target posterior distribution that have already been generated.
We generate samples
of the target posterior distribution using a simulation process detailed in
Appendix A, which serve as the basis for computing our loss function, as
described in Equation 2. These samples can be generated once prior to
training. Another key feature of our model is its amortization aspect, which
enables the inference process to generalize across different $\mathbf{X}_{d}$
datasets, thereby eliminating the need for separate training for each dataset.
This model approximates the posterior distribution of latent variables for a
wide range of $\mathbf{X}_{d}$ datasets. At the heart of our model are one or
more encoder neural networks designed to distill information from the
conditional variable $\mathbf{X}_{d}$ into the parameters of the flow
transformation.
Figure 1: Concept of a conditional normalizing flows model using masked
autoregressive networks (MAN). The encoder network outputs a lower dimension
representation of the input dataset which is fed into the MAN of each NF,
producing input-dependent flows parameters $\theta_{i}$.
NF models embedding encoder networks are referred to as Conditional
Normalizing Flows (CNF) in this work. Such models are particularly used in
Amortized Variational Inference [13, 14]. CNFs can be seen as estimators of a
family of flows, instead of a single flow, with the ability to choose the
right flow under the conditions given by the input. The encoder networks learn
a lower-dimensional representation $h_{\phi}(\mathbf{X}_{d})$ of the dataset,
referred to as context features, where $\phi$ are the parameters of the
encoder network, and their outputs are fed into each MAN of the Normalizing
Flows. This results in an input-dependent flow transformation, enabling it to
be used for Amortized Bayesian Inference, as will be applied in the following
section. An illustration of the concept of CNF is given in Figure 1.
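As a toy illustration of this amortization, the sketch below conditions a one-dimensional affine flow on summary statistics of the dataset. The CNN encoder and the RQ-NSF stack of the actual model are replaced by deliberately simple stand-ins, so the example only demonstrates the mechanism of dataset-dependent flow parameters:

```python
import numpy as np

def encode(X_d):
    """Toy encoder: summarize the dataset into context features (here just
    the empirical mean and log-std; a CNN plays this role in the paper)."""
    return np.array([X_d.mean(), np.log(X_d.std())])

def conditional_flow_sample(X_d, n_samples, rng):
    """Amortized conditional flow sketch: the flow parameters (mu, sigma)
    are functions of the context h(X_d), so a new dataset yields a new
    distribution without any retraining. The affine map stands in for the
    conditioned RQ-NSF stack."""
    h = encode(X_d)
    mu, sigma = h[0], np.exp(h[1])
    u = rng.standard_normal(n_samples)
    return sigma * u + mu
```

Feeding in a different dataset changes the context features and therefore the sampled distribution, with no parameter updates, which is the essence of amortized inference.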
## III Conditional Normalizing Flows for the Near Detector fit at T2K
In this section, CNF will be used to sample from the posterior probability
$p(\beta|\mathbf{X}_{d})$ given a dataset $\mathbf{X}_{d}$. The implementation
of RQ-NSF is based on the nflows Pytorch implementation of Durkan et al. [17].
In this work, the training is done in two steps, which will be detailed in
Section III.1. Section III.2 delves into the architecture of the CNF model.
Section III.3 presents the performance of the model tasked with the prediction
of the posterior distribution accounting for Poisson fluctuations at inference
time. In our exploratory methodology, we have simplified the problem by
segmenting the neutrino flux into three energy bins, each associated with a
reweight variable $\beta$ ranging from $0.5$ to $1.5$. Consequently, the
latent space under consideration is the cube $[0.5,1.5]^{3}$.
### III.1 Training methodology
Training the model involves simultaneously learning two essential components:
(1) the context features, denoted $h_{\theta}(\mathbf{X}_{d})$, generated by a
Convolutional Neural Network (CNN), where $\theta$ signifies the CNN’s
parameters, and (2) a flow transformation, parametrized through the context
features by the masked autoregressive networks, whose parameters are noted $\phi$.
The training is divided into two steps. Figure 2 shows a conceptual
representation of the posterior distribution learning during these two phases.
$\mathbf{1.}$ In the initial stage, the objective is to obtain an approximate
representation of the posterior distribution using a linear flow. The aim is
to convert a normally distributed variable
$\mathbf{u}\sim\mathcal{N}(0,\mathbf{I})$ into a tridimensional correlated
Gaussian variable $\mathbf{x}\sim\mathcal{N}(\mu,\Sigma)$ that closely
resembles $p(\beta|\mathbf{X}_{d})$. Given a normal variable $\mathbf{u}$, we
can simply add a shift and correlations with a linear transformation:
$\mathbf{x}=L\mathbf{u}+\mu.$
Here, $L$ corresponds to the Cholesky decomposition of the covariance matrix
$\Sigma=LL^{T}$, and $\mu$ represents the mean of the $\mathbf{x}$
distribution. The only requirement for this linear flow is that $L$ is a
lower-triangular matrix with strictly positive diagonal elements. This first
step is important for two reasons. Firstly, it prevents training instabilities
by initially shifting and scaling $p(\mathbf{u})$ to cover
$p(\beta|\mathbf{X}_{d})$. More crucially, it empowers the complex flows to
fully use their expressiveness in learning local non-Gaussian characteristics,
rather than expending their potential on locating and scaling $\mathbf{u}$.
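The step-1 linear flow can be sketched directly. In this hedged illustration the Cholesky factor is computed from a fixed covariance rather than predicted by the CNN, but the mechanics are the same: $\mathbf{x}=L\mathbf{u}+\mu$ with $\mathbf{u}\sim\mathcal{N}(0,\mathbf{I})$:

```python
import numpy as np

def linear_flow_sample(mu, Sigma, n_samples, rng):
    """Step-1 linear flow sketch: x = L u + mu with u ~ N(0, I) and
    L = cholesky(Sigma), so x ~ N(mu, Sigma). The log-abs-determinant of the
    flow is sum(log diag L), the quantity entering the step-1 entropy term."""
    L = np.linalg.cholesky(Sigma)
    u = rng.standard_normal((n_samples, len(mu)))
    x = u @ L.T + mu
    log_det = float(np.sum(np.log(np.diag(L))))
    return x, log_det
```

Because $L$ is lower triangular with positive diagonal, the log-determinant reduces to a sum of logs of its diagonal, matching the $\log(\sigma^{i})$ terms in the loss below.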
During this step, the loss function can be expressed as:
$\displaystyle L(\theta)=D_{\mathrm{KL}}\left(p(\beta|\mathbf{X}_{d})\,\|\,q_{\theta}(\beta|\mathbf{X}_{d})\right)\approx\frac{1}{N}\sum_{j=1}^{N}\sum_{i=1}^{3}\left[\frac{1}{2}\left(L^{-1}(\beta_{j}-\mu)\right)_{i}^{2}+\log(\sigma^{i})\right]$
Here, $\sigma^{i}$ corresponds to the i-th diagonal element of $L$. To
maintain clarity, we have purposefully refrained from introducing an
additional summation that would account for the mean loss expectation across
various datasets $\mathbf{X_{d}}$.
The first term within this expression bears a strong resemblance to a
$\chi^{2}$ term, and it reaches minimal values when $\mu$ is equal to the mean
of the $\beta$ samples and when the introduced spreads $\sigma^{i}$ from $L$
are maximized. The second term is proportional to the Shannon entropy of the
predicted posterior distribution, exerting an opposing effect to the first
term. In particular, the reduction in entropy leads to a minimization of
$\sigma^{i}$.
$\mathbf{2.}$ After 10$\%$ of the training, we enable more complex flows, the
RQ-NSFs, before the linear flow. These flows introduce non-Gaussian properties
to the predicted posterior distribution
$q_{\theta,\phi}(\beta|\mathbf{X}_{d})$ that depend also on the input
$\mathbf{X}_{d}$. To ensure smooth transitions in the loss function between
the stages $\mathbf{1.}$ and $\mathbf{2.}$, we initialize the RQ-NSFs to the
identity transformation. Consequently, the expression for the predicted
$\beta$ can be written as:
$\beta=L\left[T_{K}\circ\dots\circ T_{1}(\mathbf{u})\right]+\mu$
Upon enabling the additional flows, the loss function transforms into:
$\displaystyle L(\theta,\phi)=\frac{1}{2N}\sum_{j=1}^{N}\sum_{i=1}^{3}\left[T_{K}^{-1}\circ\dots\circ T_{1}^{-1}\left(L^{-1}(\beta_{j}-\mu)\right)\right]_{i}^{2}+\sum_{i=1}^{3}\left[\sum_{k=1}^{K}\log(|J_{T_{k}}|_{ii})+\log(\sigma^{i})\right]$
The complexity of this loss function may seem greater compared to the previous
one, yet it can still be broken down into a combination of a $\chi^{2}$ term
and an entropy term. What is important to note is that this formulation also
reveals the computational efficiency of the backward computations, which
exhibit linear complexity concerning both the number of flows and the
dimensions of the posterior distribution. This property enhances the
training’s scalability, allowing it to be effectively applied to higher
dimensions with larger numbers of flows.
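To make the backward computation explicit, here is a minimal sketch of how $u$ and the log-Jacobian terms entering the loss can be accumulated; the elementwise inverse flows used below are hypothetical stand-ins for the RQ-NSFs, and the helper names are ours:

```python
import math

def backward_pass(beta, mu, L, inv_flows):
    """Sketch of the backward computation: recover u from
    beta = L[T_K o ... o T_1(u)] + mu, accumulating the log-Jacobian
    terms of the loss. Each entry of inv_flows inverts one flow and
    returns (new_vector, list of per-dimension log|J_T| of the forward
    flow); simple elementwise maps stand in for the RQ-NSFs here."""
    d = len(mu)
    r = [beta[i] - mu[i] for i in range(d)]
    v = [0.0] * d
    for i in range(d):                       # solve L v = beta - mu
        v[i] = (r[i] - sum(L[i][k] * v[k] for k in range(i))) / L[i][i]
    logdet = sum(math.log(L[i][i]) for i in range(d))
    for inv_T in reversed(inv_flows):        # invert the flows in reverse order
        v, ld = inv_T(v)
        logdet += sum(ld)
    return v, logdet
```

Each step touches each dimension once, which reflects the linear complexity in the number of flows and in the posterior dimension noted above.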
Figure 2: Representation of the predicted posterior distribution during
training. The current approximation of the posterior distribution is
represented in blue and the target posterior distribution in red. (a)
corresponds to the initial state, (b) to the transition between Step
$\mathbf{1.}$ and Step $\mathbf{2.}$, and (c) to the end of the training.
### III.2 Model’s architecture
The architecture of the model is represented in Figure 3. The model’s
structure can be broken down into two main blocks: the Encoder block housing
the two CNNs, and the Flows block encompassing the RQ-NSFs and the Linear
flow. One CNN yields both the Cholesky matrix $L$ and $\mu$, while the second
CNN generates the context features used by the RQ-NSFs. Both CNNs share a
common architecture, differing only in their last fully connected layers. The
architecture is based on a fine-tuned ResNet-50 model [18] followed by one
fully connected layer. The first CNN produces 9 output nodes (6 for $L$ and 3
for $\mu$), while the second CNN generates 10 context features. Although the
number of context features proved sufficient for the study, it is important to
note that a complex distribution may require more context features.
The initial phase of the flow model involves a sequence of $K=4$ blocks, each
comprising an RQ-NSF followed by a generalized LU permutation as defined by
Oliva et al. [19]. The parameters of the RQ-NSFs are learned through a MAN
consisting of three hidden layers, each containing 256 nodes. The MAN outputs
the parameters of a 9-knot tri-dimensional spline. The linear flow is only
parameterized by the output of the first CNN.
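One plausible way to map the 9 outputs of the first CNN to $(L,\mu)$, ensuring a lower-triangular $L$ with strictly positive diagonal, is sketched below; the softplus choice and the ordering of the outputs are our assumptions, since the text only requires positivity:

```python
import math

def build_linear_flow_params(out9):
    """Hypothetical mapping of the 9 CNN outputs to (L, mu) for d = 3:
    3 values -> mu, 3 -> strictly lower triangle of L, 3 -> diagonal,
    with softplus enforcing strictly positive diagonal elements."""
    mu = list(out9[:3])
    lower = out9[3:6]
    diag_raw = out9[6:9]
    softplus = lambda x: math.log1p(math.exp(x))
    L = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        L[i][i] = softplus(diag_raw[i])
    L[1][0], L[2][0], L[2][1] = lower
    return L, mu
```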
The model is trained using $7,500$ $\mathbf{X_{d}}$ datasets of muon
observables, and for each dataset, $4,000$ samples from the target posterior
distribution to compute the loss as in Equation 2. The generation of the
datasets and the posterior samples is detailed in Appendix A.
Figure 3: Model’s architecture used for the inference of
$p(\beta|\mathbf{X}_{d})$ from a dataset $\mathbf{X}_{d}$
### III.3 Model’s performances
(a) Target posterior distribution
(b) Predicted posterior distribution
Figure 4: 10,000 Samples from the target posterior distribution (a) and from
the predicted posterior distribution (b). The diagonal plots represent the
marginal distribution of the 3 reweights bins $\beta_{i}$. The off-diagonal
plots correspond to the 2D histograms of $(\beta_{i},\beta_{j})$.
The present section focuses on evaluating the model’s performances up to the
fourth moment prediction for the posterior distribution. In this analysis, we
compare two posterior distributions: the sampled predicted posterior
distribution using NF $q_{\text{NF}}(\beta|\mathbf{X_{d}})$ and the sampled
target distribution $p_{\text{target}}(\beta|\mathbf{X_{d}})$ used to compute
the loss during training along with the designed reweight
$\beta_{\text{Asimov}}$, referred to as the Asimov datapoint, used to generate
$\mathbf{X_{d}}$. $p_{\text{target}}$ is produced by generating Poisson
fluctuations of $\mathbf{X_{d}}$ and identifying the most likely reweight for
each fluctuation. Therefore, both $p_{\text{target}}$ and $q_{\text{NF}}$
retain the initial bias inherent in the creation of the dataset
$\mathbf{X_{d}}$. Consequently, we focus on comparing $q_{\text{NF}}$ not with
$\beta_{\text{Asimov}}$ but with $p_{\text{target}}$.
For each $\beta_{\text{Asimov}}$, we generate a dataset $\mathbf{X_{d}}$ from
which we generate 10,000 samples from $q_{\text{NF}}(\beta|\mathbf{X_{d}})$
and 10,000 samples from $p_{\text{target}}(\beta|\mathbf{X_{d}})$. A
comparison between $q_{\text{NF}}(\beta|\mathbf{X_{d}})$ and
$p_{\text{target}}(\beta|\mathbf{X_{d}})$ is shown in Figure 4 for a specific
dataset $\mathbf{X}_{d}$. The model appears to correctly predict the shape of the
posterior distribution, including correlations and higher-order moments. To
evaluate the model’s performance, we analyze its predictions over a complete
range of $\beta$ values. Figure 5 presents this analysis, offering a
comparison among the means of $q_{\text{NF}}(\beta|\mathbf{X_{d}})$,
$p_{\text{target}}(\beta|\mathbf{X_{d}})$, and the Asimov datapoint.
Additionally, we quantify the accuracy using the coefficient of determination,
denoted as $R^{2}$ for $N=1,000$ Asimov datapoints uniformly sampled from the
$[0.5,1.5]^{3}$ cube. In this case, $R^{2}$ is defined as:
$R^{2}=1-\frac{\sum_{i}||\beta_{\text{Asimov}}-\hat{\beta}||^{2}}{\sum_{i}||\beta_{\text{Asimov}}-1||^{2}}=1-\frac{12}{3N}\sum_{i}||\beta_{\text{Asimov}}-\hat{\beta}||^{2}$
where $\hat{\beta}$ represents the mean of the $q_{\text{NF}}$ (or
$p_{\text{target}}$) and $\beta_{\text{Asimov}}$ denotes the Asimov datapoint.
An $R^{2}$ of 1 implies that the model predicts the mean with perfect
accuracy, while an $R^{2}$ of 0 indicates that the model randomly predicts the
mean. For our model, the $R^{2}_{\text{NF}}$ value is 0.9975. This score is
marginally lower than the $R^{2}_{\text{target}}$ of 0.9980 using
$p_{\text{target}}$. The coefficient of determination is further reflected in
the Root Mean Square (RMS) errors: $0.013$ when comparing the mean of
$p_{\text{target}}$ against the Asimov data point, and $0.014$ for the mean of
$q_{\text{NF}}$ against the Asimov data point. This highlights that the
model’s deviations are minor compared to the Poisson fluctuations in
generating the dataset $\mathbf{X}_{d}$.
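The $R^{2}$ defined above can be computed directly; here is a minimal sketch for Asimov points in the $[0.5,1.5]^{3}$ cube, where the expected denominator reduces to the closed form $3N/12$:

```python
def r_squared(asimov, predicted_means):
    """Coefficient of determination as in the text: with Asimov points
    uniform on [0.5,1.5]^3, the denominator sum_i ||beta - 1||^2 averages
    to 3N * Var(U[-0.5,0.5]) = 3N/12, hence the closed form
    1 - (12/(3N)) * sum_i ||beta_Asimov - beta_hat||^2."""
    n = len(asimov)
    sq = sum(sum((a[k] - p[k]) ** 2 for k in range(3))
             for a, p in zip(asimov, predicted_means))
    return 1.0 - 12.0 * sq / (3.0 * n)
```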
Figure 5: Projection in the $(\beta_{0},\beta_{1})$ plane of the mean of the predicted posterior distribution (blue), the reweight given $\mathbf{X}_{d}$ using likelihood maximization (red) and the Asimov datapoints (green) on a grid and for a constant $\beta_{2}$ of 1. The circles in dashed lines have a radius of 0.05 and are centered at the designed $\beta$ value.
Table 1: Comparison of a chosen set of statistics between the predicted and target posterior distributions. The values are averaged across the 200 Asimov datapoints. $a$ and $b$ correspond to the parameters of the fit of the MSE. $\overline{S_{\text{target}}}$ corresponds to the mean of the statistic from the sampled target posterior distribution.
Bin | $\overline{S_{\text{target}}}$ | a | b | Bias ($\sqrt{\textbf{b}}$)
---|---|---|---|---
Mean
0 | 1.00 | 8.87e-4 | 4.86e-5 | 5.6e-3
1 | 1.00 | 1.05e-3 | 5.29e-5 | 5.9e-3
2 | 1.00 | 7.64e-4 | 5.49e-5 | 5.9e-3
Standard Deviation
0 | 1.31e-2 | 0.91e-4 | 5.61e-8 | 1.81e-4
1 | 1.48e-2 | 1.13e-4 | 5.67e-8 | 1.90e-4
2 | 1.20e-2 | 0.73e-4 | 3.13e-8 | 1.37e-4
Pearson Correlation factor
(0,1) | -0.49 | 1.22 | 1.47e-4 | 9.84e-3
(1,2) | -0.39 | 1.14 | 8.96e-5 | 7.77e-3
(2,0) | 0.03 | 0.98 | 2.62e-4 | 1.29e-2
Our goal is to assess the alignment of $q_{\text{NF}}$ with
$p_{\text{target}}$, particularly in terms of the first four statistical
moments: mean, standard deviation, Pearson correlation factors, skewness, and
kurtosis. The discrepancies in these statistical moments come from two
sources: (1) statistical variations due to the finite sample size from
posterior distributions, and (2) systematic discrepancies introduced by the
model itself.
We estimate these errors by calculating the Mean Squared Error (MSE) between a
statistic from the predicted posterior distribution $S_{\text{NF}}$ and that
of the target posterior distribution $S_{\text{target}}$ for a given Asimov
datapoint. We generate many samples of the two statistics using bootstrapping
on the posterior datasets. In order to decouple the fluctuations of
$S_{\text{NF}}$ from the ones of $S_{\text{target}}$, we compare
$S_{\text{NF}}$ to $\overline{S_{\text{target}}}$, the average of the target
statistic obtained with bootstrapping for a given Asimov datapoint. The MSE
calculation is as follows:
$\text{MSE}=\mathbb{E}_{\text{Boot}}\left(\left|S_{\text{NF}}-\overline{S_{\text{target}}}\right|^{2}\right)=V^{S}_{\text{stat}}+V^{S}_{\text{bias}}$ (3)
$\mathbb{E}_{\text{Boot}}$ denotes the expectation over the bootstrapped
posterior datasets. The first term corresponds to the statistical error
$V^{\text{S}}_{\text{stat}}(N)$ which typically scales with $\frac{1}{N}$,
where $N$ is the number of posterior samples while the second term is constant
as a function of $N$. Therefore, the MSE can be modeled by the function:
$\text{MSE}_{\text{fit}}(N)=\frac{a}{N}+b,\quad a,b>0$
In order to estimate these fit parameters $a$ and $b$, we calculate the MSE
for an increasing number of posterior samples and fit the curve with the above
equation for each Asimov datapoint. This is done using $200$ datasets
generated from Asimov data points uniformly sampled from the $[0.5,1.5]^{3}$
volume. The errors for the studied statistics are summarized in Table 1.
Our analysis shows that the error in the mean prediction is mainly
systematic and also tends to be bias-dominated for the standard deviation
for $N>2,000$. More posterior samples are required for the error in the
Pearson correlation factors to be dominated by the bias, between $4,000$ and
$13,000$ posterior samples depending on the correlation factor. The skewness
and kurtosis measurements exhibit higher statistical error. While the MSE
presented in the table can be linked to the intrinsic error of the model, the
skewness and kurtosis values are not significant at a $0.05$ level, meaning
that both $p_{\text{target}}$ and $q_{\text{NF}}$ are very close to Gaussian
distributions.
Furthermore, this inference method offers advantages in terms of sampling
speed while keeping a high predictive accuracy. Traditional methods based on
likelihood estimation are inherently limited in speed by the process of
estimating the likelihood. For instance, calculating the likelihood as defined
in Equation I a million times on one of our CPU cores requires $1135$ seconds.
Consequently, for approaches that need at least one likelihood estimation for
each posterior sample (like the Metropolis-Hastings algorithm used in MCMC),
the fastest possible sampling rate assuming a 100$\%$ sampling efficiency is a
million posterior samples in $1135$ seconds. In contrast, the model’s sampling
speed represents a more efficient alternative, generating 1 million samples
from $q_{\text{NF}}$ in $174$ seconds on the same CPU core. This translates to
a minimum improvement in sampling speed by a factor of $6.5$. Such an increase
in speed during inference is particularly advantageous for real-time
applications, including object tracking or stock price prediction. This is
particularly interesting if the model can predict non-Gaussian posterior
distributions. This question will be addressed in the following section.
(a) Target posterior distribution
(b) Predicted posterior distribution
Figure 6: 10,000 Samples from the target posterior distribution (a) and from
the predicted posterior distribution (b). The diagonal plots represent the
marginal distribution of the 3 reweights bins $\beta_{i}$. The off-diagonal
plots correspond to the 2D histograms of $(\beta_{i},\beta_{j})$.
## IV Quantifying and Modeling Non-Gaussian Characteristics
While the model demonstrated its proficiency in accurately predicting a
posterior distribution, the last section did not extensively highlight the
expressiveness of the RQ-NSF. The posterior distributions, accounting for the
Poisson statistics, are close to multivariate Gaussian distributions. In
particular, the important features, the mean, and the covariance matrix could
potentially be learned by the linear flow only.
This section is an exploration of the model’s behavior when tasked with
predicting non-Gaussian attributes. To this end, we introduce modifications to
$p_{\text{target}}$ that introduce a distinct "island" within the third energy
bin dimension. This bimodal structure of the posterior distribution serves as
a meaningful test case, as conventional methods like the Metropolis-Hastings
algorithm for Markov Chain Monte-Carlo often struggle to infer multimodal
distributions.
A crucial aspect of our model is put to the test: its capability to predict
non-Gaussian features that depend on the input dataset $\mathbf{X}_{d}$. This
test demonstrates that the context features learned from $\mathbf{X}_{d}$ and
fed to the MAN influence the geometry of the $q_{\text{NF}}$. To this end, the
introduced shift and weight of the island are input-dependent and more
precisely, proportional to the total event count in the dataset
$\mathbf{X}_{d}$.
This is done by taking the already generated samples from $p_{\text{target}}$
and applying a stochastic transformation that shifts each posterior sample by
a quantity s with probability p. We have chosen the following expressions for
the probability and the shift, where $\text{N}_{\text{events}}$ is the total
event count in millions:
$\displaystyle\textbf{p}(\text{N}_{\text{events}})=4\text{N}_{\text{events}}$
$\displaystyle\textbf{s}(\text{N}_{\text{events}})=2\text{N}_{\text{events}}$
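A minimal sketch of this stochastic transformation (plain Python; the function name and the seeding are ours, with $\text{N}_{\text{events}}$ expressed in millions as above):

```python
import random

def add_island(samples, n_events_millions, seed=0):
    """Sketch of the stochastic transformation: each posterior sample's
    third component is shifted by s = 2*N_events with probability
    p = 4*N_events, N_events being the total event count in millions."""
    rng = random.Random(seed)
    p = 4.0 * n_events_millions
    s = 2.0 * n_events_millions
    out = []
    for b in samples:
        b = list(b)
        if rng.random() < p:
            b[2] += s
        out.append(b)
    return out
```

For the event counts used below (37,500 to 62,500), this gives island weights between 0.15 and 0.25 and shifts between 0.075 and 0.125.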
An example of the comparison of $p_{\text{target}}$ and $q_{\text{NF}}$ for a
specific choice of reweight $\beta$ is given in Figure 6. The close agreement
between them illustrates the model’s capacity to infer this non-Gaussian
characteristic.
We aim to assess the model’s accuracy in predicting the island weight (related
to p) and the distance between the means of the two modes (related to s)
across various $\text{N}_{\text{events}}$ values. We construct datasets for
Asimov datapoints $\beta_{\text{Asimov}}$ corresponding to increasing event
counts. We estimate the marginal distribution of $\beta_{3}$ by fitting the
samples using a mixture of two Gaussian distributions for $q_{\text{NF}}$. We
compute the distance between the modes using the absolute difference of the
two means given by the fit while the weight of the island is directly given by
the fit.
Figure 7: Variation of the distance between modes (top) and the island weight
(bottom) across varying total event counts. Error bars represent $\pm 1\sigma$
deviations calculated for $50$ posterior distributions from $50$ different
$\mathbf{X_{d}}$.
Figure 7 presents the evolution of the predicted distance and island weight
for $7$ different $\text{N}_{\text{events}}$ values between $37,500$ and
$62,500$. For each value of $N_{\text{events}}$, the posterior distributions
are predicted for $50$ $\beta_{\text{Asimov}}$ sampled in the hyperplane
yielding $\mathbf{X_{d}}$ with $N_{\text{events}}$ events (see Appendix A for
more detail). Although predictions for the island weight may exhibit some
variability, both the island weight and distance are consistently predicted
with accuracy within a 1$\sigma$ margin from the designed values.
## Conclusion
In this work, we explored the potential of Normalizing Flows for Amortized
Bayesian Inference in the context of the near detector fit at T2K. A brief
introduction to Normalizing Flows as conditional density estimators is
given, emphasizing the particular implementation of Rational-Quadratic
Neural Spline Flows (RQ-NSF). In a simplified case, it was shown that such a
model can predict accurately the posterior distribution of latent variables
(the energy bin reweights) from observables (the muon momentum and angle)
provided at inference time with a minimum improvement in sampling speed by a
factor of $6.5$. In the last section, it was also demonstrated that
Normalizing Flows can predict more intricate posterior distributions. However,
the full potential of RQ-NSF might not have been fully tapped in this study.
The exploration could extend to introducing multiple non-Gaussian attributes,
such as incorporating more than one island with varying characteristics like
locations, scales, amplitudes, and even shapes. Ultimately, the demonstration
of NF’s flexibility and efficiency paves the way for more informed and
accelerated Bayesian inferences, a prospect that holds substantial promise for
a myriad of data-driven applications.
## Acknowledgments
The author acknowledges support from the Swiss National Foundation grant No.
200021E_213196.
## Appendix A Generating the datasets
To generate the datasets, we divide the neutrino flux into 3 energy bins: [200
MeV, 620 MeV], [620 MeV, 800 MeV], and [800 MeV, 1.7 GeV]. For this study, we
choose three equally populated bins using the nominal T2K flux where all
systematics are set to their nominal values. The momentum $p_{\mu}$ ranges
from 0 to 1.7 GeV, the angle $\theta_{\mu}$ ranges from 0 to $\pi$. We choose
a nominal distribution for the triplet $(p_{\mu},\theta_{\mu},E_{\nu})$
corresponding to CCQE events generated by the NEUT Monte-Carlo generator [20]
with a ”nominal” T2K neutrino flux at the near detector [21].
$\mathbf{\beta}$ is a tridimensional vector, where each component represents a
reweight for a specific energy bin. Consequently, the probability of having a
neutrino in the i-th energy bin is given by
$p(E_{\nu}^{i}|\mathbf{\beta})=\frac{\beta_{i}p(E_{\nu}^{i})}{\sum\limits_{j}\beta_{j}p(E_{\nu}^{j})}=\frac{\beta_{i}}{\sum\limits_{j}\beta_{j}}$.
Now, the modified joint probability of $p_{\mu}$ and $\theta_{\mu}$ becomes
the sum of the individual probabilities corresponding to each energy bin:
$\displaystyle p(p_{\mu},\theta_{\mu}|\mathbf{\beta})=\sum\limits_{i=1}^{3}p(p_{\mu},\theta_{\mu}|E_{\nu}^{i})\times p(E_{\nu}^{i}|\mathbf{\beta})=\sum\limits_{i=1}^{3}p(p_{\mu},\theta_{\mu}|E_{\nu}^{i})\times\frac{\beta_{i}}{\sum\limits_{j}\beta_{j}}$ (4)
where $E_{\nu}^{i}$ represents the neutrino energy corresponding to the i-th
bin.
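Equation 4 amounts to mixing the three conditional $(p_{\mu},\theta_{\mu})$ grids with weights $\beta_{i}/\sum_{j}\beta_{j}$; a minimal sketch (plain Python, grids stored as nested lists; the function name is ours):

```python
def modified_grid(cond_grids, beta):
    """Equation 4 as code: mix the three conditional (p_mu, theta_mu)
    grids p(p_mu, theta_mu | E_nu^i) with weights beta_i / sum_j beta_j."""
    total = sum(beta)
    nr, nc = len(cond_grids[0]), len(cond_grids[0][0])
    return [[sum(beta[i] * cond_grids[i][r][c] for i in range(3)) / total
             for c in range(nc)] for r in range(nr)]
```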
To train and test the model, we need multiple datasets, which are generated in
the following way:
$\mathbf{1.}$ A sample of the reweight $\mathbf{\beta}$, called Asimov
datapoint, is taken from a uniform distribution in the range $[0.5,1.5]$ for
each component.
$\mathbf{2.}$ The modified probability grid is computed using the Equation 4
multiplied by $\sum_{j}\beta_{j}$ to account for the increasing total number
of events for increasing reweight values.
$\mathbf{3.}$ $(p_{\mu},\theta_{\mu})$ samples are sampled from this grid
using the Accept-Reject Monte-Carlo technique. The total number of generated
samples is proportional to $\sum_{j}\beta_{j}$ and goes from $25,000$ to
$75,000$. This means that we train our model on datasets of varying size,
where the datasets corresponding to the nominal reweight value $\beta=[1,1,1]$
have $50,000$ events.
$\mathbf{4.}$ The $(p_{\mu},\theta_{\mu})$ events are stored in a histogram
with a size of $200\times 200$, noted $\mathbf{X}_{d}$.
$\mathbf{5.}$ This process is repeated $7,500$ times for distinct Asimov
datapoints $\beta$, generating $7,500$ datasets containing between $25,000$
and $75,000$ $(p_{\mu},\theta_{\mu})$ samples.
During training, the loss calculation, referenced in Equation 2, requires
sampling from the target posterior distribution $p(\beta|\mathbf{X}_{d})$.
This is achieved by applying Poisson fluctuations to each dataset
$\mathbf{X}_{d}$. Subsequently, for each varied dataset, we identify the most
likely $\beta$. The resulting $\beta$ values from this process for a given
dataset effectively sample the target posterior distribution. While this
simulation step might be resource-heavy for latent spaces with high
dimensions, it is a one-time requirement before training, conducted alongside
the creation of the dataset. Consequently, one must balance the trade-off
between the lengths of pre-training and training against the speed of
sampling, which ultimately hinges on the particular scenario at hand.
## References
* Abe _et al._ [2011] K. Abe _et al._ (T2K), The T2K Experiment, Nucl. Instrum. Meth. A 659, 106 (2011), arXiv:1106.1238 [physics.ins-det] .
* Barlow and Beeston [1993] R. J. Barlow and C. Beeston, Fitting using finite Monte Carlo samples, Comput. Phys. Commun. 77, 219 (1993).
* Abe _et al._ [2023] K. Abe _et al._ (T2K), Measurements of neutrino oscillation parameters from the T2K experiment using $3.6\times 10^{21}$ protons on target, Eur. Phys. J. C 83, 782 (2023), arXiv:2303.03222 [hep-ex] .
* Walsh [2022] J. Walsh, _Constraining the T2K neutrino oscillation parameter results using data from the off-axis near detector, ND280: Implementation of a nucleon removal energy systematic uncertainty treatment in the BANFF fit_ , Ph.D. thesis, Lancaster U. (main), Lancaster U. (2022).
* Sztuc [2020] A. A. Sztuc, _Standard and non-standard neutrino-antineutrino oscillation analyses and event reconstruction studies using Markov chain Monte Carlo methods at T2K_ , Ph.D. thesis, Imperial Coll., London (2020).
* Papamakarios [2019] G. Papamakarios, Neural density estimation and likelihood-free inference (2019), arXiv:1910.13233 [stat.ML] .
* Papamakarios _et al._ [2021] G. Papamakarios, E. Nalisnick, D. J. Rezende, S. Mohamed, and B. Lakshminarayanan, Normalizing flows for probabilistic modeling and inference (2021).
* Kingma _et al._ [2017] D. P. Kingma, T. Salimans, R. Jozefowicz, X. Chen, I. Sutskever, and M. Welling, Improving variational inference with inverse autoregressive flow (2017), arXiv:1606.04934 [cs.LG] .
* Durkan _et al._ [2019] C. Durkan, A. Bekasov, I. Murray, and G. Papamakarios, Neural spline flows (2019), arXiv:1906.04032 [stat.ML] .
* Delbourgo and Gregory [1983] R. Delbourgo and J. Gregory, $C^{2}$ rational quadratic spline interpolation to monotonic data, IMA Journal of Numerical Analysis (1983).
* Germain _et al._ [2015] M. Germain, K. Gregor, I. Murray, and H. Larochelle, Made: Masked autoencoder for distribution estimation (2015).
* Papamakarios _et al._ [2018] G. Papamakarios, T. Pavlakou, and I. Murray, Masked autoregressive flow for density estimation (2018).
* Rezende and Mohamed [2016] D. J. Rezende and S. Mohamed, Variational inference with normalizing flows (2016), arXiv:1505.05770 [stat.ML] .
* van den Berg _et al._ [2019] R. van den Berg, L. Hasenclever, J. M. Tomczak, and M. Welling, Sylvester normalizing flows for variational inference (2019).
* Papamakarios _et al._ [2019] G. Papamakarios, D. C. Sterratt, and I. Murray, Sequential neural likelihood: Fast likelihood-free inference with autoregressive flows (2019), arXiv:1805.07226 [stat.ML] .
* Winkler _et al._ [2023] C. Winkler, D. Worrall, E. Hoogeboom, and M. Welling, Learning likelihoods with conditional normalizing flows (2023), arXiv:1912.00042 [cs.LG] .
* Durkan _et al._ [2020] C. Durkan, A. Bekasov, I. Murray, and G. Papamakarios, nflows: normalizing flows in PyTorch (2020).
* He _et al._ [2015] K. He, X. Zhang, S. Ren, and J. Sun, Deep residual learning for image recognition (2015).
* Oliva _et al._ [2018] J. Oliva, A. Dubey, M. Zaheer, B. Poczos, R. Salakhutdinov, E. Xing, and J. Schneider, Transformation autoregressive networks, in _Proceedings of the 35th International Conference on Machine Learning_ (PMLR, 2018).
* Hayato [2009] Y. Hayato, A neutrino interaction simulation program library NEUT, Acta Phys. Polon. B 40, 2477 (2009).
* T2K [2013] T2K (T2K Collaboration), T2K neutrino flux prediction, Phys. Rev. D 87, 012001 (2013).
††thanks: All authors contributed equally and names are ordered
alphabetically. Correspondence should be sent to <EMAIL_ADDRESS>
# Information limits and Thouless-Anderson-Palmer equations
for spiked matrix models with structured noise
Jean Barbier, Francesco Camilli, Yizhou Xu (The Abdus Salam International
Centre for Theoretical Physics, Strada Costiera 11, Trieste 34151, Italy)
Marco Mondelli (Institute of Science and Technology Austria, Am Campus 1,
3400 Klosterneuburg, Austria)
###### Abstract
We consider a prototypical problem of Bayesian inference for a structured
spiked model: a low-rank signal is corrupted by additive noise. While both
information-theoretic and algorithmic limits are well understood when the
noise is i.i.d. Gaussian, the more realistic case of structured noise still
proves to be challenging. To capture the structure while maintaining
mathematical tractability, a line of work has focused on rotationally
invariant noise. However, existing studies either provide sub-optimal
algorithms or they are limited to a special class of noise ensembles. In this
paper, we establish the first characterization of the information-theoretic
limits for a noise matrix drawn from a general trace ensemble. These limits
are then achieved by an efficient algorithm inspired by the theory of adaptive
Thouless-Anderson-Palmer (TAP) equations. Our approach leverages tools from
statistical physics (replica method) and random matrix theory (generalized
spherical integrals), and it unveils the equivalence between the rotationally
invariant model and a surrogate Gaussian model.
††preprint: APS/123-QED
## I Introduction
Recovering a low-rank signal from a high-dimensional observation corrupted by
noise is a ubiquitous problem, appearing e.g. in sparse principal component
analysis (PCA) [1, 2], community detection [3, 4], group synchronization [5]
and sub-matrix localization or clustering [6]. In this paper, we consider the
prototypical task of estimating the rank-1 signal
${\mathbf{X}}^{*}{\mathbf{X}}^{*\intercal}\in\mathbb{R}^{N\times N}$ from a
symmetric matrix ${\mathbf{Y}}$ of noisy observations given by
$\displaystyle{\mathbf{Y}}=\frac{\lambda}{N}{\mathbf{X}}^{*}{\mathbf{X}}^{*\intercal}+{\mathbf{Z}},$
(1)
where $\lambda\geq 0$ represents the signal-to-noise ratio (SNR) and
${\mathbf{Z}}\in\mathbb{R}^{N\times N}$ is additive noise. This is often
referred to as the Johnstone spiked covariance model [7], and it was
originally formulated as a probabilistic model for PCA. Starting with the
seminal result of [8], the behavior of eigenvalues and eigenvectors of (1) has
been extensively studied in statistics and random matrix theory, see e.g. [9,
10, 11, 12, 13, 14]. Specifically, the authors of [8] identified a phase
transition phenomenon – named BBP after their initials – tuned by the SNR
$\lambda$: above the transition, the largest eigenvalue of ${\mathbf{Y}}$
detaches from the bulk of the spectrum containing the noise eigenvalues, and
the top eigenvector of ${\mathbf{Y}}$ is correlated with ${\mathbf{X}}^{*}$; below
the transition, the largest eigenvalue sits at the edge of the bulk, and the
top eigenvector exhibits vanishing correlation with ${\mathbf{X}}^{*}$.
Going beyond the estimator obtained from the top eigenvector of
${\mathbf{Y}}$, a line of work has focused on Approximate Message Passing
(AMP) algorithms. Originally proposed in the context of compressed sensing
[15] and CDMA [16], AMP methods have been developed for numerous high-
dimensional inference problems, including the estimation of low-rank matrices
[17, 18] as in (1), generalized linear regression [19, 20] and inference in
multi-layer models [21]. The popularity of the AMP paradigm stems from its
attractive features: _(i)_ AMP can be tailored to take advantage of structural
information about the signal, in the form of a Bayesian prior; _(ii)_ the AMP
performance in the high-dimensional limit is precisely characterized by a low-
dimensional deterministic recursion known as state evolution [22, 23]; and
_(iii)_ using state evolution, it has been proved that AMP achieves Bayes-
optimal performance in a number of settings [24, 18] and, even when
information-theoretic limits are not met, AMP remains optimal among a vast
class of efficient algorithms [25, 26].
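As a concrete example of feature _(ii)_, for the classical spiked Wigner model $Y=\sqrt{\lambda/N}\,XX^{\intercal}+W$ with i.i.d. Gaussian noise and a unit-variance Gaussian prior (an assumed normalization, illustrative only and not the structured-noise setting of this paper), the state evolution reduces to the scalar recursion $q_{t+1}=\lambda q_{t}/(1+\lambda q_{t})$:

```python
def state_evolution(lam, iters=300, q0=1e-8):
    """Scalar state evolution for Bayes-optimal AMP on the spiked Wigner
    model with a unit-variance Gaussian prior (assumed normalization,
    illustrative only): q_{t+1} = lam*q_t / (1 + lam*q_t). The overlap
    converges to max(0, 1 - 1/lam)."""
    q = q0
    for _ in range(iters):
        q = lam * q / (1.0 + lam * q)
    return q
```

The fixed point $\max(0,1-1/\lambda)$ reproduces, in this simple case, the BBP-like transition at $\lambda=1$ discussed above.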
However, most theoretical studies on low-rank matrix estimation are limited by
an _i.i.d._ (independently and identically distributed) hypothesis on the
noise matrix components. In this setting, the fundamental limits of inference
are well understood [27], and they are achieved by an AMP algorithm, unless
there is a statistical-to-computational gap. While some of the results on AMP
can be generalized to the broader class of i.i.d. sub-Gaussian matrices via
universality arguments [28, 29], the i.i.d. assumption is rather limiting:
these models remain structureless, and no concrete correlations can appear in
the data matrices. A way to relax the identicality assumption was proposed in
the mathematical physics literature of spin glasses, in the context of the
Sherrington-Kirkpatrick model. Specifically, the authors of [30, 31], and later
[32, 33, 34, 35], consider random couplings whose variances depend on the
index labeling the coupled sites. This idea also appeared earlier in the
context of inference, under the name of spatial coupling [36, 37, 38]. Yet, in
the mentioned works the independence hypothesis still remains crucial.
In the seminal papers [39, 40, 41], the authors considered instead a class of
_rotationally invariant matrices_ , which break the independence between the
elements of the coupling matrices, leaving a model which is still tractable.
The number of works in this setting, or similar ones, see for instance [42,
43, 44] for spin glasses, and [45, 46, 47, 48, 49, 50] in inference, shows a
growing interest towards the topic. Even if the performance of spectral PCA
can be predicted with a fairly generic additive rotationally invariant noise
(see e.g. [11]), establishing how to also factor in the prior information in
the inference procedure, as well as characterizing the information-theoretic
limits, has proven to be significantly more challenging.
The recent paper [51] takes a step forward by revealing that, in order to
achieve the information-theoretic limits, it is necessary to apply a peculiar
pre-processing to the data that depends on the type of correlations in the
noise. Despite the new mechanism pinpointed by [51], the analysis has remained
limited to certain classes of noise distributions, until now. The goal of
the present paper is precisely to elaborate a concise theory, and to formulate
implementable algorithms, that can be tailored to treat _any_ kind of
rotationally invariant noise coming from a trace ensemble, i.e., whose
distribution is encoded in a matrix potential.
We finally note that, while this manuscript was in the final stage of
preparation, the concurrent paper [52] focusing on the same setting was posted
on arXiv. Specifically, [52] develops a new class of AMP algorithms, together
with a rigorous state evolution result for them. A fixed point of that AMP is
shown to match the replica predictions of [51] and, in fact, we verified that
such a fixed point matches the replica predictions we make in the present
paper as well. Therefore, the algorithm of [52], just as the one we propose
here, is conjectured to be Bayes-optimal when no statistical-to-computational
gap is present.
### I.1 Our contributions
* _i)_
Using the celebrated _replica method_ [53] and the inhomogeneous spherical
integral of [51], we compute the information-theoretic limits for low-rank
estimation in the presence of rotationally invariant noise. Specifically, we
consider the teacher-student scenario in (1), i.e., the teacher plants a
rank-1 spike matrix inside an extensive rank noise bulk, and we compute the
mutual information between the observation ${\mathbf{Y}}$, and the ground
truth signal ${\mathbf{X}}^{*}{\mathbf{X}}^{*\intercal}$.
* _ia)_
We simplify and solve numerically the fixed point equations coming from the
replica symmetric variational potential for the mutual information.
Remarkably, thanks to some inherent symmetries of the model, called Nishimori
identities, the fixed point equations, which have functions among their
unknowns, reduce to one simple scalar equation.
* _ib)_
The channel is not Gaussian, so one cannot use the usual _I-MMSE_ relation
[54] to compute the minimum mean square error (MMSE) from the mutual
information. Nevertheless, our final formula for the mutual information takes the form of a variational principle whose order parameters, at their stationary values, yield the MMSE when properly combined.
* _ii)_
We express the mutual information between data and ground truth using the
AdaTAP formalism of [43], relying on the validity of the latter in the
presence of a spike. This approach outputs the pre-processing function as a
functional of the matrix potential of the noise ensemble.
* _iii)_
We run numerical experiments supporting the consistency between the fixed
point of the TAP equations of point _ii)_ and our replica prediction for the
MMSE.
## II Setting
Let the data be constructed, conditionally on the unknown spike
${\mathbf{X}}^{*}{\mathbf{X}}^{*\intercal}$, according to (1), with noise
matrix
$\displaystyle{\mathbf{Z}}$
$\displaystyle={\mathbf{O}}^{\intercal}{\mathbf{D}}{\mathbf{O}}\sim
C_{V}\exp\Big{(}-\frac{N}{2}\text{Tr}V({\mathbf{Z}})\Big{)}d{\mathbf{Z}},$ (2)
where $d{\mathbf{Z}}=\prod_{i\leq j}^{N}dZ_{ij}$ and $Z_{ij}=Z_{ji}$ for any
$1\leq i,j\leq N$, and $C_{V}$ is a normalizing constant depending on the
matrix potential $V$. The noise matrices have $O(1)$ spectral norm (i.e., independent of $N$) and the signal vector ${\mathbf{X}}^{*}$ has i.i.d.
entries drawn from the prior distribution $P_{X}$ with second moment equal to
one. The prior thus implies $\|{\mathbf{X}}^{*}\|^{2}=N+o_{N}(1)$ with high
probability. With an abuse of notation, we use $P_{X}({\mathbf{x}})$ instead
of $P_{X}^{\otimes N}({\mathbf{x}})$. Here, and throughout the paper, any
function $f$ applied to a symmetric matrix ${\mathbf{M}}$ with
eigendecomposition
${\mathbf{M}}={\mathbf{O}}{\mathbf{D}}{\mathbf{O}}^{\intercal}$ is actually
applied only to its eigenvalues $(D_{i})_{i\leq N}$:
$f({\mathbf{M}}):={\mathbf{O}}f({\mathbf{D}}){\mathbf{O}}^{\intercal}$,
$f({\mathbf{D}})=\text{diag}(f(D_{i}))_{i\leq N}$.
We stress that, compared to the previous work [51] where the matrix potential
was restricted to be a low degree polynomial, here $V$ can be _any_ analytic
function.
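The simplest instance of the ensemble (2) is the GOE, obtained for $V(x)=x^{2}/2$. As a short numerical illustration (our own sketch, using numpy; the function name `sample_goe` is ours), one can sample it and check the $O(1)$ spectral norm and the unit second moment assumed above:

```python
import numpy as np

def sample_goe(N, rng):
    """Sample a GOE matrix scaled so its ESD converges to the
    semicircle law on [-2, 2]; this is the V(x) = x^2/2 case of
    the trace ensemble in Eq. (2)."""
    A = rng.standard_normal((N, N))
    return (A + A.T) / np.sqrt(2 * N)

rng = np.random.default_rng(0)
Z = sample_goe(2000, rng)
eigs = np.linalg.eigvalsh(Z)

# The spectral norm stays O(1): close to the semicircle edge 2.
assert abs(eigs.max() - 2.0) < 0.1
# Second moment of the semicircle law on [-2, 2] is 1.
assert abs(np.mean(eigs**2) - 1.0) < 0.05
```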
The posterior measure of the problem is given by
$dP_{X|Y}({\mathbf{x}}|{\mathbf{Y}})=\frac{C_{V}}{P_{Y}({\mathbf{Y}})}dP_{X}({\mathbf{x}})e^{-\frac{N}{2}\text{Tr}V({\mathbf{Y}}-\frac{\lambda}{N}{\mathbf{x}}{\mathbf{x}}^{\intercal})},$
(3)
where the evidence, i.e., the probability of the observations, is
$P_{Y}({\mathbf{Y}})=C_{V}\int_{\mathbb{R}^{N}}dP_{X}({\mathbf{x}})e^{-\frac{N}{2}\text{Tr}V({\mathbf{Y}}-\frac{\lambda}{N}{\mathbf{x}}{\mathbf{x}}^{\intercal})}.$
(4)
Our main object of interest is the free entropy (or minus the free energy)
$\displaystyle
f_{N}:=\frac{1}{N}\mathbb{E}_{\mathbf{Y}}\log\mathcal{Z}({\mathbf{Y}}),$ (5)
where
$\displaystyle\mathcal{Z}({\mathbf{Y}}):=\int_{\mathbb{R}^{N}}dP_{X}({\mathbf{x}})e^{-\frac{N}{2}\big{(}\text{Tr}V({\mathbf{Y}}-\frac{\lambda}{N}{\mathbf{x}}{\mathbf{x}}^{\intercal})-{\rm
Tr}V({\mathbf{Z}})\big{)}}$ (6)
and in particular its high-dimensional limit $f=\lim_{N\to\infty}f_{N}$. From
the above, we can define a Hamiltonian function
$H_{N}({\mathbf{x}},\lambda,{\mathbf{X}}^{*},{\mathbf{Z}})=H_{N}({\mathbf{x}})$
equal to
$\displaystyle\mathcal{H}_{N}({\mathbf{x}})=\frac{N}{2}\big{(}{\rm
Tr}V({\mathbf{Y}}-\frac{\lambda}{N}{\mathbf{x}}{\mathbf{x}}^{\intercal})-{\rm
Tr}V({\mathbf{Z}})\big{)}$ (7)
and interpret the problem as a spin glass model where ${\mathbf{Z}}$ and
${\mathbf{X}}^{*}$ play the role of quenched disorder. The subtraction of the
term ${\rm Tr}V({\mathbf{Z}})$ is needed to ensure that the Hamiltonian
remains of $O(N)$, and thus the free entropy stays $O(1)$ in the thermodynamic
limit $N\to\infty$. In fact,
$\displaystyle\mathcal{H}_{N}({\mathbf{x}})$
$\displaystyle=\frac{\lambda}{2}\int_{0}^{1}dt\,{\rm
Tr}V^{\prime}\big{(}{\mathbf{Z}}+\frac{t\lambda}{N}\big{(}{\mathbf{X}}^{*}{\mathbf{X}}^{*\intercal}-{\mathbf{x}}{\mathbf{x}}^{\intercal}\big{)}\big{)}$
$\displaystyle\qquad\times\big{(}{\mathbf{X}}^{*}{\mathbf{X}}^{*\intercal}-{\mathbf{x}}{\mathbf{x}}^{\intercal}\big{)}=O(N),$
(8)
since the difference between the two projectors in the last line is at most
rank two and the related eigenvalues remain $O(1)$.
The free entropy is intimately connected with the mutual information between
data and signal:
$\displaystyle I({\mathbf{X}}^{*}{\mathbf{X}}^{*\intercal};{\mathbf{Y}})$
$\displaystyle=-\mathbb{E}_{\mathbf{Y}}\log
P_{Y}({\mathbf{Y}})+\mathbb{E}_{\mathbf{Z}}\log C_{V}e^{-\frac{N}{2}{\rm
Tr}V({\mathbf{Z}})}$ $\displaystyle=-Nf_{N}.$ (9)
Finally, let us recall some basic concepts from random matrix theory. For a
square symmetric random matrix ${\mathbf{M}}\in\mathbb{R}^{N\times N}$, denote
the associated resolvent matrix
${\mathbf{G}}_{M}(z):=(zI_{N}-{\mathbf{M}})^{-1}.$ (10)
Notice that the resolvent matrix shares the same eigenvectors as
${\mathbf{M}}$. Assume that the empirical spectral density (ESD)
$\hat{\rho}_{M}^{(N)}$ of ${\mathbf{M}}$ converges weakly almost surely to a
distribution $\rho_{M}$, i.e.
$\displaystyle\hat{\rho}_{M}^{(N)}=\frac{1}{N}\sum_{i=1}^{N}\delta_{\lambda_{i}({\mathbf{M}})}\xrightarrow[w.a.s.]{N\to\infty}\rho_{M}.$
(11)
We can then define the Stieltjes transform associated with the random matrix
ensemble of ${\mathbf{M}}$:
$g_{M}(z):=\mathbb{E}_{D}\frac{1}{z-D},\quad D\sim\rho_{M}.$ (12)
$g_{M}(z)$ is well defined for $z\in\mathbb{C}$ outside the support of
$\rho_{M}$. Under the aforementioned hypothesis on the ESD, the Stieltjes
transform is closely related to the resolvent matrix through
$\lim_{N\to\infty}\frac{1}{N}\text{Tr}{\mathbf{G}}_{M}(z)=g_{M}(z).$ (13)
Denote by $\zeta_{M}(g)$ the functional inverse of $g_{M}(z)$; the R-transform of ${\mathbf{M}}$ is then given by
$R_{M}(g):=\zeta_{M}(g)-\frac{1}{g}.$ (14)
The resolvent and the R-transform play a crucial role in our analysis since
they encode all the relevant combinatorics of the random matrix ensemble. As
we shall see later, the resolvent allows us to define a new family of order
parameters, which are quadratic forms of replicas drawn from the posterior
with the resolvent mediating their product. In one shot, this new family of
overlaps encompasses all the new order parameters that were introduced in
[51], as well as additional ones.
## III Information limits through
the replica method
In this section we analyse the spiked model with generic rotationally
invariant noise using the powerful replica method from statistical physics of
disordered systems [53, 55]. This method is non-rigorous, but it is believed
to be exact for the asymptotic analysis of a broad class of spin glass,
inference and learning models. Historically, one of the first proofs of its
exactness was given for the Sherrington-Kirkpatrick model by Guerra [56] and
Talagrand [57], and later remarkably refined by Panchenko leveraging
ultrametricity [58]. Moreover, the replica symmetry assumption we are going to
employ during the analysis is intimately connected to concentration-of-measure
phenomena proven in broad generality in optimal Bayesian inference. We therefore conjecture that the analysis below leads to asymptotically exact formulas. For further discussions on the topic, we refer the reader to [59, 60, 61, 62, 63, 64, 65].
We now state our main result from the information-theoretic perspective. This
comes in the form of a variational formula for the free entropy. The physical
meaning of some of the order parameters entering these formulas is given in
the replica analysis of the next section.
###### Result (Information-theoretic limits: Replica free entropy and minimum
mean-square error).
Let $\Gamma$ be an arbitrary contour in the complex plane $\mathbb{C}$ that
encircles all eigenvalues of the matrix
${\mathbf{Y}}-{\mathbf{x}}{\mathbf{x}}^{\intercal}/N$ for any choice of
${\mathbf{x}}$ with positive measure according to the prior $P_{X}$. Let
$Z\sim\mathcal{N}(0,1)$ and $X\sim P_{X}$ as well as
$D,D_{1},D_{2}\sim\rho_{Z}$ i.i.d. from the noise asymptotic spectral density.
The replica free entropy at the replica symmetric level (which is exact in
Bayes-optimal inference) is given by
$\displaystyle f$ $\displaystyle={\rm extr}\Big{\\{}\frac{1}{4\pi
i}\oint_{\Gamma}dz\Big{[}V^{\prime}(z)\log(1+\lambda
B(z))-2\frac{\lambda\hat{B}(z)M(z)^{2}}{1-\lambda
g_{Z}(z)}+2\hat{M}(z)M(z)+2\hat{B}(z)B(z)\Big{]}$
$\displaystyle\qquad+\frac{v\hat{v}}{2}-\hat{m}m+\frac{q\hat{q}}{2}+\mathbb{E}_{Z,X}\log\int_{\mathbb{R}}dP_{X}(x)\exp\Big{(}\sqrt{\hat{q}}Zx-\frac{\hat{q}+\hat{v}}{2}x^{2}+\hat{m}Xx\Big{)}+m\bar{m}+\frac{v\bar{v}}{2}-\frac{q\bar{q}}{2}-\frac{1}{2}$
$\displaystyle\qquad-\frac{1}{2}\mathbb{E}_{D}\log(\bar{v}-\bar{q}+2\tilde{B}(D))-\frac{1}{2}\mathbb{E}_{D}\frac{\bar{q}-(\bar{q}+\tilde{M}(D))^{2}}{\bar{v}-\bar{q}+2\tilde{B}(D)}-\frac{1}{2}\log(v-q)-\frac{q-m^{2}}{2(v-q)}\Big{\\}}$
$\displaystyle\qquad+\frac{1}{4\pi
i}\oint_{\Gamma}dzV^{\prime}(z)\log(1-\lambda g_{Z}(z)),$ (15)
with an extremization w.r.t. nine scalar order parameters
$(m,q,v,\hat{m},\hat{q},\hat{v},\bar{m},\bar{q},\bar{v})$ and four functions
$(M,B,\hat{M},\hat{B})$ from $\mathbb{C}$ to $\mathbb{C}$. The extremization
selects the solution of the saddle point equations, obtained by equating to
zero the gradient of the replica potential $\\{\cdots\\}$, which maximizes it.
After simplifications of the replica saddle point equations, this can also be
written as
$\displaystyle f=\max_{\mathcal{M}^{\rm RS}}f^{\rm
RS}(m,\hat{m})-\frac{\lambda}{2}\mathbb{E}_{D}V^{\prime}(D),$ (16)
$\displaystyle f^{\rm
RS}(m,\hat{m}):=-\frac{\lambda^{2}}{2}\mathbb{E}_{D_{1},D_{2}}Q(D_{1})Q(D_{2})H(D_{1})H(D_{2})\frac{V^{\prime}(D_{1})-V^{\prime}(D_{2})}{D_{1}-D_{2}}-\frac{m^{2}}{2(1-m)}-\frac{1}{2}\log(1-m)-\frac{m}{2}$
$\displaystyle\quad+\mathbb{E}_{Z,X}\log\int_{\mathbb{R}}dP_{X}(x)\exp\Big{(}\sqrt{\hat{m}}Zx-\frac{\hat{m}}{2}x^{2}+\hat{m}Xx\Big{)}+\frac{1}{2}\mathbb{E}_{D}\log
H(D)-\frac{1}{2}\mathbb{E}_{D}\Big{(}\hat{m}-\frac{1}{1-m}-Q(D)^{2}\Big{)}H(D),$
(17)
with
$\displaystyle H(x)$
$\displaystyle:=\Big{(}\frac{1}{1-m}-\hat{m}-J(x)\Big{)}^{-1},$ (18)
$\displaystyle J(x)$ $\displaystyle:=\lambda
V^{\prime}(x)-\lambda^{2}\mathbb{E}_{D\sim\rho_{Z}}\frac{V^{\prime}(x)-V^{\prime}(D)}{x-D},$
(19)
and $Q(x)$ is the solution of
$Q(x)=\hat{m}-\frac{1}{1-m}+\lambda^{2}\mathbb{E}_{D}\frac{V^{\prime}(x)-V^{\prime}(D)}{x-D}Q(D)H(D).$
(20)
Finally, $\mathcal{M}^{\rm RS}$ represents the set of solution(s) of the
following fixed point equations:
$\displaystyle\begin{cases}\hat{m}=-R_{J({\mathbf{Z}})}(1-m),\\\
m=\mathbb{E}X\langle x\rangle_{\hat{m}},\end{cases}$ (21)
where $\langle\cdot\rangle_{\hat{m}}$ denotes the posterior of a scalar
Gaussian channel with signal-to-noise ratio $\hat{m}$:
$\langle f(x)\rangle_{\hat{m}}:=\frac{\int
dP_{X}(x)e^{\sqrt{\hat{m}}Zx+\hat{m}xX-\frac{\hat{m}}{2}x^{2}}f(x)}{\int
dP_{X}(x)e^{\sqrt{\hat{m}}Zx+\hat{m}xX-\frac{\hat{m}}{2}x^{2}}}.$ (22)
Recall that $\int dP_{X}(x)\,x^{2}=1$. Let $m_{*}$ be the value of the order
parameter $m$ picked by the above extremization, i.e., the solution of (21)
which maximizes $f^{\rm RS}(m,\hat{m})$. The asymptotic minimum mean-square
error corresponding to the Bayes-optimal estimator (i.e., the posterior mean)
$\mathbb{E}[{\mathbf{X}}^{*}{\mathbf{X}}^{*\intercal}\mid{\mathbf{Y}}]$ is
given by
$\displaystyle\lim_{N\to\infty}\frac{1}{N^{2}}\mathbb{E}\big{\|}{\mathbf{X}}^{*}{\mathbf{X}}^{*\intercal}-\mathbb{E}[{\mathbf{X}}^{*}{\mathbf{X}}^{*\intercal}\mid{\mathbf{Y}}]\big{\|}_{\rm
F}^{2}=1-m_{*}^{2}.$ (23)
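To illustrate the Result, here is a small numerical sketch (ours) of the fixed point equations (21) in the special case of GOE noise, where $-R_{J({\mathbf{Z}})}(1-m)$ evaluates to $\lambda^{2}m$, with a Rademacher prior, for which the bracket (22) gives $\langle x\rangle_{\hat{m}}=\tanh(\hat{m}X+\sqrt{\hat{m}}Z)$. The function names are ours:

```python
import numpy as np

def state_evolution(lam, prior_update, iters=200, m0=1e-3):
    """Iterate the fixed point equations (21) in the GOE special case,
    where -R_{J(Z)}(1 - m) reduces to lambda^2 * m."""
    m = m0
    for _ in range(iters):
        mhat = lam**2 * m                # first line of (21), GOE case
        # clamp: Monte Carlo noise can push m slightly negative near zero
        m = max(prior_update(mhat), 0.0)  # second line of (21)
    return m

def rademacher_update(mhat, n_mc=200_000, seed=0):
    """Monte Carlo estimate of E[X <x>_mhat] for a Rademacher prior:
    by sign symmetry this equals E[tanh(mhat + sqrt(mhat) Z)]."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal(n_mc)
    return np.mean(np.tanh(mhat + np.sqrt(mhat) * Z))

# Below the transition (lam < 1) the overlap vanishes; above it grows.
m_low = state_evolution(0.5, rademacher_update)
m_high = state_evolution(2.0, rademacher_update)
assert m_low < 0.05 and m_high > 0.8
# Asymptotic MMSE from Eq. (23).
mmse = 1 - m_high**2
assert 0 < mmse < 0.4
```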
### III.1 Analysis by the replica method
We now provide the derivation of the previous result. Before replicating the
partition function we are going to re-express it in a more amenable form. We
start by extracting the matrix entering the potential in the log-likelihood
term of the partition function using Cauchy’s formula. We will then repeatedly
use Sherman-Morrison’s formula to deal with inverses of rank-one perturbations
of matrices:
$\displaystyle\mathcal{Z}$
$\displaystyle=\int_{\mathbb{R}^{N}}dP_{X}({\mathbf{x}})e^{-\frac{N}{2}\text{Tr}\big{(}V\big{(}{\mathbf{Y}}-\frac{\lambda}{N}{\mathbf{x}}{\mathbf{x}}^{\intercal}\big{)}-{\rm
Tr}V({\mathbf{Z}})\big{)}}$ (24)
$\displaystyle=\mathbb{E}_{\mathbf{x}}e^{\frac{N}{2}{\rm
Tr}V({\mathbf{Z}})-\frac{N}{4\pi
i}\text{Tr}\oint_{\Gamma}dzV(z)\big{(}zI_{N}-{\mathbf{Y}}+\frac{\lambda}{N}{\mathbf{x}}{\mathbf{x}}^{\intercal}\big{)}^{-1}}$
$\displaystyle=\mathbb{E}_{\mathbf{x}}\exp\Big{(}\frac{N}{2}{\rm
Tr}V({\mathbf{Z}})-\frac{N}{4\pi i}\text{Tr}\oint_{\Gamma}dzV(z)$
$\displaystyle\qquad\times\Big{[}{\mathbf{G}}_{Y}(z)-\frac{\lambda}{N}\frac{{\mathbf{G}}_{Y}(z){\mathbf{x}}{\mathbf{x}}^{\intercal}{\mathbf{G}}_{Y}(z)}{1+\frac{\lambda}{N}{\mathbf{x}}^{\intercal}{\mathbf{G}}_{Y}(z){\mathbf{x}}}\Big{]}\Big{)}$
$\displaystyle=C\,\mathbb{E}_{\mathbf{x}}\exp\Big{(}\frac{N}{4\pi
i}\oint_{\Gamma}dzV(z)\frac{\lambda}{N}\frac{{\mathbf{x}}^{\intercal}{\mathbf{G}}_{Y}(z)^{2}{\mathbf{x}}}{1+\frac{\lambda}{N}{\mathbf{x}}^{\intercal}{\mathbf{G}}_{Y}(z){\mathbf{x}}}\Big{)}$
$\displaystyle=C\,\mathbb{E}_{\mathbf{x}}\exp\Big{(}\frac{N}{4\pi
i}\oint_{\Gamma}dzV^{\prime}(z)\log\Big{(}1+\frac{\lambda}{N}{\mathbf{x}}^{\intercal}{\mathbf{G}}_{Y}(z){\mathbf{x}}\Big{)}\Big{)},$
where we used $\partial_{z}{\mathbf{G}}_{Y}(z)=-{\mathbf{G}}_{Y}(z)^{2}$ and an integration by parts in the last equality. Here
$C:=\exp\Big{(}-\frac{N}{2}\big{(}\text{Tr}V({\mathbf{Y}})-{\rm
Tr}V({\mathbf{Z}})\big{)}\Big{)}$
is a multiplicative constant which we will compute separately.
We are now ready to replicate the partition function. We denote with the
replica index 0 the signal ${\mathbf{X}}^{*}={\mathbf{x}}_{0}$. We then get
that
$\displaystyle\mathbb{E}\mathcal{Z}^{n}$
$\displaystyle=C^{n}\int\prod_{a=0}^{n}dP_{X}({\mathbf{x}}_{a})$
$\displaystyle\ \ \times\exp\Big{(}\frac{N}{4\pi
i}\sum_{a=1}^{n}\oint_{\Gamma}dzV^{\prime}(z)\log(1+\lambda
B^{aa}(z))\Big{)},$
where we introduce the following order parameters for $1\leq a\leq n$, which
are generalized data-dependent self-overlap functions,
$B^{aa}(z):=\frac{1}{N}{\mathbf{x}}_{a}^{\intercal}{\mathbf{G}}_{Y}(z){\mathbf{x}}_{a}\in\mathbb{C}.$
(25)
We now introduce delta functions in their Fourier form, together with the
conjugate order parameters. Jointly denote
$\mathcal{D}[\hat{B},B]:=\prod_{a=1}^{n}\mathcal{D}[\hat{B}^{aa},B^{aa}]$ for
the differential element in functional (path) integrals. The replicated
partition function $\mathbb{E}\mathcal{Z}^{n}$ then becomes
$\displaystyle
C^{n}\int\mathcal{D}[\hat{B},B]\prod_{a=0}^{n}dP_{X}({\mathbf{x}}_{a})$
$\displaystyle\ \times\exp\Big{(}\frac{N}{4\pi i}\sum_{a\leq
n}\oint_{\Gamma}dzV^{\prime}(z)\log(1+\lambda B^{aa}(z))\Big{)}$
$\displaystyle\ \times\mathbb{E}_{\mathbf{O}}\exp\Big{(}\sum_{a\leq
n}\oint_{\Gamma}dz\hat{B}^{aa}(z)(NB^{aa}(z)-{\mathbf{x}}_{a}^{\intercal}{\mathbf{G}}_{Y}(z){\mathbf{x}}_{a})\Big{)}.$
The $z$-integral is on the contour $\Gamma$. In order to perform the quenched
average over the Haar distributed eigenbasis ${\mathbf{O}}$ of the noise, we
need to decompose explicitly the data into signal plus noise. The last term
can then be simplified using Sherman-Morrison again:
$\displaystyle\frac{{\mathbf{x}}_{a}^{\intercal}{\mathbf{G}}_{Y}(z){\mathbf{x}}_{a}}{N}\\!$
$\displaystyle=\\!\frac{1}{N}{\mathbf{x}}_{a}^{\intercal}\Big{(}{\mathbf{G}}_{Z}(z)+\frac{\lambda{\mathbf{G}}_{Z}(z){\mathbf{x}}_{0}{\mathbf{x}}_{0}^{\intercal}{\mathbf{G}}_{Z}(z)}{N(1-\frac{\lambda}{N}{\mathbf{x}}_{0}^{\intercal}{\mathbf{G}}_{Z}(z){\mathbf{x}}_{0})}\Big{)}{\mathbf{x}}_{a}$
$\displaystyle\\!=\\!\frac{{\mathbf{x}}_{a}^{\intercal}{\mathbf{G}}_{Z}(z){\mathbf{x}}_{a}}{N}+\frac{\lambda
M^{aa}(z)^{2}}{1-\lambda g_{Z}(z)},$ (26)
where we introduce the main order parameters, i.e., generalized overlaps
between replicas and the ground-truth (which, by Bayes-optimality, also
corresponds to the generalized overlap between different replicas): for $1\leq
a\leq n$,
$M^{aa}(z):=\frac{1}{N}{\mathbf{x}}_{a}^{\intercal}{\mathbf{G}}_{Z}(z){\mathbf{x}}_{0}\in\mathbb{C}.$
(27)
Note that by definition we have
$\displaystyle\lim_{N\to\infty}\frac{1}{N}{\mathbf{x}}_{0}^{\intercal}{\mathbf{G}}_{Z}(z){\mathbf{x}}_{0}=g_{Z}(z).$
(28)
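The concentration (28) is easy to verify numerically. A sketch (ours, with GOE noise and a Rademacher signal standing in for a generic prior):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 2000
A = rng.standard_normal((N, N))
Z = (A + A.T) / np.sqrt(2 * N)          # GOE noise, semicircle ESD
x0 = rng.choice([-1.0, 1.0], size=N)    # Rademacher signal, ||x0||^2 = N

z = 3.0                                  # outside the semicircle support
Gz = np.linalg.inv(z * np.eye(N) - Z)    # resolvent G_Z(z)

# Eq. (28): the quadratic form concentrates on the Stieltjes transform,
# by independence of the (isotropic) signal and the noise eigenbasis.
quad = x0 @ Gz @ x0 / N
g_semi = (z - np.sqrt(z**2 - 4)) / 2
assert abs(quad - g_semi) < 5e-2
```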
Let us make a remark concerning the generalized overlap function $M^{aa}(z)$.
By expanding in series the resolvent around $z\to+\infty$, we realize that it
corresponds to the generating function for an infinite family of scalar
overlaps
$(\frac{1}{N}{\mathbf{x}}_{a}^{\intercal}{\mathbf{Z}}^{k}{\mathbf{x}}_{0})_{k\geq
0}$. A similar observation can be made for $B^{aa}(z)$ which encodes
$(\frac{1}{N}{\mathbf{x}}_{a}^{\intercal}{\mathbf{Z}}^{k}{\mathbf{x}}_{a})_{k\geq
0}$. The first few of these overlaps are the order parameters identified in
[51]. We note that the analysis of [51] is restricted to low-degree
polynomials for the potential $V$. This is because the number of order parameters, and therefore the number of replica saddle point equations, grows with the degree of the polynomial, which quickly leads to intractable and uninterpretable formulas. However, by identifying these generating-function order parameters, we can easily encode such infinite families of scalars and write down compact equations. This is one key mechanism that
allows us to treat generic potential functions $V$. Similar ideas have been
used to study gradient-flow dynamics in [66, 67, 68, 69].
We consider a replica symmetric ansatz: for all $1\leq a\leq n$, we set
${\rm RS\
ansatz}\mbox{:}\quad\begin{cases}(B^{aa}(z),\hat{B}^{aa}(z))=(B(z),\hat{B}(z)),\\\
(M^{aa}(z),\hat{M}^{aa}(z))=(M(z),\hat{M}(z)).\end{cases}$
This implies the following simplifications for the replicated partition
function: letting $\mathcal{D}[\cdots]=\mathcal{D}[\hat{B},B,\hat{M},M]$, we
have
$\displaystyle\mathbb{E}\mathcal{Z}^{n}$
$\displaystyle=C^{n}\int\mathcal{D}[\cdots]\exp\Big{(}\frac{Nn}{4\pi
i}\oint_{\Gamma}dzV^{\prime}(z)\log(1+\lambda B(z))$ $\displaystyle\qquad-4\pi
i\frac{\lambda\hat{B}(z)M(z)^{2}}{1-\lambda
g_{Z}(z)}+Nn\oint_{\Gamma}dz\hat{M}(z)M(z)$
$\displaystyle\qquad+Nn\oint_{\Gamma}dz\hat{B}(z)B(z)+NnI^{\rm RS}\Big{)},$
(29)
where we also define
$\displaystyle\exp(nNI^{\rm RS})$
$\displaystyle:=\mathbb{E}_{\mathbf{O}}\exp\Big{(}-\sum_{a\leq
n}\oint_{\Gamma}dz\Big{(}\hat{B}(z){\mathbf{x}}_{a}^{\intercal}{\mathbf{G}}_{Z}(z){\mathbf{x}}_{a}$
$\displaystyle\qquad+\hat{M}(z){\mathbf{x}}_{0}^{\intercal}{\mathbf{G}}_{Z}(z){\mathbf{x}}_{a}\Big{)}\Big{)}.$
(30)
The above integral $I^{\rm RS}$ is an instance of the _inhomogeneous spherical
integral_ defined and analysed in [51]. A key property of this integral is
that it depends on the replicas only through their overlap structure which,
under a replica symmetric ansatz, reads
$\displaystyle{\rm RS\
ansatz}\mbox{:}\quad\begin{cases}{\mathbf{x}}_{a}^{\intercal}{\mathbf{x}}_{b}/N=q,\quad
1\leq a<b\leq n,\\\ {\mathbf{x}}_{0}^{\intercal}{\mathbf{x}}_{a}/N=m,\quad
1\leq a\leq n,\\\ {\mathbf{x}}_{a}^{\intercal}{\mathbf{x}}_{a}/N=v,\quad 1\leq
a\leq n.\end{cases}$ (31)
Let us define
$\displaystyle\tilde{M}(x)$ $\displaystyle:=\frac{1}{2\pi
i}\oint_{\Gamma}\frac{\hat{M}(z)dz}{z-x},\quad\tilde{B}(x):=\frac{1}{2\pi
i}\oint_{\Gamma}\frac{\hat{B}(z)dz}{z-x}.$ (32)
It is important to note that in general $\tilde{B}(x)\neq\hat{B}(x)$ and $\tilde{M}(x)\neq\hat{M}(x)$, because $\hat{B}$ and $\hat{M}$ might not be holomorphic. Then, the result of the inhomogeneous spherical integral reads
$\displaystyle I^{\rm
RS}(q,m,v,\tilde{B},\tilde{M})=\text{extr}_{(\bar{m},\bar{v},\bar{q})}\Big{\\{}m\bar{m}+\frac{v\bar{v}}{2}-\frac{q\bar{q}}{2}$
$\displaystyle\quad-\frac{1}{2}\mathbb{E}_{D}\log(\bar{v}-\bar{q}+2\tilde{B}(D))-\frac{1}{2}\mathbb{E}_{D}\frac{\bar{q}-(\bar{q}+\tilde{M}(D))^{2}}{\bar{v}-\bar{q}+2\tilde{B}(D)}\Big{\\}}$
$\displaystyle\quad-\frac{1}{2}-\frac{1}{2}\log(v-q)-\frac{q-m^{2}}{2(v-q)}+O(n).$
(33)
Now, fixing the overlap definitions using additional delta functions in
Fourier form, under the same replica ansatz for the Fourier conjugates, we
reach
$\displaystyle\mathbb{E}\mathcal{Z}^{n}$
$\displaystyle=C^{n}\int\mathcal{D}[\cdots]\exp\Big{(}\frac{Nn}{4\pi
i}\oint_{\Gamma}dzV^{\prime}(z)\log(1+\lambda B(z))-4\pi
i\frac{\lambda\hat{B}(z)M(z)^{2}}{1-\lambda g_{Z}(z)}$
$\displaystyle\qquad+Nn\oint_{\Gamma}dz\hat{M}(z)M(z)+Nn\oint_{\Gamma}dz\hat{B}(z)B(z)+NnI^{RS}(q,m,v,\tilde{B},\tilde{M})+\frac{Nn}{2}v\hat{v}-Nnm\hat{m}$
$\displaystyle\qquad-\frac{Nn(n-1)}{2}q\hat{q}+N\log\int_{\mathbb{R}^{n+1}}\prod_{a=0}^{n}dP_{X}(x_{a})\exp\Big{(}-\frac{\hat{v}}{2}\sum_{a=1}^{n}x_{a}^{2}+\hat{m}\sum_{a=1}^{n}x_{0}x_{a}+\hat{q}\sum_{1\leq
a<b\leq n}x_{a}x_{b}\Big{)}\Big{)}.$ (34)
In order to decouple the replicas in the last term we use a Hubbard-Stratonovich transform: with $Z\sim\mathcal{N}(0,1)$,
$\displaystyle\mathbb{E}_{(x_{a})}e^{-\frac{\hat{v}}{2}\sum_{a=1}^{n}x_{a}^{2}+\hat{m}\sum_{a=1}^{n}x_{0}x_{a}+\hat{q}\sum_{1\leq
a<b\leq n}x_{a}x_{b}}$
$\displaystyle\quad=\mathbb{E}_{Z,x_{0}}\big{(}\mathbb{E}_{x}e^{-\frac{\hat{v}+\hat{q}}{2}x^{2}+\hat{m}x_{0}x+\sqrt{\hat{q}}Zx}\big{)}^{n}.$
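The identity underlying this step is the Gaussian linearization $e^{t^{2}/2}=\mathbb{E}_{Z}e^{tZ}$ applied to $t=\sqrt{\hat{q}}\sum_{a}x_{a}$. A quick Monte Carlo sanity check (ours):

```python
import numpy as np

rng = np.random.default_rng(3)
qhat, n = 0.3, 4
x = rng.choice([-1.0, 1.0], size=n)      # a few replica variables

# Left side: the coupled replica term exp(qhat * sum_{a<b} x_a x_b).
s_pairs = sum(x[a] * x[b] for a in range(n) for b in range(a + 1, n))
lhs = np.exp(qhat * s_pairs)

# Right side: decouple with a Gaussian field Z via
# exp(t^2/2) = E_Z exp(t Z) with t = sqrt(qhat) * sum_a x_a,
# at the price of the single-replica term exp(-qhat/2 * sum_a x_a^2).
Z = rng.standard_normal(2_000_000)
t = np.sqrt(qhat) * x.sum()
rhs = np.mean(np.exp(t * Z)) * np.exp(-qhat / 2 * np.sum(x**2))

assert abs(lhs - rhs) / lhs < 0.05
```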
The final steps are an integration with respect to the order parameters by
saddle point, followed by an application of the replica trick (assuming
commutation of thermodynamic and replica limits for the saddle point
integration):
$\displaystyle\lim_{N\to\infty}\frac{1}{N}\mathbb{E}\ln\mathcal{Z}$
$\displaystyle=\lim_{N\to\infty}\lim_{n\to
0}\frac{1}{Nn}\ln\mathbb{E}\mathcal{Z}^{n}$ $\displaystyle=\lim_{n\to
0}\lim_{N\to\infty}\frac{1}{Nn}\ln\mathbb{E}\mathcal{Z}^{n}.$ (35)
This finally gives the average limiting free entropy expression (15) after
having changed variables as follows $(2\pi i\hat{M},2\pi
i\hat{B})\to(\hat{M},\hat{B})$.
To conclude, let us compute the contribution related to the constant $C$. One
may think it is irrelevant; however, it is a function of the SNR and contributes to the value of the mutual information and to some of its fundamental properties (for instance, monotonicity and concavity). We can again use Cauchy’s integral representation and the Sherman-Morrison inversion formula for $V({\mathbf{Y}})$:
$\displaystyle\frac{1}{N}\mathbb{E}\log C$
$\displaystyle=-\frac{1}{2}\mathbb{E}{\rm
Tr}\big{(}V({\mathbf{Y}})-V({\mathbf{Z}})\big{)}$
$\displaystyle=-\frac{1}{4\pi i}\mathbb{E}{\rm
Tr}\oint_{\Gamma}dzV(z)\big{(}{\mathbf{G}}_{Y}(z)-{\mathbf{G}}_{Z}(z)\big{)}$
$\displaystyle=-\frac{1}{4\pi
i}\mathbb{E}\oint_{\Gamma}dzV(z)\frac{\lambda}{N}\frac{{\mathbf{x}}_{0}^{\intercal}{\mathbf{G}}^{2}_{Z}(z){\mathbf{x}}_{0}}{(1-\frac{\lambda}{N}{\mathbf{x}}_{0}^{\intercal}{\mathbf{G}}_{Z}(z){\mathbf{x}}_{0})},$
which in the limit converges to
$\displaystyle\frac{1}{N}\mathbb{E}\log C$ $\displaystyle\to\frac{1}{4\pi
i}\oint_{\Gamma}dzV(z)\frac{\lambda\partial_{z}g_{Z}(z)}{(1-\lambda
g_{Z}(z))}$ $\displaystyle=\frac{1}{4\pi
i}\oint_{\Gamma}dzV^{\prime}(z)\log(1-\lambda g_{Z}(z))\,.$ (36)
### III.2 Simplifying the saddle point equations
In this section we show how to go from the variational formulation (15) for
the free entropy, to a simpler formula (16), (17) with only two order
parameters. Before giving the complete set of saddle point equations derived
from (15), we stress that the physical meaning of some order parameters makes
it possible to fix directly their values to their expectation (assuming
concentration), obtainable using the Nishimori identities, see [70,
Proposition 15] for a proof.
Nishimori identity. For any bounded function $f$ of the signal
${\mathbf{X}}^{*}$, the data ${\mathbf{Y}}$ and of conditionally i.i.d.
samples from the posterior ${\mathbf{x}}^{j}\sim P_{X\mid
Y}(\,\cdot\mid{\mathbf{Y}})$, $j=1,2,\ldots,n$, we have that
$\displaystyle\mathbb{E}\langle
f({\mathbf{Y}},{\mathbf{X}}^{*},{\mathbf{x}}^{2},\ldots,{\mathbf{x}}^{n})\rangle=\mathbb{E}\langle
f({\mathbf{Y}},{\mathbf{x}}^{1},{\mathbf{x}}^{2},\ldots,{\mathbf{x}}^{n})\rangle,$
(37)
where the bracket notation $\langle\,\cdot\,\rangle$ is used for the joint
expectation over the posterior samples $({\mathbf{x}}^{j})_{j\leq n}$, and
$\mathbb{E}$ is over the signal ${\mathbf{X}}^{*}$ and data ${\mathbf{Y}}$.
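The identity can be checked by Monte Carlo on the scalar Gaussian channel of (22), taking $f=x^{1}x^{2}$: the signal-replica overlap must match the replica-replica overlap. A sketch (ours, with a Rademacher signal):

```python
import numpy as np

rng = np.random.default_rng(4)
snr, n_mc = 1.5, 500_000

# Scalar Gaussian channel with Rademacher signal: the posterior mean is
# <x> = tanh(snr * X + sqrt(snr) * Z), matching the bracket in Eq. (22).
X = rng.choice([-1.0, 1.0], size=n_mc)
Z = rng.standard_normal(n_mc)
post_mean = np.tanh(snr * X + np.sqrt(snr) * Z)

# Nishimori identity (37) with f = x^1 x^2: the signal-replica overlap
# E[X <x>] equals the replica-replica overlap E[<x>^2].
lhs = np.mean(X * post_mean)
rhs = np.mean(post_mean**2)
assert abs(lhs - rhs) < 5e-3
```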
To begin with, recall that we fixed $v$ to be the squared norm of a sample
from the posterior re-scaled by the number of components. Assume that
concentration effects take place, i.e. that the order parameters of the
problem are limiting values of self-averaging quantities, as they should in
this optimal setting [62], and denote
$\langle f(x)\rangle:=\frac{\int
dP_{X}(x)e^{\sqrt{\hat{q}}Zx+\hat{m}xX-\frac{\hat{q}+\hat{v}}{2}x^{2}}f(x)}{\int
dP_{X}(x)e^{\sqrt{\hat{q}}Zx+\hat{m}xX-\frac{\hat{q}+\hat{v}}{2}x^{2}}}.$ (38)
Using the Nishimori identity, we have that
$v=\lim_{N\to\infty}\frac{1}{N}\mathbb{E}\langle\|{\mathbf{x}}\|^{2}\rangle=\lim_{N\to\infty}\frac{1}{N}\mathbb{E}\|{\mathbf{X}}^{*}\|^{2}=1.$
(39)
We have $\hat{v}=0$ because by Bayes-optimality the constraint $v=1$ is
already enforced by the prior without the need of a delta constraint. The
Nishimori identity also imposes
$m=q.$ (40)
Moreover, $B(z)$ is also fixed by the Nishimori identity (below $N$ is large
and equalities are understood up to a vanishing correction as $N\to\infty$):
$\displaystyle B(z)$
$\displaystyle=\frac{1}{N}\mathbb{E}\langle{\mathbf{x}}^{\intercal}{\mathbf{G}}_{Y}(z){\mathbf{x}}\rangle$
$\displaystyle=\frac{1}{N}\mathbb{E}{\mathbf{X}}^{*\intercal}{\mathbf{G}}_{Y}(z){\mathbf{X}}^{*}$
$\displaystyle=\frac{1}{N}\mathbb{E}{\mathbf{X}}^{*\intercal}\Big{[}{\mathbf{G}}_{Z}(z)+\frac{\lambda}{N}\frac{{\mathbf{G}}_{Z}(z){\mathbf{X}}^{*}{\mathbf{X}}^{*\intercal}{\mathbf{G}}_{Z}(z)}{1-\frac{\lambda}{N}{\mathbf{X}}^{*\intercal}{\mathbf{G}}_{Z}(z){\mathbf{X}}^{*}}\Big{]}{\mathbf{X}}^{*}$
$\displaystyle=g_{Z}(z)+\frac{\lambda g_{Z}(z)^{2}}{1-\lambda
g_{Z}(z)}=\frac{g_{Z}(z)}{1-\lambda g_{Z}(z)},$ (41)
where we used the Nishimori identity in the second equality, Sherman-Morrison
in the third one, and
$\frac{1}{N}\mathbb{E}{\mathbf{X}}^{*\intercal}{\mathbf{G}}_{Z}(z){\mathbf{X}}^{*}=g_{Z}(z)$
in the fourth (by independence of the signal and noise).
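The chain of identities (41) lends itself to a direct numerical check; a sketch (ours, with GOE noise, a Rademacher signal, and $z$ outside the bulk):

```python
import numpy as np

rng = np.random.default_rng(5)
N, lam, z = 2000, 0.5, 3.0
A = rng.standard_normal((N, N))
Zn = (A + A.T) / np.sqrt(2 * N)          # GOE noise
x0 = rng.choice([-1.0, 1.0], size=N)     # signal, ||x0||^2 = N
Y = Zn + lam / N * np.outer(x0, x0)      # data, Eq. (1)

GY = np.linalg.inv(z * np.eye(N) - Y)
B_emp = x0 @ GY @ x0 / N                 # B(z) on the Nishimori line

# Eq. (41): B(z) = g_Z(z) / (1 - lam * g_Z(z)), semicircle closed form.
g = (z - np.sqrt(z**2 - 4)) / 2
assert abs(B_emp - g / (1 - lam * g)) < 5e-2
```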
We now state the complete set of saddle point equations obtained by cancelling
the gradient of the replica free entropy potential $\\{\cdots\\}$ in (15)
w.r.t. the order parameters. The parameter w.r.t. which the gradient is computed in order to obtain a given saddle point equation is reported in round parentheses. Let
$\displaystyle H(x)$ $\displaystyle:=(\bar{v}-\bar{q}+2\tilde{B}(x))^{-1},$
(42) $\displaystyle R(x)$
$\displaystyle:=\bar{q}-v(\bar{m}+\tilde{M}(x))^{2}.$ (43)
Let $D\sim\rho_{Z}$ be drawn from the spectral distribution of the noise. Then
the saddle point equations read
$\displaystyle(\hat{m}):\ m=\mathbb{E}X\langle x\rangle$
$\displaystyle(\hat{q}):\ q=\mathbb{E}\langle x\rangle^{2}$
$\displaystyle(\hat{v}):\ v=\mathbb{E}\langle x^{2}\rangle=1$
$\displaystyle(\bar{m}):\ m=-\mathbb{E}_{D}(\bar{m}+\hat{M}(D))H(D)$
$\displaystyle(\bar{q}):\ q=\mathbb{E}_{D}H(D)^{2}R(D)$
$\displaystyle(\bar{v}):\ v=\mathbb{E}_{D}H(D)(1-H(D)R(D))$
$\displaystyle(m):\ -\hat{m}+\bar{m}+\frac{m}{1-q}=0$ $\displaystyle(q):\
\hat{q}-\bar{q}=\frac{q}{1-q}$ $\displaystyle(v):\ \bar{v}=1$
$\displaystyle(\hat{B}):\ B(z)-\frac{\lambda M(z)^{2}}{1-\lambda g_{Z}(z)}$
$\displaystyle\qquad\qquad=-\mathbb{E}_{D}\Big{[}\frac{1}{D-z}(H(D)-R(D)H(D)^{2})\Big{]}$
$\displaystyle(\hat{M}):\
M(z)=\mathbb{E}_{D}\Big{[}\frac{1}{D-z}(\bar{m}+\tilde{M}(D))H(D)\Big{]}$
$\displaystyle(B):\ \frac{\lambda V^{\prime}(z)}{1+\lambda
B(z)}+2\hat{B}(z)=0$ $\displaystyle(M):\
-\frac{2\lambda\hat{B}(z)M(z)}{1-\lambda g_{Z}(z)}+\hat{M}(z)=0,$
where we used $v=1$, $\hat{v}=0$.
We can now simplify. Firstly, from $(\hat{m})$, $(\hat{q})$ and (40), we have
$m=q=\mathbb{E}X\langle x\rangle$ (44)
and $\hat{m}=\hat{q}$. Then, from $(m)$ and $(q)$, we have
$\bar{q}=\bar{m}=\hat{m}-\frac{m}{1-m}.$ (45)
From $(\bar{q})$ and $(\bar{v})$, we have
$m=1-\mathbb{E}_{D}H(D).$ (46)
From $(B)$ and $(M)$, we have
$\displaystyle\hat{B}(z)$
$\displaystyle=-\frac{\lambda}{2}V^{\prime}(z)(1-\lambda g_{Z}(z)),$ (47)
$\displaystyle\hat{M}(z)$ $\displaystyle=-\lambda^{2}V^{\prime}(z)M(z).$ (48)
Let us keep in mind that $\hat{B}(z)$ and $\hat{M}(z)$ are in general not holomorphic.
The complex integrals in $\tilde{B}$ and $\tilde{M}$ can be performed by the
residue theorem.
$\displaystyle\tilde{B}(x)$ $\displaystyle=\frac{1}{2\pi
i}\oint_{\Gamma}\frac{dz}{z-x}\Big{[}-\frac{\lambda}{2}V^{\prime}(z)(1-\lambda
g_{Z}(z))\Big{]}$ $\displaystyle=-\frac{\lambda}{2}V^{\prime}(x)+\frac{1}{4\pi
i}\oint_{\Gamma}\frac{dz}{z-x}\Big{[}\lambda^{2}\mathbb{E}_{D}\frac{V^{\prime}(z)}{D-z}\Big{]}$
$\displaystyle=-\frac{\lambda}{2}V^{\prime}(x)+\frac{\lambda^{2}}{2}\mathbb{E}_{D}\frac{V^{\prime}(x)-V^{\prime}(D)}{x-D},$
(49)
and similarly for the other function
$\displaystyle\tilde{M}(x)$ $\displaystyle=-\frac{1}{2\pi
i}\oint_{\Gamma}\frac{dz}{z-x}\lambda^{2}V^{\prime}(z)M(z)$
$\displaystyle=\frac{1}{2\pi
i}\oint_{\Gamma}\frac{dzV^{\prime}(z)}{z-x}\mathbb{E}\Big{[}\frac{\lambda^{2}}{z-D}(\bar{m}+\tilde{M}(D))H(D)\Big{]}$
$\displaystyle=\lambda^{2}\mathbb{E}_{D}\frac{V^{\prime}(x)-V^{\prime}(D)}{x-D}(\bar{m}+\tilde{M}(D))H(D),$
(50)
where we used $(\hat{M})$, (47) and (48).
Finally, let us denote
$J(x):=-2\tilde{B}(x),$ (51)
which leads to (19) according to (49). Recall that $\bar{v}=1$ according to
$(v)$. Then, the definition of $H(x)$ and (45) lead to (18). Combining (18)
with (46) gives
$\hat{m}=-R_{J({\mathbf{Z}})}(1-m).$ (52)
This forms the fixed point equations together with
$m=\mathbb{E}X\langle x\rangle_{\hat{m}},$ (53)
after noticing that the posterior mean (38) simplifies to (22) when
$\hat{q}=\hat{m}$ and $\hat{v}=0$. The above analysis gives the simplified
saddle point equations (21).
To obtain (17), we simply represent all order parameters through $m$,
$\hat{m}$ and $Q(x):=\bar{m}+\tilde{M}(x)$, while (20) is obtained from (50).
We also simplify two contour integrals as follows. The first integral is
$\displaystyle-\frac{1}{2\pi i}\oint_{\Gamma}dz\hat{M}(z)M(z)$
$\displaystyle=\frac{1}{2\pi
i}\oint_{\Gamma}dz\lambda^{2}V^{\prime}(z)M(z)^{2}$
$\displaystyle=\frac{1}{2\pi i}\oint_{\Gamma}dz\lambda^{2}V^{\prime}(z)\mathbb{E}_{D_{1},D_{2}}\frac{Q(D_{1})Q(D_{2})H(D_{1})H(D_{2})}{(D_{1}-z)(D_{2}-z)}$
$\displaystyle=\lambda^{2}\mathbb{E}Q(D_{1})Q(D_{2})H(D_{1})H(D_{2})\frac{V^{\prime}(D_{1})-V^{\prime}(D_{2})}{D_{1}-D_{2}},$
with $D_{1},D_{2}$ i.i.d. from $\rho_{Z}$ and where we have used (48),
$(\hat{M})$ and the residue theorem. The second integral is
$\displaystyle\frac{1}{2\pi i}\oint_{\Gamma}dz\hat{B}(z)B(z)$
$\displaystyle=-\frac{\lambda}{4\pi i}\oint_{\Gamma}dzV^{\prime}(z)g_{Z}(z)$
$\displaystyle=-\frac{\lambda}{2}\mathbb{E}_{D}V^{\prime}(D),$ (54)
where we have used (47) and the residue theorem.
Finally, notice that at the saddle point, specifically using (41), the first
term in the free entropy (15) is precisely
$\displaystyle-\frac{1}{4\pi i}\oint_{\Gamma}dzV^{\prime}(z)\log(1-\lambda
g_{Z}(z)),$ (55)
which cancels with the constant evaluated in (III.1).
### III.3 Relation to a Gaussian surrogate model
Notice that the replica saddle point equations (as well as the TAP equations defined in the next section) are closely related to those appearing in the Gaussian noise case. In fact, the replica saddle point equations for Gaussian noise (with SNR $\tilde{\lambda}$) read as follows:
$\hat{m}=\tilde{\lambda}^{2}m,\qquad m=\mathbb{E}X\langle x\rangle_{\hat{m}}.$
(56)
Therefore, by choosing
$\tilde{\lambda}=\sqrt{-R_{J({\mathbf{Z}})}(1-m_{*})/m_{*}},$
where $m_{*}$ is the value of $m$ at the extremizer of (21), the Gaussian model
and the rotationally invariant model share the same fixed point (and thus the
same minimum mean-square error, but not necessarily the same mutual
information).
## IV Thouless-Anderson-Palmer free entropy and equations
Along the lines of [51], we employ here the adaTAP approach [43] as an
alternative to the replica method. AdaTAP offers the advantage of expressing
the free entropy as a variational principle over an extensive number of
parameters, which can be interpreted as site marginal means and variances for
every variable in the system, namely signal components in our setting.
Strictly speaking, before our work, the validity of this approach was verified
only for spin glass models containing at most two-body interactions, mediated
by rotationally invariant matrices. In contrast, our model is not quadratic,
but the precise point of the first steps of the replica computation is to make
it quadratic. Hence, we take again as a starting point (recall that notation
$\mathbb{E}_{\mathbf{x}}$ means integration against the prior $P_{X}^{\otimes
N}$):
$\displaystyle\mathcal{Z}\propto\mathbb{E}_{\mathbf{x}}\exp\Big{(}\frac{N}{4\pi
i}\oint_{\Gamma}dzV^{\prime}(z)\log(1+\frac{\lambda}{N}{\mathbf{x}}^{\intercal}{\mathbf{G}}_{Y}(z){\mathbf{x}})\Big{)}$
$\displaystyle\propto\mathbb{E}_{\mathbf{x}}\int\mathcal{D}[B,i\hat{B}]\exp\Big{(}\frac{N}{4\pi
i}\oint_{\Gamma}dzV^{\prime}(z)\log(1+\lambda B(z))\Big{)}$
$\displaystyle\qquad\times\exp\Big{(}\oint_{\Gamma}\hat{B}(z)(NB(z)-{\mathbf{x}}^{\intercal}{\mathbf{G}}_{Y}(z){\mathbf{x}})dz\Big{)}$
$\displaystyle=\int\mathcal{D}[B,i\hat{B}]\exp\Big{(}\frac{N}{4\pi
i}\oint_{\Gamma}dzV^{\prime}(z)\log(1+\lambda B(z))$
$\displaystyle\qquad+N\oint_{\Gamma}\hat{B}(z)B(z)dz\Big{)}\mathbb{E}_{\mathbf{x}}\exp\Big{(}\frac{1}{2}{\mathbf{x}}^{\intercal}J({\mathbf{Y}}){\mathbf{x}}\Big{)},$
(57)
where $J({\mathbf{Y}})=-2\oint_{\Gamma}\hat{B}(z){\mathbf{G}}_{Y}(z)dz$ ends
up being equal to (19). The last factor in (57) is precisely a two-body
(quadratic) model, whose partition function is therefore computable via the
adaTAP approach. Define
$\displaystyle\varphi({\mathbf{Y}}):=\log\mathbb{E}_{\mathbf{x}}\exp\Big{(}\frac{1}{2}{\mathbf{x}}^{\intercal}J({\mathbf{Y}}){\mathbf{x}}\Big{)}.$
(58)
Then, following Opper and Winther’s prescription, the TAP representation of
the free entropy $\varphi({\mathbf{Y}})$ reads
$\displaystyle\varphi_{\rm TAP}({\mathbf{Y}})$
$\displaystyle=\sum_{i=1}^{N}\Big{[}\lambda_{i}m_{i}+\frac{\gamma_{i}}{2}(\sigma_{i}+m_{i}^{2})+\frac{c_{i}\sigma_{i}-\log\sigma_{i}-1}{2}$
$\displaystyle+\frac{1}{2}{\mathbf{m}}^{\intercal}J({\mathbf{Y}}){\mathbf{m}}+\log\int
dP_{X}(x)e^{-\lambda_{i}x-\frac{\gamma_{i}}{2}x^{2}}\Big{]}$
$\displaystyle-\frac{1}{2}\log\det(\text{diag}(\mathbf{c})-J({\mathbf{Y}}))$
(59)
where ${\mathbf{m}}=(m_{i})_{i\leq N},\mathbf{c}=(c_{i})_{i\leq N}$ and an
extremization w.r.t. the parameters
$\lambda_{i},m_{i},\gamma_{i},\sigma_{i},c_{i}$ is intended. Since we are
interested only in leading terms, we can carry out some simplifications of the
above.
First of all, a common assumption in the thermodynamic limit (see [43, 71]) is
that of homogeneous variances $\sigma_{i}=\sigma$ together with
$\gamma_{i}=\gamma$, which in turn yields $c_{i}=c$. Let us now focus on the
determinant term in $\varphi_{\rm TAP}$, which is supposed to reconstruct the
_Onsager reaction term_ in the TAP equations. We argue that at leading order
it does not depend on the spike, nor on the specific realization of
${\mathbf{Z}}$. The leading contribution is determined just by the spectral
distribution of ${\mathbf{Z}}$. Assume $J({\mathbf{Y}})$ is a regular enough
non-linearity applied to the eigenvalues of a matrix whose spectrum consists
of a bulk of eigenvalues inherited by ${\mathbf{Z}}$, plus possibly one spike
detached from the mentioned bulk. The non-linearity changes the shape of the
spectrum, but it preserves the bulk-plus-spike structure. A spike of one or a
few eigenvalues cannot alter the spectral distribution of the overall matrix.
From these considerations we get
$\displaystyle\frac{1}{2}\log\det(\text{diag}(\mathbf{c})-J({\mathbf{Y}}))$
$\displaystyle\simeq\frac{1}{2}\log\det(cI_{N}-J({\mathbf{Z}}))$
$\displaystyle\simeq\frac{N}{2}\mathbb{E}\log(c-J(D)),$ (60)
where $\mathbb{E}$ is intended over $D$, distributed according to the
asymptotic spectral density of the noise. Hence, the TAP representation of the
overall free entropy of the model at leading order in $N$ reads (equalities
are up to a constant and a vanishing correction $o_{N}(1)$):
$\displaystyle Nf_{\rm TAP}$
$\displaystyle=\sum_{i=1}^{N}\Big{[}\lambda_{i}m_{i}+\frac{\gamma}{2}(m_{i}^{2}+\sigma)+\frac{c\sigma-1-\log\sigma}{2}$
$\displaystyle+\log\int
dP_{X}(x)e^{-\lambda_{i}x-\frac{\gamma}{2}x^{2}}\Big{]}-\frac{N}{2}\mathbb{E}\log(c-J(D))$
$\displaystyle+\frac{1}{2}{\mathbf{m}}^{\intercal}J({\mathbf{Y}}){\mathbf{m}}+\frac{N}{4\pi
i}\oint_{\Gamma}dzV^{\prime}(z)\log(1+\lambda B(z))$
$\displaystyle+N\oint_{\Gamma}\hat{B}(z)B(z)dz.$ (61)
Again, extremization is intended w.r.t. $\lambda_{i},m_{i},\gamma,\sigma,c$
and the two functions $\hat{B},B$. As anticipated, extremizing w.r.t. $B$ and
$\hat{B}$ only results in matching the coupling matrix $J({\mathbf{Y}})$ with
the pre-processed matrix using (19).
We can now write the remaining TAP equations. Define for future convenience
the Bayes “denoiser”
$\displaystyle\eta(a,b):=\frac{\int dP_{X}(x)e^{ax-\frac{bx^{2}}{2}}\,x}{\int
dP_{X}(y)e^{ay-\frac{by^{2}}{2}}}.$ (62)
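For the two priors used in the experiments below, the denoiser (62) has simple closed forms: a standard Gaussian prior gives $\eta(a,b)=a/(1+b)$, while a Rademacher prior gives $\eta(a,b)=\tanh(a)$, since $x^{2}=1$ makes the $b$-dependent factor $e^{-b/2}$ cancel between numerator and denominator. A minimal sketch:

```python
import math

def eta_gaussian(a, b):
    # P_X = N(0,1): the integrand is proportional to exp(-(1+b)x^2/2 + a*x),
    # a Gaussian with mean a/(1+b), which is exactly the posterior mean.
    return a / (1.0 + b)

def eta_rademacher(a, b):
    # P_X = (delta_{-1} + delta_{+1})/2: x^2 = 1, so the b-term cancels and
    # eta(a, b) = (e^a - e^{-a}) / (e^a + e^{-a}) = tanh(a).
    return math.tanh(a)
```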
Extremization w.r.t. $c$ yields $\sigma=g_{J({\mathbf{Z}})}(c)$, namely
$\displaystyle c=\frac{1}{\sigma}+R_{J({\mathbf{Z}})}(\sigma).$ (63)
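This is just the defining inversion identity of the $R$-transform: writing $g:=g_{J({\mathbf{Z}})}$ and $R:=R_{J({\mathbf{Z}})}$, one has
$\displaystyle R(g(z))+\frac{1}{g(z)}=z$
for all $z$ outside the spectrum, so setting $z=c$ and $g(c)=\sigma$ immediately gives $c=1/\sigma+R(\sigma)$, i.e., (63).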
Recall that we are looking for equilibrium configurations that satisfy
Nishimori identities, so in the limit we must have
$\displaystyle\frac{1}{N}\sum_{i=1}^{N}(\sigma+m_{i}^{2})=1\text{, that is
}\sigma=1-\tilde{q},$ (64)
where $\tilde{q}:=\frac{1}{N}\sum_{i=1}^{N}m_{i}^{2}$. Cancelling the
$\sigma$-derivative one then gets
$\displaystyle\gamma=-c+\frac{1}{\sigma}=-R_{J({\mathbf{Z}})}(\sigma)=-R_{J({\mathbf{Z}})}(1-\tilde{q}).$
(65)
Finally, extremizing w.r.t. $\lambda_{i}$ and $m_{i}$ yields the final TAP
equations for our model
$\displaystyle{\mathbf{m}}=\eta(J({\mathbf{Y}}){\mathbf{m}}+\gamma{\mathbf{m}},\gamma),\quad\gamma=-R_{J({\mathbf{Z}})}(1-\tilde{q}),$
(66)
where $\eta$ is applied component-wise to the vector in the first entry.
The outcome of this analysis is a fundamental equivalence between the original
model with non-linear likelihood governed by $V$ and a model quadratic in
${\mathbf{x}}$, with effective interaction matrix $J({\mathbf{Y}})$. The
equivalence is information-theoretic: the two models have asymptotically the
same free entropy and, therefore, mutual information and minimum mean-square
error. The main advantage of this equivalence resides in the fact that since
the effective model is quadratic, we are able to employ known analytical and
algorithmic approaches in the next section.
Figure 1: The performance of the TAP iterations (dots) matches well the
replica prediction for the minimum mean-square error (solid lines), for
various eigenvalue distributions for the noise. Top left: Quartic potential.
Top right: Sextic potential. Bottom left: Marchenko–Pastur distribution of
eigenvalues. Bottom right: Normal distribution of eigenvalues. Green dashed
lines (which overlap perfectly the red solid lines) denote the theoretical
performance of spectral PCA as predicted by [11]. We do not include the
performance of spectral PCA for the normal distribution of eigenvalues due to
numerical instabilities.
###### Algorithm (Optimal data pre-processing, and TAP iterations).
Define the optimal pre-processing function $J(x)$ as in (19). Let
${\mathbf{m}}^{0}=\sqrt{N}v_{1}({\mathbf{Y}})$ with $v_{1}({\mathbf{Y}})$ the
unit norm first principal component of ${\mathbf{Y}}$. For $t\geq 1$ the TAP
iterations are defined as
$\displaystyle{\mathbf{m}}^{t+1}=\tau{\mathbf{m}}^{t}+(1-\tau)\eta(J({\mathbf{Y}}){\mathbf{m}}^{t}+\gamma^{t}{\mathbf{m}}^{t-1},\gamma^{t}),\quad\gamma^{t}=-R_{J({\mathbf{Z}})}(1-\tilde{q}^{t}),\quad\tilde{q}^{t}=\frac{\|{\mathbf{m}}^{t}\|^{2}}{N},$
(67)
where $\eta$ is defined in (62), we use a damping parameter $\tau\in[0,1)$ for
improved numerical stability, and $R_{J({\mathbf{Z}})}$ is the $R$-transform
of the asymptotic spectral density of $J({\mathbf{Z}})$.
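The damped iterations (67) can be sketched in a few lines of numpy, assuming the pre-processed coupling matrix $J({\mathbf{Y}})$, the (vectorized) denoiser $\eta$, and the $R$-transform $R_{J({\mathbf{Z}})}$ are supplied by the caller; all names below are illustrative, not part of a released implementation:

```python
import numpy as np

def tap_iterations(JY, eta, R_JZ, m0, tau=0.9, T=200):
    """Damped TAP iterations (67).

    JY   : (N, N) pre-processed data matrix J(Y)
    eta  : vectorized denoiser eta(a, b) from (62)
    R_JZ : R-transform of the asymptotic spectral density of J(Z)
    m0   : initialization, scaled so that ||m0||^2 is of order N
    """
    N = m0.shape[0]
    m_prev, m = m0.copy(), m0.copy()
    for _ in range(T):
        q = m @ m / N                       # tilde q^t = ||m^t||^2 / N
        gamma = -R_JZ(1.0 - q)              # Onsager coefficient
        m_new = tau * m + (1.0 - tau) * eta(JY @ m + gamma * m_prev, gamma)
        m_prev, m = m, m_new
    return m
```

As a degenerate sanity check, with a noiseless rank-one coupling $J({\mathbf{Y}})=\frac{2}{N}{\mathbf{x}}{\mathbf{x}}^{\intercal}$, a Rademacher denoiser $\eta(a,b)=\tanh(a)$, and $R_{J({\mathbf{Z}})}\equiv 0$, the iterates converge to the scalar fixed point $q=\tanh(2q)$ with overlap $\approx 0.96$.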
## V From TAP equations to an efficient iterative algorithm
Now that the information-theoretic analysis has been performed through the
replica method, we switch towards algorithmic aspects and focus on how to
efficiently match the performance predicted by our theory, based on the
Thouless-Anderson-Palmer formalism [72, 73, 43].
### V.1 TAP iterations
We can now state our second main result (Algorithm above), which is of an
algorithmic nature. This comes in the form of a Bayes-optimal pre-processing
function to be applied to the data matrix, and an efficient iterative
algorithm exploiting it, in order to reach a solution of the TAP equations.
We draw attention to the time indexing in the algorithm (67). The update rule
is inspired by that of a usual AMP algorithm, and as supported by our
numerical experiments, it proves effective in matching the results
predicted by the replica analysis. Despite its similarity, with an evident
candidate Onsager reaction term, $\gamma^{t}{\mathbf{m}}^{t-1}$, (67) cannot
really be regarded as an AMP algorithm per se, since we have no theoretical
guarantee that the components of the iterates
$J({\mathbf{Y}}){\mathbf{m}}^{t}+\gamma^{t}{\mathbf{m}}^{t-1}$ have Gaussian
statistics.
### V.2 Numerical experiments
To verify the match between replica theory and algorithm, we choose four
concrete examples for the noise potential. $(i)$ A quartic potential
$V(x)=\gamma x^{4}/4$ with $\gamma=16/{27}$. Its eigenvalue distribution is
given by
$\rho_{Z}(x)=\frac{1}{2\pi}(2a^{2}\gamma+\gamma x^{2})\sqrt{4a^{2}-x^{2}},$
(68)
where $a=\sqrt{3}/2$. $(ii)$ A sextic potential $V(x)=\xi x^{6}/6$ with $\xi=27/80$.
Its eigenvalue distribution is given by
$\rho_{Z}(x)=\frac{1}{2\pi}(6a^{4}\xi+2a^{2}\xi x^{2}+\xi
x^{4})\sqrt{4a^{2}-x^{2}},$ (69)
where $a=\sqrt{2/3}$. In both cases, the constants are chosen in order to
enforce unit variance for the spectral densities. These two cases are among
those studied in the previous paper [51], the sextic potential being the
highest-degree polynomial that the techniques of that reference allowed one to
study. With the present contribution we can now analyse arbitrary potentials,
even non-polynomial ones, such as $(iii)$ eigenvalues following the
Marchenko–Pastur distribution:
$\rho_{Z}(x)=\frac{1}{2\pi\sigma^{2}}\frac{\sqrt{(\lambda_{+}-x)(x-\lambda_{-})}}{\alpha
x},$ (70)
where $\lambda_{\pm}=\sigma^{2}(1\pm\sqrt{\alpha})^{2}$ and $\alpha=0.2$. The
derivative of the associated potential is $V^{\prime}(x)=[(1-1/\alpha)/x+1/\alpha]/\sigma^{2}$.
Finally, as an example of a more exotic distribution of eigenvalues, we
considered $(iv)$ eigenvalues following a standard normal distribution
truncated to $[-5,5]$. Its potential likely has no closed-form analytical
expression, so we numerically calculated its derivative through [74]
$V^{\prime}(x)=2\,\text{P.V.}\int\frac{\rho_{Z}(d\lambda)}{x-\lambda},$ (71)
where P.V. denotes the Cauchy principal value. Thus, we are able to calculate
$J({\mathbf{Z}})$ and its R-transform. In all cases, the noise is properly
normalized such that $\mathbb{E}(D-\mathbb{E}D)^{2}=1$, which is also how we
determine $\sigma^{2}$ in (70). We consider two choices for the prior $P_{X}$:
the standard Gaussian law $\mathcal{N}(0,1)$ and Rademacher law
$\frac{1}{2}\delta_{-1}+\frac{1}{2}\delta_{1}$.
The algorithm uses a PCA initialization [49, 48], which can be obtained
efficiently via the power method. For the normally distributed eigenvalues,
however, we manually choose an initialization having positive correlation
$\sqrt{0.5}$ with the ground truth for numerical stability. In all
experiments, we use $N=2000$ and show the results averaged over $10$ trials
and the corresponding standard deviation. In some cases, about $20\%$ of the
trials do not converge to the correct fixed point; we exclude these trials when
gathering statistics. Moreover, we fix the Onsager coefficient to its fixed
point value predicted by the replica theory for numerical stability. We use a
damping $\tau=0.9$ in all experiments.
Fig. 1 shows that in all successful cases, our algorithm approaches the Bayes-
optimal performance predicted by our replica-based theory. We therefore
conjecture that the fixed point performance of our algorithm matches that of
the minimum mean-square error estimator, when no statistical-to-computational
gap is present, as in all the above experiments.
Note that, as already predicted in [51], PCA remains Bayes-optimal when the
prior of the signal is Gaussian (or, more generally, rotationally invariant),
regardless of the noise eigenvalue distribution.
Another remark is that, as can be seen from Fig. 1, a Rademacher prior leads
to better numerical stability, due to a more attractive fixed point for the
dynamics. In fact, the Rademacher prior – being more informative – constrains
the signal estimate more strongly than a Gaussian prior.
Finally, the equivalence between the models with structured and Gaussian
noises does not only hold at the level of static (thermodynamic) properties.
Indeed, Fig. 2 numerically verifies that the gap between the TAP iterations
run for the Gaussian model and those run for the rotationally invariant model is
small.
Figure 2: Comparison between the TAP iterations for the quartic model (with
$\lambda=2$) and its information-theoretic equivalent Gaussian surrogate
model. The error bars represent the standard deviation computed over 10
trials. The dashed black line represents the MMSE predicted by the replica
theory.
## Acknowledgments
J.B., F.C. and Y.X. were funded by the European Union (ERC, CHORAL, project
number 101039794). Views and opinions expressed are however those of the
authors only and do not necessarily reflect those of the European Union or the
European Research Council. Neither the European Union nor the granting
authority can be held responsible for them. M.M. was supported by the 2019
Lopez-Loreta Prize. J.B. acknowledges discussions with TianQi Hou at the
initial stage of the project, as well as with Antoine Bodin.
## References
* Johnstone and Lu [2009a] I. M. Johnstone and A. Y. Lu, On consistency and sparsity for principal components analysis in high dimensions, Journal of the American Statistical Association 104, 682 (2009a).
* Johnstone and Lu [2009b] I. M. Johnstone and A. Y. Lu, Sparse principal components analysis, arXiv preprint arXiv:0901.4392 (2009b).
* Abbe [2018] E. Abbe, Community detection and stochastic block models: Recent developments, Journal of Machine Learning Research 18, 1 (2018).
* Moore [2017] C. Moore, The computer science and physics of community detection: landscapes, phase transitions, and hardness, arXiv preprint, arXiv:1702.00467 (2017).
* Perry _et al._ [2018] A. Perry, A. S. Wein, A. S. Bandeira, and A. Moitra, Message-passing algorithms for synchronization problems over compact groups, Communications on Pure and Applied Mathematics 71, 2275 (2018).
* Lesieur _et al._ [2015] T. Lesieur, F. Krzakala, and L. Zdeborová, Mmse of probabilistic low-rank matrix estimation: Universality with respect to the output channel, in _Annual Allerton Conference_ (2015).
* Johnstone [2001] I. M. Johnstone, On the distribution of the largest eigenvalue in principal components analysis, The Annals of statistics 29, 295 (2001).
* Baik _et al._ [2005] J. Baik, G. B. Arous, and S. Péché, Phase transition of the largest eigenvalue for nonnull complex sample covariance matrices, The Annals of Probability 33, 1643 (2005).
* Bai and Yao [2012] Z. Bai and J. Yao, On sample eigenvalues in a generalized spiked population model, Journal of Multivariate Analysis 106, 167 (2012).
* Baik and Silverstein [2006] J. Baik and J. W. Silverstein, Eigenvalues of large sample covariance matrices of spiked population models, Journal of multivariate analysis 97, 1382 (2006).
* Benaych-Georges and Nadakuditi [2011] F. Benaych-Georges and R. R. Nadakuditi, The eigenvalues and eigenvectors of finite, low rank perturbations of large random matrices, Advances in Mathematics 227, 494 (2011).
* Capitaine _et al._ [2009] M. Capitaine, C. Donati-Martin, and D. Féral, The largest eigenvalues of finite rank deformation of large Wigner matrices: convergence and nonuniversality of the fluctuations, The Annals of Probability 37, 1 (2009).
* Féral and Péché [2007] D. Féral and S. Péché, The largest eigenvalue of rank one deformation of large Wigner matrices, Communications in mathematical physics 272, 185 (2007).
* Knowles and Yin [2013] A. Knowles and J. Yin, The isotropic semicircle law and deformation of Wigner matrices, Communications on Pure and Applied Mathematics 66, 1663 (2013).
* Donoho _et al._ [2009] D. L. Donoho, A. Maleki, and A. Montanari, Message-passing algorithms for compressed sensing, Proceedings of the National Academy of Sciences 106, 18914 (2009).
* Kabashima [2003] Y. Kabashima, A cdma multiuser detection algorithm on the basis of belief propagation, Journal of Physics A: Mathematical and General (2003).
* Deshpande and Montanari [2014] Y. Deshpande and A. Montanari, Information-theoretically optimal sparse PCA, in _IEEE International Symposium on Information Theory_ (2014) pp. 2197–2201.
* Montanari and Venkataramanan [2021] A. Montanari and R. Venkataramanan, Estimation of low-rank matrices via approximate message passing, Annals of Statistics 45, 321 (2021).
* Rangan [2011] S. Rangan, Generalized approximate message passing for estimation with random linear mixing, in _International Symposium on Information Theory_ (2011) pp. 2168–2172.
* Sur and Candès [2019] P. Sur and E. J. Candès, A modern maximum-likelihood theory for high-dimensional logistic regression, Proceedings of the National Academy of Sciences 116, 14516 (2019).
* Manoel _et al._ [2017] A. Manoel, F. Krzakala, M. Mézard, and L. Zdeborová, Multi-layer generalized linear estimation, in _2017 IEEE International Symposium on Information Theory (ISIT)_ (IEEE, 2017) pp. 2098–2102.
* Bayati and Montanari [2011] M. Bayati and A. Montanari, The dynamics of message passing on dense graphs, with applications to compressed sensing, IEEE Transactions on Information Theory 57, 764 (2011).
* Bolthausen [2014] E. Bolthausen, An iterative construction of solutions of the TAP equations for the Sherrington–Kirkpatrick model, Communications in Mathematical Physics 325, 333 (2014).
* Barbier _et al._ [2019] J. Barbier, F. Krzakala, N. Macris, L. Miolane, and L. Zdeborová, Optimal errors and phase transitions in high-dimensional generalized linear models, Proceedings of the National Academy of Sciences 116, 5451 (2019).
* Celentano _et al._ [2020] M. Celentano, A. Montanari, and Y. Wu, The estimation error of general first order methods, in _Conference on Learning Theory (COLT)_ (2020) pp. 1078–1141.
* Montanari and Wein [2022] A. Montanari and A. S. Wein, Equivalence of approximate message passing and low-degree polynomials in rank-one matrix estimation, arXiv preprint arXiv:2212.06996 (2022).
* Lelarge and Miolane [2018] M. Lelarge and L. Miolane, Fundamental limits of symmetric low-rank matrix estimation, Probability Theory and Related Fields 173, 859 (2018).
* Bayati _et al._ [2015] M. Bayati, M. Lelarge, and A. Montanari, Universality in polytope phase transitions and message passing algorithms, Annals of Applied Probability 25, 753 (2015).
* Chen and Lam [2021] W. K. Chen and W.-K. Lam, Universality of approximate message passing algorithms, Electronic Journal of Probability 26, 36 (2021).
* Barra _et al._ [2013] A. Barra, P. Contucci, E. Mingione, and D. Tantari, Multi-species mean field spin glasses. rigorous results, Annales Henri Poincaré 16, 691 (2013).
* Panchenko [2013a] D. Panchenko, The free energy in a multi-species sherrington-kirkpatrick model, The Annals of Probability 43 (2013a).
* Alberici _et al._ [2021a] D. Alberici, F. Camilli, P. Contucci, and E. Mingione, The multi-species mean-field spin-glass on the nishimori line, Journal of Statistical Physics 182 (2021a).
* Alberici _et al._ [2021b] D. Alberici, F. Camilli, P. Contucci, and E. Mingione, The solution of the deep boltzmann machine on the nishimori line, Communications in Mathematical Physics 387 (2021b).
* Bates and Sohn [2022] E. Bates and Y. Sohn, Free energy in multi-species mixed p-spin spherical models, Electronic Journal of Probability 27, 1 (2022).
* Guionnet _et al._ [2022] A. Guionnet, J. Ko, F. Krzakala, and L. Zdeborová, Low-rank matrix estimation with inhomogeneous noise, arXiv preprint arXiv:2208.05918 (2022).
* Felstrom and Zigangirov [1999] A. J. Felstrom and K. S. Zigangirov, Time-varying periodic convolutional codes with low-density parity-check matrix, IEEE Transactions on Information Theory 45, 2181 (1999).
* Kudekar _et al._ [2011] S. Kudekar, T. Richardson, and R. Urbanke, Threshold saturation via spatial coupling: Why convolutional ldpc ensembles perform so well over the bec, IEEE Trans. Info. Th. 57, 803 (2011).
* Barbier _et al._ [2016a] J. Barbier, M. Dia, N. Macris, F. Krzakala, T. Lesieur, and L. Zdeborová, Mutual information for symmetric rank-one matrix estimation: A proof of the replica formula, in _Advances in Neural Information Processing Systems_ (2016).
* Marinari _et al._ [1994a] E. Marinari, G. Parisi, and F. Ritort, Replica field theory for deterministic models: I. binary sequences with low autocorrelation, Journal of Physics A 27, 7615 (1994a).
* Marinari _et al._ [1994b] E. Marinari, G. Parisi, and F. Ritort, Replica field theory for deterministic models: II. A non-random spin glass with glassy behaviour, Journal of Physics A 27, 7647 (1994b).
* Parisi and Potters [1999] G. Parisi and M. Potters, Mean-field equations for spin models with orthogonal interaction matrices, Journal of Physics A: Mathematical and General 28, 5267 (1999).
* Opper _et al._ [2016] M. Opper, B. Cakmak, and O. Winther, A theory of solving tap equations for ising models with general invariant random matrices, Journal of Physics A: Mathematical and Theoretical 49, 114002 (2016).
* Opper and Winther [2001] M. Opper and O. Winther, Adaptive and self-averaging Thouless-Anderson-Palmer mean-field theory for probabilistic modeling, Physical Review E 64, 056131 (2001).
* Maillard _et al._ [2019a] A. Maillard, L. Foini, A. L. Castellanos, F. Krzakala, M. Mézard, and L. Zdeborová, High-temperature expansions and message passing algorithms, Journal of Statistical Mechanics: Theory and Experiment 2019, 113301 (2019a).
* Rangan _et al._ [2019] S. Rangan, P. Schniter, and A. K. Fletcher, Vector approximate message passing, IEEE Transactions on Information Theory 65, 6664 (2019).
* Gerbelot _et al._ [2020] C. Gerbelot, A. Abbara, and F. Krzakala, Asymptotic errors for teacher-student convex generalized linear models (or: how to prove Kabashima’s replica formula), arXiv preprint, arXiv:2006.06581 (2020).
* Fan [2022] Z. Fan, Approximate message passing algorithms for rotationally invariant matrices, The Annals of Statistics 50, 197 (2022).
* Zhong _et al._ [2021] X. Zhong, T. Wang, and Z. Fan, Approximate message passing for orthogonally invariant ensembles: Multivariate non-linearities and spectral initialization, arXiv:2110.02318 (2021).
* Mondelli and Venkataramanan [2021] M. Mondelli and R. Venkataramanan, PCA initialization for approximate message passing in rotationally invariant models, in _Advances in Neural Information Processing Systems_ , Vol. 34 (2021) pp. 29616–29629.
* Venkataramanan _et al._ [2022] R. Venkataramanan, K. Kögler, and M. Mondelli, Estimation in rotationally invariant generalized linear models via approximate message passing, in _International Conference on Machine Learning_ (2022) pp. 22120–22144.
* Barbier _et al._ [2023] J. Barbier, F. Camilli, M. Mondelli, and M. Sáenz, Fundamental limits in structured principal component analysis and how to reach them, Proceedings of the National Academy of Sciences 120, e2302028120 (2023).
* Dudeja _et al._ [2024] R. Dudeja, S. Liu, and J. Ma, Optimality of approximate message passing algorithms for spiked matrix models with rotationally invariant noise, arXiv preprint arXiv:2405.18081 (2024).
* Mézard _et al._ [1990] M. Mézard, G. Parisi, and M.-A. Virasoro, _Spin glass theory and beyond_ (World Scientific Publishing Co., Inc., Pergamon Press, 1990).
* Guo _et al._ [2005] D. Guo, S. Shamai, and S. Verdú, Mutual information and minimum mean-square error in gaussian channels, IEEE Transactions on Information Theory 51, 1261 (2005).
* Mézard and Montanari [2009] M. Mézard and A. Montanari, _Information, physics, and computation_ (Oxford Uni. Press, 2009).
* Guerra [2003] F. Guerra, Broken replica symmetry bounds in the mean field spin glass model, Communications in mathematical physics 233, 1 (2003).
* Talagrand [2006] M. Talagrand, The parisi formula, Annals of mathematics , 221 (2006).
* Panchenko [2013b] D. Panchenko, The parisi ultrametricity conjecture, Annals of Mathematics , 383 (2013b).
* Barbier _et al._ [2016b] J. Barbier, M. Dia, and N. Macris, Proof of threshold saturation for spatially coupled sparse superposition codes, in _IEEE International Symposium on Information Theory (ISIT)_ (2016) pp. 1173–1177.
* El Alaoui and Krzakala [2018] A. El Alaoui and F. Krzakala, Estimation in the spiked wigner model: a short proof of the replica formula, in _IEEE International Symposium on Information Theory (ISIT)_ (IEEE, 2018) pp. 1874–1878.
* Barbier and Macris [2019] J. Barbier and N. Macris, The adaptive interpolation method for proving replica formulas. Applications to the Curie–Weiss and Wigner spike models, Journal of Physics A: Mathematical and Theoretical 52, 294002 (2019).
* Barbier and Panchenko [2022] J. Barbier and D. Panchenko, Strong replica symmetry in high-dimensional optimal bayesian inference, Communications in Mathematical Physics 393, 1 (2022).
* Barbier [2020] J. Barbier, Overlap matrix concentration in optimal Bayesian inference, Information and Inference: A Journal of the IMA 10, 597 (2020).
* Alaoui _et al._ [2020] A. E. Alaoui, F. Krzakala, and M. Jordan, Fundamental limits of detection in the spiked Wigner model, The Annals of Statistics 48, 863 (2020).
* Barbier _et al._ [2021] J. Barbier, D. Panchenko, and M. Sáenz, Strong replica symmetry for high-dimensional disordered log-concave Gibbs measures, Information and Inference: A Journal of the IMA 11, 1079 (2021).
* Bodin and Macris [2021a] A. Bodin and N. Macris, Rank-one matrix estimation: analytic time evolution of gradient descent dynamics, arXiv preprint arXiv:2105.12257 (2021a).
* Bodin and Macris [2021b] A. Bodin and N. Macris, Model, sample, and epoch-wise descents: exact solution of gradient flow in the random feature model, Advances in Neural Information Processing Systems 34, 21605 (2021b).
* Bodin [2024] A. P. M. Bodin, _Random matrix methods for high-dimensional machine learning models_ , Tech. Rep. (EPFL, 2024).
* Bodin and Macris [2022] A. Bodin and N. Macris, Gradient flow in the gaussian covariate model: exact solution of learning curves and multiple descent structures, arXiv preprint arXiv:2212.06757 (2022).
* Lelarge and Miolane [2019] M. Lelarge and L. Miolane, Fundamental limits of symmetric low-rank matrix estimation, Probability Theory and Related Fields 173, 859 (2019).
* Maillard _et al._ [2019b] A. Maillard, L. Foini, A. L. Castellanos, F. Krzakala, M. Mézard, and L. Zdeborová, High-temperature expansions and message passing algorithms, Journal of Statistical Mechanics: Theory and Experiment 2019, 113301 (2019b).
* Thouless _et al._ [1977] D. J. Thouless, P. W. Anderson, and R. G. Palmer, Solution of ‘solvable model of a spin glass’, Philosophical Magazine 35, 593 (1977).
* Mézard _et al._ [1987] M. Mézard, G. Parisi, and M. A. Virasoro, _Spin-Glass Theory and Beyond_ , Lecture Notes in Physics, Vol. 9 (World Scientific, Singapore, 1987).
* Potters and Bouchaud [2020] M. Potters and J.-P. Bouchaud, _A First Course in Random Matrix Theory: For Physicists, Engineers and Data Scientists_ (Cambridge University Press, 2020).
# CORE4D: A 4D Human-Object-Human Interaction Dataset for Collaborative Object
REarrangement
Chengwen Zhang 1,2, Yun Liu∗1,3,4, Ruofan Xing1, Bingda Tang1, Li Yi 1,3,4
1Institute for Interdisciplinary Information Sciences, Tsinghua University
2Beijing University of Posts and Telecommunications
3Shanghai Artificial Intelligence Laboratory 4Shanghai Qi Zhi Institute
<EMAIL_ADDRESS>
{yun-liu22, xingrf20<EMAIL_ADDRESS>
<EMAIL_ADDRESS>
Equal contribution. Corresponding author
###### Abstract
Understanding how humans cooperatively rearrange household objects is critical
for VR/AR and human-robot interaction. However, in-depth studies on modeling
these behaviors remain scarce due to the lack of relevant datasets. We
fill this gap by presenting CORE4D, a novel large-scale 4D human-object-human
interaction dataset focusing on collaborative object rearrangement, which
encompasses diverse compositions of various object geometries, collaboration
modes, and 3D scenes. With 1K human-object-human motion sequences captured in
the real world, we enrich CORE4D by contributing an iterative collaboration
retargeting strategy to augment motions to a variety of novel objects.
Leveraging this approach, CORE4D comprises a total of 11K collaboration
sequences spanning 3K real and virtual object shapes. Benefiting from
extensive motion patterns provided by CORE4D, we benchmark two tasks aiming at
generating human-object interaction: human-object motion forecasting and
interaction synthesis. Extensive experiments demonstrate the effectiveness of
our collaboration retargeting strategy and indicate that CORE4D has posed new
challenges to existing human-object interaction generation methodologies. Our
dataset and code are available at
https://github.com/leolyliu/CORE4D-Instructions.
## 1 Introduction
Humans frequently rearrange household items through multi-person
collaboration, such as moving a table or picking up an overturned chair
together. Analyzing
and synthesizing these diverse collaborative behaviors could be widely
applicable in VR/AR, human-robot interaction [77, 55, 54], and dexterous [10,
90, 76, 99] and humanoid [53, 18, 86, 42] manipulation. However, understanding
and modeling these interactive motions have been under-researched due to the
lack of large-scale, richly annotated datasets. Most existing human-object and
hand-object interaction datasets focus on individual behaviors [71, 3, 49, 94,
20, 34, 100, 40, 46, 98] and two-person handovers [96, 81, 48]. But these
datasets typically encompass a limited number of object instances, thus
struggling to support generalizable interaction understanding across diverse
object geometries. Scaling up precise human-object interaction data is
challenging. While vision-based human-object motion tracking methods [3, 85]
have made significant progress, they still suffer from low fidelity under
severe occlusion, which is common in multi-human collaboration scenes.
Meanwhile, mocap [34, 40] is expensive and hard to scale up to cover a large number of objects
to be rearranged. We want to curate a large-scale category-level human-object-
human (HOH) interaction dataset with high motion quality in a cost-efficient
manner.
We observe that HOH collaborations mainly vary in two aspects including the
temporal collaboration patterns of two humans and the spatial relations
between human and object. The temporal collaboration patterns could vary in
many ways depending on scene complexity, motion range, and collaboration mode.
In contrast, the spatial relations between human and object tend to possess
strong homogeneity when facing objects from the same category, e.g., two
persons holding the two sides of a chair. This allows retargeting interactions
involving one specific instance to another using automatic algorithms,
avoiding the need to capture interactions with thousands of same-category
objects in the real world. The above observations make it possible for us to
leverage expensive motion capture systems to capture only humans’ diverse
temporal collaboration patterns while leaving the richness of human-object
spatial relations to automatic spatial retargeting algorithms.
Using these insights, we build a large-scale dataset, CORE4D, encompassing a
wide range of human-object interactions for collaborative object
rearrangement. CORE4D includes various types of household objects,
collaboration modes, and 3D environments. Our data acquisition strategy
combines mocap-based capturing and synthetic retargeting, allowing us to scale
the dataset effectively. The retargeting algorithm transfers the spatial
relations between humans and the object to novel object geometries while
preserving the temporal pattern of human collaboration. As a result, CORE4D
includes 1K real-world
motion sequences (CORE4D-Real) paired with videos and 3D scenes, and 10K
synthetic collaboration sequences (CORE4D-Synthetic) covering 3K diverse
object geometries.
We benchmark two tasks for generating human-object collaboration: (1) motion
forecasting [12, 88] and (2) interaction synthesis [73, 40] on CORE4D,
revealing challenges in modeling human behaviors, enhancing motion
naturalness, and adapting to new object geometries. Ablation studies
demonstrate the effectiveness of our hybrid data acquisition strategy, and the
quality and value of CORE4D-Synthetic, highlighting its role in helping to
improve existing motion generation methods.
In summary, our main contributions are threefold: (1) We present CORE4D, a
large-scale 4D HOH interaction dataset for collaborative object rearrangement.
(2) We propose a novel hybrid data acquisition methodology, incorporating
real-world data capture and synthetic collaboration retargeting. (3) We
benchmark two tasks for collaboration generation, revealing new challenges and
research opportunities.
## 2 Related Work
### 2.1 Human-object Interaction Datasets
Tremendous progress has been made in constructing human-object interaction
datasets. To study how humans interact with 3D scenes, various widely-used
datasets record human movements and surrounding scenes separately, regarding
objects as static [4, 66, 28, 80, 27, 44, 17, 102, 104, 26, 79, 2, 31, 91, 16,
25, 72, 92, 35, 97] or partially deformable [43] without pose changes. For
dynamic objects, recent works [5, 71, 15, 3, 32, 34, 100, 40, 81, 48, 105,
101, 45] have captured human-object interaction behaviors with different
focuses. Table 1 generally summarizes the characteristics of 4D human-object-
interaction datasets. To support research for vision-based human-object motion
tracking and shape reconstruction, a line of datasets [15, 3, 32, 34, 100,
105, 101, 45] presents human-object mesh annotations with multi-view RGB or
RGBD signals. With the rapid development of human-robot cooperation, several
works [5, 71, 81, 48] focus on specific action types such as grasping [71] and
human-human handover [5, 81, 48]. Our dataset uniquely captures collaborative
multi-person-object motions, category-level interactions, and both
egocentric and allocentric views, offering comprehensive features with the
inclusion of both real and synthetic datasets.
Table 1: Comparison of CORE4D with existing 4D human-object interaction datasets.

dataset | multi-human | collaboration | category-level | egocentric | RGBD | $\\#$view | mocap | $\\#$object | $\\#$sequence
---|---|---|---|---|---|---|---|---|---
Carfì et al. [5] | ✓ | | | | ✓ | 1 | ✓ | 10 | 1.1K
GRAB [71] | | | | | | - | ✓ | 57 | -
GraviCap [15] | | | | | | 3 | | 4 | 9
BEHAVE [3] | | | | | ✓ | 4 | | 20 | 321
InterCap [32] | | | | | ✓ | 6 | | 10 | 223
CHAIRS [34] | | | ✓ | | ✓ | 4 | ✓ | 81 | 1.4K
HODome [100] | | | | | | 76 | ✓ | 23 | 274
Li et al. [40] | | | | | | - | ✓ | 15 | 6.1K
HOH [81] | ✓ | | | | ✓ | 8 | | 136 | 2.7K
CoChair [48] | ✓ | | ✓ | | | - | ✓ | 8 | 3.0K
FORCE [105] | | | | | ✓ | 1 | ✓ | 8 | 450
HOI-M3 [101] | ✓ | | | | | 42 | ✓ | 90 | 199
CORE4D-Real | ✓ | ✓ | ✓ | ✓ | ✓ | 5 | ✓ | 37 | 1.0K
CORE4D-Synthetic | ✓ | ✓ | ✓ | | | - | - | 3.0K | 10K
### 2.2 Human Interaction Retargeting
Human interaction retargeting focuses on how to apply human interactive
motions to novel objects in human-object interaction scenarios. Existing
methodologies [37, 65, 94, 67, 83, 8, 33] are object-centric, which propose
first finding contact correspondences between the source and the target
objects and then adjusting human motion to touch specific regions on the
target object via optimization. As crucial guidance of the result, contact
correspondences are discovered by aligning either surface regions [65, 94,
83], spatial maps [37, 33], distance fields [8], or neural descriptor fields
[67] between the source and the target objects, which are all limited to
objects with similar topology and scales. Our synthetic data generation
strategy combines an object-centric design [94] with a novel human-centric
contact selection, making it possible to adapt to such challenging objects
using human priors.
### 2.3 Human-object Interaction Generation
Human-object interaction generation is an emerging research topic that aims to
synthesize realistic human-object motions conditioned on surrounding 3D
scenes, known object trajectories, or action types. To generate humans
interacting with static 3D scenes, POSA [29] and COINS [106] synthesize static
human poses with CVAE [68], while a line of work [69, 22, 84, 70, 104, 38,
103] further synthesizes dynamic human motions via auto-regressive schemes [69,
104], diffusion models [38], or two-stage designs that first generate start
and end poses and then interpolate the motion in between [22, 84, 70, 103].
InterDiff [88] and OMOMO [40] further fulfill this task for dynamic objects.
To generate human-object interaction under specific action descriptions,
recent works [19, 39, 63, 82, 89] extract text features with pretrained CLIP
[64] encoders or LLMs [57, 74] and use them to guide diffusion models [30].
## 3 Constructing CORE4D
CORE4D is a large-scale 4D human-object-human interaction dataset acquired in
a novel hybrid scheme, comprising CORE4D-Real and CORE4D-Synthetic.
CORE4D-Real is captured (Section 3.1) and annotated (Section 3.2) from
authentic collaborative scenarios. It provides human-object-human poses,
allocentric RGB-D videos, egocentric RGB videos, and 2D segmentation across
1.0K sequences accompanied by 37 object models. To augment the spatial
relations between humans and objects, we present an innovative collaboration
retargeting technique in Section 3.3, complementing CORE4D-Real with CORE4D-Synthetic,
thereby expanding our collection with an additional 10K sequences and 3K rigid
objects. Detailed characteristics such as data diversities are demonstrated in
Section 3.4.
### 3.1 CORE4D-Real Data Capture
Figure 1: Data capturing system. (a) demonstrates the wearing of mocap suits
and the positioning of the egocentric camera. (b) shows an object with four
markers. (c) illustrates the data capturing system and camera views.

Figure 2: Dataset overview.
To collect precise human-object motions with visual signals, we set up a
hybrid data capturing system shown in Fig. 1, consisting of an inertial-optical
mocap system, four allocentric RGB-D cameras, and a wearable camera for
egocentric sensing. The system captures at 15 FPS.
Inertial-optical Mocap System. To accurately capture human-object poses in
multi-person collaboration scenarios, often involving severe occlusion, we use
an inertial-optical mocap system [56] inspired by CHAIRS [34]. This system
includes 12 infrared cameras, mocap suits with 8 inertial-optical trackers and
two data gloves per person, and markers of a 10mm radius. The mocap suits
capture Biovision Hierarchy (BVH) skeletons of humans, while markers attached
to the objects track object motion.
Visual Sensors. Kinect Azure DK cameras are integrated to capture allocentric
RGB-D signals, and an Osmo Action3 is utilized to capture egocentric color
videos. The resolution of all visual signals is 1920×1080. Cameras are
calibrated by the mocap system and synchronized via timestamp. Details on
camera calibration and synchronization are provided in the supplementary
material.
Object Model Acquisition. CORE4D-Real includes 37 3D models of rigid objects
spanning six household object categories. Each object model is constructed by
an industrial 3D scanner with up to 100K triangular faces. We additionally
adopt manual refinements on captured object models to remove triangle outliers
and improve accuracy.
Privacy Protection. To ensure participant anonymity, blurring is applied to
faces [58] in RGB videos, and fake facial meshes are generated via SMPL-X
[61]. The participants all consented to releasing CORE4D, and were also
notified of their right to have their data removed from CORE4D at any time.
### 3.2 CORE4D-Real Data Annotation
Object Pose Tracking. To acquire the 6D pose of a rigid object, we attach four
to five markers to the object’s surface. The markers form a virtual rigid body
that the mocap system can track. After the marker positions are manually
localized on the object model, the object pose can be precisely determined from
the marker positions captured by the infrared cameras.
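Recovering a rigid object's 6D pose from a handful of tracked markers is the classical absolute-orientation problem; the paper does not specify its solver, so the sketch below shows the standard Kabsch (orthogonal Procrustes) alignment as an assumed implementation, with illustrative names (`markers_ref` are marker positions in the object frame, `markers_obs` are the positions seen by the infrared cameras).

```python
import numpy as np

def pose_from_markers(markers_ref: np.ndarray, markers_obs: np.ndarray):
    """Recover (R, t) such that markers_obs ≈ markers_ref @ R.T + t (Kabsch)."""
    c_ref = markers_ref.mean(axis=0)
    c_obs = markers_obs.mean(axis=0)
    # Cross-covariance of the centered marker sets.
    H = (markers_ref - c_ref).T @ (markers_obs - c_obs)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in degenerate configurations.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_obs - R @ c_ref
    return R, t
```

With four to five non-coplanar markers, as used here, the centered marker matrix has full rank and the rotation is uniquely determined.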
Human Mesh Acquisition. Aligning with existing dataset efforts [34, 40], we
retarget the BVH [52] human skeleton to the widely-used SMPL-X [61]. SMPL-X
[61] formulates a human mesh as $D_{\text{smplx}}=M(\beta,\theta)$. The body
shape $\beta\in\mathbb{R}^{10}$ is optimized to fit constraints from
manually measured human skeleton lengths. With $\beta$ computed, we optimize
the full-body pose $\theta\in\mathbb{R}^{159}$ with the loss function:
$\displaystyle\mathcal{L}=\mathcal{L}_{\text{reg}}+\mathcal{L}_{j\text{3D}}+\mathcal{L}_{j\text{Ori}}+\mathcal{L}_{\text{smooth}}+\mathcal{L}_{h\text{3D}}+\mathcal{L}_{h\text{Ori}}+\mathcal{L}_{\text{contact}},$
(1)
where $\mathcal{L}_{\text{reg}}$ ensures the simplicity of the results and
prevents unnatural, significant twisting of the joints.
$\mathcal{L}_{j\text{3D}}$ and $\mathcal{L}_{j\text{Ori}}$ encourage the
rotation of joints and the global 3D positions to closely match the ground
truth. $\mathcal{L}_{h\text{3D}}$ and $\mathcal{L}_{h\text{Ori}}$ guide the
positioning and orientation of the fingers. $\mathcal{L}_{\text{smooth}}$
promotes temporal smoothness. $\mathcal{L}_{\text{contact}}$ encourages
realistic contact between the hands and objects. The SMPL-X [61] function
$M(\beta,\theta,\Phi):\mathbb{R}^{|\theta|\times|\beta|}\mapsto\mathbb{R}^{3N}$
then generates the human mesh. Details on the loss functions are presented in
the supplementary material.
2D Mask Annotation. We offer automatic 2D segmentation for individuals and the
manipulated object to aid in predictive tasks like vision-based human-object
pose estimation [3, 85]. We first use DEVA [9] to segment human and object
instances in a captured interaction image with text prompts. Then, we render
human and object meshes separately on each image and select the instance with
the highest Intersection-over-Union (IoU) for mask annotation.
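The IoU-based instance selection above can be sketched as follows; the boolean-mask representation and the function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection-over-Union between two boolean masks of the same shape."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union else 0.0

def pick_instance(rendered: np.ndarray, candidates: list) -> int:
    """Index of the segmented candidate that best overlaps the rendered mesh mask."""
    return int(np.argmax([mask_iou(rendered, c) for c in candidates]))
```

In the pipeline, `rendered` would be the binary mask rasterized from the annotated human or object mesh, and `candidates` the instance masks produced by DEVA [9].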
Figure 3: Collaboration retargeting pipeline. We propose a collaboration
retargeting algorithm that iteratively refines interaction motion. The
algorithm takes a source-target object pair as input. First, we sample contact
candidates on the source from the whole contact knowledge of CORE4D-Real. For
each candidate, contact retargeting propagates it to contact constraints on the
target. A motion sampled from CORE4D-Real provides the high-level collaboration
pattern; together with the low-level contact constraints, interaction
retargeting produces interaction candidates. A human pose discriminator then
selects the optimal candidates, prompting a contact constraint update via beam
search. After multiple iterations, the process yields augmented interactions.
This iterative mechanism effectively identifies a reasonable solution among
numerous contact constraints and ensures a refined interaction, enhancing the
dataset’s applicability across various scenarios.
### 3.3 CORE4D-Synthetic Data Generation
In order to enrich the diversities of object geometries and human-object
spatial relations, our retargeting algorithm transfers real interactions to
ShapeNet [6] objects of the same category, thereby significantly expanding the
dataset in terms of object diversity. When transferring interactions
across objects, contact points are always the key and it is important to
consider whether they can be properly transferred with consistent semantics on
new objects [95, 107]. However, we find this insufficient when object
geometries vary greatly and correspondences become hard to establish. We thus
tackle interaction retargeting from a novel human-centric perspective where
good contact points should support natural human poses and motions. We realize
this idea through the pipeline depicted in Figure 3, which comprises three key
components. First, object-centric contact retargeting uses whole contact
knowledge from CORE4D-Real to obtain accurate contact with different objects.
Second, contact-guided interaction retargeting adapts motion sequences to new
object geometries while considering the contact constraints. Third, a human-
centric contact selection evaluates poses from interaction candidates to
select the most plausible contacts.
Object-centric Contact Retargeting. To acquire reasonable human poses, contact
constraints on the target object are essential. We draw inspiration from Tink
[94] and train DeepSDF on all objects’ signed distance fields (SDFs). For
source object SDF $O_{s}$ and target object SDF $O_{t}$, we first apply linear
interpolation on their latent vectors $o_{s}$ and $o_{t}$ and obtain $N$
intermediate vectors $o_{i}=\frac{N+1-i}{N+1}o_{s}+\frac{i}{N+1}o_{t}(1\leq
i\leq N)$. We then decode $o_{i}$ to its SDF $O_{i}$ via the decoder of
DeepSDF, and reconstruct the corresponding 3D mesh $M_{i}$ using the Marching
Cubes algorithm [50]. This yields a mesh sequence
$\mathcal{M}=[\textit{source},M_{1},M_{2},...,M_{N},\textit{target}]$, through
which we successively transfer contact positions between every two adjacent
meshes via nearest-neighbor search. In addition, we leverage all contact
candidates from CORE4D-Real on the source to form a pool of contact candidates
and transfer them to the target as contact constraints.
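A minimal sketch of the latent interpolation and the chained nearest-neighbor contact transfer, assuming the DeepSDF decoding and Marching Cubes reconstruction happen elsewhere (the intermediate meshes arrive as vertex arrays); function names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def interpolate_latents(o_s: np.ndarray, o_t: np.ndarray, N: int):
    """o_i = (N+1-i)/(N+1) * o_s + i/(N+1) * o_t for i = 1..N."""
    return [((N + 1 - i) * o_s + i * o_t) / (N + 1) for i in range(1, N + 1)]

def transfer_contacts(contact_pts, mesh_sequence):
    """Propagate contact points through adjacent meshes by nearest-neighbor search.

    mesh_sequence: list of (V_k, 3) vertex arrays [source, M_1, ..., M_N, target].
    """
    pts = np.asarray(contact_pts, dtype=float)
    for verts in mesh_sequence[1:]:
        tree = cKDTree(verts)
        _, idx = tree.query(pts)  # nearest vertex on the next mesh
        pts = verts[idx]
    return pts
```

Chaining the transfer through the $N$ interpolated shapes, instead of matching source to target directly, keeps each nearest-neighbor step small even when the two endpoint geometries differ substantially.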
Contact-guided Interaction Retargeting. For each contact constraint,
interaction retargeting aims to transfer human interaction from source to
target. To enforce the consistency of the interaction motion, we optimize
variables including the object rotations $R_{o}\in\mathbb{R}^{N\times 3}$ and
translations $T_{o}\in\mathbb{R}^{N\times 3}$, human poses
$\theta_{1,2}\in\mathbb{R}^{2\times N\times 153}$, translation
$T_{1,2}\in\mathbb{R}^{2\times N\times 3}$ and orientation
$O_{1,2}\in\mathbb{R}^{2\times N\times 3}$ on the SMPL-X [61]. $N$ is the
frame number.
We first estimate the target’s motion $\\{R_{o},T_{o}\\}$ by solving an
optimization problem as follows:
$\displaystyle
R_{o},T_{o}\longleftarrow\mathop{\operatorname{argmin}}\limits_{R_{o},T_{o}}(\mathcal{L}_{f}+\mathcal{L}_{\text{spat}}+\mathcal{L}_{\text{smooth}}),$
(2)
where fidelity loss $\mathcal{L}_{f}$ evaluates the difference of the target’s
rotation and translation against the source, restriction loss
$\mathcal{L}_{\text{spat}}$ penalizes target’s penetration with the ground,
and smoothness loss $\mathcal{L}_{\text{smooth}}$ constrains the target’s
velocities between consecutive frames.
Given the target’s motion and contact constraints, we then transfer humans’
interactive motion $\\{\theta_{1,2},T_{1,2},O_{1,2}\\}$ from the source to the
target by solving another optimization problem as follows:
$\displaystyle\theta_{1,2},T_{1,2},O_{1,2}\longleftarrow\mathop{\operatorname{argmin}}\limits_{\theta_{1,2},T_{1,2},O_{1,2}}(\mathcal{L}_{j}+\mathcal{L}_{c}+\mathcal{L}_{\text{spat}}+\mathcal{L}_{\text{smooth}}),$
(3)
where fidelity loss $\mathcal{L}_{\text{j}}$ evaluates the difference in human
joint positions before and after the transfer, contact loss $\mathcal{L}_{c}$
computes the difference between human-object contact regions and the contact
constraints, and $\mathcal{L}_{\text{spat}}$ and $\mathcal{L}_{\text{smooth}}$
regularize the spatial plausibility and smoothness of the human motion.
Details on the loss designs and their
motivations are provided in the supplementary material.
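The structure of the objectives in Eqs. (2) and (3) can be illustrated with a translation-only toy problem: a fidelity term toward the source trajectory, a floor-penetration penalty, and a velocity-smoothness term, minimized with L-BFGS. The weights and scalar losses below are assumptions for illustration; the paper's actual objectives also include rotation and contact terms, detailed in the supplementary material.

```python
import numpy as np
from scipy.optimize import minimize

def retarget_translation(T_src: np.ndarray,
                         w_fid: float = 1.0,
                         w_spat: float = 100.0,
                         w_smooth: float = 10.0,
                         floor_z: float = 0.0) -> np.ndarray:
    """Toy analogue of Eq. (2): solve for an N x 3 object translation that stays
    close to the source (fidelity), above the floor (spatial), and smooth."""
    N = T_src.shape[0]

    def loss(x):
        T = x.reshape(N, 3)
        l_f = w_fid * np.sum((T - T_src) ** 2)                              # fidelity
        l_spat = w_spat * np.sum(np.minimum(T[:, 2] - floor_z, 0.0) ** 2)   # no floor penetration
        l_smooth = w_smooth * np.sum((T[1:] - T[:-1]) ** 2)                 # velocity smoothness
        return l_f + l_spat + l_smooth

    res = minimize(loss, T_src.ravel(), method="L-BFGS-B")
    return res.x.reshape(N, 3)
```

Given a source trajectory whose middle frame dips below the floor, the penalty and smoothness terms lift that frame back up while the endpoints stay near the source.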
Human-centric Contact Selection. Selecting reasonable contact constraints
efficiently is challenging due to their large scale and the time-consuming
interaction retargeting. We address this challenge by developing a beam search
algorithm to select contact constraints from a human-centric perspective.
Specifically, we train a human pose discriminator inspired by GAN-based motion
generation strategies [93, 87]. To train the discriminator, we build a
pairwise training dataset, with each pair consisting of one positive human
pose sample and one negative sample. Positive samples are encouraged to get
higher scores than negative ones. We use CORE4D-Real as positive samples. We
add 6D pose noise $\Delta(\alpha,\beta,\gamma,x,y,z)$ to the target motion, and
regard corresponding human motions generated by contact-guided interaction
retargeting as negative samples. The loss function is:
$\displaystyle\mathcal{L}_{\text{ranking}}=-\log(\sigma(R_{\text{pos}}-R_{\text{neg}}-m(S_{\text{pos}},S_{\text{neg}}))),$
(4)
where $S_{\text{pos}}$ and $S_{\text{neg}}$ denote inputs for positive and
negative samples respectively, with $R_{\text{pos}}$ and $R_{\text{neg}}$
being their corresponding discriminator scores. $\sigma$ is the Sigmoid
function, and
$m(S_{\text{pos}},S_{\text{neg}})=||\Delta(\alpha,\beta,\gamma,x,y,z)||$
is the human-guided margin [59] between positive and negative poses. This
margin explicitly instructs the discriminator to yield more significant
disparities across different poses.
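Eq. (4) is a margin ranking loss: the larger the injected pose noise, the larger the margin by which the positive sample's score must exceed the negative one's. A numerically stable sketch (names illustrative):

```python
import numpy as np

def ranking_loss(r_pos: float, r_neg: float, margin: float) -> float:
    """Eq. (4): -log(sigmoid(R_pos - R_neg - m)).

    margin would be ||Delta(alpha, beta, gamma, x, y, z)||, the norm of the
    6D pose noise used to create the negative sample.
    """
    z = r_pos - r_neg - margin
    # -log(sigmoid(z)) == log(1 + exp(-z)), computed stably via logaddexp.
    return float(np.logaddexp(0.0, -z))
```

The loss vanishes as `r_pos - r_neg` grows well beyond the margin and grows roughly linearly when the ordering is violated, which is what pushes the discriminator to separate heavily perturbed poses more strongly.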
To ensure the realism of human interactions, we also introduce an
interpenetration penalty. We select as the optimal contact constraints the
candidates with the highest discriminator scores among those with acceptable
levels of interpenetration.
### 3.4 Dataset Characteristics
Figure 4: Dataset statistics. (a) shows some object samples from five
categories. Bars in (b) indicate when the person is in contact with the object
during the entire collaborative object rearrangement interaction process. (c)
presents the proportion of collaboration modes in the dataset.
To better model collaborative object rearrangement interactions, we focus on
diversifying our dataset in several vital areas: object geometries,
collaboration modes, and 3D scenes. These ensure a comprehensive
representation of real-world interactions.
Diversity in Object Geometries. We design six object categories to cover the
main collaborative object rearrangement interaction scenarios as Fig. 4(a).
Categories with relatively simple geometry, uniformity, and typically
exhibiting symmetry include box, board, barrel, and stick. Categories with
more complex geometries and significant individual differences include chair
and desk.
Diversity in Collaboration Modes. We define five human-human collaboration
modes in collaborative object rearrangement. Each mode represents a unique
form of collaboration between two individuals, providing a new perspective and
possibilities for understanding and researching collaborative behaviors. At
first, we define the person with the egocentric camera as Person 2, and the
other as Person 1. Collaborative carrying tasks are divided into two modes
according to whether Person 2 knows the goal. In the handover and solely move
tasks, carrying alternates between the two participants. In the join and leave
tasks, Person 2 either joins in to help or leaves halfway through,
respectively.
Diversity in 3D Scenes. Surrounding scenarios are set up with varying levels
of scene complexity: no obstacle, single obstacle, and many obstacles (more
than one). Participants are asked to navigate through these randomly placed
obstacles by their own means. We observe that this typically involved
behaviors including bypassing, going through, stepping over, or moving
obstacles aside.
## 4 Experiments
In this section, we first present the train-test split of CORE4D (Section
4.1). We then propose two benchmarks for generating human-object
collaboration: human-object motion forecasting (Section 4.2), and interaction
synthesis (Section 4.3). Finally, Section 4.4 presents extensive studies on
the collaboration retargeting approach.
### 4.1 Data Split
We construct a training set from a random assortment of real objects,
combining their real motions and corresponding synthetic data. We also create
two test sets from CORE4D-Real for non-generalization and intra-category
generalization studies. Test set S1 includes interactions with training-set
objects, while S2 features interactions with new objects. CORE4D-Synthetic is
not included in the test set, avoiding potential biases from the retargeting
algorithm. Details are provided in the supplementary material.
### 4.2 Human-object Motion Forecasting
Forecasting 4D human motion [23, 62, 51, 24] is a crucial problem with
applications in VR/AR and embodied perception [36]. Current research [13, 1,
77, 88] is limited to individual behaviors due to data constraints. Our work
expands this by using diverse multi-person collaborations, making the
prediction problem both intriguing and challenging.
Task Formulation. Given the object’s 3D model and the human-object poses in 15
consecutive frames, the task is to predict their poses in the following 15
frames. The human pose $P_{h}\in\mathbb{R}^{23\times 3}$
represents joint rotations of the SMPL-X [61] model, while the object pose
$P_{o}=\\{R_{o}\in\mathbb{R}^{3},T_{o}\in\mathbb{R}^{3}\\}$ denotes 3D
orientation and 3D translation of the rigid object model.
Evaluation Metrics. Following existing motion forecasting works [12, 78, 88],
we evaluate human joints position error $J_{e}$, object translation error
$T_{e}$, object rotation error $R_{e}$, human-object contact accuracy
$C_{\text{acc}}$, and penetration rate $P_{r}$. Details are provided in the
supplementary material.
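For concreteness, the joint position error and the rotation error can be sketched with their standard definitions below; the exact implementations used by the benchmark are in the supplementary material, so these are illustrative assumptions.

```python
import numpy as np

def joint_position_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """J_e: mean per-joint position error over frames and joints
    (inputs of shape (frames, joints, 3); mm if the inputs are in mm)."""
    return float(np.mean(np.linalg.norm(pred - gt, axis=-1)))

def rotation_error_deg(R_pred: np.ndarray, R_gt: np.ndarray) -> float:
    """R_e: geodesic distance between two 3x3 rotation matrices, in degrees."""
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))
```

The object translation error $T_{e}$ is the analogous Euclidean norm on predicted versus ground-truth translations.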
Methods, Results, and Analysis. We evaluate three state-of-the-art motion
forecasting methods, MDM [73], InterDiff [88], and CAHMP [12]. Table 2
quantitatively shows these methods reveal a consistent drop in performance for
unseen objects (S2) versus seen ones (S1) regarding human pose prediction.
Meanwhile, errors in object pose prediction remain similar. This highlights
the challenges in generalizing human collaborative motion for novel object
shapes.
Table 2: Quantitative results on human-object motion forecasting.

Test Set | Method | $J_{e}$ (mm, $\downarrow$) | $T_{e}$ (mm, $\downarrow$) | $R_{e}$ (∘, $\downarrow$) | $C_{\text{acc}}$ ($\%$, $\uparrow$) | $P_{r}$ ($\%$, $\downarrow$)
---|---|---|---|---|---|---
S1 | MDM [73] | 170.8 ($\pm$0.9) | 136.8 ($\pm$0.1) | 10.7 ($\pm$0.1) | 84.9 ($\pm$0.2) | 0.3 ($\pm$0.0)
S1 | InterDiff [88] | 170.8 ($\pm$0.9) | 135.1 ($\pm$0.1) | 10.2 ($\pm$0.1) | 84.9 ($\pm$0.2) | 0.3 ($\pm$0.0)
S1 | CAHMP [12] | 169.4 ($\pm$0.3) | 110.3 ($\pm$0.1) | 9.0 ($\pm$0.1) | - | -
S2 | MDM [73] | 186.4 ($\pm$0.7) | 136.0 ($\pm$0.2) | 11.1 ($\pm$0.0) | 88.0 ($\pm$0.0) | 0.3 ($\pm$0.0)
S2 | InterDiff [88] | 186.4 ($\pm$0.7) | 133.6 ($\pm$0.2) | 10.7 ($\pm$0.1) | 88.0 ($\pm$0.0) | 0.3 ($\pm$0.0)
S2 | CAHMP [12] | 170.5 ($\pm$0.3) | 112.9 ($\pm$0.1) | 9.5 ($\pm$0.1) | - | -
### 4.3 Interaction Synthesis
Generating human-object interaction [40, 19, 39, 63] is an emerging research
topic benefiting human avatar animation and human-robot collaboration [11,
55]. With extensive collaboration modes and various object categories, CORE4D
constitutes a knowledge base for studying generalizable algorithms of human-
object-human interactive motion synthesis.
Task Formulation. Following recent studies [70, 40], we define the task as
object-conditioned human motion generation. Given an object geometry sequence
$G_{o}\in\mathbb{R}^{T\times N\times 3}$, the aim is to generate corresponding
two-person collaboration motions $M_{h}\in\mathbb{R}^{2\times T\times 23\times
3}$. Here $T$ denotes the number of frames, $N$ the number of object points,
and $M_{h}$ comprises the pose parameters of the SMPL-X [61] model for both
persons.
Evaluation Metrics. Following individual human-object interaction synthesis
[40], we evaluate human joint position error $RR.J_{e}$, object vertex
position error $RR.V_{e}$, and human-object contact accuracy $C_{\text{acc}}$.
The FID score ($FID$) is leveraged to quantitatively assess the naturalness of
synthesized results. Details of the metric designs are presented in the
supplementary material.
Methods, Results, and Analysis. We utilize two advanced generative models, MDM
[73] and OMOMO [40], as baselines. MDM is a one-stage conditional motion
diffusion model, while OMOMO is a two-stage approach with hand positions as
intermediate results. Quantitative evaluations reveal larger errors in OMOMO
when modeling multi-human collaboration compared to individual interaction
synthesis by Li et al. [40]. Furthermore, the synthesized results have a
higher FID than real motion data, indicating challenges in motion naturalness.
Table 3: Quantitative results on interaction synthesis.

Test Set | Method | $RR.J_{e}$ (mm, $\downarrow$) | $RR.V_{e}$ (mm, $\downarrow$) | $C_{\text{acc}}$ ($\%$, $\uparrow$) | $FID$ ($\downarrow$)
---|---|---|---|---|---
S1 | MDM [73] | 138.0 ($\pm$0.3) | 194.6 ($\pm$0.2) | 76.9 ($\pm$0.5) | 7.7 ($\pm$0.2)
S1 | OMOMO [40] | 137.8 ($\pm$0.2) | 196.7 ($\pm$0.3) | 78.2 ($\pm$0.5) | 8.3 ($\pm$0.6)
S2 | MDM [73] | 145.9 ($\pm$0.2) | 208.2 ($\pm$0.2) | 76.7 ($\pm$0.1) | 7.7 ($\pm$0.2)
S2 | OMOMO [40] | 145.2 ($\pm$0.6) | 209.9 ($\pm$1.0) | 77.8 ($\pm$0.3) | 8.3 ($\pm$1.0)
### 4.4 Collaboration Retargeting
Table 4: Ablation study. CC denotes contact candidates. D denotes the human pose discriminator. CCU denotes the contact candidate update. User preferences report A/B/Approx. (%, $\uparrow$) for each comparison.

Comparison | Variant | CC | D | CCU | Penetration Distance (mm, $\downarrow$) | Contact A/B/Approx. | Motion A/B/Approx.
---|---|---|---|---|---|---|---
Abl.1 | A | | | | 6.06 | 7.8/88.9/3.3 | 3.3/84.4/12.3
Abl.1 | B | ✓ | ✓ | | 2.38 | |
Abl.2 | A | ✓ | | | 5.49 | 1.2/91.4/7.4 | 3.2/85.1/11.7
Abl.2 | B | ✓ | ✓ | | 2.38 | |
Abl.3 | A | ✓ | ✓ | | 2.38 | 5.0/76.0/19.0 | 4.0/69.0/27.0
Abl.3 | B | ✓ | ✓ | ✓ | 2.27 | |
User Studies. We conduct user studies to examine the quality of
CORE4D-Synthetic in terms of naturalness of contact and human motion. Each
study comprises two collections, each with at least 100 sequences displayed in
pairs on a website. Users are instructed to assess the realism of human-object
contacts and the naturalness of human motions, and then select the superior
one in each pair separately. Recognizing the diversity of acceptable contacts
and motions, participants are permitted to deem the performances as roughly
equivalent.
Ablation on Contact Candidates. In Table 4, Abl.1, we use only the contact
points from a single source trajectory for retargeting to the target, instead
of drawing many candidates from CORE4D-Real, making the whole retargeting
process similar to the OakInk [94] method. We observe a sharp decline in both
physical plausibility and user preferences, indicating that our method
compensates for OakInk’s shortcomings in retargeting objects with significant
geometric and scale variations.
Ablation on Discriminator. In this ablation, shown in Table 4, Abl.2, we omit
the human pose discriminator in the collaboration retargeting and instead
randomly choose a candidate from the contact candidates. There are obvious
performance drops, demonstrating the critical role of the human pose
discriminator in selecting appropriate candidates.
Ablation on Contact Candidate Update. We exclude the contact candidate update
process in the Table 4, Abl.3 experiment. This removal weakens our method’s
ability to search for optimal solutions, resulting in a modest degradation in
penetration distance. The user study still exhibits a strong bias, indicating
a perceived decline in the plausibility of both contact and
motion. This ablation underscores the importance of contact candidate update
within our methodology.
Comparing CORE4D-Synthetic with CORE4D-Real. We assess the quality of
CORE4D-Synthetic by comparing it with CORE4D-Real through a user study. In 43%
of cases, users perceive the quality of both options as comparable, and in a
further 14% of cases they even prefer the synthetic data. This indicates that
the quality of our synthetic data closely approximates that of real data.
Application of CORE4D-Synthetic. Table 5 compares the motion forecasting
performance of a lightweight CAHMP [14] trained on different data mixtures.
The test set is S2, defined in Section 4.1. We assess the quality of
CORE4D-Synthetic by comparing No.A and No.B: No.A even achieves better object
pose prediction, owing to the enriched spatial relations between humans and
objects in CORE4D-Synthetic. No.C shows the value of CORE4D-Synthetic by
largely improving performance. Details are in the supplementary material.
Table 5: Ablation on the incorporation of CORE4D-Synthetic on the motion forecasting task.

No | Total | Real | Synthetic | $J_{e}$ (mm, $\downarrow$) | $T_{e}$ (mm, $\downarrow$) | $R_{e}$ (∘, $\downarrow$)
---|---|---|---|---|---|---
A | 1.0K | 0.1K | 0.9K | 127.7 ($\pm$0.7) | 121.7 ($\pm$0.7) | 8.04 ($\pm$0.4)
B | 1.0K | 1.0K | 0 | 127.0 ($\pm$0.8) | 120.5 ($\pm$0.6) | 9.48 ($\pm$0.1)
C | 5.0K | 1.0K | 4.0K | 116.2 ($\pm$0.3) | 112.1 ($\pm$0.2) | 6.99 ($\pm$0.1)
## 5 Conclusion and Limitations
We present CORE4D, a novel large-scale 4D human-object-human interaction
dataset for collaborative object rearrangement. It comprises diverse
compositions of various object geometries, collaboration modes, and
surrounding 3D scenes. To efficiently enlarge the data scale, we contribute a
hybrid data acquisition method involving real-world data capturing and a novel
synthetic data augmentation algorithm, resulting in 11K motion sequences
covering 37 real-world and 3K virtual objects. Extensive experiments
demonstrate the effectiveness of the data augmentation strategy and the value
of the augmented motion data. We benchmark human-object motion forecasting and
interaction synthesis on CORE4D, revealing new challenges and research
opportunities.
Limitations. One limitation is that outdoor scenes are not incorporated due to
the usage of the mocap system. Another limitation is that our data
augmentation strategy currently focuses on adapting collaborations to novel
object geometries while excluding human shape diversity. Integrating our
retargeting approach with human shape modeling could be an interesting future
direction.
## References
* [1] Adeli, V., Ehsanpour, M., Reid, I., Niebles, J.C., Savarese, S., Adeli, E., Rezatofighi, H.: Tripod: Human trajectory and pose dynamics forecasting in the wild. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 13390–13400 (2021)
* [2] Araújo, J.P., Li, J., Vetrivel, K., Agarwal, R., Wu, J., Gopinath, D., Clegg, A.W., Liu, K.: Circle: Capture in rich contextual environments. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 21211–21221 (2023)
* [3] Bhatnagar, B.L., Xie, X., Petrov, I.A., Sminchisescu, C., Theobalt, C., Pons-Moll, G.: Behave: Dataset and method for tracking human object interactions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 15935–15946 (2022)
* [4] Cao, Z., Gao, H., Mangalam, K., Cai, Q.Z., Vo, M., Malik, J.: Long-term human motion prediction with scene context. In: Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part I 16. pp. 387–404. Springer (2020)
* [5] Carfì, A., Foglino, F., Bruno, B., Mastrogiovanni, F.: A multi-sensor dataset of human-human handover. Data in brief 22, 109–117 (2019)
* [6] Chang, A.X., Funkhouser, T., Guibas, L., Hanrahan, P., Huang, Q., Li, Z., Savarese, S., Savva, M., Song, S., Su, H., et al.: Shapenet: An information-rich 3d model repository. arXiv preprint arXiv:1512.03012 (2015)
* [7] Chao, Y.W., Yang, W., Xiang, Y., Molchanov, P., Handa, A., Tremblay, J., Narang, Y.S., Van Wyk, K., Iqbal, U., Birchfield, S., et al.: Dexycb: A benchmark for capturing hand grasping of objects. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 9044–9053 (2021)
* [8] Chen, Z.Q., Van Wyk, K., Chao, Y.W., Yang, W., Mousavian, A., Gupta, A., Fox, D.: Learning robust real-world dexterous grasping policies via implicit shape augmentation. arXiv preprint arXiv:2210.13638 (2022)
* [9] Cheng, H.K., Oh, S.W., Price, B., Schwing, A., Lee, J.Y.: Tracking anything with decoupled video segmentation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 1316–1326 (2023)
* [10] Christen, S., Kocabas, M., Aksan, E., Hwangbo, J., Song, J., Hilliges, O.: D-grasp: Physically plausible dynamic grasp synthesis for hand-object interactions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 20577–20586 (2022)
* [11] Christen, S., Yang, W., Pérez-D’Arpino, C., Hilliges, O., Fox, D., Chao, Y.W.: Learning human-to-robot handovers from point clouds. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 9654–9664 (2023)
* [12] Corona, E., Pumarola, A., Alenya, G., Moreno-Noguer, F.: Context-aware human motion prediction. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 6992–7001 (2020)
* [13] Corona, E., Pumarola, A., Alenya, G., Moreno-Noguer, F.: Context-aware human motion prediction. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 6992–7001 (2020)
* [14] Corona, E., Pumarola, A., Alenyà, G., Moreno-Noguer, F.: Context-aware human motion prediction (2020)
* [15] Dabral, R., Shimada, S., Jain, A., Theobalt, C., Golyanik, V.: Gravity-aware monocular 3d human-object reconstruction. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 12365–12374 (2021)
* [16] Dai, Y., Lin, Y., Lin, X., Wen, C., Xu, L., Yi, H., Shen, S., Ma, Y., Wang, C.: Sloper4d: A scene-aware dataset for global 4d human pose estimation in urban environments. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 682–692 (2023)
* [17] Dai, Y., Lin, Y., Wen, C., Shen, S., Xu, L., Yu, J., Ma, Y., Wang, C.: Hsc4d: Human-centered 4d scene capture in large-scale indoor-outdoor space using wearable imus and lidar. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 6792–6802 (2022)
* [18] Dao, J., Duan, H., Fern, A.: Sim-to-real learning for humanoid box loco-manipulation. arXiv preprint arXiv:2310.03191 (2023)
* [19] Diller, C., Dai, A.: Cg-hoi: Contact-guided 3d human-object interaction generation. arXiv preprint arXiv:2311.16097 (2023)
* [20] Fan, Z., Taheri, O., Tzionas, D., Kocabas, M., Kaufmann, M., Black, M.J., Hilliges, O.: Arctic: A dataset for dexterous bimanual hand-object manipulation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 12943–12954 (2023)
* [21] Gebru, T., Morgenstern, J., Vecchione, B., Vaughan, J.W., Wallach, H., Iii, H.D., Crawford, K.: Datasheets for datasets. Communications of the ACM 64(12), 86–92 (2021)
* [22] Ghosh, A., Dabral, R., Golyanik, V., Theobalt, C., Slusallek, P.: Imos: Intent-driven full-body motion synthesis for human-object interactions. In: Computer Graphics Forum. vol. 42, pp. 1–12. Wiley Online Library (2023)
* [23] Guo, W., Bie, X., Alameda-Pineda, X., Moreno-Noguer, F.: Multi-person extreme motion prediction. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 13053–13064 (2022)
* [24] Guo, W., Du, Y., Shen, X., Lepetit, V., Alameda-Pineda, X., Moreno-Noguer, F.: Back to mlp: A simple baseline for human motion prediction. In: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. pp. 4809–4819 (2023)
* [25] Guzov, V., Chibane, J., Marin, R., He, Y., Sattler, T., Pons-Moll, G.: Interaction replica: Tracking human-object interaction and scene changes from human motion. arXiv preprint arXiv:2205.02830 (2022)
* [26] Guzov, V., Mir, A., Sattler, T., Pons-Moll, G.: Human poseitioning system (hps): 3d human pose estimation and self-localization in large scenes from body-mounted sensors. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 4318–4329 (2021)
* [27] Hassan, M., Ceylan, D., Villegas, R., Saito, J., Yang, J., Zhou, Y., Black, M.J.: Stochastic scene-aware motion prediction. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 11374–11384 (2021)
* [28] Hassan, M., Choutas, V., Tzionas, D., Black, M.J.: Resolving 3d human pose ambiguities with 3d scene constraints. In: Proceedings of the IEEE/CVF international conference on computer vision. pp. 2282–2292 (2019)
* [29] Hassan, M., Ghosh, P., Tesch, J., Tzionas, D., Black, M.J.: Populating 3d scenes by learning human-scene interaction. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 14708–14718 (2021)
* [30] Ho, J., Jain, A., Abbeel, P.: Denoising diffusion probabilistic models. Advances in neural information processing systems 33, 6840–6851 (2020)
* [31] Huang, C.H.P., Yi, H., Höschle, M., Safroshkin, M., Alexiadis, T., Polikovsky, S., Scharstein, D., Black, M.J.: Capturing and inferring dense full-body human-scene contact. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 13274–13285 (2022)
* [32] Huang, Y., Taheri, O., Black, M.J., Tzionas, D.: Intercap: Joint markerless 3d tracking of humans and objects in interaction. In: DAGM German Conference on Pattern Recognition. pp. 281–299. Springer (2022)
* [33] Huang, Z., Xu, H., Huang, H., Ma, C., Huang, H., Hu, R.: Spatial and surface correspondence field for interaction transfer. arXiv preprint arXiv:2405.03221 (2024)
* [34] Jiang, N., Liu, T., Cao, Z., Cui, J., Zhang, Z., Chen, Y., Wang, H., Zhu, Y., Huang, S.: Full-body articulated human-object interaction. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 9365–9376 (2023)
* [35] Jiang, N., Zhang, Z., Li, H., Ma, X., Wang, Z., Chen, Y., Liu, T., Zhu, Y., Huang, S.: Scaling up dynamic human-scene interaction modeling. arXiv preprint arXiv:2403.08629 (2024)
* [36] Kasahara, S., Konno, K., Owaki, R., Nishi, T., Takeshita, A., Ito, T., Kasuga, S., Ushiba, J.: Malleable embodiment: changing sense of embodiment by spatial-temporal deformation of virtual human body. In: Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. pp. 6438–6448 (2017)
* [37] Kim, Y., Park, H., Bang, S., Lee, S.H.: Retargeting human-object interaction to virtual avatars. IEEE transactions on visualization and computer graphics 22(11), 2405–2412 (2016)
* [38] Kulkarni, N., Rempe, D., Genova, K., Kundu, A., Johnson, J., Fouhey, D., Guibas, L.: Nifty: Neural object interaction fields for guided human motion synthesis. arXiv preprint arXiv:2307.07511 (2023)
* [39] Li, J., Clegg, A., Mottaghi, R., Wu, J., Puig, X., Liu, C.K.: Controllable human-object interaction synthesis. arXiv preprint arXiv:2312.03913 (2023)
* [40] Li, J., Wu, J., Liu, C.K.: Object motion guided human motion synthesis. ACM Transactions on Graphics (TOG) 42(6), 1–11 (2023)
* [41] Li, J., Bian, S., Xu, C., Chen, Z., Yang, L., Lu, C.: Hybrik-x: Hybrid analytical-neural inverse kinematics for whole-body mesh recovery. arXiv preprint arXiv:2304.05690 (2023)
* [42] Li, J., Nguyen, Q.: Kinodynamics-based pose optimization for humanoid loco-manipulation. arXiv preprint arXiv:2303.04985 (2023)
* [43] Li, Z., Shimada, S., Schiele, B., Theobalt, C., Golyanik, V.: Mocapdeform: Monocular 3d human motion capture in deformable scenes. In: 2022 International Conference on 3D Vision (3DV). pp. 1–11. IEEE (2022)
* [44] Liu, M., Yang, D., Zhang, Y., Cui, Z., Rehg, J.M., Tang, S.: 4d human body capture from egocentric video via 3d scene grounding. In: 2021 international conference on 3D vision (3DV). pp. 930–939. IEEE (2021)
* [45] Liu, S., Li, Y.L., Fang, Z., Liu, X., You, Y., Lu, C.: Primitive-based 3d human-object interaction modelling and programming. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 38, pp. 3711–3719 (2024)
* [46] Liu, Y., Yang, H., Si, X., Liu, L., Li, Z., Zhang, Y., Liu, Y., Yi, L.: Taco: Benchmarking generalizable bimanual tool-action-object understanding. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 21740–21751 (2024)
* [47] Liu, Y., Yang, H., Si, X., Liu, L., Li, Z., Zhang, Y., Liu, Y., Yi, L.: Taco: Benchmarking generalizable bimanual tool-action-object understanding. arXiv preprint arXiv:2401.08399 (2024)
* [48] Liu, Y., Chen, C., Yi, L.: Interactive humanoid: Online full-body motion reaction synthesis with social affordance canonicalization and forecasting. arXiv preprint arXiv:2312.08983 (2023)
* [49] Liu, Y., Liu, Y., Jiang, C., Lyu, K., Wan, W., Shen, H., Liang, B., Fu, Z., Wang, H., Yi, L.: Hoi4d: A 4d egocentric dataset for category-level human-object interaction. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 21013–21022 (2022)
* [50] Lorensen, W.E., Cline, H.E.: Marching cubes: A high resolution 3d surface construction algorithm. In: Seminal graphics: pioneering efforts that shaped the field, pp. 347–353 (1998)
* [51] Mao, W., Hartley, R.I., Salzmann, M., et al.: Contact-aware human motion forecasting. Advances in Neural Information Processing Systems 35, 7356–7367 (2022)
* [52] Meredith, M., Maddock, S., et al.: Motion capture file formats explained. Department of Computer Science, University of Sheffield 211, 241–244 (2001)
* [53] Murooka, M., Kumagai, I., Morisawa, M., Kanehiro, F., Kheddar, A.: Humanoid loco-manipulation planning based on graph search and reachability maps. IEEE Robotics and Automation Letters 6(2), 1840–1847 (2021)
* [54] Ng, E., Liu, Z., Kennedy, M.: Diffusion co-policy for synergistic human-robot collaborative tasks. IEEE Robotics and Automation Letters (2023)
* [55] Ng, E., Liu, Z., Kennedy, M.: It takes two: Learning to plan for human-robot cooperative carrying. In: 2023 IEEE International Conference on Robotics and Automation (ICRA). pp. 7526–7532. IEEE (2023)
* [56] NOITOM INTERNATIONAL, INC: Noitom motion capture systems (2024), https://www.noitom.com.cn/
* [57] OpenAI: Chatgpt (2023), https://chat.openai.com/
* [58] OpenCV: opencv. https://github.com/opencv/opencv/blob/master/data/haarcascades/haarcascade_frontalface_default.xml (2013)
* [59] Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al.: Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems 35, 27730–27744 (2022)
* [60] Park, J.J., Florence, P., Straub, J., Newcombe, R., Lovegrove, S.: Deepsdf: Learning continuous signed distance functions for shape representation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 165–174 (2019)
* [61] Pavlakos, G., Choutas, V., Ghorbani, N., Bolkart, T., Osman, A.A., Tzionas, D., Black, M.J.: Expressive body capture: 3d hands, face, and body from a single image. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 10975–10985 (2019)
* [62] Peng, X., Shen, Y., Wang, H., Nie, B., Wang, Y., Wu, Z.: Somoformer: Social-aware motion transformer for multi-person motion prediction. arXiv preprint arXiv:2208.09224 (2022)
* [63] Peng, X., Xie, Y., Wu, Z., Jampani, V., Sun, D., Jiang, H.: Hoi-diff: Text-driven synthesis of 3d human-object interactions using diffusion models. arXiv preprint arXiv:2312.06553 (2023)
* [64] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International conference on machine learning. pp. 8748–8763. PMLR (2021)
* [65] Rodriguez, D., Behnke, S.: Transferring category-based functional grasping skills by latent space non-rigid registration. IEEE Robotics and Automation Letters 3(3), 2662–2669 (2018)
* [66] Savva, M., Chang, A.X., Hanrahan, P., Fisher, M., Nießner, M.: Pigraphs: learning interaction snapshots from observations. ACM Transactions On Graphics (TOG) 35(4), 1–12 (2016)
* [67] Simeonov, A., Du, Y., Tagliasacchi, A., Tenenbaum, J.B., Rodriguez, A., Agrawal, P., Sitzmann, V.: Neural descriptor fields: Se (3)-equivariant object representations for manipulation. In: 2022 International Conference on Robotics and Automation (ICRA). pp. 6394–6400. IEEE (2022)
* [68] Sohn, K., Lee, H., Yan, X.: Learning structured output representation using deep conditional generative models. Advances in neural information processing systems 28 (2015)
* [69] Starke, S., Zhang, H., Komura, T., Saito, J.: Neural state machine for character-scene interactions. ACM Trans. Graph. 38(6), 209–1 (2019)
* [70] Taheri, O., Choutas, V., Black, M.J., Tzionas, D.: Goal: Generating 4d whole-body motion for hand-object grasping. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 13263–13273 (2022)
* [71] Taheri, O., Ghorbani, N., Black, M.J., Tzionas, D.: Grab: A dataset of whole-body human grasping of objects. In: Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part IV 16. pp. 581–600. Springer (2020)
* [72] Tanke, J., Kwon, O.H., Mueller, F.B., Doering, A., Gall, J.: Humans in kitchens: A dataset for multi-person human motion forecasting with scene context. Advances in Neural Information Processing Systems 36 (2024)
* [73] Tevet, G., Raab, S., Gordon, B., Shafir, Y., Cohen-Or, D., Bermano, A.H.: Human motion diffusion model. arXiv preprint arXiv:2209.14916 (2022)
* [74] Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al.: Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023)
* [75] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural information processing systems 30 (2017)
* [76] Wan, W., Geng, H., Liu, Y., Shan, Z., Yang, Y., Yi, L., Wang, H.: Unidexgrasp++: Improving dexterous grasping policy learning via geometry-aware curriculum and iterative generalist-specialist learning. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 3891–3902 (2023)
* [77] Wan, W., Yang, L., Liu, L., Zhang, Z., Jia, R., Choi, Y.K., Pan, J., Theobalt, C., Komura, T., Wang, W.: Learn to predict how humans manipulate large-sized objects from interactive motions. IEEE Robotics and Automation Letters 7(2), 4702–4709 (2022)
* [78] Wan, W., Yang, L., Liu, L., Zhang, Z., Jia, R., Choi, Y.K., Pan, J., Theobalt, C., Komura, T., Wang, W.: Learn to predict how humans manipulate large-sized objects from interactive motions. IEEE Robotics and Automation Letters 7(2), 4702–4709 (2022)
* [79] Wang, Z., Chen, Y., Liu, T., Zhu, Y., Liang, W., Huang, S.: Humanise: Language-conditioned human motion generation in 3d scenes. Advances in Neural Information Processing Systems 35, 14959–14971 (2022)
* [80] Wang, Z., Shin, D., Fowlkes, C.C.: Predicting camera viewpoint improves cross-dataset generalization for 3d human pose estimation. In: Computer Vision–ECCV 2020 Workshops: Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16. pp. 523–540. Springer (2020)
* [81] Wiederhold, N., Megyeri, A., Paris, D., Banerjee, S., Banerjee, N.: Hoh: Markerless multimodal human-object-human handover dataset with large object count. Advances in Neural Information Processing Systems 36 (2024)
* [82] Wu, Q., Shi, Y., Huang, X., Yu, J., Xu, L., Wang, J.: Thor: Text to human-object interaction diffusion via relation intervention. arXiv preprint arXiv:2403.11208 (2024)
* [83] Wu, R., Zhu, T., Peng, W., Hang, J., Sun, Y.: Functional grasp transfer across a category of objects from only one labeled instance. IEEE Robotics and Automation Letters 8(5), 2748–2755 (2023)
* [84] Wu, Y., Wang, J., Zhang, Y., Zhang, S., Hilliges, O., Yu, F., Tang, S.: Saga: Stochastic whole-body grasping with contact. In: European Conference on Computer Vision. pp. 257–274. Springer (2022)
* [85] Xie, X., Bhatnagar, B.L., Pons-Moll, G.: Visibility aware human-object interaction tracking from single rgb camera. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 4757–4768 (2023)
* [86] Xie, Z., Tseng, J., Starke, S., van de Panne, M., Liu, C.K.: Hierarchical planning and control for box loco-manipulation. Proceedings of the ACM on Computer Graphics and Interactive Techniques 6(3), 1–18 (2023)
* [87] Xu, L., Song, Z., Wang, D., Su, J., Fang, Z., Ding, C., Gan, W., Yan, Y., Jin, X., Yang, X., et al.: Actformer: A gan-based transformer towards general action-conditioned 3d human motion generation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 2228–2238 (2023)
* [88] Xu, S., Li, Z., Wang, Y.X., Gui, L.Y.: Interdiff: Generating 3d human-object interactions with physics-informed diffusion. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 14928–14940 (2023)
* [89] Xu, S., Wang, Z., Wang, Y.X., Gui, L.Y.: Interdreamer: Zero-shot text to 3d dynamic human-object interaction. arXiv preprint arXiv:2403.19652 (2024)
* [90] Xu, Y., Wan, W., Zhang, J., Liu, H., Shan, Z., Shen, H., Wang, R., Geng, H., Weng, Y., Chen, J., et al.: Unidexgrasp: Universal robotic dexterous grasping via learning diverse proposal generation and goal-conditioned policy. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 4737–4746 (2023)
* [91] Yan, M., Wang, X., Dai, Y., Shen, S., Wen, C., Xu, L., Ma, Y., Wang, C.: Cimi4d: A large multimodal climbing motion dataset under human-scene interactions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 12977–12988 (2023)
* [92] Yan, M., Zhang, Y., Cai, S., Fan, S., Lin, X., Dai, Y., Shen, S., Wen, C., Xu, L., Ma, Y., et al.: Reli11d: A comprehensive multimodal human motion dataset and method. arXiv preprint arXiv:2403.19501 (2024)
* [93] Yan, S., Li, Z., Xiong, Y., Yan, H., Lin, D.: Convolutional sequence generation for skeleton-based action synthesis. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 4394–4402 (2019)
* [94] Yang, L., Li, K., Zhan, X., Wu, F., Xu, A., Liu, L., Lu, C.: Oakink: A large-scale knowledge repository for understanding hand-object interaction. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 20953–20962 (2022)
* [95] Yang, L., Zhan, X., Li, K., Xu, W., Li, J., Lu, C.: Cpf: Learning a contact potential field to model the hand-object interaction. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 11097–11106 (2021)
* [96] Ye, R., Xu, W., Xue, Z., Tang, T., Wang, Y., Lu, C.: H2o: A benchmark for visual human-human object handover analysis. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 15762–15771 (2021)
* [97] Yi, H., Thies, J., Black, M.J., Peng, X.B., Rempe, D.: Generating human interaction motions in scenes with text control. arXiv preprint arXiv:2404.10685 (2024)
* [98] Zhan, X., Yang, L., Zhao, Y., Mao, K., Xu, H., Lin, Z., Li, K., Lu, C.: Oakink2: A dataset of bimanual hands-object manipulation in complex task completion. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 445–456 (2024)
* [99] Zhang, H., Christen, S., Fan, Z., Zheng, L., Hwangbo, J., Song, J., Hilliges, O.: Artigrasp: Physically plausible synthesis of bi-manual dexterous grasping and articulation. In: 2024 International Conference on 3D Vision (3DV). pp. 235–246. IEEE (2024)
* [100] Zhang, J., Luo, H., Yang, H., Xu, X., Wu, Q., Shi, Y., Yu, J., Xu, L., Wang, J.: Neuraldome: A neural modeling pipeline on multi-view human-object interactions. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 8834–8845 (2023)
* [101] Zhang, J., Zhang, J., Song, Z., Shi, Z., Zhao, C., Shi, Y., Yu, J., Xu, L., Wang, J.: Hoi-m3: Capture multiple humans and objects interaction within contextual environment. arXiv preprint arXiv:2404.00299 (2024)
* [102] Zhang, S., Ma, Q., Zhang, Y., Qian, Z., Kwon, T., Pollefeys, M., Bogo, F., Tang, S.: Egobody: Human body shape and motion of interacting people from head-mounted devices. In: European Conference on Computer Vision. pp. 180–200. Springer (2022)
* [103] Zhang, W., Dabral, R., Leimkühler, T., Golyanik, V., Habermann, M., Theobalt, C.: Roam: Robust and object-aware motion generation using neural pose descriptors. arXiv preprint arXiv:2308.12969 1 (2023)
* [104] Zhang, X., Bhatnagar, B.L., Starke, S., Guzov, V., Pons-Moll, G.: Couch: Towards controllable human-chair interactions. In: European Conference on Computer Vision. pp. 518–535. Springer (2022)
* [105] Zhang, X., Bhatnagar, B.L., Starke, S., Petrov, I., Guzov, V., Dhamo, H., Pérez-Pellitero, E., Pons-Moll, G.: Force: Dataset and method for intuitive physics guided human-object interaction. arXiv preprint arXiv:2403.11237 (2024)
* [106] Zhao, K., Wang, S., Zhang, Y., Beeler, T., Tang, S.: Compositional human-scene interaction synthesis with semantic control. In: European Conference on Computer Vision. pp. 311–327. Springer (2022)
* [107] Zheng, J., Zheng, Q., Fang, L., Liu, Y., Yi, L.: Cams: Canonicalized manipulation spaces for category-level functional hand-object manipulation synthesis. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 585–594 (2023)
Appendix
The project page of CORE4D is CORE4D Project Page.
Contents:
* A. Cross-dataset Evaluation
* B. Details on Real-world Data Acquisition
* C. Details on CORE4D-Synthetic Data Generation
* D. Dataset Statistics and Visualization
* E. Details on Data Split
* F. Evaluation Metrics for Benchmarks
* G. Qualitative Results on Benchmarks
* H. Details on the Application of CORE4D-Synthetic
* I. CORE4D-Real Data Capturing Instructions and Costs
* J. Experiment Configurations and Codes
* K. URLs of Dataset, Repository, Metadata, DOI, and License
* L. Dataset Documentation and Intended Uses
* M. Author Statement
## Appendix A Cross-dataset Evaluation
To examine the data quality of CORE4D-Real, we follow existing dataset efforts[91, 7, 47] and conduct a vision-based cross-dataset evaluation. We select BEHAVE[3], an independent human-object interaction dataset that includes color images, and adopt 2D human keypoint estimation as the evaluation task.
Data Preparation. For each color image from CORE4D-Real and BEHAVE[3], we first detect the bounding box of each person using the ground-truth human pose and crop the corresponding image patch. We then resize the patch so that its longer side is 256 pixels and pad it into a 256x256 image with a black background. Finally, for each 256x256 image, we automatically obtain the ground-truth 2D pixel coordinates of the 22 SMPL-X[61] human body joints from the 3D human poses. For the data split, we follow the original train-test split of BEHAVE[3] and merge the two test sets (S1, S2) of CORE4D-Real.
Task Formulation. Given a 256x256 color image containing a person, the task is to estimate the 2D pixel coordinate of each of the 22 SMPL-X[61] human body joints.
Evaluation Metrics. $P_{e}$ denotes the mean-square error of the 2D coordinate estimates, and $Acc$ denotes the percentage of coordinate estimates whose Euclidean distance to the ground truth is smaller than $15$ pixels.
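The two metrics can be sketched as below, assuming $P_{e}$ averages the squared 2D distance per joint; the exact averaging convention is our assumption:

```python
import numpy as np

def keypoint_metrics(pred, gt, thresh=15.0):
    """pred, gt: (N, 22, 2) arrays of pixel coordinates.
    Returns (P_e, Acc): mean squared 2D distance in pixel^2, and the
    percentage of estimates within `thresh` pixels of ground truth."""
    sq_dist = np.sum((pred - gt) ** 2, axis=-1)      # squared distance per joint
    p_e = float(np.mean(sq_dist))
    acc = float(np.mean(np.sqrt(sq_dist) < thresh) * 100.0)
    return p_e, acc
```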
Method, Results, and Analysis. We draw inspiration from HybrIK-X[41] and adopt its vision backbone as the solution. Table 6 reports performance on the two datasets under different training settings. Due to significant domain gaps in visual patterns and human behaviors, transferring a model trained on one dataset to the other consistently increases the error. Despite these gaps, training jointly on both datasets yields large performance gains on both CORE4D-Real and BEHAVE[3], indicating the accuracy of CORE4D-Real and the value of the dataset for visual perception studies.
Table 6: Cross-dataset evaluation with BEHAVE[3] on 2D human keypoint estimation. Each cell reports $P_{e}$ (pixel$^2$, lower is better) / $Acc$ ($\%$, higher is better).

Test \ Train | CORE4D-Real | BEHAVE[3] | CORE4D-Real + BEHAVE[3]
---|---|---|---
CORE4D-Real | 152.4 / 91.2 | 904.9 / 35.6 | 121.7 / 92.4
BEHAVE[3] | 887.9 / 37.8 | 146.3 / 88.9 | 128.2 / 89.8
## Appendix B Details on Real-world Data Acquisition
In this section, we describe our system calibration (Section B.1) and time synchronization (Section B.2) in detail, and provide detailed information on the loss functions used for human mesh acquisition (Section B.3).
### B.1 System Calibration
Calibrating the Inertial-optical Mocap System. Three reflective markers are fixed at known positions on a calibration rod, from which the 12 high-speed motion capture cameras compute their relative extrinsic parameters and thus their spatial relationships. Additionally, three markers fixed at the world-coordinate origin are employed to align the motion capture coordinate system with the defined world coordinate system.
Calibrating Camera Intrinsics. The intrinsic parameters of the allocentric and egocentric cameras are calibrated using a chessboard pattern.
Calibrating Extrinsics of the Allocentric Cameras. We place ten markers in the camera view to localize each allocentric camera. By annotating the markers' 3D positions in the world coordinate system and their 2D pixel coordinates on the allocentric images, each camera's extrinsic parameters are estimated by solving a Perspective-n-Point (PnP) problem via OpenCV.
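The PnP step can be illustrated with a plain-numpy Direct Linear Transform. The paper uses OpenCV's solver; this sketch is an illustrative substitute assuming at least six non-degenerate markers and known intrinsics $K$:

```python
import numpy as np

def solve_extrinsics_dlt(pts3d, pts2d, K):
    """Estimate camera extrinsics [R|t] from >= 6 marker correspondences
    with the Direct Linear Transform: stack two linear constraints per
    marker, solve for the 3x4 projection matrix P, then factor out K."""
    n = pts3d.shape[0]
    A = np.zeros((2 * n, 12))
    for i, (X, x) in enumerate(zip(pts3d, pts2d)):
        Xh = np.append(X, 1.0)
        u, v = x
        A[2 * i, 0:4] = Xh            # u-coordinate constraint
        A[2 * i, 8:12] = -u * Xh
        A[2 * i + 1, 4:8] = Xh        # v-coordinate constraint
        A[2 * i + 1, 8:12] = -v * Xh
    # the smallest right singular vector of A gives P up to scale
    P = np.linalg.svd(A)[2][-1].reshape(3, 4)
    Rt = np.linalg.inv(K) @ P
    Rt /= np.linalg.norm(Rt[:, 0])    # columns of R must have unit norm
    if np.linalg.det(Rt[:, :3]) < 0:  # resolve the global sign ambiguity
        Rt = -Rt
    return Rt[:, :3], Rt[:, 3]
```

With noisy real annotations, one would refine this linear estimate (e.g. by reprojection-error minimization), which is what OpenCV's PnP routines do internally.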
Calibrating Extrinsics of the Egocentric Camera. We obtain the camera's pose by fixing it to the head tracker of the motion capture suit. Similarly, ten markers are used to calibrate the relative extrinsic parameters of the first-person camera, determining its position and orientation relative to the motion capture system. Additionally, to mitigate errors introduced by combining optical and inertial tracking, a purely optical tracking rigid body is mounted on the camera.
### B.2 Time Synchronization
To implement our synchronization method, we first set up a Network Time Protocol (NTP) server on the motion capture host. This server serves as the time reference for the Windows computer connected to the Kinect Azure DK cameras. Connecting the Windows computer to the NTP server in high-precision mode minimizes time discrepancies and achieves precise synchronization.
Additionally, we employ a Linear Timecode (LTC) generator to encode a time
signal onto the action camera’s audio track. This time signal serves as a
synchronization reference for aligning the first-person perspective RGB
information with the motion capture data.
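Once NTP and LTC timestamps are available, matching each camera frame to its nearest mocap frame can be sketched as follows. This is a hypothetical helper, assuming timestamps in seconds; `max_gap` is an assumed tolerance, not a value from the paper:

```python
import numpy as np

def align_frames(mocap_t, cam_t, max_gap=0.005):
    """For each camera frame timestamp (e.g. decoded from the LTC audio
    track), find the nearest mocap frame by binary search; pairs farther
    apart than `max_gap` seconds are dropped.
    Returns (mocap indices, camera indices) of the kept pairs."""
    idx = np.searchsorted(mocap_t, cam_t)
    idx = np.clip(idx, 1, len(mocap_t) - 1)
    left, right = mocap_t[idx - 1], mocap_t[idx]
    nearest = np.where(cam_t - left < right - cam_t, idx - 1, idx)
    gap = np.abs(mocap_t[nearest] - cam_t)
    keep = gap <= max_gap
    return nearest[keep], np.nonzero(keep)[0]
```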
### B.3 Loss Function Designs for Human Mesh Acquisition
To transfer the BVH[52] human skeleton to the widely used SMPL-X[61] model, we first optimize the body shape parameters $\beta\in\mathbb{R}^{10}$ to fit constraints on manually measured human skeleton lengths, and then optimize the full-body pose $\theta\in\mathbb{R}^{159}$ with the following loss function:
$\displaystyle\mathcal{L}=\mathcal{L}_{\text{reg}}+\mathcal{L}_{j\text{3D}}+\mathcal{L}_{j\text{Ori}}+\mathcal{L}_{\text{smooth}}+\mathcal{L}_{h\text{3D}}+\mathcal{L}_{h\text{Ori}}+\mathcal{L}_{\text{contact}}.$
(5)
Regularization Loss $\mathcal{L}_{\text{reg}}$. The regularization loss term
is defined as
$\displaystyle\mathcal{L}_{\text{reg}}=\sum\left|\left|\theta_{\text{body}}\right|\right|^{2}\cdot\lambda_{\text{body}}+\left(\sum\left|\left|\theta_{l\\_\text{hand}}\right|\right|^{2}+\sum\left|\left|\theta_{r\\_\text{hand}}\right|\right|^{2}\right)\cdot\lambda_{\text{hand}},$
(6)
where $\theta_{\text{body}}\in\mathbb{R}^{21\times 3}$ represents the body pose parameters defined by the 21 skeleton joints, and $\theta_{l\\_\text{hand}}\in\mathbb{R}^{12}$ and $\theta_{r\\_\text{hand}}\in\mathbb{R}^{12}$ represent the hand pose parameters. For each hand, the original SMPL-X skeleton has 15 joints with parameters $\theta_{\text{hand}}\in\mathbb{R}^{15\times 3}$; principal component analysis (PCA) projects these parameters into a lower-dimensional space, specifically $\mathbb{R}^{12}$. $\lambda_{\text{body}}=10^{-3}$ and $\lambda_{\text{hand}}=10^{-4}$ are weights controlling the regularization strength for the body and hand pose parameters, respectively. This loss keeps the results simple and prevents unnatural, large joint twists.
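A minimal sketch of Eq. (6), taking the 12-D PCA hand poses as inputs:

```python
import numpy as np

def reg_loss(theta_body, theta_lhand, theta_rhand,
             lam_body=1e-3, lam_hand=1e-4):
    """Regularization of Eq. (6): squared-norm penalty on the (21, 3)
    axis-angle body pose and on the two 12-D PCA hand pose vectors."""
    body = lam_body * np.sum(theta_body ** 2)
    hand = lam_hand * (np.sum(theta_lhand ** 2) + np.sum(theta_rhand ** 2))
    return float(body + hand)
```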
3D Position Loss $\mathcal{L}_{j\text{3D}}$ and $\mathcal{L}_{h\text{3D}}$.
The 3D position loss term is defined as
$\displaystyle\mathcal{L}_{\text{3D}}=\sum\left|\left|\textbf{T}_{\text{smplx}}-\textbf{T}_{\text{bvh}}\right|\right|^{2}\cdot\lambda_{\text{3D}},$
(7)
where $\textbf{T}_{\text{smplx}}\in\mathbb{R}^{3}$ represents the 3D global coordinates of a joint in the SMPL-X model and $\textbf{T}_{\text{bvh}}\in\mathbb{R}^{3}$ represents the corresponding 3D global coordinates of the joint in the BVH representation. $\mathcal{L}_{j\text{3D}}$ sums the 3D position loss over the 21 body joints, while $\mathcal{L}_{h\text{3D}}$ sums it over the 30 hand joints (15 per hand). The two terms have different weights, $\lambda_{j\text{3D}}=1.0$ and $\lambda_{h\text{3D}}=2.0$, respectively.
Orientation Loss $\mathcal{L}_{j\text{Ori}}$ and $\mathcal{L}_{h\text{Ori}}$.
The orientation loss term is defined as
$\displaystyle\mathcal{L}_{\text{Ori}}=\sum\left|\left|\textbf{R}_{\text{smplx}}-\textbf{R}_{\text{bvh}}\right|\right|^{2}\cdot\lambda_{\text{Ori}},$
(8)
which is analogous to $\mathcal{L}_{\text{3D}}$, except that $\textbf{R}_{\text{smplx}}\in\mathbb{R}^{3\times 3}$ and $\textbf{R}_{\text{bvh}}\in\mathbb{R}^{3\times 3}$ represent the rotation matrices of adjacent joints in the SMPL-X and corresponding BVH representations, respectively. Specifically, the body joints named head, spine, spine2, leftUpLeg, rightUpLeg, rightShoulder, leftShoulder, rightArm, leftArm, and neck are subjected to the orientation loss, ensuring that their rotations relative to adjacent joints stay close to the BVH ground truth. $\lambda_{\text{Ori}}$ is set to $0.2$.
Temporal Smoothness Loss $\mathcal{L}_{\text{smooth}}$. The temporal
smoothness loss term is defined as
$\displaystyle\mathcal{L}_{\text{smooth}}=\sum_{i=1}^{N}\left(\left|\left|\theta_{i}-\theta_{i-1}\right|\right|^{2}\right)\cdot\lambda_{\text{smooth}}$
(9)
where $\theta_{i}\in\mathbb{R}^{(21+30)\times 3}$ represents the body and hand pose of the $i$-th frame, and $\lambda_{\text{smooth}}$ is set to $20.0$.
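Eq. (9) reduces to a squared difference between consecutive frames; a minimal sketch:

```python
import numpy as np

def smoothness_loss(theta, lam=20.0):
    """Temporal smoothness of Eq. (9) over a pose sequence.
    theta: (N, 51, 3) axis-angle body+hand poses, one row per frame."""
    diff = theta[1:] - theta[:-1]        # frame-to-frame pose change
    return lam * float(np.sum(diff ** 2))
```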
Contact Loss $\mathcal{L}_{\text{contact}}$. The contact loss term is defined
as
$\displaystyle\mathcal{L}_{\text{contact}}=\sum\left(\left|\left|\textbf{T}_{\text{finger}}-\textbf{T}_{\text{obj}}\right|\right|^{2}\cdot\mathcal{J}(\textbf{T}_{\text{finger}},\textbf{T}_{\text{obj}})\right)\cdot\lambda_{\text{contact}}$
(10)
where $\textbf{T}_{\text{finger}}\in\mathbb{R}^{10\times 3}$ contains the
global coordinates of the ten fingertips, and
$\textbf{T}_{\text{obj}}\in\mathbb{R}^{10\times 3}$ contains the corresponding
global coordinates of the object points closest to each fingertip.
$\mathcal{J}(\textbf{T}_{\text{finger}},\textbf{T}_{\text{obj}})$ is 1 when
the distance between $\textbf{T}_{\text{finger}}$ and
$\textbf{T}_{\text{obj}}$ is less than a threshold, otherwise it is 0. And
$\lambda_{\text{contact}}$ is $2.0$.
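A minimal NumPy sketch of the contact term (Eq. 10). The threshold value is an assumption here (5 cm, borrowed from the contact-detection threshold in Appendix F.1); the paper does not state the value used in this loss:

```python
import numpy as np

def contact_loss(T_finger, T_obj, thresh=0.05, lam_contact=2.0):
    """Contact loss (Eq. 10): squared fingertip-to-object distances, counted
    only where the indicator J(., .) fires, i.e. where the distance is
    below `thresh` (assumed 5 cm).

    T_finger, T_obj: (10, 3) fingertip / closest-object-point coordinates.
    """
    d2 = np.sum((T_finger - T_obj) ** 2, axis=-1)   # squared distances
    indicator = np.sqrt(d2) < thresh                # J(T_finger, T_obj)
    return lam_contact * np.sum(d2 * indicator)
```

Fingertips far from the object contribute nothing, so the loss only pulls hands that are already near the surface into firm contact.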
## Appendix C Details on CORE4D-Synthetic Data Generation
In this section, we provide details on our synthetic data generation
(collaboration retargeting) method. Firstly, we clarify term definitions in
Section C.1. We then explicitly introduce the whole method pipeline in detail
in Section C.2. Finally, we provide implementation details in Sections C.3 and
C.4.
### C.1 Term Definitions
We provide definitions for the terms in our collaboration retargeting pipeline
as follows.
Contact Candidate: A contact candidate is a list of quadruples containing all
possible contact region indices (person1_leftHand, person1_rightHand,
person2_leftHand, person2_rightHand) on the source’s vertices. For each
source, we record the contact regions of the four hands in each frame of each
data sequence. At the beginning of the synthetic data generation pipeline, we
sample contact candidates from these records.
Contact Constraint: Given a contact candidate on the source, we apply
DeepSDF-based[60] contact retargeting to transfer the contact regions to the
target. These contact regions on the target are the contact constraints fed
into the contact-guided interaction retargeting module.
Source Interaction: During each collaboration retargeting process, we sample a
human-object-human collaborative motion sequence from CORE4D-Real as the
source interaction to guide the temporal collaboration pattern.
Interaction Candidate: Given $N$ sampled contact candidates, we apply contact-
guided interaction retargeting $N$ times and obtain $N$ human-object-human
motion outputs, dubbed interaction candidates. These motions are fed into the
human-centric contact selection module to assess their naturalness.
### C.2 Method Pipeline
The algorithm takes a source-target pair as input. First, we sample contact
candidates from the whole CORE4D-Real contact knowledge on source. For each
contact candidate, we apply object-centric contact retargeting to propagate
contact candidates to contact constraints on target. Sampling motion from
CORE4D-Real provides a high-level temporal collaboration pattern, and together
with augmented low-level spatial relations, we obtain interaction candidates
from the contact-guided interaction retargeting. Then, the human-centric
contact selection module selects the optimal candidates, prompting a contact
constraint update. After multiple iterations, the process yields augmented
interactions. This iterative mechanism ensures a refined augmentation of
interactions, enhancing the dataset’s applicability across various scenarios.
### C.3 Contact-guided Interaction Retargeting
The contact-guided interaction retargeting is a two-step optimization. We
start by optimizing the motion of target. Then with target contact
constraints, we optimize the poses of the two persons.
Object motion retargeting. We deliberately design temporal and spatial losses
to acquire consistent and smooth target motion. For efficiency,
we jointly optimize all frames in a single data sequence with $N$ frames. To
guarantee the fidelity of object motion, we design the fidelity loss $\mathcal{L}_{f}$
to restrict the rotation $R_{o,i}$ and the translation $T_{o,i}$ with the
ground-truth rotation $R^{\prime}_{o,i}$ and translation $T^{\prime}_{o,i}$ in
$N$ frames:
$\displaystyle\mathcal{L}_{f}=\lambda_{f}\sum\limits_{i}(||R^{\prime}_{o,i}-R_{o,i}||_{1}+||T^{\prime}_{o,i}-T_{o,i}||_{1}).$
(11)
We then restrict target’s spatial position to avoid penetration with the
ground. The spatial loss is defined as:
$\displaystyle\mathcal{L}_{\text{spat}}=\lambda_{\text{spat}}\sum\limits_{i}\text{max}(-\text{min}(\text{height}_{i}),0),$
(12)
where $\text{min}(\text{height}_{i})$ represents the lowest spatial position
of the object in frame $i$. A smoothness loss is designed to constrain the
object pose difference between consecutive frames:
$\displaystyle\mathcal{L}_{\text{smooth}}=\lambda_{\text{smooth}}\sum\limits_{i}a_{R_{o,i}}^{2}+a_{T_{o,i}}^{2},$
(13)
where $a$ is the acceleration of rotation and translation during $N$ frames
defined as:
$\displaystyle a_{R_{o,i}}=2R_{o,i}-R_{o,i-1}-R_{o,i+1},$
(14)
$\displaystyle a_{T_{o,i}}=2T_{o,i}-T_{o,i-1}-T_{o,i+1}.$
(15)
The total object motion retargeting problem is:
$\displaystyle
R_{o},T_{o}\longleftarrow{}\mathop{\operatorname{argmin}}\limits_{R_{o},T_{o}}(\mathcal{L}_{f}+\mathcal{L}_{\text{spat}}+\mathcal{L}_{\text{smooth}}).$
(16)
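The finite-difference acceleration of Equations 14-15 and the smoothness penalty of Equation 13 can be sketched as follows (NumPy for illustration; the actual optimization runs in PyTorch):

```python
import numpy as np

def acceleration(x):
    """Discrete acceleration a_i = 2*x_i - x_{i-1} - x_{i+1} (Eqs. 14-15),
    evaluated at the interior frames of an (N, D) trajectory."""
    return 2.0 * x[1:-1] - x[:-2] - x[2:]

def object_smoothness_loss(R_o, T_o, lam_smooth=1.0):
    """Object smoothness loss (Eq. 13): sum of squared rotation and
    translation accelerations over the sequence."""
    return lam_smooth * (np.sum(acceleration(R_o) ** 2)
                         + np.sum(acceleration(T_o) ** 2))
```

A constant-velocity trajectory has zero discrete acceleration, so only abrupt speed or direction changes are penalized.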
Human motion retargeting. We next optimize each person’s motion based on the
motion of target and the contact constraint. To acquire visually plausible
motion, we design the fidelity loss $\mathcal{L}_{j}$ and the smoothness loss
$\mathcal{L}_{\text{smooth}}$. Besides, we utilize the contact correctness
loss $\mathcal{L}_{c}$ to acquire contact consistency in target interaction
motion, and leverage a spatial loss $\mathcal{L}_{\text{spat}}$ similar to Equation 12 to
avoid human-ground inter-penetration.
To enhance motion fidelity, we define two loss functions
$\mathcal{L}_{\text{sr}}$ and $\mathcal{L}_{\text{wr}}$ and let
$\mathcal{L}_{j}=\mathcal{L}_{\text{sr}}+\mathcal{L}_{\text{wr}}$. For joints from the
human arms, despite following the correct temporal collaboration pattern,
their global positions vary with diverse object geometries.
Therefore, we utilize oriented vectors pointing to their parent body joints to
obtain a relative joint fidelity:
$\displaystyle\mathcal{L}_{\text{sr}}=\lambda_{\text{sr}}\sum\limits_{i}\sum\limits_{j\in\text{arm}}\|(P_{j,i}-P_{\text{parent}(j),i})-(P^{\prime}_{j,i}-P^{\prime}_{\text{parent}(j),i})\|_{2}^{2},$
(17)
where $P_{j,i}$ denotes the 3D global position of joint $j$ in frame $i$, and
$P^{\prime}$ denotes ground-truth values. $\mathcal{L}_{\text{wr}}$ denotes
constraints on the global positions of other joints:
$\displaystyle\mathcal{L}_{\text{wr}}=\lambda_{\text{wr}}\sum\limits_{i}\sum\limits_{j\notin\text{arm}}\|P_{j,i}-P^{\prime}_{j,i}\|_{2}^{2}.$
(18)
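A sketch of the two fidelity terms (Eqs. 17-18); the parent array and joint index sets below are illustrative placeholders, not the paper's actual skeleton definition:

```python
import numpy as np

def l_sr(P, P_gt, parent, arm_joints, lam_sr=0.1):
    """L_sr (Eq. 17): squared error of parent-relative offsets for arm
    joints.  P, P_gt: (N, J, 3) predicted / ground-truth joint positions;
    parent: length-J array of parent joint indices."""
    rel = P[:, arm_joints] - P[:, parent[arm_joints]]
    rel_gt = P_gt[:, arm_joints] - P_gt[:, parent[arm_joints]]
    return lam_sr * np.sum((rel - rel_gt) ** 2)

def l_wr(P, P_gt, non_arm_joints, lam_wr=0.003):
    """L_wr (Eq. 18): squared error of global positions for non-arm joints."""
    diff = P[:, non_arm_joints] - P_gt[:, non_arm_joints]
    return lam_wr * np.sum(diff ** 2)
```

Note that $\mathcal{L}_{\text{sr}}$ is invariant to translating an arm joint together with its parent, which is exactly what lets arm positions adapt to new object geometry.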
The design of the smoothness loss is similar to Equation 13, penalizing huge
acceleration of human SMPL-X parameters to avoid great motion differences
between frames:
$\displaystyle\mathcal{L}_{\text{smooth}}=\lambda_{\text{smooth}}\sum\limits_{i}\sum\limits_{j\in\\{1,2\\}}(a_{\theta_{j,i}})^{2}+(a_{T_{j,i}})^{2}+(a_{O_{j,i}})^{2}.$
(19)
To leverage contact constraints, we attract human hands to the corresponding
contact region on target. We select the positions of 20 fingertips of the two
persons in the $i$-th frame as
$\mathcal{H}_{i}=\{\bar{P}_{\text{tip},i}\}_{\text{tip}\in[1,20]}$, where
$\bar{P}$ are tip positions in the object’s coordinate system. The contact
vertices on the target from object-centric contact retargeting are defined as
$\mathcal{C}=\{\bar{P}^{\prime}_{\text{tip}}\}_{\text{tip}\in[1,20]}$. We
minimize the Chamfer Distance ($CD$) between $\mathcal{H}_{i}$ and
$\mathcal{C}$ to obtain contact consistency:
$\displaystyle\mathcal{L}_{c}=\lambda_{c}\sum\limits_{i}CD(\mathcal{H}_{i},\mathcal{C}).$
(20)
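The Chamfer-distance-based contact term (Eq. 20) can be sketched as below; the symmetric mean formulation of $CD$ is an assumption, since the exact Chamfer variant is not specified:

```python
import numpy as np

def chamfer_distance(A, B):
    """Symmetric Chamfer distance between point sets A (n, 3) and B (m, 3):
    mean squared nearest-neighbor distance in both directions (one common
    formulation; assumed here)."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)   # (n, m)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def contact_consistency_loss(H, C, lam_c=1000.0):
    """L_c (Eq. 20): Chamfer distance between per-frame fingertip positions
    H[i] (20, 3) and contact vertices C (20, 3), summed over frames."""
    return lam_c * sum(chamfer_distance(H_i, C) for H_i in H)
```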
The total human motion retargeting problem is:
$\displaystyle\theta_{1,2},T_{1,2},O_{1,2}\longleftarrow\mathop{\operatorname{argmin}}\limits_{\theta_{1,2},T_{1,2},O_{1,2}}(\mathcal{L}_{j}+\mathcal{L}_{c}+\mathcal{L}_{\text{spat}}+\mathcal{L}_{\text{smooth}}).$
(21)
In practice, we run 1,000 and 1,500 iterations respectively for object motion
retargeting and human motion retargeting. The whole pipeline is implemented in
PyTorch with the Adam solver. The learning rate is 0.01. In object motion
retargeting, $\lambda_{f}$ is 500 for rotation and 0.005 for translation,
$\lambda_{\text{spat}}=0.01$, $\lambda_{\text{smooth}}=1$. In human motion
retargeting, $\lambda_{\text{sr}}=0.1$, $\lambda_{\text{wr}}=0.003$,
$\lambda_{c}=1,000$, $\lambda_{\text{spat}}=0.01$, and
$\lambda_{\text{smooth}}=1$.
### C.4 Human-centric Contact Selection
The pairwise training dataset utilized for the human pose discriminator
training comprises 636,424 pairs of data. Each pair encompasses a positive
human pose $S_{\text{pos}}\in\mathbb{R}^{21\times 3}$ and a negative human
pose $S_{\text{neg}}\in\mathbb{R}^{21\times 3}$. The positive human pose is
sampled from CORE4D-Real. Conversely, the negative human pose is derived from
the corresponding positive sample by adding noise to its object pose and then
employing the original contact information to perform contact-guided
interaction retargeting. The discriminator is trained by minimizing the
ranking loss:
$\displaystyle\mathcal{L}_{\text{ranking}}=-\log(\sigma(R_{\text{pos}}-R_{\text{neg}}-m(S_{\text{pos}},S_{\text{neg}}))),$
(22)
iterating for 1,000 epochs with the Adam solver at a learning rate of 2e-4.
Specifically, the noise $\Delta(\alpha,\beta,\gamma,x,y,z)$ incorporates both
rotational and translational components. The rotational noise
$\Delta(\alpha,\beta,\gamma)$ ranges from 20 to 60 degrees, while the
translational noise $\Delta(x,y,z)$ falls within the range of 0.2 to 0.5
meters. The margin is computed by:
$\displaystyle
m(S_{\text{pos}},S_{\text{neg}})=(|\alpha|+|\beta|+|\gamma|)/10+(|x|+|y|+|z|)\times 10.$
(23)
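Equations 22-23 can be sketched directly; $R_{\text{pos}}$ and $R_{\text{neg}}$ are the discriminator scores of the positive and negative poses:

```python
import math

def margin(alpha, beta, gamma, x, y, z):
    """Ranking margin (Eq. 23): rotational noise (degrees) scaled down by 10,
    translational noise (meters) scaled up by 10."""
    return (abs(alpha) + abs(beta) + abs(gamma)) / 10 \
        + (abs(x) + abs(y) + abs(z)) * 10

def ranking_loss(r_pos, r_neg, m):
    """Margin ranking loss (Eq. 22): -log(sigmoid(r_pos - r_neg - m)).
    Near zero when the positive pose outscores the negative one by more
    than the margin."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_pos - r_neg - m))))
```

Larger injected noise yields a larger margin, so the discriminator is pushed to separate heavily corrupted poses more strongly than mildly corrupted ones.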
During the contact constraint update process, a penetration filtering step is
performed. For each frame, the penetration volume between the human and object
is calculated. If the penetration volume exceeds $10^{-4}$ cubic meters, it is
considered a penetration case. If more than 2.5% of frames within an
interaction candidate exhibit penetration, the entire candidate is discarded.
Among the remaining candidates, the one with the highest score from the human
pose discriminator is selected to proceed with the contact constraint update.
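The filtering and selection logic above can be sketched as follows (function and argument names are illustrative):

```python
def select_candidate(penetration_volumes, scores,
                     vol_thresh=1e-4, max_bad_ratio=0.025):
    """Penetration filtering + selection: discard any interaction candidate
    whose fraction of penetrating frames (volume > vol_thresh cubic meters)
    exceeds max_bad_ratio (2.5%), then return the index of the highest-
    scoring survivor, or None if every candidate is discarded.

    penetration_volumes: per-candidate lists of per-frame volumes.
    scores: per-candidate human pose discriminator scores.
    """
    best_idx, best_score = None, float("-inf")
    for idx, (vols, score) in enumerate(zip(penetration_volumes, scores)):
        bad_ratio = sum(v > vol_thresh for v in vols) / len(vols)
        if bad_ratio <= max_bad_ratio and score > best_score:
            best_idx, best_score = idx, score
    return best_idx
```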
Figure 5: Visualization of all participants in CORE4D-Real.
## Appendix D Dataset Statistics and Visualization
Table 7: Statistics on objects in CORE4D.

| Set | #Objects (Chair / Desk / Box / Board / Barrel / Stick) | #Sequences (Chair / Desk / Box / Board / Barrel / Stick) |
|---|---|---|
| Real | 5 / 6 / 9 / 5 / 9 / 4 | 157 / 213 / 200 / 128 / 206 / 58 |
| Synthetic | 418 / 408 / 376 / 589 / 602 / 596 | 1767 / 1344 / 1326 / 2123 / 1495 / 1961 |
Figure 6: t-SNE visualization of human poses for different collaboration
modes.
### D.1 Collaboration Modes
CORE4D encompasses five human-human cooperation modes in collaborative object
rearrangement. “Move1” refers to the scenario where two participants
simultaneously rearrange objects and both are aware of the target. On the
other hand, “move2” represents the scenario where objects are rearranged
simultaneously, but only Person 1 knows the target. “Pass” indicates that one
participant passes the object to another for relay transportation. “Join”
means that Person 2 joins Person 1 in carrying the object during
transportation. Lastly, “leave” signifies that Person 2 leaves during the
joint transportation with Person 1.
According to the different durations of the two participants’ contact with the
object, “move1” and “move2” can be combined into collaborative carrying tasks.
“Pass” represents the task of handover and solely moving the object.
Incorporating the join task and the leave task, CORE4D comprises four
different tasks in total (see Figure 4 in the main paper) based on the
interaction between humans and objects. Fig. 10 exemplifies the motions for
each task.
As depicted in Fig. 6, different cooperation modes exhibit distinct
characteristics in high-level movements, offering a novel standpoint for
understanding and investigating collaborative behaviors.
### D.2 Participants
As illustrated in Fig. 5, a total of 31 participants, encompassing variations
in height, weight, and gender, contributed to the capturing of CORE4D-Real.
### D.3 Objects
CORE4D-Real has 38 objects while CORE4D-Synthetic has about 3k objects. The
objects encompass six categories, namely box, board, barrel, stick, chair, and
desk, each exhibiting a rich diversity in surface shape and size. The
distribution of object categories is detailed in Table 7. All the objects in
CORE4D-Real are shown in Fig. 9. Fig. 8 shows samples from CORE4D-Synthetic
and their interpolation process.
### D.4 Camera Views
Fig. 7 shows the four allocentric and one egocentric views of our data
capturing system.
Figure 7: Visualization of CORE4D camera views. Figure 8: Visualization of
CORE4D-Synthetic objects and interpolation.
## Appendix E Details on Data Split
Benefiting from the diverse temporal collaboration patterns from CORE4D-Real
and the large data amount of CORE4D-Synthetic, we randomly select a subset of
real object models and construct the training set as the combination of their
real (T-Real) and synthesized (T-Synthetic) collaboration motion sequences. We
formulate two test sets on CORE4D-Real supporting studies of both non-
generalization and inner-category generalization. The first test set (S1)
consists of interaction performed on the objects that appear in the training
set, while the second one (S2) is composed of interaction from novel objects.
Detailed data distribution of each object category is shown in Table 8.
Table 8: Train-test split on CORE4D.

| Set | #Objects (Chair / Desk / Box / Board / Barrel / Stick) | #Sequences (Chair / Desk / Box / Board / Barrel / Stick) |
|---|---|---|
| T-Real | 3 / 4 / 6 / 3 / 6 / 2 | 93 / 104 / 96 / 51 / 113 / 25 |
| T-Synthetic | 418 / 408 / 376 / 589 / 602 / 596 | 1767 / 1344 / 1326 / 2123 / 1495 / 1961 |
| S1 | 3 / 4 / 6 / 3 / 6 / 2 | 40 / 62 / 45 / 21 / 51 / 6 |
| S2 | 2 / 2 / 3 / 2 / 3 / 2 | 24 / 47 / 59 / 56 / 42 / 27 |
## Appendix F Evaluation Metrics for Benchmarks
The code of our evaluation metrics is provided in Code Repository.
### F.1 Human-object Motion Forecasting
Evaluation metrics include the human joints position error $J_{e}$, the object
translation error $T_{e}$, the object rotation error $R_{e}$, the human-object
contact accuracy $C_{\text{acc}}$, and the penetration rate $P_{r}$.
* •
We define $J_{e}$ as the average Mean Per Joint Position Error (MPJPE) of the
two persons. MPJPE represents the mean per-joint position error between the
predicted human joint positions and the ground-truth values.
* •
Translation error ($T_{e}$) and rotation error ($R_{e}$) denote the average L2
difference between the predicted object translation vectors and the ground-
truth ones, and the average geodesic difference between the estimated object
rotation matrices and the ground-truth ones, respectively.
* •
Physical metrics: To assess contact fidelity, we detect contacts on the two
hands of the two persons for each frame with an empirically designed distance
threshold (5 centimeters). We then examine the contact accuracy ($C_{\text{acc}}$),
which indicates the average percentage of contact detection errors in the
predicted motions. Additionally, we examine the object penetration ratio
($P_{r}$) representing the mean percentage of object vertices inside the human
meshes.
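Minimal sketches of two of these metrics, MPJPE and the geodesic rotation error (NumPy, for illustration):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per Joint Position Error: mean Euclidean distance between
    predicted and ground-truth joint positions, shape (..., J, 3)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def rotation_geodesic_error(R_pred, R_gt):
    """Geodesic distance between two 3x3 rotation matrices, in radians:
    arccos((trace(R_pred^T R_gt) - 1) / 2)."""
    cos = (np.trace(R_pred.T @ R_gt) - 1.0) / 2.0
    return np.arccos(np.clip(cos, -1.0, 1.0))
```

The clipping guards against floating-point values of the cosine falling marginally outside $[-1, 1]$.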
### F.2 Interaction Synthesis
Following an existing individual human-object interaction synthesis study[40],
the evaluation metrics include the root-relative human joint position error
$RR.J_{e}$, the root-relative human vertex position error $RR.V_{e}$, the
human-object contact accuracy $C_{\text{acc}}$, and the FID score (FID).
* •
$RR.J_{e}$ denotes the average root-relative MPJPE of the two persons. The
root-relative MPJPE represents the mean per-joint position error of the
predicted human joint positions relative to the human root position and the
ground-truth values.
* •
$RR.V_{e}$ denotes the average root-relative Mean Per Vertex Position Error
(MPVPE) of the two persons. The root-relative MPVPE represents the mean per-
vertex position error of the predicted human vertex positions relative to the
human root position and the ground-truth values.
* •
$C_{\text{acc}}$ is the same as that in Section F.1.
* •
The Fréchet Inception Distance (FID) quantitatively evaluates the naturalness
of synthesized human motions. We first train a feature extractor on
CORE4D-Real to encode each human-object-human motion sequence to a 256D
feature vector $\bar{f}_{i}$ and acquire the ground-truth human motion feature
distribution $\bar{D}=\{\bar{f}_{i}\}$. We then replace the motions of the
two persons with synthesized ones and obtain another distribution
$D=\{f_{i}\}$. Eventually, the FID denotes the 2-Wasserstein distance
between $\bar{D}$ and $D$. Since CORE4D-Real provides action labels, the
feature extractor is supervised-trained by fulfilling the action recognition
task. The network structure of the feature extractor is a single-layer
Transformer[75]. We provide the code of the feature extractor and pre-trained
parameters in Code Repository.
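The 2-Wasserstein (Fréchet) distance between two Gaussian feature distributions has a closed form; the diagonal-covariance simplification below is an illustrative assumption (the full form uses the matrix square root of the covariance product):

```python
import numpy as np

def fid_diagonal(real_feats, gen_feats):
    """Frechet distance between Gaussians fitted to two (n, d) feature sets,
    simplified to diagonal covariances:
    ||mu_r - mu_g||^2 + sum(var_r + var_g - 2*sqrt(var_r * var_g))."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    v_r, v_g = real_feats.var(axis=0), gen_feats.var(axis=0)
    return float(np.sum((mu_r - mu_g) ** 2)
                 + np.sum(v_r + v_g - 2.0 * np.sqrt(v_r * v_g)))
```

Identical feature distributions give a distance of zero; shifting every feature by a constant adds the squared shift per dimension.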
## Appendix G Qualitative Results on Benchmarks
Figure 11 and Figure 12 exemplify generated motions for the human-object
motion forecasting task and the interaction synthesis task, respectively,
where “GT” denotes the ground truth motions, and others are method
predictions. Since the baseline methods do not focus on generating hand poses,
we replace hand poses in ground truth with flat hands to facilitate fair
comparisons. Although diverse cooperation modes can be generated, the
baseline methods consistently exhibit unsatisfactory behaviors, including
unnatural collaboration, inter-penetration, and unnatural contact.
## Appendix H Details on the Application of CORE4D-Synthetic
To evaluate the application of CORE4D-Synthetic, we use the lightweight
CAHMP[14] to conduct the motion forecasting experiments. Unlike the
human-object motion forecasting experiments in the main paper, where 15
frames are predicted, here we predict the human-object motion for the next
10 frames given the previous 10 frames.
### H.1 Task Formulation
Given the object’s 3D model and the human-object poses in the preceding 10
frames, the task is to predict their subsequent poses in the following 10 frames. The
human pose $P_{h}\in\mathbb{R}^{23\times 3}$ represents the joint rotations of
the SMPL-X[61] model, while the object pose
$P_{o}=\{R_{o}\in\mathbb{R}^{3},T_{o}\in\mathbb{R}^{3}\}$ denotes the 3D
orientation and 3D translation of the rigid object model.
### H.2 Evaluation Metrics
Following existing motion forecasting works[12, 78, 88], we evaluate the
human joint position error $J_{e}$, the object translation error $T_{e}$, and
the object rotation error $R_{e}$. Details of the three metrics can be found in Section
F.1.
### H.3 Results
Comparing the 1K real dataset with the 0.1K real dataset supplemented with
synthetic data generated through retargeting, we observed that the quality of
the synthetic data is comparable to the real data. Additionally, due to the
increased diversity of objects and enriched spatial relations between humans
and objects in the synthetic data, it exhibits better generalization
performance in object motion forecasting.
Comparing the evaluation results of the 1K real dataset with the results
obtained by augmenting it with additional 4K synthetic data, we observed a
significant performance gain from the synthetic data. This demonstrates that
the inclusion of synthetic data enhances the value of our dataset and better
supports downstream tasks.
Figure 9: Visualization of CORE4D-Real objects. Figure 10: Visualization of
CORE4D object rearrangement tasks. Figure 11: Qualitative results of human-
object motion forecasting. Grey meshes are from the task inputs. Figure 12:
Qualitative results of interaction synthesis.
## Appendix I CORE4D-Real Data Capturing Instructions and Costs
### I.1 Instructions.
Target. We divide a $4m\times 5m$ field into 20 squares and number them, and
place colored labels as markers along the perimeter of the field. The
following language instructs participants: "Please collaboratively move the
object to the target square. You can choose any path and orientation of the
object as you like. It is not necessary to be overly precise with the final
position - a rough placement is fine. Do not make unnatural motions just to
achieve an exact position. Do not use verbal communication with each other."
As for the settings when only one participant knows the target, the target
square number is written on a piece of paper and shown to the participant who
knows the target. Additional instructions are given as: "If you know the
target, do not use language or direct body language to inform the other party
(such as pointing out the location). If you do not know the target, please
assist the other participant in completing the transportation."
Collaboration Mode. The instructions are given as follows to indicate
different Collaboration Modes for the participants. For Collaborate mode:
"Based on the target, please cooperatively transport the object, or upright
any overturned tables, chairs, etc. Both participants should be in contact
with the object throughout the process." For Handover mode: "Please decide
the handover point yourselves, then have one person hand the object to the
other, completing the object transfer in relay." For Leave and Join modes:
"One person will transport the object throughout, while the other leaves or
joins to help at a time point not disclosed to the collaborator."
Obstacle. The instructions are given as follows to guide the participants in
tackling obstacles: "There are a varying number of obstacles on the field. If
they get in your way, please decide on your own how to solve it using some
common everyday operations. If the obstacles occupy the destination, please
place the object near the destination."
### I.2 Costs.
Scanning the object took 30 person-hours, modeling the object into the mocap
system took 27.5 person-hours, data capture took 78 person-hours, data
annotation took 7 person-hours, and the user study took 60 person-hours. The
wage is 100 RMB per person-hour.
## Appendix J Experiment Configurations and Codes
We evaluate existing methods for the two benchmarks on Ubuntu 20.04 with one
NVIDIA GeForce RTX 3090 GPU. The code of benchmarks and relevant methods are
provided in Code Repository. During the quantitative evaluation, we select
three random seeds (0, 42, 233) for each method, train the network
respectively, and then report the mean performances and standard deviations as
the evaluation results. More experimental details are provided in Code
Repository.
## Appendix K URLs of Dataset, Repository, Metadata, DOI, and License
* •
Dataset project page: https://core4d.github.io/.
* •
Data link: OneDrive Repository Link.
* •
Dataset usage instruction: https://github.com/leolyliu/CORE4D-Instructions.
* •
Code link: https://github.com/leolyliu/CORE4D-Instructions.
* •
Croissant metadata: Croissant Metadata Link.
* •
Schema.org metadata: https://core4d.github.io/.
* •
DOI: 10.5281/zenodo.11607666.
* •
License. This work is licensed under a CC BY 4.0 license.
## Appendix L Dataset Documentation and Intended Uses
We use the documentation framework from Gebru et al.[21].
### L.1 Motivation
* •
For what purpose was the dataset created? Was there a specific task in mind?
Was there a specific gap that needed to be filled? Please provide a
description.
The dataset was created to facilitate research studies in multi-person
collaboration for object rearrangement. The dataset can support various
research topics for understanding and synthesizing collaborative behaviors,
including human-object motion tracking, action recognition, human-object
motion forecasting, and collaboration synthesis.
* •
Who created the dataset (e.g., which team, research group) and on behalf of
which entity (e.g., company, institution, organization)?
The dataset was created by Chengwen Zhang from Beijing University of Posts and
Telecommunications, together with Yun Liu, Ruofan Xing, Bingda Tang, and Li Yi
from Tsinghua University.
* •
Who funded the creation of the dataset? If there is an associated grant,
please provide the name of the grantor and the grant name and number.
Funding was provided by the Institute for Interdisciplinary Information
Sciences at Tsinghua University.
* •
Any other comments?
None.
### L.2 Composition
* •
What do the instances that comprise the dataset represent (e.g., documents,
photos, people, countries)? Are there multiple types of instances (e.g.,
movies, users, and ratings; people and interactions between them; nodes and
edges)? Please provide a description.
The dataset comprises two parts: CORE4D-Real and CORE4D-Synthetic. The
CORE4D-Real includes 3D object models, human-object motions, allocentric RGBD
videos, egocentric RGB videos, human-object segmentations, camera parameters,
and action labels. The CORE4D-Synthetic includes 3D object models and human-
object motions. Please refer to the Dataset Documentation for explicit
definitions of these files.
* •
How many instances are there in total (of each type, if appropriate)?
CORE4D-Real includes 38 object models, 1.0K human-object motion sequences,
4.0K allocentric RGBD videos, 1.0K egocentric videos, 4.0K human-object
segmentations, and 1.0K action labels. CORE4D-Synthetic includes 3.0K object
models and 10K human-object motion sequences.
* •
Does the dataset contain all possible instances or is it a sample (not
necessarily random) of instances from a larger set? If the dataset is a
sample, then what is the larger set? Is the sample representative of the
larger set (e.g., geographic coverage)? If so, please describe how this
representativeness was validated/verified. If it is not representative of the
larger set, please describe why not (e.g., to cover a more diverse range of
instances, because instances were withheld or unavailable).
The dataset is a representative sample of all possible and infinitely many
multi-human collaborative behaviors for household object rearrangement. To
cover as diverse collaboration as possible, we collect five typical
collaboration modes in the CORE4D-Real, and enrich human-object spatial
relations greatly in the CORE4D-Synthetic. Each collaboration sequence is
complete.
* •
What data does each instance consist of? “Raw” data (e.g., unprocessed text or
images) or features? In either case, please provide a description.
For CORE4D-Real, each collaboration instance consists of SMPL-X[61] models of
the two persons in each frame, the 3D model of the manipulated object, the
object’s 6D pose in each frame, four allocentric RGBD videos with camera
intrinsics and extrinsics, one egocentric RGB video with its camera
intrinsics, four human-object segmentation sequences, and one action label. For
CORE4D-Synthetic, each collaboration instance consists of SMPL-X[61] models of
the two persons in each frame, the 3D model of the manipulated object, and the
object’s 6D pose in each frame. Details are provided in Dataset Documentation.
* •
Is there a label or target associated with each instance? If so, please
provide a description.
For CORE4D-Real, each collaboration instance is associated with an action
label. There is no label in CORE4D-Synthetic.
* •
Is any information missing from individual instances? If so, please provide a
description, explaining why this information is missing (e.g., because it was
unavailable). This does not include intentionally removed information, but
might include, e.g., redacted text.
No information is missing.
* •
Are relationships between individual instances made explicit (e.g., users’
movie ratings, social network links)? If so, please describe how these
relationships are made explicit.
Relationships between individual collaboration instances include the same
persons or the same objects. We provide an explicit SMPL-X[61] shape parameter
for each person, and an explicit name and 3D object model for each object.
* •
Are there recommended data splits (e.g., training, development/validation,
testing)? If so, please provide a description of these splits, explaining the
rationale behind them.
The recommended data split is provided in Code Repository.
* •
Are there any errors, sources of noise, or redundancies in the dataset? If so,
please provide a description.
Noise comes from the hardware noise of the inertial-optical mocap system, the
3D scanner, and the cameras.
* •
Is the dataset self-contained, or does it link to or otherwise rely on
external resources (e.g., websites, tweets, other datasets)? If it links to or
relies on external resources, a) are there guarantees that they will exist,
and remain constant, over time; b) are there official archival versions of the
complete dataset (i.e., including the external resources as they existed at
the time the dataset was created); c) are there any restrictions (e.g.,
licenses, fees) associated with any of the external resources that might apply
to a dataset consumer? Please provide descriptions of all external resources
and any restrictions associated with them, as well as links or other access
points, as appropriate.
The dataset is fully self-contained.
* •
Does the dataset contain data that might be considered confidential (e.g.,
data that is protected by legal privilege or by doctor-patient
confidentiality, data that includes the content of individuals’ nonpublic
communications)? If so, please provide a description.
No confidential data.
* •
Does the dataset contain data that, if viewed directly, might be offensive,
insulting, threatening, or might otherwise cause anxiety? If so, please
describe why.
No.
* •
Does the dataset identify any subpopulations (e.g., by age, gender)? If so,
please describe how these subpopulations are identified and provide a
description of their respective distributions within the dataset.
No.
* •
Is it possible to identify individuals (i.e., one or more natural persons),
either directly or indirectly (i.e., in combination with other data) from the
dataset? If so, please describe how.
No. We ensure participants’ anonymity by mosaicking their faces.
* •
Does the dataset contain data that might be considered sensitive in any way
(e.g., data that reveals race or ethnic origins, sexual orientations,
religious beliefs, political opinions or union memberships, or locations;
financial or health data; biometric or genetic data; forms of government
identification, such as social security numbers; criminal history)? If so,
please provide a description.
No.
* •
Any other comments?
None.
### L.3 Collection Process
* •
How was the data associated with each instance acquired? Was the data directly
observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey
responses), or indirectly inferred/derived from other data (e.g., part-of-
speech tags, model-based guesses for age or language)? If the data was
reported by subjects or indirectly inferred/derived from other data, was the
data validated/verified? If so, please describe how.
The data is acquired by the method described in Section 3 of the main paper.
The data quality is evaluated in Section A and Section 4.4 of the main paper.
* •
What mechanisms or procedures were used to collect the data (e.g., hardware
apparatuses or sensors, manual human curation, software programs, software
APIs)? How were these mechanisms or procedures validated?
The data is collected by the method described in Section 3 of the main paper.
The hardware qualities of the inertial-optical mocap system, the 3D scanner,
and the cameras are examined by their developers.
* •
If the dataset is a sample from a larger set, what was the sampling strategy
(e.g., deterministic, probabilistic with specific sampling probabilities)?
The dataset is a representative sample of the infinitely many possible
multi-human collaborative behaviors for household object rearrangement. To
cover collaborations as diverse as possible, we collect five typical
collaboration modes in CORE4D-Real, and greatly enrich human-object spatial
relations in CORE4D-Synthetic. The dataset is not a sample from a known
larger set, since each motion sequence is complete.
* •
Who was involved in the data collection process (e.g., students, crowdworkers,
contractors) and how were they compensated (e.g., how much were crowdworkers
paid)?
Students from universities participated in the data collection process and
were paid 100 RMB/hour. We thank them all.
* •
Over what timeframe was the data collected? Does this timeframe match the
creation timeframe of the data associated with the instances (e.g., recent
crawl of old news articles)? If not, please describe the timeframe in which
the data associated with the instances was created.
The data was created and collected between July 2023 and December 2023. The
creation time and the collection time of each data instance are the same.
* •
Were any ethical review processes conducted (e.g., by an institutional review
board)? If so, please provide a description of these review processes,
including the outcomes, as well as a link or other access point to any
supporting documentation.
No.
* •
Did you collect the data from the individuals in question directly, or obtain
it via third parties or other sources (e.g., websites)?
The data was collected directly from the individuals in question; it consists of their motions.
* •
Were the individuals in question notified about the data collection? If so,
please describe (or show with screenshots or other information) how notice was
provided, and provide a link or other access point to, or otherwise reproduce,
the exact language of the notification itself.
Yes. The language is: "As a data collector, you will be familiar with the
working principle and usage of the optical-inertial hybrid motion capture
system. You will personally wear the motion capture device to collect motion
data. All the data collected in this project will be used solely for research
purposes by the Yili Research Group at Tsinghua University’s Institute for
Interdisciplinary Information Sciences."
* •
Did the individuals in question consent to the collection and use of their
data? If so, please describe (or show with screenshots or other information)
how consent was requested and provided, and provide a link or other access
point to, or otherwise reproduce, the exact language to which the individuals
consented.
Yes. All the participants signed that: "I am aware of and agree that the
collected data will be used for research purposes related to the project and
may be released as a dataset."
* •
If consent was obtained, were the consenting individuals provided with a
mechanism to revoke their consent in the future or for certain uses? If so,
please provide a description, as well as a link or other access point to the
mechanism (if appropriate).
Yes. All participants were notified that they have the right to request the
removal of their data at any time in the future, provided they reimburse
their payment and compensate for the expenses incurred in collecting their
data.
* •
Has an analysis of the potential impact of the dataset and its use on data
subjects (e.g., a data protection impact analysis) been conducted? If so,
please provide a description of this analysis, including the outcomes, as well
as a link or other access point to any supporting documentation.
No.
* •
Any other comments?
None.
### L.4 Preprocessing/cleaning/labeling
* •
Was any preprocessing/cleaning/labeling of the data done (e.g., discretization
or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction,
removal of instances, processing of missing values)? If so, please provide a
description. If not, you may skip the remaining questions in this section.
No.
* •
Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data
(e.g., to support unanticipated future uses)? If so, please provide a link or
other access point to the “raw” data.
N/A
* •
Is the software that was used to preprocess/clean/label the data available? If
so, please provide a link or other access point.
N/A
* •
Any other comments?
None.
### L.5 Uses
* •
Has the dataset been used for any tasks already? If so, please provide a
description.
Currently, the dataset has been used to establish two benchmarks, human-object
motion forecasting and interaction synthesis, in Section 4 of the main paper.
* •
Is there a repository that links to any or all papers or systems that use the
dataset? If so, please provide a link or other access point.
Yes. Please refer to Relevant Works.
* •
What (other) tasks could the dataset be used for?
The dataset can support various research topics for understanding and
synthesizing collaborative behaviors, including human-object motion tracking,
action recognition, human-object motion forecasting, and collaboration
synthesis. Besides, the dataset can potentially be further used to study robot
policies for robot manipulations and human-robot collaborations.
* •
Is there anything about the composition of the dataset or the way it was
collected and preprocessed/cleaned/labeled that might impact future uses? For
example, is there anything that a dataset consumer might need to know to avoid
uses that could result in unfair treatment of individuals or groups (e.g.,
stereotyping, quality of service issues) or other risks or harms (e.g., legal
risks, financial harms)? If so, please provide a description. Is there
anything a dataset consumer could do to mitigate these risks or harms?
Unknown to the authors.
* •
Are there tasks for which the dataset should not be used? If so, please
provide a description.
Unknown to the authors.
* •
Any other comments?
None.
### L.6 Distribution
* •
Will the dataset be distributed to third parties outside of the entity (e.g.,
company, institution, organization) on behalf of which the dataset was
created? If so, please provide a description.
Yes. The dataset is fully released at Dataset Repository. The project page of
the dataset is CORE4D Project Page.
* •
How will the dataset be distributed (e.g., tarball on website, API,
GitHub)? Does the dataset have a digital object identifier (DOI)?
The dataset is distributed on Yun Liu’s OneDrive Cloud Storage: Dataset
Repository.
DOI: https://doi.org/10.5281/zenodo.11607666.
Project page: CORE4D Project Page.
* •
When will the dataset be distributed?
The dataset was released in June 2024.
* •
Will the dataset be distributed under a copyright or other intellectual
property (IP) license, and/or under applicable terms of use (ToU)? If so,
please describe this license and/or ToU, and provide a link or other access
point to, or otherwise reproduce, any relevant licensing terms or ToU, as well
as any fees associated with these restrictions.
The dataset is licensed under a CC BY 4.0 license.
* •
Have any third parties imposed IP-based or other restrictions on the data
associated with the instances? If so, please describe these restrictions, and
provide a link or other access point to, or otherwise reproduce, any relevant
licensing terms, as well as any fees associated with these restrictions.
No.
* •
Do any export controls or other regulatory restrictions apply to the dataset
or to individual instances? If so, please describe these restrictions, and
provide a link or other access point to, or otherwise reproduce, any
supporting documentation.
No.
* •
Any other comments?
None.
### L.7 Maintenance
* •
Who will be supporting/hosting/maintaining the dataset?
Chengwen Zhang is supporting/maintaining the dataset.
* •
How can the owner/curator/manager of the dataset be contacted (e.g., email
address)?
The curators of the dataset, Chengwen Zhang, Yun Liu, Ruofan Xing, Bingda
Tang, and Li Yi, can be contacted by email.
* •
Is there an erratum? If so, please provide a link or other access point.
No.
* •
Will the dataset be updated (e.g., to correct labeling errors, add new
instances, delete instances)? If so, please describe how often, by whom, and
how updates will be communicated to dataset consumers (e.g., mailing list,
GitHub)?
Dataset updates will be posted at Dataset Instructions and the CORE4D
Project Page.
* •
If the dataset relates to people, are there applicable limits on the retention
of the data associated with the instances (e.g., were the individuals in
question told that their data would be retained for a fixed period of time and
then deleted)? If so, please describe these limits and explain how they will
be enforced.
No.
* •
Will older versions of the dataset continue to be supported/hosted/maintained?
If so, please describe how. If not, please describe how its obsolescence will
be communicated to dataset consumers.
After a dataset update, older versions will be kept for consistency. These
notices will be posted at Dataset Instructions and CORE4D Project Page.
* •
If others want to extend/augment/build on/contribute to the dataset, is there
a mechanism for them to do so? If so, please provide a description. Will these
contributions be validated/verified? If so, please describe how. If not, why
not? Is there a process for communicating/distributing these contributions to
dataset consumers? If so, please provide a description.
Others are welcome to extend, augment, or build on the dataset, and should contact the original authors to do so.
* •
Any other comments?
None.
## Appendix M Author Statement
Author Responsibility Statement
This statement is intended to emphasize the author’s responsibility regarding
the dataset work, including ensuring compliance with all relevant laws,
regulations, and ethical guidelines. By participating in the dataset work, the
author agrees and commits to the following responsibilities:
* •
Legality: The authors guarantee that all data and materials used in the
dataset work are obtained and used legally. The authors will ensure compliance
with all applicable laws, regulations, and policies within their country,
region, or organization.
* •
Rights Protection: The authors will make every effort to protect the privacy
rights, intellectual property rights, and other legitimate interests of
individuals within the dataset. The authors will respect individuals’ privacy
and take appropriate measures to safeguard the security of personal
identifying information.
* •
Transparency: The authors will provide sufficient information and explanations
to enable users of the dataset to understand the sources, purposes, and
limitations of the data. The authors will strive to ensure that the use and
publication of the dataset are transparent and traceable.
* •
Compliance: The authors will ensure that the dataset work complies with all
applicable laws, regulations, and policies. In the event of any violation of
rights, the authors will bear full responsibility and be willing to accept the
corresponding legal consequences and liability for damages.
* •
Shared Responsibility: The authors require others who use the dataset to also
assume appropriate responsibilities and adhere to similar obligations and
guidelines to ensure the legal and responsible use of the dataset.
* •
License Confirmation: This work is licensed under a CC BY 4.0 license.
# FindFacts: A Scalable Theorem Search
Fabian Huch (Technische Universität München, QAware GmbH) and Alexander Krauss (QAware GmbH)
(May 2020)
###### Abstract
The Isabelle Archive of Formal Proofs (AFP) has grown to over 500 articles in
late 2019. Meanwhile, finding formalizations in it has not exactly become
easier.
At the time of writing, the site-specific AFP Google search
(https://www.isa-afp.org/search.html) and the Isabelle find_theorems and
find_consts commands (which only work on imported theories) are still the
only tools readily available to find formalizations in Isabelle.
We present _FindFacts_ (https://search.isabelle.in.tum.de), a novel domain-
specific search tool for formal Isabelle theory content. Instead of utilizing
term unification, we solve the problem with a classical drill-down search
engine. We put special emphasis on the scalability of the search system, so
that the whole AFP can be searched interactively.
## 1 Introduction
The AFP has grown to a magnificent store of knowledge, incorporating more than
500 articles from over 340 different authors, which, in total, consist of
nearly 2.5 million lines of Isabelle code [3]. Due to the sheer size of the
archive, finding specific formalizations becomes an increasingly challenging
task.
The current solution to this problem is a site-specific Google search, in
which the theory files of Isabelle and the AFP are indexed. However, this
generic search does not exploit much of the potential that lies in searching
in formal theories, since it does not utilize the semantic content at all.
Moreover, there are multiple practical issues: since every release is indexed,
results contain many near-duplicates, a problem that only gets worse as
more revisions are published. In addition, the order of the returned results
is not very meaningful, as the ranking is based on links on and between the
theory files.
The built-in Isabelle query tools on the other hand – find_theorems and
find_consts – operate directly on loaded theory content, but can only find
results of the currently active session. Thus, they are impractical for
searching the whole AFP, which is far too big to be loaded into memory. Also,
they require the user to have a good idea of the structure of entities they
are looking for – which is often not the case when looking for unknown
formalizations. We thus state the issue as follows:
Problem. Finding unknown formalizations in the AFP (or large sessions of
Isabelle) is hard to do within the system itself, and has no proper tool
support outside of it.
Solution. To alleviate this problem, we propose a domain-specific search
system that indexes theory code as well as semantic content, and allows
drilling down on large result sets by filtering for theory-specific
properties. Our solution focuses on scalability to large data sets, such that
the whole AFP (and potentially much more) can be searched with sub-second
response times. As a trade-off, we give up some of the more advanced
search capabilities of find_theorems, such as matching and unification. This
allows us to rely on widely-used open-source components for the actual
indexing and search functionality. Our implementation relies on Apache Solr
[2] for this purpose. While Solr is primarily text-based and does not
understand lambda calculus or term structure, we claim that search results are
helpful for many use cases.
Contribution. In this paper, we lay out the architecture of our novel
FindFacts tool, and explain in detail how its components function, as well as
how the application core can be re-used and integrated into other tools, for
instance as a standalone Isabelle tool.
Organization. Section 2 covers related tools. The FindFacts system is
explained in Section 3; in Section 4, we outline the backlog of future
changes and potential additions.
## 2 Related Work
The built-in commands find_theorems and find_consts find formal content in the
currently loaded context. Both support filtering by names, allowing wildcards.
find_theorems can search for terms matching a term pattern, and for theorems
that act as specific deduction rules on the current goal, for instance
elimination rules. Similarly, find_consts can search for a type pattern [9].
These tools bring up useful results when they are available in the current
context, but do not help much when the relevant material lives outside the
current scope, such as a separate AFP entry.
Coq provides a command Search that is comparable to find_theorems and has
similar limitations [7].
To allow searching for content that is not currently loaded, the verification
group at NICTA contributed a simple tool called _wwwfind_ to Isabelle 2009-1,
where users could run queries via a web interface, which were then executed by
find_theorems in an Isabelle instance with a pre-loaded image running on the
server. This approach definitely helped in some cases, but it was still
limited to a specific logic image and could not scale to the whole AFP. The
tool was removed again in Isabelle 2014 after being unmaintained for a while.
Another theorem search approach is the _MathWebSearch_ (MWS) system. Its main
purpose is to find results in mathematical documents; to do that, it indexes
formulas (in the _MathML_ format [5]) as well as text, and – in addition to
search by keywords – it allows searching for terms by unification, even for
large indexes [4]. However, it is not directly usable for Isabelle theories.
## 3 The FindFacts System
The FindFacts search is an external tool with a web interface rather than an
Isabelle command. There are multiple reasons for this: To begin, it does not
have a concept of a ‘currently loaded context’; rather, it operates on a pre-
built index all at once, making it possible to search in a far greater scope
than tools such as find_theorems can. Building this index is computationally
very expensive and thus has a very different life-cycle than an Isabelle
session. Another reason is decoupling: The search does not share much
functionality with Isabelle. Reducing coupling to the fast-changing Isabelle
codebase greatly decreases maintenance effort, and also minimizes external
dependencies of the Isabelle system.
To be able to index the whole AFP and also complete searches instantaneously,
the search system is separated into a data indexing batch process and a
persistent data querying/retrieving component.
### 3.1 Import Process
The import process is depicted in Figure 1. For the resource-intensive export
of semantic theory content, the existing Isabelle dump tool is used. It
handles partitioning the theorem graph of the AFP into processible chunks,
dynamically loading and unloading theories to minimize the re-building of base
sessions while limiting memory consumption. It yields theory entities as well
as the semantic markup of theory sources; further processing of these
artifacts is far less demanding.
Figure 1: Import architecture overview
We import the resulting artifacts in a second step; the dump provides an
appropriate interface between the export and the indexing re-import. Our
external Isabelle component dump_importer (with a corresponding command-line
wrapper) is responsible for the second step. Isabelle-specific parts of the
implementation reside in the importer-isabelle module; in an adapter pattern,
that module implements the TheoryView interface to expose theory elements. The
implementation is provided to the importer-base module, which is responsible
for processing the entities and creating the search index. The final
data model is explained in Subsection 3.3. We use Apache Solr [2] as a search
database. It is a scalable, high-performance in-memory document store that is
based on the Apache Lucene [1] full-text search engine. We provide an
abstraction over the search server (via SolrRepository interface), so that it
can interchangeably be run embedded, as a separate (remote) database, or in a
distributed cloud setup.
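The interchangeability of these deployment modes can be pictured with a minimal sketch. This is illustrative Python, not the tool's actual Scala interface; every name except SolrRepository is invented for the example.

```python
from abc import ABC, abstractmethod

# Illustrative sketch of the SolrRepository abstraction: the importer and
# search core program against one interface, so the backing store can be
# embedded, a remote Solr instance, or a distributed cloud setup.
class SolrRepository(ABC):
    @abstractmethod
    def add(self, doc: dict) -> None: ...
    @abstractmethod
    def query(self, q: dict) -> list[dict]: ...

class EmbeddedSolr(SolrRepository):
    """In-process stand-in: stores documents in a plain list."""
    def __init__(self) -> None:
        self._docs: list[dict] = []
    def add(self, doc: dict) -> None:
        self._docs.append(doc)
    def query(self, q: dict) -> list[dict]:
        # Return documents matching every requested field/value pair.
        return [d for d in self._docs
                if all(d.get(f) == v for f, v in q.items())]

repo: SolrRepository = EmbeddedSolr()
repo.add({"Id": "b1", "Command": "lemma"})
repo.add({"Id": "b2", "Command": "definition"})
print(repo.query({"Command": "lemma"}))  # -> [{'Id': 'b1', 'Command': 'lemma'}]
```

A remote or cloud-backed implementation would satisfy the same interface, which is what allows the importer and search application to stay agnostic of the deployment.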
### 3.2 Search Application
Once an index has been built, the search-core module can be used standalone to
execute queries via its Scala interface. The query specification is explained
in Subsection 3.4. As summarized in Figure 2, our web application located in
the search-webapp module provides a REST-based API for queries and serves the
front-end (in search-webapp-ui). The back-end is built with the Play
framework, and the front-end is a standalone Elm project.
Figure 2: Search application architecture; legend as in Figure 1
### 3.3 Data Model
The data model of the application is closely related to Isabelle command
spans. Source code is partitioned into blocks of text that correspond to
Isabelle commands; for each block, the semantic entities defined in that block
are grouped together. We currently differentiate between constants, facts and
types for the theory entities. When filtering for properties of theory
entities in a search, only blocks that define matching entities will be
returned.
Blocks and theory entities are individually stored as Solr documents. A Solr
document consists of values for a variable set of _fields_. The list of
available fields is shown in Table 1 – although more fields exist for
technical reasons, these are the fields relevant for querying. Relationships
between entities (such as usage in a proof, proposition or type) are also
stored in fields, as a list of IDs.
Field | Description
---|---
Id | Unique identifier of a code block
ChildId | Unique identifier of a theory entity
Command | Isabelle command
SourceCode | Source code, with HTML elements
SourceTheory | Session-qualified theory name
SourceTheoryFacet | Field for facet queries on SourceTheory
StartLine | Line at which code block starts
Kind | Theory entity kind: Constant, Fact, or Type
Name | Theory entity name
NameFacet | Field for facet queries on Name
ConstantType | Type of constants
ConstantTypeFacet | Field for facet queries on ConstantType
Uses | Relationship to other entities (as list of IDs)
Table 1: Fields to be used for querying
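To make the model concrete, here is a hypothetical pair of Solr documents, written as Python dicts. The field names come from Table 1, but all values are invented for illustration.

```python
# Illustrative (not actual indexed data): one code-block document and one
# theory-entity document, using the field names from Table 1.
block = {
    "Id": "block-42",
    "Command": "definition",
    "SourceCode": 'definition prime :: "nat => bool" where ...',
    "SourceTheory": "HOL-Computational_Algebra.Primes",  # invented example
    "StartLine": 17,
}
entity = {
    "ChildId": "const-prime",
    "Kind": "Constant",          # one of: Constant, Fact, Type
    "Name": "prime",
    "ConstantType": "Nat.nat => HOL.bool",
    "Uses": [],                  # relationships to other entities, as IDs
}
```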
### 3.4 Query Specification
A query restricts the result set by defining _filter_ s on fields (available
fields are listed in Table 1); the result is the intersection of all filters.
Filters include standard logical connectives and nested queries. They are
defined as follows:
Filter $\displaystyle:=$ Term(String) $\displaystyle|$ Exact(String)
$\displaystyle|$ InRange(Int, Int) $\displaystyle|$ Not(Filter)
$\displaystyle|$ And(List Filter) $\displaystyle|$ Or(List Filter)
$\displaystyle|$ InResult(Field, List (Field, Filter))
Filters operate on _Solr terms_ , which are generated from both query and
indexed values by transformations that are defined in the field
configurations. For instance, values for a text field might be split on
whitespace characters and transformed into lower case.
To start with, the ‘Term’ filter matches if at least one of the Solr terms of
its filter string is found in an indexed value; additionally, a ‘*’ wildcard
is allowed. Second, for an ‘Exact’ filter to match, its Solr terms must be a
subsequence of the indexed Solr terms. Some slack is allowed, though: as long
as all Solr terms are present and reasonably close to each other, the match
is considered a sequence. Next, the ‘InRange’ filter only works on numerical
fields, and is inclusive for both the start and end of the range. After that,
the Boolean filter connectives work as one would expect; lastly, the
‘InResult’ filter executes a sub-query, returning values of the specified
field from matching elements, and acts as a disjunction of ‘Term’ filters
over these results.
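A minimal sketch of this filter semantics, assuming a simplified in-memory representation where each field holds a list of lower-cased Solr terms. The constructor names mirror the grammar above, but the evaluator is illustrative only and is not the actual Solr-backed implementation (‘Exact’ and ‘InResult’ are omitted for brevity):

```python
from dataclasses import dataclass

@dataclass
class Term:
    value: str
@dataclass
class InRange:
    lo: int
    hi: int
@dataclass
class Not:
    inner: object
@dataclass
class And:
    filters: list

def matches(doc, fld, flt):
    """Evaluate one filter against one document field (illustrative only)."""
    if isinstance(flt, Term):
        # 'Term' matches if at least one of its Solr terms occurs in the field.
        return any(t in doc.get(fld, []) for t in flt.value.lower().split())
    if isinstance(flt, InRange):
        # 'InRange' is inclusive on both ends, numeric fields only.
        v = doc.get(fld)
        return v is not None and flt.lo <= v <= flt.hi
    if isinstance(flt, Not):
        return not matches(doc, fld, flt.inner)
    if isinstance(flt, And):
        return all(matches(doc, fld, f) for f in flt.filters)
    raise TypeError(f"unknown filter: {flt!r}")

doc = {"Name": ["prime", "def"], "StartLine": 17}
print(matches(doc, "Name", Term("prime factor")))  # True
print(matches(doc, "StartLine", InRange(1, 20)))   # True
```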
The term splitting and transformation of the different fields are defined as
part of the Solr configuration. For the SourceCode field, special characters
are stored in Unicode; a synonym mechanism allows searching by their Isabelle
representations, e.g. \<Longrightarrow> or ==> would both match
$\Longrightarrow$.
Besides retrieving a list of results, queries can also be used to retrieve
_facets_ , i.e. a list of distinct values for a field together with their
number of occurrences in the result set. Facets are a key factor in building
drill-down searches (and are extensively used in our user interface).
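As a sketch of what a facet is (in the real system, Solr computes facets server-side over the index; this only illustrates the shape of the result):

```python
from collections import Counter

# Sketch of facet computation: for a result set, count the distinct values
# of a field together with their number of occurrences.
def facet(results, fld):
    return Counter(doc[fld] for doc in results if fld in doc)

results = [{"Kind": "Constant"}, {"Kind": "Fact"}, {"Kind": "Fact"}]
print(facet(results, "Kind"))  # Counter({'Fact': 2, 'Constant': 1})
```

In the user interface, such a facet is rendered as a clickable list of values; selecting one adds the corresponding filter to the query.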
### 3.5 Search Example
To illustrate the search functionality, we portray a search for facts about
prime numbers in the following. This example can also be followed
interactively at the ‘example’ page on the FindFacts website.
First, the appropriate definition has to be found. Typing ‘prime’ into the
main search bar on the landing page of the web UI is an obvious starting
point, but yields more than 2000 results, which, at first glance, do not
all appear to be definitions. The facet on entity kind that has appeared
allows us to select constants, which reduces the result set to less than 150
results. However, from the first few hits, it is obvious that most results
only use primes, not define them. Since we are looking for some kind of
definition, only blocks containing semantic entities with ‘prime’ in their
name are relevant – which can be expressed by adding a filter on entity names.
A prime definition from ZF is the first hit from the remaining 60 results.
Assuming that we are looking for a HOL definition in our example, a filter on
the constant type would be useful. Notably, the drop-down menu on the constant
type filter makes it possible to scroll through the available options, if it
were not clear what the type should look like. However, in this example, ‘nat =>
bool’ is fairly obvious. Writing down the exact type (‘Nat.nat $\Rightarrow$
Hol.bool’) is not necessary as the type is looked up using full-text search,
not pattern unification. A total of 18 results remain, and a facet on the
Isabelle command appears with only a handful of options. Selecting all that
could be relevant (omitting locales and functions, for instance) leaves 8
results, of which 6 are in fact definitions of primes, in different theories.
Utilizing the ‘used by’ filter of those entities, and then filtering for facts
as well as theorem and lemma commands, is an appropriate solution for this
example.
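Schematically, the drill-down above amounts to successively adding (field, filter) restrictions. The shorthand below is hypothetical (the paper fixes no concrete query syntax), but the field names come from Table 1 and the result counts from the walkthrough above:

```python
# Hypothetical shorthand for the prime-number drill-down. The filter
# notation is illustrative only; the counts are those reported above.
steps = [
    ("SourceCode", {"Term": "prime"}),              # > 2000 results
    ("Kind", {"Exact": "Constant"}),                # < 150 results
    ("Name", {"Term": "prime"}),                    # ~60 results
    ("ConstantType", {"Term": "nat => bool"}),      # 18 results
    ("Command", {"Or": [{"Term": "definition"}]}),  # e.g.; 8 results remain
]
for fld, flt in steps:
    print(f"restrict {fld} by {flt}")
```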
## 4 Future Work
While we believe that the current solution is already quite useful, there are
several aspects which can be improved.
First of all, term matching by unification is not possible, yet would be quite
helpful in some situations. This could be done in multiple ways: One option is
to build a term index, following the example of [6]; the MathWebSearch system
could also be integrated directly (some work has already been done to import
Isabelle theories in MMT, the underlying logical framework of MWS [8]), but
this would come with significant complexity. An alternative and simpler option
could be to first apply other filters and heuristics, and then perform term
matching directly on the reduced result set.
Syntax highlighting in the search results is not yet supported. While this
would be easy to obtain in a running PIDE session, re-generating it from the
dumped markup artifacts is not straightforward. One could also attempt to
extract the syntax-highlighting from the Isabelle HTML export.
The ordering of results can still be improved. Currently, it only depends on
the Solr-internal ranking, which is based on how many terms match for a
‘Term’ filter, or how close the matching terms of an ‘Exact’ filter are.
Frequently, this rank is the same for lots of results, for example if the
search only consists of filters with single Solr terms. The order of results
then appears arbitrary. Although the extensive search facets make it possible
to further restrict to relevant results, a ranking that orders results based
on their importance would be beneficial. To measure importance, a graph
ranking model (e.g., Google PageRank) could be employed on the underlying
graph of theory entities.
In the short testing period prior to the writing of this paper, integration
into Isabelle has already been requested by multiple users. Using an
integrated search server, the search core can run completely standalone, but
building the whole index within an Isabelle session is far too resource-
intensive. A download mechanism for a pre-built index would be a feasible
solution. The active session could be indexed as well, though there is no
mechanism for incremental updates, so affected theories would need to be
completely re-indexed at every change.
## References
* [1] “Apache Lucene core”, 2020 Apache Software Foundation URL: https://lucene.apache.org/core
* [2] “Apache Solr reference guide”, 2019 Apache Software Foundation URL: https://lucene.apache.org/solr/guide/8_2
* [3] Manuel Eberl, Gerwin Klein, Tobias Nipkow, Larry Paulson and René Thiemann “Archive of formal proofs - Statistics” URL: https://www.isa-afp.org/statistics.html
* [4] Radu Hambasan, Michael Kohlhase and Corneliu-Claudiu Prodescu “MathWebSearch at NTCIR-11.” In _NTCIR_ , 2014 Citeseer
* [5] Patrick Ion, Robert Miner, Stephen Buswell and A Devitt “Mathematical Markup Language (MathML) 1.0 Specification” World Wide Web Consortium (W3C), 1998
* [6] Michael Kohlhase “MathWebSearch”, 2020 The KWARC Group URL: https://kwarc.info/systems/mws
* [7] “Vernacular commands - Search”, 2019 Inria/CNRS URL: https://coq.inria.fr/refman/proof-engine/vernacular-commands.html#coq:cmd.search
* [8] Makarius Wenzel “Isabelle/MMT: Export of Isabelle theories and import as OMDoc content”, 2018 URL: https://sketis.net/2018/isabelle-mmt-export-of-isabelle-theories-and-import-as-omdoc-content
* [9] Makarius Wenzel “The Isabelle/Isar Reference Manual”, 2019
case, he chooses a vertex $a$ in $G$. Let $A_{i}=A_{i-1}\cup\\{a\\}$ and
$B_{i}=B_{i-1}\cup\\{b\\}$. If no isomorphism $\pi_{i}:G[A_{i}]\rightarrow
H[B_{i}]$ extending $\pi_{i-1}$ exists such that $\pi_{i}(a)=b$, then Spoiler wins
the game. Otherwise, the game continues until $i=q$ (notice that $\pi_{i}$ is
uniquely determined by $\pi_{i-1}$ and the vertices $a$ and $b$). If $i$
reaches $q$ and $\pi_{q}$ is an isomorphism from $G[A_{q}]$ to $H[B_{q}]$ then
Duplicator wins the game.
If Duplicator has a winning strategy for the $q$-round game, then we write $G\equiv^{q}H$, and
we say that $G$ and $H$ are _$q$ -back-and-forth equivalent_. The meaning of
the $q$-back-and-forth equivalence is clarified by the following classical
result.
###### Theorem 43 ([Lib04, Theorem 3.5]).
Two graphs (and more generally two possibly infinite relational structures)
are $q$-back-and-forth equivalent if and only if they satisfy the same first
order sentences of quantifier rank $q$.
Let $\mathcal{U}=\\{M_{1},\dots,M_{p}\\}$ be a finite set of unary relations.
A _compressed_ $\mathcal{U}$-colored caterpillar is a pair $(P,f)$, where $P$
is a $\mathcal{U}$-colored path and
$f:V(P)\times\mathcal{P}([p])\rightarrow\mathbb{N}$. The $\mathcal{U}$-colored
caterpillar represented by $(P,f)$ is the caterpillar $\Upsilon(P,f)$ obtained
from $P$ by adding, at each vertex $v$ of $P$ and for each $I\subseteq[p]$, $f(v,I)$
vertices that belong exactly to the relations $M_{i}$ with $i\in I$ (see
Figure 16); these vertices are called the _$I$ -children_ of $v$ in
$\Upsilon(P,f)$. We further denote by $L_{P,f}(v,I)$ the set of all the
$I$-children of $v$ in $\Upsilon(P,f)$.
Figure 16: A compressed $\\{M_{1},M_{2}\\}$-colored caterpillar and the
caterpillar $G$ it represents. (White vertices are in no unary relations,
green ones are in $M_{1}$, red ones are in $M_{2}$ and bicolored ones are both
in $M_{1}$ and $M_{2}$.)
###### Fact 44.
Let $(P,f)$ be a compressed caterpillar, and let $H=\Upsilon(P,f)$. Then, for
every $v\in V(P)$ and every $I\subseteq[p]$, every permutation of
$L_{P,f}(v,I)$ defines an automorphism of $H$.
###### Corollary 45.
Let $\mathcal{U}=\\{M_{1},\dots,M_{p}\\}$ be a set of unary relations, let
$\mathsf{I}$ be an interpretation of graphs in $\mathcal{U}$-colored graphs,
let $(P,f)$ be a compressed caterpillar, and let $H=\Upsilon(P,f)$.
Then, for every $v\in V(P)$ and every $I\subseteq[p]$, the vertices of
$L_{P,f}(v,I)$ are clones in $\mathsf{I}(H)$. In particular, $L_{P,f}(v,I)$ is
either an independent set or a clique of $\mathsf{I}(H)$.
###### Proof 17.1.
According to Fact 44, every permutation of $L_{P,f}(v,I)$ defines an
automorphism of $H$, hence an automorphism of $\mathsf{I}(H)$.
Let $\mathcal{U}=\\{M_{1},\dots,M_{p}\\}$ be a set of unary relations.
###### Proposition 46.
Let $\Delta$ be an integer. The class of $\mathcal{U}$-colored caterpillars
with maximum degree $\Delta$ is a non-copying transduction of the class
$\mathscr{P}$ of paths.
###### Proof 17.2.
Let $\mathcal{U}^{\prime}=\mathcal{U}\cup\\{S,T\\}$, where $S$ and $T$ are two
unary symbols (not in $\mathcal{U}$). Let $G$ be a $\mathcal{U}$-colored
caterpillar, and let $(P,f)$ be a compressed caterpillar with
$\Upsilon(P,f)=G$. Let $v_{1},\dots,v_{n}$ be the vertices of $P$ (in order).
For each $i\in[n]$ we associate with $v_{i}$ a
$\mathcal{U}^{\prime}$-colored path $Q_{i}$ consisting of a vertex marked $S$,
a vertex marked the same way as $v_{i}$, for each $I\subseteq[p]$,
$f(v_{i},I)$ vertices marked by all the $M_{i}$ with $i\in I$, then a vertex
marked $T$. We consider the interpretation $\mathsf{I}=(\neg S(x)\vee\neg
T(x),\eta(x,y)$, where $\eta(x,y)$ expresses that $x$ and $y$ are at distance
at most $\Delta+2$, and
* •
either one of $x$ and $y$ is adjacent to a vertex marked $S$ and no vertex
between $x$ and $y$ is marked $S$,
* •
or both $x$ and $y$ are adjacent to a vertex marked $S$ and there is only one
vertex marked $S$ between $x$ and $y$.
Let $Q$ be the path obtained by concatenating the paths $Q_{1},\dots,Q_{n}$.
Then, $\mathsf{I}(Q)=G$ (cf Figure 17). It follows that the non-copying
transduction defined by $\mathsf{I}$ witnesses that the class of
$\mathcal{U}$-colored caterpillars with maximum degree $\Delta$ is a non-
copying transduction of $\mathscr{P}$.
Figure 17: Encoding a caterpillar with bounded degree in a path. Mark $S$ is
blue, mark $T$ is orange.
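The construction in the proof can be exercised on a small example. The sketch below (our own data layout; marks `'S'`, `'T'`, `'v'`, `'c'` stand for the $S$-mark, the $T$-mark, a spine vertex, and a leaf) builds the path $Q$ and then applies the interpretation to recover the caterpillar.

```python
def encode(blocks):
    """blocks[i] = number of leaves of spine vertex v_i.
    Each spine vertex becomes the block  S, v_i, <its leaves>, T  on Q."""
    Q = []
    for k in blocks:
        Q += ['S', 'v'] + ['c'] * k + ['T']
    return Q

def decode(Q, delta):
    """Keep vertices marked neither S nor T; join x < y when they are at
    distance at most delta + 2 and one of the two S-mark conditions holds."""
    def adj_s(i):
        return (i > 0 and Q[i - 1] == 'S') or (i + 1 < len(Q) and Q[i + 1] == 'S')
    dom = [i for i, m in enumerate(Q) if m not in ('S', 'T')]
    edges = set()
    for a in range(len(dom)):
        for b in range(a + 1, len(dom)):
            x, y = dom[a], dom[b]
            if y - x > delta + 2:
                continue
            s_between = sum(1 for z in range(x + 1, y) if Q[z] == 'S')
            if ((adj_s(x) or adj_s(y)) and s_between == 0) or (
                    adj_s(x) and adj_s(y) and s_between == 1):
                edges.add((x, y))
    return dom, edges

# spine v1-v2-v3 with 1, 2 and 0 leaves; maximum degree 4
Q = encode([1, 2, 0])
dom, edges = decode(Q, delta=4)
```

On this spine the decoded edges are exactly the two spine edges and the three pendant edges of the original caterpillar.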
Let $(P,f_{1})$ and $(P,f_{2})$ be two compressed caterpillars based on the
same $\mathcal{U}$-colored path such that
$\min(f_{1}(v,I),q)=\min(f_{2}(v,I),q)$ and $f_{1}(v,I)\leq f_{2}(v,I)$ for
all $v\in V(P)$ and all $I\subseteq[p]$, and let $H_{1}=\Upsilon(P,f_{1})$ and
$H_{2}=\Upsilon(P,f_{2})$, where $H_{1}$ is naturally identified with an
induced subgraph of $H_{2}$.
###### Lemma 47.
For every formula $\varphi(x,y)$ with quantifier rank at most $q-2$ and for
every two vertices $u,v$ in $H_{1}$ we have
$H_{1}\models\varphi(u,v)\qquad\iff\qquad H_{2}\models\varphi(u,v).$
###### Proof 17.3.
This follows from a standard argument based on an Ehrenfeucht-Fraïssé game.
###### Corollary 48.
For every interpretation $\mathsf{I}=(M_{1}(x),\eta(x,y))$ where $\eta$ has
quantifier rank at most $q-2$, the graph $\mathsf{I}(H_{1})$ is an induced
subgraph of $\mathsf{I}(H_{2})$.
###### Lemma 49.
Let $\mathcal{U}=\\{M_{1},\dots,M_{p}\\}$ be a set of unary relations, let
$\mathsf{I}=(M_{1}(x),\eta(x,y))$ be an interpretation of graphs in
$\mathcal{U}$-colored graphs, let $q$ be at least the quantifier rank of
$\eta$ plus two, and let $G=\Upsilon(P,f)$. Define the function
$\widehat{f}:V(P)\times\mathcal{P}([p])\rightarrow\\{0,\dots,q\\}$ by
$\widehat{f}(v,I)=\min(f(v,I),q)$, and let
$\widehat{G}=\Upsilon(P,\widehat{f})$.
Then, the graph $\mathsf{I}(G)$ is obtained from $\mathsf{I}(\widehat{G})$ as
follows: for each $v\in V(P)$ and each $I\subseteq[p]$ with $1\in I$ and
$f(v,I)>q$, select arbitrarily a vertex $\ell(v,I)\in L_{P,\widehat{f}}(v,I)$
and blow the vertex $\ell(v,I)$ in $\mathsf{I}(\widehat{G})$ into a clique or
an independent set of size $f(v,I)-\widehat{f}(v,I)+1$ (depending on whether
$L_{P,\widehat{f}}(v,I)$ is, in $\mathsf{I}(\widehat{G})$, a clique or an
independent set).
###### Proof 17.4.
We prove the statement of the lemma by induction over
$W(f):=\sum_{v\in V(P)}\sum_{I\subseteq[p]}|f(v,I)-\widehat{f}(v,I)|.$
If $W(f)=0$, then $f=\widehat{f}$, $\widehat{G}=G$ and
$\mathsf{I}(\widehat{G})=\mathsf{I}(G)$. Hence, the base case obviously holds.
Assume that the statement holds for all $f$ with $W(f)=k\geq 0$ and let $f$ be
such that $W(f)=k+1$. Then, there exist a vertex $v_{0}\in V(P)$ and a subset
$I_{0}\subseteq[p]$ with $f(v_{0},I_{0})>q$. Let
$f^{\prime}(v,I)=f(v,I)-\delta_{v,v_{0}}\cdot\delta_{I,I_{0}}$, where $\delta$
is Kronecker’s symbol. Note that $G^{\prime}=\Upsilon(P,f^{\prime})$ is the
$\mathcal{U}$-colored caterpillar obtained from $G$ by deleting an
$I_{0}$-child $w_{0}$ of $v_{0}$. If $1\notin I_{0}$, it directly follows from
Corollary 48 that $\mathsf{I}(G^{\prime})=\mathsf{I}(G)$. Thus, the statement
of the lemma holds by the induction hypothesis. Otherwise, by Corollary 48, we
have $\mathsf{I}(G^{\prime})=\mathsf{I}(G)-w_{0}$. As the set of
$I_{0}$-children of $v_{0}$ in $G^{\prime}$ has size at least two and its
elements are clones in $\mathsf{I}(G^{\prime})$, there is no choice about how
to define the adjacencies of $w_{0}$, and the statement of the lemma holds.
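The blow-up operation invoked in the lemma can be sketched as follows (a minimal standalone rendering with our own names; `edges` is a set of two-element frozensets): the chosen vertex is replaced by $k$ copies with the same external neighborhood, pairwise adjacent when blowing into a clique and pairwise non-adjacent otherwise.

```python
def blow(edges, v, k, as_clique):
    """Replace v by copies (v, 0), ..., (v, k-1), all inheriting v's
    neighborhood; edges is a set of two-element frozensets."""
    copies = [(v, i) for i in range(k)]
    new = set()
    for e in edges:
        if v in e:
            (w,) = e - {v}                       # the other endpoint
            new.update(frozenset({c, w}) for c in copies)
        else:
            new.add(e)
    if as_clique:                                # add the internal edges
        new.update(frozenset({c, d})
                   for i, c in enumerate(copies) for d in copies[i + 1:])
    return new

triangle = {frozenset(p) for p in [('a', 'b'), ('b', 'c'), ('a', 'c')]}
indep = blow(triangle, 'a', 3, as_clique=False)
cliq = blow(triangle, 'a', 3, as_clique=True)
```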
###### Theorem 50.
Let $\mathscr{C}$ be a (non-copying) transduction of the class of all
caterpillars. If the class $\mathscr{C}$ has bounded maximum degree, then
$\mathscr{C}$ is a transduction of the class of all paths. Otherwise, there
exists a class $\mathscr{C}^{\prime}$ with bounded degree that is a
transduction of the class of all caterpillars, such that the graphs in
$\mathscr{C}$ are obtained from graphs in $\mathscr{C}^{\prime}$ by blowing
each vertex either into an independent set or into a complete graph (of
arbitrary size).
###### Proof 17.5.
Let $\mathscr{C}$ be a $\mathsf{T}$-transduction of the hereditary closure
$\mathscr{D}$ of the class of all caterpillars (as the hereditary closure of a
class is non-copying transduction equivalent to it, we can consider the
hereditary closure of the class of all the caterpillars instead of the class
of all the caterpillars), where $\mathsf{T}$ is non-copying. Let $\mathsf{I}$
be the transduction defining $\mathsf{T}$. According to Lemma 6, we may assume
that $\mathsf{I}=(M_{1}(x),\eta(x,y))$, where $M_{1}$ is a unary relation and
$\eta$ is a local formula. Let $q$ be the quantifier-rank of $\eta$. Let
$\mathscr{C}^{\prime}$ be the class containing, for each graph
$G\in\mathscr{C}$, the induced subgraph of $G$ obtained by deleting all the
isolated vertices. Note that
$\mathscr{C}^{\prime}\subseteq\mathsf{T}(\mathscr{D})$.
Assume that $\mathscr{C}$ has bounded degree, and let $\Delta$ be the maximum
degree of the graphs in $\mathscr{C}$. Let $G\in\mathscr{C}^{\prime}$ and let
$H$ be a graph with a minimum number of vertices such that $H\in\mathscr{D}$
and $G\in\mathsf{T}(H)$. As $G\in\mathsf{T}(H)$, there exists a unary
expansion $H^{+}$ of $H$ with $G=\mathsf{I}(H^{+})$. Let $(P,f)$ be a
compressed $\mathcal{U}$-colored caterpillar representing $H^{+}$, let
$\widehat{f}$ be defined from $f$ as in Lemma 49 and let $\widehat{H}^{+}$ be
the $\mathcal{U}$-colored caterpillar represented by $(P,\widehat{f})$.
According to Lemma 49, the graph $G=\mathsf{I}(H^{+})$ contains a set $X$ of
clones of size $\max_{v\in V(P)}\max_{1\in
I\subseteq[p]}f(v,I)$. As $G$ contains no isolated vertex, any neighbor of
some $v\in X$ has degree at least $|X|-1$. Hence, $\max_{v\in V(P)}\max_{1\in
I\subseteq[p]}f(v,I)\leq\Delta+1$. It follows that if we choose
$q\geq\Delta+1$, we have $\mathsf{I}(\widehat{H}^{+})=\mathsf{I}(H^{+})=G$.
Note that $\widehat{H}^{+}$ is a $\mathcal{U}$-colored caterpillar with
maximum degree at most $\Delta^{\prime}=2^{p}(\Delta+1)+2$. It follows that
$\mathscr{C}^{\prime}$ is a non-copying transduction of the class
$\mathscr{D}^{\prime}$ of all the caterpillars with maximum degree
$\Delta^{\prime}$. Hence, according to Proposition 46, the class
$\mathscr{D}^{\prime}$ is a non-copying transduction of the class
$\mathscr{P}$ of all paths. As $\mathscr{E}$ is also a non-copying
transduction of $\mathscr{P}$ (consider the transduction defined by the
interpretation $(\top,\bot)$ that deletes all the edges) and as
$\mathscr{D}\subseteq\mathscr{D}^{\prime}+\mathscr{E}$ (every graph in
$\mathscr{D}$ is the disjoint union of a graph in $\mathscr{D}^{\prime}$ and
isolated vertices), we deduce from Fact 25 that $\mathscr{D}$ is a non-copying
transduction of $\mathscr{P}$.
We now consider the case where $\mathscr{C}$ does not have bounded degree.
Then, the conclusion directly follows from Lemma 49 where we choose $q$ to be
the quantifier rank of $\eta$ plus two.
### 18 A distributive lattice among some low classes
Recall that a lattice is a partially ordered set in which every pair of
elements has a unique supremum (also called a least upper bound or join) and a
unique infimum (also called a greatest lower bound or meet). We shall see in
Theorem 73 that the transduction quasi-order is not a lattice. However, we now
give some non-trivial examples of the existence of the greatest lower bound of
two incomparable classes.
The class of all cubic trees is the greatest lower bound of the class of all
trees and of the class of all cubic graphs.
###### Proof 18.1.
That the class of all cubic trees is a lower bound of the class of all trees
and the class of all cubic graphs is obvious, as it is included in both
classes.
Let $\mathscr{C}$ be a lower bound of the class of all trees and the class of
all cubic graphs. Being a transduction of the class of all cubic graphs,
$\mathscr{C}$ is a perturbation of a class $\mathscr{D}$ with bounded degree
(by Proposition 40). Trees have cliquewidth $3$ and the property of a class of
having bounded cliquewidth is preserved by transduction (see [Col07], for
instance). Thus, being a transduction of the class of all trees, the class
$\mathscr{D}$ has bounded cliquewidth. As the class $\mathscr{D}$ is also
weakly sparse, it has bounded treewidth [GW00a]. Thus, $\mathscr{C}$ is a
perturbation of a class with both bounded degree and bounded treewidth. Hence,
according to Proposition 42, $\mathscr{C}$ is a transduction of the class of
all cubic trees.
Also, we have: The class of all paths is the greatest lower bound of the class
of all caterpillars and of the class of all cubic graphs.
###### Proof 18.2.
That the class of all paths is a lower bound of the class of all caterpillars
and of the class of all cubic graphs is obvious, as it is included in both
classes.
Let $\mathscr{C}$ be a lower bound of the class of all caterpillars and of the
class of all cubic graphs. Being a transduction of the class of all cubic
graphs, $\mathscr{C}$ is a perturbation of a class $\mathscr{D}$ with bounded
degree (by Proposition 40). Note that it follows that $\mathscr{D}$ is a
perturbation of $\mathscr{C}$. As $\mathscr{C}$ is a transduction of the class
of all caterpillars, so is $\mathscr{D}$. According to Theorem 50,
$\mathscr{D}$ (and hence $\mathscr{C}$) is a non-copying transduction of the
class of all paths.
Note that in the above, the class of cubic graphs can be replaced by any class
of bounded degree transducing all paths, for instance the class of all cubic
trees or the class of all grids.
The lattice structure shown in Figure 18 is effectively present in the quasi-
order, in the sense that the meets and joins shown in the figure are actually
meets and joins in the general quasi-order.
Figure 18: The lattice structure induced by the class of all trees, the class
of all caterpillars (i.e. the class of all connected graphs with pathwidth $1$
[PT99]), the class of all star forests, the class of edgeless graphs, the
class of all paths, the class of all cubic trees, and the class of all cubic
graphs.
## Part IV Applications
Our applications are divided into three parts. In structural graph theory, it
is common to weaken properties by not insisting that they hold globally but
only at every local scale, as in [Gro03]. As might be expected, this interacts
well with the locality of first-order logic, and we give some applications in
the first section. The next section is concerned with the minimum and maximum
classes for properties $\Pi$ of the transduction quasi-order, which serve as
canonical obstructions for $\Pi$ or capture all the complexity of $\Pi$,
respectively. This leads to transduction dualities, which are a framework for
dichotomy statements akin to the Grid Minor Theorem [RS86] stating that a
class has bounded treewidth if and only if it does not contain arbitrarily
large grids as minors (and which can be seen as an MSO-transduction duality
for weakly sparse classes between the class of trees and the class of grids).
Finally, we turn to dense analogues, which use the transduction quasi-order
to generalize various properties of weakly sparse classes to general
hereditary classes.
### 19 Local properties
We begin with yet another application of the local normal form, showing that
all transductions of certain graph classes that are locally well-behaved can
be obtained by perturbations of such classes. For a transduction downset $\Pi$
(a class property $\Pi$ is a _transduction downset_ if it is preserved by
transductions), the set of classes of graphs that are _locally_ $\Pi$, denoted
$\mbox{\sf loc-}\Pi$, is the set of all classes $\mathscr{C}$ such that for
every integer $d$, the class $\mathcal{B}_{d}^{\mathscr{C}}$ of all balls of
radius $d$ of graphs in $\mathscr{C}$ belongs to $\Pi$, and we denote by
$\overline{\mbox{\sf loc-}\Pi}$ the set of all the classes that are
perturbations of a class in $\mbox{\sf loc-}\Pi$.
###### Theorem 51.
Let $\Pi$ be a transduction downset. Then $\overline{\mbox{\sf loc-}\Pi}$ is a
transduction downset.
###### Proof 19.1.
Let $\mathscr{C}\in\overline{\mbox{\sf loc-}\Pi}$ and let $\mathsf{T}$ be a
transduction. By definition, there exists a perturbation $\mathsf{P}_{0}$ and
a class $\mathscr{C}^{\prime}\in\mbox{\sf loc-}\Pi$ with
$\mathscr{C}\subseteq\mathsf{P}_{0}(\mathscr{C}^{\prime})$. According to
Theorem 8, the transduction $\mathsf{T}\circ\mathsf{P}_{0}$ is subsumed by the
composition $\mathsf{P}\circ\mathsf{T}_{\rm imm}\circ\mathsf{C}$ of a copying
transduction, an immersive transduction, and a perturbation. Consider
$H\in\mathsf{T}_{\rm imm}\circ\mathsf{C}(G)$, for $G\in\mathscr{C}^{\prime}$.
For every integer $d$, there exists an integer $d^{\prime}$ such that for
every vertex $v\in V(H)$ there is a vertex $v^{\prime}\in V(G)$ (the
projection of $v$) with the property that $B_{d}(H,v)$ is an induced subgraph
of $\mathsf{T}_{\rm imm}\circ\mathsf{C}(B_{d^{\prime}}(G,v^{\prime}))$, thus
in a fixed transduction of $B_{d^{\prime}}(G,v^{\prime})$. As
$\mathscr{C}^{\prime}$ is locally $\Pi$, the class
$\mathscr{C}_{d^{\prime}}^{\prime}=\\{B_{d^{\prime}}(G,v^{\prime})\colon
G\in\mathscr{C}^{\prime},v^{\prime}\in V(G)\\}$ belongs to $\Pi$. As $\Pi$ is
a transduction downset, the class $\\{B_{d}(H,v)\colon H\in\mathsf{T}_{\rm
imm}\circ\mathsf{C}(\mathscr{C}^{\prime})\\}$ is also in $\Pi$. As this holds
for every integer $d$, $\mathsf{T}_{\rm
imm}\circ\mathsf{C}(\mathscr{C}^{\prime})\in\mbox{\sf loc-}\Pi$. Hence,
$\mathsf{T}(\mathscr{C})\in\overline{\mbox{\sf loc-}\Pi}$.
Since having bounded linear cliquewidth, having bounded cliquewidth, and
having bounded twin-width are transduction downsets (see [Col07, BKTW20]), we
have the following corollaries.
###### Corollary 52.
Every transduction of (a perturbation of) a class with locally bounded linear
cliquewidth is a perturbation of a class with locally bounded linear
cliquewidth.
###### Corollary 53.
Every transduction of (a perturbation of) a class with locally bounded
cliquewidth is a perturbation of a class with locally bounded cliquewidth.
###### Corollary 54.
Every transduction of (a perturbation of) a class with locally bounded twin-
width is a perturbation of a class with locally bounded twin-width.
Assume that a transduction downset $\Pi$ is included in a set of classes
$\Pi^{\prime}$. Then it directly follows from Theorem 51 that every
transduction of a class that is locally $\Pi$ is a perturbation of a class
that is locally $\Pi^{\prime}$. Recall that a class is _$\chi$ -bounded_ if
there is a function $f$ such that every graph $G\in\mathscr{C}$ with clique
number $\omega$ has chromatic number at most $f(\omega)$. It is known that
every transduction of a class with bounded expansion is $\chi$-bounded
[NOdMP+21, Corollary 4.1]. Hence, we have
###### Corollary 55.
Every transduction of a locally bounded expansion class is a perturbation of a
locally $\chi$-bounded class.
We now turn to the local versions of some of the most important
model-theoretic properties in the quasi-order, namely (monadic) stability and
(monadic) dependence. Recall (Section 2.3) that dependence and stability
correspond to the impossibility of defining arbitrarily large _power set
graphs_ (that is, bipartite graphs with vertex set $[n]\cup 2^{[n]}$ where $i$
is adjacent to $I$ whenever $i\in I$) and arbitrarily large half-graphs,
respectively. Further model-theoretic dividing lines may be derived by
considering monadic expansions, i.e. expansions by unary predicates. We say
that a class $\mathscr{C}^{+}$ of colored graphs is a _monadic expansion_ of a
class $\mathscr{C}$ of graphs if each colored graph in $\mathscr{C}^{+}$ is a
monadic expansion of some graph in $\mathscr{C}$.
* •
A class $\mathscr{C}$ of graphs is _monadically dependent_ if every monadic
expansion of $\mathscr{C}$ is dependent;
* •
a class $\mathscr{C}$ of graphs is _monadically stable_ if every monadic
expansion of $\mathscr{C}$ is stable.
These dividing lines are characterized by the impossibility of transducing
certain classes.
###### Theorem 56 ([BS85]).
A class $\mathscr{C}$ is _monadically dependent_ (or _monadically NIP_) if the
class of all graphs is not a transduction of $\mathscr{C}$; it is _monadically
stable_ if the class of all half-graphs is not a transduction of
$\mathscr{C}$.
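For concreteness, the two witness families can be generated directly (a sketch with our own vertex labels):

```python
from itertools import combinations

def power_set_graph(n):
    """Bipartite graph on [n] and 2^[n]; i is adjacent to I iff i in I."""
    subsets = [frozenset(c) for r in range(n + 1)
               for c in combinations(range(1, n + 1), r)]
    return {(i, I) for i in range(1, n + 1) for I in subsets if i in I}

def half_graph(n):
    """a_i is adjacent to b_j iff i <= j."""
    return {(('a', i), ('b', j))
            for i in range(1, n + 1) for j in range(1, n + 1) if i <= j}
```

The power set graph on $[n]\cup 2^{[n]}$ has $n\cdot 2^{n-1}$ edges, and the half-graph on $2n$ vertices has $\binom{n+1}{2}$ edges.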
The distinction between the classical versions of these properties and their
monadic variants is actually not so important in the setting of hereditary
classes, as witnessed by the following theorem.
###### Theorem 57 ([BL22a]).
Let $\mathscr{C}$ be a hereditary class of graphs. Then,
* •
$\mathscr{C}$ is monadically dependent iff $\mathscr{C}$ is dependent;
* •
$\mathscr{C}$ is monadically stable iff $\mathscr{C}$ is stable.
We also introduce monadically straight classes. Recall the definitions of
trivially perfect graphs and forest orders from section 1.1.
A class $\mathscr{C}$ is monadically straight if the class of trivially
perfect graphs (equivalently, the class of forest orders) is not transducible
from $\mathscr{C}$.
We believe that monadic straightness could well be an important dividing line
in the transduction quasi-order (and maybe from a general model-theoretic
point of view).
As above, we may consider the local versions of these properties, which we
will prove to be equivalent to their non-local versions.
###### Theorem 58.
For a class $\mathscr{C}$ of graphs we have the following equivalences:
1. 1.
$\mathscr{C}$ is locally monadically dependent if and only if $\mathscr{C}$ is
monadically dependent;
2. 2.
$\mathscr{C}$ is locally monadically straight if and only if $\mathscr{C}$ is
monadically straight;
3. 3.
$\mathscr{C}$ is locally monadically stable if and only if $\mathscr{C}$ is
monadically stable.
###### Proof 19.2.
The proof will follow from the following claim.
###### Claim 59.
Let $\mathscr{C}$ be a class such that the class
$\mathscr{C}^{\prime}=\\{n(G\operatorname{+}K_{1})\mid
n\in\mathbb{N},G\in\mathscr{C}\\}$ is a transduction of $\mathscr{C}$. Then,
for every class $\mathscr{D}$ we have $\mathscr{C}\sqsubseteq_{\rm
FO}\mathscr{D}$ if and only if there exists some integer $r$ with
$\mathscr{C}\sqsubseteq_{\rm FO}\mathcal{B}_{r}^{\mathscr{D}}$.
* _Proof of the claim_.
Let $\mathsf{H}$ be the hereditary transduction, which closes a class under
taking induced subgraphs (see Figure 3).
As $\mathcal{B}_{r}^{\mathscr{D}}\sqsubseteq_{\rm FO}\mathscr{D}$ (as
witnessed by $\mathsf{H}$), if there exists some integer $r$ with
$\mathscr{C}\sqsubseteq_{\rm FO}\mathcal{B}_{r}^{\mathscr{D}}$, then
$\mathscr{C}\sqsubseteq_{\rm FO}\mathscr{D}$. Now assume
$\mathscr{C}\sqsubseteq_{\rm FO}\mathscr{D}$. First, note that the class
$\mathscr{C}^{\prime}$ is addable (as
$n(G+K_{1})\operatorname{\uplus}m(G+K_{1})=(m+n)(G+K_{1})$). As
$\mathscr{C}^{\prime}\sqsubseteq_{\rm FO}\mathscr{C}$ (by assumption), we have
$\mathscr{C}^{\prime}\sqsubseteq_{\rm FO}\mathscr{D}$. Let $\mathsf{T}$ be a
transduction such that $\mathscr{C}^{\prime}\subseteq\mathsf{T}(\mathscr{D})$.
According to Lemma 15, there exist a copy operation $\mathsf{C}$ and an
immersive transduction $\mathsf{T}_{\rm imm}$ such that $\mathsf{T}$ is
subsumed by $\mathsf{T}_{\rm imm}\circ\mathsf{C}$. Let
$\mathscr{D}^{\prime}=\mathsf{C}(\mathscr{D})$. According to Lemma 14, there
is an integer $r$ such that $\mathscr{C}\sqsubseteq_{\rm
FO}^{\circ}\mathcal{B}_{r}^{\mathscr{D}^{\prime}}$. Note that, for every graph
$G$ and every positive integer $r$, every ball of radius $r$ of
$\mathsf{C}(G)$ is an induced subgraph of $\mathsf{C}(B)$, for some ball $B$
of radius $r$ of $G$. Consequently,
$\mathcal{B}_{r}^{\mathscr{D}^{\prime}}\subseteq\mathsf{H}\circ\mathsf{C}(\mathcal{B}_{r}^{\mathscr{D}})$.
Thus, we have $\mathscr{C}\sqsubseteq_{\rm
FO}\mathcal{B}_{r}^{\mathscr{D}^{\prime}}\sqsubseteq_{\rm
FO}\mathcal{B}_{r}^{\mathscr{D}}$. $\vartriangleleft$
The class $\\{n(G\operatorname{+}K_{1})\mid n\in\mathbb{N},G\in\mathscr{G}\\}$
is obviously a transduction of $\mathscr{G}$. Hence, according to Claim 59, a
class $\mathscr{C}$ is locally monadically dependent if and only if it is
monadically dependent. The class $\\{n(G\operatorname{+}K_{1})\mid
n\in\mathbb{N},G\in\mathscr{T\\!\\!P}\\}$ is a transduction of
$\mathscr{T\\!\\!P}$. Hence, according to Claim 59, a class $\mathscr{C}$ is
locally monadically straight if and only if it is monadically straight. The
class $\\{n(G\operatorname{+}K_{1})\mid n\in\mathbb{N},G\in\mathscr{H}\\}$ is
a transduction of $\mathscr{H}$. Hence, according to Claim 59, a class
$\mathscr{C}$ is locally monadically stable if and only if it is monadically
stable.
Although the class of unit interval graphs has unbounded cliquewidth, every
proper hereditary subclass of unit interval graphs has bounded cliquewidth
[Loz08, Theorem 3]. This is in particular the case for the class of unit
interval graphs with bounded radius. As classes with bounded cliquewidth are
monadically dependent, the class of unit interval graphs is locally
monadically dependent, hence monadically dependent.
We next prove the analogue of Theorem 58 for stability and dependence, after
recalling their definitions.
For a formula $\varphi(\bar{x},\bar{y})$ and a bipartite graph $H=(I,J,E)$, we
say a graph $G$ encodes $H$ via $\varphi$ if there are sets
$A=\set{\bar{a}_{i}}{i\in I}\subseteq V(G)^{|x|},B=\set{\bar{b}_{j}}{j\in
J}\subseteq V(G)^{|y|}$ such that
$G\models\varphi(\bar{a}_{i},\bar{b}_{j})\Leftrightarrow H\models E(i,j)$ for
all $(i,j)\in I\times J$.
Given a class $\mathscr{C}$ of graphs, $\varphi$ encodes $H$ in $\mathscr{C}$
if there is some $G\in\mathscr{C}$ encoding $H$ via $\varphi$.
Recall that a formula $\varphi(\bar{x},\bar{y})$ with its free variables
partitioned into two parts is dependent over a class $\mathscr{C}$ of graphs
if there is some finite bipartite graph $H$ such that $\varphi$ does not
encode $H$ in $\mathscr{C}$, while $\varphi$ is stable over $\mathscr{C}$ if
there is some half-graph $H$ such that $\varphi$ does not encode $H$ in
$\mathscr{C}$. The class $\mathscr{C}$ is dependent, resp. stable, if every
partitioned formula is dependent, resp. stable, over $\mathscr{C}$ (cf Section
2.3).
###### Lemma 60.
If a class $\mathscr{C}$ of graphs is independent (resp. unstable), then there
is a strongly local formula that is independent (resp. unstable).
###### Proof 19.3.
We make use of the standard fact that dependence of formulas is preserved by
Boolean combinations [Sim15, Lemma 2.9]. Suppose $\varphi(\bar{x},\bar{y})$ is
independent. Using Gaifman normal form and Lemma 2, we may rewrite $\varphi$
as a Boolean combination of basic local sentences and strongly local formulas.
Since no sentence can be independent, some strongly local formula must be.
The argument for stability is identical, using the fact that stability of
formulas is preserved by Boolean combinations [Pal18, Remark 3.4].
###### Corollary 61.
A class of graphs is dependent (resp. stable) if and only if it is locally
dependent (resp. locally stable).
### 20 Maximum classes, minimum classes, and transduction dualities
For some properties, there is a unique (up to transduction-equivalence)
maximum class with that property, which can be thought of as essentially
exhibiting all the complexity of classes with the property. For example, the
maximum class of bounded cliquewidth is the class of trivially perfect graphs
(or forest-orders; or trees, if considering MSO-transductions) and the maximum
class of bounded linear cliquewidth is the class of half-graphs (or linear
orders; or paths, if considering MSO-transductions), and these statements
capture the “tree-like” and “order-like” behavior of these classes.
Dually, for some properties there is a unique minimum class without that
property, which can be considered a canonical obstruction to that property.
Model-theoretic properties are often defined by such canonical obstructions,
such as (monadic) stability and the class of linear orders or (monadic)
dependence and the class of all graphs.
Some properties may admit both types of characterization, which we will term a
_(singleton) transduction duality_ ; this is in analogy with the well-
established theory of homomorphism dualities [NT00], which suggests many
further lines of inquiry. More generally, a _transduction duality_
$(\Pi,\Psi)$ consists of two sets $\Pi$ and $\Psi$ of graph classes such that
for any graph class $\mathscr{C}$, either there is some $\mathscr{F}\in\Pi$
such that $\mathscr{C}\sqsubseteq_{\rm FO}\mathscr{F}$, or there is some
$\mathscr{D}\in\Psi$ such that $\mathscr{D}\sqsubseteq_{\rm FO}\mathscr{C}$.
This formalizes a structural dichotomy stating that the obstructions to being
in $\Pi$ are captured by $\Psi$. For example, while the classes of shrubdepth
$\leq d$ form a strictly increasing chain in the transduction quasi-order
[GHN+19, Theorem 4.5] and so the bounded shrubdepth property does not admit a
maximum class, [POdMS22] shows a transduction duality where
$\Pi=\\{\mathscr{T}_{n}\mid n\in\mathbb{N}\\}$, with $\mathscr{T}_{n}$ the
class of trees of height $n$, and $\Psi$ is the singleton consisting of the
class of paths.
#### 20.1 A transduction duality for structurally bounded degree
Given a class property $\Pi$, the class property structurally $\Pi$ is the set
of graph classes that are transducible from a class in $\Pi$. From Proposition
40, we see that the class of cubic graphs is the maximum class of structurally
bounded degree. We now establish a transduction duality by showing the class
of star forests is the minimum class not of structurally bounded degree. The
proof will largely follow from known results on the model-theoretic property
of _mutual algebraicity_.
Given a structure $M$, an $n$-ary relation $R(\bar{x})$ is mutually algebraic
if there is a constant $K$ such that for each $m\in M$, the number of tuples
$\bar{b}\in M^{n}$ such that $M\models R(\bar{b})$ and $m\in\bar{b}$ is at
most $K$. Note that a unary relation is always mutually algebraic. Given a
structure $M$, a formula $\varphi(\bar{x})$ is mutually algebraic if it
defines a mutually algebraic relation in $M^{|\bar{x}|}$.
A relational structure $M$ is bounded degree if every atomic relation is
mutually algebraic.
A relational structure $M$ is mutually algebraic if every formula with
parameters from $M$ is equivalent to a Boolean combination of mutually
algebraic formulas with parameters from $M$.
A hereditary class $\mathscr{C}$ is mutually algebraic if every model of
$\operatorname{\text{Th}}(\mathscr{C}):=\bigcap_{M\in\mathscr{C}}\operatorname{\text{Th}}(M)$
is mutually algebraic.
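On a finite structure the defining condition can be checked directly. The sketch below (our own helper, not from the cited sources) computes the least constant $K$ for a relation given as a set of tuples; for the symmetrized edge relation of a path it stays bounded, while for stars it grows with the number of leaves, matching the fact that star forests are an obstruction to structurally bounded degree.

```python
from collections import Counter

def ma_constant(tuples):
    """Least K such that every element occurs in at most K tuples."""
    occurrences = Counter()
    for t in tuples:
        for m in set(t):          # count each element once per tuple
            occurrences[m] += 1
    return max(occurrences.values(), default=0)

# symmetrized edge relations of a 10-vertex path and a 10-vertex star
path_edges = [(i, i + 1) for i in range(9)] + [(i + 1, i) for i in range(9)]
star_edges = [(0, i) for i in range(1, 10)] + [(i, 0) for i in range(1, 10)]
```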
We remark that [Las13, Theorem 3.3] shows that mutual algebraicity of a theory
is equivalent to monadic NFCP. NFCP is a model-theoretical property that is
stronger than stability at the level of theories, although not at the level of
individual formulas. At the level of formulas, NFCP has appeared in [FPST18]
in the context of graph algorithms.
There are two main points in what follows. If $M$ is not mutually algebraic,
then (some elementary extension of) $M$ transduces an equivalence relation
with infinitely many infinite classes by [BL22b, Theorem 3.2]. If $M$ is
mutually algebraic, then some expansion naming finitely many constants is
quantifier-free interdefinable with a bounded degree structure $M^{\prime}$ by
[LT22, Lemma 4.3], which is thus transduction-equivalent to $M$. There is then
some work to transfer these results to hereditary graph classes.
###### Proposition 62.
Let $\mathscr{C}$ be a hereditary class of relational structures in a finite
language. Then the following hold.
1. 1.
$\mathscr{C}$ is mutually algebraic if and only if $\mathscr{C}$ is (non-
copying, quantifier-free) transduction-equivalent to a hereditary class of
bounded degree structures,
2. 2.
If $\mathscr{C}$ is mutually algebraic, then so is every transduction of
$\mathscr{C}$,
3. 3.
$\mathscr{C}$ is mutually algebraic if and only if $\mathscr{C}$ does not
transduce the class of all equivalence relations.
###### Proof 20.1.
(1) $(\Rightarrow)$ By [LT22, Proposition 4.12], if $\mathscr{C}$ is mutually
algebraic then there are bounded degree hereditary classes
$\widetilde{\mathscr{C}}_{1},\dots,\widetilde{\mathscr{C}}_{m}$ such that each
$\widetilde{\mathscr{C}}_{i}$ is (non-copying, quantifier-free) transduction-
equivalent to some $\mathscr{C}_{i}\subset\mathscr{C}$ and
$\bigcup_{i\in[m]}\mathscr{C}_{i}=\mathscr{C}$. Thus $\mathscr{C}$ is (non-
copying, quantifier-free) transduction-equivalent to
$\bigcup_{i\in[m]}\widetilde{\mathscr{C}}_{i}$.
(2) Mutual algebraicity of $\mathscr{C}$ is clearly preserved by simple
interpretations. By [Las13, Theorem 3.3], mutual algebraicity is preserved by
coloring. We now show mutual algebraicity is preserved by copying. Suppose
$\mathscr{C}$ is mutually algebraic, and by (1) let
$T\colon\mathscr{D}\to\mathscr{C}$ be a non-copying transduction with
$\mathscr{D}$ bounded degree. Then $T$ also induces a non-copying transduction
from $C_{k}(\mathscr{D})$ to $C_{k}(\mathscr{C})$. Taking copies preserves
being bounded degree, bounded degree classes are mutually algebraic by [LT22,
Lemma 4.3], and we have already shown that non-copying transductions preserve
mutual algebraicity.
(1) $(\Leftarrow)$ By [LT22, Lemma 4.3], bounded degree classes are mutually
algebraic, and we have shown mutual algebraicity is preserved by
transductions.
(3) $(\Rightarrow)$ The class of all equivalence relations is not mutually
algebraic, and mutual algebraicity is preserved by transductions.
$(\Leftarrow)$ From the proof of [BL22b, Theorem 3.2(4)], if $\mathscr{C}$ is
not mutually algebraic then there is a formula $\varphi(x,y)$ and $M\models
Th(\mathscr{C})$ containing elements
$\set{b_{i}}{i\in\mathbb{N}}\cup\set{a_{i,j}}{i,j\in\mathbb{N}}$ such that
$M\models\varphi(a_{i,j},b_{k})\iff i=k$. By compactness, for every
$n\in\mathbb{N}$ there is some $M_{n}\in\mathscr{C}$ containing elements
$\set{b_{i}}{i\in[n]}\cup\set{a_{i,j}}{i,j\in[n]}$ such that
$M_{n}\models\varphi(a_{i,j},b_{k})\iff i=k$. Adding unary predicates
$A:=\set{a_{i,j}}{i,j\in[n]}$ and $B:=\set{b_{i}}{i\in[n]}$, we have that
$M_{n}$ transduces an equivalence relation with $n$ classes of size $n$ by the
formula $\psi(x,y):=A(x)\wedge A(y)\wedge\exists
z(B(z)\wedge\varphi(x,z)\wedge\varphi(y,z))$, and similarly can transduce any
substructure of that equivalence relation.
When $\mathscr{C}$ is a class of graphs rather than relational structures, we
may strengthen the result.
###### Theorem 63.
Let $\mathscr{C}$ be a hereditary graph class. Then the following are
equivalent.
1. $\mathscr{C}$ is mutually algebraic;
2. $\mathscr{C}$ is transduction-equivalent to (in particular, is a perturbation of) a hereditary class of bounded degree graphs;
3. $\mathscr{C}$ is a transduction of the class ${\mathscr{C}}\\!\\!\textit{\calligra ubic}\,$ of all cubic graphs;
4. $\mathscr{C}$ does not (non-copying) transduce the class $\mathscr{T}_{2}$ of all star forests.
###### Proof 20.2.
Proposition 62 immediately gives $(2)\Rightarrow(1)$, Proposition 40 gives $(2)\Leftrightarrow(3)$, and Proposition 62 combined with the fact that the class of star forests is transduction-equivalent to the class of all equivalence relations gives $(1)\Leftrightarrow(4)$. Proposition 62 nearly gives $(1)\Rightarrow(2)$, but a priori
only produces a class $\mathscr{D}_{0}$ of bounded degree relational
structures, rather than graphs, such that $\mathscr{D}_{0}\equiv^{\circ}_{{\rm
FO}}\mathscr{C}$.
From [LT22, Definition 4.10], we see the relations of $\mathscr{D}_{0}$ are
given by relations of arity no greater than those of $\mathscr{C}$, so they
are unary and binary. The class $\mathscr{D}_{0}^{-}$ obtained by forgetting
the unary relations still transduces $\mathscr{C}$ since the transduction can
add the unary relations back. We may view $\mathscr{D}_{0}^{-}$ as a class of
directed edge-colored graphs, and we let $\mathscr{D}_{1}$ be the class of
2-subdivisions of the class of graphs obtained from $\mathscr{D}_{0}^{-}$ by
symmetrizing the edge relations and forgetting the edge-colors. Then
$\mathscr{D}_{1}$ still has bounded degree, and $\mathscr{D}_{1}$ transduces
$\mathscr{D}_{0}^{-}$ as follows: we define a colored directed edge by taking
a 2-subdivided edge, coloring the subdivision vertex closer to what will be
the out-vertex of the directed edge with a special color, and then coloring
the other subdivision vertex with the color of the desired directed edge. Thus
$\mathscr{D}_{1}$ transduces $\mathscr{C}$. As every transduction of a class
with bounded degree is a perturbation of a class with bounded degree, we are
finished.
In particular, we have the following singleton transduction-duality:
$\mathscr{T}_{2}\not\sqsubseteq_{\rm
FO}\mathscr{C}\qquad\iff\qquad\mathscr{C}\sqsubseteq_{\rm
FO}{\mathscr{C}}\\!\\!\textit{\calligra ubic}\,.$
There is a correspondence between dualities and covers in the homomorphism
quasi-order [NT00], and we see a continued connection here.
###### Corollary 64.
The only two $\sqsubseteq_{\rm FO}$-covers of the class of edgeless graphs are the class of star forests and the class of paths.
###### Proof 20.3.
Suppose $\mathscr{C}$ is a hereditary graph class that does not transduce star
forests. By Theorem 63, $\mathscr{C}$ is transduction-equivalent to a class
$\mathscr{D}$ of graphs with bounded degree. Suppose there is no bound on the
size of connected components in $\mathscr{D}$. Since $\mathscr{D}$ has bounded
degree, we can find arbitrarily long induced paths in the graphs in
$\mathscr{D}$, hence we can transduce $\mathscr{P}$ from $\mathscr{D}$.
If there is a bound on the size of connected components, then $\mathscr{D}$ is
FO-transduction equivalent to the class of edgeless graphs by Proposition 39.
#### 20.2 The absence of maximum classes
We prove a strong negative result, implying that no property containing all
bounded expansion classes admits a maximum class. The idea is that we may
“diagonalize” against all the countably many transductions from any candidate
maximum class. On the other hand, we prove a positive result for countable
sets of nowhere dense classes or classes of bounded expansion.
###### Proposition 65.
Up to transduction equivalence, the class of all graphs is the only class that
is an $\sqsubseteq_{\rm FO}$-upper bound for all classes with bounded
expansion.
###### Proof 20.4.
Assume $\mathscr{B}$ is a monadically dependent bound for all classes with
bounded expansion. To a monadically dependent hereditary class $\mathscr{C}$
we associate the mapping
$h_{\mathscr{C}}\colon\mathbb{N}^{+}\rightarrow\mathbb{N}$ as follows:
$h_{\mathscr{C}}(i)$ is the largest integer $n$ such that the exact
$i$-subdivision of $K_{n}$ belongs to $\mathscr{C}$ (this is well-defined by
the assumption that $\mathscr{C}$ is monadically dependent). We now consider
an enumeration $\mathsf{T}_{1},\mathsf{T}_{2},\dots$ of all first-order
transductions (there are countably many) and let
$\mathscr{C}_{i}=\mathsf{T}_{i}(\mathscr{B})$. We further define the function
$h\colon\mathbb{N}^{+}\rightarrow\mathbb{N}$ by
$h(i)=h_{\mathscr{C}_{i}}(i)+1$. By taking $\mathscr{D}$ to be the hereditary
closure of the set $\set{i\text{-subdivided }K_{h(i)}\colon
i\in\mathbb{N}^{+}}$ we get a bounded expansion class with
$h_{\mathscr{D}}(i)\geq h(i)$ for every integer $i$. As $\mathscr{B}$ is a
bound of all bounded expansion classes, there exists a transduction, say
$\mathsf{T}_{k}$, such that
$\mathscr{D}\subseteq\mathsf{T}_{k}(\mathscr{B})=\mathscr{C}_{k}$. However,
this implies $h_{\mathscr{D}}(i)\leq h_{\mathscr{C}_{k}}(i)$ for every integer
$i$, which contradicts $h_{\mathscr{D}}(k)\geq h(k)\geq
h_{\mathscr{C}_{k}}(k)+1$.
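Spelled out at the diagonal index $k$, the contradiction in the last step is the following chain of inequalities (each step restates a fact from the proof above):

```latex
h_{\mathscr{C}_{k}}(k)\;\geq\; h_{\mathscr{D}}(k)\;\geq\; h(k)\;=\; h_{\mathscr{C}_{k}}(k)+1,
```

where the first inequality uses $\mathscr{D}\subseteq\mathscr{C}_{k}$, the second uses the construction of $\mathscr{D}$, and the equality is the definition of $h$.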
However, we show that for countable sets of classes with bounded expansion, we
can find an upper bound with bounded expansion.
###### Lemma 66.
Let $\mathscr{C}_{1},\dots,\mathscr{C}_{k},\dots$ be countably many classes of
graphs.
Then, there exists a class $\mathscr{B}$ such that
$\mathscr{C}_{i}\sqsubseteq_{\rm FO}^{\circ}\mathscr{B}$ holds for all
$i\in\mathbb{N}$, and such that
* if every $\mathscr{C}_{i}$ has bounded expansion, then $\mathscr{B}$ has bounded expansion;
* if every $\mathscr{C}_{i}$ is nowhere dense, then $\mathscr{B}$ is nowhere dense.
###### Proof 20.5.
Let $\mathscr{B}$ contain, for each $i\in\mathbb{N}$, the $i$-subdivision of all the graphs in $\mathscr{C}_{i}$.
By construction, the non-copying transduction $\mathsf{T}_{i}$ that connects vertices at distance exactly $i+1$ and keeps only marked vertices satisfies $\mathsf{T}_{i}(\mathscr{B})\supseteq\mathscr{C}_{i}$. Thus,
$\mathscr{C}_{i}\sqsubseteq_{\rm FO}^{\circ}\mathscr{B}$ holds for all
$i\in\mathbb{N}$.
Assume that the $k$-th subdivision $H^{(k)}$ of a graph $H$ belongs to
$\mathscr{B}$, where $H$ has minimum degree at least $3$. Then, $H^{(k)}$ is
the $i$-subdivision of some graph $H^{\prime}\in\mathscr{C}_{i}$ for some
$i\leq k$. It follows that
$\mathscr{B}\,\widetilde{\nabla}\,k\subseteq\bigcup_{i=1}^{k}\mathscr{C}_{i}\,\widetilde{\nabla}\,k$.
Consequently, if all the $\mathscr{C}_{i}$’s have bounded expansion then
$\mathscr{B}$ has bounded expansion, and if all the $\mathscr{C}_{i}$’s are
nowhere dense then $\mathscr{B}$ is nowhere dense.
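The last step can be made explicit. Writing $\widetilde{\nabla}_{k}(\mathscr{C})$ for the supremum of edge densities over $\mathscr{C}\,\widetilde{\nabla}\,k$ (this density notation is our restatement, not taken from the source), the containment above yields, for every $k$,

```latex
\widetilde{\nabla}_{k}(\mathscr{B})\;\leq\;\max_{1\leq i\leq k}\widetilde{\nabla}_{k}(\mathscr{C}_{i}),
```

a maximum over finitely many values, hence finite whenever each $\mathscr{C}_{i}$ has bounded expansion; the nowhere dense case is handled analogously by bounding the clique number of shallow topological minors.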
#### 20.3 Bounded twin-width
Graph classes of bounded twin-width were introduced in [BKTW20], and have been
intensively studied since. While we will not need their definition for this
section, we note that the property of bounded twin-width is preserved by FO-
transductions, and any further required facts will be referenced. We provide a
counterexample to a conjecture, stated in different terminology, that the
cubic graphs are the minimum class of unbounded twin-width.
Let $\mathscr{C}$ be a hereditary graph class. Then $\mathscr{C}$ is
_delineated_ if for every hereditary subclass
$\mathscr{D}\subseteq\mathscr{C}$, $\mathscr{D}$ has bounded twin-width if and
only if $\mathscr{D}$ is monadically dependent.
In [BCK+22, Conjecture 66], it was conjectured that if $\mathscr{C}$ is not
delineated then $\mathscr{C}$ transduces the class $\mathscr{S}$ of subcubic
graphs. This is equivalent to conjecturing that $\mathscr{S}$ is the minimum
class of unbounded twin-width. In one direction, suppose $\mathscr{S}$ is the
minimum such class; if $\mathscr{C}$ is not delineated then it has unbounded twin-width, and so $\mathscr{S}\sqsubseteq_{\rm FO}\mathscr{C}$. In the other
direction, suppose the original conjecture holds. If $\mathscr{C}$ has
unbounded twin-width then either it is not monadically dependent or it is not
delineated (witnessed by considering $\mathscr{C}$ as a subclass of itself);
in either case, $\mathscr{S}\sqsubseteq_{\rm FO}\mathscr{C}$.
A graph class $\mathscr{C}$ is small (resp. unlabeled-small) if there is
$c\in\mathbb{R}$ such that the number of labeled structures of size $n$ in
$\mathscr{C}$ is $O(c^{n}\cdot n!)$ (resp. the number of unlabeled structures
of size $n$ in $\mathscr{C}$ is $O(c^{n})$).
###### Lemma 67.
Let $\mathscr{C}$ be an unlabeled-small class of graphs with bounded degree.
Then every transduction $\mathscr{D}$ of $\mathscr{C}$ is also unlabeled-
small.
###### Proof 20.6.
Let $\mathsf{T}$ be a transduction so that
$\mathscr{D}\subseteq\mathsf{T}(\mathscr{C})$. By Theorem 8, we may decompose
$\mathsf{T}$ into the composition of a copy operation, an immersion, and a
perturbation. Note that copying and perturbations both preserve being
unlabeled-small since every structure in the image has a pre-image that is no
larger, and that copying also preserves being bounded-degree; thus we may disregard these operations and assume that $\mathsf{T}$ is an immersion. Fix
a target graph $D\in\mathscr{D}$ and let $C\in\mathsf{T}^{-1}(D)$. If
$\mathsf{T}$ is strongly $r$-local, then let $C^{\prime}\subseteq
C\in\mathscr{C}$ be obtained by keeping the vertices of $C$ that remain in
$\mathsf{T}(C)$, as well as their $r$-neighborhoods. By strong $r$-locality,
$C^{\prime}\in\mathsf{T}^{-1}(D)$. Since $\mathscr{C}$ has bounded degree,
there is some $k$ such that $|C^{\prime}|\leq k|D|$.
Let $\mathscr{C}_{n}$ denote the number of unlabeled graphs of size $n$ in
$\mathscr{C}$, and similarly $\mathscr{D}_{n}$. Suppose
$\mathscr{C}_{n}=O(c^{n})$. Then if the transduction uses $\ell$ colors, we
have $\mathscr{D}_{n}\leq 2^{\ell kn}O(c^{kn})$ since every graph in
$\mathscr{D}_{n}$ is the $\mathsf{T}$-image of a graph in $\mathscr{C}_{kn}$,
so $\mathscr{D}$ is unlabeled-small.
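For the record, the final bound can be rewritten to exhibit the constant explicitly: setting $c^{\prime}:=2^{\ell k}c^{k}$ (a constant depending only on $\ell$, $k$, and $c$; the name $c^{\prime}$ is ours),

```latex
\mathscr{D}_{n}\;\leq\; 2^{\ell kn}\cdot O(c^{kn})\;=\; O\bigl((2^{\ell k}c^{k})^{n}\bigr)\;=\; O\bigl((c^{\prime})^{n}\bigr),
```

which is exactly the definition of $\mathscr{D}$ being unlabeled-small.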
###### Corollary 68.
The class of subcubic graphs is not the $\sqsubseteq_{\rm FO}$-minimum class
of unbounded twin-width.
###### Proof 20.7.
Let $\mathscr{C}$ be a hereditary unlabeled-small class of graphs of unbounded
twin-width and bounded degree (so $\mathscr{C}$ is monadically dependent), as
constructed in [BGTT22], so $\mathscr{C}$ is not delineated. Since the class
of subcubic graphs is not unlabeled-small [Noy15, Formula 6.6], Lemma 67 shows
it cannot be transduced from $\mathscr{C}$.
We remark that the class $\mathscr{C}$ is only shown to be small rather than unlabeled-small in [BGTT22], using [BGK+22, Lemma 41], which states that the class of induced subgraphs of the Cayley graph of a finitely-generated group is small.
However, the same proof shows such a class is unlabeled-small; the only change
needed is that Cayley’s formula for the number of labeled rooted forests on
$n$ vertices should be replaced by an exponential upper bound on the number of
unlabeled rooted forests on $n$ vertices, which follows from such a bound on
unlabeled rooted trees [Drm15, Theorem 4.4.6].
###### Problem 69.
Is (unlabeled-)smallness preserved by transductions?
However, we have the following positive results concerning maximum classes.
###### Theorem 70 (follows from [BBGT23, Theorem 1.5]).
There exists a constant $c$ such that every class with bounded twin-width is a
transduction of the class of all graphs with twin-width at most $c$. In other
words, the bounded twin-width property has a maximum.
The following is also a direct consequence of results in [BBGT23].
###### Theorem 71.
The property of being monadically stable and having bounded twin-width has a
maximum, which can be chosen to be the class of all graphs with twin-width at
most $c$ and girth at least $6$ (for some constant $c$).
###### Proof 20.8.
It has been proved in [GPT22] that every monadically stable class of graphs
with bounded twin-width is a transduction of a weakly sparse class with
bounded twin-width. For $k,t\in\mathbb{N}$, let $\mathscr{C}_{k,t}$ be the
class of all $K_{t,t}$-free graphs with twin-width at most $k$. According to
[BBGT23, Theorem 1.4], there exists a constant $c$ and a function $f$ such
that, for every graph in $G\in\mathscr{C}_{k,t}$, the $f(k,t)$-subdivision of
$G$ has twin-width at most $c$ (and girth at least $6$). It follows that
$\mathscr{C}_{k,t}$ is an ${\rm FO}$-transduction of the class $\mathscr{B}$
of all graphs with girth at least $6$ and twin-width at most $c$.
Consequently, $\mathscr{B}$ is a maximum for the property of being monadically
stable and having bounded twin-width.
#### 20.4 Not a lattice
We now show the transduction quasi-order is not a lattice, by showing the
classes of planar graphs and of half-graphs have no greatest lower bound, i.e., there is no maximum class in the downset of classes below both. This also shows
that the join of infinitely many classes is not generally defined, as we could
then define the meet of two classes as the join of all classes below both. We
will make use of Lemma 38 showing that the class of planar graphs transduces
every class of bounded pathwidth. We first show that the pathwidth hierarchy
is strict in the transduction quasi-order, and thus has no maximum class.
###### Theorem 72.
For $n\geq 1$ we have $\mathscr{T}_{n+2}\sqsubseteq_{\rm FO}\mathscr{P\\!W\\!}_{n+1}$ but $\mathscr{T}_{n+2}\not\sqsubseteq_{\rm
FO}\mathscr{P\\!W\\!}_{n}$. Consequently,
$\mathscr{P\\!W\\!}_{n+1}\not\sqsubseteq_{\rm FO}\mathscr{P\\!W\\!}_{n}$.
###### Proof 20.9.
For convenience, we define
$\mathscr{T}_{1}=\mathscr{P\\!W\\!}_{0}=\mathscr{E}$, which is consistent with
our definitions. We now prove $\mathscr{T}_{n+1}\not\sqsubseteq_{\rm
FO}\mathscr{P\\!W\\!}_{n-1}$ by induction on $n$. For $n=1$, this follows from
Proposition 39 (or from the known fact $\mathscr{T}_{2}\sqsupset\mathscr{E}$).
Now assume that we have proved the statement for $n\geq 1$ and assume towards
a contradiction that $\mathscr{T}_{n+2}\sqsubseteq_{\rm
FO}\mathscr{P\\!W\\!}_{n}$. The class $\mathscr{P\\!W\\!}_{n}$ is the monotone
closure of the class $\mathcal{I}_{n+1}$ of interval graphs with clique number
at most $n+1$. As the class $\mathcal{I}_{n+1}$ has bounded star chromatic
number, it follows from Corollary 20 that
$\mathscr{P\\!W\\!}_{n}\sqsubseteq_{\rm FO}\mathcal{I}_{n+1}$. Let
$\mathsf{T}$ be the composition of the respective transductions, so that
$\mathscr{T}_{n+2}\subseteq\mathsf{T}(\mathcal{I}_{n+1})$. As
$\mathscr{T}_{n+2}$ is additive, it follows from Corollary 16 that there is a
copy operation $\mathsf{C}$ and an immersive transduction $\mathsf{T}_{0}$
such that $\mathsf{T}_{0}\circ\mathsf{C}$ subsumes $\mathsf{T}$. As
$\\{G+K_{1}:G\in\mathscr{T}_{n+1}\\}\subseteq\mathscr{T}_{n+2}$, it follows
from Lemma 14 that there exists an integer $r$ such that
$\mathscr{T}_{n+1}\sqsubseteq^{\circ}_{\rm
FO}\mathcal{B}_{r}^{\mathsf{C}(\mathcal{I}_{n+1})}\subseteq\mathsf{C}(\mathcal{B}_{r}^{\mathcal{I}_{n+1}})$.
According to Lemma 15, as $\mathscr{T}_{n+1}$ is additive, there exists an
immersive transduction $\mathsf{T}_{1}$ with
$\mathscr{T}_{n+1}\subseteq\mathsf{T}_{1}\circ\mathsf{C}(\mathcal{B}_{r}^{\mathcal{I}_{n+1}})$.
Let $G\in\mathcal{B}_{r}^{\mathcal{I}_{n+1}}$ and let $P$ be a shortest path
connecting the minimal and maximal vertices in an interval representation of
$G$. Then $P$ has length at most $2r$, $P$ dominates $G$, and
$G-P\in\mathcal{I}_{n}$. By encoding the adjacencies in $G$ to the vertices of
$P$ by a monadic expansion, we get that there exists a transduction
$\mathsf{T}_{2}$ (independent of our choice of $G$) such that $G$ is a
$\mathsf{T}_{2}$-transduction of $H$, where $H$ is obtained from $G$ by
deleting all the edges incident to a vertex in $P$. In particular,
$\mathcal{B}_{r}^{\mathcal{I}_{n+1}}\subseteq\mathsf{T}_{2}(\mathcal{I}_{n})$
thus
$\mathscr{T}_{n+1}\subseteq\mathsf{T}_{1}\circ\mathsf{C}\circ\mathsf{T}_{2}(\mathcal{I}_{n})$,
which contradicts our induction hypothesis.
###### Theorem 73.
The class $\mathscr{H}$ of half-graphs and the class
${\mathscr{P}}\\!\\!\textit{\calligra lanar}\,$ of planar graphs have no
greatest lower bound in $\sqsubseteq_{\rm FO}$ (or in $\sqsubseteq_{\rm
FO}^{\circ}$).
###### Proof 20.10.
Consider one of the quasi-orders $\sqsubseteq_{\rm FO}$, $\sqsubseteq_{\rm
FO}^{\circ}$, which we will denote by $(\mathfrak{X},\sqsubseteq)$. Assume
$\mathscr{C}$ is the greatest lower bound of $\mathscr{H}$ and
${\mathscr{P}}\\!\\!\textit{\calligra lanar}\,$ in
$(\mathfrak{X},\sqsubseteq)$. According to Theorem 5, the class
${\mathscr{P}}\\!\\!\textit{\calligra lanar}\,$, being nowhere dense, is
monadically stable. Thus, as $\mathscr{C}$ is (by assumption) a transduction
of ${\mathscr{P}}\\!\\!\textit{\calligra lanar}\,$, it is also monadically
stable. Hence, according to [NOdMRS20], there exists an integer $k$ such that
$\mathscr{C}\sqsubseteq_{\rm FO}^{\circ}\mathscr{P\\!W\\!}_{k}$ (hence
$\mathscr{C}\sqsubseteq\mathscr{P\\!W\\!}_{k}$). According to Theorem 72,
$\mathscr{P\\!W\\!}_{k+1}\not\sqsubseteq_{\rm FO}\mathscr{P\\!W\\!}_{k}$
(hence $\mathscr{P\\!W\\!}_{k+1}\not\sqsubseteq\mathscr{P\\!W\\!}_{k}$). Thus,
$\mathscr{C}\sqsubset\mathscr{P\\!W\\!}_{k+1}$. However,
$\mathscr{P\\!W\\!}_{k+1}$ is addable (hence additive),
$\mathscr{P\\!W\\!}_{k+1}\sqsubseteq_{\rm FO}^{\circ}\mathscr{H}$ and,
according to Lemma 38,
$\mathscr{P\\!W\\!}_{k+1}\sqsubseteq^{\circ}{\mathscr{P}}\\!\\!\textit{\calligra
lanar}\,$. In particular, $\mathscr{P\\!W\\!}_{k+1}\in\mathfrak{X}$,
$\mathscr{P\\!W\\!}_{k+1}\sqsubseteq\mathscr{H}$, and
$\mathscr{P\\!W\\!}_{k+1}\sqsubseteq{\mathscr{P}}\\!\\!\textit{\calligra
lanar}\,$, which contradicts the assumption that $\mathscr{C}$ is the greatest
lower bound of $\mathscr{H}$ and ${\mathscr{P}}\\!\\!\textit{\calligra
lanar}\,$ in $(\mathfrak{X},\sqsubseteq)$.
On the other hand, it is easy to see that the least upper bound of two classes
is always defined.
###### Proposition 74.
Let $\mathscr{C}_{1}$ and $\mathscr{C}_{2}$ be graph classes. Then
$\mathscr{C}_{1}\cup\mathscr{C}_{2}$ is the least upper bound of
$\mathscr{C}_{1}$ and $\mathscr{C}_{2}$ in $\sqsubseteq_{\rm FO}$ (and in
$\sqsubseteq_{\rm FO}^{\circ}$).
###### Proof 20.11.
We only do the proof for $\sqsubseteq_{\rm FO}$, since that for
$\sqsubseteq_{\rm FO}^{\circ}$ is essentially identical. It is immediate that
$\mathscr{C}_{1}\cup\mathscr{C}_{2}$ transduces each of $\mathscr{C}_{1}$ and
$\mathscr{C}_{2}$. So suppose $\mathscr{D}$ transduces $\mathscr{C}_{1}$ and
$\mathscr{C}_{2}$ via transductions $\mathsf{T}_{1}$ and $\mathsf{T}_{2}$. We
wish to encode these both in a single transduction (and for the proof to carry
over to $\sqsubseteq_{\rm FO}^{\circ}$, we should not achieve this via
copying). We do this by introducing a new color $U(x)$. We first perform the copy operation as many times as the larger of the numbers of times it is performed in $\mathsf{T}_{1}$ and $\mathsf{T}_{2}$. Then, if any point is colored by $U$,
we perform the interpretation from $\mathsf{T}_{1}$, and otherwise perform the
interpretation from $\mathsf{T}_{2}$.
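As a sketch, writing $\varphi_{1}$ and $\varphi_{2}$ for the edge-defining formulas of the interpretation parts of $\mathsf{T}_{1}$ and $\mathsf{T}_{2}$ (these names are ours, not from the source), the combined interpretation can define edges by

```latex
\varphi(x,y)\;:=\;\Bigl(\bigl(\exists z\,U(z)\bigr)\wedge\varphi_{1}(x,y)\Bigr)\;\vee\;\Bigl(\bigl(\neg\exists z\,U(z)\bigr)\wedge\varphi_{2}(x,y)\Bigr),
```

so coloring at least one vertex with $U$ runs the transduction in $\mathsf{T}_{1}$-mode, while leaving $U$ empty runs it in $\mathsf{T}_{2}$-mode; the domain formulas are combined in the same way.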
### 21 Dense analogues and open problems
In this section, we discuss the notion of _dense analogue_ of a class
property, as introduced in [GPT22]. In the following, we consider only
infinite hereditary classes. Let $\Sigma$ be the class property of being weakly sparse (that is, the property that there is some integer $s$ such that no graph in the class includes $K_{s,s}$ as a subgraph). Unlike the
model-theoretic dividing lines of stability and independence for hereditary
classes, the notion of weakly sparse is not preserved by transductions, but
finds its motivation in the study of sparse classes. Recall that a class
property $\Pi$ is a _transduction downset_ if it is preserved by
transductions. In other words, a transduction downset is a downset for the transduction order $(\mathfrak{C},\sqsubseteq_{\rm FO})$. The _transduction closure_ $\Pi^{\downarrow}$ of a class property $\Pi$ is the property of being a transduction of a class in $\Pi$, that is, the smallest transduction downset (for inclusion) that includes $\Pi$. (This is the same as
the class property structurally $\Pi$.) A _transduction ideal_ is a
transduction downset $\Pi$ such that every two elements of $\Pi$ have an upper
bound in $\Pi$, i.e., such that the union of any two classes in $\Pi$ is in $\Pi$ (by Proposition 74). A class property $\Pi_{s}$ is a _sparse transduction downset_
if it contains only weakly sparse classes and is preserved by transductions to
weakly sparse classes. In other words, a sparse transduction downset is the
trace on $\Sigma$ of a transduction downset. A sparse transduction downset
$\Pi_{s}$ is a _sparse transduction ideal_ if the union of any two classes in
$\Pi_{s}$ is in $\Pi_{s}$.
The _dense analogue_ of a sparse transduction downset $\Pi_{s}$ is the largest transduction downset $\Pi$ (for inclusion) with $\Pi\cap\Sigma=\Pi_{s}$. This definition is valid as
$\Pi_{s}^{\downarrow}\cap\Sigma=\Pi_{s}$ for every sparse transduction
downset, and the union of all transduction downsets $\Pi$ with
$\Pi\cap\Sigma=\Pi_{s}$ is a transduction downset with the same property.
###### Proposition 75.
The dense analogue of a sparse transduction ideal is a transduction ideal.
###### Proof 21.1.
Let $\Pi$ be the dense analogue of a sparse transduction ideal $\Pi_{s}$.
Assume $\mathscr{C}_{1},\mathscr{C}_{2}\in\Pi$, and let $\mathscr{D}$ be a
weakly sparse transduction of
$\mathscr{C}_{1}\operatorname{\uplus}\mathscr{C}_{2}$. Then, there is a
partition $\mathscr{D}_{1}\operatorname{\uplus}\mathscr{D}_{2}$ of
$\mathscr{D}$ with $\mathscr{D}_{1}\sqsubseteq\mathscr{C}_{1}$ and
$\mathscr{D}_{2}\sqsubseteq\mathscr{C}_{2}$. As $\mathscr{D}$ is weakly
sparse, so are $\mathscr{D}_{1}$ and $\mathscr{D}_{2}$. Thus, both
$\mathscr{D}_{1}$ and $\mathscr{D}_{2}$ belong to $\Pi_{s}$. As $\Pi_{s}$ is a
sparse transduction ideal, $\mathscr{D}\in\Pi_{s}$. It follows that every weakly
sparse transduction of $\mathscr{C}_{1}\operatorname{\uplus}\mathscr{C}_{2}$
belongs to $\Pi_{s}$, thus
$\mathscr{C}_{1}\operatorname{\uplus}\mathscr{C}_{2}\in\Pi$. Therefore, $\Pi$
is a transduction ideal.
###### Proposition 76.
Let $\Pi$ be a transduction downset. Then, $\Pi$ is the dense analogue of
$\Pi\cap\Sigma$ if and only if the complement $\Pi^{*}$ of $\Pi$ is such that
$\mathscr{C}\in\Pi^{*}\quad\Longrightarrow\quad(\exists\mathscr{D}\in\Sigma\cap\Pi^{*})\
\mathscr{D}\sqsubseteq_{\rm FO}\mathscr{C}\qquad\text{(property $\Pi^{*}$ is grounded in $\Sigma$)}.$
###### Proof 21.2.
Assume $\Pi$ is the dense analogue of $\Pi\cap\Sigma$ and let $\Pi^{*}$ be the
complement of $\Pi$. As $\Pi$ is the dense analogue of $\Pi\cap\Sigma$, for
every class $\mathscr{C}\notin\Pi$ there exists a weakly sparse class
$\mathscr{D}\notin\Pi$ with $\mathscr{D}\sqsubseteq_{\rm FO}\mathscr{C}$.
Conversely, let $\Pi_{s}=\Pi\cap\Sigma$. For every $\mathscr{C}\notin\Pi$, we
have $\mathscr{C}\in\Pi^{*}$. Thus, there exists $\mathscr{D}\sqsubseteq_{\rm
FO}\mathscr{C}$ with $\mathscr{D}\in\Sigma\cap\Pi^{*}$. Hence, $\mathscr{D}$
is weakly sparse and $\mathscr{D}\notin\Pi$. It follows that $\Pi$ is the
dense analogue of $\Pi_{s}$.
Monadic dependence is the dense analogue of nowhere dense. If a class is
nowhere dense, then it is monadically dependent [AA14]. Conversely, assume
that $\mathscr{C}$ is a hereditary weakly sparse monadically dependent class.
Assume for contradiction that $\mathscr{C}$ is not nowhere dense. Then,
according to [Dvo18, Theorem 6], there is some integer $k$ such that
$\mathscr{C}$ contains a $\leq k$-subdivision of every complete graph. It
follows that the class of all graphs can be obtained as a transduction of
$\mathscr{C}$, contradicting the hypothesis that $\mathscr{C}$ is monadically
dependent.
Note that this shows that the dense analogue of $\Pi_{s}$ can be strictly larger than $\Pi_{s}^{\downarrow}$, since nowhere dense classes are monadically stable
and thus their transduction closure is contained within monadic stability.
Bounded shrubdepth is the dense analogue of bounded treedepth. Indeed, every
weakly sparse class with bounded shrubdepth has bounded treedepth [NOdM12,
Proposition 6.4], while the class of paths (which has unbounded treedepth) can
be transduced from any class with unbounded shrubdepth [POdMS22, Theorem 1.1].
It would be tempting to think that the prominent role played by weakly sparse
classes in our understanding of the transduction quasi-order is due to the
fact that the complexity of a set of classes is determined by its relation to
$\Sigma$. A strong formalization of this is given by the following order-
theoretic notion: a subset $\Upsilon$ of classes is sup-dense in the
transduction quasi-order $\sqsubseteq_{\rm FO}$ if, for every two classes
$\mathscr{C}$ and $\mathscr{D}$ we have
$\mathscr{C}\equiv_{\rm
FO}\mathscr{D}\qquad\iff\qquad\forall\mathscr{F}\in\Upsilon\quad(\mathscr{F}\sqsubseteq_{\rm
FO}\mathscr{C})\iff(\mathscr{F}\sqsubseteq_{\rm FO}\mathscr{D}).$
###### Proposition 77.
The set $\Sigma$ is not sup-dense in the FO-transduction quasi-order.
###### Proof 21.3.
Let $\mathscr{C}$ be the class of all planar graphs, and let $\mathscr{D}$ be
the union of $\mathscr{C}$ and of the class $\mathscr{H}$ of all half-graphs.
As every transduction of $\mathscr{C}$ is monadically stable and $\mathscr{D}$ is not (as it includes $\mathscr{H}$), we have that $\mathscr{D}\not\sqsubseteq_{\rm FO}\mathscr{C}$ and hence $\mathscr{D}\not\equiv_{\rm FO}\mathscr{C}$. However, according to [NOdMRS20], every weakly sparse
transduction of half-graphs has bounded pathwidth, and thus by Lemma 38 is a
transduction of $\mathscr{C}$. Thus $\Sigma$ is not sup-dense in the
transduction quasi-order.
While the classes $\mathscr{C}$ and $\mathscr{D}$ above had the same weakly
sparse classes $\sqsubseteq_{\rm FO}$-below them, they had different weakly
sparse classes $\sqsubseteq_{\rm FO}$-above them, since the only weakly sparse
classes $\sqsubseteq_{\rm FO}$-above $\mathscr{D}$ are somewhere dense. We now
show that even requiring that $\mathscr{C}$ and $\mathscr{D}$ have the same
relation to every weakly sparse class does not force them to be $\equiv_{\rm
FO}$-equivalent.
###### Proposition 78.
There are hereditary graph classes $\mathscr{C}\not\equiv_{\rm FO}\mathscr{D}$
such that for every weakly sparse class $\mathscr{F}$, we have
$\mathscr{F}\sqsubseteq_{\rm FO}\mathscr{C}\iff\mathscr{F}\sqsubseteq_{\rm
FO}\mathscr{D}$ and $\mathscr{F}\sqsupseteq_{\rm
FO}\mathscr{C}\iff\mathscr{F}\sqsupseteq_{\rm FO}\mathscr{D}$.
###### Proof 21.4.
Let $\mathscr{C}_{0}$ be a maximum monadically stable class of bounded twin-
width as given by Theorem 71, let $\mathscr{H}$ be the class of half-graphs,
let $\mathscr{T\\!\\!P}$ be the class of trivially perfect graphs, let
$\mathscr{C}=\mathscr{C}_{0}\cup\mathscr{H}$, and let
$\mathscr{D}=\mathscr{C}_{0}\cup\mathscr{T\\!\\!P}$. Then,
$\mathscr{D}\sqsubseteq_{\rm FO}\mathscr{C}$ but $\mathscr{D}\not\equiv_{\rm
FO}\mathscr{C}$. By [GW00b], every weakly sparse transduction of
$\mathscr{T\\!\\!P}$ or $\mathscr{H}$ has bounded treewidth, and thus is
contained in $\mathscr{C}_{0}$. On the other hand, the only weakly sparse
classes above either $\mathscr{C}$ or $\mathscr{D}$ are somewhere dense.
We now turn to various conjectures concerning dense analogues, and prove some
equivalences between them, for which the following proposition will be
helpful.
###### Proposition 79.
Let $\Pi_{s}$ be a sparse transduction downset. Assume that $\Pi_{s}$ has the
following characterization, where $\mathscr{F}\in\Sigma$ is a non-empty weakly
sparse class of graphs:
$\mathscr{C}\in\Pi_{s}\iff\mathscr{C}\in\Sigma\text{ and }\exists
F\in\mathscr{F}\text{ s.t. }\mathscr{C}\text{ excludes $F$ as a topological
minor.}$ (4)
Then, a weakly sparse class $\mathscr{C}$ belongs to $\Pi_{s}$ if and only if
it has no transduction to a class $\mathscr{D}$ containing, for each
$F\in\mathscr{F}$, some subdivision of $F$.
###### Proof 21.5.
Let $\mathscr{C}$ be a weakly sparse class. We prove by contraposition that if
$\mathscr{C}$ belongs to $\Pi_{s}$, then $\mathscr{C}$ has no transduction to
a class $\mathscr{D}$ containing, for each $F\in\mathscr{F}$, some subdivision
of $F$. Assume that $\mathscr{C}$ has a transduction to a class $\mathscr{D}$
containing, for each $F\in\mathscr{F}$, some subdivision of $F$. By
considering a subclass, we can assume that $\mathscr{D}$ contains no other
graphs than these subdivisions. Then, $\mathscr{D}$ is weakly sparse (as
$\mathscr{F}$ is weakly sparse) and, according to (4),
$\mathscr{D}\notin\Pi_{s}$. As $\Pi_{s}$ is a downset,
$\mathscr{C}\notin\Pi_{s}$.
Conversely, assume that $\mathscr{C}$ has no transduction to a class
containing, for each $F\in\mathscr{F}$, some subdivision of $F$. In particular, there is some $F\in\mathscr{F}$ such that the hereditary closure $\mathsf{H}(\mathscr{C})$ of $\mathscr{C}$ contains no subdivision of $F$. In other words, no graph in $\mathscr{C}$ contains a subdivision of $F$ as an induced subgraph.
As $\mathscr{C}$ is weakly sparse, it follows from [Dvo18] that $\mathscr{C}$
has bounded expansion. In particular, $\mathscr{C}$ has bounded star chromatic
number. According to Corollary 20 the monotone closure $\mathscr{C}^{\prime}$
of $\mathscr{C}$ is a transduction of $\mathscr{C}$. Considering this time the
monotone closure transduction of $\mathscr{C}$, we get that there is a graph
$F^{\prime}\in\mathscr{F}$ such that no graph in $\mathscr{C}^{\prime}$
contains a subdivision of $F^{\prime}$ as a subgraph. By (4), it
follows that $\mathscr{C}^{\prime}\in\Pi_{s}$. As
$\mathscr{C}\subseteq\mathscr{C}^{\prime}$, we get $\mathscr{C}\in\Pi_{s}$ as
well.
Let us give a pair of examples of applications of Proposition 79.
Every weakly sparse class with unbounded treewidth has a transduction
containing a subdivision of each wall.
###### Proof 21.6.
Let $\Pi_{s}$ be the property of having bounded treewidth. It is well known
that a weakly sparse class has bounded treewidth if and only if it excludes
some wall as a topological minor. Hence, according to Proposition 79, a weakly sparse class has unbounded treewidth if and only if it has a transduction to a class containing a subdivision of each wall.
Every weakly sparse class with unbounded pathwidth has a transduction
containing a subdivision of each cubic tree.
###### Proof 21.7.
Let $\Pi_{s}$ be the property of having bounded pathwidth. It is well known
that a weakly sparse class has bounded pathwidth if and only if it excludes
some cubic tree as a topological minor. Hence, according to Proposition 79, a weakly sparse class has unbounded pathwidth if and only if it has a transduction to a class containing a subdivision of each cubic tree.
It was conjectured in [GPT22] that cliquewidth is the dense analogue of treewidth and linear cliquewidth is the dense analogue of pathwidth. Every weakly sparse transduction of a class with bounded cliquewidth (resp. linear cliquewidth) has bounded treewidth (resp. bounded pathwidth). Hence, according to the characterizations given in the two examples above, these conjectures can be restated as follows.
###### Conjecture 80.
Bounded linear cliquewidth is the dense analogue of bounded pathwidth.
###### Conjecture 81.
Bounded cliquewidth is the dense analogue of bounded treewidth.
A particular case concerns sparse transduction downsets $\Pi_{s}$ that do not include all classes with bounded pathwidth. Indeed, if some class $\mathscr{C}$ with bounded pathwidth does not belong to $\Pi_{s}$, then the class of half-graphs does not belong to the dense analogue of $\Pi_{s}$. In such a case, the dense analogue of $\Pi_{s}$ consists of stable classes and the conjecture restates as:
###### Conjecture 82.
Let $\Pi_{s}$ be a sparse transduction downset that does not include all
classes with bounded pathwidth. Then, the largest transduction downset whose
trace on $\Sigma$ is $\Pi_{s}$ is the smallest transduction downset including
$\Pi_{s}$.
An example where this conjecture holds is the property of having bounded
degree:
###### Proposition 83.
Let $\Pi_{s}$ be the property of nearly having bounded degree. Then, $\Pi_{s}$
is a sparse transduction downset and the largest transduction downset $\Pi$
whose trace on $\Sigma$ is $\Pi_{s}$ is the smallest transduction downset
including $\Pi_{s}$.
Furthermore, $\Pi$ consists of the mutually algebraic (equiv. monadically
NFCP) classes.
###### Proof 21.8.
First note that, according to Proposition 40, $\Pi_{s}$ is equivalent to the
property of being a weakly sparse perturbation of a class with bounded degree
and is a sparse transduction downset.
By Theorem 63 every mutually algebraic class of graphs is transduction-
equivalent to a class of bounded degree graphs. Hence, as the property of
being mutually algebraic is a transduction downset, it is the smallest
transduction downset including $\Pi_{s}$.
On the other hand, if $\mathscr{C}$ is a class that is not mutually algebraic,
it transduces star forests by Theorem 63, hence it does not belong to $\Pi$.
Thus, the property of being mutually algebraic is also the largest
transduction downset whose trace on $\Sigma$ is $\Pi_{s}$.
A related problem is the relation between sparse and stable properties.
###### Conjecture 84.
Let $\Pi_{s}$ be a sparse transduction downset of bounded expansion classes
with dense analogue $\Pi$. Then, the smallest transduction downset
$\Pi_{s}^{\downarrow}$ including $\Pi_{s}$ is the set of all (monadically)
stable classes in $\Pi$.
This conjecture is deeply related to the studies in [NOdMRS20, NOdMP+21,
GPT22]. However, it is plausible that one cannot relax the condition that the
classes in $\Pi_{s}$ have bounded expansion, because of the following
equivalence.
###### Proposition 85.
The following statements are equivalent:
1.
For every sparse transduction downset $\Pi_{s}$ with dense analogue $\Pi$, the
transduction downset $\Pi_{s}^{\downarrow}$ is the set of all stable classes
in $\Pi$;
2.
All monadically stable classes are sparsifiable (i.e. transduction-equivalent
to some weakly sparse class).
###### Proof 21.9.
(1)$\Rightarrow$(2): Let $\mathscr{C}$ be a monadically stable class, let
$\mathscr{C}^{\downarrow}$ be the set of all transductions of $\mathscr{C}$
and let $\Pi_{s}=\mathscr{C}^{\downarrow}\cap\Sigma$. Obviously,
$\mathscr{C}^{\downarrow}$ is included in the dense analogue of $\Pi_{s}$.
Thus, (1) implies that $\mathscr{C}$ is a transduction of a class
$\mathscr{D}\in\Pi_{s}$, which is itself a transduction of $\mathscr{C}$.
Hence, $\mathscr{C}$ is sparsifiable.
(2)$\Rightarrow$(1): Let $\Pi_{s}$ be a monotone sparse transduction downset
with dense analogue $\Pi$ and let $\mathscr{C}$ be a (monadically) stable
class in $\Pi$. By (2) $\mathscr{C}$ is transduction-equivalent to a weakly
sparse class $\mathscr{D}$. As $\Pi\cap\Sigma=\Pi_{s}$, we have
$\mathscr{D}\in\Pi_{s}$. Hence, $\mathscr{C}\in\Pi_{s}^{\downarrow}$.
For the case where $\Pi_{s}$ is the property of having bounded expansion,
Conjecture 84 restates as follows:
###### Conjecture 86.
If a monadically stable class does not have structurally bounded expansion,
then it has a transduction to a class that is nowhere dense but not bounded
expansion.
###### Proposition 87.
Conjectures 84 and 86 are equivalent.
###### Proof 21.10.
The only direction to prove is that Conjecture 86 implies Conjecture 84.
Assume Conjecture 86 holds and let $\Pi_{s}$ be a sparse transduction downset
of bounded expansion classes with dense analogue $\Pi$. Let $\mathscr{C}$ be a
monadically stable class. If $\mathscr{C}$ does not have structurally bounded
expansion, then there exists a nowhere dense class
$\mathscr{D}\sqsubseteq\mathscr{C}$ that does not have bounded expansion.
Thus, $\mathscr{D}\notin\Pi_{s}$ and $\mathscr{C}\notin\Pi$. Otherwise,
$\mathscr{C}$ is transduction-equivalent to a class $\mathscr{D}$ with bounded
expansion. Then,
$\mathscr{C}\in\Pi\iff\mathscr{D}\in\Pi\iff\mathscr{D}\in\Pi_{s}\iff\mathscr{C}\in\Pi_{s}^{\downarrow}.$
One main difficulty in proving all these conjectures is the non-monotonicity
of the involved classes. The following, if true, would provide a great tool.
Recall that a graph $G$ is _$k$-degenerate_ if every induced subgraph of $G$
has minimum degree at most $k$. A class of graphs $\mathscr{C}$ is
_degenerate_ if there exists $k\in\mathbb{N}$ such that all the graphs in
$\mathscr{C}$ are $k$-degenerate.
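The degeneracy of a single graph can be computed greedily by repeatedly deleting a vertex of minimum degree; the largest degree encountered at removal time is the smallest $k$ for which the graph is $k$-degenerate. The following minimal sketch (not from the paper; the function name and edge-list input are our choices) illustrates the definition:

```python
from collections import defaultdict

def degeneracy(edges):
    """Smallest k such that every induced subgraph has a vertex of
    degree at most k, computed by repeatedly removing a vertex of
    minimum degree from the current (induced) subgraph."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    k = 0
    while adj:
        v = min(adj, key=lambda w: len(adj[w]))  # vertex of minimum degree
        k = max(k, len(adj[v]))                   # degree at removal time
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return k
```

For instance, trees are $1$-degenerate, cycles are $2$-degenerate, and the complete graph $K_{4}$ is $3$-degenerate.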
###### Conjecture 88.
Every weakly sparse non-degenerate class has a transduction to a monotone non-
degenerate class.
###### Conjecture 89.
For every weakly sparse class of graphs $\mathscr{C}$,
* •
either $\mathscr{C}$ has a transduction to its monotone closure,
* •
or $\mathscr{C}$ has a transduction to a monotone nowhere dense class with
unbounded fractional chromatic number.
Though Conjecture 89 seems to be stronger than Conjecture 88, this is not the
case.
###### Proposition 90.
Conjectures 88 and 89 are equivalent.
###### Proof 21.11.
On the one hand, Conjecture 89 obviously implies Conjecture 88. On the other
hand, assume Conjecture 88 holds and let $\mathscr{C}$ be a weakly sparse
class of graphs. If $\mathscr{C}$ has bounded expansion or is not monadically
dependent, then $\mathscr{C}$ has a transduction to its monotone closure.
Otherwise, $\mathscr{C}$ is nowhere dense but not bounded expansion. If
$\mathscr{C}$ is degenerate, then it follows from [Dvo18] that $\mathscr{C}$
contains shallow induced subdivisions of graphs with arbitrarily large
minimum degree, and thus transduces onto a non-degenerate monotone (monadically
dependent) class. On the other hand, if $\mathscr{C}$ is non-degenerate, we
have the same property according to Conjecture 88. Thus, we can reduce to the
case where $\mathscr{C}$ is a non-degenerate monotone monadically dependent
class. In particular, $\mathscr{C}$ is nowhere dense. Then it follows from
[DOdMW20] that $\mathscr{C}$ contains $1$-subdivisions of graphs with
unbounded fractional chromatic number.
### References
* [AA14] H. Adler and I. Adler. Interpreting nowhere dense graph classes as a classical notion of model theory. European Journal of Combinatorics, 36:322–330, 2014.
* [AMR92] N. Alon, C. McDiarmid, and B. Reed. Star arboricity. Combinatorica, 12:375–380, 1992.
* [BBGT23] E. Bonnet, R. Bourneuf, C. Geniet, and S. Thomassé. Factoring pattern-free permutations into separable ones. arXiv preprint, 2023.
* [BC10] A. Blumensath and B. Courcelle. On the monadic second-order transduction hierarchy. Logical Methods in Computer Science, 6(2), 2010. doi:10.2168/LMCS-6(2:2)2010.
* [BCK+22] E. Bonnet, D. Chakraborty, E.J. Kim, N. Köhler, R. Lopes, and S. Thomassé. Twin-width VIII: delineation and win-wins. arXiv preprint arXiv:2204.00722, 2022.
* [BGK+22] E. Bonnet, C. Geniet, E.J. Kim, S. Thomassé, and R. Watrigant. Twin-width II: small classes. Combinatorial Theory, 2(2):#10, 2022. doi:10.5070/C62257876.
* [BGTT22] E. Bonnet, C. Geniet, R. Tessera, and S. Thomassé. Twin-width VII: groups. arXiv preprint arXiv:2204.12330, 2022.
* [BKTW20] E. Bonnet, E.J. Kim, S. Thomassé, and R. Watrigant. Twin-width I: tractable FO model checking. In 61st Annual Symposium on Foundations of Computer Science (FOCS 2020), pages 601–612. IEEE, 2020.
* [BL22a] S. Braunfeld and M.C. Laskowski. Existential characterizations of monadic NIP. arXiv preprint arXiv:2209.05120, 2022.
* [BL22b] S. Braunfeld and M.C. Laskowski. Worst-case expansions of complete theories. Model Theory, 1(1):15–30, 2022.
* [BNOdM+21] E. Bonnet, J. Nešetřil, P. Ossona de Mendez, S. Siebertz, and S. Thomassé. Twin-width and permutations. 2021.
* [BS85] J. T. Baldwin and S. Shelah. Second-order quantifiers and the complexity of theories. Notre Dame Journal of Formal Logic, 26(3):229–303, 1985.
* [CE12] B. Courcelle and J. Engelfriet. Graph structure and monadic second-order logic: a language-theoretic approach, volume 138. Cambridge University Press, 2012.
* [CO00] B. Courcelle and S. Olariu. Upper bounds to the clique width of graphs. Discrete Applied Mathematics, 101(1-3):77–114, 2000.
* [Col07] T. Colcombet. A combinatorial theorem for trees. In International Colloquium on Automata, Languages, and Programming, pages 901–912. Springer, 2007.
* [Cou90] B. Courcelle. The monadic second-order logic of graphs. I. Recognizable sets of finite graphs. Information and computation, 85(1):12–75, 1990.
* [Cou92] B. Courcelle. The monadic second-order logic of graphs VII: Graphs as relational structures. Theoretical Computer Science, 101(1):3–33, 1992.
* [Die12] R. Diestel. Graph theory, volume 173 of. Graduate texts in mathematics, page 7, 2012.
* [DO95] G. Ding and B. Oporowski. Some results on tree decomposition of graphs. Journal of Graph Theory, 20(4):481–499, 1995. doi:10.1002/jgt.3190200412.
* [DOdMW20] Z. Dvořák, P. Ossona de Mendez, and H. Wu. $1$-subdivisions, fractional chromatic number and Hall ratio. Combinatorica, 40(6):759–774, 2020. doi:10.1007/s00493-020-4223-9.
* [Drm15] M. Drmota. Trees. In Handbook of Enumerative Combinatorics, pages 281–334. Chapman and Hall/CRC, 2015.
* [Dvo18] Z. Dvořák. Induced subdivisions and bounded expansion. European Journal of Combinatorics, 69:143 – 148, 2018. doi:10.1016/j.ejc.2017.10.004.
* [EK63] P. Erdős and P. J. Kelly. The minimal regular graph containing a given graph. Amer. Math. Monthly, 70:1074–1075, 1963.
* [FPST18] G. Fabiański, M. Pilipczuk, S. Siebertz, and S. Toruńczyk. Progressive algorithms for domination and independence. arXiv preprint arXiv:1811.06799, 2018.
* [FV59] S. Feferman and R. Vaught. The first order properties of products of algebraic systems. Fundamenta Mathematicae, 47(1):57–103, 1959.
* [GHN+19] R. Ganian, P. Hliněný, J. Nešetřil, J. Obdržálek, and P. Ossona de Mendez. Shrub-depth: Capturing height of dense graphs. Logical Methods in Computer Science, 15, 2019.
* [GHO+20] J. Gajarský, P. Hliněný, J. Obdržálek, D. Lokshtanov, and M.S. Ramanujan. A new perspective on FO model checking of dense graph classes. ACM Transactions on Computational Logic (TOCL), 21(4):1–23, 2020.
* [GKN+20] J. Gajarský, S. Kreutzer, J. Nešetřil, P. Ossona de Mendez, M. Pilipczuk, S. Siebertz, and S. Toruńczyk. First-order interpretations of bounded expansion classes. ACM Transactions on Computational Logic, 21(4):Article 29, 2020. doi:10.1145/3382093.
* [GKS17] M. Grohe, S. Kreutzer, and S. Siebertz. Deciding first-order properties of nowhere dense graphs. Journal of the ACM, 64(3):17:1–17:32, 2017.
* [GPT22] J. Gajarskỳ, M. Pilipczuk, and S. Toruńczyk. Stable graphs of bounded twin-width. In Proceedings of the 37th Annual ACM/IEEE Symposium on Logic in Computer Science, pages 1–12, 2022.
* [Gro03] Martin Grohe. Local tree-width, excluded minors, and approximation algorithms. Combinatorica, 23(4):613–632, 2003.
* [GW00a] F. Gurski and E. Wanke. The Tree-Width of Clique-Width Bounded Graphs without $K_{n,n}$, volume 1928 of Lecture Notes in Computer Science, pages 196–205. Springer Berlin Heidelberg, 2000.
* [GW00b] F. Gurski and E. Wanke. The tree-width of clique-width bounded graphs without $K_{n,n}$. In International Workshop on Graph-Theoretic Concepts in Computer Science, pages 196–205. Springer, 2000.
* [Hod93] W. Hodges. Model theory, volume 42. Cambridge University Press, 1993.
* [Las13] M. C. Laskowski. Mutually algebraic structures and expansions by predicates. The Journal of Symbolic Logic, 78(1):185–194, 2013.
* [Lib04] Leonid Libkin. Elements of Finite Model Theory. Springer, 2004.
* [Loz08] V. V. Lozin. From tree-width to clique-width: Excluding a unit interval graph. In Seok-Hee Hong, Hiroshi Nagamochi, and Takuro Fukunaga, editors, Algorithms and Computation, pages 871–882, Berlin, Heidelberg, 2008. Springer Berlin Heidelberg.
* [LT22] M. C. Laskowski and C. A. Terry. Jumps in speeds of hereditary properties in finite relational languages. Journal of Combinatorial Theory, Series B, 154:93–135, 2022.
* [Mak04] J. A. Makowsky. Algorithmic uses of the Feferman–Vaught theorem. Annals of Pure and Applied Logic, 126(1-3):159–213, 2004.
* [NOdM03] J. Nešetřil and P. Ossona de Mendez. Colorings and homomorphisms of minor closed classes. In B. Aronov, S. Basu, J. Pach, and M. Sharir, editors, The Goodman-Pollack Festschrift, volume 25 of Algorithms and Combinatorics, pages 651–664. Discrete & Computational Geometry, 2003.
* [NOdM05] J. Nešetřil and P. Ossona de Mendez. The grad of a graph and classes with bounded expansion. In A. Raspaud and O. Delmas, editors, 7th International Colloquium on Graph Theory, volume 22 of Electronic Notes in Discrete Mathematics, pages 101–106. Elsevier, 2005. doi:10.1016/j.endm.2005.06.018.
* [NOdM06] J. Nešetřil and P. Ossona de Mendez. Linear time low tree-width partitions and algorithmic consequences. In STOC’06. Proceedings of the 38th Annual ACM Symposium on Theory of Computing, pages 391–400. ACM Press, 2006. doi:10.1145/1132516.1132575.
* [NOdM08a] J. Nešetřil and P. Ossona de Mendez. Grad and classes with bounded expansion I. Decompositions. European Journal of Combinatorics, 29(3):760–776, 2008. doi:10.1016/j.ejc.2006.07.013.
* [NOdM08b] J. Nešetřil and P. Ossona de Mendez. Grad and classes with bounded expansion I. Decompositions. European Journal of Combinatorics, 29(3):760–776, 2008.
* [NOdM10a] J. Nešetřil and P. Ossona de Mendez. From sparse graphs to nowhere dense structures: Decompositions, independence, dualities and limits. In European Congress of Mathematics, pages 135–165. European Mathematical Society, 2010. doi:10.4171/077-1/7.
* [NOdM10b] J. Nešetřil and P. Ossona de Mendez. Sparse combinatorial structures: Classification and applications. In R. Bhatia and A. Pal, editors, Proceedings of the International Congress of Mathematicians 2010 (ICM 2010), volume IV, pages 2502–2529, Hyderabad, India, 2010. World Scientific. URL: http://eproceedings.worldscinet.com/9789814324359/9789814324359_0156.html.
* [NOdM12] J. Nešetřil and P. Ossona de Mendez. Sparsity (Graphs, Structures, and Algorithms), volume 28 of Algorithms and Combinatorics. Springer, 2012. 465 pages.
* [NOdM17] J. Nešetřil and P. Ossona de Mendez. Cluster analysis of local convergent sequences of structures. Random Structures & Algorithms, 51(4):674–728, 2017.
* [NOdMP+21] J. Nešetřil, P. Ossona de Mendez, M. Pilipczuk, R. Rabinovich, and S. Siebertz. Rankwidth meets stability. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2014–2033, 2021. doi:10.1137/1.9781611976465.120.
* [NOdMRS20] J. Nešetřil, P. Ossona de Mendez, R. Rabinovich, and S. Siebertz. Linear rankwidth meets stability. In Shuchi Chawla, editor, Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, pages 1180–1199, 2020.
* [Noy15] M. Noy. Graphs. In Handbook of Enumerative Combinatorics, pages 397–436. Chapman and Hall/CRC, 2015.
* [NT00] J. Nešetřil and C. Tardif. Duality theorems for finite structures (characterising gaps and good characterisations). Journal of Combinatorial Theory, Series B, 80(1):80–97, 2000.
* [Pal18] D. Palacín. An introduction to stability theory. In Lectures in model theory. 2018.
* [POdMS22] M. Pilipczuk, P. Ossona de Mendez, and S. Siebertz. Transducing paths in graph classes with unbounded shrubdepth. European Journal of Combinatorics, page 103660, 2022. doi:10.1016/j.ejc.2022.103660.
* [PT99] A. Proskurowski and J. A. Telle. Classes of graphs with restricted interval models. Discrete Mathematics & Theoretical Computer Science, 3, 1999.
* [PZ78] K.-P. Podewski and M. Ziegler. Stable graphs. Fund. Math., 100:101–107, 1978.
* [RS86] N. Robertson and P. D. Seymour. Graph minors. V. Excluding a planar graph. Journal of Combinatorial Theory, Series B, 41(1):92–114, 1986.
* [She90] S. Shelah. Classification theory: and the number of non-isomorphic models, volume 92. Elsevier, 1990.
* [Sim15] P. Simon. A guide to NIP theories. Cambridge University Press, 2015.
MinMax Networks
Winfried Lohmiller, Philipp Gassert, and Jean-Jacques Slotine
Nonlinear Systems Laboratory
Massachusetts Institute of Technology
Cambridge, Massachusetts, 02139, USA
<EMAIL_ADDRESS>
###### Abstract
While much progress has been achieved over the last decades in neuro-inspired
machine learning, there are still fundamental theoretical problems in
gradient-based learning using combinations of neurons. These problems, such as
saddle points and suboptimal plateaus of the cost function, can lead in theory
and practice to failures of learning. In addition, the discrete step size
selection of the gradient is problematic since too large steps can lead to
instability and too small steps slow down the learning.
This paper describes an alternative discrete MinMax learning approach for
continuous piece-wise linear functions. Global exponential convergence of the
algorithm is established using Contraction Theory with Inequality Constraints
[6], which is extended from the continuous to the discrete case in this paper:
* •
The parametrization of each linear function piece is, in contrast to deep
learning, linear in the proposed MinMax network. This allows a linear
regression stability proof as long as measurements do not transit from one
linear region to its neighbouring linear region.
* •
The step size of the discrete gradient descent is limited by a Lagrangian
constraint orthogonal to the edge of two neighbouring linear functions. It
will be shown that this Lagrangian step limitation does not decrease the
convergence of the unconstrained system dynamics, in contrast to a step size
limitation in the direction of the gradient.
We show that the convergence rate of constrained piece-wise linear function
learning is equivalent to the exponential convergence rates of the individual
local linear regions.
## 1 Introduction
In this paper, we revisit standard convergence difficulties of gradient
descent on a quadratic error cost, such as the possible presence of saddle
points, sub-optimal plateaus, non-Lipschitz edges, time-varying measurements
and the time discretization of the gradient. Initial results of the MinMax
learning we discuss were derived in [7][8].
The classical Rectified Linear Unit (ReLU) approach as e.g. in [4] implies a
piece-wise linear approximation function. The edges of the linear surfaces are
not Lipschitz continuous. Hence it is difficult to prove stability and even
uniqueness of the solution.
The stability problem of the non-Lipschitz edges is overcome in this paper
with a Lagrangian constraint step (2) to the edge. Since the edge belongs to
both Lipschitz continuous regions it is possible at the next iteration to
transit to the neighbouring region with a Lipschitz continuous step. Hence as
a first step Contraction Theory of Constrained Continuous Systems [6] is
generalized in section 2 to discrete dynamics of the form
${\bf x}^{i+1}={\bf f}({\bf x}^{i},i)$ (1)
with $n$-dimensional discrete state $x^{i}$, time index $i$ and
$l=1,...,L$-dimensional linear inequality constraints
$g_{l}={\bf g}_{l}^{T}(i+1){\bf x}^{i+1}+h_{l}(i+1)\leq 0.$ (2)
Note that many non-linear constraints can be brought into a linear form with a
coordinate transformation, such that the results of this paper apply again.
Another problem in the stability analysis of deep learning is that e.g. a
network of depth $100$ actually multiplies $100$ linear parameters with each
other. Hence the smooth surface pieces are piece-wise polynomials of order
$100$ w.r.t. the chosen parametrization, although the approximation function
is actually piece-wise linear. To overcome this problem this paper suggests to
use the sum of several piece-wise linear functions
$\hat{y}({\bf x})=\sum_{j=1}^{J_{\min}}\hat{y}_{j\min}({\bf x})+\sum_{j=J_{\min}+1}^{J_{\max}}\hat{y}_{j\max}({\bf x})$ (3)
which uses the convex and concave neurons $j$
$\hat{y}_{j\min}({\bf x})=\min(\hat{z}_{j1},...,\hat{z}_{jK_{j}}),\qquad\hat{y}_{j\max}({\bf x})=\max(\hat{z}_{j1},...,\hat{z}_{jK_{j}})$
which consist of $k=1,...,K_{j}$ linear basic neurons $\hat{z}_{jk}={\bf
x}^{T}\hat{\bf w}_{jk}$ of estimated $N+1$-dimensional weight vector $\hat{\bf
w}_{jk}$.
The concave $\max$ and convex $\min$ functions are a direct generalization of
the ReLU to achieve piece-wise linear functions. Multiple local convex and
concave functions can be approximated with multiple $\min$ and $\max$
operators. The key advantage to deep networks is that the parametrization is
still linear in $\hat{\bf w}_{jk}$ between the edges, i.e. linear stability
proofs can be used with the mentioned step size limitation to the edge.
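To make the network (3) concrete, evaluating a MinMax network only requires min/max aggregation of linear basic neurons $\hat{z}_{jk}={\bf x}^{T}\hat{\bf w}_{jk}$ on the bias-augmented input. A minimal sketch (function name and data layout are our assumptions, not from the paper):

```python
import numpy as np

def minmax_eval(x, min_groups, max_groups):
    """Evaluate a MinMax network (3): a sum of min- and max-aggregated
    linear basic neurons z_{jk} = x^T w_{jk}.  Each group is a list of
    (N+1)-dimensional weight vectors for the augmented input (x', 1)."""
    x = np.append(np.asarray(x, dtype=float), 1.0)  # augment with bias 1
    y = 0.0
    for W in min_groups:                  # min neurons y_{j min}
        y += np.min(np.asarray(W) @ x)
    for W in max_groups:                  # max neurons y_{j max}
        y += np.max(np.asarray(W) @ x)
    return y
```

For example, a single max group with the planes $0$ and $x_{1}$ reproduces a ReLU, and a single min group reproduces a concave piecewise-linear roof.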
The following example shows the main modeling difference of both approaches:
* ###### Example 1.1
: Let us consider the unit pyramid in subfigure 1a
$y({\bf x})=\max(0,\min(x_{1}+1,x_{2}+1,-x_{1}+1,-x_{2}+1)).$
In a deep ReLU network each ReLU adds an edge to the piece-wise linear
function. A ReLU network estimation of depth 2 exactly models the pyramid
above with
$\hat{y}({\bf x})=\max(0,\hat{z}_{11}),\qquad\hat{z}_{11}=1-0.5\left(\hat{y}_{21}+\hat{y}_{22}+\hat{y}_{23}+\hat{y}_{24}\right),\qquad\hat{y}_{2j}=\max(0,\hat{z}_{2j})$
$\hat{z}_{21}=-x_{1}-x_{2},\ \hat{z}_{22}=x_{1}-x_{2},\ \hat{z}_{23}=x_{1}+x_{2},\ \hat{z}_{24}=-x_{1}+x_{2}$
The lower ReLUs $\hat{y}_{2j}=\max(0,\hat{z}_{2j})$ define the 4 edges of
the pyramid without the ground, leading to the linear input $\hat{z}_{11}$ to the
last layer in subfigure 1b. The upper ReLU $\hat{y}=\max(0,\hat{z}_{11})$ then
adds the remaining 4 edges to the ground. In general, a high over-
parametrization is needed to approximate a piece-wise linear function with
deep ReLUs [4]. It is also not easy to see which edge is defined
by which ReLU in which layer.
In contrast to the above the proposed MinMax approach (3) systematically
defines
* –
all convex edges of the pyramid in $\hat{y}_{1\max}$ and
* –
all concave edges of the pyramid in $\hat{y}_{2\min}$
with
$\hat{y}({\bf x})=\hat{y}_{1\max}+\hat{y}_{2\min}$
$\hat{y}_{1\max}=\max(\hat{z}_{11},\hat{z}_{12},\hat{z}_{13},\hat{z}_{14},\hat{z}_{15}),\qquad\hat{y}_{2\min}=\min(\hat{z}_{21},\hat{z}_{22},\hat{z}_{23},\hat{z}_{24})$
$\hat{z}_{11}=0,\ \hat{z}_{12}=x_{1}+1,\ \hat{z}_{13}=x_{2}+1,\ \hat{z}_{14}=-x_{1}+1,\ \hat{z}_{15}=-x_{2}+1$
$\hat{z}_{21}=x_{1}+1,\ \hat{z}_{22}=x_{2}+1,\ \hat{z}_{23}=-x_{1}+1,\ \hat{z}_{24}=-x_{2}+1$
where subfigures 1c and 1d show the convex and concave neurons of the MinMax
approach. The legend indicates which basic neurons produce a nonzero output
for the colored surfaces.
(a) Unit pyramid
(b) Intermediate neuron $\hat{z}_{11}$ of ReLU network
(c) Concave neuron $\hat{y}_{1\max}$ of MinMax
(d) Convex neuron $\hat{y}_{2\min}$ of MinMax
Figure 1: Unit pyramid, intermediate ReLU neuron and a MinMax network with two
neurons
$\Box$
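The two representations of the pyramid can be cross-checked numerically. In the sketch below the ReLU weights follow the example; for the MinMax form we choose the max-neuron planes via the identity $\max(0,m)=m+\max(0,-m)$, i.e. the negated side planes together with the zero plane. That particular weight choice is our reconstruction for illustration, not necessarily the weight values printed in the example:

```python
import numpy as np

def pyramid(x1, x2):
    # ground-truth unit pyramid of Example 1.1
    return max(0.0, min(x1 + 1, x2 + 1, -x1 + 1, -x2 + 1))

def relu_net(x1, x2):
    # depth-2 ReLU model from the example
    y2 = [max(0.0, z) for z in (-x1 - x2, x1 - x2, x1 + x2, -x1 + x2)]
    return max(0.0, 1 - 0.5 * sum(y2))

def minmax_net(x1, x2):
    # one min neuron (the four side planes) plus one max neuron whose
    # planes realize max(0, m) = m + max(0, -m)  (our choice of weights)
    y_min = min(x1 + 1, x2 + 1, -x1 + 1, -x2 + 1)
    y_max = max(0.0, -x1 - 1, -x2 - 1, x1 - 1, x2 - 1)
    return y_min + y_max

# all three functions agree on a grid covering the pyramid and its ground
for x1 in np.linspace(-2, 2, 21):
    for x2 in np.linspace(-2, 2, 21):
        assert abs(pyramid(x1, x2) - relu_net(x1, x2)) < 1e-12
        assert abs(pyramid(x1, x2) - minmax_net(x1, x2)) < 1e-12
```

The check relies on $\min(x_{1}+1,x_{2}+1,-x_{1}+1,-x_{2}+1)=1-\max(|x_{1}|,|x_{2}|)$ and on the four lower ReLUs summing to $2\max(|x_{1}|,|x_{2}|)$.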
Motivated by the above, section 3 introduces a piece-wise linear MinMax
discrete function learning (3) for the $N$-dimensional case. The approach
still uses gradient descent on a quadratic cost in discrete time. Exact
exponential stability guarantees are provided using the results of section 2.
Saddle points or sub-optimal plateaus are avoided with a linear
parametrization. Possible instabilities at the non-Lipschitz edges are avoided
with intermediate Lagrange constraints (2). Note that time-varying
measurements and the time discretization of the gradient are part of the
exponential stability proof. Finally, the result is summarised in the Summary.
## 2 Discrete-time constrained systems
This section extends Contraction Analysis of Continuous Constrained Systems
[6] to the discrete case:
A constraint $l$ in (2) only has an impact on the unconstrained dynamics (1)
if the inequality turns into an equality $g_{l}=0$ , leading to the following
definition.
###### Definition 1
The set of active constraints $\mathcal{A}({\bf x}^{i+1},i+1)\subseteq\{1,...,L\}$ contains the elements $l$ of (2) which are
on the boundary of the original constraint
$g_{l}={\bf g}_{l}^{T}(i+1){\bf x}^{i+1}+h_{l}(i+1)=0.$
The constrained dynamic equations (1, 2) are then of the form
${\bf x}^{i+1}={\bf f}({\bf x}^{i},i)+\sum_{all\ l\in\mathcal{A}}{\bf
g}_{l}{\lambda}_{l}.$ (4)
No constraint $g_{l},l\in\mathcal{A}$ is violated at $i+1$ if
${g}_{l}={\bf g}^{T}_{l}(i+1)\left({\bf f}({\bf x}^{i},i)+\sum_{all\
l\in\mathcal{A}}{\bf g}_{l}(i+1){\lambda}_{l}\right)+h_{l}(i+1)\leq 0$
leading to the following definition:
###### Definition 2
The set of Lagrange multipliers $\lambda_{l},l\in\mathcal{A}$ of (2) is the
solution of the linear programming problem
$\text{maximize}\ \sum_{all\ l\in\mathcal{A}}\lambda_{l}\ \ \text{subject to}\qquad{\bf g}^{T}_{l}\left({\bf f}({\bf x}^{i},i)+\sum_{all\ l\in\mathcal{A}}{\bf g}_{l}{\lambda}_{l}\right)+h_{l}\leq 0,\qquad\lambda_{l}\leq 0$
where, similarly to [6], we require that the constraint term points into the
interior of the constraint, i.e. $\lambda_{l}\leq 0$, and the negative
$\lambda_{l}$ are maximized [3] to minimize their joint usage. The problem
above corresponds to the initial solution problem of Linear Programming (LP),
see e.g. the LP section in [2]. Diverse solution methods exist for this
problem, such as the simplex algorithm in the LP section of [2].
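To illustrate Definition 2 in the simplest setting: when the active constraint normals are mutually orthogonal, the LP decouples and each multiplier has a closed form. The sketch below (our simplification; the coupled general case would call an actual LP solver such as scipy.optimize.linprog) performs one constrained step (4):

```python
import numpy as np

def constrained_step(f_x, G, h):
    """Discrete step (4): x^{i+1} = f(x^i, i) + sum_l g_l * lambda_l.
    Assumes the active normals g_l (the columns of G) are mutually
    orthogonal, so the LP of Definition 2 decouples and each lambda_l
    has the closed form below."""
    f_x = np.asarray(f_x, dtype=float)
    x_next = f_x.copy()
    lams = []
    for l in range(G.shape[1]):
        g = G[:, l]
        viol = g @ f_x + h[l]             # constraint value g_l^T f + h_l
        lam = min(0.0, -viol / (g @ g))   # lambda_l <= 0, pushes back onto g_l^T x + h_l = 0
        x_next += lam * g
        lams.append(lam)
    return x_next, lams
```

A violated constraint yields a negative multiplier that projects the unconstrained step back onto the boundary $g_{l}=0$; an inactive constraint leaves $\lambda_{l}=0$ and the step untouched.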
Similar to [6] we introduce a virtual displacement between two neighbouring
trajectories, which is constrained by $g_{l}=0$ at the active
$l\in\mathcal{A}$. This virtual displacement has to be parallel to $g_{l}=0$,
i.e. orthogonal to the normals ${\bf g}_{l}$, which implies
${\bf\delta x}^{i+1}={\bf G}_{\parallel}(i+1){\bf\delta x}^{i+1\ast},\qquad{\bf G}_{\parallel}^{T}{\bf G}_{\parallel}={\bf I},\qquad{\bf G}_{\parallel}^{T}{\bf g}_{l}(i+1)={\bf 0}\ \ \forall\ l\in\mathcal{A}$ (5)
where ${\bf\delta x}^{i+1\ast}$ is the reduced virtual displacement, of
dimension $n$ minus the number of active constraints in $\mathcal{A}$.
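A basis ${\bf G}_{\parallel}$ satisfying (5) can be computed numerically as an orthonormal basis of the orthogonal complement of the active normals, for instance from an SVD. This is our construction for illustration, one of several possible:

```python
import numpy as np

def tangential_basis(G_active):
    """Orthonormal basis G_par of the tangent space of the active
    constraints, as in (5): G_par^T G_par = I and G_par^T g_l = 0 for
    each active normal g_l (the columns of G_active)."""
    u, s, _ = np.linalg.svd(G_active, full_matrices=True)
    rank = int(np.sum(s > 1e-12))   # number of independent active normals
    return u[:, rank:]              # n x (n - rank) complement basis
```

The returned matrix has exactly $n$ minus the number of independent active constraints columns, matching the dimension of ${\bf\delta x}^{i+1\ast}$.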
The constrained dynamic equations (4) can be rewritten as:
${\bf x}^{i+1}={\bf f}({\bf x}^{i},i)+\sum_{all\ l\in\mathcal{A}}{\bf
g}_{l}\lambda_{l}={\bf f}({\bf x}^{i},i)+\sum_{l=1}^{L}{\bf
g}_{l}step(g_{l})\lambda_{l}$
with the step function
$step(g_{l})=\begin{cases}0&\text{for }g_{l}<0\\ 1&\text{for }g_{l}=0\\ \text{undefined}&\text{for }g_{l}>0\end{cases}$
whose variation is
$\delta{\bf x}^{i+1}=\frac{\partial{\bf f}}{\partial{\bf x}^{i}}{\bf\delta x}^{i}+\sum_{l=1}^{L}\left({\bf g}_{l}\frac{\partial\lambda_{l}}{\partial{\bf x}^{i+1}}+{\bf g}_{l}\frac{\partial\,step(g_{l})}{\partial g_{l}}{\bf g}_{l}^{T}\lambda_{l}\right){\bf\delta x}^{i+1}$
Now the squared virtual length dynamics with a metric ${\bf M}^{i}({\bf
x}^{i},i)$ can be bounded as
$\frac{1}{2}\,{\bf\delta x}^{i+1T}{\bf M}^{i+1}{\bf\delta x}^{i+1}={\bf\delta x}^{iT}\frac{\partial{\bf f}}{\partial{\bf x}^{i}}^{T}{\bf M}^{i+1}\frac{\partial{\bf f}}{\partial{\bf x}^{i}}{\bf\delta x}^{i}+\frac{1}{2}\,{\bf\delta x}^{i+1T}\left({\bf M}^{i+1}{\bf g}_{l}\frac{\partial\,step(g_{l})}{\partial g_{l}}{\bf g}_{l}^{T}\right)_{H}\lambda_{l}{\bf\delta x}^{i+1}$
where we used ${\bf G}_{\parallel}^{T}{\bf
g}_{l}\frac{\partial\lambda_{l}}{\partial{\bf x}^{i+1}}={\bf 0}$ since on the
constraint the first two terms vanish and outside the constraint the last term
vanishes. The Dirac impulse $\frac{\partial step(g_{l})}{\partial g_{l}}$
discontinuously sets the virtual displacement ${\bf\delta x}^{i+1}$ to ${\bf
G}_{\parallel}{\bf\delta x}^{i+1\ast}$ when a constraint is activated.
Thus the dynamics of ${\bf\delta x}^{i+1T}{\bf\delta x}^{i+1}$ is composed of
exponentially convergent continuous segments and an enforcement of ${\bf\delta
x}^{i+1}$ to ${\bf G}_{\parallel}{\bf\delta x}^{i+1\ast}$ at the activation of
a constraint. Let us summarize this result.
###### Theorem 1
Consider the discrete dynamics
${\bf x}^{i+1}={\bf f}({\bf x}^{i},i)+\sum_{all\ l\in\mathcal{A}}{\bf
g}_{l}\lambda_{l}$ (6)
within the metric ${\bf M}({\bf x}^{i},i)$, constrained by the $L$ linear
inequality constraints ($l=1,...,L$)
$g_{l}={\bf g}_{l}^{T}(i+1){\bf x}^{i+1}+h_{l}(i+1)\leq 0$ (7)
The set of active constraints $\mathcal{A}$ and the Lagrange multipliers
$\lambda_{l}$ are given in Definitions 1 and 2.
The distance $s=\min\int_{{\bf x}^{i}(s)={\bf x}^{i}_{1}}^{{\bf
x}^{i}_{2}}\sqrt{\delta{\bf x}^{iT}{\bf M}^{i}\delta{\bf x}^{i}}$ within
$\mathbb{G}^{n}$ from any trajectory ${\bf x}^{i}_{1}(t)$ to any other
trajectory ${\bf x}^{i}_{2}(t)$ converges exponentially to $0$ with an
exponential convergence rate $\leq\max_{along\ s}(\sigma_{\max}({\bf
x}^{i},i)),\sigma_{\max}>0$ ($\geq\min_{along\ s}(\sigma_{\min}({\bf
x}^{i},i)),\sigma_{\min}>0$) with
$\sigma_{\min}^{2}\,{\bf G}^{T}_{\parallel}{\bf M}^{i}{\bf G}_{\parallel}\ \leq\ {\bf G}^{T}_{\parallel}\frac{\partial{\bf f}}{\partial{\bf x}^{i}}^{T}{\bf M}^{i+1}\frac{\partial{\bf f}}{\partial{\bf x}^{i}}{\bf G}_{\parallel}\ \leq\ \sigma_{\max}^{2}\,{\bf G}^{T}_{\parallel}{\bf M}^{i}{\bf G}_{\parallel}$ (8)
with the constrained tangential space ${\bf G}_{\parallel}$ from equation (5).
In addition the activation of a constraint discontinuously sets the virtual
displacement ${\bf\delta x}^{i+1}$ to ${\bf G}_{\parallel}{\bf\delta
x}^{i+1\ast}$.
Note that section 3.3. of [1] provides a stability condition on the
constrained step ${\bf f}_{i+1}-{\bf f}_{i}$ for the original contraction
mapping theorem. Theorem 1 makes this condition more concrete by giving
explicit conditions on the derivatives of ${\bf f}$. It also extends the
result to a metric. In addition all notes of the continuous theorem in [6]
apply here as well.
## 3 Discrete exponential stable learning
Let us assume a piece-wise linear $N$-dimensional measurement function
$y=y({\bf x})$
where the $N$-dimensional input vector ${\bf x}^{\prime}$ is augmented to the
$N+1$-dimensional input vector ${\bf x}=({\bf x}^{{}^{\prime}T},1)^{T}$. We
have $m=1,...,M$ measurements
$y_{m}^{i}=y({\bf x}_{m}^{i})$
with the measured input vector ${\bf x}^{i}_{m}$ at time index $i$. The goal
is to approximate $y_{m}^{i}$ with $\hat{y}_{m}^{i}({\bf x}_{m}^{i})$. We
achieve the approximation of the true function by minimizing the weighted cost
$V=\frac{1}{2}\sum_{m=1}^{M}\alpha_{m}^{2}\tilde{y}_{m}^{i2}$
with $\tilde{y}_{m}^{i}=\hat{y}_{m}^{i}({\bf x}_{m}^{i})-y^{i}_{m}({\bf
x}_{m}^{i})$ and measurement weight $\alpha_{m}^{2}({\bf x}_{m}^{i})\geq 0$.
The unconstrained parameter learning law is the classical gradient descent of
$V$
${\hat{\bf W}}_{j}^{i+1}={\hat{\bf W}}_{j}^{i}-\frac{\partial V}{\partial\hat{\bf W}_{j}}={\hat{\bf W}}_{j}^{i}-\sum_{m=1}^{M}{\bf A}_{j}({\bf x}_{m}^{i})\alpha_{m}^{2}\tilde{y}_{m}^{i}={\hat{\bf W}}_{j}^{i}-\sum_{m=1}^{M}{\bf A}_{j}({\bf x}_{m}^{i})\alpha_{m}^{2}\left(\sum_{l=1}^{J}{\bf A}_{l}^{T}({\bf x}_{m}^{i}){\hat{\bf W}}_{l}^{i}-y^{i}_{m}\right)$ (9)
with $\hat{\bf W}_{j}^{iT}=(\hat{\bf w}_{j1}^{iT},...,\hat{\bf
w}_{jK_{j}}^{iT})$ and the activation function
${\bf A}_{j}({\bf x})=\begin{cases}{\bf x}&\text{if }\hat{z}_{jk^{\ast}}=\hat{y}_{j\min}\ (\hat{y}_{j\max})\text{ for a }\min\ (\max)\text{ neuron}\\ {\bf 0}&\text{for all other }k\neq k^{\ast}\end{cases}$ (10)
where for multiple solutions
$\hat{z}_{jk{\ast}}=\hat{y}_{j\min}(\hat{y}_{j\max})$ only one activation is
set to ${\bf x}$ and all others to ${\bf 0}$. Note that the gradient in (9)
could additionally be multiplied by a gain $\alpha_{j}(i)$, which we omit here
for simplicity.
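As a hedged illustration, the unconstrained update (9) with the activation (10) can be sketched in Python for the case where every neuron is a $\min$ neuron; the array shapes and the helper name are our assumptions for illustration, not part of the paper.

```python
import numpy as np

def gradient_step(W, x, y, alpha2):
    """One unconstrained gradient step (9), assuming all neurons are min neurons.

    W      : list of (K_j, N+1) weight matrices, one per neuron j
    x      : (M, N+1) augmented measurement inputs
    y      : (M,) measured outputs
    alpha2 : (M,) measurement weights alpha_m^2
    """
    z = [x @ Wj.T for Wj in W]               # basic neurons z_jk, shape (M, K_j)
    y_hat = sum(zj.min(axis=1) for zj in z)  # MinMax output for each sample
    err = y_hat - y                          # tilde y_m
    W_new = []
    for Wj, zj in zip(W, z):
        k_star = zj.argmin(axis=1)           # active basic neuron per sample
        grad = np.zeros_like(Wj)
        for m in range(x.shape[0]):
            # activation (10): only the active basic neuron k* receives x_m
            grad[k_star[m]] += alpha2[m] * err[m] * x[m]
        W_new.append(Wj - grad)
    return W_new
```

Repeated application of this step on static measurements drives $\tilde{y}_{m}$ toward zero when the weights $\alpha_{m}^{2}$ are small enough for the contraction condition of Theorem 2 to hold.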
The weight dynamics (9) is equivalent to the measurement estimation dynamics
$\displaystyle\hat{y}^{i+1}_{n}$ $\displaystyle=$
$\displaystyle\sum_{j=1}^{J}{\bf A}_{j}^{T}({\bf x}_{n}^{i+1}){\hat{\bf
W}}_{j}^{i+1}$ (11) $\displaystyle=$
$\displaystyle\hat{y}^{i}_{n}-\sum_{j=1}^{J}{\bf A}_{j}^{T}({\bf
x}_{n}^{i})\sum_{m=1}^{M}{\bf A}_{j}({\bf
x}_{m}^{i})\alpha_{m}^{2}\tilde{y}_{m}.$
In this last formulation we assume static measurements ${\bf x}_{n}^{i+1}={\bf
x}_{n}^{i}$.
To assure Lipschitz continuity of the active basic neurons
$\hat{z}_{jk^{\ast}}$ at the learning step $i+1$ we have to exclude with
Theorem 1 a transition beyond the edges of the $\min$ and $\max$ operator with
the constraints
$\displaystyle g_{l}={\bf g}_{l}^{T}\hat{\bf W}_{j}^{i+1}={\bf
x}_{m}^{i+1T}\hat{\bf w}_{jk^{\ast}}^{i+1}-{\bf x}_{m}^{i+1T}\hat{\bf
w}_{jk}^{i+1}$ $\displaystyle\leq$ $\displaystyle 0\text{ for a }\min\text{
neuron j and all }k\neq k^{\ast}$ $\displaystyle g_{l}={\bf g}_{l}^{T}\hat{\bf
W}_{j}^{i+1}={\bf x}_{m}^{i+1T}\hat{\bf w}_{jk}^{i+1}-{\bf
x}_{m}^{i+1T}\hat{\bf w}_{jk^{\ast}}^{i+1}$ $\displaystyle\leq$ $\displaystyle
0\text{ for a }\max\text{ neuron j and all }k\neq k^{\ast}$
where the related Lagrange parameters are given in Definition 2.
Theorem 1 then implies global contraction behaviour with the largest and
smallest singular value of the variation of (9) or alternatively (11) within
the constraints above.
Summarizing the above leads to:
###### Theorem 2
Consider a piece-wise linear $N$-dimensional measurement function
$y=y({\bf x})$ (12)
where the $N$-dimensional input vector ${\bf x}^{\prime}$ is augmented to the
$N+1$-dimensional input vector ${\bf x}=(1,{\bf x}^{{}^{\prime}T})^{T}$. We
have $m=1,...,M$ measurements
$y_{m}^{i}=y({\bf x}_{m}^{i})$ (13)
with the measured input vector ${\bf x}_{m}^{i}$ at time index $i$. We
approximate (12) with
$\hat{y}({\bf x}_{m}^{i})=\sum_{j=1}^{J_{\min}}\hat{y}_{j\min}({\bf
x}_{m}^{i})+\sum_{j=J_{\min}}^{J_{\max}}\hat{y}_{j\max}({\bf x}_{m}^{i})$ (14)
which uses the convex and concave neurons $j$
$\displaystyle\hat{y}_{j\min}({\bf x}_{m}^{i})$ $\displaystyle=$
$\displaystyle\min(\hat{z}_{j1},...,\hat{z}_{jK_{j}})$
$\displaystyle\hat{y}_{j\max}({\bf x}_{m}^{i})$ $\displaystyle=$
$\displaystyle\max(\hat{z}_{j1},...,\hat{z}_{jK_{j}})$ (15)
which consist of $k=1,...,K_{j}$ linear basic neurons $\hat{z}_{jk}={\bf
x}_{m}^{iT}\hat{\bf w}_{jk}^{i}$ of estimated $N+1$-dimensional weight vector
$\hat{\bf w}_{jk}^{i}$ at time $i$ and the activation function (10). For a
neuron $j$ we constrain the dynamics with
$\displaystyle g_{l}={\bf g}_{l}^{T}\hat{\bf W}_{j}^{i+1}={\bf
x}_{m}^{i+1T}\hat{\bf w}_{jk^{\ast}}^{i+1}-{\bf x}_{m}^{i+1T}\hat{\bf
w}_{jk}^{i+1}$ $\displaystyle\leq$ $\displaystyle 0\text{ for a }\min\text{
neuron j, all }k\neq k^{\ast}$ $\displaystyle g_{l}={\bf g}_{l}^{T}\hat{\bf
W}_{j}^{i+1}={\bf x}_{m}^{i+1T}\hat{\bf w}_{jk}^{i+1}-{\bf
x}_{m}^{i+1T}\hat{\bf w}_{jk^{\ast}}^{i+1}$ $\displaystyle\leq$ $\displaystyle
0\text{ for a }\max\text{ neuron j, all }k\neq k^{\ast}$ (16)
with $\hat{\bf W}_{j}^{iT}=(\hat{\bf w}_{j1}^{iT},...,\hat{\bf
w}_{jK_{j}}^{iT})$. The constrained learning of the cost
$V=\frac{1}{2}\sum_{m=1}^{M}\alpha_{m}^{2}\tilde{y}_{m}^{i2}$ (17)
with $\tilde{y}_{m}^{i}=\hat{y}_{m}^{i}({\bf x}_{m}^{i})-y^{i}_{m}({\bf
x}_{m}^{i})$ and measurement weight $\alpha_{m}^{2}({\bf x}_{m}^{i})\geq 0$
$\displaystyle\hat{\bf W}_{j}^{i+1}$ $\displaystyle=$ $\displaystyle{\hat{\bf
W}}_{j}^{i}-\frac{\partial V}{\partial\hat{\bf W}_{j}}+\sum_{all\
l\in\mathcal{A}}{\bf g}_{l}\lambda_{l}$ (18) $\displaystyle=$
$\displaystyle\hat{\bf W}_{j}^{i}-\sum_{m=1}^{M}{\bf A}_{j}({\bf
x}_{m})\alpha_{m}^{2}(i)\tilde{y}_{m}^{i}+\sum_{all\ l\in\mathcal{A}}{\bf
g}_{l}\lambda_{l}$
where the set of active constraints $\mathcal{A}$ and the Lagrange multipliers
$\lambda_{l}$ are given in Definitions 1 and 2, is globally exponentially
converging to $\frac{\partial V}{\partial\hat{\bf W}_{j}}={\bf 0}$ with the
largest and smallest singular value of the matrix
$\displaystyle{\bf I}_{jk}-\sum_{m=1}^{M}{\bf A}_{j}({\bf
x}_{m}^{i})\alpha_{m}^{2}{\bf A}_{k}^{T}({\bf x}_{m}^{i})$ (19)
or alternatively in the metric $\alpha_{n}^{2}$ with the block diagonal matrix
$\displaystyle{\bf I}_{nm}-\sum_{j=1}^{J}\alpha_{n}{\bf A}_{j}^{T}({\bf
x}_{n}^{i}){\bf A}_{j}({\bf x}_{m}^{i})\alpha_{m}$ (20)
In this last formulation we assume static measurements ${\bf x}_{n}^{i+1}={\bf
x}_{n}^{i}$ e.g. for batch processing.
The equilibrium point $\frac{\partial V}{\partial\hat{\bf W}_{j}}={\bf 0}$ is
unique or global if the largest singular value is strictly less than $1$.
Note that e.g. $0<\alpha_{m}\leq\frac{1}{J\lvert{\bf x}_{m}^{i}\rvert}$
assures contraction behaviour in (20).
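As a hedged numerical illustration of this bound (a toy case of our own, not from the paper): for two measurements and a single $\min$ neuron ($J=1$) whose active basic neuron is the same $k^{\ast}$ for both samples, ${\bf A}_{j}({\bf x}_{m})={\bf x}_{m}$, so the block matrix (20) reduces to $I-DXX^{T}D$ with $D=\mathrm{diag}(\alpha_{m})$, and its largest singular value can be checked directly.

```python
import numpy as np

# Toy check of the contraction matrix (20) for J = 1 and a shared active
# basic neuron, so that A_j(x_m) = x_m. The gain alpha_m = 1/(J*|x_m|) is
# the bound suggested in the text.
rng = np.random.default_rng(0)
J = 1
X = np.hstack([rng.uniform(-1, 1, (2, 3)), np.ones((2, 1))])  # augmented x_m
alpha = 1.0 / (J * np.linalg.norm(X, axis=1))

# entries of (20): I_nm - alpha_n * (x_n . x_m) * alpha_m
G = np.eye(2) - (alpha[:, None] * (X @ X.T)) * alpha[None, :]
sigma_max = np.linalg.svd(G, compute_uv=False)[0]
print(sigma_max)  # strictly below 1, i.e. exponential convergence
```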
Also note that at the minimum of $V$ the cost $V$ will not be $0$ if an
incomplete active topology (14) was used. Hence the following pruning and
creation principles have to be applied to find a correct active topology (14).
For a correct topology $V$ will go to $0$ exponentially since a contracting
solution $V=0$ then exists.
* •
Inactive neurons (15) close to $0$ can be pruned.
* •
Basic neurons of a neuron, which never become active or which are similar to
another basic neuron, can be pruned.
* •
New convex or concave neurons (15) $j$ can be initialized with all
$\hat{z}_{jk}=0$ and one activated basic neuron if persistent relevant errors
$\tilde{y}_{m}$ remain.
In principle one convex and one concave neuron are sufficient to activate
convex or concave edges everywhere. However, the numerical complexity of the
simplex algorithm in Definition 2 is exponential in the worst case. Hence it
is computationally more efficient to take several convex or concave neurons of
low dimension rather than one convex or concave neuron of very high dimension.
* •
Similarly, new basic neurons can be created by duplicating existing basic
neurons $jk$ with persistent relevant errors $\tilde{y}_{m}$
$\hat{z}_{jnew}=\hat{z}_{jk}$
Note that when the Lagrangian constraint is active at the boundary between two
basic neurons, the following cases exist for the trajectory which starts at the
next iteration on the boundary between both neurons:
* •
Both trajectories move away from the boundary. Here one trajectory can be
selected. Several solutions may exist here since the boundary is not Lipschitz
continuous. Note that especially in this case $\delta{\bf x}^{i+1}$ does not
diverge since it was parallelized to the constraint the iteration before.
* •
One trajectory moves to the boundary and the other away from it. Here the
trajectory which moves away from the boundary is selected.
* •
Both trajectories again move to the boundary. Here the Lagrangian constraint
has to be maintained active at the next iteration, such that the trajectory
moves on the edge until it eventually leaves the edge at a later iteration.
The following example illustrates the effect above:
* ###### Example 3.1
: Figure 2 shows the learning of a single measurement at the edge of 2 basic
neurons of Theorem 2:
* –
The left side shows a measurement on the convex side of 2 basic neurons.
Learning of the left or right neuron alone does not work since the measurement
cost (17) at $i$
$V=\frac{1}{2}\sum_{m=1}^{M}\tilde{y}_{m}^{i2}$
has different measurements $m$ at $i+1$. Learning works here only with an
active Lagrangian constraint (16) of Theorem 2.
* –
On the right side the measurement is on the concave side of 2 basic neurons.
Here both neurons can learn without an active Lagrangian constraint (16) since
the measurement stays at $i+1$ at the same basic neuron where it was at $i$.
Figure 2: Learning with measurements on convex or concave side
$\Box$
* ###### Example 3.2
: Figure 3 shows how a MinMax network of Theorem 2 evolves for a polygon. We
approximate the target polygon $y$ (12) in subfigure 3a with the MinMax
learning (18):
* –
We start with one linear neuron $\hat{y}=y_{1,min}=min(\hat{z}_{11})$ (15) in
subfigure 3b.
* –
We then insert a new basic neuron in
$\hat{y}=y_{1,min}=min(\hat{z}_{11},\hat{z}_{12})$, where $\hat{z}_{11}$ and
$\hat{z}_{12}$ have initially the same parameters. After training, the network
converges to the approximation in subplot 3c.
* –
A new concave neuron with two basic neurons is then inserted leading to
$\hat{y}=min(\hat{z}_{11},\hat{z}_{12})+max(\hat{z}_{21},\hat{z}_{22})$ in
subfigure 3d.
* –
After another insertion of a basic neuron, we get a perfect approximation in
subfigure 3e with
$\hat{y}=min(\hat{z}_{11},\hat{z}_{12})+max(\hat{z}_{21},\hat{z}_{22},\hat{z}_{23})$.
The final two neurons (15) are depicted in subfigures 3f and 3g.
The above example shows how important the (basic) neuron creation is to learn
the topology of the MinMax network.
The approach of this paper uses 5 basic neurons, whereas the benchmark in [10]
had 100 to 2000 neurons in several layers. The benchmark in [10] took up to
50000 iterations to converge to remaining persistent errors, whereas the MinMax
network only needs a few hundred. The approach of this paper needs 8
measurement points, i.e. the minimum number of points to define the polygon.
[10] used 100 measurement points for learning.
(a) Target function $y$
(b) $\hat{y}$ with one linear neuron
(c) $\hat{y}$ with one convex neuron
(d) $\hat{y}$ with one concave and one convex neuron
(e) $\hat{y}$ with one concave and one convex neuron
(f) Concave neuron $\hat{y}_{1\min}$
(g) Convex neuron $\hat{y}_{2\max}$
Figure 3: Approximation of 1-dimensional target function by MinMax network
$\Box$
* ###### Example 3.3
: Figure 4 shows how a MinMax network of Theorem 2 evolves. We approximate
the target function $y$ (12) in subfigure 4a with the MinMax learning (18):
* –
We start with one linear neuron in subfigure 4b.
* –
The network then continuously inserts new basic neurons. Subfigure 4c depicts
the approximated function with one neuron consisting of 3 linear basic neurons
(15).
* –
Figure 4d shows the final result, which perfectly matches the target function.
The final approximation $\hat{y}$ is the sum of the concave neuron
$\hat{y}_{1\max}$ and convex neuron $\hat{y}_{2\max}$ (15) shown in subfigure
4e and 4f respectively.
(a) Target function $y$
(b) $\hat{y}$ after initialization with one linear neuron
(c) $\hat{y}$ after two insertions of two basic neurons
(d) $\hat{y}$ after full training with two neurons
(e) Concave neuron $\hat{y}_{1\max}$ of MinMax
(f) Convex neuron $\hat{y}_{2\min}$ of MinMax
Figure 4: Approximation of 2-dimensional target function by MinMax network
$\Box$
* ###### Example 3.4
: Figure 5 shows the convergence of a MinMax network of Theorem 2 to an
$8$-dimensional function
$y({\bf x})=\max_{n=1,...,8}(\lvert x_{n}\rvert)$
The network is initialized with a single neuron with random parameters
$w_{n}\in[-0.5,0.5]$ and further neurons are created after 100 iterations
until the total cost $V$ (17) converges to zero. Steep drops in the error
value indicate such insertions. Neurons are pruned if they become inactive or
too similar to other basic neurons, leading to a network with the minimal
required number of 16 basic neurons in one $\max$ neuron for the 16 linear
surfaces.
Figure 5: Convergence of the cost $V$ over $i$ for the approximation of 16
surfaces in an 8-dimensional space by a MinMax network.
$\Box$
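The target of Example 3.4 is itself a MinMax network: since $\lvert x_{n}\rvert=\max(x_{n},-x_{n})$, one $\max$ neuron (15) with the $16$ linear basic neurons $\pm x_{n}$ (with zero bias on the augmented input) represents it exactly. A short Python sketch, with made-up sample points, confirms this:

```python
import numpy as np

# y(x) = max_n |x_n| equals one max neuron over the 16 basic neurons +/- x_n.
def y_target(x):
    return np.max(np.abs(x), axis=-1)

def y_minmax(x):
    z = np.concatenate([x, -x], axis=-1)  # the 16 linear basic neurons
    return np.max(z, axis=-1)

x = np.random.default_rng(1).uniform(-0.5, 0.5, (100, 8))
match = np.allclose(y_target(x), y_minmax(x))
print(match)  # True
```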
## 4 Summary
This paper first extends discrete contraction theory to non-linear constrained
systems in Theorem 1. This allows us to propose a new class of MinMax networks in
Theorem 2 for the learning of piece-wise linear functions:
* •
Possible instabilities or even non-unique solutions at the discontinuity
between the linear regions are avoided by limiting ${\bf x}^{i+1}$ to its
linear subspace with a Lagrangian constraint of Theorem 1. This linear
subspace may change at the next time instance since ${\bf x}^{i+1}$ then
belongs to both neighbouring linear regions.
From a Contraction Theory perspective this constrained step parallelizes the
virtual displacement $\delta{\bf x}^{i+1}$ to the edge, which is then
orthogonal to the Dirac instability at the edge. Hence, it has no impact on
the contraction rate under this constraint.
* •
Saddle points or sub-optimal plateaus are avoided with a linear
parametrization of the MinMax network. This is not possible with a deep
network since the parametrization is highly polynomial.
As a result exponential convergence guarantees are given with Theorem 2 for
the discrete time learning of piece-wise linear functions.
Although the learning was shown to be contracting in Theorem 2, the remaining
errors will not necessarily go to $0$ if a wrong topology was used. Hence the
current research focus is to define finite neuron creation principles to find
a correct topology of the MinMax network, where one error free solution
exists.
Also, since each basic neuron is linear in Theorem 2, all linear estimation
techniques can be exploited as e.g. computation of covariance matrices.
## References
* [1] Bertsekas D., Tsitsiklis J. N., Parallel and Distributed Computation: Numerical Methods, Athena Scientific, 2014.
* [2] Bronstein, Semendjajew, Taschenbuch der Mathematik, Teubner, 1991.
* [3] Bryson A., Yu-Chi H., Applied Optimal Control, Taylor and Francis, 1975.
* [4] LeCun Y., Bengio Y., and Hinton G., Deep learning, Nature, 2015.
* [5] Lohmiller, W., and Slotine, J.J.E., On Contraction Analysis for Nonlinear Systems, Automatica, 34(6), 1998.
* [6] Lohmiller, W., and Slotine, J.J.E., Contraction Theory with Inequality Constraints, arXiv:1804.10085, 2023.
* [7] Lohmiller, W., Gassert P. and Slotine, J.J.E., Notes on stable learning with piecewise-linear basis functions, arXiv:1804.10085, 2018.
* [8] Lohmiller, W., Gassert P. and Slotine, J.J.E., Deep MinMax Networks, 60th IEEE Conference on Decision and Control (CDC), IEEE, 2021.
* [9] https://en.wikipedia.org/wiki/Minimax
* [10] Shalev-Shwartz S., Shamir O. and Shammah S., Failures of Gradient-Based Deep Learning, arXiv:1703.07950v2, Apr 2017.
# Improving Locality in Sparse and Dense Matrix Multiplications
Mohammad Mahdi Salehi Dezfuli, McMaster University, Hamilton, Canada,
<EMAIL_ADDRESS>
Kazem Cheshmi (0000-0002-2968-5176), McMaster University, Hamilton, Canada,
<EMAIL_ADDRESS>
###### Abstract.
Consecutive matrix multiplications are commonly used in graph neural networks
and sparse linear solvers. These operations frequently access the same
matrices for both reading and writing. While reusing these matrices improves
data locality, it presents a challenge due to the irregular dependencies
between iterations across the two multiplication operations. Existing fusion
methods often introduce excessive synchronization overhead or overlapped
computations with limited benefits. This paper proposes tile fusion, a runtime
approach that fuses tiles of the two matrix-matrix multiplications, where at
least one of the involved matrices is sparse. Tile fusion aims to improve data
locality while providing sufficient workload for cores in shared-memory multi-
core processors. For a pair of matrix-matrix multiplications, tile fusion
outperforms unfused baseline and MKL implementations with geometric mean
speedups of 1.97$\times$ and 1.64$\times$, respectively, on multi-core CPUs.
## 1\. Introduction
Consecutive calls to matrix multiplications are the computational bottleneck
in many scientific (O’Leary, 1980) and machine learning (Wang et al., 2019;
Fey and Lenssen, 2019) applications. Particularly this paper focuses on
accelerating a pair of matrix multiplications, represented as an equation:
(1) $D=A(BC)$
where matrix $\underset{n\times n}{A}$ is sparse, $\underset{n\times bCol}{B}$
is either sparse or dense, and $\underset{bCol\times cCol}{C}$ is dense. For
example, in a layer of a graph convolution network (Kipf and Welling, 2016),
either case can occur. Existing frameworks such as PyTorch Geometric (PyG) (Fey
and Lenssen, 2019) and Deep Graph Library (DGL) (Wang et al., 2019) break the
expression into two matrix multiplication operations, $D_{1}=BC$ and
$D=AD_{1}$. The two operations are commonly mapped to a pair of General Matrix
Multiplication (GeMM)-Sparse Matrix-Matrix Multiplication (SpMM) or SpMM-SpMM
routines when $B$ is dense and sparse, respectively. These routines benefit
from efficient tiling and load balancing techniques (Hong et al., 2019; Wang
et al., 2014) that enable using memory and computing resources efficiently.
However, $D_{1}$ is shared between the two routines and is often a large
matrix; the opportunity to reuse it is lost when the two operations are mapped
to separate GeMM or SpMM calls.
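For concreteness, the unfused baseline described above can be sketched in plain NumPy with a hand-rolled CSR format; the sizes and sparsity pattern are made-up illustrations, not the paper's benchmark setup.

```python
import numpy as np

# Unfused computation of Equation (1), D = A (B C), with A in CSR form.
n, bCol, cCol = 6, 4, 3
rng = np.random.default_rng(0)
B = rng.standard_normal((n, bCol))
C = rng.standard_normal((bCol, cCol))

# CSR storage of a small sparse A: row pointers, column indices, values
Ap = np.array([0, 2, 3, 5, 6, 8, 9])
Ai = np.array([0, 2, 1, 0, 3, 4, 2, 5, 5])
Ax = rng.standard_normal(9)

D1 = B @ C                        # GeMM: dense (n, cCol) intermediate
D = np.zeros((n, cCol))
for i in range(n):                # SpMM: rows of D1 are re-read here --
    for p in range(Ap[i], Ap[i + 1]):  # the reuse that fusion targets
        D[i] += Ax[p] * D1[Ai[p]]
```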
Fusing operations or loops is commonly used to remove intermediate matrices
between the two operations. Tensor compilers (Kjolstad et al., 2017; Dias et
al., 2022; Mutlu et al., 2022) generate a fused code for Equation 1 when $A$
is sparse and $B$ and $C$ are dense. The generated code iterates over $A$ and
performs a general matrix-vector multiplication (GeMV) for each nonzero of
$A$. While this removes the need for storing intermediate results, i.e.
$D_{1}$, it causes random access to $B$ and thus inefficient use of memory
hierarchy. Additionally, this methodology does not apply when $A$ and $B$ are
sparse because memory accesses are unknown at compile time.
Figure 1. The ratio of computations in coarse fused tiles for all matrices
from SuiteSparse for GeMM-SpMM operation.
Figure 2. Three different iteration fusion schedules (Figure 2d–f) for the
GeMM-SpMM in Figure 2b and the matrix in Figure 2a. Figure 2c shows the
dependence DAG between iterations of the outermost loop of GeMM and SpMM,
where colored and white vertices correspond to GeMM and SpMM iterations,
respectively. Dark solid lines show synchronization barriers, the dotted red
line shows a potential race condition, and vertical dashed lines show per
thread workload.
Prior approaches such as sparse tiling (Krieger et al., 2013) and
communication-avoiding (CA) (Demmel et al., 2008) methods have used sparsity
information at runtime to fuse sparse matrix-vector multiplications (SpMV) and
enable reuse between the two operations. They model SpMV operations as an
iteration directed acyclic graph (DAG) where vertices are iterations of the
outermost loop of SpMV and edges represent dependencies between iterations.
A scheduler then tiles iterations of the operations by grouping vertices of
the DAG at runtime, and sparse tiling uses barrier and atomic operations to
ensure that dependencies between tiles are not violated during parallel
execution. Some CA
methods (Demmel et al., 2008) replicate dependent iterations within a tile to
make all tiles independent so they run in parallel without synchronization.
Since GeMM, SpMM, and SpMV have parallel iterations in their outermost loops,
the same techniques can be adopted for fusing GeMM-SpMM and SpMM-SpMM.
However, the computation of each fused iteration in the two operations is
proportional to $bCol$ and $cCol$, increasing race conditions in sparse
tiling and redundant computation in CA methods.
Coarse-grain tiles provide opportunities for fusion in sparse matrices and
graphs without redundant computation or excessive synchronization. A coarse
tile contains large enough iterations of the first operation such that it
allows running some iterations of the second operation that solely depend on
iterations inside the tile. This allows tiles to execute in parallel without
synchronization. Figure 1 shows the percentage of GeMM-SpMM computations that
share data across the operations if coarse-grain tiles with the size of 2048
are selected for all 2893 matrices from SuiteSparse matrix collection (Davis
and Hu, 2011). As shown, an average of 34% of GeMM-SpMM computation reuse data
in fused coarse tiles. However, growing the tiles reduces the number of
parallel workloads, affecting load balance. Also, picking coarse grain tiles
groups a larger number of iterations from the two operations. This grouping
improves locality if the memory accesses of the tile fit within the size of
the fast memory.
We propose sparsity-oriented tile fusion, in short, tile fusion, that creates
fused tiles based on the opportunities shown in Figure 1 to improve locality
in GeMM-SpMM and SpMM-SpMM for shared memory multicore processors. This paper
makes the following contributions:
* •
Tile fusion scheduler and fused code that turn data reuse between and across
iterations of GeMM and SpMM into locality. The tile fusion scheduler uses the
sparsity pattern of $A$ and selects tile sizes and a number of tiles to ensure
locality and load balance.
* •
An implementation that is tested for a wide range of graphs and matrices and
provides speedups of 1.97$\times$ and 3.52$\times$ compared to existing
unfused and best-fused codes, respectively. We also analyze and adapt prior
tiling approaches and compare them with tile fusion.
## 2\. Motivating Example
We use the matrix in Figure 2a to discuss how different fusion strategies
improve locality for computing Equation 1. The corresponding code to the
computation is shown in Figure 2b where lines 1–4 perform GeMM, $D_{1}=BC$,
and lines 5–8 perform SpMM, $D=AD_{1}$. Iterations of loops i1 and j1 are
independent so they execute in parallel. Fusing loops i1 and j1 can
potentially enable reusing $D_{1}$ but each iteration in j1 depends on a
varying number of i1 iterations. This irregular dependence is due to
D1[A.i[j2]][j3] in line 8 in Figure 2b, stemming from sparsity pattern of $A$.
The DAG in Figure 2c shows the dependence between i1 and j1. Colored and
white vertices in Figure 2c represent iterations of i1 and j1 loops,
respectively. Edges show dependence between iterations. While grouping
vertices with common edges as a tile improves locality, dependence between
tiles can prevent keeping all cores busy. Three different fused schedules of
iterations for the DAG shown in Figure 2c are shown in Figure 2d–f for a
processor with three cores.
Figure 2d shows five tiles composed of vertices of both computations with
common edges. Dependent tiles are separated by synchronization barriers to
ensure partial order. Tiles are atomic to prevent race conditions. For
example, tile $\mathcal{T}_{1,0}$ and $\mathcal{T}_{1,1}$ depend on tile
$\mathcal{T}_{0,0}$ and $\mathcal{T}_{0,1}$ thus a synchronization is needed
between them. Iteration j=4 is split among tiles $\mathcal{T}_{1,0}$ and
$\mathcal{T}_{1,1}$, writing to the same location of C, thus an atomic
operation is needed. The race condition is shown with the dotted red line in
Figure 2. This schedule is inspired by sparse tiling (Krieger et al., 2013)
and named atomic tiling due to the atomic operations used in tiles. The chance
of a race condition on writing to C increases as the number of columns in $B$
and $C$ increases.
Figure 2e shows overlapped tiles that create independent tiles by replicating
dependent iterations. Replicated iterations are shown with red vertices in two
tiles in Figure 2e. Therefore all fused tiles execute in parallel with no
synchronization. Each replicated vertex in the tile corresponds to an
iteration i1 which multiplies a row of $B$ with $C$. Therefore redundant
computations increase with the number of columns in $B$ and $C$. Due to
replicated iterations, this method is called overlapped tiling, inspired by CA
(Demmel et al., 2008) methods.
The tile fusion schedule is shown in Figure 2f where two groups of tiles are
created, fused tiles and tiles of the SpMM iterations separated by one
synchronization barrier. As shown, tiles in the schedule can be large, such as
tile $\mathcal{T}_{0,0}$, to enable fusing more SpMM iterations, benefiting
from coarse tile fusion shown in Figure 1. The tiles contain a variable number
of iterations to ensure the memory accesses of the tile remain local to the
fast memory. Also, both levels have three independent workloads for all three
cores. As a result of tile fusion, the performance of GeMM-SpMM for a subset
of SuiteSparse matrices on a 20-core processor is faster than atomic tiling,
overlapped tiling, and unfused code with a geometric mean of 13.6$\times$,
3.5$\times$, and 1.64$\times$, respectively.
## 3\. Tile Fusion
Tile fusion looks into the sparsity pattern of the input matrix $A$ in GeMM-
SpMM or SpMM-SpMM and creates a fused schedule. The tile fusion approach has
an iteration scheduler and fused code. Both pairs of operations are commonly
used in applications where the sparsity pattern of matrix $A$ and $B$ (when
sparse) remains static during the execution. Therefore the created schedule
will be computed once based on their sparsity and reused for the rest of the
computation. The rest of this section explains how the scheduler computes the
fused schedule and how codes are fused.
1
Input : $G$, $bCol$, $cCol$, $p$, $cacheSize$, $ctSize$
Output : $\mathcal{T}$
/* Step 1: Coarse Tile Fusion */
2 $I\leftarrow range(rows(G))$
3 $J\leftarrow range(cols(G))$
4 if _$\lceil|I|/ctSize\rceil\geq p$_ then $t\leftarrow ctSize$ else
$t\leftarrow\lceil|I|/p\rceil$
5 $\mathcal{F}\leftarrow(\\{\\},\\{\\})$
6 for _ $i\in I$_ do
7 $v\leftarrow i/t$
8 $\mathcal{F}_{0,v}\leftarrow\mathcal{F}_{0,v}\cup range(i,i+t)$
9 for _$j\leftarrow i\quad to\quad i+t\quad\land\quad j\in J$_ do
10 if _( $i<inEdges(G,j)<i+t$)_ then
$\mathcal{F}_{0,v}\leftarrow\mathcal{F}_{0,v}\cup j$
11 else $\mathcal{F}_{1,v}\leftarrow\mathcal{F}_{1,v}\cup j$
12 $j\leftarrow j+1$
13
14 end for
15 $i\leftarrow i+t$
16
17 end for
18$\mathcal{F}_{1,v}\leftarrow balance(\mathcal{F}_{1,v},t)$
/* Step 2: Fused Tile Splitting */
19 for _$w\leftarrow 0\;to\;2$_ do
20 for _$v\leftarrow 0\;to\;|\mathcal{F}_{w}|$_ do
21 if _$cost(\mathcal{F}_{w,v},bCol,cCol) >cacheSize$ _ then
$\mathcal{T}_{w}\leftarrow\mathcal{T}_{w}\cup
split(\mathcal{F}_{w,v},bCol,cCol,cacheSize)$
22 else $\mathcal{T}_{w}\leftarrow\mathcal{T}_{w}\cup\mathcal{F}_{w,v}$)
23 $v\leftarrow v+1$
24
25 end for
26 $w\leftarrow w+1$
27
28 end for
Algorithm 1 Tile Fusion Scheduler
### 3.1. Scheduler
The tile fusion scheduler is shown in Algorithm 1 where it creates a schedule
of fused tiles based on sparsity pattern at runtime. This subsection explains
inputs, output, objective, and the two steps of the algorithm.
#### Inputs and output
The tile fusion scheduler takes the DAG $G$, number of columns of $B$ and $C$
as $bCol$ and $cCol$, architecture-specific information $p$ and $cacheSize$,
and heuristic parameter $ctSize$ as inputs and creates a set of fused tiles
$\mathcal{T}$. DAG $G$ represents dependence between $n$ iterations of the two
fused loops where $G_{i,j}=1$ shows iteration $j$ of the second loop depends
on the iteration $i$ of the first loop. $p$ represents the number of physical
cores and $cacheSize$ shows the total size of caches per core in the target
architecture. Each tile in the output fused tile $\mathcal{T}$ is shown with
$\mathcal{T}_{w,v}$ where $w$ and $v$ are the wavefront number and tile
number, respectively. A wavefront is a set of iterations that can execute in
any order without violating correctness.
#### Objective and Constraints
The objective of the tile fusion scheduler is to maximize the fused ratio
across all tiles $\mathcal{T}$ while tiles fit into fast memory and only two
wavefronts are allowed. The fused ratio is computed as the number of
iterations of the second operation in the first wavefront divided by the total
number of iterations, as shown in Equation 2:
(2) $fused\\_ratio=\frac{\sum_{v=0}^{|\mathcal{T}_{0}|}|J_{0v}|}{|I|+|J|}$
where $J_{wv}$ represents the list of iterations from the second operation in
tile $\mathcal{T}_{wv}$, and $I$ and $J$ denote the lists of all iterations
(or iteration spaces) of the first and second operations, respectively.
$\mathcal{T}_{0}$ is the set of tiles in the first wavefront. The operator
$|.|$ denotes the cardinality of a set or the size of a list. The tile fusion
scheduler maximizes fused ratio under two constraints, load balance constraint
and locality constraint. The load balance constraint ensures the schedule has
a maximum of two synchronization barriers (or two wavefronts) and the number
of tiles in each wavefront is larger than the number of cores, i.e., $\forall
0\leq w<2;\quad|\mathcal{T}_{w}|\geq p$. The locality constraint ensures the
data movement cost for a tile is smaller than cache sizes for each core
($cacheSize$). In other words, $\forall w,v:\quad
cost(\mathcal{T}_{w,v})<cacheSize$, where $cost(\mathcal{T}_{w,v})$ is the
data movement cost of $\mathcal{T}_{wv}$.
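A minimal sketch of the fused-ratio objective (2), assuming a schedule is represented as a list of wavefronts, each a list of tiles, and each tile a pair of first- and second-operation iteration lists; this representation and the iteration numbers below are our assumptions for illustration.

```python
def fused_ratio(schedule, n_i, n_j):
    """Equation (2): second-operation iterations in wavefront 0 over |I| + |J|."""
    fused_j = sum(len(j_iters) for _, j_iters in schedule[0])
    return fused_j / (n_i + n_j)

# A schedule in the spirit of Figure 2f: wavefront 0 holds fused tiles,
# wavefront 1 holds the leftover SpMM-only iterations.
schedule = [
    [([0, 1, 2, 3], [0, 1, 2]), ([4, 5], [5])],  # fused tiles T_{0,v}
    [([], [3, 4]), ([], [6])],                   # SpMM tiles  T_{1,v}
]
print(fused_ratio(schedule, n_i=6, n_j=7))  # 4 of 13 iterations are fused
```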
Figure 2c shows an example DAG $G$. There is an edge from the first iteration
of GeMM to the second iteration of SpMM thus $G_{1,2}=1$. The output schedule
in Figure 2f has two tiles. In tile $\mathcal{T}_{0,1}=\\{5,6,6\\}$,
$\\{5,6\\}\in I_{0,1}$ and $\\{6\\}\in J_{0,1}$.
#### 3.1.1. Step 1
The first step of the tile fusion scheduler creates an intermediate fused
schedule $\mathcal{F}$, composed of uniform coarse fused tiles to maximize the
fused ratio while ensuring the load balance constraint. The scheduler first
finds fused iterations from tiles of consecutive iterations to improve spatial
locality and reduce the scheduler overhead. The scheduler also ensures that
iterations in different tiles of a wavefront are independent, so no
synchronization is needed.
Lines 1–1 in Algorithm 1 shows how the intermediate fused tiling $\mathcal{F}$
is created. The scheduler first computes the uniform tile size of $t$ using
the given coarse tile size $ctSize$ in line 1. As shown, the tile size is
chosen to be $ctSize$ if the number of tiles, i.e., $\lceil|I|/ctSize\rceil$
is larger than or equal to $p$ otherwise, it defines $t=|I|/p$. This ensures
the number of tiles in each wavefront is larger than $p$, i.e., the load
balance constraint. Each fused tile $\mathcal{F}_{0,k}$ is created from $t$
consecutive iterations of $I$ as shown in line 1 and some of $t$ consecutive
iterations of $J$ as shown in line 1–1. An iteration of $J$ is added to tile
$\mathcal{F}_{0,k}$ if all of its incoming edges are already in the tile as
shown in line 1. Iterations that do not satisfy the criteria in line 1 are
added to tile $\mathcal{F}_{1,k}$ in the second wavefront as shown in line 1.
The iterations in the second wavefront, $\mathcal{F}_{1}$ is evenly
distributed into $t$ tiles using the $balance$ routine in line 1 to ensure
load balance in the second wavefront.
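Step 1 can be sketched in Python as follows; the dependence representation (`deps[j]` as the set of first-loop iterations that second-loop iteration `j` needs) is our assumption, and the `balance` routine that evens out the second wavefront is omitted for brevity.

```python
def coarse_tile_fusion(deps, n_i, n_j, p, ct_size):
    """Step 1 of Algorithm 1: uniform coarse fused tiles plus a leftover wavefront."""
    # pick tile size t so that the wavefront has at least p tiles
    t = ct_size if (n_i + ct_size - 1) // ct_size >= p else -(-n_i // p)
    wave0, wave1_j = [], []
    for start in range(0, n_i, t):
        i_iters = list(range(start, min(start + t, n_i)))
        tile_set = set(i_iters)
        j_range = range(start, min(start + t, n_j))
        # fuse iteration j only if all its incoming edges lie inside this tile
        j_fused = [j for j in j_range if deps[j] <= tile_set]
        wave0.append((i_iters, j_fused))
        # everything else is deferred to the second wavefront
        wave1_j += [j for j in j_range if not deps[j] <= tile_set]
    return wave0, wave1_j
```

For instance, if iteration 3 of the second loop also needs iteration 4 of the first loop, that single iteration crosses a tile boundary and is deferred to the second wavefront while everything else fuses.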
Figure 3. Tile fusion schedule after step 1.
Figure 4. Variation of fused ratio versus tile size.
The coarse tile size parameter, $ctSize$ used for specifying $t$ in line 1 in
Algorithm 1, is determined heuristically. To select the best value for
$ctSize$, we compute how the fused ratio changes when tile size increases.
Figure 4 shows tile size changes on the x-axis and the average of fused ratio
changes for all matrices of the SuiteSparse repository on the y-axis. The
value of $ctSize$ should be selected to maximize the tile fusion objective.
Since after $ctSize=2048$ in Figure 4, the rate of fused ratio improvement is
slowed down, we use $ctSize=2048$. While going beyond this value can slightly
increase the fused ratio, it reduces the number of tiles in a wavefront,
potentially leading to load imbalance.
Figure 3 shows the output of step 1 for the example shown in Figure 2. For
this example, we assume $ctSize=4$ and $p=3$, which makes the tile size $t=4$.
Two coarse tiles are shown in Figure 3: for $\mathcal{F}_{0,0}=\\{1,2,3,4\\}$,
since iterations $\\{1,2,3\\}\in J$ depend on iterations $\\{1,2,3\\}\in I$,
which already exist in $\mathcal{F}_{0,0}$, these three iterations are added
to the tile.
#### 3.1.2. Step 2
The second step of the tile fusion scheduler splits coarse tiles created in
the first step, $\mathcal{F}$, to fit them into fast memory. As a result,
tiles with different sizes are created to ensure the locality constraint of
the tile fusion. The scheduler iterates over $\mathcal{F}$ and measures their
data movement with a data movement cost model. Fused tiles whose data movement
is larger than the size of fast memory are split to fit into the fast memory.
The second step of the tile fusion scheduler is shown in Lines 1–1 in
Algorithm 1. The algorithm iterates over all tiles in the two wavefronts
$\mathcal{F}_{0}$ and $\mathcal{F}_{1}$ and computes the data movement cost of
each tile using a $cost$ function as shown in line 1. If the data movement
cost of a tile $\mathcal{F}_{i,j}$ is larger than the size of fast memory
$cacheSize$, the scheduler splits the tile recursively to create a set of
tiles, each fitting into the fast memory using the $split$ routine as shown in
line 1. The resulting split tiles are added to the same wavefront of the final
schedule $\mathcal{T}$.
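The recursive `split` of step 2 can be sketched as below (a hedged Python sketch; halving the iteration list and passing the cost model as a callable are our assumptions, not the paper's exact routine):

```python
def split_tile(tile, cost, cache_size):
    """Recursively halve a coarse fused tile until every piece has a
    data-movement cost no larger than the fast-memory size.
    `cost` is a callable approximating data movement (cf. Equation 3)."""
    if cost(tile) <= cache_size or len(tile) <= 1:
        return [tile]  # fits in fast memory, or cannot split further
    mid = len(tile) // 2
    return (split_tile(tile[:mid], cost, cache_size)
            + split_tile(tile[mid:], cost, cache_size))
```

Halving at each level is what produces the logarithmic factor in the complexity analysis later in this section.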
#### Data movement cost
The tile fusion scheduler relies on a cost model to approximate data movement
for a coarse tile. As shown in line 1 in Algorithm 1, the data movement cost
model computes the cost of a tile $\mathcal{T}_{i,j}$ for a given $bCol$ and
$cCol$. The cost model is computed as shown in Equation 3:
(3) $cost(\mathcal{T}_{i,j},bCol,cCol)=(nz(\mathcal{T}_{i,j})+uc(\mathcal{T}_{i,j})+t+|J_{i,j}|)\times cCol+idx$
where $nz(\mathcal{T}_{i,j})$ is the number of unique nonzeros in the tile
from $A$ and $B$ (when $B$ is dense, the full $n\times bCol$ is added),
$uc(\mathcal{T}_{i,j})$ is the number of nonzeros with unique columns in the
tile, $|J_{i,j}|$ is the number of fused iterations from the second operation,
and $idx$ is the indexing cost when $A$ or $B$ is sparse.
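Equation 3 translates directly into a small helper (parameter names are ours; `nz`, `uc`, and `fused_j` stand for $nz(\mathcal{T}_{i,j})$, $uc(\mathcal{T}_{i,j})$, and $|J_{i,j}|$):

```python
def tile_cost(nz, uc, t, fused_j, c_col, idx):
    """Data-movement cost of a fused tile, per Equation 3:
    (nz + uc + t + |J_ij|) * cCol + idx."""
    return (nz + uc + t + fused_j) * c_col + idx
```

The scheduler compares this value against $cacheSize$ to decide whether a coarse tile must be split.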
The final fused schedule $\mathcal{T}$ in Figure 2f is computed from the
coarse fused tile schedule in $\mathcal{F}$ in Figure 3. The schedule has two
coarse fused tiles. Each tile is labeled with its data communication cost.
Assuming $bCol=cCol=1$ and $t=4$, the cost of $\mathcal{F}_{0,0}$ is computed
based on Equation 3. Since the cost of $\mathcal{F}_{0,0}$ is less than
$cacheSize=30$, the tile is directly added to $\mathcal{T}$. However,
$\mathcal{F}_{0,1}$ is larger than $cacheSize$ and is split into two smaller
tiles, each with a cost less than $cacheSize$.
#### Computational Complexity
In the first step of the algorithm, for every tile of size $t$, $inEdges$ is
checked for only the $t$ columns of $G$ in the same range. Since tiles are
disjoint, $inEdges$ is called once per iteration, making the first step of the
algorithm $O(nnz)$. Accessing incoming edges for an iteration is possible in
linear time. The second step only iterates over fused iterations in $J$ and
their dependent iterations from $I$ for splitting. Since the set of iterations
is split by a factor of 2, and each split call can visit up to $nnz$ edges,
its complexity is $O(nnz\log(ctSize))$. The second wavefront only operates on
unfused iterations, which takes $O(|J|)$. Therefore the complexity of the
second step is $O(|J|+nnz\log(ctSize))$.
### 3.2. Fused Code
The fused code is created by fusing the outermost loops of the two operations.
The fused outermost loop is replaced with a doubly nested loop that iterates
over the tiles in parallel using the OpenMP scheduler. The fused code uses the
fused tiling schedule $\mathcal{T}$ and maps iterations of fused tiles to
their associated code version to ensure correctness. Listing 1 and Listing 3
show the fused code for GEMM-SpMM (Figure 2) and SpMM-SpMM (Listing 2). As
shown, the fused code is composed of a version of each of the two operations. Lines 4–7
and 8–11 in Listing 1 correspond to the innermost loops of GEMM and SpMM
respectively. Lines 3–7 and 8–11 in Listing 3 correspond to the innermost
loops of the first SpMM ($D_{1}=AC$) and the second SpMM ($D=AD_{1}$)
respectively. Loop-bounds in line 4 in Listing 1 and line 3 in Listing 3 are
determined by the schedule $\mathcal{T}$, serving as a runtime check to switch
between the two versions. Therefore, the fused code follows the schedule order
and ensures all dependence between fused iterations. The outermost loop of the
two computations is fused and replaced with a pair of loops that go over the
tile fusion schedule.
Fused code also turns data reuse between fused iterations into temporal
locality while also preserving the locality of each operation. The code
versions are next to each other in the code and when they execute right after
each other, the data reuse between them turns into temporal locality. For the
first tile in the schedule shown in Figure 2, the arrays corresponding to rows
1–4 stay in the cache when the fused code in Listing 1 switches to the SpMM
loop in line 8, thus improving temporal locality. Both fused codes in Listings
1 and 3 benefit from the fact that consecutive iterations are grouped in the
scheduler. Grouping consecutive iterations eliminates the need for conditional
checking at every iteration of the computations. It also preserves the spatial
and temporal locality that exist within iterations of each operation, e.g.,
the spatial and temporal locality in GeMM remain in place inside fused tiles.
Listing 1: Fused code of GeMM-SpMM, $D=A(BC)$.
1  for (w in T){ // for each wavefront
2    #pragma omp parallel for
3    for (t in T[w]){ // for each tile
4      for (i1 in t.first) // GeMM: D1 = B*C
5        for (int i2 = 0; i2 < cCol; ++i2)
6          for (int i3 = 0; i3 < bCol; ++i3)
7            D1[i1][i2] += B[i1][i3] * C[i3][i2];
8      for (j1 in t.second) // SpMM: D = A*D1
9        for (int j2 = A.p[j1]; j2 < A.p[j1+1]; j2++)
10         for (int j3 = 0; j3 < cCol; j3++)
11           D[j1][j3] += A.x[j2] * D1[A.i[j2]][j3];
12   }}
Listing 2: $D=A(AC)$ where $A$ is CSR.
1  fuse: for (int i1 = 0; i1 < n; ++i1) // SpMM 1
2    for (int i2 = A.p[i1]; i2 < A.p[i1+1]; ++i2)
3      for (int i3 = 0; i3 < cCol; ++i3)
4        D1[i1][i3] += A.x[i2] * C[A.i[i2]][i3];
5  fuse: for (int j1 = 0; j1 < n; ++j1) // SpMM 2
6    for (int j2 = A.p[j1]; j2 < A.p[j1+1]; ++j2)
7      for (int j3 = 0; j3 < cCol; ++j3)
8        D[j1][j3] += A.x[j2] * D1[A.i[j2]][j3];
Listing 3: Fused code for SpMM-SpMM, $D=A(AC)$.
1  for (w in T){
2    #pragma omp parallel for
3    for (t in T[w]){
4      for (i1 in t.first)
5        for (int i2 = A.p[i1]; i2 < A.p[i1+1]; ++i2)
6          for (int i3 = 0; i3 < cCol; ++i3)
7            D1[i1][i3] += A.x[i2] * C[A.i[i2]][i3];
8      for (j1 in t.second)
9        for (int j2 = A.p[j1]; j2 < A.p[j1+1]; j2++)
10         for (int j3 = 0; j3 < cCol; j3++)
11           D[j1][j3] += A.x[j2] * D1[A.i[j2]][j3];
12   }}
The fused code enables thread-level parallelism by mapping fused tiles to
parallel threads. Mapping tiles to threads is done using the OpenMP scheduler as
shown in line 2 in Listing 1 and line 2 in Listing 3. The fused code will keep
all fine-grain parallelism opportunities such as vectorization that exist in
the unfused code. For example, lines 4–7 in Listing 1 are mapped to a highly
optimized GEMM BLAS to benefit from vector processors. Similarly, inner loop
vectorization is performed in lines 6 and 10 in Listing 3.
## 4. Experimental Results
This section discusses the performance of tile fusion with existing fused and
unfused implementations for sparse matrices across two different shared memory
processors. Overall, tile fusion is faster than unfused and the best fused
code, with geomean speedups of 1.98$\times$ and 3.52$\times$ respectively,
and is scalable to 40 and 64 cores.
### 4.1. Setup
Platform | CascadeLake | EPYC
---|---|---
# of sockets $\times$ cores | 2 $\times$ 20 cores | 2 $\times$ 32 cores
L1/L2/L3 Cache Sizes | 32K/1M/28M | 32K/512K/256M
Compiler | ICC 2022 | GCC v.11
BLAS | MKL BLAS 2022 (Wang et al., 2014) | BLIS (Van Zee and Van De Geijn, 2015)
Table 1. Platform details
#### 4.1.1. Environment
All experiments are executed on the multi-core processors shown in Table 1 to
evaluate the performance of tile fusion across platforms. The experiments are
run on a single node unless stated otherwise. Since both computations are used in GNN
(Zhou et al., 2020) and sparse iterative linear solvers with multiple right-
hand side (Aggarwal et al., 2021), we test tile fusion for double-precision
(DP) and single-precision (SP) data types, commonly used in machine learning
and scientific computing respectively. All experiments are run with three
different numbers of columns in $B$, namely 32, 64, and 128, to test
different sizes of $B$ and $C$. Each reported time in the paper is the
median of 7 runs. For each matrix, the theoretical FLOPs for the unfused code
is computed and used for all implementations. Parameter $cacheSize$ in
Algorithm 1 is set to the sum of L1+L2+(L3/cores) cache sizes in row 2 in
Table 1. A close thread binding is selected and each thread is pinned to a
physical core.
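As a concrete illustration of this $cacheSize$ choice (a sketch; dividing only the shared L3 by the total core count is our reading of L1+L2+(L3/cores)):

```python
KB, MB = 1024, 1024 ** 2

def cache_budget(l1, l2, l3_total, cores):
    # Per-core fast-memory budget: private L1 and L2 plus an even
    # share of the shared L3.
    return l1 + l2 + l3_total // cores

# Table 1 values: CascadeLake (2 x 20 cores) and EPYC (2 x 32 cores).
cascadelake_budget = cache_budget(32 * KB, 1 * MB, 28 * MB, 40)
epyc_budget = cache_budget(32 * KB, 512 * KB, 256 * MB, 64)
```

This per-core budget is the $cacheSize$ that the step-2 split of Algorithm 1 compares tile costs against.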
#### 4.1.2. Matrix Dataset
We select 233 matrices from SuiteSparse (Davis and Hu, 2011) matrix repository
to evaluate GeMM-SpMM and SpMM-SpMM computations. We select two groups of
matrices to represent scientific computing and machine learning applications.
I. All 132 symmetric positive definite matrices with more than $10^{5}$
nonzero values. II. All 111 square matrices related to graph applications with
more than $10^{5}$ nonzeros and either integer or real types. We define a
matrix as graph-related if a “graph” keyword is included in its metadata.
Figure 5. GeMM-SpMM performance for all matrices on CascadeLake (top) and
EPYC (bottom).
#### 4.1.3. Fused and Unfused Implementations
To compare with unfused implementations, we use OneAPI MKL (Wang et al., 2014)
library v2022. Since fused versions of GEMM or SpMM are not supported in MKL,
we called routines cblas_?gemm and mkl_sparse_?_mm for double/single precision
of GeMM and SpMM, respectively. We set the number of threads in MKL with
mkl_set_num_threads() to the number of physical cores. We also develop an
unfused parallel implementation for both GeMM-SpMM and SpMM-SpMM with the same
set of optimizations to show the effect of tile fusion.
Tile fusion is compared with existing fused implementations and some in-house
implementations of prior fusion techniques. For GeMM-SpMM, we use the
generated C++ code from TACO (Kjolstad et al., 2017) and SparseLNR (Dias et
al., 2022) tensor compilers for the expression D(i,l) = A(i,j) * B(j,k) *
C(k,l) where $A$ is sparse and other matrices are dense. We added both
generated codes and reported their best timing as Best of Tensor Compilers. We
also additionally vectorize the generated tensor compiler code by using MKL
GeMV BLAS (Wang et al., 2014) to ensure the effect of vectorization is
incorporated.
For SpMM-SpMM, tensor compilers do not support the fusion and are therefore
excluded from the benchmark. Since the code for communication-avoiding (Demmel
et al., 2008) and sparse tiling (Krieger et al., 2013) are not publicly
available, per the authors’ recommendation, we adopted the idea and applied
them to SpMM-SpMM. For communication-avoiding methods, we first equally
partition iterations of the first SpMM, and then we add all dependent
iterations to the same partitions. This implementation is called overlapped
tiling. For sparse tiling, we partition iterations of the first SpMM equally.
Then we add dependent iterations of the second SpMM to each partition to
create balanced partitions. Finally, dependences are resolved with atomics and
barriers. This implementation is named atomic tiling.
We report only the fused code execution time for all experiments operating on
matrices. The scheduler overhead is separately evaluated and illustrated.
Platform | Precision | Baseline | 32 | 64 | 128
---|---|---|---|---|---
CascadeLake | Single Precision | MKL | 1.64 | 1.41 | 1.36
CascadeLake | Single Precision | UnFused | 1.36 | 1.24 | 1.14
CascadeLake | Double Precision | MKL | 1.37 | 1.33 | 1.23
CascadeLake | Double Precision | UnFused | 1.45 | 1.34 | 1.24
EPYC | Single Precision | UnFused | 1.67 | 1.73 | 1.84
EPYC | Double Precision | UnFused | 1.81 | 1.93 | 1.97
Table 2. Summary of gmean speedups for GeMM-SpMM.
Figure 6. GeMM-SpMM performance of fused implementations on graph matrices.
### 4.2. GEMM-SpMM Evaluation
#### 4.2.1. Performance in FLOPs
Figure 5 shows the overall performance of GeMM-SpMM using tile fusion with
unfused MKL for the two architectures and three bCols. As shown, tile fusion
is faster than MKL for 90% of matrices across bCols. Table 2 shows speedup
details for GeMM-SpMM and for single and double precision for the target
architectures shown in Table 1.
The performance of tile fusion increases with bCols due to increasing
arithmetic intensity: it rises from a mean of 152 GFLOP/s when bCol=32 to 328
GFLOP/s when bCol=128, while the MKL implementation rises from 92 GFLOP/s to
241 GFLOP/s over the same range. As bCols increase, the arithmetic intensity
of fused tiles grows, and tile fusion can take advantage of it.
All implementations have a better performance for SPD matrices than graph
matrices. The reason is that the fused ratio in SPD matrices is on average 2
times higher than in graph matrices. The performance of tile fusion for single
precision is 2$\times$ better than for double precision. When operating in
double precision, the data movement increases, making the computation more
memory-bound and thus reducing GFLOP/s. Also, since the EPYC processor has a
larger L3 cache, the performance gap between tile fusion and the unfused
baseline for large matrices is larger than on the CascadeLake processor.
Tile fusion also supports fusing Equation 1 when the transpose of $C$ is to be
multiplied. Tile fusion provides geometric mean speedups of 1.49, 1.24, and
1.26 over unfused MKL on CascadeLake for bCol=cCol=32, 64, and 128,
respectively.
Figure 6 shows the performance of tile fusion compared to other fused
implementations. Tile fusion is faster than tensor compilers, atomic tiling,
and overlapped tiling with an average speedup of 9.4$\times$, 13.6$\times$,
and 3.5$\times$, respectively. Tensor compilers perform redundant computations
and also underutilize the memory hierarchy because they lower matrix
operations to vector operations.
#### 4.2.2. Ablation Study
This section analyzes the effect of tile fusion on locality and load balance
and the effect of the two steps of the tile fusion scheduler on the
performance. We select all 111 graph matrices, a subset of the matrix dataset,
for profiling and analysis. All analysis is done on the CascadeLake target
architecture.
Figure 7. The effect of average memory access time on performance of tile
fusion for GeMM-SpMM
We measure average memory access time (AMT) to analyze the effect of tile
fusion on improving locality in GeMM-SpMM. AMT is computed as AMT = hit time +
miss ratio * miss penalty across all three levels of caches in the target
architecture. We use the PAPI (Terpstra et al., 2010) performance counters
PAPI_L1_TCM, PAPI_L2_TCM, and PAPI_L3_TCM to count misses at each cache level,
from which we compute the hit and miss ratios for each level. Average memory
access times for the selected
subset of matrices are shown in Figure 7. As shown, tile fusion improves AMT
for 92% of graph matrices by 1.1–1.3$\times$ compared to the unfused
implementation, which is the main cause of its performance improvement.
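The AMT recurrence can be written out as follows, where the miss penalty of each level is the AMT of the next level down (a standard textbook formulation; in practice the latencies and ratios come from the PAPI-derived measurements, and the names here are ours):

```python
def amt(hit_times, miss_ratios, mem_penalty):
    """Average memory access time for a multi-level cache hierarchy:
    AMT_k = hit_k + miss_k * AMT_{k+1}, with main memory at the bottom.
    hit_times and miss_ratios are ordered L1, L2, L3."""
    access = mem_penalty
    for hit, miss in zip(reversed(hit_times), reversed(miss_ratios)):
        access = hit + miss * access  # fold in one level at a time
    return access
```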
Figure 8. Potential Gain of GeMM-SpMM (lower is better)
We measure the potential gain of both fused and unfused code to show the
effect of tile fusion on the load balance of GeMM-SpMM. Potential gain (PG) is
defined as the maximum time that can be saved if all threads are balanced. We
measure the average difference between the maximum time of threads and other
threads’ time. We use PAPI counter PAPI_TOT_CYC to measure the number of
cycles for each thread. Figure 8 shows the PG compared to unfused. As shown,
the tile fusion load balance is close to unfused. The unfused code has a
larger number of fine-grain tasks, enabling it to be more balanced.
Figure 9. Effect of different steps of Tile Fusion in GeMM-SpMM for graph
matrices
Figure 9 shows the performance breakdown of the two steps of the tile fusion
inspector. As shown, the first step of tile fusion improves the performance of
sequential baseline code with a gmean speedup of 6.7$\times$. The second step
of tile fusion contributes to the performance of 90% of the matrices shown in
Figure 9. The first step contributes more because it adds threading and
improves locality. The second step further balances the loads and improves the
parallel workloads of step 1. The second step selects tile sizes based on the
cost model provided in Equation 3. For the selected graph matrices, the tile
sizes selected by the second step vary between 64-2048.
Figure 10. Number of runs needed to amortize the scheduling cost for GeMM-SpMM
(lower and positive values are better)
Figure 11. SpMM-SpMM performance on CascadeLake (top) and EPYC (bottom).
Figure 12. SpMM-SpMM performance of fused implementations.
Platform | Precision | Baseline | 32 | 64 | 128
---|---|---|---|---|---
CascadeLake | Single Precision | MKL | 1.2 | 1.02 | 1.11
CascadeLake | Single Precision | UnFused | 1.17 | 1.15 | 1.14
CascadeLake | Double Precision | MKL | 1.09 | 1.16 | 1.11
CascadeLake | Double Precision | UnFused | 1.14 | 1.15 | 1.13
EPYC | Single Precision | UnFused | 1.14 | 1.17 | 1.19
EPYC | Double Precision | UnFused | 1.14 | 1.20 | 1.22
Table 3. G-mean speedups for SpMM-SpMM.
#### 4.2.3. Scheduler Overhead analysis
Tile fusion performs scheduling once per sparsity pattern, and the schedule
can be reused as long as the sparsity remains static. Figure 10 shows the number of
iterations that fused code should run to amortize the scheduler overhead with
respect to the fastest baselines. The number of fused code runs is computed as
$\frac{scheduler\,time}{baseline\,time-fusedCode\,time}$. As shown, tile fusion
needs less than 100 iterations to amortize the cost of the scheduler. In many
applications such as GNN training, GeMM-SpMM is called hundreds or thousands
of times.
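The amortization count above is a one-line computation (a sketch with our own names; it is only meaningful when the fused code beats the baseline, i.e. the denominator is positive):

```python
import math

def runs_to_amortize(scheduler_time, baseline_time, fused_time):
    """Fused-code executions needed before the one-time scheduling
    cost is recouped by the per-run saving."""
    saving = baseline_time - fused_time
    assert saving > 0, "fused code must be faster than the baseline"
    return math.ceil(scheduler_time / saving)
```

With, say, a 1 s scheduling cost and a 20 ms per-run saving, about 50 runs amortize the scheduler, consistent with the "less than 100 iterations" observation above.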
### 4.3. SpMM-SpMM Evaluation
The performance of tile fusion is compared with unfused implementations for
SpMM-SpMM, as shown in Figure 11. Tile fusion is faster than the unfused
baseline and MKL implementations for 100% and 70% of all matrices,
respectively, across every bCol we experimented with, for both SP and DP. The
detailed speedups for CascadeLake and EPYC in SP and DP are given in Table 3.
The performance of SpMM-SpMM is
overall lower than GeMM-SpMM for the same set of matrices due to the memory-
bound nature of SpMM.
Tile fusion provides a gmean speedup of 9.3$\times$, 13.2$\times$, and
13.7$\times$ over atomic tiling for bCol = 32, 64, and 128 respectively. A
similar trend exists for overlapped tiling where tile fusion provides a gmean
speedup of 5, 6.5, and 7.2 for bCols = 32, 64, and 128. The main reason is the
amount of redundant computation that increases for overlapped tiles. For
example, matrices G2_circuit and inline_1 have 126487 and 2844351 redundant
iterations, respectively, while they only have 150102 and 503712 rows.
## 5. Related Work
#### Tiling and Fusion for Sparse Codes
Loop tiling and fusion are common techniques to improve locality. A large body
of work applies these techniques to SpMM (Natesh et al., 2023; Hong et al.,
2019; Kurt et al., 2020; Ahmad et al., 2024; Cheshmi et al., 2022a; Wilkinson
et al., 2023) or GeMM (Wang et al., 2014; Kung et al., 2021) routines. Tile
fusion preserves locality between iterations of GeMM and SpMM and thus can
benefit from existing efficient GeMM and SpMM implementations. Using
data-communication cost to select tile sizes was proposed in matrix
signatures (Kurt et al., 2020) for CSR SpMM; tile fusion, however, works on
two operations and must take the common elements between them into account.
Fusing based on compile-time information is common for dense matrices.
Indirect memory accesses reduce fusion opportunities; however, opportunities
still exist. Tensor expression compilers (Mutlu et al., 2022; Dias et al.,
2022; Kjolstad et al., 2017) generate code for a tensor expression. Sparse
tensor compilers (Mutlu et al., 2022; Dias et al., 2022; Kjolstad et al.,
2017) support fusing Equation 1. However, the generated fused code from
SparseLNR (Dias et al., 2022) and TACO (Kjolstad et al., 2017) reduces the
matrix operations in Equation 1 to matrix-vector operations, which do not use
the fast memory efficiently.
Modeling parallel loops, such as consecutive matrix multiplications, as a
graph (Krieger et al., 2013; Strout et al., 2004; Demmel et al., 2008) or
hypergraph (Pawłowski et al., 2020) is commonly used to improve cache reuse.
For shared memory processors, these methods rely either on synchronization
(Krieger et al., 2013) or on overlapped computation (Demmel et al., 2008). The
fused ratio and its cost model can enhance locality and reduce overlapped
computation in these methods. Sympiler (Cheshmi, 2022; Cheshmi et al., 2017)
uses DAG schedulers (Cheshmi et al., 2018; Zarebavani et al., 2022) to build
an initial schedule of iterations and then fuses the schedule with another
loop using sparse fusion (Cheshmi et al., 2023, 2022b). The sparse fusion
scheduler is driven by the loop-carried dependencies that commonly occur in
scientific solvers (Cheshmi et al., 2020) to ensure locality; such
dependencies do not exist in matrix multiplication operations.
## 6. Conclusion
This paper presents tile fusion to enable fusing GeMM-SpMM and SpMM-SpMM for
$D=A\times B\times C$ where $A$ is sparse and $B$, $C$, and $D$ are dense.
Tile fusion has a scheduler that builds a schedule of fused tiles based on the
sparsity pattern of $A$ and the sizes of the dense matrices. The created
schedule performs no redundant computation and always requires only two
synchronizations. Tile fusion outperforms existing unfused and fused
implementations.
## References
* Aggarwal et al. (2021) Isha Aggarwal, Aditya Kashi, Pratik Nayak, Cody J Balos, Carol S Woodward, and Hartwig Anzt. 2021. Batched sparse iterative solvers for computational chemistry simulations on GPUs. In _2021 12th Workshop on Latest Advances in Scalable Algorithms for Large-Scale Systems (ScalA)_. IEEE, 35–43.
* Ahmad et al. (2024) Khalid Ahmad, Cris Cecka, Michael Garland, and Mary Hall. 2024. Exploring data layout for sparse tensor times dense matrix on GPUs. _ACM Transactions on Architecture and Code Optimization_ 21, 1 (2024), 1–20.
* Cheshmi (2022) Kazem Cheshmi. 2022. _Transforming Sparse Matrix Computations_. Ph. D. Dissertation. University of Toronto (Canada).
* Cheshmi et al. (2022a) Kazem Cheshmi, Zachary Cetinic, and Maryam Mehri Dehnavi. 2022a. Vectorizing sparse matrix computations with partially-strided codelets. In _SC22: International Conference for High Performance Computing, Networking, Storage and Analysis_. IEEE, 1–15.
* Cheshmi et al. (2017) Kazem Cheshmi, Shoaib Kamil, Michelle Mills Strout, and Maryam Mehri Dehnavi. 2017. Sympiler: transforming sparse matrix codes by decoupling symbolic analysis. In _Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis_. 1–13.
* Cheshmi et al. (2018) Kazem Cheshmi, Shoaib Kamil, Michelle Mills Strout, and Maryam Mehri Dehnavi. 2018. ParSy: inspection and transformation of sparse matrix computations for parallelism. In _SC18: International Conference for High Performance Computing, Networking, Storage and Analysis_. IEEE, 779–793.
* Cheshmi et al. (2020) Kazem Cheshmi, Danny M Kaufman, Shoaib Kamil, and Maryam Mehri Dehnavi. 2020. NASOQ: numerically accurate sparsity-oriented QP solver. _ACM Transactions on Graphics (TOG)_ 39, 4 (2020), 96–1.
* Cheshmi et al. (2023) Kazem Cheshmi, Michelle Strout, and Maryam Mehri Dehnavi. 2023. Runtime composition of iterations for fusing loop-carried sparse dependence. In _Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis_. 1–15.
* Cheshmi et al. (2022b) Kazem Cheshmi, Michelle Mills Strout, and Maryam Mehri Dehnavi. 2022b. Optimizing sparse computations jointly. In _Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming_. 459–460.
* Davis and Hu (2011) Timothy A Davis and Yifan Hu. 2011. The University of Florida sparse matrix collection. _ACM Transactions on Mathematical Software (TOMS)_ 38, 1 (2011), 1–25.
* Demmel et al. (2008) James Demmel, Mark Hoemmen, Marghoob Mohiyuddin, and Katherine Yelick. 2008. Avoiding communication in sparse matrix computations. In _2008 IEEE International Symposium on Parallel and Distributed Processing_. IEEE, 1–12.
* Dias et al. (2022) Adhitha Dias, Kirshanthan Sundararajah, Charitha Saumya, and Milind Kulkarni. 2022. SparseLNR: accelerating sparse tensor computations using loop nest restructuring. In _Proceedings of the 36th ACM International Conference on Supercomputing_. 1–14.
* Fey and Lenssen (2019) Matthias Fey and Jan Eric Lenssen. 2019. Fast graph representation learning with PyTorch Geometric. _arXiv preprint arXiv:1903.02428_ (2019).
* Hong et al. (2019) Changwan Hong, Aravind Sukumaran-Rajam, Israt Nisa, Kunal Singh, and P Sadayappan. 2019. Adaptive sparse tiling for sparse matrix multiplication. In _Proceedings of the 24th Symposium on Principles and Practice of Parallel Programming_. 300–314.
* Kipf and Welling (2016) Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. _arXiv preprint arXiv:1609.02907_ (2016).
* Kjolstad et al. (2017) Fredrik Kjolstad, Shoaib Kamil, Stephen Chou, David Lugato, and Saman Amarasinghe. 2017. The tensor algebra compiler. _Proceedings of the ACM on Programming Languages_ 1, OOPSLA (2017), 1–29.
* Krieger et al. (2013) Christopher D Krieger, Michelle Mills Strout, Catherine Olschanowsky, Andrew Stone, Stephen Guzik, Xinfeng Gao, Carlo Bertolli, Paul HJ Kelly, Gihan Mudalige, Brian Van Straalen, et al. 2013. Loop chaining: A programming abstraction for balancing locality and parallelism. In _2013 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum_. IEEE, 375–384.
* Kung et al. (2021) HT Kung, Vikas Natesh, and Andrew Sabot. 2021. Cake: matrix multiplication using constant-bandwidth blocks. In _Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis_. 1–14.
* Kurt et al. (2020) Süreyya Emre Kurt, Aravind Sukumaran-Rajam, Fabrice Rastello, and Ponnuswamy Sadayyapan. 2020. Efficient tiled sparse matrix multiplication through matrix signatures. In _SC20: International Conference for High Performance Computing, Networking, Storage and Analysis_. IEEE, 1–14.
* Mutlu et al. (2022) Erdal Mutlu, Ruiqin Tian, Bin Ren, Sriram Krishnamoorthy, Roberto Gioiosa, Jacques Pienaar, and Gokcen Kestor. 2022. COMET: A Domain-Specific Compilation of High-Performance Computational Chemistry. In _Languages and Compilers for Parallel Computing_ , Barbara Chapman and José Moreira (Eds.). Springer International Publishing, Cham, 87–103.
* Natesh et al. (2023) Vikas Natesh, Andrew Sabot, HT Kung, and Mark Ting. 2023. Rosko: Row Skipping Outer Products for Sparse Matrix Multiplication Kernels. _arXiv preprint arXiv:2307.03930_ (2023).
* O’Leary (1980) Dianne P O’Leary. 1980. The block conjugate gradient algorithm and related methods. _Linear algebra and its applications_ 29 (1980), 293–322.
* Pawłowski et al. (2020) Filip Pawłowski, Rob H Bisseling, Bora Uçar, and AN Yzelman. 2020. Combinatorial Tiling for Sparse Neural Networks. In _2020 IEEE High Performance Extreme Computing Conference (HPEC)_. IEEE, 1–7.
* Strout et al. (2004) Michelle Mills Strout, Larry Carter, Jeanne Ferrante, and Barbara Kreaseck. 2004. Sparse tiling for stationary iterative methods. _The International Journal of High Performance Computing Applications_ 18, 1 (2004), 95–113.
* Terpstra et al. (2010) Dan Terpstra, Heike Jagode, Haihang You, and Jack Dongarra. 2010. Collecting performance data with PAPI-C. In _Tools for High Performance Computing 2009: Proceedings of the 3rd International Workshop on Parallel Tools for High Performance Computing, September 2009, ZIH, Dresden_. Springer, 157–173.
* Van Zee and Van De Geijn (2015) Field G Van Zee and Robert A Van De Geijn. 2015. BLIS: A framework for rapidly instantiating BLAS functionality. _ACM Transactions on Mathematical Software (TOMS)_ 41, 3 (2015), 1–33.
* Wang et al. (2014) Endong Wang, Qing Zhang, Bo Shen, Guangyong Zhang, Xiaowei Lu, Qing Wu, Yajuan Wang, et al. 2014. Intel math kernel library. _High-Performance Computing on the Intel® Xeon Phi™: How to Fully Exploit MIC Architectures_ (2014), 167–188.
* Wang et al. (2019) Minjie Wang, Da Zheng, Zihao Ye, Quan Gan, Mufei Li, Xiang Song, Jinjing Zhou, Chao Ma, Lingfan Yu, Yu Gai, et al. 2019. Deep graph library: A graph-centric, highly-performant package for graph neural networks. _arXiv preprint arXiv:1909.01315_ (2019).
* Wilkinson et al. (2023) Lucas Wilkinson, Kazem Cheshmi, and Maryam Mehri Dehnavi. 2023. Register Tiling for Unstructured Sparsity in Neural Network Inference. _Proceedings of the ACM on Programming Languages_ 7, PLDI (2023), 1995–2020.
* Zarebavani et al. (2022) Behrooz Zarebavani, Kazem Cheshmi, Bangtian Liu, Michelle Mills Strout, and Maryam Mehri Dehnavi. 2022. HDagg: hybrid aggregation of loop-carried dependence iterations in sparse matrix computations. In _2022 IEEE International Parallel and Distributed Processing Symposium (IPDPS)_. IEEE, 1217–1227.
* Zhou et al. (2020) Jie Zhou, Ganqu Cui, Shengding Hu, Zhengyan Zhang, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2020. Graph neural networks: A review of methods and applications. _AI open_ 1 (2020), 57–81.
# The double quasar Q2138-431: detection of a lensing galaxy
M. R. S. Hawkins 1
1Institute for Astronomy (IfA), University of Edinburgh, Royal Observatory,
Blackford Hill, Edinburgh EH9 3HJ, UK
E-mail: <EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
This paper reviews the question of whether the wide separation double quasar
Q2138-431 is a gravitational lens. From early work, the two quasar images are
known to have almost identical spectra and redshifts, but no lensing galaxy
has so far been detected. In this paper we used recent deep surveys in
infrared and optical bands to search for the presence of a galaxy with the
expected properties of a gravitational lens. The search revealed a $5\sigma$
detection of a faint galaxy between the two quasar images on a deep $J$-band
frame from the VISTA Science Archive, with apparent magnitude $J=20.68$. Non-
detection in the $I$-band implied a redshift $z>0.6$, and mass modelling of
the quasar system gave a mass of $1.31\times 10^{12}M_{\odot}$ for the lensing
galaxy, with mass-to-light ratio $M_{\odot}/L_{\odot}=9.0$. Archival
photographic data from the UK 1.2m Schmidt telescope covering 25 years were
used to construct light curves for the two quasar images, which were then
cross-correlated to measure any time lag. This showed image B to lead image A
by around a year, consistent with 410 days from the mass model. Although the
similarity of the spectra and the detection of the lensing galaxy are the most
compelling arguments for the classification of Q2138-431 as a gravitational
lens, the time delay and mass-to-light ratio provide a consistent picture to
support this conclusion. The wide separation of the quasar images and the
simplicity of the mass model make Q2138-431 an excellent system for the
measurement of the Hubble constant.
###### keywords:
quasars: individual (Q2138-431) – gravitational lensing: strong
pubyear: 2021. pagerange: The double quasar Q2138-431: detection of a
lensing galaxy–References.
## 1 Introduction
The double quasar Q2138-431 was discovered as part of a photographic survey
for quasars based on optical variability (Hawkins et al., 1997). It was
detected as an elongated blue image which varied by over a magnitude in a few
years, and closer inspection showed it to comprise two star-like images
separated by about 4.5 arcsec. Subsequent analysis, which we briefly review in
Section 2, revealed that the two images were quasars with the same redshift,
$z=1.641$. This raised the possibility that the quasar images were part of a
gravitational lens system, although there was no obvious sign of a lensing
galaxy. In Fig. 1 we have constructed a composite 3-colour image using frames
from the Dark Energy Survey (DES) (Abbott et al., 2018) in the $g$, $r$ and
$i$ passbands. The two quasar images are clearly visible near the centre,
together with a sparse population of mostly red galaxies with a limiting
magnitude of $R\approx 24$.
Wide separation gravitational lenses with image separations greater than two
arcsec are rare, and have proved particularly useful in investigating dark
matter distributions (Fadely et al., 2010), and the measurement of the Hubble
constant (Eigenbrod et al., 2005; Suyu et al., 2013; Wong et al., 2017).
Should Q2138-431 be shown to be a two-component gravitational lens, it would
be among the widest separation systems known, and very well suited to mass
modelling and the measurement of a time delay for calculating the Hubble
constant. In the CASTLES database (https://www.cfa.harvard.edu/castles/) of
gravitational lenses, there are only five systems with separations greater
than 4 arcsec. Two of these are quadruple cluster lenses (Oguri et al., 2008;
Fohlmeister et al., 2008; Hawkins, 2020), which although interesting in their
own right present major difficulties for the measurement of the Hubble
constant. The remaining three are doubles, of which RXJ 0921+4529 has now been
shown to be a binary system (Popović et al., 2010). CLASS B2108+213, which was
discovered as a double lensed radio source with a smooth optical spectrum, is
tentatively identified as a BL Lac object (McKean et al., 2005, 2010). So far
the source redshift has not been measured, and the complexity of the lensing
group or cluster suggests some difficulty in measuring the value of the Hubble
constant. The final large separation lens is the well-known double Q0957+561
(Walsh et al., 1979) which has indeed proved useful for the measurement of the
Hubble constant (Fadely et al., 2010), but even here the asymmetry of the
positions of the quasar images and galaxy lenses has complicated the modelling
of the system.
In the paper reporting the discovery of the double quasar Q2138-431, Hawkins
et al. (1997) showed that the spectra of the two quasar images are almost
identical, and that the redshifts are the same to within very small errors.
This made a strong case for the two images to be gravitationally lensed, but
long CCD integrations in the $R$-band failed to reveal a lensing galaxy. For
this paper we have taken advantage of modern deep photometric surveys,
especially in the infrared, to look again for a lensing galaxy between the two
quasar images. Image subtraction of the quasar images has resulted in the
positive $5\sigma$ detection of a galaxy in the $J$-band, but no significant
detections in the $r$-, $i$- or $z$-bands. We used the photometry in these
passbands to estimate a lower limit to the redshift of the lensing galaxy, and
hence its absolute magnitude. We also used astrometric measurements of the
positions of the quasar images and lensing galaxy to fit a mass model to the
system, and obtain a value for the mass of the deflector. The resulting
parameters imply that the lens is a massive luminous red galaxy (LRG). The
survey data on which this analysis is based is described in Section 2, and the
image analysis techniques and photometry in Section 3.
For a gravitational lens system to be useful for the measurement of the Hubble
constant, it is necessary to measure time delays between the light paths to
the images. The photographic survey which resulted in the discovery of
Q2138-431 is described in more detail in Section 2, and comprised long runs of
yearly observations in several passbands between the years 1977 and 2001.
These plates yield light curves for the two quasar images which can be cross-
correlated to provide a preliminary estimate of the time lag between
variations in the two images, which we describe in Section 4. We then compare
this result with the predicted time delay from the mass model. Throughout the
paper we assume $\Omega_{M}=0.27$, $\Omega_{\Lambda}=0.73$ and $H_{0}=71$ km
s$^{-1}$ Mpc$^{-1}$ for cosmological calculations.
Figure 1: Composite 3-colour image from DES frames in $g$, $r$ and $i$
passbands, showing the galaxy environment of the two quasar images near the
centre of the frame, with the brighter image A to the right and image B to the
left. The field also includes some of the local standard stars used for the
measurement of the light curve. The frame is approximately 20 arcsec on a
side, and north is up the page, east to the left.
## 2 Observations
The double quasar Q2138-431 was discovered as part of a survey for variable
objects with an ultra-violet excess and an elongated image on photographic
plates from the 1.2m UK Schmidt telescope at Siding Spring Observatory,
Australia (Hawkins et al., 1997). The purpose of the survey was to find
gravitationally lensed quasars, and Q2138-431 was the most promising candidate
which on closer examination was seen to comprise two images separated by 4.5
arcsec. Spectroscopy of the two images was obtained with the ESO 3.6m
telescope at La Silla, Chile, which showed them to be quasars with no
detectable difference between the two high signal-to-noise spectra, as
illustrated in Figure 2 of Hawkins et al. (1997). The redshifts of the two
quasar images were the same at $z=1.641$, and cross-correlation between the
two spectra showed a difference in velocity of $0\pm 114$ km s$^{-1}$. Such a small
velocity difference is hard to account for in a random alignment or binary
pair, and strongly suggested that the quasar images were part of a
gravitational lens system. There was however no obvious evidence for a lensing
galaxy. In order to clarify the situation, deep CCD observations of the quasar
images were obtained in the $R$-band, and the area between the two quasar
images investigated using image subtraction techniques to reduce contamination
from the quasar light. No lensing galaxy was detected, with a lower limit of
$R>23.8$.
The failure to find a lensing galaxy suggested the possibility that the lens
might be a dark galaxy, an idea which was receiving some attention at that
time (Hawkins, 1997). However, the limit obtained in the $R$-band is only
useful for relatively low redshift lenses. The 4000 Å break moves out of
the $R$-band at around $z\approx 0.4$ and out of the $I$-band at $z\approx 0.8$,
making the detection of a lensing galaxy in this regime very challenging. On
this basis we moved the search into the infrared, and made use of the $J$-band
survey from the VISTA Science Archive (VSA; http://horus.roe.ac.uk/vsa/). The
archive contains image tiles which we used for the detection of the lensing
galaxy, and catalogue data which we used for calibration. The details of the
analysis are described in Section 3. In addition to the $J$-band data, we also
made use of $r$-, $i$- and $z$-band data from the Dark Energy
Survey (DES; https://des.ncsa.illinois.edu/releases/dr1) (Abbott et al.,
2018) to look for detections or upper limits in the reddest available optical
bands. Data from this survey were also used for the construction of a local
sequence to calibrate the light curves described in Section 4. The light
curves themselves were measured from an extension of the series of
photographic plates used in the discovery of Q2138-431, and the SuperCOSMOS
scans form part of the SuperCOSMOS Science Archive (SSA; http://ssa.roe.ac.uk).
The measurement and analysis of the light curves of the two quasar
images is described in detail in Section 4.
## 3 The Lensing galaxy
### 3.1 Infrared observations
The Q2138-431 system has celestial coordinates $21^{\rm h}$ $41^{\rm m}$
$16.27^{\rm s}$, $-42^{\circ}$ $57^{\prime}$ $10.0^{\prime\prime}$ (2000) and in the
VISTA Science Archive is covered by frames in the $J$ and $K_{s}$ passbands.
The frames have a pixel size of 0.341 arcsec, and median seeing FWHM = 1.18
arcsec. In Fig. 2 we show cutouts from these frames, with the double quasar at
the top, and below it a star which has been helpful in the photometric
calibration and for image subtraction. Cursory examination of the $J$-band
image in the left hand panel shows a suggestive bulge along the line of
centres of the two quasar images, which has the potential to be the lensing
galaxy. The $K_{s}$-band image in the right hand panel of Fig. 2 does not show
this feature, but the frame does not go very deep, and indeed image B of the
double quasar is barely visible. We discuss detection limits in detail below.
In order to investigate the possibility that the feature between the quasar
images is the lensing galaxy, we undertook a PSF subtraction procedure to
remove the contaminating quasar light. Attempts to use stars in the field as
models for the PSF, including the one visible in Fig. 2, were not very
successful due to difficulties in registration and poor signal-to-noise. The
best approach proved to be to use a Moffat profile (Trujillo et al., 2001) of
the form:
$P(r)=h\biggl{[}1+\Bigl{(}\frac{r}{\alpha}\Bigr{)}^{2}\biggr{]}^{-\beta}$ (1)
fitted to a star of similar magnitude to the quasar images. We first rebinned
the array to improve registration, giving the images shown in the left hand
panel of Fig. 3, and then measured the image centroids with the Starlink GAIA
package. The next step was to fit the Moffat profile to the star in Fig. 2. We
followed Trujillo et al. (2001) and set $\beta=4.765$, and then varied
$\alpha$ and $h$ to give the best fit. For the PSF subtraction we kept these
values of $\alpha$ and $\beta$ and for each of the two quasar images varied
$h$ to get the best fit. The resulting profiles $P(r)$ were then subtracted
from each image using the measured centroids. The results of this procedure
are shown in the right hand panel of Fig. 3. Contours of the subtracted quasar
images are included to show the location and shape of the galaxy image.
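As an illustration of this procedure, the Moffat fit of equation (1) can be sketched in Python. This is a reconstruction under stated assumptions, not the authors' code: the radial profile, noise level and parameter values below are synthetic, and only $\beta=4.765$ is taken from the text.

```python
import numpy as np
from scipy.optimize import curve_fit

BETA = 4.765  # fixed, following Trujillo et al. (2001)

def moffat(r, h, alpha):
    # Eq. (1): P(r) = h * [1 + (r/alpha)^2]^(-beta)
    return h * (1.0 + (r / alpha) ** 2) ** (-BETA)

# Synthetic radial profile of the PSF star (true alpha = 1.2 px, h = 1000)
rng = np.random.default_rng(1)
r = np.linspace(0.0, 8.0, 60)
star = moffat(r, 1000.0, 1.2) + rng.normal(0.0, 2.0, r.size)

# Fit h and alpha to the star; alpha (with beta) is then held fixed,
# and only h is refitted to each quasar image before subtraction.
(h_fit, alpha_fit), _ = curve_fit(moffat, r, star, p0=(800.0, 1.0))

def refit_height(profile, r, alpha):
    # Linear least squares for the height h alone, alpha and beta fixed
    shape = (1.0 + (r / alpha) ** 2) ** (-BETA)
    return float(np.dot(shape, profile) / np.dot(shape, shape))
```

Because $h$ enters linearly, the per-image refit reduces to a single least-squares projection, which keeps the PSF shape identical for the star and both quasar images.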
In order to assess the likelihood that we had a detection of the lensing
galaxy, the first step was to measure its brightness. To do this, we used
local photometric standards from the VSA catalogue, and then with photometry
routines from the GAIA package we measured the apparent magnitude of the
lensing galaxy to be $J=20.68\pm 0.09$. Based on the rms variation of the sky
background, this is a $5\sigma$ detection. The detection limit of the frame
was determined from the faintest $5\sigma$ detections listed in the VSA
catalogue, and verified by our photometry of the faintest images observed on
the frame covering the quasar position. The $5\sigma$ detection limit we
derived was $J\lesssim 20.7$.
Figure 2: Frames from the VISTA Science Archive in the $J$-band (left hand
panel) and $K_{s}$-band (right hand panel). The frames are approximately 30
arcsec on a side, and north is up the page, east to the left. The frames show
the double image of Q2138-431 at the top, with image A to the right and image
B to the left, and a nearby star used to fit the PSF below.
The right hand panel of Fig. 2 shows the quasar system on the $K_{s}$ frame
from the VSA survey. Visual inspection of the area between the two quasar
images shows no evidence for the presence of a lensing galaxy. The $K_{s}$
frame is clearly less deep than the one for the $J$-band, and the B image of
the quasar is very faint, but we found it useful to measure the detection
limit to put bounds on the colour of the lensing galaxy and its redshift.
Using the same approach as for the $J$-band we found the detection limit to be
$K_{s}\lesssim 18.7$. To estimate the $K_{s}$ magnitude of the lensing galaxy
we used the intrinsic colours for luminous red galaxies (LRGs) from Mannucci
et al. (2001). For the $J-K$ colour they give $J-K=0.87$, which implies
$K_{s}\gtrsim 19.8$, fainter than the detection limit of the $K_{s}$ frame and
consistent with non-detection in the $K_{s}$ passband. Applying $K$ and
evolutionary corrections from Poggianti (1997) up to a redshift $z\sim 1$ does
not significantly alter this.
### 3.2 Optical observations
In addition to the $J$- and $K_{s}$-band images from the VSA survey, the
Q2138-431 system was also observed as part of the Dark Energy Survey in the
$g$, $r$, $i$ and $z$ passbands (Abbott et al., 2018). These frames have a
pixel size of 0.263 arcsec with a median seeing FWHM = 0.94 arcsec, and proved
very useful in putting limits on the redshift of the lensing galaxy. To
estimate the expected magnitudes of the lensing galaxy in these passbands, we
again made use of the intrinsic colours of Mannucci et al. (2001), based on
template spectra of a large sample of galaxies. We converted the Johnson
colours of Mannucci et al. (2001) to the SDSS system using the colour
transformations of Jordi et al. (2006). These were then applied to the
observed $J$ magnitude of the lensing galaxy to obtain estimated magnitudes of
$g=23.56$, $r=22.73$, $i=22.36$ and $z=22.18$ in the galaxy rest frame.
Examination of the DES frames showed no indication of the presence of a
lensing galaxy, in agreement with the results of Hawkins et al. (1997) from
CCD frames of similar depth to the DES observations. The 95% completeness
limits in these passbands (Abbott et al., 2018) are $g<23.72$, $r<23.35$,
$i<22.88$ and $z<22.25$. In all cases the estimated rest-frame magnitudes
are within these bounds, and so the non-detections can be used to set minimum
values for the $K$ corrections, and hence the minimum redshift of the lensing
galaxy. As might be expected, the non-detection in the $i$-band provides the
strongest constraint on the redshift of the lensing galaxy. Using $K$
corrections from Poggianti (1997), the minimum redshift for the lensing galaxy
to move below the detection threshold is $z\gtrsim 0.5$.
So far, we have neglected the possible effect of an evolutionary correction in
our calculations. Evolutionary corrections can be unreliable and model
dependent (D’Souza et al., 2015), but this can be allowed for by comparing the
observed lower limit to the $i-J$ colour with the intrinsic $(i-J)_{0}$
colour. In this case, the large evolutionary corrections largely cancel out,
and the redshift can be estimated by determining the point at which the
addition of the combined $K+e$ correction to the intrinsic colour matches the
observation. We find that $(i-J)_{obs}>2.2$, and $(i-J)_{0}=1.7$ implying a
$K+e$ correction of 0.5, corresponding to a redshift of $z>0.6$. The inclusion
of the evolutionary correction thus slightly tightens the redshift limit.
A lower limit to the redshift of the lensing galaxy of around $z\gtrsim 0.6$
is not surprising, given the faintness of the $J$ magnitude, and provides an
explanation as to why the lens galaxy is not detected in optical passbands.
The lower limit on $z$ also implies a lower limit on the luminosity of the
lensing galaxy. Using the $K$ and evolutionary corrections for $z=0.6$ from
Poggianti (1997), we obtain $M_{J}=-21.02$. Applying a bolometric correction
BC = 2.13 for the VISTA $J$-band frames from Choi et al. (2016) gives an
absolute magnitude $M_{bol}=-23.15$, equivalent to a luminosity
$L_{gal}=1.45\times 10^{11}L_{\odot}$.
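The luminosity estimate follows from standard magnitude arithmetic, and can be checked directly. The short sketch below assumes a solar bolometric magnitude of 4.74 (a value not quoted in the text), so it reproduces the quoted $1.45\times 10^{11}L_{\odot}$ only approximately.

```python
# Reproducing the luminosity estimate from the absolute magnitudes in the text
M_J = -21.02          # absolute J magnitude at z = 0.6 (from the text)
BC = 2.13             # bolometric correction for VISTA J (Choi et al. 2016)
M_BOL_SUN = 4.74      # assumed solar bolometric magnitude

M_bol = M_J - BC                              # = -23.15, as in the text
L_gal = 10.0 ** ((M_BOL_SUN - M_bol) / 2.5)   # luminosity in solar units
print(f"M_bol = {M_bol:.2f}, L_gal = {L_gal:.2e} L_sun")
```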
### 3.3 Lens model
Figure 3: The left hand panel is from a $J$-band frame from the VISTA Science
Archive. The frame has been re-binned and shows image A to the right and image
B to the left. The small bulge extending to the left of image A is the
candidate lensing galaxy. In the right hand panel the quasar light has been
subtracted, clearly revealing the presence of a galaxy between the two quasar
images, the positions of which are indicated by superimposed contours. The
plots are approximately 10 arcsec on a side, and north is up the page, east to
the left.
To provide a consistent picture of the properties of the lensing galaxy, we
made use of the lens modelling software of Keeton (2001) to estimate the lens
mass. We used the GAIA astrometry package to measure the positions of the two
quasar images, obtaining a refined value for the image separation of 4.481
arcsec, and also the position of the lensing galaxy. $J$-band flux measures
for the two quasar images from the VISTA frame were also included in the lens
model. We then used routines from the software of Keeton (2001) to fit a
singular isothermal ellipsoid plus external shear $({\rm SIE}+\gamma)$ model
to the astrometric and photometric data, which are summarised in Table 1. We
obtained a good fit to the positions, and derived a value for the Einstein
radius of $\theta_{E}=2.019$ arcsec, close to that found by Hawkins et al.
(1997). We obtained $e=0.1$ for the lens ellipticity, in position angle $-10^{\circ}$
measured east of north. For the external shear we found $\gamma=0.1$ in
position angle $40^{\circ}$. The overall goodness of fit gave $\chi^{2}=12.6$.
Table 1: Summary of the observations for Q2138-431 used in the lens modelling.

| Observations | | A | B | G |
|---|---|---|---|---|
| Positions | RA (arcsec) | $4.117\pm 0.02$ | 0 | $2.815\pm 0.02$ |
| | Dec (arcsec) | $1.558\pm 0.02$ | 0 | $1.259\pm 0.02$ |
| Fluxes | $J$ (mags) | $18.24\pm 0.03$ | $19.85\pm 0.07$ | $20.68\pm 0.09$ |
The mass $m$ of the lens depends mainly on $\theta_{E}$, but is also a
function of the quasar and galaxy redshifts. To be specific,
$m=\frac{c^{2}{\theta_{E}}^{2}}{4G}\frac{D_{L}D_{S}}{D_{LS}}$ (2)
where $D_{L}$, $D_{S}$ and $D_{LS}$ are the angular diameter distances to the
lens, to the source, and from the lens to the source respectively. Using the
lower limit $z=0.6$ derived above, this gives $m_{lens}=1.31\times
10^{12}M_{\odot}$. Combining this limit with the luminosity limit
$L_{gal}=1.45\times 10^{11}L_{\odot}$ derived above gives a mass-to-light
ratio $M_{\odot}/L_{\odot}=9.0$. To give an idea of the sensitivity of this
result to redshift, if we assume $z=1.0$ then $m_{lens}=2.88\times
10^{12}M_{\odot}$, $L_{gal}=3.53\times 10^{11}L_{\odot}$ and
$M_{\odot}/L_{\odot}=8.2$.
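The lens mass follows from the Einstein radius via $m=(c^{2}\theta_{E}^{2}/4G)\,D_{L}D_{S}/D_{LS}$, and can be checked numerically with the cosmology assumed in this paper ($\Omega_{M}=0.27$, $\Omega_{\Lambda}=0.73$, $H_{0}=71$ km s$^{-1}$ Mpc$^{-1}$). The sketch below uses a simple trapezoidal integration for the comoving distance; it is illustrative rather than the modelling code, and reproduces the quoted $\sim 1.3\times 10^{12}M_{\odot}$ for a lens at $z=0.6$.

```python
import math

# Cosmology assumed throughout the paper: flat LCDM
C_KMS = 299792.458          # speed of light, km/s
H0 = 71.0                   # Hubble constant, km/s/Mpc
OM, OL = 0.27, 0.73

def E(z):
    return math.sqrt(OM * (1.0 + z) ** 3 + OL)

def comoving_distance(z, n=2000):
    # Trapezoidal integration of (c/H0) * integral dz'/E(z'), in Mpc
    if z == 0.0:
        return 0.0
    h = z / n
    s = 0.5 * (1.0 / E(0.0) + 1.0 / E(z))
    for i in range(1, n):
        s += 1.0 / E(i * h)
    return (C_KMS / H0) * s * h

def angular_diameter_distance(z1, z2):
    # Flat universe: D_A(z1, z2) = (D_C(z2) - D_C(z1)) / (1 + z2), in Mpc
    return (comoving_distance(z2) - comoving_distance(z1)) / (1.0 + z2)

# Q2138-431: Einstein radius 2.019 arcsec, source z = 1.641, lens z = 0.6
z_l, z_s = 0.6, 1.641
theta_E = 2.019 * math.pi / (180.0 * 3600.0)   # radians

D_l = angular_diameter_distance(0.0, z_l)
D_s = angular_diameter_distance(0.0, z_s)
D_ls = angular_diameter_distance(z_l, z_s)

G = 6.674e-11               # m^3 kg^-1 s^-2
C_MS = 2.99792458e8         # m/s
MPC_M = 3.0857e22           # metres per Mpc
M_SUN = 1.989e30            # kg

# m = (c^2 theta_E^2 / 4G) * D_L D_S / D_LS, distances converted to metres
m_lens = (C_MS ** 2 * theta_E ** 2 / (4.0 * G)) * (D_l * D_s / D_ls) * MPC_M / M_SUN
print(f"m_lens = {m_lens:.2e} M_sun")
```

Raising the lens redshift increases $D_{L}D_{S}/D_{LS}$, which is why the quoted mass grows to $2.88\times 10^{12}M_{\odot}$ at $z=1.0$.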
## 4 Light curves
The discovery of Q2138-431 was part of a long term survey to detect quasars on
the basis of their variability (Hawkins, 1996). The survey was based on an
extensive sequence of photographic plates in several colours taken with the UK
1.2m Schmidt Telescope of the ESO/SERC field 287, centred on $21^{\rm h}$
$28^{\rm m}$, $-45^{\circ}$ (1950). Of particular interest for the detection
of quasars is the $B_{J}$ passband, bounded by the GG395 filter in the blue,
and the IIIaJ emulsion cut-off in the red at about 5400 Å, and quite close to
the SDSS $g$-band. The plates were originally measured with the COSMOS
measuring machine at the Royal Observatory, Edinburgh (MacGillivray & Stobie,
1984), which after photometric calibration produced light curves covering some
16 years. Although these light curves were sufficient to detect a large number
of quasars from their variability, including Q2138-431, the machine measures
used a low detection threshold above the sky, which merged the two quasar
images. This meant that the light curves of the two images could not be
measured separately. Since the discovery of Q2138-431, the original sequence
of plates in the $B_{J}$ passband has been extended to include at least one
plate, and usually 4 or more, every year between 1977 and 2001. These have now
all been measured with the SuperCOSMOS measuring machine (Hambly et al.,
2001), and the scans and catalogues form part of the SuperCOSMOS Science
Archive.
Although the SuperCOSMOS measures are superior in many ways to the earlier
COSMOS measures, the SSA catalogues are also based on measures with a low
detection threshold in order to maximize the depth of the survey. This again
has the effect that the double quasar images are not resolved, and so we
retrieved the original digitized mapping mode scans with a view to using more
sophisticated photometric routines to measure the two images separately. Fig.
4 shows cut-outs of the double quasar images from two of the plates from the
quasar variability survey. The exposures were taken in 1977 and 1986, and
illustrate the change in magnitude difference between the two images over 9
years.
To measure the light curves we first extracted a square array from the
SuperCOSMOS scan of each plate of side 8 arcmin and centred on Q2138-431. We
then selected 18 stars in this field, spanning the magnitudes of the two
quasar images, with which to construct a local photometric sequence. These
stars were then identified in the $g$-band frame of the area from the DES
survey. Taking advantage of the linearity of the CCD frames as opposed to the
non-linear reciprocity failure of the photographic plates, we used the GAIA
photometric package to measure the magnitudes of the sequence stars. The DES
$g$-band and the photographic $B_{J}$-band are very similar, with an effective
wavelength of 4670 Å, and we could detect no significant colour term. On this
basis we used the $g$-band measures of the sequence stars to directly
calibrate the $B_{J}$ photographic images.
Figure 4: The two frames are from photographic plates taken by the UK 1.2m
Schmidt telescope, and scanned by the SuperCOSMOS measuring machine at the
Royal Observatory, Edinburgh. They illustrate the material on which the quasar
light curves are based, and the change in magnitude difference between the two
quasar images in the centre, over a period of 9 years. Image A to the right
has decreased in brightness by about 0.5 magnitudes relative to image B on the
left. Also included near the bottom of the plot is a standard star. The plots
are approximately 40 arcsec on a side, and north is up the page, east to the
left. Figure 5: The top panel shows a 25 year light curve for image A (red)
and image B (blue) of the double quasar Q2138-431, from SuperCOSMOS measures
of a long series of UK 1.2m Schmidt plates in the $B_{J}$ passband. The middle
panel shows the same data, binned in yearly intervals. The bottom panel shows
the difference light curve for the A and B images, with the observed lag of 1
year for image A removed.
The first stage in the calibration was to transform the transmission values of
the SuperCOSMOS scans to $I$, the intensity of light incident on the
photographic plate, using the relation:
$I\propto\biggl{(}{\frac{T_{c}}{T}}\biggr{)}^{1/\gamma}$ (3)
where $T$ is the transmission through exposed emulsion, and $T_{c}$ through
clear plate. $\gamma$ is the gradient of the linear part of the response curve
which in the first instance we set to 2.5. However, $\gamma$ is basically a
scaling factor which becomes redundant with the sequence star calibration.
The next step was to measure the magnitudes of the 18 sequence stars and two
quasar images on each of the 79 plate scans from the SuperCOSMOS archive which
were available for incorporation into the light curves. To do this, the
transmission values were transformed to intensity using equation (3), and the
resulting arrays converted to FITS files for analysis with the GAIA photometry
package. Then for each plate the sequence star and quasar images were measured
with the aperture photometry routine to give a quasi-linear relative flux, and
thence converted to instrumental magnitudes. Using an aperture of radius 2
arcsec, the two quasar images were easily resolved on all plates. The final
step was to transform these instrumental magnitudes to the sequence star
magnitudes using a least squares third order polynomial fit. This
transformation was then used to evaluate the true magnitudes of the two quasar
images in the $B_{J}$ passband.
The top panel of Fig. 5 shows the light curves for the two quasar images from
1977 to 2001. The error bars are derived from the dispersion about the
calibration curve, and are typically about 0.05 magnitudes, with little
dependence on brightness. The long term trend of the light curves is better
seen in the middle panel of Fig. 5 where the observations for each year are
averaged with a weighted mean. Little is lost by this procedure, as in most
years the plates were taken within a period of about three months, and are not
useful for measuring changes within the year. It will be seen that until 1985
the difference between the two light curves averaged around 1.5 magnitudes.
After this image A dimmed, and the difference decreased by about 0.3
magnitudes. In the event that the double quasar is a gravitational lens
system, we would expect intrinsic variations in the quasar to show up in both
light curves, separated by a time interval corresponding to the difference in
light travel time to the two quasar images. In addition, it is clear from
large scale monitoring programmes of gravitational lenses that in most lensed
systems the quasar images also vary independently of each other, which is
generally accepted to be the result of microlensing by stellar mass bodies
along the line of sight (Tewes et al., 2013). To look for a time lag between
variations in the two images, and minimise the effects of microlensing, we
split the light curves into two sections defined by the larger and smaller
average magnitude differences, and measured the correlation coefficient for
different time lags. The results are displayed in Table 2, and show for both
time intervals a strong correlation between the two light curves when image B
leads image A by one year. Given the poor time resolution of the light curves,
this result should not be taken too seriously, but the existence of a time lag
provides confirmation that Q2138-431 is a gravitational lens, and is a
starting point for a more precise measurement.
Table 2: Correlation coefficients for time lags between images A and B of Q2138-431.

| B leads A by (years) | 2 | 1 | 0 | -1 | -2 |
|---|---|---|---|---|---|
| 1977 – 1985 | +0.04 | +0.70 | +0.04 | -0.69 | -0.59 |
| 1986 – 1994 | -0.20 | +0.61 | +0.31 | +0.04 | +0.18 |
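The lag search amounts to computing Pearson correlation coefficients between the yearly-binned light curves at integer offsets. A minimal sketch follows, using synthetic light curves (not the measured data) in which image B leads image A by one year:

```python
import numpy as np

def lag_correlation(a, b, lag):
    """Correlation of light curves a (image A) and b (image B), with
    image B leading image A by `lag` yearly bins (negative: B trails)."""
    if lag > 0:
        x, y = a[lag:], b[:-lag]
    elif lag < 0:
        x, y = a[:lag], b[-lag:]
    else:
        x, y = a, b
    return float(np.corrcoef(x, y)[0, 1])

# Synthetic yearly-binned magnitudes: A repeats B's variations a year later
rng = np.random.default_rng(0)
b = rng.normal(0.0, 0.3, 12)            # intrinsic variability of image B
a = np.roll(b, 1)                        # image A lags by one year
a[0] = b[0]
a = a + rng.normal(0.0, 0.02, a.size)    # photometric noise

corrs = {lag: lag_correlation(a, b, lag) for lag in (-2, -1, 0, 1, 2)}
best_lag = max(corrs, key=corrs.get)
print(best_lag)   # expect 1: B leads A by one year
```

With only yearly sampling the lag is constrained to the nearest year at best, which is why the text treats the one-year result as a starting point rather than a measurement.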
The bottom panel of Fig. 5 shows the difference between the two light curves
in the middle panel, with the curve for image B shifted back one year. The
zero-point for the difference curve can be calculated from the magnitude
difference of the two images in the infrared, where due to the size of the
emitting region the effects of microlensing are negligible (Blackburne et al.,
2011). From the $J$-band frames used for the detection of the lensing galaxy
we find for the magnitude difference $J_{\rm A}-J_{\rm B}=-1.77$, which we use
as the zero-point for the difference curve in the bottom panel of Fig. 5. This
now shows the true differential effect of microlensing.
## 5 Discussion
The idea behind this paper is to review the status of the double quasar
Q2138-431 by taking advantage of improved observational data since its
discovery in 1997. At the time the case for classifying it as a gravitational
lens was very strong. The spectra were almost identical, as illustrated in
Figures 1 and 2 of Hawkins et al. (1997), and for two randomly selected
quasars this is very unlikely as illustrated in Figure 4 of Hawkins et al.
(1997). There is of course the possibility that two quasars evolving in the
same cluster environment could acquire the same observational characteristics,
but the actual mechanism by which this could happen consistent with our
knowledge of cluster evolution is far from clear. The one sticking point in
the way of classifying the system as a gravitational lens was the failure to
find a lensing galaxy.
Attempts to find a lensing galaxy by Hawkins et al. (1997) were mainly
focussed on deep CCD observations in the $R$-band. They also reported
observations in the $K$-band, but their frame is of a similar depth to the
$K_{s}$-band frame in Fig. 2, and barely detects image B of the quasar. Their
$R$-band frame goes much deeper, and they derive a limit to the lensing galaxy
brightness of $R>23.8$. With this faint upper limit to the brightness and an
estimate of the lens mass they derive a very large lower limit to the mass-to-
light ratio of any lensing galaxy. Their lens mass is similar to the one we
derive from mass modelling, but for reasons which are not clear Hawkins et al.
(1997) appear to assume zero redshift luminosities, and do not take into
account the very large $K$ and evolutionary corrections associated with a
distant galaxy. When these are allowed for, it seems unlikely that the galaxy
we have detected in the $J$-band would have been detected by Hawkins et al.
(1997) in their $R$-band frame for redshifts $z\gtrsim 0.6$.
Based on measurements of random fluctuations in the sky background, the object
revealed after PSF subtraction in Fig. 3 is a $5\sigma$ detection. Its
appearance is similar to other faint galaxies in the surrounding 10 arcmin
field, where the presence of an optical counterpart confirms their identity as
galaxies. A good example is the faint galaxy to the south-east of the quasar
system, used by Hawkins et al. (1997) to put limits on the $R$-band magnitude
of a lensing galaxy. This galaxy is also clearly visible on deep $r$\- and
$i$-band frames from the DES survey, and is detected near the limit of the
$J$-band frame in Fig. 3. There is in addition some indication of other faint
galaxies in the vicinity of image A of the quasar. We have also searched the
field for artefacts which might be mistaken for faint galaxies, but these are
confined to single pixels, presumably cosmic ray tracks.
The value obtained for the mass-to-light ratio of the lens in Section 3 of
$M_{\odot}/L_{\odot}=9.0$ assumes a redshift of $z=0.6$ based on non-detection
of the lens in the $i$-band. As this redshift is essentially a lower limit, we
investigated the effect of increasing it to $z=1.0$ and derived
$M_{\odot}/L_{\odot}=8.2$ for the lensing galaxy. This small change in
$M_{\odot}/L_{\odot}$ is due to the fact that an increase in redshift results
in larger values for both luminosity and mass which tend to cancel out in the
mass-to-light ratio. Our value of $M_{\odot}/L_{\odot}$ lies quite close to
the relation between mass and mass-to-light found by Cappellari et al. (2006),
and thus adds to a consistent picture of the double quasar as a gravitational
lens.
The mass modelling of Q2138-431 turned out to be straightforward. This was not
unexpected, as the lensing galaxy lies close to the line of centres of the
quasar images, and roughly in the position one would expect given their
brightness ratio. This contrasts strongly with for example the asymmetric
Q0957+561 (Fadely et al., 2010), and more closely resembles HE1104-1805 (Sluse
et al., 2012), another wide separation double. This simplicity of modelling is
important if Q2138-431 is to be used for measuring the Hubble constant, where
the accuracy with which light travel time to the two images can be modelled is
the most important factor limiting the accuracy of the result.
The software package of Keeton (2001) allows for the estimation of the time
delay between two images of the modelled system, assuming a value for the
Hubble constant. We applied this procedure to our model of Q2138-431 using a
canonical value $H_{0}=71$ km s$^{-1}$ Mpc$^{-1}$ which implied a time lag with image B
leading image A by 410 days. This is consistent with our crude estimate of 1
year from yearly observations, and hopefully can provide a starting point for
a well sampled photometric monitoring programme designed to measure the value
of $H_{0}$. For this calculation we assumed a lens redshift $z=0.6$, equal to
the lower limit derived in Section 3. Increasing the lens redshift results in
a larger predicted time lag. For example a lens redshift of $z=1.0$ implies a
time lag of 1130 days, which suggests that the true lens redshift is not much
greater than our lower limit of $z\gtrsim 0.6$.
In the discovery paper of the double quasar Q2138-431 (Hawkins et al., 1997),
the strongest evidence supporting the gravitational lens hypothesis was the
close similarity of the spectra of the two quasar images, and the very small
difference between their redshifts. However, the authors concluded that
without the detection of a lensing galaxy the claim that the system was a
gravitational lens was insecure. In this paper we have brought together a
number of new lines of argument to support the classification of Q2138-431 as
a gravitational lens. The detection of the lensing galaxy is perhaps the most
conclusive, but the simplicity of the mass model, the derivation of a
plausible mass-to-light ratio, and the detection of a time lag between the
light curves of the two quasar images in agreement with the mass model help to
provide a consistent picture of a gravitational lens system. The measurement
of the redshift of the lensing galaxy and the accurate determination of the
time lag between the two images should then provide an excellent basis for a
reliable estimate of the value of the Hubble constant.
## 6 Conclusions
In this paper we have re-examined the question of whether the double quasar
Q2138-431 is a gravitational lens system. The system was discovered as part of
a survey for quasars based on their variability, and elongated images were
included as possible detections of gravitational lenses. Early analysis of
Q2138-431 (Hawkins et al., 1997) showed it to comprise two quasar images
separated by 4.5 arcsec, with almost identical spectra and redshifts. This
provided strong evidence for a gravitational lens, but deep CCD photometry in
the $R$-band failed to reveal a lensing galaxy. The authors thus concluded
that there was insufficient evidence to definitively identify the system as a
gravitational lens.
With the advent of more recent deep photometric surveys, especially in the
infrared, we searched again for a lensing galaxy. In this case we successfully
detected a candidate lens on a deep $J$-band frame from the VISTA Science
Archive. The apparent magnitude of the $5\sigma$ detection was measured to be
$J=20.68$, and non-detection in the optical bands implied a redshift $z\gtrsim
0.6$.
The wide separation of the quasar images at 4.481 arcsec and the apparent
simplicity of the lens system make Q2138-431 an attractive candidate for the
measurement of the Hubble constant, which would require mass modelling of the
system. Based on our measurements of the positions of the quasar images and
lensing galaxy we were able to obtain a satisfactory fit with a ${\rm
SIE}+\gamma$ model, implying an estimated mass for the lens of
$m_{lens}=1.31\times 10^{12}M_{\odot}$. Combining this with the infrared
photometry in the $J$-band gives a mass-to-light ratio
$M_{\odot}/L_{\odot}=9.0$ for the lensing galaxy.
The estimation of the Hubble constant in a gravitational lens system also
requires a measurement of the time delay or difference in light travel time
between the quasar images. We were able to make a preliminary assessment of
this from an archival photographic monitoring programme covering 21 years.
Although the effective time resolution of the survey was only about 1 year, we
were able to show by cross-correlating the light curves of the two quasar
images that image B leads image A by about a year. It was possible to estimate
from our mass model that the expected time delay was about 410 days,
consistent with the rough value from the light curve analysis.
The overall conclusion of this paper is that the double quasar Q2138-431 is
confirmed as a gravitational lens system. The close similarity of the spectra,
the detection of the lensing galaxy, the plausible mass-to-light ratio, and
the measurement of a time delay between the two images in agreement with
predictions from the mass model provide a consistent picture of a
gravitational lens. Q2138-431 appears to be a system well suited for the
measurement of the Hubble constant. The wide separation of the quasar images
should make measuring the time delay from photometry of the light curves
straightforward, and the simplicity of the system should enable accurate
modelling to estimate the value of the Hubble constant.
## Acknowledgements
I am most grateful to Nigel Hambly for retrieving the SuperCOSMOS scans used
for constructing the light curves.
## Data Availability
The data upon which this paper is based are all publicly available and are
referenced in Section 2, with some additional comments in the text.
## References
* Abbott et al. (2018) Abbott T.M.C. et al., 2018, ApJS, 239, 18
* Blackburne et al. (2011) Blackburne J.A., Pooley D., Rappaport S., Schechter P.L., 2011, ApJ, 729, 34
* Cappellari et al. (2006) Cappellari M. et al., 2006, MNRAS, 366, 1126
* Choi et al. (2016) Choi J., Dotter A., Conroy C., Cantiello M., Paxton B., Johnson B.D., 2016, ApJ, 823, 102
* D’Souza et al. (2015) D’Souza R., Vegetti S., Kauffmann G., 2015, MNRAS, 454, 4027
* Eigenbrod et al. (2005) Eigenbrod A. et al., 2005, A&A, 436, 25
* Fadely et al. (2010) Fadely R., Keeton C.R., Nakajima R., Bernstein G.M., 2010, ApJ, 711, 246
* Fohlmeister et al. (2008) Fohlmeister J. et al., 2008, ApJ, 676, 761
* Hambly et al. (2001) Hambly N.C., Irwin M.J., MacGillivray H.T., 2001, MNRAS, 326, 1295
* Hawkins (1996) Hawkins M.R.S., 1996, MNRAS, 415, 2744
* Hawkins (1997) Hawkins M.R.S., 1997, A&A, 328, L25
* Hawkins (2020) Hawkins M.R.S., 2020, A&A, 643, A10
* Hawkins et al. (1997) Hawkins M.R.S, Clements D., Fried J.W., Heavens A.F., Véron P., Minty E.M., van der Werf P., 1997, MNRAS, 291, 811
* Jordi et al. (2006) Jordi K., Grebel E.K., Ammon K., 2006, A&A, 460, 339
* Keeton (2001) Keeton C.R., 2001, arXiv:astro-ph/0102340
* MacGillivray & Stobie (1984) MacGillivray H.T., Stobie R.S., 1984, Vistas Astron., 27, 433
* Mannucci et al. (2001) Mannucci F. et al., 2001, MNRAS, 326, 745
* McKean et al. (2005) McKean J.P. et al., 2005, MNRAS, 356, 1009
* McKean et al. (2010) McKean J.P. et al., 2010, MNRAS, 404, 749
* Oguri et al. (2008) Oguri M. et al., 2008, ApJ, 676, L1
* Poggianti (1997) Poggianti B.M., 1997, A&AS, 122, 399
* Popović et al. (2010) Popović L.C̆. et al., 2010, ApJ, 721, L139
* Sluse et al. (2012) Sluse D., Chantry V., Magain P., Courbin F., Meylan G., 2012, A&A, 538, A99
* Suyu et al. (2013) Suyu S.H. et al., 2013, ApJ, 766, 70
* Tewes et al. (2013) Tewes M., Courbin F., Meylan G., 2013, A&A, 553, A120
* Trujillo et al. (2001) Trujillo I., Aguerri J.A.L., Cepa J., Gutiérrez C.M., 2001, MNRAS, 328, 977
* Walsh et al. (1979) Walsh D., Carswell R.F., Weymann R.J., 1979, Nature, 279, 381
* Wong et al. (2017) Wong K.C. et al., 2017, MNRAS, 465, 4895
# Security of One-Way Entanglement Purification with Quantum Sampling Against
a Restricted Adversary
Cameron <EMAIL_ADDRESS>, Physics Department, University of
Connecticut, Storrs, CT 06269, USA
Entanglement purification protocols promise to play a critical role in the
future of quantum networks by distributing entanglement across noisy channels.
However, only the security of two-way purification protocols has been closely
studied. To address this, we propose a one-way entanglement purification
protocol which utilizes quantum sampling and prove its security against an
adversary restricted to single qubit Pauli gates. This is done through
leveraging the equivalence of one-way entanglement purification protocols with
error-correcting codes. To prove the security of this protocol, we first use
the quantum sampling framework introduced by Bouman and Fehr to estimate the
Hamming weight of the qubits which passed through the channel and then use the
estimated relative Hamming weight $\omega$ to determine the amount of
interference that Eve has introduced into the quantum channel. Since Eve is
restricted to single qubit Pauli gates, the number of applied gates can be
directly estimated using the Hamming weight. Estimating the number of
adversarial single qubit gates allows us to perform error correction and
disentangle the logical qubit from Eve with probability
$1-\epsilon_{qu}^{\delta}$. Since this protocol allows communication only in
one direction, the distance of the code must be decided before transmission,
and therefore Bob will be forced to abort the protocol if he finds that Eve
has applied more gates than the code can correct. One-way protocols may find
use when communication is limited, or when we desire to decrease latency
compared to the multiple rounds of communication needed in two-way protocols.
Further research may investigate the security of this protocol against
arbitrary single or multi-qubit gates to obtain security guarantees against a
more general adversary.
## 1 Introduction
Entanglement is a novel computational resource in quantum information science
with no similar classical resource. This resource has found numerous uses
throughout quantum information science. Two parties with entangled qubits can
transmit perfectly secure quantum information through quantum teleportation
[1, 2] or perfectly secure classical information through superdense coding
[3]. Parties with access to entanglement can use quantum correlations to
succeed at games such as “magic squares” more than is classically possible
[4]. Quantum entanglement has also found use in quantum processes such as
distributed quantum computation [5] and quantum cookies [6]. Motivated by
these uses, the topic of how to securely distribute quantum states has
recently gained interest, including through investigation of quantum secure
direct communication [7, 8, 9] and secure quantum dialogues [10, 11].
In contrast to the quantum secure direct communication and quantum dialogue
protocols above, which attempt to distribute entanglement through Bell pairs,
the protocol proposed in this paper will attempt to distribute entanglement
through error correcting codes. Due to this focus on distributing
entanglement, we will allow Alice and Bob the classical resource of a shared
secret key prior to the start of the protocol. In this protocol, we will also
utilize entanglement purification protocols, which were historically brought
about to distill high fidelity entangled quantum states travelling
through a noisy channel [12]. However, since an eavesdropper can be viewed as
a source of noise in the quantum channel, entanglement purification protocols
also work to remove the effect of an adversary on the transmitted message.
Entanglement purification protocols have been investigated for use in high
fidelity quantum communication [13], and previously explored two-way protocols
promise strong security for transferring quantum information [14].
### 1.1 Entanglement Purification Protocols
There are two different classes of entanglement purification protocols, one-
way and two-way protocols [6]. Two-way entanglement purification protocols
allow for communication between Alice and Bob after qubits pass through the
quantum channel. Since Alice and Bob can conditionally apply gates to their
systems based off each other’s measurement results, two-way entanglement
purification protocols are in general stronger than one-way protocols. For
example, two-way purification protocols can allow for Alice and Bob to purify
their qubits from a $50\%$ depolarizing channel, which one-way protocols
cannot correct [6]. Two-way purification protocols have been previously proven
secure through demonstrating that a two-way protocol can disentangle the
purified system from an external system or eavesdropper [14]. This raises the
question of whether one-way protocols can similarly be proven secure.
In comparison to two-way protocols, one-way entanglement purification
protocols do not allow for communication between parties after the message is
sent. Restriction to one-way communication incidentally makes these protocols
equivalent to quantum error correction since Alice and Bob can be time-like
separated [6]. Due to this, we can use quantum error-correcting codes to
evaluate the security of one-way entanglement purification protocols [15]. The
correspondence between one-way entanglement purification protocols and error-
correcting codes has a prior basis in the literature, as it has previously
been used by Shor and Preskill to prove the BB84 QKD protocol secure [16].
Investigating the security of one-way protocols may be useful in scenarios
where communication between two parties is limited, or when we desire to
decrease the latency of quantum communication compared to the multiple rounds
of classical communication needed between Alice and Bob in two-way protocols.
In one-way entanglement purification, Bob is forbidden from sending messages
to Alice. This presents a problem for quantum sampling, as this restriction
requires Bob to know the states in which the sampling qubits were prepared in
order to perform sampling and determine the Hamming weight of the sampling
qubits. If Alice naively announces the prepared sampling states over a
classical channel, then Eve can simply intercept all the qubits and use this
information to resend identical qubits to Bob, circumventing the quantum
sampling procedure. Fundamentally, this problem occurs because in one-way
protocols the actors Bob and Eve are symmetric [6]. To solve this problem, we
will assume that Alice and Bob share a secret classical key, breaking this
symmetry.
A shared classical key will allow Alice to send a secure classical message to
Bob. This message will contain many critical pieces of information for the
protocol, including the prepared sampling states, the permutation which Alice
applied to the transmitted qubits, and the distance of the error-correcting
code which Alice employed. At this point in the protocol, Bob will be able to
perform quantum sampling and estimate the relative Hamming weight $\omega$ of
Eve’s attack. Bob will then be able to use the relative Hamming weight to
estimate the number of gates Eve has applied to the logical qubit, if we
assume Eve is restricted to single qubit Pauli gates. Finally, Bob can
calculate if the error-correcting code distance is large enough to disentangle
Eve. If the code is sufficiently large, then Bob performs error correction and
keeps the resulting logical qubit. Otherwise, the code distance is too small
and Bob aborts.
## 2 Sampling
Before presenting the protocol we should first familiarize ourselves with
quantum sampling. The quantum sampling framework used here was introduced by
Bouman and Fehr [17], and we refer to their paper for a more rigorous
introduction. This section is intended to be a broad overview of the framework
put forth there.
Generically, sampling allows for an individual to learn information about a
population through measuring a subset of that population. Classical sampling
has been well studied [18]. However, due to entanglement in quantum systems it
is not obvious how classical sampling strategies could be extended to
quantum systems. Bouman and Fehr’s quantum sampling framework addresses this
by allowing for classical sampling strategies to be used in quantum systems.
The sampling framework outputs an estimate of the Hamming weight of a quantum
system. The definition of the Hamming weight is extended in this framework to
entangled states and states in superposition [17].
From Bouman and Fehr’s quantum sampling framework we will be interested in two
quantities: the estimated relative Hamming weight $\omega$ of the message
qubits and the error bound of the quantum sampling process
$\epsilon_{qu}^{\delta}$. In Sections 5 and 6 we will use the relative Hamming
weight $\omega$ to estimate the number of gates applied by an eavesdropper,
and we will use the error bound $\epsilon_{qu}^{\delta}$ to estimate the
probability that the sampling strategy has failed and Eve has gained access to
the transmitted logical qubit without detection.
### 2.1 Classical Sampling
As an introduction to sampling, let us consider a classical string
$q=(q_{1},q_{2},...q_{n})\in\\{0,1\\}^{n}$. The Hamming weight of this string
is defined as $wt(q)=|\\{i|q_{i}\neq 0\\}|$, or in simpler terms, the Hamming
weight is the number of nonzero characters in this string. The role of
sampling is to use a substring $q_{t}$ to estimate the Hamming weight of the
remaining string $q_{\bar{t}}$. The sampling strategy employed will select a
completely random subset of $q$ to determine $q_{t}$. With the sampled
substring $q_{t}$, the relative Hamming weight
$\omega(q_{t})=\frac{wt(q_{t})}{N}$ will be used as an estimate of the
relative Hamming weight of the remaining string
$\omega(q_{\bar{t}})=\frac{wt(q_{\overline{t}})}{M}$, where $N$ is the number
of sampling qubits and $M$ is the number of message qubits.
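These definitions can be made concrete with a short sketch; the example string and sampled positions below are arbitrary choices for illustration (in the protocol the sample is drawn uniformly at random).

```python
def wt(q):
    """Hamming weight: the number of nonzero characters in the string."""
    return sum(1 for c in q if c != "0")

q = "0110100101"                 # example string with wt(q) = 5
t = [1, 4, 5, 8]                 # sampled positions (random in practice)
q_t = "".join(q[i] for i in t)   # the sampled substring
q_rest = "".join(q[i] for i in range(len(q)) if i not in t)

omega_t = wt(q_t) / len(q_t)         # the estimate
omega_rest = wt(q_rest) / len(q_rest)  # the quantity being estimated
```

Here the sampled relative weight happens to match the remaining string's relative weight exactly; in general they agree only to within some tolerance $\delta$, which is what the failure bound below quantifies.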
To find the failure probability of the sampling procedure, we will start by
considering the set of all substrings that would output a relative Hamming
weight which is $\delta$-close to the true relative Hamming weight.
$B_{t}^{\delta}=\\{\textbf{q}\in\mathcal{A}^{n}:\absolutevalue{\omega(q_{\overline{t}})-\omega(q_{t})}<\delta\\}$
(1)
This is the set of all substrings for which the sampling procedure will
succeed. Through considering this set, it is clear that the probability that
the sampling procedure fails, producing an estimate further than $\delta$ from
the true Hamming weight, is equal to the probability that the string is
not in the set $B_{t}^{\delta}$.
$\epsilon_{cl}^{\delta}=\max_{q\in\mathcal{A}^{n}}Pr[q\notin B_{t}^{\delta}]$
(2)
Assuming we are randomly sampling $k$ entries [17], we find,
$\epsilon_{cl}^{\delta}<4\exp{-\frac{1}{3}\delta^{2}k}$ (3)
Therefore, classical sampling is able to estimate the relative Hamming weight
$\omega(q_{t})$ of a string which is $\delta$-close to the true relative
Hamming weight $\omega(q_{\bar{t}})$, with probability
$1-\epsilon_{cl}^{\delta}$. From this point on we will simply refer to the
estimated relative Hamming weight as $\omega$.
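The bound of Equation 3 can be checked empirically with a small Monte Carlo sketch; the string length, sample size, $\delta$, and true weight below are arbitrary choices for illustration.

```python
import math
import random

n, k, delta = 20000, 2000, 0.1  # string length, sample size, tolerance
weight = 0.3                     # true relative Hamming weight of the test string

def sampling_fails():
    """One trial: does the sampled estimate miss the rest by delta or more?"""
    q = [1] * int(n * weight) + [0] * (n - int(n * weight))
    random.shuffle(q)
    idx = set(random.sample(range(n), k))
    w_t = sum(q[i] for i in idx) / k
    w_rest = (sum(q) - sum(q[i] for i in idx)) / (n - k)
    return abs(w_rest - w_t) >= delta

trials = 200
fail_rate = sum(sampling_fails() for _ in range(trials)) / trials
eps_cl = 4 * math.exp(-delta ** 2 * k / 3)  # classical bound, Eq. (3)
eps_qu = math.sqrt(eps_cl)                  # quantum bound, Eq. (4), used below
```

For these parameters $\epsilon_{cl}^{\delta}\approx 0.005$, and the empirical failure rate is essentially zero, comfortably inside the bound.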
### 2.2 Quantum Sampling
The quantum version of sampling naturally extends from classical sampling in
Bouman and Fehr’s framework, except that sampling is performed in both the $X$
and $Z$ bases. Classical sampling along these two mutually unbiased bases
allows us to estimate the Hamming weight of the quantum system while still using
well-studied classical sampling methods. The main result of Bouman and Fehr’s
paper shows that the error bound of a quantum sampling protocol is simply the
square root of the error bound of the underlying classical sampling protocol
utilized [17].
$\epsilon_{qu}^{\delta}=\sqrt{\epsilon_{cl}^{\delta}}$ (4)
Using this, we find the quantum error bound for a randomly sampled substring
to be,
$\epsilon_{qu}^{\delta}<2\exp{-\frac{1}{6}\delta^{2}k}$ (5)
Through this equation, Bouman and Fehr’s framework allows for classical
sampling methods to be applied to quantum systems. This finding has allowed
this framework to aid in proving the security of Quantum Key Distribution [17]
and Quantum Random Number Generators [19, 20], as well as in deriving lower
bounds on the quantum conditional min entropy of high dimensional systems
[21].
Figure 1. Sampling and message qubits sent over a quantum channel. Eve applies
Pauli gates randomly to the message and sampling qubits. Bob can estimate the
number of Pauli gates applied to the message qubits through the sampling
procedure.
To illustrate quantum sampling with an example, consider the situation in
Figure 1. Focusing on the first qubit, Alice prepared a sampling qubit in the
$\ket{1}$ state and sent it to Bob over a quantum channel. However, Eve
tampered with the qubit in the channel, and when Bob measured this qubit he
found it in the $\ket{0}$ state. In this case, Bob can deduce that an error
has occurred. Now considering all the sampling qubits sent, Bob can obtain a
more accurate estimate of the influence of the quantum channel. With this
information, when Alice sends additional message qubits along with these
sampling qubits, Bob can use quantum sampling to estimate the total error
of this quantum message in the form of the Hamming weight, $M\omega$.
When analyzing the proposed protocol in Section 5, $\omega$ will be used to
determine the number of gates Eve has applied, and $\epsilon_{qu}^{\delta}$
will be used to determine the failure probability of the protocol. The value
of $\delta$ will be determined in Sections 5 and 6 by the Hamming weight, code
distance, and gate set given to Eve.
## 3 Estimating Eve’s Interference
We will now use the Hamming weight, along with restrictions on Eve’s available
gate set, to estimate the number of gates Eve has applied. Let us first
consider the example of Eve tampering with the quantum sampling procedure in
Figure 1. Alice began by preparing sampling qubits in the states $\ket{0}$,
$\ket{1}$, $\ket{+}$, and $\ket{-}$ and sent these sampling qubits along with
some message qubits to Bob. After Bob received these qubits, he measured the
sampling qubits in the same basis Alice prepared and estimated the relative
Hamming weight $\omega$ of the remaining qubits. Bob can use this relative
Hamming weight $\omega$ to gain insight into the possible attacks Eve could
have performed as follows.
The Hamming weight of Eve’s attack is the number of qubits which Bob would
measure in a state orthogonal to the one which Alice prepared. This change of
state is caused by Eve’s interference in the quantum
channel, shown below by the operator $E$. This implies a definition of the
relative Hamming weight as follows,
$\omega=\frac{1}{4}\absolutevalue{\bra{1}E\ket{0}}^{2}+\frac{1}{4}\absolutevalue{\bra{0}E\ket{1}}^{2}+\frac{1}{4}\absolutevalue{\bra{+}E\ket{-}}^{2}+\frac{1}{4}\absolutevalue{\bra{-}E\ket{+}}^{2}$
(6)
Extending this equation to multiple qubits gives the Hamming weight for the
sampling qubits as,
$N\omega=\sum_{i}^{N}\frac{1}{4}\absolutevalue{\bra{1}E_{i}\ket{0}}^{2}+\frac{1}{4}\absolutevalue{\bra{0}E_{i}\ket{1}}^{2}+\frac{1}{4}\absolutevalue{\bra{-}E_{i}\ket{+}}^{2}+\frac{1}{4}\absolutevalue{\bra{+}E_{i}\ket{-}}^{2}$
(7)
By multiplying the relative Hamming weight by the number of message qubits
$M$, we can determine the estimated Hamming weight of the message qubits,
$M\omega=\frac{M}{N}\sum_{i}^{N}\frac{1}{4}\absolutevalue{\bra{1}E_{i}\ket{0}}^{2}+\frac{1}{4}\absolutevalue{\bra{0}E_{i}\ket{1}}^{2}+\frac{1}{4}\absolutevalue{\bra{-}E_{i}\ket{+}}^{2}+\frac{1}{4}\absolutevalue{\bra{+}E_{i}\ket{-}}^{2}$
(8)
Given $\omega$, we will use this equation to gain insight into the operators
$E_{i}$ in Sections 5 and 6. In these sections we will find that knowledge of
the gate set and the Hamming weight of the message qubits $M\omega$ can be
used to estimate the total number of gates applied.
## 4 The Protocol
With quantum sampling and the Hamming weight of quantum systems better
understood, we are now in a position to state the proposed one-way
entanglement purification protocol, which is as follows,
1. Alice chooses a code distance $d$ and encodes a logical qubit with this distance. She entangles this logical qubit with a local qubit, which she will keep.
2. Alice then concatenates many sampling qubits to this logical qubit and permutes her quantum registers. She sends all these qubits through the quantum channel.
3. Alice sends a classical message to Bob using a previously distributed shared secret key, informing him of the permutation, sampling states, and code distance.
4. Bob receives both the classical message and qubits from their respective channels.
5. Bob uses the classical information given by Alice to locate the sampling qubits and perform quantum sampling, obtaining an estimate of the relative Hamming weight $\omega$.
6. Using the estimated Hamming weight $M\omega$, Bob can additionally estimate the number of operations applied to the logical qubit. If the number of operations is less than that allowed by the code distance, then Bob performs error correction and keeps the logical qubit. Otherwise, Bob aborts.
Bob’s prediction is correct with probability $1-\epsilon_{qu}^{\delta}$. Bob
can determine the value of $\delta$ as shown in the next section.
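Step 6 reduces to a simple acceptance test. A minimal sketch, assuming the Pauli-gate estimate derived in the next section, in which the factor of two converts the estimated Hamming weight into a gate count:

```python
def bob_accepts(M, omega, d):
    """Accept iff the estimated number of Pauli gates, 2*M*omega,
    is within the code's correction capability of (d - 1)/2 errors."""
    return 2 * M * omega <= (d - 1) / 2
```

For example, with a distance-5 code an estimated weight of $M\omega=1$ (two estimated gates, two correctable errors) passes, while $M\omega=2$ would force Bob to abort.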
## 5 Security Against Pauli Gates
We will now prove the security of this protocol when Eve’s gate set is
restricted to single qubit Pauli gates. Starting from equation 8, with Eve’s
attacks restricted to single qubit Pauli gates
$E_{i}\in\\{X_{i},Y_{i},Z_{i}\\}$,
$M\omega=\frac{M}{N}\sum_{i}^{N}\frac{1}{4}\absolutevalue{\bra{1}E_{i}\ket{0}}^{2}+\frac{1}{4}\absolutevalue{\bra{0}E_{i}\ket{1}}^{2}+\frac{1}{4}\absolutevalue{\bra{-}E_{i}\ket{+}}^{2}+\frac{1}{4}\absolutevalue{\bra{+}E_{i}\ket{-}}^{2}$
(9)
From this equation, we find that the estimated Hamming weight of the sampled
substring is at least equal to half of the number of applied gates. For
example, applying Pauli $X$ gates to all qubits gives,
$M\omega=\frac{M}{N}\sum_{i}^{N}\frac{1}{4}\absolutevalue{\bra{1}X\ket{0}}^{2}+\frac{1}{4}\absolutevalue{\bra{0}X\ket{1}}^{2}+\frac{1}{4}\absolutevalue{\bra{-}X\ket{+}}^{2}+\frac{1}{4}\absolutevalue{\bra{+}X\ket{-}}^{2}$
(10)
From this we find that the relative Hamming weight of these applied $X$-gates
is,
$\omega=\frac{1}{2}$ (11)
We obtain the same results from applying $Z$-gates as well. $Y$-gates can be
detected from every sampling qubit and give a larger relative Hamming weight
of $\omega=1$. Therefore, given the adversary is restricted to Pauli gates, we
can estimate the number of gates applied to the message qubits as twice the
estimated Hamming weight, $2M\omega$. Given that the error-correcting code can
correct up to $\frac{d-1}{2}$ errors, the code can remove Eve’s influence if,
$2M\omega\leq\frac{d-1}{2}$ (12)
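The relative Hamming weights quoted above for $X$, $Z$, and $Y$ gates can be checked directly from Equation 6. A minimal sketch using explicit $2\times 2$ matrices and the four sampling states:

```python
import math

s = 1 / math.sqrt(2)
k0, k1 = [1 + 0j, 0j], [0j, 1 + 0j]           # |0>, |1>
kp, km = [s + 0j, s + 0j], [s + 0j, -s + 0j]  # |+>, |->

X = [[0, 1], [1, 0]]
Y = [[0, -1j], [1j, 0]]
Z = [[1, 0], [0, -1]]

def apply(m, v):
    return [m[0][0] * v[0] + m[0][1] * v[1], m[1][0] * v[0] + m[1][1] * v[1]]

def inner(bra, ket):
    return bra[0].conjugate() * ket[0] + bra[1].conjugate() * ket[1]

def omega(E):
    """Eq. (6): the average flip probability over the four sampling states."""
    pairs = [(k1, k0), (k0, k1), (kp, km), (km, kp)]  # (<bra|, |ket>) per term
    return sum(abs(inner(b, apply(E, a))) ** 2 for b, a in pairs) / 4

# omega(X) and omega(Z) evaluate to 1/2, while omega(Y) evaluates to 1,
# matching Equations 11 and the discussion above.
```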
Figure 2. The proposed protocol using the distance 5 surface code and some
sampling qubits. In this depiction, Eve applies four X-gates in the quantum
channel. Bob uses the sampling qubits to estimate the number of gates applied
to the logical qubit and uses the surface code to remove Eve’s influence.
For a specific example, suppose that Alice prepared the logical qubit in the
distance 5 surface code as is depicted in Figure 2. She sends this logical
qubit through a quantum channel along with some sampling qubits. After passing
through the quantum channel, Bob is able to use the sampling qubits to
estimate the Hamming weight of the logical qubit as $M\omega=1$. In this
scenario, we can use this Hamming weight to estimate that Eve has applied 2
Pauli gates to the logical qubit. Through this, Bob can determine that the
distance 5 surface code prepared by Alice will be able to remove these two
gates applied by Eve.
### 5.1 Using the Error Bound $\epsilon_{qu}^{\delta}$
We must now address the probability that the sampling procedure has failed and
Eve has by chance avoided the sampling qubits. This failure would allow for
the true relative Hamming weight to be greater than $\omega+\delta$ with
probability $\epsilon_{qu}^{\delta}$. This would in turn imply that Eve has
applied greater than $2M(\omega+\delta)$ Pauli gates to the logical qubit. To
calculate this failure probability we must first determine the value of
$\delta$ which would cause the protocol to fail. As we are removing Eve’s
influence through an error-correcting code, the greatest value of $\delta$ we
can allow is the value which saturates the error-correcting code. In this way,
$\delta$ is set such that,
$2M(\omega+\delta)=\frac{d-1}{2}$ (13)
The true relative Hamming weight is less than this estimate of $\omega+\delta$
with probability $1-\epsilon_{qu}^{\delta}$. Recall from Section 2 that
$\epsilon_{qu}^{\delta}=2\exp{-\frac{1}{6}\delta^{2}k}$, where $k$ is the
number of sampling qubits, giving,
$1-\epsilon_{qu}^{\delta}=1-2\exp{-\frac{1}{6M^{2}}(\frac{d-1}{4}-M\omega)^{2}k}$
(14)
Let us illustrate this with a numerical example. Alice sends the distance 5
surface code, along with 20000 sampling qubits to Bob. Bob then performs
quantum sampling and estimates the Hamming weight of the message qubits as
$M\omega=\frac{1}{2}$. Since the employed code can correct up to
$\frac{5-1}{2}=2$ errors, Bob can now perform error correction, keep the
logical qubit, and state that the protocol has succeeded with probability,
$1-\epsilon_{qu}^{\delta}=1-2\exp{-\frac{1}{6(25)^{2}}\left(\frac{1}{2}\right)^{2}\times 20000}\approx 47.3\%$ (15)
As can be seen through this example, this one-way protocol requires
significantly more resources to distribute a single entangled qubit than two-
way purification protocols. However, only one round of communication is
necessary, decreasing the latency of the protocol at the cost of requiring
more qubits.
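The success probability can be evaluated directly from Equation 14. A minimal sketch; the parameter values in the example line (a distance-5 code over $M=25$ data qubits, $M\omega=1/2$, and $k=20000$ sampling qubits) are those of the illustration above.

```python
import math

def success_probability(M, omega, d, k):
    """1 - eps_qu^delta from Eq. (14), with delta chosen to saturate
    the code: 2M(omega + delta) = (d - 1)/2."""
    delta = ((d - 1) / 4 - M * omega) / M
    if delta <= 0:
        return 0.0  # the code cannot absorb any sampling error: Bob aborts
    return 1 - 2 * math.exp(-(delta ** 2) * k / 6)

p = success_probability(25, 0.5 / 25, 5, 20000)  # approximately 0.473
```

As expected, increasing the number of sampling qubits $k$ tightens the estimate and raises the success probability, while an estimated weight beyond the code's capability yields a vanishing success probability (Bob aborts).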
## 6 Security Against Qubit Measurements
Now that the security of the proposed protocol has been established against
single qubit Pauli gates, we can explore the security of the protocol if we
additionally allow Eve the ability to perform qubit measurements.
For example, consider a measurement resulting in finding the qubit in the
$\ket{0}$ state,
$E_{i}=M_{\ket{0}}=\ket{0}\bra{0}$ (16)
We obtain from Equation 7,
$N\omega=\sum_{i}^{N}\frac{1}{4}(\absolutevalue{\bra{1}\ket{0}\bra{0}\ket{0}}^{2}+\absolutevalue{\bra{0}\ket{0}\bra{0}\ket{1}}^{2}+\absolutevalue{\bra{-}\ket{0}\bra{0}\ket{+}}^{2}+\absolutevalue{\bra{+}\ket{0}\bra{0}\ket{-}}^{2})$
(17)
This leads to the Hamming weight,
$\omega=\frac{1}{4}$ (18)
This similarly follows for measurements $M_{\ket{1}}$ which result in
$\ket{1}$. Each qubit measurement therefore increases the relative Hamming
weight by $\frac{1}{4}$. This makes a measurement more difficult to detect with
quantum sampling than a Pauli gate.
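The value $\omega=\frac{1}{4}$ has a simple operational reading: a computational-basis measurement leaves the $Z$-basis sampling states untouched, while it collapses $\ket{+}$ or $\ket{-}$ to $\ket{0}$ or $\ket{1}$, after which Bob's $X$-basis check fails half the time. A minimal bookkeeping sketch over the four sampling states used above:

```python
def check_failure_probability(state):
    """Probability that Bob's sampling check fails after Eve measures
    the qubit in the computational (Z) basis."""
    if state in ("0", "1"):
        return 0.0  # Z-basis states are unchanged by a Z measurement
    if state in ("+", "-"):
        return 0.5  # collapse to |0>/|1>, then a 50% flip in the X basis
    raise ValueError(state)

# Averaged over the four equally likely sampling states:
omega_meas = sum(check_failure_probability(s) for s in "01+-") / 4  # 0.25
```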
Equation 17 indicates that if we restrict Eve to measurements she can now
affect twice as many qubits while retaining the same Hamming weight as in
Section 5. For a code with distance $d$, we must ensure that the
distance is large enough to correct double the number of tampered qubits. This
changes the requirement in step 6 of the protocol from
$2M\omega\leq\frac{d-1}{2}$ to $4M\omega\leq\frac{d-1}{2}$ and Bob’s
calculation of $\delta$ to,
$4M(\omega+\delta)=\frac{d-1}{2}$ (19)
This changes the probability of success to,
$1-2\epsilon_{qu}^{\delta}=1-2\exp{-\frac{1}{6M^{2}}(\frac{d-1}{8}-M\omega)^{2}k}$
(20)
By changing this constraint in the protocol, Alice and Bob will now be able to
additionally guarantee security against qubit measurements.
## 7 Conclusion and Future Directions
In this paper, we have proposed a one-way entanglement purification protocol
with quantum sampling and proved its security against a restricted adversary
with access to Pauli gates and measurements. The security of this protocol
follows straightforwardly from the properties of quantum error-correcting
codes and the sampling framework utilized.
This protocol breaks the symmetry between Bob and Eve which is typically
present in one-way entanglement purification protocols by utilizing a shared
secret key. This secret key allows Bob to securely conduct quantum sampling
without requiring him to send messages back to Alice. This protocol may allow
for lower latency as compared to two-way error-correcting protocols, as less
communication is needed between the two parties. However, more qubits are
necessary in one-way protocols to achieve similar performance to two-way
protocols.
The protocol proposed in this paper has so far only been proven secure to a
significantly restricted adversary. Further research directions include
examining the security of this protocol with respect to an adversary with
access to arbitrary single or multi-qubit gates. At first glance, this
approach may seem futile when Eve is given access to infinitesimally small
gates, as Eve could apply arbitrarily small rotation gates to each qubit while
maintaining approximately zero Hamming weight. However, two facts help to
mitigate the effectiveness of this approach for Eve. First, error-correcting
codes discretize errors, and therefore would discretize the small gates Eve
has applied, suppressing them from affecting the logical qubit. Second, the
Eastin-Knill theorem states that an operator made of infinitesimally small
transversal gates cannot be a fault-tolerant logical operator. Due to this, any
single qubit gates Eve employs must be finite sized to manipulate the logical
qubit sent by Alice [22]. This indicates that there may be a lower limit to
the size of single qubit gates which Eve can apply to improve her likelihood
of eavesdropping.
However, as the Eastin-Knill theorem does not apply to multi-qubit non-
transversal gates, it may be more difficult to prove the security of one-way
entanglement purification against arbitrary multi-qubit attacks. This would
require research into the lowest Hamming weight multi-qubit operator that can
perform logical rotations on an error corrected qubit. However, there has been
relatively little research into constructing continuous logical operators on
error-corrected qubits [22, 23], since fault-tolerant quantum computation can
be achieved with finite sized gate sets such as Clifford+T [24]. Therefore,
further examination into constructing continuous logical operators may have
applications in the security of this protocol against arbitrary multi-qubit
attacks.
## References
* [1] M.A. Bashar et al. “A Review and Prospects of Quantum Teleportation” In _2009 International Conference on Computer and Automation Engineering_ , 2009, pp. 213–217 DOI: 10.1109/ICCAE.2009.77
* [2] Charles H. Bennett et al. “Purification of Noisy Entanglement and Faithful Teleportation via Noisy Channels” In _Physical Review Letters_ 76.5 American Physical Society (APS), 1996, pp. 722–725 DOI: 10.1103/physrevlett.76.722
* [3] Chuan Wang et al. “Quantum secure direct communication with high-dimension quantum superdense coding” In _Phys. Rev. A_ 71 American Physical Society, 2005, pp. 044305 DOI: 10.1103/PhysRevA.71.044305
  * [4] Felix Leditzky, Mohammad A. Alhejji, Joshua Levin and Graeme Smith “Playing games with multiple access channels” In _Nature Communications_ 11.1 Springer Science+Business Media LLC, 2020 DOI: 10.1038/s41467-020-15240-w
* [5] Marcello Caleffi et al. “Distributed Quantum Computing: a Survey” arXiv, 2022 DOI: 10.48550/ARXIV.2212.10609
* [6] Charles H. Bennett, David P. DiVincenzo, John A. Smolin and William K. Wootters “Mixed-state entanglement and quantum error correction” In _Physical Review A_ 54.5 American Physical Society (APS), 1996, pp. 3824–3851 DOI: 10.1103/physreva.54.3824
* [7] Kim Boström and Timo Felbinger “Deterministic Secure Direct Communication Using Entanglement” In _Phys. Rev. Lett._ 89 American Physical Society, 2002, pp. 187902 DOI: 10.1103/PhysRevLett.89.187902
  * [8] Jiawei Wu, Zaisheng Lin, Liuguo Yin and Gui-Lu Long “Security of quantum secure direct communication based on Wyner’s wiretap channel theory” In _Quantum Engineering_ 1.4, 2019, pp. e26 DOI: https://doi.org/10.1002/que2.26
* [9] Dong Pan et al. “Single-Photon-Memory Two-Step Quantum Secure Direct Communication Relying on Einstein-Podolsky-Rosen Pairs” In _IEEE Access_ 8, 2020, pp. 121146–121161 DOI: 10.1109/ACCESS.2020.3006136
* [10] Tian-Yu Ye “Quantum Secure Dialogue with Quantum Encryption” In _Communications in Theoretical Physics_ 62.3 IOP Publishing, 2014, pp. 338–342 DOI: 10.1088/0253-6102/62/3/08
* [11] Tian-Yu Ye and Li-Zhen Jiang “Quantum dialogue without information leakage based on the entanglement swapping between any two Bell states and the shared secret Bell state” In _Physica Scripta_ 89.1 IOP Publishing, 2013, pp. 015103 DOI: 10.1088/0031-8949/89/01/015103
* [12] Charles H. Bennett, Herbert J. Bernstein, Sandu Popescu and Benjamin Schumacher “Concentrating partial entanglement by local operations” In _Physical Review A_ 53.4 American Physical Society (APS), 1996, pp. 2046–2052 DOI: 10.1103/physreva.53.2046
  * [13] Jian-Wei Pan, Christoph Simon, Časlav Brukner and Anton Zeilinger “Entanglement purification for quantum communication” In _Nature_ 410.6832 Springer Science+Business Media LLC, 2001, pp. 1067–1070 DOI: 10.1038/35074041
* [14] Hans Aschauer and Hans J. Briegel “Security proof of quantum cryptography based entirely on entanglement purification” In _Phys. Rev. A_ 66 American Physical Society, 2002, pp. 032302 DOI: 10.1103/PhysRevA.66.032302
* [15] W Dür and H J Briegel “Entanglement purification and quantum error correction” In _Reports on Progress in Physics_ 70.8 IOP Publishing, 2007, pp. 1381–1424 DOI: 10.1088/0034-4885/70/8/r03
* [16] Peter W. Shor and John Preskill “Simple Proof of Security of the BB84 Quantum Key Distribution Protocol” In _Physical Review Letters_ 85.2 American Physical Society (APS), 2000, pp. 441–444 DOI: 10.1103/physrevlett.85.441
* [17] Niek J. Bouman and Serge Fehr “Sampling in a Quantum Population, and Applications” arXiv, 2009 DOI: 10.48550/ARXIV.0907.4246
* [18] Steven Thompson “Sampling” John Wiley & Sons, 2012
* [19] Keegan Yao, Walter O. Krawec and Jiadong Zhu “Quantum Sampling for Optimistic Finite Key Rates in High Dimensional Quantum Cryptography”, 2020 arXiv:2012.04151 [quant-ph]
* [20] Walter O. Krawec “Quantum sampling and entropic uncertainty” In _Quantum Information Processing_ 18.12, 2019, pp. 368 DOI: 10.1007/s11128-019-2481-5
* [21] Walter O. Krawec “A New High-Dimensional Quantum Entropic Uncertainty Relation with Applications” In _2020 IEEE International Symposium on Information Theory (ISIT)_ , 2020, pp. 1978–1983 DOI: 10.1109/ISIT44484.2020.9174330
* [22] Bryan Eastin and Emanuel Knill “Restrictions on Transversal Encoded Quantum Gate Sets” In _Physical Review Letters_ 102.11 American Physical Society (APS), 2009 DOI: 10.1103/physrevlett.102.110502
* [23] Cameron Cianci “Toward Constructing a Continuous Logical Operator for Error-Corrected Quantum Sensing”, 2023 arXiv:2305.00547 [quant-ph]
* [24] Michael A. Nielsen and Isaac L. Chuang “Quantum Computation and Quantum Information” Cambridge University Press, 2000
# Intensity-resolved measurement of above-threshold ionization of Ar-H2O
Adrian Platz Institute for Optics and Quantum Electronics, Universität Jena,
D-07743 Jena, Germany Sebastian Hell Institute for Optics and Quantum
Electronics, Universität Jena, D-07743 Jena, Germany Yinyu Zhang Institute
for Optics and Quantum Electronics, Universität Jena, D-07743 Jena, Germany
Bo Ying Institute for Optics and Quantum Electronics, Universität Jena,
D-07743 Jena, Germany Helmholtz Institute Jena, D-07743 Jena, Germany
Gerhard G. Paulus Institute for Optics and Quantum Electronics, Universität
Jena, D-07743 Jena, Germany Helmholtz Institute Jena, D-07743 Jena, Germany
Matthias Kübel Institute for Optics and Quantum Electronics, Universität
Jena, D-07743 Jena, Germany Helmholtz Institute Jena, D-07743 Jena, Germany
<EMAIL_ADDRESS>
###### Abstract
Above-threshold ionization (ATI) by femtosecond laser pulses centered at 515 nm
is studied for a gas mixture containing the van der Waals complex Ar-H2O. By
detecting photoions and photoelectrons in coincidence, the ATI spectra for Ar,
Ar2, H2O, and Ar-H2O are discerned and measured simultaneously. Using an
intensity-scanning technique, we observe the red-shift of the ATI spectra as a
function of the laser intensity. The intensity-dependent shifts of the ATI peak
positions observed for Ar-H2O and H2O match each other but differ significantly
from those measured for Ar and Ar2. This indicates that the photoelectron is
emitted from the H2O site of the complex, and the vertical ionization potential
of Ar-H2O is determined as $(12.4\pm 0.1)$ eV. For rescattered electrons,
however, an enhancement of high-order ATI is observed for Ar-H2O as compared
to H2O, suggesting that the relatively large Ar atom acts as a scattering
center, which influences the ionization dynamics.
## I Introduction
Rare-gas compounds exhibit rich light-matter interactions, and serve as model
systems for electron dynamics in weakly bound molecules. A prominent example
is interatomic Coulombic decay, which was first observed in Ne2 and
studied using extreme ultraviolet radiation [1, 2]. Also in intense optical
fields, rare-gas dimers have attracted significant interest [3, 4, 5], but only
a few studies have explored light-matter interaction in more complex rare-gas
compounds [6]. A notable example of such a complex target is Ar-H2O, which has
been extensively studied in the past using infrared vibrational spectroscopy
[7, 8, 9, 10, 11]. One focus of attention has been the location of the energy
minimum in the Ar-H2O intermolecular potential energy surface. After initially
contradictory results, studies have converged on the finding that the argon
atom resides in the plane of the water molecule, at an angle close to one of
the hydrogen atoms but at roughly four times the distance from the oxygen atom
[7, 9, 12, 13]. Despite this progress, ionization experiments on Ar-H2O have
been scarce. Consequently, little is known about the structure of the Ar-H2O
cation.
Structural and dynamical information on atoms and molecules has been
successfully inferred from above-threshold ionization (ATI) spectra, for
example using laser-induced electron diffraction [14, 15, 16], or
photoelectron momentum imaging [17, 4, 18]. For recent reviews see [19, 20].
Moreover, spectroscopy of electronic states in polyatomic molecules has been
demonstrated in channel-resolved ATI experiments [21, 22]. However, accurately
measuring the ionization potential using ATI is challenging since the ATI peak
positions sensitively depend on the laser intensity. The position of the S-th
ATI peak can be described by (atomic units are used if not otherwise stated)
$E_{S}=(N+S)\omega-E_{\mathrm{IP}}-\Delta E(I),$ (1)
where $\omega$ is the photon energy, $E_{\mathrm{IP}}$ is the ionization
potential of the target atom or molecule, $N=\lceil
E_{\mathrm{IP}}/\omega\rceil$ is the minimum number of photons required to
overcome $E_{\mathrm{IP}}$, and $\Delta E(I)$ is an intensity-dependent peak
shift. The latter is often approximated by the ponderomotive potential
$U_{P}=\frac{I}{4\omega^{2}}$, representing the AC Stark shift of the
ionization continuum. In addition, the ground state exhibits an intensity-
dependent level shift, as well. In the absence of resonances and for not too
intense fields, the AC Stark shift can be reasonably well approximated by the
DC Stark shift, calculated by second-order perturbation theory [23]. It is
given in terms of the peak intensity $I=\mathcal{E}_{0}^{2}$, where
$\mathcal{E}_{0}$ is the peak value of the electric field, and the ground
state polarizability $\alpha$ of the respective atom or molecule, by
$\delta E=-\alpha I/4,$ (2)
and is usually negative. Hence, the ground-state Stark shift usually increases
the peak shift $\Delta E(I)$. Note that this does not hold in the presence of
resonances. For example, Nicklich _et al._ observed an intensity-dependent
blue-shift in the multiphoton ionization of Cs at visible wavelengths [24].
The intensity-dependent peak shift of ATI has been observed in various
experiments, which aimed at reducing the focal volume averaging effect [25,
26, 27], or identifying the role of resonances [27, 28]. In many of these
experiments, the Stark shift of the ground state has been neglected.
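Equations (1) and (2) can be evaluated directly. The following sketch (not part of the original paper; the conversion constants are standard, and the argon parameters are those quoted in the text) computes the expected single-intensity ATI peak positions, ignoring focal volume averaging and resonances:

```python
import math

# Model of Eqs. (1)-(2): ATI peak positions for Ar at 515 nm, including the
# ponderomotive shift and the perturbative ground-state Stark shift.
HC_EV_NM = 1239.841       # photon-energy conversion, eV*nm
AU_INT = 3.50945e16       # atomic unit of intensity, W/cm^2
HARTREE_EV = 27.2114      # Hartree in eV

def ati_peak_eV(S, I_Wcm2, E_ip_eV=15.76, lam_nm=515.0, alpha_au=11.07):
    """Energy of the S-th ATI peak at peak intensity I (W/cm^2)."""
    omega = HC_EV_NM / lam_nm                           # photon energy, eV
    N = math.ceil(E_ip_eV / omega)                      # minimum photon number
    Up = 9.337e-14 * (lam_nm / 1000.0)**2 * I_Wcm2      # ponderomotive energy, eV
    dE_ground = -alpha_au * (I_Wcm2 / AU_INT) / 4 * HARTREE_EV   # Eq. (2), negative
    shift = Up - dE_ground    # negative ground-state shift adds to the red-shift
    return (N + S) * omega - E_ip_eV - shift

# At 1e14 W/cm^2 the combined shift is roughly 2.5 + 0.2 eV, moving the S=1
# peak from ~3.5 eV (field-free) down to ~0.8 eV.
```

Focal averaging reduces the observed slope below this single-intensity prediction, consistent with the measured values discussed later in the text.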
Here, we present intensity-resolved measurements of ATI of Ar, Ar2, H2O, and
Ar-H2O by 515 nm femtosecond laser pulses. The data allow us to extract the
unknown ionization potential of Ar-H2O and to infer that the electron is
emitted from the H2O site of the compound. In contrast, we show that elastic
electron rescattering also occurs on the argon atom, whose presence
significantly enhances the elastic scattering cross-section of Ar-H2O with
respect to H2O.
## II Experimental approach
In our experiment, the gas sample containing Ar-H2O is obtained by bubbling
argon at a pressure of 3 bar through water and co-expanding the mixture
through a 30-$\mu$m nozzle into a high-vacuum vessel. The nozzle is heated to
50°C to avoid clogging by condensed water. The gas jet subsequently passes a
150-$\mu$m skimmer and two 1-mm apertures for differential pumping before it
enters a Cold Target Recoil Ion Momentum Spectrometer (COLTRIMS), where it
intersects the laser focus at a total distance of $\approx 1.5$ m behind the
nozzle.
Figure 1: (a) Schematic of the experiment for intensity-resolved coincidence
measurements of above-threshold ionization. The power of the femtosecond laser
pulses at 1030 nm is controlled using a motorized half-wave plate followed by
a polarizer. The laser frequency is doubled in a BBO crystal, and the two
colours are separated. The laser power at 515 nm is measured by a photodiode
and a boxcar integrator. In the data acquisition computer, the measured laser
power is correlated with the ion and electron momenta detected in the
COLTRIMS. (b) Recorded photodiode signal as a function of the laser power, as
measured with a power meter before (calibration 1) and after (calibration 2)
conducting the main experiment. (c) Ion time-of-flight spectra recorded in the
COLTRIMS, with various ionic species marked.
Figure 1(a) shows a schematic of the experiment. Femtosecond laser pulses
centered at 1030 nm are obtained from a commercial Yb-doped fiber laser
(Active Fiber Systems) operated at 50 kHz repetition rate. The pulses are
compressed to 36 fs using an argon-filled hollow-core fiber and suitable
chirped mirrors. The laser power is controlled using a motorized half-wave
plate and a broadband thin-film polarizer. The transmitted light is frequency-
doubled using a 300-$\mu$m thick Beta Barium Borate (BBO) crystal and the
fundamental infrared beam is subsequently discarded. A small fraction of the
visible beam is reflected off a wedge and impinges on a photodiode, operated
well below saturation. The photodiode signal is further processed using a
boxcar integrator to yield a measurement of the pulse energy. Importantly, the
photodiode signal is proportional to the laser power recorded with a power
meter, as specifically shown for the range of 200 mW to 600 mW in Fig. 1(b).
Since the second harmonic generation was operated well below saturation, we
assume that the pulse duration and beam profile remain unchanged when the
laser power is varied using the motorized half-wave plate. Thus, the
photodiode signal is proportional to the intensity in the laser focus. The
main beam is focused (7.5 cm focal length) into the center of the COLTRIMS
[29], where it intersects a cold
gas jet containing the argon-water gas mixture. The momenta of ions and
electrons produced in the laser focus are measured in coincidence using time
and position sensitive detectors on either side of the COLTRIMS. Figure 1(c)
shows the ion time-of-flight spectrum, which permits the distinction of
various ionic species by their time-of-flight, and the accurate measurement of
the recoil momentum along the laser polarization. Momentum conservation
between ion and photoelectron allows us to reliably assign photoelectrons to
the ion of the species from which they originate. A total of $6.4\times 10^{4}$
counts are detected for the Ar-H2O+ + e- coincidence channel and $3.2\times
10^{7}$ counts for the Ar+ + e- channel.
Besides ion-electron coincidences, coincident events of two ionic species are
observed, notably Ar+ + H2O+ Coulomb explosion events. The kinetic energy
release (KER) of this break-up channel has a mean value of 3.820 eV and a
width (standard deviation) of 0.23 eV. Assuming instantaneous double
ionization and subsequent propagation on a $1/r$ repulsive potential, this
corresponds to a bond distance between Ar and H2O of $(3.77\pm 0.23)\,\AA$, in reasonable
agreement with but slightly longer than the predicted values of $3.636\,\AA$
[7] or $3.603\,\AA$ [9]. The experimental result might be systematically
shifted towards larger values due to the nonlinear ionization rate, especially
in the second ionization step, for which the ionization potential is expected
to strongly decrease with increasing internuclear distance. Moreover, we note
that the measured KER results from the effective distance of the two charges
rather than the center-of-mass distance between Ar and H2O.
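The Coulomb-explosion estimate quoted above follows from energy conservation: for two point charges released instantaneously, the full $1/r$ potential energy is converted into KER, so $r = 1/E_{\mathrm{KER}}$ in atomic units. A minimal sketch (not from the paper, using standard unit conversions) reproduces the quoted distance:

```python
# KER-to-distance conversion for a two-body Coulomb explosion of singly
# charged fragments: r [a.u.] = 1 / KER [a.u.].
HARTREE_EV = 27.2114     # Hartree in eV
BOHR_ANG = 0.529177      # Bohr radius in Angstrom

def ker_to_distance_ang(ker_eV):
    """Internuclear distance (Angstrom) from kinetic energy release (eV)."""
    return (HARTREE_EV / ker_eV) * BOHR_ANG

r = ker_to_distance_ang(3.82)   # mean KER of the Ar+ + H2O+ channel
# r is close to 3.77 Angstrom, the value quoted in the text
```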
Figure 2: Measured energy distributions of photoelectrons detected in
coincidence with Ar+ (black), H2O+ (dashed blue), and Ar-H2O+ (green). Each
curve is normalized to its maximum (851059 counts for Ar, 22822 for H2O, and
1036 for Ar-H2O). Missing data points are due to a lack of statistics. The
upper curves represent all data acquired in the experiment with a peak
intensity of $\approx 1.0\times 10^{14}\,\mathrm{W/cm^{2}}$, the lower ones
were recorded at $\approx 0.5\times 10^{14}\,\mathrm{W/cm^{2}}$. The data sets
with different intensity values are vertically displaced for visual
convenience. The inset shows the low-energy range of the ATI spectra.
## III Results and Discussion
### III.1 Intensity-dependent ATI spectra
Figure 2 shows photoelectron energy spectra for ATI of Ar, H2O, and Ar-H2O.
The spectra exhibit a typical shape with a rapid drop over the first few ATI
orders, followed by the rescattering plateau [30] that cuts off around 30 eV
(20 eV) for the upper (lower) curve. Remarkably, the signal in the plateau
region is approximately two times higher for Ar and Ar-H2O than for H2O. We
will return to this observation in the context of the photoelectron momentum
distributions presented in Fig. 5 below. The photoelectron spectra are
modulated by the ATI peaks, which are spaced by the photon energy of 2.4 eV,
as expected from equation 1. Furthermore, the ATI combs recorded for different
targets are shifted with respect to each other, due to the difference in the
ionization potentials. A double peak structure is observed for Ar at higher
intensity, which reduces the contrast between ATI peaks and suggests the
involvement of resonances. Comparing the ATI combs recorded at higher
intensity (upper curves) to the ones recorded at lower intensity (lower
curves) reveals the intensity-dependent peak shift. Importantly, a close
examination of the peak positions (see inset of Fig. 2) shows that the offset
between the peak positions observed for Ar and Ar-H2O differs for the two
intensity values. Hence, the ionization potential of Ar-H2O cannot be
retrieved simply by comparing the respective ATI peak positions to the ones
measured for Ar. The ATI peaks measured for H2O and Ar-H2O, however, are close
to each other for both intensity values. Whether the offset between the ATI
peaks for the two targets allows us to extract the ionization potential of
Ar-H2O will be tested using an intensity scan.
The intensity scan is carried out by scanning the half-wave plate while the
laser power transmitted through the polarizer is tracked by the photodiode, as
described above. The measured power-dependent ATI spectra for Ar are presented
in Fig. 3(a). The ATI spectra exhibit a pronounced intensity dependence: while
the cut-off energy increases proportionally to the laser power, the individual
ATI peaks are red-shifted and also broadened. This behaviour is an impressive
manifestation of the focal volume effect: with increasing peak intensity, more
and more intensity values contribute to the ATI yield. Since different
intensity values correspond to different peak positions, the peaks in the
observed ATI spectra are broadened and eventually washed out. In addition,
Freeman (multi-photon) resonances [31, 32] lead to horizontal features in the
intensity-dependent ATI spectrum (see inset of Fig. 3), and are likely
responsible for the double peak structure observed for Ar in Fig. 2 at higher
intensity.
The intensity-resolved ATI spectra allow us to find the proportionality
constant between measured laser power and peak intensity. This is achieved by
comparison of the experimental results to numerical results displayed in Fig.
3(b). These were obtained by solving the one-dimensional time-dependent
Schrödinger equation (TDSE) using the split-operator method. The Hamiltonian
is given by $H(x,t)=-\frac{1}{2}\frac{\partial^{2}}{\partial
x^{2}}-\frac{Z}{|x|+\alpha}+\mathcal{E}(t)\,x$. The soft-core parameters $Z$
and $\alpha$ were tuned such that the energies of the ground state and first
excited state of the model atom match those of real argon. The laser field
$\mathcal{E}(t)=\mathcal{E}_{0}(t)\cos{\omega t}$ has a Gaussian pulse
envelope $\mathcal{E}_{0}(t)$ with a full-width at half-maximum of 40 fs and a
carrier frequency of $\omega=0.088$ a.u. The calculated photoelectron
spectra were integrated over a Gaussian focal volume [33] and agree well with
the measured ones. The proportionality constant between peak intensity and
laser power in the experimental data is determined by minimizing the standard
deviation between the positions of ATI peaks of orders 1 to 10 in the measured
and calculated ATI spectra. We find best agreement for $(6.40\pm
0.30)\,\frac{\mathrm{mW}}{\mathrm{TW/cm^{2}}}$. The uncertainty indicates the
range in which the standard deviation increases by no more than 10%, see inset
of Fig. 3(b). Using this conversion, we confirm that the high-energy cut-off
of the spectra can be adequately described by the extended cut-off law
proposed in Ref. [34]: $E_{\mathrm{max}}=10\,U_{P}+0.538E_{\mathrm{IP}}$, as
indicated by the red lines in Fig. 3.
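The numbers quoted for the calibration and the cut-off law can be cross-checked with a few lines of arithmetic. This sketch is not from the paper; it simply combines the fitted conversion of 6.40 mW per TW/cm$^2$ with the ponderomotive energy at 515 nm:

```python
# Consistency check: with 6.40 mW per TW/cm^2, an intensity of 1e14 W/cm^2
# (= 100 TW/cm^2) corresponds to 640 mW of measured power, so the cut-off
# slopes quoted for panels (a) and (b) of Fig. 3 should agree.
Up_per_1e14 = 9.337e-14 * 0.515**2 * 1e14    # ponderomotive energy at 515 nm, eV
slope_I = 10 * Up_per_1e14                   # 10*Up term, eV per 1e14 W/cm^2
slope_P = slope_I / 640.0                    # same slope in eV per mW
offset = 0.538 * 15.76                       # 0.538*E_IP term of the cut-off law, eV
E_cut_1e14 = slope_I + offset                # extended cut-off at 1e14 W/cm^2
# slope_I is close to the quoted 25.0 eV/(1e14 W/cm^2), slope_P to the
# quoted 0.0391 eV/mW, and the offset to the quoted 8.5 eV.
```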
Figure 3: Intensity-dependent ATI spectra for ionization of Ar by 515 nm
light. (a) Experimental results obtained by integrating the measured
photoelectron momentum distributions over the full solid angle. The red solid
line indicates the cut-off energy of the photoelectron spectra and is
described by the expression
$E_{\mathrm{max}}=0.0391\,\mathrm{eV/mW}\times P+8.5\,\mathrm{eV}$, where $P$
is the measured laser power. (b) Numerical results obtained by solving the
one-dimensional TDSE. The red line, described by the expression
$E_{\mathrm{max}}=25.0\,\mathrm{eV}/(10^{14}\,\mathrm{W/cm^{2}})\times
I+8.5\,\mathrm{eV}$, indicates the high-energy cut-off. The inset shows the
standard deviation
$\sigma$ as a function of the proportionality constant between peak intensity
and laser power.
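The split-operator propagation described above can be sketched in a few lines. This is an illustrative implementation, not the authors' code: the soft-core potential here uses the common hydrogen-like parameters (ground-state energy near $-0.5$ a.u.) rather than the argon-tuned $Z$ and $\alpha$ of the paper, and only a single real-time step at a fixed field value is shown:

```python
import numpy as np

# 1D split-operator TDSE sketch (atomic units throughout).
N, L = 2048, 400.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)      # momentum grid for FFT steps
V = -1.0 / np.sqrt(x**2 + 2.0)               # illustrative soft-core potential
T = 0.5 * k**2                               # kinetic energy in k-space

# Ground state via imaginary-time split-operator propagation.
dt = 0.05
psi = np.exp(-x**2).astype(complex)
for _ in range(2000):
    psi *= np.exp(-0.5 * V * dt)
    psi = np.fft.ifft(np.exp(-T * dt) * np.fft.fft(psi))
    psi *= np.exp(-0.5 * V * dt)
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # renormalize each step

E0 = (np.sum(np.conj(psi) * np.fft.ifft(T * np.fft.fft(psi))).real
      + np.sum(np.abs(psi)**2 * V)) * dx
# E0 is close to -0.5 a.u. for this choice of soft-core parameters.

# One real-time step in the length gauge with instantaneous field E(t).
E_field = 0.03                                # illustrative field value, a.u.
U_half = np.exp(-0.5j * (V + E_field * x) * dt)
psi_t = U_half * np.fft.ifft(np.exp(-1j * T * dt) * np.fft.fft(U_half * psi))
norm = np.sum(np.abs(psi_t)**2) * dx          # unitary propagation keeps norm 1
```

In a full calculation the real-time step is repeated over the Gaussian pulse envelope, and photoelectron spectra are extracted from the outgoing part of the wave function before focal-volume averaging.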
### III.2 Peak shifts and ionization potential of Ar-H2O
In order to quantitatively analyze the intensity-dependent ATI spectra for all
targets, we extract the ATI peak positions from the measured photoelectron
spectra as follows. The photoelectron energy distribution $Y(E)$ around the
first ATI peak is fitted by a Gaussian,
$Y(E)=A\exp{\left(-\frac{(E-E_{C})^{2}}{\text{w}^{2}}\right)}+Y_{0}.$ (3)
We note that, especially at high intensities, the shape of the ATI peaks
deviates significantly from a Gaussian. Nevertheless, we obtain good agreement
for the peak positions, see Fig. \ref{figs:consistency}, which shows the
measured intensity-dependent spectra along with the retrieved centroids of the
ATI peaks measured at each laser power and target. The procedure is repeated
for 25 different values of the measured laser power, which yields the
intensity dependence of the ATI peak positions displayed in Fig. 4.
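The fitting step of Eq. (3) can be sketched with SciPy. The data below are synthetic (an illustrative stand-in for a measured first ATI peak), and the starting values are the rough heuristics one would use in practice:

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit model of Eq. (3): a Gaussian peak on a constant background.
def ati_peak(E, A, E_C, w, Y0):
    return A * np.exp(-((E - E_C) ** 2) / w ** 2) + Y0

E = np.linspace(0.2, 1.4, 120)                   # electron energy axis, eV
rng = np.random.default_rng(1)
y = ati_peak(E, 1000.0, 0.80, 0.25, 50.0) + rng.normal(0.0, 10.0, E.size)

p0 = [y.max(), E[np.argmax(y)], 0.3, y.min()]    # rough starting values
popt, pcov = curve_fit(ati_peak, E, y, p0=p0)
E_C, E_C_err = popt[1], np.sqrt(pcov[1, 1])      # centroid and its 1-sigma error
```

Repeating this fit per target and per laser-power bin yields the intensity-dependent centroids plotted in Fig. 4.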
Figure 4: Intensity-dependent ATI peak positions for various targets. Shown is
the measured offset energy of the ATI comb for Ar, Ar2, H2O, and Ar-H2O, as a
function of the peak intensity. The colored solid lines represent linear fit
functions to the data. The peak positions extracted from the simulated data
are also shown (black). Around $7\cdot 10^{13}\,\mathrm{W/cm^{2}}$, no
accurate fit could be obtained due to strong modulations in the calculated
photoelectron spectra.
Figure 4 shows the offsets $E_{0}=E_{C}-2.4\,\mathrm{eV}$ of the ATI comb,
retrieved by the fitting procedure described above, as a function of the
calibrated laser intensity. Indeed, the ATI peaks measured for Ar and Ar-H2O
exhibit significantly different shifts as a function of intensity. In
contrast, the intensity-dependence of the ATI peak positions measured for Ar
and Ar2 are nearly identical. Indeed, the ionization potentials of Ar
($15.76\,\mathrm{eV}$) and Ar2 ($15.65\,\mathrm{eV}$) are very close and their
ATI peaks exhibit only a small offset with a mean of $(0.08\pm
0.01)\,\mathrm{eV}$, in good agreement with the difference in ionization
potential. Analogously, the energy offset between the ATI peaks measured for
H2O and Ar-H2O remains essentially constant at $0.26\,\mathrm{eV}$ within a
margin of $\pm 0.06\,\mathrm{eV}$ over the entire intensity range studied in
the present experiment. Therefore, it is reasonable to assume that the peak
offset corresponds to the difference in ionization potential (cf. equation 1).
With the known ionization potential of H2O of $12.6\,\mathrm{eV}$, we find a
value of $(12.4\pm 0.1)\,\mathrm{eV}$ for Ar-H2O, where the uncertainty
corresponds to the statistical error of the differences in the measured peak
positions. We point out that this determination is independent of the
intensity calibration conducted above.
We will now further investigate the differences in the intensity-dependence of
the ATI peak positions recorded for different species. Clearly, the red-shift
of the ATI peaks (cf. equation 1) is target-dependent. Specifically, the
linear fit functions, applied over the entire intensity range of the
experiment, yield a slope of $-2.2\,\mathrm{eV}/(10^{14}\,\mathrm{W/cm^{2}})$
for Ar and Ar2. These values are in reasonable agreement with the
ponderomotive shift of $-2.5\,\mathrm{eV}/(10^{14}\,\mathrm{W/cm^{2}})$ for
515-nm light and the difference is consistent with the effect of focal volume
averaging. The TDSE results for Ar exhibit some deviations from linearity,
likely due to the effect of resonances. Up to an intensity of approximately
$1.0\cdot 10^{14}\,\mathrm{W/cm^{2}}$, the intensity-dependent peak shift of
the TDSE results agrees with the experimental results; however, the curve is
blue-shifted by approximately 0.2 eV. Such a deviation is possible despite the
intensity calibration because the fitting performed above is carried out for
the positions of ATI orders 1 to 10, while in Fig. 4, only ATI order 1 is
analyzed. Above $1.0\cdot 10^{14}\,\mathrm{W/cm^{2}}$, the measured and
calculated peak positions deviate, potentially because the influence of
resonances is exaggerated in the one-dimensional TDSE calculations. This
observation emphasizes that the linear relationship between ATI peak position
and intensity is only applicable in the absence of resonances.
The ATI peak shift measured for Ar-H2O and H2O is only
$-1.5\,\mathrm{eV}/(10^{14}\,\mathrm{W/cm^{2}})$, significantly less than the
ponderomotive shift of $-2.5\,\mathrm{eV}/(10^{14}\,\mathrm{W/cm^{2}})$ for
515-nm light. A first candidate to explain the observed differences between Ar
and Ar2 on the one hand and H2O and Ar-H2O on the other is the AC Stark shift
of the ground state. However, the ground-state polarizabilities of argon
($\alpha_{\textrm{Ar}}=11.07\,a_{0}^{3}$) and water
($\alpha_{\textrm{H2O}}=10.74\,a_{0}^{3}$) [35, 36,
37] are very similar and both lead to a Stark shift of the ground-state energy
of $-0.2\,\mathrm{eV}/(10^{14}\,\mathrm{W/cm^{2}})$. Hence, the difference of
ground state polarizabilities for argon and water is too small to explain the
observed difference in the intensity-dependent ATI peak shifts of the two
targets. Furthermore, as the different shift is observed over the entire
intensity range, resonances are unlikely to be responsible for the observed
differences. We propose that the smaller red shift of the ATI peaks observed
for H2O and Ar-H2O results from the permanent dipole [23]. Its effect may
become measurable due to the orientation-dependence of the ionization
probability of water [38]. However, a quantitative calculation of the
intensity-dependent ATI peak shift of water is outside the scope of the
present paper.
### III.3 Rescattering from Ar-H2O
In this section, we return to the observation of enhanced backscattering from
Ar-H2O with respect to H2O. Figure 5 presents the photoelectron momentum
distributions measured for Ar-H2O (upper half), H2O (bottom left quadrant) and
Ar (bottom right quadrant) corresponding to the ATI spectra presented in Fig.
2 at high intensity. The momentum distribution measured for Ar2 is compared to
the one for Ar in Fig. 7. The central, low-energy part of the momentum
distributions of all three targets is governed by direct electron emission,
and full ATI rings are observed. At energies significantly larger than
$2\,U_{\mathrm{P}}$, corresponding to $p\approx 0.6$ a.u., rescattered
electrons dominate and the angular distributions are significantly narrower,
such that only segments of the ATI rings are observed.
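The quoted momentum boundary follows directly from $p=\sqrt{2E}$ in atomic units. A quick check (not from the paper; standard conversions) for the high-intensity data set:

```python
import math

# The 2*Up boundary between direct and rescattered electrons, expressed as a
# momentum p = sqrt(2E) in atomic units, for 515 nm at 1e14 W/cm^2.
HARTREE_EV = 27.2114
Up = 9.337e-14 * 0.515**2 * 1e14              # ponderomotive energy, eV
p_boundary = math.sqrt(2 * (2 * Up) / HARTREE_EV)
# p_boundary is close to the 0.6 a.u. quoted in the text
```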
We first compare Ar-H2O to H2O and observe that the low-energy part of the
momentum distribution is nearly identical for the two targets. The slightly
lower ionization potential of Ar-H2O leads to slightly larger radii of the ATI
rings. In the high-energy part along the laser polarization, the signal
observed for Ar-H2O is slightly larger than for H2O. This is evident for the
highest ATI peaks, but the signal in the intermediate ones is also higher by a
factor of $\approx 2$, see Fig. 2. This enhancement suggests a larger
contribution from backscattering in the case of Ar-H2O. Compared to Ar (bottom
right quadrant), however, the signal remains slightly lower. The stronger
signal observed for Ar might be a consequence of focal volume averaging, which
leads to higher effective intensity for Ar, and therefore higher signal at
larger electron momenta. Our results suggest that in Ar-H2O, the relatively
large Ar atom acts as a scattering center, which enhances the contribution of
backscattering to the photoelectron yield. Indeed, for the differential
elastic electron scattering cross section around $180^{\circ}$, two to three
times larger values have been reported for Ar [39] than for water [40],
consistent with our results.
Figure 5: (a) Photoelectron momentum distribution recorded for non-dissociative
single-ionization of Ar-H2O (upper half), compared to the ones recorded for
H2O (bottom left), and Ar (bottom right). The ionization yield is integrated
over all intensity values and normalized to 1 by dividing by the number of
detected events for each species. The plot shows the normalized ionization
yield per momentum bin, as a function of the momentum components parallel
$p_{||}=p_{z}$ and perpendicular $p_{\perp}=\sqrt{p_{x}^{2}+p_{y}^{2}}$ to the
laser polarization, taking into account the appropriate Jacobian. The minimum
of the color scale is limited by the low statistics of the Ar-H2O channel.
## IV Summary and Outlook
In conclusion, we have presented measurements of ATI of Ar-H2O and compared
them to simultaneous measurements of both Ar and H2O. We found that the peak
shift for Ar-H2O matches that for H2O, while the ionization potential is
lowered by some $0.2\,\mathrm{eV}$ with respect to H2O. These observations
indicate that, on the one hand, strong-field ionization occurs on the H2O side
of the weakly bound van der Waals molecule. On the other hand, the reduction
of the ionization potential indicates that the molecular ion is more tightly
bound than neutral Ar-H2O. This raises the question of the details of the
potential energy landscape of the Ar-H2O cation, for example where the argon
atom resides with respect to the H atoms. A suitable tool to shed light on
this question is laser-driven Coulomb explosion imaging. While the Ar atom has
little effect on the initial step of strong-field ionization, we have further
shown that elastic rescattering in Ar-H2O is enhanced by the presence of the
Ar atom. By extension to inelastic rescattering, this suggests the existence
of a special mechanism for non-sequential double ionization, where a first
electron is emitted from water and its subsequent recollision with the parent
ion leads to ionization of the argon atom. This mechanism could lead to a
peculiar effect: at relatively low intensity and sufficiently long wavelength,
Ar-H2O could be doubly ionized even before single ionization of Ar becomes
efficient.
## Acknowledgements
We thank T. Weber, A. Rose, H. Wöhl, F. Ronneberger, T. Farhadova, J. Yu, S.
Voss and A. Czasch for technical support, in particular when setting up the
COLTRIMS. Fruitful discussions with M. Lesiuk and R. Della Picca are
acknowledged. This work has been funded by the DFG under Project No.
437321733.
## References
* Jahnke _et al._ [2020] T. Jahnke, U. Hergenhahn, B. Winter, R. Dörner, U. Frühling, P. V. Demekhin, K. Gokhberg, L. S. Cederbaum, A. Ehresmann, A. Knie, and A. Dreuw, Interatomic and Intermolecular Coulombic Decay, Chem. Rev. 120, 11295 (2020).
* Trinter _et al._ [2022] F. Trinter, T. Miteva, M. Weller, A. Hartung, M. Richter, J. B. Williams, A. Gatton, B. Gaire, J. Sartor, A. L. Landers, B. Berry, I. Ben-Itzhak, N. Sisourat, V. Stumpf, K. Gokhberg, R. Dörner, T. Jahnke, and T. Weber, Ultrafast temporal evolution of interatomic Coulombic decay in NeKr dimers, Chem. Sci. 13, 1789 (2022).
  * Wunderlich _et al._ [1997] C. Wunderlich, E. Kobler, H. Figger, and T. W. Hänsch, Light-Induced Molecular Potentials, Phys. Rev. Lett. 78, 2333 (1997).
* Kunitski _et al._ [2019] M. Kunitski, N. Eicke, P. Huber, J. Köhler, S. Zeller, J. Voigtsberger, N. Schlott, K. Henrichs, H. Sann, F. Trinter, L. Ph. H. Schmidt, A. Kalinin, M. S. Schöffler, T. Jahnke, M. Lein, and R. Dörner, Double-slit photoelectron interference in strong-field ionization of the neon dimer, Nat. Commun. 10, 1 (2019).
* Tong _et al._ [2022] J. Tong, X. Liu, W. Dong, W. Jiang, M. Zhu, Y. Xu, Z. Zuo, P. Lu, X. Gong, X. Song, W. Yang, and J. Wu, Probing Resonant Photoionization Time Delay by Self-Referenced Molecular Attoclock, Phys. Rev. Lett. 129, 173201 (2022).
* Wu _et al._ [2012] J. Wu, M. Kunitski, L. P. H. Schmidt, T. Jahnke, and R. Dörner, Structures of N2Ar, O2Ar, and O2Xe dimers studied by Coulomb explosion imaging, J. Chem. Phys. 137, 104308 (2012).
* Cohen and Saykally [1993] R. C. Cohen and R. J. Saykally, Determination of an improved intermolecular global potential energy surface for Ar–H2O from vibration–rotation–tunneling spectroscopy, J. Chem. Phys. 98, 6007 (1993).
* Germann and Gutowsky [1993] T. C. Germann and H. S. Gutowsky, Nuclear hyperfine interactions and dynamic state of H2O in Ar–H2O, J. Chem. Phys. 98, 5235 (1993).
* Tao and Klemperer [1994] F. Tao and W. Klemperer, Accurate ab initio potential energy surfaces of Ar–HF, Ar–H2O, and Ar–NH3, J. Chem. Phys. 101, 1129 (1994).
* Weida and Nesbitt [1997] M. J. Weida and D. J. Nesbitt, High resolution mid-infrared spectroscopy of ArH2O: The $\nu_{2}$ bend region of H2O, J. Chem. Phys. 106, 3078 (1997).
* Liu and Xu [2014] X. Liu and Y. Xu, New rovibrational bands of the Ar–H2O complex at the $\nu_{2}$ bend region of H2O, J. Mol. Spectrosc. 301, 1 (2014).
* Makarewicz [2008] J. Makarewicz, Ab initio intermolecular potential energy surfaces of the water-rare gas atom complexes, J. Chem. Phys. 129, 184310 (2008).
* Hou _et al._ [2016] D. Hou, Y.-T. Ma, X.-L. Zhang, and H. Li, The origins of intra- and inter-molecular vibrational couplings: A case study of H2O-Ar on full and reduced-dimensional potential energy surface, J. Chem. Phys. 144, 14301 (2016).
* Meckel _et al._ [2008] M. Meckel, D. Comtois, D. Zeidler, A. Staudte, D. Pavicic, H. C. Bandulet, H. Pépin, J. C. Kieffer, R. Dörner, D. M. Villeneuve, and P. B. Corkum, Laser-induced electron tunneling and diffraction, Science 320, 1478 (2008).
* Blaga _et al._ [2012] C. I. Blaga, J. Xu, A. D. Dichiara, E. Sistrunk, K. Zhang, P. Agostini, T. A. Miller, L. F. Dimauro, and C. D. Lin, Imaging ultrafast molecular dynamics with laser-induced electron diffraction, Nature 483, 194 (2012).
* Wolter _et al._ [2016] B. Wolter, M. G. Pullen, A. T. Le, M. Baudisch, K. Doblhoff-Dier, A. Senftleben, M. Hemmer, C. D. Schröter, J. Ullrich, T. Pfeifer, R. Moshammer, S. Gräfe, O. Vendrell, C. D. Lin, and J. Biegert, Ultrafast electron diffraction imaging of bond breaking in di-ionized acetylene, Science 354, 308 (2016).
* Eckart _et al._ [2018] S. Eckart, M. Kunitski, M. Richter, A. Hartung, J. Rist, F. Trinter, K. Fehre, N. Schlott, K. Henrichs, L. Ph. H. Schmidt, T. Jahnke, M. Schöffler, K. Liu, I. Barth, J. Kaushal, F. Morales, M. Ivanov, O. Smirnova, and R. Dörner, Ultrafast preparation and detection of ring currents in single atoms, Nat. Phys. 14, 701 (2018).
* Kübel _et al._ [2019] M. Kübel, Z. Dube, A. Y. Naumov, D. M. Villeneuve, P. B. Corkum, and A. Staudte, Spatiotemporal imaging of valence electron motion, Nat. Commun. 10, 1042 (2019).
* De Giovannini _et al._ [2023] U. De Giovannini, J. Küpper, and A. Trabattoni, New perspectives in time-resolved laser-induced electron diffraction, J. Phys. B At. Mol. Opt. Phys. 56, 54002 (2023).
* Amini _et al._ [2021] K. Amini, A. Chacón, S. Eckart, B. Fetić, and M. Kübel, Quantum interference and imaging using intense laser fields, Eur. Phys. J. D 75, 275 (2021).
* Boguslavskiy _et al._ [2012] A. E. Boguslavskiy, J. Mikosch, A. Gijsbertsen, M. Spanner, S. Patchkovskii, N. Gador, M. Vrakking, and A. Stolow, The multielectron ionization dynamics underlying attosecond strong-field spectroscopies, Science 335, 1336 (2012), arXiv:1106.5958.
* Mikosch _et al._ [2013] J. Mikosch, A. E. Boguslavskiy, I. Wilkinson, M. Spanner, S. Patchkovskii, and A. Stolow, Channel- and Angle-Resolved Above Threshold Ionization in the Molecular Frame, Phys. Rev. Lett. 110, 23004 (2013).
* Delone and Krainov [1999] N. B. Delone and V. P. Krainov, AC Stark shift of atomic energy levels, Physics-Uspekhi 42, 669 (1999).
* Nicklich _et al._ [1992] W. Nicklich, H. Kumpfmüller, H. Walther, X. Tang, H. Xu, and P. Lambropoulos, Above-threshold ionization of Cesium under femtosecond laser pulses: New substructure due to strongly coupled bound states, Phys. Rev. Lett. 69, 3455 (1992).
* Walker _et al._ [1998] M. A. Walker, P. Hansch, and L. D. Van Woerkom, Intensity-resolved multiphoton ionization: Circumventing spatial averaging, Phys. Rev. A 57, R701 (1998).
* Hart _et al._ [2014] N. A. Hart, J. Strohaber, G. Kaya, N. Kaya, A. A. Kolomenskii, and H. A. Schuessler, Intensity-resolved above-threshold ionization of xenon with short laser pulses, Phys. Rev. A 89, 53414 (2014).
* Wiese _et al._ [2019] J. Wiese, J.-F. Olivieri, A. Trabattoni, S. Trippel, and J. Küpper, Strong-field photoelectron momentum imaging of OCS at finely resolved incident intensities, New J. Phys. 21, 83011 (2019).
* Wang _et al._ [2014] C. Wang, Y. Tian, S. Luo, W. G. Roeterdink, Y. Yang, D. Ding, M. Okunishi, G. Prümper, K. Shimada, K. Ueda, and R. Zhu, Resonance-like enhancement in high-order above-threshold ionization of formic acid, Phys. Rev. A 90, 23405 (2014).
* Ullrich _et al._ [2003] J. Ullrich, R. Moshammer, A. Dorn, R. Dörner, L. P. H. Schmidt, and H. Schmidt-Böcking, Recoil-ion and electron momentum spectroscopy: reaction-microscopes, Rep. Prog. Phys. 66, 1463 (2003).
* Paulus _et al._ [1994] G. G. Paulus, W. Nicklich, H. L. Xu, P. Lambropoulos, and H. Walther, Plateau in above-Threshold Ionization Spectra, Phys. Rev. Lett. 72, 2851 (1994).
* Freeman _et al._ [1987] R. R. Freeman, P. H. Bucksbaum, H. Milchberg, S. Darack, D. Schumacher, and M. E. Geusic, Above-threshold ionization with subpicosecond laser pulses, Phys. Rev. Lett. 59, 1092 (1987).
* Cormier _et al._ [2003] E. Cormier, P.-A. Hervieux, R. Wiehle, B. Witzel, and H. Helm, ATI of complex systems: Ar and C60, Eur. Phys. J. D 26, 83 (2003).
* Zhang _et al._ [2020] Y. Zhang, D. Zille, D. Hoff, P. Wustelt, D. Würzler, M. Möller, A. M. Sayler, and G. G. Paulus, Observing the importance of the phase-volume effect for few-cycle light-matter interactions, Phys. Rev. Lett. 124, 133202 (2020).
* Busuladzic _et al._ [2006] M. Busuladzic, A. Gazibegovic-Busuladzic, and D. B. Milosevic, High-Order Above-Threshold Ionization in a Laser Field : Influence of the Ionization Potential on the High-Energy Cutoff, Laser Phys. 16, 289 (2006).
* Ge and Lu [2017] X. Ge and D. Lu, Molecular polarizability of water from local dielectric response theory, Phys. Rev. B 96, 75114 (2017).
* Gaiser and Fellmuth [2018] C. Gaiser and B. Fellmuth, Polarizability of Helium, Neon, and Argon: New Perspectives for Gas Metrology, Phys. Rev. Lett. 120, 123203 (2018).
* Lesiuk and Jeziorski [2023] M. Lesiuk and B. Jeziorski, First-principles calculation of the frequency-dependent dipole polarizability of argon, Phys. Rev. A 107, 42805 (2023).
* Picca _et al._ [2012] R. D. Picca, J. Fiol, P. D. Fainstein, J. P. Hansen, and A. Dubois, Laser pulse ionization of fixed-in-space H2O, J. Phys. B At. Mol. Opt. Phys. 45, 194009 (2012).
* Bell _et al._ [1984] K. L. Bell, N. S. Scott, and M. A. Lennon, The scattering of low-energy electrons by argon atoms, J. Phys. B At. Mol. Phys. 17, 4757 (1984).
* Machado _et al._ [1995] L. E. Machado, L. Mu-Tao, L. M. Brescansin, M. A. P. Lima, and V. McKoy, Elastic electron scattering by water molecules, J. Phys. B At. Mol. Opt. Phys. 28, 467 (1995).
* von Veltheim _et al._ [2013] A. von Veltheim, B. Manschwetus, W. Quan, B. Borchers, G. Steinmeyer, H. Rottke, and W. Sandner, Frustrated Tunnel Ionization of Noble Gas Dimers with Rydberg-Electron Shakeoff by Electron Charge Oscillation, Phys. Rev. Lett. 110, 23001 (2013).
## Appendix A Intensity-dependent ATI peak positions
Fig. 6 shows experimental results for the four targets along with the
centroids obtained from the fitting procedure described above. The fitting is
performed in a range of $\pm 1\,\mathrm{eV}$ around the
approximate position of the first ATI peak. The accuracy of the results is not
notably improved if the fitting is repeated for ATI peaks of higher order. The
zeroth ATI peak was ignored because of channel closing. In the case of water,
no accurate fit was obtained at the highest intensities used in the
experiment.
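The fitting procedure described above can be sketched numerically. The following is an illustration only: the peak position, width, amplitude and noise level are invented synthetic values, not the experimental ones.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(E, A, E0, sigma):
    """Single ATI peak modelled as a Gaussian centred at E0."""
    return A * np.exp(-((E - E0) ** 2) / (2 * sigma ** 2))

# Synthetic photoelectron spectrum with one peak near 1.5 eV (illustrative values)
rng = np.random.default_rng(0)
E = np.linspace(0.0, 4.0, 400)
true_centroid = 1.5
counts = gaussian(E, 100.0, true_centroid, 0.25) + rng.normal(0.0, 1.0, E.size)

# Fit only within +/- 1 eV of the approximate peak position, as in the text
window = np.abs(E - 1.5) < 1.0
popt, pcov = curve_fit(gaussian, E[window], counts[window], p0=[80.0, 1.4, 0.3])
centroid, centroid_err = popt[1], np.sqrt(pcov[1, 1])
print(f"centroid = {centroid:.3f} +/- {centroid_err:.3f} eV")
```

The statistical error of the centroid then corresponds to the square root of the relevant diagonal entry of the covariance matrix returned by the fit.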
Figure 6: Intensity-dependent ATI spectra of (a) argon, (b) argon dimer, (c)
argon-water and (d) water. The dots mark the centroids of Gaussians fitted to
the experimental data at various intensity values. The statistical error of the
fits is on the order of the dot size.
## Appendix B Photoelectron momentum distribution for argon dimers
Figure 7 compares the photoelectron momentum distributions recorded for non-
dissociative single ionization of Ar2 to that for Ar. The momentum
distributions for the two targets are very similar to each other, in agreement
with earlier experiments using 800 nm light [41]. A slight yield enhancement
at minimal electron energy reported for the dimer in Ref. [41] can be
identified.
Figure 7: Same as Fig. 5, but showing the momentum distribution recorded for
non-dissociative single ionization of Ar2 (upper half) compared to the one
recorded for Ar (bottom half). At
$p_{||}\approx-1.3\,\mathrm{a.u.}$ and $p_{||}\approx 0.8\,\mathrm{a.u.}$, the
detector provides no resolution for the perpendicular momentum.
## 1 Introduction
The statistical distribution of the zeroes of the Riemann zeta function, and
the related family of Dirichlet $L$-functions, qualitatively resemble the
eigenvalue distribution of a random ensemble of unitary matrices[1, 2, 3]. It
is also reminiscent of the distribution of zeroes of partition functions of
statistical models. The latter observation is the motivation to search for a
suitable model in physicists’ approach to the problem—the literature is vast,
however, see e.g., Refs.[4, 5, 6, 7, 8, 9], the review [10] and references
therein. This resemblance may be an important guide, since the zeroes of the
partition function of many systems, by the Yang-Lee type theorems [11], all
lie on a line parallel to the imaginary axis (or on the unit circle). These zeroes are
called Yang-Lee zeroes or Fisher zeroes, depending upon whether the partition
function is viewed as a function of the applied external field, e.g., magnetic
field, or of $\beta=1/(k_{B}T)$, the inverse temperature. The arithmetic or
primon gas of Refs. [5, 7, 6] and the number theoretic spin chain of Refs.[8,
9], in particular, proposed interesting models for which the partition
functions are directly related to the Riemann zeta function. In this approach
the non-trivial zeroes of the zeta function are to be identified with the
Yang-Lee or Fisher zeroes.
Motivated by these, we shall propose a statistical model and compute its
partition function. The idea is again to associate the spectrum of an operator
with the Fisher zeroes of the partition function. In addition, however, we
shall study the spectrum of some relevant operators of these models. The
systems that relate to the $L$-functions of our interest can be thought of as
spins in an external magnetic field. Since the spectrum of a Hamiltonian of
this type of spin systems is discrete (the spins being integer/half-integer
valued) this operator is similar to the number operator of an oscillator. A
phase operator that is conjugate to this will be a new ingredient in our
investigation. The construction of a phase operator which is truly canonically
conjugate to the number operator is a subject of long-standing quest that may
not be completely closed yet. Nevertheless, several different ways to define
the phase operator have been proposed, for example, Refs.[12, 13, 14, 15, 16,
17, 18] is a partial list. In particular, we shall investigate two ways of
defining it for the spin models corresponding to the family of $L$-functions.
In the first construction, we follow Ref.[17], where the authors propose an
operator by directly constructing eigenstates of phase for a system with a
discrete spectrum. The second approach is motivated by the proposal in [15].
We shall argue that there are enough hints in these proposals to understand
the correspondence between the spectrum of these operators and the zeroes of
the partition function.
In the following, we shall first review (in Section 2) some of the relevant
arguments and results from the cited references, in the context of a simple
spin system on a one-dimensional lattice. In Section 3, after recalling some
properties of the Riemann zeta function and our earlier work on its relation
to operators on the Hilbert space of complex valued functions on the $p$-adic
number field $\mathbb{Q}_{p}$ [19, 20], we elaborate on a proposal to view it
as a statistical model of spins. In Sections 3.1 and 3.2 we detail two
constructions of the phase operators for the spin model for the Riemann zeta
function, which are then extended to the family of Dirichlet $L$-functions in
Section 4.
## 2 Quantum spins in external field
Spin models in one dimension are among the simplest statistical models, yet
they offer an arena rich enough to experiment, before considering more
complicated systems. The variables are ‘spins’ $s_{n}$ at lattice points
$n\in\mathbb{Z}$ or $n\in\mathbb{N}$ that can take $(2j+1)$ values
$\\{-j,-j+1,\cdots,j-1,j\\}$ in the spin-$j$ representation. In models of
magnetism, these spins interact locally, usually with the nearest neighbours.
In addition, one may turn on an external magnetic field.
Let us digress to recall the properties of a simpler model, the Ising model,
in which the classical spins take one of two possible values $\pm 1$ and the
total Hamiltonian is $H=-J\sum_{n}s_{n}s_{n+1}-B\sum_{n}s_{n}$, where $J$ is
the strength of interaction ($J>0$ being ferromagnetic and anti-ferromagnetic
otherwise) and the second term arises from an interaction with an external
magnetic field $B$. The partition function (in the absence of an external
field) of an Ising system of size $L$ at inverse temperature $\beta$ is
$Z(\beta)\equiv\text{Tr }e^{-\beta
H}=\sum_{\left\\{s_{n}\right\\}}\exp\left(\beta
J\sum_{n=1}^{L-1}s_{n}s_{n+1}\right)$
In this simple case, one may also change variables to
$\sigma_{n}\equiv\sigma_{\langle n-1,n\rangle}=s_{n-1}s_{n}$ associated to the
edges $\langle n-1,n\rangle$ joining nearest neighbours. Evidently,
$\sigma_{n}=\pm 1$ as well. Thus
$\displaystyle Z(\beta)$ $\displaystyle=$ $\displaystyle
2\sum_{\left\\{\sigma_{n}\right\\}}\exp\left(\beta
J\sum_{n=2}^{L}\sigma_{n}\right)$ $\displaystyle=$ $\displaystyle
2\sum_{\sigma_{n}}\left\langle\sigma_{2},\sigma_{3},\cdots\right|\exp\left(\beta
J\sum_{n}S_{n}\right)\left|\sigma_{2},\sigma_{3},\cdots\right\rangle$
where we have defined vectors $|\sigma_{n}\rangle$ in a (two-dimensional)
Hilbert space corresponding to the edge $\langle n-1,n\rangle$ and $S_{n}$s
are spin operators such that
$S_{n}\left|\sigma_{n}\right\rangle=\sigma_{n}\left|\sigma_{n}\right\rangle$.
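The change of variables to bond spins can be verified numerically: summing over all $2^{L}$ configurations reproduces the factorised form $Z=2\,(2\cosh\beta J)^{L-1}$ that follows from the decoupled bond representation. A short sketch (chain length and couplings are arbitrary):

```python
import itertools
import math

def ising_Z_bruteforce(L, beta, J):
    """Open-chain Ising partition function, summed over all 2^L spin configurations."""
    Z = 0.0
    for spins in itertools.product((-1, 1), repeat=L):
        energy = -J * sum(spins[n] * spins[n + 1] for n in range(L - 1))
        Z += math.exp(-beta * energy)
    return Z

def ising_Z_bonds(L, beta, J):
    """Same partition function via the bond variables sigma_n = s_{n-1} s_n,
    which decouple: Z = 2 * (2 cosh(beta*J))**(L-1)."""
    return 2.0 * (2.0 * math.cosh(beta * J)) ** (L - 1)

L, beta, J = 8, 0.7, 1.3
print(ising_Z_bruteforce(L, beta, J), ising_Z_bonds(L, beta, J))
```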
A generalisation of this model allows the coupling constants $J$ to be
position dependent, so that the Hamiltonian is $H=-\sum_{n}J_{n}\sigma_{n}$
and
$Z(\beta)=2\sum_{\sigma_{n}}\big{\langle}\sigma_{2},\sigma_{3},\cdots\big{|}e^{\beta\sum_{n}J_{n}S_{n}}\big{|}\sigma_{2},\sigma_{3},\cdots\big{\rangle}$
is the canonical partition function of the generalised model at the
temperature $k_{B}T=1/\beta$.
We would like to consider the general case in which the spins are valued in the
spin-$j$ representation of $\mathfrak{su}(2)$. Although we seek a partition
function of the same form as above, the general spin case cannot be realised as
an Ising-type model; rather, it will be a model of spins in an external local
magnetic field $B_{n}$ at site $n$. It will be useful to think of $S_{n}$ as
the third component $S_{n3}$ of the $\mathfrak{su}(2)$ spin operators on
the edge/site, the others being $S_{n\pm}$. The vectors
$|\sigma_{2},\sigma_{3},\cdots\rangle=|\sigma_{2}\rangle\otimes|\sigma_{3}\rangle\otimes\cdots$
belong to the product space. The interaction $\sim\mathbf{B}\cdot\mathbf{S}$
between the spin (magnetic moment to be precise) and the external field
(assumed to be along the $z$-direction) described by the Hamiltonian
$H=-\sum_{n}B_{n}\sigma_{n}$ leads to the partition function
$Z(\beta)=\sum_{\sigma_{n}}\big{\langle}\sigma_{2},\sigma_{3},\cdots\big{|}e^{\beta\sum_{n}B_{n}S_{n}}\big{|}\sigma_{2},\sigma_{3},\cdots\big{\rangle}$
at the temperature $k_{B}T=1/\beta$. Our objective is to obtain an identity
for the partition function for this model. To this end, we shall seek an
operator that, in a certain well defined sense, is formally canonically
conjugate to the $z$-component $S_{n3}$ of the spin operator at site $n$.
There are well known difficulties in defining such an operator, however, we
shall see that one needs to make a much weaker demand.
In this context, it is useful to remember the Schwinger oscillator realisation
of the algebra $\mathfrak{su}(2)$ in terms of a pair of bosonic
creation/annihilation operators $(a_{1}^{\dagger},a_{1},a_{2}^{\dagger},a_{2})$
at each edge, where we drop the edge index for the time being. Then
$S_{+}=a_{1}^{\dagger}a_{2}$, $S_{-}=a_{2}^{\dagger}a_{1}$ and the third
component is the difference of the number operators
$S_{3}=\frac{1}{2}(n_{1}-n_{2})=\frac{1}{2}\left(a_{1}^{\dagger}a_{1}-a_{2}^{\dagger}a_{2}\right)$.
One can formally introduce the _phase operator_
$\Phi=\frac{1}{2}(\phi_{1}-\phi_{2})$ such that
$[\phi_{a},n_{b}]=i\delta_{ab}$, however, there are several mathematical
difficulties in defining the above[12, 13]. We will now review an explicit
construction to show how one can still work around this problem.
### 2.1 Phase operator via phase eigenstates
Let us label the eigenstates in the spin-$j$ representation of $S_{3}$ as
$|m\rangle$, for $m=-j,\cdots,j$. One can define an eigenstate of phase as a
unitary transform of these states as
$\displaystyle|\phi_{k}\rangle$
$\displaystyle=\frac{1}{\sqrt{2j+1}}\sum_{m=-j}^{j}e^{-im\phi_{k}B}|m\rangle$
$\displaystyle\text{where, }\;\phi_{k}$ $\displaystyle=\frac{2\pi
k}{B(2j+1)},\qquad k=-j,\cdots,j$ (1)
are the eigenvalues of the phase. The phase eigenstates satisfy
$\langle\phi_{k^{\prime}}|\phi_{k}\rangle=\frac{1}{2j+1}\sum_{m=-j}^{j}e^{-imB(\phi_{k}-\phi_{k^{\prime}})}=\delta_{k,k^{\prime}}$
(2)
and thus provide an orthonormal basis of the Hilbert space of states.
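Eqs. 1 and 2 are straightforward to verify numerically. The sketch below (with illustrative values $j=2$, $B=1$) checks that the matrix of phase eigenstates is unitary and that the operator of Eq. 3 has exactly the $\phi_{k}$ as its spectrum.

```python
import numpy as np

j, B = 2, 1.0                         # illustrative spin and field values
d = 2 * j + 1
m = np.arange(-j, j + 1)
k = np.arange(-j, j + 1)
phi = 2 * np.pi * k / (B * d)         # phase eigenvalues, Eq. (1)

# Columns are the phase eigenstates |phi_k> expanded in the |m> basis
U = np.exp(-1j * B * np.outer(m, phi)) / np.sqrt(d)

# Orthonormality, Eq. (2): U^dagger U = identity
ortho_ok = np.allclose(U.conj().T @ U, np.eye(d))

# Phase operator, Eq. (3), built by spectral decomposition
phi_op = U @ np.diag(phi) @ U.conj().T
spectrum_ok = np.allclose(np.linalg.eigvalsh(phi_op), np.sort(phi))
print(ortho_ok, spectrum_ok)
```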
In terms of these, we may define the ‘phase operator’ through spectral
decomposition as
$\hat{\phi}=\sum_{k=-j}^{j}\phi_{k}\,|\phi_{k}\rangle\langle\phi_{k}|$ (3)
We shall now show that it transforms covariantly when conjugated by $e^{\beta
BS_{3}}$. This works only for special values of $\beta$, since an eigenvalue
$\phi_{k}$ of $\hat{\phi}$, being angle-valued, is only defined modulo
$2\pi/B$. In order to see this, we note that
$e^{-\beta BS_{3}}\hat{\phi}\,e^{\beta
BS_{3}}=\sum_{k=-j}^{j}\frac{\phi_{k}}{2j+1}\sum_{m=-j}^{j}\sum_{m^{\prime}=-j}^{j}e^{-im(\phi_{k}-i\beta)B+im^{\prime}(\phi_{k}-i\beta)B}|m\rangle\langle
m^{\prime}|$
There are two cases to consider. The first is trivial: for $\beta=0$ or any of
its periodic images $i\beta=\frac{2\pi n}{B}$ ($n\in\mathbb{Z}$) in the
complex $\beta$-plane, the RHS is the phase operator $\hat{\phi}$. More
interestingly, if $i\beta$ takes any of the specific discrete values
$\frac{2\pi k^{\prime}}{B(2j+1)}+\frac{2\pi n}{B}$, where
$k^{\prime}=-j,\cdots,j$ (but $k^{\prime}\neq 0$) and $n\in\mathbb{Z}$, i.e.,
$i\beta$ is a difference between the phase eigenvalues (mod ${2\pi}/{B}$),
then $\phi_{k}-i\beta$ is again an allowed eigenvalue of the phase operator
$\pmod{2\pi/B}$. In this case, we can add and subtract $i\beta$ to the
eigenvalue $\phi_{k}$ and use the completeness of basis, to find
$e^{-\beta BS_{3}}\hat{\phi}\,e^{\beta BS_{3}}=\hat{\phi}+i\beta\>\text{ only
for }\,0\neq\beta=-\frac{2\pi ij}{B(2j+1)},\cdots,\frac{2\pi
ij}{B(2j+1)}\>\Big{(}\text{mod }\frac{2\pi}{B}\Big{)}$ (4)
This is called a shift covariance relation [21]. It may also be rewritten as a
commutator
$\big{[}\hat{\phi},e^{\beta BS_{3}}\big{]}=i\beta\,e^{\beta BS_{3}}\>\text{
only for }\,0\neq\beta=-\frac{2\pi ij}{B(2j+1)},\cdots,\frac{2\pi
ij}{B(2j+1)}\>\Big{(}\text{mod }\frac{2\pi}{B}\Big{)}$
i.e., at special values of the inverse temperature.
To summarise, we find that $\hat{\phi}$ in Eq. 3 satisfies shift covariance,
or, somewhat loosely, is ‘canonically conjugate’ to $S_{3}$, only for a
special set of infinitely many imaginary values of $\beta$, all on the line
$\text{Re}\,\beta=0$ as above. At $\beta=0$ (mod $\frac{2\pi}{B}$), the
commutator is trivial (this is also reflected in the resolvent of the phase
operator, as we shall see in the following).
In passing, it is instructive to take the trace of the ‘canonical commutator’.
The left hand side evidently vanishes, since the vector space of states is
finite, namely $(2j+1)$-dimensional. On the right hand side, one obtains the
trace of $e^{-\beta H}$, i.e., the partition function, which also vanishes,
being a sum over roots of unity. Thus the values of $\beta$ for which Eq. 4 is
valid must also satisfy the condition $\text{Tr }e^{-\beta H}=0$. This means
that, mod ${2\pi}/{B}$, the values of $i\beta\neq 0$ for which the partition
function has a zero are the same as the eigenvalues of $\hat{\phi}$.
The resolvent of the exponential of the phase operator at a single site (as a
function of $z=e^{i\phi}$) is
$\hat{R}[\hat{\phi}](\phi)=\left(1-e^{-i\phi}e^{i\hat{\phi}}\right)^{-1}$
and its trace is
$\mathrm{Tr}\left(\hat{R}[\hat{\phi}](\phi)\right)=\sum_{k=-j}^{j}\Big{\langle}\phi_{k}\Big{|}\frac{1}{1-e^{i(\hat{\phi}-\phi)}}\Big{|}\phi_{k}\Big{\rangle}\\\
=\sum_{k=-j}^{j}\frac{1}{1-e^{i(\phi_{k}-\phi)}}$
On the other hand, the partition function at a single site
$Z_{1}(\beta)=\text{Tr}\,e^{-\beta H}=\sum_{m}e^{\beta Bm}$ vanishes at
special values of the inverse temperature $\beta=\frac{2\pi mi}{B(2j+1)}$ (mod
$\frac{2\pi}{B}$) where $m\in\\{-j,\cdots,j\\}$ but $m\neq 0$. These zeroes of
the partition function in the complex $\beta$-plane are called Fisher zeroes.
At precisely these values, the resolvent function develops poles.
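Both statements, the location of the Fisher zeroes and the corresponding poles of the resolvent trace, can be checked numerically for a small spin; the values $j=2$, $B=1$ below are illustrative.

```python
import numpy as np

j, B = 2, 1.0                          # illustrative spin and field values
d = 2 * j + 1
m = np.arange(-j, j + 1)

def Z1(beta):
    """Single-site partition function Tr e^{-beta H} = sum_m e^{beta*B*m}."""
    return np.sum(np.exp(beta * B * m))

# Fisher zeroes: beta = 2*pi*i*k/(B*d) for k = -j..j, k != 0
zero_betas = [2 * np.pi * 1j * k / (B * d) for k in range(-j, j + 1) if k != 0]
max_Z_at_zeroes = max(abs(Z1(b)) for b in zero_betas)

# The resolvent trace develops a pole at each phase eigenvalue phi_k
phi_k = 2 * np.pi * np.arange(-j, j + 1) / (B * d)

def trace_resolvent(phi):
    return np.sum(1.0 / (1.0 - np.exp(1j * (phi_k - phi))))

near_pole = abs(trace_resolvent(phi_k[1] + 1e-6))
print(max_Z_at_zeroes, near_pole)      # numerically zero and very large, respectively
```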
## 3 The case of Riemann zeta function
Before we get to our main goal to interpret the Riemann zeta function as a
partition function, let us briefly recall some of its relevant properties.
Originally defined by the analytical continuation of the series
$\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^{s}}=\prod_{p\,\in\,{\mathrm{primes}}}\frac{1}{\left(1-p^{-s}\right)},\qquad\mathrm{Re}(s)>1$
(5)
to the complex $s$-plane by Riemann, the zeta function has a set of equally
spaced zeroes at the negative even integers $-2n$, $n\in\mathbb{N}$, called its
trivial zeroes. More interestingly, it has another infinite set of zeroes,
which, according to the _Riemann hypothesis_ lie on the _critical line_
${\mathrm{Re}}(s)=\frac{1}{2}$. The related Riemann $\xi$-function (sometimes
called the _symmetric zeta-function_) and the adelic zeta function share only
the latter (non-trivial) zeroes with Eq. 5 (i.e., the set of trivial zeroes
is absent in the following functions)
$\xi(s)=\frac{1}{2}s(s-1)\zeta_{\mathbb{A}}(s)=\frac{1}{2}s(s-1)\pi^{-\frac{s}{2}}\Gamma\left(\frac{s}{2}\right)\zeta(s)$
(6)
both of which satisfy the reflection identity $\xi(s)=\xi(1-s)$, respectively,
$\zeta_{\mathbb{A}}(s)=\zeta_{\mathbb{A}}(1-s)$, derived from a similar
identity for the original zeta function. The former is a holomorphic function
while the latter, $\zeta_{\mathbb{A}}(s)$, is meromorphic.
The non-trivial zeroes of $\zeta(s)$ (which are the only zeroes of $\xi(s)$
and $\zeta_{\mathbb{A}}(s)$), conjecturally lying on the critical line, seem to occur
randomly, although they are found to be correlated in the same way as the
eigenvalues of a Gaussian ensemble of $N\times N$ hermitian or unitary
matrices in the limit $N\to\infty$ [1, 2, 3]. Starting from Hilbert and Pólya,
it has long been thought that these zeroes correspond to the eigenvalues of an
operator, that is self-adjoint in an appropriately defined sense. A direct
analysis of the spectrum of the purported operator may lead to a proof of the
Riemann hypothesis. Despite many ingenious efforts, an operator has not yet
been found. In Ref.[19], in a larger collaboration, we attempted to find a
suitable operator by _assuming the validity of the hypothesis_ , specifically,
by assuming that the zeroes _are_ the eigenvalues of a unitary matrix model
(UMM). We found that the partition function can be expressed as the _trace_ of
an operator on the Hilbert space of complex valued locally constant Bruhat-
Schwarz functions supported on a compact subset $p^{-1}\mathbb{Z}_{p}$ of the
$p$-adic field $\mathbb{Q}_{p}$. This was achieved in two steps. First a UMM
was constructed for each prime $p$ corresponding to the Euler product form in
Eq. 5. These (as well as a UMM for the trivial zeroes) were combined to define
the random matrix model. In this paper, we shall use some of the technology
that was useful in [19]; however, our goal will be different.
We begin by expanding the prime factors in the Euler product form of the zeta
function
$\zeta(s)=\prod_{p\,\in\,{\mathrm{primes}}}\frac{1}{\left(1-p^{-s}\right)}=\prod_{p\,\in\,{\mathrm{primes}}}\sum_{n_{(p)}=0}^{\infty}p^{-sn_{(p)}},\qquad{\mathrm{Re}}(s)>1$
(7)
The factor
$\zeta_{p}(s)=\frac{1}{\left(1-p^{-s}\right)}\>\>{\text{for a fixed prime }}p$
(8)
is sometimes called the _local_ zeta function at $p$. It can be thought of as
a complex valued function on the field $\mathbb{Q}_{p}$ (of $p$-adic numbers).
The prefactor $\zeta_{\mathbb{R}}(s)=\pi^{-{s\over 2}}\Gamma\left({s\over
2}\right)$ in Eq. 6 is known as the _local_ zeta functions corresponding to
$\mathbb{R}$ (of real numbers). It is the Mellin transform of the Gaussian
function $e^{-\pi x^{2}}$. In an exactly analogous fashion, $\zeta_{p}(s)$ in
Eq. 8 is the Mellin transform of the equivalent of the Gaussian function (in
the sense of a function that is its own Fourier transform) on
$\mathbb{Q}_{p}$.
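This Mellin-transform statement is easy to check numerically. In the sketch below the transform of $e^{-\pi x^{2}}$ is taken over both signs of $x$ (i.e., over $\mathbb{R}^{\times}$ with measure $dx/|x|$), which is the convention that reproduces $\pi^{-s/2}\Gamma(s/2)$ without extra factors; the sample values of $s$ are arbitrary.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def zeta_R(s):
    """Local zeta function at the real place: pi^{-s/2} * Gamma(s/2)."""
    return np.pi ** (-s / 2) * gamma(s / 2)

def mellin_gaussian(s):
    """Mellin transform of exp(-pi x^2) over R^x (both signs of x):
    2 * int_0^inf exp(-pi x^2) x^(s-1) dx."""
    val, _ = quad(lambda x: np.exp(-np.pi * x * x) * x ** (s - 1), 0.0, np.inf)
    return 2.0 * val

for s in (1.0, 2.0, 3.7):
    print(s, mellin_gaussian(s), zeta_R(s))
```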
We can express the sum in Eq. 7 as the trace of an operator. To this end, let
us recall that the space of (mean-zero) square integrable complex valued
functions on $\mathbb{Q}_{p}$ is spanned by the orthonormal set of Kozyrev
wavelets $\psi^{(p)}_{nml}(\xi)\in\mathbb{C}$ (for $\xi\in\mathbb{Q}_{p}$),
which have compact support in $\mathbb{Q}_{p}$ [22]. On $p$ segments (of equal
Haar measure) their values are the $p$-th roots of unity. They are analogous to
the generalised Haar wavelets, with the labels $n$, $m$ and $l$ referring to
scaling, translation and phase rotation. Interestingly, the Kozyrev wavelets
are eigenfunctions of an operator with eigenvalue $p^{\alpha(1-n)}$
$D_{(p)}^{\alpha}\psi_{n,m,l}^{(p)}(\xi)=p^{\alpha(1-n)}\psi_{n,m,l}^{(p)}(\xi)$
(9)
where, the pseudodifferential operators $D_{(p)}^{\alpha}$, called the
_generalised Vladimirov derivatives_ , are defined by the following integral
kernel as
$D^{\alpha}_{(p)}f(\xi)=\frac{1-p^{\alpha}}{1-p^{-\alpha-1}}\,\displaystyle\int_{\mathbb{Q}_{p}}d\xi^{\prime}\,\frac{f(\xi^{\prime})-f(\xi)}{|\xi^{\prime}-\xi|_{p}^{\alpha+1}},\quad\alpha\in{\mathbb{C}}$
They satisfy
$D^{\alpha_{1}}_{(p)}D^{\alpha_{2}}_{(p)}=D^{\alpha_{2}}_{(p)}D^{\alpha_{1}}_{(p)}=D^{\alpha_{1}+\alpha_{2}}_{(p)}$.
Since the roles of translation and phase are not going to be important in what
follows, let us set $m=0$ and $l=1$ and define vectors $|n_{(p)}\rangle$
corresponding to $\psi^{(p)}_{-n+1,0,1}(\xi)$
$\psi^{(p)}_{-n+1,0,1}(\xi)\>\longleftrightarrow\>|n_{(p)}\rangle$ (10)
in the Hilbert space $L^{2}(\mathbb{Q}_{p})$. Then
$\displaystyle D_{(p)}^{\alpha}|n_{(p)}\rangle$
$\displaystyle=p^{n_{(p)}\alpha}|n_{(p)}\rangle$
$\displaystyle\log_{p}D_{(p)}|n_{(p)}\rangle$
$\displaystyle=\displaystyle{\lim_{\alpha\to
0}}\,\frac{D_{(p)}^{\alpha}-1}{\alpha\ln
p}|n_{(p)}\rangle\>=\>n_{(p)}|n_{(p)}\rangle$ (11)
The wavelets, by construction, transform naturally under the affine group of
scaling and translation. However, it was shown in Ref.[23] that the scaling
part of it enhances to a larger SL(2,$\mathbb{R}$) symmetry. In terms of the
raising and lowering operators
$a^{(p)}_{\pm}|n_{(p)}\rangle=|n_{(p)}\\!\pm\\!1\rangle$ the generators of
SL(2,$\mathbb{R}$) are $J^{(p)}_{\pm}=a_{\pm}^{(p)}\log_{p}\\!D_{(p)}$ and
$J_{3}^{(p)}=\log_{p}D_{(p)}$. The algebra of these generators and their
action on the wavelet states are as follows.
$\displaystyle\begin{split}\left[J_{3}^{(p)},J^{(p)}_{\pm}\right]=\pm
J^{(p)}_{\pm},&\qquad\left[J_{+}^{(p)},J^{(p)}_{-}\right]=-2J^{(p)}_{3}\\\
J^{(p)}_{3}|n_{(p)}\rangle=n_{(p)}|n_{(p)}\rangle,&\qquad
J^{(p)}_{\pm}|n_{(p)}\rangle=n_{(p)}|n_{(p)}\\!\pm\\!1\rangle\end{split}$ (12)
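The algebra Eq. 12 can be checked with a finite matrix truncation of the wavelet tower; away from the truncation boundary the commutators close as stated. A minimal sketch (the cutoff $N$ is arbitrary):

```python
import numpy as np

N = 12                                  # arbitrary truncation of the tower n = 0..N
n = np.arange(N + 1, dtype=float)
J3 = np.diag(n)

# J_+|n> = n|n+1>,  J_-|n> = n|n-1>   (Eq. 12), truncated at n = N
Jp = np.zeros((N + 1, N + 1))
Jm = np.zeros((N + 1, N + 1))
for kk in range(N):
    Jp[kk + 1, kk] = kk                 # raising
for kk in range(1, N + 1):
    Jm[kk - 1, kk] = kk                 # lowering

def comm(A, C):
    return A @ C - C @ A

# Check the algebra on columns n = 0..N-1, i.e. away from the truncation boundary
ok1 = np.allclose(comm(J3, Jp)[:, :N], Jp[:, :N])
ok2 = np.allclose(comm(J3, Jm)[:, :N], -Jm[:, :N])
ok3 = np.allclose(comm(Jp, Jm)[:, :N], (-2 * J3)[:, :N])
print(ok1, ok2, ok3)
```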
We can now write Eq. 7 as
$\displaystyle\zeta(s)$ $\displaystyle=$
$\displaystyle\prod_{p\,\in\,{\mathrm{primes}}}\sum_{n_{(p)}=0}^{\infty}\left\langle
n_{(p)}\right|D_{(p)}^{-s}\left|n_{(p)}\right\rangle$ (13) $\displaystyle=$
$\displaystyle\sum_{{\mathbf{n}}=(n_{(2)},n_{(3)},\cdots)}\\!\\!\\!\\!\left\langle{\mathbf{n}}\right|e^{-s\ln\mathcal{D}}\left|{\mathbf{n}}\right\rangle$
where we have used a shorthand $\ln\mathcal{D}\equiv\sum_{p}\ln
p\,\log_{p}\\!D_{(p)}$, and the vectors $\left|{\mathbf{n}}\right\rangle$
belong to the product of the Hilbert spaces for all primes
$\bigotimes_{p}L^{2}(\mathbb{Q}_{p})$. However, since the sum runs only over
the _non-negative_ integers, this subspace is actually
$\bigotimes_{p}L^{2}(p^{-1}\mathbb{Z}_{p})$, spanned by the Bruhat-Schwarz
functions restricted to $p^{-1}\mathbb{Z}_{p}$ due to which the trace is well
defined (see [22, 23, 24] for details on the wavelet functions). This
expression leads us to think of the zeta function as the partition function of
a statistical system, the configurations of which are parametrised by the
integers ${\mathbf{n}}=(n_{(2)},n_{(3)},\cdots)$.
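For $\mathrm{Re}(s)>1$, where the trace converges, this reading can be checked directly: summing the geometric series $\sum_{n_{(p)}}p^{-sn_{(p)}}$ at each prime reproduces the Euler product, which in turn matches the Dirichlet series. A short numerical sketch (the truncation points are arbitrary):

```python
def primes_up_to(x):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def zeta_euler(s, pmax):
    """Truncated Euler product: each factor is the geometric sum over n_(p) >= 0."""
    z = 1.0
    for p in primes_up_to(pmax):
        z *= 1.0 / (1.0 - p ** (-s))
    return z

def zeta_dirichlet(s, nmax):
    return sum(n ** (-s) for n in range(1, nmax + 1))

s = 3.0
print(zeta_euler(s, 1000), zeta_dirichlet(s, 100000))  # both approximate zeta(3)
```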
The $\mathfrak{sl}_{2}(\mathbb{R})$ algebra Eq. 12 can be realised in terms of
a pair of oscillators in the Schwinger representation
$J^{(p)}_{3}=\log_{p}D_{(p)}=\frac{1}{2}\left(N_{\mathrm{I}(p)}-N_{\mathrm{II(p)}}\right),\>\,J^{(p)}_{+}=a_{\mathrm{I}(p)}^{\dagger}a_{\mathrm{II}(p)}\>\,\text{
and }\>\,J^{(p)}_{-}=a_{\mathrm{II}(p)}^{\dagger}a_{\mathrm{I}(p)}$ (14)
Formally there is a _phase difference operator_
$\Phi_{(p)}=\left(\Phi^{(p)}_{\mathrm{I}}-\Phi^{(p)}_{\mathrm{II}}\right)$
conjugate to the number difference operator
$N_{(p)}=\frac{1}{2}\left(N_{\mathrm{I}(p)}-N_{\mathrm{II}(p)}\right)$, such
that
$[\Phi_{I(p)}\,,N_{J(p^{\prime})}]=i\delta_{IJ}\delta_{pp^{\prime}},\qquad[\Phi_{(p)},N_{(p^{\prime})}]=i\delta_{pp^{\prime}}$
(15)
In Section 2 we reviewed a construction for the phase operator following [12,
13, 14, 15, 16, 17, 18]. Assuming for the moment that a phase operator with
the desired properties can be constructed, we define the operator
$\frac{1}{\ln p}\,\Phi_{(p)}p^{-N_{(p)}}=\frac{1}{\ln p}\,\Phi_{(p)}e^{-N_{(p)}\ln
p}$ and evaluate the following commutator
$\left[\frac{1}{\ln
p}\,\Phi_{(p)}\,p^{-N_{(p)}},{p^{\prime}}^{N_{(p^{\prime})}}\right]=i\delta_{pp^{\prime}}$
(16)
using Eq. 15. Thus, the operator $\frac{1}{\ln p}\,\Phi_{(p)}\,{p}^{-N_{(p)}}$
is formally canonically conjugate to $p^{N_{(p)}}=D_{(p)}$.
We would now like to extend it to the large Hilbert space obtained by
combining all primes. Let us first consider all prime numbers up to a fixed
prime $\mathfrak{p}$. The number of such primes is $\pi(\mathfrak{p})$, where
$\pi(x)$ is the prime counting function. We now define
$\mathcal{O}_{\mathfrak{p}}=\frac{1}{\pi(\mathfrak{p})}\sum_{p=2}^{\mathfrak{p}}\frac{1}{\ln
p}\,\Phi_{(p)}\,\mathcal{D}_{\mathfrak{p}}^{-1}\quad\text{ and
}\quad\ln\mathcal{D}_{\mathfrak{p}}={\sum_{p=2}^{\mathfrak{p}}\ln D_{(p)}}$
which are operators in the truncated Hilbert space
$\displaystyle{\bigotimes_{p=2}^{\mathfrak{p}}}L^{2}(p^{-1}\mathbb{Z}_{p})$.
These are canonically conjugate since
$\left[\mathcal{O}_{\mathfrak{p}},\mathcal{D}_{\mathfrak{p}}\right]=i$. Now we
take the limit $\mathfrak{p}\to\infty$ to obtain the canonically conjugate
operators
$\mathcal{O}=\lim_{\mathfrak{p}\to\infty}\mathcal{O}_{\mathfrak{p}},\quad\mathcal{D}=\lim_{\mathfrak{p}\to\infty}\mathcal{D}_{\mathfrak{p}}\quad\text{such
that}\quad\left[\mathcal{O},\mathcal{D}\right]=i$
on the large Hilbert space $\bigotimes_{p}L^{2}(p^{-1}\mathbb{Z}_{p})$. This
limit is analogous to the _thermodynamic limit_ of statistical models, as we
shall see in Section 3.1.
Associated to these operators is the Weyl symmetric product
$\displaystyle\begin{split}\frac{1}{2}\left(\mathcal{D}\mathcal{O}\right.&+\left.\mathcal{O}\mathcal{D}\right)=\lim_{\mathfrak{p}\to\infty}\frac{1}{2}\left(\mathcal{D}_{\mathfrak{p}}\mathcal{O}_{\mathfrak{p}}+\mathcal{O}_{\mathfrak{p}}\mathcal{D}_{\mathfrak{p}}\right)\>=\>\mathcal{O}\mathcal{D}-\frac{i}{2}\\\
&=\lim_{\mathfrak{p}\to\infty}\frac{1}{\pi(\mathfrak{p})}\sum_{p=2}^{\mathfrak{p}}\left(\mathbf{1}\otimes\cdots\otimes
e^{\frac{1}{2}\\!\ln D_{(p)}}\frac{\Phi_{(p)}}{\ln p}e^{-\frac{1}{2}\\!\ln
D_{(p)}}\otimes\mathbf{1}\otimes\cdots\right)\end{split}$ (17)
which is (formally) self-adjoint. In the last line, we have a similarity
transform of the sum of the $\Phi_{(p)}$ operators. As has been emphasised,
e.g. in Ref. [15], the operator canonically conjugate to the number operator
can only be defined up to a similarity transformation. Hence there ought to be
more than one (possibly infinitely many) _total_ phase operator
$\Phi$ canonically conjugate to the _total_ number operator
$\ln\mathcal{D}=\sum_{p}\ln D_{(p)}$. One may follow proposals in the
literature (e.g. [15]) to define $\Phi_{(p)}$, which would result in
$\frac{1}{2}\left(\mathcal{D}_{\mathfrak{p}}\mathcal{O}_{\mathfrak{p}}+\mathcal{O}_{\mathfrak{p}}\mathcal{D}_{\mathfrak{p}}\right)$,
canonically conjugate to $\ln\mathcal{D}_{\mathfrak{p}}$, on the outer product
of a dense subspace of the Hilbert space $L^{2}(p^{-1}\mathbb{Z}_{p})$ at the
$p$-th place. It is worth reiterating that the construction discussed above is
formal. The limit $\mathfrak{p}\to\infty$ is not straightforward. There is a
more convenient way to construct the phase operator over a subspace of the
Hilbert space. We shall attempt to do so in the next two subsections.
### 3.1 Aggregate phase operator for the Riemann zeta function
Let us return to the model of $\mathfrak{su}(2)$ spin in an external field of
Section 2 with the Hamiltonian containing a site dependent magnetic field
$H=-\displaystyle{\sum_{p=2}^{\mathfrak{p}}}B_{p}N_{p}=-\displaystyle{\sum_{p=2}^{\mathfrak{p}}}B_{p}\left(S_{3,p}+j\mathbf{1}\right)$
where we have now chosen an unusual convention of using _prime numbers_ $p$ to
label the sites (this type of numbering has, however, been used before in [4,
5, 7, 6]), $B_{p}$ are the values of the magnetic field at site $p$, and we
have shifted the zero of the energy for convenience. The latter amounts to a
shift in the spectrum of $S_{3,p}$ by $S_{3,p}\rightarrow
N_{p}=S_{3,p}+j\mathbf{1}$, so that $N_{p}$ takes the integer values
$0,1,\cdots,\mathfrak{n}$.
In this case one can define a phase operator at an individual site, say
$\boldsymbol{\phi}_{p}$ at the $p$-th site, as in Section 2. Each of these
individual operators satisfies the shift covariance relation (or commutator)
for special values of $\beta$
$\displaystyle\big{[}\boldsymbol{\phi}_{p},e^{\beta\sum_{2}^{\mathfrak{p}}B_{p}N_{p}}\big{]}$
$\displaystyle=i\beta e^{\beta\sum_{2}^{\mathfrak{p}}B_{p}N_{p}}$ (18)
$\displaystyle\text{for }\beta=\frac{2\pi
ik}{B_{p}(\mathfrak{n}+1)}\pmod{{2\pi}/{B_{p}}}\>$ $\displaystyle\text{ with
}k=1,\cdots,\mathfrak{n}\text{ and }p=2,\cdots,\mathfrak{p}$
This is valid over the entire Hilbert space, i.e., on an arbitrary state
vector, but only for these special values of $\beta$. Thus there are as many
shift covariant phase operators as the number of sites, and each individual
phase operator is covariant under the specific choices of $\beta$. Moreover,
since each of the Hilbert spaces, labelled by $p$, is finite dimensional, the
trace is a product over traces in each Hilbert space. Hence if we take the
trace of Eq. 18, exactly as in the case of the spin model in Section 2, the
trace of the commutator is zero, therefore,
$\text{Tr}\,e^{\beta\sum_{p}B_{p}N_{p}}=0$. Thus the shift covariance relation
is valid for those values of $\beta$ which also satisfy the zero trace
condition. This relates the zeroes of the partition function to the poles of
the following resolvent operators
$\mathfrak{R}[e^{i\boldsymbol{\phi}_{p}}](\phi)=\left(1-e^{-i\phi}e^{i\boldsymbol{\phi}_{p}}\right)^{-1}$
for all $p$. The trace of the resolvent is
$\sum_{k_{p}=0}^{\mathfrak{n}}\frac{1}{1-e^{-i\phi+i\phi_{k_{p}}}}$
apart from the pole for $k=0$, which yields the trivial commutator.
The similarity between the spin in a magnetic field and the zeta function is
apparent at this stage. (Recall that we have labelled the edges connecting
adjacent sites by the first $\mathfrak{p}$ prime numbers with this objective.)
Indeed, if we choose the local magnetic field $B_{p}=\ln p$, then the
partition function becomes
$Z(\beta)=\prod_{p=2}^{\mathfrak{p}}\bigg{(}\sum_{m_{p}=0}^{\mathfrak{n}}e^{\beta
m_{p}\ln
p}\bigg{)}=\prod_{p=2}^{\mathfrak{p}}\frac{1-p^{\beta(\mathfrak{n}+1)}}{1-p^{\beta}}$
In the thermodynamic limit $\mathfrak{p}\to\infty$, even for finite
$\mathfrak{n}$, the partition function has a simple form in terms of a ratio
of the Riemann zeta functions
$Z(\beta)=\lim_{\mathfrak{p}\to\infty}\,\prod_{p=2}^{\mathfrak{p}}\frac{1-p^{\beta(\mathfrak{n}+1)}}{1-p^{\beta}}=\frac{\zeta(-\beta)}{\zeta\left(-(\mathfrak{n}+1)\beta\right)}$
(19)
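As a sanity check, the finite-$\mathfrak{p}$ product in Eq. 19 can be compared numerically against the zeta-function ratio at a real inverse temperature in the region of absolute convergence. The sketch below is illustrative only: the prime cutoff, series length, and sample point $\beta=-2$, $\mathfrak{n}=1$ (for which $Z=\zeta(2)/\zeta(4)=15/\pi^{2}$) are our choices, not taken from the text.

```python
import math

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def Z_product(beta, frak_n, frak_p):
    """Finite-p partition function: prod_p (1 - p^{beta(n+1)}) / (1 - p^beta)."""
    prod = 1.0
    for p in primes_up_to(frak_p):
        prod *= (1.0 - p ** (beta * (frak_n + 1))) / (1.0 - p ** beta)
    return prod

def zeta(s, terms=200000):
    """Riemann zeta via its Dirichlet series (valid only for Re s > 1)."""
    return sum(n ** (-s) for n in range(1, terms + 1))

# beta = -2, frak_n = 1: Z should approach zeta(2)/zeta(4) = 15/pi^2.
Z = Z_product(-2.0, 1, 10000)
print(Z, 15 / math.pi ** 2)
```

Each Euler factor here equals $1+p^{-2}$, so the product converges rapidly toward $\zeta(2)/\zeta(4)$ as the cutoff grows.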
Remarkably, this has exactly the same form as the partition function of a
$\kappa$-parafermionic primon gas of [5, 7, 6, 10] with
$\kappa=\mathfrak{n}+1$ and $s=-\beta$. It would be interesting to try to
relate the parafermionic variables to the spin degrees of freedom. Notice that
$Z(\beta)$ has zeroes at the non-trivial zeroes of $\zeta(-\beta)$ from the
numerator, as well as at $\beta=-1/(\mathfrak{n}+1)$ from the pole of
$\zeta\left(-(\mathfrak{n}+1)\beta\right)$ in the denominator. The latter is
the only real zero, although it is at an unphysical value of the (inverse)
temperature. However, the trivial zeroes of $\zeta(-\beta)$ are not zeroes of
the partition function. This is due to the fact that at these points, both the
numerator and the denominator have simple zeroes, hence
$\lim_{\beta\to
2n}\frac{\zeta(-\beta)}{\zeta\left(-(\mathfrak{n}+1)\beta\right)}=\text{
finite}$
Thus the nontrivial zeroes are the Fisher zeroes of the spin model in the
complex (inverse temperature) $\beta$-plane. However, since the zeroes of the
Riemann zeta function are believed to be isolated (and since there is no
accumulation point on the real line) these zeroes are not related to any phase
transition. This is consistent as the system of spins in a magnetic field is
not expected to undergo a phase transition. Finally, the partition function
has additional poles from the zeroes of the zeta function in the denominator.
The spectrum of the zeroes of the partition function is then given by
$\sum_{p=2}^{\mathfrak{p}}\sum_{{n_{p}\in\mathbb{Z}}}\sum_{k_{p}=0}^{\mathfrak{n}}\frac{1}{1-e^{i\phi_{k_{p}}+i\frac{2\pi
n_{p}}{\ln p}-i\phi}}-\sum_{p,n_{p}}\frac{1}{1-e^{i\frac{2\pi n_{p}}{\ln
p}-i\phi}}$ (20)
where we have subtracted the pole due to $k=0$. This function may be rewritten
as follows.
$\displaystyle\sum_{p=2}^{\mathfrak{p}}\sum_{n\in\mathbf{Z}}\frac{1}{1-e^{i\left(\frac{2\pi
n}{(\mathfrak{n}+1)\ln
p}-\phi\right)}}-\sum_{p=2}^{\mathfrak{p}}\sum_{n\in\mathbf{Z}}\frac{1}{1-e^{i\left(\frac{2\pi
n}{\ln p}-\phi\right)}}$
$\displaystyle\buildrel\text{sing.}\over{\approx}\sum_{p=2}^{\mathfrak{p}}\sum_{n\in\mathbf{Z}}\frac{-i}{\phi-\frac{2\pi
n}{(\mathfrak{n}+1)\ln
p}}-\sum_{p=2}^{\mathfrak{p}}\sum_{n\in\mathbf{Z}}\frac{-i}{\phi-\frac{2\pi
n}{\ln p}}$
$\displaystyle\approx\sum_{p=2}^{\mathfrak{p}}\frac{d}{d(i\phi)}\ln(1-p^{-(\mathfrak{n}+1)i\phi})-\frac{d}{d(i\phi)}\sum_{p=2}^{\mathfrak{p}}\ln(1-p^{-i\phi})$
$\displaystyle\approx-i\frac{d}{d\phi}\ln\bigg{(}\prod_{p=2}^{\mathfrak{p}}\frac{1-p^{-(\mathfrak{n}+1)i\phi}}{1-p^{-i\phi}}\bigg{)}$
In the above we have used the Mittag-Leffler expansion, assuming analyticity
of the partition function. The expression above, in the limit
$\mathfrak{p}\to\infty$, becomes
$-i\frac{d}{d\phi}\ln\bigg{(}\frac{\zeta(i\phi)}{\zeta\left((\mathfrak{n}+1)i\phi\right)}\bigg{)}$
for $\mathrm{Re}\,(i\phi)>1$.
We will now try to construct a single operator that can be understood as
‘canonically conjugate’ to the Hamiltonian. If we define the total phase
operator as $\boldsymbol{\Phi}=\sum_{p}\boldsymbol{\phi}_{p}$ (which is the
sum of individual phase operators $\boldsymbol{\phi}_{p}$ as defined in
Section 2) it does not, unfortunately, have the desired shift covariance
relation Eq. 4 with the Hamiltonian. This is due to the site dependence of the
magnetic field $B_{p}$, as is apparent from the steps leading to Eq. 4. The
commutator there is obtained only at specific discrete values of $\beta$ which
are integer multiples of $2\pi k^{\prime}/B_{p}(\mathfrak{n}+1)$. Therefore,
unless the magnetic fields $B_{p}$ at all the sites are commensurate, which is
certainly not the case for $B_{p}=\ln p$, it is not possible to get the
desired commutator this way.
Instead, we propose to work with an aggregate phase operator
$\boldsymbol{\varphi}$, such that the action of $e^{i\boldsymbol{\varphi}}$ on
the composite state $\bigotimes_{p}\big{|}\phi_{p,k_{p}}\big{\rangle}$ is
defined to be that of $e^{i\hat{\phi}_{p}}$ if the eigenvalue $\phi_{p}\neq 0$
while all the other eigenvalues $\phi_{q\neq p}$ vanish; otherwise it acts as
the identity. Thus, _if two or more of the phases are non-zero_,
$e^{i\boldsymbol{\varphi}}=\mathbf{1}$. This may be expressed as
$e^{i\boldsymbol{\varphi}}=\sum_{p=2}^{\mathfrak{p}}e^{i\hat{\phi}_{p}}\prod_{q\neq
p}\delta_{\phi_{q},0}+\frac{2}{n_{\neq 0}(n_{\neq
0}-1)}\sum_{p_{1},p_{2}=1}^{\mathfrak{p}}\prod_{p_{1}\neq
p_{2}}(1-\delta_{\phi_{p_{1}},0})(1-\delta_{\phi_{p_{2}},0})$ (21)
where $n_{\neq 0}=\sum_{p}(1-\delta_{\phi_{p},0})$ is the number of sites
where the phase is non-zero. This is equivalent to projecting on a subspace
$\mathcal{H}^{(1)}$ of the Hilbert space, in which one, and exactly one, phase
is different from zero (this is analogous, though not exactly equivalent, to
projecting the Fock space of a quantum field theory of, say, a scalar field,
on the subspace with a single-particle excitation). After the projection, one
can use the total phase operator in the subspace
$\boldsymbol{\Phi}|_{\mathcal{H}^{(1)}}=\Pi_{\mathcal{H}^{(1)}}\left(\sum_{p}\Phi_{p}\right)\Pi_{\mathcal{H}^{(1)}}$.
From either point of view, the action of the above is nontrivial on a subspace
of the Hilbert space parametrised by only one of the eigenvalues $\phi_{p}$ at
a time, i.e., on a union of circles $\cup_{p}S^{1}_{(p)}$ while the full
Hilbert space is parametrised by $(S^{1})^{\mathfrak{p}}$. In the complement
of this subspace, it is identity. In this subspace $\mathcal{H}^{(1)}$, we can
follow the steps leading to Eq. 4 to compute the commutator
$\left[\boldsymbol{\varphi},\Pi_{\mathcal{H}^{(1)}}e^{-\beta
H}\Pi_{\mathcal{H}^{(1)}}\right]=i\beta\,\Pi_{\mathcal{H}^{(1)}}e^{-\beta
H}\Pi_{\mathcal{H}^{(1)}}$
which holds in $\mathcal{H}^{(1)}$ for all $\beta=\frac{2\pi
k}{B_{p}(\mathfrak{n}+1)}$ (mod $2\pi/B_{p}$) where $k=1,\cdots,\mathfrak{n}$
and $p=2,\cdots,\mathfrak{p}$. It is worth emphasising that, as in several
examples in quantum theory, the domain of the canonical commutator is not the
entire Hilbert space, but a direct sum of closed orthogonal subspaces [14] of
the type $\mathcal{H}^{(1)}$. In the limit $\mathfrak{p}\to\infty$, one takes
the closure of this subspace to obtain a closed subspace of the entire Hilbert
space.
The resolvent function of the exponential of the aggregate phase operator Eq.
21
$\mathfrak{R}[e^{i\boldsymbol{\varphi}}](\phi)=\left(1-e^{-i\phi}e^{i\boldsymbol{\varphi}}\right)^{-1}$
(22)
has the trace
$\displaystyle\mathrm{Tr}\left(\mathfrak{R}[e^{i\boldsymbol{\varphi}}](\phi)\right)$
$\displaystyle=\sum_{p=2}^{\mathfrak{p}}\sum_{k_{1},\cdots,k_{\mathfrak{p}}}\left(\otimes_{i=1}^{\mathfrak{p}}\big{\langle}\phi_{k_{i}}\big{|}\right)e^{-\beta
H}\sum_{n=0}^{\infty}e^{in(\boldsymbol{\varphi}-\phi)}\left(\otimes_{i=1}^{\mathfrak{p}}\big{|}\phi_{k_{i}}\big{\rangle}\right)$
$\displaystyle=\sum_{p=2}^{\mathfrak{p}}\Bigg{(}\sum_{{k_{p}\atop{\text{exactly
one}\atop\phi_{k_{p}}\neq
0}}}\sum_{n=0}e^{in\phi_{k_{p}}}e^{-in\phi}+\sum_{{k_{p}\atop{\text{at least
two}\atop\phi_{k_{p}}\neq 0}}}e^{-in\phi}\Bigg{)}$
$\displaystyle=\,\sum_{p=2}^{\mathfrak{p}}\Bigg{(}\sum_{{k_{p}\atop{\text{exactly
one}\atop\phi_{k_{p}}\neq
0}}}\frac{1}{1-e^{i\phi_{k_{p}}-i\phi}}+\sum_{{k_{p}\atop{\text{at least
two}\atop\phi_{k_{p}}\neq 0}}}\frac{1}{1-e^{-i\phi}}\Bigg{)}$ (23)
Except for the pole at $\phi=0$, this behaviour is in fact identical to that
of the resolvent
$\displaystyle\sum_{p=2}^{\mathfrak{p}}\left(1-e^{-i\phi}e^{i\hat{\phi}_{p}}\right)^{-1}$
in Eq. 20.
Even though we are not required to take the limit $\mathfrak{n}\to\infty$, it
is interesting to note that in this limit the phase operator
$\boldsymbol{\phi}_{p}$ at the $p$-th site approaches the phase operator
described in [15], which is a Toeplitz operator [25, 26, 27]; this has been
shown in [17]. This provides a way to understand the relation to the spectrum
without truncating to a finite $\mathfrak{n}$. As shown in [15, 17], each pair of
operators $(\boldsymbol{\phi}_{p},N_{p})$ satisfies the canonical commutation
relation in a subspace $\Omega_{p}$ of the $p$-th Hilbert space
$L^{2}(p^{-1}\mathbb{Z}_{p})$ as follows
$\Omega_{p}=\Big{\\{}|f\rangle_{(p)}=\displaystyle\sum_{n_{p}=0}^{\infty}f_{n_{p}}|n_{p}\rangle_{(p)}:\displaystyle\sum_{n_{p}=0}^{\infty}f_{n_{p}}=0\Big{\\}}$
where, $|n_{p}\rangle_{(p)}$ is an eigenstate of $N_{p}$ corresponding to the
eigenvalue $n_{p}$. Thus, each of the phase operators $\boldsymbol{\phi}_{p}$
satisfies the canonical commutator over a dense subspace of the Hilbert space
$\Big{[}\frac{1}{\ln
p}\,\boldsymbol{\phi}_{p}\,,\sum_{p\in\text{prime}}\\!\ln\mathcal{D}_{(p)}\Big{]}=i,\quad\text{in
}\,\Omega_{p}\,\bigotimes_{p^{\prime}\neq p}\mathcal{H}_{p^{\prime}}$
This is similar to the operator $\mathcal{O}_{\mathfrak{p}}$ defined earlier,
but without the restriction of the sum to primes up to $\mathfrak{p}$ (and
without the normalisation factor $\pi(\mathfrak{p})$ in the denominator). This
is because in this case one gets a contribution from only one of the subspaces
(one prime) at a time. In the next subsection we shall take a similar route to
define another phase operator.
### 3.2 Total phase operator for the Riemann zeta function
Following [15] (see also [17]) we would like to discuss another construction
of the phase operator $\hat{\Phi}$ conjugate to the number operator $\hat{N}$
such that
$\displaystyle\hat{N}|n\rangle$ $\displaystyle=n|n\rangle,\qquad
n=0,1,\cdots,\mathfrak{n}$ $\displaystyle\hat{\Phi}$
$\displaystyle=\sum_{m\neq n}\frac{i\,|m\rangle\langle n|}{m-n}$ (24)
This is a $(\mathfrak{n}+1)\times(\mathfrak{n}+1)$ hermitian Toeplitz matrix
[25, 26, 27]. When applied on a state
$|\mathbf{v}\rangle=\sum_{n}v_{n}|n\rangle$, we find that
$\big{[}\hat{\Phi},\hat{N}\big{]}|\mathbf{v}\rangle=i|\mathbf{v}\rangle\quad\text{if
and only if }\sum_{n=0}^{\mathfrak{n}}v_{n}=0$ (25)
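The statement in Eq. 25 is easy to verify directly for small $\mathfrak{n}$: the matrix elements of $[\hat{\Phi},\hat{N}]$ are $-i$ off the diagonal and $0$ on it, so acting on $|\mathbf{v}\rangle$ gives $-i(\sum_{n}v_{n}-v_{m})$ in the $m$-th component, which equals $iv_{m}$ precisely when the components sum to zero. A minimal numerical sketch (the size $\mathfrak{n}=5$ and the choice of a nontrivial root of unity are illustrative):

```python
import cmath

n_frak = 5                       # matrix size is (n_frak + 1) x (n_frak + 1)
dim = n_frak + 1

# Phase operator: Phi[m][n] = i/(m-n) for m != n, zero on the diagonal (Eq. 24).
Phi = [[1j / (m - n) if m != n else 0.0 for n in range(dim)] for m in range(dim)]
N = list(range(dim))             # number operator is diagonal: N|n> = n|n>

def commutator_action(v):
    """Compute ([Phi, N] v)_m = sum_n Phi[m][n] * (n - m) * v[n]."""
    return [sum(Phi[m][n] * (N[n] - N[m]) * v[n] for n in range(dim))
            for m in range(dim)]

# A nontrivial (n_frak+1)-th root of unity: its powers sum to zero.
v = [cmath.exp(2j * cmath.pi * n / dim) for n in range(dim)]
w = commutator_action(v)
err = max(abs(w[m] - 1j * v[m]) for m in range(dim))
print("max deviation from i*v:", err)          # ~ machine precision

# With sum(v) != 0 the relation fails, e.g. for the constant vector.
u = [1.0] * dim
wu = commutator_action(u)
print("deviation for constant vector:",
      max(abs(wu[m] - 1j * u[m]) for m in range(dim)))
```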
Thus the commutator is valid in a codimension one subspace. For example, we
could choose the $v_{n}$s to be the nontrivial $(\mathfrak{n}+1)$-th roots of
unity. Toeplitz matrices and operators have a long history and have been
studied extensively (see e.g., [27]). Although eigenvalues $k_{0}\leq
k_{1}\leq\cdots\leq k_{\mathfrak{n}}$ and the corresponding eigenvectors
$|k_{m}\rangle$ ($m=0,1,\cdots,\mathfrak{n}$) of the matrix above exist, one
cannot write them explicitly. Moreover, by Szegö’s theorems, the spectrum is
bounded by $\pi$ as $\mathfrak{n}\to\infty$ (so that the matrix size goes to
infinity) and the eigenvalues are distributed uniformly and symmetrically
around zero, as one can also check numerically for small values of
$\mathfrak{n}$.
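The spectral claim can indeed be checked numerically for small $\mathfrak{n}$. The sketch below estimates the largest eigenvalue in modulus by power iteration on $\hat{\Phi}^{2}$ (since the spectrum is symmetric about zero, iterating $\hat{\Phi}$ itself would oscillate between the $\pm\lambda_{\max}$ eigenvectors); the matrix size, starting vector, and iteration count are illustrative choices.

```python
import math

# (n+1)x(n+1) Toeplitz matrix Phi[m][n] = i/(m-n) (Eq. 24); its spectral
# radius should stay below pi, consistent with Szego's theorems.
dim = 25
Phi = [[1j / (m - n) if m != n else 0.0 for n in range(dim)] for m in range(dim)]

def matvec(A, v):
    return [sum(A[m][n] * v[n] for n in range(len(v))) for m in range(len(v))]

def norm(v):
    return math.sqrt(sum(abs(x) ** 2 for x in v))

# Power iteration on Phi^2: its dominant eigenvalue is lambda_max^2 even
# though +lambda_max and -lambda_max are (nearly) degenerate for Phi itself.
v = [complex(1 + (k % 3), (k % 5) - 2) for k in range(dim)]   # fixed generic start
for _ in range(500):
    v = matvec(Phi, matvec(Phi, v))
    nv = norm(v)
    v = [x / nv for x in v]
w = matvec(Phi, matvec(Phi, v))
lam = math.sqrt(norm(w))             # estimate of |lambda_max|
print("largest |eigenvalue| estimate:", lam, " pi =", math.pi)
```

By Cauchy interlacing the estimate must exceed the $2\times 2$ value $1$, and it remains below the symbol bound $\pi$.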
Coming back to the problem of our interest, the Hamiltonian is
$H=\sum_{p}\ln p\,N_{(p)}$, where $N_{(p)}$ is the number operator at the
$p$-th site, which in turn can be expressed in terms of the generalised
Vladimirov derivative. For a natural number $n\in\mathbb{N}$, we use the prime
factorisation (which also plays an important role in the arithmetic gas models
[4, 5, 7, 6]) to associate a vector in
$\otimes_{p}L^{2}(p^{-1}\mathbb{Z}_{p})$ as
$n=\prod_{p}p^{n_{(p)}}\>\longleftrightarrow\>|\mathbf{n}\rangle=\otimes_{p}|n_{(p)}\rangle$
(26)
using the wavelet basis. We emphasise that only a finite number of entries in
the infinite component vector are non-zero integers. Clearly
$|\mathbf{n}\rangle$ is an eigenvector of $H$
$H|\mathbf{n}\rangle=\sum_{p}n_{(p)}\ln p\,|\mathbf{n}\rangle=\ln
n\,|\mathbf{n}\rangle$
Moreover, these states are orthonormal
$\langle\mathbf{n}_{i}|\mathbf{n}_{j}\rangle=\prod_{p}\langle
n^{i}_{(p)}|n^{j}_{(p)}\rangle=\prod_{p}\delta_{n^{i}_{(p)}n^{j}_{(p)}}=\delta_{n_{i}n_{j}}$.
When restricted to a fixed value of $p$, the following definition for the
phase operator
$\frac{i}{\ln p}\sum_{n_{a}^{(p)}\neq
n_{b}^{(p)}}\frac{|\mathbf{n}^{(p)}_{a}\rangle\,\langle\mathbf{n}^{(p)}_{b}|}{(n^{(p)}_{a}-n^{(p)}_{b})}$
on $L^{2}(p^{-1}\mathbb{Z}_{p})$ is natural. This is a Toeplitz matrix,
therefore, it has eigenvectors $|k_{(p)}\rangle$. Let us define the phase
operator on the full space $\otimes_{p}L^{2}(p^{-1}\mathbb{Z}_{p})$
schematically to be of the form
$\Phi_{\text{tot}}\sim\sum_{n_{a}\neq
n_{b}}\frac{i\,|\mathbf{n}_{a}\rangle\langle\mathbf{n}_{b}|}{\ln n_{a}-\ln
n_{b}}=\sum_{\stackrel{{\scriptstyle\text{not
all}}}{{n_{a}^{(p)}=n_{b}^{(p)}}}}\frac{i\left(\otimes_{p_{a}}|\mathbf{n}^{(p_{a})}_{a}\rangle\right)\left(\otimes_{p_{b}}\langle\mathbf{n}^{(p_{b})}_{b}|\right)}{\sum_{p}(n^{(p)}_{a}-n^{(p)}_{b})\ln
p}$
We need to specify the limits of sums over the integers in the above. However,
before we undertake that exercise, we would like to check if $H$ and
$\Phi_{\text{tot}}$ could be a canonically conjugate pair, possibly on a
subspace spanned by vectors of the form in Eq. 26.
To this end, let us now consider a finite linear combination of the form
$|\mathbf{v}\rangle=\sum_{\mathbf{n}}v_{\mathbf{n}}|\mathbf{n}\rangle$, in
which we further require the coefficients to factorise as
$v_{\mathbf{n}}\equiv v_{(n_{(2)},n_{(3)},\cdots)}=\prod_{p}v_{n_{(p)}}$. One
can compute the commutator and verify that on such a state
$\big{[}\Phi_{\text{tot}},H\big{]}|\mathbf{v}\rangle=i|\mathbf{v}\rangle\quad\text{if
and only if }\sum_{n}v_{n}=0$ (27)
where the upper limit of the sum is the maximum integer $n_{\text{max}}$ that
appears in the definition of the vector $|\mathbf{v}\rangle$. Consider all the
vectors $|n\rangle$ that appear in the linear combination in defining
$|\mathbf{v}\rangle$ on which we want to check for the commutator, and the
prime factorisations of the corresponding integers $n$. Let the maximum
$\text{max}_{p}\\{n^{(p)}\\}$ of these be $\mathfrak{n}\in\mathbb{N}$. There
is also a highest prime $\mathfrak{p}$, i.e., above which all
$n^{(p>\mathfrak{p})}=0$ in the factorisations. We can now make the proposal
for the phase operator more precise. It is
$\Phi_{\text{tot}}=\begin{cases}\displaystyle{\sum^{\mathfrak{n}}_{{n_{a}^{(p)},\,n_{b}^{(p)}=0}\atop{\text{not
all
}n_{a}^{(p)}=n_{b}^{(p)}}}\\!\\!\\!\\!\frac{i\left(\otimes_{p_{a}\leq\mathfrak{p}}\big{|}\mathbf{n}^{(p_{a})}_{a}\big{\rangle}\right)\left(\otimes_{p_{b}\leq\mathfrak{p}}\big{\langle}\mathbf{n}^{(p_{b})}_{b}\big{|}\right)}{\sum_{p\leq\mathfrak{p}}(n^{(p)}_{a}-n^{(p)}_{b})\ln
p}}&\mbox{for }n_{a}^{(p)},n_{b}^{(p)}\leq\mathfrak{n}\\\
\displaystyle{\bigotimes_{p>\mathfrak{p}}\big{|}n_{(p)}=0\big{\rangle}\big{\langle}n_{(p)}=0\big{|}}&\mbox{otherwise}\end{cases}$
(28)
and acts on a space spanned by vectors of the form
$|\mathbf{k},\mathfrak{p}\rangle=\left(\otimes_{p\leq\mathfrak{p}}|k_{(p)}\rangle\right)\otimes\left(\otimes_{p^{\prime}>\mathfrak{p}}|n_{(p^{\prime})}=0\rangle\right)$
where at least one $k_{(p)}\neq 0$ for a prime $p\leq\mathfrak{p}$ and for
$p>\mathfrak{p}$, we have chosen the ‘vacuum’ state in the number
representation. In the limit $\mathfrak{p}\to\infty$ (even for finite values
of $\mathfrak{n}$), we expect this to be a well defined Toeplitz operator on
$\otimes_{p}L^{2}(p^{-1}\mathbb{Z}_{p})$. However, we are not able to offer a
rigorous mathematical proof of this assertion.
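The finite truncation in Eq. 28 can be probed numerically. In the sketch below (with the illustrative choice $\mathfrak{p}=3$, $\mathfrak{n}=1$) the basis states correspond to the integers $1,2,3,6$ via Eq. 26, $H$ has the distinct eigenvalues $\ln n$, and $[\Phi_{\text{tot}},H]$ acts as $i$ on a vector whose factorised coefficients sum to zero at each prime:

```python
import math
from itertools import product

frak_n = 1
primes = [2, 3]                          # primes up to frak_p = 3

# Basis: tuples (n_2, n_3) with entries in {0,...,frak_n},
# i.e. the integers 2^{n_2} 3^{n_3} in {1, 2, 3, 6}.
basis = list(product(range(frak_n + 1), repeat=len(primes)))
energies = [sum(n_p * math.log(p) for n_p, p in zip(ns, primes)) for ns in basis]

dim = len(basis)
# Phi_tot[a][b] = i / (ln n_a - ln n_b) for a != b (Eq. 28, finite truncation).
Phi = [[1j / (energies[a] - energies[b]) if a != b else 0.0 for b in range(dim)]
       for a in range(dim)]

def comm_H_action(v):
    """([Phi_tot, H] v)_a = sum_b Phi[a][b] (E_b - E_a) v[b]."""
    return [sum(Phi[a][b] * (energies[b] - energies[a]) * v[b] for b in range(dim))
            for a in range(dim)]

# Factorised coefficients v_n = prod_p v_{n_(p)} with v_{n_(p)} = (-1)^{n_(p)},
# so that the per-prime sums (and hence the total sum) vanish.
v = [(-1.0) ** sum(ns) for ns in basis]
w = comm_H_action(v)
err = max(abs(w[a] - 1j * v[a]) for a in range(dim))
print("max deviation from i*v:", err)
```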
The phase operator cannot be defined uniquely; it is ambiguous up to a
similarity transform [15]. Given a phase operator $\Phi$, for example, as
defined in Eq. 24, let us consider the operator $\Phi_{\beta}=e^{-\beta N}\Phi
e^{\beta N}$ related to it by a similarity transform labelled by a parameter
$\beta$. This would have been a trivial statement had the commutator Eq. 25
been true in the full vector space; however, as we have seen, this relation
holds in a subspace of codimension one. It is straightforward to check that
the condition that restricts to the subspace is modified to $\sum_{n}e^{\beta
n}v_{n}=0$ for $\Phi_{\beta}$ to be conjugate to $N$. We may choose
$v_{n}=e^{2\pi im_{1}n/(\mathfrak{n}+1)}$ and $\beta=2\pi
im_{2}/(\mathfrak{n}+1)$ with $m_{1}+m_{2}\neq 0\pmod{\mathfrak{n}+1}$. This
condition is identical to the vanishing of the partition function $Z=\text{Tr
}e^{-\beta H}$ for the Hamiltonian $H=-N$ at these special values of $\beta$.
Now consider $\Phi_{\text{tot},\beta}=e^{-\beta H}\Phi_{\text{tot}}e^{\beta
H}$, the similarity transformation of Eq. 28. The modified condition that
defines the subspace is $\prod_{p}\sum_{n_{(p)}}e^{\beta n_{(p)}\ln
p}v_{n_{(p)}}=0$. If we choose the coefficient
$v_{n_{(p)}}=\chi(p)^{n_{(p)}}$, where $\chi(p)$ is a Dirichlet character (see
Eq. 30), the subspace is defined by the vanishing of
$\prod_{p=2}^{\mathfrak{p}}\sum_{n_{(p)}=0}^{\mathfrak{n}}\left(\chi(p)p^{\beta}\right)^{n_{(p)}}=\prod_{p=2}^{\mathfrak{p}}\frac{1-\chi(p^{\mathfrak{n}+1})p^{\beta(\mathfrak{n}+1)}}{1-\chi(p)p^{\beta}}$
(29)
which, in the limit $\mathfrak{p}\to\infty$, is a ratio of Riemann zeta or
Dirichlet $L$-functions, depending on whether the character is trivial or not,
as in Eqs. 19 and 38, respectively. Thus the subspace in which the phase operator
Eq. 28, or its similarity transform, is canonically conjugate to the
Hamiltonian is defined by the vanishing of the Riemann zeta function (at
special values of the inverse temperature $\beta\neq 0$). We have previously
encountered this in Eq. 19 with the aggregate phase operator defined in
Section 3.1. As we see, different choices for the coefficients relate to the
vanishing of Dirichlet $L$-functions, to which we shall now turn our
attention.
## 4 Extension to the Dirichlet $L$-functions
The Riemann zeta function belongs to a family of functions, called the
Dirichlet $L$-functions, that are defined as the analytic continuation of the
Dirichlet series
$L(s,\chi)=\sum_{n=1}^{\infty}\frac{\chi(n)}{n^{s}}=\prod_{p\,\in\,{\mathrm{primes}}}\frac{1}{1-\chi(p)p^{-s}},\qquad\mathrm{Re}(s)>1$
(30)
to the complex $s$-plane. In the above, $\chi(n)$, called the Dirichlet
character, is a homomorphism from the multiplicative group
$G(k)=\left(\mathbb{Z}/k\mathbb{Z}\right)^{*}$ of _invertible_ elements of
$\mathbb{Z}/k\mathbb{Z}$ to $\mathbb{C}^{*}$, which is then extended as a
character for all $\mathbb{Z}$ by setting $\chi(m)=0$ for all $m$ which are
not coprime to $k$ [28]. A Dirichlet character so defined satisfies the
following properties
1. For all $m_{1},m_{2}\in\mathbb{Z}$, $\chi(m_{1}m_{2})=\chi(m_{1})\chi(m_{2})$
2. $\chi(m)\neq 0$ if and only if $m$ is relatively prime to $k$
3. $\chi(m_{1})=\chi(m_{2})$ if $m_{1}\equiv m_{2}$ $\pmod{k}$
Therefore, $\chi$ is a multiplicative character, defined modulo $k$, on the
set of integers. It is this multiplicative property that justifies writing the
sum as the infinite product in Eq. 30.
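These properties can be illustrated concretely. The sketch below constructs a character mod $k=5$ by sending a generator of the cyclic group $(\mathbb{Z}/5\mathbb{Z})^{*}$ to $i$, then checks multiplicativity, the support condition, and periodicity (the modulus and generator are illustrative choices):

```python
import math

k = 5
g = 2                      # a generator of the cyclic group (Z/5Z)^*

# chi(g^j mod k) = i^j defines a character; chi vanishes off the units.
chi_table = {0: 0}
val, x = 1 + 0j, 1
for j in range(k - 1):
    chi_table[x] = val
    x = (x * g) % k
    val *= 1j

def chi(m):
    return chi_table[m % k]

# Property 1: complete multiplicativity.
ok_mult = all(abs(chi(m1 * m2) - chi(m1) * chi(m2)) < 1e-12
              for m1 in range(1, 30) for m2 in range(1, 30))
# Property 2: chi(m) != 0 iff gcd(m, k) = 1.
ok_supp = all((chi(m) != 0) == (math.gcd(m, k) == 1) for m in range(60))
# Property 3: periodicity mod k.
ok_per = all(chi(m) == chi(m + k) for m in range(60))
print(ok_mult, ok_supp, ok_per)
```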
There is a trivial character that assigns the value 1 to all integers,
including 0. (This may be taken to correspond to $k=1$.) The Riemann zeta
function corresponds to the choice of the trivial character. In all other
cases, only those integers (respectively, primes), the Dirichlet characters of
which are not zero, contribute to the sum (respectively, the product). This of
course depends on the periodicity $k$ of the character. Therefore, the product
restricts to primes that do not divide $k$
$\prod_{p}\frac{1}{1-\chi_{k}(p)p^{-s}}=\prod_{(p,k)=1}\frac{1}{1-\chi_{k}(p)p^{-s}}=\prod_{(p,k)=1}\sum_{n_{p}=0}^{\infty}\chi_{k}(p)^{n_{p}}p^{-n_{p}s}$
With this understanding, one can define the _inverse_ $\chi^{-1}$ by
restricting to the relevant set of primes. For these primes $p\nmid k$, the
Dirichlet character satisfies $\chi_{k}^{-1}(p)=\chi_{k}^{*}(p)$. (Formally,
for the others, we may take $\chi$ as well as $\chi^{-1}$ to be zero.)
Everything we discussed in the context of the Riemann zeta function in Section
3, including all the caveats, apply to the Dirichlet $L$-functions, with
obvious modifications at appropriate places. The role of the generalised
Vladimirov derivative, acting on complex valued functions on the $p$-adic
numbers $\mathbb{Q}_{p}$, is played by the generalised Vladimirov derivative
_twisted by the character $\chi$_ [20], denoted by $D_{(p)\mathfrak{x}}$.
The Kozyrev wavelets are eigenfunctions of these operators for all $\chi$. The
eigenvalues, however, are different and involve the Dirichlet character as
follows
$D_{(p)\mathfrak{x}}\psi_{1-n,m,j}^{(p)}(\xi)=\chi_{k}(p^{n})p^{n}\,\psi_{1-n,m,j}^{(p)}(\xi)$
(31)
We refer to [20] for details of the construction and other properties of these
operators.
The above equation and Eq. 9 lead to the conclusion that $D$ and
$D_{\mathfrak{x}}$ are simultaneously diagonalisable, hence the Kozyrev
wavelets are also eigenfunctions of the _unitary_ operator
$U_{\mathfrak{x}}=D_{\mathfrak{x}}D^{-1}$
$U_{(p)\mathfrak{x}}\psi_{1-n,m,j}^{(p)}(\xi)=D_{(p)\mathfrak{x}}D^{-1}_{(p)}\psi_{1-n,m,j}^{(p)}(\xi)=\chi_{k}(p^{n})\psi_{1-n,m,j}^{(p)}(\xi)$
(32)
We can define its inverse
$U_{(p)\mathfrak{x}}^{-1}=U_{(p)\mathfrak{x}}^{\dagger}=D_{(p)}D^{-1}_{(p)\mathfrak{x}^{*}}$
for those $k$ which do not contain $p$ in their factorisation (otherwise it is
the identity operator). Conversely, when we consider all primes, for a given
$k$, we need to restrict to the set of primes that do not divide $k$, i.e.,
with the formal extension of the inverse given after Eq. 30.
As in the case of the Riemann zeta function, we can combine all the prime
factors to write $L(s,\chi_{k})$ as a trace
$L(s,\chi)=\sum_{{\mathbf{n}}=(n^{(2)},n^{(3)},\cdots)}\\!\\!\\!\\!\big{\langle}{\mathbf{n}}\big{|}\,\mathcal{U}_{\mathfrak{x}}e^{-s\ln\mathcal{D}}\,\big{|}\mathbf{n}\big{\rangle}$
where $\mathcal{U}_{\mathfrak{x}}=\otimes_{p}U_{(p)\mathfrak{x}}$. If we want
to interpret this as the partition function of a statistical mechanical model,
the Hamiltonian is such that
$e^{-\beta
H_{\mathfrak{x}}}\>\longleftrightarrow\>\mathcal{U}_{\mathfrak{x}}e^{-s\ln\mathcal{D}}=\mathcal{D}^{-1}\mathcal{D}_{\mathfrak{x}}\mathcal{D}^{-s}=\mathcal{D}_{\mathfrak{x}}\mathcal{D}^{-s-1}$
which reduces to $e^{-\beta H}\sim\mathcal{D}^{-s}$, corresponding to the
Riemann zeta function in Eq. 13, up to a phase. Now since a non-zero
$\chi_{k}(p)=e^{i\omega_{p}}$ is a root of unity, we can define a new phase
state (by truncating the spectrum to relate to the previous case)
$|\phi_{k_{p}}^{(\mathfrak{x})}\rangle=\frac{1}{\sqrt{\mathfrak{n}+1}}\sum_{n_{p}=0}^{\mathfrak{n}}e^{-in_{p}(\phi_{k_{p}}\ln
p+\omega_{p})}|n_{p}\rangle$ (33)
which provide an orthonormal set
$\big{\langle}\phi_{k^{\prime}_{p}}^{(\mathfrak{x})}\big{|}\phi_{k_{p}}^{(\mathfrak{x})}\big{\rangle}=\delta_{k_{p},k^{\prime}_{p}}$
One may construct a phase operator
$\hat{\phi}_{(p)\mathfrak{x}}=\sum_{k_{p}}\left(\phi_{k_{p}}+\frac{\omega_{p}}{\ln
p}\right)\big{|}\phi_{k_{p}}^{(\mathfrak{x})}\big{\rangle}\big{\langle}\phi_{k_{p}}^{(\mathfrak{x})}\big{|}\;\equiv\>\sum_{k_{p}}\phi_{k_{p}}^{(\mathfrak{x})}\big{|}\phi_{k_{p}}^{(\mathfrak{x})}\big{\rangle}\big{\langle}\phi_{k_{p}}^{(\mathfrak{x})}\big{|}$
(34)
using the eigenvalues and eigenstates as before.
Now we define the operator $U_{(p)}e^{\beta\ln p\,N_{p}}$ such that
$U_{(p)}e^{\beta\ln p\,N_{p}}|n_{p}\rangle=e^{in_{p}\omega_{p}}e^{\beta n_{p}\ln
p}|n_{p}\rangle$ (35)
It follows that
$\displaystyle\hat{\phi}_{(p)\mathfrak{x}}U_{(p)}e^{\beta\ln p\,N_{p}}$
$\displaystyle=U_{(p)}e^{\beta\ln
p\,N_{p}}\,\sum_{k_{p}}\left(\phi_{k_{p}}^{(\mathfrak{x})}+\frac{\omega_{p}}{\ln
p}-i\beta\right)\Big{|}\phi_{k_{p}}^{(\mathfrak{x})}+\frac{\omega_{p}}{\ln
p}-i\beta\Big{\rangle}\Big{\langle}\phi_{k_{p}}^{(\mathfrak{x})}+\frac{\omega_{p}}{\ln
p}-i\beta\Big{|}$ $\displaystyle{}\quad+U_{(p)}e^{\beta\ln
p\,N_{p}}\left(i\beta-\frac{\omega_{p}}{\ln
p}\right)\sum_{k_{p}}\Big{|}\phi_{k_{p}}^{(\mathfrak{x})}+\frac{\omega_{p}}{\ln
p}-i\beta\Big{\rangle}\Big{\langle}\phi_{k_{p}}^{(\mathfrak{x})}+\frac{\omega_{p}}{\ln
p}-i\beta\Big{|}$
This relation can be obtained by the same method used in the earlier sections.
If $i\beta$ takes any of the values $\frac{2\pi k_{p}}{(\mathfrak{n}+1)\ln
p}+\frac{\omega_{p}}{\ln p}$ then in the first term above, one gets the phase
operator Eq. 34, since $\phi_{k_{p}}$ is defined modulo $\frac{2\pi}{\ln p}$.
Hence, as in the case of the Riemann zeta function,
$\Big{[}\hat{\phi}_{(p)\mathfrak{x}},U_{(p)\mathfrak{x}}e^{\beta\ln
p\,N_{p}}\Big{]}=\left(i\beta-\frac{\omega_{p}}{\ln
p}\right)U_{(p)\mathfrak{x}}e^{\beta\ln p\,N_{p}}$ (36)
The definition of the (exponential of the) resolvent is completely analogous
to the case of the Riemann zeta function Eq. 22 — one only needs to substitute
$\hat{\phi}_{p}\to\hat{\phi}_{(p)\mathfrak{x}}$, resulting in the trace
$\sum_{p=2}^{\mathfrak{p}}\Bigg{(}\sum_{{k_{p}\atop{\text{exactly
one}\atop\phi_{k_{p}}\neq
0}}}\frac{1}{1-e^{i\left(\phi_{k_{p}}+\frac{\omega_{p}}{\ln
p}\right)-i\phi}}\;+\sum_{{k_{p}\atop{\text{at least two}\atop\phi_{k_{p}}\neq
0}}}\frac{1}{1-e^{\frac{i\omega_{p}}{\ln p}-i\phi}}\Bigg{)}$
in place of Eq. 23. Once again the poles (apart from that at $\phi=0$)
coincide with the zeroes of the partition function, which is (the unitary
operator
$\mathcal{U}_{\mathfrak{x},\mathfrak{p}}=\displaystyle{\prod_{p=2}^{\mathfrak{p}}}U_{(p)\mathfrak{x}}$
being the product of the corresponding operators at all sites)
$Z(\beta)=\mathrm{Tr}\left(\mathcal{U}_{\mathfrak{x},\mathfrak{p}}\exp\Big{(}\beta\sum_{p=2}^{\mathfrak{p}}\ln
p\,N_{p}\Big{)}\right)=\prod_{p=2}^{\mathfrak{p}}\frac{1-\chi^{\mathfrak{n}+1}(p)p^{\beta(\mathfrak{n}+1)}}{1-\chi(p)p^{\beta}}$
(37)
where we have used the fact that
$\chi(p^{\mathfrak{n}+1})=\chi^{\mathfrak{n}+1}(p)$ and that
$\chi^{\mathfrak{n}+1}$ is again a character with the same periodicity. In the
thermodynamic limit $\mathfrak{p}\to\infty$
we get the following ratio of the Dirichlet $L$-functions
$Z(\beta)=\frac{L(-\beta,\chi_{k})}{L(-(\mathfrak{n}+1)\beta,\chi^{\mathfrak{n}+1}_{k})}$
(38)
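Eq. 38 can also be checked numerically for a nontrivial character. In the sketch below we make the illustrative choice $k=4$ and $\mathfrak{n}+1=2$: the character mod 4 ($\chi(n)=\pm 1$ for $n\equiv 1,3\pmod 4$, zero on even $n$) squares to the principal character, so the finite product of Eq. 37 at a real $\beta<-1$ should approach $L(-\beta,\chi_{4})/L(-2\beta,\chi_{4,0})$; cutoffs and the sample point $\beta=-2$ are our choices.

```python
import math

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def chi4(n):
    """The nontrivial Dirichlet character mod 4."""
    return 0 if n % 2 == 0 else (1 if n % 4 == 1 else -1)

def Z_product(beta, frak_p):
    """Eq. 37 with frak_n + 1 = 2: prod_p (1 - chi^2(p) p^{2b}) / (1 - chi(p) p^b)."""
    prod = 1.0
    for p in primes_up_to(frak_p):
        prod *= (1.0 - chi4(p) ** 2 * p ** (2 * beta)) / (1.0 - chi4(p) * p ** beta)
    return prod

def L_series(s, chi, terms=200000):
    """Dirichlet L-function via its series (valid for Re s > 1)."""
    return sum(chi(n) * n ** (-s) for n in range(1, terms + 1))

beta = -2.0
lhs = Z_product(beta, 10000)
rhs = L_series(-beta, chi4) / L_series(-2 * beta, lambda n: chi4(n) ** 2)
print(lhs, rhs)
```

Note that the factor at $p=2$ is automatically $1$, since $\chi_{4}(2)=0$, in accordance with the restricted Euler product above.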
In the special case where $\mathfrak{n}+1$ is the Euler totient function
$\varphi(k)$ or an integer multiple of it,
$\chi(p^{\varphi(k)})=\left(\chi(p)\right)^{\varphi(k)}=\chi_{k,0}(p)$, the
principal character, which is 1 if $(p,k)=1$ and 0 otherwise. Except for the
trivial zeroes, the discussion above closely parallels what we argued for the
Riemann zeta function.
In summary, we have proposed to view the Riemann zeta and the Dirichlet
$L$-functions as the partition functions (up to multiplication by a function
that plays no essential role) of _quantum spins_ in magnetic fields, the
values of which depend on the site. We have argued how to make sense of the
phase operator (up to a similarity transformation). The zeroes of the
partition function coincide with the poles of the resolvent function of the
exponential of the aggregate or total phase operators, as discussed in
Sections 3.1 and 4.
A different approach to the phase operator was discussed in Section 3.2. Its
relation to the partition function, via similarity transforms, seems to relate
the zeta function and the $L$-functions in the same framework.
Acknowledgements: We thank Surajit Sarkar for collaboration at initial stages
of this work. It is a pleasure to acknowledge useful discussions with Rajendra
Bhatia and Ved Prakash Gupta. We would like to thank Toni Bourama and Wilson
Zúñiga-Galindo for the invitation to write this article.
# Operational Semantics with Hierarchical Abstract Syntax Graphs
(Extended abstract of invited talk)
Dan R. Ghica, Huawei Research, Edinburgh; University of Birmingham, UK
###### Abstract
This is a motivating tutorial introduction to a semantic analysis of
programming languages using a graphical language as the representation of
terms, and graph rewriting as a representation of reduction rules. We show how
the graphical language automatically incorporates desirable features, such as
$\alpha$-equivalence and how it can describe pure computation, imperative
store, and control features in a uniform framework. The graph semantics
combines some of the best features of structural operational semantics and
abstract machines, while offering powerful new methods for reasoning about
contextual equivalence.
All technical details are available in an extended technical report by Muroya
and the author [DBLP:journals/corr/abs-1907-01257] and in Muroya’s doctoral
dissertation [koko].
## 1 Hierarchical abstract syntax graphs
Before proceeding with the business of analysing and transforming the source
code of a program, a compiler first parses the input text into a sequence of
atoms, the lexemes, and then assembles them into a tree, the Abstract Syntax
Tree (AST), which corresponds to its grammatical structure. The reason for
preferring the AST to raw text or a sequence of lexemes is quite obvious. The
structure of the AST incorporates most of the information needed for the
following stage of compilation, in particular identifying operations as nodes
in the tree and operands as their branches. This makes the AST algorithmically
well suited for its purpose. Conversely, the AST excludes irrelevant lexemes,
such as separators (white-space, commas, semicolons) and aggregators
(brackets, braces), by making them implicit in the tree-like structure. It is
always possible to recover the textual input, or rather an equivalent version
of it, from the AST via a process known as pretty-printing.
A fair question to ask is whether the AST can be improved upon as a
representation of program text, which captures grammatical structure while
discarding needless detail. In pretty-printing we know how irrelevant lexemes
can be manipulated to achieve a certain aesthetic effect. Redundant brackets
can be elided to reduce clutter, white-space can be added or removed to
improve alignment, and so on. Such details are accepted as irrelevant.
There is another, deeper level of detail in the text, which is irrelevant but
not always appreciated as such: variable names. Whereas we know, formally,
that bound variables can be systematically renamed ($\alpha$-equivalence) the
conventional AST will still remember their names, even though variable names
induce bureaucratic complications having to do with scope and shadowing.
Finally, there is yet another even deeper level of irrelevant detail in the
program text, the order in which variables are defined, absent define-use
dependencies between the definitions.
Consider the program text in Fig. LABEL:fig:cwb, in which def is a binder
associating a variable with a definition as text, akin to a macro, but
respecting variable scoping rules. Variable x in line 3 could be renamed, on
lines 3-5, to something else to avoid shadowing the variable with the same
name on line 1. Lines 1-2 and lines 3-4 can be swapped without changing the
result. But these facts are not immediate from examining its AST in Fig
LABEL:fig:ast (left).
1 def x = 0
2 def y = x + 1
3 def x = 2
4 def z = x + 3
5 y + z

(Figure LABEL:fig:cwb: Bindings)

(Figure LABEL:fig:ast: AST vs ASG for variable definition)
Unlike ASTs, abstract syntax graphs (ASGs) do not treat binders and variables
as nodes. Variables are instead represented as links, and binders assign to
the variable links target nodes corresponding to their definitions. The ASG
of the code in Fig. LABEL:fig:cwb is shown next to its AST in
Fig. LABEL:fig:ast. To better understand the relation between the AST and the
ASG, corresponding nodes are coloured and the links are labelled with
variable names. The colours and the labels are not part of the definition of
the graph structure; they are used only to aid understanding. The nodes
corresponding to variable uses and definitions, left blank, are not part of
the ASG. It is thus immediately obvious that the ASG is, by construction,
quotiented both by $\alpha$-equivalence and by the order of non-interfering
variable bindings. This more general structural equivalence of lambda
calculus terms has been dubbed “graph equivalence” by Accattoli et al.
[DBLP:conf/popl/AccattoliBKL14].
def x = 2
def z = x + 3
z + z

(Figure LABEL:fig:con: Contraction)

ASGs are also helpful when variables are reused, as in the Fig. LABEL:fig:con
example. The AST and the ASG are shown side-by-side in Fig. LABEL:fig:asg,
noting that the links corresponding to the variable z, used twice, now both
point to its definition. This is why ASGs are no longer trees, but directed
acyclic graphs (DAGs). Formally, ASGs are hypergraphs, with the links a
graphical representation of vertices and the nodes a graphical representation
of hyperedges.

(Figure LABEL:fig:asg: AST vs ASG for contraction)
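To make the hypergraph reading concrete, here is a minimal sketch (mine, not from the paper; the node and link names are invented) of the Fig. LABEL:fig:con program stored as a hypergraph, where sharing shows up as one link used in two operand positions:

```python
# An ASG as a hypergraph: nodes (hyperedges) carry an operation label and an
# ordered list of operand links (vertices); each link points at the node that
# defines it.  A link used by more than one operand position is shared.
asg = {
    "n0": ("2", []),            # def x = 2   (a literal node)
    "n1": ("3", []),
    "n2": ("+", ["x", "l3"]),   # def z = x + 3
    "n3": ("+", ["z", "z"]),    # z + z : the link "z" is shared
}
links = {"x": "n0", "l3": "n1", "z": "n2"}   # link -> defining node

# "z" is used twice but defined once, so the structure is a DAG, not a tree.
uses = sum(ops.count("z") for (_, ops) in asg.values())
assert uses == 2 and links["z"] == "n2"
```

The dictionary-of-tuples encoding is only one of many possible concrete representations; the point is that variables survive only as shared links, never as nodes.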
To represent local variable binding, as encountered in functions, as opposed
to the simple variable definition discussed above, we note that local
variable binding is always associated with thunks, i.e. code with delayed
execution. This is related to the fact that conventional programming
languages use normal-order reduction. In this evaluation strategy functions
are considered values, i.e. there is no evaluation ‘under the lambda’. In
other words, functions are thunks and locally-bound variables will always
induce a thunk. Because thunks can be considered, operationally, as a single
entity, it is convenient to represent them in their ASG form as a single
node, labelled by the definition of the thunk, which is also an ASG. In other
words, to model local variable binding in functions it is convenient to use
graphs labelled by graphs, which are, formally, hierarchical hypergraphs.
To model variable behaviour in thunks correctly, our ASGs need to be
interfaced, i.e. there needs to be a defined order on incoming links. If a
thunk has $m$ bound variables and $n$ free variables then the first $m$
incoming links of the ASG used as its label represent the bound variables, in
the order in which they are bound. The last $n$ incoming links represent the
free variables, in some specified order. The node corresponding to the thunk
will also have $n$ incoming links, representing the definitions of its $n$
free variables, in an order consistent with the order used by the label ASG.
To make the correspondence more perspicuous we connect the links
corresponding to the free variables from the labelling ASG to those of the
node, as it causes no ambiguity. Fig. LABEL:fig:hasg shows several examples
of hierarchical ASGs and the corresponding terms, with function application
labelled as $@$. Note that thunks associated with lambda expressions are
still explicitly linked to a lambda-labelled node. This is because in a
programming language thunks can be used for expressions other than function
definitions, as we shall see.

(Figure LABEL:fig:hasg: Hierarchical ASGs)
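A possible data-structure rendering of these interfacing conventions (a sketch under my own assumptions; the class and field names are invented, not taken from the paper):

```python
# A hierarchical ASG: a thunk node is labelled by another ASG.  Following the
# convention in the text, the label's first m incoming links are the bound
# variables (in binding order) and the remaining n links the free variables;
# the thunk node itself has n matching inputs for the free-variable definitions.
class Thunk:
    def __init__(self, label_asg, bound, free):
        self.label = label_asg    # the ASG defining the thunk's body
        self.bound = list(bound)  # first m interface links
        self.free = list(free)    # last n interface links

    def interface(self):
        # the ordered interface: bound variables first, then free variables
        return self.bound + self.free

# lambda y. y + x : one bound variable y, one free variable x
body = {"n0": ("+", ["y", "x"])}
t = Thunk(body, bound=["y"], free=["x"])
assert t.interface() == ["y", "x"]
```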
## 2 Operational semantics

The most widely used method for specifying programming languages is via
operational semantics (OS). There are several versions of OS. We will focus
on so-called structural operational semantics (SOS) in the style of Plotkin
[DBLP:journals/jlp/Plotkin04a], in which a transition relation is defined on
configurations consisting of a term $t$ and some additional information (e.g.
a program store $s$, giving configurations $s,t$), so that the definition of
the relation is inductive on the structure of the term.

$s,1+2\to s,3$
$\frac{s,e_1\to s',e_1'}{s,e_1+e_2\to s',e_1'+e_2}$

(Figure LABEL:fig:ltr: SOS (example rules))

Typically the transition relation is written as $s,t\to s',t'$. There are two
kinds of rules: basic reductions, which perform operations (e.g. the first
rule in Fig. LABEL:fig:ltr), and simplification steps, which seek redexes
structurally in the program text according to the evaluation strategy (the
second rule in Fig. LABEL:fig:ltr). The latter are usually written in
natural-deduction style. For example, the rule specifying that $+$ is
evaluated left-to-right is the second rule in Fig. LABEL:fig:ltr. Note how
the first operand $e_1$ is evaluated to $e_1'$ and, in the process, the store
$s$ may change to $s'$.
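The two rule formats of Fig. LABEL:fig:ltr can be sketched as a one-step evaluator (my own minimal sketch, restricted to integer literals and left-to-right addition; the store is threaded through unchanged):

```python
# A one-step SOS evaluator for additions: a basic reduction sums two literal
# operands, while the simplification rules recurse into the leftmost
# non-value operand first, mirroring left-to-right evaluation.
def step(s, e):
    """One transition s, e -> s', e'.  Expressions are ints or ('+', e1, e2)."""
    _, e1, e2 = e
    if isinstance(e1, int) and isinstance(e2, int):
        return s, e1 + e2              # basic reduction: s, 1+2 -> s, 3
    if not isinstance(e1, int):        # simplification: evaluate e1 first
        s2, e1p = step(s, e1)
        return s2, ("+", e1p, e2)
    s2, e2p = step(s, e2)              # then the right operand e2
    return s2, ("+", e1, e2p)

s, e = {}, ("+", ("+", 1, 2), 4)
while not isinstance(e, int):
    s, e = step(s, e)
assert e == 7
```

Iterating `step` until the expression is a value plays the role of the reflexive-transitive closure of the transition relation.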
SOS can be naturally formulated on ASGs rather than on terms. Basic
reductions correspond to graph rewrites, and simplification steps to a graph
traversal algorithm which seeks the redexes. The basic reduction in
Fig. LABEL:fig:ltr is shown as a graph rewrite in Fig. LABEL:fig:ltrg, along
with the rule for $\beta$-reduction. The former is quite obvious, but the
latter is more interesting. It consists of the deletion of the
abstraction-application pair, the ‘unboxing’ of the thunk by extracting the
label of the thunk node and using it in the top-level graph, the re-wiring of
the bound variable, now open, to the argument, and using the root node of $F$
as the overall root node. For the $\beta$ rule the graphs involved in the
rewrite must also be interfaced, with the interface nodes highlighted in
grey. Also note that the rule is actually the small $\beta$ rule used in
calculi of explicit substitution, which reduces $(\lambda x.F)M$ to
$\mathtt{def}\ x=M\ \mathtt{in}\ F$ [DBLP:journals/jfp/AbadiCCL91].

(Figure LABEL:fig:ltrg: ASG rewrite as basic reductions)
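On terms, the small $\beta$ rule amounts to the following one-step rewrite (a sketch of mine using a tuple encoding of terms; the constructor names are invented):

```python
# The 'small' beta rule as a term rewrite: (lam x. F) M  ->  def x = M in F.
# Substitution is deferred: the argument is merely bound, which matches the
# re-wiring of the bound-variable link in the graph version of the rule.
def small_beta(term):
    app, fun, arg = term                      # term = ("app", ("lam", x, F), M)
    assert app == "app" and fun[0] == "lam"
    _, x, body = fun
    return ("def", x, arg, body)              # def x = arg in body

t = ("app", ("lam", "x", ("+", "x", 1)), 2)
assert small_beta(t) == ("def", "x", 2, ("+", "x", 1))
```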
One aspect of the ASG-based evaluation which needs to be clearly explicated
is sharing. The DAG structure of the ASG can be refined by introducing
special sharing nodes, which, unlike operation nodes, are allowed to have
multiple incoming links. Sharing nodes have a special behaviour during
evaluation, managing the process of systematic copying of sub-graphs.

To evaluate an ASG in a way that is consistent with left-to-right
call-by-value, the traversal is depth-first and left-to-right, without
reaching inside thunks, starting from the unique root. The current link in
the traversal is called the focus, and it can move up (i.e. away from the
root) or down (i.e. towards the root). When the focus is moving up and it
encounters a copy node it will copy the node shared by the copy node,
inserting further copy nodes on its outgoing links. As the focus is moving
down, whenever it passes a node which has an associated rewrite rule it will
exercise it, then change direction and move up again (refocussing).

(Figure LABEL:fig:exrew: Evaluation of $(\lambda f x.f(fx))(\lambda x.x+1, 2)$.)
In Fig. LABEL:fig:exrew we show the key steps in the evaluation of the
expression $(\lambda f x.f(fx))(\lambda x.x+1, 2)$. We use the labels
$\lambda2$ and $@2$ for the definition and use of a function with two
arguments. Step (1) is reached after the focus moves along the path
$\mathit{abcbdefa}$, at which point the rewrite is performed, unboxing the
thunk and attaching the arguments to the nodes made available. Step (2) is
simply rearranging the ASG in a more readable format. Step (3) is the copying
of the node corresponding to the function $\lambda x.x+1$ after the focus
traverses path $ab$. Step (4) is the $\beta$ rewrite applied after the focus
traverses path $abcdb$. Step (5) is an arithmetic reduction, followed by
another $\beta$ rewrite and a final arithmetic reduction.
The examples in this section (abstraction, application, arithmetic) deal with
what is usually deemed pure functional programming, in which case the
configuration used by the SOS is the term itself. Expanding the SOS of a
language to incorporate effects usually requires revising the format of the
configuration of the SOS, which in turn requires reformulating the rules for
the preexisting operations. This is a major fragility of the SOS approach,
since the revision of the format invalidates any technical results and
requires laborious re-proving [DBLP:journals/entcs/GhicaT12]. ASGs can be
enhanced with a single new node which allows the formulation of most known
effects, namely an atom node, in the sense of [pitts_2013]. The ASG OS for a
pure language then only differs from the ASG OS of an impure language in that
the atom nodes are not involved. The atom node, just like a sharing node,
allows multiple incoming links. However, during evaluation, the atom node
does not trigger a copying of the node at the end of its outgoing link, but
is instead treated as an endpoint by the ASG traversal strategy. Indeed, just
as computations are not performed inside of thunks, they are also not
performed inside of the store. This insight, that the essence of effectful
computation is the presence of atoms in the OS, is originally due to Pitts,
but it turns out to be most effective in ASG-based OS
[DBLP:conf/lics/Pitts96].

Fig. LABEL:fig:asgr shows the basic rule for assignment, with the atom
indicated as an unlabeled white node. The atom is made to point to the second
operand of the assignment operator, while the assignment operator itself
reduces to the dummy value inhabiting the unit type. In the process, whatever
the atom was attached to before may become inaccessible from the root of the
ASG, therefore garbage. Also note that the effect of the assignment is
manifest only because other parts of the ASG may point to the atom, a link
which is persistent due to the value-like behaviour of the atom.

(Figure LABEL:fig:asgr: Assignment in ASG OS)
The SOS of a programming language can be further refined (distilled) into an
abstract machine, which gives a more explicit representation of the
simplification rules via manipulation of context
[DBLP:journals/iandc/WrightF94]. From this point of view, the ASG
representation of the SOS is already an abstract machine, in the sense that
it can give a cost-accurate model of execution of the language.

Another appealing feature of abstract machines is that they can model
control-transfer operations more conveniently than SOS. It is not impossible
to use SOS for this, but the format of the transition system needs to be
significantly revised, making the transitions themselves labelled
[DBLP:journals/corr/SculthorpeTM16].

(Figure LABEL:fig:control: Control in ASG OS)
Since the ASG OS is formulated via arbitrary rewrites, control can be dealt
with in a straightforward way. Fig. LABEL:fig:control shows a labelled
version of C-style break/continue statements. The operations involved are
loop body definition ($l$), sequential composition (;), break ($b$), and
continue ($c$). The atom used as the first operand of $l$ becomes bound to
the label which is the bound argument of $M$, used to anchor a point in the
ASG so that the control operations of break or continue can determine where
to jump to. If $M$ terminates normally then the whole cycle repeats. Unlike
conventional C break and continue, these variants are labelled, and the
labels are first-class citizens, i.e. they can be passed as arguments to or
returned from functions.

An interactive evaluator for a variety of programming language features can
be found online at https://tnttodda.github.io/Spartan-Visualiser/.
## 3 Reasoning

SOS was originally considered too ‘low level’ to reason about equivalence in
programming languages, at least in contrast with denotational semantics.
However, SOS was considered more ‘high level’ than alternative operational
specifications of programming languages such as abstract machines. In time, a
large variety of powerful techniques for reasoning with SOS-like
specifications proved that this is indeed a useful formalism for reasoning
about equivalence [DBLP:conf/ac/Pitts00], whereas abstract machines remained
useful due to their ability to model the cost of evaluation and as a gateway
to compiler development [leroy:inria-00070049]. The ASG OS seems to combine
felicitously some of the best features of SOS and abstract machines
including, as we shall see, the ability to reason about observational
equivalence.

In fact, the graph formulation of the OS makes it possible not just to reason
about equivalence, but to formulate a powerful characterisation theorem which
establishes equivalence by using some simpler combinatorial criteria. We must
first ‘tame’ the OS by restricting ourselves to sets of rules which are
deterministic and refocussing. The first concept is the standard one. The
second, initially formulated by Danvy et al., means that following a basic
reduction the focus could be kept either at the point where the rewrite
occurs, or moved to the root of the graph, with equal effect
[DBLP:journals/tcs/DanvyMMZ12]. Indeed, all the rules we have presented in
this tutorial are refocussing.

Equivalences are also formulated graphically, as families of relations on
templates, i.e. sets of graphs with the same interface. For a fixed abstract
machine a template is said to be input-safe if evaluation from any input link
preserves the relation. Note that, unlike with a SOS, we can talk about the
evaluation of an ASG which is not a program, in fact not even a term, since
evaluation is just a byword for traversal and reduction. A template is said
to be output-closed if in the course of evaluation no output link will ever
be reached. Finally, a template is said to be robust if it is preserved by
all rewrite rules of the language. The main theorem can be simply stated as:

Theorem (Characterisation [DBLP:journals/corr/abs-1907-01257, Sec. 6]).
Robust templates induce observational equivalence.

The conditions used to establish equivalence via the Characterisation Theorem
are all elementary and proved by case analysis. Moreover, the theorem allows
for robust proofs of equivalence, in the sense that they can withstand
language extensions. For example, the proof of the $\beta$ law for a pure
language can be extended to a language with imperative store just by showing
that the templates used in formulating the law are robust relative to the new
rules for variable creation, dereferencing, and assignment
(Fig. LABEL:fig:asg), which happens to be the case. By contrast, conventional
proofs of equivalence are fragile, and are invalidated by even mild language
extensions.
## 4 Related work

This is an elementary tutorial introduction and extended motivation for the
hypernet semantics of programming languages
[DBLP:journals/corr/abs-1907-01257], which is a streamlined and generalised
version of the Dynamic Geometry of Interaction (GoI) Machine
[DBLP:journals/lmcs/MuroyaG19]. They are the outcome of an effort initially
motivated by the understanding of call-by-value and effectful computation
from a GoI perspective [DBLP:conf/csl/HoshinoMH14,
DBLP:conf/popl/MuroyaHH16].

Graph-based intermediate representations (IR) are established in compiler
construction [DBLP:conf/irep/ClickP95] and in the formulation of abstract
machines for functional languages [DBLP:conf/fpca/JonesS89]. However, the
origin of the approach described here lies elsewhere, in proof nets, a
graphical representation of proofs in Linear Logic
[DBLP:journals/tcs/Girard87], and especially in their generalisation as
interaction nets [DBLP:conf/popl/Lafont90]. Interaction nets already exhibit
the hierarchical structure we employ here, which is used to model binding and
higher-order structures. Hierarchical graphs are also used elsewhere in
semantics, for example as diagram languages of processes known as bigraphs
[DBLP:journals/entcs/Milner08].

The connection between linear logic and its varieties and certain monoidal
categories kindled significant progress in diagrammatic languages
[Selinger2011]. For instance, traced monoidal categories, used as models of
lambda calculus with cyclic sharing [DBLP:conf/tlca/Hasegawa97], led to the
development of a hierarchical graph syntax for closures
[DBLP:journals/entcs/SchweimeierJ99] remarkably similar to the one described
here. In terms of the treatment of graphs as combinatorial objects, much of
the literature considered them rather informally, and a formalisation of
proof nets as hypergraphs was given much later
[DBLP:journals/tcs/GuerriniMM01].

More recently, work by Accattoli has examined the interesting interplay
between term-based and graph-based formulations of the call-by-value lambda
calculus [DBLP:journals/tcs/Accattoli15], even though his motivations are
somewhat different from ours, as illustrated by this quotation:

> It is far from easy to realize an isomorphism between terms and nets, as it
> is necessary to take care of many delicate details about weakenings,
> contractions, representation of variables, administrative reduction steps,
> and so on. [$\ldots$] More generally, such a strong relationship turns the
> calculus into an algebraic language for proof nets, providing a handy tool
> to reason by structural induction over proof nets.

In fact, a properly formalised graphical syntax can be just as powerful and
just as rigorous as an algebraic language. Moreover, the graphical language
can be both simpler and better specified than the term language, for example
in the case of the calculus of explicit substitutions, which lacks a proper
formulation of $\alpha$-equivalence [DBLP:conf/ictac/Accattoli18].

To conclude, we see the ASG operational semantics as a first step in an
exciting and potentially fruitful direction. Graphical languages are starting
to emerge as a new and genuine formalism which can give alternative, and
sometimes improved, representations to theories in fields as different as
quantum computation [coecke_kissinger_2017], linear and affine algebra
[DBLP:conf/lics/BonchiPSZ19], digital circuits [DBLP:conf/csl/GhicaJL17],
signal flow [DBLP:conf/popl/BonchiSZ15] and more. The motivations for this
emergence are mixed, from the raw intuitive appeal of visual representations
to improved algorithmic properties. Examining how this methodology can be
extended to programming languages is an intriguing next step which brings
together a number of existing ideas and concepts and can bridge existing gaps
between semantics of programming languages and compiler-related techniques.
# An embedding of the skein action on set partitions into the skein action on
matchings
Jesse Kim, Department of Mathematics, University of California, San Diego,
La Jolla, CA 92093-0112, USA
###### Abstract.
Rhoades defined a skein action of the symmetric group on noncrossing set
partitions which generalized an action of the symmetric group on matchings.
The ${\mathfrak{S}}_{n}$-action on matchings is made possible via the Ptolemy
relation, while the action on set partitions is defined in terms of a set of
skein relations that generalize the Ptolemy relation. The skein action on
noncrossing set partitions has seen applications to coinvariant theory and
coordinate rings of partial flag varieties. In this paper, we will show how
Rhoades’ ${\mathfrak{S}}_{n}$-module can be embedded into the
${\mathfrak{S}}_{n}$-module generated by matchings, thereby explaining how
Rhoades’ generalized skein relations all arise from the Ptolemy relation.
## 1. Introduction
This paper concerns an action of ${\mathfrak{S}}_{n}$ on the vector space
spanned by the set of noncrossing set partitions of $[n]:=\{1,2,\dots,n\}$,
first defined by Rhoades [7]. This action is defined in terms of three skein
relations, the simplest of which is the Ptolemy relation shown below.
(Figure: the Ptolemy relation, mapping a crossing to the sum of its two
noncrossing resolutions.)
The original motivation for defining this action was to give an algebraic
proof of certain cyclic sieving results on noncrossing set partitions, first
proven by Reiner, Stanton, and White [6] and Pechenik [4] via direct
enumeration. Rhoades’ action has since been found within coinvariant rings and
coordinate rings of certain partial flag varieties[1, 3], strengthening the
claim that it is an action worth studying in its own right. Rhoades’ action
generalizes a similar action on the vector space with basis given by
noncrossing matchings. This paper will show how Rhoades’ action can be found
within the action on noncrossing matchings and thereby explain how the three
skein relations all arise from the Ptolemy relation.
More precisely, let $M(n)$ denote the set of all matchings of $[n]$, i.e.
collections of disjoint size-two subsets of $[n]$. The symmetric group
${\mathfrak{S}}_{n}$ acts naturally on $M(n)$ as follows. If
$\sigma\in{\mathfrak{S}}_{n}$ and
$m=\{\{a_{1},b_{1}\},\dots,\{a_{k},b_{k}\}\}$ is a matching, then
(1.1) $\sigma\star
m=\{\{\sigma(a_{1}),\sigma(b_{1})\},\dots,\{\sigma(a_{k}),\sigma(b_{k})\}\}.$
We can extend this action to an action on $\mathbb{C}[M(n)]$, the
$\mathbb{C}$-vector space with basis given by matchings of $[n]$. For our
purposes, it will be more useful to consider a sign-twisted version of this
action, where
(1.2) $\sigma\circ
m=\mathrm{sign}(\sigma)\,\{\{\sigma(a_{1}),\sigma(b_{1})\},\dots,\{\sigma(a_{k}),\sigma(b_{k})\}\}.$
A matching is noncrossing if it does not contain two subsets $\\{a,c\\}$ and
$\\{b,d\\}$ with $a<b<c<d$. The action on matchings does not descend to an
action on $NCM(n)$, the set of all noncrossing matchings of $[n]$, since
permuting elements in a noncrossing matching could introduce crossings.
However, we can linearize and define an action on $\mathbb{C}[NCM(n)]$, the
$\mathbb{C}$-vector space with basis given by noncrossing matchings of $[n]$.
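For readers who wish to experiment, matchings, the noncrossing condition, and the natural action (1.1) can be sketched in a few lines of code (an illustration of ours, not part of the paper; the sign-twisted action (1.2) simply carries an extra coefficient $\mathrm{sign}(\sigma)$):

```python
from itertools import combinations

def is_noncrossing(m):
    """A matching is noncrossing iff no two of its pairs interleave."""
    for p, q in combinations(m, 2):
        a, c = sorted(p)
        b, d = sorted(q)
        if b < a:
            a, c, b, d = b, d, a, c
        if a < b < c < d:  # pairs {a,c} and {b,d} cross
            return False
    return True

def star(sigma, m):
    """The natural action (1.1): relabel every pair by the permutation sigma,
    given as a dict i -> sigma(i)."""
    return frozenset(frozenset(sigma[x] for x in pair) for pair in m)

m = frozenset({frozenset({1, 3}), frozenset({2, 4})})
print(is_noncrossing(m))            # False: {1,3} and {2,4} cross
s2 = {1: 1, 2: 3, 3: 2, 4: 4}       # the adjacent transposition s_2
print(is_noncrossing(star(s2, m)))  # True: s_2 * m = {1,2}/{3,4}
```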
For any noncrossing matching $m$ and adjacent transposition $s_{i}=(i,i+1)$,
define
(1.3) $s_{i}\cdot m=\begin{cases}s_{i}\circ m&s_{i}\circ m\textrm{ is
noncrossing}\\\ m+m^{\prime}&\textrm{otherwise}\end{cases}.$
Here $\circ$ denotes the action on all matchings, and $m^{\prime}$ is the
matching in which the subsets of $m$ containing $i$ and $i+1$, call them
$\\{i,a\\}$ and $\\{i+1,b\\}$, have been replaced with $\\{i,i+1\\}$ and
$\\{a,b\\}$, while all other subsets remain the same. In other words, $s_{i}\circ
m$, $m$, and $m^{\prime}$ form a trio of matchings that differ only by a
Ptolemy relation. There exists an ${\mathfrak{S}}_{n}$-equivariant linear
projection $p_{M}:\mathbb{C}[M(n)]\rightarrow\mathbb{C}[NCM(n)]$ given for any
matching $m$ by
(1.4) $m\mapsto w^{-1}\cdot(w\circ m),$
where $w$ is any permutation for which $w\circ m$ is noncrossing. The kernel
of this projection is spanned by sums of matchings which differ only by a
Ptolemy relation, i.e.
(1.5)
$\\{\\{a_{1},a_{2}\\},\\{a_{3},a_{4}\\},\\{a_{5},a_{6}\\},\dots,\\{a_{2k-1},a_{2k}\\}\\}\\\
+\\{\\{a_{1},a_{3}\\},\\{a_{2},a_{4}\\},\\{a_{5},a_{6}\\},\dots,\\{a_{2k-1},a_{2k}\\}\\}\\\
+\\{\\{a_{1},a_{4}\\},\\{a_{2},a_{3}\\},\\{a_{5},a_{6}\\},\dots,\\{a_{2k-1},a_{2k}\\}\\}$
for any distinct $a_{1},\dots,a_{2k}\in[n]$. This projection can be thought of
as a way to “resolve” crossings in a matching and obtain a sum of noncrossing
matchings.
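Based on the kernel description (1.5), a crossing pair $\\{a,c\\},\\{b,d\\}$ (with $a<b<c<d$) equals minus the sum of its two noncrossing resolutions modulo the kernel. The following sketch (our own illustration, with invented function names) computes $p_{M}$ by repeatedly rewriting the first crossing found, which terminates on the small examples used here; a vector in $\mathbb{C}[M(n)]$ is represented as a dictionary from matchings to coefficients:

```python
from itertools import combinations
from collections import defaultdict

def find_crossing(m):
    """Return (a, b, c, d) with a < b < c < d and {a,c}, {b,d} in m, or None."""
    for p, q in combinations(m, 2):
        a, c = sorted(p)
        b, d = sorted(q)
        if b < a:
            a, c, b, d = b, d, a, c
        if a < b < c < d:
            return a, b, c, d
    return None

def resolve(m, coeff=1, out=None):
    """Expand the matching m in the noncrossing basis: a crossing {a,c},{b,d}
    is rewritten as -{a,b}{c,d} - {a,d}{b,c}, per the Ptolemy relation (1.5)."""
    if out is None:
        out = defaultdict(int)
    cr = find_crossing(m)
    if cr is None:
        out[m] += coeff
        return out
    a, b, c, d = cr
    rest = m - {frozenset({a, c}), frozenset({b, d})}
    resolve(rest | {frozenset({a, b}), frozenset({c, d})}, -coeff, out)
    resolve(rest | {frozenset({a, d}), frozenset({b, c})}, -coeff, out)
    return out

for mm, cf in resolve(frozenset({frozenset({1, 3}), frozenset({2, 4})})).items():
    print(cf, sorted(sorted(p) for p in mm))
# p_M sends the crossing {1,3}/{2,4} to -{1,2}/{3,4} - {1,4}/{2,3}
```

For a matching that is already noncrossing, `resolve` returns it with coefficient $1$, matching the fact that $p_{M}$ fixes $\mathbb{C}[NCM(n)]$.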
Analogously, let $\Pi(n)$ denote the set of all set partitions of $[n]$, and
let $\mathbb{C}[\Pi(n)]$ be the $\mathbb{C}$-vector space with basis given by
$\Pi(n)$. We can define an action of ${\mathfrak{S}}_{n}$ on
$\mathbb{C}[\Pi(n)]$ analogous to the action on $\mathbb{C}[M(n)]$. A set
partition is noncrossing if there do not exist distinct blocks $A$ and $B$ and
elements $a,c\in A$, $b,d\in B$ with $a<b<c<d$. Let $NCP(n)$ denote the set of
all noncrossing set partitions, and let $\mathbb{C}[NCP(n)]$ be the
corresponding vector space. Rhoades [7] defined an action of
${\mathfrak{S}}_{n}$ on $\mathbb{C}[NCP(n)]$ as follows. For any noncrossing
set partition $\pi$ and adjacent transposition $s_{i}$,
$s_{i}\cdot\pi=\begin{cases}-\pi&i\textrm{ and }i+1\textrm{ are in the same
block of }\pi\\\ -\pi^{\prime}&\textrm{at least one of }i\textrm{ and
}i+1\textrm{ is in a singleton block of }\pi\\\ \sigma(\pi^{\prime})&i\textrm{
and }i+1\textrm{ are in different size 2 or larger blocks of }\pi\end{cases}$
where $\pi^{\prime}$ is the set partition obtained by swapping which blocks
$i$ and $i+1$ are in, and $\sigma$ is defined for any almost-noncrossing (i.e.
the crossing can be removed by a single adjacent transposition) partition
$\pi$ by $\sigma(\pi)=\pi+\pi_{2}-\pi_{3}-\pi_{4}$ where, if the crossing
blocks in $\pi$ are $\\{i,a_{1},\dots,a_{k}\\}$ and
$\\{i+1,b_{1},\dots,b_{l}\\}$, then $\pi_{2},\pi_{3}$, and $\pi_{4}$ are
obtained from $\pi$ by replacing these blocks with
* •
$\\{i,i+1\\}$ and $\\{a_{1},\dots,a_{k},b_{1},\dots,b_{l}\\}$ for $\pi_{2}$
* •
$\\{i,i+1,a_{1},\dots,a_{k}\\}$ and $\\{b_{1},\dots,b_{l}\\}$ for $\pi_{3}$
* •
$\\{i,i+1,b_{1},\dots,b_{l}\\}$ and $\\{a_{1},\dots,a_{k}\\}$ for $\pi_{4}$
when $k,l\geq 2$. If $k=1$ then $\pi_{4}=0$ instead and if $l=1$ then
$\pi_{3}=0$ instead. The sum of partitions given by $\sigma(\pi)$ is best
described with a picture, see Figure 1. The three possibilities (depending on
whether $k,l\geq 2$) are the three skein relations mentioned earlier.
Figure 1. The three skein relations defining the action of
${\mathfrak{S}}_{n}$ on ${\mathbb{C}}[NCP(n)]$. The red vertices are adjacent
indices $i,i+1$ and the shaded blocks have at least three elements. The
symmetric 3-term relation obtained by reflecting the middle relation across
the $y$-axis is not shown.
A more detailed description of this action can be found in [7]. Rhoades showed
that we again have an ${\mathfrak{S}}_{n}$-equivariant linear projection
$p_{\Pi}:\mathbb{C}[\Pi(n)]\rightarrow\mathbb{C}[NCP(n)]$ given for any set
partition $\pi$ by
(1.6) $\pi\mapsto w^{-1}\cdot(w\circ\pi),$
where $w$ is any permutation for which $w\circ\pi$ (here $\circ$ denotes the
action of ${\mathfrak{S}}_{n}$ on all set partitions) is noncrossing. The
kernel of this projection is spanned by the set of all elements of the form
$w\circ(s_{i}\circ\pi+\sigma(\pi))$ (the skein relations) for any permutation
$w$ and almost noncrossing set partition $\pi$.
Rhoades was able to determine the ${\mathfrak{S}}_{n}$-irreducible structure
of this module. In particular, the span of all singleton-free noncrossing set
partitions with exactly $k$ blocks is an ${\mathfrak{S}}_{n}$-irreducible of
shape $(k,k,1^{n-2k})$, and the span of all noncrossing set partitions with
exactly $s$ singletons and exactly $k$ non-singleton blocks is isomorphic to
an induction product of $S^{(k,k,1^{n-2k-s})}$ with the sign representation of
${\mathfrak{S}}_{s}$. Similarly, if we restrict the action on
$\mathbb{C}[NCM(n)]$ to the span of noncrossing perfect matchings, i.e.
noncrossing matchings of $[2n]$ with exactly $n$ pairs, then this action gives
a model for the ${\mathfrak{S}}_{n}$-irreducible of shape $(n,n)$ called the
$SL_{2}$-web basis. If we instead restrict to the span of noncrossing
matchings with exactly $k$ pairs, we get a submodule isomorphic to the
induction product of $S^{(k,k)}$ and a sign representation of
${\mathfrak{S}}_{n-2k}$. By the dual Pieri rule, this induction product is a direct
sum of three irreducible submodules, exactly one of which is isomorphic to
$S^{(k,k,1^{n-2k})}$, so there exists a unique (up to scalar) embedding of
$\mathbb{C}[NCP(n)_{0}]$, the span of all singleton-free noncrossing set
partitions in $\mathbb{C}[NCP(n)]$, into $\mathbb{C}[NCM(n)]$.
The main result of this paper explicitly describes the embedding as follows:
###### Theorem 1.1.
The linear map $f_{n}:\mathbb{C}[NCP(n)_{0}]\rightarrow\mathbb{C}[NCM(n)]$
defined by
$f_{n}(\pi)=\sum_{m\in M_{\pi}(n)}m$
is an ${\mathfrak{S}}_{n}$-equivariant embedding of vector spaces. Here
$M_{\pi}(n)$ is defined to be the set of all matchings $m$ in $M(n)$ for which
each block of $\pi$ contains exactly one pair in $m$.
For an example of this map, let $\pi=\\{\\{1,2,3\\},\\{4,5\\}\\}$; then
$f_{n}(\pi)=\\{\\{1,2\\},\\{4,5\\}\\}+\\{\\{1,3\\},\\{4,5\\}\\}+\\{\\{2,3\\},\\{4,5\\}\\}$
is a sum of 3 matchings in $\mathbb{C}[NCM(n)]$.
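The map in Theorem 1.1 is easy to compute directly: choose one pair from each block in all possible ways. A small sketch (our own illustration, not part of the paper):

```python
from itertools import combinations, product

def f(pi):
    """Return the list of matchings M_pi(n): all ways to choose exactly one
    pair from each block of the singleton-free set partition pi."""
    choices = [list(combinations(sorted(block), 2)) for block in pi]
    return [frozenset(frozenset(p) for p in m) for m in product(*choices)]

pi = [{1, 2, 3}, {4, 5}]
for m in f(pi):
    print(sorted(sorted(p) for p in m))
# three matchings: {1,2}/{4,5}, {1,3}/{4,5}, {2,3}/{4,5}
```

In general $f_{n}(\pi)$ has $\prod_{B\in\pi}\binom{|B|}{2}$ terms, one for each way of selecting a pair from every block.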
The rest of the paper is organized as follows. Section 2 will provide
necessary background information. Section 3 will prove our main result, the
embedding from $\mathbb{C}[NCP(n)_{0}]$ to $\mathbb{C}[NCM(n)]$. Section 4 will
determine the image of this embedding within $\mathbb{C}[NCM(n)]$.
## 2\. Background
### 2.1. ${\mathfrak{S}}_{n}$-representation theory
For $n\in\mathbb{Z}_{\geq 0}$, a partition of $n$ is a weakly decreasing
sequence $\lambda=(\lambda_{1},\lambda_{2},\dots,\lambda_{k})$ of positive
integers such that $\lambda_{1}+\cdots+\lambda_{k}=n$. Partitions of $n$ can
be represented by Young diagrams: arrangements of square boxes into $k$
left-justified rows, with the $i^{th}$ row containing $\lambda_{i}$ boxes.
Irreducible representations of the symmetric group ${\mathfrak{S}}_{n}$ are
naturally indexed by partitions of $n$. Let $S^{\lambda}$ denote the
${\mathfrak{S}}_{n}$-irreducible corresponding to partition $\lambda$. Given
two representations $V$ and $W$ of ${\mathfrak{S}}_{m_{1}}$ and
${\mathfrak{S}}_{m_{2}}$ respectively, with $m_{1}+m_{2}=n$, the induction
product $V\circ W$ is given by
$V\circ
W=\mathrm{Ind}_{{\mathfrak{S}}_{m_{1}}\times{\mathfrak{S}}_{m_{2}}}^{{\mathfrak{S}}_{n}}V\otimes
W$
where ${\mathfrak{S}}_{m_{1}}\times{\mathfrak{S}}_{m_{2}}$ is identified with
the parabolic subgroup of ${\mathfrak{S}}_{n}$ which permutes
$\\{1,\dots,m_{1}\\}$ and $\\{m_{1}+1,\dots,n\\}$ separately. When $V$ is an
irreducible representation $S^{\mu}$ for some partition $\mu$ of $m_{1}$ and
$W$ is a trivial representation of ${\mathfrak{S}}_{m_{2}}$, the Pieri rule
describes how to express $V\circ W$ in terms of irreducibles,
(2.1)
$S^{\mu}\circ\mathrm{triv}_{{\mathfrak{S}}_{m_{2}}}\cong\sum_{\lambda}S^{\lambda}$
where the sum is over all partitions $\lambda$ whose Young diagram can be
obtained from that of $\mu$ by adding $m_{2}$ boxes, no two in the same
column. The dual Pieri rule describes the same decomposition when $W$ is a sign
representation instead of a trivial representation: we again have
(2.2)
$S^{\mu}\circ\mathrm{sign}_{{\mathfrak{S}}_{m_{2}}}\cong\sum_{\lambda}S^{\lambda}$
but the sum is now over all partitions $\lambda$ whose Young diagram can be
obtained from that of $\mu$ by adding $m_{2}$ boxes, no two in the same row.
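The three-summand decomposition quoted in the introduction (the induction product of $S^{(k,k)}$ with a sign representation of ${\mathfrak{S}}_{n-2k}$) can be checked numerically with the hook length formula; the following is our own sanity check, not part of the paper:

```python
from math import comb, factorial

def dim_irrep(la):
    """Dimension of the S_n-irreducible S^la via the hook length formula."""
    n = sum(la)
    hooks = 1
    for i, row in enumerate(la):
        for j in range(row):
            arm = row - j - 1                          # boxes to the right
            leg = sum(1 for r in la[i + 1:] if r > j)  # boxes below
            hooks *= arm + leg + 1
    return factorial(n) // hooks

n, k = 8, 3
# dim of Ind_{S_6 x S_2}^{S_8} S^{(3,3)} (x) sign = C(8,6) * dim S^{(3,3)}
lhs = comb(n, 2 * k) * dim_irrep((k, k))
# dual Pieri rule: add n - 2k = 2 boxes to (3,3), no two in the same row
rhs = dim_irrep((4, 4)) + dim_irrep((4, 3, 1)) + dim_irrep((3, 3, 1, 1))
print(lhs, rhs)  # 140 140
```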
### 2.2. Projections and their kernels
Here we provide justification for the two claims made in the introduction
regarding spanning sets for the kernels of the projection maps. The first such
claim is standard, while the second indirectly follows from results in [7]. We
include a proof of the first, the proof of the second is analogous.
###### Proposition 2.1.
The kernel of the projection
$p_{M}:\mathbb{C}[M(n)]\rightarrow\mathbb{C}[NCM(n)]$ is spanned by elements
of the form
(2.3)
$\\{\\{a_{1},a_{2}\\},\\{a_{3},a_{4}\\},\\{a_{5},a_{6}\\},\dots,\\{a_{2k-1},a_{2k}\\}\\}\\\
+\\{\\{a_{1},a_{3}\\},\\{a_{2},a_{4}\\},\\{a_{5},a_{6}\\},\dots,\\{a_{2k-1},a_{2k}\\}\\}\\\
+\\{\\{a_{1},a_{4}\\},\\{a_{2},a_{3}\\},\\{a_{5},a_{6}\\},\dots,\\{a_{2k-1},a_{2k}\\}\\}$
for any distinct $a_{1},\dots,a_{2k}\in[n]$, i.e. sums of three matchings which differ
by a Ptolemy relation.
###### Proof.
Let $\beta$ denote the set of all elements of the form given in Equation (2.3).
To see that the span of $\beta$ is contained in the kernel of $p_{M}$, note
that by the ${\mathfrak{S}}_{n}$-equivariance of $p_{M}$ it suffices to check
that applying $p_{M}$ gives 0 in the case where $a_{i}=i$ for all $i$. In this
case, we have
(2.4)
$p_{M}(\\{1,2/3,4/\cdots/2k-1,2k\\}+\\{1,3/2,4/\cdots/2k-1,2k\\}+\\{1,4/2,3/\cdots/2k-1,2k\\})\\\
=\\{1,2/3,4/\cdots/2k-1,2k\\}+(2,3)\cdot(-\\{1,2/3,4/\cdots/2k-1,2k\\})+\\{1,4/2,3/\cdots/2k-1,2k\\}\\\
=0$
To see that the kernel is contained in the span, note that since $p_{M}$ is a
projection, the kernel is spanned by $m-p_{M}(m)$ for any matching $m$. Let
$t$ denote the minimum number of transpositions $s_{i_{1}},\dots,s_{i_{t}}$
for which $(s_{i_{1}}\cdots s_{i_{t}})\star m$ is noncrossing, and let
$w=s_{i_{1}}\cdots s_{i_{t}}$. We will show by induction on $t$ that
$m-p_{M}(m)\in\textrm{span}(\beta)$. When $t=0$, then $m-p_{M}(m)=0$, so the
claim is true. Otherwise, assume the claim holds for $t-1$. We have
$m-p_{M}(m)=s_{i_{1}}\circ(s_{i_{1}}\circ m)-s_{i_{1}}\cdot
p_{M}(s_{i_{1}}\circ m)$. By our inductive hypothesis, $s_{i_{1}}\circ
m-p_{M}(s_{i_{1}}\circ m)$ lies in the span of $\beta$, so it suffices to
verify for any $b\in\beta$, that if we apply $s_{i_{1}}\circ(-)$ to every
crossing term of $b$ and apply either $s_{i_{1}}\cdot(-)$ or
$s_{i_{1}}\circ(-)$ to every noncrossing term of $b$, we remain in the span of
$\beta$. This is true because $\beta$ is closed under the $\circ$ action, and
for every noncrossing matching $m_{1}$, either
$s_{i_{1}}\circ m_{1}=s_{i_{1}}\cdot m_{1}$
or
$s_{i_{1}}\cdot m_{1}=s_{i_{1}}\circ m_{1}+(s_{i_{1}}\star
m_{1}+m_{1}+m_{1}^{\prime})$
where $m_{1}^{\prime}$ is obtained by replacing the sets $\\{i,a\\}$ and
$\\{i+1,b\\}$ with the sets $\\{i,i+1\\}$ and $\\{a,b\\}$. In the second case,
note that $s_{i_{1}}\circ m_{1}=-s_{i_{1}}\star m_{1}$, and
$s_{i_{1}}\star m_{1}+m_{1}+m_{1}^{\prime}$ is in $\beta$. ∎
###### Proposition 2.2.
The kernel of the projection
$p_{\Pi}:\mathbb{C}[\Pi(n)]\rightarrow\mathbb{C}[NCP(n)]$ is spanned by
elements of the form
$\displaystyle w\circ(s_{i}\circ\pi+\sigma(\pi))$
for any permutation $w$ and singleton-free almost noncrossing set partition
$\pi$, i.e. sums of set partitions which differ by a skein relation.
## 3\. The embedding
In order to prove that our map is an embedding, it will be helpful to
introduce a multiplicative structure to work with. To do so we will introduce
three graded commutative $\mathbb{C}$-algebras $R_{n}$, $A_{n}$, and $M_{n}$,
all with ${\mathfrak{S}}_{n}$-actions. If we forget the multiplicative
structure, the underlying ${\mathfrak{S}}_{n}$-modules of $R_{n}$, $A_{n}$,
and $M_{n}$ will contain a copy of $\mathbb{C}[\Pi(n)]$, $\mathbb{C}[M(n)]$, and
$\mathbb{C}[NCM(n)]$ respectively. In the case of $M_{n}$, this copy will be
all of $M_{n}$. The structure of this proof is best explained via a
commutative diagram, see Figure 2. We will define a map
$h_{n}\circ\iota_{\Pi}:\mathbb{C}[\Pi(n)_{0}]\rightarrow M_{n}$, and show that
its kernel is equal to the kernel of $p_{\Pi}$. The desired embedding $f_{n}$
then follows from the first isomorphism theorem.
(Objects: $R_{n}$, $A_{n}$, $M_{n}$, $\mathbb{C}[M(n)]$, $\mathbb{C}[NCM(n)]$, $\mathbb{C}[\Pi(n)_{0}]$, $\mathbb{C}[NCP(n)_{0}]$; maps: $g_{n}:R_{n}\rightarrow A_{n}$, $h_{n}:R_{n}\rightarrow M_{n}$, $q:A_{n}\rightarrow M_{n}$, $\iota_{M}:\mathbb{C}[M(n)]\rightarrow A_{n}$, $\iota_{\Pi}:\mathbb{C}[\Pi(n)_{0}]\rightarrow R_{n}$, $p_{M}:\mathbb{C}[M(n)]\rightarrow\mathbb{C}[NCM(n)]$, $p_{\Pi}:\mathbb{C}[\Pi(n)_{0}]\rightarrow\mathbb{C}[NCP(n)_{0}]$, the isomorphism $M_{n}\cong\mathbb{C}[NCM(n)]$, and the embedding $f_{n}:\mathbb{C}[NCP(n)_{0}]\rightarrow\mathbb{C}[NCM(n)]$.)
Figure 2. A commutative diagram of the maps used in the following proofs. All
maps shown are ${\mathfrak{S}}_{n}$-equivariant linear maps. Maps between
$R_{n}$, $A_{n}$, and $M_{n}$ are also morphisms of $\mathbb{C}$-algebras. The
desired embedding is shown as a dashed arrow.
We begin with the definition of $R_{n}$.
###### Definition 3.1.
Let $n\in\mathbb{N}$. Define $R_{n}$ to be the unital graded commutative
$\mathbb{C}$-algebra generated by nonempty subsets of $[n]$ where each
generator has degree $1$. Define a degree-preserving action of
${\mathfrak{S}}_{n}$ on $R_{n}$ by
$\pi\cdot\\{a_{1},\dots,a_{k}\\}=\mathrm{sign}(\pi)\\{\pi(a_{1}),\cdots,\pi(a_{k})\\}$
for any permutation $\pi\in{\mathfrak{S}}_{n}$ and generator
$\\{a_{1},\cdots,a_{k}\\}\in R_{n}$.
The ring $R_{n}$ can be thought of as the ring of multiset collections of
subsets of $[n]$ with multiplication given by union of collections and
addition purely formal. It is in this sense that it contains a copy of
$\mathbb{C}[\Pi(n)]$, as set partitions of $n$ are particular collections of
subsets of $[n]$. To be precise, there exists an ${\mathfrak{S}}_{n}$-module
embedding $\iota_{\Pi}:\mathbb{C}[\Pi(n)_{0}]\hookrightarrow R_{n}$, given by
sending any singleton-free set partition $\pi$ to the product of its blocks.
The ring $A_{n}$ is a subring of $R_{n}$ designed to model matchings in much
the same way that $R_{n}$ models set partitions. It is defined as follows.
###### Definition 3.2.
Let $n\in\mathbb{N}$ and define $A_{n}$ to be the subalgebra of $R_{n}$
generated by the size-two subsets of $[n]$. The subring $A_{n}$ is invariant
under the ${\mathfrak{S}}_{n}$-action of $R_{n}$, and thus inherits a graded
${\mathfrak{S}}_{n}$-action from $R_{n}$.
Like $R_{n}$, the ring $A_{n}$ can be thought of as the ring of multiset
collections of size-two subsets of $[n]$. As matchings are particular
collections of size-two subsets of $[n]$, we again have an
${\mathfrak{S}}_{n}$-module embedding
$\iota_{M}:\mathbb{C}[M(n)]\hookrightarrow A_{n}$, given by
$\\{\\{a_{1},b_{1}\\},\dots,\\{a_{k},b_{k}\\}\\}\longmapsto\\{a_{1},b_{1}\\}\cdots\\{a_{k},b_{k}\\}$
for any matching $\\{\\{a_{1},b_{1}\\},\dots,\\{a_{k},b_{k}\\}\\}$.
Our final ring, $M_{n}$, is defined as a quotient of $A_{n}$ in the following
way.
###### Definition 3.3.
Define $I_{n}$ to be the ideal of $A_{n}$ generated by elements of the
following forms
* •
$\\{a,b\\}\cdot\\{a,b\\}$
* •
$\\{a,b\\}\cdot\\{a,c\\}$
* •
$\\{a,b\\}\cdot\\{c,d\\}+\\{a,c\\}\cdot\\{b,d\\}+\\{a,d\\}\cdot\\{b,c\\}$
for any distinct $a,b,c,d\in[n]$. Then $I_{n}$ is a homogeneous
${\mathfrak{S}}_{n}$-invariant ideal of $A_{n}$, so define $M_{n}$ to be the
graded ${\mathfrak{S}}_{n}$-module $M_{n}:=A_{n}/I_{n}$. Let
$q:A_{n}\rightarrow M_{n}$ be the quotient map.
The first two types of elements listed in the definition of $I_{n}$ serve the
purpose of removing collections of size-two subsets which are not actually
matchings. The third is the Ptolemy relation used to define the action of
${\mathfrak{S}}_{n}$ on $\mathbb{C}[NCM(n)]$, so quotienting by this ideal
gives an ${\mathfrak{S}}_{n}$-module isomorphic to $\mathbb{C}[NCM(n)]$, as
per the following argument.
###### Proposition 3.4.
There is an ${\mathfrak{S}}_{n}$-module isomorphism from $\mathbb{C}[NCM(n)]$
to $M_{n}$, given by
$\\{\\{a_{1},b_{1}\\},\dots,\\{a_{k},b_{k}\\}\\}\longmapsto\\{a_{1},b_{1}\\}\cdots\\{a_{k},b_{k}\\}$
for any noncrossing matching
$\\{\\{a_{1},b_{1}\\},\dots,\\{a_{k},b_{k}\\}\\}$.
###### Proof.
Let $q:A_{n}\rightarrow M_{n}$ be the quotient map. Consider the map
$q\circ\iota_{M}:\mathbb{C}[M(n)]\rightarrow M_{n}$. The kernel of
$q\circ\iota_{M}$ is spanned by the elements of $\mathbb{C}[M(n)]$ which are
sent under $\iota_{M}$ to a multiple of
$\\{a,b\\}\cdot\\{c,d\\}+\\{a,c\\}\cdot\\{b,d\\}+\\{a,d\\}\cdot\\{b,c\\}$ for
some distinct $a,b,c,d\in[n]$. Therefore the kernel of $q\circ\iota_{M}$ is
equal to the kernel of $p_{M}$ by Proposition 2.1. The image of
$q\circ\iota_{M}$ is all of $M_{n}$. To see this, note that products of
generators of $A_{n}$ form a vector space basis for $A_{n}$, and every such
basis element is either in the image of $\iota_{M}$ or in $I_{n}$. We
therefore have
(3.1)
$\mathbb{C}[NCM(n)]\cong\mathbb{C}[M(n)]/\mathrm{ker}(p_{M})=\mathbb{C}[M(n)]/\mathrm{ker}(q\circ\iota_{M})\cong\mathrm{im}(q\circ\iota_{M})=M_{n}$
where the isomorphism on the left is induced by the map $p_{M}$ and the
isomorphism on the right is induced by the map $q\circ\iota_{M}$. Composing
these isomorphisms gives the stated map. ∎
The following definition is the key idea behind our main theorem.
###### Definition 3.5.
Let $n\in\mathbb{N}$. Define the $\mathbb{C}$-algebra map
$g_{n}:R_{n}\rightarrow A_{n}$ by
$g_{n}(A)=\sum_{\\{a,b\\}\subseteq A}\\{a,b\\}$
for generators $A\in R_{n}$. Singleton sets are sent to $0$ by $g_{n}$. Define
$h_{n}:=q\circ g_{n}$ where $q$ is the quotient map $A_{n}\rightarrow M_{n}$.
We give the definition in terms of $R_{n}$, $A_{n}$, and $M_{n}$ for
simplicity and ease of proofs later, but the map we really care about is
$h_{n}\circ\iota_{\Pi}:\mathbb{C}[\Pi(n)_{0}]\rightarrow M_{n}$. Under this map, a set
partition $\pi$ is sent to the product of its blocks, then each block is sent
to the sum of all size-two subsets it contains. After distributing, we get a
sum of all ways to pick a size-two subset from each block. Composing with the
isomorphism between $M_{n}$ and $\mathbb{C}[NCM(n)]$ we get the sum of all
matchings such that each block of $\pi$ contains exactly one pair of the
matching, as in Theorem 3.8.
The ${\mathfrak{S}}_{n}$-equivariance of $h_{n}$ is simple to check. We have
the following calculation.
###### Lemma 3.6.
Let $i,j\geq 2$ and let $p_{1},p_{2},\dots,p_{i}$ and
$q_{1},q_{2},\dots,q_{j}$ be distinct elements of $[n]$. Then the element of $R_{n}$
$\displaystyle\kappa_{n}:=\\{p_{1},\dots,p_{i}\\}\cdot\\{q_{1},\dots,q_{j}\\}$
$\displaystyle-\\{p_{1},\dots,p_{i-1}\\}\cdot\\{q_{1},\dots,q_{j},p_{i}\\}-\\{p_{1},\dots,p_{i},q_{j}\\}\cdot\\{q_{1},\dots,q_{j-1}\\}$
$\displaystyle+\\{p_{1},\dots,p_{i-1},q_{j}\\}\cdot\\{q_{1},\dots,q_{j-1},p_{i}\\}+\\{p_{1},\dots,p_{i-1},q_{1},\dots,q_{j-1}\\}\cdot\\{p_{i},q_{j}\\}$
lies in the kernel of $h_{n}$.
When $i,j>2$, the element $\kappa_{n}$ corresponds to the five-term skein
relation depicted in Figure 1. If $i$ equals $2$, then $\\{p_{1},\dots,p_{i-1}\\}=\\{p_{1}\\}$
is a one-element set and is therefore sent to $0$ by
$h_{n}$; removing the term containing $\\{p_{1}\\}$ corresponds to the four-term
skein relation depicted in Figure 1. Similarly, if $j$ equals $2$, or if $i$
and $j$ both equal $2$, removing the terms of $\kappa_{n}$ which are
individually sent to $0$ corresponds to the four- or three-term skein relation
depicted in Figure 1.
###### Proof.
Applying $h_{n}$ to $\kappa_{n}$ gives
$\displaystyle\sum_{\begin{subarray}{c}\\{a,b\\}\subseteq\\{p_{1},\dots,p_{i}\\}\\\
\\{c,d\\}\subseteq\\{q_{1},\dots,q_{j}\\}\end{subarray}}\\{a,b\\}\cdot\\{c,d\\}$
$\displaystyle-\sum_{\begin{subarray}{c}\\{a,b\\}\subseteq\\{p_{1},\dots,p_{i-1}\\}\\\
\\{c,d\\}\subseteq\\{q_{1},\dots,q_{j},p_{i}\\}\end{subarray}}\\{a,b\\}\cdot\\{c,d\\}-\sum_{\begin{subarray}{c}\\{a,b\\}\subseteq\\{p_{1},\dots,p_{i},q_{j}\\}\\\
\\{c,d\\}\subseteq\\{q_{1},\dots,q_{j-1}\\}\end{subarray}}\\{a,b\\}\cdot\\{c,d\\}$
$\displaystyle+\sum_{\begin{subarray}{c}\\{a,b\\}\subseteq\\{p_{1},\dots,p_{i-1},q_{j}\\}\\\
\\{c,d\\}\subseteq\\{q_{1},\dots,q_{j-1},p_{i}\\}\end{subarray}}\\{a,b\\}\cdot\\{c,d\\}+\sum_{\begin{subarray}{c}\\{a,b\\}\subseteq\\{p_{1},\dots,p_{i-1},q_{1},\cdots,q_{j-1}\\}\\\
\\{c,d\\}\subseteq\\{p_{i},q_{j}\\}\end{subarray}}\\{a,b\\}\cdot\\{c,d\\}$
Note that the pairs of sets defining the first and second summations in the
above expression differ only in the location of $p_{i}$, and similarly for the
third and fourth. Since these summations come with opposite signs, the
$\\{a,b\\},\\{c,d\\}$ terms in the above expression will cancel unless one of
$a,b,c,d$ is equal to $p_{i}$. Similarly, comparing the first and third sums
and the second and fourth sums we find cancellation unless at least one of
$a,b,c,d$ is equal to $q_{j}$. If the remaining two elements of $a,b,c,d$ are
both $p$’s or both $q$’s, then $\\{a,b\\}\cdot\\{c,d\\}$ also cancels.
Therefore we have
(3.2)
$h_{n}(\kappa_{n})=\sum_{\begin{subarray}{c}a\in\\{p_{1},\dots,p_{i-1}\\}\\\
b\in\\{q_{1},\dots,q_{j-1}\\}\end{subarray}}\\{a,p_{i}\\}\cdot\\{b,q_{j}\\}+\\{a,q_{j}\\}\cdot\\{b,p_{i}\\}+\\{a,b\\}\cdot\\{p_{i},q_{j}\\}$
which is manifestly a sum of the defining relations of $M_{n}$. ∎
The above calculation allows for the identification of the kernels of
$h_{n}\circ\iota_{\Pi}$ and $p_{\Pi}$ mentioned at the beginning of this section.
###### Proposition 3.7.
The kernel of $h_{n}\circ\iota_{\Pi}$ is spanned by the set of all elements of
the form $w\circ(s_{i}\circ\pi+\sigma(\pi))$ (the skein relations) for any
permutation $w$ and singleton-free almost noncrossing set partition $\pi$.
###### Proof.
By Lemma 3.6, all such elements lie in the kernel. Since restricting
$\mathbb{C}[\Pi(n)_{0}]$ to noncrossing set partitions with exactly $k$ blocks
gives an irreducible submodule [7, Proposition 5.2], and each such irreducible
is sent to degree $k$ in $M_{n}$, it suffices to demonstrate that each
irreducible is not sent identically to 0. But for each noncrossing set
partition $\pi\in\Pi(n)_{0}$, $h_{n}\circ\iota_{\Pi}(\pi)$ is a sum of noncrossing
matchings of $[n]$, which form a basis for $M_{n}$ by Proposition 3.4, and is
therefore nonzero. ∎
We can now prove our main result.
###### Theorem 3.8.
The linear map $f_{n}:\mathbb{C}[NCP(n)_{0}]\rightarrow\mathbb{C}[NCM(n)]$
defined by
$f_{n}(\pi)=\sum_{m\in M_{\pi}(n)}m$
is an ${\mathfrak{S}}_{n}$-equivariant embedding of vector spaces. Here
$M_{\pi}(n)$ is defined to be the set of all matchings $m$ in $M(n)$ for which
each block of $\pi$ contains exactly one pair in $m$.
###### Proof.
By Proposition 3.7 and Proposition 2.2, the kernel of $h_{n}\circ\iota_{\Pi}$ is
equal to the kernel of $p_{\Pi}$. So we have
(3.3)
$\mathbb{C}[NCP(n)_{0}]\cong\mathbb{C}[\Pi(n)_{0}]/\mathrm{ker}(p_{\Pi})=\mathbb{C}[\Pi(n)_{0}]/\mathrm{ker}(h_{n}\circ\iota_{\Pi})\cong\mathrm{im}(h_{n}\circ\iota_{\Pi})\subset
M_{n}\cong\mathbb{C}[NCM(n)]$
where the isomorphism on the left is induced by $p_{\Pi}$ and the isomorphism
on the right is induced by $h_{n}\circ\iota_{\Pi}$. Chasing these isomorphisms and
inclusions results in the map $f_{n}$. ∎
## 4\. The image
We have an embedding
$f_{n}:\mathbb{C}[NCP(n)_{0}]\hookrightarrow\mathbb{C}[NCM(n)]$, so it is a
natural question to ask for a description of the image of $f_{n}$ within
$\mathbb{C}[NCM(n)]$. Via the commutative diagram in Figure 2, we have an
isomorphism of images
(4.1) $\mathrm{im}(h_{n})\cong\mathrm{im}(f_{n}).$
So it is equivalent to describe the image of $h_{n}$, and the multiplicative
structure of $M_{n}$ makes this easier. This section will show that the image
of $h_{n}$ has a simple description, the proof of which requires the following
lemmas.
###### Lemma 4.1.
Let $A\subseteq[n]$. Then $h_{n}(A)^{2}=0$.
###### Proof.
Expanding $h_{n}(A)$ by its definition gives
(4.2) $h_{n}(A)^{2}=\sum_{\\{a,b\\}\subseteq A}\sum_{\\{c,d\\}\subseteq
A}\\{a,b\\}\cdot\\{c,d\\}.$
Using the defining relations of $M_{n}$ that
$\\{a,b\\}\cdot\\{a,b\\}=0\quad\textrm{and}\quad\\{a,b\\}\cdot\\{a,c\\}=0,$
only the terms in which $a,b,c,d$ are distinct survive. Grouping the surviving
terms according to the underlying four-element subset $S=\\{a,b,c,d\\}$ of $A$,
we have
(4.3) $h_{n}(A)^{2}=2\sum_{\begin{subarray}{c}S=\\{a,b,c,d\\}\subseteq
A\end{subarray}}\\{a,b\\}\cdot\\{c,d\\}+\\{a,c\\}\cdot\\{b,d\\}+\\{a,d\\}\cdot\\{b,c\\}.$
The right hand side of the above equation equals 0 because
$\\{a,b\\}\cdot\\{c,d\\}+\\{a,c\\}\cdot\\{b,d\\}+\\{a,d\\}\cdot\\{b,c\\}=0$
for any distinct $a,b,c,d\in[n]$. ∎
###### Lemma 4.2.
Let $A,B$ be disjoint subsets of $[n]$. Then
$h_{n}(A)\cdot\left(\sum_{\begin{subarray}{c}a\in A\\\ b\in
B\end{subarray}}\\{a,b\\}\right)=0.$
###### Proof.
Expanding $h_{n}(A)$ and symmetrizing over the roles of $a_{1},a_{2},a_{3}$
expresses the left hand side as a scalar multiple of
$\sum_{\begin{subarray}{c}a_{1},a_{2},a_{3}\in A\textrm{ distinct}\\\ b\in
B\end{subarray}}\left(\\{a_{1},a_{2}\\}\cdot\\{a_{3},b\\}+\\{a_{1},a_{3}\\}\cdot\\{a_{2},b\\}+\\{a_{2},a_{3}\\}\cdot\\{a_{1},b\\}\right)=0,$
where each summand vanishes because it is a defining Ptolemy relation of $M_{n}$.
∎
###### Lemma 4.3.
Let $B_{1},\dots,B_{k}$ be the blocks of a singleton-free set partition of
$[n]$. Then
$h_{n}\left(\prod_{i=1}^{k}B_{i}\right)=h_{n}\left([n]\cdot\prod_{i=1}^{k-1}B_{i}\right)$
###### Proof.
We have the following calculation:
$\displaystyle h_{n}\left([n]\cdot\prod_{i=1}^{k-1}B_{i}\right)$
$\displaystyle=\left(\sum_{\\{a,b\\}\subseteq[n]}\\{a,b\\}\right)\cdot
h_{n}\left(\prod_{i=1}^{k-1}B_{i}\right)$
$\displaystyle=\left(\left(\sum_{i=1}^{k}h_{n}(B_{i})\right)+\left(\sum_{1\leq
i<j\leq k}\sum_{\begin{subarray}{c}a\in B_{i}\\\ b\in
B_{j}\end{subarray}}\\{a,b\\}\right)\right)\cdot\prod_{i=1}^{k-1}h_{n}(B_{i})$
$\displaystyle=h_{n}(B_{k})\cdot\prod_{i=1}^{k-1}h_{n}(B_{i})=h_{n}\left(\prod_{i=1}^{k}B_{i}\right)$
where the third equality follows from the preceding two lemmas: every term in
the outer sum of
$\sum_{1\leq i<j\leq k}\sum_{\begin{subarray}{c}a\in B_{i}\\\ b\in
B_{j}\end{subarray}}\\{a,b\\}$
is annihilated by some factor of the product
$\prod_{i=1}^{k-1}h_{n}(B_{i})$
by Lemma 4.2, and every term except the $i=k$ term in the sum
$\sum_{i=1}^{k}h_{n}(B_{i})$
is annihilated by the corresponding factor of this product by Lemma 4.1, and
the final equality holds since $h_{n}$ is an algebra map.
∎
We can now describe the image of $h_{n}$.
###### Theorem 4.4.
Let $H_{n}$ be the ideal of $M_{n}$ generated by $h_{n}([n])$. Then
$\mathrm{im}(h_{n})=H_{n}.$
###### Proof.
It is immediate from Lemma 4.3 that the image of $h_{n}$ is contained in
$H_{n}$, so it suffices to show that
(4.4)
$\mathrm{dim}(H_{n})\leq\mathrm{dim}(\mathrm{im}(h_{n}))=\mathrm{dim}(\mathrm{im}(f_{n}))=\\#NCP(n)_{0}$
To bound the dimension of $H_{n}$, note that for any fixed $a\in[n]$, the
product
$h_{n}([n])\cdot\left(\sum_{\begin{subarray}{c}b\in[n]\\\ b\neq
a\end{subarray}}\\{a,b\\}\right)$
expands, up to a nonzero scalar, as
$\sum_{\begin{subarray}{c}b,c,d\in[n]\setminus\\{a\\}\\\ b,c,d\,\mathrm{distinct}\end{subarray}}\left(\\{a,b\\}\cdot\\{c,d\\}+\\{a,c\\}\cdot\\{b,d\\}+\\{a,d\\}\cdot\\{b,c\\}\right)=0,$
where each summand vanishes because it is a defining Ptolemy relation of $M_{n}$,
so
$h_{n}([n])\cdot\\{1,a\\}=-h_{n}([n])\cdot\left(\sum_{\begin{subarray}{c}b\in[n]\\\
b\neq a,1\end{subarray}}\\{a,b\\}\right).$
Therefore $H_{n}$ is spanned by elements of the form
$h_{n}([n])\cdot\\{a_{1},a_{2}\\}\cdot\\{a_{3},a_{4}\\}\cdots\\{a_{2k-1},a_{2k}\\}$
where the sets $\\{a_{1},a_{2}\\},\dots,\\{a_{2k-1},a_{2k}\\}$ form a
noncrossing matching of $\\{2,\dots,n\\}$. These elements are not linearly
independent, however. Consider the element $\tilde{f}_{n}(\tilde{\pi})$ of
$M_{n}$ given by
$\tilde{f}_{n}(\tilde{\pi}):=\prod_{B\in\tilde{\pi}}h_{n}(B)$
for any singleton-free noncrossing set partition $\tilde{\pi}$ of
$\\{2,\dots,n\\}$ (i.e. the “image of $f_{n}$” if such a set partition were in
the domain of $f_{n}$). Let $B_{1}$ be the block of $\tilde{\pi}$ containing
$2$, and let $\pi$ be the set partition of $[n]$ obtained by adding $1$ to
block $B_{1}$. We have
$\displaystyle h_{n}([n])\cdot\tilde{f}_{n}(\tilde{\pi})$
$\displaystyle=h_{n}([n])\cdot\prod_{B\in\tilde{\pi}}h_{n}(B)$
$\displaystyle=h_{n}(B_{1})\cdot h_{n}\left([n]\cdot\prod_{\begin{subarray}{c}B\in\pi\\\ B\neq B_{1}\cup\\{1\\}\end{subarray}}B\right)$
$\displaystyle=h_{n}(B_{1})\cdot h_{n}\left(\prod_{B\in\pi}B\right)$
$\displaystyle=h_{n}(B_{1})\,h_{n}(B_{1}\cup\\{1\\})\,h_{n}\left(\prod_{\begin{subarray}{c}B\in\pi\\\ B\neq B_{1}\cup\\{1\\}\end{subarray}}B\right)$
$\displaystyle=0$
The third equality follows from Lemma 4.3 and the final equality follows from
the fact that
$h_{n}(B_{1})h_{n}(B_{1}\cup\\{1\\})=h_{n}(B_{1})^{2}+h_{n}(B_{1})\left(\sum_{b\in
B_{1}}\\{1,b\\}\right)=0$
which follows from Lemma 4.2 and Lemma 4.1. The collection of
$\tilde{f}_{n}(\tilde{\pi})$ for singleton-free noncrossing set partitions
$\tilde{\pi}$ of $\\{2,\dots,n\\}$ is linearly independent. To see this, note that any
linear relation among the $\tilde{f}_{n}(\tilde{\pi})$ would also be a linear
relation among $f_{n-1}(\pi)$ where $\pi$ is the set partition of $[n-1]$
obtained by decrementing the indices in $\tilde{\pi}$. But $f_{n-1}$ is an
embedding and singleton-free noncrossing set partitions are linearly
independent in $\mathbb{C}[NCP(n-1)_{0}]$. The dimension of $H_{n}$ is
therefore bounded by
(4.5) $\mathrm{dim}(H_{n})\leq\\#\\{\textrm{noncrossing matchings of
}\\{2,\dots,n\\}\\}\\\ -\\#\\{\textrm{singleton-free noncrossing set
partitions of }\\{2,\dots,n\\}\\}.$
Noncrossing matchings of $\\{2,\dots,n\\}$ are in bijection with noncrossing
set partitions of $[n]$ in which only the block containing $1$ may be a
singleton (though it need not be). Given such a noncrossing set partition, simply
take the matching that matches the largest and smallest element of each block
not containing $1$. Singleton-free noncrossing set partitions of
$\\{2,\dots,n\\}$ are in bijection with noncrossing set partitions of $[n]$ in
which $\\{1\\}$ is the unique singleton block. We therefore have
$\mathrm{dim}(H_{n})\leq\\#NCP(n)_{0}$
as desired. ∎
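The count at the end of this proof can be confirmed by brute force for small $n$; the following enumeration (our own check, not part of the paper) verifies for $n=7$ that the number of noncrossing matchings of a $6$-element set minus the number of singleton-free noncrossing set partitions of that set equals $\\#NCP(7)_{0}$:

```python
from itertools import combinations

def set_partitions(elems):
    """All set partitions of the list elems, built recursively."""
    if not elems:
        yield []
        return
    first, rest = elems[0], elems[1:]
    for part in set_partitions(rest):
        for i in range(len(part)):   # put first into an existing block
            yield part[:i] + [part[i] | {first}] + part[i + 1:]
        yield [{first}] + part       # or into a new singleton block

def crosses(A, B):
    return any(a < b < c < d
               for a, c in combinations(sorted(A), 2)
               for b, d in combinations(sorted(B), 2))

def noncrossing(P):
    return not any(crosses(A, B) or crosses(B, A)
                   for A, B in combinations(P, 2))

def count(size, pred):
    return sum(1 for P in set_partitions(list(range(1, size + 1)))
               if noncrossing(P) and pred(P))

m = 6  # {1,...,m} plays the role of {2,...,n}, so n = m + 1
# a matching is a partition with all blocks of size <= 2 (singletons = unmatched)
ncm = count(m, lambda P: all(len(B) <= 2 for B in P))

def ncp0(size):  # singleton-free noncrossing set partitions
    return count(size, lambda P: all(len(B) >= 2 for B in P))

print(ncm - ncp0(m), ncp0(m + 1))  # 36 36
```

The two counts are the Motzkin and Riordan numbers, respectively, and the printed equality is an instance of the identity relating them.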
## 5\. Future directions
One of the goals motivating this paper is to find new combinatorially nice
bases for ${\mathfrak{S}}_{n}$-irreducibles which arise from existing bases in
an analogous way to the skein action. More specifically, suppose we have a
basis for $S^{\lambda}$ which is indexed by certain structures on the set
$[k]$, where $k=|\lambda|$ (e.g. noncrossing perfect matchings, in the case of
this paper). We can create a basis for the induction product of $S^{\lambda}$
with a sign representation of ${\mathfrak{S}}_{n-k}$ indexed by all ways to
put a certain structure on a $k$-element subset of $[n]$. The Pieri rule tells
us which ${\mathfrak{S}}_{n}$ irreducibles this decomposes into. In
particular, there will be one copy of $(\lambda,1^{n-k})$. How do we isolate
that irreducible?
It is optimistic to think that there will be a method that works in any sort
of generality, but perhaps analogs could be found in certain specific cases.
For example, an analog might exist for the $A_{2}$-web basis for $S^{(k,k,k)}$
introduced by Kuperberg [k]. The web basis consists of planar bipartite graphs
embedded in a disk with $n$ boundary vertices, all of degree 1, such that every
interior vertex has degree 3, all boundary vertices are part of the same
bipartition, and no cycles of length less than 6 exist. One potential candidate for a basis
for $S^{(k,k,k,1^{n-3k})}$ is as follows.
###### Conjecture 5.1.
Let $A$ be the set of all planar bipartite graphs embedded in a disk for which
the following conditions hold:
* •
There are $n$ vertices on the boundary of the disk, and there exists a
bipartition in which all of these vertices are in the same part.
* •
Every interior vertex is connected to a boundary vertex.
* •
Every interior vertex in the same part of the bipartition as the boundary
vertices has degree 3. These are called negative interior vertices.
* •
Every interior vertex not in the same part of the bipartition as the boundary
vertices has degree at least 3. These are called positive interior vertices.
* •
The number of positive interior vertices minus the number of negative interior
vertices is exactly $k$.
* •
No cycles of length less than 6 exist.
Then $|A|$ is equal to the dimension of $S^{(k,k,k,1^{n-3k})}$.
The set $A$ can be thought of as consisting of webs for which the condition of
interior vertices being degree 3 has been partially relaxed. The conjecture
can be shown to hold for $k=2$ and any $n$, as well as $n=10,k=3$ via direct
enumeration. If the above conjecture is true, it suggests the following
question.
###### Question 5.2.
Does there exist a combinatorially nice action of ${\mathfrak{S}}_{n}$ on
$\mathbb{C}[A]$ which creates a ${\mathfrak{S}}_{n}$ module isomorphic to
$S^{(k,k,k,1^{n-3k})}$? If so, what does the unique embedding into the
induction product of $S^{(k,k,k)}$ with a sign representation of
${\mathfrak{S}}_{n-3k}$ look like?
A positive answer to this question might help elucidate how to apply similar
methods more generally.
## 6\. Acknowledgements
We are very grateful to Brendon Rhoades for many helpful discussions and
comments on this project.
## References
* [1] J. Kim and B. Rhoades. Set partitions, fermions, and skein relations. IMRN, 2022. arXiv:2109.06373
* [2] G. Kuperberg. Spiders for rank 2 Lie algebras. Comm. Math. Phys., 180 (1996), no. 1, 109–151. arXiv:q-alg/9712003
* [3] R. Patrias, O. Pechenik, and J. Striker. A web basis of invariant polynomials from noncrossing partitions. Preprint, 2021. arXiv:2112.05781
* [4] O. Pechenik. Cyclic sieving of increasing tableaux and small Schröder paths. J. Combin. Theory Ser. A, 125 (2014).
* [5] T. K. Petersen, P. Pylyavskyy, and B. Rhoades. Promotion and cyclic sieving via webs. J. Algebraic Combin., 30 (2009).
* [6] V. Reiner, D. Stanton, and D. White. The cyclic sieving phenomenon. J. Combin. Theory Ser. A, 108 (2004).
* [7] B. Rhoades. A skein action of the symmetric group on noncrossing partitions. J. Algebraic Combin., 45 (2017), no. 1, 81–127.
* [8] B. Sagan. The Symmetric Group. Springer, New York, 2001.
# Modelling uncertainties in wide binary constraints on primordial black holes
Emily Tyler,1 Anne M. Green,1 and Simon P. Goodwin2
1 School of Physics and Astronomy, University of Nottingham, Nottingham, NG7
2RD, UK
2 Department of Physics and Astronomy, University of Sheffield, Sheffield, S3
7RH, UK
E-mail<EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
Dark matter in the form of compact objects with mass $M_{\rm co}\gtrsim
10M_{\odot}$ can be constrained by its dynamical effects on wide binary stars.
Motivated by the recent interest in Primordial Black Hole dark matter, we
revisit the theoretical modelling involved in these constraints. We improve on
previous studies in several ways. Specifically, we i) implement a physically
motivated model for the initial wide-binary semi-major axis distribution, ii)
include unbound binaries, and iii) take into account the uncertainty in the
relationship between semi-major axis and observed angular separation. These
effects all tend to increase the predicted number of wide binaries (for a
given compact object population). Therefore the constraints on the halo
fraction in compact objects, $f_{\rm co}$, are significantly weakened. For the
wide binary sample used in the most recent calculation of the constraints, we
find the fraction of halo dark matter in compact objects is $f_{\rm co}<1$ for
$M_{\rm co}\approx 300\,M_{\odot}$, tightening with increasing $M_{\rm co}$ to
$f_{\rm co}<0.26$ for $M_{\rm co}\gtrsim 1000\,M_{\odot}$.
###### keywords:
Galaxy: halo – Stars: binaries: general – Cosmology: dark matter
pubyear: 2022
## 1 Introduction
There is strong evidence from cosmological and astronomical observations that
$\approx 85\%$ of the matter in the Universe is in the form of cold,
nonbaryonic dark matter (DM), see e.g. Bertone et al. (2005) for a review.
Traditionally the most popular dark matter candidates have been new elementary
particles, such as Weakly Interacting Massive Particles or axions. However,
the discovery of gravitational waves from mergers of tens of Solar mass black
holes by LIGO-Virgo (Abbott et al., 2016) has led to a surge of interest in
Primordial Black Holes (PBHs) as a dark matter candidate (Bird et al., 2016;
Sasaki et al., 2016; Carr et al., 2016). PBHs are black holes that may form in
the early Universe, for instance from the collapse of large density
perturbations (Zel’dovich & Novikov, 1967; Hawking, 1971).
There are various constraints on the abundance of PBHs with mass $M_{\rm
PBH}\gtrsim 1M_{\odot}$ from gravitational microlensing (Diego et al., 2018;
Zumalacarregui & Seljak, 2018; Blaineau et al., 2022; Esteban-Gutiérrez et
al., 2022), gravitational waves from mergers of binaries (Sasaki et al., 2016;
Ali-Haïmoud et al., 2017), their dynamical effects on stars in wide binaries
(Yoo et al., 2004; Quinn et al., 2009; Monroy-Rodríguez & Allen, 2014), and in
dwarf galaxies (Brandt, 2016), and the radiation emitted due to accretion of
gas onto PBHs (Ricotti et al., 2008; Gaggero et al., 2017). For reviews, with
extensive reference lists, see e.g. Carr & Kuhnel (2020); Green & Kavanagh
(2021). The increased interest in PBH DM motivates a careful reanalysis of
these constraints. For instance, the constraints from the temperature
anisotropies in the Cosmic Microwave Background, due to the effects of PBHs on
the recombination history of the Universe, have been found to be significantly
weaker than previously thought (Ali-Haïmoud & Kamionkowski, 2017; Poulin et
al., 2017).
In this paper we focus on the constraints on multi-Solar mass compact objects
in the halo of the Milky Way (MW) from their dynamical effects on wide binary
stars. While this is motivated by the recent interest in PBHs as a dark matter
candidate, these constraints apply to any compact object DM. Close encounters
between binary stars and massive compact objects increase the energies and
semi-major axes of the binaries, and potentially disrupt some of the binaries.
Observations of the semi-major axis distribution of wide binaries in the MW
can therefore potentially constrain the abundance of compact objects. For
perturbers with mass $M_{\rm p}\gtrsim 10^{3}M_{\odot}$ the closest encounter
dominates, while for lighter perturbers it is necessary to take into account
the cumulative, diffusive, effects of multiple interactions (Bahcall et al.,
1985; Binney & Tremaine, 2008).
Bahcall et al. (1985) used wide binaries in the Milky Way disk to constrain
the fraction of the local mass density in compact objects. Yoo et al. (2004)
then used a sample of 90 wide halo binaries compiled by Chanamé & Gould (2004)
to constrain the fraction of the MW halo in compact objects. They found that
compact objects with mass $M_{\rm co}>43\,M_{\odot}$ could not make up all of
the halo, and objects with mass $M_{\rm co}\gtrsim 10^{3}M_{\odot}$ were
constrained to make up less than $20\%$ of the halo, at $95\%$ confidence.
Quinn et al. (2009) highlighted that these constraints are very sensitive to
the widest binaries. They carried out radial velocity measurements of four of
the widest binaries in the Chanamé & Gould (2004) sample, and found that the
second widest binary was in fact not a binary, as the two stars have
significantly different radial velocities. Without this spurious binary, the
mass above which compact objects were excluded from making up all of the halo
increased to $M_{\rm co}\sim 500\,M_{\odot}$. The radial velocities, along
with the proper motions, also allow the orbits of the binaries to be
calculated. The orbits found by Quinn et al. (2009) extend to radii
$(20-60)\,{\rm kpc}$. In this case the average DM density the binaries
experience is significantly, $(50-90)\%$, smaller than the local (i.e. at the
Solar radius) DM density, which further weakens the constraint. Quinn et al.
(2009) concluded that the Chanamé & Gould (2004) sample was too small to place
meaningful constraints on the halo fraction of compact objects.
Monroy-Rodríguez & Allen (2014) calculated constraints using 251 halo wide
binaries from a catalogue compiled by Allen & Monroy-Rodríguez (2014). 160 of
these binaries had radial velocity measurements, allowing their orbits to be
calculated. Using the binaries which spend the smallest fraction of their time
in the Galactic disk, they found that compact objects with $M_{\rm co}\gtrsim
5\,M_{\odot}$ are excluded from making up all of the halo, and objects with
mass $M_{\rm co}\gtrsim 10^{2}M_{\odot}$ make up less than $10\%$, at $95\%$
confidence. Contrary to Quinn et al. (2009), they found that the average DM
densities experienced by the wide binaries are not significantly different
from the local density.
In this paper we revisit the modelling assumptions in these analyses, refining
several aspects. In particular, previous work assumed that the initial binary
semi-major axis distribution is log-flat or a power law, while we use an
initial distribution motivated by simulations of the formation of wide
binaries during the dissolution of large star clusters (Kouwenhoven et al.,
2010; Griffiths, 2019). We also include unbound binaries in our comparison
with observations and take into account the uncertainty in calculating the
observed angular separation of a binary from its semi-major axis. We outline
our method in Sec. 2, present and discuss our results in Sec. 3, and conclude
with a Summary in Sec. 4.
## 2 Method
### 2.1 Binary sample
To illustrate the effects of theoretical modelling on the constraints, we use
the catalogue of halo wide binaries compiled from various sources by Allen &
Monroy-Rodríguez (2014). This catalogue was used by Monroy-Rodríguez & Allen
(2014) to calculate the most recent wide binary constraints on the abundance
of compact objects (that are quoted in reviews of PBH DM e.g. Carr & Kuhnel
(2020); Green & Kavanagh (2021)).
As discussed by, e.g., Chanamé & Gould (2004), constructing a reliable large
catalogue of halo binaries, without selection biases, is non-trivial. Halo
binaries need to be distinguished from disk binaries and, as emphasised by
Quinn et al. (2009), radial velocity measurements are required to eliminate
chance associations. Coronado et al. (2018) constructed a catalogue of halo
binaries using Sloan Digital Sky Survey data, however this sample only covers
projected separations less than $\sim 0.1\,{\rm pc}$.
GAIA (Gaia Collaboration, 2018) offers the possibility of constructing a
large, consistent catalogue of halo wide binaries. However at this time there is
no definitive sample of halo binaries (see e.g. Oh et al., 2017; Oelkers et
al., 2017; Tian et al., 2020, for work in this direction).
### 2.2 Simulations
#### 2.2.1 Interactions between perturbers and wide binaries
Our simulations of interactions between perturbers (we are specifically
interested in the case of PBH DM, however the constraints apply to any compact
object DM, and therefore we use these terms and ‘perturber’ interchangeably)
and wide binaries largely follow Yoo et al. (2004). We assume that all
binaries are composed of stars which each have mass $0.5M_{\odot}$ and that
the distribution of the relative velocities of the binaries and perturbers,
$f(v_{\rm rel})$, is Maxwellian with dispersion $\sigma_{\rm rel}=220\,{\rm
km}\,{\rm s}^{-1}$.
When we compare simulated binary distributions with observations in Sec. 2.3
below, the initial binary semi-major axis distribution is taken into account
using a scattering matrix formalism, as in Yoo et al. (2004). In our initial
simulations, for simplicity and following previous work, we use a semi-major
axis distribution which is log-flat between $10\,{\rm au}$ and $10^{5.5}\,{\rm
au}$, and assume that the square of the initial eccentricity is uniformly
distributed between $0$ and $1$ (i.e. thermal).
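These initial conditions can be drawn directly; the following is an illustrative sketch (names and structure are ours, not the paper's): semi-major axes log-flat over the stated range, thermal eccentricities ($e^{2}$ uniform), and relative speeds from a Maxwellian with dispersion $220\,{\rm km\,s^{-1}}$, sampled as the norm of three Gaussian components.

```python
import math
import random

SIGMA_REL = 220.0                      # km/s, Maxwellian dispersion
LOG_A_MIN, LOG_A_MAX = 1.0, 5.5        # log10(a / au), log-flat range

def draw_binary(rng=random):
    """One initial binary: semi-major axis (au) and eccentricity."""
    a = 10.0 ** rng.uniform(LOG_A_MIN, LOG_A_MAX)   # log-flat in a
    e = math.sqrt(rng.random())                     # thermal: e^2 ~ U(0, 1)
    return a, e

def draw_v_rel(rng=random):
    """Binary-perturber relative speed: norm of three Gaussian components."""
    return math.sqrt(sum(rng.gauss(0.0, SIGMA_REL) ** 2 for _ in range(3)))
```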
As in previous work (Yoo et al., 2004; Quinn et al., 2009; Monroy-Rodríguez &
Allen, 2014) we do not include perturbations from Giant Molecular Clouds
(GMCs) or the effects of Galactic tides. Due to their low number density in
the halo the impact of GMCs on halo wide binaries is expected to be small, and
neglecting it is a conservative assumption. Galactic tides are smaller for
halo wide binaries than for the disk binaries studied in Jiang & Tremaine
(2010), and likewise including their effects would act to tighten the
constraints. We have also assumed that the PBHs are smoothly distributed and
are not themselves in binaries. Some PBHs are expected to form binaries in the
early Universe (Nakamura et al., 1997; Ali-Haïmoud et al., 2017), and PBH
clusters form not long after matter-radiation equality (Afshordi et al., 2003;
Inman & Ali-Haïmoud, 2019). The evolution of these clusters, and in particular
the disruption of PBH binaries within them, is a challenging problem and the
present day spatial distribution of PBHs within galaxies is not yet understood
in detail.
Figure 1: The final semi-major axis distribution of $10^{5}$ binaries composed
of stars with mass $0.5M_{\odot}$ evolved for 10 Gyr in a population of
perturbers with a Maxwellian relative velocity distribution with dispersion
$\sigma_{\rm rel}=220\,{\rm km\,s}^{-1}$, mass density
$\rho=0.009\,M_{\odot}\,{\rm pc}^{-3}$ and masses $10\,M_{\odot}$ (orange
lines), $100\,M_{\odot}$ (green) and $1000\,M_{\odot}$ (red). The dot-dashed
lines are for the full binary population
(bound and unbound binaries), while the solid lines show only the binaries
that remain bound at all times. The initial, log-flat, binary semi-major axis
distribution is shown by the black dotted line.
Unlike previous work on constraints on compact object DM from halo binaries,
we include unbound binaries in our comparison with observed binaries. Yoo et
al. (2004) argued that disrupted binaries quickly diffuse to large
separations, beyond those probed observationally. However Jiang & Tremaine
(2010) included unbound systems in their study of the effects of perturbers on
disk binaries using diffusion equations. They found that the stars from
unbound binaries have small relative velocities, which would lead them to be
detected as binaries by surveys. Furthermore, they also found that some
unbound binaries can become rebound.
The rate at which encounters with impact parameter between $b$ and $b+{\rm
d}b$ and relative velocity between $v_{\rm rel}$ and $v_{\rm rel}+{\rm
d}v_{\rm rel}$ occur,
$\dot{C}$, is given by
$\dot{C}=n_{\rm p}v_{\rm rel}2\pi b\,{\rm d}b\,f(v_{\rm rel})\,{\rm d}v_{\rm
rel}\,,$ (1)
where $n_{\rm p}=\rho/M_{\rm p}$ is the perturber number density and $\rho$
and $M_{\rm p}$ are the perturber mass density and mass respectively. We
consider perturber masses in the range $1M_{\odot}<M_{\rm p}<3\times
10^{3}M_{\odot}$ and fix $\rho$ to the standard value for the local DM
density, $0.009\,M_{\odot}\,{\rm pc}^{-3}$ (e.g. de Salas & Widmark (2021)),
however the constraints can be straightforwardly rescaled to other values of
the local DM
density.
We have found (see Fig. 3.5 of Tyler (2022)) that encounters which cause a
fractional change in the binary energy less than $0.1\%$ have a negligible
(less than $0.1\%$) effect on the semi-major axis distribution, therefore we
do not include these encounters in our simulations. We calculate the number of
interactions expected within a time $T=10\,{\rm Gyr}$, roughly equal to the
age of the MW. For each individual binary the actual number of encounters
experienced is drawn from a Poisson distribution and the impact parameter and
relative velocity of each encounter are found from the distributions in Eq.
(1).
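One way to realise this numerically: integrating Eq. (1) over $b\leq b_{\rm max}$, $v_{\rm rel}$, and time $T$ gives the expected encounter count; the actual count is Poisson, impact parameters follow $P(b)\propto b$, and relative speeds follow the flux-weighted Maxwellian $\propto v_{\rm rel}f(v_{\rm rel})$. A sketch under these assumptions (the Gamma-variate trick for the weighted Maxwellian is ours):

```python
import math
import random

def poisson(lam, rng):
    """Simple Poisson sampler (Knuth's method; fine for modest lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def sample_encounters(n_p, sigma, b_max, T, rng=random):
    """Encounter list [(b, v_rel), ...] for one binary over time T.

    n_p: perturber number density; sigma: Maxwellian dispersion; b_max:
    largest impact parameter considered. Units must be mutually consistent.
    """
    mean_v = sigma * math.sqrt(8.0 / math.pi)         # <v> of a Maxwellian
    expected = n_p * math.pi * b_max**2 * mean_v * T  # integral of Eq. (1)
    encounters = []
    for _ in range(poisson(expected, rng)):
        b = b_max * math.sqrt(rng.random())           # P(b) ∝ b
        # flux-weighted speeds: v^3 exp(-v^2 / 2σ^2) dv, i.e. v^2/(2σ^2)
        # is a Gamma(2) variate, sampled as -ln(u1) - ln(u2)
        u = (1.0 - rng.random()) * (1.0 - rng.random())
        encounters.append((b, sigma * math.sqrt(-2.0 * math.log(u))))
    return encounters
```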
The relative velocity between the perturber and binary is always much larger
than the orbital velocities of the binary stars. Therefore the stars can be
treated as stationary during an encounter and the impulse approximation used
to calculate its effect (e.g. Binney & Tremaine (2008)). The positions of the
stars are unperturbed, while the changes in their velocities are perpendicular
to the trajectory of the perturber and given by
$\Delta v_{i}=\frac{2GM_{\rm p}}{v_{\rm rel}b_{i}}\frac{{{\bf
b}_{i}}}{b_{i}}\,,$ (2)
where ${\bf b}_{i}$ is the impact parameter to star $i$.
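A minimal sketch of the impulse kick of Eq. (2); the sign convention (kick directed along ${\bf b}_{i}$) follows the equation as written.

```python
def impulse_kick(G, M_p, v_rel, b_vec):
    """Velocity change of one star in the impulse approximation, Eq. (2).

    b_vec is the vector impact parameter from the perturber's straight-line
    trajectory to the star (perpendicular to the trajectory); the returned
    kick is directed along b_vec, following the sign convention of Eq. (2).
    """
    b2 = sum(c * c for c in b_vec)            # |b|^2
    pref = 2.0 * G * M_p / (v_rel * b2)       # 2 G M_p / (v_rel |b|) / |b|
    return tuple(pref * c for c in b_vec)
```

For the binary's internal evolution it is the differential kick $\Delta{\bf v}_{1}-\Delta{\bf v}_{2}$ between the two stars that changes the orbit, so a distant encounter with ${\bf b}_{1}\approx{\bf b}_{2}$ has little effect.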
Binaries are evolved in time between encounters. For bound binaries the time
between encounters is much longer than the period of the binary, so we do this
by taking a random value for the mean anomaly between $0$ and $2\pi$ and
converting this (via Kepler’s equation) to a future true anomaly. The
hyperbolic orbits of unbound binaries are not periodic, so in this case we
evolve the binary’s eccentric anomaly forwards in time exactly. The position
and velocity vectors of the two stars before each encounter are calculated
from their semi-major axis, eccentricity and orbital phase (true anomaly).
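For bound binaries, the phase randomisation described above amounts to drawing a mean anomaly uniformly and inverting Kepler's equation; a sketch (the Newton solver and conversion formula are standard, not taken from the paper):

```python
import math
import random

def mean_to_true_anomaly(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e sin E by Newton iteration, then
    convert the eccentric anomaly E to a true anomaly (bound orbit, e < 1)."""
    E = M if e < 0.8 else math.pi          # standard starting guess
    for _ in range(100):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return 2.0 * math.atan2(math.sqrt(1.0 + e) * math.sin(E / 2.0),
                            math.sqrt(1.0 - e) * math.cos(E / 2.0))

# randomise the orbital phase between encounters: M ~ U(0, 2π)
phase = mean_to_true_anomaly(random.uniform(0.0, 2.0 * math.pi), e=0.5)
```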
Fig. 1 shows the final semi-major axis distribution for simulations with a
log-flat initial binary semi-major axis distribution and perturbers with
density $\rho=0.009\,M_{\odot}\,{\rm pc}^{-3}$ and masses $M_{\rm
p}=10,10^{2}$ and $10^{3}M_{\odot}$. It shows both the full binary population
(dot-dashed lines) and also just the binaries which remain bound throughout
the whole simulation (solid lines), i.e. the result that would be obtained by
discarding unbound binaries. We see that for $M_{\rm p}=10^{2}$ and
$10^{3}M_{\odot}$ (green and red lines respectively) the two distributions
differ significantly for $a\gtrsim 10^{4}\,{\rm au}$, and hence discarding
unbound binaries significantly underestimates the abundance of the widest
observed apparent binaries. As mentioned previously, Jiang & Tremaine (2010)
find that disrupted binaries in the Galactic disk have very small relative
velocities. For perturbers larger than $\sim 1M_{\odot}$, however, the
increase in relative velocity due to encounters is more significant (Yoo et
al., 2004, Eq. A2). We note that our results for binaries which remain bound
throughout are in good agreement with previous work by Yoo et al. (2004) and
Monroy-Rodríguez & Allen (2014).
The large abundance of unbound wide binaries for $M_{\rm p}=10^{3}M_{\odot}$
is likely due to the low number density of perturbers, which decreases with
increasing perturber mass (for constant perturber mass density). Even though
encounters with $M_{\rm p}=10^{3}M_{\odot}$ are more likely to break the
binaries, multiple encounters are required to give the binaries sufficient
relative velocity to drift apart within the timescale of the simulation. This
may also explain why for $M_{\rm p}=10M_{\odot}$ there are very few unbound
binaries; these binaries have experienced a large number of encounters giving
them sufficient relative velocity to drift far apart by the end of the
simulation.
#### 2.2.2 Orbits of binaries
Figure 2: The probability distribution of the time-averaged dark matter
density calculated along the orbits of the 160 binaries from Allen & Monroy-
Rodríguez (2014) for which it is possible to calculate orbits. The orange
vertical line shows the dark matter density at the Solar radius,
$0.00754\,M_{\odot}\,{\rm pc}^{-3}$.
It is useful to calculate the orbits of the wide binaries within the MW
potential for two reasons. Firstly each binary experiences an orbit-dependent,
time-varying, DM density. This can be taken into account by finding the time-
averaged DM density along each binary orbit, and scaling the constraint on the
perturber density by the mean time-averaged DM density divided by the value of
the local DM density (Quinn et al., 2009). Secondly, binaries will experience
perturbations from stars when passing through the Galactic disk, and hence
binaries which spend the smallest fraction of their orbits within the Galactic
disc are more powerful for constraining perturbers in the halo. Monroy-
Rodríguez & Allen (2014) classified the binaries as ‘most halo-like’ according
to the fraction of time their orbit spends within the disc ($|z|<500\,{\rm
pc}$).
We calculated the binary orbits for the 160 binaries in the Allen & Monroy-
Rodríguez (2014) catalogue (online data from
https://cdsarc.cds.unistra.fr/viz-bin/cat/J/ApJ/790/158) which have sufficient
data to do this, using the galpy Python package (Bovy, 2015). For each binary
we use the most recent data from the SIMBAD database (Wenger et al., 2000),
usually from GAIA DR2 (Gaia Collaboration, 2018). We used the MWPotential2014
model in galpy, which has a Navarro-Frenk-White density profile (Navarro et
al., 1997) for the MW halo, along with potentials for the disk and bulge.
While this model is not intended to be the best current model of the MW, its
parameters are similar to those obtained from, e.g., fits to rotation curve
data (Eilers et al., 2019), and it is sufficiently accurate for our purpose.
We find the mean time-averaged DM density for the 160 binaries is $\sim 40\%$
larger than the DM density at the solar radius. Quinn et al. (2009)
found substantially smaller time-averaged DM densities for the widest binaries
that they studied. However, like Monroy-Rodríguez & Allen (2014), we find that
the orbit for NLTT10536 reaches a maximum $z$ value of around $5\,{\rm kpc}$,
whereas the orbit calculated by Quinn et al. (2009) extended to $z\approx
40\,{\rm kpc}$. Also, using the most recent determination of its distance,
proper motion and radial velocity, we find an orbit for NLTT16394 which is
confined to smaller values of $z$ and $R$ than previously found (Monroy-
Rodríguez & Allen, 2014; Quinn et al., 2009).
The probability density of the time-averaged dark matter densities for the 160
binaries for which it is possible to calculate orbits is shown in Fig. 2. The
distribution of time-averaged DM densities experienced by the binaries is not
too wide (full width at half maximum $0.007\,M_{\odot}\,{\rm pc}^{-3}$). This
suggests
that simply scaling the constraint on the perturber density by the mean time-
averaged DM density should capture the effect of the varying DM density
experienced by the binaries.
Figure 3: The best fit final projected separation distribution (green line)
compared with the observed separation distribution (blue crosses). The
corresponding initial distribution (orange line), which has parameters
$\alpha=1.26$ and $A=1.00$ is also shown. The best fit perturber mass and
density are $M_{\rm p}=30\,M_{\odot}$ and $\rho=0.012\,M_{\odot}\,{\rm
pc}^{-3}$ respectively.
### 2.3 Comparison with observations
#### 2.3.1 Initial semi-major axis distribution
A model is required for the initial semi-major axis distribution from which
the current distribution has evolved. Unfortunately, it is
extremely unclear what that initial distribution should be. Previous work on
wide binary disruption (Yoo et al., 2004; Quinn et al., 2009; Monroy-Rodríguez
& Allen, 2014; Weinberg et al., 1987; Jiang & Tremaine, 2010) used a power law
distribution, $\propto a^{-\alpha}$, which is the simplest generalisation of
Öpik’s Law, a log-flat distribution. It is not at all obvious that this simple
distribution is a good model for the initial wide binary semi-major axis
distribution (see also Tian et al., 2020).
Binary semi-major axis distributions usually seem to follow a roughly log-
normal distribution with a peak at tens to hundreds of au depending on the
primary mass (see e.g. Raghavan et al., 2010; Duchêne & Kraus, 2013; Ward-
Duong et al., 2015). The best understood sample of binary separations is that
of local field G dwarfs (Raghavan et al., 2010), which have a log-normal
separation distribution that peaks at $\sim 30$ au, with a variance of 1.5 in
the log (so roughly two thirds of systems lie between 1 and 1000 au).
Local field G dwarfs have a few per cent of very wide binaries beyond $10^{4}$
au, which is usually modelled as the exponential tail of the G dwarf log-
normal. However, it is not clear that this is a good way of modelling the wide
binary tail. The formation mechanism(s) of very wide binaries, with semi-major
axis $>10^{4}$ au, are not understood. The peaks of binary distributions (at
tens to hundreds of au) are thought to arise from core and/or disc
fragmentation during star formation (see Goodwin et al., 2007; Duchêne &
Kraus, 2013; Reipurth et al., 2014). However, systems with separations
$>10^{4}$ au are much wider than the size of star forming cores and so it is
uncertain how they arise. The most likely mechanism suggested so far is ‘soft
capture’ (Kouwenhoven et al., 2010; Moeckel & Bate, 2010; Moeckel & Clarke,
2011), where a wide binary is formed by the chance proximity of two stars with
low relative velocities during the dissolution of a star cluster or star
forming region.
Figure 4: Two sigma constraints on the perturber density, $\rho$, as a
function of the perturber mass, $M_{\rm p}$. The orange and blue lines show
our constraints for $A=1$ (initial binary semi-major axis distribution is a
pure power law) and $0<A<1$ (allowing a varying fraction of the initial
distribution to be log-normal) respectively. The dotted and solid green lines
are the Monroy-Rodríguez & Allen (2014) constraints for their 25 and 100 most
halo like binaries respectively.
Simulations of soft capture show that the rate is low, but that very wide
binaries can be formed. Griffiths (2019) carried out simulations of the
dissolution of clusters with different levels of (fractal) substructure in the
initial star cluster (c.f. Kouwenhoven et al., 2010). From their simulations
we find that a power law distribution is a good fit to the wide binaries
formed via soft capture (see e.g. their Fig. 5.7), with the slope decreasing
from $\alpha=0.9$ to $0.7$ as the level of substructure decreases. This could
well appear like an exponential tail in the broader distribution of
separations (as current data is too poor to show any features of different
formation mechanisms).
How many wide binaries we would expect is another unknown. The fraction of
wide binaries in the local field G dwarf population is a few per cent
(depending on exactly where one draws the line for wide binaries, see e.g.
Tokovinin & Lépine (2012)). However, the local field population should have
been processed to some degree by other field stars in exactly the same way a
PBH population would process the halo binaries. Therefore this provides a
lower limit on wide binary production in what are now Galactic disc field
stars. If we assume soft capture as the mechanism then we would not expect a
metallicity dependence of the primordial wide binary fraction (El-Badry & Rix
(2019) find a very slight excess of metal-rich field wide, $5\,000-50\,000$
au, binary systems over metal-poor systems, but the two are very similar).
Therefore as well as considering a pure power law for the initial binary semi-
major axis distribution (motivated by our fits to simulations of soft
capture), we also study an initial distribution where in addition primordial
binaries make up a variable fraction, $1-A$, of the total population between
$a_{\mathrm{min}}=30$ au and $a_{\mathrm{max}}=2\times 10^{4}$ au. We assume
that the primordial binaries have a log-normal distribution with mean
$\mu=100$ au and log width $\sigma=1.5$ (which is closer to the local pre-main
sequence binary population than the local field, see Duchêne & Kraus (2013)).
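Sampling from this mixed initial distribution might look like the sketch below. We assume the power-law component spans the same $10$ to $10^{5.5}$ au range used earlier, and that the log width $\sigma=1.5$ is in $\log_{10}(a/{\rm au})$; both are our reading, not stated explicitly here.

```python
import math
import random

def sample_powerlaw(alpha, a1, a2, rng):
    """Inverse-CDF draw from p(a) ∝ a^-alpha on [a1, a2]."""
    u = rng.random()
    if abs(alpha - 1.0) < 1e-9:                      # log-flat limit
        return a1 * (a2 / a1) ** u
    t = 1.0 - alpha
    return (a1**t + u * (a2**t - a1**t)) ** (1.0 / t)

def sample_initial_a(A, alpha, rng=random,
                     a1=10.0, a2=10**5.5,            # power-law range (assumed)
                     mu=100.0, sig=1.5,              # log-normal parameters
                     amin=30.0, amax=2e4):           # log-normal truncation
    """Semi-major axis from the mixture: fraction A from the soft-capture
    power law, fraction 1-A from the truncated primordial log-normal."""
    if rng.random() < A:
        return sample_powerlaw(alpha, a1, a2, rng)
    while True:                                      # rejection for truncation
        loga = rng.gauss(math.log10(mu), sig)        # log10 width assumed
        if math.log10(amin) <= loga <= math.log10(amax):
            return 10.0 ** loga
```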
#### 2.3.2 Binary separations
The observed separation of a system is the angular separation, which depends
on its semi-major axis, eccentricity, phase, inclination, orientation, and
distance. From a single observation of a separation on the sky it is
impossible to determine the true semi-major axis in anything other than a
purely statistical way. Yoo et al. (2004) calculated a theoretical angular
separation distribution by convolving the projected separation distribution of
their simulated binaries with their assumed (inverse) distance distribution.
Monroy-Rodríguez & Allen (2014) instead compared the semi-major axis
distribution of simulated and observed binaries, using a statistical
relationship between semi-major axis and angular separation to estimate the
observed semi-major axes.
The problem with using a statistical relationship between the instantaneous
separation and the semi-major axis is that it only holds for a ‘typical’
binary. On average, the semi-major axis of a binary is slightly larger than
the observed separation (how much larger depends on the assumed eccentricity
distribution). However, some binaries (high eccentricity systems at apastron,
oriented such that we see the 3D separation in 2D) will be observed with a
separation of approximately twice the semi-major axis. Such systems are rare,
but will tend to fall at the widest extreme of the distribution. Therefore, at
the widest end of the distribution this would tend to over-estimate the semi-
major axes. For this reason we compare the projected separations of our
theoretical distribution with the observed distribution, by randomising the
viewing angles, rather than attempting to turn the observed separation
distribution into a semi-major axis distribution.
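The viewing-angle randomisation can be sketched as follows (our construction, for bound binaries): the instantaneous separation follows the orbit equation $r=a(1-e^{2})/(1+e\cos\nu)$ with a time-weighted phase (time spent near $\nu$ scales as $r^{2}$), and an isotropically oriented separation vector projects to $r\sin\theta$ with $\cos\theta$ uniform.

```python
import math
import random

def projected_separation(a, e, rng=random):
    """Projected (on-sky) separation of a binary with semi-major axis a and
    eccentricity e, for a random orbital phase and isotropic viewing angle."""
    # time-weighted orbital phase: time spent near true anomaly ν scales as
    # r², so draw ν uniformly and accept with probability (r / r_max)²
    r_max = a * (1.0 + e)
    while True:
        nu = rng.uniform(-math.pi, math.pi)
        r = a * (1.0 - e * e) / (1.0 + e * math.cos(nu))
        if rng.random() < (r / r_max) ** 2:
            break
    # the separation vector is isotropically oriented, so the angle θ to
    # the line of sight has cos θ ~ U(-1, 1); on the sky we see r sin θ
    cos_t = rng.uniform(-1.0, 1.0)
    return r * math.sqrt(1.0 - cos_t * cos_t)
```

Note that high-eccentricity systems near apastron can indeed return separations up to nearly $2a$, the effect described above.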
To calculate the predicted separation distribution for a given initial semi-
major axis distribution, we use the same scattering matrix formalism as Yoo et
al. (2004). Since each binary evolves independently, the expected number
of binaries with projected separation $r_{j}$, $P(r_{j},M_{\rm p},\rho)$, is
given by
$P(r_{j},M_{\rm p},\rho)\propto\sum_{i}a_{i}\,S_{ij}(M_{\rm p},\rho)\,q(a_{i}),$ (3)
where $q(a)$ is the probability density of the initial semi-major axis
distribution and the scattering matrix, $S_{ij}(M_{\rm p},\rho)$, is the
number of simulated binaries with initial semi-major axis in the $i$-th
logarithmically spaced bin centered at $a_{i}$ that have final projected
separation $r_{j}$ for a simulation with perturber mass $M_{\rm p}$ and dark
matter density $\rho$. The factor of $a_{i}$ appears because our semi-major
axis bins are logarithmically spaced.
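The scattering-matrix bookkeeping can be sketched as follows; the bin conventions here are illustrative (the paper's actual binning is not specified in this section).

```python
import math

def log_bin(x, lo, hi, nbins):
    """Index of the logarithmically spaced bin containing x (or None)."""
    if not (lo <= x < hi):
        return None
    return int(nbins * (math.log10(x) - math.log10(lo))
               / (math.log10(hi) - math.log10(lo)))

def build_scattering_matrix(pairs, lo, hi, nbins):
    """S[i][j] = number of simulated binaries starting in semi-major-axis
    bin i that end with projected separation in bin j."""
    S = [[0] * nbins for _ in range(nbins)]
    for a_init, r_final in pairs:
        i = log_bin(a_init, lo, hi, nbins)
        j = log_bin(r_final, lo, hi, nbins)
        if i is not None and j is not None:
            S[i][j] += 1
    return S

def predicted_counts(S, q, centers):
    """Fold S with the initial density q(a), summing over initial bins i,
    with a factor of the bin centre from the logarithmic spacing."""
    nbins = len(S)
    P = [sum(centers[i] * S[i][j] * q(centers[i]) for i in range(nbins))
         for j in range(nbins)]
    total = sum(P)
    return [p / total for p in P] if total > 0 else P
```

With an identity scattering matrix and a log-flat $q(a)\propto 1/a$, the predicted counts come out uniform across occupied bins, as expected.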
#### 2.3.3 Statistical analysis
Previous work has used likelihood analysis (Yoo et al., 2004) or the
Kolmogorov-Smirnov (K-S) test (Monroy-Rodríguez & Allen, 2014) to compare
simulated and observed binary distributions. Both of these methods have
drawbacks for this analysis. Likelihood analysis does not provide information
about how good a fit the best fit is, while the K-S test is less sensitive to
differences in the extremes of distributions, which is suboptimal as the
widest binaries are most affected by perturbers. The classical $\chi^{2}$ test
is not valid if the number of samples in any bin is small, which is the case
for the widest binaries. We therefore use a modified version of the $\chi^{2}$
test, which provides $p$-values, is valid for small sample sizes, and is
equally sensitive to deviations across the whole range of the distributions.
The modified $Y^{2}$ statistic (Lucy, 2000), is rescaled so that its variance
is fixed to be equal to twice its mean, and hence the standard translation of
$\chi^{2}$ values into $p$-values is valid, even for small samples. The
$Y^{2}$ statistic is defined as
$Y^{2}=\nu+\sqrt{\frac{2\nu}{2\nu+\Sigma_{i}n_{i}^{-1}}}\left(\chi^{2}-\nu\right)\,,$
(4)
where $n_{i}$ is the expected number of binaries in the $i$-th bin. The number
of degrees of freedom, $\nu$, is equal to the number of bins minus the number
of fitted parameters plus one as the $n_{i}$’s have been normalised to match
the total observed number of binaries. The $\chi^{2}$ statistic is given, as
usual, by
$\chi^{2}=\sum_{i}\frac{(N_{i}-n_{i})^{2}}{n_{i}}\,,$ (5)
where $N_{i}$ is the number of observed binaries in the $i$-th bin and the sum
is over all bins with non-zero $N_{i}$.
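For concreteness, Eqs. (4) and (5) can be computed together; this is a minimal sketch (the bin counts below are hypothetical, and the number of fitted parameters is passed in explicitly):

```python
import numpy as np

def y2_statistic(N_obs, n_exp, n_params=2):
    """Lucy (2000) Y^2: a chi^2 variant rescaled so its variance is twice its
    mean, making the usual chi^2 -> p-value translation valid for small samples."""
    mask = N_obs > 0                        # Eq. (5): sum over bins with N_i > 0
    N, n = N_obs[mask], n_exp[mask]
    chi2 = np.sum((N - n) ** 2 / n)         # Eq. (5)
    nu = len(N_obs) - (n_params + 1)        # dof: bins minus (params + 1)
    return nu + np.sqrt(2 * nu / (2 * nu + np.sum(1.0 / n))) * (chi2 - nu)

# Hypothetical counts for 7 bins: a perfect match gives chi^2 = 0, so Y^2 < nu
N = np.array([10.0, 8, 6, 4, 3, 2, 1])
print(y2_statistic(N, N))   # ~0.50
```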
## 3 Results and discussion
We calculate the $Y^{2}$ statistic as a function of perturber mass, $M_{\rm
p}$, and density, $\rho$, the fraction of binaries that initially have a
power-law semi-major axis distribution, $A$, and the slope of the power law,
$\alpha$. For each $M_{\rm p}$ and $\rho$ combination we find the minimum
value of $Y^{2}$, $Y^{2}_{\text{min}}(M_{\rm p},\rho)$. We first check that
the best fit is a sufficiently good fit by comparing the global minimum value
of $Y^{2}$, $Y^{2}_{\text{min}}$, with the number of degrees of freedom,
$\nu$. Here we have two fitted parameters ($A$ and $\alpha$) and seven bins,
so $\nu=7-(2+1)=4$. The global best fit has $\alpha=1.26$, $A=1$, $M_{\rm
p}=30M_{\odot}$ and $\rho=0.012M_{\odot}\,{\rm pc}^{-3}$. It has $Y^{2}_{\rm
min}<3$ and hence is indeed a good fit to the data. Fig. 3 compares the best
fit projected separation distribution with the observed separation
distribution, and also shows the corresponding initial separation
distribution.
Next we calculate constraints on $M_{\rm p}$ and $\rho$ by finding the pairs
of values for which
$\Delta Y^{2}(M_{\rm p},\rho)=Y^{2}_{\text{min}}(M_{\rm p},\rho)-Y^{2}_{\text{min}}=\text{cdf}^{-1}\left(1-p\right),$ (6)
where $p=0.05$ for $2\sigma$ constraints, and cdf is the cumulative
distribution function of the $\chi^{2}$ distribution with 2 degrees of
freedom, since we are now finding constraints on two parameters ($M_{\rm p}$
and $\rho$). We do this for both $A=1$, i.e. a pure power law distribution for
the initial binary distribution, and $0<A<1$, i.e. allowing a varying fraction
of the distribution to be log-normal. Finally, as discussed in Sec. 2.2.2, we
rescale our constraints by a factor of $0.71$ to take into account the average
DM density experienced by the binaries along their orbits.
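These thresholds follow directly from the $\chi^{2}$ distribution; a quick check using its closed-form CDFs for 2 and 4 degrees of freedom:

```python
from math import exp, log

# chi^2 with 2 dof has CDF 1 - exp(-x/2), so the Delta Y^2 threshold for the
# p = 0.05 (2 sigma) contour over two jointly fitted parameters is -2 ln p.
delta_y2 = -2 * log(0.05)
print(round(delta_y2, 2))        # 5.99

# Goodness of fit: chi^2 with nu = 4 dof has survival function
# exp(-x/2) * (1 + x/2); Y^2_min < 3 therefore corresponds to p > 0.5,
# confirming that the global best fit is indeed a good fit.
p_value = exp(-3.0 / 2) * (1 + 3.0 / 2)
print(round(p_value, 2))         # 0.56
```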
Our constraints on the perturber mass, $M_{\rm p}$, and density, $\rho$, are
shown in Fig. 4. We compare our (very similar) $2\sigma$ constraints for $A=1$
(orange line) and $0<A<1$ (blue line) with the Monroy-Rodríguez & Allen (2014)
constraints from their 100 and 25 ‘most halo like’ binary samples (green solid
and dashed lines respectively). For values of $M_{\rm p}$ larger than those
plotted, the Monroy-Rodríguez & Allen (2014) constraints are expected to be
roughly constant.
We tested the validity of comparing 25 observed binaries with our simulations
and found that randomly choosing groups of 25 binaries resulted in constraints
that varied significantly. This is due to the large stochasticity in the
distribution of observed angular separations from a semi-major axis
distribution when the number of binaries is small. This suggests that a much
larger sample of halo wide binaries is required to provide any meaningful
constraints. Therefore we only present our constraints calculated using the
full sample of binaries to avoid this stochasticity. Fig. 7 of Monroy-
Rodríguez & Allen (2014) indicates that they were able to calculate reliable
constraints from small sub-populations of binaries. This difference is likely
to be because they compare ‘virtual’ binaries, constructed from $500-10,000$
simulated binaries, with the semi-major axis of observed binaries calculated
by assuming there is a one-to-one relationship between projected separation
and semi-major axis. This assumption is an oversimplification that does not
take into account the varied phases and orientations of the observed binaries.
Our constraint is significantly weaker than that from Monroy-Rodríguez & Allen
(2014). We find $f_{\rm co}<1$ for $M_{\rm p}\approx 300\,M_{\odot}$,
tightening with increasing $M_{\rm p}$ to $f_{\rm co}<0.26$ for $M_{\rm
p}\gtrsim 1000\,M_{\odot}$. An obvious question is ‘why are our constraints so
much weaker than those of Monroy-Rodríguez & Allen (2014)?’. To restate the
obvious: compact objects destroy wide binaries, and the wider the binary, the
more susceptible to destruction it is. Therefore, the constraints on the
allowed compact object density are extremely sensitive to the number of very
wide binaries, and the exact values of the semi-major axes. We include two
effects that Monroy-Rodríguez & Allen (2014) did not, both of which act to
increase the number of very wide binaries predicted for any particular initial
semi-major axis distribution and perturber population. Consequently, the
abundance of perturbers required to reduce the abundance of the widest
binaries below that which we observe is larger.
Firstly, we do not discard unbound binaries. This means there are systems with
wide separations which, from a single observation, would be indistinguishable
from a (very weakly) bound ‘true’ binary. This increases the number of very
wide systems that could potentially be observed.
Secondly, by projecting our theoretical distribution into observed separations
we correctly allow for systems to be observed where the separation is
significantly larger than the semi-major axis (up to a factor of two for bound
binaries, and greater than two for unbound systems). Such systems are rare,
but by definition fall at the widest extreme of the distribution which is what
sets the constraints.
The inclusion of unbound binaries in the final distribution contributes the
most to weakening the constraints. Fig. 1 shows that at the largest semi-major
axis, the total number of binaries is at least an order of magnitude larger than the
number of bound binaries for $M_{\rm p}>100M_{\odot}$. The next largest
contribution is from the initial semi-major axis distribution. For perturber
masses $M_{\rm p}>1000M_{\odot}$, the fraction of dark matter that could
consist of compact objects (Fig. 4) increases from 0.1 to 0.3 when comparing a
variable distribution ($0<A<1$) with a power law distribution ($A=1$).
Comparing projected separations, and therefore taking into account the large
apastron distance of wide binaries, is likely to have had a relatively small
effect on the final constraints. While the binaries at the largest
separations, which are most susceptible to this effect, are the most important
for calculating constraints, the increase in binary separation due to this
effect is approximately a factor of 2 in most cases.
## 4 Summary
We have revisited the theoretical modelling involved in placing constraints on
the fraction of the MW halo in compact objects from the dynamical effects on
the semi-major axis distribution of wide binary stars. We have improved on
previous work in several ways. We have used a physically motivated model for
the initial binary semi-major axis, taken into account the uncertainty in
relating semi-major axis to observed angular separation, and retained unbound
binaries. We compare simulated binary separations with observations using the
$Y^{2}$ statistic (Lucy, 2000). This retains the advantages of the $\chi^{2}$
statistic, namely that it allows the goodness of fit of the best fit to be
checked and (unlike the K-S test) is sensitive to deviations at the extremes
of the
distributions.
We find that with these improvements the constraints obtained using the Allen
& Monroy-Rodríguez (2014) wide binary sample are significantly weakened. We
find $f_{\rm co}<1$ for $M_{\rm co}\approx 300\,M_{\odot}$, tightening with
increasing $M_{\rm co}$ to $f_{\rm co}<0.26$ for $M_{\rm co}\gtrsim
1000\,M_{\odot}$, whereas Monroy-Rodríguez & Allen (2014) found $f_{\rm co}<1$
for $M_{\rm p}\sim 10\,M_{\odot}$, tightening with increasing $M_{\rm co}$ to
$f_{\rm co}<0.1$ for $M_{\rm co}\gtrsim 100\,M_{\odot}$. It is therefore
crucial that these modelling improvements are implemented when calculating
constraints on compact objects using future improved catalogs of halo wide
binaries.
## Acknowledgements
ET was supported by a United Kingdom Science and Technology Facilities Council
(STFC) studentship. AMG is supported by STFC grant ST/P000703/1. For the
purpose of open access, the authors have applied a CC BY public copyright
licence to any Author Accepted Manuscript version arising. This research has
made use of the SIMBAD database, operated at CDS, Strasbourg, France.
## Data Availability
This work is entirely theoretical, and has no associated data.
## References
* Abbott et al. (2016) Abbott B. P., et al., 2016, Phys. Rev. Lett., 116, 061102
* Afshordi et al. (2003) Afshordi N., McDonald P., Spergel D. N., 2003, Astrophys. J. Lett., 594, L71
* Ali-Haïmoud & Kamionkowski (2017) Ali-Haïmoud Y., Kamionkowski M., 2017, Phys. Rev. D, 95, 043534
* Ali-Haïmoud et al. (2017) Ali-Haïmoud Y., Kovetz E. D., Kamionkowski M., 2017, Phys. Rev. D, 96, 123523
* Allen & Monroy-Rodríguez (2014) Allen C., Monroy-Rodríguez M. A., 2014, ApJ, 790, 158
* Bahcall et al. (1985) Bahcall J. N., Hut P., Tremaine S., 1985, ApJ, 290, 15
* Bertone et al. (2005) Bertone G., Hooper D., Silk J., 2005, Phys. Rept., 405, 279
* Binney & Tremaine (2008) Binney J., Tremaine S., 2008, Galactic Dynamics, second edn. Princeton University Press
* Bird et al. (2016) Bird S., Cholis I., Muñoz J. B., Ali-Haïmoud Y., Kamionkowski M., Kovetz E. D., Raccanelli A., Riess A. G., 2016, Phys. Rev. Lett., 116, 201301
* Blaineau et al. (2022) Blaineau T., et al., 2022, arXiv e-prints, p. arXiv:2202.13819
* Bovy (2015) Bovy J., 2015, ApJS, 216, 29
* Brandt (2016) Brandt T. D., 2016, Astrophys. J., 824, L31
* Carr & Kuhnel (2020) Carr B., Kuhnel F., 2020, Ann. Rev. Nucl. Part. Sci., 70, 355
* Carr et al. (2016) Carr B., Kuhnel F., Sandstad M., 2016, Phys. Rev. D, 94, 083504
* Chanamé & Gould (2004) Chanamé J., Gould A., 2004, ApJ, 601, 289
* Coronado et al. (2018) Coronado J., Sepúlveda M. P., Gould A., Chanamé J., 2018, MNRAS, 480, 4302
* Diego et al. (2018) Diego J. M., et al., 2018, Astrophys. J., 857, 25
* Duchêne & Kraus (2013) Duchêne G., Kraus A., 2013, ARA&A, 51, 269
* Eilers et al. (2019) Eilers A.-C., Hogg D. W., Rix H.-W., Ness M. K., 2019, ApJ, 871, 120
* El-Badry & Rix (2019) El-Badry K., Rix H.-W., 2019, MNRAS, 482, L139
* Esteban-Gutiérrez et al. (2022) Esteban-Gutiérrez A., Mediavilla E., Jiménez-Vicente J., Agües-Paszkowsky N., Muñoz J. A., Heydenreich S., 2022, ApJ, 929, L17
* Gaggero et al. (2017) Gaggero D., Bertone G., Calore F., Connors R. M. T., Lovell M., Markoff S., Storm E., 2017, Phys. Rev. Lett., 118, 241101
* Gaia Collaboration (2018) Gaia Collaboration 2018, A&A, 616, A1
* Goodwin et al. (2007) Goodwin S. P., Kroupa P., Goodman A., Burkert A., 2007, in Reipurth B., Jewitt D., Keil K., eds, Protostars and Planets V. p. 133 (arXiv:astro-ph/0603233)
* Green & Kavanagh (2021) Green A. M., Kavanagh B. J., 2021, J. Phys. G, 48, 043001
* Griffiths (2019) Griffiths D., 2019, PhD thesis, University of Sheffield, Sheffield, UK, https://etheses.whiterose.ac.uk/23547/
* Hawking (1971) Hawking S., 1971, Mon. Not. Roy. Astron. Soc., 152, 75
* Inman & Ali-Haïmoud (2019) Inman D., Ali-Haïmoud Y., 2019, Phys. Rev. D, 100, 083528
* Jiang & Tremaine (2010) Jiang Y.-F., Tremaine S., 2010, MNRAS, 401, 977
* Kouwenhoven et al. (2010) Kouwenhoven M. B. N., Goodwin S. P., Parker R. J., Davies M. B., Malmberg D., Kroupa P., 2010, MNRAS, 404, 1835
* Lucy (2000) Lucy L. B., 2000, MNRAS, 318, 92
* Moeckel & Bate (2010) Moeckel N., Bate M. R., 2010, MNRAS, 404, 721
* Moeckel & Clarke (2011) Moeckel N., Clarke C. J., 2011, MNRAS, 415, 1179
* Monroy-Rodríguez & Allen (2014) Monroy-Rodríguez M. A., Allen C., 2014, ApJ, 790, 159
* Nakamura et al. (1997) Nakamura T., Sasaki M., Tanaka T., Thorne K. S., 1997, Astrophys. J. Lett., 487, L139
* Navarro et al. (1997) Navarro J. F., Frenk C. S., White S. D. M., 1997, Astrophys. J., 490, 493
* Oelkers et al. (2017) Oelkers R. J., Stassun K. G., Dhital S., 2017, AJ, 153, 259
* Oh et al. (2017) Oh S., Price-Whelan A. M., Hogg D. W., Morton T. D., Spergel D. N., 2017, AJ, 153, 257
* Poulin et al. (2017) Poulin V., Serpico P. D., Calore F., Clesse S., Kohri K., 2017, Phys. Rev. D, 96, 083524
* Quinn et al. (2009) Quinn D., Wilkinson M., Irwin M., Marshall J., Koch A., Belokurov V., 2009, Mon. Not. Roy. Astron. Soc., 396, 11
* Raghavan et al. (2010) Raghavan D., et al., 2010, ApJS, 190, 1
* Reipurth et al. (2014) Reipurth B., Clarke C. J., Boss A. P., Goodwin S. P., Rodríguez L. F., Stassun K. G., Tokovinin A., Zinnecker H., 2014, in Beuther H., Klessen R. S., Dullemond C. P., Henning T., eds, Protostars and Planets VI. p. 267 (arXiv:1403.1907), doi:10.2458/azu_uapress_9780816531240-ch012
* Ricotti et al. (2008) Ricotti M., Ostriker J. P., Mack K. J., 2008, Astrophys. J., 680, 829
* Sasaki et al. (2016) Sasaki M., Suyama T., Tanaka T., Yokoyama S., 2016, Phys. Rev. Lett., 117, 061101
* Tian et al. (2020) Tian H.-J., El-Badry K., Rix H.-W., Gould A., 2020, ApJS, 246, 4
* Tokovinin & Lépine (2012) Tokovinin A., Lépine S., 2012, AJ, 144, 102
* Tyler (2022) Tyler E., 2022, PhD thesis, University of Nottingham, Nottingham, UK, eprints.nottingham.ac.uk/id/eprint/69110
* Ward-Duong et al. (2015) Ward-Duong K., et al., 2015, MNRAS, 449, 2618
* Weinberg et al. (1987) Weinberg M. D., Shapiro S. L., Wasserman I., 1987, ApJ, 312, 367
* Wenger et al. (2000) Wenger M., et al., 2000, A&AS, 143, 9
* Yoo et al. (2004) Yoo J., Chaname J., Gould A., 2004, Astrophys. J., 601, 311
* Zel’dovich & Novikov (1967) Zel’dovich Y. B., Novikov I. D., 1967, Sov. Astron., 10, 602
* Zumalacarregui & Seljak (2018) Zumalacarregui M., Seljak U., 2018, Phys. Rev. Lett., 121, 141101
* de Salas & Widmark (2021) de Salas P. F., Widmark A., 2021, Rept. Prog. Phys., 84, 104901
# UAVs-Enabled Maritime Communications: Opportunities and Challenges
Muhammad Waseem Akhtar, Nasir Saeed
###### Abstract
Next-generation communication systems are expected to integrate terrestrial
and non-terrestrial networks. Maritime communication, in this sense, can play
an important role in marine activities. Unmanned aerial vehicles (UAVs) have
been proposed as aerial base stations for terrestrial networks. However, the
application of UAVs to maritime communication is still largely unexplored. In
this paper, we highlight different aspects of UAV-based maritime
communication, including use cases, channel characteristics, maritime network
design prospects, research challenges, and future directions.
###### Index Terms:
Maritime communication, unmanned aerial vehicles (UAVs), blockchains, machine
learning, artificial intelligence, massive MIMO.
## I Introduction
It is well known that around 70$\%$ of the surface of our planet is covered by
seawater, and over 90$\%$ of the world’s products are moved by a commercial
fleet consisting of around 46,000 ships [uav1]. Thousands of ships are out of
sight of the shore or any other vessel at all times. Therefore, reliable
maritime communications are considered to play a significant role in maritime
operations. However, current maritime systems comprise mainly either legacy
analog very high frequency (VHF) radios with too little bandwidth or satellite
communication (SatCom) networks with too high a cost to support the
International Maritime Organization (IMO) eNavigation concept, which needs
wideband, low-cost communication systems to achieve better security,
surveillance, and environmental control, efficient working conditions for the
crew on board, and Internet services for passengers. On the other hand,
wireless broadband access (WBA), mainly consisting of wireless fidelity
(Wi-Fi) and other 4th-generation wireless technologies, is regarded as having
the potential to fulfill the IMO eNavigation concept because it supports the
required features: high data rate and bandwidth efficiency, mobility with
low-latency handover, good security and quality of service (QoS), and low-cost
deployment. However, it is questionable how WBA can work optimally in maritime
areas, and hence research on radio propagation over the sea is vital.
In the previous decades, the world has experienced an ever-growing booming
marine economy. Conventional sectors such as fisheries and transportation have
been continually developed, while new maritime activities including oil
exploitation, environment monitoring, and tourism have evolved. All these
require a greater data rate and more dependable wireless communications.
Existing maritime communication systems mainly rely on satellite and/or
customized communication systems operating in HF/very high frequency (VHF)
bands, such as the navigational telex (NAVTEX) system, the automatic
identification system (AIS), as well as the evolving VHF data transfer system
(VDES)[uav1]. Therefore, the typical marine networks are commonly seen as
integrated satellite-ground-sea networks with mesh topology among the users.
The satellite-based system, albeit undergoing rapid development that
considerably boosts its potential to deliver high-speed data coverage over a
wide area, suffers from unavoidably large propagation delays and expensive
implementation costs. On the other hand, the HF/VHF-based systems, mainly
utilized for vessel identification, tracking/monitoring, and security
alerting, also have inherent difficulties such as the demand for special
devices and insufficient bandwidth. To improve user experience, a near-coast
marine user should be able to effortlessly access high-speed terrestrial
mobile networks (such as fourth-generation (4G) or fifth-generation (5G)
networks). As a result, near-coast maritime communications have garnered
substantial interest, where the major purpose is to provide wide-area
broadband coverage for offshore users with the aid of terrestrial base
stations (BSs) and/or relays, using technologies such as WiFi and LTE [uav2].
Similarly, some other works have examined employing an unmanned aerial vehicle
(UAV) in marine communications. Furthermore, 5G technology such as massive
multiple-input multiple-output (MIMO), millimeter wave (mmWave), and user-
centric network, can potentially deliver a greater data rate coverage to
widely spread maritime users. The implementation of these physical layer
approaches and network designs is expected to be a promising direction in
future maritime communication systems.
### I-A Background
With the rapid development of marine industries such as marine transportation,
marine fishing, marine exploration, marine tourism, marine rescue, and marine
military deployments in recent years, the number of ships has increased
continuously, resulting in rapid growth of marine business and data volume. It
is vital to boost the performance of marine communication networks in order to
keep pace with the increasing growth of marine activities. At the moment,
maritime satellites are largely utilized to interact with maritime terminals.
While a satellite network can give wider coverage, it suffers from long
propagation times and cannot keep up with the growing maritime communication
and computing requirements at an acceptable cost.
The infancy and youth of radio technology were predominantly tied to nautical
applications. Following his development of the first operational radio
transceiver in 1895, Guglielmo Marconi performed transmission experiments
between two Italian warships off the port of Spezia in 1897, when he managed
to exchange radio signals over a distance of 22 km. Later he resumed his
experiments in England, where on Christmas Eve in 1898 he achieved
radiotelegraphy contact between the “East Goodwin” lightship and South
Foreland Lighthouse in South East England. On 3rd March 1899, the steamship
“RF Matthews” collided with this lightship, which then used its radio to
request aid from ashore. This was the first time a distress call was
transmitted via radio from a ship at sea.
However, despite the significant breakthroughs in radio technology since that
time, innovations in maritime networks are substantially trailing behind their
land equivalent, and fresh solutions are needed to fulfill the approaching
user requirements. Some maritime nations extend over, and economically rely
on, an ocean area nearly six times the size of their landmass. The wide
geographic distances and the economic importance of activities at sea in
remote places necessitate fresh and inventive radio-based solutions. A ship at
sea cannot connect directly to land-
based sites or other ships via cable. It is consequently postulated that
wireless communication is the key solution for effective communication. Before
the introduction of wireless communications, ships at sea could only
communicate within a visible distance and were constrained either to the use
of various light forms and/or flags. Even today signal flags are an important
way of ship-to-ship communication.
Ships began to be equipped with wireless communication gear at approximately
the same time as Marconi’s experiment on intercontinental wireless
communication. At that time, steamships carrying people launched an expanding
need for telegrams. However, there was no organized arrangement for distress
communications. As the “R.M.S. Titanic” was sinking in April 1912, a distress
signal was transmitted by radio. The multiple casualties that nevertheless
ensued sparked the establishment of a treaty addressing distress and safety
communications for ships.
### I-B Vision and Literature Survey
Today’s land-based communications systems are subject to ongoing enhancements
and upgrades of current infrastructure due to the ever-increasing requirement
for widely available, quick, and stable exchange of enormous amounts of
information in real-time. On the other hand, despite efforts to improve,
maritime communications systems continue to trail in this respect and are largely
characterized by low speeds, relatively high costs, and restricted
availability and capacity. Recent advances in marine transport include the
rising digitization of ships and related maritime services and introducing the
notion of autonomous ship operation.
Unlike the commonly utilized Automatic Identification System (AIS), where a
limited amount of preset data is provided to vessels and coast stations within
the range, the digital ship and autonomous ship ideas demand continuous real-
time transmission of vast volumes of digital data. Moreover, since these data
are to be communicated to the land-based stations and operators for further
analysis and processing, satellite communications are employed for this
purpose instead of terrestrial systems. In addition, the necessity for high-
speed and dependable Internet connectivity for both ship’s personnel and
passengers is continuously expanding. Thus, the previously noted constraints
of existing maritime communications systems may prevent adopting and
implementing these revolutionary concepts in the maritime business. Therefore,
modifications and modernization of current systems and installation of new
systems will be necessary. However, modernization necessitates massive
interventions into the equipment and the commitment of significant financial
and material resources.
An additional way of reducing the stress on maritime communications systems
could be to minimize the size of the transmitted data by adopting compression
techniques based on specialized algorithms for shipboard data encoding. These
algorithms could be efficiently implemented in the ship communication system,
thus avoiding substantial interventions and investments.
Figure 1: A depiction of basic network architecture for UAV-aided maritime
communication
### I-C Maritime Networks
The maritime communications infrastructure will handle both the crew’s and
passengers’ networking requirements. Crew communication may be limited to
vessel-to-vessel services (decision support systems, computer-supported
collaborative work for equipment inspection and maintenance, telemedicine
systems, etc.). The services that can be supplied by this infrastructure can
encompass the complete range of network services, including conversational,
messaging, retrieval, and presentation services as described by the ITU I.211
taxonomy. Many of the above categories incorporate real-time multimedia
communications, which impose comparable network requirements.
Wireless communication is the only realistic way for ships at sea to
communicate information with other ships and land-based stations. Before the
introduction of contemporary wireless communications systems, ship
communication was confined to communication within a visible distance
utilizing signals with various lights and flags. Ships began to be equipped
with wireless communication devices with the invention of radiotelegraphs.
Maritime communications have been utilized for three basic purposes: distress
and safety communications, communications in support of navigation, and
general communications. Distress and safety communications include distress
calls and communication during search and rescue (SAR) operations for vessels
in distress. In contrast, communications supporting navigation refer to
exchanging information with surrounding vessels and port managers during a
voyage. Finally, general-purpose communications encompass numerous public
communication services that serve similar functions as on land.
Maritime communications have recently seen an increased transition from analog
to digital communications, as well as a major increase in the requirement to
share larger volumes of data in general. With the implementation of the Global
Maritime Distress and Safety System (GMDSS) in 1999, it became possible to
send digital distress signals automatically via satellite communications,
instead of the already obsolete methods of sending warning or Mayday messages
via telegraph and telephone, respectively. Communications in support of
navigation have also developed with the introduction of digital radio
communications systems, such as AIS, which permits the interchange of
navigational information with neighboring vessels and coast stations via
terrestrial VHF communications. General public communications include using
various Internet services by ship’s crew and passengers, which is achieved
through marine communications satellites.
#### I-C1 Over the Sea Communication Network
In another sense, next-generation maritime communication will be
universal. It will be available to vessels of all sorts and sizes, with a
choice of services and applications customized to their needs. Likewise,
automatic identification systems are anticipated to follow the same direction.
One of the advantages of future marine communication systems will be the
capability to accommodate all prospective users. Additionally, this will
result in an expansion in the worldwide market population from a few thousand
to several million vessels (including pleasure ships), resulting in enormous
economies of scale. Once this future maritime communications system is in
place, the 6G and beyond vision of user-centricity, ongoing invention of new
services, and flexible business models appears to be substantially more
possible, even for ships at sea in the most distant regions of the world.
With their activation in the second half of the twentieth century,
communications satellites became widely employed for long-distance maritime
communications, which until then had been limited to the use of radio waves in
the Medium Frequency/High Frequency (MF/HF) band. Today, mobile communications
via Inmarsat satellites, among others, are employed, giving diverse services,
such as Fleet Xpress, FleetBroadband, and Inmarsat C. In addition to the
initially employed voice communication, data transmission via satellite
communications is becoming increasingly developed. However, satellite
communications still employ a narrow band, which results in a decreased
transmission speed. In addition to the limited bandwidth, the fees for using
satellite communication are still quite high, so this method of communication
is not an effective option for transferring larger volumes of maritime data
across longer distances.
AIS supports 9.6 kbps digital communication over VHF communication
channels utilizing Gaussian minimum shift keying (GMSK) modulation. The AIS-
based communication is used to send digital voyage data acquired aboard the
ship. In addition to delivering pertinent data, the ship’s AIS transmitter may
concurrently collect input from neighboring ships, allowing them to identify
each other. The AIS transceivers use the 156 MHz radio band, with an output
power of up to 12.5 W. Therefore, vessels and stations can receive the AIS
signal within a range of 30–40 km.
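The quoted 30–40 km figure is consistent with simple line-of-sight geometry at VHF; a sketch using the standard 4/3-Earth radio-horizon approximation (the antenna heights below are assumptions for illustration, not values from the text):

```python
from math import sqrt

def vhf_radio_horizon_km(h_tx_m: float, h_rx_m: float) -> float:
    """Standard 4/3-Earth radio-horizon approximation (not from this paper):
    d [km] ~ 4.12 * (sqrt(h_tx) + sqrt(h_rx)), antenna heights in metres."""
    return 4.12 * (sqrt(h_tx_m) + sqrt(h_rx_m))

# Hypothetical antenna heights: two ships with ~15 m masts land near the lower
# end of the quoted 30-40 km AIS range; a taller coastal station extends it.
print(round(vhf_radio_horizon_km(15, 15), 1))   # ~31.9 km
print(round(vhf_radio_horizon_km(15, 50), 1))   # ~45.1 km
```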
#### I-C2 Underwater Communication Networks
A wireless LAN or pico-cellular wireless network aboard the station will offer
crew communication and access to all apps according to the crew member’s
allocated security profile. Additionally, crew communications devices will
contain position-fixing technologies (e.g., WLAN- or GPS/Galileo-based, indoor
GPS) to provide location-based services (LBS) such as man overboard (MOB) and
other distress alarms to occur regardless of the vessel’s location. Crews are
probably to be early users of wearable computing gadgets. Owing to the
sensitivity of his profession and his demand for access to a huge amount of
decision-support information, the seafarer is a likely early adopter of such
technology. Passengers will be able to access infotainment, internet services,
and voice contact with the shore via the ship’s wireless network, using their
own communication handsets. They may be offered LBS as they travel about the
ship, with their location being established either by the ship’s pico-cellular
wireless network or by their own handsets equipped with GPS or other position-
fixing technology. Underwater wireless sensing systems are envisioned for
stand-alone applications and control of autonomous underwater vehicles (AUVs)
and as an adjunct to cabled systems. For example, cabled ocean observatories
are being erected on undersea cables to deploy an enormous fiber-optic network
of sensors (cameras, wave sensors, and seismometers) covering miles of the
ocean floor. These cables can enable communication access points, very much
like cellular base stations are connected to the telephone network, allowing
users to roam and interact from regions where cables cannot reach. Another
example is cabled submersibles, commonly known as remotely operated vehicles
(ROVs). These vehicles, which may weigh more than 10 metric tonnes, are
connected to the mother ship by a cable that can run over several kilometers
and give tremendous power to the remote end, along with high-speed
communication messages. Today, both vehicle technology and sensor technology
are sophisticated enough to motivate the idea of underwater sensor networks.
To convert this theory into reality, however, one must address the challenge
of communications. Underwater communication methods today largely use acoustic
technology. Complementary communication approaches, such as optical and radio-
frequency, or even electrostatic communication, have been proposed for short-
range networks (usually 1–10m), where their very high bandwidth (MHz or more)
can be leveraged. These signals attenuate very rapidly, within a few meters
(radio) or tens of meters (optical), requiring either high-power or big
antennas. Acoustic communications offer wider ranges but are hindered by three
factors: limited and distance-dependent bandwidth, time-varying multipath
propagation, and low speed of sound.
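The low speed of sound is easy to quantify; a minimal sketch (the 1500 m/s figure is a nominal seawater value, an assumption rather than a number from the text):

```python
# Illustrative numbers (assumptions, not from this paper): sound travels at
# roughly 1500 m/s in seawater, about five orders of magnitude slower than
# radio, so round-trip latency dominates underwater protocol design.
SPEED_OF_SOUND_M_S = 1500.0     # nominal seawater value

def acoustic_round_trip_s(range_m: float) -> float:
    """Latency of one request-response exchange over an acoustic link."""
    return 2 * range_m / SPEED_OF_SOUND_M_S

print(acoustic_round_trip_s(3000))     # 4.0 s for a 3 km link
```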
Figure 2: UAV as a use case of reliable maritime communication in dynamically
changing environmental conditions.
## II Use Cases of UAV-Aided Maritime Communication
The following are the major use cases of UAV-aided maritime communication.
### II-A Ubiquitous Transmission and Coverage
An automatic identification system (AIS) ensures the safety of navigation by
automatically delivering identification, tracking, and other information about
a ship to other ships and coastal stations. It is regarded as a complement to
maritime radars, sharing navigational data in the VHF frequency band for
collision avoidance. First and foremost, a maritime MTC system is necessary to enable
ubiquitous connectivity between vessels and shore on a worldwide scale,
notably over open oceans including the most remote regions of the planet such
as the polar regions, to ensure an uninterrupted and consistent presence of
maritime services. Currently, the availability of services in offshore settings is
constrained by a lack of information and communication infrastructures.
Moreover, the services have always been in a campus-style deployment; cross-
region continuity of maritime service remains patchy and even nonexistent.
Ultimately, a global cooperative maritime IoT network is required for the
undisrupted services across organizational, regional, and national boundaries
especially in times of crisis.
### II-B UAV-based Relaying
Generally, one common assumption in legitimate eavesdropping is that the
legitimate monitor may be far away from the suspicious party for concealment
and hence possesses inadequate eavesdropping channels. To deal with such an
issue, various works proposed to utilize jamming launched by the monitor to
lower the suspicious rate, to assist the efficient eavesdropping by the
monitor itself. Specifically, [uav4] first presented a jamming-assisted
eavesdropping approach in a single-hop suspicious system. Subsequently, [uav5]
further considered legitimate eavesdropping in the multi-input multi-output
(MIMO) network, as well as in one-way and two-way relaying systems. UAV-based
communications are gaining growing importance for a wide range of applications,
particularly with the arrival of the high altitude long endurance platforms
such as Tier II+. UAVs can be rapidly deployed, enabling BLOS communications
in support of a range of military activities. The UAV airborne relay will not
only enable range extension for theater communications but will also allow new
services including wideband COTM. With the developments in miniaturized
technology and better transmitter efficiency, multi-function, multi-band
transponders may be carried in UAVs within the size, weight, and power
dissipation budgets.
On the other hand, with the flexible mobility and high possibilities of line-
of-sight (LoS) air-to-ground links, UAV-enabled relays are showing
increasingly important advantages in wireless communication [uav6]. Therefore,
instead of employing fixed relay(s) as in [uav7], suspicious parties can also
deploy UAV(s) as relay(s) for more efficient transmission of suspicious
information by exploiting the UAV's rate adaptation, which, however, makes it
much harder for the legitimate party to implement efficient eavesdropping. The
main reasons are: i) under a given jamming environment set by the monitor, the
UAV can adaptively adjust its location to maximize the suspicious
communication rate, which poses a fundamental obstacle for the monitor to
figure out its exact jamming effect; and ii) for the monitor, the UAV's
adaptive deployment will also affect the received quality of the suspicious
signals via the air-to-ground link.
Figure 3: A depiction of UAV handover in maritime network.
### II-C UAV-Aided Maritime IoT Data Harvesting
While research on underwater sensor networks has substantially advanced in
recent years, it is obvious that major obstacles remain to be solved. With the
flurry of new approaches to communication, media access, networking, and
applications, effective analysis, integration, and testing of these concepts
are paramount: the field must generate fundamental insights as well as an
understanding of what stands up in practice. For these reasons, we believe that the
development of new theoretical models (both analytical and computational) is
very much needed and that greater use of testbeds and field experiments is
essential; such work will support more accurate performance analysis and
system characterization, which will feed into the next generation of
underwater communications and sensing. In addition, integration and testing of
current ideas will stress the seams that are typically concealed in more
specialized laboratory studies, such as total system cost, energy consumption,
and general robustness in diverse settings. Applications drive the development
of underwater sensing and networking. Inexpensive computing, sensing, and
communications have enabled terrestrial sensor networking in the past couple
of decades; we predict that cheap computing, together with lower-cost enhanced
acoustic technology, communication, and sensing, will enable underwater
sensing applications as well.
### II-D Maritime Wireless Power Transfer
Wireless charging has been acknowledged as a viable technology to provide
energy supply for battery-limited nodes, such as Internet of Things (IoT)
devices and sensors. There have been quite a few works on the use of UAVs for
WPT. For instance, UAV-enabled WPT systems were presented in [uav8], where the
UAV was used to broadcast wireless energy to ground receivers. Due to the
line-of-sight (LoS) linkages between the UAV and ground sensors, the UAV-
enabled WPT system may improve the energy transfer efficiency substantially by
deploying the UAV as a mobile energy transmitter. Specifically, [uav8]
evaluated the achievable energy region of a basic two-user UAV-enabled WPT
system by optimizing the UAV's trajectory subject to a limit on the UAV's
maximum speed.
### II-E Maritime Computation Offloading
Because of great sensitivity to time and energy consumption, many computation-
and data-intensive jobs are difficult to accomplish on mobile terminals and
cannot fulfill the needs of the rapid development of mobile networks. To
overcome this challenge, mobile edge computing (MEC) appears to be a promising
solution. Decision-making applications relying on real-time video streaming
and image processing tend to exceed the local data processing power of low-
cost UAVs or may significantly prolong the time required for completing their
activities. To address this issue, MEC may beneficially work with UAVs, easing
computational offloading from the UAV to the edge nodes. The cooperation
between UAVs and MEC systems can be illustrated by crowd surveillance. More
explicitly, UAV-mounted high-resolution cameras are capable of streaming
real-time footage, which improves the
discovery of offenders using face recognition. However, both the weak
computational performance and limited power supply of UAVs inhibit the
aforementioned real-time recognition on board. To tackle these issues, the
assistance of MEC systems may be used to offload numerous computational
activities, improving the timeliness of face recognition. To be
particular, the data collected are partitioned into two segments, one to be
computed at the UAV and the other to be offloaded to the edge node through a
gateway or access point (AP). Computation offloading is extensively utilized
to overcome the computing power restrictions of a resource-constrained mobile
monitor. Offloading executes parts of a task on servers on behalf of the
mobile monitor. If the cost of executing a portion of the task on the mobile
monitor exceeds the cost of executing it under the offloading mechanism, that
portion is run on remote servers. In some applications, such as mobile video
surveillance systems, the offloading decision method considers real-time
features as well as energy efficiency. Nimmagadda et al. [uav9] suggested a
real-time moving object tracking technique using mobile robots based on
compute offloading. Their offloading choice algorithm assesses the computation
and communication time of the offloaded task and decides to execute the task
on a robot or servers to satisfy time limitations and reduce the total
response time.
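The offloading decision rule described above can be sketched as a simple cost comparison. The function below is an illustrative model, not Nimmagadda et al.'s actual algorithm: all parameter names, the cost terms, and the equal weighting of latency and device energy are assumptions made for this sketch.

```python
def should_offload(task_cycles, local_cps, local_power_w,
                   data_bits, uplink_bps, tx_power_w, server_cps):
    """Return True if offloading a task costs less (latency plus device
    energy, equally weighted here) than executing it locally."""
    # Local execution: time and energy spent on the mobile monitor.
    t_local = task_cycles / local_cps
    e_local = t_local * local_power_w

    # Offloaded execution: upload delay plus remote computation time.
    t_tx = data_bits / uplink_bps
    t_offload = t_tx + task_cycles / server_cps
    e_offload = t_tx * tx_power_w   # the device only pays for transmission

    # Illustrative combined cost: seconds + joules with unit weights.
    return (t_offload + e_offload) < (t_local + e_local)
```

For a compute-heavy task with a small input, the upload delay is amortized and offloading wins; for a tiny task with a large input, local execution wins.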
### II-F Maritime Localization
The location and recognition of ship targets play a significant role in the
SAGIN environment. Maritime localization leverages the numerous measurement
instruments of a ship to determine the position of other nautical targets.
Maritime localization can provide more accurate location services for ship
navigation, hydrographic surveys, maritime resource research, etc. Ocean
surveillance satellites can leverage the advantages of space and altitude to
cover extensive oceans, monitor the actions of ships and submarines in real-
time, and identify and monitor radar signals transmitted by ships. Then, the
positions of target ships may be tracked and located. However, the position
precision based on satellites may not satisfy unforeseen situations with
high-precision location needs, such as ocean rescue and noncooperative (enemy)
ship location. UAVs are aircraft that can be operated remotely without an
onboard pilot. In the 21st century, UAVs are increasingly utilized in
military surveillance, target tracking, topographic surveys, etc. [uav10]. The
self-positioning of UAV platforms and the determination of the location of
unknown marine targets by UAVs have become hot subjects in recent years. For
static targets, a popular passive locating strategy is to measure the time-
difference-of-arrival (TDOA) [uav11] of the broadcast signal from the target
ship to each observation UAV. In general, target ship location based on TDOA
observations requires the collaboration of many UAVs. If the TDOA can be
properly measured, the target ship location can be determined by computing the
intersection of the hyperbolas defined by the TDOAs [uav12]. However, in real
applications, TDOA measurements often contain random noise, which decreases
the target location accuracy. In addition, TDOA-based approaches are better
suited to locating a single target ship.
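As a concrete illustration of the hyperbola-intersection idea, the sketch below estimates a static target position from noiseless TDOA measurements at several observation UAVs via Gauss-Newton least squares. The geometry, the reference-UAV convention, the initial guess, and the iteration limits are all assumptions of this toy example, not a production localization algorithm.

```python
import numpy as np

def tdoa_locate(uav_pos, tdoa, c=3e8, x0=None, iters=50):
    """Gauss-Newton estimate of a static target position from TDOA
    measurements; tdoa[i] is the arrival-time difference at UAV i
    relative to the reference UAV 0 (so tdoa[0] == 0)."""
    uav_pos = np.asarray(uav_pos, dtype=float)
    tdoa = np.asarray(tdoa, dtype=float)
    x = uav_pos.mean(axis=0) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(uav_pos - x, axis=1)   # range to each UAV
        r = (d - d[0]) - c * tdoa                 # range-difference residuals
        u = (x - uav_pos) / d[:, None]            # unit vectors UAV -> target
        J = u - u[0]                              # Jacobian of d_i - d_0
        dx, *_ = np.linalg.lstsq(J[1:], -r[1:], rcond=None)
        x = x + dx
        if np.linalg.norm(dx) < 1e-9:
            break
    return x
```

With four UAVs and exact TDOAs the iteration recovers the target to numerical precision; with noisy measurements the same least-squares formulation yields the best-fit intersection of the hyperbolas.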
## III UAV-Aided Maritime Communication Network Architecture
### III-A Maritime Control Station
A maritime control station (MCS) is a control center positioned at sea to
provide facilities for operators of unmanned vehicles in space or air. MCSs
play a crucial role in unmanned aerial systems, arguably as vital as the
unmanned aircraft themselves. MCSs may be either stationary or transportable
software/hardware devices which are used to command and monitor unmanned
aircraft. Small UAV MCSs are created with a computer and a small maritime
terminal. At this stage, it is vital to emphasize that the control segments do
not have to be positioned on the water. Other control segment types are
represented by underwater control stations or airborne control stations.
Computers used as MCSs come in many forms, such as a PDA, a laptop, a wearable
computer, or several transport boxes full of equipment. The systems can take
different shapes, from a simple setup with an antenna hooked to a laptop to a
complex arrangement of computers, electronics boxes, monitors, antennas,
wires, and joysticks.
### III-B Control Links
Control links allow the operator to direct the unmanned vehicle and monitor
its status. The uplink is used to command the unmanned vehicle, while the downlink
is utilized for ground receipt of status and condition information.
### III-C Data Links
Communication technology is part of the UAS responsible for data delivery
between system elements and external units. The fundamental challenges of
communication systems are adaptability, flexibility, security as well as
cognitive controllability of frequency, bandwidth, and data/information flow.
UAV communication is particularly important for safety when it comes to the
integration between the UAV and the MCS. Many elements can form communication systems
and be incorporated into various combinations.
#### III-C1 UAV-Ship and Satellite-Ship Data Links
This link delivers information from the unmanned vehicle to a sea-based
reception device. The payload data link is usually important for missions but
not for flights.
#### III-C2 UAV-Satellite, UAV-UAV, and Satellite-Satellite Data Links
UAVs can cooperate with other airborne platforms, such as satellites and other
UAVs which demand air-to-air communication between the platforms.
## IV Channel Characteristics
To create efficient maritime communication systems, the first and fundamental
necessity is to develop a framework to comprehend the wireless channels. In an
integrated air-ground-sea communications network, there are two major types of
channels to be investigated, namely the air-to-sea channel (e.g., for
communication links from the aircraft-based base stations or relays) and the
near-sea-surface channel (for land-to-ship/ship-to-land or ship-to-ship
communications). Due to the unique features of the maritime propagation
environment such as sparse scattering, sea wave movement, and the ducting
effect over the sea surface, the modeling of these maritime channel links
differs from conventional terrestrial wireless channels in many aspects and,
consequently, will result in a significant impact on the transceiver design.
To create an efficient maritime communication system, it is necessary to
understand the corresponding wireless channels thoroughly and develop
appropriate channel models. Whereas the marine satellite channel has been
explored extensively [uav13], the wireless channels in the integrated air-
ground-sea network are less well understood for the near coast situation. In
both academia and industry, researchers have lately started various
measurement initiatives and have developed several analytical methodologies to
describe maritime wireless channels. In a near coast integrated air-ground-sea
communication network, there are two major types of channel links to be
considered, namely, the air-to-sea channel link which is used for the
transmission from aircraft (balloons or UAVs)-based BSs or relays, and the
near-sea-surface channel link which is used to support land-to-ship/ship-to-
land and ship-to-ship communications. On the one hand, various air-to-ground
wireless channel models have recently been adapted to apply to the air-to-sea
propagation environment. The two most essential and distinguishing properties
of maritime wireless channels are sparsity and location dependence: 1) The
sparse feature is widely encountered in the marine environment, in several
aspects including both the scattering and the user distribution. 2) The
location-dependent feature implies that, for a marine user, entirely different
model structures should be applied to the channels in different location
regions. As a consequence of these traits, new difficulties and opportunities
arise in the design of future maritime communication systems.
### IV-A Air-to-Sea Channel
Although air-to-ground channels have been intensively studied in the
literature [uav14], the three features of the maritime environment, i.e., sparsity,
instability, and ducting effect, bring unique characteristics to the air-to-
sea channels and therefore result in notable differences in the channel
modeling. In most circumstances, the LOS path and the surface reflection path
are two main paths in an air-to-sea channel. Considering that the transmitter
is in general at a high altitude and the transmission distance is large, the
so-called curved-Earth two ray (CE2R) model is commonly employed to take into
account the earth curvature [uav15]. In some cases, other scattered weak paths
need to be considered besides the two main paths. Scattering often occurs
around the receiver due to the high transmitter altitude [uav16]. As
indicated, whereas the local scattering can be rich for inland receivers
(e.g., in near-urban areas), a maritime user is expected to experience much
sparser scattering, so the over-water configuration may simplify the modeling
as compared to inland air-to-ground channels. As mentioned, the air-to-sea
channel can be represented by a standard two-ray or three-ray model. In this
situation, due to the destructive summation of the two or three independent
rays with different phases, the channel will exhibit deep nulls at particular
receiver positions, as confirmed by [37], [38]. The deep nulls emerge with a
larger probability in the maritime environment due to its sparse scattering,
whereas for inland air-to-ground channels the path loss
curve would be smoother with rich scattering. Two elements that may alter the
path loss model need to be considered in the air-to-sea propagation
environment: 1) Earth curvature: In maritime communications, normally, a long
coverage distance is expected. Therefore, the use of the CE2R model will be
necessary, which subsequently leads to different path loss models from those
found under the flat-earth assumption. 2) Ducting effect: Although the height
of the transmitter is normally higher than the duct layer, part of the radio
energy could still be trapped in the duct layer when the grazing angle (i.e.,
the angle between the direct path and the sea surface plane) is less than a
particular threshold. In this situation, the ray-trapping action of the
evaporation duct (and elevated duct) will considerably increase the energy of
the received signals, resulting in path loss reduction.
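The deep-null behavior of the two-ray model can be reproduced numerically. The sketch below uses the flat-earth two-ray approximation (not the full CE2R model) with an idealized sea-surface reflection coefficient of -1; the frequency and geometry values are arbitrary illustrations.

```python
import numpy as np

def two_ray_gain(d, ht, hr, f=5.8e9, refl=-1.0):
    """Flat-earth two-ray channel power gain at horizontal distance d,
    transmitter height ht and receiver height hr (all in meters)."""
    lam = 3e8 / f                                  # wavelength
    d_los = np.sqrt(d**2 + (ht - hr)**2)           # direct-path length
    d_ref = np.sqrt(d**2 + (ht + hr)**2)           # reflected-path length
    k = 2.0 * np.pi / lam
    # Coherent sum of the direct ray and the surface-reflected ray
    h = (np.exp(-1j * k * d_los) / d_los
         + refl * np.exp(-1j * k * d_ref) / d_ref)
    return (lam / (4.0 * np.pi))**2 * np.abs(h)**2

# Sweeping the receiver position shows the gain swinging by orders of
# magnitude: the deep nulls come from destructive summation of the rays.
dist = np.arange(500.0, 2000.0, 1.0)
gain = two_ray_gain(dist, ht=100.0, hr=5.0)
null_depth = gain.max() / gain.min()
```

Over-water links with sparse scattering follow this idealized curve closely, which is why maritime receivers encounter deep nulls more often than inland ones.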
### IV-B Near Sea-Surface Channel
In this section, we focus on the channel links in the land-to-ship/ship-to-
land and ship-to-ship communications, i.e., the near-sea-surface channels. For
the air-to-sea channels, a crucial aspect may be summarized as
“angle-dependent”. More specifically, the air-to-sea channel properties are
considerably different for different grazing angles. As an example, duct-layer
propagation only exists when the grazing angle is smaller than a specific
threshold, hence creating distinct path loss models and delay spreads. In
contrast, the near-sea-surface wireless channel can be characterized as
“location-dependent”: different channel models should be used based on the
transmitter-receiver distance. When the transmitter-receiver distance is
modest, the channel could be represented using the standard two-ray model,
where the LOS and the surface reflection routes are the two most significant
components of the channel. As the distance grows, a ray from the evaporation
duct layer appears (if the duct is present). In this case, the three-ray model
provides a more precise description of the channel. If the receiver moves even
farther away, both the LOS and the surface-reflection paths eventually vanish
due to the earth’s curvature. However, the receiver can still receive a signal
in the presence of the duct layer, provided that the direction of the transmit
beam is properly configured. In conclusion, as the transmitter-receiver
distance increases, the propagation characteristic may change from two-ray to
three-ray, and becomes duct-only in the end. Beyond-LOS (B-LOS)
transmission is possible in marine communications thanks to the ducting effect
across the sea surface. As a potential way to enable long-distance and high-
security transmission, B-LOS transmission using the ducting effect has gained
great attention in military communications. For X-band communications, the
communication range can be considerably expanded to up to 1000 km with the aid
of the duct layer.
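The distance-dependent model switching described above can be captured by a small regime selector. This is only an illustrative rule of thumb: the 4.12(sqrt(ht) + sqrt(hr)) km radio-horizon formula assumes a standard 4/3-earth atmosphere, and `duct_onset_km` is a purely hypothetical threshold for when the duct-layer ray becomes significant.

```python
import math

def propagation_regime(d_km, ht_m, hr_m, duct_present=True,
                       duct_onset_km=20.0):
    """Pick a rough channel-model regime for a near-sea-surface link
    from the transmitter-receiver distance (illustrative thresholds)."""
    # 4/3-earth radio horizon for the LOS and surface-reflected rays
    horizon_km = 4.12 * (math.sqrt(ht_m) + math.sqrt(hr_m))
    if d_km > horizon_km:
        # LOS and reflected paths are blocked by earth curvature
        return "duct-only" if duct_present else "blocked"
    if duct_present and d_km > duct_onset_km:
        return "three-ray"   # duct-layer ray joins the two main rays
    return "two-ray"         # LOS + surface reflection only
```

For 10 m antenna heights the horizon sits near 26 km, so a link walks through two-ray, three-ray, and duct-only regimes as the ships separate.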
## V Network Design Prospects
### V-A Energy Efficient UAV Placement and Trajectory Design
Despite their ample applications, UAV communication systems face many new
obstacles. In particular, the endurance and performance of UAV systems are
fundamentally constrained by the onboard energy, which is practically limited
due to the aircraft’s size and weight limits. Thus, energy-efficient communication
for maximizing the data transfer per unit energy consumption of the UAV is of
crucial importance. Note that energy-efficient solutions for UAV communication
systems are notably different from those in the previous literature on
terrestrial communication systems. Firstly, although the motive for energy-
efficiency maximization in terrestrial communications is mostly for minimizing
energy consumption and cost, that for UAV systems is more crucial due to the
limited on-board energy. For example, given the maximum quantity of energy
that can be carried by aircraft, an improvement in energy efficiency
immediately increases the number of information bits that can be transmitted
with the UAV before it needs to be recalled for recharging/refueling.
Secondly, besides the conventional energy expenditure on communication-related
functions, such as communication circuits and signal transmission, the UAV
systems are subject to the additional propulsion power consumption for
maintaining the UAV aloft and supporting its mobility (if necessary), which is
usually much higher than the communication power consumption. Note that the
UAV’s propulsion energy consumption is determined by its flying condition
including velocity and acceleration, which thus needs to be taken into account
in energy-efficient design for UAV communications.
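To make the bits-per-Joule notion concrete, the sketch below computes energy efficiency including a velocity-dependent propulsion term. It assumes the commonly used fixed-wing propulsion model P(v) = c1*v^3 + c2/v (parasitic plus induced drag); the constants and the communication power are example values for illustration, not measurements.

```python
def energy_efficiency(bits, duration_s, velocity_mps,
                      comm_power_w=5.0, c1=9.26e-4, c2=2250.0):
    """Bits delivered per Joule for a fixed-wing UAV flying at constant
    speed; P_prop(v) = c1*v^3 + c2/v models parasitic and induced drag
    (example constants, illustrative only)."""
    prop_power_w = c1 * velocity_mps**3 + c2 / velocity_mps
    total_energy_j = (comm_power_w + prop_power_w) * duration_s
    return bits / total_energy_j
```

Because the induced-drag term c2/v dominates at low speed, flying at a moderate cruise speed can deliver far more bits per Joule than crawling, which is exactly why the flying condition must enter the energy-efficient design.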
### V-B IRS-Aided UAV Deployments
An intuitive notion is to develop a more regulated wireless environment to
improve the secure performance of UAV communication systems. Recently,
intelligent reflecting surface (IRS) has emerged as a novel approach due to
its capacity to reconfigure wireless channels, which offers more degrees of
freedom to build a smart and reconfigurable wireless environment in a
controllable manner. Technically, the tunable and low-cost reflecting elements
installed on an IRS are capable of dynamically altering the phase shifts of
incident signals, so that the intended signals are amplified while the
interference signals are suppressed simultaneously. Therefore, an IRS can be
properly configured to improve undesired propagation conditions and promote
UAV communications. However, integrating IRS into UAV-enabled secure
communication systems faces obstacles ranging from network characterization to
performance optimization. To address the technical issue of limited onboard
energy due to the battery limits of the UAV, a simple option is to use
lightweight, low-power relaying devices on board. The IRS, which merely
reflects incident signals and functions as a passive relay, satisfies this
criterion. Due to its lightweight and passive nature, it will lower the
UAV energy consumption significantly, and thus, the operational period of the
UAV relay may be dramatically prolonged. In addition, as the reflecting
surface element may be downsized, an incredibly small UAV air platform can be
deployed for one-to-one relaying service to improve the quality of service of
mobile relaying custom-made for each ground user. The performance boost is
accomplished by introducing the UAV-based IRS mainly owing to an additional
equivalent line-of-sight (ELoS) channel which greatly enhances the received
signal-to-noise ratio (SNR) at the ground user. On the other hand, as the
ground user is mobile especially at the edge of a cell, the UAV-based IRS
relaying can monitor the ground user to offer an ELoS link to strengthen the
poor cell-edge channel for better performance. Prior work analyzed the
capacity of an IRS-based UAV communication system when the phase compensation
was imperfect, and other work maximized the system sum rate using non-convex
optimization for IRS-assisted UAV OFDMA communication, which can be treated as
a symbiotic UAV-IRS system.
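The SNR benefit of phase control can be seen in a toy simulation: with N unit-gain cascaded channels, co-phasing the reflected signals yields an N^2-fold coherent power gain, whereas random phases add incoherently. All channel assumptions here (unit element gains, perfect phase knowledge, no direct path) are idealizations for illustration.

```python
import numpy as np

def irs_snr_gain(n_elements, seed=0):
    """Received power through an n-element IRS with random vs.
    optimally co-phased reflection coefficients (illustrative)."""
    rng = np.random.default_rng(seed)
    # Random phases of the cascaded Tx-IRS-Rx channel per element
    chan = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, n_elements))
    random_power = np.abs(np.sum(chan)) ** 2    # no phase control
    aligned_power = np.sum(np.abs(chan)) ** 2   # co-phased: ~n^2 gain
    return random_power, aligned_power
```

With 100 elements the aligned power is 10^4 times a single element's, while random phases average only about 100x: this quadratic scaling is what makes a passive, lightweight surface attractive on an energy-limited UAV.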
### V-C UAV-aided Mobile Edge Computing
As an emerging distributed computing paradigm, MEC presents a viable solution
for these difficulties. MEC pushes processing, storage, and network control
capabilities from the data center to the network edge, thereby offering
low-latency and distributed services. One of the major issues in maritime
communications is the structure of the service provider chain, which is
growing more and more complex. As market conditions change frequently,
important players in the value chain are continually repositioning themselves
in their market regions or acquiring new positions in the chain, moving to or
between various market areas (via strategic acquisitions, new ventures, or
partnerships).
### V-D Massive MIMO-Aided UAV Deployments
UAVs normally operate at significantly higher altitudes than ground-based user
equipments (GUEs), thus requiring ground base stations (GBSs) to deliver
three-dimensional (3D) signal coverage for them. However, the GBS antennas in
the existing wireless network are normally tilted downward to offer ground
coverage only while limiting the inter-cell interference among GUEs. This
results in insufficient coverage for communications with UAVs in the sky. To
resolve the aforementioned aerial-ground interference and 3D coverage issues,
massive multiple-input multiple-output (MIMO) has recently been applied to
support UAVs by leveraging the advantage of a large number of GBS antennas.
Compared to typical two-dimensional (2D) beamforming
toward the ground alone, massive MIMO provides 3D beamforming with finer-
grained angle resolutions in both azimuth and elevation dimensions. Thus, it
delivers significantly more effective interference mitigation capability in
both the UAV uplink and downlink communications by utilizing the elevation
angle difference between UAVs and GUEs. Furthermore, 3D beamforming enhances
connection and coverage for UAVs in the sky due to more flexible elevation-
domain beamforming. Although massive MIMO is promising for interference
suppression and coverage extension in cellular-connected UAV communications,
it faces several practical challenges in serving UAVs, for example, pilot
contamination caused by UAVs with strong LoS channels and channel/beam
tracking for UAVs with 3D high mobility. Moreover, supporting the UAV swarm
and implementing practical hybrid GBS beamforming architectures for massive
MIMO further complicate the pilot contamination and channel/beam tracking
challenges. This motivates an outline of the new concerns and
challenges in massive MIMO for supporting UAV communications.
### V-E Machine Learning and Artificial Intelligence
A fundamental issue appearing not only in UAV networks but also in wireless
networks, in general, is interference. Recently, ML-based techniques have been
invoked for effective interference management in UAV networks. One study
examined opportunistic channel access by UAVs. This topic was formulated as a
non-cooperative interference mitigation game, considering the different
properties of data flow and UAV clustering. These characteristics are then
included in the utility function by suitably allocating a weight coefficient
to each characteristic and linearly combining them.
Moreover, a distributed log-linear learning technique is applied to attain the
Nash Equilibrium (NE) of the interference mitigation game. The learning
algorithm is based on the fact that a UAV, suffering from intra-cluster
interference, is randomly chosen to update its joint channel-slot selection
according to its experienced interference level, slot interval, and cluster
rewards in each step and stochastically determines the channel selection. The
simulations focused on convergence behavior, selection behavior, and
performance evaluation, underlining the necessity of establishing optimal
weights for improved interference control by the log-linear method. In this
fashion, the proposed algorithm converges fast to attain the optimal network
utility and the minimal weighted interference level. Given the 3D and dynamic
architecture introduced by UAV-cellular networks, resource management, network
planning, content-caching, and user-association tasks are very demanding,
since many conflicting requirements should be considered, such as low latency,
increased throughput, low overhead, support of a massive number of devices,
and dynamic conditions. From that perspective, ML frameworks have been
employed to facilitate resource management in an efficient manner. One study
attempted to forecast the success
and failure rates in a UAV network by applying ML methods based on linear
regression (LR) and SVM. Since UAV connectivity is time-variant due to their
movement, the success probability of the transmission diminishes as the
wireless links’ distance grows.
### V-F Blockchain-Aided UAV Placement
The UAVs have the potential to be widely used in crucial IoT applications due
to their benefits to detect data and expand network coverage. Securing the
communication of UAVs with the corresponding network is regarded as one of the
most critical challenges. Blockchain technology can be used to protect the
data transfer of wireless-communication-supported UAV sensing systems for
maritime IoT applications. Authenticating the existence of registered UAVs in
the maritime environment for communication purposes helps to better respond to
the security problems. Utilizing the notion of blockchain, false UAVs can be
precisely identified in a distributed manner and thereby banned from further
communication, in contrast to existing systems that aim to safeguard the IDs
of UAVs in centralized databases. The validation of the deployed network
element’s authenticity in the marine IoT is a significant issue to satisfy the
security of the overall network. Therefore, how to develop a truly trusted
communication paradigm remains an open issue. To tackle this difficulty,
blockchain can be utilized to develop a trustworthy authentication method to
confirm the authenticity of a wireless-communication-assisted UAV sensing
system in maritime IoT. A blockchain is an immutable ledger that can provide
suitable integrity and security solutions for IoT applications. Blockchain has
recently garnered major appeal due to its distributed nature: it removes
centralized control from a single entity in the network and distributes
control to the several participating nodes actively using the blockchain
network.
### V-G mmWave and TeraHz Communication for UAVs
Propagation measurements at roughly 500 MHz over sea paths were conducted some
years ago, in which the association between field intensity, weather
conditions, and varying refractivity was carefully explored. Subsequently,
statistical information about ducting/super-refraction and signal fading
effects was acquired, and the dependence of signal level on tidal waves was
analyzed at 248 and 341 MHz. Fixed wireless links over sea routes were
measured using a wireless LAN system at 2.4 GHz, indicating that the operation
rate could be enhanced from 90% to 100% by exploiting space diversity, and the
limit of sea roughness for specular reflection was given. Similarly, digital
TV systems in UHF bands from 470 MHz to 710 MHz have been tested worldwide.
The ITU-R Recommendation P.1546-2 presented an empirical model to predict
point-to-area field strength for numerous services, including maritime mobile,
in the frequency range 30 MHz to 3000 MHz within 1000 km; however, small-scale
channel characterizations were not included. In related work, measurement data
on the channel properties of a mobile wideband channel over sea at 1.9 GHz
were provided. However, the scenario for open-water conditions was not
described, and the reflection coefficient and Doppler shift from the sea
surface were not part of the work objectives.
## VI Research Challenges and Directions
### VI-A Dynamically Varying Channel Conditions
With the developments in acoustic modem technology, research has moved into
the domain of networks. The key problems were highlighted over the past
decade, pointing once again to the basic disparities between acoustic and
radio transmission. For example, acoustic signals propagate at about 1500 m/s,
generating propagation delays as long as a few seconds across a few kilometers. With bit
rates of the order of 1000 bps, propagation delays are not negligible with
regard to typical packet durations, a situation very different from that
observed in radio-based networks. Moreover, acoustic modems are often limited
to half-duplex operation. These limits suggest that acoustic-conscious
protocol design can give greater efficiency than straight use of protocols
created for terrestrial networks (e.g., 802.11 or transmission control
protocol (TCP)). In addition, for anchored sensor networks, energy efficiency
will be as crucial as in terrestrial networks, since battery re-charging
hundreds of meters below the sea surface are difficult and expensive. Finally,
underwater instruments (sensors, robots, modems, and batteries) are neither
cheap nor disposable. This fact may be the single most essential aspect that
(at least for now) distinguishes underwater sensor networks from their
terrestrial counterpart, and profoundly modifies many network design concepts
that are typically taken for granted.
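The disparity between acoustic propagation delay and packet duration can be made concrete with a few lines of arithmetic. The packet size and link distance below are illustrative values, not taken from a specific modem or deployment:

```python
# Propagation delay vs. packet duration for an underwater acoustic link.
SOUND_SPEED = 1500.0   # m/s, nominal speed of sound in seawater
BIT_RATE = 1000.0      # bps, typical acoustic-modem rate from the text

def delay_to_duration_ratio(distance_m, packet_bits):
    """How many packet durations fit inside one propagation delay."""
    prop_delay = distance_m / SOUND_SPEED      # seconds in flight
    packet_duration = packet_bits / BIT_RATE   # seconds on the air
    return prop_delay / packet_duration

# A (hypothetical) 512-bit packet over 3 km: the delay is roughly
# four packet durations, unlike radio links where it is negligible.
print(round(delay_to_duration_ratio(3000.0, 512), 2))  # -> 3.91
```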
### VI-B UAV 3-Dimensions Trajectory Design
Exploiting the UAV’s high mobility is projected to unlock the full potential
of UAV-to-ground communications . Specifically, by considering a linear
topology scenario for ground receivers, the work in optimizes the one-
dimensional (1D) UAV trajectory and communication resource allocation to
disclose the basic rate limits of the UAV-enabled multiple access channel. For
a UAV-enabled uplink NOMA network, the joint 2D UAV trajectory and power
control problem is proposed in to optimize the sum rate, which is then turned
into a UAV deployment issue. The work in develops an efficient solution to
tackle the max-min average rate problem by optimizing the 2D UAV trajectory
and resource allocation for the time division multiple access (TDMA) and NOMA
schemes. However, all the aforementioned research efforts focus on either the
CR-based or the NOMA-based UAV network; the integration of CR with NOMA has
not been fully researched.
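As a toy illustration of the max-min rate objective underlying the trajectory designs cited above, the following sketch performs a brute-force 1D placement search for a hovering UAV. The path-loss exponent, transmit power, and noise level are hypothetical, and the static placement is only a stand-in for a full trajectory optimization:

```python
import math

def best_hover_x(users_x, candidate_xs, h=100.0, p=1.0, noise=1e-9):
    """Pick the UAV x-position that maximizes the minimum user rate
    (max-min fairness). Illustrative free-space-like model:
    rate = log2(1 + p / (noise * d^2)), d = 3D UAV-user distance.
    """
    best_x, best_min_rate = None, float("-inf")
    for x in candidate_xs:
        rates = [math.log2(1 + p / (noise * ((u - x) ** 2 + h ** 2)))
                 for u in users_x]
        if min(rates) > best_min_rate:
            best_x, best_min_rate = x, min(rates)
    return best_x

# Two users at 0 m and 400 m: by symmetry, the max-min hover point
# is the midpoint of the linear topology.
grid = [float(i) for i in range(401)]
print(best_hover_x([0.0, 400.0], grid))  # -> 200.0
```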
### VI-C UAV-to-Sea and UAV-to-UAV Interference Management
UAV trajectory optimization is vital in such cases. An online path planning
that accounts for wireless measurements is crucial and would, in essence,
assist in tackling the aforementioned interference concerns together with new
advances in the design of the network, such as 3D frequency reuse. Such a path
planning approach allows the UAVs to change their movement based on the rate
requirements of both aerial UAV-UEs and ground UEs, thus increasing the
overall network performance. For UAV-RMS applications, UAVs will largely send
data in the uplink. Nevertheless, the capacity of cellular-connected UAVs to
establish LoS communication with several ground BSs might lead to severe
mutual interference among them as well as to the ground users. To overcome
this difficulty, additional advances in the architecture of future cellular
networks such as enhanced receivers, cell coordination, 3D frequency reuse,
and 3D beamforming, are needed. For instance, because of their capability to
detect and categorize images, CNNs can be implemented on each UAV in order to
recognize numerous elements of the environment, such as the locations of UAVs,
BSs, and ground UEs. Such a method will enable each UAV to adjust its
beamwidth tilt angle so as to minimize the interference on the ground UEs.
Moreover, in streaming applications, UAV trajectory optimization is also
critical. In particular, physical layer technologies such as 3D beamforming,
can be paired with an interference-aware path planning system to provide more
efficient communication links for both ground and aerial users. Such a path
planning strategy (such as the one we described in ) allows the UAVs to
change their movement based on the rate requirements of both aerial UAV-UEs
and ground UEs, thus increasing the overall network performance.
### VI-D 3D Mobility Management (3D Handoffs)
To fully enjoy the benefits of UAV deployment, beyond visual line of sight
activities are of crucial relevance where UAVs acting as aerial users, can
retain contact with the ground base stations (GBSs) for command and control
(C&C) functions in the downlink (DL). UAVs flying in the sky may be served by
the sidelobes of base-station antennas, which provide lower antenna gains.
This creates considerable issues for the mobility management (MM) of
cellular-connected UAVs based on the reference signal received power (RSRP).
The GBS delivering the maximum RSRP might be located far from the UAV.
This type of patchy signal coverage of GBSs would result in poor mobility
performance such as handover failure (HOF), radio connection failure, as well
as unnecessary handovers (HOs), called ping-pong occurrences. Apart from
these, due to the loss of the C$\&$C signal, the UAV may collide with a
commercial aircraft or even crash into a populated area which can result in
dangerous events. Hence, excellent MM for enabling reliable connections
between UAVs and GBSs is of essential relevance. MM approaches for ground user
equipment (GUE) in both homogeneous and heterogeneous cellular networks have
been investigated extensively in the literature . However, the research in MM
for cellular-connected UAVs is still in its infancy.
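An RSRP-based handover rule with a hysteresis margin, of the kind used to suppress ping-pong HOs in the patchy sidelobe coverage seen by aerial UEs, can be sketched as follows. The cell names and the 3 dB margin are illustrative assumptions, not a standards-compliant implementation:

```python
def handover_decision(serving_rsrp_dbm, candidates, hysteresis_db=3.0):
    """Hysteresis-based handover rule: switch only if a neighbour's RSRP
    exceeds the serving cell's by a margin, suppressing ping-pong HOs.

    candidates: dict mapping cell id -> measured RSRP (dBm).
    Returns the target cell id, or None to stay on the serving cell.
    """
    best_id, best_rsrp = max(candidates.items(), key=lambda kv: kv[1])
    if best_rsrp > serving_rsrp_dbm + hysteresis_db:
        return best_id
    return None

# A 3 dB neighbour advantage does not (strictly) exceed the margin:
print(handover_decision(-95.0, {"gNB7": -92.0, "gNB3": -99.0}))  # None
# A 5 dB advantage triggers the handover:
print(handover_decision(-95.0, {"gNB7": -90.0}))  # gNB7
```

Raising the margin trades handover failures for ping-pong suppression, which is exactly the tension described above for aerial UEs served by sidelobes.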
### VI-E Beam-forming for High Mobility Ships and UAVs
For reliable wireless communications between APs and UAVs, there are two
critical design issues: (i) beamformer design and (ii) power-control design.
These issues are challenging owing to frequent AP switching when UAVs travel
at high speed. Moreover, since a UAV contains two smart-antenna systems, they
should be operated collaboratively. This leads to concurrent two-point
beamforming and two-point power control. The joint power-control challenge is
to minimize the average total transmission power over a complete section,
subject to constraints on the received SINR and the maximum transmission
powers, by taking advantage of knowledge of the locations of UAVs and APs
inside a section. Here, each subsection could be small enough that the SINR
variation in moving from one subsection to another is insignificant. However,
when fixed beamforming weight vectors are used for a complete subsection,
there would be some SINR fluctuation due to the change of angle of arrival
(AoA) or angle of departure (AoD). The beamformer-design challenge is to
reduce the SINR fluctuation caused by AoA/AoD changes in each subsection.
At greater heights, vertical beamforming or up-tilted BS antennas may be
needed to provide improved coverage. These novel situations will require further
research, simulations, and field measurements. The properties of air–ground
wireless channels are different from those of terrestrial wireless channels.
This is one of the root causes for the interference and mobility difficulties
highlighted in this article. More empirical measurements will be of
substantial value for constructing more accurate statistical air–ground
channel models. Take Doppler effects, for example. Characterizing Doppler
effects clearly in channel measuring campaigns will be of interest, especially
for drones flying at high speeds.
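For the Doppler effects mentioned above, the maximum shift follows from $f_{d}=(v/c)\,f_{c}\cos\theta$. A minimal sketch, with an illustrative drone speed and carrier frequency of our choosing:

```python
import math

def doppler_shift_hz(speed_mps, carrier_hz, angle_deg=0.0):
    """Doppler shift f_d = (v/c) * f_c * cos(theta), where theta is the
    angle between the velocity vector and the line to the transmitter."""
    c = 3e8
    return (speed_mps / c) * carrier_hz * math.cos(math.radians(angle_deg))

# A drone at 30 m/s on a 28 GHz mmWave link, flying toward the BS:
print(round(doppler_shift_hz(30.0, 28e9)))  # -> 2800 (Hz)
```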
## VII Conclusion
In this paper, we….
Muhammad Waseem Akhtar is currently a PhD (Electrical Engineering) student at
the School of Electrical Engineering and Computer Science (SEECS), National
University of Science and Technology (NUST), Islamabad, Pakistan. He received
the Master of Science degree in Electrical (Telecommunication) Engineering
from NUST, Islamabad, in 2014, and the B.Sc. degree in Telecommunication
Engineering from the University of Engineering and Technology (UET), Peshawar,
Pakistan, in 2009. His current research interests include cooperative
communication, energy- and bandwidth-efficient network design, massive MIMO
and D2D communication, artificial intelligence, machine learning, and
blockchain technologies.
Nasir Saeed (Senior Member, IEEE) received the bachelor’s degree in
telecommunication from the University of Engineering and Technology, Peshawar,
Pakistan, in 2009, the master’s degree in satellite navigation from the
Politecnico di Torino, Italy, in 2012, and the Ph.D. degree in electronics and
communication engineering from Hanyang University, Seoul, South Korea, in
2015. He was an Assistant Professor with the Department of Electrical
Engineering, Gandhara Institute of Science and IT, Peshawar, from August 2015
to September 2016, and an Assistant Professor with IQRA National University,
Peshawar, from October 2016 to July 2017. From July 2017 to December 2020, he
was a Postdoctoral Research Fellow with the Communication Theory Laboratory,
King Abdullah University of Science and Technology. He is currently an
Associate Professor with the Department of Electrical Engineering, National
University of Technology, Islamabad, Pakistan. His current research interests
include cognitive radio networks, underwater wireless communications, aerial
networks, dimensionality reduction, and localization.
# Phases, morphologies, and transitions in a membrane model for the
endoplasmic reticulum
Jaya Kumar Alageshan1, Yashodhan Hatwalne2, Rahul Pandit1
1 Centre for Condensed Matter Theory, Department of Physics,
Indian Institute of Science, Bangalore 560012, India.
2 Raman Research Institute, C.V. Raman Avenue,
Sadashivanagar Bengaluru 560080, India
###### Abstract
We introduce a novel model, comprising self-avoiding surfaces and
incorporating edges and tubules, that is designed to characterize the
structural morphologies and transitions observed within the endoplasmic
reticulum (ER). By employing discretized models, we model smooth membranes
with triangulated surfaces, and we utilize numerical variational methods to
minimize energies associated with periodic morphologies. Our study obtains
phases, their morphologies, and their transitions and examines their
dependence on the membrane chemical potential, the line tensions, and the
positive Gaussian curvature stiffness. By starting with diverse topological
structures, we explore shape variations by using Surface Evolver, while
maintaining fixed topology. Notably, we identify the region of parameter space
where the model displays lamellae, with a lattice of helical edges connecting
the layers; this resembles structures that have been observed in the rough ER.
Furthermore, our investigation reveals an intricate phase diagram with
periodic structures, including flat lamellar sheets, sponge phases, and
configurations comprising tubules with junctions, which are akin to the
morphology of the smooth ER. An estimation of lattice parameters is achieved
through fluctuation analysis. Significantly, our model predicts a transition
between homotopically equivalent lamellae, with helical edges and
configurations featuring tubules with junctions.
## 1 Introduction
The endoplasmic reticulum (ER) surrounds the nucleus in most eukaryotic cells
and plays a crucial role in cellular structure and function. Its intricate
structure and diverse functions make it a cornerstone of cellular activity.
The basic building blocks of the ER are a network of membranous tubules and
flattened sheets called cisternae, which occur throughout the cytoplasm of
eukaryotic cells [1] and exhibit distinct curvature, lipid composition, and
protein organization [2]. The interconnected tubular structures extend from
the cisternae (see Fig. 1). The tubules are dynamic and flexible structures
that branch out and fuse to form the complex three-dimensional network of the
ER. The biophysics of the ER encompasses the study of its structure, physical
properties, and the molecular dynamics underlying its functions. In the ER,
the following two main regions have been identified:
* •
The Rough Endoplasmic Reticulum (RER) often appears as flattened cisternae and
has a rough appearance under an electron microscope because of ribosomes on
its outer surface [3]. These ribosomes are responsible for protein synthesis
and the RER helps in the folding and also the modification of proteins within
its lumen [4]. Furthermore, the newly synthesized proteins’ structural
integrity and correct folding are ensured by chaperone proteins within the ER
[5]. The lumen of the RER is contiguously connected to the lumen of the
nucleus [1].
* •
Away from the nucleus, the ER breaks down into a filamentous structure that
forms the Smooth Endoplasmic Reticulum (SER). The SER lacks ribosomes on its
surface, so it has a smooth appearance [3] and consists predominantly of
tubules. It is involved in various metabolic processes, such as lipid
synthesis [6] and detoxification of toxins [7]. In particular, in muscle
cells, the SER stores and releases calcium ions that are necessary for muscle
contraction [8]. Calcium ions also play a pivotal role in cell signalling;
therefore, the ER plays a crucial role in their regulation and in the proper
functioning of the cell [10]. The SER produces phospholipids and steroids that
are essential for cell-membrane structure and hormone production [11].
Figure 1: A schematic diagram illustrating the smooth and rough endoplasmic
reticula (SER and RER), ribosomes, the cell nucleus, nuclear pores, and the
nuclear envelope.
The ER is dynamic and continually undergoes remodelling via the fusion and the
fission of tubules and cisternae; thus, it adapts to changes in cellular
physiology and structure. Proteins such as reticulons and REEPs are
responsible for ER shaping and dynamics [15], and they maintain the ER
morphology and influence its biophysical properties. The protein insertion,
trafficking, and signalling processes within the lipid bilayer of the ER
membranes influence its mechanical properties, fluidity, and curvature [12].
In turn, these biophysical properties influence the movement of vesicles and
molecules within the ER.
The ER membrane is subject to active fusion and fission processes [16, 17].
There is a traffic of protein-carrying vesicles in the ER-Golgi-plasma
membrane, through the secretory pathways [18]. Typically, about $1000$
vesicles leave the ER every second [19]. Despite this strong remodelling
activity, which is energetically driven by the consumption of ATP/GTP, the ER
has a well-defined stable lamellar structure [20]. The fusion and fission
processes occur predominantly at the outer interface between the smooth ER and
the cell plasma [1]; so we hypothesize that local equilibrium is valid in the
bulk of the ER. Our findings demonstrate that this hypothesis leads naturally
to the structures of the ER that have been uncovered in recent experiments
[13].
Freeze-fracture studies [13] have established that the RER comprises stacks of
flat sheets connected by screw dislocations with helical edges (see Fig.
8(a)). To avoid geometric frustration, the constituents of the RER must have
an equal number of left- and right-handed dislocations; a two-dimensional (2d)
section, normal to the axes of the dislocation yields a square lattice with
alternating handedness, as shown in Fig. 8(b). A morphological transition,
from the flat lamellar sheets to lamellae with helical edges, occurs as the
lattice parameter, i.e., the distance between the helical edges, goes to
infinity. We note, in passing, that this transition is akin to the TGB-SmA
(Twist Grain Boundary-Smectic-A) transitions in lyotropic liquid crystals
[14].
The ER is an essential and versatile cellular structure that plays a pivotal
role in various cellular functions. Thus, understanding the biophysics of the
ER is of paramount importance as it governs the organelle’s function and
interactions with other cellular components. It is important to delve into the
physical properties of ER membranes, protein dynamics, and molecular
processes, which occur within this organelle; this is essential (a) for
gaining insights into cellular physiology and disease mechanisms, particularly
those that are related to ER stress disorders, and (b) for the development of
targeted therapies. The ER works in conjunction with other organelles, such as
the Golgi apparatus, vesicles, and mitochondria, to orchestrate a complex
array of molecular activities, significantly contributing to the overall
functionality and health of eukaryotic cells.
In this study, we concentrate on the structural aspects of the ER, and
organise the remaining part of this article as follows: In Sec. 2 we present
our model. Then, in Sec. 3, we give the details of the numerical shape
variation that we employ to find the minimal-energy configuration for each of
the ER morphologies by using Surface Evolver [25]. Finally, in Sec. 4, we
present the phase diagrams and morphologies we obtain. The concluding Sec. 5
is devoted to a discussion of the significance of our results and possible
limitations of our study.
## 2 The Model
The ER has a complex multi-scale structure, with diverse morphologies and
functions that span length scales ranging from nanometers to micrometres. The
ER membranes are composed of lipid bilayers that have a thickness $\simeq 4-6$
nanometers. Protein folding, chaperone interactions, and post-translational
modifications and other molecular processes occur within the ER lumen and
determine the double-bilayer thickness, which is typically $\simeq 30-50$
nanometers [26]. Furthermore, the ribosomes that decorate the outside of the
RER are $\simeq 20-30$ nanometers in diameter; they are responsible for
protein synthesis, which occurs at the nanometer scale within the ER
cisternae. Throughout the cytoplasm, the SER forms an extensive network, which
extends over micrometre scales in most eukaryotic cells. Vesicles within the
ER network are involved in the transport of proteins, lipids, and other
molecules, within the ER and between the ER and other cellular compartments,
such as the Golgi apparatus. The sizes of these vesicles can vary, but their
diameters lie typically in the range $25-50$ nanometers.
The ER’s diverse length scales, which are vital for its functions, allow it to
perform tasks ranging from molecular-level protein synthesis and modification
to coordinating cellular processes over larger distances. This ability to
cover multiple scales enables efficient communication and coordination within
the cell and underscores the ER’s importance in maintaining cellular
homeostasis. The ER is complex: it has many types of protein molecules, with
non-trivial properties, embedded within its membrane.
From the structural perspective, the principal building blocks of ER are the
fragments of sheets and tubules made up of double lipid bilayers [28]. We
model the double bilayer as a self-avoiding two-dimensional (2d) surface and
the tubules as one-dimensional (1d) curves (see Fig. 2), in concurrence with
Ref. [37]. The edges of the double bilayer (red) and the tubules (blue) are
decorated with reticulons [43]. The quantitative calculation of the free
energy for a self-avoiding fluid membrane with tubules is complex: it involves
the use of theoretical models that have been motivated by experimental
observations. We use a continuum-mechanics-based approach to study the phases,
transitions, and morphologies of such membranes under different biophysical
conditions.
Figure 2: (a) A colour-coded schematic diagram of the elements of the double
bilayer membrane in our model: ($\mathcal{A}$) 2D surface bulk;
($\partial\mathcal{A}$) surface edge; ($\mathcal{T}$) tubule; (orange)
junction of tubule and edge; (b) the corresponding equivalent 2d surface and
1d curve in our model.
In our study, we consider the Helfrich Hamiltonian [36] for a fluid membrane
together with line tension, and self-avoidance given below:
$\displaystyle\mathcal{H}=\underbrace{\gamma\oint_{\partial\mathcal{A}}ds\;\;+\;\;\;\Gamma\int_{\mathcal{T}}ds}_{\text{Line-
tension}}\;\;+\;\;\underbrace{\iint_{\mathcal{A}}\left(\frac{\kappa}{2}\;H^{2}+\kappa_{G}K\right)\sqrt{g}\;d^{2}\sigma}_{\text{Bending}}$
$\displaystyle+\;\;\underbrace{\mu\iint_{\mathcal{A}}\sqrt{g}\;d^{2}\sigma}_{\text{Surface
tension}}\;\;+\;\;\underbrace{4\;\mathcal{B}\iint_{\mathcal{A}}\left(\left(\frac{\Delta}{\delta}\right)^{12}-\left(\frac{\Delta}{\delta}\right)^{6}\right)\sqrt{g}\;d^{2}\sigma}_{\text{Self-
avoidance}}\,,$ (1)
where $\mathcal{A}$ is the area of the surface elements and
$\partial\mathcal{A}$ are the corresponding boundaries; $\mathcal{T}$’s refer
to the tubules [see Fig. 2]. We model the bending energy of the membrane by
using the Helfrich terms [36], which depend on the mean curvature $H$ and
Gaussian curvature $K$. The surface- and line-tension energies arise from the
stretching of the membrane’s constituent molecules and depend on the tension
or stress applied to the membrane. The term with coefficient $\mu$ penalizes
deviations from the membrane’s preferred surface area, $\gamma$ is the line
tension of the surface edge, and $\Gamma$ is the line tension of the tubule.
The membrane’s edges can coalesce to form tubules, as shown in Fig. 3; a
tubule is effectively two merged edges, whence $\Gamma=2\gamma$. We neglect
the energetics of the tubule-edge junction (the orange region in Fig. 2)
because it involves higher-order gradients than those we retain in Eq. (1).
The self-avoidance interactions between membranes are captured by the Lennard-
Jones-type potential [38], with coefficient $\mathcal{B}$, where $\delta$ is
the inter-layer spacing and $\left(2^{1/6}\Delta\right)$ is the preferred
spacing.
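The self-avoidance term in Eq. (1) can be checked numerically: its areal density is minimized, with value $-\mathcal{B}$, exactly at the preferred spacing $\delta=2^{1/6}\Delta$. A minimal sketch (the function name is ours):

```python
def self_avoidance_density(delta, Delta, B=1.0):
    """Lennard-Jones-type areal energy density from Eq. (1):
    4*B*[(Delta/delta)^12 - (Delta/delta)^6], with delta the
    inter-layer spacing and 2^(1/6)*Delta the preferred spacing."""
    r = Delta / delta
    return 4.0 * B * (r ** 12 - r ** 6)

# At the preferred spacing the density equals -B (here B = 1):
d_min = 2 ** (1 / 6) * 1.0
print(round(self_avoidance_density(d_min, 1.0), 6))  # -> -1.0
# Below the preferred spacing the term is repulsive (positive):
print(self_avoidance_density(0.9, 1.0) > 0.0)  # True
```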
Figure 3: Tubule formation: (a) A schematic where the edges of the double
bilayer join together to form a tubule; (b) The effective process when
modelled with a 2d surface and 1d curve preserves the Euler characteristic
$\chi$, with $V$-vertices, $E$-edges, and $F$-faces.
For a free-standing membrane, with $\delta\gg\Delta$, a variation of the
Hamiltonian (1), with respect to its shape, leads to [39, 40]
$\displaystyle 4\kappa H(H^{2}-K)-2\mu H+\kappa\nabla^{2}(2H)=0\,,$ (2)
in the bulk; and the corresponding boundary conditions are:
$\displaystyle\left.\left[2\kappa
H+\kappa_{G}\kappa_{n}\right]\right|_{\partial\mathcal{A}}$ $\displaystyle=$
$\displaystyle 0\,,$
$\displaystyle\left.\left[-2\kappa\partial_{\perp}H+\mu\kappa_{n}+\kappa_{G}\partial_{\parallel}\tau_{g}\right]\right|_{\partial\mathcal{A}}$
$\displaystyle=$ $\displaystyle 0\,,$ $\displaystyle\left.\left[2\kappa
H^{2}+\kappa_{G}K+\mu+\gamma\kappa_{g}\right]\right|_{\partial\mathcal{A}}$
$\displaystyle=$ $\displaystyle 0\,,$ (3)
where $\kappa_{n}$ and $\kappa_{g}$ are, respectively, the normal and the
geodesic curvatures of the boundary curve, with respect to (w.r.t.) the
surface, $\tau_{g}$ is the torsion,
$\partial_{\parallel}\equiv\hat{\mathbf{t}}\cdot\nabla$ is the derivative
tangential to the boundary curve, and
$\partial_{\perp}\equiv\hat{\mathbf{n}}_{s}\cdot\nabla$ is the derivative
directed along the outward normal to the surface boundary.
Entropic contributions arise because of the membrane’s ability to explore
various conformations via thermal fluctuations, which lead to scale-dependent
renormalizations of the coefficients in Eq. (1). Hence, we consider the free-
energy contribution associated with the fluctuations of the membrane, which
can be written as
$\displaystyle\mathcal{F}$ $\displaystyle=$
$\displaystyle-\ln{\mathcal{Z}}/\beta\,,\text{ where }\beta=1/k_{B}T\,,\text{
and }$ $\displaystyle\mathcal{Z}$ $\displaystyle=$
$\displaystyle\int\mathcal{D}[\mathcal{C}]\;\exp(-\beta\mathcal{H})\,,$ (4)
where $\mathcal{Z}$ is the functional integral over all possible
configurations of the membrane shape, with $\mathcal{D}[\mathcal{C}]$ the
measure. We assume that the free energy has the same form as the Hamiltonian
(1), with these renormalized coefficients. The inclusion of short-wavelength
fluctuations renormalizes the bending moduli at the length scale $L$ as
follows [31]:
$\displaystyle\tilde{\kappa}$ $\displaystyle=$
$\displaystyle\kappa_{0}-\frac{3}{4\pi\beta}\;\ln\left(L/a\right)\,,$ (5)
$\displaystyle\tilde{\kappa}_{G}$ $\displaystyle=$
$\displaystyle\kappa_{G0}+\frac{5}{6\pi\beta}\ln\left(L/a\right)\,,$ (6)
where $a$ is the molecular-length cut-off; $\kappa_{0}$ and $\kappa_{G0}$ are
bare quantities, and $\xi_{\kappa}=a\;\exp\left(4\pi\beta\kappa_{0}/3\right)$
and $\xi_{\kappa_{G}}=a\;\exp\left(6\pi\beta\kappa_{G0}/5\right)$ are the
corresponding thermal persistence lengths. The renormalized surface tension is
$\displaystyle\tilde{\mu}=\mu_{0}+\frac{\mu_{0}}{4\pi\beta\;\kappa_{0}}\ln\left(L/a\right)\,.$
(7)
The self-avoidance term also undergoes renormalization, with
$\displaystyle\tilde{\Delta}=a+1/\sqrt{\tilde{\kappa}|\tilde{\mu}|\beta^{2}}\,.$
(8)
Furthermore, if $\delta=2^{1/6}\delta_{0}+\Delta$, then the quadratic
expansion of the compression term is $\sim
2^{17/3}\mathcal{B}\;(\Delta/\delta_{0})^{2}$. Therefore, the renormalized
layer compressibility is [42]
$\displaystyle\tilde{\mathcal{B}}=\mathcal{B}_{0}+\frac{\pi^{2}\tilde{\Delta}}{2^{2/3}\;\kappa\;\beta^{2}\;(\tilde{\Delta}-\delta_{0})^{4}}\,.$
(9)
Explicit expressions for the renormalization of the line tension in fluid
membranes can be intricate and model-dependent, and there is no universally
accepted formula for them. So, for simplicity, we retain the bare value of
this line tension.
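The renormalization formulas in Eqs. (5)-(7) are straightforward to evaluate. The sketch below, with illustrative bare values of our choosing, confirms that $\tilde{\kappa}$ softens while $\tilde{\kappa}_{G}$ grows with the length scale $L$:

```python
import math

def renormalized_moduli(kappa0, kappaG0, mu0, L, a, beta):
    """Scale-dependent coefficients, following Eqs. (5)-(7)."""
    log = math.log(L / a)
    kappa = kappa0 - 3.0 / (4.0 * math.pi * beta) * log    # Eq. (5)
    kappaG = kappaG0 + 5.0 / (6.0 * math.pi * beta) * log  # Eq. (6)
    mu = mu0 + mu0 / (4.0 * math.pi * beta * kappa0) * log # Eq. (7)
    return kappa, kappaG, mu

def persistence_length(kappa0, a, beta):
    """xi_kappa = a * exp(4*pi*beta*kappa0/3)."""
    return a * math.exp(4.0 * math.pi * beta * kappa0 / 3.0)

# Illustrative bare values (not fitted to any experiment):
k, kG, mu = renormalized_moduli(kappa0=5.0, kappaG0=5.0, mu0=0.1,
                                L=100.0, a=1.0, beta=1.0)
print(k < 5.0, kG > 5.0)  # bending softens, Gaussian stiffness grows
```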
In Sec. 4, we consider different morphologies that we classify on the basis of
their topologies. Furthermore, to simplify our analysis, we study only
periodic structures. Therefore, to compare the stabilities of the different
phases and their morphologies, we use the free energy per unit volume. The
energy-minimizing solutions of the total free energy in Eq. (1) are not
amenable to analytical treatment because of the geometry-related non-
linearities. It is imperative, therefore, to look for approximate numerical
solutions. In the next Section, we give a brief overview of the Surface
Evolver package that we use for such numerical simulations.
## 3 Methods: Surface Evolver
Surface Evolver (SE) is an open-source numerical package used in computational
physics and mathematics to model and simulate the behaviours of surfaces and
interfaces. Initially devised by Kenneth Brakke for mean-curvature-flow
analysis [23], this tool has gained wide usage across diverse realms,
prominently in exploring the physics of soap films, minimal surfaces, and
various interfacial phenomena.
Within SE, smooth 2D surfaces are represented as piece-wise-linear
triangulated surfaces embedded within Euclidean 3D space.
Key attributes of SE encompass:
1. (a)
Numerical Surface Energy Minimization: Enables the discovery of minimal
surfaces and configurations, while adhering to specified constraints.
2. (b)
Versatile Geometry Simulation: Facilitates the simulation of intricate
geometries, encompassing diverse surface types, shapes, and interactions.
3. (c)
Dynamic Interface Simulation: Allows the study of surface evolution over time,
yielding insights into growth, deformation, and phase transitions.
4. (d)
Parameter Control: Offers manipulation of parameters like surface tension,
boundary conditions, and constraints, thereby enabling the exploration of
surface behaviours in varying environments and physical conditions.
SE harnesses different numerical techniques (energy-minimization algorithms)
to ascertain surface-equilibrium configurations. These algorithms calculate
the most energy-efficient shapes and structures for surfaces, with specified
boundary conditions and constraints. Its applications in material science
include studies of surface energies, stability assessments, and interface
analyses. Moreover, SE serves as a valuable tool in comprehending surface
interactions within biological systems, so it can aid in the modelling of
biological membranes and cell structures.
Working with SE typically involves scripting, in its dedicated programming
language, which allows users to define surface energies, constraints, and
boundary conditions. In our specific study, we introduce an auxiliary term,
represented as the self-avoidance $(\mathcal{B})$ in Eq. 1, into the SE
calculations to account for membrane interactions. This term incorporates the
interaction between membranes by estimating the inter-layer distance, denoted
as $\delta$, within the lamellar phase. We determine this distance from
triangles along the vertex normal, as illustrated in Fig. 4(a).
Figure 4: (a) Schematic diagram demonstrating the calculation of the distance
between a vertex and neighbouring layers ($\delta$), along the vertex-normal.
(b) Lennard-Jones-type potential between the surfaces.
SE minimizes the energy by numerical shape variation, which is implemented by
using a conjugate-gradient descent [53], w.r.t. the coordinate variables of
the vertices, as follows:
$\displaystyle\frac{\partial\mathbf{x}_{\alpha}}{\partial\tau}=\mathbf{\Upsilon}_{\alpha\beta}\cdot\frac{\delta\mathcal{F}}{\delta\mathbf{x}_{\beta}}\,,$
(10)
where $\mathbf{x}_{\alpha}$ is the 3d position vector of the vertex labeled by
$\alpha$, $\mathbf{\Upsilon}_{\alpha\beta}$ represents the vertex-dependent
mobility matrix, and $\tau$ is the iteration scale. Since we are modeling a
fluid membrane, the redundant evolution, leading to re-parameterization (in-
plane movement) of the vertices in the bulk, is eliminated by redefining the
mobility matrix as
$\displaystyle\mathbf{\Upsilon}_{\alpha\beta}\rightarrow\tilde{\mathbf{\Upsilon}}_{\alpha\beta}\;:=\;\mathbf{n}_{\alpha}\cdot\mathbf{\Upsilon}_{\alpha\beta}\,,$
(11)
where $\mathbf{n}_{\alpha}$ is the normal at the vertex labeled $\alpha$. The
above process enforces only the normal motion of surfaces. Furthermore, we set
all the vertex mobilities to be equal, i.e.,
$\tilde{\mathbf{\Upsilon}}_{\alpha\beta}=\Upsilon\;\delta_{\alpha\beta}\;\vec{\mathbf{n}}_{\alpha}$,
where $\delta_{\alpha\beta}$ is a Kronecker delta. The scale factor $\Upsilon$
is estimated by using Newton’s method; and it is proportional to the inverse
of the Hessian [56]. We also note that the Hessian of the edge vertices
dominates over that of the bulk, slowing the convergence of the edge modes. To
compensate, after every iteration of the conjugate-gradient descent we vary
only the edge vertices, while keeping all the vertices in the bulk fixed.
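The normal-projection idea of Eqs. (10)-(11) can be sketched for a single vertex as follows. This is an illustrative descent step written by us, not the Surface Evolver implementation itself:

```python
def normal_step(x, n, grad, upsilon=0.1):
    """One descent iteration in the spirit of Eqs. (10)-(11): keep only
    the component of the energy gradient along the (unit) vertex normal,
    so tangential, re-parameterization motion of the fluid membrane is
    suppressed. x, n, grad are 3-tuples; upsilon is the mobility scale."""
    g_dot_n = sum(gi * ni for gi, ni in zip(grad, n))
    return tuple(xi - upsilon * g_dot_n * ni for xi, ni in zip(x, n))

# A purely tangential gradient produces no motion of the vertex:
print(normal_step((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), (1.0, 2.0, 0.0)))
# -> (0.0, 0.0, 0.0)
```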
We explore various morphologies detailed in Sec. 4. These distinct shapes
possess unique topologies, which can be classified by their homology groups
[22]. Our study is initiated with triangular meshes that represent fundamental
mesh structures corresponding to each morphology. The process of energy
minimization employs iterative triangle and edge refinements, alongside vertex
evolution guided by Eq. (10). This iterative refinement seeks stable surface
configurations, ensuring that vertex-coordinate changes remain within $5\%$ of
the unit cell size. Triangle refinements enhance surface resolution by
subdividing large triangles into smaller ones, effectively capturing intricate
surface details. We achieve triangle refinement and reshaping by using Pachner
moves [24], depicted in Fig. 5. These transformations preserve the
triangulation’s topological properties.
Figure 5: A schematic diagram of Pachner moves that we use to refine and re-
triangulate the mesh: (a) refinement; (b) triangle weeding; (c) edge weeding;
(d) edge flip.
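The refinement move of Fig. 5(a), a 1-to-4 split at the edge midpoints, preserves the Euler characteristic $\chi=V-E+F$. The following self-contained sketch verifies this on a tetrahedral mesh; the data structures are our own, not Surface Evolver's:

```python
def subdivide(verts, tris):
    """One 1-to-4 refinement pass: every triangle is split at its edge
    midpoints into four smaller ones, as in Fig. 5(a). Topology is
    preserved, so the Euler characteristic is unchanged."""
    verts = list(verts)
    mid = {}  # edge (i, j), i < j  ->  index of its midpoint vertex

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in mid:
            a, b = verts[key[0]], verts[key[1]]
            verts.append(tuple((a[k] + b[k]) / 2.0 for k in range(3)))
            mid[key] = len(verts) - 1
        return mid[key]

    new_tris = []
    for i, j, k in tris:
        a, b, c = midpoint(i, j), midpoint(j, k), midpoint(k, i)
        new_tris += [(i, a, c), (a, j, b), (c, b, k), (a, b, c)]
    return verts, new_tris

def euler_characteristic(tris):
    """chi = V - E + F for a triangle list."""
    V = len({v for t in tris for v in t})
    E = len({(min(u, v), max(u, v)) for t in tris
             for u, v in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0]))})
    return V - E + len(tris)

# A tetrahedron (chi = 2) keeps chi = 2 after refinement:
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
verts2, tris2 = subdivide(verts, tris)
print(euler_characteristic(tris), euler_characteristic(tris2))  # 2 2
```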
The systems that we consider are periodic in x, y, and z directions, with
orthorhombic or cubic unit cells; we implement this in SE by using the
SYMMETRY option [52]. We obtain the energy per unit volume to compare the
stabilities of different phases of the system.
## 4 Results
Fluid membranes, which avoid self-intersections and incorporate edges, possess
the capacity to self-organize into diverse structural configurations, to adapt
to distinct environmental conditions, and thus yield various phases [see,
e.g., Refs. [46] and [57]]. Our model introduces, in addition, the possibility
of tubule formation; this enriches the spectrum of possible phases in the
system.
In our subsequent explorations of membrane stability, we consider cases in
which $\kappa>0$ significantly surpasses $k_{B}T$. We focus on
$\kappa_{G}=\kappa$; although the stability of fluid membranes allows for both
positive and negative values of $\kappa_{G}$, we explore only the region
$\kappa_{G}>0$.
We investigate the stability of diverse morphologies, by using our continuum
model (1) with renormalized coefficients (as discussed above). We utilize
Surface Evolver (SE) to capture the mean shapes. We focus on the region
$\mathcal{B}\sim\beta$; here $\delta_{0}$, which equals the double-layer
width, induces a repulsive potential between surfaces. This enables
fluctuations to counterbalance attractive forces and allows surfaces to
exist independently, without condensation.
Our exploration of the stabilities of different morphologies begins with the
initialization of minimal triangular meshes, with either orthorhombic or cubic
symmetry, that are designed to yield the desired topology. These meshes evolve
via SE to minimize the energy [Eq. (1)]. As the vertex-coordinate change falls
below a $5\%$ threshold relative to the unit-cell size, we iteratively refine
and re-triangulate the mesh by using the Pachner moves described above. We
continue this until further refinement fails to induce deformations [given the
stipulated threshold].
We measure energies in units of $\beta$ and normalize lengths by
$\xi_{\kappa}$. Figure 6 illustrates the phase diagram that we have obtained
for model (1) with the representative value $\kappa=5\beta$ in the two-
dimensional parameter space $[\exp(-\beta\gamma),\beta\mu]$; this shows a
variety of phases with non-trivial structures that minimize the energy in
different regions of this parameter space. We proceed to present an in-depth
analysis of these phases and the phase diagram; and we elucidate the interplay
between $\mu$ and $\gamma$ that stabilises different phases.
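As a rough illustration of how such a phase diagram is assembled, one can tabulate candidate free-energy densities over the $(\gamma,\mu)$ grid and select the minimum at each point. The toy energy forms below are placeholders chosen only to show the bookkeeping, not the paper's fitted expressions:

```python
import numpy as np

# Toy free-energy densities f(mu, gamma) for three candidate phases.
# These forms are illustrative placeholders, not the paper's expressions.
candidates = {
    "lamellar": lambda mu, gam: mu,               # favoured by large negative mu
    "droplet":  lambda mu, gam: 0.1 * abs(mu),    # favoured near mu ~ 0
    "tubule":   lambda mu, gam: 0.5 * mu + gam,   # favoured by negative gamma
}

mus = np.linspace(-1.0, 1.0, 101)
gams = np.linspace(-1.0, 1.0, 101)
names = list(candidates)

phase = np.empty((len(mus), len(gams)), dtype=int)
for i, mu in enumerate(mus):
    for j, gam in enumerate(gams):
        energies = [candidates[n](mu, gam) for n in names]
        phase[i, j] = int(np.argmin(energies))    # minimal-energy phase index

# e.g. strongly negative mu with positive gamma favours lamellae here
print(names[phase[0, -1]])  # → lamellar
```

Each cell of `phase` then labels the energetically preferred morphology at that point of parameter space.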
Figure 6: The phase diagram for model (1) with the representative value
$\kappa=5\beta$ in the two-dimensional parameter space
$[\exp(-\beta\gamma),\beta\mu]$; this shows a variety of phases with non-
trivial structures that minimize the energy in different regions of this
parameter space.
* ✮
Lamellar phases: The system exhibits a strong inclination to form lamellae
when $\mu$ has large negative values; this favours the filling of the
domain with layered surfaces (or lamellae). In describing the membrane stack
within this phase, a persistence length ($\xi_{\kappa}$) signifies the average
distance over which neighbouring membranes collide because of thermal
fluctuations [50]. These collisions result in a steric repulsion between the
membranes and, consequently, renormalize the inter-layer spacing to
$\Delta\approx\xi_{l}\sim 1/\sqrt{\kappa|\mu|\beta^{2}}$.
* ❏
Flat lamellae (FL): When the energy ($\gamma$), associated with edges, becomes
excessively high, the system actively avoids edges and the lamellae adopt a
flat mean profile (see Fig. 7) to minimize the bending energy. In terms of
topology, the phase comprises disconnected sheets, displaying long-range
orientational order of the surface normal and quasi-long-range positional
order along this normal direction.
If $\lambda_{\perp}$ and $\lambda_{\parallel}$ are the unit-cell dimensions
along the layer normal and in the lateral directions, respectively, then the
free energy and the energy density reduce to:
$\displaystyle\mathcal{F}_{FL}$ $\displaystyle=$
$\displaystyle\tilde{\mu}\>\lambda_{\parallel}^{2}+4\>\tilde{\mathcal{B}}\left(\left(\frac{\tilde{\Delta}}{\lambda_{\perp}}\right)^{12}-\left(\frac{\tilde{\Delta}}{\lambda_{\perp}}\right)^{6}\right)\lambda_{\parallel}^{2}\,;$
$\displaystyle f_{FL}$ $\displaystyle=$
$\displaystyle\mathcal{F}_{FL}/(\lambda_{\perp}\lambda_{\parallel}^{2})\,;$
(12)
$\lambda_{\parallel}=L=\xi_{\kappa}$; and we minimize $f_{FL}$ w.r.t.
$\lambda_{\perp}$.
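Eq. (12) thus reduces to a one-dimensional minimization over $\lambda_{\perp}$; a sketch with illustrative dimensionless parameters ($\tilde{\mu}=-1$, $\tilde{\mathcal{B}}=1$, $\tilde{\Delta}=1$, all assumed, not the paper's values):

```python
from scipy.optimize import minimize_scalar

# Illustrative dimensionless parameters (assumed, not fitted values).
mu_t, B_t, Delta_t = -1.0, 1.0, 1.0

def f_FL(lam_perp):
    """Eq. (12): free-energy density of flat lamellae vs inter-layer spacing."""
    x = Delta_t / lam_perp
    return mu_t / lam_perp + 4.0 * B_t * (x**12 - x**6) / lam_perp

res = minimize_scalar(f_FL, bounds=(0.5, 10.0), method="bounded")
lam_star = res.x     # optimal inter-layer spacing lambda_perp
```

For these parameters the minimum sits at a spacing of order $\tilde{\Delta}$, as expected from the Lennard-Jones-like form of the inter-layer potential.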
(a) (b)
Figure 7: Structure of flat lamellae: (a) an orthorhombic unit cell; (b) the
repetition of unit cells that form the final configuration of a flat lamellar
phase.
* ❏
Lamellae with helical edges (LH): In the lamellar phase, as $\gamma$ is
reduced to a large negative value, the lamellae prefer to break and form
sheets with edges. But simple circular holes or edges are unstable, so the
system develops topologically non-trivial helical edges, exhibiting a
morphology that resembles a staircase (see Fig. 8(a)). This morphology
consists of repeating, alternating layers, or terraces, connected via the
helical edge. The helices are chiral; and opposite chirality helical edges can
condense to form a square lattice as we show in Fig. 8(b).
We use the continuum elasticity model (1) to obtain the free energy. The
lattice spacing, $\alpha$, of the helical edges is stabilized by the
fluctuations; and it is determined self-consistently. Energetically, smaller
values of $\alpha$ would lead to a higher density of edges and hence a lower
energy. In our model, the lattice-spacing limit is set by the validity of the
lamellar structure, which is $\alpha\geq\xi_{\kappa}$, and we saturate this
limit. Furthermore, by approximating the surface to be a helicoid, which is a
minimal surface, we satisfy the force balance in the bulk [Eq. (2)]. The
boundary conditions in Eq. (3) relate the radius of the helical boundary to
the interlayer spacing via [54]
$\displaystyle-\frac{\tilde{\kappa}_{G}\;(\tilde{\xi}_{l}/2\pi)^{2}}{(r^{2}+(\tilde{\xi}_{l}/2\pi)^{2})^{2}}+\frac{\tilde{\gamma}\;r}{(r^{2}+(\tilde{\xi}_{l}/2\pi)^{2})}+\tilde{\mu}=0\,,$
(13)
where $r$ is the core radius (see Fig. 8(a)), and
$\xi_{l}=1/\sqrt{\kappa|\mu|\beta^{2}}$. We begin with a coarse triangulated
surface, as in Fig. 9(a), with $\lambda_{\parallel}=2\alpha$; we then let SE
evolve the vertices with multiple mesh refinements to reach the stable
structure, shown in Fig. 9(c).
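The boundary condition of Eq. (13) determines the core radius $r$ by a bracketed root find. The dimensionless parameter values below are illustrative assumptions chosen so that a root is bracketed, not values taken from the paper:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative dimensionless parameters (assumed, not the paper's values).
kG_t, gamma_t, mu_t = 1.0, -0.5, 0.5
xi_l_t = 2.0 * np.pi             # so the pitch parameter c = xi_l/(2*pi) = 1
c = xi_l_t / (2.0 * np.pi)

def boundary_condition(r):
    """Left-hand side of Eq. (13); its root gives the helicoid core radius."""
    return (-kG_t * c**2 / (r**2 + c**2)**2
            + gamma_t * r / (r**2 + c**2)
            + mu_t)

# The LHS is negative at small r (Gaussian-curvature term dominates) and
# tends to mu_t > 0 at large r, so a root is bracketed in between.
r_core = brentq(boundary_condition, 1e-6, 50.0)
```

For these particular values the balance happens to give $r=1$ in units of $c$; with other parameters the bracket endpoints would need rechecking for a sign change.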
(a) (b)
Figure 8: Schematic diagrams: (a) (right panel) the helicoid core; (left
panel) the helicoid, with the core removed and showing the corresponding
helical edge; (b) a unit cell of a pair of left(L)- and right(R)-handed
helicoids arranged on a square lattice [left panel: 3d view; right panel: top
view].
(a) (b) (c)
Figure 9: Schematic diagrams showing the structure of lamellae with helical
edges: (a) An orthorhombic unit cell with a triangulated surface that has two
left- and two right-handed helical edges, as in Fig. 8(b). This surface serves
as the input initial condition for the SE. (b) The repetition of unit cells
with orthorhombic symmetry. (c) The final configuration after energy
minimization via SE.
Studies such as those of Refs. [13] and [39] associate the LH structures with
the rough ER, but they do not take into account the self-avoidance of the
membranes. Instead, to enforce a preferred inter-layer spacing in the case of
the staircase-like structure, they use a spontaneous geodesic-curvature term
of the form $\int_{\partial\mathcal{A}}(\kappa_{g}-\bar{\kappa}_{g})^{2}\;dl$,
where $\kappa_{g}$ and $\bar{\kappa}_{g}$ are the geodesic curvatures of the
boundary and the corresponding spontaneous value induced by the protein that
occupies the helical boundaries (and acts as springs with a preferred pitch).
But for a lattice of helicoids with lattice spacing greater than
$\xi_{\kappa}$, thermal fluctuations render the structure unstable without
self-avoidance. Furthermore, the term $\mathcal{F}_{geo}$ is of higher
order compared to $\int K\;d^{2}\sigma$, as a consequence of the Gauss-Bonnet
theorem, which relates the geodesic curvature and Gaussian curvature through
the relation,
$\displaystyle\int\int_{\mathcal{A}}\mathcal{K}\;\sqrt{g}\;d^{2}\sigma=2\pi(1-g)+\oint_{\partial\mathcal{A}}k_{g}ds\,,$
(14)
where $\mathcal{K}$ is the Gaussian curvature, $g$ is the genus of the
surface, and $k_{g}$ is the geodesic curvature. Now, if we use the $\oint
k_{g}^{2}ds$ term, we must include, for consistency, higher-order terms in the
Gaussian curvature term in the bulk. Furthermore, the inclusion of the
quadratic geodesic-curvature term stabilizes stacks of membranes with holes of
radius $\sim 1/\kappa_{g}$; these were not reported in Ref. [13].
* ✮
Sponge phase: If $\kappa_{G}>0$, and we use a high value of $\gamma$ and a
small value of $|\mu|$, the interlayer spacing decreases notably.
Consequently, lamellae develop the propensity to merge, giving rise to
intricately interconnected membrane networks that create a porous,
labyrinthine structure called a sponge phase. In contrast to the flat lamellar
phase, the sponge phase likes to bend and maximize negative Gaussian
curvature. This phase adopts shapes akin to minimal surfaces with zero mean
curvature without edges. The resulting sponge phase is characterized by a
disordered and porous structure devoid of long-range order.
We examine periodic sponge structures with cubic symmetry; these are often
referred to as a plumber’s nightmare because of their complex morphology [46,
57] (see Fig. 11(c)). To find the optimal plumber’s-nightmare structure, we
start with a triangular mesh, as shown in Fig. 11(a), and let the SE evolve
the surface, iteratively refining the structure until the convergence criteria
are satisfied. These structures exhibit three-dimensional crystalline order
and are such that their area $\sim\lambda^{2}$, with a prefactor that we
determine from the plot in Fig. 10, which we obtain by using SE. Since it
is a minimal surface, $H=0$, and the integral of the Gaussian curvature per
unit cell is independent of $\lambda$ [it is a function only of the genus].
Furthermore, for the plumber’s-nightmare structure, the entropic contribution
is constant [46]. Hence, the total free energy density for the
plumber’s-nightmare structure for a unit cell of size $\lambda(=L)$ is
$\displaystyle f_{SP}\approx
A_{1}\tilde{\mu}/\lambda+A_{2}-A_{3}\tilde{\kappa}_{G}/\lambda\,,$ (15)
where $A_{1}$ is the slope of the linear fit in Fig. 10(a), $A_{2}$ is the
constant entropic contribution, and $A_{3}$ is the slope from Fig. 10(b).
(a) (b)
Figure 10: Scaling relations for the plumber’s-nightmare phase obtained by
using SE: (a) Plot of the area vs the unit-cell length ($\lambda$); (b) Plot
of the total Gaussian curvature vs $\lambda$.
Note that, for $\kappa_{G}>0$ and $\mu<0$, $f$ decreases as $\lambda$
decreases, so we have a UV runaway, and the stability of the structure is
dictated by the short-distance cut-off; in our numerical study, the double-
bilayer thickness provides this cut-off, i.e., $\lambda\sim a$.
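A quick numerical check of this cut-off-dominated behaviour, using assumed illustrative coefficients in Eq. (15) (the values of $A_{1}$, $A_{2}$, $A_{3}$, $\tilde{\mu}$, and $\tilde{\kappa}_{G}$ below are placeholders, not the fitted slopes of Fig. 10):

```python
import numpy as np

# Illustrative coefficients (assumed): A1, A2, A3 > 0, mu < 0, kappa_G > 0.
A1, A2, A3 = 1.0, 0.2, 1.0
mu_t, kG_t = -0.5, 1.0

f_SP = lambda lam: A1 * mu_t / lam + A2 - A3 * kG_t / lam   # Eq. (15)

lams = np.linspace(0.1, 10.0, 100)
vals = f_SP(lams)
# f_SP grows monotonically with lambda here, i.e. it *decreases* as lambda
# shrinks, so the short-distance cut-off controls the stable scale.
```

Any such choice with $\tilde{\mu}<0$ and $\tilde{\kappa}_{G}>0$ makes both $\lambda$-dependent terms negative and $\propto 1/\lambda$, reproducing the runaway toward small $\lambda$.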
(a) (b) (c)
Figure 11: Structure of the plumber’s nightmare: (a) Triangulated surface in a
cubic unit cell that serves as the input to SE; (b) the repetition of unit
cells with cubic symmetry; (c) final configuration after energy minimization
via SE.
* ✮
Droplet phase: For positive $\mu$ and positive $\gamma$, the free energy can
be reduced by the formation of spherical droplets whose radii are determined
by fluctuations. The characteristic length scale, associated with $\mu>0$,
determines the droplet size $\xi_{\mu}=1/\sqrt{4\pi\mu\beta}$; and the shape
fluctuations determine the spacing between the droplets, which is of the order
of $\xi_{\kappa}$. In general, for $\kappa_{G}>0$, the droplets proliferate
[high genus topology]. For large $\mu$, the system prefers to avoid membranes
by breaking up into dilute non-spherical droplets. We neglect the self-
avoidance term because of the dilute-droplet assumption. To perform a self-
consistent comparison between different morphologies, while keeping the
analysis simple, we use a mono-disperse distribution of spheres with
$\xi_{\mu}$ as the cut-off radius, and the lattice spacing of the order of
$\lambda=\xi_{\kappa}$. Then, the corresponding free-energy density is [32]
$\displaystyle f_{D}\approx
4\pi\>\tilde{\mu}\>\xi^{2}_{\mu}/\lambda^{2}+4\pi\>(\tilde{\kappa}/2+\tilde{\kappa}_{G})/\lambda^{2}-\ln(\xi_{\kappa}/\xi_{\mu})/(\beta\lambda^{2})\,.$
(16)
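Evaluating Eq. (16) is straightforward once the length scales are fixed; a sketch with assumed, illustrative dimensionless inputs (units with $\beta=1$, and the droplet lattice spacing set to $\lambda=\xi_{\kappa}$ as in the text):

```python
import numpy as np

# Illustrative dimensionless inputs (assumed): work in units with beta = 1.
beta, mu_t, kappa_t, kG_t, xi_kappa = 1.0, 0.5, 5.0, 5.0, 10.0

xi_mu = 1.0 / np.sqrt(4.0 * np.pi * mu_t * beta)    # droplet-radius cut-off
lam = xi_kappa                                      # droplet lattice spacing

# Eq. (16): surface-tension, bending, and entropy contributions in turn.
f_D = (4.0 * np.pi * mu_t * xi_mu**2 / lam**2
       + 4.0 * np.pi * (kappa_t / 2.0 + kG_t) / lam**2
       - np.log(xi_kappa / xi_mu) / (beta * lam**2))
```

Note that the first term simplifies exactly to $1/\lambda^{2}$ for any $\mu$, since $4\pi\mu\xi_{\mu}^{2}=1/\beta$ by the definition of $\xi_{\mu}$.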
* ✮
Tubule phases: When $\mu$ is positive and $\gamma$ is negative, tubules
dominate the morphology; the tubules exhibit string-like undulations, and
their junctions contribute translational entropy [33].
* ✧
Tubule nematic: For large positive values of $\mu$, the system avoids forming
surfaces and consists only of tubules. For the simple model that we consider,
with only line tension for the tubules, they can align to give rise to nematic
order, which optimizes packing. We consider a hexagonal packing of tubules and
estimate the free-energy density.
* ✧
Tubules with Y-junction: In the case of negative $\mu$ close to zero and
negative values of $\gamma$, the tubules have string-like undulations and form
3-way Y-junctions, with a flat triangular patch of surface, as shown in Fig.
12(a). The Y-junctions act as particles and give rise to translational entropy
as in the Tlusty-Safran transition [33]. The break-up of the membrane into
3-way junctions is achieved through the transition in Fig. 3. In our study, we
consider the cubic arrangement of Y-junctions shown in Fig. 12.
(a) (b)
Figure 12: Schematic diagram showing the structure of tubules with
Y-junctions: (a) Triangulated surface in a cubic unit cell that serves as our
input to SE. The tubules and surface edges are highlighted as 3d red tubes,
but in SE they are 1d curves; (b) final configuration after energy
minimization via SE, such that the unit cell is repeated with cubic symmetry.
(a) (b)
Figure 13: A schematic diagram showing different possible topologies with
4-way junctions, with a Burgers loop highlighted in black: (a) fragments of
flat membranes that connect to form a closed loop; (b) saddle membranes
connected by tubules.
* ✧
Tubules with saddle junctions: If $\kappa_{G}>0$, for large negative $\gamma$
and small positive $\mu$, the edges in the staircase can grow and coalesce to
form a network of tubules connected by small patches of sheets, to form
saddle-junctions. The patches form pieces of saddle minimal surfaces that have
negative Gaussian curvature and zero mean curvature (see Fig. 14(c)). Here the
shape and size of the patch are determined by the competition between the
positive surface tension, $\mu$, which tries to minimize the area, and the
negative contribution from the Gaussian curvature. To embed a negative
Gaussian curvature surface in Euclidean three-dimensional space, the integral
of the Gaussian curvature scales sub-linearly in the surface area [55] and
hence leads to a finite size. We consider a lattice of such saddles and
estimate their shape and size numerically.
(a) (b) (c)
Figure 14: Schematic diagram showing the structures of saddle junctions: (a)
Triangulated surface in a unit cell that serves as our input to SE. The
tubules and surface edges are highlighted as 3d red tubes, but in SE they are
1d curves; (b) final configuration after energy minimization using SE, such
that the unit cell is repeated with orthorhombic symmetry; (c) the saddle
shape of the junction with net negative Gaussian curvature minimizes the
energy for $\kappa_{G}>0$.
The saddle junctions undergo a symmetry-breaking transition into the
Y-junction phase, through the tubule formation process described in Fig. 3,
wherein a membrane breaks up into pieces of triangles as shown in Fig. 15.
(a) (b)
Figure 15: Symmetry breaking transition from a saddle junction (a) to a
Y-junction in (b).
The phase diagram in Fig. 6 summarizes the analysis presented above. As
observed in Ref. [13], the lamellar phase with helical edges corresponds to
the rough ER and the tubule phases correspond to the smooth ER. Through our
model, we highlight that there exists a structural transition between the
lamellae with helical edges, saddle junctions, and Y-junctions such that the
interior of the double bilayer, the lumen, maintains contiguous connectivity.
## 5 Conclusions
Our study demonstrates that the diverse structural phases witnessed within the
endoplasmic reticulum (ER) can be effectively replicated by using our model
(1) that encompasses interacting fluid membranes featuring edges. By employing
energy-minimization techniques and variational methods, we derive the stable
shapes characterizing these distinct structures.
The resultant phase diagram, derived through numerical optimization, unveils
transitions from lamellar configurations to helicoidal stacks, closely
resembling the experimentally observed structures within the rough endoplasmic
reticulum (RER). Furthermore, our findings highlight the existence of
homotopy-preserving transitions, illustrating transformations between
helicoidal stacks, saddle junctions, and Y-junctions.
### 5.1 Discussion
The intricate nature of endoplasmic reticulum (ER) structures, coupled with
the diverse array of factors influencing its morphology, poses a significant
challenge in constructing a comprehensive phase diagram that shows the varied
ER morphologies under diverse conditions. In this context, several critical
points require attention:
1. (a)
The diverse morphological variations across different cell types and
physiological contexts.
2. (b)
Understanding the interconversion and remodelling dynamics between tubules and
sheets.
3. (c)
Investigating the influence of cellular activity on ER morphology.
4. (d)
Unraveling the intricate interplay among lipid bilayers, proteins, and
cellular signalling pathways.
5. (e)
Deciphering the role of proteins in sculpting and determining ER morphology.
Advancements in super-resolution microscopy, live-cell imaging techniques, and
computational modelling continuously contribute valuable insights into ER
morphology. Recent research endeavours have made strides in elucidating phase
transitions and morphological alterations within the ER. However, the
construction of a detailed phase diagram that comprehensively integrates these
findings remains a work in progress.
Limitations of our model: The ER system is, in general, active and far from
equilibrium. While our work identifies key influential conditions and factors
impacting ER morphology, crafting an exact phase diagram that integrates
dynamic activity and out-of-equilibrium components remains an ongoing
challenge. The inclusion of active non-equilibrium phenomena can be
accomplished by using coloured-noise-based membrane fluctuations, as
discussed, e.g., in Ref. [21]; such studies have not yet yielded the
structures we have obtained.
## 6 Acknowledgements
YH and JKA express gratitude to V.A. Raghunathan for valuable inputs and
insightful discussions. JKA thanks Chaitra Hegde for engaging in relevant
discussions.
We acknowledge the Department of Science and Technology, New Delhi, for their
support through the DSTO1359 grant. RP acknowledges the support received from
the Science and Engineering Research Board (SERB) and the National
Supercomputing Mission (NSM). We also extend our gratitude to SERC (IISc) for
providing computational resources.
We acknowledge the use of AI tools like Grammarly and ChatGPT for sentence
rephrasing and grammar checks. Subsequently, the material underwent meticulous
proofreading to ensure precision and rectify any errors. Our process includes
thorough reviews and edits to ensure accuracy, relevance, and coherence in the
finalized text.
## References
* [1] Park, Seong H., and Craig Blackstone. ”Further assembly required: construction and dynamics of the endoplasmic reticulum network.” EMBO reports 11.7 (2010): 515-521.
  * [2] Alberts, Bruce, et al., Molecular Biology of the Cell: Seventh International Student Edition with Registration Card. WW Norton $\&$ Company, 2022.
* [3] Shibata Y, Shemesh T, Prinz WA, Palazzo AF, Kozlov MM, Rapoport TA: Mechanisms determining the morphology of the peripheral ER. Cell, 143:774-788, (2010).
* [4] Braakman I, Hebert DN. ”Protein folding in the endoplasmic reticulum”, Cold Spring Harb Perspect Biol.;5(5):a013201 (2013).
* [5] Caramelo, J.J.; Parodi, A.J. ”A sweet code for glycoprotein folding”, FEBS Lett., 589, 3379–3387, (2015).
* [6] Vance, J.E. Phospholipid Synthesis and Transport in Mammalian Cells. Traffic, 16, 1–18, (2015).
* [7] Maxfield FR, Wüstner D, ”Intracellular cholesterol transport”, The Journal of Clinical Investigation. 110 (7): 891–8, (2002).
* [8] Baumann O, Walz B, ”Endoplasmic reticulum of animal cells and its organization into structural and functional domains”, Int Rev Cytol 205: 149–214, (2001).
  * [9] Toyoshima C, Nakasako M, Nomura H, Ogawa H, ”Crystal structure of the calcium pump of sarcoplasmic reticulum at 2.6 A resolution”. Nature. 405 (6787): 647–55, (2000).
  * [10] Scorrano, et al., BAX and BAK regulation of endoplasmic reticulum Ca2+: A control point for apoptosis. Science, 300, 135–139, (2003).
* [11] Fawcett, D. The Cell, 2nd ed.; W. B. Saunders Company: Philadelphia, PA, USA, 1981; ISBN 9780721635842.
* [12] Voeltz GK, Prinz WA, Shibata Y, Rist JM, Rapoport TA, ”A class of membrane proteins shaping the tubular endoplasmic reticulum”, Cell 124:573–86, (2006).
  * [13] Mark Terasaki, Tom Shemesh, et al., Stacked Endoplasmic Reticulum Sheets Are Connected by Helicoidal Membrane Motifs, Cell, Volume 154, Issue 2, 2013, Pages 285-296.
* [14] Kamien, Randall D., and T. C. Lubensky. ”Chiral lyotropic liquid crystals: TGB phases and helicoidal structures.” Journal de Physique II 7.1 (1997): 157-163.
* [15] Friedman JR, Voeltz GK: The ER in 3D: a multifunctional dynamic membrane network. Trends Cell Biol, 21:709-717, (2011).
* [16] Chen, S., Novick, P., Ferro-Novick, S. ”ER structure and function.” Current Opinion in Cell Biology 25, 428–433, (2013).
* [17] Borgese, N., Francolini, M., Snapp, E. ”Endoplasmic reticulum architecture: structures in flux”. Current Opinion in Cell Biology 18, 358–364, (2006).
* [18] Lodish H, Berk A, Zipursky SL, et al. Molecular Cell Biology. 4th edition. New York: W. H. Freeman; (2000).
* [19] Felix T. Wieland, Michael L. Gleason, Tito A. Serafini, James E. Rothman, The rate of bulk flow from the endoplasmic reticulum to the cell surface, In Cell, Volume 50, Issue 2, 1987, Pages 289-300, ISSN 0092-8674, https://doi.org/10.1016/0092-8674(87)90224-8.
* [20] Schwarz, Dianne S, and Michael D Blower. ”The endoplasmic reticulum: structure, function and response to cellular signalling”, Cellular and molecular life sciences: CMLS vol. 73,1 79-94 (2016).
* [21] Rao, Madan and R. C., Sarasij, Active Fusion and Fission Processes on a Fluid Membrane, Phys. Rev. Lett., (87), 12, 2001.
  * [22] Nakahara, Mikio. Geometry, topology and physics. CRC Press, 2018.
* [23] K. Brakke. ”The motion of a surface by its mean curvature”, In: Mathematical Notes, Vol. 20. Princeton University Press, Princeton. (1978)
* [24] Pachner, Udo, ”P.L. homeomorphic manifolds are equivalent by elementary shellings”, European Journal of Combinatorics, 12 (2): 129–145, (1991).
* [25] Brakke KA. The surface evolver. Experimental mathematics. 1992 Jan 1;1(2):141-65.
* [26] West M, Zurek N, Hoenger A, Voeltz GK. ”A 3D analysis of yeast ER structure reveals how ER domains are organized by membrane curvature”, J. Cell Biol. 193:333–46, (2011).
  * [27] Silke Robitzsch, et al., ”Quantitative analysis of ER morphology in plants: The endoplasmic reticulum in Arabidopsis hypocotyls”, The Plant Cell, Vol. 33, Issue 7, 2264-2281, (2021).
* [28] Shibata Y, Voeltz GK, Rapoport TA. ”Rough sheets and smooth tubules”. Cell. 126 (3): 435–9, (2006).
* [29] Shibata Y, Hu J, Kozlov MM, Rapoport TA, ”Mechanisms shaping the membranes of cellular organelles”. Annu Rev Cell Dev Biol 25:329–354 (2009).
  * [30] W. Helfrich, Helical bilayer structures due to spontaneous torsion of the edges, The Journal of Chemical Physics, 85 (2), 1085–1087, (1986).
* [31] F. David, in Statistical Mechanics of Membranes and Surfaces, edited by D. Nelson, T. Piran, and S. Weinberg (World Scientific, Singapore, 1989).
  * [32] Safran, S.A., et al., Phys. Rev. Lett. 57, 491, (1986).
* [33] T. Tlusty, S. A. Safran, R. Strey, Topology, Phase Instabilities, and Wetting of Microemulsion Networks, PRL, Vol. 84, No. 6, Feb 2000.
  * [34] Lubensky, T. C. and Renn, S. R., Twist-grain-boundary phases near the nematic–smectic-A–smectic-C point in liquid crystals, Phys. Rev. A, Vol. 41, Issue 8, 1990, pp. 4392–4401.
  * [35] A. A. Abrikosov, Sov. Phys. JETP 5, 1174 (1957).
* [36] W. Helfrich, Elastic Properties of Lipid Bilayers—Theory and Possible Experiments, Z. Naturforsch. C 28 (1973) 693-703.
  * [37] Shemesh T, Klemm RW, et al., A model for the generation and interconversion of ER morphologies. Proceedings of the National Academy of Sciences. 2014 Dec 9;111(49):E5243-51.
* [38] Lennard-Jones. ”Cohesion”. Proceedings of the Physical Society. 43 (5): 461–482, (1931).
* [39] Guven, Jemal, Greg Huber, and Dulce María Valencia. ”Terasaki spiral ramps in the rough endoplasmic reticulum.” Physical review letters 113.18 (2014): 188101.
* [40] Alageshan, Jaya Kumar, Buddhapriya Chakrabarti, and Yashodhan Hatwalne. ”Equilibrium of fluid membranes endowed with orientational order.” Physical Review E 95, no. 4 (2017): 042806.
* [41] Gelbart, William M., Avinoam Ben-Shaul, and Didier Roux, eds. Micelles, membranes, microemulsions, and monolayers. Springer Science and Business Media, 2012.
* [42] Lianghui Gao, Leonardo Golubovic, ”Smectic phases of semiflexible manifolds: Constant-pressure ensemble” Physical Review E 66, 051918 (2002).
* [43] Voeltz, GK; Prinz WA; Shibata Y; Rist JM; Rapoport TA. ”A class of membrane proteins shaping the tubular endoplasmic reticulum”. Cell. 124 (3): 573–586 (2006).
* [44] Gao, Lianghui and Golubovi, Leonardo, Smectic phases of semiflexible manifolds: Constant-pressure ensemble, Phys. Rev. E, vol. 66, issue 5, pages: 051918, 14, 2002.
* [45] Parsegian, A. and Evans, E., Proc. Nat. Acad. Sci. USA 83 7132, (1986).
* [46] David A. Huse, Stanislas Leibler. ”Phase behaviour of an ensemble of nonintersecting random fluid films.” Journal de Physique, 49 (4), pp.605-621, (1988).
* [47] Golubovic, Leonardo and Lubensky, T. C., Smectic elastic constants of lamellar fluid membrane phases: Crumpling effects, Phys. Rev. B, vol. 39, issue 16, pp: 12110–12133, 1989.
* [48] Ogawa, H., Uchida, N., Numerical simulation of the twist-grain-boundary phase of chiral liquid crystals. Physical Review E, 73(6), 060701, 2006.
* [49] Kamien, Randall D., and Tom C. Lubensky. ”Minimal surfaces, screw dislocations, and twist grain boundaries.” Physical review letters 82.14 (1999): 2892.
* [50] Chaikin, Paul M., Tom C. Lubensky, and Thomas A. Witten. Principles of condensed matter physics. Vol. 10. Cambridge: Cambridge university press, (1995).
* [51] David A. Huse, Stanislas Leibler. “Phase behaviour of an ensemble of nonintersecting random fluid films.” Journal de Physique, 49 (4), pp.605-621 (1988).
* [52] Brakke, Ken A., and John M. Sullivan. ”Using symmetry features of the surface evolver to study foams.” In Visualization and mathematics, pp. 95-117. Springer Berlin Heidelberg, 1997.
* [53] Hestenes, Magnus R., Stiefel, Eduard. ”Methods of Conjugate Gradients for Solving Linear Systems”. Journal of Research of the National Bureau of Standards. 49 (6): 409 (1952).
* [54] Hatwalne, Yashodhan, and Murugappan Muthukumar. ”Chiral symmetry breaking in crystals of achiral polymers.” Physical review letters 105, no. 10 (2010).
* [55] Johannes C. C. Nitsche, Lectures on Minimal Surfaces: Volume 1, Introduction, Fundamentals, Geometry and Basic Boundary Value Problems, Cambridge University Press, (1989).
* [56] Avriel, Mordecai. “Nonlinear Programming: Analysis and Methods.” Dover Publishing (2003).
* [57] Menon, Gautam I., Rahul Pandit, and Sriram Ramaswamy. ”Sponge Phase Transitions from a Lattice Mode.” Molecular Crystals and Liquid Crystals Science and Technology. Section A. 288.1 (1996): 93-104.
# The Brain’s Bitter Lesson: Scaling Speech Decoding With Self-Supervised
Learning
Dulhan Jayalath1 Gilad Landau1 Brendan Shillingford2
Mark Woolrich3 Oiwi Parker Jones1
{$^{1}$PNPL, $^{3}$OHBA}, University of Oxford; $^{2}$Google DeepMind
{dulhan<EMAIL_ADDRESS>
###### Abstract
The past few years have produced a series of spectacular advances in the
decoding of speech from brain activity. The engine of these advances has been
the acquisition of labelled data, with increasingly large datasets acquired
from single subjects. However, participants exhibit anatomical and other
individual differences, and datasets use varied scanners and task designs. As
a result, prior work has struggled to leverage data from multiple subjects,
multiple datasets, multiple tasks, and unlabelled datasets. In turn, the field
has not benefited from the rapidly growing number of open neural data
repositories to exploit large-scale data and deep learning. To address this,
we develop an initial set of neuroscience-inspired self-supervised objectives,
together with a neural architecture, for representation learning from
heterogeneous and unlabelled neural recordings. Experimental results show that
representations learned with these objectives scale with data, generalise
across subjects, datasets, and tasks, and are also learned faster than using
only labelled data. In addition, we set new benchmarks for two foundational
speech decoding tasks. Taken together, these methods now unlock the potential
for training speech decoding models with orders of magnitude more existing
data.
Figure 1: Leveraging unlabelled data for speech decoding. We pre-train a
neural network using tasks that generate implicit labels from abundant
unlabelled neuroimaging data, permitting learning from large heterogeneous
datasets. The tasks apply a randomly selected transformation $T$ to the data
and the network predicts the transformation. We fine-tune the pre-trained
network on labelled data, achieving faster generalisation owing to the
strength of the representation learned in pre-training.
## 1 Introduction
In his Bitter Lesson, Richard Sutton argues that a major conclusion of 70
years of AI research is that general methods exploiting large-scale
computation will outperform model-based approaches as the availability of
compute increases [50]. In line with this, the generality of deep learning,
via statistical learning from ever bigger datasets, has allowed the field to
leverage computation in a way that appears to scale arbitrarily, leading to
astounding advances across a diverse set of domains [29, 7, 42, 46].
In the domain of brain data, and of tasks like speech decoding, the bitter
lesson has not yet been fully assimilated. State-of-the-art brain-computer
interfaces (BCIs) have tried to scale up labelled datasets for individual
subjects, using either invasive [40, 60] or non-invasive brain recordings
[52], mapping these to transcripts of attempted or imagined speech. Yet, a
number of obstacles to scale remain. With few exceptions at present (e.g.
[17]), speech decoding models tend not to train on data from more than one
subject. Moreover, they do not combine data from multiple datasets and in
general do not utilise unlabelled data, or data from diverse tasks. Thus the
size of training data has been limited to how much can be acquired for a
single subject, and data from other subjects, or from the growing number of
public data repositories, has not been leveraged. There are many reasons for
these limitations; individual brains and data from different neuroimaging
scanners differ, for example. But overcoming these limitations, as has begun
to happen in neighbouring sub-fields (e.g. [27]), holds the promise of
training models on collective, internet-scale data.
Given the scarcity of labelled data, self-supervised learning (SSL) appears
promising as an avenue for domains where such data is rare or hard to obtain
[3]. In the SSL paradigm, _pretext_ tasks pre-train a model on unlabelled data
by generating implicit training labels through transformations of the input
data in order to help a downstream task. We develop a set of these tasks,
informed by advances in neuroscience, for learning with unlabelled brain data
(Figure 1) and design an architecture for processing continuous multi-sensor
neuroimaging signals. In order to scale existing non-invasive datasets, we
provide a unified method that allows us to leverage data from other
experiments that do not have the same labels (by treating them as unlabelled)
and that come from different subjects and neuroimaging scanners. We evaluate
the representations learned with our approach on heard speech datasets
acquired with non-invasive magnetoencephalography (MEG), setting the baselines
for speech detection and voicing classification on this data. The results not
only demonstrate that scaling with unlabelled data works in speech decoding,
but also show that these representations can generalise across datasets,
tasks, and even novel subjects for the first time. Our main contributions are:
* •
A set of domain-specific self-supervised pretext tasks for representation
learning that can scale speech decoding over multiple subjects, multiple
studies, and unlabelled data;
* •
A neural architecture for learning these self-supervised objectives and
training downstream speech decoding from brain data; and
* •
A comprehensive experimental evaluation providing evidence that our approach
can scale up with data and enable cross-dataset, task, and subject
generalisation.
## 2 Related Work
Prior work in speech decoding has focused almost entirely on supervised
learning with decoding models that typically do not generalise across
participants or experiments. This is true both in recent state-of-the-art
invasive studies [40, 39, 60, 9] and non-invasive studies [52]. These prior
works have scaled up the experimental data collected within individual
subjects, but are unable to leverage data from other subjects and experiments.
Focusing on semantic rather than phonetic decoding, work by Tang et al. [52]
is remarkable for showing an ability to generalise across labelled task data
when listening to speech, imagining speech, or even watching videos. They do
not, however, leverage unlabelled data and are unable to show generalisation
between subjects.
Specific studies into the limitations of generalising models between subjects
show that while performance decreases on average when subjects are pooled,
there are exceptions. Csaky et al. [11] find that a subset of individuals
perform better when evaluated with a group-level model than with individual
models. Exploiting audio data in a multi-modal framework, Défossez et al. [17]
show that decoding performance improves for a segment identification task as
data from multiple subjects listening to connected speech are aggregated.
Although they repeat the result within two MEG and two EEG datasets, Défossez
et al. [17] do not show any improvements for pooling data across datasets.
Moreover, they do not combine data from studies with different labels either;
cf. [58, 16, 56]. Unfortunately, two of these papers [58, 16] included a bug
in their evaluation code. As such, their methods may perform no better than a
baseline that provides pure noise inputs to the model [28].
In general, speech decoding has centred on different kinds of speech:
listening, imagining, speaking out loud, and, for paralysed patients,
attempting to speak aloud. We focus here on listening because it is easier to
decode than imagined speech (e.g. [37]). There is also some evidence of a
functional overlap between listening and imagined speech representations in
the brain [55], though we acknowledge that the question of overlap has been
contested [30]. Prior work has also investigated the two tasks that we focus
on here [13, 40, 23]. The first of these, speech detection, formed the
backbone to Moses et al. [40], where a speech detection model was trained and
subsequently used to detect isolated words, which were in turn classified and
checked against a language model to generate acceptable sentences. Hamilton et
al. [26] further elaborated on the neural anatomy underlying speech detection,
categorising neural responses in the superior temporal gyrus (STG) to
sustained speech and speech onset. As for the second task, voicing
classification, Gwilliams et al. [23] used this task as a proxy for phoneme
classification, as pooling phonemes into unvoiced or voiced segments (e.g. /p
t k f s/ vs /b d g v z/) improves data efficiency. We note that voicing
classification and speech detection are related tasks as voicing is a subclass
of speech. This makes them foundational for building hierarchical speech
decoding pipelines similar to prior surgical decoding work [40, 60].
In the computer vision literature, there have been a plethora of methods that
use self-supervised pretext tasks for representation learning [1, 15, 41, 31,
62, 21]. Until now, similar approaches have not translated to the brain
decoding literature. However, prior work has used other methods to leverage
unlabelled brain data. For example, Jiang et al. [27] succeeded in cross-
dataset and cross-task generalisation, using a transformer with tokenised
brain signals and a masked token prediction objective. Although this work
combined unlabelled EEG datasets, it only achieved improvements on non-speech
tasks. Wang et al. [57] used a similar approach, replacing tokens with
contextualised embeddings of time-frequency input representations. They
attained impressive speech detection results but with invasive neural
recordings, which are comparatively rare and thus have much less potential to
scale than non-invasive data.
## 3 Method
To process continuous neuroimaging data, we introduce a neural architecture
for encoding heterogeneous brain signals into latent representations. By
developing pretext tasks with the objective of learning generalisable brain
representations, we leverage this architecture for self-supervised learning
from unlabelled data, hoping to replicate similar successes in computer vision
[21, 8].
### 3.1 Network Architecture
We design a two-stage neural network architecture (Figure 2). The pre-training
stage uses pretext tasks to train a representation with unlabelled brain data.
Then, the fine-tuning stage uses this representation to learn the downstream
task by fine-tuning with labelled data.
Figure 2: Architecture overview. Inputs are projected into a shared dimension,
then encoded. In pre-training, all weights are trainable except for modules in
light-red, while in shallow fine-tuning, modules with light-blue borders are
frozen and modules with light-red borders are unfrozen. Deep fine-tuning is
identical, except the encoder is trainable. Dashed borders indicate optional
components.
Normalisation. We divide recordings into windows of length $w$ seconds or $t$
samples. At train time, each batch of windows is standardised such that each
sensor has zero mean and unit variance.
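This per-batch standardisation can be sketched as follows (a minimal NumPy sketch; computing the statistics over the batch and time axes is our assumption, as the text only specifies zero mean and unit variance per sensor):

```python
import numpy as np

def standardise_batch(batch: np.ndarray) -> np.ndarray:
    """Standardise a batch of sample windows so that each sensor has
    zero mean and unit variance, computed over the batch and time axes.

    batch: array of shape (batch, sensors, time).
    """
    mean = batch.mean(axis=(0, 2), keepdims=True)  # per-sensor mean
    std = batch.std(axis=(0, 2), keepdims=True)    # per-sensor std
    return (batch - mean) / (std + 1e-8)           # guard against zero variance
```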
Backbone. The network takes as input the standardised sample windows. To
combine heterogeneous datasets, which have different numbers of sensors $S$,
we apply a dataset-conditional linear layer to the sensor dimension,
projecting the signal into a shared space with dimension
$d_{\mathrm{shared}}$. Then, to encode the signal, we construct a wave-to-wave
convolutional encoder architecture, the cortex encoder, inspired by work in
neural audio codecs [61, 14]. Specifically, our convolutional encoder adapts
the implementation of the SEANet architecture [51] used in Défossez et al.
[14]. As these codecs typically operate on mono audio signals in
$\mathbb{R}^{1\times t}$, while our signals are in
$\mathbb{R}^{d_{\mathrm{shared}}\times t}$, we increase the convolutional
channel dimension from $1$ to match $d_{\mathrm{shared}}$ while also inflating
the channel dimension of subsequent convolutions. We refer to the output
dimension of embeddings from this backbone as $d_{\mathrm{backbone}}$. Thus,
the backbone takes as input a window in $\mathbb{R}^{S\times t}$, and encodes
this into $\tau$ embeddings (where $\tau<t$), each of dimension
$d_{\mathrm{backbone}}$ (i.e. an
$\mathbb{R}^{d_{\mathrm{backbone}}\times\tau}$ output).
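The dataset-conditional projection can be illustrated with a small sketch (the sensor counts, shared dimension, and omission of a bias term are illustrative assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sensor counts per dataset and a shared dimension.
SENSOR_COUNTS = {"pretrain_meg": 306, "downstream_meg": 269}
D_SHARED = 128

# One dataset-conditional linear layer over the sensor dimension,
# mapping R^{S x t} into the shared space R^{d_shared x t}.
PROJECTIONS = {name: rng.normal(0.0, 0.02, size=(D_SHARED, n_sensors))
               for name, n_sensors in SENSOR_COUNTS.items()}

def project_to_shared(window: np.ndarray, dataset: str) -> np.ndarray:
    """Project a window of shape (S, t) into the shared space (d_shared, t)."""
    return PROJECTIONS[dataset] @ window
```

The cortex encoder then consumes the resulting $(d_{\mathrm{shared}}, t)$ array regardless of which scanner produced the original window.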
Pre-training. Following the advice of Balestriero et al. [3, Section 3.2], we
use a two-layer feedforward projector to alleviate misalignment between our
pretext and downstream tasks in the representation. After the projector,
linear classifiers make predictions for each of the pretext tasks.
Fine-tuning. In this stage, we introduce classifiers for the downstream tasks
and train with labelled data and supervised learning (Figure 2). Depending on
the experiment, we fine-tune in one of two ways. In the first, we train an MLP
classifier from scratch on top of the pre-trained representation, which
remains frozen. Thus, we backpropagate only through the classifier. We call
this shallow fine-tuning. In the other, we fine-tune by training an MLP on top
of the pre-trained representation again, but do not freeze the backbone,
training end-to-end. Here, we backpropagate through the classifier and the
backbone. We refer to this as deep fine-tuning. In both cases, a new trainable
dataset-specific linear layer can be introduced for a novel dataset.
For speech detection, our classifier makes a prediction for each individual
embedding. For voicing classification, where there is only one label for each
sample window, the $\tau$ embeddings are flattened into a single vector in
$\mathbb{R}^{d_{\mathrm{backbone}}\tau}$ representing the entire window. This
is the input to the voicing classifier and is referred to as full epoch
decoding in the neuroimaging literature [12].
Subject conditioning. Just as speakers have different voices, neural responses
between subjects have different characteristics. Consequently, individual
variation leads to models that do not generalise well across subjects [11]. In
the speech literature, models include speaker conditioning to account for
these differences [20]. We take a similar approach by introducing subject
conditioning, represented as a subject embedding, to our model. With a SEANet-
based architecture, Zeghidour et al. [61] find that conditioning is equally as
effective at the encoder bottleneck as in other stages of the model. Hence, we
place ours at the cortex encoder bottleneck for simplicity. We evaluate
multiple types of conditioning including subject embeddings and feature-wise
linear modulation (FiLM) [44].
### 3.2 Pretext Tasks
Our pretext tasks are unsupervised feature learning tasks for continuous
neuroimaging signals that aim to learn generalisable speech decoding features.
Since different datasets use different hardware and varied numbers of sensors,
we construct these tasks with labels that are agnostic to the number of
sensors in the signal. This means that these tasks do not require identifying
specific sensor channels.
Band prediction. In the literature, neural responses can be broadly segmented
into functional frequency bands [22, 45, 36]. Delta ($\delta$) waves (0.1–4
Hz) are commonly associated with the rhythmic structure of heard speech [35];
Theta ($\theta$) waves (4–8 Hz) reliably track [34] and phase-lock to the
amplitude envelope of heard sentences [43]; Alpha ($\alpha$) waves (8–12 Hz)
relate to attentional processes and the inhibition of irrelevant information,
helping to focus on relevant speech signals [49]; Beta ($\beta$) waves (12–30
Hz) are implicated in top-down predictive coding [5], which affects lexical
processing [59]; Gamma ($\gamma$) waves (30–70 Hz) occur with higher
cognitive functions (e.g. memory, learning, reasoning, and planning) [19, 6];
and High Gamma ($\gamma^{\mathrm{high}}$) waves ($>$70 Hz) have been linked
specifically to speech detection [26] and to phonemic feature classification
in the STG [38] as well as in the ventral sensorimotor cortex (vSMC) [10]. As
High Gamma is a relatively wide band, we have split it into two sub-bands:
Lower High Gamma ($\gamma^{\mathrm{high}}_{\mathrm{lower}}$) waves (70–100
Hz) and Upper High Gamma ($\gamma^{\mathrm{high}}_{\mathrm{upper}}$) waves
(100–150 Hz).
To learn representations that can distinguish between these, our band
prediction task applies a band-stop filter for a randomly selected band
$\omega$ to the sample $x$, passes the filtered sample $x^{\omega^{\prime}}$
through the network backbone $g$ and the corresponding predictor
$f_{\mathrm{band}}$, requiring the network to predict the frequency band that
was rejected. This yields the loss
$\mathcal{L}_{\mathrm{band}}=\sum_{x\in
B}\mathcal{L}_{\mathrm{CE}}(f_{\mathrm{band}}(g(x^{\omega^{\prime}})),\omega),$
(1)
where $B$ is a mini-batch of samples,
$\omega\in\\{\delta,\theta,\alpha,\beta,\gamma,\gamma^{\mathrm{high}}_{\mathrm{lower}},\gamma^{\mathrm{high}}_{\mathrm{upper}}\\}$,
and $\mathcal{L}_{\mathrm{CE}}$ is the cross-entropy loss as this is a multi-
class classification task.
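As a concrete illustration, one band-prediction training example could be generated as follows (a minimal sketch: the 250 Hz sampling rate, the Butterworth band-stop design, and the clipping of the upper high-gamma edge below the Nyquist frequency are our assumptions, not specified in the text):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250  # assumed sampling rate (Hz)
# Frequency bands from the text; the upper high-gamma edge (150 Hz)
# is clipped to stay below the Nyquist frequency of 125 Hz.
BANDS = [("delta", 0.1, 4), ("theta", 4, 8), ("alpha", 8, 12),
         ("beta", 12, 30), ("gamma", 30, 70),
         ("high_gamma_lower", 70, 100), ("high_gamma_upper", 100, 124)]

def band_stop_example(x: np.ndarray, rng) -> tuple:
    """Reject one randomly chosen band from x (sensors, time);
    the training label is the index of the rejected band."""
    label = int(rng.integers(len(BANDS)))
    _, low, high = BANDS[label]
    sos = butter(4, [low, high], btype="bandstop", fs=FS, output="sos")
    return sosfiltfilt(sos, x, axis=-1), label
```

The filtered sample and its label would then feed $g$ and $f_{\mathrm{band}}$ under the cross-entropy loss of Eq. (1).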
Phase shift prediction. Phase coupling between networks of neuron populations
is necessary for coordinating brain activity [18, 54]. Thus, since phase often
synchronises between communicating brain areas, phase coupling between
spatially distant sensors is likely to be a useful feature. Supporting this
insight, recent work [27] also finds phase to be an essential component of the
signal.
To learn representations that encode phase differences between brain areas,
this task applies a discrete uniform random phase shift
$\phi\in\\{0,\frac{\pi}{8},\frac{\pi}{4},\frac{3\pi}{8},\frac{\pi}{2},\frac{5\pi}{8},\frac{3\pi}{4},\frac{7\pi}{8}\\}$
to a uniformly randomly selected proportion $\rho$ of the sensors. Applying
this shift to random sensors is critical since sensors are placed in different
positions, capturing different regions of the brain. Uniform random selection
ensures differences between any two regions of the brain are represented. The
objective of this task is to predict the phase shift. This leads to a similar
loss
$\mathcal{L}_{\mathrm{phase}}=\sum_{x\in
B}\mathcal{L}_{\mathrm{CE}}(f_{\mathrm{phase}}(g(x^{\phi})),\phi),$ (2)
where $x^{\phi}$ describes the signal with a phase shift $\phi$ applied to a
proportion of the sensors. We use a discrete number of possible phase shifts,
treating it as a multi-class task rather than a regression task, to ease the
difficulty of the problem as MEG scanners typically have a large number of
sensors.
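One way to realise this augmentation is via the analytic signal: rotating the analytic signal's phase and taking the real part applies a uniform phase shift across frequencies. The following sketch uses the Hilbert transform; this implementation choice is our assumption, as the paper does not specify one:

```python
import numpy as np
from scipy.signal import hilbert

PHASES = np.arange(8) * np.pi / 8  # {0, pi/8, pi/4, ..., 7pi/8}

def phase_shift_example(x: np.ndarray, rho: float, rng) -> tuple:
    """Apply a randomly chosen phase shift to a random proportion rho of
    the sensors in x (sensors, time); the label is the shift's index."""
    label = int(rng.integers(len(PHASES)))
    n_chosen = max(1, int(rho * x.shape[0]))
    chosen = rng.choice(x.shape[0], size=n_chosen, replace=False)
    shifted = x.copy()
    analytic = hilbert(x[chosen], axis=-1)  # analytic signal of chosen sensors
    shifted[chosen] = np.real(analytic * np.exp(-1j * PHASES[label]))
    return shifted, label
```

For a pure sinusoid $\sin(\omega t)$, this yields $\sin(\omega t-\phi)$ on the chosen sensors (up to edge effects).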
Amplitude scale prediction. MEG and EEG signals use an array of sensors at
different spatial locations, capturing different signal sources more
intensely. Representing the relative amplitude difference between sensors
could be important for differentiating between neural responses originating
from distinct parts of the brain. Within speech, Hamilton et al. [26] find
that localised regions of the STG respond to sustained speech and speech
onsets. Differentiating between neural responses from this region and others
may be essential for decoding speech perception.
Thus, this pretext task focuses on learning representations that encode
relative sensor amplitude differences. Similar to the phase shift task, we
select a random proportion of the sensors $\rho$ and apply a random amplitude
scaling coefficient $A\in[-2,2]$, discretised into 16 scaling factors, to the
signal. The objective is to predict the scaling factor,
leading to the loss
$\mathcal{L}_{\mathrm{amplitude}}=\sum_{x\in
B}\mathcal{L}_{\mathrm{CE}}(f_{\mathrm{amplitude}}(g(x^{A})),A),$ (3)
where $x^{A}$ is the signal scaled with $A$.
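A training example for this task could be generated as follows (a sketch; the uniform grid over $[-2,2]$ is our assumption for the discretisation):

```python
import numpy as np

# 16 scaling factors in [-2, 2]; a uniform grid is an assumption.
SCALES = np.linspace(-2.0, 2.0, 16)

def amplitude_scale_example(x: np.ndarray, rho: float, rng) -> tuple:
    """Scale a random proportion rho of the sensors in x (sensors, time)
    by a randomly chosen factor; the label is the factor's index."""
    label = int(rng.integers(len(SCALES)))
    n_chosen = max(1, int(rho * x.shape[0]))
    chosen = rng.choice(x.shape[0], size=n_chosen, replace=False)
    scaled = x.copy()
    scaled[chosen] *= SCALES[label]
    return scaled, label
```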
These pretext tasks capture complementary time- and frequency-domain
properties of the signal. Hence, during pre-training, we combine them,
creating an augmented version of the input for every pretext task by applying
the matching transformation. We feed the augmented inputs through the network
backbone and apply the corresponding classifier to predict the transformation,
summing the weighted losses such that our final pre-training loss is given by
$\mathcal{L}_{\mathrm{SSL}}=w_{1}\mathcal{L}_{\mathrm{band}}+w_{2}\mathcal{L}_{\mathrm{phase}}+w_{3}\mathcal{L}_{\mathrm{amplitude}},$
(4)
where $w_{i}$ is a constant coefficient for each self-supervised loss.
## 4 Experiments
In this section, we evaluate the representations learned with our pretext
tasks by measuring their ability to scale downstream performance with
unlabelled data. This includes understanding how well they can generalise
across datasets, subjects, and tasks. We focus our evaluation on MEG data as
the signal is rich, with better spatial resolution than EEG [32] and faster
sampling rates than fMRI [25].
### 4.1 Experimental setup
Datasets. Unless specified otherwise, our experiments use Cam-CAN [48, 53] as
an unlabelled representation learning dataset for pre-training. This is a
study containing 641 subjects with resting and sensorimotor tasks, totalling
approximately 160 hours of MEG recordings. For our downstream tasks, we use
two labelled heard speech MEG datasets where participants listen to short
stories or audiobooks. Armeni et al. [2] contains 3 subjects who listen to 10
hours of recordings each (30 hours total) while Gwilliams et al. [24] has 27
subjects, each recorded for 2 hours (54 hours total). Overall, we utilise over
200 hours of data. To the best of our knowledge, this is the largest volume of
MEG data ever used for speech decoding.
Preprocessing. Each recording is in $\mathbb{R}^{S\times T}$ where $S$ is the
number of sensors and $T$ is the number of time points sampled by the scanner.
To eliminate high-frequency muscle movement artifacts, we apply a low-pass
filter at 125 Hz as well as a high-pass filter at 0.5 Hz to remove slow-drift
artifacts. Since the datasets were recorded in Europe, where the electric grid
frequency is 50 Hz, we apply a notch filter at multiples of 50 Hz to account
for line noise. Next, we downsample the signal to 250 Hz, avoiding aliasing at
frequencies up to our low-pass filter threshold. Finally, we detect bad sensor
channels, those with significant noise and artifacts, using a variance
threshold and replace them by interpolating the spatially nearest sensors.
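The filtering stages above can be sketched with SciPy (the raw sampling rate, filter orders, and notch Q-factor are illustrative assumptions; bad-channel detection and interpolation are omitted):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, iirnotch, filtfilt, resample_poly

FS_RAW = 1000  # assumed raw sampling rate (Hz); real scanners vary
FS_OUT = 250

def preprocess(x: np.ndarray) -> np.ndarray:
    """Band-pass 0.5-125 Hz, notch out 50 Hz harmonics, resample to 250 Hz.

    x: raw recording of shape (sensors, time) sampled at FS_RAW.
    """
    sos = butter(4, [0.5, 125.0], btype="bandpass", fs=FS_RAW, output="sos")
    x = sosfiltfilt(sos, x, axis=-1)
    for f0 in (50.0, 100.0):  # line-noise harmonics below the low-pass edge
        b, a = iirnotch(f0, Q=30.0, fs=FS_RAW)
        x = filtfilt(b, a, x, axis=-1)
    # Polyphase resampling applies its own anti-aliasing filter.
    return resample_poly(x, FS_OUT, FS_RAW, axis=-1)
```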
Downstream tasks. We evaluate our methods with two fundamental speech decoding
tasks of increasing difficulty. The first, speech detection, determines
whether speech occurs in the auditory stimulus using the neural response. The
second task is voicing classification. Given data aligned at the occurrence of
a phoneme, the task is to recognise whether the phoneme is voiced or
voiceless, where voicing is a binary phonetic feature that categorises whether
a speech sound is associated with vocal cord vibration. We select these tasks
as they are simpler than phoneme recognition, but are foundational because
they must be solved to decode speech accurately into natural language.
Training. We pre-train all models to completion and then shallow or deep fine-
tune on labelled data for each task, using this to analyse the
generalisability of the pre-trained representation. Appendix E provides
complete training details for all experiments.
### 4.2 Learning Generalisable Representations Using Pretext Tasks
We investigate whether our self-supervised objectives produce useful
representations by first learning a representation with each pretext task
independently and then analysing its downstream generalisation using shallow
fine-tuning (i.e. with the representation frozen). In Table 1, we show that
all of our pretext tasks lead to better than chance accuracy on speech
detection, and critically, outperform a randomly initialised and shallow fine-
tuned baseline. The results are statistically significant in all cases,
providing initial evidence that all of our tasks are helpful in learning
representations that generalise downstream to speech decoding.
Interestingly, the combination of all pretext tasks leads to better
generalisation than any task on its own. As we hypothesised earlier, this may
be because our pretext tasks capture complementary properties in time- and
frequency-space, enforcing that our representation includes more salient
features for speech decoding. For our other downstream task, voicing
classification, we were not able to use shallow fine-tuning to generalise to a
statistically significant degree with any of the pretext tasks nor with a
random backbone. It may be too difficult to learn with shallow fine-tuning and
so we turn to using deep fine-tuning for this task in the next section.
| Pretext task | Armeni balanced acc. | $t$ | $p$ | Gwilliams balanced acc. | $t$ | $p$ |
|---|---|---|---|---|---|---|
| Amp ($\rho=0.2$) | $0.5846\pm 0.0032$ | $26$ | $7\mathrm{e}{-4}$ | $0.5116\pm 0.0004$ | $29$ | $6\mathrm{e}{-4}$ |
| Phase ($\rho=0.5$) | $0.5831\pm 0.0029$ | $29$ | $6\mathrm{e}{-4}$ | $0.5093\pm 0.0014$ | $7$ | $1\mathrm{e}{-2}$ |
| Band | $0.5941\pm 0.0024$ | $39$ | $2\mathrm{e}{-3}$ | $0.5177\pm 0.0028$ | $6$ | $1\mathrm{e}{-2}$ |
| All | $\mathbf{0.6011}\pm 0.0018$ | $56$ | $2\mathrm{e}{-4}$ | $\mathbf{0.5206}\pm 0.0010$ | $22$ | $1\mathrm{e}{-3}$ |
| None (rand. backbone) | $0.5000\pm 0.0000$ | $-$ | $-$ | $0.5000\pm 0.0000$ | $-$ | $-$ |
| Chance-level | $0.5$ | $-$ | $-$ | $0.5$ | $-$ | $-$ |
Table 1: Pre-training with pretext tasks produces generalisable
representations. We provide the test accuracy at the point of best validation
accuracy (early stopping). In the none baseline, the network backbone is
randomly initialised before fine-tuning (i.e. there is no pre-training). When
all pretext tasks are used, their losses are weighted equally. The uncertainty
is the standard error of the mean over three random seeds. We quote the
$t$-score and $p$-value for each result from a one-sample one-sided $t$-test
where the population mean is chance-level accuracy. Models are pre-trained on
all of Cam-CAN [48, 53] and fine-tuned on Armeni et al. [2] or Gwilliams et
al. [24] respectively.
Among the pretext tasks, band prediction performs best. This could be because
the representation easily identifies phase-locking to speech onset in theta
waves [43]. However, further investigation is necessary to determine which
frequency band is most informative. The choice of the proportion of sensors to
apply transformations to, $\rho=0.5$ for phase shift prediction and $\rho=0.2$
for amplitude prediction, were determined through a hyperparameter search
(Appendix F). We conjecture that a smaller $\rho$ is optimal for amplitude
scale prediction since this leads to representations that are especially
strong at discriminating amplitude differences among small groups of sensors.
Perhaps this makes it easier to distinguish between neural responses from
distinct parts of the brain such as the STG, which is associated with speech
onset [26]. In contrast, a larger $\rho$ for phase shift prediction may be
best because phase is descriptive of neural synchrony which is distributed
across the brain. Unlike localised responses, a large proportion of the
sensors can detect this feature.
Whilst we have shown that our representations generalise, do they outperform
supervised baselines when trained to saturation? Using shallow fine-tuning
once more, we compare our approach directly to similarly parameterised
supervised classifiers in Table 2. A representation pre-trained with all
pretext tasks significantly outperforms chance, a randomly initialised
backbone, and supervised training on both downstream datasets. Here, the
supervised classifier has significantly more parameters than the models that
probe the network backbone because the input dimension is far larger without
an encoder. Even with this bias favouring the supervised classifier, the self-
supervised case performs better. Thus, it is apparent that pre-training with
unlabelled data significantly improves downstream generalisation.
| Model | Armeni balanced acc. | $t$ | $p$ | Gwilliams balanced acc. | $t$ | $p$ |
|---|---|---|---|---|---|---|
| Supervised: Linear | $0.5222\pm 0.0015$ | $12$ | $3\mathrm{e}{-3}$ | $0.5006\pm 0.0001$ | $5$ | $2\mathrm{e}{-2}$ |
| Self-supervised + fine-tuning: | | | | | | |
| Random backbone, MLP (two layers) | $0.5000\pm 0.0000$ | $-$ | $-$ | $0.5000\pm 0.0000$ | $-$ | $-$ |
| Pre-trained backbone, Linear | $0.5853\pm 0.0017$ | $52$ | $3\mathrm{e}{-4}$ | $0.5011\pm 0.0001$ | $9$ | $6\mathrm{e}{-3}$ |
| Pre-trained backbone, MLP (two layers) | $\mathbf{0.6011}\pm 0.0018$ | $56$ | $2\mathrm{e}{-4}$ | $\mathbf{0.5206}\pm 0.0010$ | $22$ | $1\mathrm{e}{-3}$ |
| Chance-level | $0.5$ | $-$ | $-$ | $0.5$ | $-$ | $-$ |
Table 2: Self-supervised pre-training outperforms supervised classifiers. We
use shallow fine-tuning on top of a randomly initialised and pre-trained
backbone, comparing these to a supervised classifier trained to completion. We
quote test accuracy at the point of best validation accuracy (early stopping)
and show uncertainty as the standard error of the mean from three seeds. We
calculate the $t$-score and $p$-value from one-sample one-sided $t$-tests
where the population mean is chance. Self-supervised models are pre-trained on
all of Cam-CAN [48, 53] and shallow fine-tuned on Armeni et al. [2] or
Gwilliams et al. [24] while the supervised models are trained entirely on the
latter datasets.
### 4.3 Scaling Speech Decoding With Unlabelled Data
Figure 3: Scaling unlabelled data improves downstream generalisation. We pre-
train the model on increasing amounts of unlabelled data from Cam-CAN [48, 53]
and fine-tune on voicing and speech detection on the Armeni et al. [2] and
Gwilliams et al. [24] datasets. The no pre-training line shows the best test
accuracy for a randomly initialised network fine-tuned for the same number of
epochs (i.e. without pre-training). We show only one no pre-training line as
the result is
the same for both datasets. The shaded area shows the standard error of the
mean across three seeds.
Previously, we showed that the combination of all pretext tasks leads to the
best representations and that this outperforms a randomly initialised backbone
as well as a supervised classifier. In this section, we examine how these
pretext tasks fare as we increase the amount of unlabelled data, analysing
scaling performance on downstream tasks. As before, we pre-train with the
combined pretext tasks using Cam-CAN [48, 53], which does not have any speech
decoding labels, and now fine-tune for ten epochs on Armeni et al. [2] and
Gwilliams et al. [24], showcasing transfer with minimal fine-tuning. This
simulates the scenario in which models need to generalise quickly with very
little calibration data from new patients in order to be deployed as a speech
BCI. Where earlier we analysed only speech detection, here, we also focus on
voicing classification by using deep fine-tuning for this task only. As
evidenced by Section 4.2, we require this because voicing classification is a
more difficult task, necessitating end-to-end training with more parameters to
observe generalisation.
Figure 3 shows balanced accuracy as we increase the amount of unlabelled data
in pre-training up to approximately 160 hours. For both tasks, pre-training
outperforms the no pre-training baseline, which fails to generalise at all,
showing that any unlabelled data is sufficient to pre-train a useful
representation. In both tasks, there is a clear improvement in accuracy as the
amount of unlabelled data increases. Thus, adding unlabelled data improves
generalisation and scales performance. With speech detection, the improvement
in accuracy is more significant for Armeni et al. [2]. Noting that there is
more within-subject data in this dataset than Gwilliams et al. [24], speech
detection may be more sensitive to subject variation than voicing
classification.
We scaled up the pre-training dataset by increasing the number of subjects.
Since this led to consistent and almost monotonic improvements in downstream
accuracy, our self-supervised method is an exception to the common consensus
that pooling subjects worsens generalisation. Moreover, as we pre-trained our
model with a different dataset to those we fine-tuned on, our representation
shows cross-dataset generalisation. This is particularly notable as the Armeni
et al. [2], Gwilliams et al. [24], and pre-training datasets all use different
scanners entirely. Performing well across these datasets indicates that,
together, our architecture and pretext tasks successfully generate
representations that are generalisable across heterogeneous scanners.
Beyond hardware, our results also show improvements on distinct tasks, namely
speech detection and voicing classification, thus illustrating cross-task
generalisation. Thus, our pretext tasks are sufficiently generic to produce
representations that generalise to multiple downstream tasks, indicating their
applicability to speech decoding more generally. Finally, our pre-training
dataset contained no language data whatsoever yet still improved downstream
accuracy on language tasks. Remarkably, this shows that brain data collected
from any task (including those that are not linguistic) can be used to scale
downstream speech decoding performance.
### 4.4 Scaling Unlabelled Data Improves Zero-Shot Generalisation To Novel
Subjects
Brain data is variable across participants, leading to difficulty transferring
models to novel subjects [11]. Whilst we have shown generalisation across
subjects with our method, here, we investigate how well we can generalise to
novel subjects—an even more difficult challenge. This is critical in order to
widely deploy speech BCIs for new patients. In the following experiment, we
pre-train using the same setup as Figure 3, but fine-tune only on Gwilliams et
al. [24]. We use FiLM conditioning as this performs best for seen and unseen
subjects (Appendix B). During training, we hold out three subjects with which
we evaluate novel subject generalisation (i.e. unseen subject performance).
Figure 4: Scaling pre-training data improves novel subject generalisation. As
before, we pre-train with increasing amounts of unlabelled data from Cam-CAN
[48, 53], but now fine-tune only on Gwilliams et al. [24]. In the seen case,
we evaluate on held-out sessions from subjects that were used in the training
data. For the unseen case we evaluate on three completely held-out subjects.
No pre-training shows the best test accuracy for a randomly initialised
network fine-tuned for the same number of epochs (i.e. without pre-training).
We show only one no pre-training line as the result is the same for both the
seen and unseen cases. The shaded area is the standard error of the mean over
three seeds.
Figure 4 shows that scaling up the amount of unlabelled data used in pre-
training not only improves accuracy on subjects previously seen, but also
demonstrates a positive trend in performance for unseen (novel) subjects. This
is a promising result, indicating that scaling with our method is an
encouraging direction for resolving the issues related to individual variance
faced by prior work.
### 4.5 Limitations
Although our results are significant in demonstrating a viable path forward to
scale up speech BCIs, there remain a number of limitations to the present
work. We focused here on two downstream tasks: speech detection and voicing
classification. Ultimately, we would like to expand this work to predict full
transcripts from brain recordings (i.e. brain-to-text). This has been achieved
with surgical data [40, 60] but not yet convincingly with non-invasive methods
like MEG or EEG [28]. Speech detection has played an important role in the
development of full brain-to-text in a surgical context [40] and we hope may
play a similar role for non-invasive methods. Prior work has further used
voicing classification as a stand-in for phoneme classification [23], and we
have been able to improve on these results here. In future work, we would like
to expand this to all English phonemes. Secondly, while we have been able to
demonstrate the utility of a few pretext tasks here, we do not claim to have
exhausted the full set of useful tasks. Rather, we conjecture that more useful
pretext tasks remain to be found and believe a useful avenue of research will
be into other input representations for brain recordings. For example, this
paper did not make use of spatial features. Another limitation is our emphasis
on heard speech over other types of speech, such as attempted or imagined
speech. We hypothesise that the same methods presented here will generalise to
these other varieties of speech, though this has yet to be shown. But, perhaps
the biggest limitation of the present work is that, while it surpasses the
amount of data used in other studies, it remains to be seen how much speech
decoding tasks can be improved by scaling up the number of datasets used in
training. In sharing this work now, we believe this proof of concept will be valuable to the field while we continue to actively scale up the datasets that we can leverage.
## 5 Conclusion
Ultimately, solving speech decoding could transform the lives of patients with
severe communication difficulties. Yet, this promise has not materialised
because the field has been blocked by its inability to scale up data to
leverage deep learning. Prior methods have been unable to aggregate data
across different datasets, labels, or subjects to scale up because of
heterogeneity in recording hardware, experiment design, and participants. A
handful of studies have shown weak signals towards alleviating these issues.
But until now, no one has developed a general solution. We provided a unified method that leverages unlabelled data through generic pretext tasks, showing that all of these problems can be solved. We verified this with experiments
showing that our method not only scales with heterogeneous data but even
generalises across datasets, subjects, and tasks. Our method unlocks the
potential of the bitter lesson, providing a general method to exploit more
computation by using more data. We implore the research community to employ
the vast quantities of data and compute available to realise this potential.
If scale is all you need in speech decoding, then the bitter lesson may not be
so bitter.
## Acknowledgments and Disclosure of Funding
We would like to thank Botos Csaba for many insightful discussions and
creative ideas which helped shape the direction of this work. Thanks also to
Minqi Jiang for an encouraging conversation on unsupervised representation
learning, Mats W.J. van Es for technical assistance with the OSL library,
Brian Liu for technical contributions which did not reach the final paper, and
Miran Özdogan for reviewing a draft of this work. The authors would like to
acknowledge the use of the University of Oxford Advanced Research Computing (ARC) facility in carrying out this work (http://dx.doi.org/10.5281/zenodo.22558).
DJ is supported by an AWS Studentship from the EPSRC CDT in Autonomous
Intelligent Machines and Systems (AIMS). GL is supported by an EPSRC
Studentship. MW is supported by the Wellcome Trust (106183/Z/14/Z,
215573/Z/19/Z), the New Therapeutics in Alzheimer’s Diseases (NTAD) study
supported by UK MRC, the Dementia Platform UK (RG94383/RG89702) and the NIHR
Oxford Health Biomedical Research Centre (NIHR203316). The views expressed are
those of the author(s) and not necessarily those of the NIHR or the Department
of Health and Social Care. OPJ is supported by the MRC (MR/X00757X/1), Royal
Society (RG\R1\241267), NSF (2314493), NFRF (NFRFT-2022-00241), and SSHRC
(895-2023-1022).
## References
* Agrawal et al. [2015] Pulkit Agrawal, João Carreira, and Jitendra Malik. Learning to see by moving. In _2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015_ , pages 37–45. IEEE Computer Society, 2015. doi: 10.1109/ICCV.2015.13. URL https://doi.org/10.1109/ICCV.2015.13.
* Armeni et al. [2022] Kristijan Armeni, Umut Güçlü, Marcel van Gerven, and Jan-Mathijs Schoffelen. A 10-hour within-participant magnetoencephalography narrative dataset to test models of language comprehension. _Scientific Data_ , 9(1):278, June 2022. ISSN 2052-4463. doi: 10.1038/s41597-022-01382-7. URL https://www.nature.com/articles/s41597-022-01382-7.
* Balestriero et al. [2023] Randall Balestriero, Mark Ibrahim, Vlad Sobal, Ari Morcos, Shashank Shekhar, Tom Goldstein, Florian Bordes, Adrien Bardes, Grégoire Mialon, Yuandong Tian, Avi Schwarzschild, Andrew Gordon Wilson, Jonas Geiping, Quentin Garrido, Pierre Fernandez, Amir Bar, Hamed Pirsiavash, Yann LeCun, and Micah Goldblum. A cookbook of self-supervised learning. _CoRR_ , abs/2304.12210, 2023. doi: 10.48550/ARXIV.2304.12210. URL https://doi.org/10.48550/arXiv.2304.12210.
* Bordes et al. [2023] Florian Bordes, Samuel Lavoie, Randall Balestriero, Nicolas Ballas, and Pascal Vincent. A surprisingly simple technique to control the pretraining bias for better transfer: Expand or narrow your representation. _CoRR_ , abs/2304.05369, 2023. doi: 10.48550/ARXIV.2304.05369. URL https://doi.org/10.48550/arXiv.2304.05369.
* Bressler and Richter [2015] Steven L Bressler and Craig G Richter. Interareal oscillatory synchronization in top-down neocortical processing. _Current Opinion in Neurobiology_ , 31:62–66, 2015.
* Buzsáki and Wang [2012] György Buzsáki and Xiao-Jing Wang. Mechanisms of gamma oscillations. _Annual Review of Neuroscience_ , 35:203–225, 2012.
* Caron et al. [2021] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In _2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, October 10-17, 2021_ , pages 9630–9640. IEEE, 2021. doi: 10.1109/ICCV48922.2021.00951. URL https://doi.org/10.1109/ICCV48922.2021.00951.
* Chen et al. [2020] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. In _Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event_ , volume 119 of _Proceedings of Machine Learning Research_ , pages 1597–1607. PMLR, 2020. URL http://proceedings.mlr.press/v119/chen20j.html.
* Chen et al. [2024] Xupeng Chen, Ran Wang, Amirhossein Khalilian-Gourtani, Leyao Yu, Patricia Dugan, Daniel Friedman, Werner Doyle, Orrin Devinsky, Yao Wang, and Adeen Flinker. A neural speech decoding framework leveraging deep learning and speech synthesis. _Nature Machine Intelligence_ , pages 1–14, April 2024. ISSN 2522-5839. doi: 10.1038/s42256-024-00824-8. URL https://www.nature.com/articles/s42256-024-00824-8.
* Cheung et al. [2016] Connie Cheung, Liberty S Hamilton, Keith Johnson, and Edward F Chang. The auditory representation of speech sounds in human motor cortex. _eLife_ , 5:e12577, 2016.
* Csaky et al. [2022] Richard Csaky, Mats W. J. van Es, Oiwi Parker Jones, and Mark W. Woolrich. Group-level brain decoding with deep learning. _Human Brain Mapping_ , 44:6105 – 6119, 2022. URL https://doi.org/10.1002/hbm.26500.
* Csaky et al. [2023] Richard Csaky, Mats W.J. van Es, Oiwi Parker Jones, and Mark Woolrich. Interpretable many-class decoding for MEG. _NeuroImage_ , 282:120396, November 2023. ISSN 10538119. doi: 10.1016/j.neuroimage.2023.120396. URL https://linkinghub.elsevier.com/retrieve/pii/S1053811923005475.
* Dash et al. [2020] Debadatta Dash, Paul Ferrari, Satwik Dutta, and Jun Wang. NeuroVAD: Real-Time Voice Activity Detection from Non-Invasive Neuromagnetic Signals. _Sensors_ , 20(8):2248, January 2020. ISSN 1424-8220. doi: 10.3390/s20082248. URL https://www.mdpi.com/1424-8220/20/8/2248.
* Défossez et al. [2022] Alexandre Défossez, Jade Copet, Gabriel Synnaeve, and Yossi Adi. High fidelity neural audio compression. _CoRR_ , abs/2210.13438, 2022. doi: 10.48550/ARXIV.2210.13438. URL https://doi.org/10.48550/arXiv.2210.13438.
* Doersch et al. [2015] Carl Doersch, Abhinav Gupta, and Alexei A. Efros. Unsupervised visual representation learning by context prediction. In _2015 IEEE International Conference on Computer Vision, ICCV 2015, Santiago, Chile, December 7-13, 2015_ , pages 1422–1430. IEEE Computer Society, 2015. doi: 10.1109/ICCV.2015.167. URL https://doi.org/10.1109/ICCV.2015.167.
* Duan et al. [2023] Yiqun Duan, Charles Chau, Zhen Wang, Yu-Kai Wang, and Chin-Teng Lin. DeWave: Discrete encoding of EEG waves for EEG to text translation. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, _Advances in Neural Information Processing Systems 36 (NeurIPS 2023), New Orleans, LA, USA, December 10 - 16_ , 2023. URL http://papers.nips.cc/paper_files/paper/2023/hash/1f2fd23309a5b2d2537d063b29ec1b52-Abstract-Conference.html.
* Défossez et al. [2023] Alexandre Défossez, Charlotte Caucheteux, Jérémy Rapin, Ori Kabeli, and Jean-Rémi King. Decoding speech perception from non-invasive brain recordings. _Nature Machine Intelligence_ , 5(10):1097–1107, October 2023. ISSN 2522-5839. doi: 10.1038/s42256-023-00714-5. URL https://www.nature.com/articles/s42256-023-00714-5.
* Fries [2005] Pascal Fries. A mechanism for cognitive dynamics: neuronal communication through neuronal coherence. _Trends in Cognitive Sciences_ , 9(10):474–480, October 2005. ISSN 1364-6613. doi: 10.1016/j.tics.2005.08.011. URL https://www.sciencedirect.com/science/article/pii/S1364661305002421.
* Fries [2009] Pascal Fries. Neuronal gamma-band synchronization as a fundamental process in cortical computation. _Annual Review of Neuroscience_ , 32:209–224, 2009.
* Gibiansky et al. [2017] Andrew Gibiansky, Sercan Ömer Arik, Gregory Frederick Diamos, John Miller, Kainan Peng, Wei Ping, Jonathan Raiman, and Yanqi Zhou. Deep voice 2: Multi-speaker neural text-to-speech. In Isabelle Guyon, Ulrike von Luxburg, Samy Bengio, Hanna M. Wallach, Rob Fergus, S. V. N. Vishwanathan, and Roman Garnett, editors, _Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA_ , pages 2962–2970, 2017. URL https://proceedings.neurips.cc/paper/2017/hash/c59b469d724f7919b7d35514184fdc0f-Abstract.html.
* Gidaris et al. [2018] Spyros Gidaris, Praveer Singh, and Nikos Komodakis. Unsupervised representation learning by predicting image rotations. In _6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings_. OpenReview.net, 2018. URL https://openreview.net/forum?id=S1v4N2l0-.
* Giraud and Poeppel [2012] Anne-Lise Giraud and David Poeppel. Cortical oscillations and speech processing: emerging computational principles and operations. _Nature Neuroscience_ , 15(4):511–517, April 2012. ISSN 1546-1726. doi: 10.1038/nn.3063. URL https://www.nature.com/articles/nn.3063.
* Gwilliams et al. [2022] Laura Gwilliams, Jean-Rémi King, Alec Marantz, and David Poeppel. Neural dynamics of phoneme sequences reveal position-invariant code for content and order. _Nature Communications_ , 13(1):6606, November 2022. ISSN 2041-1723. doi: 10.1038/s41467-022-34326-1. URL https://www.nature.com/articles/s41467-022-34326-1.
* Gwilliams et al. [2023] Laura Gwilliams, Graham Flick, Alec Marantz, Liina Pylkkänen, David Poeppel, and Jean-Rémi King. Introducing MEG-MASC a high-quality magneto-encephalography dataset for evaluating natural speech processing. _Scientific Data_ , 10(1):862, December 2023. ISSN 2052-4463. doi: 10.1038/s41597-023-02752-5. URL https://www.nature.com/articles/s41597-023-02752-5.
* Hall et al. [2014] Emma L. Hall, Siân E. Robson, Peter G. Morris, and Matthew J. Brookes. The relationship between MEG and fMRI. _NeuroImage_ , 102:80–91, 2014. URL https://doi.org/10.1016/j.neuroimage.2013.11.005.
* Hamilton et al. [2018] Liberty S. Hamilton, Erik Edwards, and Edward F. Chang. A Spatial Map of Onset and Sustained Responses to Speech in the Human Superior Temporal Gyrus. _Current Biology_ , 28(12):1860–1871.e4, June 2018. ISSN 09609822. doi: 10.1016/j.cub.2018.04.033. URL https://linkinghub.elsevier.com/retrieve/pii/S0960982218304615.
* Jiang et al. [2024] Weibang Jiang, Liming Zhao, and Bao liang Lu. Large brain model for learning generic representations with tremendous EEG data in BCI. In _The Twelfth International Conference on Learning Representations_ , 2024. URL https://openreview.net/forum?id=QzTpTRVtrP.
* Jo et al. [2024] Hyejeong Jo, Yiqian Yang, Juhyeok Han, Yiqun Duan, Hui Xiong, and Won Hee Lee. Are EEG-to-text models working? _arXiv_ , 2024. URL https://arxiv.org/abs/2405.06459.
* Jumper et al. [2021] John M. Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A A Kohl, Andy Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Ellen Clancy, Michal Zielinski, Martin Steinegger, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W. Senior, Koray Kavukcuoglu, Pushmeet Kohli, and Demis Hassabis. Highly accurate protein structure prediction with alphafold. _Nature_ , 596:583 – 589, 2021. URL https://doi.org/10.1038/s41586-021-03819-2.
* Langland-Hassan and Vicente [2018] Peter Langland-Hassan and Agustín Vicente. _Inner Speech: New Voices_. Oxford University Press, 2018.
* Larsson et al. [2016] Gustav Larsson, Michael Maire, and Gregory Shakhnarovich. Learning representations for automatic colorization. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling, editors, _Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part IV_ , volume 9908 of _Lecture Notes in Computer Science_ , pages 577–593. Springer, 2016. doi: 10.1007/978-3-319-46493-0_35. URL https://doi.org/10.1007/978-3-319-46493-0_35.
* Lopes da Silva [2013] Fernando Lopes da Silva. EEG and MEG: Relevance to Neuroscience. _Neuron_ , 80(5):1112–1128, December 2013. ISSN 0896-6273. doi: 10.1016/j.neuron.2013.10.017. URL https://www.sciencedirect.com/science/article/pii/S0896627313009203.
* Loshchilov and Hutter [2019] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In _7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019_. OpenReview.net, 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.
* Luo and Poeppel [2007] Huan Luo and David Poeppel. Phase patterns of neuronal responses reliably discriminate speech in human auditory cortex. _Neuron_ , 54(6):1001–1010, 2007.
* Luo et al. [2010] Huan Luo, Zuxiang Liu, and David Poeppel. Auditory cortex tracks both auditory and visual stimulus dynamics using low-frequency neuronal phase modulation. _PLOS Biology_ , 8(8):e1000445, 2010.
* Mai et al. [2016] Guangting Mai, James W. Minett, and William S. Y. Wang. Delta, theta, beta, and gamma brain oscillations index levels of auditory sentence processing. _NeuroImage_ , 133:516–528, June 2016. ISSN 1053-8119. doi: 10.1016/j.neuroimage.2016.02.064. URL https://www.sciencedirect.com/science/article/pii/S1053811916001737.
* Martin et al. [2014] Stéphanie Martin, Peter Brunner, Chris Holdgraf, Hans-Jochen Heinze, Nathan E Crone, Jochem Rieger, Gerwin Schalk, Robert T Knight, and Brian N Pasley. Decoding spectrotemporal features of overt and covert speech from the human cortex. _Frontiers in Neuroengineering_ , 7:14, 2014.
* Mesgarani et al. [2014] Nima Mesgarani, Connie Cheung, Keith Johnson, and Edward F. Chang. Phonetic feature encoding in human superior temporal gyrus. _Science_ , 343(6174):1006–1010, 2014. doi: 10.1126/science.1245994.
* Metzger et al. [2023] Sean L. Metzger, Kaylo T. Littlejohn, Alexander B. Silva, David A. Moses, Margaret P. Seaton, Ran Wang, Maximilian E. Dougherty, Jessie R. Liu, Peter Wu, Michael A. Berger, Inga Zhuravleva, Adelyn Tu-Chan, Karunesh Ganguly, Gopala K. Anumanchipalli, and Edward F. Chang. A high-performance neuroprosthesis for speech decoding and avatar control. _Nature_ , 620:1037–1046, 2023.
* Moses et al. [2021] David A. Moses, Sean L. Metzger, Jessie R. Liu, Gopala K. Anumanchipalli, Joseph G. Makin, Pengfei F. Sun, Josh Chartier, Maximilian E. Dougherty, Patricia M. Liu, Gary M. Abrams, Adelyn Tu-Chan, Karunesh Ganguly, and Edward F. Chang. Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria. _New England Journal of Medicine_ , 385(3):217–227, July 2021. ISSN 0028-4793. doi: 10.1056/NEJMoa2027540. URL https://doi.org/10.1056/NEJMoa2027540.
* Noroozi and Favaro [2016] Mehdi Noroozi and Paolo Favaro. Unsupervised learning of visual representations by solving jigsaw puzzles. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling, editors, _Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part VI_ , volume 9910 of _Lecture Notes in Computer Science_ , pages 69–84. Springer, 2016. doi: 10.1007/978-3-319-46466-4_5. URL https://doi.org/10.1007/978-3-319-46466-4_5.
* OpenAI [2023] OpenAI. GPT-4 technical report. _CoRR_ , abs/2303.08774, 2023. doi: 10.48550/ARXIV.2303.08774. URL https://doi.org/10.48550/arXiv.2303.08774.
* Peelle et al. [2012] Jonathan E. Peelle, Joachim Gross, and Matthew H. Davis. Phase-locked responses to speech in human auditory cortex are enhanced during comprehension. _Cerebral Cortex_ , 23(6):1378–1387, 2012.
* Perez et al. [2018] Ethan Perez, Florian Strub, Harm de Vries, Vincent Dumoulin, and Aaron C. Courville. FiLM: Visual reasoning with a general conditioning layer. In Sheila A. McIlraith and Kilian Q. Weinberger, editors, _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018_ , pages 3942–3951. AAAI Press, 2018. doi: 10.1609/AAAI.V32I1.11671. URL https://doi.org/10.1609/aaai.v32i1.11671.
* Piai et al. [2014] Vitória Piai, Ardi Roelofs, and Eric Maris. Oscillatory brain responses in spoken word production reflect lexical frequency and sentential constraint. _Neuropsychologia_ , 53:146–156, January 2014. ISSN 0028-3932. doi: 10.1016/j.neuropsychologia.2013.11.014. URL https://www.sciencedirect.com/science/article/pii/S0028393213004119.
* Radford et al. [2023] Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, and Ilya Sutskever. Robust speech recognition via large-scale weak supervision. In _International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA_ , volume 202 of _Proceedings of Machine Learning Research_ , pages 28492–28518. PMLR, 2023. URL https://proceedings.mlr.press/v202/radford23a.html.
* Schoffelen et al. [2019] Jan-Mathijs Schoffelen, Robert Oostenveld, Nietzsche H. L. Lam, Julia Uddén, Annika Hultén, and Peter Hagoort. A 204-subject multimodal neuroimaging dataset to study language processing. _Scientific Data_ , 6(1):17, April 2019. ISSN 2052-4463. doi: 10.1038/s41597-019-0020-y. URL https://www.nature.com/articles/s41597-019-0020-y.
* Shafto et al. [2014] Meredith A. Shafto, Lorraine K. Tyler, Marie Dixon, Jason R. Taylor, James Benedict Rowe, Rhodri Cusack, Andrew J. Calder, William D. Marslen-Wilson, John S. Duncan, T. Dalgleish, Richard N. A. Henson, Carol Brayne, and Fiona E. Matthews. The Cambridge centre for ageing and neuroscience (Cam-CAN) study protocol: a cross-sectional, lifespan, multidisciplinary examination of healthy cognitive ageing. _BMC Neurology_ , 14, 2014.
* Strauß et al. [2015] Antje Strauß, Molly J Henry, Mathias Scharinger, and Jonas Obleser. Alpha phase determines successful lexical decision in noise. _Journal of Neuroscience_ , 35(7):3256–3262, 2015.
* Sutton [2019] Richard Sutton. The bitter lesson. _Incomplete Ideas (blog)_ , 2019. URL http://www.incompleteideas.net/IncIdeas/BitterLesson.html.
* Tagliasacchi et al. [2020] Marco Tagliasacchi, Yunpeng Li, Karolis Misiunas, and Dominik Roblek. SEANet: A multi-modal speech enhancement network. In Helen Meng, Bo Xu, and Thomas Fang Zheng, editors, _Interspeech 2020, 21st Annual Conference of the International Speech Communication Association, Virtual Event, Shanghai, China, 25-29 October 2020_ , pages 1126–1130. ISCA, 2020. doi: 10.21437/INTERSPEECH.2020-1563. URL https://doi.org/10.21437/Interspeech.2020-1563.
* Tang et al. [2023] Jerry Tang, Amanda LeBel, Shailee Jain, and Alexander G. Huth. Semantic reconstruction of continuous language from non-invasive brain recordings. _Nature Neuroscience_ , 26(5):858–866, May 2023. ISSN 1546-1726. doi: 10.1038/s41593-023-01304-9. URL https://www.nature.com/articles/s41593-023-01304-9.
* Taylor et al. [2017] Jason R. Taylor, Nitin Williams, Rhodri Cusack, Tibor Auer, Meredith A. Shafto, Marie Dixon, Lorraine K. Tyler, Cam-CAN Group, and Richard N. A. Henson. The Cambridge centre for ageing and neuroscience (Cam-CAN) data repository: Structural and functional MRI, MEG, and cognitive data from a cross-sectional adult lifespan sample. _Neuroimage_ , 144:262 – 269, 2017.
* Vidaurre et al. [2018] Diego Vidaurre, Laurence T. Hunt, Andrew J. Quinn, Benjamin A. E. Hunt, Matthew J. Brookes, Anna C. Nobre, and Mark W. Woolrich. Spontaneous cortical activity transiently organises into frequency specific phase-coupling networks. _Nature Communications_ , 9(1):2987, July 2018. ISSN 2041-1723. doi: 10.1038/s41467-018-05316-z. URL https://www.nature.com/articles/s41467-018-05316-z.
* Wandelt et al. [2024] Sarah K Wandelt, David A. Bjånes, Kelsie Pejsa, Brian Lee, Charles Y Liu, and Richard Andersen. Representation of internal speech by single neurons in human supramarginal gyrus. _Nature human behaviour_ , 2024. URL https://doi.org/10.1038/s41562-024-01867-y.
* Wang et al. [2023a] Bo Wang, Xiran Xu, Longxiang Zhang, Boda Xiao, Xihong Wu, and Jing Chen. Semantic reconstruction of continuous language from MEG signals. _CoRR_ , abs/2309.07701, 2023a. doi: 10.48550/ARXIV.2309.07701. URL https://doi.org/10.48550/arXiv.2309.07701.
* Wang et al. [2023b] Christopher Wang, Vighnesh Subramaniam, Adam Uri Yaari, Gabriel Kreiman, Boris Katz, Ignacio Cases, and Andrei Barbu. Brainbert: Self-supervised representation learning for intracranial recordings. In _The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023_. OpenReview.net, 2023b. URL https://openreview.net/pdf?id=xmcYx_reUn6.
* Wang and Ji [2022] Zhenhailong Wang and Heng Ji. Open vocabulary electroencephalography-to-text decoding and zero-shot sentiment classification. In _Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Virtual Event, February 22 - March 1_ , pages 5350–5358. AAAI Press, 2022. doi: 10.1609/AAAI.V36I5.20472. URL https://doi.org/10.1609/aaai.v36i5.20472.
* Weiss and Mueller [2012] Sabine Weiss and Horst M. Mueller. “Too many betas do not spoil the broth”: the role of beta brain oscillations in language processing. _Frontiers in Psychology_ , 3, 2012. doi: 10.3389/fpsyg.2012.00201. URL https://doi.org/10.3389/fpsyg.2012.00201.
* Willett et al. [2023] Francis R. Willett, Erin M. Kunz, Chaofei Fan, Donald T. Avansino, Guy H. Wilson, Eun Young Choi, Foram Kamdar, Matthew F. Glasser, Leigh R. Hochberg, Shaul Druckmann, Krishna V. Shenoy, and Jaimie M. Henderson. A high-performance speech neuroprosthesis. _Nature_ , 620(7976):1031–1036, August 2023. ISSN 1476-4687. doi: 10.1038/s41586-023-06377-x. URL https://www.nature.com/articles/s41586-023-06377-x.
* Zeghidour et al. [2022] Neil Zeghidour, Alejandro Luebs, Ahmed Omran, Jan Skoglund, and Marco Tagliasacchi. Soundstream: An end-to-end neural audio codec. _IEEE ACM Trans. Audio Speech Lang. Process._ , 30:495–507, 2022. doi: 10.1109/TASLP.2021.3129994. URL https://doi.org/10.1109/TASLP.2021.3129994.
* Zhang et al. [2016] Richard Zhang, Phillip Isola, and Alexei A. Efros. Colorful image colorization. In Bastian Leibe, Jiri Matas, Nicu Sebe, and Max Welling, editors, _Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part III_ , volume 9907 of _Lecture Notes in Computer Science_ , pages 649–666. Springer, 2016. doi: 10.1007/978-3-319-46487-9_40. URL https://doi.org/10.1007/978-3-319-46487-9_40.
## Appendix A Supervised Heard Speech Decoding
While this work is primarily about scaling speech decoding, we also set the
first baselines for speech detection and voicing classification from MEG with
connected auditory speech stimuli. Therefore, in this section we provide fully
supervised results where our architecture is trained to completion using all
available labelled data in the corresponding dataset (except the validation
and test splits) and compare this to a linear classifier baseline. These
results are not comparable with our self-supervised models because the entire
architecture is trained end-to-end. In our SSL work with shallow fine-tuning,
only the final classifier is trained with labels.
Experiment | Balanced accuracy (speech detection) | Balanced accuracy (voicing)
---|---|---
Armeni (linear) | $0.5222\pm 0.0015$ | $0.5207\pm 0.0007$
Armeni (ours) | $\mathbf{0.7253}\pm 0.0005$ | $\mathbf{0.5282}\pm 0.0006$
Gwilliams (linear) | $0.5006\pm 0.0001$ | $0.5166\pm 0.0002$
Gwilliams (ours) | $\mathbf{0.5946}\pm 0.0016$ | $\mathbf{0.5211}\pm 0.0005$
Table 3: Supervised baselines. Uncertainty is standard error of the mean over
three random seeds.
The results in Table 3 show that we can achieve strong speech detection
results on Armeni et al. [2] and Gwilliams et al. [24] with our architecture.
The difference between the two may be due to a combination of different
scanners, numbers of sensors (208 vs 269 magnetometers), varied difficulty in
the audio stimulus, and more within-subject data in Armeni et al. [2]. We also
achieve good results on voicing classification for both datasets, easily
outperforming the linear models.
One potential way to improve on both supervised and self-supervised approaches
is by using a temporally aware model such as an LSTM or an attentional model
such as a Transformer as the classifier. We do not investigate this here as it is out of scope; rather, we use simple classifiers because the purpose of this paper is to demonstrate the potential of scaling.
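For reference, balanced accuracy is the mean of per-class recall, so chance level is $0.5$ for both binary tasks regardless of class imbalance (in connected speech, speech frames heavily outnumber silence). A minimal sketch, assuming this standard definition:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recall; chance level is 1/n_classes
    regardless of class imbalance."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    recalls = []
    for c in np.unique(y_true):
        mask = y_true == c
        recalls.append(float((y_pred[mask] == c).mean()))
    return float(np.mean(recalls))

# An imbalanced example: always predicting the majority class scores
# 0.5 (chance), even though its raw accuracy would be 0.9.
y_true = [0] * 90 + [1] * 10
y_pred = [0] * 100
print(balanced_accuracy(y_true, y_pred))  # 0.5
```

This is why a majority-class predictor cannot inflate the speech-detection numbers in Table 3.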
## Appendix B Subject Conditioning
To investigate subject conditioning approaches, we pre-train then fine-tune
our models on a subset of Gwilliams et al. [24], holding out three subjects
for evaluation. We conduct these experiments without conditioning, with
subject embeddings, and with FiLM to see if they have any influence on subject
generalisation. We represent embeddings as learnable vectors, unique to each
subject, that are concatenated to the backbone representation. For FiLM, we
use the same embeddings as inputs to the functions that modulate the backbone
representation.
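The two conditioning variants can be sketched as follows. The dimensions, initialisation, and the linear maps from embedding to FiLM parameters are illustrative assumptions, not the exact configuration used in our experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

class SubjectConditioning:
    """Sketch of subject conditioning: a learnable per-subject
    embedding is either (a) concatenated to the backbone features,
    or (b) used to produce a FiLM scale/shift applied to them [44]."""

    def __init__(self, n_subjects, feat_dim, emb_dim=16):
        self.emb = 0.1 * rng.standard_normal((n_subjects, emb_dim))
        # Linear maps from embedding to FiLM gamma/beta (assumed form).
        self.W_gamma = 0.1 * rng.standard_normal((emb_dim, feat_dim))
        self.W_beta = 0.1 * rng.standard_normal((emb_dim, feat_dim))

    def concat(self, feats, subject_ids):
        # Variant (a): concatenate the embedding to the representation.
        return np.concatenate([feats, self.emb[subject_ids]], axis=-1)

    def film(self, feats, subject_ids):
        # Variant (b): feature-wise linear modulation, scale near 1.
        e = self.emb[subject_ids]
        gamma = 1.0 + e @ self.W_gamma
        beta = e @ self.W_beta
        return gamma * feats + beta

cond = SubjectConditioning(n_subjects=3, feat_dim=8)
feats = rng.standard_normal((4, 8))   # batch of 4 window representations
ids = np.array([0, 1, 2, 0])          # subject index per window
print(cond.concat(feats, ids).shape)  # (4, 24)
print(cond.film(feats, ids).shape)    # (4, 8)
```

In training, the embeddings and FiLM maps would be learned jointly with the backbone; the "None" condition simply passes the features through unchanged.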
Subject conditioning | Voicing balanced accuracy (seen) | $t$ | $p$ | Voicing balanced accuracy (unseen, zero-shot) | $t$ | $p$
---|---|---|---|---|---|---
None | $0.5118\pm 0.001$ | $8.4$ | $0.007$ | $0.5057\pm 0.001$ | $4.5$ | $0.02$
Embedding | $0.5168\pm 0.0006$ | $27.9$ | $0.0006$ | $0.5052\pm 0.0007$ | $7.5$ | $0.009$
FiLM [44] | $\mathbf{0.5197}\pm 0.001$ | $18.6$ | $0.001$ | $\mathbf{0.5058}\pm 0.002$ | $3.3$ | $0.04$
Chance-level | $0.5$ | $-$ | $-$ | $0.5$ | $-$ | $-$
Table 4: Subject conditioning improves generalisation on seen subjects but not
novel subjects. We quote test accuracy at the point of best validation
accuracy (early stopping). Uncertainty is the standard error of the mean and
we show the $t$-score and $p$-value from a one-sample one-sided $t$-test where
the population mean is chance-level accuracy.
Table 4 reveals that in-distribution subject generalisation is helped by using
subject conditioning methods. Both subject embeddings and FiLM conditioning
outperform no conditioning, with FiLM being best. However, for novel subject
generalisation, using a subject conditioning method makes little difference.
Regardless of conditioning, we can generalise beyond chance-level, but suffer
a reduction in accuracy compared to in-distribution subjects. Conditioning may
be negligible in the unseen case as the embeddings are random rather than
learned for the held-out subjects.
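The significance tests in Table 4 are one-sample one-sided $t$-tests of the seed accuracies against chance. A minimal sketch of the statistic, using stdlib only (the accuracies below are illustrative, not the paper's):

```python
import math
import statistics

def one_sided_t_vs_chance(accs, chance=0.5):
    """One-sample t statistic against chance-level accuracy
    (H0: mean = chance, H1: mean > chance), as used for Table 4.
    Degrees of freedom = len(accs) - 1; the p-value comes from the
    upper tail, e.g. scipy.stats.t.sf(t, df)."""
    n = len(accs)
    se = statistics.stdev(accs) / math.sqrt(n)  # standard error of the mean
    return (statistics.mean(accs) - chance) / se

# Hypothetical accuracies over three seeds (illustrative values):
print(round(one_sided_t_vs_chance([0.516, 0.518, 0.517]), 1))  # 29.4
```

With only three seeds the test has two degrees of freedom, so a large $t$ is needed for significance, which is why the small but consistent gains over chance in Table 4 still reach $p < 0.05$.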
## Appendix C Aggregating Unlabelled Datasets
To scale up unlabelled data further than individual studies, we must be able
to combine many existing datasets. As a preliminary investigation, we combine
two of the largest public MEG datasets: MOUS [47] and Cam-CAN [48, 53]. We
investigate how pre-training with these combined datasets affects downstream
performance using the same experimental setup as Figure 3.
Labels | Pre-training data (hours) | Balanced accuracy (speech detection) | Balanced accuracy (voicing)
---|---|---|---
Armeni | MOUS (160 hours) | $0.5736\pm 0.0016$ | $0.5220\pm 0.0013$
Armeni | Cam-CAN (159 hours) | $\mathbf{0.5911}\pm 0.0009$ | $\mathbf{0.5241}\pm 0.0004$
Armeni | Cam-CAN + MOUS (319 hours) | $0.5872\pm 0.0012$ | $0.5236\pm 0.0016$
Gwilliams | MOUS (160 hours) | $0.5060\pm 0.0006$ | $0.5175\pm 0.0008$
Gwilliams | Cam-CAN (159 hours) | $\mathbf{0.5118}\pm 0.0011$ | $\mathbf{0.5196}\pm 0.0010$
Gwilliams | Cam-CAN + MOUS (319 hours) | $0.5116\pm 0.0017$ | $0.5187\pm 0.0013$
Table 5: Combining two unlabelled datasets does not improve over the best
study. We quote test accuracy at the point of best validation accuracy (early
stopping). The uncertainty is the standard error of the mean calculated over
three seeds.
The results in Table 5 suggest that combining these two datasets is not
strictly better than the best dataset on its own. Pre-training on Cam-CAN
alone leads to the best performance in all instances. This suggests that Cam-
CAN is a better pre-training dataset than MOUS—a surprising result given that
MOUS and Armeni et al. [2] share the same scanning hardware and MOUS includes
data from subjects listening to speech while Cam-CAN has no such language
tasks. This could be due to Cam-CAN being a cleaner dataset as we found
artefacts can significantly affect learning. Combining Cam-CAN with MOUS
performs marginally worse or the same as Cam-CAN alone. For all but one
instance they are within the standard error of the mean of each other. The
combined datasets always outperform MOUS on its own.
In general, further investigation is necessary to determine how datasets
should be best aggregated. Increasing the number of datasets could enable the
network to model the variance between them and eventually improve over the
best singular dataset. This remains an open question as we were only able to
study up to two datasets in this paper.
## Appendix D Task Labels Are Hard To Scale; Generic SSL Scales Best
Figure 5: Cross-task semi-supervised learning does not improve generalisation.
We pre-train with increasing amounts of labelled data from Armeni et al. [2],
using speech labels when evaluating voicing and voicing labels for speech. We
also use a constant 38 hours of unlabelled data in pre-training. No pre-
training shows test accuracy for a randomly initialised network fine-tuned for
the same number of epochs (i.e. without pre-training). SSL is when pre-
training with only the unlabelled data. The shaded area shows the standard
error of the mean across three seeds.
Our self-supervised tasks led to clearly generalisable and scalable
representations by providing an inductive bias towards speech decoding.
However, could these representations be made more powerful by utilising labels
from other speech decoding tasks to improve this bias? This moves us into the
semi-supervised learning regime, where we use a mix of labelled and unlabelled
data.
With the same experimental setup as Figure 3, we add speech detection or
voicing as an additional loss during pre-training. Figure 5 shows that no
amount of additional labelled data improves downstream accuracy, with all data
points remaining near the range of the standard error of the mean of the SSL
baseline for both tasks. There also does not appear to be any indication of
scaling within the range of labelled data available to us.
This outcome indicates that our self-supervised losses are the main
contributor to downstream accuracy. The MEG signal appears rich, encapsulating a great deal of information, with the speech component being only a subset distributed across the representation space.
representations that are too specialised to improve on other speech decoding
tasks. This does not preclude there being some—or a combination of—labelled
tasks that provide more general inductive biases for speech; nor does it mean
more labelled data will not scale. However, the point remains salient:
labelled data is more difficult to scale given its scarcity, and the generic
quality lost with task-specific labels is well preserved with SSL, providing
another reminder of the bitter lesson.
## Appendix E Experiment Details
To generate the data in Table 1, we pre-train our representations with non-
overlapping sample windows from all subjects and sessions 001 to 009 of Armeni
et al. [2] for 200 epochs with a $0.8:0.1:0.1$ train/val/test split. We
shallow fine-tune with labels for 30 epochs using all subjects, but only
session 010, which was held-out during pre-training.
For the data in Figure 3, we use the same number of epochs and the same ratios
for data splitting. We pre-train only with Cam-CAN [48, 53]. We adjust the
amount of unlabelled data used by increasing the number of subjects in the
sequence 1, 2, 4, 8, 17, 36, 74, 152, 312, and 641, successively randomly
selecting more subjects to include. Each seed uses a different set of subjects
to reduce negative effects from outlier subjects. We fine-tune with all data
in Armeni et al. [2] and Gwilliams et al. [24] with the same train/val/test
ratios as before.
For the results shown in Figure 5, we use the same setup as in Figure 3. In
addition to the unlabelled data, we add labelled data from Armeni et al. [2],
increasing the number of sessions used in the sequence 1, 3, 6, 9, 12, and 27.
For this experiment, we calculate the number of labelled data hours using an
estimate of 1 hour per session per subject.
Finally, for the results in Table 4, we pre-train and deep fine-tune to
saturation with subjects 01-07 of Gwilliams et al. [24] and test zero-shot
generalisation using held-out subjects 08-10. All other experimental details
are the same as the previous two experiments.
In all experiments, we use three randomly selected seeds for each pre-training
and corresponding fine-tuning run. For speech detection, since our encoder
reduces the temporal dimension from 125 samples (the number of samples in a
0.5 second window with a sample rate of 250 Hz) down to 5 embeddings, we
downsample our speech detection labels to match using PyTorch’s
torch.nn.functional.interpolate. Therefore, each speech detection label
represents a 0.1 second period of time.
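As an illustration, the label downsampling can be sketched with a nearest-neighbour scheme (a NumPy stand-in for the default behaviour of PyTorch's torch.nn.functional.interpolate; the function name and example values here are ours, not the paper's):

```python
import numpy as np

def downsample_nearest(labels: np.ndarray, n_out: int) -> np.ndarray:
    """Nearest-neighbour downsampling of a label sequence, mirroring the
    default mode of torch.nn.functional.interpolate (illustrative sketch)."""
    n_in = labels.shape[-1]
    # Output position i reads the input at floor(i * n_in / n_out).
    idx = (np.arange(n_out) * n_in // n_out).astype(int)
    return labels[..., idx]

# A 0.5 s window at 250 Hz gives 125 samples; the encoder emits 5 embeddings,
# so each downsampled label covers a 0.1 s period.
window = np.zeros(125, dtype=int)
window[40:] = 1  # hypothetical: speech starts 0.16 s into the window
print(downsample_nearest(window, 5))  # -> [0 0 1 1 1]
```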
## Appendix F Hyperparameters
We conduct a search over hyperparameters of interest to optimise our self-
supervised objectives and neural architecture. In all experiments in this
section, we measure downstream performance (using shallow fine-tuning) with
best validation accuracy for speech detection unless explicitly specified,
training and evaluating only on data from subject 001 of Armeni et al. [2] to
minimise training time, over three random seeds. All uncertainties in this
section are standard deviations and not standard error of the mean as in the
main body.
Table 6 shows the results of a search over the proportion $\rho$ of sensors to
apply the phase and amplitude pretext task transformations to. We only search
up to $\rho=0.5$ as anything above this is equivalent to applying the inverse
transformation to a proportion $1-\rho$ of the sensors. We find that for phase
shift prediction, $\rho=0.5$ is optimal, while for amplitude scale prediction
$\rho=0.2$ is best.
Task | $\rho$ | Balanced accuracy
---|---|---
Phase shift prediction | 0.1 | $0.5452\pm 0.0048$
| 0.2 | $0.5566\pm 0.0014$
| 0.3 | $0.5578\pm 0.0021$
| 0.4 | $0.5530\pm 0.0014$
| 0.5 | $\mathbf{0.5655}\pm 0.0026$
Amplitude scale prediction | 0.1 | $0.5440\pm 0.0012$
| 0.2 | $\mathbf{0.5796}\pm 0.0050$
| 0.3 | $0.5625\pm 0.0063$
| 0.4 | $0.5684\pm 0.0016$
| 0.5 | $0.5650\pm 0.0026$
Table 6: Best $\rho$ for phase and amplitude pretext tasks.
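For concreteness, selecting a proportion $\rho$ of sensors and transforming them might be sketched as follows (a minimal illustration; the function name and the simple gain transform are our assumptions, not the paper's implementation):

```python
import numpy as np

def transform_sensor_subset(x, rho, scale, seed=None):
    """Apply an amplitude scaling to a random proportion `rho` of the sensors
    in a (sensors, time) array. Sketch of the pretext-task corruption."""
    rng = np.random.default_rng(seed)
    n_sensors = x.shape[0]
    n_pick = int(round(rho * n_sensors))
    chosen = rng.choice(n_sensors, size=n_pick, replace=False)
    out = x.copy()
    out[chosen] *= scale  # the pretext task is then to predict `scale` (or a phase shift)
    return out, chosen

signal = np.ones((10, 250))  # 10 sensors, 1 s at 250 Hz (illustrative)
corrupted, picked = transform_sensor_subset(signal, rho=0.2, scale=2.0, seed=0)
```

Applying $\rho=0.5$ for the phase task and $\rho=0.2$ for the amplitude task then matches the best values found in Table 6.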
In our architecture, we define $d_{\mathrm{backbone}}$ to be the bottleneck
dimension of the cortex encoder (i.e. the embedding dimension) as well as the
size of $d_{\mathrm{shared}}$. Balestriero et al. [3], using the results from
work by Bordes et al. [4], suggest that increasing the backbone output
dimension in self-supervised learning leads to better downstream transfer. We
investigate this for our architecture by varying the encoder dimension in
Table 7. Our results are consistent with their findings. Balanced accuracy is
highest at the largest encoder dimension we evaluated. Encoder dimensions
larger than $2048$ led to unstable training and were not feasible given the
compute resources available to us.
$d_{\mathrm{backbone}}$ | Balanced accuracy
---|---
$128$ | $0.5775\pm 0.0037$
$300$ | $0.5715\pm 0.0012$
$512$ | $0.5835\pm 0.0016$
$1024$ | $0.5800\pm 0.0050$
$2048$ | $\mathbf{0.5987}\pm 0.0024$
Table 7: Optimal encoder dimension.
In Table 8, we show the effect of varying the length $w$, i.e. the window size
of each data sample. Increasing the window to five seconds led to unstable
losses in pre-training so we did not evaluate above a window length of two
seconds. This is likely to be because increasing the window size reduces the
number of training samples when only non-overlapping samples are used. For
speech detection, there appears to be a U-shaped distribution, where very
short and very long windows lead to better downstream accuracy. For voicing
classification, the window lengths we evaluated seem to make little
difference. However, windows smaller than 0.1 seconds may lead to
significantly lower accuracy as most phonemes will occur for at least this
length of time in the neural response [23].
Window length (s) | Balanced accuracy
---|---
| Speech detection | Voicing classification
$0.2$ | $0.5919\pm 0.0014$ | $0.5033\pm 0.0014$
$0.4$ | $0.5804\pm 0.0036$ | $\mathbf{0.5077}\pm 0.0027$
$0.5$ | $0.5715\pm 0.0012$ | $0.5055\pm 0.0023$
$1.0$ | $0.5790\pm 0.0012$ | $0.5062\pm 0.0020$
$2.0$ | $\mathbf{0.6116}\pm 0.0019$ | $0.5065\pm 0.0011$
Table 8: Ideal sample window length.
We also evaluated several different configurations of the SEANet architecture
for the cortex encoder, converging on convolutional blocks with channel
dimensions $(512,512,512,512)$.
While these ablations indicate a theoretically ideal architectural
configuration, in practice, we altered our final experimental architecture due
to instabilities during training when data was scaled up. Our final
architecture hyperparameters achieve a balance between the best values from
our hyperparameter search and stable training. These values are detailed in
Table 9.
Hyperparameter | Value
---|---
Window length (s) | $0.5$
$\rho$ (phase) | $0.5$
$\rho$ (amplitude) | $0.2$
$\\{w_{1},w_{2},w_{3}\\}$ | $\\{1.0,1.0,1.0\\}$
$d_{\mathrm{shared}}$ | $512$
$d_{\mathrm{backbone}}$ | $512$
SEANet convolution channels | $(512,512,512,512)$
SEANet downsampling ratios | $(5,5,1)$
FiLM conditioning dimension | $16$
Subject embedding dimension | $16$
Pre-training epochs | $200$
Optimizer | AdamW [33]
Learning rate | $0.000066$
Train ratio | $0.8$
Validation ratio | $0.1$
Test ratio | $0.1$
Table 9: Experimental hyperparameters.
## Appendix G Compute Resources
All experiments were run on individual NVIDIA V100 and A100 GPUs with up to
40GiB of GPU memory on a system with up to 1TiB of RAM. Each pre-training run
with the maximum amount of pre-training data took approximately 200 hours (8.3
days). Fine-tuning following pre-training took up to another 12 hours. We
estimate that we used approximately 3000 hours of compute for the final
experimental runs, including hyperparameter searches. In total, over the
course of developing this work from idea to final paper, we used around 10,000
hours of GPU compute.
## Appendix H Licences For Datasets And Code
The Armeni et al. [2] dataset is distributed under CC-BY-4.0 while the
Gwilliams et al. [24] dataset is distributed under the CC0 1.0 Universal
licence. The Schoffelen et al. [47] dataset is distributed with a RU-DI-HD-1.0
licence from the Donders institute. The licence for the Cam-CAN [48, 53]
dataset is unknown. The SEANet code adapted from Défossez et al. [14] is
distributed under the MIT licence, and the OSL library, which we use for
preprocessing, is under the BSD-3-Clause licence.
## Appendix I Broader Impacts
Decoding speech from non-invasive brain recordings is likely to bring about
significant positive societal impacts. Research in this field could enable
paralysed patients to communicate freely and materially assist those with
minor communication difficulty (e.g. stammering). As the technology matures,
it could also enable new ways of communicating with others and interacting
with devices without the risks of invasive surgical implants. Nevertheless,
the maturity of this technology could also present potential negative societal
impacts. For one, reading inner speech creates new concerns over data controls
as this information is likely to be highly sensitive and personal to
individuals. Given access to this technology, there is also the risk that bad
actors could extract sensitive information from target individuals without
consent. Moreover, there are possible long horizon effects associated with
speech decoding research. Broad adoption of this technology could lead to the
gradual erosion of privacy over inner speech within society. In addition,
asymmetric effects, where some individuals or organisations can read inner
speech but others are unable to, could worsen societal inequality. Within the
scope of this paper, we mitigate risks associated with inner speech by
focusing on decoding heard speech where there is low potential for abuse.
Nonetheless, we acknowledge that this is still a stepping stone towards
solving inner speech decoding.
# A Bernoulli Mixture Model to Understand and Predict Children’s Longitudinal
Wheezing Patterns
Pierre G. B. Moutounet-Cartan <EMAIL_ADDRESS>
Department of Mathematics
Imperial College London
London, SW7 2AZ, United Kingdom
###### Abstract
In this research, we perform discrete unsupervised machine learning through a
Bernoulli Mixture Model on data representing the expression of the wheeze
phenotype of patients at different stages of their childhood up to age 16.
Wheeze is a distinct noise produced while breathing, due to narrowed airways
caused by conditions such as asthma or viral chest infections. From a study by
Henderson et al., (2008), it has been estimated that around 23.5% of U.K.
children had wheezed at least once by six years of age, and 6.9% had persistent wheezing
problems. The usage of a Bernoulli Mixture Model is new in the field, where
previous classification methods used classical unsupervised learning such as
$K$-means, $K$-medoids, or Latent Class Analyses (Loza et al., 2016;
Kurukulaaratchy et al., 2014; Deliu et al., 2016; Brew et al., 2019). In
particular, Oksel et al., (2019) found that Latent Class Analysis is strongly
dependent on the sample size, while $K$-means is largely dependent on the
distance measure and hence on the data-set.
In this research, we estimate that around $27.99(\pm 2.15)\%$ of the U.K.
population has experienced wheezing before turning 1. (The values in brackets
are the differences between the estimated values for the measures listed and
the left or right boundary of the 95% confidence interval; the inverse
cumulative distribution function value is taken from a $t$-distribution with
$n-1$ degrees of freedom.) Furthermore, the
Bernoulli Mixture Model classification is found to work best with $K=4$
clusters in order to better balance the separability of the clusters with
their explanatory nature, based on a cohort of $N=1184$. The probability of
the group of parents in the $j$th cluster to say that their children have
wheezed during the $i$th age is assumed $P_{ij}\sim\text{Beta}(1/2,1/2)$, the
probabilities of assignment to each cluster are
$R\sim\text{Dirichlet}_{K}(\alpha)$, the assignment of the $n$th patient to
each cluster is $Z_{n}\ |\ R\sim\text{Categorical}(R)$, and whether the $n$th
patient wheezed during the $i$th age is $X_{in}\ |\
P_{ij},Z_{n}\sim\text{Bernoulli}(P_{i,Z_{n}})$; where $i\in\\{1,\dots,6\\}$,
$j\in\\{1,\dots,K\\}$, and $n\in\\{1,\dots,N\\}$. The classification is then
performed through the E-M optimization algorithm (Bishop, 2006; Saeed et
al., 2013). We found that this clustering method effectively groups the
patients into late-childhood wheezing, persistent wheezing, early-childhood
wheezing, and no or sporadic wheezing. Furthermore, we found that this
method is not dependent on the data-set, and can include data-sets with
missing entries.
It is hoped this study will give medical staff an understanding of the
wheezing patterns in children up to age 16, and so provide an effective
treatment.
Keywords: Bayesian Statistics, E-M Optimization, Mixture Models, Bernoulli
Mixture Model, Wheeze, Respiratory Problems, Classification
###### Contents
1. 1 The Cohort & Introduction
2. 2 Theory of Bernoulli Mixture Model
3. 3 Application to the data-set
4. 4 Results
5. 5 Conflicts of interest
## 1 The Cohort & Introduction
Parents of 1184 children residing in the United Kingdom were asked whether
their child had wheezed in the previous year at ages 1, 3, 5, 8, 11, and 16. A
positive answer was recorded as a 1 in the registry, while a negative answer
was recorded as a 0. Each patient is given a fixed identification number going
from 0 to 1183. The data of 537 children was incomplete (45.4% of the data),
i.e., the parent failed to answer the question at least once. There were 647
full entries, i.e., parents of 647 children provided the information at every
stage described above. The information was later put into a spreadsheet, where
each row represented the data collected for each child, and each column the
answer for each of the periods described above, under "Age 1," "Age 3,"
"Age 5," "Age 8," "Age 11," and "Age 16." Whenever no information was
provided, NaN was recorded.
From this data, we can therefore estimate the proportion of children within
the U.K. population who had wheezed within the year before turning 1, 3, 5, 8,
11, and 16. We estimate the means by the sample means, and produce a 95%
confidence interval with the sample mean and the corresponding critical value
from a $t$-distribution with 647 degrees of freedom. This algorithm can be
found in the Appendix.
Age period | Est. mean | LHS CI at 95% | RHS CI at 95%
---|---|---|---
Age 1 | 27.99% | 25.84% | 30.14%
Age 3 | 23.74% | 21.70% | 25.78%
Age 5 | 23.04% | 21.02% | 25.06%
Age 8 | 18.05% | 16.20% | 19.89%
Age 11 | 18.89% | 17.01% | 20.76%
Age 16 | 17.02% | 15.22% | 18.83%
Table 1: Estimate of the U.K. population having wheezed at different ages with
the LHS and RHS of the 95% confidence interval using a $t$-distribution with
647 degrees of freedom.
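The interval construction can be sketched as follows (a generic proportion-with-$t$-critical-value computation; the critical value and the respondent counts used here are illustrative assumptions, so the numbers need not reproduce Table 1 exactly):

```python
import math

def t_confidence_interval(p_hat, n, t_crit=1.964):
    """95% CI for a sample proportion of 0/1 data using a t critical value.
    For roughly 647 degrees of freedom, t_crit is approximately 1.964."""
    se = math.sqrt(p_hat * (1.0 - p_hat) / n)  # standard error of the sample mean
    return p_hat - t_crit * se, p_hat + t_crit * se

lo, hi = t_confidence_interval(0.2799, 647)  # Age 1 estimate, n assumed 647
```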
From Table 1, we can deduce that approximately $27.99(\pm 2.15)\%$ (with 95%
confidence) of the population has experienced wheezing before turning 1. This
seems consistent with the findings of Henderson et al., (2008), who estimated
that 23.5% of U.K. children had wheezed at least once by age 6.
The challenge with this data is that 45.4% of it is incomplete. As Fig. 1
shows, the number of parents who stop reporting their child's wheezing status
increases as the child gets older.
Figure 1: Number of NaN entries per age tranche within the cohort data.
In particular, other papers such as Loza et al., (2016); Kurukulaaratchy et
al., (2014); Deliu et al., (2016); Brew et al., (2019); Oksel et al., (2019)
have used well-known clustering methods such as $K$-means, $K$-medoids, and
Latent Class Analysis. These methods require that no entry be missing or
recorded as NaN in the data-set, which is the case for a large proportion of
ours. Furthermore, Oksel et al., (2019) found that Latent Class Analysis is
strongly dependent on the sample size, the frequency, and the timing of data
collection.
Therefore, in this paper, we suggest a novel method for classifying the
longitudinal wheeze phenotype diagnosed in children aged 1 to 16, using a
Bernoulli Mixture Model based on the E-M optimization algorithm. This method
can allow missing entries in the data-set, and is generally less dependent on
the sample size than other methods.
## 2 Theory of Bernoulli Mixture Model
In this section, we consider the general case of the Bernoulli Mixture
Model so that we can later apply it to our data.
Let $X_{1},\dots,X_{M}$ be a set of $M$ binary variables where
$X_{i}\sim\mathrm{Bernoulli}(\lambda_{i})$ for $\lambda_{i}\in(0,1)$, $\forall
i\in\\{1,2,\dots,M\\}$. Note that in this case, the marginals of $X_{i}$ are
given by $f_{X_{i}}(x_{i})=\lambda_{i}^{x_{i}}(1-\lambda_{i})^{1-x_{i}}$ where
$x_{i}\in\\{0,1\\}$, $\forall i\in\\{1,2,\dots,M\\}$.
Hence, we have that
$\mathbb{P}(\mathbf{X}|\Lambda)=\prod_{i=1}^{M}f_{X_{i}}(x_{i})=\prod_{i=1}^{M}\lambda_{i}^{x_{i}}(1-\lambda_{i})^{1-x_{i}}$
where $\mathbf{X}=(X_{1},\dots,X_{M})^{t}$ and
$\Lambda=(\lambda_{1},\dots,\lambda_{M})^{t}$.
As the above expression is written as a product of the separate probability
mass functions of $X_{1},\dots,X_{M}$, the components of $\mathbf{X}$ are
mutually independent given $\Lambda$. We have
$\mathbb{E}(X_{i}|\lambda_{i})=\lambda_{i},\forall i\in\\{1,\dots,M\\}$ by
characteristic of the Bernoulli distribution, so that
$\mathbb{E}(\mathbf{X}|\Lambda)=\Lambda\in(0,1)^{M}\ldotp\ (\dagger)$
Similarly, by independence we have
$\mathrm{cov}(X_{i},X_{j}|\lambda_{i},\lambda_{j})=0$, $\forall
i,j\in\\{1,\dots,M\\}$ such that $i\neq j$, and by property of the Bernoulli
distribution, $\mathrm{var}(X_{i}|\lambda_{i})=\lambda_{i}(1-\lambda_{i})$,
$\forall i\in\\{1,\dots,M\\}$. Therefore,
$\mathbf{\Sigma}_{ik}:=\mathrm{cov}(\mathbf{X}|\Lambda)_{ik}=\delta_{ik}\lambda_{i}(1-\lambda_{i}),\quad\mathbf{\Sigma}\in[0,1)^{M\times M}\ (\ddagger)$
where $\delta_{ik}$ is the Kronecker delta, i.e.,
$\delta_{ik}=\begin{cases}1&\text{if }i=k,\\ 0&\text{if }i\neq k\ldotp\end{cases}$
We now build a finite mixture, with new parameters
$\Pi=(\pi_{1},\dots,\pi_{K})^{t}$, where $K\leqslant M$ will be the number of
clusters, by
$\mathbb{P}(\mathbf{X}|\tilde{\Lambda},\Pi):=\sum_{j=1}^{K}\pi_{j}\mathbb{P}(\mathbf{X}|\Lambda_{j})\
(\star)$
where $\tilde{\Lambda}=(\Lambda_{1},\dots,\Lambda_{K})^{t}$ is the set of
parameters of each component (Saeed et al., 2013), and where we have
$\mathbb{P}(\mathbf{X}|\Lambda_{j})=\prod_{i=1}^{M}\lambda_{ji}^{x_{i}}(1-\lambda_{ji})^{1-x_{i}}\ldotp$
For this mixture model, we have
$\mathbb{E}(\mathbf{X}|\tilde{\Lambda},\Pi)=\sum_{j=1}^{K}\pi_{j}\mathbb{E}(\mathbf{X}|\Lambda_{j})=\sum_{j=1}^{K}\pi_{j}\Lambda_{j}\in(0,1)^{M}\ \text{by}\ (\dagger)\ldotp$
Similarly,
$\mathrm{cov}(\mathbf{X}|\tilde{\Lambda},\Pi)=-\mathbb{E}(\mathbf{X}|\tilde{\Lambda},\Pi)\mathbb{E}(\mathbf{X}|\tilde{\Lambda},\Pi)^{t}+\sum_{j=1}^{K}\pi_{j}(\mathbf{\Sigma}_{j}+\Lambda_{j}\Lambda_{j}^{t})$
where $\mathbf{\Sigma}_{j}$ is as defined in $(\ddagger)$, i.e.,
$(\mathbf{\Sigma}_{j})_{ik}=\delta_{ik}\lambda_{ji}(1-\lambda_{ji})\ldotp$
For a random sample $\mathbf{X}_{1},\dots,\mathbf{X}_{N}$ which is distributed
as in $(\star)$ with respect to $\tilde{\Lambda}$ and $\Pi$, the likelihood
function for $\mathbf{X}_{1},\dots,\mathbf{X}_{N}|\tilde{\Lambda},\Pi$ is
given by
$\mathcal{L}(\mathbf{X}_{1},\dots,\mathbf{X}_{N},\tilde{\Lambda},\Pi)=\prod_{n=1}^{N}\sum_{j=1}^{K}\pi_{j}\mathbb{P}(\mathbf{X}_{n}|\Lambda_{j})$
so that the log-likelihood $\ell$ is
$\ell(\mathbf{X}_{1},\dots,\mathbf{X}_{N},\tilde{\Lambda},\Pi)=\sum_{n=1}^{N}\log\left(\sum_{j=1}^{K}\pi_{j}\mathbb{P}(\mathbf{X}_{n}|\Lambda_{j})\right)=\sum_{n=1}^{N}\log\left(\sum_{j=1}^{K}\left\\{\pi_{j}\prod_{i=1}^{M}\lambda_{ji}^{x_{ni}}(1-\lambda_{ji})^{1-x_{ni}}\right\\}\right)\ldotp$
The aim is to generate an algorithm that finds $\Pi$, $\tilde{\Lambda}$ which
maximize $\ell$, which we will call $\Pi_{\text{max}}$ and
$\tilde{\Lambda}_{\text{max}}$ respectively. Unfortunately, due to the shape
of $\ell$, $\Pi_{\text{max}}$ and $\tilde{\Lambda}_{\text{max}}$ cannot be
found in closed form in that case.
Hence, we introduce a latent binary variable
$\mathbf{Z}=(z_{1},\dots,z_{K})^{t}$ associated with each instance of
$\mathbf{X}$ (Bishop, 2006). Therefore, the conditional distribution of
$\mathbf{X}$, given the latent variable $\mathbf{Z}$, is given by
$\mathbb{P}(\mathbf{X}|\mathbf{Z},\tilde{\Lambda})=\prod_{j=1}^{K}\mathbb{P}(\mathbf{X}|\Lambda_{j})^{z_{j}},$
where the prior for the latent variable is
$\mathbb{P}(\mathbf{Z}|\Pi)=\prod_{j=1}^{K}\pi_{j}^{z_{j}}\ldotp$
By considering the random sample $\mathbf{X}_{1},\dots,\mathbf{X}_{N}$, we
have the likelihood function $\mathcal{G}$ for
$\mathbf{X}_{1},\dots,\mathbf{X}_{N}|\mathbf{Z}$ given by
$\mathcal{G}(\mathbf{X}_{1},\dots,\mathbf{X}_{N},\mathbf{Z},\tilde{\Lambda},\Pi)=\prod_{n=1}^{N}\mathbb{P}(\mathbf{X}_{n},\mathbf{Z}|\tilde{\Lambda},\Pi)=\prod_{n=1}^{N}\prod_{j=1}^{K}(\pi_{j}\mathbb{P}(\mathbf{X}_{n}|\Lambda_{j}))^{z_{nj}},$
so that, once expanded, we have
$\mathcal{G}(\mathbf{X}_{1},\dots,\mathbf{X}_{N},\mathbf{Z},\tilde{\Lambda},\Pi)=\prod_{n=1}^{N}\prod_{j=1}^{K}\left(\pi_{j}\prod_{i=1}^{M}\lambda_{ji}^{x_{ni}}(1-\lambda_{ji})^{1-x_{ni}}\right)^{z_{nj}}\ldotp$
Therefore, the log-likelihood $g$ is given by
$g(\mathbf{X}_{1},\dots,\mathbf{X}_{N},\mathbf{Z},\tilde{\Lambda},\Pi)=\sum_{n=1}^{N}\sum_{j=1}^{K}z_{nj}\left(\log(\pi_{j})+\sum_{i=1}^{M}\left\\{x_{ni}\log(\lambda_{ji})+(1-x_{ni})\log(1-\lambda_{ji})\right\\}\right)\ldotp$
Hence, the expectation of the log-likelihood with respect to the marginal
distribution of $\mathbf{Z}$ is given by
$\mathbb{E}_{\mathbb{P}(\mathbf{Z}|\Pi)}(g(\mathbf{X}_{1},\dots,\mathbf{X}_{N},\mathbf{Z},\tilde{\Lambda},\Pi))=\sum_{n=1}^{N}\sum_{j=1}^{K}\mathbb{E}(z_{nj})\left(\log(\pi_{j})+\sum_{i=1}^{M}\left\\{x_{ni}\log(\lambda_{ji})+(1-x_{ni})\log(1-\lambda_{ji})\right\\}\right)\ldotp(\star\star)$
By Bayes’ Theorem, following Saeed et al., (2013) and Bishop, (2006), we have
$\mathbb{E}(z_{nj})=\dfrac{\pi_{j}\mathbb{P}(\mathbf{X}_{n}|\Lambda_{j})}{\sum_{k=1}^{K}\pi_{k}\mathbb{P}(\mathbf{X}_{n}|\Lambda_{k})}\ldotp\
(\diamond)$
Let
$N_{j}=\sum_{n=1}^{N}\mathbb{E}(z_{nj})\quad\text{and}\quad\mathbf{\overline{X}}_{j}=\dfrac{1}{N_{j}}\sum_{n=1}^{N}\mathbb{E}(z_{nj})\mathbf{X}_{n}\ldotp$
Then one can show that
$\tilde{\Lambda}_{\text{max}}=(\hat{\Lambda}_{1},\dots,\hat{\Lambda}_{K})^{t}\quad\text{where}\quad\hat{\Lambda}_{j}=\mathbf{\overline{X}}_{j}$
$\text{and}\quad\hat{\Pi}_{\text{max}}=(\hat{\pi}_{1},\dots,\hat{\pi}_{K})^{t}\quad\text{where}\quad\hat{\pi}_{j}=N_{j}/N\ldotp$
Indeed, such a $\tilde{\Lambda}_{\text{max}}$ makes the derivative with
respect to the $\lambda_{j}$s vanish, and such a $\hat{\Pi}_{\text{max}}$
maximizes $(\star\star)$ through a Lagrange multiplier as seen in Saeed et
al., (2013); Bishop, (2006).
Therefore, the expectation-maximization algorithm for a Bernoulli mixture
model first gives initialization values to $\tilde{\Lambda}$ and $\Pi$. The
algorithm then computes the value of the log-likelihood at the initial values
$\tilde{\Lambda}_{0}$, $\Pi_{0}$.
On the next step, the algorithm does a loop by evaluating $\mathbb{E}(z_{nj})$
as in $(\diamond)$, and reevaluates $\hat{\Lambda}_{j},\hat{\pi}_{j}$ as found
above. Then, we evaluate the log-likelihood at these values. We stop the loop
whenever the log-likelihood meets a convergence criterion.
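The loop just described can be written compactly. The sketch below is our own minimal NumPy implementation, assuming a complete binary data matrix; it performs the E-step $(\diamond)$ followed by the M-step updates $\hat{\Lambda}_{j}=\mathbf{\overline{X}}_{j}$ and $\hat{\pi}_{j}=N_{j}/N$:

```python
import numpy as np

def bernoulli_mixture_em(X, K, n_iter=200, tol=1e-6, seed=0):
    """EM for a Bernoulli mixture on a complete binary matrix X of shape (N, M).
    Returns mixing weights Pi, component parameters Lam, responsibilities resp."""
    rng = np.random.default_rng(seed)
    N, M = X.shape
    Lam = rng.uniform(0.25, 0.75, size=(K, M))   # lambda_{ji}, random start
    Pi = np.full(K, 1.0 / K)                     # pi_j, uniform start
    prev = -np.inf
    for _ in range(n_iter):
        # E-step: log pi_j + sum_i [x log lambda + (1 - x) log(1 - lambda)]
        logp = X @ np.log(Lam).T + (1 - X) @ np.log(1 - Lam).T + np.log(Pi)
        norm = np.logaddexp.reduce(logp, axis=1)
        resp = np.exp(logp - norm[:, None])      # E(z_nj), the ratio in (diamond)
        # M-step: N_j, then lambda_j = X-bar_j and pi_j = N_j / N
        Nj = resp.sum(axis=0)
        Lam = np.clip((resp.T @ X) / Nj[:, None], 1e-6, 1 - 1e-6)
        Pi = Nj / N
        ll = norm.sum()                          # log-likelihood, for convergence check
        if ll - prev < tol:
            break
        prev = ll
    return Pi, Lam, resp
```

Clipping $\hat{\lambda}_{ji}$ away from 0 and 1 avoids $\log 0$ when a cluster's answers are unanimous; the number of iterations and tolerance are illustrative choices.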
## 3 Application to the data-set
Each cluster has a certain probability of a yes answer at the different ages
of the children. We assume that they follow a Beta distribution. Hence, let
$P_{ij}\sim\mathrm{Beta}\left(\dfrac{1}{2},\dfrac{1}{2}\right)\quad\text{for }i\in\\{1,\dots,6\\}\ \text{and }j\in\\{1,\dots,K\\}$
where $K$ is the number of clusters, as the Beta distribution is a natural
model for probabilities, and $\mathrm{Beta}(1/2,1/2)$ places more density
near the boundaries of its support, $0$ and $1$. This is our
prior. That is, given a cluster $j\in\\{1,\dots,K\\}$ and the age tranche of
the children $i\in\\{1,\dots,6\\}$, the group of parents in the $j$th cluster
has a probability of $P_{ij}$ to say that their children have wheezed at (or
within a year of) the $i$th age.
Now, to provide the cluster assignments, we use the Categorical distribution,
which is a generalized Bernoulli distribution (in higher dimensions), as each
vector in the data either belongs to one of the $K$ clusters or does not.
Hence, let
$Z_{n}\ |\ R\sim\mathrm{Categorical}_{K}(R)\quad\text{for }n\in\\{1,\dots,N\\},$
where $R$ represents the probabilities of the cluster assignments, which we
assume follows a Dirichlet distribution, i.e., a generalized Beta distribution
(in higher dimensions), so that
$R\sim\mathrm{Dirichlet}_{K}(\alpha)$
for some constant and positive vector $\alpha$, which we are free to choose as
it is an uninformative distribution.
Therefore, for $X_{in}$ the random variable representing the fact that the
$n$th patient wheezed at the $i$th age tranche, we have
$X_{in}\ |\ P_{ij},Z_{n}\sim\mathrm{Bernoulli}(P_{i,Z_{n}})\quad\text{for }i\in\\{1,\dots,6\\}\ \text{and }n\in\\{1,\dots,N\\}\ldotp$
To sum up, the model is given by
$P_{ij}\sim\mathrm{Beta}(1/2,1/2),$
$R\sim\mathrm{Dirichlet}_{K}(\alpha),$
$Z_{n}\ |\ R\sim\mathrm{Categorical}_{K}(R),$
$X_{in}\ |\ P_{ij},Z_{n}\sim\mathrm{Bernoulli}(P_{i,Z_{n}}),$
for $i\in\\{1,\dots,6\\}$, $j\in\\{1,\dots,K\\}$, and $n\in\\{1,\dots,N\\}$, where each variable
describes the following:
$P_{ij}$: probability of the group of parents in the $j$th cluster to say that their children have wheezed during the $i$th age,
$R$: probabilities of assignment to each cluster $1,\dots,K$,
$Z_{n}$: assignment of the $n$th patient to each cluster $1,\dots,K$,
$X_{in}$: whether the $n$th patient wheezed during the $i$th age.
The plate diagram for such a configuration can be found in Fig. 2. Note again
that $\alpha$, a $K$-dimensional vector with $\alpha_{j}>0$ for all
$j\in\\{1,\dots,K\\}$, can be chosen arbitrarily since the distribution of $R$
is uninformative ($Z_{n}$ is latent).
Figure 2: Plate diagram of the considered Bernoulli Mixture Model for the
considered cohort.
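A draw from this generative model can be sketched as follows (an illustrative simulation; the choice $\alpha=\mathbf{1}$ and the seed are arbitrary, consistent with the observation that $\alpha$ is free):

```python
import numpy as np

def sample_cohort(N, K, alpha, seed=0):
    """Sample one synthetic cohort (answer matrix X of shape (6, N)) from the model."""
    rng = np.random.default_rng(seed)
    P = rng.beta(0.5, 0.5, size=(6, K))              # P_ij ~ Beta(1/2, 1/2)
    R = rng.dirichlet(alpha)                         # R ~ Dirichlet_K(alpha)
    Z = rng.choice(K, size=N, p=R)                   # Z_n | R ~ Categorical_K(R)
    X = (rng.random((6, N)) < P[:, Z]).astype(int)   # X_in ~ Bernoulli(P_{i, Z_n})
    return P, R, Z, X

P, R, Z, X = sample_cohort(N=1184, K=4, alpha=np.ones(4))
```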
## 4 Results
We then perform the expectation-maximization algorithm on the log-likelihood
as explained in the previous section. We can then plot the Hinton plots for
$R$, the $P_{ij}$s, and the $Z_{n}$s. These are visualized in Fig. 3.
(a) The Hinton plot for $4$ clusters of $R$.
(b) The Hinton plot for $5$ clusters of $R$.
(c) The Hinton plot for $6$ clusters of $R$.
(d) The Hinton plot for $7$ clusters of $R$.
Figure 3: The Hinton plots of $R$ for different number of clusters. Each graph
shows the probabilities of being assigned to each cluster. Here, the data used
excludes the rows with NaN entries.
Hinton diagrams for $R$ show the number of elements per cluster. The bigger
the white square is, the bigger the cluster is. From Fig. 3, we can see that
for either setting $4,5,6$ or $7$ groups, there are $4$ dominant clusters.
Now, if we set the number of clusters to $64=2^{6}$, which is the number of
ways to arrange 0s and 1s in a 6-dimensional vector, i.e., all possible ways
the parents can answer the questionnaire, then we get the Hinton diagram
for $R$ as shown in Fig. 4.
Figure 4: The Hinton diagram by setting $64$ clusters of $R$. Figure 5: Hinton
diagram of the $P_{ij}$s for $i\in\\{1,\dots,6\\}$ and $j\in\\{1,\dots,4\\}$,
i.e., the probabilities of the parents saying that their child wheezed at each
age tranche separated by cluster. The squares in the lower rectangle show the
"sizes" of each cluster (more formally, the probability of belonging to each
cluster).
We can see from Fig. 4 that there are $4$ dominant clusters. Hence, later on
for the Bernoulli mixture model we will take 4 different clusters. Note that
under the implemented algorithm, although the clusters themselves do not
change when the code is rerun, their ordering does. A choice of 4 clusters
gives the following Hinton diagram for the $P_{ij}$s, as seen in Fig. 5,
superimposed with the Hinton diagram for $R$. According to
Fig. 5, Cluster 2 is made of the children with high probabilities of late
wheezing, Cluster 4 of the children with high probabilities of early wheezing,
Cluster 3 of the children with high probabilities of persisting wheezing, and
Cluster 1 of the children with probabilities of sporadic and benign wheezing.
To give ourselves an idea of the assignments of each cluster for each patient,
we can plot the cluster heat map as shown in Fig. 6(a). As seen in Fig. 6(a),
this clustering method seems to be grouping quite efficiently the patients
with late-childhood wheezing (blue group), persistent wheezing (green group),
early-childhood wheezing (red group), and none or sporadic wheezing (purple
group).
(a) Rows with missing entries disregarded.
(b) Rows with missing entries kept.
Figure 6: Heat map of the answers of the parents and their respective cluster
(the cluster they have been assigned with highest probability) where (6(a))
the rows with missing entries were disregarded, and where (6(b)) the rows with
missing entries were all kept. Each dark blue entry represents a positive
answer by the parents that their child had wheezed, while a light blue entry
is a negative answer. From Fig. 5, Cluster 1 is here the purple group, Cluster
2 is the blue group, Cluster 3 is the green group and Cluster 4 is the red
group. The gray-scaled bar represents the probability of assignment to this
cluster, going from $0.4$ (white) to $1$ (black). Each missing answer for Fig.
6(b) from the parents is shown in gray.
Answers within a group that look more like outliers have a lower probability
of assignment to that cluster, as seen in Fig. 6(a). For example, the set of
answers (No, No, Yes, No, No, Yes) belongs to the purple cluster with the
smallest assignment probability in that cluster, at approximately 46% (but
still higher than its probability of assignment to any other cluster), while
the highest in the purple cluster is 97% for the set of answers (No, No, No,
No, No, No). For the blue cluster, the smallest
assignment probability is approximately 51% with set of answers (Yes, No, Yes,
No, Yes, Yes) while the highest is 97% with answers (No, No, No, Yes, Yes,
Yes) – which makes sense as it is the cluster of late-wheezing children. For
the green cluster, the smallest is 51% with answers (Yes, No, Yes, Yes, No,
No), the highest being at 99.9% with answers (Yes, Yes, Yes, Yes, Yes, Yes)
since it is the cluster of persistent wheeze issues children. Finally, for the
red cluster, the smallest probability is 61% for the answers (No, Yes, No, No,
No, Yes), the highest being 98% for the answers (Yes, Yes, No, No, No, No) as
it is the cluster of early-wheezing children.
Figure 7: Sankey diagram showing the patients flow between Bernoulli mixture
models with the complete data only (left) and all the data including rows with
missing entries (right). Note that the clusters here may not have the same
labels and colors as before since the algorithm was run again.
We can also perform the Bernoulli process on the full data, i.e., also on the
parents that did not say if their child wheezed at all age tranches, since
$P_{ij}$ is a probability assigned to each answer individually. The aim is to
compare whether the change of grouping of the data with answers for each age
tranche changes whether or not we add up this new incomplete data. Ideally, it
does not so that the prediction is accurate. The heat map of the clustering
via Bernoulli mixture of all the data, including the missing one, is shown in
Fig. 6(b).
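The reason missing entries are unproblematic is that an unanswered question simply drops out of the Bernoulli product in $\mathbb{P}(\mathbf{X}_{n}|\Lambda_{j})$. A sketch of the E-step with NaN masking (our own illustration, not the paper's code):

```python
import numpy as np

def masked_responsibilities(X, Lam, Pi):
    """E-step responsibilities for data X of shape (N, M) that may contain NaN.
    Lam is (K, M); Pi is (K,). Missing entries contribute nothing."""
    obs = ~np.isnan(X)                    # mask of answered questions
    Xf = np.nan_to_num(X)                 # placeholder value; masked out below
    term = (Xf[:, None, :] * np.log(Lam)[None]
            + (1.0 - Xf[:, None, :]) * np.log(1.0 - Lam)[None])
    logp = np.where(obs[:, None, :], term, 0.0).sum(axis=2) + np.log(Pi)
    logp -= logp.max(axis=1, keepdims=True)   # numerical stabilisation
    resp = np.exp(logp)
    return resp / resp.sum(axis=1, keepdims=True)

# Two patients, one answer missing each; parameter values are illustrative.
X = np.array([[1, 1, np.nan, 1, 1, 1],
              [0, 0, 0, np.nan, 0, 0]], dtype=float)
Lam = np.array([[0.9] * 6, [0.1] * 6])
resp = masked_responsibilities(X, Lam, np.array([0.5, 0.5]))
```

The first patient is assigned almost entirely to the high-probability cluster and the second to the low-probability one, using only the observed answers.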
Comparing the sizes of the clusters in Fig. 6(a) and Fig. 6(b), adding the
data with missing entries has increased the sizes of the green and blue
clusters. We can here plot a Sankey diagram, as seen in Fig. 7 showing to
which clusters the entries with complete data are mapped from a Bernoulli
mixture with only complete data to a Bernoulli mixture with all the data
available. From Fig. 7, we can see that the model for full entries remains
stable with only seven swing patients, of which one of type $\alpha$ (Yes, No,
Yes, No, Yes, Yes), two of type $\beta$ (Yes, Yes, No, No, Yes, No), one of
type $\gamma$ (Yes, No, Yes, No, No, Yes), two of type $\xi$ (No, No, No, Yes,
No, Yes), and one of type $\lambda$ (No, Yes, No, No, No, Yes).
Those swing patients are shown in Table 2. The "biggest" swing is made by the
two patients of type $\beta$ (see Table 2), who have above 60% probability of
being assigned to different clusters once the entries with missing data are
added to the full data set. This is because the expression of the wheeze
phenotype of these patients is very unpredictable and shows no pattern – the
parents answered (Yes, Yes, No, No, Yes, No).
Patient type | Cl L | Pr L | Cl R | Pr R
---|---|---|---|---
$\alpha$ | 3 | 51% | 2 | 55%
$\beta$ | 1 | 67% | 2 | 65%
$\gamma$ | 4 | 64% | 2 | 40%
$\xi$ | 4 | 56% | 3 | 57%
$\lambda$ | 1 | 41% | 4 | 54%
Table 2: Allocations of the swing patients from a Bernoulli mixture with only
the clean data (assigned to the cluster Cl L with probability Pr L) to all the
data (assigned to the cluster Cl R with probability Pr R), where each cluster
number is as shown in Fig. 7.
The Bernoulli Mixture Model is advantageous in practice because it allows
practitioners to add data throughout the childhood of a patient. For example,
if the parents of a child aged 3 answered the questionnaire (whether or not
the child wheezed before age 1, and around age 3), the data can still be
clustered through the Bernoulli Mixture Model, so that the practitioner can
look at complete past entries in this group ("neighboring data") and predict
future wheezing patterns of this child. As seen in Fig. 6(b), the Bernoulli
Mixture Model generally assigns higher probabilities of wheeze when no entry
is given, unless the wheeze is sporadic. This method might be more accurate if
the parents were asked whether their children wheezed during smaller age
intervals, such as yearly.
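The prediction step sketched in this paragraph, assigning a child with partial answers to a cluster, amounts to computing posterior responsibilities with the unanswered tranches marginalized out. A minimal sketch, assuming fitted mixing weights `pi` and per-cluster answer probabilities `P` from a Bernoulli mixture (names are illustrative):

```python
import numpy as np

def cluster_posterior(answers, pi, P):
    """Posterior cluster probabilities for one child.
    answers: per-tranche entries, 1 (wheezed), 0 (did not), or None
    (unanswered); pi: mixing weights (K,); P: answer probs (K, D)."""
    logp = np.log(pi).copy()
    for j, a in enumerate(answers):
        if a is None:
            continue                      # marginalize missing tranches
        logp += np.log(P[:, j]) if a else np.log(1 - P[:, j])
    w = np.exp(logp - logp.max())
    return w / w.sum()
```

A 3-year-old with only the first two answers recorded is then assigned to the cluster with the highest posterior, whose complete past entries serve as the "neighboring data" mentioned above.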
## 5 Conflicts of interest
The author(s) declare no known conflict of interest in undertaking this research.
## References
  * Bishop, (2006) Bishop, C. M. (2006). Pattern Recognition and Machine Learning, pages 444–455. Information Science and Statistics. Springer (New York, NY). ISBN: 978-03-873-1073-2.
* Brew et al., (2019) Brew, B. K., Chiesa, F., Lundholm, C., Örtqvist, A., and Almqvist, C. (2019). A modern approach to identifying and characterizing child asthma and wheeze phenotypes based on clinical data. Plos One, 14(12).
* Deliu et al., (2016) Deliu, M., Sperrin, M., Belgrave, D., and Custovic, A. (2016). Identification of Asthma Subtypes Using Clustering Methodologies. Pulmonary Therapy, 2(1):19–41.
* Henderson et al., (2008) Henderson, A. J., Granell, R., Heron, J., et al. (2008). Associations of wheezing phenotypes in the first 6 years of life with atopy, lung function and airway responsiveness in mid-childhood. Thorax, 63(11):974–980.
* Kurukulaaratchy et al., (2014) Kurukulaaratchy, R. J., Zhang, H., Raza, A., Patil, V., Karmaus, W., Ewart, S., and Arshad, S. H. (2014). The Diversity of Young Adult Wheeze; a Cluster Analysis in a Longitudinal Birth Cohort. Journal of the British Society for Allergy and Clinical Immunology, 44(5):724–735.
* Loza et al., (2016) Loza, M. J., Djukanovic, R., Chung, K. F., Horowitz, D., Ma, K., Branigan, P., Barnathan, E. S., Susulic, V. S., and et al (2016). Validated and longitudinally stable asthma phenotypes based on cluster analysis of the ADEPT study. Respiratory Research, 17(165).
* Oksel et al., (2019) Oksel, C., Granell, R., Mahmoud, O., Custovic, A., Henderson, A. J., Investigators, S., and Investigators, B. T. (2019). Causes of variability in latent phenotypes of childhood wheeze. The Journal of Allergy and Clinical Immunology, 143(5):1783–1790.
* Saeed et al., (2013) Saeed, M., Javed, K., Babri, H. A., et al. (2013). Machine learning using Bernoulli mixture models: Clustering, rule extraction and dimensionality reduction. Neurocomputing, 119(7):366–374.
# Multispacecraft Remote Sensing and In Situ Observations of the 2020 November
29 Coronal Mass Ejection and Associated Shock: From Solar Source to
Heliospheric Impacts
Chong Chen (State Key Laboratory of Space Weather, National Space Science
Center, Chinese Academy of Sciences, Beijing 100190, People's Republic of
China<EMAIL_ADDRESS>; University of Chinese Academy of Sciences, Beijing
100049, People's Republic of China)
Ying D. Liu (State Key Laboratory of Space Weather, National Space Science
Center, Chinese Academy of Sciences, Beijing 100190, People's Republic of
China<EMAIL_ADDRESS>; University of Chinese Academy of Sciences, Beijing
100049, People's Republic of China)
Bei Zhu (Space Engineering University, Beijing 101416, People's Republic of
China)
###### Abstract
We investigate the source eruption, propagation and expansion characteristics,
and heliospheric impacts of the 2020 November 29 coronal mass ejection (CME)
and associated shock, using remote sensing and in situ observations from
multiple spacecraft. A potential–field source–surface model is employed to
examine the coronal magnetic fields surrounding the source region. The CME and
associated shock are tracked from the early stage to the outer corona using
extreme ultraviolet and white light observations. Forward models are applied
to determine the structures and kinematics of the CME and the shock near the
Sun. The shock shows an ellipsoidal structure, expands in all directions, and
encloses the whole Sun as viewed from both SOHO and STEREO A, which results
from the large expansion of the CME flux rope and its fast acceleration. The
structure and potential impacts of the shock are mainly determined by its
radial and lateral expansions. The CME and shock arrive at Parker Solar Probe
and STEREO A. Only based on the remote sensing observations, it is difficult
to predict whether and when the CME/shock would arrive at the Earth. Combining
Wind in situ measurements and WSA-ENLIL simulation results, we confirm that
the far flank of the CME (or the CME leg) arrives at the Earth with no shock
signature. These results highlight the importance of multipoint remote sensing
and in situ observations for determining the heliospheric impacts of CMEs.
Interplanetary shocks (829) — Solar wind (1534) — Solar coronal mass ejections
(310)
## 1 Introduction
Coronal mass ejections (CMEs) are large–scale magnetized plasma structures
ejected from the Sun into interplanetary space, and they are responsible for
major geomagnetic storms in the terrestrial environment. When traveling in
interplanetary space, they are called interplanetary CMEs (ICMEs). A fast CME
can drive a
shock ahead of it, when the CME speed relative to the ambient solar wind is
greater than the magnetosonic speed or Alfvén speed of the ambient solar wind.
CME–driven shocks can accelerate solar energetic particles (SEPs) and enhance
the geo–effectiveness of CMEs. Understanding the process of CME/shock
propagation and evolution is of critical importance for space weather
forecasting.
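The shock criterion stated above can be illustrated numerically. The sketch below computes the perpendicular fast magnetosonic speed $\sqrt{v_A^2+c_s^2}$ from illustrative field, density, and temperature values; it is a simplified textbook estimate with assumed inputs, not values measured for this event.

```python
import math

def alfven_speed(B_nT, n_cm3):
    """Alfven speed v_A = B / sqrt(mu0 * rho), returned in km/s."""
    mu0 = 4e-7 * math.pi
    m_p = 1.67262192e-27                     # proton mass, kg
    rho = n_cm3 * 1e6 * m_p                  # mass density, kg/m^3
    return (B_nT * 1e-9) / math.sqrt(mu0 * rho) / 1e3

def fast_magnetosonic_speed(B_nT, n_cm3, T_K):
    """Perpendicular fast speed v_ms = sqrt(v_A^2 + c_s^2), in km/s."""
    kB = 1.380649e-23
    m_p = 1.67262192e-27
    gamma = 5.0 / 3.0
    cs = math.sqrt(gamma * kB * T_K / m_p) / 1e3   # sound speed, km/s
    return math.sqrt(alfven_speed(B_nT, n_cm3) ** 2 + cs ** 2)

def drives_shock(v_cme, v_wind, B_nT, n_cm3, T_K):
    """A shock forms when the CME speed relative to the ambient solar
    wind exceeds the fast magnetosonic speed of that wind."""
    return (v_cme - v_wind) > fast_magnetosonic_speed(B_nT, n_cm3, T_K)
```

With typical upstream values (B = 5 nT, n = 5 cm-3, T = 1e5 K), v_ms is only a few tens of km s-1, so a CME moving much faster than the ambient wind easily drives a shock.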
Multispacecraft remote sensing and in situ observations are required to study
the complete chain of the evolution of CMEs/shocks from their solar sources
through heliospheric propagation to their impacts. As far as we know, the
study of such a complete chain is still lacking. First, the use of
multi–perspective remote sensing observations can better constrain the
three–dimensional (3D) structure of the CME and its associated shock. To
determine the 3D morphology of a CME, Thernisien et al. (2006, 2009) develop a
graduated cylindrical shell (GCS) model based on the coronagraph images from
different vantage points. As for CME–driven shocks, Kwon et al. (2014, 2015)
propose an ellipsoid model to reconstruct the 3D structure of a shock. Second,
separated multipoint in situ measurements can better assess the structure and
the heliospheric impacts of CMEs and shocks (e.g., Liu et al. 2008, 2019; Hu
et al. 2017; de Lucas et al. 2011). The multispacecraft measurements at
different locations can determine the propagation and expansion
characteristics of CMEs/shocks along different directions, including the
evolution anisotropy of CMEs/shocks resulting from different background solar
wind. Multipoint in situ measurements are also needed to verify the
forecasting accuracy of a magnetohydrodynamic model along different
directions in the same event simulation (e.g., Reinard et al. 2012; Biondo et
al. 2021).
Finally, the combination of remote sensing and in situ observations from
multiple spacecraft can provide a more complete picture of CME/shock
evolution, and improve our capabilities to interpret events and to forecast
space weather effects (e.g., Liu et al. 2008, 2011, 2012; Nieves-Chinchilla et
al. 2012; Hu et al. 2016; Palmerio et al. 2021). Below we show examples of how
the multi–perspective remote sensing observations in combination with in situ
measurements are used to investigate the 3D propagation and evolution of
CMEs/shocks. Based on the wide–angle imaging observations from Solar
Terrestrial Relations Observatory (STEREO; Kaiser et al. 2008) and in situ
measurements at 1 AU, a geometric triangulation technique has been developed
to track CMEs (Liu et al., 2010a, b), and the Sun–to–Earth kinematics of fast
and slow CMEs have been revealed (Liu et al., 2013, 2016). Analyzing
multipoint imaging observations and widely separated in situ measurements, Liu
et al. (2017, 2019) find the fading and persistence of CME–driven shocks along
different directions in the heliosphere. In addition, using multipoint remote
sensing and in situ observations, the interactions of successive CMEs, which
may cause intense geomagnetic storms, are examined during the whole
propagation process (e.g., Liu et al. 2012, 2014b, 2014a; Webb et al. 2013;
Mishra et al. 2015; Lugaz et al. 2017). The geometrical and magnetic
relationships between ICMEs at 1 AU, CMEs near the Sun and their solar sources
have also been investigated, using coordinated imaging and in situ
observations from multiple vantage points (e.g., Liu et al. 2010b, 2011; Xie
et al. 2021; Marubashi et al. 2015; Syed Ibrahim et al. 2019; Savani et al.
2015). However, the connections between in situ signatures and solar source
characteristics are still not fully understood, because CMEs can have a very
complex evolution from the Sun to 1 AU, including deceleration, deflection,
rotation, erosion, and interaction with ambient solar wind structures and
other CMEs (e.g., Manchester et al. 2017 and references therein). Therefore,
the comparisons between multi–perspective remote sensing observations and
multipoint in situ measurements are necessary and critical to make the correct
connections between the solar source eruptions, CMEs near the Sun and their in
situ counterparts and to understand their heliospheric impacts.
On 2020 November 29, a large CME erupted, which is associated with a shock and
an M4.4 class flare that peaked at about 13:11 UT. The CME caused the first
type II radio burst and the first widespread SEP event of solar cycle 25
(Lario et al., 2021; Kollhoff et al., 2021). The eruption is a limb event as
viewed from the Earth, and is observed by a fleet of spacecraft at different
vantage points. The positions of the spacecraft in the ecliptic plane on 2020
November 29 are shown in Figure 1. STEREO A and Parker Solar Probe (PSP; Fox
et al. 2016) are 0.96 AU and 0.81 AU from the Sun, and 57.8° and 96.2° east of
the Earth, respectively. The red arrow indicates the location of the source
region (E97°S25°), which points to PSP in the ecliptic plane. This event
provides a unique opportunity to investigate the solar source, propagation,
and heliospheric impacts of the CME and its shock, as multi–perspective remote
sensing observations and multipoint in situ measurements are available for
this event. We examine source eruption signatures in Section 2, the
propagation and expansion characteristics of the CME/shock in Section 3, and
in situ measurements at different locations in Section 4. The results are
summarized in Section 5. The results of this work provide a more complete
propagation and expansion picture of the CME and its shock, and highlight the
importance of multispacecraft remote sensing and in situ observations for
determining the heliospheric impacts of CMEs.
## 2 Source eruption signatures
The CME was launched around 12:45 UT on 2020 November 29 associated with
significant features, such as an M4.4 class flare, coronal dimmings, and EUV
waves as shown in Figure 2. Panel (a) shows an overview of the source region
in a 195 Å image from the Extreme–Ultraviolet Imager (EUVI) of the Sun Earth
Connection Coronal and Heliospheric Investigation (SECCHI; Howard et al. 2008)
aboard STEREO A with potential–field source–surface (PFSS) extrapolation
results mapped on it. The viewpoint of the image deviates a few degrees from
STEREO A (see the black part on the left of the EUVI image) to show the
coronal magnetic field configuration around the source region. The closed
magnetic field arches are toroidal and surrounded by open magnetic field lines
of the same polarity. The source region is beneath the right part of the
toroidal closed magnetic field arches as marked by the white arrow in panel
(a).
Panels (b)-(c) display STEREO A/EUVI 195 Å running–difference images at two
moments during the time of the eruption. The solar flare is visible at the
beginning of the eruption followed by a significant dimming region, which is
indicative of the removal of the coronal plasma. The dimming region
continuously extends. Panel (b) also shows the significant rise and expansion
of the plasma loop. The most dramatic signature in the EUV observations is the
EUV wave around the dimming, which propagates away from the source region and
sweeps a large area of the solar disk, as shown in panel (c). The continuously
extended areas of the dimming and the EUV wave indicate the expansion of the
CME as it rises up.
Panel (d) shows a hot channel at its nascent stage, which has been interpreted
as a flux rope (e.g., Zhang et al. 2012; Cheng et al. 2013, 2014) although the
magnetic field is not observed, in a 131 Å image observed by the Atmosphere
Imaging Assembly (AIA; Lemen et al. 2012) on board Solar Dynamics Observatory
(SDO; Pesnell et al. 2012). This is a limb event as viewed from the Earth, and
the source region is just behind the solar limb, so we can track the hot
channel at the early stage using EUV observations (see below). The axis of the
hot channel probably has a large inclination angle, because the two legs of
the hot channel are separated by a large distance. SDO/AIA 193 Å running–difference
images are displayed in panels (e)-(f). In panel (e), the plasma loop is
bubble–shaped ahead of the hot channel shown in panel (d). There is an
ejection propagating toward the south as marked by the red arrow in panel (f).
This is not the eruption we study, because of its propagation direction. This
eruption may be caused by the EUV wave, which sweeps across and destabilizes
it.
These observations suggest that the hot channel has a large tilt angle, and,
as the hot channel rises and expands, it causes a dimming region and a
large–scale EUV wave. The EUV wave propagates across a significant portion of
the solar disk and is recognized as the footprint of the shock (see below).
## 3 Structure and kinematics of the CME/shock
EUV and coronagraph observations of the eruption from GOES 16, STEREO A, and
Solar and Heliospheric Observatory (SOHO; Domingo et al. 1995) are displayed
in Figure 3. Panels (a)-(c) show the eruption at the early stage in EUV and
white light images. The primary EUV wave is considered to be the footprint of
the CME–driven shock (e.g., Patsourakos & Vourlidas 2012; Cheng et al. 2012;
Kwon et al. 2014), and at the early time the fronts of the CME and shock are
not well separated (see panel (c)). Therefore, they can be used together to
determine the shock structure. The well–developed CME and shock in coronagraph
images from two vantage points are shown in panels (d) and (g). The shock can
be seen in the images as a faint edge around the CME leading edge (e.g., Liu
et al. 2008; Hess & Zhang 2014; Zhu et al. 2021). We can see the backward
propagation of the shock on the opposite side of the eruption, because it
expands in all directions. Previous studies suggest that the shock can be
modeled well using a simple spheroidal structure (e.g., Hess & Zhang 2014;
Kwon et al. 2014; Liu et al. 2017, 2019). Here, we use a geometrical ellipsoid
model developed by Kwon et al. (2014) to determine the 3D morphology of the
shock, based on the EUV and white light observations from GOES 16, STEREO A,
and SOHO. The ellipsoid model has seven free parameters: the height,
longitude, and latitude of the center of the ellipsoid; the lengths of the
three semiprincipal axes; and the rotation angle of the ellipsoid. We assume
the cross section of the shock ellipsoid perpendicular to the propagation
direction to be a circle, which eliminates two free parameters (one
semiprincipal axis and the rotation angle) from our fitting. We start fitting
the shock at
12:56 UT when the EUV wave can be distinguished from the brightenings in the
dimming region, and get a good visual consistency between the model and
observations. At the initial stage, the footprint of the ellipsoid model on
the solar disk is consistent with the EUV wave front (see panels (a)-(b)). In
coronagraph images, as shown in panels (c), (f), and (i), the shock front is
well represented by the ellipsoid model. As for the CME, we employ the GCS
model proposed by Thernisien et al. (2006) to fit the CME based on
running–difference coronagraph images from STEREO A and SOHO. The GCS model
can determine the propagation direction, the tilt angle of the CME flux rope,
and the height. By adjusting the free parameters, the GCS model fits the CME
observations from the two viewpoints very well, as shown in panels (e) and
(h).
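As a rough geometric illustration of the reduced ellipsoid model described above, the sketch below generates the surface of an ellipsoid whose semiaxis `a` lies along the propagation direction (`lon`, `lat`) and whose cross section perpendicular to it is a circle of semiaxis `b`, removing one semiprincipal axis and the rotation angle as in the text. This is a toy parameterization, not the Kwon et al. (2014) fitting code; the function name and rotation convention are assumptions.

```python
import numpy as np

def shock_ellipsoid(h, lon, lat, a, b, n=40):
    """Point cloud of a shock-like ellipsoid (sketch).
    h: heliocentric height of the ellipsoid center (R_sun);
    lon, lat: propagation direction of the center (deg);
    a: semiaxis along the propagation direction (radial expansion);
    b: semiaxis of the circular cross section (lateral expansion)."""
    u = np.linspace(0, 2 * np.pi, n)
    v = np.linspace(0, np.pi, n)
    U, V = np.meshgrid(u, v)
    # ellipsoid in a frame whose x-axis is the propagation direction
    pts = np.stack([a * np.cos(V),
                    b * np.sin(V) * np.cos(U),
                    b * np.sin(V) * np.sin(U)], axis=-1)
    lon, lat = np.radians(lon), np.radians(lat)
    # rotate the local x-axis onto (lon, lat), then translate by h
    Ry = np.array([[np.cos(lat), 0, -np.sin(lat)],
                   [0, 1, 0],
                   [np.sin(lat), 0, np.cos(lat)]])
    Rz = np.array([[np.cos(lon), -np.sin(lon), 0],
                   [np.sin(lon), np.cos(lon), 0],
                   [0, 0, 1]])
    M = Rz @ Ry
    center = M @ np.array([h, 0.0, 0.0])
    return pts @ M.T + center
```

The shock nose then sits at heliocentric distance h + a along the propagation direction, which is the quantity tracked below.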
Figure 4 shows the 3D modeled CME and shock structures at 15:18 UT on 2020
November 29. At this moment, the propagation longitude of the CME and shock is
about 85° and 80° east of the Sun–Earth line as marked by the blue and red
arrows, respectively. Application of the GCS model gives an average
propagation direction of about 90° east of the Sun–Earth line and about 17°
south of the ecliptic plane. The propagation direction of the CME is roughly
consistent with the location of the source region (E97°S25°), and does not
change much within 30 $R_{\sun}$. The tilt angle of the CME flux rope obtained
from the GCS model is about 75° with respect to the ecliptic plane, which is
consistent with the AIA 131 Å observations (see Figure 2 (d)). When we employ
the GCS model to fit the CME (e.g., Thernisien et al. 2006, 2009; Liu et al.
2010b), the half angle increases from 15° at 2.4 $R_{\sun}$ to 70° at 15.8
$R_{\sun}$, demonstrating the large expansion of the flux rope at its initial
stage. The speed of the CME leading edge is accelerated from $\sim$800 km s-1
to $\sim$2100 km s-1 within 10 $R_{\sun}$ (see below). Therefore, the
ellipsoidal structure of the shock is produced by the large expansion of the
CME flux rope and its fast acceleration. The front of the shock nose is close
to the CME nose, while the flank has farther distances from the CME. The shock
surrounds the whole Sun, which helps to explain the detection of SEPs at Solar
Orbiter behind the location of the eruption (see Figure 1). The CME and shock
could arrive at PSP and STEREO A, according to their propagation directions
and large expansions.
We can track the CME flux rope and the shock from their initial stage to the
outer corona with projection effects minimized, as the propagation direction
is almost perpendicular to the Sun–Earth line. The translation and expansion
distances of the shock can be obtained from the ellipsoid model fitting. As
shown in Figure 4, the lateral and radial expansion distances are represented
by “b” and “c”, respectively. The “d” and “d + c” denote the translation
distance of the shock center and shock nose height, respectively. The motion
of the CME flux rope can be tracked by stacking the EUV running–difference
images within a slit along the radial direction, as marked by the white line
in the panel (d) of Figure 2. The distance–time map of 131 Å
running–difference images from SDO/AIA is shown in the left panel of Figure 5.
The heights of the hot channel are extracted along the track, as indicated by
the red dashed curve. The right panel of Figure 5 shows the time–elongation
map produced by stacking running–difference images of C2 and C3 from
SOHO/LASCO within a slit along the ecliptic plane. The track associated with
the CME is marked by the red dashed curve. The track in C2 is not clear,
because the CME is so fast that C2 only captures three images. Elongation
angles of the CME leading edge in the ecliptic plane are extracted along the
track. Given that SOHO is about 90° away from the propagation direction of the
CME, we use a harmonic mean (HM) approximation (Lugaz et al., 2009), which
assumes that CMEs are attached to the Sun as a spherical front and move along
a fixed radial direction, to derive the CME kinematics. Readers are directed
to Liu et al. (2013) for discussions of selection of CME geometry depending on
the observation angle of the spacecraft.
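The HM conversion mentioned above treats the CME front as a sphere anchored to the Sun, with the line of sight tangent to that sphere. A minimal sketch of the standard Lugaz et al. (2009) relation, where `phi_deg` is the angle between the observer and the CME propagation direction (about 90° for SOHO in this event); the function name is illustrative.

```python
import math

def hm_distance(elong_deg, phi_deg, d_obs_au=1.0):
    """Harmonic-mean conversion of elongation to the radial distance of
    the CME leading edge: the CME is modeled as a sphere attached to
    the Sun, moving along a fixed radial direction.
    elong_deg: elongation angle of the front seen by the observer;
    phi_deg: observer-to-propagation-direction angle;
    d_obs_au: Sun-observer distance in AU. Returns distance in AU."""
    e = math.radians(elong_deg)
    p = math.radians(phi_deg)
    return 2 * d_obs_au * math.sin(e) / (1 + math.sin(e + p))
```

For phi = 90° this reduces to d = 2 d_obs tan(e/2), so the derived distances grow smoothly with the extracted elongations.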
The height and speed of the CME obtained from the GCS modeling, HM
approximation, and EUV observations, and the kinematics of the shock are
displayed in Figure 6. Note that we have used the GCS propagation longitude
(E90°) as input for the HM approximation. As shown in the top panel, the CME
height from the HM approximation can connect with the GCS model. The height of
the shock nose is slightly larger than the CME height, because the front of
the shock nose is ahead of and close to the CME leading edge (see Figure 4).
The radial distance of the CME flux rope obtained from the EUV observations
could be connected with the height of the CME from the GCS model. The speeds
of the CME and shock nose are shown in the middle panel. The speed profiles of
the CME and shock nose are similar. The CME and shock nose are accelerated to
about 2100 km s-1 within only 1 hour, below $\sim$6 $R_{\sun}$, and then
decelerate. According to the propagation characteristics of fast CMEs
described by Liu et al. (2013), the speeds probably undergo a longer, gradual
deceleration in interplanetary space to match the shock speed at STEREO A
(about 700 km s-1), as marked by a horizontal dashed line in the middle panel.
The bottom panel shows the speeds of the lateral expansion, radial expansion
and translation of the shock. The expansion speeds of the shock are much
larger than the translation speed of the shock center. Therefore, the shock
radial expansion provides a major contribution to the propagation of the shock
nose, and the structure and potential impacts of the shock are mainly
determined by its radial and lateral expansions. We also overlap the GOES 1–8
Å X–ray flux on the middle panel scaled in arbitrary units. The liftoff of the
CME flux rope and the rise of the X–ray flux are almost at the same time. All
the speeds increase during the flare rising phase, and reach maxima after the
peak value of the X–ray flux.
Figure 7 shows the radio dynamic spectra associated with the 2020 November 29
CME from PSP, STEREO A, and Wind. The intense type III radio bursts, which
start at about 12:55 UT on November 29, within several minutes after the
liftoff of the CME, are observed by all three spacecraft. An intense type III
radio burst portends the occurrence of a major CME on the Sun (e.g., Reiner et
al. 2001). All the spacecraft observe a short–duration type II radio burst in
the initial phase, which starts at about 13:10 UT, almost at the peak of the
X–ray flux, as shown in the zoomed-in panels. We use the Leblanc density model
(Leblanc et al., 1998) with an electron density of 15 cm-3 at 1 AU to convert
the shock nose distances obtained from the ellipsoid model to frequencies. The
Leblanc density model describes the average radial variation of the density of
the medium through which the shock propagates, so the electron density of 15
cm-3 is a nominal value and need not be the density observed at 1 AU. The
frequencies are doubled to their harmonic frequencies and overlapped on the
spectra. The frequencies roughly match the observed type II radio burst when
the corresponding shock nose distances are below $\sim$ 5 $R_{\sun}$. The
vertical dashed lines in the PSP spectrum represent the shock arrival times at
PSP from in situ measurements. The first one is associated with the small
CME of 2020 November 26. The second shock is the one we study in this paper,
and is also observed at STEREO A. The arrival times of the second shock at PSP
and STEREO A correspond to the suddenly enhanced, diffuse intensity in the
spectra. Note that there is no shock signature observed at Wind.
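The distance-to-frequency conversion used above can be sketched as follows, assuming the Leblanc et al. (1998) radial profile rescaled so that the nominal density at 1 AU is 15 cm-3, and the standard relation f_pe [kHz] ≈ 8.98 sqrt(n_e [cm-3]); the frequencies are then doubled to the harmonic, as in the text. Function names are illustrative.

```python
import math

def leblanc_density(r_rsun, n1au_cm3=15.0):
    """Leblanc et al. (1998) electron density profile (cm^-3), rescaled
    so that the nominal density at 1 AU (~215 R_sun) equals n1au_cm3."""
    n = 3.3e5 * r_rsun**-2 + 4.1e6 * r_rsun**-4 + 8.0e7 * r_rsun**-6
    n_1au = 3.3e5 * 215.0**-2 + 4.1e6 * 215.0**-4 + 8.0e7 * 215.0**-6
    return n * n1au_cm3 / n_1au

def harmonic_frequency_mhz(r_rsun, n1au_cm3=15.0):
    """Harmonic (2x fundamental) plasma frequency at a given shock nose
    distance, using f_pe [kHz] ~= 8.98 * sqrt(n_e [cm^-3])."""
    f_pe_khz = 8.98 * math.sqrt(leblanc_density(r_rsun, n1au_cm3))
    return 2 * f_pe_khz / 1e3
```

At a shock nose height of 5 R_sun this yields a harmonic frequency of a few MHz, the range in which the type II burst is matched in the spectra.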
To determine the propagation and heliospheric impacts of the CME/shock, we
request a WSA–ENLIL simulation (Arge et al., 2004; Odstrcil et al., 2004) run
from the Community Coordinated Modeling Center (CCMC). CME parameters derived
from the GCS model, such as the time across the inner boundary at 21.5
$R_{\sun}$, half angular width, radial speed, and propagation direction, are
input in the simulation. Note that, as mentioned above, a small CME occurred
on 2020 November 26 and was also observed by PSP. Here, we only insert the CME
of interest in the simulation, and the results are good compared with the in
situ measurements. The CME/shock arrival times at PSP and Wind predicted by
the WSA–ENLIL model are consistent with the observed arrival times from in
situ measurements. As for STEREO A, the predicted arrival time is only about 5
hours earlier than the observed shock arrival time. The density and velocity
profiles from the simulation generally agree with the in situ measurements,
although the simulation overestimates the density. The simulation results are
displayed in Figure 8. The top panels show the density and radial speed
distributions at 01:01:06 UT on 2020 December 1. The CME arrives at PSP and
STEREO A with a large angular width. The density and radial speed of the CME are
much larger than the ambient solar wind. From the bottom panels, the far flank
of the CME (or the CME leg) could arrive at the Earth with relatively low
density and speed about one day after arriving at STEREO A.
## 4 In situ measurements
Figure 9 displays the solar wind magnetic field measurements from the FIELDS
instruments (Bale et al., 2016) aboard PSP, and the WSA–ENLIL simulation
results at PSP. From PSP in situ measurements, there are two shocks and two
ICME structures observed at PSP. The first shock passes PSP around 23:05 UT on
November 29, which may be associated with the small CME event on November 26
propagating toward PSP. The second shock is the one of interest, which is
observed at PSP around 18:35 UT on November 30, followed by a long–duration
sheath region and an ICME structure. The long–duration sheath region is from
18:35 UT on November 30 to 02:23 UT on December 1 lasting about 8 hours. This
indicates that PSP likely observes the flank of the CME, because the standoff
distance between the shock front and the CME flux rope increases from the nose
to the flank (see Figure 4). The propagation direction of the CME in latitude
and the location of the source region also imply that the northern flank of
the CME would arrive at PSP and STEREO A. The ICME interval is from 02:23 UT
to 11:40 UT on December 1, which is determined by the bidirectional electron
pitch angle distribution (PAD) and the magnetic field profiles, such as the
smooth and strong magnetic field and the coherent rotation of the field.
Although the resolution of the electron PAD is low, there are some indications
of the bidirectional streaming electron (BDE) signatures. We can see the
mainly northward $B_{N}$ component in the ICME implying a large tilt angle of
the CME flux rope. This is consistent with the results obtained from the GCS
model. The WSA–ENLIL simulation results are plotted as red dashed lines.
Because there are no plasma data from PSP during this period, we cannot
compare the simulation results with the in situ plasma measurements. The
shock arrival time predicted by the WSA–ENLIL model is consistent with the
observed shock arrival time. Therefore, we use the WSA–ENLIL simulation
results to estimate the shock speed at PSP, which is about 850 km s-1. The
simulated magnetic field is much lower than the observed, because in the
WSA–ENLIL model, the inserted CME has no internal magnetic field and carries
the field from the ambient solar wind. From the in situ measurements at PSP,
the second shock has not propagated into the first ICME yet.
The in situ measurements from STEREO A associated with the 2020 November 29
CME are presented in Figure 10, with the simulation results at STEREO A shown
as red dashed lines. A shock is observed at STEREO A around 07:20 UT on
December 1, which is $\sim$5 hours later than the predicted arrival time from
the simulation. The shock speed predicted by the WSA–ENLIL model is about 850
km s-1, which is larger than the observed shock speed at STEREO A (about 700
km s-1). The shock is followed by a sheath region lasting about 4 hours, and
the ICME interval is determined by the electron PAD and magnetic field
profiles. The magnetic field strength of the ICME from STEREO A shows a
declining profile similar to the observations from PSP. However, the magnetic
field components are more dynamic than those observed by PSP.
If we only consider the location of the source region and the propagation
direction of the CME/shock, the CME and shock may not arrive at the Earth,
because the Earth is about 90° away from the propagation direction of the
CME/shock. Figure 11 shows the Wind in situ measurements probably associated
with the CME, which do not resemble a typical ICME (e.g., Zurbuchen &
Richardson 2006). Given the large expansion of the CME and shock near the Sun,
and the simulation result that the CME may arrive at the Earth (see Figure
8), we infer that the structure observed at Wind may be a part of the CME. The
lack of typical ICME signatures (e.g., no clear BDE signature) probably
suggests that the far flank of the CME (or the CME leg) arrives at the Earth.
According to the proton temperature and magnetic field strength profiles, we
give an approximate interval of the ICME, as indicated by two vertical dashed
lines. The density profile from the simulation generally agrees with the in
situ measurements, although the WSA–ENLIL simulation overestimates the density
by about a factor of 5. The CME arrival time (07:33 UT on December 2) predicted by the
WSA–ENLIL model is also in good agreement with the measurements. These
indicate that the CME indeed arrives at the Earth. According to the CME and
shock structures in the coronagraph images and the simulation results, we
suggest that the far flank of the CME (or the CME leg) arrives at the Earth
due to the vast expansion of the CME. However, there is no shock signature in
the measurements. This supports the perspective of Liu et al. (2017) that, at
some point, the shock would become just a wave and lose its nonlinear
steepening character, especially near the wake.
## 5 Summary and conclusions
We have investigated the source eruption, propagation and expansion
characteristics, and heliospheric impacts of the 2020 November 29 CME/shock,
combining multi–perspective remote sensing observations and multipoint in situ
measurements. EUV observations together with PFSS modeling are used to examine
the source eruption signatures and the coronal magnetic field configuration.
The structures and kinematics of the CME and associated shock are analyzed
using the forward models and remote sensing observations from multiple
spacecraft. The WSA–ENLIL simulation and the multipoint in situ measurements
are used to study the heliospheric impacts of the CME/shock. Key results are
obtained concerning the source eruption, interplanetary propagation, and
heliospheric impacts of the CME and associated shock.
The source region is beneath toroidal closed magnetic field arches, which are
surrounded by open magnetic field lines of the same polarity. The CME eruption
is associated with a significant dimming and an EUV wave sweeping a large area
of the solar disk. This indicates a large CME eruption associated with a
shock, and implies a rapid expansion of the CME/shock. This is a limb event as
viewed from the Earth, and the CME flux rope can be seen in AIA 131 Å images,
whose axis has a large tilt angle. Therefore, we can track the flux rope at
its early stage using EUV observations.
Due to the large expansion of the CME flux rope and its fast acceleration, the
shock shows an ellipsoidal structure, expands in all directions, and encloses
the whole Sun as viewed from both SOHO and STEREO A. We use the GCS model and
the ellipsoid model to fit the CME and the shock, respectively. The tilt angle
and propagation direction of the CME flux rope obtained from the GCS model are
consistent with the EUV observations. The CME flux rope expands and
accelerates rapidly at its initial stage, which produces the ellipsoidal
structure of the shock. The ellipsoid model fitting results confirm that the
shock expands in all directions and encloses the whole Sun as viewed from both
SOHO and STEREO A, which helps to explain the detection of SEPs at Solar
Orbiter, located behind the eruption site. The structure and kinematics of a shock
can be used to study the relationship between SEPs arriving at different
locations and their sources (e.g., Rouillard et al. 2011; Lario et al. 2017;
Zhu et al. 2021). Kollhoff et al. (2021) studied the 2020 November 29
widespread SEP event, but did not give a detailed analysis of the CME and
associated shock kinematics. The shock ellipsoid fitting results (see Table 1)
in our paper therefore complement that study and provide key parameters for
studies of SEP acceleration.
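As an illustration of how the fitted parameters in Table 1 can feed such studies, the following minimal Python sketch recovers approximate shock-nose speeds by finite-differencing the fits (the nose distance is taken as the ellipsoid center height $h$ plus the semiaxis $c$, per the ellipsoid geometry; the subset of epochs below is copied from the GOES/SOHO column of Table 1):

```python
from datetime import datetime

R_SUN_KM = 6.957e5  # solar radius in km

# (time, h, c) rows from the GOES/SOHO column of Table 1 (subset)
fits = [
    ("2020-11-29 13:23:38", 2.73, 2.38),
    ("2020-11-29 13:39:54", 3.16, 4.77),
    ("2020-11-29 13:51:55", 3.52, 6.51),
    ("2020-11-29 14:03:54", 3.83, 8.21),
]

def nose_speeds(rows):
    """Forward-difference radial speeds (km/s) of the shock nose, h + c."""
    out = []
    for (t1, h1, c1), (t2, h2, c2) in zip(rows, rows[1:]):
        dt = (datetime.fromisoformat(t2)
              - datetime.fromisoformat(t1)).total_seconds()
        dr = ((h2 + c2) - (h1 + c1)) * R_SUN_KM
        out.append(dr / dt)
    return out
```

Over these epochs the differences give nose speeds of roughly 2000 km/s, consistent with a fast CME-driven shock near the Sun; a more careful treatment would smooth or fit the distance–time profile rather than difference raw points.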
The structure and potential impacts of the shock are mainly determined by its
radial and lateral expansions. We track the CME by producing time–elongation
maps (J-maps) with projection effects minimized, as the propagation direction
is almost perpendicular to the Sun–Earth line. The CME/shock kinematics
derived from different methods agree with each other. The CME and shock nose
accelerate rapidly to maximum speeds within only 1 hour below $\sim$6 $R_{\sun}$,
and then decelerate, which is a typical speed profile for fast CMEs
(Liu et al., 2013). The shock expansion speeds are much larger than the
translation speed, so the structure and potential impacts of the shock are
indeed mainly determined by its radial and lateral expansions rather than by
its bulk motion. The
liftoff of the CME flux rope is almost simultaneous with the rise of the X–ray
flux, and the speeds of the CME and shock all increase during the flare rising
phase and peak after the X–ray flux maximum.
The type III and type II radio bursts associated with the eruption are
observed at PSP, STEREO A, and Wind. All three spacecraft observe an intense
type III radio burst, which indicates a major CME on the Sun, and a
short–duration type II radio burst in the initial phase. We convert the shock
nose distances obtained from the ellipsoid model to frequencies using the
Leblanc density model with an electron density of 15 cm$^{-3}$ at 1 AU. The shock
kinematics seem consistent with the frequency drift of the associated type II
burst.
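For concreteness, this distance-to-frequency conversion can be sketched as follows (a minimal Python version; the Leblanc et al. (1998) model coefficients are standard, while the rescaling to 15 cm$^{-3}$ at 1 AU follows the text and the example distance is illustrative):

```python
import math

R_1AU_RSUN = 215.0  # 1 AU in solar radii (approximate)

def leblanc_density(r_rsun, n_1au=15.0):
    """Electron density (cm^-3) from the Leblanc et al. (1998) model,
    rescaled so that the density at 1 AU equals n_1au."""
    base = lambda r: 3.3e5 * r**-2 + 4.1e6 * r**-4 + 8.0e7 * r**-6
    return base(r_rsun) * n_1au / base(R_1AU_RSUN)

def plasma_frequency_mhz(n_e_cm3):
    """Fundamental plasma frequency: f_p [kHz] ~ 8.98 * sqrt(n_e [cm^-3])."""
    return 8.98e-3 * math.sqrt(n_e_cm3)

# Example: a shock nose distance of 10 R_sun maps to sub-MHz emission
f_type2 = plasma_frequency_mhz(leblanc_density(10.0))
```

Evaluating this along the fitted nose distances yields the red crosses overplotted on the dynamic spectra in Figure 7 (harmonic emission would simply double the frequency).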
According to the multipoint in situ measurements and simulation results, the
CME and shock arrive at PSP and STEREO A, while the Earth observes the far
flank of the CME (or the CME leg) due to the large expansion of the CME. The shock
is observed at PSP around 18:35 UT on November 30, followed by a long–duration
sheath region and an ICME structure. The ICME has a mainly northward $B_{N}$
component implying a large tilt angle of the CME flux rope, which is
consistent with the results obtained from the GCS model. The shock passes
STEREO A around 07:20 UT on December 1. The magnetic field strength of the
ICME from STEREO A shows a declining profile similar to the observations from
PSP, but the magnetic field components are more dynamic than those observed by
PSP. The Earth is about 90° away from the CME/shock nose, so judging only from
the location of the source region and the propagation direction of the CME and
shock, they might be expected not to arrive at the Earth. Moreover, it is
difficult to predict whether and when the CME/shock would arrive at the Earth
based only on remote sensing observations. Combining the in situ measurements from Wind and
WSA–ENLIL simulation results, we finally find that, because of the large
expansion of the CME, the far flank of the CME (or the CME leg) arrives at the
Earth with no shock signature. This is surprising because the shock is
generally thought to be wider than the CME. The shock may have already decayed
before reaching Wind along this far flank direction. These results highlight
that multispacecraft remote sensing and in situ observations are important for
determining the heliospheric impacts of CMEs.
The research was supported by NSFC under grant 41774179, Beijing Municipal
Science and Technology Commission (Z191100004319003), the Specialized Research
Fund for State Key Laboratories of China, and the Strategic Priority Research
Program of Chinese Academy of Sciences (XDA15018500). We acknowledge the use
of data from Parker Solar Probe, STEREO, SDO, SOHO, GOES, and Wind. The
WSA–ENLIL simulations are provided by CCMC through their public Runs on
Request system (http://ccmc.gsfc.nasa.gov).
Figure 1: Positions of the spacecraft and planets in the ecliptic plane on
2020 November 29. The labels “A”, “B”, and “SO” denote STEREO A, STEREO B, and
Solar Orbiter, respectively. The black circle and red ellipse
mark the orbits of the Earth and PSP respectively. The dotted curves show
Parker spiral magnetic fields created with a solar wind speed of 450 km s$^{-1}$.
The red arrow indicates the direction of the source region in the ecliptic
plane.
Figure 2: Magnetic field configuration and EUV observations of the source
region. (a) PFSS modeled coronal magnetic fields surrounding the source region
mapped onto the EUVI 195 Å image at 13:14 UT on 2020 November 29. The blue
lines represent the closed field lines, and the green lines are the open
magnetic field lines. The white arrow marks the source region. (b)-(c)
Running–difference images of STEREO A/EUVI at 195 Å showing the low coronal
signatures associated with the CME. (d) Hot channel at 131 Å from SDO/AIA. The
white line indicates a slice along the radial direction to create a
distance–time diagram for the hot channel. (e)-(f) Running–difference images
of SDO/AIA at 193 Å showing the plasma loop and the EUV wave from the
viewpoint of the Earth. The red arrow marks the EUV wave caused by the
eruption.
Figure 3: Running–difference images from GOES 16, STEREO A, and SOHO with
corresponding modeling of the CME and shock. Panels (a)-(c) show the same
shock modeling result for 13:15 UT. Panels (d)-(i) display the
running–difference coronagraph images and corresponding GCS modeling (green)
and ellipsoid shock modeling (red).
Figure 4: Geometry of the CME (blue lines) and the shock (red lines) at 15:18
UT on 2020 November 29. The blue and red arrows indicate the propagation
directions of the CME and shock, respectively. The directions of the Earth,
STEREO A, PSP, and Solar Orbiter are marked by the black arrows. The center of
the shock ellipsoid (O) and its distances from the center of the Sun (d), from
the nose of the shock (c) and from the flank perpendicular to the propagation
direction (b) are also indicated.
Figure 5: Left: distance–time map of the hot channel by stacking SDO/AIA 131 Å
running–difference images along the white line in panel (d) of Figure 2.
The red curve indicates the track of the hot channel. Right: time–elongation
map constructed from running–difference images of C2 and C3 from SOHO/LASCO
along the ecliptic plane. The red curve marks the track of the CME leading
edge, along which the elongation angles are extracted.
Figure 6: Kinematics of the CME and shock derived from different methods. Top:
the distances of the CME derived from EUV observations, the GCS model, and the
HM approximation, and the height of the shock nose derived from the ellipsoid
model. Middle: speeds of the CME and shock nose derived from the numerical
differentiation of the distances in the top panel. The GOES 1–8 Å X–ray flux
is overplotted in the panel in arbitrary units. The horizontal dashed line
represents the shock speed of $\sim$700 km s$^{-1}$ at STEREO A. Bottom: expansion
speeds of the shock along the radial and lateral directions and translation
speed of the shock center. The distances and speeds of the CME front are
binned to reduce the uncertainties, so the CME parameters are average values
and standard deviations within the bins. Following Kwon et al. (2014), the
uncertainty of the ellipsoid model parameters is estimated to be 8%.
Figure 7: Radio dynamic spectra associated with the 2020 November 29 eruption
from PSP, STEREO A, and Wind. The white arrows mark the type II radio burst
observed by the spacecraft. The shock nose distances derived from the
ellipsoid model are converted to frequencies by using the Leblanc density
model with an electron density of 15 cm$^{-3}$ at 1 AU. The frequencies are
overplotted on the spectra as red crosses. The white curve in the middle panel
is the GOES X–ray flux scaled in arbitrary units. The vertical dashed lines
represent the shock arrival times from in situ measurements at PSP and STEREO
A. The areas with radio bursts are expanded and plotted over each image for
clarity.
Figure 8: Density and radial speed distributions from the WSA–ENLIL simulation
in the ecliptic plane at 01:01:06 UT on 2020 December 1 (top) and 07:00:21 UT
on 2020 December 2 (bottom). The values of the density and speed are scaled by
the color bar at the right corner of each panel. The positions of the Earth,
STEREO A, and PSP are marked by the circle, triangle, and square,
respectively. The first time indicates the moment when the CME has just
passed PSP and would arrive at STEREO A. The second time is close to the CME
arrival time at the Earth.
Figure 9: In situ solar wind measurements and simulation results at PSP. From
top to bottom, the panels show the normalized 314 eV electron PAD, proton
density, bulk speed, proton temperature, magnetic field strength, and
components, respectively. Note that there are no plasma data from PSP during
the time of interest. The red dashed lines represent the simulation results.
The blue vertical dashed lines indicate the observed arrival times of the
shocks marked with “S0” and “S”, respectively. The shaded regions indicate the
intervals of two ICMEs.
Figure 10: In situ solar wind measurements and simulation results at STEREO A.
From top to bottom, the panels show the normalized 93–247 eV electron PAD,
proton density, bulk speed, proton temperature, magnetic field strength, and
components, respectively. The red dashed lines represent the simulation
results, and the density from the simulation is divided by a factor of 10. The
dotted line in the fourth panel denotes the expected proton temperature
calculated from the observed speed (Lopez, 1987). The blue vertical dashed
line indicates the observed arrival time of the shock marked with “S”, and the
shaded region indicates the interval of the ICME.
Figure 11: In situ solar wind measurements and simulation results at Wind.
From top to bottom, the panels show the normalized 265 eV electron PAD, proton
density, bulk speed, proton temperature, magnetic field strength, and
components, respectively. The red dashed lines represent the simulation
results, and the density from the simulation is divided by a factor of 5. The
dotted line in the fourth panel denotes the expected proton temperature
calculated from the observed speed (Lopez, 1987). The black vertical dashed
lines indicate the interval of the ICME–like structure.
Table 1: The shock ellipsoid fitting parameters and corresponding times.
Time (UT) STEREO A | Time (UT) GOES/SOHO | $\theta$ (°) | $\phi$ (°) | $h$ ($R_{\sun}$) | $a$ ($R_{\sun}$) | $c$ ($R_{\sun}$)
---|---|---|---|---|---|---
EUVI 12:55:30 | SUVI 12:56:37 | -95 | -18 | 1.31 | 0.51 | 0.51
COR1 13:10:18 | SUVI 13:10:37 | -90 | -15 | 1.79 | 1.27 | 1.06
COR1 13:15:18 | SUVI 13:15:27 | -90 | -15 | 2.00 | 1.52 | 1.35
COR1 13:20:18 | SUVI 13:20:37 | -85 | -15 | 2.34 | 2.12 | 1.83
COR2 13:24:00 | C2 13:23:38 | -80 | -15 | 2.73 | 3.10 | 2.38
COR2 13:39:00 | C3 13:39:54 | -80 | -23 | 3.16 | 5.49 | 4.77
COR2 13:54:00 | C3 13:51:55 | -80 | -25 | 3.52 | 7.65 | 6.51
– | C3 14:03:54 | -80 | -25 | 3.83 | 9.43 | 8.21
– | C3 14:15:38 | -80 | -27 | 4.10 | 11.34 | 9.84
COR2 14:24:00 | C3 14:27:39 | -80 | -30 | 4.41 | 13.10 | 11.53
COR2 14:39:00 | C3 14:39:38 | -80 | -30 | 4.68 | 14.85 | 13.16
– | C3 15:07:59 | -80 | -33 | 5.49 | 18.53 | 16.90
– | C3 15:18:34 | -80 | -35 | 5.71 | 19.88 | 18.24
Note. — $\theta$, $\phi$ and $h$ are the longitude, latitude, and height of
the center of the ellipsoid, respectively. $a$ and $c$ are the lengths of the
two semiprincipal axes. Because we assume the cross section of the shock
ellipsoid perpendicular to the propagation direction to be a circle, there are
only 5 free parameters in our fitting.
## References
* Arge et al. (2004) Arge, C. N., Luhmann, J. G., Odstrcil, D., Schrijver, C. J., & Li, Y. 2004, Journal of Atmospheric and Solar-Terrestrial Physics, 66, 1295, doi: 10.1016/j.jastp.2004.03.018
* Bale et al. (2016) Bale, S. D., Goetz, K., Harvey, P. R., et al. 2016, Space Sci. Rev., 204, 49, doi: 10.1007/s11214-016-0244-5
* Biondo et al. (2021) Biondo, R., Pagano, P., Reale, F., & Bemporad, A. 2021, A&A, 654, L3, doi: 10.1051/0004-6361/202141892
* Cheng et al. (2013) Cheng, X., Zhang, J., Ding, M. D., Liu, Y., & Poomvises, W. 2013, ApJ, 763, 43, doi: 10.1088/0004-637X/763/1/43
* Cheng et al. (2012) Cheng, X., Zhang, J., Olmedo, O., et al. 2012, ApJ, 745, L5, doi: 10.1088/2041-8205/745/1/L5
* Cheng et al. (2014) Cheng, X., Ding, M. D., Guo, Y., et al. 2014, ApJ, 780, 28, doi: 10.1088/0004-637X/780/1/28
* de Lucas et al. (2011) de Lucas, A., Schwenn, R., dal Lago, A., Marsch, E., & Clúa de Gonzalez, A. L. 2011, Journal of Atmospheric and Solar-Terrestrial Physics, 73, 1281, doi: 10.1016/j.jastp.2010.12.011
* Domingo et al. (1995) Domingo, V., Fleck, B., & Poland, A. I. 1995, Sol. Phys., 162, 1, doi: 10.1007/BF00733425
* Fox et al. (2016) Fox, N. J., Velli, M. C., Bale, S. D., et al. 2016, Space Sci. Rev., 204, 7, doi: 10.1007/s11214-015-0211-6
* Hess & Zhang (2014) Hess, P., & Zhang, J. 2014, ApJ, 792, 49, doi: 10.1088/0004-637X/792/1/49
* Howard et al. (2008) Howard, R. A., Moses, J. D., Vourlidas, A., et al. 2008, Space Sci. Rev., 136, 67, doi: 10.1007/s11214-008-9341-4
* Hu et al. (2016) Hu, H., Liu, Y. D., Wang, R., Möstl, C., & Yang, Z. 2016, ApJ, 829, 97, doi: 10.3847/0004-637X/829/2/97
* Hu et al. (2017) Hu, H., Liu, Y. D., Wang, R., et al. 2017, ApJ, 840, 76, doi: 10.3847/1538-4357/aa6d54
* Kaiser et al. (2008) Kaiser, M. L., Kucera, T. A., Davila, J. M., et al. 2008, Space Sci. Rev., 136, 5, doi: 10.1007/s11214-007-9277-0
* Kollhoff et al. (2021) Kollhoff, A., Kouloumvakos, A., Lario, D., et al. 2021, A&A, 656, A20, doi: 10.1051/0004-6361/202140937
* Kwon et al. (2014) Kwon, R.-Y., Zhang, J., & Olmedo, O. 2014, ApJ, 794, 148, doi: 10.1088/0004-637X/794/2/148
* Kwon et al. (2015) Kwon, R.-Y., Zhang, J., & Vourlidas, A. 2015, ApJ, 799, L29, doi: 10.1088/2041-8205/799/2/L29
* Lario et al. (2017) Lario, D., Kwon, R. Y., Richardson, I. G., et al. 2017, ApJ, 838
* Lario et al. (2021) Lario, D., Richardson, I. G., Palmerio, E., et al. 2021, ApJ, 920, 123, doi: 10.3847/1538-4357/ac157f
* Leblanc et al. (1998) Leblanc, Y., Dulk, G. A., & Bougeret, J.-L. 1998, Sol. Phys., 183, 165, doi: 10.1023/A:1005049730506
* Lemen et al. (2012) Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, Sol. Phys., 275, 17, doi: 10.1007/s11207-011-9776-8
* Liu et al. (2010a) Liu, Y., Davies, J. A., Luhmann, J. G., et al. 2010a, ApJ, 710, L82, doi: 10.1088/2041-8205/710/1/L82
* Liu et al. (2011) Liu, Y., Luhmann, J. G., Bale, S. D., & Lin, R. P. 2011, ApJ, 734, 84, doi: 10.1088/0004-637X/734/2/84
* Liu et al. (2010b) Liu, Y., Thernisien, A., Luhmann, J. G., et al. 2010b, ApJ, 722, 1762, doi: 10.1088/0004-637X/722/2/1762
* Liu et al. (2008) Liu, Y., Luhmann, J. G., Müller-Mellin, R., et al. 2008, ApJ, 689, 563, doi: 10.1086/592031
* Liu et al. (2016) Liu, Y. D., Hu, H., Wang, C., et al. 2016, ApJS, 222, 23, doi: 10.3847/0067-0049/222/2/23
* Liu et al. (2017) Liu, Y. D., Hu, H., Zhu, B., Luhmann, J. G., & Vourlidas, A. 2017, ApJ, 834, 158, doi: 10.3847/1538-4357/834/2/158
* Liu et al. (2013) Liu, Y. D., Luhmann, J. G., Lugaz, N., et al. 2013, ApJ, 769, 45, doi: 10.1088/0004-637X/769/1/45
* Liu et al. (2014a) Liu, Y. D., Yang, Z., Wang, R., et al. 2014a, ApJ, 793, L41, doi: 10.1088/2041-8205/793/2/L41
* Liu et al. (2019) Liu, Y. D., Zhu, B., & Zhao, X. 2019, ApJ, 871, 8, doi: 10.3847/1538-4357/aaf425
* Liu et al. (2012) Liu, Y. D., Luhmann, J. G., Möstl, C., et al. 2012, ApJ, 746, L15, doi: 10.1088/2041-8205/746/2/L15
* Liu et al. (2014b) Liu, Y. D., Luhmann, J. G., Kajdič, P., et al. 2014b, Nature Communications, 5, 3481, doi: 10.1038/ncomms4481
* Lopez (1987) Lopez, R. E. 1987, J. Geophys. Res., 92, 11189, doi: 10.1029/JA092iA10p11189
* Lugaz et al. (2017) Lugaz, N., Temmer, M., Wang, Y., & Farrugia, C. J. 2017, Sol. Phys., 292, 64, doi: 10.1007/s11207-017-1091-6
* Lugaz et al. (2009) Lugaz, N., Vourlidas, A., & Roussev, I. I. 2009, Annales Geophysicae, 27, 3479, doi: 10.5194/angeo-27-3479-2009
* Manchester et al. (2017) Manchester, W., Kilpua, E. K. J., Liu, Y. D., et al. 2017, Space Sci. Rev., 212, 1159, doi: 10.1007/s11214-017-0394-0
* Marubashi et al. (2015) Marubashi, K., Akiyama, S., Yashiro, S., et al. 2015, Sol. Phys., 290, 1371, doi: 10.1007/s11207-015-0681-4
* Mishra et al. (2015) Mishra, W., Srivastava, N., & Chakrabarty, D. 2015, Sol. Phys., 290, 527, doi: 10.1007/s11207-014-0625-4
* Nieves-Chinchilla et al. (2012) Nieves-Chinchilla, T., Colaninno, R., Vourlidas, A., et al. 2012, Journal of Geophysical Research (Space Physics), 117, A06106, doi: 10.1029/2011JA017243
* Odstrcil et al. (2004) Odstrcil, D., Riley, P., & Zhao, X. P. 2004, Journal of Geophysical Research (Space Physics), 109, A02116, doi: 10.1029/2003JA010135
* Palmerio et al. (2021) Palmerio, E., Kilpua, E. K. J., Witasse, O., et al. 2021, Space Weather, 19, e2020SW002654, doi: 10.1029/2020SW002654
* Patsourakos & Vourlidas (2012) Patsourakos, S., & Vourlidas, A. 2012, Sol. Phys., 281, 187, doi: 10.1007/s11207-012-9988-6
* Pesnell et al. (2012) Pesnell, W. D., Thompson, B. J., & Chamberlin, P. C. 2012, Sol. Phys., 275, 3, doi: 10.1007/s11207-011-9841-3
* Reinard et al. (2012) Reinard, A. A., Lynch, B. J., & Mulligan, T. 2012, ApJ, 761, 175, doi: 10.1088/0004-637X/761/2/175
* Reiner et al. (2001) Reiner, M. J., Kaiser, M. L., & Bougeret, J. L. 2001, J. Geophys. Res., 106, 29989, doi: 10.1029/2000JA002228
* Rouillard et al. (2011) Rouillard, A. P., Odstřcil, D., Sheeley, N. R., et al. 2011, ApJ, 735, 7, doi: 10.1088/0004-637X/735/1/7
* Savani et al. (2015) Savani, N. P., Vourlidas, A., Szabo, A., et al. 2015, Space Weather, 13, 374, doi: 10.1002/2015SW001171
* Syed Ibrahim et al. (2019) Syed Ibrahim, M., Joshi, B., Cho, K. S., Kim, R. S., & Moon, Y. J. 2019, Sol. Phys., 294, 54, doi: 10.1007/s11207-019-1443-5
* Thernisien et al. (2009) Thernisien, A., Vourlidas, A., & Howard, R. A. 2009, Sol. Phys., 256, 111, doi: 10.1007/s11207-009-9346-5
* Thernisien et al. (2006) Thernisien, A. F. R., Howard, R. A., & Vourlidas, A. 2006, ApJ, 652, 763, doi: 10.1086/508254
* Webb et al. (2013) Webb, D. F., Möstl, C., Jackson, B. V., et al. 2013, Sol. Phys., 285, 317, doi: 10.1007/s11207-013-0260-5
* Xie et al. (2021) Xie, H., Gopalswamy, N., & Akiyama, S. 2021, ApJ, 922, 64, doi: 10.3847/1538-4357/ac23cc
* Zhang et al. (2012) Zhang, J., Cheng, X., & Ding, M.-D. 2012, Nature Communications, 3, 747, doi: 10.1038/ncomms1753
* Zhu et al. (2021) Zhu, B., Liu, Y. D., Kwon, R.-Y., et al. 2021, ApJ, 921, 26, doi: 10.3847/1538-4357/ac106b
* Zurbuchen & Richardson (2006) Zurbuchen, T. H., & Richardson, I. G. 2006, Space Sci. Rev., 123, 31, doi: 10.1007/s11214-006-9010-4
# Dark matter in compact stars
Joseph Bramante <EMAIL_ADDRESS>
Department of Physics, Engineering Physics, and Astronomy, Queen’s University, Kingston, Ontario, K7N 3N6, Canada
The Arthur B. McDonald Canadian Astroparticle Physics Research Institute, Kingston, Ontario, K7L 3N6, Canada
Perimeter Institute for Theoretical Physics, Waterloo, Ontario, N2L 2Y5, Canada
Nirmal Raj <EMAIL_ADDRESS>
Centre for High Energy Physics, Indian Institute of Science, C.V. Raman Avenue, Bengaluru 560012, India
###### Abstract
White dwarfs and neutron stars are far-reaching and multi-faceted laboratories
in the hunt for dark matter. We review detection prospects of wave-like,
particulate, macroscopic and black hole dark matter that make use of several
exceptional properties of compact stars, such as ultra-high densities, deep
fermion degeneracies, low temperatures, nucleon superfluidity, strong magnetic
fields, high rotational regularity, and significant gravitational wave
emissivity. Foundational topics first made explicit in this document include
the effect of the “propeller phase” on neutron star baryonic accretion, and
the contribution of Auger and Cooper pair breaking effects to neutron star
heating by dark matter capture.
###### Contents
1 Introduction
2 The physics of compact objects
  2.1 Fermi gas model and maximum masses
  2.2 Structure equations and equation of state
  2.3 Spin periods
  2.4 Neutron star substructure
  2.5 Thermonuclear explosions
  2.6 Cooling
    2.6.1 White dwarf cooling
    2.6.2 Neutron star cooling
    2.6.3 Comparison of white dwarf and neutron star late-stage cooling
  2.7 Nucleon superfluidity
  2.8 Neutron star magnetic field and spin-down
3 The white dwarf as a dark matter laboratory
  3.1 Dark matter annihilation inside and heating white dwarfs
  3.2 Non-annihilating dark matter converting white dwarfs into black holes
  3.3 White dwarf explosions via dark matter
  3.4 Dark matter’s influence on white dwarf equations of state
4 The neutron star as a dark matter laboratory
  4.1 Dark matter kinetic and annihilation heating of neutron stars
    4.1.1 Capture and kinetic heating
    4.1.2 Dark matter self-annihilations, nucleon co-annihilations, and induced nucleon decay
    4.1.3 Improvements and uncertainties
    4.1.4 Dark matter models that heat neutron stars through scattering and annihilation
    4.1.5 Neutron star reheating mechanisms not involving dark matter
  4.2 Neutron stars and halo substructure
  4.3 Dark matter inducing superbursts in neutron stars
  4.4 Dark matter that implodes neutron stars into black holes
    4.4.1 Dark matter thermalization in neutron stars
    4.4.2 Collapse of dark matter and formation of small black hole
    4.4.3 Growth or evaporation of dark matter-formed black hole in the neutron star
    4.4.4 Signatures of dark matter that implodes neutron stars
  4.5 Primordial black hole dark matter and neutron stars
  4.6 Neutron stars admixed with dark sectors
    4.6.1 Impact on nuclear equation of state
    4.6.2 More admixed neutron stars
  4.7 Exotic compact stars
  4.8 Dark sectors leading to internal heating of neutron stars
  4.9 Dark matter signals in gravitational waves from neutron star mergers
  4.10 Dark matter signals in pulsar timing
    4.10.1 Pulsar timing arrays
    4.10.2 Binary pulsar timing
    4.10.3 Pulsar spin-down
  4.11 Axion-like and very light dark matter, and neutron stars
5 Conclusions and perspective
Acknowledgments
## 1 Introduction
Dark matter is one of the foremost scientific mysteries of our times. Given
how little is known about its microphysical properties, its possible
identities seem limitless. This is famously encapsulated in the 90+ orders of
magnitude that dark matter (DM) masses could span, from $10^{-24}$ eV, set by
the maximum possible Compton wavelength containable within a dwarf galaxy, to
$10^{8}M_{\odot}\simeq 10^{74}$ eV, the mass of DM in a small galaxy. Over
this range of masses DM may be described as a wave/field, a particle, a
macroscopic object, or galactic substructure – including black holes and
topological defects. A promising strategy to confront such remarkable
diversity in possibility is to exploit physical systems with remarkable
diversity in characteristics.
Compact stars – white dwarfs (WDs) and neutron stars (NSs) typically formed as
relics of nuclear-powered stars – afford such an environment. Since their
quantum properties were first described in the 1920s–30s by Fowler [1],
Anderson [2], Stoner [3], Chandrasekhar [4], Zwicky and Baade [5] (and
possibly Landau [6]), our understanding of compact stars has been enriched at
the intersection of several branches of physics: astrophysics, general
relativity, particle physics, nuclear physics, statistical physics,
thermodynamics, and plasma physics. It is little wonder that they feature in
numerous tests of fundamental physics [7, 8], and it should come as no
surprise that they are also ideal laboratories to search for dark matter.
Indeed, DM hunters would do well to take advantage of their striking
properties: they have very high densities, with accompanying steep
gravitational potentials, sometimes deeply degenerate constituent fermions,
often very low temperatures, the presence of nucleon superfluidity, ultra-
strong magnetic fields, extreme regularity in rotation rivalling the precision
of atomic clocks, and powerful gravitational radiation emitted during binary
mergers, to name a few.
The use of stars to look for evidence of DM dates to proposals that weakly
interacting particle DM might alter nuclear reaction rates in the Sun [9, 10].
Shortly after, it was realized that NSs were useful for seeking out certain
models of DM that could form black holes in their interior [11]. One immediate
difference between a search for DM in compact stars and a terrestrial detector
is that, since DM is accelerated in the deep gravitational well of a compact
star, its interactions with stellar constituent particles occur at semi-
relativistic velocities: $\mathcal{O}(10^{-2}-10^{-1})~{}c$ for a WD and
$\mathcal{O}(0.5)~{}c$ for a NS. This high DM speed provides enhanced
sensitivity to theoretical models with velocity-suppressed rates for
scattering on Standard Model (SM) states, since in the Milky Way’s Galactic
halo (and by extension in terrestrial detectors) the velocity of DM particles
is only $\mathcal{O}(10^{-3})c$. In particular, the environs of a NS are
greatly suited to testing the origin of DM, since the kinetic energy of DM at
speeds $\sim 0.7c$ is similar to that during cosmological production,
particularly for “freeze-out” processes [12].
This review is organized as follows. In Section 2 we provide an overview of
the properties of NSs and WDs, emphasizing aspects that will be important for
dark matter searches. In Section 3, we describe WD searches for dark matter,
treating dark matter annihilation and heating of WDs, conversion of WDs to
black holes, ignition of Type Ia supernovae, and effects of dark matter on WD
equations of state. In Section 4, we describe NS searches for dark matter,
including dark matter heating NSs kinetically and via annihilations, models of
dark matter that convert NSs to black holes, exotic compact stars that
constitute dark matter, NSs admixed with dark matter, models of dark matter
that lead to internal heating of NSs, signals of dark matter in NS-related
gravitational waves and pulsar timing, and the utility of NSs in discovering
axion-like and primordial black hole dark matter. In Section 5, we briefly
discuss future research directions for dark matter in compact stars.
## 2 The physics of compact objects
A detailed account of the physical characteristics of WDs and NSs is beyond
the scope of this review, and for these we refer the reader to Refs. [13] and
[14]. Here we outline key properties of these stars that make them useful dark
matter detectors.
White dwarfs are stellar remnants formed from main sequence stars that undergo
a red giant phase but never become hot enough to fuse carbon. Depending on its
mass, a WD will be composed of some proportion of helium, carbon, oxygen,
neon, and magnesium, which make up the bulk of the mass. A sea of electrons
co-habiting with the nuclei provides, as we will see, the Fermi degeneracy
pressure that supports the WD against gravitational collapse.
Super-giant progenitors of mass around 10–25 $M_{\odot}$ that undergo core-
collapse leave behind short-lived “proto-NSs” through which neutrinos diffuse
out carrying away 99% of the star’s binding energy, following which NSs are
born. They are composed mainly of Fermi degenerate neutrons formed by
electron-proton capture, $e^{-}+p\rightarrow n+\nu_{e}$, at extreme densities
and temperatures. Due to beta chemical equilibrium, NSs are also thought to
contain populations of protons, electrons, and muons; it is in fact the filled
Fermi seas of these fermionic fields that keep neutrons from decaying to
protons, electrons, and muons inside NSs. The supernova collapse is
generically expected to be hydrodynamically asymmetric, resulting in a natal
“kick” to the NS at speeds of 450–1000 km/s in a random direction [15, 16, 17,
18, 19]; a 1% fractional anisotropy in the momenta of escaping neutrinos could
be another source of the asymmetric kick [20, 21, 22].
### 2.1 Fermi gas model and maximum masses
Compact stars, especially WDs, are prevented from collapsing under their own
gravity by Fermi degeneracy pressure. In a low-temperature Fermi gas of Fermi
momentum $p_{F}$, the number of fermions (of spin degeneracy $g_{s}=2$)
filling up a real volume $V$ and Fermi sphere volume $V_{F}=4\pi p_{F}^{3}/3$,
is $N_{f}=g_{s}VV_{F}/(2\pi)^{3}$, from which we obtain:
$p_{F}=(3\pi^{2}n)^{1/3}~{},$ (1)
where $n$ is the fermion number density. The total energy of the Fermi gas
given the energy of a state $e(p)$ is
$E=\frac{4\pi g_{s}V}{(2\pi)^{3}}\int_{0}^{p_{F}}dp\,p^{2}e(p)~{},$ (2)
and for an energy density $\varepsilon=E/V$ the pressure is obtained as
$P=-\bigg{(}\frac{\partial E}{\partial V}\bigg{)}_{N_{f}}=n^{2}\frac{{\rm
d}}{{\rm d}n}\bigg{(}\frac{\varepsilon}{n}\bigg{)}~{}.$ (3)
Setting $e(p)=m_{f}+p^{2}/(2m_{f})$ in the non-relativistic limit and $e(p)=p$
in the relativistic limit, and using Eqs. (1),(2) and (3), we get the Fermi
degeneracy pressure of a species as
$P=\begin{cases}[(3\pi^{2})^{2/3}/5m_{f}]\ n^{5/3}~{},&p_{F}\ll m_{f}~{},\\ [(3\pi^{2})^{1/3}/4]\ n^{4/3}~{},&p_{F}\gg m_{f}~{}.\end{cases}$ (4)
The net pressure of the compact star is the sum of the contributions of
constituent species. In WDs, the electrons are unbound – the electron-nucleus
Coulomb energy $\simeq Ze^{2}(n_{e}/Z)^{1/3}$ is $\mathcal{O}(10^{-2})$ times
the typical kinetic energy, $(3\pi^{2}n_{e})^{1/3}$ – and thus form their own
Fermi gas system. It may be seen from Eq. (4) that, due to their lightness and
abundance, it is the electrons that contribute the most to the pressure of
WDs. In contrast, in neutron stars the constituent neutrons contribute the
most to both the stellar mass and pressure.
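To make this concrete, a short numerical sketch (Python, natural units with $\hbar=c=1$; the fiducial density $10^{6}$ g cm$^{-3}$ and carbon–oxygen composition $Z/A\simeq 0.5$ are illustrative choices) evaluates the electron Fermi momentum of Eq. (1) in a typical WD:

```python
import math

HBARC_GEV_CM = 1.9733e-14  # hbar*c in GeV cm, to convert cm^-3 to GeV^3
M_E = 0.511e-3             # electron mass, GeV
M_U_G = 1.6605e-24         # atomic mass unit, g

def electron_pf_gev(rho_g_cm3, z_over_a=0.5):
    """Electron Fermi momentum from Eq. (1) at mass density rho
    (Z/A ~ 0.5 for a carbon-oxygen composition)."""
    n_cm3 = z_over_a * rho_g_cm3 / M_U_G   # electron number density, cm^-3
    n_nat = n_cm3 * HBARC_GEV_CM**3        # natural units, GeV^3
    return (3 * math.pi**2 * n_nat) ** (1 / 3)

# At a typical WD central density of ~1e6 g/cm^3, p_F ~ 0.4 MeV ~ m_e:
# the electrons are semi-relativistic, between the two limits of Eq. (4)
p_f = electron_pf_gev(1e6)
```

Since $p_F$ grows as $\rho^{1/3}$, the densest WDs push their electrons into the ultra-relativistic regime, which is the limit used for the maximum-mass estimate below.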
The total energy of the star in the non-relativistic limit is given by
$E^{\rm non-rel}_{\rm tot}\simeq({\rm total\ kinetic})-({\rm gravitational\ binding})=\frac{3}{5}N_{\rm f}\frac{p_{F}^{2}}{2m_{f}}-\frac{3}{5}\frac{GM_{\star}^{2}}{R_{\star}}=\bigg{(}\frac{27\sqrt{3}\pi}{40\sqrt{10}}\bigg{)}^{2/3}\frac{1}{m_{f}R_{\star}^{2}}\bigg{(}\frac{ZM_{\star}}{Am_{N}}\bigg{)}^{5/3}-\frac{3}{5}\frac{GM_{\star}^{2}}{R_{\star}}~{},$ (5)
and in the relativistic limit,
$E^{\rm rel}_{\rm tot}=\frac{3}{4}N_{\rm f}p_{F}-\frac{3}{5}\frac{GM_{\star}^{2}}{R_{\star}}=\bigg{(}\frac{243\pi}{256}\bigg{)}^{1/3}\frac{1}{R_{\star}}\bigg{(}\frac{ZM_{\star}}{Am_{N}}\bigg{)}^{4/3}-\frac{3}{5}\frac{GM_{\star}^{2}}{R_{\star}}~{}.$ (6)
Eq. (5) shows that, in the ground state of the compact star where the virial
theorem (potential energy = -2 $\times$ kinetic energy) applies, we have
$R_{\star}\propto M^{-1/3}_{\star}$ as a mass-radius relation111This shows
that more compact stars are generally denser, which we will see is relevant to
determining the speed of DM passing through and the density of DM collected in
compact stars.. This implies that WDs, modeled accurately as a Fermi gas
system, become smaller with increasing mass. Hence the heaviest WDs are the
densest, and thus one expects electrons in them to be ultra-relativistic and
Eq. (6) to apply. In Eq. (6) both the potential and kinetic terms fall as
$R_{\star}^{-1}$, however the former grows faster with $M_{\star}$ than the
latter, implying a maximum WD mass above which the star will collapse. This
“Chandrasekhar limit” [23] (see also Sec. 2.5) is given by
$M_{\rm max-WD-
rel}=\sqrt{\frac{5\pi}{G^{3}}}\frac{15}{16}\bigg{(}\frac{Z}{Am_{N}}\bigg{)}^{2}\simeq
1.7\ \bigg{(}\frac{2Z}{A}\bigg{)}^{2}\ M_{\odot}~{}.$ (7)
A similar limit may be obtained for NS masses by setting $A\rightarrow 1$,
$Z\rightarrow 1$:
$M_{\rm max-NS-rel}=6.8\ M_{\odot}~{}.$ (8)
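These prefactors are easy to reproduce numerically (a sketch in natural units, $\hbar=c=1$, using $\sqrt{5\pi/G^{3}}=\sqrt{5\pi}\,M_{\rm Pl}^{3}$; the constants are rounded):

```python
import math

M_PL = 1.221e19       # Planck mass in GeV
M_N = 0.939           # nucleon mass in GeV
MSUN_GEV = 1.116e57   # solar mass in GeV

def m_max(z_over_a):
    """Maximum mass from Eq. (7), in solar masses (natural units, G = M_Pl^-2)."""
    return (math.sqrt(5 * math.pi) * (15 / 16)
            * M_PL**3 * (z_over_a / M_N)**2 / MSUN_GEV)

print(m_max(0.5))  # ~1.7 Msun: Eq. (7) with 2Z/A = 1
print(m_max(1.0))  # ~6.8 Msun: Eq. (8) with Z = A = 1
```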
These estimates are not physically motivated: they assume relativistic
fermions constituting the entire volume of the star (true for neither WDs nor
NSs) and a non-interacting Fermi gas (not true for NSs). Nevertheless they
ballpark the true limit to within $\mathcal{O}(1)$ factors. A more precise
treatment must account for the stellar structure, which we will discuss below,
but first let us make two more estimates of the maximum mass of NSs.
(i) If we assume non-relativistic neutrons, the virial theorem using Eq. (5) gives a mass-radius relationship:
$R_{\rm NS}\simeq 12\ {\rm km}\ \bigg{(}\frac{M_{\odot}}{M_{\rm
NS}}\bigg{)}^{1/3}~{}.$ (9)
In this picture the NS radius is a decreasing function of its mass; however it
cannot become smaller than the Schwarzschild radius corresponding to a maximum
mass, $R_{\rm Schw}=3\ {\rm km}\ (M/M_{\odot})$. This condition gives
$M_{\rm max-NS-nonrel}=2.8\ M_{\odot}~{}.$ (10)
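The number in Eq. (10) follows from equating Eq. (9) with the Schwarzschild radius, $12\,M^{-1/3}=3\,M$, i.e., $M=4^{3/4}$. A one-line check (the helper names are illustrative):

```python
def r_ns(m):
    """Eq. (9): NS radius in km for mass m in Msun."""
    return 12.0 * m**(-1/3)

def r_schw(m):
    """Schwarzschild radius in km for mass m in Msun."""
    return 3.0 * m

m_star = 4**0.75   # solution of r_ns(m) = r_schw(m)
print(m_star)      # ~2.83 Msun, matching Eq. (10)
```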
(ii) Due to super-nuclear densities in the NS cores, strong interactions
cannot be neglected in considerations of NS structure. A maximum mass can be
obtained in the (unphysical) limit where such interactions solely support the
star against gravitational collapse [24]. Strong interactions become repulsive
at inter-nucleon distances roughly shorter than the reduced Compton wavelength
of the mediating pion, $m_{\pi}^{-1}$. This gives a maximum neutron density
$m_{N}m_{\pi}^{3}$, corresponding to a mass-radius relation of $M_{\rm
NS}=4\pi m_{N}m^{3}_{\pi}R^{3}_{\rm NS}/3$. For a surface escape speed $v_{\rm
esc}$, we have $R_{\rm NS}=R_{\rm Schw}v_{\rm esc}^{-1}=3~{}{\rm
km}~{}(M/M_{\odot})~{}v_{\rm esc}^{-1}$. Putting these together yields the
maximum NS mass as
$M_{\rm max-NS-strong}=\sqrt{\frac{3}{32\pi}}v^{3/2}_{\rm
esc}\bigg{(}\frac{M^{6}_{\rm Pl}}{m_{N}m_{\pi}^{3}}\bigg{)}^{1/2}\simeq
2~{}M_{\odot}\ \bigg{(}\frac{v_{\rm esc}}{0.5c}\bigg{)}^{3/2}~{}.$ (11)
As we will see below, this turns out to be an excellent estimate.
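Eq. (11) can likewise be evaluated numerically (a sketch in natural units with $M_{\rm Pl}=G^{-1/2}$ and rounded particle masses):

```python
import math

M_PL, M_N, M_PI = 1.221e19, 0.939, 0.140   # Planck, nucleon, pion masses in GeV
MSUN_GEV = 1.116e57                        # solar mass in GeV

def m_max_strong(v_esc):
    """Maximum NS mass from Eq. (11); v_esc in units of c."""
    return (math.sqrt(3 / (32 * math.pi)) * v_esc**1.5
            * (M_PL**6 / (M_N * M_PI**3))**0.5 / MSUN_GEV)

print(m_max_strong(0.5))  # ~2 Msun for v_esc = 0.5c
```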
As argued in, e.g., Ref. [25], a more accurate reason for the existence of a
maximum mass for NSs is that the sound speed $c_{s}$ in NS material cannot be
arbitrarily large. In particular, $c^{2}_{s}/c^{2}=(\partial
P/\partial\varepsilon)_{\bar{s}}\leq 1$ must be satisfied everywhere in the
NS, where $\bar{s}$ is the specific entropy. Physically, increments in the
self-gravitating energy density result in increments in equilibrium-restoring
pressure, however this trend cannot extend forever due to the sound speed
limitation, putting a cap on NS masses. This is also an important criterion in
modelling the equation of state (EoS) of high-density NS matter.
Briefly, we review the argument for a _minimum_ NS mass. For a given EoS of
the NS core fluid, as the central density (hence mass) of the NS is decreased,
the gravitational binding energy will decrease, and at some minimum density,
the NS will be unstable to small radial perturbations. This EoS-dependent
minimum NS mass is typically $\sim 0.1M_{\odot}$ [26, 27]. Such an NS would be
primarily composed of an $\mathcal{O}(100)$ km crust zone, with a percent-
level fraction of mass in the central degenerate neutron fluid [27]. Be that
as it may, a realistic minimum mass of NSs is about $1M_{\odot}$, after
neutrino pressure and other thermal effects during the formation of a NS in a
core collapse supernova are considered [31].222A compact object of mass $0.77^{+0.20}_{-0.17}~{}M_{\odot}$ has been observed in the supernova remnant HESS J1731$-$347 [28], exciting speculations as to its nature and the EoS of nuclear matter [29, 30].
### 2.2 Structure equations and equation of state
Detailed reviews of NS structure and the role of EoSs may be found in Refs.
[32, 33, 34], while we present here the essentials. Accurate estimates of
compact star macroscopic properties are best obtained by solving the
spherically symmetric stellar structure equations:
$\displaystyle\frac{dP}{dr}$ $\displaystyle=$
$\displaystyle-\frac{Gm\varepsilon}{c^{2}r^{2}}\bigg{(}1+\frac{P}{\varepsilon}\bigg{)}\bigg{(}1+\frac{4\pi
r^{3}P}{mc^{2}}\bigg{)}\bigg{(}1-\frac{2Gm}{c^{2}r}\bigg{)}^{-1}~{},$
$\displaystyle\frac{dm}{dr}$ $\displaystyle=$ $\displaystyle 4\pi\frac{\varepsilon}{c^{2}}r^{2}~{}.$ (12)
Here $m$ is the mass enclosed within a radial distance $r$, and all other
quantities are as defined above. The first equation, the Tolman-Oppenheimer-
Volkoff (TOV) equation, describes hydrostatic equilibrium, and the second
describes the conservation of mass in the star. Given an EoS $P(\varepsilon)$
and the boundary conditions $m(0)=0$, $\varepsilon(0)=\varepsilon_{c}$ (a
“central density”), the structure equations can be solved to obtain useful
quantities: a mass-radius relation (which would capture the maximum allowed
mass), radial profiles of pressure, energy or number density, chemical
potential, and so on.
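To illustrate how the structure equations are used in practice, here is a minimal Python sketch that integrates the Newtonian limit of Eq. (12) (the GR correction factors set to unity, appropriate for WDs) with the non-relativistic $\gamma=5/3$ electron-degeneracy EoS; the central density, step size, and simple Euler scheme are illustrative choices, not the treatment of the references:

```python
import math

G = 6.674e-11                                     # SI
hbar, m_e, m_u = 1.0546e-34, 9.109e-31, 1.6605e-27

# gamma = 5/3 polytrope from non-relativistic electron degeneracy, Z/A = 0.5
K = (3 * math.pi**2)**(2/3) * hbar**2 / (5 * m_e) * (0.5 / m_u)**(5/3)

def wd_structure(rho_c, dr=1e3):
    """Euler-integrate the Newtonian limit of Eq. (12) outward from the center."""
    r = dr
    m = 4 / 3 * math.pi * r**3 * rho_c            # mass of the innermost shell
    P = K * rho_c**(5/3)
    while P > 0:
        rho = (P / K)**0.6                        # invert the polytropic EoS
        dP = -G * m * rho / r**2 * dr             # hydrostatic equilibrium
        dm = 4 * math.pi * r**2 * rho * dr        # mass continuity
        P, m, r = P + dP, m + dm, r + dr
    return r, m                                   # stellar radius (m), mass (kg)

R, M = wd_structure(1e9)          # central density 1e6 g/cm^3
print(R / 1e3, M / 1.989e30)      # ~1.1e4 km and ~0.5 Msun
```

Raising the central density shrinks the star, recovering the inverse mass-radius trend of Eq. (5).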
Figure 1: Left. WD mass-radius relations derived from three different
equations of state for a 12C WD. The Chandrasekhar solution treats electrons
as a free gas. The Hamada and Salpeter equation of state incorporates Coulomb
corrections to the free gas approximation. The Relativistic FMT model
additionally accounts for relativistic corrections to the Wigner-Seitz cell (Coulomb-corrected) treatment of the equation of state [35]. Quantitatively
similar curves are obtained for 4He and 16O WDs. The point marked “$\beta$
instability” corresponds to where the central density is high enough that the
WD is unstable against electron capture by nuclei resulting in
$(Z,A)\to(Z-1,A)$ conversions. The point marked “GR instability” is where the
general relativistic corrections shift an otherwise infinite central density
to a finite value as the point at which the WD becomes gravitationally
unstable. Right. Same as the left panel, but for 56Fe WDs. These plots are
taken from Ref. [35]. Figure 2: NS mass-radius relations for various equations
of state for nuclear matter at high densities. The blue shaded region is
preferred by pulsar observations [33], the yellow region is a fit to the
observation of binary NS mergers using a hadronic EoS [36], the green region
is the 90% C.L. preferred region from an EoS-insensitive fit to GW170817 [37],
and the red regions are Bayesian fits at 90% C.L. from a combination of
gravitational wave and low-energy nuclear and astrophysical data [38]. The
horizontal thick-dashed lines depict the measured mass of the heaviest
observed pulsar MSP J0740+6620 [39]. The line-shaded bottom right region is
excluded by centrifugal mass loss, with the limit coming from observations of
the fastest-spinning (716 Hz) pulsar [40]. The line-shaded top left region is
excluded by the condition of causality: for any EoS the sound speed $c_{s}\leq
c$. The rectangular regions are simultaneous fits at 68% C.L. of NS mass and
radius by NICER (light brown by Refs. [41, 42] and dashed-green-enclosed by
Refs. [43, 44]). This plot is taken from Ref. [45].
A reliable estimate of WD properties may be gained by assuming a polytropic
EoS: $P(\varepsilon)=K\varepsilon^{\gamma}$. For WDs, one can set the second
and third terms to unity on the right-hand side of the first equation in Eq.
(12), as the $c$-dependent terms depict general relativistic corrections that
are only important for NSs. It is then straightforward to solve Eq. (12) for
polytropes [25, 46]. In particular, the cases of $\gamma=5/3$ and
$\gamma=4/3$, applicable respectively to the limit of non-relativistic and
relativistic electrons, result in the $M$-$R$ scaling relations we derived
from the virial theorem in Eqs. (5) and (6) with more refined numerical
factors. Notably, for the relativistic case we obtain the Chandrasekhar mass
as:
$M_{\rm Ch-WD}\simeq 1.4M_{\odot}~{}.$ (13)
Realistic EoSs are non-polytropes accounting for Coulomb corrections arising
from electron-ion interactions, e.g., the Feynman-Metropolis-Teller EoS [35].
Figure 1 shows representative $M$-$R$ relations for WDs of various nuclear
compositions, taken from Ref. [35]. A simple analytical fit to translate
between $\rho$ and WD masses $M_{\rm WD}\in[0.1,1.35]M_{\odot}$ is [47]
$\bigg{(}\frac{\rho_{\rm WD}}{1.95\times 10^{6}\ {\rm
g/cm}^{3}}\bigg{)}^{2/3}+1\approx\bigg{[}\sum_{i=0}^{6}c_{i}\bigg{(}\frac{M_{\rm
WD}}{M_{\odot}}\bigg{)}^{i}\bigg{]}^{-2}~{},$ (14)
with $\\{c_{i}\\}=\\{1.003,-0.309,-1.165,2.021,-2.060,1.169,-0.281\\}$.
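The fit of Eq. (14) inverts straightforwardly for the density; a direct transcription in Python:

```python
C_FIT = [1.003, -0.309, -1.165, 2.021, -2.060, 1.169, -0.281]

def wd_central_density(m_wd):
    """WD density in g/cm^3 from Eq. (14); m_wd in Msun, valid for 0.1-1.35."""
    s = sum(ci * m_wd**i for i, ci in enumerate(C_FIT))
    return 1.95e6 * (s**-2 - 1)**1.5

print(wd_central_density(1.0))  # ~3e7 g/cm^3 for a solar-mass WD
```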
In NSs the EoS of nuclear matter is non-trivial due to the non-perturbative
nature of QCD at the densities of the core. EoSs must account for nucleon-
nucleon interactions, far more uncertain than Coulomb interactions, and must
fit data on the per-nucleon binding energy in symmetric nuclear matter, the
so-called symmetry energy that accounts for the energy above the $N=Z$ ground
state, the nuclear compressibility, and much else. For these reasons a wide
range of EoSs has been proposed, resulting in multiple predictions for NS
configurations. Figure 2 displays $M$-$R$ curves obtained from a few popular
EoSs. The top left is a region where $c_{s}>c$ and hence causality is
violated; the NS mass for various EoSs is seen to reach a maximum close to
this region.
### 2.3 Spin periods
Celestial bodies have a maximum angular speed: the gravitational force on a
mass element on the equator must exceed the centrifugal force on it, giving a
minimum spin period
$P_{\rm min}\simeq\sqrt{\frac{3\pi}{G\rho}}=10^{4}~{}{\rm s}\sqrt{\frac{{\rm
g}/{\rm cm}^{3}}{\rho}}~{}.$ (15)
Thus for WDs with $\rho=\mathcal{O}(10^{6})$ g/cm3, $P_{\rm min}\simeq 10$ s,
and for NSs with $\rho=\mathcal{O}(10^{14})$ g/cm3, $P_{\rm min}\simeq
10^{-3}$ s. And indeed, the first pulsars historically espied were identified
as such by their gradually lengthening sub-second spin periods. Moreover, no
pulsars with spin periods smaller than $\mathcal{O}(\rm ms)$ have been
observed; those observed near this limit are called millisecond pulsars. The
bottom right region of Fig. 2 is excluded by the fastest spinning pulsar
observed with rotation frequency 716 Hz [40], a limit given by ($R$/10
km)${}^{3/2}\geq$ 1280 Hz $(M/1.5~{}M_{\odot})^{1/2}$ [32].
Figure 3: Top. Schematic of the internal structure of a NS, taken from Ref.
[48]. The layers of the crust are shown in the zoom. Bottom left. Density
profile of (various layers of) a NS crust. Right. Nucleon number as a function
of NS crust density. See Sec. 2.4 for further details.
### 2.4 Neutron star substructure
In the top panel of Figure 3 we show a schematic of the interior structure of
a NS. The physics of substructure is obtained by solving Eq. (12) with the
appropriate EoS for each stellar region. For illustration here we will make
use of the Brussels-Montreal “unified” equation of state (“BSk”) accounting
for all regions/densities in the NS, expressed in terms of analytic fits [49].
What follows is an overview of NS substructure; interested readers may gain
further details from Ref. [50] and the references listed in Ref. [48].
The crust, about $1\ \rm{km}$ thick, spans over 10 decades in density and
consists of several distinct layers corresponding to different phases of
nuclear matter. The bottom left panel of Figure 3 shows the density of
material as a function of the proper depth for the various crustal layers, and
the bottom right panel shows nucleon numbers of nuclei as a function of
densities spanning the entire crust; both plots were made using the EoS BsK21
[48]. These plots do not show the atmosphere (density $<10^{4}$ g/cm3,
thickness $\mathcal{O}(\mu$m), composed of hydrogen and lighter elements) and
ocean (density $<10^{10}$ g/cm3, thickness $\mathcal{O}(10)$ m, composed of
carbon and heavy metals); these layers affect the star’s thermal spectrum, and
are influenced by the star’s magnetic field.
The outer crust (density $10^{4}-10^{11}$ g/cm3) is composed of nuclei forming
a body-centered-cubic Coulomb crystal, interspersed with a degenerate and
nearly-free relativistic gas of electrons. À la WDs, electron degeneracy
contributes dominantly to the pressure, while nuclei contribute dominantly to
the mass. Deep in the crust, where the electron chemical potential is higher,
nuclei become increasingly neutron-rich due to inverse beta decay. The outer
crust terminates when the density and pressure become so high that free
neutron states begin to appear.
The transition to the inner crust occurs at the neutron drip line, at density $\rho_{\rm drip}\simeq 4.2\times{10^{11}\ \rm{g/cm^{3}}}$ [51], beyond which a fraction of neutrons becomes unbound from nuclei. Up to
densities about $0.1$ times the nuclear saturation density
$\rho_{0}\simeq{2\times{10^{14}}\ \rm{g/cm^{3}}}$, the inner crust is composed of heavy, neutron-rich nuclei (also known as proton clusters) forming a
lattice, along with both an electron gas and a dripped-neutron gas. Such a
system is inaccessible to terrestrial experiments, hence the composition of
the inner crust is far more uncertain than the outer crust, and studies of
this region are limited to theoretical calculations, e.g., the Compressible
Liquid Drop Model, the Thomas-Fermi approximation, and many-body quantum
calculations. As the NS cools down, the dripped neutrons are expected to form
a superfluid phase.
Further down, the inner crust density approaches the nuclear saturation point,
and homogeneous nuclear matter appears [52, 53]. This has led to the
prediction of the so-called nuclear “pasta” phase at the bottom of the inner
crust [54, 55, 56, 57, 58, 59]. Intricate competition between nuclear
attraction and Coulomb repulsion forms these extended non-spherical phases of
nuclear matter; as the density increases, gnocchi, then spaghetti, and then
lasagna pasta phases become more prevalent. In the deepest layer of the inner
crust there are “inverted pasta phases” where nuclear density material
predominates over sparser, sub-nuclear density voids. This includes bucatini
(anti-spaghetti) and Swiss cheese (anti-gnocchi) phases. Nuclear pasta is confined to a thin layer, yet these phases constitute a significant fraction of the crustal mass as they span densities of $0.1-1\ \rho_{0}$. They may also impact
several properties of the NS such as its thermal and electrical conductivity,
and the elasticity and neutrino opacity of the crust.
The inner crust terminates when the density reaches $\rho_{0}$, beyond which
nuclei “melt” into uniform nuclear matter, which forms the core of the NS. The
core is further sub-divided into the outer core (densities 0.5$-$2
$\rho_{0}$), where the nuclear matter is expected to be composed of neutrons,
protons, and electrons, and the inner core (densities 2$-$10 $\rho_{0}$),
where exotic states of matter may possibly be present. These could be meson
and hyperon condensates [60, 61, 62, 63]. These could also be deconfined quark
matter, which is possible when the bag constant is large in the QCD bag model
[63], either as $ud$ matter [64] or $uds$ matter [65, 66, 67, 68, 69, 70].
Deconfined quark matter is believed to be in a color superconducting (“CSC”)
phase [71, 72, 73], which could be crystalline [74]. If strange-quark pairing
is suppressed, a two-flavor superconducting (“2SC”) phase is formed involving
$u$ and $d$ quarks. If not, the $uds$ matter may exist in a color-flavor-locked (“CFL”) phase [75, 73] which may co-exist with confined states [76, 77,
78].
### 2.5 Thermonuclear explosions
Astrophysical situations may arise in which a WD exceeds its Chandrasekhar
mass (Eq. (7)). For carbon-oxygen WDs, this would lead to ignition of runaway
carbon fusion that unbinds the star. This is how Type Ia supernovae,
conventionally used as “standard candles” in cosmological distance
measurements, have been theorized to originate – via accreting material from a
binary companion and going super-Chandrasekhar. This picture, however, is
disputed by the lack of a specific “trigger” of the thermonuclear process
along with a number of other observational inconsistencies [79]. As will be
discussed later, other possible Type Ia progenitors include WD mergers and
pyconuclear reactions in sub-Chandrasekhar mass WDs.
Yet another setting in which thermonuclear chain reactions create an explosion
is in the ocean layer of NS crusts, and in particular the carbon component,
which could be ignited by mass accretion from a binary companion. For
accretion rates $>10\%$ of the Eddington limit, the result is “superbursts”,
x-ray bursts that spew $\mathcal{O}(10^{35})$ J of energy, lasting for hours,
and in some cases recurring about every year [80, 81, 82, 83]. These must be distinguished from regular Type-I bursts in NSs, which are typically ignited by surface accretion, emit $10^{3}$ times less energy, and last $10^{3}$ times shorter.
Ref. [84] provides extended discussion on the physics of thermonuclear runaway
fusion, while we provide here a brief summary. Two generic conditions must be
satisfied: (1) a minimum energy $Q_{\rm dep}$ must be deposited to raise the
temperature of a critical mass $M_{\rm crit}$ of density $\rho$ to a critical
temperature $T_{\rm crit}$ which can sustain fusion:
Condition 1 $\displaystyle Q_{\rm dep}\geq M_{\rm crit}(\rho,T_{\rm
crit})\bar{c}_{p}(\rho,T_{\rm crit})T_{\rm crit}~{}.$ (16)
The temperature prior to heating is here assumed $\ll T_{\rm crit}$, and
$\bar{c}_{p}\simeq c^{\rm e}_{p}/2+c^{\gamma}_{p}/4+c^{\rm ion}_{p}$ is the
average isobaric specific heat capacity, with
$c^{\ell}_{p}(\rho,T_{\rm
crit})=\frac{a_{\ell}b_{\ell}}{u}\bigg{(}\frac{T_{\rm crit}}{E_{\rm
F}}\bigg{)}^{\alpha_{\ell}}\bigg{[}1-\bigg{(}\frac{m_{e}}{E_{\rm
F}}\bigg{)}^{2}\bigg{]}^{\beta_{\ell}}~{}.$ (17)
Here $u$ is the atomic mass unit, $m_{e}$ the electron mass, and for the
{electronic, radiative, ionic} contributions,
$a_{\ell}=\\{\pi^{2},4\pi^{4}/5,5/2\\}$, $b_{\ell}=\\{\sum X_{i}Z_{i}/A_{i},\sum X_{i}Z_{i}/A_{i},\sum X_{i}/A_{i}\\}$ (with $X_{i}$, $Z_{i}$, $A_{i}$ the mass fraction, charge and atomic number of the ion species $i$ respectively), $\alpha_{\ell}=\\{1,3,0\\}$, and $\beta_{\ell}=\\{-1,-3/2,0\\}$. The Fermi energy $E_{\rm
F}=[m^{2}_{e}+(3\pi^{2}n_{e})^{2/3}]^{1/2}$ with $n_{e}=\rho b_{\rm e}/u$ (Eq.
(1)). The trigger energy in Eq. (16) ranges from $\mathcal{O}(10^{17})$ GeV to $\mathcal{O}(10^{24})$ GeV for WD central densities corresponding to WD masses ranging from 1.4 $M_{\odot}$ down to 0.8 $M_{\odot}$.
Eq. (16) is necessary but not sufficient for runaway fusion. There is a second
condition, through which the critical mass $M_{\rm crit}=4\pi\rho\lambda_{\rm
trig}^{3}/3$ is also defined. To wit, the rate of energy gain via nuclear
fusion must exceed the rate of energy loss via diffusion over the volume set
by the “trigger length” $\lambda_{\rm trig}$:
Condition 2 $\displaystyle\dot{Q}_{\rm nuc}>\dot{Q}_{\rm diff}~{}.$ (18)
Here we have $\dot{Q}_{\rm nuc}=M_{\rm crit}\dot{S}_{\rm nuc}$ and
$\dot{Q}_{\rm diff}\simeq 4\pi k\lambda_{\rm trig}T_{\rm crit}$ for a nuclear
energy deposition rate per mass $\dot{S}_{\rm nuc}$ and thermal conductivity
$k$. Conductive diffusion from relativistic electrons provides the dominant
source of diffusion in WDs at the temperatures and densities relevant for
igniting thermonuclear fusion; see Ref. [85, 86] for analytic expressions for
$\dot{Q}_{\rm diff}$.
The estimation of $\dot{S}_{\rm nuc}$ involves numerical simulations of flame
propagation with a nuclear reaction network [84]. From this,
$\lambda_{\rm trig}=\sqrt{\frac{3kT_{\rm crit}}{\rho\dot{S}_{\rm nuc}(\rho,T_{\rm crit})}}=\begin{cases}\lambda_{1}~{}(\frac{\rho}{\rho_{1}})^{-2}~{},~{}~{}\rho\leq\rho_{1}~{},\\\ \lambda_{1}~{}(\frac{\rho}{\rho_{1}})^{\ln(\lambda_{2}/\lambda_{1})/\ln(\rho_{2}/\rho_{1})}~{},~{}~{}\rho_{1}<\rho\leq\rho_{2}~{},\end{cases}$ (19)
where for WDs $\\{\lambda_{1}^{\rm WD},\lambda_{2}^{\rm WD}\\}=\\{1.3\times
10^{-4}~{}{\rm cm},2.5\times 10^{-5}~{}{\rm cm}\\}$ and
$\\{\rho_{1},\rho_{2}\\}=\\{2\times 10^{8}~{}{\rm g/cm}^{3},10^{10}~{}{\rm
g/cm}^{3}\\}$. This analytic form was obtained in Ref. [47] by fitting to
Figure 6 of Ref. [84] – that is restricted to $\rho_{1}\leq\rho\leq\rho_{2}$ –
and extrapolating to lower densities assuming plausible density-scalings of
$k$ and $\dot{S}_{\rm nuc}$. The fit is for $T_{\rm crit}$ = 0.5 MeV and
assumes equal carbon and oxygen masses in WDs. In the NS ocean, the mass
fraction of carbon is 10% [80], implying $\rho\to 0.1\rho$ in Eq. (19) if Eq.
(19) holds for pure carbon burning333It probably does, for the scalings of Eq.
(19) are seen to be similar to those in Table 3 of Ref. [84], for conductive
burning.. One could also fit a relation among the WD central density, critical
temperature and trigger mass [85]:
$T_{\rm crit}\gtrsim 10^{9.7}~{}{\rm K}\bigg{(}\frac{\rho}{10^{8}~{}{\rm
g/cm^{3}}}\bigg{)}^{3/140}\bigg{(}\frac{M_{\rm crit}}{{\rm
g}}\bigg{)}^{3/70}~{}.$ (20)
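Eq. (19) transcribes directly into code (cgs units); the exponent of the second branch is fixed by requiring the fit to pass through $(\rho_{1},\lambda_{1})$ and $(\rho_{2},\lambda_{2})$, so the two branches match continuously at $\rho_{1}$:

```python
import math

LAM1, LAM2 = 1.3e-4, 2.5e-5   # trigger lengths in cm (WD values)
RHO1, RHO2 = 2e8, 1e10        # densities in g/cm^3

def trigger_length(rho):
    """Trigger length (cm) as a function of density, Eq. (19)."""
    if rho <= RHO1:
        return LAM1 * (rho / RHO1)**-2   # extrapolated low-density branch
    return LAM1 * (rho / RHO1)**(math.log(LAM2 / LAM1) / math.log(RHO2 / RHO1))
```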
Figure 4: Cooling curves. Left. Luminosity versus time of WDs of various
masses, taken from Ref. [87]. The onset of crystallization at about $10^{8}$
yr takes cooling from the regime of thermal ions to the Debye regime. Right.
Surface temperature versus time of a benchmark WD and NS. Early cooling
dominated by emission of neutrinos is distinctly faster than that of photons.
See Sec. 2.6 for further details.
### 2.6 Cooling
As no nuclear fuel is burnt in compact stars, they cool down continually from
the moment of their birth unless energy is deposited into them by some means,
as discussed in Sections 3 and 4. Observations of compact star cooling are an
important handle on the physics governing their internal dynamics.
#### 2.6.1 White dwarf cooling.
WDs initially cool by shedding the thermal energy of constituent ions. Given
the specific heat per ion $c_{v}=3/2$, the total WD energy in thermal ions is
$U=\frac{3T}{2}\bigg{(}\frac{M_{\rm WD}}{Am_{N}}\bigg{)}~{}.$ (21)
The WD luminosity $L=-dU/dt$, and the cooling curve can be obtained from an
independent expression for the luminosity in terms of the WD internal
temperature $T_{\rm int}$:
$L=0.2\ {\rm J/s}\ \bigg{(}\frac{M_{\rm
WD}}{M_{\odot}}\bigg{)}\bigg{(}\frac{T_{\rm int}}{{\rm K}}\bigg{)}^{7/2}~{},$
(22)
derived from photon diffusion in the WD surface layers assuming Kramer’s
opacity, and combining it with the EoS; see Ref. [26] for a detailed
treatment. The cooling timescale is then obtained as
$t_{\rm cool}\simeq{\rm
Gyr}~{}\bigg{(}\frac{M/M_{\odot}}{L/(10^{-3}L_{\odot})}\bigg{)}^{5/7}~{}.$
(23)
Thus the cooling times are long enough to keep WDs from becoming invisibly
faint today, yet short enough to make them fainter than main-sequence stars.
The above relation only holds for WDs with $T_{\rm int}>T_{\rm Debye}\simeq
10^{7}~{}$K, the typical Debye temperature below which the ions crystallize.
For smaller temperatures, corresponding to $L\lesssim 10^{-4}L_{\odot}$, the
specific heat comes from the vibration of the crystal lattice as opposed to
thermal motion of the ions. Obtaining WD cooling times accounting for this
effect involves a non-trivial treatment [26] that is beyond our scope. In Fig.
4 left panel we show a luminosity-vs-time cooling curve indicating the point
at which crystallization effects become important. In the right panel we show
a temperature-vs-time curve for a benchmark WD of mass 0.67 $M_{\odot}$
corresponding to a 7000 km radius.
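The scaling of Eq. (23) is simple to evaluate (a sketch; inputs in solar units, and the quoted normalization is taken at face value):

```python
def t_cool_wd(m_over_msun, l_over_lsun):
    """WD cooling age in Gyr from the scaling of Eq. (23)."""
    return (m_over_msun / (l_over_lsun / 1e-3))**(5 / 7)

print(t_cool_wd(0.6, 1e-4))  # ~3.6 Gyr for a faint 0.6 Msun WD
```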
#### 2.6.2 Neutron star cooling.
NSs cool by emitting neutrinos (generated in weak processes) and photons; the
neutrino cooling rate is initially larger and hence dominates up to a
point, before photon cooling takes over. In describing the cooling of NSs,
where GR effects are significant, it is necessary to distinguish between the
temperature in the frame of the NS, $T$, and in the frame of a distant
observer, $\widetilde{T}$, related by
$\displaystyle\widetilde{T}$ $\displaystyle\equiv$ $\displaystyle T/(1+z)~{},$
$\displaystyle 1+z$ $\displaystyle=$ $\displaystyle\frac{1}{\sqrt{1-2GM_{\rm
NS}/R_{\rm NS}}}~{}.$ (24)
The temperature evolution during passive cooling is given by
$c_{\rm
v}(\widetilde{T})\frac{d\widetilde{T}}{dt}=-L_{\nu}^{\infty}(\widetilde{T})-L_{\gamma}^{\infty}(\widetilde{T})~{},$
(25)
where the neutrino luminosity of our benchmark NS, as measured by a distant observer, is zenzizenzizenzic (i.e., eighth power) in the NS temperature [88]:
$L_{\nu}^{\infty}(\widetilde{T})=1.33\times 10^{39}~{}{\rm
J/yr}~{}\bigg{(}\frac{\widetilde{T}}{10^{9}~{}{\rm K}}\bigg{)}^{8}~{},$ (26)
applicable for slow/modified Urca (“Murca”) processes such as $N+n\rightarrow
N+pe^{-}\bar{\nu}_{e}$ and $N+pe^{-}\rightarrow N+n\nu_{e}$ (with $N=n,p$),
the neutrino cooling mechanism as prescribed by the “minimal cooling” paradigm
[89]. In principle there could also be fast/direct Urca (“Durca”) processes
such as $n\rightarrow pe^{-}\bar{\nu}_{e}$ and $pe^{-}\rightarrow n\nu_{e}$
[90]. These processes dominate the NS cooling down to $\widetilde{T}=10^{8}$
K. It has also been suggested that cooling via $N\gamma\to N\nu\bar{\nu}$
induced by QCD anomaly-mediated interactions are comparable to Murca processes
in early-stage NS cooling [91, 92]. The luminosity of photon blackbody
emission from the NS surface is:
$L_{\gamma}^{\infty}(\widetilde{T}_{s})=4\pi(1+z)^{2}R_{\rm
NS}^{2}\widetilde{T}^{4}_{s}~{}.$ (27)
The NS heat capacity $c_{V}$ is given by [93]
$c_{V}(\widetilde{T})=4.8\times 10^{26}~{}{\rm J/K}~{}\bigg{(}\frac{\widetilde{T}}{10^{4}~{}{\rm K}}\bigg{)}=2.7\times 10^{-21}~{}M_{\odot}/{\rm K}~{}\bigg{(}\frac{\widetilde{T}}{10^{4}~{}{\rm K}}\bigg{)}~{}.$ (28)
Solving Eq. (25) requires a relation between the surface ($T_{s}$) and
internal ($T$) temperatures. Such a relation is highly sensitive to the
composition of the NS’ outermost envelope, which acts as an insulating layer
for temperatures $\gtrsim\mathcal{O}(10^{3})$ K, and becomes too thin for
insulation at smaller temperatures [90, 94]. For an iron envelope at high
temperatures [95, 96],
$T_{s}=10^{6}~{}{\rm K}\bigg{[}\bigg{(}\frac{M_{\rm
NS}}{1.5~{}M_{\odot}}\bigg{)}\cdot\bigg{(}\frac{10~{}{\rm km}}{R_{\rm
NS}}\bigg{)}^{2}\bigg{]}^{1/4}\bigg{[}\frac{T}{9.43\times 10^{7}~{}{\rm
K}}\bigg{]}^{0.55}~{}.$ (29)
One can then identify the thin-envelope regime by solving for $T_{s}=T$ in the
above equation, which gives $T_{\rm env}=3908$ K, below which one can simply
set $T_{s}=T$.
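The quoted $T_{\rm env}$ follows from solving $T_{s}=T$ in Eq. (29) for the benchmark $M_{\rm NS}=1.5~{}M_{\odot}$, $R_{\rm NS}=10$ km (a bisection sketch in Python; small differences from 3908 K reflect rounding of the constants):

```python
def t_surface(t_int, m=1.5, r=10.0):
    """Surface temperature (K) given internal temperature (K), Eq. (29)."""
    f = ((m / 1.5) * (10.0 / r)**2)**0.25
    return 1e6 * f * (t_int / 9.43e7)**0.55

# bisect for the fixed point T_s(T) = T
lo, hi = 1e3, 1e5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if t_surface(mid) > mid:
        lo = mid
    else:
        hi = mid
print(lo)  # the envelope-matching temperature, close to the quoted 3908 K
```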
The solution of Eq. (25) can now be written down as the time for the NS to
cool to a temperature $\widetilde{T}_{\rm cool}$ ($\ll$ the initial
temperature) [97]:
$t_{\rm cool}(\widetilde{T}_{9})/{\rm yr}=\begin{cases}t_{\rm
env}=s_{1}^{-k}q^{-\gamma}\big{[}\big{(}1+(s_{1}/q)^{k}\widetilde{T}_{9}^{2-n}\big{)}^{-\gamma/k}-1\big{]},\
\widetilde{T}_{\rm cool}>\widetilde{T}_{\rm env}~{},\\\ t_{\rm
env}+(3s_{2})^{-1}(\widetilde{T}_{9}^{-2}-\widetilde{T}_{\rm env}^{-2}),\ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \widetilde{T}_{\rm
cool}\leq\widetilde{T}_{\rm env}~{},\end{cases}$ (30)
where $\widetilde{T}_{9}=\widetilde{T}_{\rm cool}/(10^{9}~{}{\rm K})$,
$q=2.75\times 10^{-2}$, $s_{1}=8.88\times 10^{-6}$, $s_{2}=8.35\times 10^{4}$,
$k=(n-2)/(n-\alpha)$ and $\gamma=(2-\alpha)/(n-\alpha)$ with $\alpha=2.2$ and
$n=8$; $\widetilde{T}_{\rm env}\simeq 4000~{}$K corresponds to the time after
which the surface and internal temperatures equalize. The right-hand panel of
Figure 4 shows the NS cooling curve plotted using the above expression, with
the distinct regimes of neutrino and photon cooling labelled.
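Eq. (30) also transcribes directly into code (a Python sketch using the parameter values quoted above):

```python
q, s1, s2 = 2.75e-2, 8.88e-6, 8.35e4
alpha, n = 2.2, 8
k = (n - 2) / (n - alpha)
gam = (2 - alpha) / (n - alpha)
T9_ENV = 4e-6                     # ~4000 K in units of 1e9 K

def t_env(T9):
    """Neutrino-dominated branch of Eq. (30), in yr."""
    return s1**-k * q**-gam * ((1 + (s1 / q)**k * T9**(2 - n))**(-gam / k) - 1)

def t_cool(T9):
    """Time (yr) for the benchmark NS to cool to T9 = T/(1e9 K), Eq. (30)."""
    if T9 > T9_ENV:
        return t_env(T9)
    return t_env(T9_ENV) + (3 * s2)**-1 * (T9**-2 - T9_ENV**-2)

print(t_cool(1e-7))  # reaching ~100 K takes a sizeable fraction of a Gyr
```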
While in standard cooling scenarios NSs are expected to cool down to
$\mathcal{O}(10^{2})$ K over Gyr timescales (as in Fig. 4), temperatures as
high as $10^{4}$ K have been conjectured to persist if some additional
astrophysical source of NS heating is present: we discuss such speculative
reheating mechanisms in Sec. 4.1.5. We note that in the cases of magnetic
field decay and rotochemical heating, the expectation is that late-stage NSs
will still glow at temperatures below $10^{3}$ K.
#### 2.6.3 Comparison of white dwarf and neutron star late-stage cooling
From the discussion above, and from the right-hand panel of Fig. 4, we see
that the temperature of an NS expected at late stages ($t_{\rm cool}\gtrsim$
Gyr), about $10^{2}$ K, is much smaller than the late-stage temperature of a
WD, about $10^{3.3-4}$ K. It is natural to wonder why, as one may expect both
to have a similar late-stage temperature since both are low-temperature
degenerate stars. The crucial difference is that the late-stage WD heat
capacity is determined by vibrational modes of the nuclear ionic Coulomb
lattice forming its interior, while the NS heat capacity is that of a
degenerate Fermi gas.
In the WD case, the heat capacity per ion is [26]
$c_{v}^{\rm WD}\simeq\int_{0}^{\kappa_{\rm
max}}\frac{\kappa^{2}d\kappa}{2\pi^{2}n}\sum_{\lambda=1}^{3}\frac{e^{\omega_{\lambda}(\kappa)/T}(\omega_{\lambda}(\kappa)/T)^{2}}{(e^{\omega_{\lambda}(\kappa)/T}-1)^{2}}~{},$
(31)
where $\omega_{\lambda}$ is the vibrational energy, $\kappa$ is the wavenumber
of the normal modes of the Coulomb lattice, and $\lambda$ labels transverse
and longitudinal modes. Using the linear approximation
$\omega_{\lambda}=\kappa c_{s}$, where $c_{s}$ is the sound speed in the
lattice, and changing variables in the integral to $\kappa\rightarrow
yT/c_{s}$, it can be immediately seen that $c_{v}^{\rm WD}\propto T^{3}$.
In the case of the NS, in the limit where the temperature $T$ drops below the
Fermi energy $E_{F}$, only a fraction $T/E_{F}$ of the fermions close to the
Fermi surface will be excited and raise the bulk temperature. Hence the energy
per fermion $\propto T(T/E_{F})$, and it follows that the heat capacity
$c_{v}^{\rm NS}\propto T$.
From Eq. (25), setting the photon luminosity $\propto T^{4}$, we obtain $t_{\rm cool}\propto\log T$ for WDs and $t_{\rm cool}\propto 1/T^{2}$ for NSs. Thus, WDs indeed cool much slower than NSs at later stages.
### 2.7 Nucleon superfluidity
It has been recognized since 1959 that nucleons in a NS could be in a
superfluid state [98], a hypothesis supported by observational fits to cooling
curves [94]. Neutron superfluidity and proton superconductivity arise due to
their Cooper pairing with a 0.1 MeV energy gap, corresponding to a critical
temperature $T_{c}\simeq 10^{10}$ K [99, 63, 100]. Pairing occurs mainly close
to the Fermi surface, hence superfluidity does not influence the EoS of NS
matter (therefore bearing no consequence on NS mass-radius relations), but
does play a major role in setting the NS’ heat capacity and neutrino
emissivity. This is because these quantities are sensitive to particle-hole
excitations close to the Fermi surface: the energy gap exponentially
suppresses the nucleon contribution to the heat capacity for NS temperatures
$\ll T_{c}$.
Neutrons in the NS inner crust are expected to form Cooper pairs in the
singlet ${}^{1}S_{0}$ state and in the core in a triplet ${}^{3}P_{2}$ state:
at higher densities, singlet pairing becomes repulsive [100]. The less dense
protons are expected to pair in the singlet state in the NS core. A quark core
in the NS could give rise to “color superconductivity” with $ud$, $ds$, $su$
Cooper pairs carrying color charges [101]. Nucleon pairing models play a
central role in the possibility of rotochemical heating, as discussed in Sec.
4.1.5. The presence of superfluidity in NSs also gives rise to macroscopic
vortices and flux tubes, the former of which may play a role in late-stage
reheating of NSs (Sec. 4.1.5).
Figure 5: Left. The “light cylinder” around a NS within which the co-rotating
magnetosphere is confined [102]. Acceleration of charges in this region is
thought to produce electromagnetic beams that are detected terrestrially as
regular pulses, making young NSs “pulsars”. Right. $P$-$\dot{P}$ diagram taken
from Ref. [103], illustrating the evolution of pulsars. For a description of
the types of pulsars displayed here, see Ref. [102]. See Sec. 2.8 for further
details.
### 2.8 Neutron star magnetic field and spin-down
When a progenitor star turns into an NS, its surface area shrinks by a factor
of about $10^{10}$. As a result, thanks to conservation of magnetic flux
(Gauss’ law for magnetism, $B\times R_{\rm NS}^{2}$ = constant) the stellar
magnetic fields increase by this factor, and thanks to conservation of angular
momentum the rotational speed also rises by this factor. Flux conservation
also implies that the total energy in the NS due to the magnetic field
decreases with the NS size:
$E^{\rm NS}_{B}=\frac{B^{2}}{8\pi}\cdot\frac{4\pi R_{\rm NS}^{3}}{3}=\frac{\rm
const.}{R_{\rm NS}}~{},$ (32)
hence the presence of $B$ fields tends to enlarge the NS. However $E^{\rm
NS}_{B}$ is bounded by the gravitational binding energy of the NS, giving the
condition
$B\leq\sqrt{\frac{18}{5}\frac{GM^{2}_{\rm NS}}{R^{4}_{\rm NS}}}\simeq
10^{18}~{}{\rm Gauss}~{}\bigg{(}\frac{M_{\rm
NS}}{M_{\odot}}\bigg{)}\bigg{(}\frac{10~{}{\rm km}}{R_{\rm
NS}}\bigg{)}^{2}~{}.$ (33)
A stricter upper limit can be obtained from considerations of hydromagnetic
stability [104]. Measurements from pulsar spin-down (discussed here) find that
millisecond pulsars typically have $B$ field strengths of about $10^{8}$
Gauss, classical pulsars about $10^{12}$ Gauss, and magnetars about $10^{15}$
Gauss. NSs have a “magnetosphere”, a region of plasma surrounding the NS and
co-rotating with it due to their coupling through the $B$ field. One can see
that this region is finite by simply locating the equatorial radius at which
its tangential speed $=c$ for a spin period $P$:
$R_{\perp}^{\rm LC}=\frac{cP}{2\pi}=48~{}{\rm km}\bigg{(}\frac{P}{\rm
ms}\bigg{)}~{}.$ (34)
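Eq. (34) is simple enough to check numerically; a minimal sketch (the function name is ours):

```python
import math

C_KM_S = 2.998e5  # speed of light [km/s]

def light_cylinder_km(period_s):
    """Light-cylinder radius of Eq. (34): the equatorial distance at
    which a co-rotating magnetosphere would reach the speed of light."""
    return C_KM_S * period_s / (2.0 * math.pi)

print(f"{light_cylinder_km(1e-3):.0f} km")   # ~48 km for a millisecond pulsar
print(f"{light_cylinder_km(0.033):.0f} km")  # ~1600 km for the Crab Pulsar
```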
This region defines the “light cylinder” shown in Figure 5, left panel. The
presence of strong moving magnetic fields in the light cylinder generates
electric fields that accelerate charged particles at the stellar surface,
leading to emission of electromagnetic beams from near the magnetic poles of
the NS. This beam, as we will soon see, is powered by the rotational energy of
the NS. The lighthouse-like sweep of the beam, detected as regular pulses on
Earth, serves to reveal NSs as pulsars (at the time of writing, two “white
dwarf pulsars” have been discovered [105, 106], but these refer to regular
pulsation corresponding to the beat frequency of orbital rotation with a
binary companion and the spin of the WD). This is how NSs were historically
discovered by Bell and Hewish, and continues to be the primary method for
finding NSs in the sky [107].
The NS spin varies over the lifetime of the NS due to a number of factors,
chief among which is magnetic dipole radiation extracting rotational kinetic
energy, an effect known as pulsar spin-down. The radiation power of a rotating
magnetic dipole of moment $m$, with a component $m_{\perp}$ perpendicular to
the NS spin axis, and angular speed $\omega=2\pi/P$, is given by [102]
$\dot{E}_{\rm rad,B}=\frac{2}{3c^{3}}m_{\perp}^{2}\omega^{4}=\frac{2}{3c^{3}}\big{(}B_{\perp}R_{\rm NS}^{3}\big{)}^{2}\bigg{(}\frac{2\pi}{P}\bigg{)}^{4}~{},$ (35)
where in the second equality we have used the dipole moment of a sphere
uniformly magnetized with field strength $B$. The rotational power of an NS of
moment of inertia $I=2M_{\rm NS}R^{2}_{\rm NS}/5$ is given by
$\dot{E}_{\rm rot}=I\omega\dot{\omega}=-4\pi^{2}\frac{I\dot{P}}{P^{3}}~{}.$
(36)
For sub-kHz frequencies this radiation cannot penetrate the ISM nebula
surrounding the NS, and is hence deposited in it; the observed $P$, $\dot{P}$
and luminosities of supernova remnants such as the Crab Nebula ($P$ = 0.033
sec, $\dot{P}$ = 1 sec/80,000 yr, luminosity = $10^{5}L_{\odot}$, much higher
than that of the Crab Pulsar within) bear out the supposition that
$-\dot{E}_{\rm rot}\simeq\dot{E}_{\rm rad,B}$ [102].
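Equating Eqs. (35) and (36) gives $B_{\perp}\propto\sqrt{P\dot{P}}$, which is how the field strengths quoted earlier in this section are inferred from timing data. A rough sketch, assuming fiducial values $I=10^{45}~{\rm g\,cm^{2}}$ and $R_{\rm NS}=10$ km (the helper name is ours):

```python
import math

C = 2.998e10   # speed of light [cm/s]
I_NS = 1e45    # fiducial NS moment of inertia [g cm^2]
R_NS = 1e6     # fiducial NS radius [cm]

def b_perp_gauss(p_s, p_dot):
    """Dipole field inferred by equating the radiated power, Eq. (35),
    with the rotational energy loss, Eq. (36)."""
    prefactor = math.sqrt(3.0 * C**3 * I_NS / (8.0 * math.pi**2 * R_NS**6))
    return prefactor * math.sqrt(p_s * p_dot)

# Crab Pulsar: P = 0.033 s, Pdot = 1 s per 80,000 yr
p_dot_crab = 1.0 / (80_000 * 3.156e7)
print(f"{b_perp_gauss(0.033, p_dot_crab):.1e} G")  # ~4e12 G: a classical pulsar
```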
NS spin-down provides a remarkably valuable handle on the age of an NS through
measurement of just its $P$ and $\dot{P}$, i.e. without requiring knowledge of
its radius, mass and $B$ field. Assuming the $B$ field remains constant, by
equating Eqs. (35) and (36) we see that $P\dot{P}$ is constant over time. For
an initial spin period $P_{0}$,
$\int_{0}^{\tau}dt\,(P^{\prime}\dot{P}^{\prime})=\int_{P_{0}}^{P}dP^{\prime}\,P^{\prime}\ \ \Rightarrow\ \ P\dot{P}\,\tau=\frac{P^{2}-P_{0}^{2}}{2}\ \ \Rightarrow\ \ \tau=\frac{P}{2\dot{P}}~{},$ (37)
where the last equality assumed that the initial period $P_{0}\ll P$. This
characteristic age $\tau$ due to spin-down is often an excellent order-of-
magnitude estimate of an observed NS’s true age. It slightly overestimates the
latter for young NSs as an NS’ spin may initially decelerate via gravitational
radiation due to an oblate shape. For instance, for the Crab Pulsar, whose
supernova was observed in 1054 A.D., one finds $\tau$ = 1300 years. In the
case of older pulsars, Eq. (37) must again be used with special care,
specifically when being applied to NSs that are thought to have spun up at
some point in their life. These could be, e.g., millisecond pulsars that are
modelled as accreting mass and angular momentum from a binary companion; these
have been observed with a characteristic age older than their actual age [108]. In
particular, there are millisecond pulsars with $\tau>$13.8 Gyr [107], the
measured age of the universe [109].
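The characteristic age of Eq. (37) can be evaluated directly from the Crab numbers quoted above (a sketch; the helper name is ours):

```python
SEC_PER_YR = 3.156e7

def characteristic_age_yr(p_s, p_dot):
    """Spin-down age tau = P/(2 Pdot) of Eq. (37), valid for P0 << P."""
    return p_s / (2.0 * p_dot) / SEC_PER_YR

# Crab Pulsar: P = 0.033 s, Pdot = 1 s per 80,000 yr
p_dot = 1.0 / (80_000 * SEC_PER_YR)
print(round(characteristic_age_yr(0.033, p_dot)))  # ~1300 yr vs. a true age of ~970 yr
```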
We note in passing that for NSs for which precise data on distances and proper
motions are available, their kinematic age may also be estimated by tracing
back their trajectories and locating a plausible birth site [110]. This
technique is possible thanks to the kick velocity imparted to the NS by the
asymmetric explosion of the progenitor, as mentioned in the beginning of Sec.
2.
The pulsar braking index $n$ is defined via $\dot{\omega}\propto\omega^{n}$.
With a little elementary calculus, it may be seen that
$n\equiv\frac{\omega\ddot{\omega}}{\dot{\omega}^{2}}=2-\frac{P\ddot{P}}{\dot{P}^{2}}~{}.$
(38)
For spin-down induced by magnetic dipole radiation, one finds by equating Eqs.
(35) and (36) that $n=3$, although pulsars with braking indices of 1.4$-$3
have been observed, suggesting other spin-down mechanisms [102].
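A quick consistency check of Eq. (38): pure dipole braking means $P\dot{P}$ is constant, so $P(t)=\sqrt{P_{0}^{2}+2kt}$, and the analytic derivatives indeed return $n=3$ (illustrative values; the helper name is ours):

```python
def braking_index(p, p_dot, p_ddot):
    """Eq. (38): n = 2 - P*Pddot/Pdot^2, equal to omega*omegaddot/omegadot^2."""
    return 2.0 - p * p_ddot / p_dot**2

# Dipole spin-down: P*Pdot = k constant, so P(t) = sqrt(P0^2 + 2 k t)
p0, k, t = 0.02, 1e-15, 1e10    # illustrative values [s], [s], [s]
p = (p0**2 + 2.0 * k * t) ** 0.5
p_dot = k / p                    # since P*Pdot = k
p_ddot = -(k**2) / p**3          # differentiating Pdot = k/P once more
print(braking_index(p, p_dot, p_ddot))  # ~3, independent of t, as expected for dipole braking
```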
It is useful to place observed pulsars on a $P$-$\dot{P}$ diagram such as the
one shown in Fig. 5 right panel. Pulsars typically begin life at the north-
west region of the diagram, and move south-east along contours of constant $B$
strengths while crossing contours of constant spin-down age. Eventually as
they age to about 10 Myr the rotational energy is insufficient to generate the
pulsar beam, and they cross the “pulsar death line”, sometimes referred to as
the “death valley”. However, the death line is not well-understood, for the
exact mechanism by which pulsar beams are created is still unknown and is an
active area of research. This is evident in the $P$-$\dot{P}$ diagram: quite a
few pulsars lie beyond various models of the death line [111, 112, 113, 114],
with PSR J2144-3933 lying well beyond all the canonical death lines. We will
re-encounter this oddball pulsar, which also happens to be the coldest NS
observed, in Sec. 4.1.
## 3 The white dwarf as a dark matter laboratory
WDs have been used as DM detectors via a number of mechanisms. There are four
main effects, which we will detail in the rest of the section: (1) DM can
collect and annihilate inside WDs, heating them to above the temperature that
would be expected from a standard WD cooling curve such as in Sec. 2.6. (2) So
much non-annihilating DM accumulates in a WD that the DM collapses and forms a
black hole deep in the WD interior. This small black hole can grow to accrete
the entire WD, thereby converting its host into a solar mass black hole. (3)
DM encounters with and collection in the WD can cause it to explode. (4) WDs’
internal structure could be altered if a substantial fraction of its mass were
comprised of DM.
In addition, resonant conversion of axion-like particle DM to photons in the
corona of a magnetic WD may be observed [115]; we relegate discussion of this
phenomenon in the context of NSs to Sec. 4.11.
Figure 6: Left. Upper bounds from Ref. [116] on DM density distributions in
the globular cluster M4, compared with an estimate of the DM densities
(labelled “1101.2737”) from Ref. [117] using a spherical collapse model. Also
shown are the range of DM densities required to match the observed
luminosities of WDs in M4 via DM annihilations within the WD as well as
kinetic heating by infalling DM; the horizontal range of the rectangles spans
the uncertainty in the positions of the WDs. Right. Bounds on dark matter
using an old WD in the Milky Way taken from [86]. See Secs. 3.1 and 3.2 for
further details.
### 3.1 Dark matter annihilation inside and heating white dwarfs
The possibility that dark matter can accumulate inside and change the internal
thermal properties of stars has long been appreciated [9, 10]. A number of
works has proposed that old WDs could have their late-time temperature altered
through accumulation and annihilation of DM in the interior [118, 117, 119,
120]. To a good approximation the amount of collisionless DM (for local DM
density $\rho_{\chi}$ and average DM-WD relative speed $v_{\rm rel}$) flowing
through a WD with mass $M_{\rm WD}=1.2~{}{\rm M_{\odot}}$, radius $R_{\rm
WD}=4000$ km, and surface escape velocity $v_{\rm esc}=\sqrt{2GM_{\rm
WD}/R_{\rm WD}}$ is
$\displaystyle\dot{M}$ $\displaystyle=\rho_{\chi}v_{\rm
rel}\times\pi\left(\frac{R_{\rm WD}v_{\rm esc}}{v_{\rm
rel}}\right)^{2}=10^{-7}~{}{\rm\frac{M_{\odot}}{\rm Gyr}}~{}\left(\frac{R_{\rm
WD}}{4000~{}{\rm km}}\right)^{2}\left(\frac{M_{\rm WD}}{1.2~{}{\rm
M_{\odot}}}\right)\left(\frac{\rho_{\chi}}{0.4~{}{\rm GeV/cm^{3}}}\right),$
(39)
where we have normalized to the mass accumulated over a gigayear to emphasize
that the DM mass accumulated inside the WD over the lifetime of the universe
is only a tiny fraction of the stellar mass. This expression assumes that all
DM incident on the WD is captured; for the DM-nucleon or DM-electron cross
section dependence of the capture rate, see Refs. [121, 86].
The late-time temperature of a benchmark WD described above, assuming it is
determined by the capture and annihilation of all DM transiting the WD, is
given by [116]
$T_{\rm WD}\approx 4000~{}{\rm K}\left(\frac{350~{}{\rm km/s}}{v_{\rm
rel}}\right)^{1/4}\bigg{(}\frac{\rho_{\chi}}{10^{3}~{}{\rm
GeV/cm^{3}}}\bigg{)}^{1/4},$ (40)
where here we have normalized this expression to a typical $v_{\rm rel}$, but
have chosen $\rho_{\chi}$ more than three orders of magnitude greater than the
inferred DM density near most WDs whose temperatures have been determined.
This is the DM density required for heating WDs above their expected late-time
temperature shown in Figure 4. In practice, this means that in order to find
or exclude DM this way, one would need to find an ancient WD in a region that
conclusively has a high DM density.
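To make the density requirement concrete, Eq. (40) can be evaluated at the local DM density (a sketch; the helper name is ours):

```python
def t_wd_kelvin(rho_chi_gev_cm3, v_rel_km_s=350.0):
    """Late-time DM-heated WD temperature, Eq. (40)."""
    return 4000.0 * (350.0 / v_rel_km_s) ** 0.25 * (rho_chi_gev_cm3 / 1e3) ** 0.25

print(round(t_wd_kelvin(1e3)))  # 4000 K at the normalization density
print(round(t_wd_kelvin(0.4)))  # ~570 K at the local density of 0.4 GeV/cm^3
```

That is, at the solar-neighborhood DM density the effect sits well below the observed late-time WD temperatures of Figure 4, which is why high-density environments are needed.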
Reference [117] studied the heating effect that certain inelastic DM models
would have on the late-stage temperature of WDs, and found that for a
background DM density of $\rho_{\chi}\simeq 3\times 10^{4}~{}{\rm
GeV/cm^{3}}$, they would be sensitive to inelastic inter-state mass splittings
of about $10-10^{3}$ keV and per-nucleon scattering cross sections
$\sigma_{n\chi}\gtrsim 10^{-41}~{}{\rm cm^{2}}$. These authors proceeded to
investigate whether WDs observed in a very dense self-bound stellar system,
the globular cluster NGC 6121, a.k.a. Messier 4 (M4), might reside in a
background density of DM large enough to observe heating from DM. Assuming
that M4 was formed from a subhalo that was then tidally stripped by the Milky
Way parent halo, using a spherical collapse model first derived in Ref. [118],
adopting an NFW density profile, and accounting for the slight adiabatic
contraction of densities from the baryon potential, they estimated that the DM
density was approximately 800 GeV/cm${}^{3}$ at a cluster-centric distance $r=2.3$
pc, where the farthest WDs were observed in the Hubble Space Telescope.
Following this, a number of authors investigated the implications of DM in
globular clusters capturing in celestial bodies, under the assumption of a
large ($10^{3}$–$10^{4}$ GeV/cm${}^{3}$) DM density [122, 123, 124, 125, 126, 127, 128, 129].
A recent study [116] set empirical limits on the DM densities in M4 using
measurements of stellar line-of-sight velocities and performing a spherical
Jeans analysis; Figure 6 shows these limits on various DM density profiles
corresponding to upper bounds on NFW scale parameters. The density estimate of
Ref. [117], denoted by an asterisk, is safe from these limits. Nevertheless,
it was argued that the use of globular clusters as copious sources of DM
resulting in far-reaching conclusions about its microscopic properties is
problematic for several reasons.
1. 1.
The origin story of globular clusters is unclear. While Ref. [117] echoed a
popular theory – corroborated by $N$-body simulations – that globular clusters
originate in DM subhalos that are then tidally stripped [130, 131, 132], alternative
simulations suggest they may form with no aid from dark matter via the
collapse of giant molecular clouds [133, 134, 135, 136].
2. 2.
The V-band mass-to-light ratios of globular clusters in solar units are 1–5,
which is equivocal about the presence of DM in them, unlike, say, dwarf
galaxies (10–100), the Coma Cluster of galaxies (660) or the Milky Way (25),
which are known to harbor significant amounts of DM. In fact, a stellar
“cluster” is by definition a system whose dynamics need not be explained by
DM, unlike a “galaxy” [137]. Accordingly, studies of more than
20 globular clusters looking for DM in them have either failed to detect DM or
come to ambiguous conclusions [116].
3. 3.
There is no guarantee that any invisible mass favored in globular cluster data
is in fact DM, as it may also be from faint stellar remnants [138].
4. 4.
The interpretation of the presence or absence of DM in ambiguous datasets is
sensitive to priors and parametrizations. Ref. [139] found no evidence for DM
when analyzing NGC 2419 by fitting a Michie model for the stellar and a
generalized NFW profile for the DM distributions, but found strong evidence
for DM when fitting these quantities with no analytic form, floating instead
389 free parameters.
One could conclude that, due to these significant uncertainties and the
related infeasibility of determining DM density distributions in globulars
with current and imminent measurement sensitivities, globular clusters are
systems that are far from robust for making statements about DM interactions.
On that note, there are proposals for finding WDs in dwarf galaxies like Segue
I and II [140].
### 3.2 Non-annihilating dark matter converting white dwarfs into black holes
If enough non-annihilating DM accumulates in WDs, the DM can collapse, and
subsequently form a small black hole that accretes surrounding WD material,
eventually consuming the entire WD [122, 86, 141]. Typically the DM is assumed
to be “asymmetric” since in such models DM typically does not self-annihilate
[142]. If in the process of accumulation and collapse DM self-annihilates
efficiently, too much of it may be lost to form a black hole in the WD core.
The routine by which DM could form a small black hole in the interior of a WD
is very similar to the more studied case of DM forming black holes in
NSs (see also Refs. [143, 144, 145], which study black hole formation in other
astrophysical bodies like the Earth, Sun, and Population III stars), which is
detailed in length in Section 4.4. To avoid repetition, here we will emphasize
aspects that are distinct from the case of the NS. The WD-to-BH conversion
process is as follows. First, DM accumulates in the WD over time, through
scattering on nuclei or electrons in its interior. Then, the captured DM
thermalizes with the WD interior, i.e., after repeated scattering it is
expected to localize within a small volume determined by the WD’s internal
temperature and gravitational potential.
One chief difference here between WDs and NSs is that during thermalization,
DM will scatter with a Coulomb lattice of ions in the core of the WD, which is
stabilized by relativistic electron degeneracy pressure. This effect
considerably suppresses DM-nucleus scattering rates at low momentum transfers,
the regime that determines the thermalization timescale $t_{\rm th}^{\rm WD}$.
For a carbon WD, this is given by [86]
$t_{\rm th}^{\rm WD}\simeq 20~{}{\rm yr}~{}\left(\frac{10^{-40}~{}{\rm
cm^{2}}}{\sigma_{n\chi}}\right)\left(\frac{m_{\chi}}{10^{6}~{}{\rm
GeV}}\right)^{2}\left(\frac{10^{7}~{}{\rm K}}{T_{\rm WD}}\right)^{5/2}~{}.$
(41)
Thus for $m_{\chi}>10^{10}$ GeV, it can take $>$ Gyr for DM to thermalize with
the WD interior. Another difference between DM collapsing to form black holes
in WDs and NSs is that, during the collapse inside a WD, DM may trigger a
star-destroying thermonuclear explosion. We now turn to this topic.
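The scaling of Eq. (41) with DM mass can be checked with a short helper (a sketch; the function name is ours):

```python
def t_th_wd_yr(sigma_n_chi_cm2, m_chi_gev, t_wd_k=1e7):
    """Thermalization time of captured DM in a carbon WD, Eq. (41)."""
    return 20.0 * (1e-40 / sigma_n_chi_cm2) * (m_chi_gev / 1e6) ** 2 \
        * (1e7 / t_wd_k) ** 2.5

# For m_chi = 1e10 GeV at sigma = 1e-40 cm^2 this gives ~2e9 yr,
# illustrating why very heavy DM may fail to thermalize within a Gyr.
print(f"{t_th_wd_yr(1e-40, 1e10):.1e} yr")
```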
Figure 7: Illustration of mechanisms by which WDs may be prompted to explode
by dark matter. (a) DM accumulates to the point of collapse in the center of
the WD, then while collapsing (or after collapsing and forming a black hole)
heats the WD to a temperature inducing a thermonuclear chain reaction. (b) The
internal potential or mass energy of spatially extended DM is deposited as WD
nuclei enter its state, prompting local heating that initiates the
thermonuclear runaway. (c) Macroscopic DM transiting the WD transfers
explosive kinetic energy via scattering on WD constituents.
### 3.3 White dwarf explosions via dark matter
Dark matter accumulated inside WDs might trigger a Type Ia-like supernova
explosion through the deposition of enough energy to prompt runaway fusion
reactions in the carbon/oxygen/neon interior of the WD [85, 146]; see also
Ref. [147] for an early discussion of DM cores affecting Type Ia supernovae.
More generally, DM triggering WDs into supernovae can proceed in a number of
ways:
* •
Attendant to DM converting WDs to black holes, DM can collect into a core
region of the WD, collapse, and as a result of the collapse, deposit enough
energy to ignite the WD. Ignition can occur either directly through nuclear
scattering during the collapse of the DM core [85, 146, 47] or through the
evaporation of a small black hole that forms out of the collapsed DM core
[148, 141, 47].
* •
DM can have internal properties that result in energy being liberated as WD
particles enter the DM state. A simple example of this is captured (and
possibly thermalized) DM annihilating and depositing energy in the WD medium
with which it is admixed [149]. Other interesting possibilities are composite
DM with an internal potential for baryons [150], solitonic Q-ball DM that
absorbs baryonic charge and dissociates nuclei in the process [149], monopoles
that possibly induce nucleon decay in similar fashion (Sec. 4.1.2), and
accretion of WD carbon ions onto a black hole formed from collapse of
electrically charged DM states [47].
* •
During an initial transit through the WD, DM can deposit kinetic energy gained
by falling into the WD’s gravitational potential. The DM could be in the form
of primordial black holes (PBHs), in which case energy is transferred via
dynamical friction [146, 151], or particles inducing nuclear scatter recoils
[149, 152, 153]. Tightly bound asteroid-like DM triggering WD explosions via
stellar shocks has also been suggested [154].
However the WD is heated, a number of requirements must be met for the
sparked thermonuclear reactions to sustain themselves and cause the WD to
explode. These requirements are described in Sec. 2.5. We now discuss some
subtle aspects of this phenomenon as explored in the literature.
A detailed simulation of PBH energy deposition in a WD, including the effect
of turbulent flows in the wake of the passing PBH, found that heavier PBHs
were required to ignite WDs [151] compared to initial estimates [146]. This
study employed a 1D+1D hydrodynamic simulation of the shock front created by a
transiting PBH, and found that the development of hydrodynamic instabilities
dissipating heat deposited through dynamical friction appeared to occur more
rapidly than ignition processes, which were modeled using the same carbon
fusion reaction rates used in Ref. [84]. Instead of a burning reaction, the
prompt detonation of WD material by transiting PBHs was further studied in
Ref. [155]. Another study investigated ignition during DM core collapse using
a system of differential equations that track the evolution of per-particle
energies [156]; this tracked carbon fusion inside the collapse region, and
found carbon depleted before the WD ignition temperature in Ref. [84] was
reached. Future work could build on this result in a number of directions,
e.g., by studying C-O burning and employing the full nuclear reaction network
used in Ref. [84] to obtain WD ignition temperatures. In addition, future WD
ignition estimates should also consider convective flows of heated WD material
through the collapse region that could replenish carbon and oxygen,
whether WD ignition occurs via thermal energy transported out of the collapse
region. This is especially important, since studies on carbon fusion occurring
inside DM bound states found that fusion can be induced in the region
surrounding the collapsing region that is the source of heat, either through
the evaporation of black holes of size much smaller than the ignition region,
or through effluence of thermal energy outside of the transiting DM composite
[86, 141, 150, 47]. Finally, the ignition of WD supernovae via oxygen burning
typically requires a temperature somewhat higher than that of carbon [84] (we
thank Melissa Diamond for correspondence on this point), and future
detailed treatments of WD ignition by collapsing DM should account for this
possibility.
Ref. [149] set limits on a wide range of DM-nucleus scattering cross sections
and DM masses assuming point-like elastic scattering of DM particles on carbon
in WDs. These constraints were placed using the condition for the minimum
stopping power,
$n_{\rm T}\sigma_{\rm T\chi}m_{\rm T}v^{2}_{\rm
esc}\gtrsim\rho\bar{c}_{p}T_{\rm crit}\lambda_{\rm trig}^{2}$ (42)
for a heating region of size $\leq\lambda_{\rm trig}$. However, this condition
does not account for the finite number of nuclei that the DM particle would
encounter during its transit through the heating region. Suitably modified,
the above condition should be
$\displaystyle N_{\rm hit}\frac{m_{\rm T}v^{2}_{\rm esc}}{\lambda_{\rm trig}}$
$\displaystyle\gtrsim$ $\displaystyle\rho\bar{c}_{p}T_{\rm crit}\lambda_{\rm
trig}^{2}~{},$ $\displaystyle N_{\rm hit}$ $\displaystyle=$
$\displaystyle\max[n_{\rm T}\sigma_{\rm T\chi}\lambda_{\rm trig},n^{1/3}_{\rm
T}\lambda_{\rm trig}]$ (43)
where $N_{\rm hit}$ is the number of point-like scatters on nuclei as the DM
particle traverses the length $\lambda_{\rm trig}$. One can see that Eq. (43)
reduces to Eq. (42) for $\sigma_{\rm T\chi}<n_{\rm T}^{-2/3}$. Ref. [149]
considers a 1.25 $M_{\odot}$ WD to set limits, for which $n_{\rm T}\simeq
10^{31}$ cm${}^{-3}$, implying that $\sigma_{\rm T\chi}\lesssim 2\times 10^{-21}~{\rm cm^{2}}$
for Eq. (42) to be valid. However, $\sigma_{\rm T\chi}>10^{-12}$ cm${}^{2}$ is shown
to be excluded in Ref. [149]. One could also see the error in this result by
estimating the maximum energy transferred by DM elastic recoils by a linear
transit across a length $\lambda_{\rm trig}$. This is $(n^{1/3}_{\rm
T}\lambda_{\rm trig})(m_{\rm T}v^{2}_{\rm esc})\simeq 10^{4.5}$ GeV, which may
be compared with the trigger energies ranging across WD masses, $10^{17-24}$
GeV (Sec. 2.5). One could contrast this analysis against Refs. [152, 153],
which considered WD explosions triggered by the transit of macroscopic
composite DM. In these studies, the requisite number of WD nuclei within a
trigger volume may indeed be excited to ignite the region into runaway fusion.
In Fig. 11 bottom left panel we show the masses and radii of DM mini-clusters
constrained by the observed existence of WDs in our Galaxy, taken from Ref.
[153]. Overlaid here are contours of the minimum DM-nucleus elastic scattering
cross sections required to transfer sufficient kinetic energy to the WD
trigger volume to induce stellar explosion.
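The crossover between Eqs. (42) and (43) can be verified numerically: the two expressions for $N_{\rm hit}$ coincide at $\sigma_{\rm T\chi}=n_{\rm T}^{-2/3}$ (a sketch, using the carbon-ion density quoted above; names ours):

```python
N_T = 1e31  # carbon-ion number density in a 1.25 M_sun WD [cm^-3]

def n_hit(sigma_cm2, lam_cm):
    """Number of nuclear scatters over the trigger length, Eq. (43)."""
    return max(N_T * sigma_cm2 * lam_cm, N_T ** (1.0 / 3.0) * lam_cm)

sigma_cross = N_T ** (-2.0 / 3.0)
print(f"{sigma_cross:.1e} cm^2")  # ~2e-21 cm^2, the validity limit quoted for Eq. (42)

# At the crossover, the two scatter-count estimates agree:
lam = 1e-4  # an illustrative trigger length [cm]
assert abs(N_T * sigma_cross * lam - N_T ** (1.0 / 3.0) * lam) < 1e-3
```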
A number of phenomena have been linked to the DM-induced ignition of
thermonuclear explosions in WDs. It has been posited that DM core collapse in
WDs might account for a large fraction of observed Type Ia supernovae [85], as
a solution to the Type Ia progenitor problem [79] and consistent with the
apparent observation of sub-Chandrasekhar WDs as the origin of most Type Ia
supernovae [157]. Reference [85] also found that a trend in existing Type Ia
data [158], showing that more massive progenitors explode sooner, is
consistent with certain DM models that induce WD explosions through DM core
collapse, where this would occur sooner for heavier WDs. The accumulation in
certain sub-Chandrasekhar WDs of charged massive particles (CHAMPs) making up
DM, which might occur preferentially outside galaxies with magnetic fields
that serve to deflect CHAMPs, could be an explanation of the distribution of
calcium-rich gap transient WD supernovae [47] that do explode preferentially
on the outskirts of galaxies [159]. The distribution of Type Ia supernovae in
galaxies could be tied to local properties like velocity dispersion,
especially in the case of PBH-ignition [160]. Finally, a separate study has
investigated whether WD explosions from DM could explain the aforementioned
Ca-rich gap transient distribution, through the ignition of WDs in dwarf
spheroidal galaxies expected to be located at some distance from larger
galactic DM halos [161].
### 3.4 Dark matter’s influence on white dwarf equations of state
WD mass-radius relationships can also be observably impacted by DM. If a
substantially massive core of DM accumulated in the interior of a WD, its
stable configurations would be altered through revised TOV equations [147,
162, 163]. For a typical circumambient DM density, the amount of collisionless
DM required to induce these effects, $10^{-4}M_{\odot}-10^{-1}~{}M_{\odot}$,
well exceeds what could be collected in the WD over the lifetime of the
universe; see Eq. (39). However, future studies could investigate whether such
a large quantity of DM might be collected through collisional accretion,
analogous to the NS treatment in Ref. [164] (discussed in Sec. 4.2). Another
effect comes through the axion: its existence implies that its non-derivative
coupling to nucleons would displace it from its usual minimum in a finite-
density medium. This results in a reduction of the nucleon mass and alters the
TOV equations [165].
## 4 The neutron star as a dark matter laboratory
Figure 8: Top left. Cartoon showing the dark kinetic heating effect in NSs.
Scattering interactions of the infalling dark matter flux contribute to the
luminosity of a typical NS at the level of a $1500$ K blackbody temperature.
Top middle. The nucleon Auger effect that contributes to kinetic (and possibly
annihilation) heating by dark matter in NSs. The total energy deposited after
scattering turns out to be the dark matter energy transfer, although
physically it comes as the sum of two contributions: the energy spilled during
the rapid filling of the hole left behind by the struck target, and the energy
carried by the target in excess of the Fermi energy. Top right. The breaking
and re-pairing of Cooper pairs that contributes to kinetic (and possibly
annihilation) heating by dark matter in NSs. This phenomenon takes place for
dark matter with mass above about 35 MeV; for smaller masses, dark matter
capture proceeds through collective excitations in the nucleon superfluid
medium. Bottom left. Cartoon showing possible additional heating of NSs via
self-annihilations of dark matter possibly collected in a thermalized volume.
This highly model-dependent process could heat the NS to blackbody
temperatures around 2000 K. Bottom right. As a function of NS mass, NS
effective temperatures imparted by dark kinetic+annihilation heating that can
be measured at the James Webb Space Telescope at various signal-to-noise
ratios, taken from Ref. [45]. The band denotes variations over NS radii
predicted by numerous equations of state as well as NS-DM relative velocities
from estimates by various NS population models. See Sec. 4.1.1 for further
details.
### 4.1 Dark matter kinetic and annihilation heating of neutron stars
#### 4.1.1 Capture and kinetic heating
Figure 9: Top. Capture cross section sensitivities for light dark matter scattering in a NS crust (left, via excitation of superfluid phonons in the inner crust) and in the NS core (via Pauli-blocked contact scattering on neutrons, although see Sec. 4.1.1 for a discussion on scattering in the superfluid core), and for heavier dark matter scattering in various layers of the crust and the core (right). These two plots are taken from Ref. [48]. See Sec. 4.1.1 for further details. Bottom. Sensitivities to the cutoff of effective CP-even scalar interactions of dark matter with relativistic, degenerate electrons in a NS, for DM that is spin-1/2 (left) and spin-0 (right). Also shown are the sensitivities for interactions with muons, protons and neutrons. The electron scattering limits are seen to widely complement terrestrial searches. These two plots are taken from Ref. [166]. See Sec. 4.1.1 for further details.

effect | change in capture rate | applicability | reference
---|---|---|---
EoS of star effects | $\mathcal{O}$(1): BSk20 $\to$ 21 | all $m_{\chi}$ | [167]
none: QMC-2 $\to$ BSk24 | all $m_{\chi}$ | [168]
mass-radius configuration | $\mathcal{O}(100)$ as $1\to 2.2M_{\odot}$ | all $m_{\chi}$ | [169]
nuclear self-energies | 30$-$100 | $m_{\chi}>$ 100 MeV, any EoS | [170]
nucleon structure | $\mathcal{O}(10^{3})$ | 2 $M_{\odot}$ NSs | [168]
non-elastic scattering | subdominant | $-$ | [168]
“collective” effects | $\mathcal{O}(1-10^{3})$ | 2 $M_{\odot}$ NS, | [171]
| | $m_{\chi}<100$ MeV, |
| | $A^{\prime}$ mediator |
superfluidity: energy gap | maybe $\mathcal{O}$(1) | $m_{\chi}\lesssim 35$ MeV, | [48]
| | single phonon excitation | [164]
NS opacity/ extinction factor | $\mathcal{O}(1)$ | $m_{\chi}>$ GeV | [169]
relativistic kinematics | $\sim 4$ | $m_{\chi}>$ GeV | [169]
$\sim 10$ | $m_{\chi}<$ GeV | [169]
gravitational focusing | $<2$ | all $m_{\chi}$ | [169]
light mediator kinematics | $\mathcal{O}(1)$ | $m_{\phi}/\mu_{\rm red}<10^{-1}$ | [172]
voided | $m_{\phi}/m_{\chi}<10^{-4}$
DM halo velocity distribution | $<2$ | all $m_{\chi}$ | [173]
Table 1: Known effects that modify the rates of dark matter capture in NSs.
See Sec. 4.1.3 for further description.
NSs are excellent captors of particle dark matter by virtue of their extreme
densities and steep gravitational potentials, and are also quite serviceable
as thermal detectors thanks to their typically low temperatures. While the
capture of DM in NSs and its subsequent thermal relaxation was first treated
in Ref. [11], it was only recently realized that this could be a minimal probe
of dark matter scattering on Standard Model (SM) states: the transfer of DM
kinetic energy to the NS’s constituent particles during the infall of DM at
semi-relativistic speeds overheats the NS [174]. It was also proposed that
upcoming infrared telescopes, e.g., the Thirty Meter Telescope (TMT) [175] and
the Extremely Large Telescope (ELT) [176] are sensitive to this “dark kinetic
heating” mechanism [174] for NSs out to about $100$ pc from Earth; a study has
also been dedicated to the sensitivity at the recently launched James Webb
Space Telescope (JWST) [177, 45], which has shown that finding an NS much
closer than 100 pc would likely be required. Thermal observations of nearer
pulsars could be made following the discovery of old, isolated NSs with radio
telescopes such as FAST [178], CHIME [179] and SKA [180]. Though their $B$
fields and rotational velocities are expected to be low, implying they
populate regions near the “pulsar death line” in $P$-$\dot{P}$ space beyond
which NSs are supposed to stop pulsing, NSs have been observed beyond the
death line [113, 111, 114, 112], calling into question models of NS pulsation
(as also discussed in Sec. 2.8). It is estimated that about $10^{5}$ NSs in
the Galaxy lie beyond the death line [113].
To illustrate the idea of dark kinetic heating let us consider the following
representative NS configuration:
$\displaystyle M_{\rm NS}=1.5\ M_{\odot},\quad R_{\rm NS}=12.85\ {\rm km}\ \Rightarrow\ v_{\rm esc}=\sqrt{\frac{2GM_{\rm NS}}{R_{\rm NS}}}\simeq 0.59\,,$ (44)
where $v_{\rm esc}$ is the escape speed at the surface. This configuration is
obtained for a Quark Meson Coupling (QMC) EoS of matter [168].
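As a quick numerical check of Eq. (44), the surface escape speed can be computed directly; a minimal Python sketch (the function name and rounded constants are illustrative):

```python
import math

# Constants in SI units
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C_LIGHT = 2.998e8    # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def escape_speed(mass_msun, radius_km):
    """Surface escape speed v_esc = sqrt(2 G M / R), as a fraction of c."""
    m = mass_msun * M_SUN
    r = radius_km * 1e3
    return math.sqrt(2 * G * m / r) / C_LIGHT

# Representative NS configuration of Eq. (44)
print(f"v_esc = {escape_speed(1.5, 12.85):.2f} c")  # 0.59 c
```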
For local DM density $\rho_{\chi}$ and average DM-NS relative speed $v_{\rm
rel}$ (which in the solar vicinity are $0.4$ GeV/cm$^{3}$ and 350 km/s [181]), the
DM mass capture rate is given by [11]
$\displaystyle\dot{M}=m_{\chi}C_{n\chi}=\rho_{\chi}v_{\rm rel}\times\pi b_{\rm max}^{2}\times p_{v}\times p_{\sigma}=p_{v}\,p_{\sigma}\times 1.76\times 10^{25}~{\rm GeV/s}~,$ (45)
where $b_{\rm max}=R_{\rm NS}(1+z)(v_{\rm esc}/v_{\rm rel})$ is the maximum
impact parameter of DM intersecting the NS, with $1+z=(1-v_{\rm
esc}^{2})^{-1/2}$ a blueshift factor magnifying the NS radius to a distant
observer, and $p_{v}$ is the probability that a scattered DM particle loses
sufficient energy to be captured. For instance, this probability $\simeq 1$
for scalar- or vector-mediated scatters, but may be suppressed for
pseudoscalar-mediated interactions that favor soft forward scatters [172]. Eq.
(45) is, of course, the DM capture rate for an isolated NS; an NS in a binary
system could capture DM at a rate greater by up to a factor of a few thanks to
gravitational assist [182].
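The geometric estimate of Eq. (45) is straightforward to evaluate; the sketch below uses the fiducial inputs of Eqs. (44)-(45) and lands within a factor of about two of the quoted $1.76\times 10^{25}$ GeV/s (the residual difference presumably reflects the velocity-distribution averaging behind the quoted number; function and variable names are illustrative):

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C_LIGHT = 2.998e8    # m/s
M_SUN = 1.989e30     # kg

def capture_mass_rate(mass_msun=1.5, radius_km=12.85,
                      rho_chi_gev_cm3=0.4, v_rel_km_s=350.0,
                      p_v=1.0, p_sigma=1.0):
    """Geometric DM mass capture rate of Eq. (45), in GeV/s.

    Mdot = rho_chi * v_rel * pi * b_max^2 * p_v * p_sigma, with
    b_max = R_NS (1+z) (v_esc / v_rel) and 1+z = (1 - v_esc^2)^(-1/2).
    """
    m = mass_msun * M_SUN
    r = radius_km * 1e3
    v_esc = math.sqrt(2 * G * m / r) / C_LIGHT    # in units of c
    one_plus_z = 1.0 / math.sqrt(1.0 - v_esc**2)  # blueshift factor
    v_rel = v_rel_km_s * 1e3 / C_LIGHT            # in units of c
    b_max = r * one_plus_z * (v_esc / v_rel)      # metres
    rho = rho_chi_gev_cm3 * 1e6                   # GeV/m^3
    return rho * (v_rel * C_LIGHT) * math.pi * b_max**2 * p_v * p_sigma

print(f"Mdot ~ {capture_mass_rate():.1e} GeV/s")  # O(10^25) GeV/s
```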
The probability that incident DM is scattered is given by
$p_{\sigma}=1-e^{-\tau}\simeq\tau=\sigma_{n\chi}/\sigma_{\rm cap}$, where
$\tau$ is the optical depth and the approximate equality holds in the
optically thin limit. The “capture cross section” above which $\tau>1$ in the
NS core is:
$\displaystyle\sigma_{\rm cap}=\begin{cases}\sigma_{0}\,(\bar{m}_{n}/m_{\chi})\,, & m_{\rm evap}<m_{\chi}<\bar{m}_{n}\,,\\ \sigma_{0}\,, & \bar{m}_{n}\leq m_{\chi}\leq{\rm PeV}\,,\\ \sigma_{0}\,(m_{\chi}/{\rm PeV})\,, & m_{\chi}>{\rm PeV}\,,\end{cases}$ (46)
where the NS geometric cross section $\sigma_{0}=\pi(\bar{m}_{n}/M_{\rm
NS})R_{\rm NS}^{2}\simeq 2.2\times 10^{-45}\,{\rm cm}^{2}$. One understands
the dependence on $m_{\chi}$ in Eq. (46) by considering the typical neutron
recoil energy in the neutron rest frame:
$\displaystyle\Delta E_{\rm DM}\simeq\frac{\bar{m}_{n}m_{\chi}^{2}(1+z)^{2}v^{2}_{\rm esc}}{\bar{m}_{n}^{2}+m_{\chi}^{2}+2(1+z)\bar{m}_{n}m_{\chi}}\ .$ (47)
The above expression is a good approximation to describe DM-neutron scattering
in the stellar rest frame as well, since the neutrons are typically non-
relativistic: their Fermi momenta, varying over a few 100 MeV across the NS,
are smaller than their $\sim$GeV mass. For $m_{\chi}<\bar{m}_{n}$, only
a fraction $\simeq 3\Delta p/p_{F}$ of degenerate neutrons close enough to
their Fermi surface receive the typical momentum transfer $\Delta
p=\sqrt{2\bar{m}_{n}\Delta E_{\rm DM}}$ to scatter to a state above the Fermi
momentum $p_{F}\simeq 0.4~\text{GeV}$. This “Pauli-blocking” effect gives
$\sigma_{\rm cap}\propto\Delta E_{\rm DM}^{-1/2}\propto m_{\chi}^{-1}$. The
so-called evaporation mass,
$m_{\rm evap}\simeq 20~{}{\rm eV}\ \bigg{(}\frac{T_{\rm NS}}{10^{3}~{}{\rm
K}}\bigg{)}~{},$ (48)
is the DM mass below which the thermal energy of the NS would kinetically
eject the captured DM from the stellar potential well [167, 183]. For
$\bar{m}_{n}\leq m_{\chi}\leq 10^{6}~\text{GeV}$, a single
scatter suffices for capture: $\Delta E_{\rm DM}\simeq\bar{m}_{n}v_{\rm
esc}^{2}\gamma^{2}>{\rm KE}_{\rm halo}$, the DM halo kinetic energy. For
$m_{\chi}>\text{PeV}$, multiple scatters are required for capture, so that
approximately $\sigma_{\rm cap}\propto{\rm KE}_{\rm halo}/\Delta E_{\rm
DM}\propto m_{\chi}$. The expression in Eq. (45) can be refined to account for
the velocity distribution of DM far from the NS [184].
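The piecewise scaling of Eq. (46) can be encoded directly. The defaults in the sketch below assume a 1.8 $M_{\odot}$, 12.5 km star (the configuration of Ref. [48]), which reproduces the quoted $\sigma_{0}\simeq 2.2\times 10^{-45}\,{\rm cm}^{2}$; function and parameter names are illustrative:

```python
import math

M_SUN_GEV = 1.116e57  # solar mass in GeV (1.989e30 kg x 5.61e26 GeV/kg)

def sigma_cap(m_chi_gev, mass_msun=1.8, radius_km=12.5,
              m_n=0.939, m_evap_gev=20e-9):
    """Capture cross section sigma_cap of Eq. (46), in cm^2.

    Below m_n, Pauli blocking suppresses capture as 1/m_chi; between
    m_n and a PeV a single scatter captures; above a PeV the required
    number of scatters grows linearly with m_chi.
    """
    r_cm = radius_km * 1e5
    sigma0 = math.pi * (m_n / (mass_msun * M_SUN_GEV)) * r_cm**2
    pev = 1e6  # 1 PeV in GeV
    if m_chi_gev < m_evap_gev:
        raise ValueError("below the evaporation mass, DM is not retained")
    if m_chi_gev < m_n:
        return sigma0 * (m_n / m_chi_gev)   # Pauli-blocking regime
    if m_chi_gev <= pev:
        return sigma0                        # single-scatter capture
    return sigma0 * (m_chi_gev / pev)        # multiple scatters needed

print(f"sigma_0 = {sigma_cap(1.0):.1e} cm^2")  # ~2e-45 cm^2
```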
The heating of the NS comes not only from the recoil of incident DM but from
two other secondary effects. As depicted in Fig. 8, a target neutron (or a
lepton) that is upscattered by DM leaves behind a hole in the Fermi sea. The
hole is filled up immediately by a nearby neutron from a higher energy level,
which in turn leaves a hole, and so on. This process spills over energy in the
form of radiation and kinetic energy, and is reminiscent of the Auger effect
observed in electron levels in superconductors; we will re-encounter this
effect as a means of NS internal heating in Sec. 4.8. The net energy deposited
in the NS by this effect, $E_{\rm Auger}$, is simply the difference in energy
between the Fermi surface and the position of the original hole. The energy
carried by the struck nucleon/lepton in excess of the Fermi energy, $E_{\rm
kin}$, is dissipated as kinetic energy above the Fermi surface. Thus the total
energy deposit $E_{\rm Auger}+E_{\rm kin}$ comes out to be simply the DM
recoil energy $\Delta E_{\rm DM}$. Yet another effect comes from the
superfluidity of nucleons (see Sec. 2.7). For $m_{\chi}\gtrsim 35$ MeV, DM
participates in elastic scattering by first breaking a nucleon Cooper pair,
which is bound with an energy given by the superfluidity energy gap $\sim$
MeV. The absorbed $\sim$ MeV energy is redeposited into the NS when the free
nucleon finds another and pairs up, liberating the gap energy. For
$m_{\chi}\lesssim 35$ MeV nucleons in the NS might not scatter elastically as
there isn’t enough energy transfer to break nucleon Cooper pairs, leaving DM
to capture via collective excitations instead [48, 164]. Light DM capture in
certain models through collective effects in NSs has been studied [171]. The
presence of DM self-interactions can enhance the capture rate by orders of
magnitude as initially captured DM particles can serve as captors of ambient
DM [185].
Once captured in the potential well, a DM particle repeatedly scatters on and
thermalizes with the NS until its orbit shrinks to within the radius of the
star, by which time most of its kinetic energy has been transferred. Under
equilibrium, the kinetic power of the infalling dark matter, constituting the
NS heating rate, equals the rate at which photons are emitted from the NS
surface, constituting the NS cooling rate. The latter is dominated by such
photon emission for NSs older than $\sim$Myr, as we saw in Sec. 2.6. The NS
luminosity corresponding to a temperature $T$ (in the NS frame) is then
$L=z\dot{M}=4\pi R_{\rm NS}^{2}T^{4}$, which attains a maximum value $L_{\rm
max}$ for unit capture probabilities $p_{\sigma}$ and $p_{v}$. For our
representative NS configuration (Eq. (44)), $L_{\rm max}=7.6\times
10^{24}~{}{\rm GeV/s}$, corresponding to a NS temperature seen by a distant
observer $\widetilde{T}=T/(1+z)$ of $\widetilde{T}=1400~{}{\rm K}$.
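Inverting the blackbody relation for the quoted $L_{\rm max}$ reproduces this temperature to within a few percent. A sketch, restoring the Stefan-Boltzmann constant in SI units (the helper name and the fiducial $1+z$ value are assumptions):

```python
import math

SIGMA_SB = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
GEV_PER_J = 6.242e9   # 1 joule expressed in GeV

def apparent_temperature(lum_gev_s, radius_km=12.85, one_plus_z=1.235):
    """Invert L = 4 pi R^2 sigma_SB T^4 for the NS-frame temperature T,
    then redshift it: T_tilde = T / (1+z) for a distant observer."""
    lum_w = lum_gev_s / GEV_PER_J
    area = 4.0 * math.pi * (radius_km * 1e3) ** 2
    t_ns = (lum_w / (area * SIGMA_SB)) ** 0.25
    return t_ns / one_plus_z

print(f"T_tilde ~ {apparent_temperature(7.6e24):.0f} K")  # close to 1400 K
```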
Temperatures in this range are measurable within reasonable integration times
at current and imminent infrared telescope missions [174, 186], in particular
at the recently launched JWST [45], and the forthcoming ELT and TMT. For
instance, the NIRCam instrument at JWST could constrain the surface NS
temperature at 1750 K with a signal-to-noise ratio (SNR) of 2 in $27.8~{}{\rm
hr}(d/10\ {\rm pc})^{4}$, where $d$ is the distance to the NS [174]; the IRIS
instrument at TMT could do the same in $19.4~{}{\rm hr}(d/10\ {\rm pc})^{4}$.
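The $(d/10\ {\rm pc})^{4}$ scaling follows because the received flux falls as $d^{-2}$ while, in the background-limited regime, the SNR grows as flux $\times\sqrt{t}$; holding SNR fixed thus requires $t\propto d^{4}$. A minimal sketch (function name illustrative):

```python
def exposure_hours(t_at_10pc_hr, d_pc):
    """Exposure needed to hold SNR fixed: flux ~ d^-2 and SNR ~ flux*sqrt(t)
    in the background-limited regime, so t scales as d^4."""
    return t_at_10pc_hr * (d_pc / 10.0) ** 4

# JWST/NIRCam benchmark from the text: 27.8 hr at 10 pc
print(exposure_hours(27.8, 10.0))  # 27.8 hr
print(exposure_hours(27.8, 20.0))  # 444.8 hr at twice the distance
```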
In the bottom right panel of Fig. 8 are displayed the NS effective
temperatures constrainable at JWST at various SNRs for integration times of
5.5 hr and 24.3 hr, using the F150W2 filter on NIRCam. In this plot taken from
Ref. [45], the band spans the range of the NS radii (which determines the
range of DM capture rates) predicted by various EoSs, and integrates over the
NS-DM relative velocities predicted by various NS population models in Ref.
[187, 188]. These sensitivities are for the case of NSs being heated not only
by the kinetic energy of infalling DM but also by DM annihilations, which we
will discuss in Sec. 4.1.2. Searches for DM using NS thermal emissions are
best carried out with NSs whose “standard” temperatures are expected to be
below approx. $1000$ K. Thus one would need NSs older than 10 Myr (Fig. 4),
making the determination of their age via spin-down or kinematic
considerations (Sec. 2.8) crucial. One would also need them sufficiently
isolated to ensure no accretion of material from a binary companion.
The DM-nucleon scattering cross section may be so large that DM scatters
dominantly with the $\sim$km-thick low-density crust of the NS before reaching
the $\sim$20 km-diameter denser core. Moreover, the core may consist of exotic
phases of high-density matter such as meson condensates and deconfined $ud$ or
$uds$ quark matter, the latter of which may exist in any of the multiple
phases discussed in Sec. 2.4; in such cases, the dynamics governing DM
scattering cannot be unambiguously computed, whereas the better understood
crust can be treated robustly as a DM captor. DM scattering with the NS crust
leads to surface emission of photons under thermal equilibrium analogous to
capture in the NS core discussed above, hence the observational signatures of
NS heating are unchanged. In Figure 9 we show the DM capture cross section
$\sigma_{\rm cap}$ for every layer of the NS described in Sec. 2.4, derived in
Ref. [48] for a 1.8 $M_{\odot}$ mass, 12.5 km radius NS. For DM masses below
about 10 MeV (left panel), DM capture can occur by scattering on superfluid
neutrons in the inner crust, and exciting phonons. The single-phonon emission
mode is expected to dominate; it proceeds via a static structure function
$S(\Delta p)\simeq\Delta p/(2m_{n}c_{s})$ that relates the per-nucleon cross
section to the phonon-excitation cross section, where $c_{s}$ is the phonon
speed. Due to the
proportionality to the transfer momentum, $\sigma_{\rm cap}\propto
m_{\chi}^{-1}$ similar to the Pauli-blocking regime of the NS core discussed
above. The latter sensitivity (applicable when the core is populated mainly
by neutrons) is also shown for comparison in the plot. For DM masses above
about 100 MeV (right panel), DM capture can occur by scattering on individual
nucleons locked up in nuclei in the outer crust by transferring energies
greater than their $\sim$MeV binding energy. Scattering on nuclei is generally
suppressed: large $\Delta p$ leads to loss of nuclear coherence over multiple
nucleons, and small $\Delta p$ leads to loss of coherence over multiple
nuclei, described by a lattice structure function. Deeper down in the inner
crust, heavier-than-100-MeV DM capture proceeds by scattering on loosely bound
nucleons, and even further down, by scattering on the pasta phase. Pasta
scattering may either be on individual nucleons at high DM masses or on
multiple nucleons at low DM masses as described by response functions
accounting for inter-nucleon correlations. A resonant peak in the response
function is seen to enhance the capture sensitivity near $m_{\chi}\simeq$ 100
MeV. For comparison is also shown the DM capture cross section for scattering
in an NS core dominated by neutrons.
Even in the absence of exotic phases, NS cores are expected to contain
$\sim$10% level populations of protons, electrons, and muons thanks to beta
chemical equilibrium. DM may possibly be leptophilic, such that scattering
at tree level is solely on $e^{-}$ and/or $\mu^{-}$, or isospin-violating,
such that scattering is dominantly on protons. NS capture and heating applies
to these scenarios, too [174]. While the Fermi momenta of protons and muons
are smaller than their masses, making them non-relativistic and amenable to the
above treatment, those of electrons are 1–2 orders of magnitude greater than
$m_{e}$, warranting relativistic kinematics to treat their DM capture in the
stellar rest frame [189, 167, 190, 191, 192, 193, 194, 166]. This also makes
the treatment of Pauli-blocking non-trivial [192, 166]. In particular, the
capture probability accounting for Pauli-blocking, relativistic scattering and
summing over multiple scatters is [166]
$df=\sum_{N_{\text{hit}}}\;d\sigma_{\rm CM}\,v_{\rm
Mol}\,dn_{\text{T}}\,\frac{\Delta t}{N_{\text{hit}}}\,\Theta\left(\Delta
E-\frac{E_{\text{halo}}}{N_{\text{hit}}}\right)\Theta\left(\frac{E_{\text{halo}}}{N_{\text{hit}}-1}-\Delta
E\right)\Theta\left(\Delta E+E_{p}-E_{\text{F}}\right)\ ,$ (49)
where $v_{\rm Mol}$ is the Möller velocity that relates the cross section in
any frame to that in the center of momentum frame ($d\sigma_{\rm CM}$),
$dn_{\text{T}}$ is the differential volume of the target momentum space
normalized to the Fermi volume, $E_{\rm halo}$ is the DM halo kinetic energy,
and $\Delta E$ is the energy transfer. We refer the reader to Ref. [166] for a
detailed formalism. In Figure 9’s bottom panels we show the NS capture
sensitivity to contact interaction cutoffs versus $m_{\chi}$ for scalar-type
operators involving spin-1/2 and spin-0 DM. For electron scattering the NS
capture reach is seen to be orders of magnitude greater than that of
terrestrial direct searches for $m_{\chi}>$ MeV, and indeed completely
complements the latter for sub-MeV DM masses.
NS capture-and-heating can also provide orders-of-magnitude improvement over
Earth-bound searches for DM with scattering that is
1. spin-dependent, since scattering directly on fermions instead of nuclei does not lead to the loss of nuclear coherence that limits spin-dependent searches at direct detection [186, 169, 195],
2. and/or velocity-dependent [186, 169, 195], since semi-relativistic DM speeds at the point of capture overcome velocity-suppressed scattering rates,
3. inelastic [174, 196, 197], since again the high DM speeds ensure that $\mathcal{O}(100)$ MeV mass splittings between the DM and its excited state can be probed, as opposed to $\mathcal{O}(100)$ keV at direct detection,
4. below the so-called neutrino floor at direct searches, coming from irreducible neutrino backgrounds that are irrelevant for NS capture (see Fig. 9, top right panel), and
5. with heavier-than-PeV DM, where DM capture proceeds mainly through multiple scattering in transit [198, 199, 174].
#### 4.1.2 Dark matter self-annihilations, nucleon co-annihilations, and
induced nucleon decay
While the discussion above focused on NS heating from the transfer of captured
DM kinetic energy, applicable to any particulate dark matter model – in
particular to non-annihilating DM such as asymmetric DM – certain scenarios
may lead to DM annihilation inside the NS that further brightens it [184, 200]
and thereby facilitate observations [174, 186, 48, 45], in some cases reducing
telescope integration times by a factor of 10. For instance, JWST/NIRCam could
constrain a 2480 K NS, heated by local DM kinetic energy + annihilations, with
SNR 2 in 2.5 hr $(d/10~{\rm pc})^{4}$, and TMT/IRIS could do so in 0.56 hr
$(d/10~{\rm pc})^{4}$ [174]; compare these with kinetic heating-only exposure
times in Sec. 4.1.1. Fig. 8 shows JWST sensitivities in more detail, as
discussed in Sec. 4.1.1.
Self-annihilations of DM into most SM states would result in NS heating, the
exception being neutrinos with sub-100 MeV energies, whose optical depth in
the NS material is too small for them to be trapped [201]. In any case, this phenomenon
relies intricately on whether or not the DM thermalizes with the NS within its
lifetime, since DM may possibly annihilate much more efficiently if it is
collected within a small volume in the NS core; this is a highly model-
dependent question [189, 194, 48] as discussed in Sec. 4.4.1. To understand
this, consider the evolution of the number of DM particles $N_{\chi}$ within a
volume $V$ of the NS self-annihilating with a thermally averaged cross section
$\langle\sigma_{\rm ann}v\rangle$, and its solution:
$\displaystyle\frac{dN_{\chi}}{dt}=C_{\chi}-\frac{\langle\sigma_{\rm ann}v\rangle N_{\chi}^{2}}{V}~,\qquad N_{\chi}(t)=\sqrt{\frac{C_{\chi}V}{\langle\sigma_{\rm ann}v\rangle}}\tanh\bigg(\frac{t}{\tau_{\rm eq}}\bigg)~,\qquad\tau_{\rm eq}=\sqrt{\frac{V}{C_{\chi}\langle\sigma_{\rm ann}v\rangle}}~,$ (50)
where $C_{\chi}=C_{n\chi}+C_{\chi\chi}$ is the total DM capture rate via
scattering on nucleons (Eq. (45)) and, through self-interactions, on DM
already accumulated in the NS, and $\tau_{\rm eq}$ is the characteristic
timescale for equilibrium between capture and annihilation to establish, after
which $N_{\chi}(t)$ achieves a steady state ($dN_{\chi}/dt\rightarrow 0$).
Thus for $t>\tau_{\rm eq}$, the total annihilation rate equals the capture
rate. When $V$ is the thermal volume (Eq. (52)), one can then compute the
minimum annihilation cross section required for capture-annihilation
equilibrium to occur well within the age of an observed NS, $\tau_{\rm NS}$.
Using a partial-wave expansion $\langle\sigma_{\rm ann}v\rangle=a+bv^{2}$, the
condition may be written for $s$-wave and $p$-wave domination as [194]
$\displaystyle a>7.4\times 10^{-54}\ {\rm cm^{3}/s}\ \bigg(\frac{{\rm Gyr}}{\tau_{\rm NS}}\bigg)^{2}\bigg(\frac{C_{\rm max}}{C_{\chi}}\bigg)\bigg(\frac{\rm GeV}{m_{\chi}}\frac{T_{\rm NS}}{10^{3}~{\rm K}}\bigg)^{3/2}~,\qquad b>2.9\times 10^{-44}\ {\rm cm^{3}/s}\ \bigg(\frac{{\rm Gyr}}{\tau_{\rm NS}}\bigg)^{2}\bigg(\frac{C_{\rm max}}{C_{\chi}}\bigg)\bigg(\frac{\rm GeV}{m_{\chi}}\frac{T_{\rm NS}}{10^{3}~{\rm K}}\bigg)^{1/2}~,$ (51)
where $C_{\rm max}$ is the maximum capture rate achieved at the saturation
cross section.
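One can verify that the $\tanh$ profile in Eq. (50) really solves the capture-annihilation balance equation, and that it saturates at the steady state $\sqrt{C_{\chi}V/\langle\sigma_{\rm ann}v\rangle}$ for $t\gg\tau_{\rm eq}$, by integrating the ODE numerically; a sketch in arbitrary units:

```python
import math

def n_chi(t, c_cap, sigv, vol):
    """Analytic solution of dN/dt = C_chi - <sigma_ann v> N^2 / V, Eq. (50)."""
    tau_eq = math.sqrt(vol / (c_cap * sigv))
    return math.sqrt(c_cap * vol / sigv) * math.tanh(t / tau_eq)

# Forward-Euler integration of the ODE, in arbitrary units
c_cap, sigv, vol = 1.0, 1e-6, 1.0
tau_eq = math.sqrt(vol / (c_cap * sigv))   # equilibration timescale
n, dt = 0.0, tau_eq / 1e4
for _ in range(5 * 10**4):                  # integrate out to t = 5 tau_eq
    n += (c_cap - sigv * n**2 / vol) * dt
analytic = n_chi(5 * tau_eq, c_cap, sigv, vol)
assert abs(n - analytic) / analytic < 1e-3
# For t >> tau_eq, capture and annihilation balance (dN/dt -> 0) and
# N approaches the steady-state value sqrt(C_chi V / <sigma_ann v>).
assert abs(analytic / math.sqrt(c_cap * vol / sigv) - 1.0) < 1e-3
```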
Interestingly, a thermal Higgsino of 1.1 TeV mass, a largely unconstrained
true electroweak WIMP [140], would thermalize with just the NS crust rapidly
enough to heat a reasonably old NS through annihilations in equilibrium with
the rate of capture [48]. We also remark that, because the NS luminosity from
kinetic and annihilation heating scales differently with the NS mass and
radius, it should in principle be possible to distinguish between the two
heating mechanisms using an ensemble of NSs [174, 45].
An interesting way to probe DM self-annihilations in NSs is possible if the
primary annihilation products are feebly interacting DM-SM mediators that live
long enough to exit the star before decaying to SM states. One could search
for a flux of these states sourced by DM “focused” in celestial bodies via
capture. For gamma-ray final states, limits have been imposed with Fermi and
H.E.S.S. data on DM-nucleon scattering and DM self-annihilation cross sections
using brown dwarfs for sub-GeV DM and NSs for TeV mass DM [126]. For neutrino
final states, limits in the TeV-PeV range come from IceCube, KM3NeT and
ANTARES [202, 203].
Dark matter species that carry negative baryon number, arising for instance
from “hylogenesis” models, could annihilate with baryons in a NS post-capture
leading to possibly observable heating signals [204, 205, 206]. Such co-
annihilations with nucleons are also possible in models of “dark baryons” that
undergo quantum mixing with the neutron [207]. Yet another co-annihilation-
like scenario resulting in NS heating is when a component of the DM comes in
the form of magnetically charged black holes (MBHs) [208]. Subspecies that
come with electroweak-symmetric coronas are expected to be near-extremal in
mass; however, upon encountering NSs they may become non-extremal: first they
may be captured in the NS, stopped by drag from the Fermi-degenerate gas, and
then they could absorb nucleons that are emitted back as (baryon number-violating)
Hawking radiation, overheating NSs. A smaller deposit of heat could come from
mergers of captured MBHs that enhance Hawking radiation, mimicking a self-
annihilation process. Energy depositions from DM annihilations may also
possibly nucleate quark matter bubbles in the NS interior, resulting in
emission of radiation, cosmic rays and gravitational waves [209].
The production of x-rays and other high-energy effluents emitted from NSs,
resulting from monopoles passing through and catalyzing nucleon decay, has
been studied [210, 211]. This provides a strong bound on the abundance of
monopole species that induce nucleon decay, a well-motivated class of
monopoles arising from symmetry breaking in Grand Unified Theories.
#### 4.1.3 Improvements and uncertainties
The above treatment has been improved by accounting for a number of physical
effects in the NS, which in some cases leads to observational uncertainties;
these effects are collected in Table 1. The largest uncertainty in the capture
rate, spanning two orders of magnitude, comes from the unknown mass of the NS
candidate that will be observed [169], unless some precise mass-radius
measurement is performed. Other effects that may modify the DM capture rate,
applicable to different DM and NS mass ranges, are variations in the EoS of NS
matter, self-energies from the nuclear potential, nucleon structure that
suppresses coherent scattering, nucleon superfluidity, extinction in the NS in
the optically thick regime, scattering on relativistic nucleons, gravitational
focusing in the NS interior layers, suppression of scattering via the
propagator of a mediator of mass smaller than the transfer momentum, and the
Galactic velocity distribution of DM. Table 1 lists the appropriate references
that treat these effects.
Figure 10: Sensitivities of self-consistent dark matter models to NS kinetic
heating; see Sec. 4.1.4. Top left. [174] Electroweakino singlet and doublet
mass parameters for various $\tan\beta\equiv$ ratio of Higgs VEVs, that may be
cornered through inelastic scattering of thermal Higgsino DM in the NS via
excitation to charged and neutral states (regions marked by “$\delta<$ GeV”).
Top right. [195] As a function of DM mass, gluonic coupling to an axion-like
particle that mediates velocity-dependent scattering interactions. The gray
region depicts limits from beam dumps, rare meson decays, and astrophysics. NS
capture can also proceed through mediation by a CP-even scalar in the theory,
which gives rise to limits from direct detection. Bottom left. [212] The
orange region can be probed for spin-0 DM scattering on muons in the NS by
exchanging a $U(1)_{L_{\mu}-L_{\tau}}$ gauge boson. Also shown are constraints
from DM self-interactions. Bottom right. [207] NS temperatures achieved by
capture and heating of the anti-particle of DM carrying baryon number = 1, in
a scenario where DM self-interacts repulsively and annihilates to the mediator
$\phi$ that then decays to SM states that deposit heat.
#### 4.1.4 Dark matter models that heat neutron stars through scattering and
annihilation
Making use of the general effects discussed above, specific UV-complete and
self-consistent DM models have been explored in the context of NS capture and
heating. These include the supersymmetric partner of the Higgs field, the
Higgsino, that captures through inelastic scattering to electrically neutral
and charged excited states [174, 48], a generalization of this to electroweak
multiplets [213], a model of DM with a vector force-carrier of a gauged
$L_{\mu}-L_{\tau}$ interaction [191], DM in the form of a GeV-scale “dark
baryon” that mixes with the neutron [207], simplified models of DM (specifying
a single state each for DM and the mediator) with various mediator species
[214, 215, 216], DM that arises as a pseudo-Goldstone boson [217], models of
dark sectors that can explain the muon $g-2$ anomaly [218], and consistent
models of DM interacting with nucleons through a pseudoscalar mediator: axion-
like particles and a CP-odd state that arises in a Two-Higgs doublet model
[195]. The sensitivities to parameters of some of these scenarios are shown in
Fig. 10.
#### 4.1.5 Neutron star reheating mechanisms not involving dark matter
A search for DM reheating NSs in the late stages of their cooling must
encompass understanding other astrophysical mechanisms that could possibly do
the same. We discuss below those that feature prominently in the literature.
1. DM capture in NSs would not encounter a “neutrino floor” due to very dilute ambient neutrino densities that produce suppressed recoils/absorption on NS constituents, owing to low cross sections and Pauli-blocking. However, it is natural to ask if there is an “ISM floor” from accretion of interstellar material. It turns out that old, isolated NSs with spin periods $<1000$ seconds do not accrete ISM as they are in an ejector phase [219]: a “pulsar wind” of ISM outflow powered by the NS’s magnetic field, being much denser than the inflowing material attempting accretion, would pre-empt accretion via kinetic pressure. Even if the pulsar wind happens to be weak enough for the ISM to overcome it, there is a second barrier to accretion: the magnetosphere co-rotating with the NS will impart centrifugal acceleration to the ISM, spraying away the gas – the propeller phase. For NSs with unusually large spin periods of $>1000$ seconds, these arguments do not apply; instead, infalling ISM would be deflected along the magnetic field lines of the NS and accretion will be confined to a small polar region, which can be distinguished from all-surface thermal emission. In any case, the ISM density in the local 100 pc is $10^{-3}$ GeV/cm$^{3}$ [220], so that any ISM accretion will be outdone by present-day DM capture near geometric cross sections.
2. Rotochemical heating could result from an imbalance in chemical potentials as the NS material is driven out of beta chemical equilibrium by the deceleration of the NS’s rotation. Reactions that seek to restore chemical equilibrium deposit heat in the NS. This mechanism could occur for NSs with small (sub-7 ms) pulsar spin periods at birth for certain nucleon pairing models [221] – a requirement in tension with studies that find that natal spin periods are likely $\mathcal{O}(10-100)$ ms (see the references listed in Ref. [222]).
3. Other astrophysical late-time NS reheating mechanisms include [223, 224]: magnetic field decay, which dissipates energy into the NS material; crust cracking, which arises when the NS crust breaks as the NS relaxes from an oblate to a spherical shape, releasing accumulated strain energy; and vortex creep, which develops as superfluid vortex lines travel outward as the NS spins down and get pinned to the nuclear lattice in the inner crust, thereby introducing a velocity difference between the crust and the superfluid core, which dissipates energy in the star.
We note that these mechanisms are speculative, and none have been
unequivocally observed. An exclusion set by non-observation of DM-induced
heating at imminent telescopes would also rule out these mechanisms. Another
notable point is that while the rotational power of NSs goes into dipole
radiation, which in turn illuminates the nebula surrounding the pulsar as we
saw in Sec. 2.8, it very likely does not contribute to the NS thermal
luminosity. This is already apparent in the Crab Nebula example discussed in
Sec. 2.8, but can also be inferred from x-ray emission bounds on observed
pulsars which have thermal luminosities 5-6 orders smaller than the rotational
power: see Table 5 of Ref. [225]. Further, the diffusion of the $B$ field in
the NS is also unlikely to heat the NS; as argued in Ref. [226], NSs older
than about Myr are cool enough for magnetic diffusion timescales to exceed the
NS age, effectively shutting off $B$ field dissipation regardless of the
initial strength of the field.
Figure 11: Top. NS cooling timescale versus surface temperature (obtained
from Eq. (30)), superimposed on a plot of time between DM clump-NS encounters
versus the energy deposited by kinetic heating during the passage of a clump
for various NS radii (left). The ticks on either x-axis and either y-axis
correspond one-to-one to each other. This plot shows the region in which NSs
are expected to glow at a steady temperature, so that observing a single NS is
enough to set constraints, and the region where overheated NSs cool down
rapidly between clump encounters, so that astronomical surveys are required to
observe the fraction of overheated NSs in an ensemble. On the right are future
sensitivities of astronomical observations of NSs on DM clump radii and
masses, exploiting dark kinetic heating, seen to be complementary to limits
from other experiments. These limits are valid for DM-nucleon cross sections
greater than the values for which the effects in these searches are relevant.
These two plots are taken from Ref. [164]; see Sec. 4.2 for further details.
Bottom left. Dark clump masses and radii constrained by compact stellar
thermonuclear explosions, occurring for the minimum DM-nucleus cross sections
per DM mass overlaid; see Sec. 4.2. Bottom right. For two different internal
density profiles, the mean flux density of transient radio signals at various
telescopes from encounters of axion miniclusters as a function of the transit
time (= signal burst duration), taken from Ref. [227]. See Sec. 4.11 for
further details.
### 4.2 Neutron stars and halo substructure
Numerous cosmologies predict enhanced small-scale power, for instance via an
early matter-dominated era or DM self-interactions assisting primordial
density perturbations, resulting in a substantial fraction of DM surviving in
substructure termed variously as clumps, subhalos, minihalos and miniclusters
[228, 229, 230, 231, 232, 233, 234, 235]. If DM has scattering interactions
with the SM, and if the interacting component resides in clumps, direct
searches may have observed no conclusive signal simply because the Earth has
yet to encounter a subhalo since their inception. In this scenario, subhalo DM
may be observed by its heating of old, nearby NSs: the latter may travel
through DM clumps and capture constituent DM particles, giving rise to kinetic
and/or annihilation heating.
In the top left panel of Fig. 11, taken from Ref. [164], is shown the cooling
time of NSs as a function of the NS surface temperature in green, and in the
same plot is shown the energy deposited by clumps in NSs during encounters,
$E_{\rm meet}^{T}$, as a function of the time between NS-clump encounters for
various clump sizes. The $E_{\rm meet}^{T}$ in the top x-axis correspond to
the NS temperatures imparted in the bottom x-axis immediately following the
encounter. For encounter times shorter than cooling times, the NS will glow at
a steady-state luminosity, whereas for those longer than cooling times, NSs
would be expected to glow brightly for short durations following encounters
before dimming. In the latter case, sky surveys of large populations of NSs
may be able to pick out the fraction that is still above some temperature to
which the telescope is sensitive. In the top right panel, also taken from Ref.
[164], are shown clump mass vs radius regions that may be discovered by observing
more than 100 NSs above $10^{4}$ K in the local kiloparsec, e.g., by Roman/WFIRST
and Rubin/LSST, and excluded by observing a single NS with temperature $<$
1000 K, e.g. by JWST, ELT and TMT. Also shown is a region that is already
excluded by the observation of the coldest ($<$30,000 K) known NS PSR
J2144$-$3933 by the Hubble Space Telescope (HST) [236] for clumps made of
dissipative or strongly self-interacting DM, which would accrete onto NSs
through the Bondi–Hoyle–Lyttleton mechanism [237, 238, 239].
In addition, in the presence of a long-range fifth force, NS heating by clumps
may be enhanced by greater focusing effects, greater DM kinetic energies upon
arrival at the NS surface, and seismic oscillations induced by an effective
tidal force. In the bottom right panel of Fig. 16, taken from Ref. [240], is
shown the limit from overheating PSR J2144$-$3933 on the effective NS-clump
coupling versus clump mass, for four values of the range of the fifth force
arising from a Yukawa potential [241]. (We do note that the DM need not be in
the form of a clump for these limits to apply, but could also be a tightly
bound composite.) The curve labelled “NS kinetic heating” corresponds to
having an additional short-range interaction enhance DM capture. These limits
are complementary to those coming from the Bullet Cluster on DM self-
interactions mediated by the light mediator, from weak equivalence principle
tests using Galacto-centric motions of celestial bodies on the inter-baryonic
force mediated by the same, and from the 15 year dataset of the NANOGrav
pulsar timing array (see also Sec. 4.10.1).
Yet another signature of clumps with nucleon scattering interactions is
thermonuclear explosions induced in compact stars, as discussed in Sec. 3.3.
These could be Type Ia-like supernovae in carbon-oxygen WDs or x-ray
superbursts in the carbon ocean layer in NS crusts (Sec. 2.5). Constraints
from the observed frequency of NS superbursts (Sec. 4.3) and from the
existence of WDs (Sec. 3.3) are shown in the left bottom panel of Fig. 11 in
the plane of clump size and mass; the contours overlaid are the minimum
reduced nuclear cross sections required to ignite a trigger mass of the
stellar material. This method of constraining clumps could be extended to
those with baryonic long-range forces discussed above. In that case, limits on
the effective coupling apply to far smaller values (all the way to unity) than
shown in Fig. 16 bottom right panel, and to much higher clump masses. See Ref.
[153].
Clumps encountering NSs can also be made of axions, leading to interesting
signatures depicted in the bottom right panel of Fig. 11, which we
discuss in Sec. 4.11. We also note that the phenomenology of black hole
formation inside NSs (Sec. 4.4) would be applicable here if NS-clump
encounters are frequent enough.
### 4.3 Dark matter inducing superbursts in neutron stars
Superbursts in NS carbon oceans, described in Sec. 2.5, can be induced by
transiting DM if it is sufficiently heavy to deposit the requisite trigger
energy. Ref. [152] set limits on the cross sections and (super-Planckian)
masses of macroscopic DM opaque to nuclei by satisfying the runaway criteria
(Eqs. (16) and (18)) and requiring that the time between DM-NS encounters is
smaller than the inferred maximum recurrence time of the superburst 4U
1820+30. Ref. [153] set limits on the masses, radii and interaction strengths
of dark clumps (shown in Fig. 11 bottom left panel) and nuggets with long-
range baryonic forces, using inferred recurrence times of the six superbursts
(out of 16 detected in total) that have been observed to repeat [82, 83].
### 4.4 Dark matter that implodes neutron stars into black holes
Dark matter that is captured by an NS, after repeated re-scattering with the
NS medium, will settle into a small thermalized region at the center of the
NS. As more DM is collected, this spherical agglomeration can grow to a large
enough mass that it collapses and forms a small black hole, which may
(depending on its mass) subsequently accrete the entirety of the NS,
transforming it into a solar mass black hole [11]. The processes of DM capture
in NSs, thermalization, accumulation to the point of collapse, collapse,
formation of a black hole, and its possible evaporation via Hawking radiation
or growth consuming the NS, have been investigated in Refs. [242, 184, 118,
200, 243, 244, 245, 246, 247, 248, 249, 85, 250, 251, 252, 253, 254, 148, 141,
255, 256, 257, 258, 143, 150, 259, 260, 261]. In addition, possible
astrophysical signatures of DM converting NSs to black holes have been
identified in, $e.g.$, Refs. [249, 262, 251, 252].
The kind of DM that is by and large studied in this context is “asymmetric
dark matter”: DM composed predominantly of particles, as opposed to a symmetric
population of particles and anti-particles. This emulates the visible
universe, which is primarily matter (electrons, nucleons) and not anti-matter;
indeed, the asymmetry in DM may be linked to that of the visible sector [142,
263], but this is not necessary for the discussion that follows. The primary
feature that permits asymmetric DM to convert NSs into black holes is that it
is typically non-annihilating (for an exception, see Ref. [264]), and so as
it collects inside the NS, it is not expected to annihilate to Standard Model
states. This may be compared with symmetric, annihilating DM discussed in Sec.
4.1.2. Investigation into what fraction of the DM may self-annihilate or co-
annihilate with nucleons, while still forming a black hole inside the NS, was
undertaken in Refs. [247, 248, 265].
Another kind of DM which could convert NSs into black holes is primordial
black holes [266, 267]. A PBH captured in an NS can settle inside, accrete NS
material, and convert the NS into another black hole [268, 269, 270, 271, 272,
251, 273, 274, 275, 145, 276, 277, 278]; this is detailed in Section 4.5. For
the remainder of this sub-section we will focus on particle DM.
We now turn to details of the processes leading asymmetric dark matter to
convert NSs into black holes. They proceed as follows: (1) DM is captured in
the NS and thermalizes with the NS interior, forming a small ball of DM at the
center, (2) the DM ball reaches a critical mass at which it collapses, and
through some cooling process continues to collapse until (3) a small black
hole forms which, provided accretion of NS material outstrips Hawking
radiation, will result in the conversion of the NS to a black hole. Figure 12
left panel shows a simple schematic of this process.
Figure 12: Left. Schematic of asymmetric dark matter converting a NS into a
black hole. Right. Dark matter per-nucleon scattering cross section versus
mass bounds on heavy fermionic asymmetric dark matter from the observation of
old Milky Way pulsars that have not been converted to black holes [251],
compared with terrestrial direct search limits and their neutrino floor. Also
shown are prospects for observing NS mergers with accompanying kilonovae,
localized to 1 kpc precision inside Milky Way-like spiral galaxies. A detailed
discussion of Milky Way pulsar ages, and in particular PSR J1738+0333, which
has a characteristic age confirmed by the age of its WD companion, can be
found in Ref. [279].
#### 4.4.1 Dark matter thermalization in neutron stars
In step (1) above, the size of the thermalized region is determined by the
temperature of the NS, which sets the final temperature of the DM particles,
and by the central density of the NS, which sets the gravitational potential
binding energy. A simple application of the virial theorem yields an estimated
DM thermal radius of [247]
$r_{\rm th}\approx 20~{}{\rm cm}\left(\frac{{\rm
GeV}}{m_{\chi}}\right)^{1/2}\left(\frac{T_{NS}}{10^{3}~{}\rm
K}\right)^{1/2}\left(\frac{10^{15}~{}{\rm g/cm^{3}}}{\rho_{\rm
NSc}}\right)^{1/2},$ (52)
where $\rho_{\rm NSc}$ is the NS central density. The time it takes for DM to
sink to this region depends on a few timescales (see $e.g.$, Ref. [143],
Section 3 for a review), but usually the longest is the time it takes for DM
to scatter with its lowest velocities/temperatures on nucleons, after having
mostly settled inside the NS. A detailed calculation of this timescale
requires modeling the NS core, and so the result will depend on the density,
degeneracy, and possibly even new QCD phases in the NS interior. For neutrons
treated as a degenerate fluid, we have [189]
$t_{\rm th}\approx 3000~{}{\rm
yr}~{}\frac{\frac{m_{\chi}}{m_{n}}}{\left(1+\frac{m_{\chi}}{m_{n}}\right)^{2}}\left(\frac{2\times
10^{-45}~{}{\rm
cm^{2}}}{\sigma_{n\chi}}\right)\left(\frac{T_{NS}}{10^{5}~{}{\rm
K}}\right)^{2},$ (53)
where this expression assumes a momentum-independent cross section for
spin-1/2 DM scattering on nucleons via a heavy mediator. Extensions to spin-0
DM, Lorentz structures of DM-nucleon interactions leading to momentum-
dependent cross sections, and light mediators were investigated in Ref. [194].
In the above expression, the thermalization timescale counter-intuitively
decreases with increasing DM mass above $m_{n}$: one would naively expect that
heavier DM takes longer to thermalize. But the effect comes about because
$t_{\rm th}$ is set by the inverse of the energy loss rate (in turn depending
on the DM-nucleon scattering rate) in the NS degenerate medium with phase
space restrictions, and this rate goes as positive powers of the (continually
degrading) DM momentum $k_{\rm cold}$. For DM energies close to the NS
temperature, $k_{\rm cold}\simeq\sqrt{3m_{\chi}T_{\rm NS}}$, implying energy
is lost faster in the last few scatters for heavier DM, i.e., implying quicker
thermalization. In Figure 13 we show the per-nucleon cross section or
effective field theory coupling necessary for DM to thermalize inside an NS on
10 Gyr timescales for certain models.
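As a numerical illustration, Eqs. (52) and (53) can be transcribed directly. The function names and default arguments (a cold $10^3$ K star for $r_{\rm th}$, the fiducial $2\times 10^{-45}~{\rm cm^2}$ cross section for $t_{\rm th}$) are our choices for this sketch, not notation from the references; the formulas are evaluated exactly as printed above.

```python
def r_th_cm(m_chi_GeV, T_NS_K=1e3, rho_NSc=1e15):
    """DM thermal radius, Eq. (52), in cm (rho_NSc in g/cm^3)."""
    return 20.0 * (1.0 / m_chi_GeV)**0.5 * (T_NS_K / 1e3)**0.5 * (1e15 / rho_NSc)**0.5

def t_th_yr(m_chi_GeV, sigma_cm2=2e-45, T_NS_K=1e5, m_n_GeV=0.939):
    """Thermalization time, Eq. (53), in yr, for a momentum-independent
    DM-neutron cross section (spin-1/2 DM, heavy mediator)."""
    x = m_chi_GeV / m_n_GeV
    return 3000.0 * x / (1.0 + x)**2 * (2e-45 / sigma_cm2) * (T_NS_K / 1e5)**2
```

The mass dependence of `t_th_yr` reproduces the counter-intuitive behavior noted above: for $m_{\chi}\gg m_n$ the prefactor $x/(1+x)^2\to 1/x$, so heavier DM thermalizes faster.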
As discussed in Sec. 4.1.2, depending on the DM annihilation cross section,
thermalized DM collected within $r_{\rm th}$ can annihilate efficiently enough
to yield interesting signals.
Figure 13: Left. [189] Cross section for dark matter to thermalize in a
neutron star in 10 billion years, assuming a momentum-independent cross
section with neutrons. Right. [194] The case of scattering on neutrons through
the scalar current operator indicated in the figure with mediator mass
$m_{\phi}$. Thermalization in 10 Myr and 10 Gyr for different NS temperatures,
and a curve indicating parameters that lead to DM capture in the NS through
geometric cross sections, are shown.
#### 4.4.2 Collapse of dark matter and formation of small black hole
In step (2), after enough DM has collected in the thermalized region in the
NS, it will reach a critical mass at which it collapses. The exact density of
DM required to initiate collapse will depend on its self-interactions, and by
extension its EoS and sound speed while contained in the NS. Assuming
negligible self-interactions, the critical mass required for collapse is
$M_{\rm crit}\approx 7\times 10^{46}~{}{\rm GeV}\left(\frac{10^{7}~{}{\rm
GeV}}{m_{\chi}}\right)^{3/2}\left(\frac{T_{\rm NS}}{10^{3}~{}\rm
K}\right)^{3/2}\left(\frac{10^{15}~{}{\rm g/cm^{3}}}{\rho_{\rm
NSc}}\right)^{1/2}.$ (54)
For a detailed review of the conditions for collapse see $e.g.$, Section 4 of
[143]. It is generally the case that if DM thermalizes rapidly through
scattering with neutrons in the NS interior, then when it reaches the point of
collapse it will also rapidly shed the gravitational energy required to form a
black hole. This is because the temperature is higher during collapse and
hence the time to shed gravitational energy is typically shorter. As the
shortness of this timeframe is common, this part of the collapse dynamics is
not always treated explicitly, but Refs. [85, 143, 148, 11] provide more
detailed treatment, both in compact stars and other astrophysical bodies. The
time for the DM sphere to collapse below its Schwarzschild radius will depend
on whether it cools via scattering with neutrons or through other radiative
process, $e.g.$, emission of a light particle in the dark sector [85].
An additional consideration is whether enough DM will have collected to exceed
the dark sector Chandrasekhar mass (analogous to Eq. (7)), which for fermions is
parametrically of order
$M_{\rm Chand,f}\approx\frac{M_{\rm Pl}^{3}}{m_{\chi}^{2}}\approx
M_{\odot}\left(\frac{{\rm GeV^{2}}}{m_{\chi}^{2}}\right)~{},$ (55)
while for bosons this is
$\displaystyle M_{\rm Chand,b}$ $\displaystyle\approx$
$\displaystyle\frac{2M_{\rm
Pl}^{2}}{m_{\chi}}\left(1+\frac{\lambda}{32\pi}\frac{M_{\rm Pl}^{2}}{m_{\chi}^{2}}\right)^{1/2}$
(56) $\displaystyle\rightarrow$ $\displaystyle\begin{cases}2\frac{M_{\rm
Pl}^{2}}{m_{\chi}}~{},\ \ \lambda\ll 1,\\\
\frac{\sqrt{\lambda}}{2\sqrt{2\pi}}\frac{M_{\rm
Pl}^{3}}{m_{\chi}^{2}}~{},\lambda>100m_{\chi}/M_{\rm Pl}~{},\end{cases}$
where $\lambda$ is the boson $\phi$’s repulsive self-interaction coupling
arising in the Lagrangian $\mathcal{L}\supset-(\lambda/4!)\phi^{4}$.
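The fermionic and bosonic limiting masses can be compared numerically. This is a minimal sketch transcribing Eqs. (55) and (56) (the main expression, before taking limits), assuming $M_{\rm Pl}=1.22\times 10^{19}$ GeV:

```python
import math

M_PL = 1.22e19  # Planck mass in GeV

def m_chand_fermion(m_chi_GeV):
    """Fermionic Chandrasekhar mass, Eq. (55), in GeV."""
    return M_PL**3 / m_chi_GeV**2

def m_chand_boson(m_chi_GeV, lam=0.0):
    """Bosonic maximum mass, Eq. (56), in GeV; lam is the phi^4 coupling."""
    return 2 * M_PL**2 / m_chi_GeV * math.sqrt(
        1 + lam * M_PL**2 / (32 * math.pi * m_chi_GeV**2))
```

For GeV-mass DM the non-interacting bosonic maximum mass is smaller than the fermionic one by a factor of order $m_{\chi}/M_{\rm Pl}$, and because the correction term scales as $\lambda M_{\rm Pl}^{2}/m_{\chi}^{2}$, even a tiny repulsive coupling dominates the result.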
Attractive DM self-interactions could alter the amount of asymmetric fermionic
DM necessary for collapse to a black hole [246, 265, 279]. The collapse of
light fermionic DM is in principle permitted by the attractive self-
interaction mediated by a light scalar, however, a detailed study of the final
stage collapse to a black hole has pointed out an important caveat [280]: for
a simple scalar field potential consisting only of a mass term and a coupling
to the fermions, the effective mass of the scalar could grow during DM fermion
collapse, preventing collapse to a black hole. Whether bosonic self-
interactions let DM form black holes in NSs is a non-trivial question. In
particular, a large value of $\lambda$ can shift bosonic asymmetric DM bounds
from old NSs to higher DM masses [245, 247, 248]. Bosonic asymmetric DM
forming black holes inside an NS does so by forming a Bose-Einstein condensate
(BEC) [245, 244, 281, 247, 248, 252], from which collapse will proceed for
GeV-mass DM. The dynamics of the BEC prior to and following collapse would
affect whether a black hole is produced and is an area of investigation [281,
247, 248, 252, 282].
#### 4.4.3 Growth or evaporation of dark matter-formed black hole in the
neutron star
After a black hole is formed inside the NS, step (3) is to determine whether
it is so small that it will rapidly evaporate away via Hawking radiation, or
whether it is so large that through accumulating surrounding baryonic material
it will grow to consume the NS in a relatively short timeframe. Initial
studies of this process estimated whether Bondi accretion by the black hole
would proceed faster than Hawking radiation, which is entirely determined by
the initial mass of the black hole [245, 244]. Later studies incorporated the
accumulation of DM particles additionally collected into the NS and onto the
black hole, finding that this can substantially influence whether the black
hole would grow to consume the NS [247, 248, 265].
Altogether, the requirement that the black hole grows in the NS is given by
$\dot{M}^{(\rm NS~{}accretion)}+\dot{M}^{(\rm DM~{}accretion)}-\dot{M}^{(\rm
Hawking)}>0,$ (57)
where the first term is the NS accretion rate onto the black hole, the second
is the DM accretion rate onto the black hole, and the third is the Hawking
radiation rate. Each of these terms has been individually studied in the
context of asymmetric DM which causes NSs to implode:
1. 1.
NS accretion: The simplest treatment of NS accretion onto the black hole
assumes Bondi accretion. In practice, angular momentum of the NS fluid around
the black hole, for a rapidly spinning NS, can diminish accretion relative to
naïve Bondi accretion, but the high viscosity of the NS fluid results in
infall rates consistent with spherical Bondi accretion despite angular
momentum effects [283]. Sufficiently small black holes will have a quantum
penalty to accumulation of neutrons due to the neutron de Broglie wavelength;
this effect can be pronounced for black hole masses near the edge of growth
vs. evaporation [260]. The accretion of the NS fluid onto the black hole
inside a NS has been studied in a detailed simulation that accounts for
hydrodynamic and general relativistic effects [255], finding that in the final
stages of accretion, the mass of NS fluid ejected from the accretion zone is
likely less than about $10^{-4}~{}M_{\odot}$.
2. 2.
DM accretion: The accretion of DM onto the small black hole inside the NS can
substantially affect whether it grows or shrinks, especially when the DM
accretion rate onto the NS is maximized [247, 248, 265, 252].
3. 3.
Hawking radiation: There is a correction to the evaporation rate of black
holes inside NSs, coming from the Pauli blocking of Hawking radiation, since
the region around the black hole will be inhabited by a degenerate sea of SM
fermions [284].
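To make the competition in Eq. (57) concrete, here is a rough order-of-magnitude sketch. It keeps only spherical Bondi accretion (with the accretion eigenvalue set to 1) against Hawking evaporation, assuming a stiff-fluid sound speed $c/\sqrt{3}$ and a central density of $10^{15}~{\rm g/cm^3}$; the DM-accretion term and the de Broglie and Pauli-blocking corrections discussed in items 1 and 3 above are omitted, so the numbers are purely illustrative.

```python
import math

HBAR, C, G = 1.0546e-34, 2.998e8, 6.674e-11  # SI units
RHO_C = 1e18                                  # 1e15 g/cm^3 in kg/m^3
C_S = C / math.sqrt(3)                        # stiff-fluid sound speed

def bondi_rate(M_kg, rho=RHO_C, c_s=C_S):
    """Spherical Bondi accretion, dM/dt ~ 4 pi G^2 M^2 rho / c_s^3 (kg/s)."""
    return 4 * math.pi * G**2 * M_kg**2 * rho / c_s**3

def hawking_rate(M_kg):
    """Hawking mass-loss rate, dM/dt = hbar c^4 / (15360 pi G^2 M^2) (kg/s)."""
    return HBAR * C**4 / (15360 * math.pi * G**2 * M_kg**2)

def black_hole_grows(M_kg):
    """Eq. (57) with the DM-accretion term dropped."""
    return bondi_rate(M_kg) > hawking_rate(M_kg)

# Initial black hole mass at which the two rates balance
M_balance = (HBAR * C**4 * C_S**3 / (15360 * 4 * math.pi**2 * G**4 * RHO_C))**0.25
```

Under these assumptions the balance point comes out of order $10^{10}$ kg: black holes born heavier grow, lighter ones evaporate, in line with the initial-mass criterion of the early estimates [245, 244].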
#### 4.4.4 Signatures of dark matter that implodes neutron stars
Figure 14: Left. From a simulation of a NS accreting onto a black hole at its
center [255]. NS fluid density and velocity vectors are shown in the left half
of these figures, with angular momentum density shown on the right half, for
different NS spin parameters $\Omega$. Right. Number of NSs converted to black
holes in a Milky Way-like galaxy, along with the current NS implosion rate for
dark matter models that would cause NSs near Earth to implode in 10 Gyr
(“ADM1”) and 50 Gyr (“ADM2”) [251].
A number of striking astrophysical signatures arise from DM that converts NSs
to black holes. Firstly, the oldest known pulsars can be used to set limits on
asymmetric DM, since for a given background DM density, the existence of these
pulsars limits the accumulation of DM [242, 243, 244, 245, 246, 247, 248, 249,
184, 200]. However, the “characteristic age” of pulsars comes with caveats: it
is not always a good indicator of the actual age of the pulsar as discussed in
Sec. 2.8. One particular pulsar, PSR J1738+0333, is in a binary system with an
old WD, and thus to go with its characteristic age, has an additional age-
marker in its WD companion, both of which point to a $\gtrsim 5$ Gyr-old NS.
Hence this pulsar has been used to set bounds on asymmetric DM [279, 251]. A
recent work [285] integrates over the density of DM that NSs traverse during
their orbits around the Milky Way, refining bounds that use characteristic
ages of old, nearby pulsars.
A number of prompt and delayed signatures may be sought if DM is converting
Gyr-old NSs to black holes in regions where DM is denser than in the outer
regions of the Milky Way. In particular, the absence of millisecond pulsars in
the Galactic Center [286, 287] has been linked to models of DM that would
convert old NSs to black holes [249, 200]. These studies predict a maximum
# Identical Bands Around the Isobaric Rare Earth Even-Even Nuclei with the
Mass Number A = 164
M. A. Abdelsalam Physics Department, Faculty of Science, Al-Azhar University,
Cairo, Egypt H. A. Ghanim Physics Department, Faculty of Science, Al-Azhar
University, Cairo, Egypt M. Kotb Physics Department, Faculty of Science, Al-
Azhar University, Cairo, Egypt A. M. Khalaf Physics Department, Faculty of
Science, Al-Azhar University, Cairo, Egypt
###### Abstract
Eight pairs of rare-earth normally-deformed (ND) nuclei around the isobaric
nuclei with A = 164, having identical values of F-spin, $\pm$ $F_{0}$ and
$N_{p}$ $N_{n}$ ($N_{p}$ and $N_{n}$ are the number of valence protons and
valence neutrons respectively), have been studied. These pairs of identical
bands (IB’s) cover 16 mass units and are classified as (i) 3 pairs of nuclei
separated by (2p,2n) :(${}^{162}Yb-^{166}Hf$), (${}^{162}Er-^{166}Yb$),
(${}^{162}Dy-^{166}Er$) (ii) 2 pairs of nuclei separated by (4p,4n):
(${}^{160}Dy-^{168}Yb$), (${}^{160}Er-^{168}Hf$) (iii) 2 pairs of nuclei
separated by (6p,6n): (${}^{158}Er-^{170}W$) (${}^{158}Dy-^{170}Hf$) and (iv)
one pair of nuclei separated by (8p,8n): (${}^{156}Dy-^{172}W$).
We suggested a theoretical collective rotational formula containing three
parameters (CRF3) as an extended version of Bohr-Mottelson model to calculate
the ground state positive parity excitation energies. Also, the sd-version of
the interacting boson model (IBM) has been used to describe the nuclear shapes
by using the intrinsic coherent-state. The optimized models parameters for
each nucleus are adjusted by using a simulation search program to minimize the
root mean square deviation between the theoretical calculation and
experimental excitation energies. The best adopted model parameters of the
CRF3 are used to calculate the rotational frequencies $\hbar\omega$, the
kinematic $J^{(1)}$ and dynamic $J^{(2)}$ moments of inertia, and the evolution
of $J^{(1)}$ and $J^{(2)}$ with increasing $\hbar\omega$ is systematically
analyzed. A smooth, gradual increase in both moments of inertia was seen.
The calculated results agree excellently with the experimental ones which give
strong support to the suggested CRF3.
The adopted IBM parameters are used to calculate the potential energy surfaces
(PES’s) which describe the nuclear deformation. The PES’s for our nuclei show
two wells corresponding to the prolate and oblate sides, which indicates that
these nuclei are deformed and have rotational behavior.
The correlation quantities which identify the IB’s are extracted. It is found
that nuclei having the same value of $N_{p}N_{n}/\bigtriangleup$, where
$\bigtriangleup$ is the average pairing gap, exhibit identical excitation
energies and energy ratios in their ground state rotational bands.
Keywords : Interacting Boson model (IBM) - Identical Bands - Potential Energy
Surface
## 1 Introduction
The discovery of rotational bands in adjacent even-even and odd-mass
superdeformed (SD) nuclei in which the $\gamma$-ray transition energies are
nearly identical to within a few keV was an exotic and unexpected phenomenon
in nuclear structure physics [1, 2, 3, 4, 5]. Since the identical bands (IB’s)
have essentially identical transition energies, then the associated dynamical
moment of inertia are thus identical. Several explanations were put forward
[4, 5, 6, 7, 8, 9, 10, 11, 12] to understand the origin of IB’s phenomenon
assuming the occurrence of such IB’s to be a specific property of the SD
states in nuclei. The explanations of these IB’s include: the Coriolis force,
the particle alignment and pairing [13], the roles of special high-N orbitals
of intruder configuration and band crossing[14, 15, 16, 17], the pseudo-spin
in supersymmetry [7, 18, 19] and the supersymmetry with many-body interactions
[20].
Soon the phenomenon of low-spin identical bands was found in pairs of even-
even normal deformed (ND) nuclei [21], and in neighboring even-even and odd-
mass nuclei in rare-earth region where they have similar moments of inertia
[22, 23]. It was noted that low spin IB’s are not limited to nearby nuclei but
are widespread and found in pairs of even-even nuclei separated by up to 24
mass units (like ${}^{156}Dy,^{180}Os$) [24]. Attempts were made to understand
the low-spin IB’s in terms of some simple systematics of the moments of
inertia in the rare-earth region [25, 26, 27, 28, 29, 30] or from several
types of consideration [31].
For the description of normally deformed (ND) bands, some useful models were
proposed. Bohr and Mottelson [32] pointed out that, under the adiabatic
approximation, the rotational energy of an axially symmetric nucleus may be
expanded for $K=0$ band as a power series in the I(I+1) term. The expansion
for the $K\neq 0$ band takes the same form, but includes a band head energy
and the I(I+1) is replaced by $\left[I(I+1)-K^{2}\right]$. Another useful
models for nuclear rotational spectra are the particle-rotor model (PRM) [33],
the variable moment of inertia (VMI) model [34, 35], the soft rotor model [36]
and the interacting boson model [37].
In the concept of F-spin and its projection [38] any pairs of conjugate nuclei
with the same F-spin and $F_{0}$ values in any F-multiplet will have the same
$N_{p}N_{n}$ [24, 39, 40] where $N_{p}$ and $N_{n}$ are respectively the
number of valence protons and valence neutrons. The product $N_{p}N_{n}$ was
used in the classification of the changes that occur in nuclear structure [41,
42]. It was assumed [25, 43] that the moment of inertia and the P-factor also
depend on the product $N_{p}N_{n}$.
The purpose of the present paper is (i) to analyse the excitation energies for
even-even normally deformed nuclei in rare earth region in framework of
suggested new collective rotational formula (CRF3). (ii) to exhibit the
occurrence of IB’s in eight pairs of nuclei in rare earth region. (iii) to
present the parameters which characterize the appearance of IB’s. (iv) use the
sd version of interacting boson model (sdIBM) to calculate the potential
energy surfaces (PES’s).
## 2 Outline of the Suggested Collective Rotational Formula with Three
Parameters (CRF3)
Rotational states in normal deformed (ND) nuclei can be characterized by their
excitation energies E(I) as a function of spin I, which generally lie low as
compared to the single-particle excitation. In the strong coupling limit, the
rotational ground state energy for an axially symmetric even-even nucleus
obeys the I(I+1) rule, i.e form bands of levels that fulfill the relation
$\displaystyle E(I)$
$\displaystyle=\dfrac{\hbar^{2}}{2J}I(I+1)=\alpha\,\textit{\text{\^{I}}}^{2}$
(1)
where $\alpha=\hbar^{2}/2J$ and $\text{\^{I}}^{2}=I(I+1)$
The relation (1) defines in addition the nuclear moment of inertia J as a
constant for an ideal rotor. This simple rotational formula gives deviations
from experimental data, so Bohr and Mottelson pointed out that agreement was
improved by adding to it a second term to yield
$\displaystyle E(I)$ $\displaystyle=\alpha I(I+1)+\beta[I(I+1)]^{2}$
$\displaystyle=\alpha\,\text{\^{I}}^{2}+\beta\,\text{\^{I}}^{4}$
$\displaystyle E(I)$
$\displaystyle=\alpha\,\text{\^{I}}^{2}(1+\gamma\,\text{\^{I}}^{2})$ (2)
where $\gamma=\beta/\alpha$
Since the moment of inertia J actually increases as the nucleus rotates
faster, deviations from experiment remain evident even with this second term.
According to the variable moment of inertia (VMI) model [34, 35], there is a
gradual increase in the moment of inertia J with increasing spin I, so we
suggest that the moment of inertia J can be written as
$J=J(I)=J\,(1\,+\,\sigma\,\text{\^{I}}^{2})$ (3)
Substituting in equation (2), yield
$E(I)=\alpha\,\text{\^{I}}^{2}\left(\dfrac{1+\gamma\,\text{\^{I}}^{2}}{1+\sigma\,\text{\^{I}}^{2}}\right)$
(4)
Therefore, the two-term Bohr-Mottelson formula becomes an extended new formula
with three parameters. We denote formula (4) as the collective rotational
formula with three parameters (CRF3). The parameters are
$\alpha,\gamma,\sigma$.
The suggested CRF3 is more general because it leads to the following three
predictions:
a) when $\sigma=\gamma$ it gives pure rigid rotor equation(1)
b) when $\sigma=0$ it gives the two parameters Bohr-Mottelson equation (2)
c) when $\gamma=0$ it gives soft rotor model [36]
$E(I)=\dfrac{\hbar^{2}}{2J}\dfrac{I(I+1)}{1+\sigma(I+I^{2})}$ (5)
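The three limiting cases above can be checked mechanically. A short sketch of Eq. (4); the parameter values used in the checks below are arbitrary illustrations, not fits to any nucleus:

```python
def E_CRF3(I, alpha, gamma, sigma):
    """Ground-band energy of the CRF3, Eq. (4);
    alpha in MeV, gamma and sigma dimensionless."""
    I2 = I * (I + 1)
    return alpha * I2 * (1 + gamma * I2) / (1 + sigma * I2)
```

Setting $\sigma=\gamma$ recovers the rigid rotor of Eq. (1), $\sigma=0$ the two-parameter Bohr-Mottelson formula of Eq. (2), and $\gamma=0$ the soft rotor of Eq. (5).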
Two types of moments of inertia were suggested by Bohr-Mottelson which reflect
two different aspects of nuclear dynamics. The first moment of inertia is the
kinematic $J^{(1)}$, it is equal to the inverse of the slope of the curve of
energy E versus $\text{\^{I}}^{2}$ (or I(I+1)) times $\hbar^{2}/2$, while the
second moment of inertia is the dynamic $J^{(2)}$, it is related to the
curvature in the curve of E versus Î (or $\sqrt{I(I+1)}$ ).
The kinematic $J^{(1)}$ and dynamic $J^{(2)}$ moments of inertia are defined
as:
$\displaystyle J^{(1)}$
$\displaystyle=\dfrac{\hbar^{2}}{2}\left[\dfrac{dE}{dI(I+1)}\right]^{-1}=\hbar\dfrac{\sqrt{I(I+1)}}{\omega}$
$\displaystyle=\dfrac{\hbar^{2}}{2}\left(\dfrac{dE}{d\text{\^{I}}^{2}}\right)^{-1}=\hbar\dfrac{\text{\^{I}}}{\omega}$
(6) $\displaystyle J^{(2)}$
$\displaystyle=\hbar^{2}\left[\dfrac{d^{2}E}{d(\sqrt{I(I+1)})^{2}}\right]^{-1}=\hbar\dfrac{d\sqrt{I(I+1)}}{d\omega}$
$\displaystyle=\hbar^{2}\left(\dfrac{d^{2}E}{d\text{\^{I}}^{2}}\right)^{-1}=\hbar\dfrac{d\text{\^{I}}}{d\omega}$
(7)
In the case of our CRF3, the two moments of inertia becomes
$J^{(1)}(I)=\dfrac{\hbar^{2}}{2\alpha}\dfrac{(1+\sigma\text{\^{I}}^{2})^{2}}{[1+\gamma\text{\^{I}}^{2}(2+\sigma\text{\^{I}}^{2})]}$
(8)
$J^{(2)}(I)=\dfrac{\hbar^{2}}{2\alpha}\dfrac{(1+\sigma\text{\^{I}}^{2})^{3}}{[(1+6\gamma\text{\^{I}}^{2})+\sigma\text{\^{I}}^{2}(3\gamma\text{\^{I}}^{2}+\sigma\gamma\text{\^{I}}^{4}-3)]}$
(9)
Experimentally $\hbar\omega$, $J^{(1)}$and $J^{(2)}$ are extracted in terms of
the transition energy $E_{\gamma}(I)=E(I)-E(I-2)$ as:
$\hbar\omega(I)=\frac{1}{4}[E_{\gamma}(I+2)+E_{\gamma}(I)]\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(MeV)$
(10)
$J^{(1)}(I)=\dfrac{2I-1}{E_{\gamma}(I)}\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(\hbar^{2}MeV^{-1})$
(11)
$J^{(2)}(I)=\dfrac{4}{E_{\gamma}(I+2)-E_{\gamma}(I)}\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;(\hbar^{2}MeV^{-1})$
(12)
As a special case, the lowest dynamical moment of inertia reads
$J^{(2)}_{lowest}=\dfrac{4}{E_{\gamma}(4^{+}_{1}\rightarrow
2_{1}^{+})-E_{\gamma}(2^{+}_{1}\rightarrow 0_{1}^{+})}$ (13)
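Equations (10)-(12) can be applied to any measured ground-band level scheme. A small sketch (the list-based interface, with `E_levels[k] = E(I=2k)`, is our convention for this example):

```python
def band_moments(E_levels):
    """Given ground-band energies E_levels[k] = E(I=2k) in MeV for I = 0, 2, 4, ...,
    return rows (I, hbar*omega, J1, J2) from Eqs. (10)-(12)."""
    Eg = [E_levels[k] - E_levels[k - 1] for k in range(1, len(E_levels))]
    rows = []
    for k in range(len(Eg) - 1):
        I = 2 * (k + 1)
        hw = 0.25 * (Eg[k + 1] + Eg[k])   # Eq. (10), MeV
        J1 = (2 * I - 1) / Eg[k]          # Eq. (11), hbar^2 MeV^-1
        J2 = 4.0 / (Eg[k + 1] - Eg[k])    # Eq. (12), hbar^2 MeV^-1
        rows.append((I, hw, J1, J2))
    return rows
```

For a perfect rigid rotor $E(I)=\alpha I(I+1)$ both moments come out constant and equal to $\hbar^{2}/2\alpha$, so any spread between $J^{(1)}$ and $J^{(2)}$ diagnoses departures from rigidity.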
## 3 Determination of Ground State Band Properties of Even-Even Nuclei and
the Physical Identical Parameters
In order to understand the behavior of low lying states of an axially
symmetric normally deformed nuclei, it is insightful to examine some physical
observables which exist in a pair of IB’s, the observables include:
1\. The P- Factor, Structure Factor (SF), and Saturation Parameter (SP)
Casten [43] introduced the P-Factor
$P=\dfrac{N_{p}N_{n}}{N_{p}+N_{n}}$ (14)
where $N_{p}$ and $N_{n}$ are the numbers of valence protons and valence
neutrons respectively which are counted as particles or holes from the nearest
closed shell
$\displaystyle N_{p}$ $\displaystyle=min[(Z-50),(82-Z)]$ (15) $\displaystyle
N_{n}$ $\displaystyle=min[(N-82),(126-N)]$ (16)
The P- Factor represents the average number of interactions of each valence
nucleon with those of the other type. It can be viewed as the ratio of the
number of valence p-n residual interactions to the number of valence like-
nucleon pairing interactions, or if the p-n and pairing interactions are orbit
independent, then P is proportional to the ratio of the integrated p-n
interaction strength to the integrated pairing interaction strength. The
nuclear collectivity and deformation depend sensitively on the P- Factor.
The structure factor (SF) and the saturation parameter (SP) are given by
$\displaystyle SF$ $\displaystyle=N_{p}N_{n}(N_{p}+N_{n})$ (17) $\displaystyle
SP$ $\displaystyle=\left(1+\dfrac{SF}{SF_{max}}\right)^{-1}$ (18)
It is found that the lowest dynamical moment of inertia $J^{(2)}_{lowest}$ is
proportional to $\sqrt{SF}$.
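The counting rules in Eqs. (14)-(17) are easy to mechanize for the 50-82 proton and 82-126 neutron shells used here; ${}^{164}$Er (Z = 68, N = 96) serves as a worked example of our choosing:

```python
def valence(Z, N):
    """Valence proton/neutron numbers, Eqs. (15)-(16), for 50 < Z < 82, 82 < N < 126."""
    return min(Z - 50, 82 - Z), min(N - 82, 126 - N)

def p_factor(Z, N):
    """Casten's P-factor, Eq. (14)."""
    Np, Nn = valence(Z, N)
    return Np * Nn / (Np + Nn)

def structure_factor(Z, N):
    """Structure factor SF = Np*Nn*(Np+Nn), Eq. (17)."""
    Np, Nn = valence(Z, N)
    return Np * Nn * (Np + Nn)
```

(The saturation parameter SP of Eq. (18) additionally requires $SF_{max}$ for the region, so it is omitted from this sketch.)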
2\. The Concept of F-Spin
A nucleus with $N_{p}$ valence protons and $N_{n}$ valence neutrons has a
total boson number
$N_{B}=\dfrac{N_{p}+N_{n}}{2}=N_{\pi}+N_{\nu}$ (19)
The $N_{\pi}$ proton bosons and $N_{\nu}$ neutron bosons are assigned F-spin
$F=\frac{1}{2}$, with projection $F_{0}=+\frac{1}{2}$ for proton bosons and
$F_{0}=-\frac{1}{2}$ for neutron bosons. A given nucleus is characterized by
two quantum numbers [38]:
$F=\dfrac{N_{\pi}+N_{\nu}}{2}$ and its projection
$F_{0}=\dfrac{N_{\pi}-N_{\nu}}{2}$
Squaring and subtracting yields
$4(F^{2}-F^{2}_{0})=4N_{\pi}N_{\nu}=N_{p}N_{n}$ (20)
That is, any pair of conjugate nuclei with the same F-spin and $F_{0}$ values
in any F-spin multiplet have identical $N_{p}N_{n}$ values.
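A short sketch of Eqs. (19)-(20), using the $^{160}Er$/$^{168}Hf$ pair discussed later in the text (function names are ours):

```python
# Sketch of Eqs. (19)-(20): boson numbers and F-spin quantum numbers.

def f_spin(Np, Nn):
    """Return (N_B, F, F0) from the valence nucleon numbers, Eq. (19)."""
    N_pi, N_nu = Np // 2, Nn // 2     # proton and neutron boson numbers
    F = (N_pi + N_nu) / 2
    F0 = (N_pi - N_nu) / 2
    return N_pi + N_nu, F, F0

# The pair 160Er (Np=14, Nn=10) and 168Hf (Np=10, Nn=14):
print(f_spin(14, 10))   # (12, 6.0, 1.0)
print(f_spin(10, 14))   # (12, 6.0, -1.0)

# Identical Np*Nn follows from 4(F^2 - F0^2) = Np*Nn, Eq. (20):
assert 4 * (6.0**2 - 1.0**2) == 14 * 10
```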
In our chosen nuclei, the F-spin multiplet is given by: (A+4, Z+2), (A+8,
Z+4), (A+12, Z+6) and (A+16, Z+8) for Dy, Er, Yb, Hf, and W isotopes.
Any pair of nuclei which show identical excitation energies have nearly equal
values of the product of their valence nucleon numbers $N_{p}$ and $N_{n}$
[41]; however, the analysis of experimental data shows that the converse is
not true. The simple quantity $N_{p}N_{n}$ also helps in tracing the evolution
of nuclear deformation and collectivity [40]. On the other hand, the product
$N_{p}N_{n}$ or the P-Factor plays an important role in studying the orbit
dependence, shell gaps, and intruder orbitals.
3\. Pairing Interaction Energy
The pairing interaction energy $\bigtriangleup$ in an even-even nucleus is the
average pairing gap $(\bigtriangleup_{p}+\bigtriangleup_{n})/2$, where
$\bigtriangleup_{p}$ and $\bigtriangleup_{n}$ are respectively the proton and
neutron pairing gaps, determined from the difference in binding energies of
the neighboring odd and even nuclei
$\displaystyle\bigtriangleup_{p}$
$\displaystyle=\frac{1}{4}[B(N,Z-2)-3B(N,Z-1)+3B(N,Z)-B(N,Z+1)]$ (21)
$\displaystyle\bigtriangleup_{n}$
$\displaystyle=\frac{1}{4}[B(N-2,Z)-3B(N-1,Z)+3B(N,Z)-B(N+1,Z)]$ (22)
The pairing gaps $\bigtriangleup_{p}$ and $\bigtriangleup_{n}$ are determined
empirically from the relation
$\displaystyle\bigtriangleup_{p}\simeq\bigtriangleup_{n}=\dfrac{12}{\sqrt{A}}\;\;\;\;\;\;\;\;\;\;\;(MeV)$ (23)
The average pairing gap of the nucleus is then
$\bigtriangleup=\dfrac{\bigtriangleup_{p}+\bigtriangleup_{n}}{2}=\dfrac{12}{\sqrt{A}}\;\;(MeV)$ (24)
It is observed [39, 43] that even-even nuclei belonging to different mass
numbers but having identical $(N_{p}N_{n}/\bigtriangleup)$ values exhibit
identical excitation energies and identical energy ratios.
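Eqs. (23)-(24) and the ratio $N_{p}N_{n}/\bigtriangleup$ can be sketched as follows (function names are ours; the $^{158}Er$ example reproduces the values quoted in Table 2):

```python
import math

# Sketch of Eqs. (23)-(24): empirical pairing gap, and the ratio
# Np*Nn/Delta used to correlate identical bands.

def pairing_gap(A):
    """Average pairing gap Delta = 12/sqrt(A) in MeV, Eq. (24)."""
    return 12.0 / math.sqrt(A)

def np_nn_over_delta(Np, Nn, A):
    """The correlation parameter Np*Nn/Delta in MeV^-1."""
    return Np * Nn / pairing_gap(A)

# Example: 158Er (A = 158, Np = 14, Nn = 8)
print(round(pairing_gap(158), 3))              # ~0.954 MeV
print(round(np_nn_over_delta(14, 8, 158), 1))  # ~117.3 MeV^-1 (cf. Table 2)
```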
4\. Quadrupole Transition Probabilities and Deformation Parameters
The quadrupole transition probability per unit time for the transition
$I_{i}\rightarrow I_{f}$ is given by
$T(E_{2})=\dfrac{4\pi}{75}\left(\dfrac{1}{\hbar}\right)\left(\dfrac{E_{2^{+}_{1}}}{\hbar
c}\right)^{5}B(E_{2};I_{i}\rightarrow I_{f})$ (25)
where $B(E_{2})$ is the reduced transition probability and $E_{2^{+}_{1}}$ is
the energy of the $2_{1}^{+}$ state.
Experimentally, $T(E_{2})$ for the transition $2_{1}^{+}\rightarrow 0_{1}^{+}$
is obtained from
$T(E_{2},2_{1}^{+}\rightarrow
0_{1}^{+})=\dfrac{\ln 2}{(1+\alpha)T_{1/2}}=\dfrac{0.693}{(1+\alpha)T_{1/2}}$
(26)
where $\alpha$ is the total conversion coefficient taken from the tabulated
values given by Rose [44] and $T_{1/2}$ is the lifetime of the rotational
level.
The $B(E_{2},2_{1}^{+}\rightarrow 0_{1}^{+})$ values carry important
information about the collectivity of nuclear rotation and can be extracted
from Eqs. (25) and (26).
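Eq. (26) in code form (a sketch, with illustrative, non-experimental values for $T_{1/2}$ and $\alpha$):

```python
import math

# Sketch of Eq. (26): transition rate from the measured half-life and
# the total conversion coefficient alpha.

def transition_rate(T_half_s, alpha):
    """T(E2) in s^-1 from the half-life (s) and conversion coefficient."""
    return math.log(2) / ((1 + alpha) * T_half_s)

# Illustrative values: T_1/2 = 2 ns, alpha = 0.5
print(transition_rate(2e-9, 0.5))   # ~2.31e8 s^-1
```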
The relation between the intrinsic nuclear quadrupole moment $Q_{0}$ and
$B(E_{2})$ is given by
$Q_{0}^{2}=\dfrac{16\pi}{e^{2}}B(E_{2},2_{1}^{+}\rightarrow 0_{1}^{+})$ (27)
Practically, the most reliable method of determining the quadrupole
deformation parameter $\beta_{2}$ in the framework of the geometric collective
model (GCM) is to extract $\beta_{2}$ from $Q_{0}$ according to the formula
$\beta_{2}(exp)=\dfrac{\sqrt{5\pi}}{3ZR_{0}^{2}}Q_{0}$ (28)
assuming a uniformly charged nucleus of spheroidal shape, where the nuclear
radius has the value $R_{0}=1.2A^{1/3}$(fm) and Z is the nuclear charge
number.
The expression (28) for $\beta_{2}$ is widely used to compare the quadrupole
deformation of different nuclei. It is noticed that the
$B(E_{2},2_{1}^{+}\rightarrow 0_{1}^{+})$ values increase when going from the
closed shell at N=82 toward midshell, where the maximum values occur, while
from midshell toward the shell closure at N=126 the values decrease.
Alternatively, especially where the $B(E_{2},2_{1}^{+}\rightarrow
0_{1}^{+})$ value is not known, we estimate $\beta$ by using the approximate
empirical Grodzins relation [45]:
$E_{2^{+}_{1}}B(E_{2},2_{1}^{+}\rightarrow 0_{1}^{+})=2.5\times
10^{-3}\;\dfrac{Z^{2}}{A}$ (29)
where
$\displaystyle B(E_{2},2_{1}^{+}\rightarrow
0_{1}^{+})=\dfrac{1}{16\pi}e^{2}Q^{2}_{0}=\dfrac{9}{80\pi^{2}}e^{2}Z^{2}R^{4}_{0}\beta^{2}\;\;\;\;\;\;\;(\text{in
units of }e^{2}b^{2})$ (30)
We can relate $\beta$ and $E_{2^{+}_{1}}$ as:
$\beta^{2}_{G}=\dfrac{1224}{E_{2^{+}_{1}}A^{7/3}}$ (31)
where $E_{2^{+}_{1}}$ is in MeV.
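Eq. (31) can be sketched as follows; the example assumes $E_{2^{+}_{1}}\approx 80.7$ keV for $^{162}Dy$ (a value not quoted in this text), which reproduces the $\beta_{G}=0.3256$ listed in Table 2:

```python
import math

# Sketch of Eq. (31): quadrupole deformation from the Grodzins relation,
# beta_G^2 = 1224 / (E(2+_1) * A^(7/3)), with E(2+_1) in MeV.

def beta_grodzins(E2_MeV, A):
    return math.sqrt(1224.0 / (E2_MeV * A ** (7.0 / 3.0)))

# Example: A = 162 with an assumed E(2+_1) = 0.0807 MeV
print(round(beta_grodzins(0.0807, 162), 4))   # ~0.3256
```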
Also, $\beta_{2}$ can be determined by using the SU(3) rotational limit of the
interacting boson model (IBM) [37]; the square of the deformation parameter
$\beta^{2}$ in a state of angular momentum I is given by [46]:
$\langle\beta^{2}\rangle_{I}=\dfrac{\alpha^{2}}{6(2N_{B}-1)}[I(I+1)+8N^{2}_{B}+22N_{B}-15]$
(32)
where $N_{B}$ is the total number of valence bosons and $\alpha$ is a
normalization constant ($\alpha=0.101$ for rare-earth nuclei). The expectation
value of $\beta^{2}$ in the ground state becomes
$\langle\beta^{2}\rangle_{0}=\alpha^{2}\dfrac{8N^{2}_{B}+22N_{B}-15}{6(2N_{B}-1)}$
(33)
which is an almost linearly increasing function of the boson number $N_{B}$
and has the same value for nuclei having the same number of valence nucleons
$N=[N_{p}+N_{n}],N=[(N_{p}-1)+(N_{n}-1)]$ (34)
It is evident that $\beta_{IBM}$ extracted from the IBM is much larger than
$\beta_{GCM}$ extracted from the GCM, because $\beta_{GCM}$ refers to the
deformation of all A nucleons while $\beta_{IBM}$ describes only the 2N
valence nucleons; the approximate relation between them is given by:
$\beta_{GCM}=1.18\left(\dfrac{2N}{A}\right)\beta_{IBM}$ (35)
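Eqs. (33) and (35) transcribe directly (a sketch; function names and the example boson number are ours, and no comparison with the tabulated values is implied):

```python
import math

# Sketch of Eq. (33) (SU(3)-limit ground-state deformation) and the
# approximate IBM-to-GCM mapping of Eq. (35).

ALPHA = 0.101   # normalization constant for rare-earth nuclei

def beta_ibm(N_B):
    """sqrt of <beta^2>_0, Eq. (33), for N_B valence bosons."""
    return math.sqrt(ALPHA**2 * (8 * N_B**2 + 22 * N_B - 15)
                     / (6 * (2 * N_B - 1)))

def beta_gcm(N_B, A):
    """Eq. (35); 2*N_B is the number of valence nucleons."""
    return 1.18 * (2 * N_B / A) * beta_ibm(N_B)

# Example: a nucleus with N_B = 15 bosons and A = 162
print(round(beta_ibm(15), 3), round(beta_gcm(15, 162), 3))
```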
The deformation parameter $\beta$ reflects the equilibrium shape and structure
of the nucleus; together with the energy ratio
$R_{4/2}=E(4_{1}^{+})/E(2_{1}^{+})$ and the reduced transition probability
$B(E_{2},2_{1}^{+}\rightarrow 0_{1}^{+})$, it is among the best indicators of
the collective properties of even-even nuclei.
5\. Energy Ratios and Percentage Difference in Transition Energies
The energy ratios and the percentage differences in transition energies
characterize the evolution of collectivity in even-even nuclei. Only deformed
nuclei show rotational levels; in particular, even-even nuclei display a
simple structure with energies proportional to I(I+1), with only even values
of the spin I, provided the moment of inertia is constant (rigid rotator), so
that the energy ratio $R_{4/2}=3.333$. The observed moment of inertia
extracted from experiment is only one-quarter to one-half of what one would
expect for a rigid rotator, which means that not all nucleons participate in
the collective motion.
On the other hand, for an ideal harmonic quadrupole spectrum of a spherical
nucleus, a system of equidistant states is formed by the composition of
vibrational quanta. The first excited state is $2_{1}^{+}$, followed by the
degenerate $0_{2}^{+},2_{2}^{+},4_{1}^{+}$, and so forth. Therefore the energy
ratio $R_{4/2}=2$.
To compare level spacings in two nuclei with masses $A_{1}$ and $A_{2}$, where
$A_{2}>A_{1}$, we define the percentage difference ratio in transition
energies as:
$\delta=\dfrac{\bigtriangleup E_{\gamma}(I)}{E_{\gamma_{2}}(I)}$ (36)
where
$\displaystyle E_{\gamma}=E(I)-E(I-2)$ (37) $\displaystyle\bigtriangleup
E_{\gamma}(I)=E_{\gamma_{1}}(I)-E_{\gamma_{2}}(I)$ (38)
So that
$\displaystyle E_{\gamma_{1}}=(1+\delta)E_{\gamma_{2}}$ (39)
For a rigid rotor, the ratio
$\displaystyle\delta_{R}=\left(\dfrac{A_{2}}{A_{1}}\right)^{5/3}-1$ (40)
defines the fractional change in $A^{5/3}$.
The fractional change in transition energies $\delta$ divided by the rigid
rotor ratio $\delta_{R}$ is denoted by $\delta_{\gamma}$. If the spacings are
identical, then $\delta=0$ and $\delta_{\gamma}=0$; if they scale as
$A^{5/3}$, then $\delta_{\gamma}=1$.
Similarly, the percentage difference in kinematic moment of inertia $J^{(1)}$
is given by
$\displaystyle K=-\dfrac{\bigtriangleup J^{(1)}(I)}{J^{(1)}_{2}(I)}$ (41)
where
$\displaystyle J^{(1)}(I)$ $\displaystyle=\dfrac{2I-1}{E_{\gamma}(I)}$ (42)
$\displaystyle\bigtriangleup J^{(1)}(I)$
$\displaystyle=J^{(1)}_{1}(I)-J^{(1)}_{2}(I)$ (43)
So that
$J^{(1)}_{2}=(1+K)J^{(1)}_{1}$ (44)
Substituting for $J^{(1)}$ yields $K=\delta/(1+\delta)\simeq\delta$ for small
$\delta$.
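Eqs. (36), (40), and (41) in code form (function names are ours; the rigid-rotor example reproduces the $\delta_{R}$ value for the $A=158/170$ pairs in Table 4):

```python
# Sketch of Eqs. (36)-(41): percentage differences in transition
# energies and in kinematic moments of inertia for a pair of nuclei.

def delta(Eg1, Eg2):
    """Eq. (36): fractional difference of gamma-ray energies."""
    return (Eg1 - Eg2) / Eg2

def delta_rigid(A1, A2):
    """Eq. (40): rigid-rotor expectation, with A2 > A1."""
    return (A2 / A1) ** (5.0 / 3.0) - 1.0

def k_moment(Eg1, Eg2, I):
    """Eq. (41): K built from J(1) = (2I-1)/E_gamma."""
    J1, J2 = (2 * I - 1) / Eg1, (2 * I - 1) / Eg2
    return -(J1 - J2) / J2

# Rigid-rotor ratio for the A = 158 / A = 170 pairs:
print(round(delta_rigid(158, 170), 5))   # ~0.12977, i.e. ~13% (cf. Table 4)
```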
## 4 The Interacting Boson Model to Calculate the Potential Energy Surfaces
and Electric Quadrupole Transition Probability
We consider the Hamiltonian of the first-order U(5)-SU(3) quantum shape phase
transition in the form
$H=\epsilon_{d}\hat{n}_{d}+a_{2}\hat{Q}^{(x)}\hat{Q}^{(x)}$ (45)
where $\hat{n}_{d}$ and $\hat{Q}^{(x)}$ are respectively the d-boson number
operator and quadrupole operator defined as
$\displaystyle\hat{n}_{d}$
$\displaystyle=\sum_{\mu}d_{\mu}^{\dagger}\tilde{d}_{\mu}$
(46) $\displaystyle\hat{Q}^{(x)}$
$\displaystyle=\left[d^{\dagger}s+s^{\dagger}\tilde{d}\right]^{(2)}+x\left[d^{\dagger}\times\tilde{d}\right]^{(2)}$
(47)
where $\left(s^{\dagger},d^{\dagger}\right)$ and
$\left(s,\tilde{d}\right)$ are the boson creation and annihilation operators
respectively, and $x$ is the structure parameter of the quadrupole operator of
the IBM ($x=-\sqrt{7}/2$ for the pure rotational SU(3) limit). Here
$\tilde{d}_{\mu}=(-1)^{\mu}d_{-\mu}$ and standard angular momentum coupling
notation is used.
To obtain the potential energy surface (PES) of the Hamiltonian, we introduce
the intrinsic coherent frame, in which the ground state of a nucleus with N
bosons can be expressed as a boson condensate. The bosonic intrinsic coherent
state for the ground state band of a given even-even nucleus can be written in
the form [47, 48, 49]
$\lvert
N\beta\gamma\rangle=\dfrac{1}{\sqrt{N!}}[b^{\dagger}(\beta,\gamma)]^{N}\lvert
0\rangle$ (48)
where $\lvert 0\rangle$ is the boson vacuum and $b^{\dagger}$ is the boson
creation operator which acts in the intrinsic system and is given by:
$\displaystyle b^{\dagger}=\dfrac{1}{\sqrt{1+\beta^{2}}}[s^{\dagger}+\beta\cos\gamma\,d_{0}^{\dagger}+\dfrac{1}{\sqrt{2}}\beta\sin\gamma\,(d_{2}^{\dagger}+d_{-2}^{\dagger})]$ (49)
where $\beta$ is the quadrupole deformation parameter, which measures the
deviation from spherical symmetry, and the parameter $\gamma$ controls the
departure from axial symmetry.
The ground state PES is the expectation value of the Hamiltonian in the
intrinsic coherent state
$PES=\langle N\beta\gamma\rvert H\rvert N\beta\gamma\rangle$ (50)
The associated PES of the Hamiltonian (45) for $x=-\sqrt{7}/2$ reads
$\displaystyle E(N,\beta,\gamma)$
$\displaystyle=\epsilon_{d}\dfrac{N\beta^{2}}{1+\beta^{2}}+a_{2}\left[\dfrac{N}{1+\beta^{2}}\left(5+\dfrac{11}{4}\beta^{2}\right)+\dfrac{N(N-1)}{(1+\beta^{2})^{2}}\left(4\beta^{2}-2\sqrt{2}\beta^{3}\cos 3\gamma+\dfrac{1}{2}\beta^{4}\right)\right]$
(51)
Equation (51) can be written in another form as
$\displaystyle
E(N,\beta,\gamma)=g_{1}\dfrac{N\beta^{2}}{1+\beta^{2}}+\dfrac{N(N-1)}{(1+\beta^{2})^{2}}[g_{2}\beta^{2}+g_{3}\beta^{3}\cos 3\gamma+g_{4}\beta^{4}]+c$
(52)
where the coefficients involve linear combination of the Hamiltonian
parameters
$\displaystyle g_{1}$
$\displaystyle=\epsilon_{d}-\dfrac{9}{4}a_{2},\;\;\;\;\;\;\;\;\;\;g_{2}=4a_{2}$
$\displaystyle g_{3}$
$\displaystyle=2\sqrt{2}a_{2},\;\;\;\;\;\;\;\;\;\;\;\;\;\;g_{4}=\dfrac{1}{2}a_{2},\;\;\;\;\;\;\;c=5Na_{2}$
Also, equation (51) can be rewritten in general form as
$E(N,\beta,\gamma)=\dfrac{A_{2}\beta^{2}+A_{3}\beta^{3}\cos 3\gamma+A_{4}\beta^{4}}{(1+\beta^{2})^{2}}+A_{0}$
(53)
where the coefficients read
$\displaystyle A_{2}$
$\displaystyle=\left[\epsilon_{d}+\left(4N-\dfrac{25}{4}\right)a_{2}\right]N,\;\;\;\;\;\;\;\;\;\;\;A_{3}=2\sqrt{2}a_{2}(N-1)N$
$\displaystyle A_{4}$
$\displaystyle=\left[\epsilon_{d}+\left(\dfrac{2N+5}{4}-4\right)a_{2}\right]N,\;\;\;\;\;\;A_{0}=5a_{2}N$
For $a_{2}=0$ we get the pure spherical vibrator U(5) limit, and for
$\epsilon_{d}=0$ we get the pure deformed rotational SU(3) limit.
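The mapping from the Hamiltonian parameters $(\epsilon_{d},a_{2})$ to the PES coefficients of Eq. (53), and the PES itself, can be sketched as follows (the numerical parameters below are illustrative, not the fitted values of Table 5):

```python
import math

# Sketch of Eqs. (45)-(53): ground-state PES of the U(5)-SU(3)
# Hamiltonian in the intrinsic coherent state, for x = -sqrt(7)/2.

def pes_coefficients(eps_d, a2, N):
    """Coefficients A2, A3, A4, A0 of Eq. (53)."""
    A2 = (eps_d + (4 * N - 25.0 / 4.0) * a2) * N
    A3 = 2 * math.sqrt(2) * a2 * (N - 1) * N
    A4 = (eps_d + ((2 * N + 5) / 4.0 - 4) * a2) * N
    A0 = 5 * a2 * N
    return A2, A3, A4, A0

def pes(beta, gamma, A2, A3, A4, A0):
    """E(N, beta, gamma) of Eq. (53); gamma in radians."""
    b2 = beta * beta
    return ((A2 * b2 + A3 * beta**3 * math.cos(3 * gamma) + A4 * b2 * b2)
            / (1 + b2) ** 2 + A0)

# Illustrative parameters; a2 = 0 recovers the U(5) vibrator limit.
coeffs = pes_coefficients(eps_d=0.5, a2=-0.02, N=12)
print(pes(0.0, 0.0, *coeffs), pes(1.0, 0.0, *coeffs))
```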
Another important quantity that tests the nature of the shape phase transition
of low-lying collective states is the reduced electric quadrupole transition
probability $B(E_{2})$.
In the IBM, the general form of the electric quadrupole transition operator is
written as [50]
$T(E_{2})=e\,\hat{Q}^{(x)}$ (54)
where the coefficient $e$ is the boson effective charge.
The reduced electric quadrupole transition probabilities are given by
$B[E_{2},I_{i}\rightarrow I_{f}]=\dfrac{1}{2I_{i}+1}\left|\langle
I_{f}\|T(E_{2})\|I_{i}\rangle\right|^{2}$ (55)
For the rotational SU(3) limit, this yields
$\displaystyle B(E_{2},I+2\rightarrow I)$
$\displaystyle=e^{2}\,\dfrac{3}{4}\dfrac{(I+2)(I+1)}{(2I+3)(2I+5)}(2N-I)(2N+I+3)$
(56) $\displaystyle Q(I)$
$\displaystyle=-e\sqrt{\dfrac{16\pi}{40}}\dfrac{I}{2I+3}(4N+3)$ (57)
For the special case I=0, we have
$\displaystyle B(E_{2},2_{1}^{+}\rightarrow
0_{1}^{+})=e^{2}\dfrac{1}{5}N(2N+3)$ (58)
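A sketch of Eqs. (56) and (58); here the $I$-dependent factor in Eq. (56) is taken as $(2N-I)$, the standard SU(3) result, which is what makes the general formula reduce to the $I=0$ special case of Eq. (58):

```python
# Sketch of Eqs. (56) and (58): SU(3)-limit B(E2) values in the IBM,
# in units of e^2, with e the boson effective charge.

def be2_su3(I, N, e=1.0):
    """B(E2; I+2 -> I) in the SU(3) limit, Eq. (56)."""
    return (e**2 * 0.75 * (I + 2) * (I + 1)
            / ((2 * I + 3) * (2 * I + 5))
            * (2 * N - I) * (2 * N + I + 3))

def be2_21_01(N, e=1.0):
    """Special case I = 0, Eq. (58): B(E2; 2+_1 -> 0+_1)."""
    return e**2 * N * (2 * N + 3) / 5.0

# The general formula at I = 0 reproduces the special case:
print(be2_su3(0, 12), be2_21_01(12))   # both 64.8
```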
## 5 Numerical Calculations and Discussion
In this section, we applied our formalism to eight pairs of nuclei having
identical bands (IB's) in the rare-earth region, namely:
$(^{162}Yb-^{166}Hf),(^{162}Er-^{166}Yb),(^{162}Dy-^{166}Er),(^{160}Dy-^{168}Yb),(^{160}Er-^{168}Hf),(^{158}Er-^{170}W),(^{158}Dy-^{170}Hf)$
and $(^{156}Dy-^{172}W)$.
To calculate the ground state positive parity excitation energies E(I) for
each nucleus, we employed the suggested CRF3.
The parameters $\alpha,\gamma,\sigma$ of CRF3 have been determined by a
fitting procedure, using a computer-simulated search program to minimize the
root mean square deviation of the calculated excitation energies from the
experimental ones. The quality of the fit is indicated by the standard
definition of $\chi$
$\displaystyle
\chi=\sqrt{\dfrac{1}{N}\Sigma_{i}\left(\dfrac{E_{exp}(I_{i})-E_{cal}(I_{i})}{\delta
E_{exp}(I_{i})}\right)^{2}}$
where N is the number of experimental data points entering the fitting
procedure and $\delta E_{exp}(I_{i})$ is the experimental error in the
excitation energies. The experimental excitation energies are taken from
[51]. The optimized best adopted values of the parameters for each of our
studied nuclei are listed in Table (LABEL:tab:1).
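The fit-quality measure used here can be sketched as follows (function name and the example values are ours, purely illustrative):

```python
import math

# Sketch of the fit-quality measure of Sec. 5: the rms deviation of
# calculated from experimental excitation energies, weighted by the
# experimental errors.

def fit_quality(E_exp, E_cal, dE_exp):
    """Weighted root-mean-square deviation over N data points."""
    n = len(E_exp)
    return math.sqrt(sum(((e - c) / d) ** 2
                         for e, c, d in zip(E_exp, E_cal, dE_exp)) / n)

# Illustrative (not experimental) values, energies in keV:
E_exp = [102.0, 330.0, 667.0]
E_cal = [101.5, 331.0, 665.0]
dE_exp = [0.5, 0.8, 1.0]
print(round(fit_quality(E_exp, E_cal, dE_exp), 3))   # 1.479
```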
Figure 1: Systematics of the calculated (solid curves) ground state energies for our selected even-even rare-earth Dy, Er, Yb, Hf, W isotopes versus neutron number N and comparison with the experimental ones (dashed curves). The spin-parities are labeled by $I^{\pi}$.
Table 1: Values of the optimized best parameters $\alpha,\gamma,\sigma$ of the collective rotational formula (CRF3) for ground state bands in our selected even-even rare-earth nuclei. $N_{p}$ and $N_{n}$ are the numbers of valence protons and valence neutrons respectively.
Nuclide | $\alpha$ (KeV) | $\gamma$ ($10^{-3}$) | $\sigma$ ($10^{-3}$) | $N_{p}$ | $N_{n}$
---|---|---|---|---|---
Dy 156 | 22.96 | 6.964 | 14.54 | 16 | 8
158 | 16.48 | 2.163 | 4.339 | 16 | 10
160 | 14.49 | 0.8683 | 2.021 | 16 | 12
162 | 13.49 | 1.398 | 2.233 | 16 | 14
Er 158 | 32.76 | 9.699 | 23.52 | 14 | 8
160 | 20.73 | 3.017 | 6.641 | 14 | 10
162 | 17.01 | 1.440 | 3.212 | 14 | 12
166 | 13.49 | 0.2573 | 1.188 | 14 | 16
Yb 162 | 27.87 | 6.334 | 14.27 | 12 | 10
166 | 17.08 | 2.053 | 3.95 | 12 | 14
168 | 14.72 | 1.039 | 2.425 | 12 | 16
Hf 166 | 26.60 | 5.565 | 12.67 | 10 | 12
168 | 20.58 | 3.116 | 6.849 | 10 | 14
170 | 15.92 | -0.00749 | 1.391 | 10 | 16
W 170 | 26.44 | 5.714 | 13.55 | 8 | 14
172 | 20.68 | 3.944 | 9.279 | 8 | 16
Figure 2: The calculated energy ratio $R_{4/2}=E(4^{+}_{1})/E(2^{+}_{1})$
versus neutron number N characterizes the low lying spectrum in Dy, Er, Yb,
Hf, and W isotopes. The symbols o, $\ast$, $\square$, $\triangle$, and x
denote ${}_{66}Dy$, ${}_{68}Er$, ${}_{70}Yb$, ${}_{72}Hf$, and ${}_{74}W$
respectively.
The systematics of the excitation energies of the low spin states as a
function of neutron number N for the considered even-even Dy, Er, Yb, Hf, W
isotopes in the mass region A = 156 - 172 of normally deformed nuclei are
shown in Figure (1) and compared with the experimental ones. Only the ground
state band levels of positive parity and spin
$I^{\pi}=2^{+},4^{+},6^{+},8^{+},10^{+},12^{+}$ have been indicated. We can
see that the excitation energies decrease with increasing neutron number.
Also, Figure (2) illustrates the calculated energy ratio $R_{4/2}$ as a
function of neutron number N for our studied nuclei. We observe that for each
isotopic chain the value of $R_{4/2}$ increases with increasing N (that is,
the deformation increases), and the difference in $R_{4/2}$ for all pairs of
IB's ranges from 0.4% to 2.5%, except for the two pairs including the
isotopes ${}^{170,172}W$ (where the difference is about 5%).
Figure 3: The calculated results of kinematic $J^{(1)}$ (dashed curves) and
dynamic $J^{(2)}$ (solid curves) moments of inertia plotted as a function of
rotational frequency $\hbar\omega$ for the studied eight pairs of identical
bands in the rare-earth region. The $\ast$ and o correspond to the lighter and
heavier nucleus respectively.
For the eight pairs of IB’S, the kinematic $J^{(1)}$ and the dynamic $J^{(2)}$
moments of inertia derived from the transition energies are plotted versus the
rotational frequency $\hbar\omega$ as shown in Figure(3). It can be seen that
for all bands $J^{(1)}$ is smaller than $J^{(2)}$ and a smooth gradual
increase in both $J^{(1)}$ and $J^{(2)}$ with increasing $\hbar\omega$ are
seen and the similarities between each pair of IB’S are observed.
The IB's correlation quantities for the considered pairs of nuclei which
exhibit the same identical excitation energies in their ground state bands are
listed in Table (LABEL:tab:21). These quantities include the P-Factor, the
structure factor SF, the saturation parameter SP, the F-spin and its
projection $F_{0}$, the pairing gaps $\bigtriangleup$, and the deformation
parameter $\beta$. The maximum structure factor for our region of nuclei is
SF = 6720. It is seen that the ratio $N_{p}N_{n}/\bigtriangleup$ rather than
the product $N_{p}N_{n}$ may be a better parameter for studying the IB's. Note
that nuclei with symmetric $\pm F_{0}$ values have identical $N_{p}N_{n}$
values. For example, the pair (${}^{160}Er$ and ${}^{168}Hf$) have
$(N_{p},N_{n})=(14,10)$ and $(10,14)$ respectively, so that $N_{p}N_{n}=140$
and $F_{0}=\pm 1$. Therefore, if any F-spin multiplet has $F_{0}=\pm\rvert
N_{p}-N_{n}\rvert/4$, the pair of nuclei are similar in structure provided
they have identical $(\rvert F_{0}\rvert,N_{p}N_{n})$.
Table 2: The identical band quantities of our eight pairs of nuclei.
| $N_{p}N_{n}$ | P | SF | SP | $\lvert\delta\rvert\%$ | $\lvert K\rvert\%$
---|---|---|---|---|---|---
(${}^{158}Er\;-\;^{170}W\;$) | 112 | 5.090 | 2464 | 0.7317 | 1.28 | 1.27
(${}^{162}Yb\;-\;^{166}Hf$) | 120 | 5.4545 | 2640 | 0.7179 | 2.94 | 2.45
(${}^{156}Dy\;-\;^{172}W\;$) | 128 | 5.333 | 3072 | 0.6862 | 6.73 | 6.28
(${}^{160}Er\;-\;^{168}Hf$) | 140 | 5.833 | 3360 | 0.6666 | 1.35 | 1.22
(${}^{158}Dy\;-\;^{170}Hf$) | 160 | 6.1538 | 4160 | 0.6176 | 1.28 | 1.27
(${}^{162}Er\;-\;^{166}Yb$) | 168 | 6.6461 | 4368 | 0.6060 | 0.22 | 0.20
(${}^{160}Dy\;-\;^{168}Yb$) | 192 | 6.6857 | 5376 | 0.5555 | 0.10 | 0.30
(${}^{162}Dy\;-\;^{166}Er$) | 224 | 7.466 | 6720 | 0.5 | 1.29 | 1.26
| $(N_{\pi},N_{\nu})$ | N | $\dfrac{N_{\nu}}{N_{\pi}}$ | $(F,F_{0})$ | $\bigtriangleup$ (MeV) | $\dfrac{N_{p}N_{n}}{\bigtriangleup}$(MeV-1) | $\beta_{G}$
---|---|---|---|---|---|---|---
$\;\;{}^{158}Er\;\;\;\;$ | (7,4) | 11 | 0.571 | (5.5,1.5) | 0.954 | 117.4 | 0.2173
$\;\;{}^{170}W\;\;\;\;$ | (4,7) | 11 | 1.750 | (5.5,-1.5) | 0.920 | 121.739 | 0.2206
$\;\;{}^{162}Yb\;\;\;\;$ | (6,5) | 11 | 0.833 | (5.5,0.5) | 0.942 | 127.388 | 0.2270
$\;\;{}^{166}Hf\;\;\;\;$ | (5,6) | 11 | 1.2 | (5.5,-0.5) | 0.931 | 128.893 | 0.2254
$\;\;{}^{156}Dy\;\;\;\;$ | (8,4) | 12 | 0.5 | (6,2) | 0.960 | 133.333 | 0.2601
$\;\;\;\;{}^{172}W\;\;\;\;$ | (4,8) | 12 | 2.0 | (6,-2) | 0.914 | 140.043 | 0.2459
$\;\;{}^{160}Er\;\;\;\;$ | (7,5) | 12 | 0.714 | (6,1) | 0.948 | 147.679 | 0.2643
$\;\;{}^{168}Hf\;\;\;\;$ | (5,7) | 12 | 1.4 | (6,-1) | 0.925 | 151.351 | 0.2517
$\;\;{}^{158}Dy\;\;\;\;$ | (8,5) | 13 | 0.625 | (6.5,1.5) | 0.954 | 167.714 | 0.3026
$\;\;{}^{170}Hf\;\;\;\;$ | (5,8) | 13 | 1.6 | (6.5,-1.5) | 0.920 | 173.913 | 0.2754
$\;\;{}^{162}Er\;\;\;\;$ | (7,6) | 13 | 0.857 | (6.5,0.5) | 0.942 | 178.343 | 0.2896
$\;\;{}^{166}Yb\;\;\;\;$ | (6,7) | 13 | 1.166 | (6.5,-0.5) | 0.931 | 180.451 | 0.2814
$\;\;{}^{160}Dy\;\;\;\;$ | (8,6) | 14 | 0.75 | (7,1) | 0.948 | 202.531 | 0.3181
$\;\;{}^{168}Yb\;\;\;\;$ | (6,8) | 14 | 1.333 | (7,-1) | 0.925 | 207.567 | 0.2993
$\;\;{}^{162}Dy\;\;\;\;$ | (8,7) | 15 | 0.875 | (7.5,0.5) | 0.942 | 237.791 | 0.3256
$\;\;{}^{166}Er\;\;\;\;$ | (7,8) | 15 | 1.142 | (7.5,-0.5) | 0.931 | 240.601 | 0.3167
The percentage differences in transition energies $\delta$ and the rigid rotor
ratios $\delta_{R}$ between pairs of levels in two nuclei are calculated and
listed in Table (4) for our eight pairs of IB's. Although the parameters
$N_{p}N_{n}$, P, SF and SP are the same for the pair $(^{156}Dy,^{172}W)$,
this pair is not really identical, given its high average percentage
difference in transition energies (approximately 6.7%).
For each nucleus in the isotopic chains of ${}_{66}Dy$, ${}_{68}Er$,
${}_{70}Yb$, ${}_{72}Hf$, and ${}_{74}W$, the values of the lowest dynamical
moment of inertia $J^{(2)}_{lowest}$ were calculated and displayed against the
neutron number N in Figure (4). It can be seen that $J^{(2)}_{lowest}$
increases with increasing neutron number N and the difference in
$J^{(2)}_{lowest}$ for each pair of IB's is very small (approximately a
horizontal line). As an example of two nuclei that exhibit good IB's, the pair
${}^{162}_{68}Er$ ($J^{(2)}_{lowest}=31.525\,\hbar^{2}MeV^{-1}$) and
${}^{166}_{70}Yb$ ($J^{(2)}_{lowest}=31.519\,\hbar^{2}MeV^{-1}$) have nearly
the same $J^{(2)}_{lowest}$.
Table 4: The percentage differences ratios in transition energies $\delta$, the fractional change in transition energies divided by the rigid rotor ratio $\delta_{R}$, and the ratio $R_{\delta}=\delta/\delta_{R}$ for the eight pairs of identical bands.
Identical pairs | $\lvert\delta\rvert=\dfrac{\bigtriangleup E_{\gamma}}{E_{\gamma_{2}}}\;\;\%$ | $\delta_{R}$ | $\langle R_{\delta}\rangle$
---|---|---|---
(${}^{162}Yb\;-\;^{166}Hf$) | 2.964 | 4.149 | 0.714
(${}^{162}Er\;-\;^{166}Yb$) | 0.415 | 4.149 | 0.100
(${}^{162}Dy\;-\;^{166}Er$) | 1.297 | 4.149 | 0.312
(${}^{160}Er\;-\;^{168}Hf$) | 1.352 | 8.471 | 0.159
(${}^{160}Dy\;-\;^{168}Yb$) | 1.131 | 8.471 | 0.133
(${}^{158}Er\;-\;^{170}W\;$) | 10.826 | 12.976 | 0.834
(${}^{158}Dy\;-\;^{170}Hf$) | 1.765 | 12.976 | 0.136
(${}^{156}Dy\;-\;^{172}W\;$) | 7.410 | 17.671 | 0.419
Figure 4: The lowest dynamical moment of inertia $J^{(2)}_{lowest}$ against
the neutron number N for the eight pairs of identical bands. The solid line
connects each pair, and the symbols o, $\ast$, $\triangle$, $\square$, and
$\diamondsuit$ denote ${}_{66}Dy$, ${}_{68}Er$, ${}_{70}Yb$, ${}_{72}Hf$, and
${}_{74}W$ respectively.
We classified our selected pairs of IB's into four multiplets (A+4,Z+2),
(A+8,Z+4), (A+12,Z+6), and (A+16,Z+8), and the percentage differences in
transition energies $\delta=\bigtriangleup E_{\gamma}/E_{\gamma_{2}}$ as a
function of spin I (up to I=10) have been calculated and illustrated in
Figure (5). It is seen that the pairs of IB's have approximately similar
$\delta$ (less than 2.5%), except the two pairs which include the tungsten
isotopes ${}^{170,172}W$, where the value of $\delta$ reaches $\sim 6-10$%
even though they have the same $N_{p}N_{n}$ values ($N_{p}N_{n}=112$ for
${}^{158}Er,^{170}W$ and $N_{p}N_{n}=128$ for ${}^{156}Dy,^{172}W$).
To investigate the IB's further, we used the SU(3) rotational limit of the
IBM to extract the quadrupole deformation $\beta_{IBM}$ for each nucleus. The
calculated $\beta_{IBM}$ is plotted against the ratio $N_{\nu}/N_{\pi}$ (where
$N_{\nu}$ and $N_{\pi}$ are the numbers of valence neutron and valence proton
bosons respectively) in Figure (6). It is seen that $\beta_{IBM}$ is the same
for each pair of IB's (horizontal line).
Figure 5: Percentage difference in transition energies $\delta=\bigtriangleup E_{\gamma}/E_{\gamma_{2}}$ for the eight pairs of multiplets (A+4,Z+2), (A+8,Z+4), (A+12,Z+6), and (A+16,Z+8) for Dy, Er, Yb, Hf, and W isotopes. The dashed curve represents the ratio of the rigid rotor.
Figure 6: The quadrupole deformation parameter $\beta_{IBM}$ calculated from the SU(3) limit of the IBM as a function of $N_{\nu}/N_{\pi}$ for our eight pairs of identical bands.
Figure 7: Sketch of the potential energy surface (PES) calculated from the U(5)-SU(3) shape phase transition of the IBM with the intrinsic coherent state, versus the deformation parameter $\beta$, for the eight pairs of even-even nuclei having identical bands.
For each nucleus, by using the IBM Hamiltonian (45) and its eigenvalue
expression (53), the PES's have been calculated as a function of the
deformation parameter $\beta$ along the axial trajectories $\gamma$ = 0°, 60°.
The results are illustrated in Figure (7), and the corresponding calculated
PES parameters $A_{2},A_{3},A_{4}$, and $A_{0}$, which are linear combinations
of the original parameters $\epsilon_{d}$ and $a_{2}$, are listed in
Table (5). From the graphs presented in Figure (7), we observe the similarity
of the PES's for each pair of IB's. All studied nuclei are deformed and have
rotational character; the prolate minimum is deeper than the oblate one.
Table 5: Values of the adopted best PES parameters $A_{2},A_{3},A_{4},A_{0}$ (in KeV) for the studied eight pairs of identical bands. $N_{B}$ is the total number of bosons.
| $N_{B}$ | $A_{2}$ | $A_{3}$ | $A_{4}$ | $A_{0}$
---|---|---|---|---|---
$\;\;{}^{162}Dy\;\;\;\;$ | 15 | -2.4667 | -0.5863 | 1.6665 | -0.3265
$\;\;{}^{166}Er\;\;\;\;$ | 15 | -1.6586 | -2.0341 | 4.4739 | -0.7875
$\;\;{}^{162}Er\;\;\;\;$ | 13 | -5.0526 | -2.5496 | 3.7667 | -0.9375
$\;\;{}^{166}Yb\;\;\;\;$ | 13 | -5.3088 | -3.1366 | 4.0554 | -0.925
$\;\;{}^{162}Yb\;\;\;\;$ | 11 | -4.84 | -1.6163 | 3.6775 | -0.9
$\;\;{}^{166}Hf\;\;\;\;$ | 11 | -2.8484 | -1.9547 | 3.9131 | -0.8625
$\;\;{}^{160}Dy\;\;\;\;$ | 14 | -1.9568 | -0.8838 | 1.1005 | -0.3
$\;\;{}^{168}Yb\;\;\;\;$ | 14 | -5.3088 | -3.1366 | 4.0554 | -0.925
$\;\;{}^{160}Er\;\;\;\;$ | 12 | -3.0403 | -2.3636 | 4.1401 | -0.8625
$\;\;{}^{168}Hf\;\;\;\;$ | 12 | -3.463 | -2.4694 | 4.039 | -0.875
$\;\;{}^{158}Dy\;\;\;\;$ | 13 | -1.6288 | -1.1822 | 1.0095 | -0.288
$\;\;{}^{170}Hf\;\;\;\;$ | 13 | -3.1845 | -3.395 | 4.497 | -0.8375
$\;\;{}^{158}Er\;\;\;\;$ | 11 | -1.6586 | -2.0541 | 4.4739 | -0.7875
$\;\;{}^{170}W\;\;\;\;$ | 11 | -0.9761 | -2.4841 | 4.7606 | -0.7546
$\;\;{}^{156}Dy\;\;\;\;$ | 12 | -1.5043 | -1.2135 | 0.9961 | -0.3
$\;\;{}^{172}W\;\;\;\;$ | 12 | -0.8852 | -1.4675 | 1.0599 | -0.313
## 6 Conclusion
By using a novel three-parameter collective rotational formula (CRF3), the
positive parity ground state excitation energies are calculated for sixteen
nuclei in the rare-earth region. The optimized three parameters are deduced by
using a computer-simulated search program to obtain a minimum root mean square
deviation of the calculated excitation energies from the measured ones. The
potential energy surfaces are calculated by using the sd-version of the
interacting boson model.
The problem of low-spin identical bands in normally deformed nuclei in the
rare-earth region is treated. We have exhibited identical bands in eight pairs
of conjugate even-even nuclei widely dispersed in mass, spanning as many as
sixteen mass units. Each pair with the same F-spin and projections $\pm F_{0}$
has identical values of the product of valence proton and neutron numbers
$N_{p}N_{n}$. Also, the values of the dynamical moments of inertia for each
identical band pair are approximately the same. We extracted all the identical
band symmetry parameters, such as the P-factor, saturation parameter, and
structure factor, which all depend on $N_{p}$ and $N_{n}$. The pairing
interaction energy, the quadrupole transition probabilities, and the energy
ratios are also treated.
## References
* [1] Th Byrski, FA Beck, D Curien, C Schuck, P Fallon, A Alderson, I Ali, MA Bentley, AM Bruce, PD Forsyth, et al. Observation of identical superdeformed bands in $\mathrm{N}$ = 86 nuclei. Physical review letters, 64(14):1650, 1990.
* [2] B. Haas, D. Ward, H. R. Andrews, G. C. Ball, T. E. Drake, S. Flibotte, A. Galindo-Uribarri, V. P. Janzen, J. K. Johansson, H. Kluge, J. Kuehner, A. Omar, S. Pilotte, D. Prevost, J. Rodriguez, D. C. Radford, P. Taras, J. P. Vivien, J. C. Waddington, and S. Aberg. Observation of excited proton and neutron configurations in the superdeformed ${}^{149}\mathrm{Gd}$ nucleus. Phys. Rev. C, 42:R1817–R1821, Nov 1990.
* [3] Cyrus Baktash, Bernard Haas, and Witold Nazarewicz. Identical bands in deformed and superdeformed nuclei. Annual Review of Nuclear and Particle Science, 45(1):485–541, 1995.
* [4] FS Stephens, MA Deleplanque, JE Draper, RM Diamond, CW Beausang, W Korten, WH Kelly, F Azaiez, JA Becker, EA Henry, et al. Spin alignment in superdeformed hg nuclei. Physical review letters, 64(22):2623, 1990.
* [5] FS Stephens, MA Deleplanque, JE Draper, RM Diamond, AO Macchiavelli, CW Beausang, W Korten, WH Kelly, F Azaiez, JA Becker, et al. Pseudospin symmetry and quantized alignment in nuclei. Physical review letters, 65(3):301, 1990.
* [6] Ingemar Ragnarsson. Additivity in superdeformed bands. Physics Letters B, 264(1-2):5–10, 1991.
* [7] W Nazarewicz, PJ Twin, P Fallon, and JD Garrett. Natural-parity states in superdeformed bands and pseudo su (3) symmetry at extreme conditions. Physical Review Letters, 64(14):1654, 1990.
* [8] Z Szymański and W Nazarewicz. Rotating pseudo-oscillator scheme: pseudo-spin symmetry and identical bands. Physics Letters B, 433(3-4):229–235, 1998.
* [9] C Rigollet, Paul Bonche, Hubert Flocard, and P-H Heenen. Microscopic study of the properties of identical bands in the $\mathrm{A}=150$ mass region. Physical Review C, 59(6):3120, 1999.
* [10] Jin-Yan Zeng, Shu-Xin Liu, YA Lei, and L Yu. Microscopic mechanism of normally deformed identical bands at low spin in the rare-earth nuclei. Physical Review C, 63(2):024305, 2001.
* [11] Shu-Xin Liu, Jin-Yan Zeng, and En-Guang Zhao. Microscopic mechanism of identical superdeformed bands in ${}^{192,193,194}\mathrm{Hg}$. Physical Review C, 66(2):024320, 2002.
* [12] Ali Khalaf, Karima Abdelmageed, and Manal Sirag. Description of the yrast superdeformed bands in even-even nuclei in $\mathrm{A}\sim 190$ region using the nuclear softness model. Turkish Journal of Physics, 39(2):178–186, 2015.
* [13] P Fallon, W Nazarewicz, MA Riley, and R Wyss. The influence of pairing on the properties of "identical" superdeformed bands in hg nuclei. Physics Letters B, 276(4):427–431, 1992.
* [14] Z Szymanski. Nature of the identical bands in atomic nuclei. Physical Review C, 51(3):R1090, 1995.
* [15] DS Haslip, N Kintz, S Flibotte, RAE Austin, G De France, M Devlin, Ch Finck, A Galindo-Uribarri, G Gervais, DR LaFosse, et al. Superdeformation in ${}^{147,148}\mathrm{Eu}$: Identical bands and $\pi 6^{1}$\- $\pi 6^{3}$ crossings. Physical Review C, 57(5):2196, 1998.
* [16] Lennart B Karlsson, Ingemar Ragnarsson, and Sven Åberg. Identical bands in superdeformed nuclei. Physics Letters B, 416(1-2):16–22, 1998.
* [17] XT He, SX Liu, SY Yu, JY Zeng, and EG Zhao. The $i_{13/2}$ proton intruder orbital and the identical superdeformed bands in${}^{193,194,195}\mathrm{Tl}$. The European Physical Journal A-Hadrons and Nuclei, 23(2):217–222, 2005.
* [18] A Gelberg, P Von Brentano, and RF Casten. On a possible supersymmetry in superdeformed bands. Journal of Physics G: Nuclear and Particle Physics, 16(8):L143, 1990\.
* [19] RD Amado, R Bijker, F Cannata, and JP Dedonder. Supersymmetric quantum mechanics and superdeformed nuclei. Physical Review Letters, 67(20):2777, 1991.
* [20] Yu-Xin Liu and Dong-Feng Gao. Description of identical superdeformed bands with $\bigtriangleup\mathrm{I}=4$ bifurcation. Physical Review C, 63(4):044317, 2001.
* [21] I Ahmad, MP Carpenter, RR Chasman, RVF Janssens, and TL Khoo. Rotational bands with identical transition energies in actinide nuclei. Physical Review C, 44(3):1204, 1991.
* [22] C Baktash, JD Garrett, DF Winchell, and A Smith. Low-spin indentical bands in neighboring odd-a and even-even nuclei: A challenge to mean-field theories. Physical review letters, 69(10):1500, 1992.
* [23] C Baktash, DF Winchell, JD Garrett, and A Smith. Low-spin identical bands in neighboring odd-a and even-even nuclei. Nuclear Physics A, 557:145–156, 1993.
* [24] RF Casten, NV Zamfir, P Von Brentano, and W-T Chou. Identical bands in widely dispersed nuclei. Physical Review C, 45(4):R1413, 1992.
* [25] M. Saha and S. Sen. Low-spin identical bands in the ${\mathit{n}}_{\mathit{p}}$${\mathit{n}}_{\mathit{n}}$ scheme. Phys. Rev. C, 46:R1587–R1590, Nov 1992.
* [26] M. (Saha) Sarkar and S. Sen. Simple phenomenology for the ground-state bands of even-even nuclei. Phys. Rev. C, 50:2794–2799, Dec 1994.
* [27] J-Y Zhang, RF Casten, W-T Chou, DS Brenner, NV Zamfir, and P Von Brentano. Identical bands and the varieties of rotational behavior. Physical review letters, 69(8):1160, 1992.
* [28] EC Halbert and W Nazarewicz. Deformation, pairing, and moments of inertia in ground-state bands of even-even rare-earth nuclei. Physical Review C, 48(5):R2158, 1993.
* [29] J. Y. Zeng, S. X. Liu, Y. A. Lei, and L. Yu. Microscopic mechanism of normally deformed identical bands at low spin in the rare-earth nuclei. Phys. Rev. C, 63:024305, Jan 2001.
* [30] AM Khalaf, MD Okasha, and KM Abdelbased. Occurrence and properties of low spin identical bands in normal-deformed even-even nuclei. PROGRESS, 13:50, 2017.
* [31] Mike W Guidry, Michael R Strayer, Cheng-Li Wu, et al. Some general constraints on identical band symmetries. Physical Review C, 48(4):1739, 1993.
* [32] A. Bohr, B. R. Mottelson, and W.A. Benjamin (Firm). Nuclear Structure: Volume $\mathrm{II}$ (nuclear Deformations). Nuclear Structure. Basic Books, 1975.
* [33] AM Khalaf. High-spin properties in deformed nuclei using weak coupling model. Indian Journal of pure and Applied Physics, 24(10):469–471, 1986\.
* [34] M. A. J. Mariscotti, Gertrude Scharff-Goldhaber, and Brian Buck. Phenomenological analysis of ground-state bands in even-even nuclei. Physical Review, 178(4):1864, Feb 1969.
* [35] G Scharff-Goldhaber, CB Dover, and AL Goodman. The variable moment of inertia (vmi) model and theories of nuclear collective motion. Annual review of nuclear science, 26(1):239–317, 1976.
* [36] P. von Brentano, N. V. Zamfir, R. F. Casten, W. G. Rellergert, and E. A. McCutchan. New yrast energy formula for soft rotors. Phys. Rev. C, 69:044314, Apr 2004.
* [37] F. Iachello and A. Arima. The Interacting Boson Model. Cambridge Monographs on Mathematical Physics. Cambridge University Press, 1987.
* [38] T Otsuka, A Arima, F Iachello, and Igal Talmi. Shell model description of interacting bosons. Physics Letters B, 76(2):139–143, 1978.
* [39] RF Casten. Possible unified interpretation of heavy nuclei. Physical Review Letters, 54(18):1991, 1985.
* [40] RF Casten and NV Zamfir. The evolution of nuclear structure: the scheme and related correlations. Journal of Physics G: Nuclear and Particle Physics, 22(11):1521, 1996.
* [41] R. F. Casten, N. V. Zamfir, P. von Brentano, and W.-T. Chou. Identical bands in widely dispersed nuclei. Phys. Rev. C, 45:R1413–R1416, Apr 1992.
* [42] RF Casten. A simple approach to nuclear transition regions. Physics Letters B, 152(3-4):145–150, 1985.
* [43] R. F. Casten, D. S. Brenner, and P. E. Haustein. Valence p-n interactions and the development of collectivity in heavy nuclei. Phys. Rev. Lett., 58:658–661, Feb 1987.
* [44] T. A. Green and M. E. Rose. Nuclear structure effects in internal conversion. Phys. Rev., 110:105–122, Apr 1958.
* [45] L Grodzins. The uniform behaviour of electric quadrupole transition probabilities from first 2+ states in even-even nuclei. Phys. Letters, 2, 1962.
* [46] A Partensky and Christiane Quesne. Deformation of nuclei as a function of angular momentum in the u (6)$\supset$ su (3) model. Annals of Physics, 136(2):340–370, 1981.
* [47] A. E. L. Dieperink, O Scholten, and F Iachello. Classical limit of the interacting-boson model. Physical Review Letters, 44(26):1747, 1980.
* [48] J.N. Ginocchio. An exactly solvable anharmonic bohr hamiltonian and its equivalent boson hamiltonian. Nuclear Physics A, 376(3):438–450, 1982.
* [49] Y Alhassid and N Whelan. Chaotic properties of the interacting-boson model: A discovery of a new regular region. Physical review letters, 67(7):816, 1991.
* [50] DD Warner and RF Casten. Predictions of the interacting boson approximation in a consistent q framework. Physical Review C, 28(4):1798, 1983.
* [51] Evaluated Nuclear Structure Data File National Nuclear Data Center. https://www.nndc.bnl.gov/.
|
# Local-global principle and integral Tate conjecture for certain varieties
Zhiyu Tian Beijing International Center for Mathematical Research
Peking University
100871, Beijing, China
###### Abstract.
We give a geometric criterion to check the validity of the integral Tate
conjecture for one cycles on separably rationally connected fibrations over a
curve, and to check that the Brauer-Manin obstruction is the only obstruction
to the local-global principle for zero cycles on a separably rationally
connected variety defined over a global function field.
We prove that the Brauer-Manin obstruction is the only obstruction to the
local-global principle for zero cycles on all geometrically rational surfaces
defined over a global function field, and to the Hasse principle for rational
points on del Pezzo surfaces of degree four defined over a global function
field of odd characteristic.
Along the way, we also prove some results about the space of one cycles on a
separably rationally connected fibration over a curve, which leads to the
equality of the coniveau filtration and the strong coniveau filtration
(introduced by Benoist-Ottem and Voisin) on degree $3$ homology of such
varieties.
###### Contents
1. 1 Introduction
1. 1.1 Local-global principle for zero cycles
2. 1.2 Integral Tate conjecture
3. 1.3 Coniveau and strong coniveau
4. 1.4 Integral Tate conjecture for one cycles: arithmetic part
5. 1.5 Algebraic equivalence
6. 1.6 Structure of the paper
2. 2 Space of one cycles
3. 3 Lawson homology
4. 4 Chow sheaves
5. 5 Integral Tate conjecture and local-global principle for zero cycles
6. 6 Examples
## 1\. Introduction
### 1.1. Local-global principle for zero cycles
Given a smooth projective variety defined over a global field, a natural and
important problem is to find criteria for the existence of rational points and
a description of the set of all rational points. The Hasse principle and weak
approximation problems, or local-global principles, give a characterization of
this set in terms of the adelic points. There are various obstructions for the
local-global principle to hold, notably the so-called Brauer-Manin
obstruction. A conjecture due to Colliot-Thélène states that for rationally
connected varieties defined over a global field, this is the only obstruction.
The study of zero cycles, as natural generalizations of rational points, has
also drawn much attention in recent years. Motivated by the case of
rational points, Colliot-Thélène has formulated the following conjectures on
the local-global principle for zero cycles.
###### Conjecture 1.1.
[CT99, Conjecture 2.2] Let $X$ be a smooth projective variety defined over the
function field $\mathbb{F}_{q}(B)$ of a smooth curve $B$ defined over a finite
field $\mathbb{F}_{q}$. For every place $\nu$ of $\mathbb{F}_{q}(B)$, let
$z_{\nu}\in CH_{0}(X_{\nu})$. Suppose that for every element $A\in
Br(X)\\{\ell\\}$, we have $\sum_{\nu}Inv(A(z_{\nu}))=0$. Then for all $n>0$,
there is a cycle $z_{n}\in CH_{0}(X)$ such that for all $\nu$ we have
$cl(z_{n})=cl(z_{\nu})\in
H^{2d}_{\text{\'{e}t}}(X_{\nu},\mu_{\ell^{n}}^{\otimes d}).$
Here $Inv(A(z_{\nu}))$ means the value of $(A,z_{\nu})$ under the pairing
$Br(X_{\nu})\\{\ell\\}\times CH_{0}(X_{\nu})\to\mathbb{Q}/\mathbb{Z}.$
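Concretely, the pairing can be unwound as follows (a standard description, recorded here for convenience): for a zero cycle $z_{\nu}=\sum_{i}n_{i}p_{i}$ with $p_{i}$ closed points of $X_{\nu}$,
$(A,z_{\nu})=\sum_{i}n_{i}\,\mathrm{inv}_{\nu}\left(\mathrm{cores}_{k(p_{i})/K_{\nu}}A(p_{i})\right)\in\mathbb{Q}/\mathbb{Z},$
where $A(p_{i})\in Br(k(p_{i}))$ is the evaluation of $A$ at $p_{i}$, $k(p_{i})$ is the residue field of $p_{i}$, and $K_{\nu}$ is the completion at $\nu$.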
A particular case of the above conjecture is the following.
###### Conjecture 1.2.
Let $X$ be a smooth projective variety defined over the function field
$\mathbb{F}_{q}(B)$ of a smooth curve $B$ defined over a finite field
$\mathbb{F}_{q}$. Suppose that for every place $\nu$ of $\mathbb{F}_{q}(B)$,
there is a zero cycle
$z_{\nu}\in CH_{0}(X_{\nu})$
of degree prime to $\ell$. Suppose that for every element $A\in
Br(X)\\{\ell\\}$, we have
$\sum_{\nu}Inv(A(z_{\nu}))=0.$
Then there is a cycle $z\in CH_{0}(X)$ of degree prime to $\ell$.
In this paper, for an abelian group $A$, we use
$A\hat{\otimes}\mathbb{Z}_{\ell}$ to denote the inverse limit
$\lim\limits_{\longleftarrow}A/\ell^{n}A$. The following stronger form of the
above conjectures is also well-known.
###### Conjecture 1.3.
Let $X$ be a smooth projective variety defined over a global field $K$. Let
$\ell$ be a prime number invertible in $K$. There is an exact sequence:
$CH_{0}(X)\hat{\otimes}\mathbb{Z}_{\ell}\to\Pi_{\nu\in\Omega(K)}CH_{0}(X_{\nu})\hat{\otimes}\mathbb{Z}_{\ell}\to
Hom(Br(X)\\{\ell\\},\mathbb{Q}/\mathbb{Z}).$
Conjectures 1.1 and 1.2 are consequences of this, by considering the
commutative diagram formed by the various cycle class maps. On the other hand,
if the cycle class map $CH_{0}(X_{\nu})\hat{\otimes}\mathbb{Z}_{\ell}\to
H^{2d}_{\text{\'{e}t}}(X_{\nu},\mathbb{Z}_{\ell}(d))$ is injective, Conjecture 1.1
and the stronger Conjecture 1.3 are equivalent. In general this injectivity
fails. But we will see that in many (and conjecturally all) cases of interest
to us, the injectivity holds.
One of the main theorems of this article is the following.
###### Theorem 1.4.
Let $X$ be a smooth projective geometrically rational surface defined over the
function field $\mathbb{F}_{q}(B)$ of a smooth projective curve $B$. Then
Conjecture 1.3, and hence Conjectures 1.1 and 1.2, hold for $X$.
By a happy coincidence, we deduce a corollary for rational points.
###### Theorem 1.5.
Let $X$ be a del Pezzo surface of degree $4$ defined over a global function
field of odd characteristic. Then the Brauer-Manin obstruction is the only
obstruction to the Hasse principle for rational points on $X$.
###### Proof.
If $X$ has a rational point everywhere locally and the corresponding adelic
point satisfies the Brauer-Manin constraint, then there is a zero cycle of
degree $1$ over the function field by
Theorem 1.4. A del Pezzo surface of degree $4$ is a complete intersection of
$2$ quadrics in $\mathbb{P}^{4}$. Such a complete intersection has a rational
point if and only if it has a zero cycle of odd degree [Bru78]. Hence we have the
result. ∎
###### Remark 1.6.
One is also interested in studying weak approximation for rational points on a
del Pezzo surface of degree $4$. For a del Pezzo surface of degree $4$ over a
number field, assuming that there is a rational point, Salberger and
Skorobogatov [SS91] proved that the Brauer-Manin obstruction is the only
obstruction to weak approximation. As the author has been informed by Colliot-
Thélène, essentially the same argument also proves that over a global function
field of odd characteristic, the Brauer-Manin obstruction is the only
obstruction to weak approximation once there is a rational point. In characteristic $2$,
some partial results are contained in the joint work of the author with Letao
Zhang [TZ18].
We finish this section with some previously known results. There is a vast
literature on the local-global principles for zero-cycles/rational points on
geometrically rational surfaces. Let us only mention a few relevant results
and refer the readers to survey articles such as [Wit18] etc. for a more
comprehensive list.
Colliot-Thélène proved that Conjecture 1.3 holds for ruled surfaces defined over
number fields [CT00]. The global function field version for ruled surfaces was
proved by Parimala-Suresh [PS16]; their proof depends on the computation of
degree $3$ unramified cohomology and also establishes the integral Tate
conjecture for conic bundles over surfaces defined over finite fields. An
interesting example of cubic surfaces of the form
$(f+tg=0)\subset\mathbb{P}^{3}\times\mathbb{A}^{1}_{t}$ is studied by Colliot-
Thélène-Swinnerton-Dyer [CTSD12]. In addition to proving that the Hasse principle
for zero cycles holds for cubic surfaces of this form, they also prove that
the existence of rational points is equivalent to the existence of a zero
cycle of degree $1$ for such surfaces.
The study of complete intersections of two quadrics has also drawn much
attention, starting with the work of Colliot-Thélène, Sansuc, and Swinnerton-
Dyer [CTSSD87]. Heath-Brown proved that the Hasse principle for rational points
holds for smooth complete intersections of two quadrics in $\mathbb{P}^{7}$
over number fields [HB18]; see also a different proof by Colliot-Thélène
[CT22]. Assuming the finiteness of Tate-Shafarevich groups of elliptic curves
and the validity of Schinzel’s hypothesis, Wittenberg proved that the Hasse
principle holds for such complete intersections in $\mathbb{P}^{5}$ and in
some cases in $\mathbb{P}^{4}$ over number fields [Wit07]. The author has shown
in a previous paper [Tia17] that Hasse principle for rational points holds for
smooth complete intersections of two quadrics in $\mathbb{P}^{n},n\geq 5$
defined over a global function field of odd characteristic.
### 1.2. Integral Tate conjecture
Our approach to Theorem 1.4 is based on the close relation between an integral
version of the Tate conjecture and Colliot-Thélène’s conjectures, first studied by
Saito [Sai89] and Colliot-Thélène [CT99].
Let $X$ be a smooth projective geometrically irreducible variety of dimension
$d$ defined over a finite field $\mathbb{F}$. We have the cycle class maps:
(1) $CH_{1}(X)\otimes\mathbb{Z}_{\ell}\to H^{2d-2}(X,\mathbb{Z}_{\ell}(d-1)),$
(2) $CH_{1}(X)\otimes\mathbb{Z}_{\ell}\to
H^{2d-2}(X,\mathbb{Z}_{\ell}(d-1))\to
H^{2d-2}(\bar{X},\mathbb{Z}_{\ell}(d-1))^{G},$ (3)
$CH_{1}(\bar{X})\otimes\mathbb{Z}_{\ell}\to\cup_{K/\mathbb{F}}H^{2d-2}(\bar{X},\mathbb{Z}_{\ell}(d-1))^{G_{K}}\subset
H^{2d-2}(\bar{X},\mathbb{Z}_{\ell}(d-1)).$
We also have the corresponding cycle class maps after tensoring with
$\mathbb{Q}_{\ell}$. The Tate conjecture predicts that the cycle class map on
codimension $r$ cycles
$CH^{r}(X)\otimes\mathbb{Q}_{\ell}\to
H^{2r}_{\text{\'{e}t}}(X,\mathbb{Q}_{\ell}(r))$
is surjective for any smooth projective variety defined over a finite field.
While the cycle class map is in general not surjective for $\mathbb{Z}_{\ell}$
coefficients, one is still interested in knowing in which cases surjectivity
still holds. This is usually called the integral Tate conjecture (even though
it is not true in general).
###### Definition 1.7.
Let $X$ be a smooth, proper variety over an algebraically closed field. Given
any $f:\mathbb{P}^{1}\to X$, the pull-back $f^{*}T_{X}$ decomposes as a direct
sum of line bundles $\oplus_{i=1}^{\dim X}\mathcal{O}(a_{i})$. We say that $X$
is _separably rationally connected_ , or _SRC_ , if there is a morphism $f$
for which $a_{i}>0$ for every $i$. We say that $X$ is _separably rationally
connected in codimension $1$_ or _SRC in codimension $1$_ if there is a
morphism $f$ for which $a_{i}\geq 0$ for every $i$, with strict inequality for
all but one $a_{i}$.
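For instance (a standard example, included for illustration): $\mathbb{P}^{n}$
is SRC. Restricting the Euler sequence to a line $\ell\subset\mathbb{P}^{n}$
gives
$T_{\mathbb{P}^{n}}|_{\ell}\cong\mathcal{O}_{\ell}(2)\oplus\mathcal{O}_{\ell}(1)^{\oplus(n-1)},$
so $a_{i}>0$ for every $i$ for this morphism. Similarly, if $\mathcal{X}\to B$
is a fibration over a curve whose smooth fibers are projective spaces, then for
a line $\ell$ in a smooth fiber (with $\mathcal{X}$ smooth along $\ell$) we get
$T_{\mathcal{X}}|_{\ell}\cong\mathcal{O}_{\ell}(2)\oplus\mathcal{O}_{\ell}(1)^{\oplus(n-1)}\oplus\mathcal{O}_{\ell}$,
so exactly one $a_{i}$ vanishes and $\mathcal{X}$ is SRC in codimension $1$.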
###### Remark 1.8.
The term SRC was introduced by Kollár-Miyaoka-Mori [KMM92]. The term SRC in
codimension $1$ was introduced in [KT23]. Main examples include SRC varieties
and fibrations over a curve with smooth proper SRC general fibers. In
characteristic $0$, these are all the examples. In positive characteristic
there are more examples. In any case, one can take the quotient by free
rational curves on a variety that is SRC in codimension $1$. The quotient is
either a curve or a point. In particular, the Chow group of $0$-cycles on such
varieties is supported in a curve.
The results of this paper, together with some results proved by the author in
[Tia20], strongly suggest that the following is true.
###### Conjecture 1.9.
Let $X$ be a smooth projective variety defined over a finite field. Assume
that $X$ is separably rationally connected in codimension $1$. Then the cycle
class map
$CH_{1}(X)\otimes\mathbb{Z}_{\ell}\to
H^{2d-2}_{\text{\'{e}t}}(X,\mathbb{Z}_{\ell}(d-1))$
is surjective, where $d=\dim X$.
We refer the reader to Theorem 1.13 and Remark 1.15 for evidence for this
conjecture.
The connection between the integral Tate conjecture and Conjectures 1.1 and
1.2 is the following.
###### Theorem 1.10 ([CT99] Proposition 3.2, [Sai89] Corollary (8-6)).
Let $\mathbb{F}$ be a finite field, $C$ a smooth projective geometrically
connected curve over $\mathbb{F}$, and $K$ the function field of $C$. Let
$\mathcal{X}$ be a smooth projective geometrically connected variety of
dimension $d+1$ defined over $\mathbb{F}$, equipped with a morphism
$p:\mathcal{X}\to C$, whose generic fiber is smooth and geometrically
irreducible. Let $\ell$ be a prime different from the characteristic.
1. (1)
If the cycle class map
$CH^{d}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to
H^{2d}_{\text{{\'{e}}t}}(\mathcal{X},\mathbb{Z}_{\ell}(d))$
is surjective, Conjectures 1.1 and 1.2 are true.
2. (2)
If the cycle class map
$CH^{d}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to
H^{2d}_{\text{{\'{e}}t}}(\mathcal{X},\mathbb{Z}_{\ell}(d))\to
H^{2d}_{\text{{\'{e}}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d))^{G}$
is surjective, or if
$CH^{d}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to
H^{2d}_{\text{{\'{e}}t}}(\mathcal{X},\mathbb{Z}_{\ell}(d))$
is surjective modulo torsion, Conjecture 1.2 is true.
###### Remark 1.11.
The cited references only contain a proof of the first statement. But the
second statement follows from the same proof. The general result of Saito
produces a cohomology class $\xi\in H^{2d}(\mathcal{X},\mathbb{Z}_{\ell}(d))$
whose restriction to each local place coincides with the class of
$z_{\nu}$ ([CT99, Proposition 3.1]). The various forms of the integral Tate
conjecture are then used to find a global cycle whose class agrees with
$\xi$ in the various cohomology groups. See also page 19 of the slides of Colliot-
Thélène’s lecture at Cambridge in 2008 (available at
https://www.imo.universite-paris-saclay.fr/~jean-louis.colliot-
thelene/expocambridge240809.pdf).
A result of Schoen [Sch98] says that if the Tate conjecture is true for
divisors on all smooth projective surfaces defined over finite fields, then
for any smooth projective variety $V$ defined over a finite field
$\mathbb{F}$, the cycle class map
$CH_{1}(\bar{V})\otimes\mathbb{Z}_{\ell}\to\cup_{K/\mathbb{F}}H^{2d-2}(\bar{V},\mathbb{Z}_{\ell}(d-1))^{\text{Gal}(\bar{\mathbb{F}}/K)}\subset
H^{2d-2}(\bar{V},\mathbb{Z}_{\ell}(d-1))$
is surjective, where $d=\dim V$ and $\bar{V}$ is the base change of $V$ to an
algebraic closure of $\mathbb{F}$.
If furthermore $V$ is SRC in codimension $1$, since its Chow group of zero
cycles is supported in a curve, it is easy to see that every class in
$H^{2d-2}(\bar{V},\mathbb{Q}_{\ell}(d-1))$ is algebraic. Thus every class in
$H^{2d-2}(\bar{V},\mathbb{Z}_{\ell}(d-1))$ is fixed by some open subgroup of
the Galois group. So in this case, Schoen’s theorem implies that we always
have a surjection
$CH_{1}(\bar{V})\otimes\mathbb{Z}_{\ell}\to H^{2d-2}(\bar{V},\mathbb{Z}_{\ell}(d-1)),$
provided that the Tate conjecture holds for all surfaces.
The paper [CTS10] discussed the implication of Schoen’s result for varieties
defined over $\bar{\mathbb{F}}(C)$, the function field of a curve defined over
$\bar{\mathbb{F}}$. Colliot-Thélène and Kahn analyzed the surjectivity of the
$\mathbb{Z}_{\ell}$-coefficient cycle class map for codimension $2$ cycles and
its relation with degree $3$ unramified cohomology
$H^{3}_{\text{nr}}(X,\mathbb{Q}_{\ell}/\mathbb{Z}_{\ell}(2))$ in [CTK13] (over
the complex numbers, such a relation is studied in [CTV12]). For the sake of
brevity, and since we will not need these notions for the other parts of this
paper, we will not define this invariant. Instead, we refer the interested
reader to these papers and the references therein for definitions and
properties of unramified cohomology. In particular, if the unramified
cohomology $H^{3}_{\text{nr}}(X,\mathbb{Q}_{\ell}/\mathbb{Z}_{\ell}(2))$
vanishes, the cokernel of $CH^{2}(X)\otimes\mathbb{Z}_{\ell}\to
H^{4}(X,\mathbb{Z}_{\ell}(2))$ is torsion free [CTK13, Théorème 2.2]. Thus if,
in addition, we know that the cokernel is torsion (for instance, if the Chow
group of zero cycles with rational coefficients is universally supported in a
surface [CTK13, Proposition 3.2]), we know the cycle class map is surjective.
One should also note that by the Tate conjecture, one expects the cokernel to
be torsion. In general, they deduced a short exact sequence relating various
Chow groups of codimension $2$ cycles and degree $3$ unramified cohomology.
Their short exact sequence for varieties over finite fields reads as follows
([CTK13, Théorème 6.8]):
(4) $0\to\text{Ker}(CH^{2}(X)\to CH^{2}(\bar{X}))\to
H^{1}(\mathbb{F},\oplus_{\ell}H^{3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(2))_{\text{tors}})\to\text{Ker}(H^{3}_{\text{nr}}(X,\mathbb{Q}/\mathbb{Z}(2))\to
H^{3}_{\text{nr}}(\bar{X},\mathbb{Q}/\mathbb{Z}(2)))\to\text{Coker}(CH^{2}(X)\to
CH^{2}(\bar{X})^{G})\to 0.$
Of course, one can deduce from this a similar exact sequence for
$\ell$-primary torsion.
In particular, we can apply their results to $3$-folds, which then relates the
integral Tate conjecture to the vanishing of degree $3$ unramified cohomology.
Note that by the Lefschetz hyperplane theorem, if we can prove the integral Tate
conjecture for one cycles on all $3$-folds, then we prove the integral Tate
conjecture for one cycles on all smooth projective varieties.
Several groups of authors have proved the vanishing of the degree $3$ unramified
cohomology on certain threefolds and deduced the integral Tate conjecture for
one cycles, thus proving Conjectures 1.1 and 1.2 for some surfaces defined
over a global function field. See, for example, [PS16] for the case of conic
bundles over a surface, [CTS21] and [Sca22] for the case of a product of a
curve with a $CH_{0}$-trivial surface.
We prove Theorem 1.4 as a consequence of the following case of the integral
Tate conjecture for one cycles.
###### Theorem 1.12.
Let $\pi:\mathcal{X}\to B$ be a projective flat family of surfaces over a
smooth projective curve $B$ defined over a finite field $\mathbb{F}_{q}$.
Assume that $\mathcal{X}$ is smooth and that the geometric generic fiber is a
smooth rational surface. Then the integral Tate conjecture holds for one cycles.
More concretely, the cycle class map
$CH_{1}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to
H^{4}_{\text{\'{e}t}}(\mathcal{X},\mathbb{Z}_{\ell}(2))$
is surjective.
In general, one can deduce the following geometric criterion for the validity
of the integral Tate conjecture (Conjecture 1.9) and the local-global
principles for separably rationally connected varieties defined over global
function fields.
Given a variety $V$ defined over a field $k$, we denote by $A_{1}(V)$ the
group of one cycles in $V$ modulo algebraic equivalence. We also use
$\overline{V}$ to denote the base change of $V$ to an algebraic closure of
$k$.
###### Theorem 1.13.
Let $\pi:\mathcal{X}\to B$ be a projective flat family of varieties over a
smooth projective curve $B$ defined over a finite field $\mathbb{F}_{q}$.
Assume that $\mathcal{X}$ is smooth and that the generic fiber is smooth,
separably rationally connected, and of dimension $d$. Consider the following
hypothesis:
* (A)
The cycle class map $A_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell}\to
H^{2d}_{\text{\'{e}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d))$ is
surjective.
* (B)
The cycle class map $A_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell}\to
H^{2d}_{\text{\'{e}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d))$ is
injective.
* (C)
The cycle class map from higher Chow groups
$\lim\limits_{\longleftarrow}CH_{1}(\overline{\mathcal{X}},1,\mathbb{Z}/\ell^{n}\mathbb{Z})\to
H^{2d-1}_{\text{\'{e}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d))$
is surjective.
* (D)
The coniveau filtration
$N^{1}H^{2d-1}_{\text{\'{e}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell})$ is
the whole cohomology group
$H^{2d-1}_{\text{\'{e}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}).$
If $\overline{\mathcal{X}}$ satisfies hypotheses (A) and (B), then the
composite cycle class map
$CH_{1}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to
H^{2d}_{\text{\'{e}t}}(\mathcal{X},\mathbb{Z}_{\ell}(d))\to
H^{2d}_{\text{\'{e}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d))^{Gal(\bar{\mathbb{F}}_{q}/\mathbb{F}_{q})}$
is surjective, and Conjecture 1.2 holds for the generic fiber $X$ over
$\mathbb{F}_{q}(B)$.
If $\overline{\mathcal{X}}$ satisfies hypothesis (C) or (D), then the cycle
class map
$CH_{1}(\mathcal{X})_{\text{alg}}\otimes\mathbb{Z}_{\ell}\to
H^{1}(\mathbb{F}_{q},H^{2d-1}_{\text{\'{e}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d)))$
is surjective.
Thus Conjecture 1.1 holds for the generic fiber $X$ over $\mathbb{F}_{q}(B)$
if either hypotheses (A), (B), (C) or hypotheses (A), (B), (D) hold.
###### Remark 1.14.
The statements in Hypotheses (A), (B), (C), (D) depend only on the stable
birational class of the generic fiber of $\overline{\mathcal{X}}\to\bar{B}$.
In particular, they only depend on the stable birational class of the generic
fiber $X$ over the field $\mathbb{F}_{q}(B)$ (assuming that there is a smooth
projective model for every stable birational class of $X$). Also note that
Conjectures 1.1, 1.2, 1.3, 1.9 only depend on the stable birational class of
the variety over $\mathbb{F}_{q}(B)$ (or $\mathbb{F}$ for Conjecture 1.9).
###### Remark 1.15.
We make a few simple remarks about the validity of the hypotheses above. First
of all, it is a simple exercise to prove that all these hypotheses hold if we
use $\mathbb{Q}_{\ell}$-coefficients, and that they hold for all but finitely
many $\ell$.
As discussed above, Hypothesis (A) follows from Tate’s conjecture on surfaces.
The author has made conjectures on the Kato homology of rationally connected
fibrations over an algebraically closed field of characteristic $0$ in
[Tia20]. A special case of the conjecture predicts that for a rationally
connected fibration over a curve defined over an algebraically closed field of
characteristic $0$, hypotheses (A), (B), (C), and (D) hold. It is quite
reasonable to believe that the same is true for separably rationally connected
fibrations in characteristic $p>0$. We discuss some examples in Section 6.
Now we explain a corollary of Theorem 1.12, which confirms a conjecture of
Colliot-Thélène and Kahn ([CTK13, Conjecture 5.8]) up to $p$-torsion.
###### Corollary 1.16.
Let $X$ be a smooth projective threefold defined over a finite field
$\mathbb{F}$. Assume that $X$ admits a fibration structure over a smooth
projective curve with smooth projective geometrically rational generic fiber.
Then we have
$H^{3}_{\text{nr}}(X,\mathbb{Q}_{\ell}/\mathbb{Z}_{\ell}(2))=0,$
for any $\ell$ invertible in $\mathbb{F}$, and a short exact sequence
$0\to H^{1}(\mathbb{F},H^{3}(\bar{X},\mathbb{Z}_{\ell}(2))\\{\ell\\})\to
CH_{1}(X)\otimes\mathbb{Z}_{\ell}\to
CH_{1}(\bar{X})^{G}\otimes\mathbb{Z}_{\ell}\to 0.$
###### Proof.
The vanishing of $H^{3}_{\text{nr}}(X,\mathbb{Q}_{\ell}/\mathbb{Z}_{\ell}(2))$
follows from Theorem 1.12, [CTK13, Theorem 2.2, Proposition 3.2], and the fact
that $CH_{0}(\bar{X})$ is supported in a curve.
It then follows from the exact sequence (4) that we have the above
description of the Chow groups of $X$ and $\bar{X}$. ∎
###### Remark 1.17.
Theorem 1.13 also holds for smooth projective separably rationally connected
varieties; we have written the proof so that it works in both cases. A cheaper way to get
this result is to note that the validity of the integral Tate conjecture is a
stable birational invariant and apply the above theorems to the product
$\mathbb{P}^{1}\times X$ ($X$ separably rationally connected) as a fibration
over $\mathbb{P}^{1}$. Unfortunately, for a separably rationally connected
threefold $V$ defined over ${\mathbb{F}}_{q}$, we do not know if the cycle
class map
$CH_{1}(\bar{V})\otimes\mathbb{Z}_{\ell}\to
H^{4}(\bar{V},\mathbb{Z}_{\ell}(2))$
is surjective. Once we know this (e.g. if we are willing to assume the Tate
conjecture for surfaces), the same argument as above shows that
$CH_{1}({V})\otimes\mathbb{Z}_{\ell}\to H^{4}({V},\mathbb{Z}_{\ell}(2))$
is surjective. One can also deduce the vanishing of degree $3$ unramified
cohomology and the short exact sequence of Chow groups as above.
### 1.3. Coniveau and strong coniveau
This section is a digression into several a priori different notions of
coniveau filtrations on the cohomology of a variety introduced by Benoist-Ottem
and Voisin in [BO21, Voi22]. These notions will play an important role when we
return to discuss the integral Tate conjecture in the next section.
Let us first review the definitions.
###### Definition 1.18.
Let $X$ be a smooth projective variety of dimension $n$ defined over an
algebraically closed field. Given an abelian group $A$ that is one of
$\mathbb{Z}/n\mathbb{Z},\mathbb{Z}_{\ell},\mathbb{Z},\mathbb{Q}$, or
$\mathbb{Q}_{\ell}$, we simply write $H^{k}(X,A)$ for either the étale
cohomology with coefficients in $A$ or the singular cohomology with
coefficients in $A$ (if $X$ is a complex variety). We have the following
closely related filtrations on the cohomology $H^{k}(X,A)$.
1. (1)
The coniveau filtration:
$N^{c}H^{k}(X,A):=\sum_{f:Y\to X}f_{*}(H_{2n-k}(Y,A))\subset
H_{2n-k}(X,A)\cong H^{k}(X,A),$
where the sum is taken over all morphisms from projective algebraic sets
$f:Y\to X,\dim Y\leq n-c$;
2. (2)
The strong coniveau filtration:
$\tilde{N}^{c}H^{k}(X,A):=\sum_{f:Y\to X}f_{*}(H_{2n-k}(Y,A))\subset
H_{2n-k}(X,A)\cong H^{k}(X,A),$
where the sum is taken over all morphisms from smooth projective varieties
$f:Y\to X,\dim Y\leq n-c$.
3. (3)
The strong cylindrical filtration:
$\tilde{N}_{c,\text{cyl}}H^{k}(X,A):=\sum\Gamma_{*}(H_{2n-k-2c}(Z,A))\subset
H_{2n-k}(X,A)\cong H^{k}(X,A)$
where the sum is taken over all _smooth_ projective varieties $Z$ and
correspondences $\Gamma\subset Z\times X$ of relative dimension $c$ over $Z$.
4. (4)
The strong equidimensional cylindrical filtration:
$\tilde{N}_{c,\text{cyl},\text{eq}}H^{k}(X,A):=\sum\Gamma_{*}(H_{2n-k-2c}(Z,A))\subset
H_{2n-k}(X,A)\cong H^{k}(X,A)$
where the sum is taken over all smooth projective varieties $Z$ and
correspondences $\Gamma\subset Z\times X$ that is equidimensional of relative
dimension $c$ over $Z$.
5. (5)
The semi-stable filtration: $N_{c,\text{st},\text{cyl}}H^{k}(X,\mathbb{Z})$ as
the group generated by the cylinder homomorphisms
$f_{*}\circ p^{*}:H_{2n-k-2c}(Z,\mathbb{Z})\to H_{2n-k}(X,\mathbb{Z})\cong
H^{k}(X,\mathbb{Z}),$
for all morphisms $f:Y\to X$, and flat projective morphisms $p:Y\to Z$ of
relative dimension $c$ with simple normal crossing fibers, where $\dim Z\leq
2n-k-2c$.
6. (6)
We use the notations $N^{c}H_{k}$ etc. to denote the filtrations on Borel-
Moore or singular homology $H_{k}$. Since $X$ is smooth, these are the same as
the filtrations $N^{c}H^{2n-k}$.
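For orientation, we record two immediate consequences of the definitions.
Taking $Y=X$ and $f=\mathrm{id}$ (allowed in both (1) and (2), since $X$ is
smooth) gives
$N^{0}H^{k}(X,A)=\tilde{N}^{0}H^{k}(X,A)=H^{k}(X,A),$
and both filtrations are visibly decreasing in $c$. For $c=1$ and $k=2$ over
$\mathbb{C}$, the class of any divisor $D\subset X$ lies in
$\tilde{N}^{1}H^{2}(X,\mathbb{Z})$: push forward the fundamental class of a
resolution of singularities $\tilde{D}\to D$ along the composite
$\tilde{D}\to X$.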
A general relation between these filtrations is the following.
###### Lemma 1.19.
[Voi22, Proposition 1.3] We have the following inclusions:
$\tilde{N}_{n-c,\text{cyl},\text{eq}}H^{2c-1}(X,A)\subset\tilde{N}_{n-c,\text{cyl}}H^{2c-1}(X,A)=\tilde{N}^{c}H^{2c-1}(X,A)\subset
N^{c}H^{2c-1}(X,A).$
The only non-obvious part, the equality in the middle, is proved by Voisin
[Voi22, Proposition 1.3].
A natural question is whether or not these filtrations agree with each other.
If we use $\mathbb{Q}$ or $\mathbb{Q}_{\ell}$ coefficients, the theory of
weights shows that the strong coniveau and coniveau filtrations are
equivalent. Since the difference between some of these filtrations also gives
stable birational invariants, one wonders if this could be used to prove non-
stable-rationality for some rationally connected varieties.
Examples with $\mathbb{Z}$-coefficients where the strong coniveau filtration
and coniveau filtration differ are constructed in [BO21]. More precisely, they
prove the following.
###### Theorem 1.20.
[BO21, Theorem 1.1] For all $c\geq 1$ and $k\geq 2c+1$, there is a smooth
projective complex variety $X$ such that the inclusion
$\tilde{N}^{c}H^{k}(X,\mathbb{Z})\subset N^{c}H^{k}(X,\mathbb{Z})$
is strict. One may choose $X$ to have torsion canonical bundle. If $c\geq 2$,
one may choose $X$ to be rational.
The examples above usually have large dimension, especially when $c$ is
large. For lower-dimensional examples, Benoist-Ottem proved the following.
###### Theorem 1.21.
[BO21, Theorem 1.2] For $k\in\\{3,4\\}$, there is a smooth projective complex
variety $X$ of dimension $k+1$ with torsion canonical bundle such that the
inclusion
$\tilde{N}^{1}H^{k}(X,\mathbb{Z})\subset N^{1}H^{k}(X,\mathbb{Z})$
is strict.
These examples leave the case of $c=1$ open for threefolds and for rationally
connected varieties. Voisin studied the strong coniveau filtrations on
$H^{2d-3}$ [Voi22].
###### Theorem 1.22.
[Voi22, Theorem 2.6, Corollary 2.7, Theorem 2.17] Let $X$ be a smooth
projective variety of dimension $d$ defined over $\mathbb{C}$.
1. (1)
Assume the Walker Abel-Jacobi map ([Wal07])
$\phi:CH_{1}(X)_{\text{alg}}\to J(N^{1}H^{2d-3}(X,\mathbb{Z}))$
is injective on torsion classes. Then we have
$N_{1,\text{st},\text{cyl}}H^{2d-3}(X,\mathbb{Z})/\text{Tor}=N^{1}H^{2d-3}(X,\mathbb{Z})/\text{Tor}.$
2. (2)
If $\dim X$ is $3$, we have
$N_{1,\text{st},\text{cyl}}H^{3}(X,\mathbb{Z})/\text{Tor}=N^{1}H^{3}(X,\mathbb{Z})/\text{Tor}.$
3. (3)
If $X$ is rationally connected, we have
$N_{1,\text{st},\text{cyl}}H^{2d-3}(X,\mathbb{Z})=\tilde{N}_{1}H^{2d-3}(X,\mathbb{Z}).$
As a consequence,
$N_{1,\text{st},\text{cyl}}H^{2d-3}(X,\mathbb{Z})/\text{Tor}=\tilde{N}^{1}H^{2d-3}(X,\mathbb{Z})/\text{Tor}=N^{1}H^{2d-3}(X,\mathbb{Z})/\text{Tor}.$
We prove an improvement of Voisin’s results.
###### Theorem 1.23.
Let $X$ be a complex smooth projective variety of dimension $d$. Then the
following two filtrations agree with each other:
$N_{1,\text{st},\text{cyl}}H_{3}(X,\mathbb{Z})=N^{1}H^{2d-3}(X,\mathbb{Z}).$
Assume furthermore that $X$ is SRC in codimension $1$. There is a smooth
projective curve $C$ with a family of $1$-dimensional cycles $\Gamma\subset
C\times X$ such that
$\Gamma_{*}:H_{1}(C,\mathbb{Z})\to H_{3}(X,\mathbb{Z})$
has the same image as the s-map $s:L_{1}H_{3}(X)\to H_{3}(X)$, which is the
same as $N^{1}H^{3}(X,\mathbb{Z})$. In particular, the following
filtrations on $H^{2d-3}(X,\mathbb{Z})$ introduced in Definition 1.18 are the
same:
$\tilde{N}_{1,\text{cyl},\text{eq}}H^{2d-3}(X,\mathbb{Z})=\tilde{N}_{1,\text{cyl}}H^{2d-3}(X,\mathbb{Z})=\tilde{N}^{d-1}H^{2d-3}(X,\mathbb{Z})=N^{d-1}H^{2d-3}(X,\mathbb{Z}).$
For the definition of Lawson homology and the s-map, see Definitions 3.5 and
3.10 in Section 3.
An immediate corollary is the following.
###### Theorem 1.24.
Let $X$ be a complex smooth projective variety that is SRC in codimension
$1$. Then the following filtrations on $H^{3}(X,\mathbb{Z})$ introduced in
Definition 1.18 coincide with the whole cohomology group:
$\tilde{N}_{1,\text{cyl},\text{eq}}H^{3}(X,\mathbb{Z})=\tilde{N}_{1,\text{cyl}}H^{3}(X,\mathbb{Z})=\tilde{N}^{1}H^{3}(X,\mathbb{Z})=N^{1}H^{3}(X,\mathbb{Z})=H^{3}(X,\mathbb{Z}).$
###### Remark 1.25.
Using the decomposition of the diagonal argument, one can show that when $X$
is SRC in codimension $1$, for each $i$, there is a smooth projective variety
$Y_{i}$ and a family of cycles $\Gamma_{i}\subset Y_{i}\times X$ such that the
cokernel of $\Gamma_{*}:H_{i}(Y_{i})\to H_{i+2}(X)$ is $N$-torsion for a fixed
$N$. So we may consider the s-map from $\mathbb{Z}/N$ Lawson homology (defined
as the homotopy group of the topological group $Z_{r}(X)\otimes\mathbb{Z}/N$)
to $H_{3}(X,\mathbb{Z}/N)$. We have long exact sequences
$\begin{CD}L_{1}H_{i}(X,\mathbb{Z})@>{\cdot N}>>L_{1}H_{i}(X,\mathbb{Z})@>>>L_{1}H_{i}(X,\mathbb{Z}/N)@>>>L_{1}H_{i-1}(X,\mathbb{Z})\\
@VVV@VVV@VVV@VVV\\
H_{i}(X,\mathbb{Z})@>{\cdot N}>>H_{i}(X,\mathbb{Z})@>>>H_{i}(X,\mathbb{Z}/N)@>>>H_{i-1}(X,\mathbb{Z})\end{CD}$
By results of Suslin-Voevodsky [SV96] and the Bloch-Kato conjecture proved by
Voevodsky, there is an isomorphism
$L_{1}H_{2+i}(X,\mathbb{Z}/N)\cong
CH_{1}(X,i,\mathbb{Z}/N)\cong\mathbb{H}^{i}(X,\tau^{\leq\dim
X-1}R\pi_{*}(\mathbb{Z}/N))$
between torsion Lawson homology, Bloch's higher Chow groups, and a certain
Zariski hypercohomology group, where $\pi:X_{cl}\to X_{zar}$ is the continuous map
from $X(\mathbb{C})$ with the analytic topology to $X$ with the Zariski
topology.
We also have a long exact sequence:
$\ldots\to L_{1}H_{k}(X,\mathbb{Z}/N)\to H_{k}(X,\mathbb{Z}/N)\to
KH_{k}(X,\mathbb{Z}/N)\to L_{1}H_{k-1}(X,\mathbb{Z}/N)\to\ldots$
where $KH_{k}(X,\mathbb{Z}/N)=H^{\dim X-k}(X,R^{\dim X}\pi_{*}\mathbb{Z}/N)$
is the so-called Kato homology. The author has made a number of conjectures
about Kato homologies of a rationally connected fibration in [Tia20]. Special
cases of these conjectures imply that there is an isomorphism
$L_{1}H_{i}(X,\mathbb{Z}/N)\cong H_{i}(X,\mathbb{Z}/N)$ for all $i$ and all
rationally connected varieties and rationally connected fibrations over a
curve defined over the complex numbers. This in turn would imply the s-maps
$L_{1}H_{i}(X,\mathbb{Z})\to H_{i}(X,\mathbb{Z})$ are isomorphisms.
We have a similar result that applies to fields of positive characteristic.
###### Theorem 1.26 (=Theorem 4.14).
Let $X$ be a $d$-dimensional smooth projective variety defined over an
algebraically closed field, which is separably rationally connected in
codimension $1$. There is a smooth projective curve $C$ with a family of
$1$-dimensional cycles $\Gamma\subset C\times X$ such that
$\Gamma_{*}:H_{1}^{\text{BM}}(C,\mathbb{Z}_{\ell})\to
H_{3}^{\text{BM}}(X,\mathbb{Z}_{\ell})\cong
H^{2d-3}_{\text{\'{e}t}}(X,\mathbb{Z}_{\ell})$
surjects onto $N^{1}H^{2d-3}(X,\mathbb{Z}_{\ell})$.
###### Theorem 1.27 (=Theorem 4.16).
Let $X$ be a smooth projective $3$-fold defined over an algebraically closed
field, which is separably rationally connected in codimension $1$. Then the
following filtrations on $H^{3}(X,\mathbb{Z}_{\ell})$
introduced in Definition 1.18 equal the whole cohomology group:
$\tilde{N}_{1,\text{cyl},\text{eq}}H^{3}(X,\mathbb{Z}_{\ell})=\tilde{N}_{1,\text{cyl}}H^{3}(X,\mathbb{Z}_{\ell})=\tilde{N}^{1}H^{3}(X,\mathbb{Z}_{\ell})=N^{1}H^{3}(X,\mathbb{Z}_{\ell})=H^{3}(X,\mathbb{Z}_{\ell}).$
### 1.4. Integral Tate conjecture for one cycles: arithmetic part
We continue the discussion on integral Tate conjecture in this section.
The Serre-Hochschild spectral sequence gives an exact sequence:
$0\to
H^{1}(\mathbb{F}_{q},H^{2d-2r-1}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-r)))\to
H^{2d-2r}_{\text{\'{e}t}}(X,\mathbb{Z}_{\ell}(d-r))\to
H^{2d-2r}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-r))^{G}\to 0.$
Thus the integral Tate conjecture consists of a geometric part, i.e.
surjectivity of
$CH_{r}(X)\otimes\mathbb{Z}_{\ell}\to
H^{2d-2r}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-r))^{G},$
and an arithmetic part, i.e. surjectivity of
$CH_{r}(X)_{\text{hom}}\otimes\mathbb{Z}_{\ell}\to
H^{1}(\mathbb{F}_{q},H^{2d-2r-1}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-r))),$
where $CH_{r}(X)_{\text{hom}}\otimes\mathbb{Z}_{\ell}$ is the “geometrically
homologically trivial” part, i.e. the kernal of
$CH_{r}(X)\otimes\mathbb{Z}_{\ell}\to
H^{2d-2r}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-r))^{G}.$
In a recent preprint [SS22], Scavia and Suzuki systematically investigated the
surjectivity of the arithmetic part and related it to the strong coniveau
filtration. For codimension $2$ cycles, they obtained the following results.
###### Theorem 1.28.
[SS22, Theorem 1.3] Let $\mathbb{F}$ be a finite field and $\ell$ be a prime
number invertible in $\mathbb{F}$, and suppose that $\mathbb{F}$ contains a
primitive $\ell^{2}$-th root of unity. There exists a smooth projective
geometrically connected $\mathbb{F}$-variety $X$ of dimension $2\ell+2$ such
that the map
that the map
$CH^{2}(X)\otimes\mathbb{Z}_{\ell}\to
H^{4}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(2))^{G}$
is surjective whereas the map
$CH^{2}(X)_{\text{hom}}\otimes\mathbb{Z}_{\ell}\to
H^{1}(\mathbb{F}_{q},H^{3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(2)))$
is not.
###### Theorem 1.29.
[SS22, Theorem 1.4] Let $p$ be an odd prime. There exist a finite field
$\mathbb{F}$ of characteristic $p$ and a smooth projective geometrically
connected fourfold $X$ over $\mathbb{F}$ for which the image of the
composition
$H^{1}(\mathbb{F},H^{3}_{\text{\'{e}t}}(X,\mathbb{Z}_{2}(2))_{\text{tors}})\to
H^{1}(\mathbb{F},H^{3}_{\text{\'{e}t}}(X,\mathbb{Z}_{2}(2)))\to
H^{4}(X,\mathbb{Z}_{2}(2))$
contains a non-algebraic torsion class.
Shortly after Scavia-Suzuki’s preprint appeared, Benoist studied Steenrod
operations on Chow groups and cohomology [Ben22]. He obtained new examples
of non-algebraic cohomology classes over many fields
($\mathbb{C},\mathbb{R},\bar{\mathbb{F}}_{q},\mathbb{F}_{q}$) and for
cohomology classes on algebraizable smooth manifolds. In the case of finite
fields, his results removed the assumptions on $\ell^{2}$-th roots of unity in
Scavia-Suzuki’s results.
###### Theorem 1.30.
[Ben22, Theorem 4.12] Let $p\neq\ell$ be prime numbers, and let $\mathbb{F}$
be a finite subfield of $\bar{\mathbb{F}}_{p}$. There exist a
smooth projective geometrically connected variety $X$ of dimension $2\ell+3$
over $\mathbb{F}$ and a non-algebraic class
$x\in\text{Ker}(H^{4}_{\text{\'{e}t}}(X,\mathbb{Z}_{\ell}(2))\to
H^{4}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(2)))$.
The failure of the surjectivity is related to the discrepancy between the
strong coniveau and coniveau filtrations by the following.
###### Theorem 1.31.
[SS22, Theorem 1.5, Proposition 7.6] Let $X$ be a smooth projective
geometrically connected variety over a finite field $\mathbb{F}$ and $\ell$ be
a prime number invertible in $\mathbb{F}$. Suppose that the coniveau and
strong coniveau filtrations on $H^{3}_{\text{\'{e}t}}(X,\mathbb{Z}_{\ell}(2))$ coincide:
$N^{1}H^{3}(X,\mathbb{Z}_{\ell}(2))=\tilde{N}^{1}H^{3}_{\text{\'{e}t}}(X,\mathbb{Z}_{\ell}(2))$.
Then the $\ell$-adic algebraic Abel-Jacobi map is an isomorphism:
$CH^{2}(X)_{\text{alg}}\otimes\mathbb{Z}_{\ell}\to
H^{1}(\mathbb{F},N^{1}H^{3}(X,\mathbb{Z}_{\ell}(2))).$
In general, we have a surjection
$CH^{r}(X)_{\text{F-alg}}\otimes\mathbb{Z}_{\ell}\to\text{Image}(H^{1}(\mathbb{F},\tilde{N}^{r-1}H^{2r-1}(X,\mathbb{Z}_{\ell}(r)))\to
H^{1}(\mathbb{F},N^{r-1}H^{2r-1}(X,\mathbb{Z}_{\ell}(r)))).$
In [SS22], Scavia and Suzuki use $CH^{r}(X)_{\text{F-alg}}$ to
denote cycles algebraically equivalent to zero over $\mathbb{F}$, and
$CH^{r}(X)_{\text{alg}}$ to denote cycles defined over $\mathbb{F}$ that are
algebraically equivalent to zero over $\bar{\mathbb{F}}$. However, for
codimension $2$ cycles on varieties defined over $\mathbb{F}$, there is no
known example where $CH^{2}(X)_{\text{F-alg}}$ and $CH^{2}(X)_{\text{alg}}$
differ ([SS22, Question 8.2]), and
$CH^{2}(X)_{\text{F-alg}}\otimes\mathbb{Z}_{\ell}$ and
$CH^{2}(X)_{\text{alg}}\otimes\mathbb{Z}_{\ell}$ are isomorphic if the strong
coniveau coincides with the coniveau filtration on $H^{3}$ ([SS22, Propositions
7.10, 7.11]). Moreover, for one cycles on a separably rationally connected
variety or a separably rationally connected fibration over a curve defined
over a finite field, we know $CH_{1}(X)_{\text{F-alg}}$ and
$CH_{1}(X)_{\text{alg}}$ are the same [KT23, Theorem 6].
While the surjectivity in the arithmetic part fails in general, we do
expect it to hold for one cycles on varieties that are separably
rationally connected in codimension $1$.
As a corollary of Theorem 4.7 and the work of Scavia-Suzuki, we obtain the
following results regarding the arithmetic part of the cycle class map.
###### Corollary 1.32 (=Corollary 4.17).
Let $X$ be a smooth projective variety of dimension $d$ defined over a finite
field $\mathbb{F}_{q}$, which is separably rationally connected in codimension
$1$. Then we have a surjection
$CH_{1}(X)_{\text{alg}}\to
H^{1}(\mathbb{F}_{q},N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))).$
Furthermore, assume one of the following:
1. (1)
$N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))=H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))$.
2. (2)
The cycle class map
$cl:\varprojlim_{n}CH_{1}(\bar{X},1,\mathbb{Z}/\ell^{n})\to
H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1))$
is surjective.
Then every class in
$H^{1}(\mathbb{F}_{q},H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1)))$ is the class of
an algebraic cycle. In particular, this holds if $X$ has dimension $3$.
### 1.5. Algebraic equivalence
The key technical result in proving the main theorems is the joint work of the
author with János Kollár [KT23] studying algebraic equivalences of one cycles
on smooth projective varieties.
Algebraic equivalence between two cycles means that one has to add
complicated cycles to both of them and then obtain a family of cycles over a
curve. Given two stable maps whose images are algebraically equivalent
cycles, it is not clear that this algebraic equivalence of cycles
can be realized as a deformation of stable maps. In the joint work with
Kollár, we prove that this is always possible for curves on smooth projective
varieties.
###### Theorem 1.33.
[KT23, Theorem 1] Let $X$ be a smooth, projective variety over an
algebraically closed field $K$. Let $\pi_{i}:C_{i}\to X$ (for $i\in I$) be
finitely many morphisms of nodal curves to $X$ such that the
$(\pi_{i})_{*}[C_{i}]$ are algebraically equivalent to each other. Then there
is a morphism from a single nodal curve $\pi_{R}:R\to X$ and a family of
connected nodal curves over a connected curve $B$ with a morphism to $X$:
$B\xleftarrow{}S\xrightarrow{\pi}X$ such that for any $i\in I$, there is a
point $b_{i}\in B$ with fiber $S_{i}\cong C_{i}\cup R$ and
$(\pi|_{S_{i}}:C_{i}\to X)\cong(\pi_{i}:C_{i}\to X),(\pi|_{S_{i}}:R\to
X)\cong(\pi_{R}:R\to X)$.
That is, the deformation of cycles is visible as deformation of maps. This
result yields many interesting applications. For example, the study leads to
the following theorem.
###### Theorem 1.34.
[KT23, Theorem 6] Let $X_{k}$ be a smooth, projective variety over a perfect
field $k$, which is separably rationally connected in codimension $1$. Then
the kernel of the natural map
$A_{1}(X_{k})\to A_{1}({X}_{\bar{k}})$
is either trivial or $\mathbb{Z}/2\mathbb{Z}$. More precisely,
1. (1)
the kernel is trivial if $X_{k}$ contains an odd degree $0$-cycle, and
2. (2)
if $Z=\sum_{i}d_{i}C_{i}$ and $Z_{\bar{k}}$ is algebraically equivalent to $0$
over $\bar{k}$, then $Z$ is algebraically equivalent to $0$ over $k$ if and
only if the index of $X$ divides
$\chi(Z):=\sum_{i}d_{i}\chi(C_{i},\mathcal{O}_{C_{i}})$.
Recall that a _pseudo algebraically closed field_ (or a _PAC field_ for short)
is a field where every geometrically integral variety has a rational point.
###### Theorem 1.35.
[KT23, Theorem 7] Let $X_{k}$ be a smooth projective variety defined over a
perfect field $k$. Assume that every geometrically irreducible $k$-variety has
a $0$-cycle of degree $1$ (e.g. $k$ is a finite field or a PAC field). Assume
that $X$ is separably rationally connected in codimension $1$. We have an
isomorphism
$A_{1}(X_{k})\cong A_{1}(X_{\bar{k}})^{G},$
where $G$ is the absolute Galois group of $k$.
If $k$ is not perfect (and every geometrically irreducible $k$-variety has a
$0$-cycle of degree $1$), then we have an isomorphism after inverting the
characteristic $p$.
### 1.6. Structure of the paper
This paper consists of applications of the results in [KT23].
The first application is in Section 2, where we describe some structural
results of the space of one cycles. These structural results are then used in
Sections 3 and 4 to study the coniveau filtration on $H_{3}$ of a separably
rationally connected fibration over a curve. The case of complex varieties
uses Lawson homology and is topological. Moreover, it gives “integral”
results. We present this first to give the readers some flavor of the
argument. The general case has to use the Chow sheaves introduced by Suslin
and Voevodsky, and is hence more abstract. Unfortunately, in this case we only have
results for torsion coefficients and have to pass to the inverse limit from
time to time. The criterion for the surjection of the cycle class map onto the
arithmetic part is proved in Corollary 4.17.
Finally, the applications to local-global principles are discussed in Section
5. In Section 6, we give some examples where the criterion in Theorem 1.13 can
be effectively checked.
Acknowledgment: I would like to thank Jean-Louis Colliot-Thélène and Olivier
Wittenberg for many helpful and constructive comments. I am grateful to János
Kollár for generously sharing his ideas and for co-authoring the article
[KT23] which produces much stronger results than those proved in the first
version of this paper, and which provides the technical results needed for
this paper. This work is partially supported by NSFC grants No. 11890660 and
No. 11890662.
## 2\. Space of one cycles
In this section, we fix an algebraically closed field $k$ of any
characteristic. We remind the readers that a variety is always assumed to be
irreducible throughout the paper, and thus connected. Sometimes we add the
word irreducible just to emphasize this.
###### Definition 2.1.
Let $X,Y$ be finite type reduced separated $k$-schemes. A family of relative
cycles of equi-dimension $r$ over $Y$ is a formal linear combination of
integral subschemes $\mathcal{Z}=\sum m_{i}Z_{i},Z_{i}\subset Y\times
X,m_{i}\in\mathbb{Z}$ such that
1. (1)
Each $Z_{i}$ dominates one irreducible component of $Y$.
2. (2)
Each fiber of $Z_{i}\to Y$ has dimension $r$.
3. (3)
A fat point condition is satisfied (see [Kol96, Chapter I, 3.10.4] or [SV00,
Definition 3.1.3]).
4. (4)
A field of definition condition is satisfied. Namely, for any point $y\in Y$,
there are integral subschemes $\gamma_{i}$ of $X$ defined over the residue
field $\kappa(y)$ such that the cycle theoretic fiber ([Kol96, Chapter I,
3.10.4]) of $\mathcal{Z}$ over $y$ is $\sum m_{i}\gamma_{i}$.
We write $\mathcal{Z}^{+}$ and $\mathcal{Z}^{-}$ as the positive and negative
parts, i.e. the sum of $Z_{i}$’s with positive and negative coefficients.
We say this family has proper support if $Z_{i}\to Y$ is proper for every
$i$.
It is often convenient to allow some redundancy in the linear combination,
especially when we consider pull-back and restrictions. For example, we allow
an expression of the form $(Z_{1}+Z_{3})-(Z_{2}+Z_{3})$. When this happens, we
think of $(Z_{1}+Z_{3})$ (resp. $(Z_{2}+Z_{3})$) as the positive (resp.
negative) part.
We refer the interested readers to [Kol96] Section I.3, I.4 and [SV00] Section
3 for details about the last two conditions and the subtle points about these
definitions. We only remark that with this definition, one can pull back
families of cycles.
We adopt Kollár’s [Kol96] convention of only considering families of cycles
over a reduced base. Suslin and Voevodsky [SV00] consider more general bases.
For our purpose, it is perfectly fine to always work over a reduced base. We
call a separated, finite type, reduced $k$-scheme an _algebraic set_.
###### Remark 2.2.
We would also like to mention that for a normal variety, condition (3) is
automatic. Condition (4) is always satisfied in characteristic $0$, or if the
base is regular, or if all the $m_{i}$’s are invertible in the field $k$. It
is introduced to make sure that pulling back families of cycles is always
defined and one has a presheaf of relative cycles.
###### Definition 2.3.
Given a family of projective curves $S\to B$ and a $B$-morphism $F:S\to
B\times X$, we can associate to it a family of cycles $\Gamma_{S}\subset
B\times X$. Given a family of cycles $\Gamma\subset B\times X$ over $B$, we
say it is _nodal_ if it is induced from a family of nodal curves in this way.
###### Proposition 2.4.
Let $S_{1},S_{2}$ be two connected algebraic sets and $\Gamma_{1}\subset
S_{1}\times X,\Gamma_{2}\subset S_{2}\times X$ be two equidimensional families
of $r$-cycles in $X$. Assume that the cycles in the two families are
algebraically equivalent. There is a connected algebraic set $S$ and an
equidimensional family of $r$-cycles $\Gamma\subset S\times X$, and morphisms
$f_{1}:S_{1}\to S,f_{2}:S_{2}\to S$ such that
$\Gamma_{1}=f_{1}^{*}\Gamma,\Gamma_{2}=f_{2}^{*}\Gamma$. Moreover, if both
$S_{1}$ and $S_{2}$ are normal/smooth/projective, we may choose $S$ to be
normal/smooth/projective.
###### Proof.
Take two points $s_{1}\in S_{1},s_{2}\in S_{2}$. Denote by
$\gamma_{1},\gamma_{2}$ the cycle over $s_{1},s_{2}$. By assumption,
$\gamma_{1},\gamma_{2}$ are algebraically equivalent. Thus there is a smooth
projective curve $S_{3}$, two points $a,b\in S_{3}$ and a family of $r$-cycles
$\Gamma_{3}\subset S_{3}\times X$ such that the cycles over $a,b$ are
$\gamma_{1},\gamma_{2}$. This follows from the definition of algebraic
equivalence. Indeed, by definition of algebraic equivalence, we may find a
cycle $\Delta$ and a family of cycles $\Gamma_{T}$ over a smooth projective
curve $T$ and two points $t_{1},t_{2}\in T$ such that the cycle over $t_{1}$
(resp. $t_{2}$) is $\gamma_{1}+\Delta$ (resp. $\gamma_{2}+\Delta$). Then we
take $S_{3}$ to be $T$ and the family of cycles $\Gamma_{3}$ to be
$\Gamma_{T}-T\times\Delta$.
Define $S=S_{1}\times S_{2}\times S_{3}$, with $p_{i}:S\to S_{i},i=1,2,3,$ the
projections. Define
$\Gamma=p_{1}^{*}\Gamma_{1}+p_{2}^{*}\Gamma_{2}-p_{3}^{*}\Gamma_{3}$. Finally,
define
$f_{1}:S_{1}\to S,x\mapsto(x,s_{2},b)$
and
$f_{2}:S_{2}\to S,y\mapsto(s_{1},y,a).$
One easily checks these satisfy the conditions.
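To spell out the check for $f_{1}$ (the case of $f_{2}$ is symmetric): since
$\Gamma_{2}|_{s_{2}}=\gamma_{2}$ and $\Gamma_{3}|_{b}=\gamma_{2}$, we have
$f_{1}^{*}\Gamma=\Gamma_{1}+S_{1}\times\gamma_{2}-S_{1}\times\gamma_{2}=\Gamma_{1}.$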
Since $S_{3}$ is a smooth projective curve, if both $S_{1},S_{2}$ are normal,
or smooth, or projective, so is $S$. ∎
Now we can state the main technical result of this section.
###### Theorem 2.5.
Let $X$ be a smooth projective variety defined over an algebraically closed
field $k$. Let $(U,\Gamma_{U})$ be an equi-dimensional family of one
dimensional cycles over an irreducible variety $U$ and let $u_{0},u_{1}\in U$
be two points in $U$ such that
$\gamma_{0}:=\Gamma_{U}|_{u_{0}}=\gamma_{1}:=\Gamma_{U}|_{u_{1}}=\gamma$
as cycles. Then there is a family of one-dimensional cycles $(V,\Gamma_{V})$
over a smooth quasi-projective variety $V$ with a morphism $f:V\to U$ such
that $f^{*}\Gamma_{U}=\Gamma_{V}$, and a lifting $v_{0},v_{1}$ of
$u_{0},u_{1}$, such that
1. (1)
The morphism $f:V\to U$ is projective and surjective.
2. (2)
We may take the family over $V$ to be nodal as in Definition 2.3. We still use
$\Gamma_{V}$ to denote the family of nodal curves in the following.
3. (3)
There is a nodal deformation equivalence $T\leftarrow\Gamma_{T}\to X$ over a
two pointed connected nodal curve $(T,t_{0},t_{1})$ between
$\Gamma_{V}|_{v_{0}}$ and $\Gamma_{V}|_{v_{1}}$.
4. (4)
For each $k$-point $t$ of $T$, the cycle over $t$ is $\gamma$.
5. (5)
For a general point $u$ in $U$, and for any pair of points $v,v^{\prime}$ in
the inverse image of $u$ in $V$, there is a nodal deformation equivalence
$(C_{v,v^{\prime}},c,c^{\prime})\leftarrow\Gamma_{C}\to
X,\Gamma_{C}|_{c}\cong\Gamma_{V}|_{v},\Gamma_{C}|_{c^{\prime}}\cong\Gamma_{V}|_{v^{\prime}}$
over a connected two pointed nodal curve $(C_{v,v^{\prime}},c,c^{\prime})$.
6. (6)
For any point $c\in C$, the cycle of $\Gamma_{C}$ at $c$ is the same as that
of $\Gamma_{U}$ at the point $u$.
###### Theorem 2.6.
Keep the same assumptions as in Theorem 2.5. Assume furthermore that $X$ is
separably rationally connected in codimension $1$. Then $V,T,C_{v,v^{\prime}}$
admit a morphism to $(W,\Gamma_{W})$, where $W$ is a normal projective variety
and $\Gamma_{W}$ is a family of cycles over $W$. In characteristic $0$, we may
choose $W$ to be irreducible, smooth and projective. In general, we may take
$W$ to be irreducible, normal, projective and smooth near the nodal points of
$T,C$, and $v_{0},v_{1}$.
###### Remark 2.7.
This theorem is special to one cycles on varieties that are SRC in codimension
$1$.
Indeed, if the statements were true for a variety $X$ and families of
$r$-dimensional cycles, then the same argument as in Sections 3 and 4 would
prove that the strong coniveau filtration $\tilde{N}^{r}H^{2r+1}(X)$ coincides
with the coniveau filtration $N^{r}H^{2r+1}(X)$. But the examples of
Benoist-Ottem [BO21] and Scavia-Suzuki [SS22] (cited in Sections 1.3 and 1.4
as Theorems 1.20, 1.21, 1.28, 1.29) show that this is not true in general.
###### Remark 2.8.
We remark that the positive and negative parts of the families of cycles may
vary along $T$. The theorem only states that the difference remains constant.
###### Remark 2.9.
Even if we start with a family of effective cycles, for the statements to be
true, we have to use non-effective cycles.
Moreover, the statement is not a simple corollary of the existence of the
universal family over the Chow variety (which is true only in characteristic
$0$). This is because we require that the family is parameterized by a normal
variety, while the Chow variety is only semi-normal in [Kol96] by definition
or satisfies no such normality condition at all in some other references such
as [Fri91].
It is possible that the morphism from the normalization of the Chow variety to
the Chow variety maps two points to the same point. If this happens, taking
$U$ to be the normalization of the Chow variety and $u,u^{\prime}$ to be the
two points mapping to the same point in the Chow variety, the existence of
$V,T,W$ in this case cannot be deduced from the existence of the universal
family over the Chow variety.
###### Remark 2.10.
Finally we remark that $U$ being irreducible is not essential in the proof.
But it simplifies the argument. If $U$ is reducible and connected, one can use
a similar argument as in [KT23, Section 8] to find a connected algebraic set
$V$. But in this case, we cannot choose $V$ to admit a morphism to $U$. The
best one can have is that for each irreducible component of $U$, there is an
irreducible component of $V$ with a projective, surjective morphism to this
component.
###### Proof of Theorem 2.5.
First, using Nagata’s compactification and Chow lemma, we can make a base
change that is a birational projective morphism and replace $U$ with a quasi-
projective variety. So in the following, we assume $U$ is quasi-projective.
Up to a purely inseparable base change, we may assume that the generic fiber
of the family comes from nodal curves.
We write $\Gamma_{U}=\Gamma^{+}_{U}-\Gamma^{-}_{U}$ as its positive and
negative part. We write
$\gamma_{0}^{+}=\Gamma^{+}_{U}|_{u_{0}},\gamma_{0}^{-}=\Gamma^{-}_{U}|_{u_{0}},\gamma_{1}^{+}=\Gamma^{+}_{U}|_{u_{1}},\gamma_{1}^{-}=\Gamma^{-}_{U}|_{u_{1}}.$
By assumption,
$\gamma_{0}^{+}-\gamma_{0}^{-}=\gamma_{1}^{+}-\gamma_{1}^{-}.$
We take a general complete intersection $V^{\prime}$ (of the same dimension as
$U$) containing $v_{0}=(u_{0},u_{1})$ and $v_{1}=(u_{1},u_{0})$ in the product
$U\times U$. There are two families of nodal curves
$\Gamma_{p}=\Gamma_{p}^{+}-\Gamma_{p}^{-},\Gamma_{q}=\Gamma_{q}^{+}-\Gamma_{q}^{-}$
over $V^{\prime}$ induced by the two projections $p,q:V^{\prime}\to U$. We
have the family of cycles over $V^{\prime}$
${\Gamma}_{V^{\prime}}=(\Gamma_{p}^{+}+\Gamma_{q}^{-})-(\Gamma_{p}^{-}+\Gamma_{q}^{-}).$
Then the positive part over $v_{0}$ is $\gamma_{0}^{+}+\gamma_{1}^{-}$, and
the positive part over $v_{1}$ is $\gamma_{1}^{+}+\gamma_{0}^{-}$. Thus they
are the same as cycles. Moreover, the restriction of the negative parts of
$\Gamma_{V^{\prime}}$ to $v_{0},v_{1}$ are the same.
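To make this explicit: the assumption
$\gamma_{0}^{+}-\gamma_{0}^{-}=\gamma_{1}^{+}-\gamma_{1}^{-}$
rearranges to
$\gamma_{0}^{+}+\gamma_{1}^{-}=\gamma_{1}^{+}+\gamma_{0}^{-},$
which identifies the positive parts over $v_{0}$ and $v_{1}$ as cycles, while
the negative part $\Gamma_{p}^{-}+\Gamma_{q}^{-}$ restricts to
$\gamma_{0}^{-}+\gamma_{1}^{-}$ at both points.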
So now we have two families of cycles, and the restriction of each family to
$v_{0},v_{1}$ gives the same cycle. We first prove the statement for each
family. Then we take base changes for both families such that they are over
the same base and subtract them. Here we use the fact that the fibers of the
family over the singular points of $T,C$ and $v_{0},v_{1},v,v^{\prime}$ are
all nodal curves, and thus the base change will change nothing in their
neighborhood.
From now on we assume there is a nodal family of effective cycles.
The existence of a generically finite base change $V^{\prime\prime}\to U$, the
lifting $v_{0},v_{1}$ and the curve $T$ follows from [KT23, Theorem 58].
Unwrapping the content of this theorem, one gets the following:
1. (1)
There is a nodal curve $Z$ and a family of $r$-tuples of complete intersection
curves $r|H^{\text{ci}}|\to V^{\prime\prime}$.
2. (2)
One can glue them together to get a family of nodal curves
$C_{V^{\prime\prime}}\to V^{\prime\prime}$.
3. (3)
There is a family of nodal curves over a two pointed curve $X\leftarrow
C_{T}\to(T,t_{0},t_{1})$, such that the restriction of the family $C_{T}$ to
$t_{0},t_{1}$ coincides with the restriction of $C_{V^{\prime\prime}}$ to
$v_{0},v_{1}$.
4. (4)
For each $t\in T$, the cycle over $t$ is $Z+\sum_{i=1}^{r}L_{i}(t)$, where
$\cup L_{i}\to T$ is a family of $r$-tuples of complete intersection curves.
5. (5)
The restriction of $\cup L_{i}$ to $t_{0},t_{1}$ coincides with the
restriction of $r|H^{\text{ci}}|$ to $v_{0},v_{1}$.
6. (6)
The families of curves induce a morphism $V^{\prime\prime}\cup
T\to\text{Hilb}_{1}(X\times\mathbb{P}^{3})$.
This is almost what we want, except that the cycles over $V^{\prime\prime}$
and $T$ change by a constant cycle $Z$ and a family of $r$-tuples of complete
intersection curves in $r|H^{\text{ci}}|$. So we subtract the corresponding
family.
To get a finite base change $V\to U$, one can take the graph closure of
$V^{\prime}$ with respect to the morphism to
$\text{Hilb}_{1}(X\times\mathbb{P}^{3})$ and then apply semi-stable reduction.
We may assume that $V$ is smooth using de Jong’s alteration (or resolution of
singularities in characteristic zero).
We note that the base change $V\to U$ consists of two steps: first a purely
inseparable base change $V^{\prime}\to U$ such that a general fiber becomes
nodal, then a further base change $V\to V^{1}\to U$ such that all fibers
become semi-stable. The second step does not change general fibers.
Therefore, for a general point $u\in U$, denote by $v^{1}\in V^{1}$ its
inverse image in $V^{1}$, and by $v,v^{\prime}\in V$ two of its inverse
images. The fiber of the family of stable maps over
$v,v^{\prime}$ consists of complete intersection curves and the nodal curve
over $v^{1}$. They only differ by the complete intersection curves. So we can
construct a deformation over a curve $C$ from the fiber over $v$ to the fiber
over $v^{\prime}$ by deforming the complete intersection curves. ∎
###### Proof of Theorem 2.6.
This is essentially [KT23, Corollary 59]. As in the proof of Theorem 2.5, we
reduce the theorem to the case of an effective family. The point is, we can
attach families of curves constructed in [KT23, Theorem 43] to the family
(after a base change), so that the fiber (of the new family of curves) over
the nodes of $T,C$ and $v_{0},v_{1}$ has unobstructed deformation. Thus the
nodes in $T,C$ and $v_{0},v_{1}$ map to smooth points in the Hilbert scheme of
$X\times\mathbb{P}^{3}$. We take $W$ to be the normalization of the unique
geometrically irreducible component containing the image of $V,T,C$. In
characteristic $0$, we may even take a resolution of singularities of $W$ that
is isomorphic over the smooth locus. ∎
## 3\. Lawson homology
Let $X$ be a complex projective variety, and fix a very ample line bundle
$\mathcal{O}(1)$.
All degrees are taken with respect to this line bundle. Let
$\text{Chow}_{r,d}(X)$ be the Chow variety parameterizing degree $d$,
$r$-dimensional cycles of $X$ and
$\text{Chow}_{r}(X)=\coprod_{d\geq 0}\text{Chow}_{r,d}(X),$
where $\text{Chow}_{r,0}(X)$ is defined to be a single point corresponding to
the zero cycle. We give the set $\text{Chow}_{r}(X)(\mathbb{C})$ the structure
of a topological monoid, where the topological structure comes from the
analytic topology on $\text{Chow}_{r,d}(X)(\mathbb{C})$ and the monoid
structure is the sum of cycles. Define $Z_{r}(X)$ to be the group completion
of $\text{Chow}_{r}(X)(\mathbb{C})$. It has a topological group structure. The
topology can be defined in several equivalent ways. These are studied by Lima-
Filho [LF94].
###### Definition 3.1.
We first define the category $I^{\text{eq}}$. The objects are pairs
$(S,\Gamma)$ consisting of a normal variety $S$ and a family of
equidimensional $r$-dimensional cycles $\Gamma$, and whose morphisms between
$(S,\Gamma)$ and $(S^{\prime},\Gamma^{\prime})$ are all the morphisms $f:S\to
S^{\prime}$ such that $\Gamma=f^{*}\Gamma^{\prime}$.
Define the topological space $Z_{r}(X)^{\text{eq}}$ as the colimit of all the
topological spaces $S(\mathbb{C})$ over the category $I^{\text{eq}}$.
More precisely, each $(S,\Gamma)$ in $I^{\text{eq}}$ gives a map of sets
$\phi_{S}:S(\mathbb{C})\to Z_{r}(X)$. The topology of $Z_{r}(X)^{\text{eq}}$
is defined in such a way that a subset $T\subset Z_{r}(X)$ is closed if and
only if $\phi_{S}^{-1}(T)$ is closed for all $(S,\Gamma)$.
###### Lemma 3.2.
In the definition of $Z_{r}(X)^{\text{eq}}$, we may restrict to families of
equidimensional cycles over normal projective varieties (or smooth projective
varieties).
###### Proof.
Given any family of equidimensional cycles $\Gamma\to S$, we may find a normal
projective variety (resp. smooth projective variety) $T$, a family
$\Gamma_{T}\to T$, and an open subset $T^{0}$ of $T$ such that there is a
surjective proper map $p:T^{0}\to S$ and $\Gamma_{T}|_{T^{0}}$ is
$\Gamma\times_{S}T^{0}$.
Note that we have a factorization $T^{0}(\mathbb{C})\to S(\mathbb{C})\to
Z_{r}(X)$. A set in $S(\mathbb{C})$ is closed if and only if its inverse image
under $p$ in $T^{0}(\mathbb{C})$ is closed. That is, the topology of
$S(\mathbb{C})$ is the quotient topology coming from $T^{0}(\mathbb{C})\to
S(\mathbb{C})$.
Thus the topology on $Z_{r}(X)^{\text{eq}}$ is determined by families over
normal varieties (resp. smooth varieties) such that the family has an
extension over a normal (resp. smooth) projective compactification.
Therefore, when defining $Z_{r}(X)^{\text{eq}}$ as a colimit, we may take only
normal (resp. smooth) projective varieties. ∎
###### Definition 3.3.
Define the topological space $Z_{r}(X)^{\text{Chow}}$ as the quotient of
$\text{Chow}_{r}(X)(\mathbb{C})\times\text{Chow}_{r}(X)(\mathbb{C})$
by $\text{Chow}_{r}(X)(\mathbb{C})$, where the action is
$(a,b)\mapsto(a+c,b+c)$ for $c\in\text{Chow}_{r}(X)(\mathbb{C})$.
###### Theorem 3.4 ([LF94], Theorem 3.1, Theorem 5.2, Corollary 5.4).
The identity map induces homeomorphisms
$Z_{r}(X)^{\text{eq}}\cong Z_{r}(X)^{\text{Chow}}.$
Here is the definition of Lawson homology, first studied in [Law89].
###### Definition 3.5.
Let $X$ be a complex projective variety. Define the Lawson homology
$L_{r}H_{n+2r}(X)$ as the homotopy group $\pi_{n}(Z_{r}(X))$.
###### Example 3.6 (Dold-Thom isomorphism).
Consider $Z_{0}(X)$, the group of zero cycles on $X$. The classical Dold-Thom
theorem implies that there is an isomorphism
$L_{0}H_{n}(X)\cong H_{n}(X,\mathbb{Z}).$
###### Example 3.7 (Hurewicz map).
The Hurewicz map is induced by the inclusion $X\to Z_{0}(X)$:
$\pi_{k}(X)\to\pi_{k}(Z_{0}(X))\cong H_{k}(X,\mathbb{Z}).$
###### Theorem 3.8.
Let $X$ be a complex smooth projective variety. Then for any loop $L$ in
$Z_{1}(X)$, there is a projective algebraic set $Y$ with a family of nodal
curves $\Gamma\to Y\times X$ over $Y$ such that the image of the map
$\Phi:L_{0}H_{1}(Y)=\pi_{1}(Z_{0}(Y))\to L_{1}H_{3}(X)=\pi_{1}(Z_{1}(X))$
induced by the family $\Gamma$ contains the class $[L]$ in $L_{1}H_{3}(X)$.
If, furthermore, either $X$ is rationally connected or $X$ is a rationally
connected fibration over a curve, then we may take $Y$ to be smooth.
We first introduce some notation. Given a projective algebraic set $S$
parameterizing a family of one dimensional cycles of $X$, there is an induced
continuous map between topological groups:
$Z_{0}(S)\to Z_{1}(X).$
We denote by $I(S)$ the image of this map, i.e., the closed subgroup of
$Z_{1}(X)$ generated by the cycles over $S$, and by $K(S)$ the kernel of this
map.
The first observation in the proof of Theorem 3.8 is the following.
###### Lemma 3.9.
Let $X$ be a complex smooth projective variety. For any class $[L]$ in
$L_{1}H_{3}(X)=\pi_{1}(Z_{1}(X))$, there is a normal projective variety $U$
and a family of equidimensional one cycles $\gamma_{U}$ over $U$ such that
$[L]$ is represented by a continuous map
$I=[0,1]\to U\to Z_{1}(X).$
Note that $I\to U$ is not a loop in general.
###### Proof.
Denote by $Z_{1}(X)^{0}$ the neutral component of the topological group
$Z_{1}(X)$, i.e., the connected component containing the identity. We may
assume $L$ lies in $Z_{1}(X)^{0}$. Cycles in $Z_{1}(X)^{0}$ are precisely the
cycles algebraically equivalent to $0$. By Proposition 2.4, the topological
group $Z_{1}(X)^{0}$ is a filtered colimit over closed subgroups generated by
one-dimensional cycles parameterized by normal projective varieties.
Homotopy groups commute with filtered colimits. Thus there is an irreducible
normal projective variety $S$ with a family of one dimensional cycles over $S$
such that the image of the induced map
$\pi_{1}(I(S))\to\pi_{1}(Z_{1}(X))\cong L_{1}H_{3}(X)$
contains the class $[L]$ in $\pi_{1}(Z_{1}(X))$.
The fibration $K(S)\to Z_{0}(S)\to I(S)$ gives a long exact sequence of
homotopy groups:
$\ldots\to\pi_{1}(Z_{0}(S))\to\pi_{1}(I(S))\to\pi_{0}(K(S))\to\ldots.$
A loop in $I(S)$ lifts to a continuous map from the unit interval $I=[0,1]$ to
$Z_{0}(S)$, such that $0,1$ map to two points in $Z_{0}(S)$ that parameterize
the same cycle in $X$.
We may assume that the family over $I$ is the difference of two families of
effective $0$-cycles in $S$, of degrees $d^{+}$ and $d^{-}$. That is, it
corresponds to the difference of two continuous maps $f^{+}:I\to S^{(d^{+})}$
and $f^{-}:I\to S^{(d^{-})}$, which is the same as a continuous map $f:I\to
S^{(d^{+})}\times S^{(d^{-})}$ with $0$ mapping to a point $x=(x^{+},x^{-})$
and $1$ mapping to a point $y=(y^{+},y^{-})$.
A family of one cycles over $S$ induces families of cycles over $S^{(d^{+})}$
and $S^{(d^{-})}$. Let us denote them by $\Gamma_{d^{+}},\Gamma_{d^{-}}$.
The loop is the composition $I\to S^{(d^{+})}\times S^{(d^{-})}\to Z_{0}(S)\to
Z_{1}(X)$, where the middle map takes the difference.
Let us use a different family of cycles
$\pi_{+}^{*}\Gamma_{d^{+}}-\pi_{-}^{*}\Gamma_{d^{-}}$ on the product
$S^{(d^{+})}\times S^{(d^{-})}$, where $\pi_{\pm}$ is the projection to
$S^{(d^{\pm})}$. This family of cycles induces a continuous map
$S^{(d^{+})}\times S^{(d^{-})}\to Z_{1}(X)$ such that the composition $I\to
S^{(d^{+})}\times S^{(d^{-})}\to Z_{1}(X)$ is the loop $L$.
We take $U$ to be $S^{(d^{+})}\times S^{(d^{-})}$ and $\gamma_{U}$ to be
$\pi_{+}^{*}\Gamma_{d^{+}}-\pi_{-}^{*}\Gamma_{d^{-}}$. ∎
###### Proof of Theorem 3.8.
By Lemma 3.9, there is a normal projective variety $U$ and a family of
equidimensional one cycles $\gamma_{U}$ over $U$ such that $[L]$ is
represented by a continuous map
$f:I=[0,1]\to U\to Z_{1}(X).$
Denote by $x,y\in U$ the images of $0,1$ under $f$. The cycles over $x$ and
$y$ are the same by assumption. Now we are in the set-up of Theorem 2.5. Thus
there is a smooth projective variety $V$ with a generically finite surjective
morphism $V\to U$, liftings $x_{V},y_{V}$ of $x,y$ to $V$, and families of
constant cycles parameterized by curves $T,C$ connecting $x_{V},y_{V}$ and,
respectively, inverse images of a general point in $U$. We take $Y$ to be
$V\cup T\cup C$ in
this case.
The morphism $V\to U$ induces a continuous map between topological groups
Z_{0}(V)\to Z_{0}(U)$. Denote by $K$ the kernel topological group. As a
group, $K$ is generated by cycles of the form $a-b$, where $a,b$ are points in
a fiber of $V\to U$. Note that $I(V)=I(U)$. Thus we have a fibration sequence
of topological groups:
$0\to K\to K(V)\to K(U)\to 0.$
We have commutative diagrams:
$\begin{CD}\pi_{1}(Z_{0}(Y))@>{}>{}>\pi_{1}(I(Y))@>{}>{}>\pi_{0}(K(Y))\\\
@A{}A{}A@A{}A{}A@A{}A{}A\\\
\pi_{1}(Z_{0}(V))@>{}>{}>\pi_{1}(I(V))@>{}>{}>\pi_{0}(K(V))@<{}<{}<\pi_{0}(K)\\\
@V{}V{}V@V{}V{=}V@V{}V{}V\\\
\pi_{1}(Z_{0}(U))@>{}>{}>\pi_{1}(I(U))@>{}>{}>\pi_{0}(K(U))\\\ \end{CD}$
The obstruction to lifting the class $[L]$ in $\pi_{1}(I(V))$ lies in
$\pi_{0}(K(V))$ and maps to $[x-y]$ in $\pi_{0}(K(U))$. The class
$[x_{V}-y_{V}]$ differs from the obstruction class by an element in
$\pi_{0}(K)$.
We take the Stein factorization $V\to V^{\prime}\to U$, where $V\to
V^{\prime}$ has connected fibers (hence birational) and $V^{\prime}\to U$ is
finite. Therefore $\pi_{0}(K)$ is finitely generated by classes of the form
$[a-b]$, where $a,b$ are points in the fiber over a general point in $U$.
The class $[L]$ maps to $\pi_{1}(I(Y))$, with obstruction class the push-
forward of $[x_{V}-y_{V}]$ modulo classes in $\pi_{0}(K)$.
By the existence of the families of constant cycles over the curves $T,C$ in
Theorem 2.6, we have
1. (1)
The composition
$\pi_{0}(K)\to\pi_{0}(K(V))\to\pi_{0}(K(Y))$
is the zero map.
2. (2)
The push-forward of the class $[x_{V}-y_{V}]$ vanishes in $\pi_{0}(K(Y))$.
Thus the class of the loop $L$ in $\pi_{1}(I(Y))$ comes from
$\pi_{1}(Z_{0}(Y))$.
Finally, if $X$ is rationally connected in codimension $1$, by Theorem 2.6,
all these families over $V,T,C$ come from pulling back from a family of cycles
over a smooth projective variety. In this case, we take $Y$ to be this smooth
projective variety. ∎
Now we introduce another ingredient.
###### Lemma 3.10.
[FM94, Page 709, 1.2.1] There is a continuous map, the _s-map_ :
$Z_{r}(X)\wedge\mathbb{P}^{1}\to Z_{r-1}(X)$ inducing the s-map on Lawson
homology $s:L_{r}H_{k}(X)\to L_{r-1}H_{k}(X)$.
###### Remark 3.11.
The construction of the s-map depends on a deep result: Lawson’s algebraic
suspension theorem. A geometric way of describing the s-map is the following.
Given a cycle $\Gamma$, take a general pencil of divisors
$D_{t}(t\in\mathbb{P}^{1})$ that intersect $\Gamma$ properly, and the s-map
sends $([\Gamma],t)$ to the cycle $\Gamma\cdot D_{t}-\Gamma\cdot D_{0}$.
###### Definition 3.12.
Let $Y$ be a semi-normal variety. Let $Z\subset Y\times X$ be a family of
$r$-dimensional cycles over $Y$ corresponding to a morphism $f:Y\to Z_{r}(X)$.
We define the _correspondence homomorphism_
$\Phi_{f}:H_{k}(Y,\mathbb{Z})\to H_{k+2r}(X,\mathbb{Z})$
as the composition
$H_{k}(Y,\mathbb{Z})\cong\pi_{k}(Z_{0}(Y))\to\pi_{k}(Z_{r}(X))\xrightarrow{s^{\circ
r}}\pi_{k+2r}(Z_{0}(X))\cong H_{k+2r}(X,\mathbb{Z}),$
where the map $\pi_{k}(Z_{r}(X))\to\pi_{k+2r}(Z_{0}(X))$ is induced by the
$r$-fold iteration of the $s$-map.
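For the case used below ($r=1$, $k=1$), the composition defining $\Phi_{f}$ unwinds as

```latex
\Phi_{f}\colon\; H_{1}(Y,\mathbb{Z})\;\cong\;\pi_{1}(Z_{0}(Y))
\;\xrightarrow{f_{*}}\;\pi_{1}(Z_{1}(X))
\;\xrightarrow{s}\;\pi_{3}(Z_{0}(X))\;\cong\;H_{3}(X,\mathbb{Z}),
```

with a single application of the $s$-map since $r=1$.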
###### Theorem 3.13 ([FM94] Theorem 3.4).
Let $Y$ be a smooth projective variety and $\Gamma\subset Y\times X$ be a
family of $r$-dimensional cycles over $Y$ corresponding to a morphism $f:Y\to
Z_{r}(X)$. We have
$\Phi_{f}=\Gamma_{*}:H_{k}(Y,\mathbb{Z})\to H_{k+2r}(X,\mathbb{Z}),$
where $\Gamma_{*}$ is the map defined using $\Gamma$ as a correspondence.
With this result, we can prove the main results over the complex numbers.
###### Theorem 3.14.
Let $X$ be a smooth projective variety. There is a projective curve $C$ with a
nodal family of cycles $\Gamma\subset C\times X$ inducing a map $f:C\to
Z_{1}(X)$ such that
$\Phi_{f}:H_{1}(C,\mathbb{Z})\to H_{3}(X,\mathbb{Z})$
has the same image as the s-map $s:L_{1}H_{3}(X)\to H_{3}(X)$, which is the
same as the coniveau filtration $N^{1}H_{3}(X,\mathbb{Z})$.
Furthermore, if $X$ is rationally connected in codimension $1$, we may take
$C$ to be a smooth projective curve, and $\Phi_{f}=\Gamma_{*}$.
###### Proof.
Recall that there is an isomorphism $L_{0}H_{k}(S)\cong H_{k}(S)$ for any
projective algebraic set $S$ by the Dold-Thom theorem. We have a commutative
diagram:
$\begin{CD}L_{0}H_{1}(Y)@>{f_{*}}>{}>L_{1}H_{3}(X)\\\
@V{\cong}V{}V@V{s}V{}V\\\ H_{1}(Y)@>{\Phi_{f}}>{}>H_{3}(X)\end{CD}$
The image of the s-map is finitely generated. Thus we may find finitely many
projective algebraic sets $Y_{i}$ and families of semistable curves
$\Gamma_{i}\to Y_{i}\times X$ such that the generators of the image of the
s-map are contained in the images of the correspondence homomorphisms
$\Phi_{i*}:H_{1}(Y_{i})\to H_{3}(X)$. Then we take $Y$ to be the product
$\Pi_{i}Y_{i}$ and $\Gamma=\sum_{i}\pi_{i}^{*}\Gamma_{i}$. Clearly the image
of $\Phi_{*}$ contains the images of all the $\Phi_{i*}$.
By taking general hyperplane sections containing all the singularities and all
the one dimensional irreducible components, we may find a projective curve
$C\subset Y$ such that $\pi_{1}(C)\to\pi_{1}(Y)$ is surjective. Then we
simply restrict the family of cycles to $C$.
If $X$ is rationally connected in codimension $1$, we may take all the
$Y_{i}$’s to be smooth by Theorem 3.8, and thus $C$ to be a general complete
intersection of very ample divisors, which is a smooth projective curve.
Finally, we note that the image of the s-map
$s:L_{1}H_{3}(X)\to H_{3}(X)$
is $N^{1}H_{3}(X,\mathbb{Z})$ by [Wal07, Proposition 2.8]. ∎
The immediate consequence is the following.
###### Theorem 3.15.
Let $X$ be a smooth projective rationally connected variety or a rationally
connected fibration over a curve. Assume $X$ is a $3$-fold. Then all the
filtrations on $H^{3}(X,\mathbb{Z})$ introduced in Definition 1.18 equal the
whole cohomology group:
$\tilde{N}_{1,\text{cyl},\text{eq}}H^{3}(X,\mathbb{Z})=\tilde{N}_{1,\text{cyl}}H^{3}(X,\mathbb{Z})=\tilde{N}^{1}H^{3}(X,\mathbb{Z})=N^{1}H^{3}(X,\mathbb{Z})=H^{3}(X,\mathbb{Z}).$
###### Proof.
By the decomposition of the diagonal argument,
$L_{1}H_{k}(X)\otimes\mathbb{Q}\cong H_{k}(X,\mathbb{Q})\cong
H^{k}(X,\mathbb{Q}).$
Thus we know that
$s:L_{1}H_{3}(X)\to H_{3}(X,\mathbb{Z})\cong H^{3}(X,\mathbb{Z})$
is surjective by [Voi08, Corollary 3.1]. ∎
## 4\. Chow sheaves
In this section we discuss the general case over an algebraically closed field
$k$.
Sometimes we may “invert $p$”, by taking the tensor product of a sheaf with
$\mathbb{Z}[\frac{1}{p}]$. In this scenario, we understand that $p$ is $1$ if
the base field has characteristic $0$ and equals the characteristic otherwise.
We use $\mathbb{Z}_{\ell}$-coefficient étale cohomology for $\ell$ a prime
number, non-zero in the field $k$.
One can also define Lawson homology with $\mathbb{Z}_{\ell}$ coefficients in
this context [Fri91]. But the construction of the analogue of $Z_{r}(X)$ is
more complicated. Also, Lawson homology in this context is much less studied.
Many of the known results for complex varieties have not been explicitly
stated to hold, even though one can imagine that they are still true. For
example, the author does not know a reference for the construction of the
s-map, nor could the author find the analogue of Friedlander-Mazur’s result
(Theorem 3.13) explicitly stated. If one had developed all the necessary
results in this general context, presumably the argument in the last section
would work the same way.
So we decided to use another approach for the general case.
###### Definition 4.1.
A finite correspondence from $Y$ to $X$ is a family of relative cycles of
dimension $0$ with proper support over $Y$.
###### Definition 4.2.
Let $\text{Sch}_{k}$ be the category of finite type separated $k$-schemes. Let
$\text{Cor}_{k}$ be the category whose objects are separated finite type
$k$-schemes and whose morphisms are finite correspondences. Let
$\text{SmCor}_{k}$ be the full subcategory whose objects are smooth
$k$-varieties. In this subcategory a finite correspondence from $X$ to $Y$ is
a linear combination of closed subvarieties of $X\times Y$ that are finite and
surjective onto one of the irreducible components of $X$.
Recall that the h-topology is generated by covers that are universal
topological epimorphisms. Since we will only deal with noetherian schemes,
this is the same as the topology generated by Zariski covers and covers that
are proper surjective morphisms.
The qfh-topology is generated by covers in the h-topology that are also quasi-
finite.
Later we will only use the fact that a surjective proper morphism is an
h-cover.
###### Definition 4.3.
We define the presheaf $Z_{\text{fl}}(X,r)$ on the category $\text{Sch}_{k}$
whose value on a scheme $S$ is the group of formal linear combinations of
integral subschemes $Z\subset S\times X$ that are flat and equidimensional of
relative dimension $r$ over $S$.
We also define $Z_{\text{eq}}(X,r)$ on the category $\text{Sch}_{k}$ whose
value on a scheme $S$ is the group of families of cycles in $X$ of
equidimension $r$ over $S$.
We also define $Z(X,r)$ on the category $\text{Sch}_{k}$ whose value on a
scheme $S$ is the group of families of cycles of dimension $r$ in $X$ over
$S$.
We define $Z_{\text{fl}}^{\text{eff}}(X,r)$ (resp.
$Z_{\text{eq}}^{\text{eff}}(X,r),Z^{\text{eff}}(X,r)$) on the category
$\text{Sch}_{k}$ whose value on a scheme $S$ is the submonoid of effective
families in $Z_{\text{fl}}(X,r)(S)$ (resp. $Z_{\text{eq}}(X,r)(S)$,
$Z(X,r)(S)$).
Similarly, we define
$C_{\text{fl}}(X,r),C_{\text{eq}}(X,r),C(X,r),C_{\text{fl}}^{\text{eff}}(X,r),C_{\text{eq}}^{\text{eff}}(X,r),C^{\text{eff}}(X,r)$
as the counterpart of the above presheaves for families of cycles with proper
support over $S$.
The sheaf $C_{\text{fl}}(X,r)$ is denoted by $\mathbb{Z}\text{PropHilb}$ in
[SV00]. Since later we will consider cycles on proper schemes, with the
purpose of keeping the names consistent with the previous section, we will use
the notations $Z_{\text{fl}}(X,r)$, etc.
Note that we do not require the subschemes to be equidimensional over $S$ in
the definition of $Z(X,r)$ and $C(X,r)$. It is possible to have higher
dimensional fibers ([SV00, Example 3.1.9]). However, $Z^{\text{eff}}(X,r)$ is
the same as $Z^{\text{eff}}_{\text{eq}}(X,r)$. Similarly for the properly
supported version.
We have the following.
###### Proposition 4.4.
[SV00, Proposition 4.2.7, 4.2.6, Lemma 4.2.13] The presheaf
$Z_{\text{eq}}(X,r)\otimes\mathbb{Z}[\frac{1}{p}]$ is a qfh-sheaf and the
presheaf $Z(X,r)\otimes\mathbb{Z}[\frac{1}{p}]$ is an h-sheaf. Moreover, the
sheafification in the h topology of $Z_{\text{eq}}(X,r)$ is the same as that
of $Z(X,r)$.
In the following, we write $Z_{r}^{h}(X)$ for the h-sheaf associated to
$Z_{\text{fl}}(X,r)$ (which is the same as that of $Z_{\text{eq}}(X,r)$ or
$Z(X,r)$).
In the following, given a presheaf $\mathcal{F}$, we use $C^{*}(\mathcal{F})$
to denote the Suslin complex of presheaves (with non-positive degrees). That
is, $C^{-i}(\mathcal{F})(S)=\mathcal{F}(S\times\Delta^{i})$, where
$\Delta^{i}=\text{Spec }k[t_{0},\ldots,t_{i}]/\langle\sum_{j}t_{j}-1\rangle$
is the algebraic $i$-simplex.
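For instance, in low degrees,

```latex
\Delta^{0}=\text{Spec }k,\qquad
\Delta^{1}=\text{Spec }k[t_{0},t_{1}]/\langle t_{0}+t_{1}-1\rangle\cong\mathbb{A}^{1}_{k},
```

so $C^{-1}(\mathcal{F})(S)=\mathcal{F}(S\times\mathbb{A}^{1})$, the algebraic analogue of chains on the topological $1$-simplex.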
If $\mathcal{F}$ is a torsion, homotopy invariant étale sheaf with transfers,
or a qfh, or h sheaf on the category of schemes over $X$, with torsion order
prime to $p$, the Suslin rigidity theorem [SV96, Theorem 4.5] implies that
$\mathcal{F}$ is a locally constant sheaf. Since we work over an algebraically
closed field, locally constant is the same as constant. Thus for any torsion
sheaf $\mathcal{G}$ with torsion order prime to $p$, its Suslin complex
$C^{*}(\mathcal{G})$ is isomorphic to the pull-back of a complex of locally
constant sheaves. Moreover if $\mathcal{F}$ is a constant étale sheaf, we have
isomorphisms ([SV96, Theorem 10.2, 10.7])
$H^{i}_{\text{\'{e}t}}(X,\mathcal{F})\cong
H^{i}_{\text{qfh}}(X,\mathcal{F^{\text{qfh}}})\cong
H^{i}_{\text{h}}(X,\mathcal{F^{\text{h}}}).$
Since we assume that $k$ is algebraically closed, $\text{Spec }k$ has no
higher cohomology for any sheaf in any of these three topologies. Therefore
for any complex of constant sheaves, we also have the isomorphism of
cohomologies for $X=\text{Spec }k$. In particular, the above discussion
applies to the complex $C^{*}(Z_{\text{eq}}(X,r))\otimes\mathbb{Z}/N\mathbb{Z}$. We may
identify the cohomology of this complex.
###### Theorem 4.5.
Let $X$ be a quasi-projective variety defined over an algebraically closed
field $k$. Let $N$ be an integer, non-zero in the field. We have the following
isomorphisms.
$\displaystyle H^{i}_{\text{h}}(\text{Spec
}k,C^{*}(Z_{\text{eq}}^{\text{h}}(X,r)\otimes\mathbb{Z}/N\mathbb{Z}))\cong
H^{i}_{\text{qfh}}(\text{Spec
}k,C^{*}(Z_{\text{eq}}^{\text{qfh}}(X,r)\otimes\mathbb{Z}/N\mathbb{Z}))$
$\displaystyle\cong$ $\displaystyle H^{i}_{\text{\'{e}t}}(\text{Spec
}k,C^{*}(Z_{\text{eq}}(X,r)\otimes\mathbb{Z}/N\mathbb{Z}))\cong
H^{i}_{\text{Ab}}(C^{*}(Z_{\text{eq}}(X,r)\otimes\mathbb{Z}/N\mathbb{Z})(\text{Spec
}k))$ $\displaystyle\cong$ $\displaystyle
CH_{r}(X,-i,\mathbb{Z}/N\mathbb{Z}).$
###### Proof.
The first three cohomology groups are isomorphic as discussed above. They are
all equal to the cohomology of the complex
$C^{*}(Z_{\text{eq}}(X,r)\otimes\mathbb{Z}/N\mathbb{Z})(\text{Spec }k)$,
since this complex computes the cohomology group in the qfh and étale
topology.
Finally, under the hypothesis that resolution of singularities holds, Suslin
[Sus00, Theorem 3.2] proves that for any quasi-projective variety $X$, we have
an isomorphism
$H^{i}_{\text{Ab}}(C^{*}(Z_{\text{eq}}(X,r)\otimes\mathbb{Z}/N\mathbb{Z})(\text{Spec
}k))\cong CH_{r}(X,-i,\mathbb{Z}/N\mathbb{Z}).$
Using Gabber’s refinement of de Jong’s alteration theorem, Kelly [Kel17,
Theorem 5.6.4] removed the resolution of singularities hypothesis. ∎
###### Corollary 4.6.
Let $X$ be a quasi-projective variety defined over an algebraically closed
field $k$. Let $N$ be an integer, non-zero in the field. We have the following
isomorphisms.
$\displaystyle H^{i}_{\text{\'{e}t}}(\text{Spec
}k,C^{*}(Z_{\text{eq}}(X,0)\otimes\mathbb{Z}/N\mathbb{Z}))\cong
H^{i}_{\text{qfh}}(\text{Spec
}k,C^{*}(Z_{\text{eq}}(X,0)\otimes\mathbb{Z}/N\mathbb{Z}))$
$\displaystyle\cong$ $\displaystyle H^{i}_{\text{h}}(\text{Spec
}k,C^{*}(Z_{\text{eq}}(X,0)\otimes\mathbb{Z}/N\mathbb{Z}))\cong
H^{i}_{\text{Ab}}(C^{*}(Z_{\text{eq}}(X,0)\otimes\mathbb{Z}/N\mathbb{Z})(\text{Spec
}k))$ $\displaystyle\cong$ $\displaystyle
CH_{0}(X,-i,\mathbb{Z}/N\mathbb{Z})\cong
H^{2d-i}_{\text{\'{e}t}}(X,\mathbb{Z}/N\mathbb{Z}),$
where $d$ is the dimension of $X$. In particular, all the groups are finite.
###### Proof.
The last equality follows from [SV96, Corollary 7.8] and [Gei00, Theorem 3.5],
[Kel17, Theorem 5.6.1]. Clearly the étale cohomology group is finite. ∎
We use $A_{1}(X)$ to denote the group of one cycles in $X$ modulo algebraic
equivalence. For any abelian group $A$ and any integer $m$, we use $A[m]$ to
denote the group of $m$-torsion elements in $A$, and $A/m$ to denote the quotient
$A/mA$.
For any integer $N$ invertible in the field $k$, we have a homomorphism
$CH_{1}(X,1,\mathbb{Z}/N\mathbb{Z})\to CH_{1}(X,0,\mathbb{Z})[N]$
that comes from the long exact sequence of higher Chow groups with
$\mathbb{Z}$ and $\mathbb{Z}/N$ coefficients. Composing with the surjective
map
$CH_{1}(X,0,\mathbb{Z})[N]\to A_{1}(X)[N],$
we have a homomorphism
$CH_{1}(X,1,\mathbb{Z}/N\mathbb{Z})\to A_{1}(X)[N].$
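The homomorphism $CH_{1}(X,1,\mathbb{Z}/N\mathbb{Z})\to CH_{1}(X,0,\mathbb{Z})[N]$ used above is the connecting map of the long exact coefficient sequence induced by $0\to\mathbb{Z}\xrightarrow{\times N}\mathbb{Z}\to\mathbb{Z}/N\mathbb{Z}\to 0$:

```latex
\cdots\to CH_{1}(X,1,\mathbb{Z})\xrightarrow{\times N}CH_{1}(X,1,\mathbb{Z})\to
CH_{1}(X,1,\mathbb{Z}/N\mathbb{Z})\xrightarrow{\partial}
CH_{1}(X,0,\mathbb{Z})\xrightarrow{\times N}CH_{1}(X,0,\mathbb{Z})\to\cdots,
```

and exactness at $CH_{1}(X,0,\mathbb{Z})$ shows that the image of $\partial$ is the $N$-torsion subgroup $CH_{1}(X,0,\mathbb{Z})[N]$.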
Now we can state the counterpart of Theorem 3.8.
###### Theorem 4.7.
Let $X$ be a smooth projective variety defined over an algebraically closed
field $k$ of characteristic $p$. Fix an integer $N$ that is invertible in $k$.
For any class $[L]$ in the kernel of the map
$H_{h}^{-1}(\text{Spec
}k,C^{*}(Z_{1}^{h}(X))\otimes\mathbb{Z}/N\mathbb{Z})\cong
CH_{1}(X,1,\mathbb{Z}/N\mathbb{Z})\to A_{1}(X)[N],$
there is a projective algebraic set $Z$ and a family of one-dimensional nodal
curves over $Z$ such that the class $[L]$ is in the image of
$H_{h}^{-1}(\text{Spec }k,C^{*}(Z_{0}^{h}(Z))\otimes\mathbb{Z}/N\mathbb{Z})\to
H_{h}^{-1}(\text{Spec }k,C^{*}(Z_{1}^{h}(X))\otimes\mathbb{Z}/N\mathbb{Z})$
induced by this family of cycles. If, furthermore, $X$ is separably rationally
connected in codimension $1$, we may take $Z$ to be normal.
###### Remark 4.8.
This result is a priori weaker than Theorem 3.8 over the complex numbers. We have
a short exact sequence
$0\to L_{1}H_{3}(X)/N\to L_{1}H_{3}(X,\mathbb{Z}/N)\to L_{1}H_{2}(X)[N]\to 0.$
Since $L_{1}H_{3}(X,\mathbb{Z}/N)\cong CH_{1}(X,1,\mathbb{Z}/N)$ and
$L_{1}H_{2}(X)[N]\cong A_{1}(X)[N]$, Theorem 4.7 only says that classes in
$L_{1}H_{3}(X)/N$ come from a smooth projective variety.
But if we know that $L_{1}H_{3}(X)$ is finitely generated, then we can find
the lift. Conjecturally, this group is isomorphic to $H_{3}(X,\mathbb{Z})$,
thus finitely generated.
The proof of Theorem 4.7 is analogous to that of Theorem 3.8. We first prove
the analogue of Lemma 3.9.
###### Lemma 4.9.
Let $X$ be a smooth projective variety defined over an algebraically closed
field $k$ of characteristic $p$. Assume that $X$ is either a separably
rationally connected variety or a separably rationally connected fibration
over a curve. Fix an integer $N$ that is invertible in $k$. For any class
$[L]$ in
$H_{h}^{-1}(\text{Spec
}k,C^{*}(Z_{1}^{h}(X))\otimes\mathbb{Z}/N\mathbb{Z})\cong
CH_{1}(X,1,\mathbb{Z}/N\mathbb{Z}),$
there is a normal projective variety $U$, a family of equidimensional one
cycles $\gamma_{U}$ over $U$ and a morphism $f:\Delta^{1}\to U$ such that
$[L]$ is represented by $f^{*}\gamma_{U}$ over $\Delta^{1}$.
###### Proof.
We could translate the proof of Lemma 3.9 into the context of h-sheaves. But
here is an easier argument using Hilbert schemes.
The class $[L]$ is represented by a family of cycles
$\sum_{i}m_{i}\Gamma_{i},m_{i}\in\mathbb{Z}/N$ over $\Delta^{1}$, where
$\Gamma_{i}\subset\Delta^{1}\times X$ is an integral subvariety. Since
$\Delta^{1}$ is one dimensional, the projection $\Gamma_{i}\to\Delta^{1}$ is
flat. Thus we get a morphism $f_{i}$ from $\Delta^{1}$ to the Hilbert scheme.
The universal subscheme over the Hilbert scheme gives a family of cycles over
the Hilbert scheme. Therefore, we may take $U$ to be the normalization of
products of irreducible components of the Hilbert scheme and $\gamma_{U}$ to
be the family of cycles (with appropriate multiplicity) coming from universal
subschemes. ∎
We will need the following observation later in the proof.
###### Lemma 4.10.
Let $T$ be a connected projective algebraic set over an algebraically closed
field $k$, and let $x,y$ be two points in $T$. Let $\mathcal{F}$ be a sheaf of
abelian groups in the qfh or h topology, or an étale sheaf with transfers. Fix
an integer $N$ invertible in $k$. Write $F_{x}$ (resp. $F_{y}$) for the
restriction of $F\in\mathcal{F}\otimes\mathbb{Z}/N\mathbb{Z}(T)$ to $x$ (resp.
$y$). Then $F_{x}=F_{y}$ in $H^{0}(\text{Spec
}k,C^{*}(\mathcal{F})\otimes\mathbb{Z}/N\mathbb{Z})$, where the cohomology is
taken in the étale topology, qfh topology or h topology.
###### Proof.
An element of $\mathcal{F}\otimes\mathbb{Z}/N\mathbb{Z}(T)$ induces a unique
morphism
$Z_{0}(T)\otimes\mathbb{Z}/N\to\mathcal{F}\otimes\mathbb{Z}/N\mathbb{Z}.$
If $\mathcal{F}$ is a sheaf with transfers, this is the Yoneda Lemma. If
$\mathcal{F}$ is a qfh sheaf or h sheaf, this follows from the fact that the
qfh sheafification of $Z_{0}(T)[\frac{1}{p}]$ is the free sheaf
$\mathbb{Z}[\frac{1}{p}][T]$ generated by the presheaf of sets
$\text{Hom}(\cdot,T)$ ([SV96, Theorem 6.7]). Thus the class
$[F_{x}]$ (resp. $[F_{y}]$) is the image of $[x]$ (resp.
$[y]$) under the map
$H^{0}(\text{Spec }k,C^{*}(Z_{0}(T))\otimes\mathbb{Z}/N\mathbb{Z})\to
H^{0}(\text{Spec }k,C^{*}(\mathcal{F})\otimes\mathbb{Z}/N\mathbb{Z}).$
So it suffices to show that $[x]=[y]$ in $H^{0}(\text{Spec
}k,C^{*}(Z_{0}(T))\otimes\mathbb{Z}/N\mathbb{Z})$. But the latter cohomology
group is $CH_{0}(T)\otimes\mathbb{Z}/N\mathbb{Z}\cong\mathbb{Z}/N\mathbb{Z}$
by Corollary 4.6, and the isomorphism is given by the degree map. Any two
points $x,y$ give the same class in $H^{0}(\text{Spec
}k,C^{*}(Z_{0}(T))\otimes\mathbb{Z}/N\mathbb{Z})$. ∎
Now we begin the proof of Theorem 4.7.
###### Proof of Theorem 4.7.
Given a normal projective variety $S$ parameterizing a family of one
dimensional cycles of $X$, there is an induced morphism of h-sheaves:
$Z_{0}^{h}(S)\to Z_{1}^{h}(X).$
We denote by $I(S)$ (resp. $K(S)$) the image h-sheaf (resp. kernel h-sheaf) of
this map.
By Lemma 4.9, there is a normal projective variety $U$, a family of
equidimensional one cycles $\gamma_{U}$ over $U$ and a morphism
$f:\Delta^{1}\to U$ such that $[L]$ is represented by $f^{*}\gamma_{U}$ over
$\Delta^{1}$.
Denote by $\gamma_{0},\gamma_{1}$ the restrictions of the family of cycles
$\gamma_{U}$ over $U$ to $0,1\in\Delta^{1}$. Then
$\gamma_{0}-\gamma_{1}=N(\gamma_{0,1})$ for some cycle $\gamma_{0,1}$. The
image of $[L]$ in $CH_{1}(X,0,\mathbb{Z})[N]$ and $A_{1}(X)[N]$ is the class
of $\gamma_{0,1}$.
If $\gamma_{0,1}$ is zero in $A_{1}(X)[N]$, that is, if $\gamma_{0,1}$ is
algebraically equivalent to $0$, then by Proposition 2.4, there is a smooth
projective curve $D$ with a family of cycles $\gamma_{D}$ and two points
$d,d^{\prime}$ such that the fiber of $\gamma_{D}$ over $d$ is $0$ and the
fiber over $d^{\prime}$ is $\gamma_{0,1}$.
Consider the product $U\times D$. We have a family of cycles
$\gamma=\pi_{U}^{*}\gamma_{U}+N\pi_{D}^{*}\gamma_{D}$.
There are three points in $S=U\times D$,
$x=(f(0),d),y=(f(1),d),z=(f(1),d^{\prime})$
such that
1. (1)
$\gamma_{x}=\gamma_{z}=\gamma_{0}$.
2. (2)
There is a curve $C$ containing $y,z$ such that for every point $c\in C$, the
cycle $\gamma_{c}$ equals $\gamma_{1}$ in
$Z_{1}(X)\otimes\mathbb{Z}/N(\text{Spec }k)$.
As in the proof of Theorem 3.8, we apply the second part of Theorem 2.6 to
find a normal projective variety $V$, a projective algebraic set $Y$ with a
surjective projective morphism $V\to S$ and an embedding $V\to Y$, and
liftings $x_{V},y_{V},z_{V}$ of the points $x,y,z$ such that
1. (1)
The two points $x_{V}$ and $z_{V}$ are connected by a chain of curves in $Y$
parameterizing constant cycles.
2. (2)
The two points $y_{V}$ and $z_{V}$ are connected by a curve $D_{V}$ in $V$
parameterizing cycles divisible by $N$.
Denote by $K$ the kernel of the morphism between h sheaves
$Z_{0}(V)\to Z_{0}(S).$
Here $V\to S$ is proper and surjective. So the above morphism of h sheaves is
surjective. Then we have an isomorphism of h sub-sheaves of $Z_{1}(X)$
$I(V)\cong I(S).$
It follows that we have a short exact sequence of h sheaves:
$0\to K\to K(V)\to K(S)\to 0.$
We have commutative diagrams:
$\begin{CD}H^{-1}_{h}(C^{*}(Z_{0}(Y))/N)@>{}>{}>H^{-1}_{h}(C^{*}(I(Y)/N))@>{}>{}>H^{0}_{h}(C^{*}(K(Y))/N)\\\
@A{}A{}A@A{}A{}A@A{}A{}A\\\
H^{-1}_{h}(C^{*}(Z_{0}(V))/N)@>{}>{}>H^{-1}_{h}(C^{*}(I(V))/N)@>{}>{}>H^{0}_{h}(C^{*}(K(V))/N)\\\
@V{}V{}V@V{}V{=}V@V{}V{}V\\\
H^{-1}_{h}(C^{*}(Z_{0}(S))/N)@>{}>{}>H^{-1}_{h}(C^{*}(I(S))/N)@>{}>{}>H^{0}_{h}(C^{*}(K(S))/N)\\\
\end{CD}$
The obstruction to lifting the class $[L]$ in $H^{-1}_{h}(\text{Spec
}k,C^{*}(I(V))\otimes\mathbb{Z}/N\mathbb{Z})$ is in $H^{0}_{h}(\text{Spec
}k,C^{*}(K(V))\otimes\mathbb{Z}/N\mathbb{Z})$ and maps to $[x-y]$ in
$H^{0}_{h}(\text{Spec }k,C^{*}(K(S))\otimes\mathbb{Z}/N\mathbb{Z})$. The class
$[x_{V}-y_{V}]$ differs from the obstruction class by an element in
$H^{0}_{h}(\text{Spec }k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z}).$
Given the morphism $V\to S$, we have a long exact sequence
$\displaystyle\ldots\to$ $\displaystyle H^{-1}_{h}(\text{Spec
}k,C^{*}(Z_{0}(V))\otimes\mathbb{Z}/N\mathbb{Z})\to H^{-1}_{h}(\text{Spec
}k,C^{*}(Z_{0}(S))\otimes\mathbb{Z}/N\mathbb{Z})$ $\displaystyle\to$
$\displaystyle H^{0}_{h}(\text{Spec
}k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})\to H^{0}_{h}(\text{Spec
}k,C^{*}(Z_{0}(V))\otimes\mathbb{Z}/N\mathbb{Z})\to\ldots$
Therefore $H^{0}_{h}(\text{Spec }k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})$ is
finitely generated by Corollary 4.6. By Lemma 4.12, any class in
$H^{0}_{h}(\text{Spec }k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})$ is equivalent
to a class of the form $[a-b]$, where $a,b$ are points in the fiber over a
general point in $S$.
The class $[L]$ maps to $H^{-1}_{h}(\text{Spec
}k,C^{*}(I(Y))\otimes\mathbb{Z}/N\mathbb{Z})$, with obstruction class the
push-forward of $[x_{V}-y_{V}]$ modulo classes in $H^{0}_{h}(\text{Spec
}k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})$.
By the existence of the families of constant cycles in the second part of
Theorem 2.6, and by Lemma 4.10, we have
1. (1)
The composition
$H^{0}_{h}(\text{Spec }k,C^{*}(K)/N)\to H^{0}_{h}(\text{Spec
}k,C^{*}(K(V))/N)\to H^{0}_{h}(\text{Spec }k,C^{*}(K(Y))/N)$
is the zero map.
2. (2)
The push-forward of the class $[x_{V}-y_{V}]$ vanishes in
$H^{0}_{h}(\text{Spec }k,C^{*}(K(Y))\otimes\mathbb{Z}/N\mathbb{Z})$ (by
applying Lemma 4.10 to $[x_{V}-y_{V}]$ and $[x_{V}-x_{V}]=0$).
Thus the class $[L]$ in $H^{-1}_{h}(\text{Spec }k,C^{*}(I(Y))/N)$ comes from
$H^{-1}_{h}(\text{Spec }k,C^{*}(Z_{0}(Y))/N)$.
Finally, if $X$ is separably rationally connected in codimension $1$, we may
take $Y$ to be normal by Theorem 2.6. We use Gabber’s refinement of de Jong’s
alteration to find a smooth projective variety $Z$ and a projective alteration
$Z\to Y$ whose degree is relatively prime to $N$. Then
$CH_{0}(Z,1,\mathbb{Z}/N\mathbb{Z})\to CH_{0}(Y,1,\mathbb{Z}/N\mathbb{Z})$
is surjective by Lemma 4.13. Pulling back the families of cycles over $Y$
gives a family of cycles over $Z$. Then the theorem follows from the following
commutative diagram
$\begin{CD}CH_{0}(Z,1,\mathbb{Z}/N)@>{}>{}>CH_{0}(Y,1,\mathbb{Z}/N)@>{\Gamma_{*}}>{}>CH_{1}(X,1,\mathbb{Z}/N)\\\
@V{\cong}V{}V@V{\cong}V{}V@V{}V{}V\\\
H_{1}(Z,\mathbb{Z}/N)@>{}>{}>H_{1}(Y,\mathbb{Z}/N)@>{\Gamma_{*}}>{}>H_{3}(X,\mathbb{Z}/N).\end{CD}$
∎
The lemmas used in the proof are the following.
###### Lemma 4.11.
Let $X\to Y$ be a flat and finite morphism defined over an algebraically
closed field $k$, where $Y$ is a normal variety (but $X$ is not necessarily
normal). Let $N$ be an integer invertible over $k$. Denote by $K$ the kernel h
sheaf of $Z_{0}(X)\to Z_{0}(Y)$. Then for any chosen general point in $Y$,
$H_{h}^{0}(\text{Spec }k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})$ is generated
by classes of the form $[t_{1}-t_{2}]$, for $t_{1},t_{2}$ in the fiber over
this chosen general point in $Y$.
###### Proof.
Let $x_{1},x_{2}$ be two points in the fiber over $y\in Y$. Clearly
$H_{h}^{0}(\text{Spec }k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})$ is generated
by classes of the form $[x_{1}-x_{2}]$, for all pairs of points with the same
image in $Y$. We will show that for any chosen general point $t\in Y$, the
class $[x_{1}-x_{2}]$ is equivalent to a class $[t_{1}-t_{2}]$ for some points
$t_{1},t_{2}$ in the fiber over $t$. Consider the correspondence
$X\times_{Y}X\subset X\times X$. Since $X\to Y$ is assumed to be flat,
$X\times_{Y}X\to X$ is flat. We take an irreducible component $D$ containing
$(x_{1},x_{2})$, which dominates (and thus surjects onto) $X$. There are two
points $x_{D},t_{D}$ in $D$ such that the following conditions are satisfied.
1. (1)
There is a surjective morphism $f:D\to Y$ that maps $x_{D}$ (resp. $t_{D}$) to
$y$ (resp. $t$).
2. (2)
There are two morphisms $f_{1},f_{2}:D\to X$ such that
$f_{1}(x_{D})=x_{1},f_{2}(x_{D})=x_{2}$.
3. (3)
The composition of $f_{1},f_{2}$ with the morphism $q:X\to Y$ gives the
morphism $f:D\to Y$.
Then by Lemma 4.10, the class $[x_{1}-x_{2}]$ is the same as
$[f_{1}(t_{D})-f_{2}(t_{D})]$. ∎
###### Lemma 4.12.
Let $p:X\to Y$ be a generically finite surjective morphism between normal
projective varieties over an algebraically closed field $k$. Let $N$ be an
integer invertible over $k$. Denote by $K$ the kernel h sheaf of $Z_{0}(X)\to
Z_{0}(Y)$. Then the cohomology group $H_{h}^{0}(\text{Spec
}k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})$ is generated by classes of the form
$[t_{1}-t_{2}]$, for $t_{1},t_{2}$ in the fiber over any chosen general point in
$Y$.
###### Proof.
There is a projective birational morphism $Y^{\prime}\xrightarrow{q}Y$
such that the strict transform $X^{\prime}$ of $X$ is flat over $Y^{\prime}$.
That is, we have a commutative diagram:
$\begin{CD}X^{\prime}@>{q^{\prime}}>{}>X\\\ @V{p^{\prime}}V{}V@V{p}V{}V\\\
Y^{\prime}@>{q}>{}>Y\\\ \end{CD}$
We write $K(p)$, etc., for the kernel h sheaf of $Z_{0}(X)\to
Z_{0}(Y)$, etc. There is a commutative diagram of short exact sequences of h
sheaves
$\setcounter{MaxMatrixCols}{11}\begin{CD}0@>{}>{}>K(q^{\prime})@>{}>{}>Z_{0}(X^{\prime})@>{q_{*}^{\prime}}>{}>Z_{0}(X)@>{}>{}>0\\\
@V{}V{}V@V{}V{}V@V{p_{*}^{\prime}}V{}V@V{p_{*}}V{}V@V{}V{}V\\\
0@>{}>{}>K(q)@>{}>{}>Z_{0}(Y^{\prime})@>{q_{*}}>{}>Z_{0}(Y)@>{}>{}>0,\end{CD}$
which also gives commutative diagrams after tensoring with
$\mathbb{Z}/N\mathbb{Z}$. Then we have long exact sequences:
$CH_{1}(X,1,\mathbb{Z}/N\mathbb{Z})\to CH_{1}(Y,1,\mathbb{Z}/N\mathbb{Z})\to
H_{h}^{0}(\text{Spec }k,C^{*}(K(p))\otimes\mathbb{Z}/N\mathbb{Z})\ldots,$
$CH_{1}(X^{\prime},1,\mathbb{Z}/N\mathbb{Z})\to
CH_{1}(Y^{\prime},1,\mathbb{Z}/N\mathbb{Z})\to H_{h}^{0}(\text{Spec
}k,C^{*}(K(p^{\prime}))\otimes\mathbb{Z}/N\mathbb{Z})\ldots.$
The cohomology group $H_{h}^{0}(\text{Spec
}k,C^{*}(K(p))\otimes\mathbb{Z}/N\mathbb{Z})$ (resp. $H_{h}^{0}(\text{Spec
}k,C^{*}(K(p^{\prime}))\otimes\mathbb{Z}/N\mathbb{Z})$ ) is generated by
cycles of the form $y_{1}-y_{2}$ for $y_{1},y_{2}$ in the same fiber. So it
suffices to show that such cycles are zero.
We first show that
$CH_{0}(Y^{\prime},1,\mathbb{Z}/N\mathbb{Z})\to
CH_{0}(Y,1,\mathbb{Z}/N\mathbb{Z})$
is surjective. This is because $Y^{\prime}\to Y$ has connected fibers. So for
any two points in the same fiber, by Lemma 4.10, the class of the difference
is zero. Since
$CH_{0}(Y^{\prime},0,\mathbb{Z}/N\mathbb{Z})\to
CH_{0}(Y,0,\mathbb{Z}/N\mathbb{Z})$
is an isomorphism, we know that $H_{h}^{0}(\text{Spec
}k,C^{*}(K(q))\otimes\mathbb{Z}/N\mathbb{Z})$ vanishes.
By the same argument, $H_{h}^{0}(\text{Spec
}k,C^{*}(K(q^{\prime}))\otimes\mathbb{Z}/N\mathbb{Z})$ vanishes. Then a simple
diagram chase shows that
$H_{h}^{0}(\text{Spec }k,C^{*}(K(p^{\prime}))\otimes\mathbb{Z}/N\mathbb{Z})\to
H_{h}^{0}(\text{Spec }k,C^{*}(K(p))\otimes\mathbb{Z}/N\mathbb{Z})$
is surjective.
Thus the statements follow from Lemma 4.11. ∎
###### Lemma 4.13.
Let $p:X\to Y$ be a generically finite morphism between normal projective
varieties over an algebraically closed field $k$. Let $N$ be an integer
invertible over $k$. Assume that $\deg p$ is relatively prime to $N$. Then we
have a surjection
$CH_{0}(X,1,\mathbb{Z}/N\mathbb{Z})\to CH_{0}(Y,1,\mathbb{Z}/N\mathbb{Z}).$
###### Proof.
By Lemma 4.12, and the long exact sequence
$\displaystyle CH_{0}(X,1,\mathbb{Z}/N\mathbb{Z})\to
CH_{0}(Y,1,\mathbb{Z}/N\mathbb{Z})\to H^{0}_{h}(\text{Spec }k,C^{*}(K)/N)$
$\displaystyle\to$ $\displaystyle
CH_{0}(X,0,\mathbb{Z}/N)\xrightarrow{\cong}CH_{0}(Y,0,\mathbb{Z}/N),$
it suffices to show that for a general point $y\in Y$ and any two points
$x_{1},x_{2}$ in the fiber of $y$, the class $[x_{1}-x_{2}]$ is zero in
$H^{0}_{h}(\text{Spec }k,C^{*}(K)/N)$.
By the Bertini theorem for étale fundamental groups, there is a general
complete intersection curve $H$ such that the inverse image $H^{\prime}$ in
$Y^{\prime}$ is irreducible. For $H$ general, the morphism $H^{\prime}\to H$
is flat and finite of degree prime to $N$. Thus
$\displaystyle CH_{0}(H^{\prime},1,\mathbb{Z}/N\mathbb{Z})\to
CH_{0}(H,1,\mathbb{Z}/N\mathbb{Z})\to H_{h}^{0}(\text{Spec
}k,C^{*}(K_{H})\otimes\mathbb{Z}/N\mathbb{Z})$ $\displaystyle\to
CH_{0}(H^{\prime},\mathbb{Z}/N\mathbb{Z})\to
CH_{0}(H,\mathbb{Z}/N\mathbb{Z}),$
where $K_{H}$ is the kernel h sheaf of $Z_{0}(H^{\prime})\to Z_{0}(H)$. The
map
$CH_{0}(H^{\prime},\mathbb{Z}/N\mathbb{Z})\to
CH_{0}(H,\mathbb{Z}/N\mathbb{Z})$
is an isomorphism. On the other hand, since $p:H^{\prime}\to H$ is flat and
finite, we have pull-back and push-forward on all the higher Chow groups. The
composition of pull-back and push-forward
$CH_{0}(H,1,\mathbb{Z}/N\mathbb{Z})\xrightarrow{p^{*}}CH_{0}(H^{\prime},1,\mathbb{Z}/N\mathbb{Z})\xrightarrow{p_{*}}CH_{0}(H,1,\mathbb{Z}/N\mathbb{Z})$
is multiplication by $\deg p$. Since the degree of the map is relatively prime
to $N$,
$CH_{0}(H^{\prime},1,\mathbb{Z}/N\mathbb{Z})\xrightarrow{p_{*}}CH_{0}(H,1,\mathbb{Z}/N\mathbb{Z})$
is surjective. Thus for any two points $t_{1},t_{2}$ over a general point
$t\in Y$, the class $[t_{1}-t_{2}]$ vanishes in $H_{h}^{0}(\text{Spec
}k,C^{*}(K_{H})\otimes\mathbb{Z}/N\mathbb{Z})$. So does its push-forward in
$H_{h}^{0}(\text{Spec }k,C^{*}(K)\otimes\mathbb{Z}/N\mathbb{Z})$. ∎
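The transfer step used in the last part of the proof can be spelled out; the following is a sketch of the standard norm argument (not additional material from the text), using only the projection formula $p_{*}p^{*}=\deg p\cdot\mathrm{id}$ quoted above.

```latex
% Sketch: d = deg p is invertible mod N, so choose e with d e \equiv 1 \pmod{N}.
% For any class \alpha \in CH_0(H, 1, \mathbb{Z}/N\mathbb{Z}), the projection
% formula p_* p^* = d \cdot \mathrm{id} gives
\alpha \;=\; d e\,\alpha \;=\; p_{*}\bigl(e\,p^{*}\alpha\bigr),
% exhibiting \alpha in the image of p_*, i.e. p_* is surjective mod N.
```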
Fix a prime number $\ell$ different from the characteristic of $k$. In the
following theorem, we omit all the Tate twists for simplicity of notation.
###### Theorem 4.14.
Let $X$ be a smooth projective variety defined over an algebraically closed
field, which is separably rationally connected in codimension $1$. There is a
smooth projective curve $C$ with a family of $1$-dimensional cycles
$\Gamma\subset C\times X$ such that
$\Gamma_{*}:H_{1}^{\text{BM}}(C,\mathbb{Z}_{\ell})\to
H_{3}^{\text{BM}}(X,\mathbb{Z}_{\ell})$
surjects onto $N^{1}H_{3}(X,\mathbb{Z}_{\ell})$.
###### Proof.
In the following, we use Borel-Moore homology. For simplicity of notation, we
simply write $H_{1},H_{3}$. Let $NH_{3}(X,\mathbb{Z}/\ell^{n})$ denote the
coniveau filtration $N^{1}H_{3}(X,\mathbb{Z}/\ell^{n})$ on homology.
Denote by $\tilde{N}H_{3}(X,\mathbb{Z}/\ell^{n})$ the strong coniveau
filtration $\tilde{N}^{1}H_{3}(X,\mathbb{Z}/\ell^{n})$.
For a smooth projective variety $Y$, we have
$H_{1}(Y,\mathbb{Z}_{\ell})/\ell^{n}\cong H_{1}(Y,\mathbb{Z}/\ell^{n})$, since
$H_{0}(Y,\mathbb{Z}_{\ell})\cong\mathbb{Z}_{\ell}$ is torsion free. Therefore,
$\tilde{N}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\to\tilde{N}H_{3}(X,\mathbb{Z}/\ell^{n})$
is surjective.
We have a commutative diagram
$\begin{CD}\oplus_{(S,\Gamma_{S})}CH_{0}(S,1,\mathbb{Z}/\ell^{n})@>{\oplus\Gamma_{S*}}>{}>CH_{1}(X,1,\mathbb{Z}/\ell^{n})@>{}>{}>A_{1}(X)[\ell^{n}]\to 0\\\
@V{}V{}V@V{}V{}V@V{}V{}V\\\
0\to H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\cap NH_{3}(X,\mathbb{Z}/\ell^{n})@>{}>{}>NH_{3}(X,\mathbb{Z}/\ell^{n})@>{}>{}>H_{2}(X,\mathbb{Z}_{\ell})[\ell^{n}],\end{CD}$
where the direct sum is taken over families of equidimensional $1$-cycles over
smooth projective varieties.
By Theorem 4.7, the upper row is exact. The lower row is also exact, since it
comes from
$0\to H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\to H_{3}(X,\mathbb{Z}/\ell^{n})\to
H_{2}(X,\mathbb{Z}_{\ell})[\ell^{n}]\to 0.$
The vertical maps are cycle class maps.
The middle vertical map
$CH_{1}(X,1,\mathbb{Z}/\ell^{n})\to NH_{3}(X,\mathbb{Z}/\ell^{n})$
is surjective, since for any surface $\Sigma$, not necessarily smooth, we have
a surjection
$CH_{1}(\Sigma,1,\mathbb{Z}/\ell^{n})\to H_{3}(\Sigma,\mathbb{Z}/\ell^{n}).$
When this surface is smooth, it is a consequence of the Bloch-Kato conjecture.
The general case can be proven using the localization sequence for higher Chow
groups and Borel-Moore homology.
The left vertical arrow is the direct sum of the composition
$CH_{0}(S,1,\mathbb{Z}/\ell^{n})\to H_{1}(S,\mathbb{Z}/\ell^{n})\cong
H_{1}(S,\mathbb{Z}_{\ell})/\ell^{n}\xrightarrow{\Gamma_{S*}}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}.$
Since the cycle class map induces an isomorphism
$CH_{0}(S,1,\mathbb{Z}/\ell^{n})\cong H_{1}(S,\mathbb{Z}/\ell^{n})\cong
H_{1}(S,\mathbb{Z}_{\ell})/\ell^{n}$, the left vertical arrow has the same
cokernel as
$\tilde{N}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\to
H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\cap NH_{3}(X,\mathbb{Z}/\ell^{n}).$
By the snake lemma, this cokernel is isomorphic to the cokernel $C_{n}$ of
$\text{Ker}(CH_{1}(X,1,\mathbb{Z}/\ell^{n})\to
H_{3}(X,\mathbb{Z}/\ell^{n}))\to\text{Ker}(A_{1}[\ell^{n}]\to
H_{2}(X,\mathbb{Z}_{\ell})[\ell^{n}]).$
The connecting maps $C_{n+m}\to C_{n}$ are multiplication by $\ell^{m}$.
Therefore the inverse limit $\lim\limits_{\xleftarrow{}}C_{n}$ is torsion
free.
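Why an inverse limit of this kind is torsion free can be seen in the model case of a Tate module, where the transition maps are likewise multiplication by $\ell$; the following is a standard computation included for the reader's convenience, not a claim from the text.

```latex
% Model case: C_n = A[\ell^n] with transition map multiplication by \ell.
% Suppose c = (c_n) lies in \varprojlim_n C_n and \ell c = 0, i.e.
% \ell c_n = 0 in A for every n.  Compatibility of the system gives
c_{n} \;=\; \ell\, c_{n+1} \;=\; 0,
% since \ell c_{n+1} = 0.  Hence every component vanishes, so the inverse
% limit has no \ell-torsion; iterating kills \ell^r-torsion for all r.
```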
We have a factorization
$\tilde{N}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\to{N}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\to
H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\cap NH_{3}(X,\mathbb{Z}/\ell^{n})\subset
H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}.$
Taking inverse limit, we get
$\tilde{N}H_{3}(X,\mathbb{Z}_{\ell})\to{N}H_{3}(X,\mathbb{Z}_{\ell})\to\lim\limits_{\xleftarrow[n]{}}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\cap
NH_{3}(X,\mathbb{Z}/\ell^{n})\subset H_{3}(X,\mathbb{Z}_{\ell}).$
Therefore the map
${N}H_{3}(X,\mathbb{Z}_{\ell})\to\lim\limits_{\xleftarrow[n]{}}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\cap
NH_{3}(X,\mathbb{Z}/\ell^{n})$
is injective. Since the cokernel of
$\tilde{N}H_{3}(X,\mathbb{Z}_{\ell})\to\lim\limits_{\xleftarrow[n]{}}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\cap
NH_{3}(X,\mathbb{Z}/\ell^{n})$ is torsion free, so is the cokernel of
$\tilde{N}H_{3}\to NH_{3}$. On the other hand, we know that this cokernel is
torsion. So it has to be zero. That is, the strong coniveau filtration
coincides with the coniveau filtration.
Therefore there is a smooth projective variety $Y$ and a family of cycles
$\Gamma_{Y}$ such that the induced map
$\Gamma_{Y*}:H_{1}(Y,\mathbb{Z}_{\ell})\to{N}^{1}H_{3}(X,\mathbb{Z}_{\ell})$
is surjective. By taking hyperplane sections in $Y$, we may find a smooth
projective curve $C$ with a family of cycles $\Gamma$ such that
$\Gamma_{*}:H_{1}(C,\mathbb{Z}_{\ell})\to{N}^{1}H_{3}(X,\mathbb{Z}_{\ell})$
is surjective.
Finally, for later use, we note that the proof also shows that for $X$
SRC in codimension $1$, the maps
(5) $\tilde{N}^{1}H_{3}(X,\mathbb{Z}_{\ell})\to
N^{1}H_{3}(X,\mathbb{Z}_{\ell})\to\lim\limits_{\xleftarrow[n]{}}H_{3}(X,\mathbb{Z}_{\ell})/\ell^{n}\cap
NH_{3}(X,\mathbb{Z}/\ell^{n})$
are isomorphisms. The first isomorphism is already shown above. We have
already shown that the second map is injective. The cokernel is torsion since
the cokernel of $NH_{3}\to H_{3}$ is torsion. By the first isomorphism and the
fact that the composition has torsion free cokernel, the cokernel of the
second map is also torsion free, and thus zero. ∎
###### Remark 4.15.
When $X$ is only smooth projective, one can prove that the filtration
$N_{1,\text{cyl},\text{st}}H_{3}(X,\mathbb{Z}_{\ell})$ is the same as
$N^{1}H_{3}(X,\mathbb{Z}_{\ell})$ by the same argument.
###### Theorem 4.16.
Let $X$ be a smooth projective $3$-fold over an algebraically closed field.
Assume that $X$ is separably rationally connected in codimension $1$. Then all
the filtrations on $H^{3}(X,\mathbb{Z}_{\ell})$ introduced in Definition 1.18
equal the whole cohomology group:
$\tilde{N}_{1,\text{cyl},\text{eq}}H^{3}(X,\mathbb{Z}_{\ell})=\tilde{N}_{1,\text{cyl}}H^{3}(X,\mathbb{Z}_{\ell})=\tilde{N}^{1}H^{3}(X,\mathbb{Z}_{\ell})=N^{1}H^{3}(X,\mathbb{Z}_{\ell})=H^{3}(X,\mathbb{Z}_{\ell}).$
###### Corollary 4.17.
Let $X$ be a smooth projective variety of dimension $d$ defined over a finite
field $\mathbb{F}_{q}$, that is separably rationally connected in codimension
$1$. Assume one of the following:
1. (1)
$N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))=H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))$.
2. (2)
The cycle class map
$cl:\lim\limits_{\xleftarrow[n]{}}CH_{1}(\bar{X},1,\mathbb{Z}/\ell^{n})\to
H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1))$
is surjective.
Then every class in
$H^{1}(\mathbb{F}_{q},H^{3}(\bar{X},\mathbb{Z}_{\ell}(d-1)))$ is the class of
an algebraic cycle defined over $\mathbb{F}_{q}$. In particular, this holds if
$X$ has dimension $3$.
###### Proof.
We first show that the surjectivity of the cycle class map
$cl:\lim\limits_{\xleftarrow[n]{}}CH_{1}(\bar{X},1,\mathbb{Z}/\ell^{n})\to
H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1))$
implies that
$N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))=H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))$.
In fact, we have
$\displaystyle\lim\limits_{\xleftarrow[n]{}}CH_{1}(\bar{X},1,\mathbb{Z}/\ell^{n})\to\lim\limits_{\xleftarrow[n]{}}N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}/\ell^{n}(d-1))$
$\displaystyle\to$
$\displaystyle\lim\limits_{\xleftarrow[n]{}}H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}/\ell^{n}(d-1))=H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1)).$
Therefore
$\lim\limits_{\xleftarrow[n]{}}N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}/\ell^{n}(d-1))\to\lim\limits_{\xleftarrow[n]{}}H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}/\ell^{n}(d-1))=H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1))$
is surjective. On the other hand, since $N^{1}H_{3}(X,\mathbb{Z}/\ell^{n})$ is
a subgroup of $H_{3}(X,\mathbb{Z}/\ell^{n})$, the map of inverse limits is injective,
hence an isomorphism. We have an exact sequence
$\displaystyle
0\to\lim\limits_{\xleftarrow[n]{}}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))/\ell^{n}\cap
N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}/\ell^{n}(d-1))$
$\displaystyle\xrightarrow{\phi}$
$\displaystyle\lim\limits_{\xleftarrow[n]{}}N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}/\ell^{n}(d-1))\to\lim\limits_{\xleftarrow[n]{}}H^{2d-2}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1))[\ell^{n}],$
where the first inverse limit is
$N^{1}H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1))$ by (5), and
the last inverse limit is torsion free. Since the quotient
$H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))/N^{1}H_{\text{\'{e}t}}^{2d-3}(\bar{X},\mathbb{Z}_{\ell}(d-1))$
is torsion for separably rationally connected varieties or separably
rationally connected fibrations over a curve, we know that $\phi$ is an
isomorphism and thus $N^{1}H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell})\to
H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell})$ is an isomorphism.
Therefore, by Theorem 4.14, there is a smooth projective curve $C$ defined
over $\bar{\mathbb{F}}_{q}$ with a family of one-dimensional cycles
$\Gamma\subset C\times\bar{X}$ such that
$\Gamma_{*}:H^{1}_{\text{\'{e}t}}(C,\mathbb{Z}_{\ell}(1))\to
H^{2d-3}_{\text{\'{e}t}}(\bar{X},\mathbb{Z}_{\ell}(d-1))$
is surjective. Then this corollary follows from [SS22, Proposition 7.6], the
statement of which is recalled in Theorem 1.31. ∎
## 5\. Integral Tate conjecture and local-global principle for zero cycles
Let $X$ be a smooth projective geometrically irreducible variety of dimension
$d$ defined over a finite field $\mathbb{F}$. We have the cycle class maps:
(6) $CH_{1}(X)\otimes\mathbb{Z}_{\ell}\to H^{2d-2}(X,\mathbb{Z}_{\ell}(d-1)).$
Recall that the integral Tate conjecture asks the following question.
###### Question 5.1.
For which smooth projective varieties $X$ defined over $\mathbb{F}$, and for
which $r$, is the cycle class map (6) surjective?
We mention another closely related question.
###### Question 5.2.
Let $X$ be a smooth projective variety defined over a henselian local field
with finite residue field. Is the cycle class map
$CH_{0}(X)\hat{\otimes}\mathbb{Z}_{\ell}\to H^{2d}(X,\mathbb{Z}_{\ell}(d))$
injective? Here $\ell$ is invertible in the residue field.
###### Remark 5.3.
Question 5.2 has a positive answer if $X$ is a geometrically rational surface,
and has a regular model with SNC central fiber ([EW16, Theorem 3.1] in general
and [Sai91, Theorem A] for the case of $p$-adic fields). In this case, the
proof in [EW16] also shows that the closed fiber also satisfies a version of
the integral Tate conjecture. For $X$ defined over a Laurent series field
$\mathbb{F}_{q}((t))$,
a regular model with SNC central fiber always exists, since we have resolution
of singularities for $3$-folds.
If Question 5.2 has a positive answer for the generic fiber $X$, then
Conjectures 1.1 and 1.3 are equivalent for $X$.
###### Remark 5.4.
We also note that the results in [Tia20] suggest that Question 5.2 should
have a positive answer for separably rationally connected varieties, provided
that the characteristic $p$ analogue of the conjecture $\textbf{R}(n,3)$ about
Kato homology in loc. cit. is true, and that the minimal model program is
established in positive and mixed characteristic.
As discussed in Theorem 1.10 and the remark that follows this theorem in the
introduction, various types of integral Tate conjectures would imply various
versions of Colliot-Thélène’s Conjectures 1.1 and 1.2.
We can deduce Theorem 1.13 from Theorem 1.35 and Corollary 4.17.
###### Proof of Theorem 1.13.
Recall that $G=\text{Gal}(\bar{\mathbb{F}}_{q}/\mathbb{F}_{q})$ is the
absolute Galois group. By Theorem 1.35, we have an isomorphism
$A_{1}(\mathcal{X})\cong A_{1}(\overline{\mathcal{X}})^{G}.$
Under the assumptions (A), (B) of Theorem 1.13, we know that there is an
isomorphism of $\text{Gal}(\bar{\mathbb{F}}_{q}/\mathbb{F}_{q})$-modules:
$A_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell}\cong
H^{2d}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d)).$
Note that $G$ is generated by the Frobenius $F$ and
$A_{1}(\overline{\mathcal{X}})^{G}$ is the kernel of $F^{*}-\text{id}$. Since
$\mathbb{Z}_{\ell}$ is a flat $\mathbb{Z}$-module,
$A_{1}(\overline{\mathcal{X}})^{G}\otimes\mathbb{Z}_{\ell}$ is the kernel of
$(F^{*}-\text{id})\otimes\text{id}_{\mathbb{Z}_{\ell}}:A_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell}\to
A_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell}.$
That is,
$A_{1}(\overline{\mathcal{X}})^{G}\otimes\mathbb{Z}_{\ell}\cong(A_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell})^{G}\cong
H^{2d}(\overline{\mathcal{X}},\mathbb{Z}_{\ell})^{G},$
where $\mathbb{Z}_{\ell}$ is equipped with the trivial action of $G$.
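Concretely, the flatness of $\mathbb{Z}_{\ell}$ enters as follows; this is a routine homological-algebra step, written out only for the reader's convenience.

```latex
% The invariants A_1(\bar{\mathcal{X}})^G form the kernel of F^* - id, i.e.
%   0 \to A_1(\bar{\mathcal{X}})^G \to A_1(\bar{\mathcal{X}})
%     \xrightarrow{F^* - \mathrm{id}} A_1(\bar{\mathcal{X}})
% is exact.  Tensoring with the flat \mathbb{Z}-module \mathbb{Z}_\ell
% preserves this left-exact sequence:
0 \to A_{1}(\overline{\mathcal{X}})^{G}\otimes\mathbb{Z}_{\ell}
  \to A_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell}
  \xrightarrow{(F^{*}-\mathrm{id})\otimes\mathrm{id}}
  A_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell},
% so taking G-invariants commutes with - \otimes \mathbb{Z}_{\ell}.
```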
By part (2) of Theorem 1.10, this proves the first part of the theorem. The
second part of the theorem is just Corollary 4.17. ∎
###### Proof of Theorem 1.12.
First note that the surjectivity of the cycle class map is a birational
invariant. So using resolution of singularities for $3$-folds [CP08, CP09,
Abh98], we may assume that the singular fibers are SNC divisors. The result of
Bloch-Srinivas [BS83] shows that the Griffiths group of $1$-cycles on
$\overline{\mathcal{X}}$ is $p$-torsion. Thus the hypothesis (B) in Theorem
1.13 is satisfied. As for hypothesis (A), we have a commutative diagram of
localization exact sequences:
$\begin{CD}\oplus
CH_{1}(\overline{\mathcal{X}}_{i})\otimes\mathbb{Z}_{\ell}@>{}>{}>CH_{1}(\overline{\mathcal{X}})\otimes\mathbb{Z}_{\ell}@>{}>{}>CH_{1}(\overline{\mathcal{X}}^{0})\otimes\mathbb{Z}_{\ell}@>{}>{}>0\\\
@V{}V{}V@V{}V{}V@V{}V{}V\\\ \oplus
H^{4}_{\overline{\mathcal{X}}_{i}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(2))@>{}>{}>H^{4}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(2))@>{}>{}>H^{4}(\overline{\mathcal{X}}^{0},\mathbb{Z}_{\ell}(2))\\\
\end{CD}$
Here $\mathcal{X}_{i}$ is a singular fiber of the fibration $\mathcal{X}\to B$
and $\mathcal{X}^{0}$ is the complement of all the singular fibers
$\mathcal{X}_{i}$. By Section 4.3 in [EW16], the first vertical map is
surjective. We may assume that $\overline{\mathcal{X}}^{0}$ is over an affine
curve (i.e. the direct sum on the left is non-trivial). A simple calculation
then shows that $H^{4}(\overline{\mathcal{X}}^{0})$ is one dimensional and
spanned by the class of a section. Thus the third cycle class map is also
surjective by [dJS03]. So the middle one is surjective.
Hypotheses (C) and (D) are also satisfied in this case. For simplicity, we
only explain how to prove hypothesis (D). On the one hand, the cokernel is
torsion for separably rationally connected fibrations by a decomposition of
the diagonal argument. On the other hand, the Bloch-Kato conjecture proved by
Voevodsky (in dimension $3$, we can also use the Merkurjev-Suslin theorem)
implies that it is torsion free for all smooth projective $3$-folds. Hence the
cokernel has to vanish.
Thus Theorem 1.13 implies that $CH_{1}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to
H^{4}(\mathcal{X},\mathbb{Z}_{\ell}(2))$ is surjective. ∎
###### Proof of Theorem 1.4.
This follows from combining Theorems 1.10 and 1.12, and Remark 5.3. ∎
## 6\. Examples
We conclude this article with some examples where one can check the conditions
in Theorem 1.13.
###### Proposition 6.1.
Let $\mathcal{X}\subset\mathbb{P}_{B}(E)$ be a family of complete
intersections of degree $d_{1},\ldots,d_{c}$ in $\mathbb{P}^{n}$ over a smooth
projective curve $B$ over $\mathbb{F}_{q}$. Assume that the generic fiber $X$
is smooth separably rationally connected of dimension $d\geq 5$ and that
$\sum d_{i}^{2}\leq n$. Also assume that $\mathcal{X}$ is smooth. Then the
cycle class map
$CH^{d}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to
H^{2d}_{\text{{\'{e}}t}}(\mathcal{X},\mathbb{Z}_{\ell}(d))$
is surjective and Conjectures 1.1 and 1.2 hold for the generic fiber $X$ over
$\mathbb{F}_{q}(B)$.
###### Proof.
By the dimension assumption and the affine vanishing for étale cohomology,
there is an isomorphism
$H^{2d}_{\text{{\'{e}}t}}(\mathcal{X},\mathbb{Z}_{\ell}(d))\cong
H^{2d-2}_{\text{{\'{e}}t}}(\overline{\mathcal{X}},\mathbb{Z}_{\ell}(d-1))^{G},$
which is spanned by the class of a section and a line in a fiber. So it
suffices to show that over the algebraic closure $\overline{\mathbb{F}}_{q}$,
every multisection is rationally equivalent to a multiple of any fixed section
modulo lines in fibers, and that every line in a fiber is algebraically
equivalent to any line in any smooth fiber.
Both statements follow from well-known arguments. More precisely, the
space of a chain of two lines passing through two points in a complete
intersection of degree $d_{1},\ldots,d_{c}$ is defined by equations of degree
$1,1,2,2,\ldots,d_{1}-1,d_{1}-1,d_{1},1,1,\ldots,d_{2}-1,d_{2}-1,d_{2},\ldots,1,1,\ldots,d_{c}-1,d_{c}-1,d_{c}$
in $\mathbb{P}^{n}$ (see, for example, Lemma 3.4 in [Pan18]). Thus by the
classical Tsen-Lang theorem, for any family of complete intersections of
degree $(d_{1},\ldots,d_{c})$ over a smooth curve $T/\bar{\mathbb{F}}_{q}$,
and for any two sections of this family, there is a family of chains of two
lines in the complete intersections over $T$ such that the two sections lie in
this family of chains of two lines. Any two sections in a
$\mathbb{P}^{1}$-bundle over a curve are rationally equivalent modulo general
fibers. Thus any two sections are rationally equivalent up to lines in general
fibers. This in turn implies that any two multi-sections of the same degree
are rationally equivalent up to lines in general fibers. Since any curve in a
fiber is rationally equivalent to the difference of two multi-sections of the
same degree, it is also rationally equivalent to lines in general fibers.
Finally, since the Fano scheme of lines of a complete intersection is
connected as long as it has positive dimension [DM98, Théorème 2.1], all lines
in a fiber are algebraically equivalent. ∎
###### Remark 6.2.
The dimension assumption is not restrictive. The only low dimensional examples
satisfying the numerical conditions are quadrics and linear spaces. One can
check by hand that the integral Tate conjecture holds for them.
###### Remark 6.3.
In general it is still an open question if a smooth Fano complete intersection
is separably rationally connected. However, one can show that if the
characteristic $p$ is larger than all the $d_{i}$, then every smooth Fano
complete intersection of degree $d_{1},\ldots,d_{c}$ is separably rationally
connected [STZ22].
###### Proposition 6.4.
Let $X$ be a smooth proper variety that is also a homogeneous variety under an
integral linear algebraic group $G$ over $\mathbb{F}_{q}(B)$. Assume that $X$
admits a regular projective model $\mathcal{X}\to B$. Then the cycle class map
$CH^{d}(\mathcal{X})\otimes\mathbb{Z}_{\ell}\to
H^{2d}_{\text{{\'{e}}t}}(\mathcal{X},\mathbb{Z}_{\ell}(d))$
is surjective and Conjecture 1.2 holds for $X$.
###### Proof.
It is well-known that $\bar{G}$ over $\bar{\mathbb{F}}_{q}(B)$ is rational.
Thus $\overline{\mathcal{X}}\to\bar{B}$ is birational to
$\bar{B}\times_{\bar{\mathbb{F}}_{q}}\mathbb{P}^{n}\to\bar{B}$. So the
conditions in Theorem 1.13 are satisfied. ∎
###### Remark 6.5.
Liang [Lia13] proved that if the Brauer-Manin obstruction is the only
obstruction to weak approximation of rational points in a rationally connected
variety over a number field $K$ and all of its finite field extensions, the
number field analogue of Conjecture 1.3 is true. As a corollary, he proved
Conjecture 1.3 for all smooth proper varieties birational to a homogeneous
space under a linear algebraic groups with connected stablizer. Harpaz and
Wittenberg [HW20] proved Conjecture 1.3 for all smooth proper varieties
birational to a homogeneous space under a linear algebraic group. One could
expect that this also holds in the global function field case by essentially
the same proof (modulo some characteristic p issues).
###### Theorem 6.6.
Let $C$ be a smooth projective geometrically integral curve defined over a
global function field $\mathbb{F}(B)$. Assume that $C$ has a zero cycle of
degree $1$ over $\bar{\mathbb{F}}(B)$. Let $X(r,L)$ be the moduli space of
stable vector bundles of rank $r$ and fixed determinant
$L\in\text{Pic}^{d}(C)$, with $r,d$ relatively prime to each other. Assume
that $X(r,L)$ has a smooth projective model over $B$. Then the local-global
principle for zero cycles (i.e. Conjecture 1.1) holds for $X(r,L)$.
###### Proof.
It is well-known that $X(r,L)$ is geometrically rational [Hof07, New75, New80,
KS99] and has geometric Picard group isomorphic to $\mathbb{Z}$. In fact, as
long as the curve, defined over any field $k$, has a $k$-rational point,
$X(r,L)$ is rational over $k$ [Hof07]. Using a norm argument, one can prove
that the hypotheses (A)-(D) in Theorem 1.13 hold for
$X(r,L)\otimes\bar{\mathbb{F}}$ under our assumptions. Hence Theorem 1.13
implies the statement. ∎
## References
* [Abh98] S.S. Abhyankar. Resolution of singularities of embedded algebraic surfaces. 2nd, enl. ed. Berlin: Springer, 2nd, enl. ed. edition, 1998.
* [Ben22] Olivier Benoist. Steenrod operations and algebraic classes. arXiv preprint https://arxiv.org/abs/2209.03685, 2022.
* [BO21] Olivier Benoist and John Christian Ottem. Two coniveau filtrations. Duke Math. J., 170(12):2719–2753, 2021.
* [Bru78] Armand Brumer. Remarques sur les couples de formes quadratiques. C. R. Acad. Sci. Paris Sér. A-B, 286(16):A679–A681, 1978.
* [BS83] S. Bloch and V. Srinivas. Remarks on correspondences and algebraic cycles. Amer. J. Math., 105(5):1235–1253, 1983.
* [CP08] Vincent Cossart and Olivier Piltant. Resolution of singularities of threefolds in positive characteristic. I. Reduction to local uniformization on Artin-Schreier and purely inseparable coverings. J. Algebra, 320(3):1051–1082, 2008.
* [CP09] Vincent Cossart and Olivier Piltant. Resolution of singularities of threefolds in positive characteristic. II. J. Algebra, 321(7):1836–1976, 2009.
* [CT99] Jean-Louis Colliot-Thélène. Conjectures de type local-global sur l’image des groupes de Chow dans la cohomologie étale. In Algebraic $K$-theory (Seattle, WA, 1997), volume 67 of Proc. Sympos. Pure Math., pages 1–12. Amer. Math. Soc., Providence, RI, 1999.
* [CT00] Jean-Louis Colliot-Thélène. Principe local-global pour les zéro-cycles sur les surfaces réglées. J. Amer. Math. Soc., 13(1):101–127, 2000. With an appendix by E. Frossard and V. Suresh.
* [CT22] Jean-Louis Colliot-Thélène. Retour sur l’arithmétique des intersections de deux quadriques, avec un appendice par A. Kuznestov. preprint, https://arxiv.org/abs/2208.04121, 2022.
* [CTK13] Jean-Louis Colliot-Thélène and Bruno Kahn. Cycles de codimension 2 et $H^{3}$ non ramifié pour les variétés sur les corps finis. J. K-Theory, 11(1):1–53, 2013.
* [CTS10] Jean-Louis Colliot-Thélène and Tamás Szamuely. Autour de la conjecture de Tate à coefficients ${\bf Z}_{\ell}$ pour les variétés sur les corps finis. In The geometry of algebraic cycles, volume 9 of Clay Math. Proc., pages 83–98. Amer. Math. Soc., Providence, RI, 2010.
* [CTS21] Jean-Louis Colliot-Thélène and Federico Scavia. Sur la conjecture de tate entière pour le produit d’une courbe et d’une surface $ch_{0}$-triviale sur un corps fini. preprint, arXiv:2001.10515v4, 2021.
* [CTSD12] J.-L. Colliot-Thélène and Peter Swinnerton-Dyer. Zero-cycles and rational points on some surfaces over a global function field. Acta Arith., 155(1):63–70, 2012.
* [CTSSD87] Jean-Louis Colliot-Thélène, Jean-Jacques Sansuc, and Peter Swinnerton-Dyer. Intersections of two quadrics and Châtelet surfaces. I. J. Reine Angew. Math., 373:37–107, 1987.
* [CTV12] Jean-Louis Colliot-Thélène and Claire Voisin. Cohomologie non ramifiée et conjecture de Hodge entière. Duke Math. J., 161(5):735–801, 2012.
* [dJS03] A. J. de Jong and J. Starr. Every rationally connected variety over the function field of a curve has a rational point. Amer. J. Math., 125(3):567–580, 2003.
* [DM98] Olivier Debarre and Laurent Manivel. Sur la variété des espaces linéaires contenus dans une intersection complète. Math. Ann., 312(3):549–574, 1998.
# Two-dimensional Dirac semiconductor and its material realization
Botao Fu College of Physics and Electronic Engineering, Center for
Computational Sciences, Sichuan Normal University, Chengdu, 610068, China
Chao He College of Physics and Electronic Engineering, Center for
Computational Sciences, Sichuan Normal University, Chengdu, 610068, China Da-
Shuai Ma Key Lab of advanced optoelectronic quantum architecture and
measurement (MOE), Beijing Key Lab of Nanophotonics $\&$ Ultrafine
Optoelectronic Systems, and School of Physics, Beijing Institute of
Technology, Beijing 100081, China Zhi-Ming Yu Key Lab of advanced
optoelectronic quantum architecture and measurement (MOE), Beijing Key Lab of
Nanophotonics $\&$ Ultrafine Optoelectronic Systems, and School of Physics,
Beijing Institute of Technology, Beijing 100081, China Yong-Hong Zhao
College of Physics and Electronic Engineering, Center for Computational
Sciences, Sichuan Normal University, Chengdu, 610068, China Yugui Yao
<EMAIL_ADDRESS>Key Lab of advanced optoelectronic quantum architecture and
measurement (MOE), Beijing Key Lab of Nanophotonics $\&$ Ultrafine
Optoelectronic Systems, and School of Physics, Beijing Institute of
Technology, Beijing 100081, China
###### Abstract
We propose a new concept of two-dimensional (2D) Dirac semiconductor, which is
characterized by the emergence of fourfold-degenerate band crossings near the
band edge, and provide a generic approach to realizing this novel semiconductor
in real materials. Based on first-principles
calculations and symmetry analysis, we discover that the recently synthesized triple-
layer (TL) BiOS2 is such a Dirac semiconductor, featuring a Dirac cone at the X/Y
point protected by nonsymmorphic symmetry. Due to its sandwich-like structure,
each Dirac fermion in TL-BiOS2 can be regarded as a combination of two Weyl
fermions with opposite chiralities, degenerate in momentum-energy space but
separated in real space. Such a Dirac semiconductor carries layer-dependent
helical spin textures that have not been reported before. Moreover, novel
topological phase transitions can be flexibly achieved in TL-BiOS2: (i) a
vertical electric field can drive it into a Weyl semiconductor with switchable
spin polarization direction, and (ii) a tensile strain can generate
ferroelectric polarization and drive it into a Weyl nodal ring around the X point
and into another type of fourfold-degenerate point at the Y point. Our work
extends the Dirac fermion into semiconductor systems and provides a promising
avenue to integrate spintronics and optoelectronics in topological materials.
Introduction.— The Dirac fermion with linear band dispersion was first
discovered in 2D grapheneNeto _et al._ (2009). It was soon generalized
to 3D, and abundant topological semimetal phases including Weyl,
Dirac and nodal-line semimetals have been realized in a wealth of materialsYan and
Felser (2017); Fang _et al._ (2016); Weng _et al._ (2016); Gao _et al._
(2019); Armitage _et al._ (2018). The unique electronic structures of
topological semimetals lead to protected surface states and novel responses to
external fields, and thus attract intensive research interestWang and Wang
(2018); Nagaosa _et al._ (2020). In fact, these quasi-fermions can be
generalized to superconducting and metallic systemsWang _et al._ (2016);
Chiu and Schnyder (2014). For instance, nodal-line fermions were observed in the
superconducting material PbTaSe2Bian _et al._ (2016), and flat surface
states of nodal lines were reported to exist widely in alkaline-earth metals and
noble metalsLi _et al._ (2016); Yan _et al._ (2015).
In comparison with metals and semimetals, semiconductor materials are
particularly suitable for Dirac devices due to their high tunability and
compatibility with the modern electronics industry. Therefore, introducing
Weyl/Dirac fermions into semiconductors to develop Weyl/Dirac semiconductors
could create a new degree of freedom for the future design of semiconductor
electronic and optoelectronic devices. Recently, the chiral Weyl fermion was
theoretically predictedHirayama _et al._ (2015) and experimentally observed
in elemental tellurium, an intrinsic semiconductor with a band gap of
about 0.36 eV, which was thus dubbed a Weyl semiconductorSakano _et al._ (2020);
Ideue _et al._ (2019); Zhang _et al._ (2019). Based on these considerations,
in this paper we generalize the fourfold-degenerate Dirac fermion from
semimetals to semiconductors in two dimensions, since 2D materials possess
superior mechanical properties and small size that are more favorable for
integration and regulation.
Figure 1: A nonsymmorphic operator $\widetilde{g}=\\{g|\bm{t}\\}$ leads to
band crossings at $\mathbf{G}/2$ ($\mathbf{G}$ is the reciprocal lattice
vector) for a system with $\mathcal{PT}$ symmetry in the presence of SOC. (a)
The electronic structure of a system with electron filling number $4n+2$. (b)
The electronic structure of a system with electron filling number $4n$. The
dashed line indicates the Fermi level. (c) Schematic illustration of the
construction of a Dirac fermion in a multiple-layer structure. Two Weyl fermions
located on individual monolayers merge together to form a Dirac fermion.
Proposal of 2D Dirac semiconductors.— First, we review the concept of the 2D
Dirac semimetal in the presence of spin-orbit coupling (SOC), as proposed by
Young and KaneYoung and Kane (2015). Consider a system with both time-reversal
symmetry $\mathcal{T}$ and space inversion $\mathcal{P}$, together with a
nonsymmorphic symmetry $\widetilde{g}=\\{g|\bm{t}\\}$, where $g$ is a point group
operation and $\bm{t}$ is a fractional translation. As shown in Fig. 1(a), along
a $g$-invariant line, e.g. from $\mathbf{0}$ to $\mathbf{G}$, each doubly
degenerate band can be labeled by the opposite, ${\bm{k}}$-dependent
eigenvalues of $\widetilde{g}$: $\pm ie^{i\bm{k}\cdot\bm{t}}$. These equal $\pm i$ at
$\mathbf{0}$ and gradually evolve into $\pm 1$ at $\mathbf{G}/2$. At
$\mathbf{G}/2$, $\mathcal{T}^{2}=-1$ forces two $+1$ states or two $-1$
states to be degenerate, namely Kramers degeneracy. Consequently, two branches
of doubly degenerate bands have to stick together and form a fourfold-
degenerate Dirac point at $\mathbf{G}/2$. If the electron filling number
satisfies $4n+2$ $(n\geq 0)$, the Fermi level crosses the Dirac point and the
system becomes an ideal Dirac semimetal with a point-like Fermi surface.
Despite this concise and intriguing picture, practical and ideal 2D Dirac
semimetal materials are very rareGuan _et al._ (2017); Kowalczyk _et al._
(2020). One reason may be the limitation of the electron filling number. By
counting the 2D nonsymmorphic materials in a materials databaseMounet _et al._
(2018), we find only 16$\%$ of them have a $4n+2$ electron filling number, while
the rest host $4n$ electrons. The other reason may be that the second-order
SOC effect is relatively small compared with second-order electron hopping,
which results in the emergence of undesirable trivial metallic states at the Fermi
level. This can be explicitly verified through a simple tight-binding modelSup
. From another perspective, as in Fig. 1(b), for most materials with $4n$
electrons the Fermi level lies inside the gap between two sets of bands
containing Dirac fermions at $\mathbf{G}/2$. In general, such a system is a
semiconductor. If the band edge happens to reside at or
near $\mathbf{G}/2$, it becomes a so-called “Dirac semiconductor” with a
symmetry-enforced Dirac cone close to the Fermi level. In such a Dirac
semiconductor, the signature of Dirac fermions could be probed by angle-
resolved photoemission spectroscopy (ARPES) and transport experiments, as has
been done for Weyl semiconductorsKowalczyk _et al._ (2020); Zhang _et
al._ (2019); Qiu _et al._ (2020).
In Fig. 1(c), we introduce a practical approach to constructing this Dirac
semiconductor in multiple-layer systems. Consider a monolayer system without
$\mathcal{P}$ but with $\mathcal{T}$ symmetry: when SOC is taken into
account, each band becomes spin-split except at the time-
reversal invariant momenta (TRIM), where Kramers degeneracy occurs. It is
worth noting that such Kramers degeneracies were recently reported to host
unanticipated topological charges in 3D chiral lattices, dubbed Kramers-Weyl
fermionsChang _et al._ (2018); Li _et al._ (2019). Bearing this in mind, we
stack up two $\mathcal{P}$-breaking monolayers that are semiconductors with
band edges at $\mathbf{G}/2$, and meanwhile impose an in-plane fractional
translation. The constructed bilayer system hosts both $\mathcal{P}$ and glide
mirror symmetries. Consequently, two Kramers-Weyl points from the two monolayers
merge into one Dirac point at $\mathbf{G}/2$. This tells us that a Dirac
semiconductor can exist in certain stacked bilayers or some intrinsic bilayer
or multiple-layer materials. More importantly, with its unique multiple-layer
structure and semiconducting nature, the Dirac semiconductor will exhibit
flexibly tunable electronic and topological properties under external fields.
Material Realization.— Based on the above analysis, we identify four kinds of
multiple-layer semiconductorsSup that host a nonsymmorphic space group and a
proper band edge position. Here, taking TL-BiOS2 as a prototype, we
demonstrate the emergence of a Dirac fermion around the band edge and its
manipulation by external fields. TL-BiOS2 is predicted to be a stable
direct-band-gap semiconductor with high carrier mobility and large visible
light absorptionZhang _et al._ (2018). Experimentally, high carrier
mobility as well as solar-cell power conversion efficiency has recently been
observed in 2D BiOS2Huang and Yu (2020). As shown in Fig. 2(a), TL-BiOS2
consists of one BiO layer in the middle and two BiS2 layers on the top and
bottom, forming a sandwich-like structure. The optimized lattice constants are
$|\bm{a}|$=$|\bm{b}|$=3.95 Å. Although each BiS2 layer is inversion-
asymmetric, with non-centrosymmetric polar site point group $C_{4v}$, the two BiS2
layers are each other's inversion partners and are related by an inter-layer
relative translation $(\bm{a}+\bm{b})/2$. Hence the whole system hosts the
centrosymmetric and nonsymmorphic space group $P4/nmm$ (No. 129).
Figure 2: (a) The atomic structure of TL-Bi2OS2. (b), (c) The real-space charge
density distributions of the maxima of HVB1 and HVB2. (d) The orbital-resolved
band structure without the SOC effect. The green color represents Bi $p$-orbitals
and the red color represents S $p$-orbitals on the BiS2 layer, while the blue
color represents all orbitals from the BiO layer. (e) The band structure with the SOC
effect. (f) Enlarged views around the LCB and HVB with SOC. (g) The 3D Dirac
cone around $X$ with helical spin textures indicated by arrows. The different
colors represent bands from different BiS2 layers. The Dirac fermions are
marked by yellow dots.
The electronic structure of TL-Bi2OS2 under the generalized gradient
approximation (GGA) is displayed in Fig. 2(d)-(e). In the absence of SOC, it
has a direct band gap ($E_{g}$=0.97 eV) with both the valence band maximum (VBM)
and the conduction band minimum (CBM) located at the X/Y point. The equivalence of X
and Y is guaranteed by the $C_{4z}$ symmetry. Importantly, both the highest
valence band (HVB) and the lowest conduction band (LCB) are doubly degenerate not
only at the X/Y point but along the X-M-Y path. This degeneracy is protected by the screw
axes $\widetilde{C}_{2x}$/$\widetilde{C}_{2y}$Fan _et al._ (2018), in which
the tilde refers to an additional $(\bm{a}+\bm{b})/2$ translation. The orbital-
resolved band structure demonstrates that both the HVB and LCB primarily derive
from the BiS2 layers, the former from S $p$-orbitals and the latter from
Bi $p$-orbitals. The charge density distributions of the doubly degenerate HVB
(HVB1 and HVB2) states are shown in Fig. 2(b)-(c), from which we find that,
despite their degeneracy in momentum-energy space, HVB1 and HVB2 are located
separately on the top and bottom BiS2 layers, respectively. A similar phenomenon
also occurs for the LCB. Thus, TL-Bi2OS2 well satisfies the conditions for
the emergence of a Dirac semiconductor: (i) two $\mathcal{P}$-breaking BiS2
layers stack up via electrostatic interaction with the intermediate BiO layer,
such that the system simultaneously hosts fractional translation and inversion
symmetries; (ii) the doubly degenerate VBM and CBM at X/Y in TL-Bi2OS2 originate from
opposite BiS2 layers.
When the SOC effect is considered, as in Fig. 2(e), each band becomes doubly degenerate
because of $(\mathcal{PT})^{2}=-1$. The band gap ($E_{g}$) at X/Y is reduced from
0.97 eV to 0.40 eV. More interestingly, the LCB and HVB split along the XM path and
exhibit Rashba-like dispersions, as demonstrated in Fig. 2(f). The Rashba
parameter can be defined as $\alpha_{R}=2E_{R}/k_{R}$, which is about 1.27 eVÅ
and 2.15 eVÅ for the LCB and HVB, respectively. Notably, this is distinct from
traditional Rashba splitting, which occurs in non-centrosymmetric
materialsIshizaka _et al._ (2011); Maaß _et al._ (2016); Zhang and Liu
(2019); Di Sante _et al._ (2013). Around the X/Y point, two doubly degenerate
bands cross each other to form a fourfold-degenerate Dirac fermion, as
expected. This can be understood from the following perspective. Since HVB1
and HVB2 are located separately on opposite BiS2 layers without $\mathcal{P}$, the
SOC effect naturally gives rise to a Rashba splitting for each of them. Considering
the two BiS2 layers in TL-Bi2OS2 as a whole, the nonsymmorphic operation in combination
with space inversion sticks the two sets of Rashba bands together, and in
particular enables the appearance of a Dirac fermion at X/Y. Therefore, although
this Dirac semiconductor hosts $\mathcal{PT}$ symmetry, one can still identify the
spin textures of the Dirac fermion. As shown in Fig. 2(g), it indeed hosts
opposite chiral spin textures on the two BiS2 layers, which may provide
potential applications in spintronics.
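For reference, the definition $\alpha_{R}=2E_{R}/k_{R}$ used above recovers the Rashba coefficient of the standard one-dimensional Rashba dispersion $E_{\mp}(k)=\hbar^{2}k^{2}/2m\mp\alpha k$, where $E_{R}$ is the depth of the band minimum and $k_{R}$ its momentum offset. A symbolic sketch of this textbook relation:

```python
import sympy as sp

# Standard 1D Rashba branch E_-(k) = hbar^2 k^2/(2m) - alpha*k; its minimum
# sits at k_R = m*alpha/hbar^2 with depth E_R = m*alpha^2/(2*hbar^2),
# so 2*E_R/k_R reproduces alpha.
k, alpha, m, hbar = sp.symbols('k alpha m hbar', positive=True)
E_minus = hbar**2 * k**2 / (2 * m) - alpha * k

k_R = sp.solve(sp.diff(E_minus, k), k)[0]  # momentum offset of the minimum
E_R = -E_minus.subs(k, k_R)                # depth of the minimum below E(0)

assert sp.simplify(k_R - m * alpha / hbar**2) == 0
assert sp.simplify(E_R - m * alpha**2 / (2 * hbar**2)) == 0
assert sp.simplify(2 * E_R / k_R - alpha) == 0
```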
Figure 3: (a) The electronic structure under an external vertical electric
field, E${}_{\text{ext}}$=0.3 V/Å. (b) An enlarged view around the valence band.
(c) The relation between $\Delta E$ and the external electric field. (d) The
distribution of Berry curvature around the two Weyl points at X; the Berry
phases encircling them are given. (e) The electronic structure under $2\%$
biaxial strain, with spontaneous electric polarization $P_{s}$ along $y$. (f)
An enlarged view around the valence band at X. The eigenvalues of
$\widetilde{M}_{z}$ and $\widetilde{C}_{2y}$ are given. (g) A 3D view of the
bands in (f); the nodal line is marked by a black line. (h) Enlarged views
around Y along different $k$-paths. The red and green dots indicate the
different Weyl fermions at X in (b) and (f); the yellow dot represents the Dirac
fermion in (h).
To capture the physics of the Dirac fermion around the band edge, we
build an effective $k{\cdot}p$ model. For the VBM at the X point, we choose
the degenerate HVB1 and HVB2 states as the basis in the absence of SOC. Then,
with the SOC effect included, the Hamiltonian around X is written as
$H=\left(\begin{array}[]{cc}h_{+}&T\\\ T&h_{-}\\\ \end{array}\right),$ (1)
$h_{\pm}=\epsilon_{k}\pm(\alpha_{y}k_{y}\sigma_{x}-\alpha_{x}k_{x}\sigma_{y}),$
(2)
$\epsilon_{k}=\frac{\hbar^{2}k_{x}^{2}}{2m_{x}^{*}}+\frac{\hbar^{2}k_{y}^{2}}{2m_{y}^{*}},$
(3)
where $h_{\pm}$ describes the electrons from HVB1/HVB2 located on the
top/bottom BiS2 layer and $T$ is the inter-layer coupling term. The
$\alpha_{x,y}$ and $m_{x,y}^{*}$ are the anisotropic Rashba parameters and
effective masses along the $x/y$ directions, respectively, and $\sigma_{x,y,z}$ are
Pauli matrices acting on the spin degree of freedom. In Eq. 2 we keep the
predominant Rashba SOC term, which originates from the interfacial polar field
between the BiO and BiS2 layers. Since the top and bottom BiS2 layers feel opposite
polar fields, they possess opposite Rashba spin textures, as indicated by
$h_{\pm}$. At the X point the little group is $D_{2h}$; the time-reversal and space-
inversion operators are chosen as $\mathcal{T}=i\sigma_{y}K\tau_{z}$ and
$\mathcal{P}=\sigma_{0}\tau_{y}$, and based on the specific commutation relations
the other symmetry operations can be readily written as
$\widetilde{C}_{2x}=\sigma_{x}\tau_{x}$,
$\widetilde{C}_{2y}=i\sigma_{y}\tau_{y}$ and
$\widetilde{M}_{z}=\sigma_{z}\tau_{x}$, where $\tau_{x,y,z}$ are Pauli
matrices acting on the layer degree of freedom. With these constraints the
inter-layer term is given as $T=t_{i}k_{x}\sigma_{0}\tau_{x}$, where $t_{i}$ is the
inter-layer coupling coefficient. The low-energy effective
Hamiltonian at X, to first order, is then obtained as
${H_{\text{X}}=(\alpha_{y}k_{y}\sigma_{x}-\alpha_{x}k_{x}\sigma_{y})\tau_{z}+t_{i}k_{x}\sigma_{0}\tau_{x}}.$
(4)
Solving this directly, we obtain two branches of doubly degenerate
energy spectra,
${E_{\pm}^{\text{X}}=\pm\sqrt{(\alpha_{x}^{2}+t_{i}^{2})k_{x}^{2}+\alpha_{y}^{2}k_{y}^{2}}},$
(5)
which indeed describe a fourfold-degenerate Dirac fermion at
$k_{x}$=$k_{y}$=0 with Fermi velocities
$v_{x}=\sqrt{\alpha_{x}^{2}+t_{i}^{2}}$ and $v_{y}=|\alpha_{y}|$. By fitting the
$k\cdot p$ model to our first-principles calculations, we obtain
$\alpha_{x}$=1.40 eVÅ, $\alpha_{y}$=2.15 eVÅ, and $t_{i}$=0.09 eVÅ. That $t_{i}$ is
much smaller than $\alpha_{x,y}$ indicates that the interlayer interaction is weak
and can be reasonably neglected. The Dirac fermion in Eq. 4 can then be
decomposed into two Weyl fermions with opposite layer index,
$H_{\pm}^{W}=\pm(\alpha_{y}k_{y}\sigma_{x}-\alpha_{x}k_{x}\sigma_{y}).$ (6)
This indicates that the Dirac fermion can be approximately taken as two spatially
isolated Weyl fermions with opposite layer index carrying opposite helical
spin textures, which differs from other known Dirac fermions in semimetal or
metal systems and may extend the potential applications of Dirac fermions. For
instance, the linear dispersion as well as the in-plane helical spin texture in a
Dirac semiconductor may suppress direct backscattering and facilitate carrier
mobility, and the layer-resolved spin polarizationZhang _et al._ (2014); Liu
_et al._ (2015a, 2016); Guan and Luo (2020) can be flexibly controlled by an
external field. Moreover, we find that for stacks of two and three TL-BiOS2 units the
Dirac fermion still existsSup , making it easy to probe in experiment. When
extended to the 3D bulk limit, the Dirac fermion transforms into a Dirac
nodal line along the $k_{z}$ direction, where each point can be taken as a Dirac
point, thus dubbed a 3D Dirac nodal-line semiconductor. We believe these
unique features of such Dirac semiconductors deserve further experimental
exploration.
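The spectrum in Eq. (5) can be checked numerically against the Hamiltonian in Eq. (4); the following sketch builds $H_{\text{X}}$ with Kronecker products (spin $\otimes$ layer) using the fitted parameter values quoted above:

```python
import numpy as np

# Pauli matrices; in the Kronecker products below the first factor acts on
# spin (sigma) and the second on the layer index (tau).
s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

ax, ay, ti = 1.40, 2.15, 0.09  # fitted values from the text, in eV*Angstrom

def H_X(kx, ky):
    # Eq. (4): (a_y ky sigma_x - a_x kx sigma_y) tau_z + t_i kx sigma_0 tau_x
    return (np.kron(ay * ky * sx - ax * kx * sy, sz)
            + np.kron(ti * kx * s0, sx))

def E_analytic(kx, ky):
    # Eq. (5): +/- sqrt((a_x^2 + t_i^2) kx^2 + a_y^2 ky^2)
    return np.sqrt((ax**2 + ti**2) * kx**2 + ay**2 * ky**2)

kx, ky = 0.03, -0.02
E = E_analytic(kx, ky)
evals = np.linalg.eigvalsh(H_X(kx, ky))
assert np.allclose(evals, [-E, -E, E, E])              # two doubly degenerate branches
assert np.allclose(np.linalg.eigvalsh(H_X(0, 0)), 0)   # fourfold Dirac point at X
```

The check works because the spin part and the interlayer part of Eq. (4) anticommute in layer space, so $H_{\text{X}}^{2}$ is proportional to the identity with coefficient $(\alpha_{x}^{2}+t_{i}^{2})k_{x}^{2}+\alpha_{y}^{2}k_{y}^{2}$.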
Manipulation of TPTs.—
Topological phase transitions (TPTs) manipulated by an external electric field
are of great significance, especially in lower dimensionsLi and Chang (2009); Li
_et al._ (2017); Liu _et al._ (2015b). For example, an electric-field-tuned
TPT was recently reported in ultra-thin Na3BiCollins _et al._ (2020), a well-
known Dirac semimetal. Since the Dirac fermion is protected by $\mathcal{PT}$ in
addition to other crystal symmetries, breaking either $\mathcal{P}$ or
$\mathcal{T}$ will induce various TPTsYoung and Kane (2015). The fact that the
Dirac fermion in TL-BiOS2 contains two spatially isolated Weyl points provides
a good opportunity for manipulation by a vertical electric field. Also, the
semiconducting nature of TL-BiOS2 makes it more amenable to electric-field control
than semimetal or metal systems, in which electrostatic screening predominates.
Specifically, a vertical electric field breaks the inversion symmetry of TL-BiOS2
by inducing a stacking potential difference between the BiS2 layers. This process can be
approximately described by adding a mass term $V_{1}=V_{z}\sigma_{0}\tau_{z}$
to $H_{\text{X}}$. Neglecting the interlayer coupling term, $H_{\text{X}}+V_{1}$
resolves into ${V_{z}\sigma_{0}+H_{+}^{W}}$ and
${-V_{z}\sigma_{0}+H_{-}^{W}}$, which indicates that a Dirac fermion can be
successfully split into two Weyl points with energy difference $\Delta
E=2V_{z}$. In Fig. 3(a)-(c), we perform first-principles calculations with an
external vertical electric field (E${}_{\text{ext}}$) varying from 0.0 to 0.4
V/Å. We find that the fourfold-degenerate Dirac fermions of the HVB and LCB at X/Y
split into pairs of Kramers-Weyl fermions. The splitting amplitude $\Delta E$ is
linearly proportional to E${}_{\text{ext}}$ with a ratio of 0.96
eÅ, as shown in Fig. 3(c). We already know that these two Weyl fermions have layer-
dependent chiral spin textures. Moreover, we show that they also exhibit large
Berry curvature distributions and can be identified by the quantized $\pm\pi$
Berry phases surrounding them, as displayed in Fig. 3(d). Hence, by switching the
electric field direction, the spin polarization direction and spin Hall
coefficient can be modified, enabling operation as a spin field-effect transistor (SFET)Liu
_et al._ (2013).
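The splitting $\Delta E=2V_{z}$ can be illustrated with a minimal numerical sketch; the value of $V_{z}$ below is a hypothetical stacking potential, not a computed one:

```python
import numpy as np

# Adding V1 = Vz * sigma_0 tau_z to H_X at the Dirac point (where H_X = 0)
# splits the fourfold level into two doubly degenerate Kramers-Weyl points
# separated by Delta_E = 2*Vz.  First kron factor: spin; second: layer.
s0 = np.eye(2, dtype=complex)
tz = np.diag([1.0, -1.0]).astype(complex)

Vz = 0.05  # eV, hypothetical field-induced potential difference
H0 = np.zeros((4, 4), dtype=complex)  # H_X vanishes at kx = ky = 0
H = H0 + Vz * np.kron(s0, tz)         # sigma_0 (x) tau_z

evals = np.linalg.eigvalsh(H)
delta_E = evals[-1] - evals[0]
assert np.allclose(evals, [-Vz, -Vz, Vz, Vz])  # two doubly degenerate levels
assert np.isclose(delta_E, 2 * Vz)             # Delta_E = 2*Vz
```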
Strain is another effective method for engineering the electronic, magnetic
and topological properties of 2D materialsXu _et al._ (2017); Zhao _et al._
(2020). As noted above, the Dirac point in TL-Bi2OS2 is guaranteed by the unique
nonsymmorphic space group, i.e., it is symmetry-enforced and robust against any
symmetry-preserving perturbation. In principle, a simple uniaxial or biaxial
strain on TL-Bi2OS2 will not break the nonsymmorphic symmetries. Fortunately, we
find that TL-Bi2OS2 is a novel piezoelectric materialCui _et al._ (2018); Guan _et
al._ (2020). A very small biaxial tensile strain induces a Peierls-like
atomic distortion of the BiS2 layers along the $y$ direction that breaks space
inversion and gives a switchable in-plane electric polarizationSup . In Fig.
3(e), we calculate the band structure under $2\%$ biaxial strain. One finds that
the band gap at the X point (0.85 eV) becomes larger than that at Y (0.70 eV),
because the in-plane polar distortion breaks the $C_{4z}$ rotation and lifts
the inter-valley degeneracy between X and Y.
The space group of the ferroelectrically distorted TL-BiOS2 is $Pmn2_{1}$, which
contains four symmetry operations: $M_{x}$, $\widetilde{M}_{z}$,
$\widetilde{C}_{2y}$ and the identity. At the X point, the absence of $\mathcal{P}$
and $\widetilde{C}_{2x}$ permits the emergence of a mass term
$V_{2}=m_{0}\sigma_{z}\tau_{x}$, which couples the two BiS2 layers and opens a
gap. As demonstrated in Fig. 3(f), the Dirac fermion splits into two Kramers-
Weyl fermions at the X point with a gap of $2m_{0}$=18.1 meV. Moreover, thanks to the
surviving $\widetilde{M}_{z}$, all bands can be characterized by its
eigenvalues within the whole Brillouin zone. We find that along the X$\Gamma$ path two
bands with opposite $\widetilde{M}_{z}$ eigenvalues have to cross each other
and exhibit a unique hourglass-like band connection. In fact, such a band
crossing happens for an arbitrary $k$-path connecting X and $\Gamma$ and forms a
closed nodal ring rather than isolated points. To confirm this, in Fig. 3(g)
we draw the 3D band spectrum and find that a Weyl nodal ring surrounding X appears,
accompanied by two Weyl points at X, consistent with our symmetry
analysis. In addition, the band crossing along XM is protected by the distinct
eigenvalues of $\widetilde{C}_{2y}$. As far as we know, such glide-mirror-
protected hourglass nodal rings or nodal chains have mainly been studied in 3D
systemsBzdusek _et al._ (2016); our result provides a practical and tunable
Weyl nodal ring candidate in lower dimensions. The Y point, despite sharing the same
little group as the X point, hosts different commutation relations: the zeroth-
order mass term $m_{0}\sigma_{z}\tau_{x}$ is forbidden by
$\widetilde{C}_{2y}=\sigma_{y}\tau_{x}$. Therefore, the fourfold degeneracy
is maintained at the Y point, while the $\mathcal{P}$ breaking allows the first-order
term $V_{3}=w_{y}k_{y}\sigma_{x}\tau_{y}+w_{x}k_{x}\sigma_{y}\tau_{y}$, which
gives a linear band splitting away from Y along a general $k$-path, e.g. Y-X-Y, as
shown in Fig. 3(h). Note that the double band degeneracy is still maintained along
$\Gamma$Y and YM, protected by the anticommutation relation
$\\{\widetilde{M}_{x},\widetilde{C}_{2y}\\}$=0 and by
$(\mathcal{T}\widetilde{C}_{2y})^{2}=-1$, respectively. This type of Dirac point,
which survives even without $\mathcal{PT}$, was very recently proposed in monolayer
SbSSnJin _et al._ (2020).
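The symmetry argument above reduces to matrix algebra: the mass term is allowed at a point only if it is invariant under conjugation by the little-group representations there. A sketch using the representations stated in the text (with $m_{0}$ set to 1):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

def conj(U, O):
    """Conjugate the operator O by the symmetry representation U."""
    return U @ O @ np.linalg.inv(U)

V2 = np.kron(sz, sx)  # mass term sigma_z tau_x (spin (x) layer)

# At X: C~2y = i sigma_y tau_y leaves V2 invariant -> the gap term is allowed.
C2y_X = 1j * np.kron(sy, sy)
assert np.allclose(conj(C2y_X, V2), V2)

# At Y: C~2y = sigma_y tau_x flips the sign of V2 -> the term is forbidden,
# so the fourfold degeneracy at Y survives the ferroelectric distortion.
C2y_Y = np.kron(sy, sx)
assert np.allclose(conj(C2y_Y, V2), -V2)

# V2 also anticommutes with P = sigma_0 tau_y, consistent with the term
# becoming available only once inversion is broken.
P = np.kron(s0, sy)
assert np.allclose(conj(P, V2), -V2)
```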
Conclusion.— In summary, in this work we put forward the concept of the 2D
Dirac semiconductor, which hosts a Dirac cone around the band edge in nonsymmorphic
systems, and we propose an effective approach to searching for such Dirac
semiconductors in certain multiple-layer materials. We explicitly demonstrate a
practical Dirac semiconductor material, TL-BiOS2, in which the Dirac cone is
composed of two spatially isolated Weyl cones and provides layer-resolved
spin polarization. Moreover, various TPTs can be induced in TL-BiOS2 by
imposing external strains or electric fields. Besides, it is worth mentioning
that starting from a Dirac semiconductor, one can conveniently achieve a Dirac
semimetal state by simply shifting the Fermi level through gate voltage,
element doping, or band inversion. In fact, F-substitution of O has been
used to tune the Fermi level in bulk LaOBiS2 to realize a high-temperature
superconducting stateYazici _et al._ (2015), and some proposed Dirac
semimetal materials like HfGeTe can be taken as Dirac semiconductors with band
inversionGuan _et al._ (2017). Our finding of the 2D Dirac semiconductor not only
enriches the family of topological quantum materials but also provides a new avenue
for exploring spin polarization, ferroelectric and optoelectronic properties
in topological quantum materials.
###### Acknowledgements.
This research was funded by the National Natural Science Foundation of China
(Grant No. 11874273 and No. 11974009) and the Key Project of Sichuan Science
and Technology Program (2019YFSY0044). We also thank the Sichuan Normal
University for financial support (No. 341829001). This research project was
supported by the High Performance Computing Center of Sichuan Normal
University, China. The work at BIT is supported by the National Key R&D
Program of China (Grant No. 2016YFA0300600), the National Natural Science
Foundation of China (Grant No. 11734003), and the Strategic Priority Research
Program of the Chinese Academy of Sciences (Grant No. XDB30000000).
## References
* Neto _et al._ (2009) A. C. Neto, F. Guinea, N. M. Peres, K. S. Novoselov, and A. K. Geim, Rev. Mod. Phys. 81, 109 (2009).
* Yan and Felser (2017) B. Yan and C. Felser, Annu. Rev. Condens. Matter Phys. 8, 337 (2017).
* Fang _et al._ (2016) C. Fang, H. Weng, X. Dai, and Z. Fang, Chin. Phys. B 25, 117106 (2016).
* Weng _et al._ (2016) H. Weng, X. Dai, and Z. Fang, J. Phys.: Condens. Matter 28, 303001 (2016).
* Gao _et al._ (2019) H. Gao, J. F. Venderbos, Y. Kim, and A. M. Rappe, Annu. Rev. Mater. Rev. 49, 153 (2019).
* Armitage _et al._ (2018) N. P. Armitage, E. J. Mele, and A. Vishwanath, Rev. Mod. Phys. 90, 015001 (2018).
* Wang and Wang (2018) H. Wang and J. Wang, Chin. Phys. B 27, 107402 (2018).
* Nagaosa _et al._ (2020) N. Nagaosa, T. Morimoto, and Y. Tokura, Nat. Rev. Mater. 5, 621 (2020).
* Wang _et al._ (2016) M. X. Wang, Y. Xu, L. P. He, J. Zhang, X. C. Hong, P. L. Cai, Z. B. Wang, J. K. Dong, and S. Y. Li, Phys. Rev. B 93, 020503 (2016).
* Chiu and Schnyder (2014) C.-K. Chiu and A. P. Schnyder, Phys. Rev. B 90, 205136 (2014).
* Bian _et al._ (2016) G. Bian, T.-R. Chang, R. Sankar, S.-Y. Xu, and et al., Nat. Commun. 7, 10556 (2016).
* Li _et al._ (2016) R. Li, H. Ma, X. Cheng, S. Wang, D. Li, Z. Zhang, Y. Li, and X.-Q. Chen, Phys. Rev. Lett. 117, 096401 (2016).
* Yan _et al._ (2015) B. Yan, B. Stadtmueller, N. Haag, S. Jakobs, and et al., Nat. Commun. 6, 10167 (2015).
* Hirayama _et al._ (2015) M. Hirayama, R. Okugawa, S. Ishibashi, S. Murakami, and T. Miyake, Phys. Rev. Lett. 114, 206401 (2015).
* Sakano _et al._ (2020) M. Sakano, M. Hirayama, T. Takahashi, S. Akebi, M. Nakayama, and et al., Phys. Rev. Lett. 124, 136404 (2020).
* Ideue _et al._ (2019) T. Ideue, M. Hirayama, H. Taiko, T. Takahashi, and et al., PNAS 116, 25530 (2019).
* Zhang _et al._ (2019) N. Zhang, G. Zhao, L. Li, P. Wang, and et al., arXiv 1906.06071 (2019).
* Young and Kane (2015) S. M. Young and C. L. Kane, Phys. Rev. Lett. 115, 126803 (2015).
* Guan _et al._ (2017) S. Guan, Y. Liu, Z.-M. Yu, S.-S. Wang, Y. Yao, and S. A. Yang, Phys. Rev. Mater. 1, 054003 (2017).
* Kowalczyk _et al._ (2020) P. J. Kowalczyk, S. A. Brown, T. Maerkl, Q. Lu, and a. e. a. Chiu, ACS NANO 14, 1888 (2020).
* Mounet _et al._ (2018) N. Mounet, M. Gibertini, P. Schwaller, and et al., Nat. Nanotechnol. 13, 246 (2018).
* (22) See Supplemental Information for the computational methods, the symmetry operators analysis, the tight-binding model, and the supplementary figures of candidates .
* Qiu _et al._ (2020) G. Qiu, C. Niu, Y. Wang, and et al., Nat. Nanotechnol. 15, 585 (2020).
* Chang _et al._ (2018) B. Chang, G.and Wieder, F. Schindler, and et al., Nat. Mater. 17, 978 (2018).
* Li _et al._ (2019) H. Li, S. Xu, Z. Rao, and et al., Nat. Commun. 10, 5505 (2019).
* Zhang _et al._ (2018) X. Zhang, B. Wang, X. Niu, Y. Li, Y. Chen, and J. Wang, Mater. Horiz. 5, 1058 (2018).
* Huang and Yu (2020) C. Huang and H. Yu, 2D Mater. 7, 025023 (2020).
* Fan _et al._ (2018) X. Fan, D. Ma, B. Fu, C.-C. Liu, and Y. Yao, Phys. Rev. B 98, 195437 (2018).
* Ishizaka _et al._ (2011) K. Ishizaka, M. Bahramy, H. Murakawa, and a. e. a. Sakano, Nat. Mater. 10, 521 (2011).
* Maaß _et al._ (2016) H. Maaß, H. Bentmann, C. Seibel, C. Tusche, and et al., Nat. Commun. 7, 11621 (2016).
* Zhang and Liu (2019) S.-H. Zhang and B.-G. Liu, Phys. Rev. B 100, 165429 (2019).
* Di Sante _et al._ (2013) D. Di Sante, P. Barone, R. Bertacco, and S. Picozzi, Adv. Mater. 25, 509 (2013).
* Zhang _et al._ (2014) X. Zhang, Q. Liu, J.-W. Luo, A. J. Freeman, and A. Zunger, Nat. Phys. 10, 387 (2014).
* Liu _et al._ (2015a) Q. Liu, X. Zhang, H. Jin, K. Lam, J. Im, A. J. Freeman, and A. Zunger, Phys. Rev. B 91, 235204 (2015a).
* Liu _et al._ (2016) Q. Liu, X. Zhang, and A. Zunger, Phys. Rev. B 93, 174119 (2016).
* Guan and Luo (2020) S. Guan and J.-W. Luo, Phys. Rev. B 102, 184104 (2020).
* Li and Chang (2009) J. Li and K. Chang, Appl. Phys. Lett. 95, 222110 (2009).
* Li _et al._ (2017) C.-H. Li, Y.-J. Long, L.-X. Zhao, L. Shan, Z.-A. Ren, J.-Z. Zhao, H.-M. Weng, X. Dai, Z. Fang, C. Ren, and G.-F. Chen, Phys. Rev. B 95, 125417 (2017).
* Liu _et al._ (2015b) Q. Liu, X. Zhang, L. Abdalla, A. Fazzio, and A. Zunger, Nano Lett. 15, 1222 (2015b).
* Collins _et al._ (2020) J. Collins, A. Tadich, W. Wu, and et al., Nature 564, 390 (2020).
* Liu _et al._ (2013) Q. Liu, Y. Guo, and A. J. Freeman, Nano Lett. 13, 5264 (2013).
* Xu _et al._ (2017) C.-Z. Xu, Y.-H. Chan, Y. Chen, P. Chen, and et al., Phys. Rev. Lett. 118, 146402 (2017).
* Zhao _et al._ (2020) C. Zhao, M. Hu, J. Qin, B. Xia, C. Liu, and et al., Phys. Rev. Lett. 125, 046801 (2020).
* Cui _et al._ (2018) C. Cui, F. Xue, W.-J. Hu, and L.-J. Li, npj 2D Mater. Appl. 2, 1 (2018).
* Guan _et al._ (2020) Z. Guan, H. Hu, X. Shen, P. Xiang, N. Zhong, J. Chu, and C. Duan, Adv. Electron. Mater. 6, 1900818 (2020).
* Bzdusek _et al._ (2016) T. Bzdusek, Q. Wu, A. Ruegg, M. Sigrist, and A. A. Soluyanov, Nature 538, 75 (2016).
* Jin _et al._ (2020) Y. J. Jin, B. B. Zheng, X. L. Xiao, Z. J. Chen, Y. Xu, and H. Xu, Phys. Rev. Lett. 125, 116402 (2020).
* Yazici _et al._ (2015) D. Yazici, I. Jeon, B. D. White, and M. B. Maple, Physica C: Superconductivity and its Applications 514, 218 (2015).
|
# Towards solving the BCS Hamiltonian gap in Near-Term Quantum Computers
Nahum Sá<EMAIL_ADDRESS>, Ivan S. Oliveira<EMAIL_ADDRESS>, and Itzhak Roditi<EMAIL_ADDRESS>
Centro Brasileiro de Pesquisas Físicas, Rua Dr. Xavier Sigaud 150, 22290-180 Rio de Janeiro, Brazil
###### Abstract
In this work, using a NISQ framework, we obtain the gap of a BCS Hamiltonian,
which could have interesting implications for superconductivity research.
For this task, we choose the Variational Quantum Deflation algorithm and
analyze the hardware requirements for finding the energy spectrum on
current quantum hardware.
We also compare two kinds of classical optimizers, Constrained
Optimization BY Linear Approximations (COBYLA) and Simultaneous Perturbation
Stochastic Approximation (SPSA), and study the effect of decoherence caused by
noise in simulations of real devices. We implement this method for systems
of both 2 and 5 qubits. Furthermore, we show how to approximate the gap
within one standard deviation, even in the presence of noise.
## 1 Introduction
At the current time, we may consider that we are in the Noisy Intermediate-
Scale Quantum (NISQ) [1] era of Quantum Computing, which refers to devices with
50-100 qubits and no Quantum Error Correction. In other words, we are still
far from devices with perfect qubits and quantum operations. Thus, in order to
attain acceptable performance, we need to use algorithms that are suited to
those limitations. A very valuable class of algorithms appropriate for dealing
with such limitations, and potentially very useful in physics as well as in
many other fields (e.g. machine learning), is that of the so-called
Variational Quantum Algorithms (VQA).
Variational Quantum Algorithms are a class of hybrid classical-quantum
algorithms that show noise resilience due to the use of parametric quantum
circuits [2], and can therefore be implemented on NISQ devices. Variational
Quantum Algorithms are universal [3] and have been proposed for various
problems, such as combinatorial optimization [4], quantum chemistry [5],
factoring [6], machine learning [7], and compilation [8].
One problem that can be dealt with by such techniques, and that has the
potential to lead to interesting outcomes, is superconductivity, which is
fairly well modeled using a pairing Hamiltonian.
BCS superconductivity is an appealing framework for testing VQAs.
Experimentally, the simulation of pairing Hamiltonians has been demonstrated
in a Nuclear Magnetic Resonance (NMR) setup [9]. However, intrinsic
characteristics of NMR experiments impose several limitations on scalability,
which can be handled using VQAs.
In order to simulate fermionic Hamiltonians on digital quantum computers, one
needs to map fermions to qubits. There are many mapping techniques used in
quantum chemistry, for instance the Jordan-Wigner [10] and Bravyi-Kitaev [11]
mappings. One such qubit mapping, proposed by Wu et al. [12], has so far only
been implemented on NMR quantum computers, although it can be used on circuit-
based quantum computers, and more specifically on NISQ devices via Variational
Quantum Algorithms. This paper aims to use this mapping to solve the BCS
Hamiltonian. Alternatively, one can also simulate fermionic Hamiltonians
using analog quantum simulations [13, 14].
In order to show that this mapping is suitable for circuit-based quantum
computers, we use the fermion-to-qubit mapping of Wu et al. [12] to solve the
BCS problem with both $N=2$ and $N=5$ qubits. This also extends the result of
[9] to a setting where no simplification of the Hamiltonian parameters is
needed, opening a path towards solving this problem with an arbitrary number
of Cooper pairs, limited only by the quantum computer hardware.
In order to demonstrate this, our goal is to find the energy spectrum using a
Variational Quantum Algorithm. We choose the Variational Quantum Deflation
algorithm [15], which imposes restrictions on the topology of the quantum
computer; fortunately, those restrictions are met by real devices, which could
therefore be used to solve the BCS Hamiltonian. However, our method works with
any algorithm able to find excited states on quantum computers, such as
Quantum Subspace Expansion methods [16, 17].
The paper is organized as follows: Section 2 presents the BCS Hamiltonian and
the qubit mapping used, and explains the Variational Quantum Deflation
algorithm and the ansatz. Section 3 shows numerical simulation results for an
ideal quantum computer and for a noisy quantum computer assuming a thermal
relaxation error noise model. Lastly, in Section 4, we conclude the paper and
present possible directions for future research. In Appendix A we explain the
optimizers in further detail. In Appendix B we explain the methods that can be
used to measure the overlap between two states on a quantum computer, and the
resulting hardware restrictions.
## 2 Methods
First, we need to define the fermionic Hamiltonian that we want to solve. The
general Bardeen–Cooper–Schrieffer (BCS) [18] Hamiltonian is given by:
$H_{BCS}=\sum_{m=1}^{N}\frac{\epsilon_{m}}{2}(n_{m}+n_{-m})\
+\sum_{m,l=1}^{N}V^{+}_{ml}c^{\dagger}_{m}c^{\dagger}_{-m}c_{-l}c_{l}$ (1)
where $n_{\pm m}=c^{\dagger}_{\pm m}c_{\pm m}$ is the number operator, and the
matrix elements $V^{+}_{ml}=\langle m,-m|V|l,-l\rangle$ are real and can be
calculated efficiently for any given problem. For simplicity, $V$ can be
treated as a constant, which yields a good approximation for many
superconductors [19].
An important property that can be obtained from the Hamiltonian is the gap,
defined as $2\Delta_{n}\equiv E_{n,1}-E_{n,0}$, with $n$ being the number of
Cooper pairs (see [12] for an NMR simulation experiment in which the energy
spectrum of the Hamiltonian is determined). For instance, for $n=0$ the gap is
that between the ground state and the first excited state,
$2\Delta_{0}\equiv E_{1}-E_{0}$, and for $n=1$ it is the gap between the
first and second excited states, $2\Delta_{1}\equiv E_{2}-E_{1}$. In the
present work, we focus on both the $n=0$ and the $n=1$ gap.
In order to solve this problem in a quantum computer, one needs to map the BCS
Hamiltonian into a qubit Hamiltonian with a one-to-one mapping. In this paper
we will use one of the mappings presented in Wu et al. [12] which maps
Fermions to qubits, given by the following representation:
$H_{Q}=\sum_{m=1}^{N}\frac{\epsilon_{m}}{2}\sigma^{Z}_{m}+\frac{V}{2}\sum_{l>m=1}^{N}(\sigma^{x}_{m}\sigma^{x}_{l}+\sigma^{y}_{m}\sigma^{y}_{l})$
(2)
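As a sanity check, the spectrum of Eq. (2) can be obtained by direct diagonalization for small $N$. The following numpy sketch (the function and variable names are ours) builds $H_{Q}$ and extracts the $n=1$ gap for the two-qubit parameters used later in the paper, $\epsilon_{1}=\epsilon_{2}=3$:

```python
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def op_on(pauli, site, n):
    """Tensor `pauli` acting on qubit `site` of an n-qubit register."""
    mats = [I] * n
    mats[site] = pauli
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def bcs_qubit_hamiltonian(eps, V):
    """Qubit Hamiltonian of Eq. (2) for on-site energies eps and constant V."""
    n = len(eps)
    H = sum(0.5 * eps[m] * op_on(Z, m, n) for m in range(n))
    for m in range(n):
        for l in range(m + 1, n):          # the l > m sum of Eq. (2)
            H = H + 0.5 * V * (op_on(X, m, n) @ op_on(X, l, n)
                               + op_on(Y, m, n) @ op_on(Y, l, n))
    return H

H = bcs_qubit_hamiltonian([3.0, 3.0], V=1.0)
E = np.linalg.eigvalsh(H)          # exact spectrum: [-3, -1, 1, 3]
gap_n1 = E[2] - E[1]               # 2*Delta_1 = E_2 - E_1
```

For these parameters the exact $n=1$ gap is $2\Delta_{1}=2$, against which the variational results below can be benchmarked.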
One can find the gap by finding the energy spectra of the qubit Hamiltonian
(Eq. 2) since the mapping assures that the qubit Hamiltonian has the same
energy spectra as that of the BCS Hamiltonian. The goal of this paper is to
find the energy spectra using a variational quantum algorithm called
Variational Quantum Deflation [15].
The Variational Quantum Deflation (VQD) algorithm is an extension of the
Variational Quantum Eigensolver (VQE) [5] algorithm, allowing us to
approximate excited states of a desired Hamiltonian, thus giving us access to
their energy values.
The original VQE algorithm finds only the ground-state energy of a given
Hamiltonian, by decomposing the Hamiltonian into Pauli strings and minimizing
its expected value:
$E(\theta)=\langle\psi(\theta)|H|\psi(\theta)\rangle=\sum_{i}c_{i}\langle\psi(\theta)|P_{i}|\psi(\theta)\rangle$
(3)
where the $P_{i}$ are Pauli strings, of which there are polynomially many. To
measure this expectation value, one only needs to rotate from the
computational basis into the X and Y bases, so this approach has low depth and
is suited for NISQ devices. By minimizing the cost function (Eq. 3), it is
possible to approximate the ground state of a Hamiltonian with high precision.
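The structure of Eq. (3) can be sketched in a few lines of numpy; the toy two-qubit decomposition below is purely illustrative, not the BCS one:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
I = np.eye(2)

# Toy decomposition H = 0.5*(Z x I) + 0.3*(X x X); coefficients are illustrative.
terms = [(0.5, np.kron(Z, I)), (0.3, np.kron(X, X))]

def energy(psi):
    """E(theta) = sum_i c_i <psi|P_i|psi>, as in Eq. (3)."""
    return sum(c * np.real(np.vdot(psi, P @ psi)) for c, P in terms)

e00 = energy(np.array([1, 0, 0, 0], dtype=complex))   # <00|H|00> = 0.5
```

On hardware, each term $\langle\psi|P_{i}|\psi\rangle$ would be estimated from measurement counts after the appropriate basis rotation, rather than from a statevector.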
In order to obtain the energies of excited states, the VQE cost function needs
to be modified, taking into account that all eigenstates of the Hamiltonian
are orthogonal to each other. This is done by penalizing the overlap between
the trial state and all eigenstates already found by the algorithm. Thus, to
calculate the k-th excited state of a given Hamiltonian $H$, the parameters
$\lambda_{k}$ of a parametrized state $|\psi(\lambda_{k})\rangle$ must be
optimized for the following cost function:
$F(\lambda_{k})=\langle\psi(\lambda_{k})|H|\psi(\lambda_{k})\rangle+\sum_{i=0}^{k-1}\beta_{i}\big{|}\langle\psi(\lambda_{k})|\psi(\lambda_{i})\rangle\big{|}^{2}$
(4)
The first term is the expected value of the energy, which can be obtained by
measuring the Pauli decomposition of the Hamiltonian $H$, just as in the VQE
cost function. The second term is less trivial: one needs to measure the
overlap between the trial state and all eigenstates found up to the desired
k-th eigenstate; this term ensures that all eigenstates remain orthogonal to
each other.
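A minimal classical sketch of the deflated cost of Eq. (4) follows; the helper name is ours, and on hardware the overlap term would of course be measured rather than computed from statevectors:

```python
import numpy as np

def vqd_cost(psi_k, H, prev_states, beta):
    """F(lambda_k) of Eq. (4): energy expectation plus overlap penalties
    against the eigenstates already found (all arguments are statevectors)."""
    energy = np.real(np.vdot(psi_k, H @ psi_k))
    penalty = sum(beta * abs(np.vdot(psi_k, psi_i)) ** 2
                  for psi_i in prev_states)
    return energy + penalty

# Toy check on a diagonal Hamiltonian: once the ground state |0> is found,
# re-proposing it incurs the full beta penalty, pushing the search to |1>.
H = np.diag([-1.0, 1.0]).astype(complex)
psi0 = np.array([1, 0], dtype=complex)
psi1 = np.array([0, 1], dtype=complex)
c_repeat = vqd_cost(psi0, H, [psi0], beta=5.0)    # -1 + 5 = 4
c_excited = vqd_cost(psi1, H, [psi0], beta=5.0)   #  1 + 0 = 1
```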
It is possible to measure the overlap between two quantum states on quantum
computers; this is explained in more detail in Appendix B. In addition, these
methods impose hardware restrictions, which are also discussed in the
appendix.
One of the downsides of the VQD algorithm is that the hyperparameters
$\beta_{i}$ have to be chosen. According to [15], it suffices to choose
$\beta_{i}>E_{k}-E_{i}$, where $E_{k}$ is the energy of the desired state and
$E_{i}$ that of any state below it. Since this paper aims to find the full
energy spectrum of the Hamiltonian, we choose
$\beta_{i}>E_{\text{max}}-E_{\text{min}}$, where $E_{\text{max}}$ is the
highest energy and $E_{\text{min}}$ the lowest. Another point to be careful
with is that if the true ground state is not satisfactorily approximated,
errors may propagate to the excited states, since the deflation relies on the
previously prepared states.
In the following subsection, we will describe different kinds of variational
ansatz and our choice of a Hardware-Efficient Ansatz in more detail.
### 2.1 Variational Ansatz
In order to build a variational ansatz for the Variational Quantum Algorithm,
two conflicting goals should be met: The ansatz must be as general as possible
in order to create states that would approximate all eigenstates of the
Hamiltonian, but also needs to have a polynomial number of parameters in order
to express our ansatz efficiently on a quantum computer.
In the literature there are two distinct paths for constructing the
variational ansatz: 1) one can create an ansatz based on knowledge of the
problem, an example being the UCCSD ansatz [20]; 2) since the main goal is to
run the Variational Quantum Algorithm on real hardware, one can take the
hardware's connectivity and gate-set restrictions into account and construct
an ansatz suited to the device on which the algorithm will run; this is
commonly called the Hardware Efficient Ansatz, introduced by Kandala et al.
[21].
Our choice of ansatz for this paper is the hardware efficient ansatz. This
kind of ansatz is depicted in FIGURE 1 and is composed of blocks of two kinds
of unitary transformations. The first block is made of one or more hardware-
native parametrized one-qubit gates ($R_{j}(\theta_{i})$), depicted in the
figure as $U(\mathbb{\theta})$.
The second block is constructed from two-qubit gates that can generate
entanglement, generally called entangling gates. The most common entangling
gate is the CNOT, but there are others, such as the iSWAP used on the Sycamore
chip made by Google [22]. Since entangling gates are the gates with the lowest
fidelity, it is imperative to place them according to the hardware's
connectivity graph, in order to avoid SWAP gates, which are costly on NISQ
devices and can lead to non-local errors.
Thus, the ansatz can be written as follows:
$\left|\psi(\theta)\right\rangle=U_{\text{ENT}}\ U(\theta)\ \dots\
U_{\text{ENT}}\ U(\theta)\left|\psi(0)\right\rangle$ (5)
The depth, $d$, is defined as the number of repetitions of the pair of blocks
(single-qubit rotations followed by the entangling block). The number of
parameters grows with depth as $2Nd$, where $N$ is the number of qubits,
yielding a polynomial number of parameters, which ensures that we are working
with an efficient ansatz construction.
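A statevector sketch of this ansatz, with RY and RZ rotations and a linear CNOT chain as in FIGURE 1 (the helper names are ours), makes the $2Nd$ parameter count explicit:

```python
import numpy as np

def ry(t): return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                            [np.sin(t / 2),  np.cos(t / 2)]], dtype=complex)
def rz(t): return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)

def embed(gate, q, n):
    """Embed a gate acting on qubits q, q+1, ... in an n-qubit register."""
    dim_after = 2**n // (2**q * gate.shape[0])
    return np.kron(np.kron(np.eye(2**q), gate), np.eye(dim_after))

def hardware_efficient_state(theta, n, d):
    """Eq. (5): d layers of RY,RZ rotations on every qubit followed by a
    CNOT chain (linear connectivity); uses exactly 2*n*d parameters."""
    assert len(theta) == 2 * n * d
    psi = np.zeros(2**n, dtype=complex)
    psi[0] = 1.0                                  # |psi(0)> = |0...0>
    k = 0
    for _ in range(d):
        for q in range(n):                        # the U(theta) block
            psi = embed(rz(theta[k + 1]) @ ry(theta[k]), q, n) @ psi
            k += 2
        for q in range(n - 1):                    # the U_ENT block
            psi = embed(CNOT, q, n) @ psi
    return psi
```

Dense matrices keep the sketch short; a real implementation would build the same circuit in a quantum SDK and let the backend handle the linear-algebra.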
Figure 1: Hardware Efficient ansatz where the unitary $U_{\mathrm{ENT}}$ is
constructed using the connection graph of the Quantum Hardware, and the
unitary $U(\mathbb{\theta})$ is made of hardware-native parametrized
rotations, in our case it is the combination of $R_{Y}$ and $R_{Z}$ rotations.
## 3 Results
In order to demonstrate our proposed method, we simulate it on both ideal and
noisy quantum computers for the cases of $N=2$ and $N=5$ qubits, since these
systems are solvable by direct diagonalization, allowing us to compare the
quantum computer's results against the exact ones. Both simulations take into
account that only a finite number of measurements (10,000 shots) is drawn from
the quantum computer.
The Hamiltonian that we aim to solve is given by equation (2) with $N=2,5$. An
interesting parameter that we can obtain for this Hamiltonian, as mentioned
before, is the gap: $2\Delta_{n}=E_{n,1}-E_{n,0}$.
This section is divided into two subsections, which consider different
simulation settings. Section 3.1 presents the simulation performed on a
perfect quantum computer, and Section 3.2 the simulation subject to a noise
model.
### 3.1 Ideal Quantum Computer
We will consider a perfect Quantum computer without any noise for the first
experiment. The purpose of this experiment is two-fold: firstly, to show that
the algorithm works in an ideal setting. Secondly, it will be a fair
comparison when we analyze the case with noise, which is of main importance
for running Variational Quantum Algorithms in NISQ devices.
In order to estimate the depth needed to simulate the Hamiltonian, we run the
algorithm varying the depth of the ansatz and estimate the gap; this is shown
in FIGURES 2 and 3, for 2 and 5 qubits respectively. For the case of 5 qubits,
we use only the COBYLA optimizer, because we observed that it worked better in
the 2-qubit case.
Figure 2: A plot analyzing the depth needed to solve the problem for the two-
qubit Hamiltonian using a Hardware Efficient Ansatz, comparing the COBYLA and
SPSA optimizers; we took the statistics of 50 runs of the algorithm with
random initialization and a finite number of shots. For SPSA we used $c=0.7$
and took the average of the last 25 $\lambda_{k}$ as the final $\lambda_{k}$
value. As the depth of the circuit increases, the algorithm converges to the
solution obtained through direct diagonalization. Figure 3: A plot analyzing
the depth needed to solve the problem for the five-qubit Hamiltonian using a
Hardware Efficient Ansatz; we took the statistics of 50 runs of the algorithm
with random initialization and a finite number of shots. We chose to use only
the COBYLA optimizer, in order to show that our algorithm works for a higher
number of qubits.
According to the figures, we observe that for the SPSA optimizer an increase
in depth can also lead to a higher variance across samples. This is expected,
since increasing the depth of the ansatz leads to a linear increase in the
number of parameters, and it shows that, for this instance, the gradient-free
optimizer works better for this kind of problem.
In addition, we also see that a depth of 3 suffices for both optimizers to
find a reasonable solution to the BCS problem in both cases: it is the
smallest depth whose result lies within 1 standard deviation of the
statistical sample.
After defining the depth needed to solve the problem, we benchmark the problem
of finding the $n=1$ gap of the two-qubit Hamiltonian by changing the coupling
parameter $V$, with parameters $\epsilon=\epsilon_{1}=\epsilon_{2}=3$. This is
done by calculating the first and second excited states of the Hamiltonian.
Both optimizers are able to solve the gap problem within 1 standard deviation,
which is demonstrated on FIGURE 4.
Figure 4: Measuring the gap while varying $V$ for the two-qubit Hamiltonian;
we took the statistics of 10 runs of the algorithm with random
initialization. For SPSA we used $c=0.7$ and took the average of the last 25
$\lambda_{k}$ as the final $\lambda_{k}$ value.
However, only the COBYLA optimizer is able to reproduce the expected trend in
the mean value; the SPSA optimizer would require its hyperparameters to be
tuned for each case.
For the case of the five qubit Hamiltonian, we change the coupling parameter
$V$, with $\epsilon_{1}=\epsilon_{2}=\epsilon_{3}=\epsilon_{5}=3$ and
$\epsilon_{4}=4$, and estimate the $n=0$ gap, which is done by calculating the
gap between the ground state and the first excited state of the Hamiltonian.
This is represented in FIGURE 5.
Figure 5: Measuring the gap while varying $V$ for the five-qubit Hamiltonian
using depth 3; we took the statistics of 20 runs of the algorithm with random
initialization.
### 3.2 Noisy Quantum Computer
In this work we choose the Thermal Relaxation Error [23] as our noise model,
which represents the thermalization of the qubit towards the equilibrium state
at the temperature of the environment. This is an irreversible process
governed by the gate time $T_{g}$, the relaxation times $T_{1}$ and $T_{2}$,
and the device temperature $T$, all of which can be obtained during device
calibration.
Since the device is kept at a temperature very close to absolute zero, it is
safe to assume that $T\approx 0$. This assumption implies that there is no
excited state population, which is given by the equation:
$p_{e}=\bigg{(}1+\exp\big{(}\frac{2hf}{k_{B}T}\big{)}\bigg{)}^{-1}$ (6)
where $T$ is the device temperature, $f$ is the qubit frequency, $k_{B}$ is
Boltzmann's constant, and $h$ is Planck's constant. Under this assumption, we
have two sources of noise: dephasing and reset to $\left|0\right\rangle$.
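Plugging representative superconducting-qubit numbers into Eq. (6) shows why $T\approx 0$ is a safe assumption; the frequency and temperature below are illustrative, not taken from a specific device:

```python
import numpy as np

h = 6.62607015e-34       # Planck constant, J*s
kB = 1.380649e-23        # Boltzmann constant, J/K

def excited_population(f, T):
    """Equilibrium excited-state population of Eq. (6)."""
    return 1.0 / (1.0 + np.exp(2 * h * f / (kB * T)))

# Illustrative numbers: a 5 GHz qubit at 15 mK in a dilution refrigerator.
p_e = excited_population(5e9, 15e-3)     # ~1e-14, so T ~ 0 is justified
```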
The relaxation error rates are defined as $\epsilon_{T_{1}}=e^{-T_{g}/T_{1}}$
and $\epsilon_{T_{2}}=e^{-T_{g}/T_{2}}$ and the reset probability is defined
as $p_{\mathrm{reset}}=1-\epsilon_{T_{1}}$.
There are two regimes for this kind of noise: $T_{2}\leq T_{1}$ and
$T_{2}>T_{1}$.
For the case $T_{2}\leq T_{1}$, we can represent the thermal relaxation error
as a probabilistic mixture of reset operations and unitary errors.
The probability of dephasing is given by
$p_{Z}=\frac{1-p_{\mathrm{reset}}}{2}\bigg{(}1-\frac{\epsilon_{T_{2}}}{\epsilon_{T_{1}}}\bigg{)}$,
the probability of a reset to $\left|0\right\rangle$ by
$p_{r_{\left|0\right\rangle}}=(1-p_{e})p_{\mathrm{reset}}$, and the
probability of no error by $p_{I}=1-p_{Z}-p_{\mathrm{reset}}$.
Under these conditions, the error model has the Kraus representation:
$\begin{split}K_{I}&=\sqrt{p_{I}}\,I\\\ K_{Z}&=\sqrt{p_{Z}}\,Z\\\
K_{\mathrm{reset},i}&=\sqrt{p_{\mathrm{reset}}}\left|0\right\rangle\left\langle
i\right|,\quad i\in\{0,1\}\end{split}$ (7)
The relaxation noise is represented as the following channel:
$\rho\mapsto\mathcal{N}(\rho)=\sum_{j}K_{j}\rho K_{j}^{\dagger}$ (8)
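The mixture above can be sketched directly in numpy; note that the reset operation is spelled out as the two Kraus operators $\left|0\right\rangle\left\langle 0\right|$ and $\left|0\right\rangle\left\langle 1\right|$, and the device times below are illustrative:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.diag([1, -1]).astype(complex)

def thermal_kraus(Tg, T1, T2):
    """Kraus operators for the T2 <= T1 regime (T ~ 0, resets go to |0>)."""
    eps1, eps2 = np.exp(-Tg / T1), np.exp(-Tg / T2)
    p_reset = 1 - eps1
    p_z = 0.5 * (1 - p_reset) * (1 - eps2 / eps1)
    p_i = 1 - p_z - p_reset
    return [np.sqrt(p_i) * I2,
            np.sqrt(p_z) * Z,
            np.sqrt(p_reset) * np.array([[1, 0], [0, 0]], dtype=complex),  # |0><0|
            np.sqrt(p_reset) * np.array([[0, 1], [0, 0]], dtype=complex)]  # |0><1|

def apply_channel(rho, K):
    """rho -> sum_j K_j rho K_j^dagger, Eq. (8)."""
    return sum(k @ rho @ k.conj().T for k in K)

# Illustrative timings: Tg = 100 ns, T1 = 50 us, T2 = 30 us.
K = thermal_kraus(Tg=100e-9, T1=50e-6, T2=30e-6)
# Trace preservation: sum_j K_j^dagger K_j = I.
assert np.allclose(sum(k.conj().T @ k for k in K), I2)
```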
For the case $T_{2}>T_{1}$, the error admits a Choi-matrix representation.
Considering $T\approx 0$, the Choi matrix of the thermal relaxation noise is
given by [24]:
$\begin{pmatrix}1&0&0&\epsilon_{T_{2}}\\\ 0&0&0&0\\\
0&0&p_{\mathrm{reset}}&0\\\ \epsilon_{T_{2}}&0&0&1-p_{\mathrm{reset}}\\\
\end{pmatrix}$ (9)
Using the Choi-Matrix representation, it is possible to find the Kraus
operators and consequently find the channel for the noise.
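A minimal sketch of this Choi-to-Kraus conversion follows; the ordering convention (first factor indexing the input, second the output) and the parameter values are our assumptions for illustration:

```python
import numpy as np

def choi_to_kraus(choi, tol=1e-12):
    """Eigendecompose a 4x4 Choi matrix C = sum_{ij} |i><j| (x) E(|i><j|);
    each eigenpair (v, w) with v > tol yields a Kraus operator
    K[k, i] = sqrt(v) * w[2*i + k]."""
    vals, vecs = np.linalg.eigh(choi)
    return [np.sqrt(v) * w.reshape(2, 2).T
            for v, w in zip(vals, vecs.T) if v > tol]

# Thermal-relaxation Choi matrix of Eq. (9) with illustrative values
# eps_T2 = 0.8 and p_reset = 0.3 (complete positivity holds: 0.8**2 <= 0.7).
e2, pr = 0.8, 0.3
choi = np.array([[1, 0, 0, e2],
                 [0, 0, 0, 0],
                 [0, 0, pr, 0],
                 [e2, 0, 0, 1 - pr]], dtype=complex)
kraus = choi_to_kraus(choi)
rho_out = sum(k @ np.diag([0, 1]).astype(complex) @ k.conj().T for k in kraus)
# rho_out = pr*|0><0| + (1 - pr)*|1><1|: the excited state partially relaxes.
```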
It is important to evaluate whether the Variational Quantum Algorithm we are
using is resilient to noise, in order to run it on NISQ devices. Thus, we
follow the same methodology as in the ideal quantum computer section 3.1,
under the presence of noise.
We first vary the ansatz depth in order to find the number of layers needed
for the two-qubit case. We see a trend similar to the noiseless analysis;
however, the SPSA optimizer does not behave as well as the COBYLA optimizer,
which shows convergence as the depth increases. Thus, just as without noise,
we obtain good convergence when the depth of the ansatz is equal to 3.
Figure 6: A plot analyzing the depth needed to solve the problem for the two-
qubit Hamiltonian under noise, using a Hardware Efficient Ansatz and comparing
the COBYLA and SPSA optimizers; we took the statistics of 10 runs of the
algorithm with random initialization. For SPSA we used $c=0.7$ and took the
average of the last 25 $\lambda_{k}$ as the final $\lambda_{k}$ value.
For the case with five qubits, we use only the COBYLA optimizer, to show that
our method works for more than two qubits (FIGURE 7). The noise leads to a
constant offset, which has been removed. We see that the behavior is similar
to the case without noise; thus a depth of 3 also works for the five-qubit
case.
Figure 7: A plot analyzing the depth needed to solve the problem for the five-
qubit Hamiltonian under noise, using a Hardware Efficient Ansatz; we took the
statistics of 50 runs of the algorithm with random initialization for the
COBYLA optimizer. The results are adjusted by a constant factor due to the
presence of noise.
After defining the depth, we repeat the two-qubit experiment, varying $V$ in
the Hamiltonian with $\epsilon=\epsilon_{1}=\epsilon_{2}=3$. We observe that,
in this case, both the COBYLA and SPSA optimizers reproduce the linear trend
expected as the coupling parameter $V$ is changed.
Even though we used $\epsilon=\epsilon_{1}=\epsilon_{2}$, we could choose
$\epsilon_{1}\neq\epsilon_{2}$ without any loss to our algorithm. These values
were chosen because they produce a linear trend in the BCS gap for the two-
qubit case; other choices would lead to a non-linear relation as the coupling
parameter is varied.
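This linear trend can be checked by direct diagonalization; the following sketch (our own verification, not the VQD data of FIGURE 4) confirms that the $n=1$ gap equals $2V$ when $\epsilon_{1}=\epsilon_{2}$:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1, -1]).astype(complex)
I = np.eye(2)

def two_qubit_H(eps, V):
    """Eq. (2) for N = 2 with eps_1 = eps_2 = eps."""
    return (eps / 2) * (np.kron(Z, I) + np.kron(I, Z)) \
         + (V / 2) * (np.kron(X, X) + np.kron(Y, Y))

Vs = np.linspace(0.1, 2.0, 5)
gaps = []
for V in Vs:
    E = np.linalg.eigvalsh(two_qubit_H(3.0, V))
    gaps.append(E[2] - E[1])     # n = 1 gap: first vs second excited state
# For eps = 3 and 0 < V < 3 the spectrum is {-3, -V, V, 3}, so the gap is 2V.
```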
Figure 8: Measuring the gap varying the coupling constant, $V$, for the two
qubit Hamiltonian. We used $c=0.7$ and took the average of the last 25
$\lambda_{k}$ for the final $\lambda_{k}$ value on the SPSA.
Now we use the depth of 3, for the case varying the coupling constant, $V$,
with $\epsilon_{1}=\epsilon_{2}=\epsilon_{3}=\epsilon_{5}=3$ and
$\epsilon_{4}=4$, and estimate the $n=0$ gap. This is represented in FIGURE 9.
Figure 9: Measuring the gap while varying the coupling constant, $V$, for the
five-qubit Hamiltonian, using the COBYLA optimizer to show that our method
works with a higher qubit number. The results are adjusted by a constant
factor due to the presence of noise.
## 4 Conclusion
In this paper we have explored solving the BCS Hamiltonian through the Wu et
al. [12] mapping using Near-Term Quantum Computers through a Variational
Quantum Algorithm.
The algorithm used is Variational Quantum Deflation (VQD), and we were able to
obtain the $n=1$ gap of a two-qubit BCS Hamiltonian within 1 standard
deviation under random initialization, showing that the algorithm works and is
not highly dependent on the initialization of the random parameters. We also
obtained the $n=0$ gap of a five-qubit BCS Hamiltonian, which shows that our
method scales to instances with more qubits.
We also analyzed both gradient-based and gradient-free optimizers in the
search for variational parameters that solve our task. For the gradient-based
optimizer we chose Simultaneous Perturbation Stochastic Approximation (SPSA),
and for the gradient-free one we chose Constrained Optimization BY Linear
Approximations (COBYLA). We showed that, even in the presence of thermal
relaxation noise, the COBYLA optimizer converges well when solving the BCS gap
problem, even though it is commonly assumed to perform poorly when the cost
function has stochastic noise.
This work shows promising ways to solve the BCS Hamiltonian on near-term
quantum computers, which could be extended to an arbitrary number of Cooper
pairs. For future work, we aim to explore an ansatz tailor-made for the
Hamiltonian, in order to avoid the barren plateaus problem and obtain better
optimization convergence. Another direction is to analyze in more depth the
objective-function landscape of the VQD algorithm and see whether the barren
plateau problem is present in this Variational Quantum Algorithm.
The authors would like to thank João Ribeiro and Filipe Melo for useful
comments on this manuscript. This study was financed in part by the
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brasil (CAPES) –
Finance Code 001, and by the Conselho Nacional de Desenvolvimento Científico e
Tecnológico (CNPq). We acknowledge the use of IBM Quantum services for this
work. The views expressed are those of the authors, and do not reflect the
official policy or position of IBM or the IBM Quantum team. This manuscript
used Quantikz [25] for drawing the circuits and Qiskit [26] for its
simulations. All source code is available on GitHub:
https://github.com/nahumsa/volta.
## References
* [1] Preskill J 2018 Quantum 2 79
* [2] Reiner J M, Wilhelm-Mauch F, Schön G and Marthaler M 2019 Quantum Science and Technology 4 035005
* [3] Biamonte J 2021 Physical Review A 103 L030401
* [4] Farhi E, Goldstone J and Gutmann S 2014 arXiv preprint arXiv:1411.4028
* [5] Peruzzo A, McClean J, Shadbolt P, Yung M H, Zhou X Q, Love P J, Aspuru-Guzik A and O’brien J L 2014 Nature communications 5 4213
* [6] Anschuetz E, Olson J, Aspuru-Guzik A and Cao Y 2019 Variational quantum factoring International Workshop on Quantum Technology and Optimization Problems (Springer) pp 74–85
* [7] Schuld M, Bocharov A, Svore K M and Wiebe N 2020 Physical Review A 101 032308
* [8] Khatri S, LaRose R, Poremba A, Cincio L, Sornborger A T and Coles P J 2019 Quantum 3 140
* [9] Yang X, Wang A M, Xu F and Du J 2006 Chemical Physics Letters 422 20–24 URL https://doi.org/10.1016/j.cplett.2006.02.023
* [10] Jordan P and Wigner E 1928 Zeitschrift für Physik 47 631–651 URL https://doi.org/10.1007/bf01331938
* [11] Bravyi S B and Kitaev A Y 2002 Annals of Physics 298 210–226
* [12] Wu L, Byrd M and Lidar D 2002 Physical Review Letters 89 057904
* [13] Parsons M F, Mazurenko A, Chiu C S, Ji G, Greif D and Greiner M 2016 Science 353 1253–1256 URL https://doi.org/10.1126/science.aag1430
* [14] Bloch I, Dalibard J and Nascimbène S 2012 Nature Physics 8 267–276 URL https://doi.org/10.1038/nphys2259
* [15] Higgott O, Wang D and Brierley S 2019 Quantum 3 156
* [16] Colless J I, Ramasesh V V, Dahlen D, Blok M S, Kimchi-Schwartz M E, McClean J R, Carter J, de Jong W A and Siddiqi I 2018 Phys. Rev. X 8(1) 011021 URL https://link.aps.org/doi/10.1103/PhysRevX.8.011021
* [17] Nakanishi K M, Mitarai K and Fujii K 2019 Phys. Rev. Research 1(3) 033062 URL https://link.aps.org/doi/10.1103/PhysRevResearch.1.033062
* [18] Bardeen J, Cooper L N and Schrieffer J R 1957 Physical review 108 1175
* [19] Waldram J 1976 Reports on Progress in Physics 39 751
* [20] McArdle S, Endo S, Aspuru-Guzik A, Benjamin S C and Yuan X 2020 Reviews of Modern Physics 92 015003
* [21] Kandala A, Mezzacapo A, Temme K, Takita M, Brink M, Chow J M and Gambetta J M 2017 Nature 549 242–246
* [22] Arute F, Arya K, Babbush R, Bacon D, Bardin J C, Barends R, Biswas R, Boixo S, Brandao F G, Buell D A et al. 2019 Nature 574 505–510
* [23] Georgopoulos K, Emary C and Zuliani P 2021 arXiv e-prints arXiv:2101.02109 (Preprint 2101.02109)
* [24] Blank C, Park D K, Rhee J K K and Petruccione F 2020 npj Quantum Information 6 1–7
* [25] Kay A 2018 arXiv e-prints arXiv:1809.03842 (Preprint 1809.03842)
* [26] 2019 Qiskit: An open-source framework for quantum computing
* [27] McClean J R, Boixo S, Smelyanskiy V N, Babbush R and Neven H 2018 Nature communications 9 1–6
* [28] Holmes Z, Sharma K, Cerezo M and Coles P J 2021 arXiv preprint arXiv:2101.02138
* [29] Wang S, Fontana E, Cerezo M, Sharma K, Sone A, Cincio L and Coles P J 2020 arXiv preprint arXiv:2007.14384
* [30] Arrasmith A, Cerezo M, Czarnik P, Cincio L and Coles P J 2020 arXiv e-prints arXiv:2011.12245 (Preprint 2011.12245)
* [31] Spall J C et al. 1992 IEEE transactions on automatic control 37 332–341
* [32] Maryak J L and Chin D C 2001 Global random optimization by simultaneous perturbation stochastic approximation Proceedings of the 2001 American Control Conference.(Cat. No. 01CH37148) vol 2 (IEEE) pp 756–762
* [33] Spall J C 1998 IEEE Transactions on aerospace and electronic systems 34 817–823
* [34] Powell J 1994 A direct search optimization method that models the objective and constraint functions by linear interpolation Advances in Optimization and Numerical Analysis, Proceedings of the 6th Workshop on Optimization and Numerical Analysis, Oaxaca, Mexico ed Gomez S and Hennart J P (Dordrecht, The Netherlands: Kluwer Academic Publishers) pp 51–67
* [35] Powell J 1998 Acta Numerica 7 287–336
* [36] Buhrman H, Cleve R, Watrous J and de Wolf R 2001 Physical Review Letters 87 167902
* [37] Cincio L, Subaşı Y, Sornborger A T and Coles P J 2018 New Journal of Physics 20 113022
* [38] 2021 Amazon Braket https://aws.amazon.com/braket/
* [39] 2021 IBM Quantum Experience https://quantum-computing.ibm.com/
* [40] Havlíček V, Córcoles A D, Temme K, Harrow A W, Kandala A, Chow J M and Gambetta J M 2019 Nature 567 209–212 URL https://doi.org/10.1038/s41586-019-0980-2
## Appendix A Classical Optimizer
Many optimization algorithms are used in Variational Quantum Algorithms. In
this paper, we focus on two: the Simultaneous Perturbation Stochastic
Approximation (SPSA) and Constrained Optimization BY Linear Approximations
(COBYLA).
Gradient-based optimizers suffer from the problem of barren plateaus [27],
in which the gradient of the loss function becomes exponentially small,
making it harder to minimize the cost function. Barren plateaus can be
induced by various factors, such as high ansatz expressiveness [28] or even
the presence of noise [29].
Initially, barren plateaus were observed only in gradient-based optimizers,
but there is recent evidence that they affect gradient-free optimizers as
well [30].
Therefore, in order to examine the performance of classical optimizers, it is
fair to compare gradient-free and gradient-based optimizers. We thus choose
COBYLA as the gradient-free optimizer and SPSA as the gradient-based
optimizer.
### A.1 SPSA
The Simultaneous Perturbation Stochastic Approximation (SPSA) [31] is an
algorithm that is robust to stochastic perturbations of the function being
minimized, and it has been widely used in variational quantum algorithms run
on NISQ devices [21].
Each SPSA iteration uses only two function evaluations, a perturbation
direction $\Delta_{i}$ drawn from a symmetric Bernoulli distribution, and two
sequences converging to zero, $c_{i}$ and $a_{i}$. Since we want to minimize
the cost function of the VQD algorithm (eq. 4), we need its gradient
$g(\lambda_{k})$. Because we only sample the cost function $F(\lambda_{k})$
rather than compute the gradient analytically, we denote the estimate
$\hat{g}(\lambda_{k})$ and obtain it using the formula:
$\hat{g}(\lambda_{k}^{(i)})=\frac{F(\lambda_{k}^{(i)}+c_{i}\Delta_{i})-F(\lambda_{k}^{(i)}-c_{i}\Delta_{i})}{2c_{i}\Delta_{i}}$
(10)
where $\lambda_{k}^{(i)}$ is the $\lambda_{k}$ parameter in the
$i^{\text{th}}$ iteration of the optimization algorithm.
After evaluating the gradient, the algorithm updates the parameters using the
following rule:
$\lambda_{k}^{(i+1)}=\lambda_{k}^{(i)}-a_{i}\hat{g}(\lambda_{k}^{(i)})$ (11)
It has been proven [31, 32] that this algorithm can converge even in the
presence of stochastic fluctuations in the cost function evaluation. The
sequences $a_{i}$ and $c_{i}$ are chosen as:
$c_{i}=\frac{c}{i^{\gamma}},\qquad a_{i}=\frac{a}{i^{\alpha}}$ (12)
where the values are optimally chosen [33] as
$\{\alpha,\gamma\}=\{0.602,0.101\}$. The value $a$ controls how large the
parameter update is and $c$ controls the gradient step; if there are large
statistical fluctuations in the cost function, we must choose a large $c$ for
the gradient evaluation to be accurate. Based on the experiments described in
section 3, we chose $c=0.7$.
In addition, we use a calibration method as in [21] to adjust the $a$
parameter, which starts at $a=2\pi/10$. Averaging over different directions
$\Delta_{1}$, the inverse formula used to calibrate $a$ is:
$a=\frac{2\pi}{5}\frac{c}{\big{\langle}F(\lambda_{k}^{(1)}+c_{1}\Delta_{1})-F(\lambda_{k}^{(1)}-c_{1}\Delta_{1})\big{\rangle}_{\Delta_{1}}}$
(13)
Finally, to obtain the optimized $\lambda_{k}$ values, we follow [21] and
average over the last 25 iterations in order to suppress statistical
fluctuations in the parameter values.
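The SPSA loop of Eqs. (10)-(12) can be sketched as follows. This is a minimal NumPy illustration under our own assumptions: the function name, defaults, and the noiseless quadratic test objective are ours, and a real VQD run would replace `F` with the sampled cost of eq. 4.

```python
import numpy as np

def spsa_minimize(F, lam0, n_iter=200, a=2 * np.pi / 10, c=0.7,
                  alpha=0.602, gamma=0.101, avg_last=25, seed=0):
    """Minimal SPSA loop following Eqs. (10)-(12); defaults are our choices."""
    rng = np.random.default_rng(seed)
    lam = np.asarray(lam0, dtype=float)
    history = []
    for i in range(1, n_iter + 1):
        c_i = c / i ** gamma                    # gain sequences, Eq. (12)
        a_i = a / i ** alpha
        delta = rng.choice([-1.0, 1.0], size=lam.shape)  # symmetric Bernoulli
        # two-evaluation gradient estimate, Eq. (10)
        g_hat = (F(lam + c_i * delta) - F(lam - c_i * delta)) / (2 * c_i * delta)
        lam = lam - a_i * g_hat                 # parameter update, Eq. (11)
        history.append(lam.copy())
    # average the last iterations to suppress statistical fluctuations
    return np.mean(history[-avg_last:], axis=0)

# toy check on a noiseless quadratic
lam_opt = spsa_minimize(lambda x: float(np.sum(x ** 2)), np.array([1.0, 1.0]))
```

Note that only two evaluations of `F` are made per iteration regardless of the parameter dimension, which is the key cost advantage of SPSA on quantum hardware.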
### A.2 COBYLA
COBYLA is a classical gradient-free optimizer [35] based on a direct search
algorithm that models the objective and constraint functions by linear
interpolation [34]. The algorithm is iterative: each iteration produces, by
interpolation at the vertices of a simplex, a linear approximation of the
objective and constraint functions. The change of values between iterations
is restricted by a trust region, $\rho$, to ensure that the problem has a
finite solution.
This kind of optimizer is expected to work well when stochastic perturbations
are minimal, because it assumes that function evaluations are exact. This
can, however, lead to problems when working with real quantum computers,
which are subject to short coherence times. Since there is no proof in the
literature that this algorithm fails to converge under stochastic
perturbations, we test whether it converges in the presence of noise.
For the trust region, the user chooses an initial value
($\rho_{\text{init}}$) that sets the scale of exploration of the objective
function landscape; the trust region is then halved after each iteration in
order to find the minimum of the objective function subject to the given
constraints. For our optimization procedure, we choose
$\rho_{\text{init}}=1$.
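For concreteness, COBYLA is available through SciPy, where the initial trust region $\rho_{\text{init}}$ corresponds to the `rhobeg` option. The quadratic cost below is a hypothetical stand-in of ours for the sampled VQD cost, which in practice would be estimated from circuit measurements.

```python
import numpy as np
from scipy.optimize import minimize

def cost(lam):
    # hypothetical stand-in for the VQD cost F(lambda_k)
    return (lam[0] - 1.0) ** 2 + (lam[1] + 0.5) ** 2

res = minimize(cost, x0=np.zeros(2), method="COBYLA",
               tol=1e-8,                 # final trust-region size
               options={"rhobeg": 1.0,   # rho_init = 1, as chosen in the text
                        "maxiter": 1000})
lam_opt = res.x  # close to [1.0, -0.5]
```

Because COBYLA assumes exact function evaluations, running the same call with a shot-noise-corrupted `cost` is precisely the robustness question tested in the experiments.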
## Appendix B Measuring the overlap of two states on a quantum computer
In order to use the Variational Quantum Deflation algorithm, we need to
measure the overlap between two quantum states. This can be done with three
techniques: the SWAP test, the Destructive SWAP test, and the transition
amplitude method. In this section, we explain each method and comment on its
advantages and disadvantages.
The SWAP test is an algorithm that returns the state overlap between two
quantum states $\rho$ and $\sigma$. There are two proposals for a SWAP test
in the literature, which are equivalent to each other: the original SWAP test
[36], which uses a controlled-SWAP operation and an auxiliary qubit to
measure the overlap, and the Destructive SWAP test [37], which uses only
nearest-neighbor connectivity, CNOTs, and single-qubit gates.
The SWAP test, depicted in FIGURE 10, consists of the application of Hadamard
and controlled-SWAP gates, followed by a measurement of the auxiliary qubit.
The probability of measuring the $\left|0\right\rangle$ state encodes the
overlap between $\rho$ and $\sigma$:
$|\langle\rho|\sigma\rangle|^{2}=2\ \bigg{(}P(0)-\frac{1}{2}\bigg{)}$ (14)
where $P(0)$ is the probability of the outcome on the auxiliary qubit being 0.
Figure 10: Circuit representation of the SWAP test for states $\rho$ and
$\sigma$, built from a controlled-SWAP gate and single-qubit gates.
This is a solid approach for learning the overlap between two quantum states;
however, the controlled-SWAP gate demands high coherence times and high
connectivity between qubits, neither of which is available on current quantum
hardware. For NISQ devices we must use an algorithm suited to the device’s
restrictions on both connectivity and gate errors; the algorithm that
satisfies both restrictions under certain conditions is the Destructive SWAP
test.
The Destructive SWAP test, represented in FIGURE 11, consists of CNOT and
Hadamard gates with no auxiliary qubit. The algorithm has depth $O(1)$, which
makes it suited for NISQ devices, and it has been shown to outperform the
original SWAP test [37].
To obtain the state overlap, one measures the corresponding qubit of each
state in the Bell basis. These measurements are combined in a classical
post-processing step that scales linearly with the number of qubits, and the
post-processing can be interpreted as the expectation value of a controlled-Z
observable.
Figure 11: Destructive SWAP test for states $\rho$ and $\sigma$; after the
measurement in the Bell basis, a classical post-processing step is required.
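The classical post-processing step can be sketched as follows. We assume, as an ordering convention of ours, that each measured bitstring concatenates the $n$ qubits of the first register followed by the $n$ qubits of the second; the overlap is then the expectation of $(-1)$ raised to the number of qubit pairs that both read 1.

```python
def overlap_from_counts(counts):
    """Destructive SWAP test post-processing (sketch).

    counts maps measured 2n-bit strings (first register then second,
    our convention) to the number of times they were observed.
    """
    shots = sum(counts.values())
    total = 0
    for bits, n_obs in counts.items():
        n = len(bits) // 2
        # count qubit pairs where both halves read '1'
        pairs_11 = sum(int(x) & int(y) for x, y in zip(bits[:n], bits[n:]))
        total += (-1) ** (pairs_11 % 2) * n_obs
    return total / shots

# identical single-qubit states give only '00'/'10' outcomes -> overlap 1
same = overlap_from_counts({"00": 500, "10": 500})
# orthogonal states |0>, |1> give '01'/'11' outcomes -> overlap 0
orth = overlap_from_counts({"01": 500, "11": 500})
```

The loop body is $O(n)$ per bitstring, which is the linear scaling in qubit number mentioned above.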
It is important to analyze the topology needed to run these algorithms on
real quantum hardware available through the cloud. To implement the SWAP test
[36] on real hardware, even though the cost of a SWAP gate can be high
depending on the hardware topology, we can implement it without extensive
re-routing of qubits. The required topology is supported by the IonQ device
provided by Amazon Braket [38].
Figure 12: Hardware topology needed for the SWAP test.
To implement the Destructive SWAP test on real hardware, a specific hardware
topology, represented in FIGURE 13, is necessary: the first row of the graph
encodes one quantum state, the second row encodes the other, and the
nearest-neighbor connections between them are used for the Destructive SWAP
test. There are devices that support this kind of topology, for instance the
Melbourne chip from IBMQ [39] and the IonQ device provided by Amazon Braket
[38].
Figure 13: Hardware topology needed for the Destructive SWAP test.
It is also possible to measure the overlap between two states using the
transition amplitude between them [40]:
$\big{|}\langle\psi(\lambda_{k})|\psi(\lambda_{i})\rangle\big{|}^{2}=\big{|}\langle
0|U^{\dagger}_{\psi(\lambda_{k})}U_{\psi(\lambda_{i})}|0\rangle\big{|}^{2}$,
where $U_{\psi(\lambda_{i})}$ is the unitary generated by the variational
ansatz. The overlap is obtained by measuring the frequency of the
$\left|0\right\rangle$ state. This approach places no restriction on the
hardware; however, it doubles the depth of the circuit, which can exceed the
depth suited for NISQ devices. This procedure is represented in FIGURE 14.
Figure 14: Circuit representation of the transition amplitude method [40] for
measuring the overlap between $\left|\psi(\lambda_{i})\right\rangle$ and
$\left|\psi(\lambda_{k})\right\rangle$.
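A minimal statevector sketch of the transition amplitude method, using a single-qubit $R_y$ rotation as a toy one-parameter ansatz (our illustrative choice, not one made in the text): the doubled circuit $U^{\dagger}_{\psi(\lambda_{k})}U_{\psi(\lambda_{i})}$ is applied to $|0\rangle$ and the all-zeros probability read off.

```python
import numpy as np

def ry(theta):
    """Single-qubit Y rotation, serving here as a toy variational ansatz."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def overlap_via_transition(theta_i, theta_k):
    """|<0| U(theta_k)^dagger U(theta_i) |0>|^2: run the doubled circuit on
    |0> and read off the probability of the all-zeros outcome."""
    zero = np.array([1.0, 0.0])
    psi = ry(theta_k).conj().T @ (ry(theta_i) @ zero)
    return float(abs(psi[0]) ** 2)

# for this ansatz the direct overlap is cos^2((theta_i - theta_k) / 2)
val = overlap_via_transition(0.3, 1.1)
```

On hardware, `val` would instead be estimated as the observed frequency of the all-zeros bitstring, with the circuit twice as deep as a single ansatz preparation.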
To summarize: if the hardware supports the topology of FIGURE 12, the SWAP
test can be advantageous because it adds only three gates to the circuit,
keeping the depth low enough to run on NISQ devices. If the hardware supports
the topology of FIGURE 13, the Destructive SWAP test can be advantageous
because it adds only a constant number of gates at the end of the circuit,
and the overlap is easily calculated by classical post-processing. The
transition amplitude method may appear superior to the other methods because
it imposes no topology restrictions; however, it has the major downside of
doubling the circuit depth, which can make it worse than the Destructive SWAP
test, and than the SWAP test on some hardware topologies.
# On Incorporating Inductive Biases into VAEs
Ning Miao1*, Emile Mathieu1, N. Siddharth2, Yee Whye Teh1, Tom Rainforth1*
1Department of Statistics, University of Oxford; 2University of Edinburgh
*Correspondence to: Ning Miao <EMAIL_ADDRESS>, Tom Rainforth <EMAIL_ADDRESS>
###### Abstract
We explain why directly changing the prior can be a surprisingly ineffective
mechanism for incorporating inductive biases into variational auto-encoders
(VAEs), and introduce a simple and effective alternative approach:
_Intermediary Latent Space VAEs_ (InteL-VAEs). InteL-VAEs use an intermediary
set of latent variables to control the stochasticity of the encoding process,
before mapping these in turn to the latent representation using a parametric
function that encapsulates our desired inductive bias(es). This allows us to
impose properties like sparsity or clustering on learned representations, and
incorporate human knowledge into the generative model. Whereas changing the
prior only indirectly encourages behavior through regularizing the encoder,
InteL-VAEs are able to directly enforce desired characteristics. Moreover,
they bypass the computation and encoder design issues caused by non-Gaussian
priors, while allowing for additional flexibility through training of the
parametric mapping function. We show that these advantages, in turn, lead to
both better generative models and better representations being learned.
## 1 Introduction
VAEs provide a rich class of deep generative models (DGMs) with many variants
(Kingma & Welling, 2014; Rezende & Mohamed, 2015; Burda et al., 2016;
Gulrajani et al., 2016; Vahdat & Kautz, 2020). Based on an encoder-decoder
structure, VAEs encode datapoints into latent embeddings before decoding them
back to data space. By parameterizing the encoder and decoder using expressive
neural networks, VAEs provide a powerful basis for learning both generative
models and representations.
The standard VAE framework assumes an isotropic Gaussian prior. However, this
can cause issues, such as when one desires the learned representations to
exhibit some properties of interest, for example sparsity (Tonolini et al.,
2020) or clustering (Dilokthanakul et al., 2016), or when the data
distribution has very different topological properties from a Gaussian, for
example multi-modality (Shi et al., 2020) or group structure (Falorsi et al.,
2018). Therefore, a variety of recent works have looked to use non-Gaussian
priors (van den Oord et al., 2017; Tomczak & Welling, 2018; Casale et al.,
2018; Razavi et al., 2019; Bauer & Mnih, 2019), often with the motivation of
adding inductive biases into the model (Davidson et al., 2018b; Mathieu et
al., 2019b; Nagano et al., 2019; Skopek et al., 2019).
In this work, we argue that this approach of using non-Gaussian priors can be
a problematic, and even ineffective, mechanism for adding _inductive biases_
into VAEs. Firstly, non-Gaussian priors will often necessitate complex encoder
models to maintain consistency with the prior’s shape and dependency structure
(Webb et al., 2018), which typically no longer permit simple parameterization.
Secondly, the latent encodings are still not guaranteed to follow the desired
structure because the ‘prior’ only appears in the training objective as a
regularizer on the encoder. Indeed, Mathieu et al. (2019b) find that changing
the prior is typically insufficient in practice to learn the desired
representations at a _population level_ , with mismatches occurring between
the data distribution and learned model.
To provide an alternative, more effective, approach that does not suffer from
these pathologies, we introduce _Intermediary Latent Space VAEs_ (InteL-VAEs),
an extension to the standard VAE framework that allows a wide range of
powerful inductive biases to be incorporated while maintaining an isotropic
Gaussian prior. This is achieved by introducing an _intermediary_ set of
latent variables that deal with the stochasticity of the encoding process
_before_ incorporating the desired inductive biases via a parametric function
that maps these intermediary latents to the latent representation itself, with
the decoder taking this final representation as input. See Fig. 1 for an
example.
Figure 1: Example InteL-VAE with star-like data. We consider the auto-encoding
for two example datapoints ($x_{1}$ and $x_{2}$, shown in green), which are
first stochastically mapped to $\operatorname{\mathcal{Y}}$ using a Gaussian
encoder. This embedding is then pushed forward to $\operatorname{\mathcal{Z}}$
using the _non-stochastic_ mapping $g_{\psi}$, which is a radial mapping to
enforce a spherical distribution. Decoding is then done in the standard way
from $\operatorname{\mathcal{Z}}$, with the complexity of the decoder mapping
simplified by the induced structural properties of
$\operatorname{\mathcal{Z}}$.
The InteL-VAE framework provides a variety of advantages over directly
replacing the prior. Firstly, it directly enforces our inductive biases on the
representations, rather than relying on the regularizing effect of the prior
to encourage this implicitly. Secondly, it provides a natural congruence
between the generative and representational models via sharing of the mapping
function, side-stepping the issues that non-Gaussian priors can cause for the
inference model. Finally, it allows for more general and more flexible
inductive biases to be incorporated, by removing the need to express them with
an explicit density function and allowing for parts of the mapping to be
learned during training.
We further introduce a number of novel specific realizations of the InteL-VAE
framework, showing how they can be used to incorporate various inductive
biases by enforcing latent representations that are, for example, multiply
connected, multi-modal, sparse, or hierarchical. Experimental results show
their superiority over baseline methods in both generation and feature
quality, most notably providing state-of-the-art performance for learning
sparse representations in the VAE framework.
To summarize, we a) highlight the need for inductive biases in VAEs and
explain why directly changing the prior is a suboptimal means for
incorporating them; b) propose InteL-VAEs as a simple but effective general
framework to introduce inductive biases; and c) introduce specific InteL-VAE
variants which can learn improved generative models and representations over
existing baselines on a number of tasks. Accompanying code is provided at
https://github.com/NingMiao/InteL-VAE.
## 2 The Need for Inductive Biases in VAEs
Variational auto-encoders (VAEs) are deep stochastic auto-encoders that can be
used for learning both deep generative models and low-dimensional
representations of complex data. Their key components are an encoder,
$q_{\phi}(z|x)$, which probabilistically maps from data
$x\in\operatorname{\mathcal{X}}$ to latents $z\in\operatorname{\mathcal{Z}}$;
a decoder, $p_{\theta}(x|z)$, which probabilistically maps from latents to
data; and a prior, $p(z)$, that completes the generative model,
$p(z)p_{\theta}(x|z)$, and regularizes the encoder during training. The
encoder and decoder are parameterized by deep neural networks and are
simultaneously trained using a dataset $\\{x_{1},x_{2},...,x_{N}\\}$ and a
variational lower bound on the log-likelihood, most commonly,
$\displaystyle\operatorname{\mathcal{L}}(x,\theta,\phi):=\operatorname{\mathbb{E}}_{z\sim
q_{\phi}(z|x)}\left[\log
p_{\theta}(x|z)\right]-D_{\textrm{KL}}\left(q_{\phi}(z|x)\;\|\;p(z)\right).$
(1)
Namely, we optimize
$\operatorname{\mathcal{L}}(\theta,\phi):=\operatorname{\mathbb{E}}_{x\sim\operatorname{p_{\text{data}}(x)}}\left[\operatorname{\mathcal{L}}(x,\theta,\phi)\right]$,
where $\operatorname{p_{\text{data}}(x)}$ represents the empirical data
distribution. Here the prior is typically fixed to a standard Gaussian, i.e.
$p(z)=\mathcal{N}(z;0,I)$.
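As a sketch of the two terms in Eq. 1 for a mean-field Gaussian encoder and standard Gaussian prior: the KL term is analytic, and the expectation is estimated with a reparameterized sample. Function names here are our own; the reconstruction term $\log p_{\theta}(x|z)$ comes from the decoder network and is omitted.

```python
import numpy as np

def gaussian_kl(mu, logvar):
    """Analytic KL( N(mu, diag(exp(logvar))) || N(0, I) ), summed over the
    latent dimension; this is the regularizer in Eq. 1."""
    return 0.5 * np.sum(mu ** 2 + np.exp(logvar) - 1.0 - logvar, axis=-1)

def reparameterize(mu, logvar, rng):
    """Draw z = mu + sigma * eps with eps ~ N(0, I): a differentiable
    sample from q_phi(z|x) used to estimate the reconstruction term."""
    return mu + np.exp(0.5 * logvar) * rng.standard_normal(np.shape(mu))

kl_at_prior = gaussian_kl(np.zeros(3), np.zeros(3))  # encoder == prior -> 0
```

The reparameterization is what lets gradients of the Monte Carlo reconstruction estimate flow back into the encoder parameters $\phi$.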
While it is well documented that this standard VAE setup with a ‘Gaussian’
latent space can be suboptimal (Davidson et al., 2018a; Mathieu et al., 2019b;
Tomczak & Welling, 2018; Bauer & Mnih, 2019; Tonolini et al., 2020), there is
perhaps less of a unified high-level view on exactly when, why, and how one
should change it to incorporate inductive biases. Note here that the prior
does not play the same role as in a Bayesian model: because the latents
themselves are somewhat arbitrary and the model is learned from data, it does
not encapsulate our initial beliefs in the way one might expect.
We argue that there are two core reasons why inductive biases can be important
for VAEs: (a) standard VAEs can fail to encourage, and even prohibit, desired
structure in the _representations_ we learn; and (b) standard VAEs do not
allow one to impart prior information or desired topological characteristics
into the _generative model_.
Considering the former, one often has some a priori desired characteristics,
or constraints, on the representations learned (Bengio et al., 2013). For
example, sparse features can be desirable because they can improve data
efficiency (Yip & Sussman, 1997), and provide robustness to noise (Wright et
al., 2009; Ahmad & Scheinkman, 2019) and attacks (Gopalakrishnan et al.,
2018). In other settings one might desire clustered (Jiang et al., 2017),
disentangled (Ansari & Soh, 2019; Kim & Mnih, 2018; Higgins et al., 2018) or
hierarchical representations (Song & Li, 2013; Sønderby et al., 2016; Zhao et
al., 2017). The KL-divergence term in Eq. 1 regularizes the encoding
distribution towards the prior and, as a standard Gaussian distribution
typically does not exhibit our desired characteristics, this regularization
can significantly hinder our ability to learn representations with the desired
properties.
Not only can this be problematic at an individual sample level, it can cause
even more pronounced issues at the _population level_ : desired structural
characteristics of our representations often relate to the pushforward
distribution of the data in the latent space,
$q_{\phi}(z):=\operatorname{\mathbb{E}}_{\operatorname{p_{\text{data}}(x)}}[q_{\phi}(z|x)]$,
which is both difficult to control and only implicitly regularized to the
prior (Hoffman & Johnson, 2016).
(a) Data
(b) VAE
Figure 2: VAE learned generative distribution
$\mathbb{E}_{p(z)}[p_{\theta}(x|z)]$ for mixture data.
Inductive biases can also be essential to the generation quality of VAEs:
because the generation process of a standard VAE essentially pushes forward
the Gaussian prior on $\operatorname{\mathcal{Z}}$ to data space
$\operatorname{\mathcal{X}}$ through a ‘smooth’ decoder, standard VAEs carry
an underlying inductive bias towards sample distributions whose topological
structure is similar to a Gaussian’s. As a result, VAEs can perform poorly when
et al., 2020). For example, they can struggle when data is clustered into
unconnected components as shown in Fig. 2, or when data is not simply-
connected. This renders learning effective mappings using finite datasets and
conventional architectures (potentially prohibitively) difficult. In
particular, it can necessitate large Lipschitz constants in the decoder,
causing knock-on issues like unstable training and brittle models (Scaman &
Virmaux, 2018), as well as posterior collapse (van den Oord et al., 2017;
Alemi et al., 2018). In short, the Gaussian prior of a standard VAE can induce
fundamental topological differences to the true data distribution (Falorsi et
al., 2018; Shi et al., 2020).
## 3 Shortfalls of VAEs with non-Gaussian Priors
(a) Directly replacing $p(z)$
(b) InteL-VAE
Figure 3: Prior-encoder mismatch. We train (a) a VAE with a sparse prior and
(b) an InteL-VAE with a sparse inductive bias on 2 dimensional sparse data.
Figure shows target latent distribution $p(z)$ (blue), learned variational
embeddings $q_{\phi}(z|x)$ of exemplar data (green), and data pushforward
$q_{\phi}(z)$ (red shadow) for each method. Simply replacing the prior does
not help the VAE match prior structure on either a per-sample or population
level, whereas InteL-VAE produces an effective match.
Though directly replacing the Gaussian prior with a different prior sounds
like a simple solution, effectively introducing inductive biases can,
unfortunately, be more complicated.
Firstly, the only influence of the prior during training is as a regularizer
on the encoder through the
$D_{\textrm{KL}}\left(q_{\phi}(z|x)\;\|\;p(z)\right)$ term. This
regularization is always competing with the need for effective reconstructions
and only has an indirect influence on $q_{\phi}(z)$. As such, simply replacing
the prior can be an ineffective way of inducing desired structure at the
population level (Mathieu et al., 2019b), particularly if $p(z)$ is a complex
distribution that is difficult to fit (see, e.g., Fig. 3a). Mismatches
between $q_{\phi}(z)$ and $p(z)$ can also have further deleterious effects on
the learned generative model: the former represents the distribution of the
data in latent space during training, while the latter is what is used by the
learned generative model, leading to unrepresentative generations if there is
mismatch.
Secondly, it can be extremely difficult to construct appropriate encoder
mappings and distributions for non-Gaussian priors. While the typical choice
of a mean-field Gaussian for the encoder distribution is simple, easy to
train, and often effective for Gaussian priors, it is often inappropriate for
other choices of prior. For example, in Fig. 3, we consider replacement with a
sparse prior. A VAE with a Gaussian encoder struggles to encode points in a
manner that even remotely matches the prior. One might suggest replacing the
encoder distribution as well, but this has its own issues, most notably that
other distributions can be hard to effectively parameterize or train. In
particular, the form of the required encoding noise might become heavily
spatially variant; in our sparse example, the noise must be elongated in a
particular direction depending on where the mean embedding is. If the prior
has constraints or topological properties distinct from the data, it can even
be difficult to learn a mean encoder mapping that respects these, due to the
continuous nature of neural networks.
## 4 The InteL-VAE Framework
To solve the issues highlighted in the previous section, and provide a
principled and effective method for adding inductive biases to VAEs, we
propose _Intermediary Latent Space VAEs_ (InteL-VAEs). The key idea behind
InteL-VAEs is to introduce an _intermediary_ set of latent variables
$y\in\operatorname{\mathcal{Y}}$, used as a stepping stone in the construction
of the _representation_ $z\in\operatorname{\mathcal{Z}}$. Data is initially
encoded in $\operatorname{\mathcal{Y}}$ using a conventional VAE encoder (e.g.
a mean-field Gaussian) before being passed through a _non-stochastic_ mapping
$g_{\psi}:\operatorname{\mathcal{Y}}\mapsto\operatorname{\mathcal{Z}}$ that
incorporates our desired inductive biases and which can be trained, if needed,
through its parameters $\psi$. The prior is defined on
$\operatorname{\mathcal{Y}}$ and taken to be a standard Gaussian,
$p(y)=\mathcal{N}(y;0,I)$, while our representations, $z=g_{\psi}(y)$,
correspond to a pushforward of $y$. By first encoding datapoints to $y$,
rather than $z$ directly, we can deal with all the encoder and prior
stochasticity in this first, well-behaved, latent space, while maintaining $z$
as our representation and using it for the decoder $p_{\theta}(x|z)$. In
principle, $g_{\psi}$ can be any arbitrary parametric (or fixed) mapping,
including non-differentiable or even discontinuous functions. However, to
allow for reparameterized gradient estimators (Kingma & Welling, 2014; Rezende
& Mohamed, 2015), we will restrict ourselves to $g_{\psi}$ that are sub-
differentiable (and thus continuous) with respect to both their inputs and
parameters. Note that setting $g_{\psi}$ to the identity mapping recovers a
conventional VAE.
As shown in Fig. 1, the auto-encoding process is now
$\operatorname{\mathcal{X}}\xrightarrow{q_{\phi}}\operatorname{\mathcal{Y}}\xrightarrow{g_{\psi}}\operatorname{\mathcal{Z}}\xrightarrow{p_{\theta}}\operatorname{\mathcal{X}}$.
This three-step process no longer unambiguously fits into the encoder-decoder
terminology of the standard VAE and permits a variety of interpretations; for
now we take the convention of calling $q_{\phi}(y|x)$ the encoder and
$p_{\theta}(x|z)$ the decoder, but also discuss some alternative
interpretations below. We emphasize here that these no longer respectively
match up with our representation model—which corresponds to passing an input
into the encoder and then mapping the resulting encoding using $g_{\psi}$—and
our generative model—which corresponds to
$\mathcal{N}(y;0,I)p_{\theta}(x|z=g_{\psi}(y))$, such that we sample a $y$
from the prior and then pass it through $g_{\psi}$ and the decoder
in turn.
The mapping $g_{\psi}$ introduces inductive biases into _both_ the generative
model and our representations by imposing a particular form on $z$, such as
the spherical structure enforced in Fig. 1 (see also Sec. 6). It can be viewed
as a _shared module_ between them, ensuring congruence between the two. This
congruence allows us to more directly introduce inductive biases through
careful construction of $g_{\psi}$, without complicating the process of
learning an effective inference network. In particular, because
$\operatorname{\mathcal{Y}}$ is treated as our latent space for the purposes
of training, we sidestep the inference issues that non-Gaussian priors usually
cause. Moreover, because all samples must explicitly pass through $g_{\psi}$
during both training and generation, we can more directly ensure the desired
structure is enforced without causing a mismatch in the latent distribution
between training and deployment.
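A minimal sketch of the encode-then-map step, with a radial $g_{\psi}$ as in the spherical example of Fig. 1. The fixed radius is our illustrative choice; in general $g_{\psi}$ is parametric and trained, and setting it to the identity recovers a conventional VAE.

```python
import numpy as np

def g_radial(y, radius=1.0):
    """A non-stochastic mapping g_psi projecting intermediary latents onto a
    sphere (undefined at y = 0), mirroring the spherical bias of Fig. 1."""
    return radius * y / np.linalg.norm(y, axis=-1, keepdims=True)

def intel_vae_encode(mu, logvar, g, rng):
    """x -> y -> z: a stochastic Gaussian step in Y followed by the
    deterministic mapping g to the representation space Z."""
    y = mu + np.exp(0.5 * logvar) * rng.standard_normal(np.shape(mu))
    return g(y)

rng = np.random.default_rng(0)
# spherical inductive bias: every representation lands on the unit circle
z = intel_vae_encode(np.array([2.0, 0.0]), np.zeros(2), g_radial, rng)
# g = identity recovers a conventional VAE
z_vanilla = intel_vae_encode(np.array([2.0, 0.0]), np.zeros(2), lambda y: y, rng)
```

Because `g` is applied to every sample, during both training and generation, the enforced structure holds exactly rather than being merely encouraged by a KL regularizer.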
Training As with standard VAEs, training of an InteL-VAE is done by maximizing
a variational lower bound (ELBO) on the log evidence, which we denote
$\operatorname{\mathcal{L}}_{\operatorname{\mathcal{Y}}}$. Most simply, we
have
$\displaystyle\begin{split}\log p_{\theta,\psi}(x):=&\,\log\left(\operatorname{\mathbb{E}}_{p(y)}\left[p_{\theta}(x|g_{\psi}(y))\right]\right)=\log\left(\operatorname{\mathbb{E}}_{q_{\phi}(y|x)}\left[\frac{p_{\theta}(x|g_{\psi}(y))\mathcal{N}(y;0,I)}{q_{\phi}(y|x)}\right]\right)\\
\geq&\operatorname{\mathbb{E}}_{q_{\phi}(y|x)}[\log p_{\theta}(x|g_{\psi}(y))]-D_{\textrm{KL}}\left(q_{\phi}(y|x)\;\|\;\mathcal{N}(y;0,I)\right)=:\operatorname{\mathcal{L}}_{\operatorname{\mathcal{Y}}}(x,\theta,\phi,\psi).\end{split}$
(2)
Note that the regularization is on $y$, but our representation corresponds to
$z=g_{\psi}(y)$. Training corresponds to the optimization
$\operatorname*{arg\,max}_{\theta,\phi,\psi}\operatorname{\mathbb{E}}_{x\sim\operatorname{p_{\text{data}}(x)}}\left[\operatorname{\mathcal{L}}_{\operatorname{\mathcal{Y}}}(x,\theta,\phi,\psi)\right]$,
which can be performed using stochastic gradient ascent with reparameterized
gradients in the standard manner. Although inductive biases are introduced,
the calculation, and optimization, of
$\operatorname{\mathcal{L}}_{\operatorname{\mathcal{Y}}}$ is thus equivalent
to the standard ELBO. In particular, parameterizing $q_{\phi}(y|x)$ with a
Gaussian distribution still yields an analytical
$D_{\textrm{KL}}\left(q_{\phi}(y|x)\;\|\;\mathcal{N}(y;0,I)\right)$ term.
Alternative Interpretations It is interesting to note that our representation,
$g_{\psi}(y)$, only appears in the context of the decoder in this training
objective. As such, we see that an important alternative interpretation of
InteL-VAEs is to consider $g_{\psi}$ as being a customized first layer in the
decoder, and our test–time representations as partial decodings of the latents
$y$. This viewpoint allows it to be applied with more general bounds and VAE
variants (e.g. Burda et al. (2016); Le et al. (2018); Maddison et al. (2017);
Naesseth et al. (2018); Zhao et al. (2019)), as it requires only a carefully
customized decoder architecture during training and an adjusted mechanism for
constructing representations at test–time.
Yet another interpretation is to think about InteL-VAEs as implicitly defining
a conventional VAE with latents $z$, but where both the non-Gaussian prior,
$p_{\psi}(z)$, and our encoder distribution, $q_{\phi,\psi}(z|x)$, are
themselves defined implicitly as pushforwards along $g_{\psi}$, which acts as
a shared module that instills a natural compatibility between the two.
Formally we have the following theorem.
###### Theorem 1.
Let $p_{\psi}(z)$ and $q_{\phi,\psi}(z|x)$ represent the respective
pushforward distributions of $\mathcal{N}(0,I)$ and $q_{\phi}(y|x)$ induced by
the mapping
$g_{\psi}:\operatorname{\mathcal{Y}}\to\operatorname{\mathcal{Z}}$. The
following holds for all measurable $g_{\psi}$:
$\displaystyle
D_{\textrm{KL}}\left(q_{\phi,\psi}(z|x)\;\|\;p_{\psi}(z)\right)\leq
D_{\textrm{KL}}\left(q_{\phi}(y|x)\;\|\;\mathcal{N}(y;0,I)\right).$ (3)
If $g_{\psi}$ is also an invertible function then the above becomes an
equality and $\operatorname{\mathcal{L}}_{\operatorname{\mathcal{Y}}}$ equals
the standard ELBO on the space of $\operatorname{\mathcal{Z}}$ as follows
$\displaystyle\operatorname{\mathcal{L}}_{\operatorname{\mathcal{Y}}}(x,\theta,\phi,\psi)=\operatorname{\mathbb{E}}_{q_{\phi,\psi}(z|x)}[\log
p_{\theta}(x|z)]-D_{\textrm{KL}}\left(q_{\phi,\psi}(z|x)\;\|\;p_{\psi}(z)\right).$
(4)
The proof is given in Appendix A. Here, (3) shows that the divergence in our
representation space $\operatorname{\mathcal{Z}}$ is never more than that in
$\operatorname{\mathcal{Y}}$, or equivalently that the implied ELBO on the
space of $\operatorname{\mathcal{Z}}$ is always at least as tight as that on
$\operatorname{\mathcal{Y}}$; (4) shows they are exactly equal if $g_{\psi}$
is invertible. As the magnitude of
$D_{\textrm{KL}}\left(q_{\phi}(y|x)\;\|\;\mathcal{N}(y;0,I)\right)$ in an
InteL-VAE will remain comparable to the KL divergence in a standard Gaussian
prior VAE setup, this, in turn, ensures that
$D_{\textrm{KL}}\left(q_{\phi,\psi}(z|x)\;\|\;p_{\psi}(z)\right)$ does not
become overly large. This is in stark contrast to the conventional non-
Gaussian prior setup, where it can be difficult to avoid
$D_{\textrm{KL}}\left(q_{\phi}(z|x)\;\|\;p_{\psi}(z)\right)$ exploding without
undermining reconstruction (Mathieu et al., 2019b). The intuition here is that
having the stochasticity in the encoder _before_ it is passed through
$g_{\psi}$ ensures that the form of the noise in the embedding is inherently
appropriate for the space: the same mapping is used to warp this noise as to
define the generative model in the first place. For example, when $g_{\psi}$
is a sparse mapping, the Gaussian noise in $q_{\phi}(y|x)$ will be compressed
to a sparse subspace by $g_{\psi}$, leading to a sparse variational posterior
$q_{\phi,\psi}(z|x)$ as shown in Fig. 3b. In particular, $q_{\phi}(y|x)$ does
not need to learn any complex spatial variations that result from properties
of $\operatorname{\mathcal{Z}}$. In turn, InteL-VAEs further alleviate issues
of mismatch between $p_{\psi}(z)$ and $q_{\phi,\psi}(z)$.
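Inequality (3) is an instance of the data processing inequality for the KL divergence. While $g_{\psi}$ is generally nonlinear, the effect can be checked directly in the linear-Gaussian case, where pushforwards remain Gaussian and both divergences have closed forms; the following NumPy check is purely illustrative and not part of the method itself:

```python
import numpy as np

def kl_gauss(m0, S0, m1, S1):
    """KL(N(m0,S0) || N(m1,S1)) in closed form."""
    k = len(m0)
    S1_inv = np.linalg.inv(S1)
    d = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def pushforward(A, m, S):
    """Image of N(m, S) under the linear map y -> A y."""
    return A @ m, A @ S @ A.T

m, S = np.array([1.0, -2.0, 0.5]), np.eye(3)   # stand-in for q_phi(y|x)
m0, S0 = np.zeros(3), np.eye(3)                # N(0, I) prior on y

kl_y = kl_gauss(m, S, m0, S0)

# Non-invertible g_psi: orthogonal projection onto the first two coordinates.
P = np.eye(3)[:2]
kl_z = kl_gauss(*pushforward(P, m, S), *pushforward(P, m0, S0))
assert kl_z <= kl_y + 1e-9                     # inequality (3)

# Invertible g_psi: the two divergences coincide, as in (4).
A = np.array([[2.0, 1.0, 0.0], [0.0, 1.0, 3.0], [1.0, 0.0, 1.0]])
kl_inv = kl_gauss(*pushforward(A, m, S), *pushforward(A, m0, S0))
assert np.isclose(kl_inv, kl_y)
```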
Further Benefits A key benefit of InteL-VAEs is that the extracted features
are _guaranteed_ to have the desired structure. Take the spherical case for
example: all extracted features $g_{\psi}(\mu_{\phi}(x))$ lie within a small
neighborhood of the unit sphere. By comparison, methods based on training loss
modifications, e.g. Mathieu et al. (2019b), often fail to generate features
with the targeted properties.
A more subtle advantage is that we do not need to explicitly specify
$p_{\psi}(z)$. This can be extremely helpful when we want to specify complex
inductive biases: designing a non-stochastic mapping is typically much easier
than a density function, particularly for complex spaces. Further, this can
make it much easier to parameterize and learn aspects of $p_{\psi}(z)$ in a
data-driven manner (see e.g. Sec. 6.3).
## 5 Related work
Inductive biases There is much prior work on introducing human knowledge to
deep learning models by structural design, such as CNNs (LeCun et al., 1989),
RNNs (Hochreiter & Schmidhuber, 1997) and transformers (Vaswani et al., 2017).
However, most of these designs are on the _sample_ level, utilizing low–level
information such as transformation invariances or internal correlations in
each sample. By contrast, InteL-VAEs provide a convenient way to incorporate
_population_ level knowledge—information about the global properties of data
distributions can be effectively utilized.
Non-Gaussian priors There is an abundance of prior work utilizing non-Gaussian
priors to improve the fit and generation capabilities of VAEs, including MoG
priors (Dilokthanakul et al., 2016; Shi et al., 2020), sparse priors (Mathieu
et al., 2019b; Tonolini et al., 2020; Barello et al., 2018), Gaussian-process
priors (Casale et al., 2018) and autoregressive priors (Razavi et al., 2019;
van den Oord et al., 2017). However, these methods often require specialized
algorithms to train and are primarily applicable only to specific kinds of
data. Moreover, as we have explained, changing the prior alone often provides
insufficient pressure to induce the desired characteristics. Others
have proposed non-Gaussian priors to reduce the prior-posterior gap, such as
Vamp-VAE (Tomczak & Welling, 2018) and LARS (Bauer & Mnih, 2019), but these
are tangential to our inductive bias aims.
Non-Euclidean latents A related line of work has focused on non-Euclidean
latent spaces. For instance Davidson et al. (2018a) leveraged a von Mises-
Fisher distribution on a hyperspherical latent space, Falorsi et al. (2018)
endowed the latent space with a $\text{SO}(3)$ group structure, and Mathieu et
al. (2019a); Ovinnikov (2019); Nagano et al. (2019) with hyperbolic geometry.
Other spaces like product of constant curvature spaces (Skopek et al., 2019)
and embedded manifolds (Rey et al., 2019) have also been considered. However,
these works generally require careful design and training.
Normalizing flows Our use of a non-stochastic mapping shares some interesting
links to normalizing flows (NFs) (Rezende & Mohamed, 2015; Papamakarios et
al., 2019; Grathwohl et al., 2018; Dinh et al., 2017; Huang et al., 2018;
Papamakarios et al., 2018). Indeed a NF would be a valid choice for
$g_{\psi}$, albeit an unlikely one due to their architectural constraints.
However, unlike previous use of NFs in VAEs, our $g_{\psi}$ is crucially
_shared_ between the generative and representational models, rather than just
being used in the encoder, while the KL divergence in our framework is taken
before, not after, the mapping. Moreover, the underlying motivation, and type
of mapping typically used, differs substantially: our mapping is used to
introduce inductive biases, not purely to improve inference. Our mapping is
also more general than a NF (e.g. it need not be invertible) and does not
introduce additional constraints or computational issues.
## 6 Specific Realizations of the InteL-VAE Framework
We now present several novel example InteL-VAEs, introducing various inductive
biases through different choices of $g_{\psi}$. We will start with artificial,
but surprisingly challenging, examples where some precise topological
properties of the target distributions are known, incorporating them directly
through a fixed $g_{\psi}$. We will then move onto experiments where we impose
a fixed clustering inductive bias when training on image data, allowing us to
learn InteL-VAEs that account effectively for multi-modality in the data
distribution. Finally, we consider the example of learning sparse
representations of high–dimensional data. Here we will see that it is
imperative to exploit the ability of InteL-VAEs to learn aspects of $g_{\psi}$
during training, providing a flexible inductive bias framework, rather than a
pre-fixed mapping. By comparing InteL-VAEs with strong baselines, we show that
InteL-VAEs are effective in introducing these desired inductive biases, and
consequently both improve generation quality and learn better data
representations for downstream tasks. Of particular note, we find that
InteL-VAEs provide state-of-the-art performance for learning
sparse VAE representations. A further example of using InteL-VAEs to learn
hierarchical representations is presented in Appendix B, while full details on
the various examples are given in Appendix C.
### 6.1 Multiple–Connectivity
(a) Data
(b) VAE
(c) InteL-VAE
Figure 4: Training data and samples from learned generative models of
vanilla-VAE and InteL-VAE for multiply-connected and clustered distributions.
InteL-VAE uses [Rows 1,2] circular prior with one hole, [Row 3] multiply-
connected prior with two holes, and [Row 4] clustered prior. Vamp-VAE behaves
similarly to a vanilla VAE; its results are presented in Sec. C.1.
Data is often most naturally described on non-Euclidean spaces such as
circles, e.g. wind directions (Mardia & Jupp, 2000), and other multiply-
connected shapes, e.g. holes in disease databases (Liu et al., 1997). For
reasons previously explained in Sec. 2, standard VAEs cannot practically model
such topologies, which prevents them from learning generative models which
match even the simplest data distributions with non-trivial topological
structures, as shown in Fig. 4b.
Luckily, by designing $g_{\psi}$ to map the Gaussian prior to a simple
representative distribution in a topological class, we can easily equip InteL-
VAEs with the knowledge to approximate any data distributions with similar
topological properties. Specifically, by defining $g_{\psi}$ as the (softened)
radial projection onto $\mathbb{S}^{1}$,
$g_{\psi}(y)=y/(\|y\|_{2}+\epsilon)$, we map the Gaussian prior approximately
to the uniform distribution on $\mathbb{S}^{1}$, where $\epsilon$ is a small
positive constant that ensures the continuity of $g_{\psi}$ near the origin.
From Rows 1 and 2 of Fig. 4, we find
that this inductive bias gives InteL-VAEs the ability to learn various
distributions with a hole. We can add further holes by simply ‘gluing’ point
pairs. For example, for two holes we can use
$\displaystyle g_{2}(y)=\text{Concat}\left(g_{1}(y)_{[:,1]},\;g_{1}(y)_{[:,2]}\sqrt{4/3-(1-|g_{1}(y)_{[:,1]}|)^{2}}-1/\sqrt{3}\right),$
(5)
which first maps $y$ approximately onto $\mathbb{S}^{1}$ and then glues
$(0,1)$ and $(0,-1)$ together to create a second hole (see Fig. C.1 for an
illustration).
Furthermore, we can continue to glue points together to achieve a higher
number of holes $h$, and thus more complex connectivity. Row 3 of Fig. 4 gives
an example of learning an infinity sign by introducing a ‘two-hole’ inductive
bias.
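A minimal NumPy sketch of the one-hole mapping (the value of $\epsilon$ is an arbitrary choice here, as the text does not fix it) confirms that Gaussian samples are pushed to just inside the unit circle, approximately uniformly in angle by the rotational symmetry of the Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 1e-6

def g_circle(y):
    """Near-projection onto S^1; eps keeps the map continuous at the origin."""
    return y / (np.linalg.norm(y, axis=-1, keepdims=True) + eps)

y = rng.standard_normal((10000, 2))   # samples from the N(0, I) latent prior
z = g_circle(y)
radii = np.linalg.norm(z, axis=-1)
# The pushforward concentrates on (just inside) the unit circle.
assert np.all(radii < 1.0) and radii.mean() > 0.999
```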
Compared with the vanilla VAE and Vamp-VAE, which effectively try to cover the
data distribution with a convex region, InteL-VAEs can deal with
distributions with highly
non-convex and very non-smooth supports (see Fig. 4 and Sec. C.1). We
emphasize here that our inductive bias does not contain the information about
the precise shape of the data, only the number of holes. We thus see that
InteL-VAEs can provide substantial improvements in performance by
incorporating only basic prior information about the topological properties of
the data, which points to a way of approximating distributions on more complex
structures, such as linear groups (Gupta & Mishra, 2018).
### 6.2 Multi–Modality
Many real-world datasets exhibit multi-modality. For example, data with
distinct classes are often naturally clustered into (nearly) disconnected
components representing each class. However, vanilla VAEs generally fail to
fit multi-modal data due to the topological issues explained in Sec. 2.
Previous work (Johnson et al., 2017; Mathieu et al., 2019b) has thus proposed
the use of a multi-modal prior, such as a mixture of Gaussian (MoG)
distribution, so as to capture all components of the data. Nonetheless, VAEs
with such priors often still struggle to model multi-modal data because of
mismatch between $q_{\phi}(z)$ and $p(z)$ or training instability issues.
Figure 5: Illustration of clustered mapping where $K=3$. The circle represents
a density isoline of a Gaussian. Note that not all points in the sector are
moved equally: points close to the boundaries between sectors are moved less,
with points on the boundary themselves not moved at all.
We tackle this problem by using a mapping $g_{\psi}$ which contains a
clustering inductive bias. The high-level idea is to design a mapping
$g_{\psi}$ with a localized high Lipschitz constant that ‘splits’ the
continuous Gaussian distribution into $K$ disconnected parts and then pushes
them away from each other. In particular, we split
$\operatorname{\mathcal{Y}}$ into $K$ equally sized sectors using its first
two dimensions (noting that we need not split along all dimensions to form
clusters), as shown in Fig. 5. For any point $y$, we can easily get the center
direction $\text{r}(y)$ of the sector that $y$ belongs to and the distance
$\text{dis}(y)$ between $y$ and the sector boundary. Then we define
$g_{\psi}(y)$ as:
$\displaystyle g_{\psi}(y)=y+{c_{1}}\text{dis}(y)^{c_{2}}\text{r}(y),$ (6)
where $c_{1}$ and $c_{2}$ are empirical constants. Although $g_{\psi}$ acts
very differently on different sectors, it remains continuous on the whole
plane, with $g_{\psi}(y)=y$ on sector boundaries, which
is desirable for gradient-based training. See Sec. C.2 for more details.
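Since the precise definitions of $\text{r}(y)$ and $\text{dis}(y)$ are deferred to Sec. C.2, the following NumPy sketch shows one plausible 2-D instantiation of Eq. (6) using angular sectors; the constants and the particular choice of boundary distance are assumptions for illustration only:

```python
import numpy as np

def g_cluster(y, K=3, c1=2.0, c2=1.0):
    """One plausible sketch of the clustered mapping (Eq. 6) for 2-D latents.

    Splits the plane into K angular sectors and pushes each point toward its
    sector's center direction r(y); points on sector boundaries are fixed.
    dis(y) is taken here as the Euclidean distance to the nearest boundary ray.
    """
    theta = np.arctan2(y[..., 1], y[..., 0]) % (2 * np.pi)
    width = 2 * np.pi / K
    k = np.floor(theta / width)                     # sector index
    center = (k + 0.5) * width                      # sector-center angle
    r = np.stack([np.cos(center), np.sin(center)], axis=-1)
    ang_to_boundary = (width / 2) - np.abs(theta - center)  # in [0, width/2]
    dis = np.linalg.norm(y, axis=-1) * np.sin(ang_to_boundary)
    return y + c1 * dis[..., None] ** c2 * r
```

The key properties match Fig. 5: a point on a sector boundary (e.g. angle $0$ for $K=3$) is left unmoved, while a point at a sector center is pushed radially outward, splitting the Gaussian mass into $K$ clusters.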
To assess the performance of our approach, we first consider a simple
2-component MoG synthetic dataset in the last row of Fig. 4. We see that the
vanilla VAE fails to learn a clustered distribution that fits the data, while
the InteL-VAE resolves this issue and fits the data well.
Method | FID Score ($\downarrow$)
---|---
VAE | $42.0\pm 1.1$
GM-VAE | $41.0\pm 4.7$
MoG-VAE | $41.2\pm 3.3$
Vamp-VAE | $38.8\pm 2.4$
VAE with Sylvester NF | $35.0\pm 0.9$
InteL-VAE | $32.2\pm 1.5$
Table 1: Generation quality on MNIST. Shown is mean FID score (lower better)
$\pm$ standard deviation over 10 runs.
To provide a more real-world example, we train an InteL-VAE and a variety of
baselines on the MNIST dataset, comparing the generation quality of the
learned models using the FID score (Heusel et al., 2017) in Table 1. We find
that the GM-VAE (Dilokthanakul et al., 2016) and MoG-VAE (VAE with a fixed MoG
prior) achieve performance gains by using non-Gaussian priors. The Vamp-VAE
(Tomczak & Welling, 2018) and a VAE with a Sylvester Normalizing Flow (Berg et
al., 2018) encoder provide further gains by making the prior and encoder
distributions more flexible respectively. However, the InteL-VAE comfortably
outperforms all of them.
(a) VAE
(b) MoG-VAE
(c) InteL-VAE
Figure 6: Generated samples for MNIST-01.
To gain insight into how InteL-VAEs achieve superior generation quality, we
perform analysis on a simplified setting where we select only the ‘0’ and ‘1’
digits from the MNIST dataset to form a strongly clustered dataset, MNIST-01.
We further decrease the latent dimension to $1$ to make the problem more
challenging. Fig. 6 shows that here the vanilla VAE generates some samples
which look like interpolations between ‘0’ and ‘1’, meaning that it still
tries to learn a connected distribution containing ‘0’ and ‘1’. Further, the
general generation quality is poor, with blurred images and a lack of
diversity in generated samples (e.g. all the ‘1’s have the same slant).
Despite using a clustered prior, the MoG-VAE still produces unwanted
interpolations between the classes. By contrast, InteL-VAE generates digits
that are unambiguous and crisper.
Table 2: Quantitative results on MNIST-01. Uncertainty is the proportion of images whose labels are ‘indistinguishable’ by the pre-trained classifier, defined as having prediction confidence $<80\%$. ‘1’ proportion is the proportion of images classified as ‘1’.
Method | Data | VAE | GM-VAE | MoG-VAE | Vamp-VAE | Flow | InteL-VAE
---|---|---|---|---|---|---|---
Uncertainty(%) | 0.2 $\pm$ 0.1 | 2.5 $\pm$ 0.4 | 3.5 $\pm$ 1.8 | 4.5 $\pm$ 0.8 | 2.4 $\pm$ 0.3 | 16.2 $\pm$ 2.1 | 0.9 $\pm$ 0.8
‘1’ proportion(%) | 50.0 $\pm$ 0.2 | 48.8 $\pm$ 0.2 | 48.1 $\pm$ 0.3 | 47.7 $\pm$ 0.4 | 48.8 $\pm$ 0.1 | 42.5 $\pm$ 1.0 | 49.5 $\pm$ 0.4
Table 3: Learned proportions of ‘0’s on MNIST-01 for different ground truths. Error bars are std. dev. from 10 runs.
True Prop. | Learned Prop.
---|---
0.5 | 0.47 $\pm$ 0.01
0.4 | 0.36 $\pm$ 0.10
0.25 | 0.25 $\pm$ 0.08
0.2 | 0.16 $\pm$ 0.11
0 | 0.02 $\pm$ 0.01
To quantify these results, we further train a logistic classifier on MNIST-01
and use it to classify images generated by each method. For each method, we
calculate the proportion of samples produced by the generative model that are
assigned to each class by this pre-trained classifier, as well as the
proportion of samples for which the classifier is uncertain. From Table 2 we
see that InteL-VAE significantly outperforms its competitors in the ability to
generate balanced and unambiguous digits. To extend this example further, and
show the ability of InteL-VAEs to learn aspects of $g_{\psi}$ during training,
we further consider parameterizing and then learning the relative size of the
clusters. Table 3 shows that this can be successfully learned by InteL-VAEs on
MNIST-01.
### 6.3 Sparsity
Sparse features are often well-suited to data efficiency on downstream tasks
(Huang & Aviyente, 2006), in addition to being naturally easier to visualize
and manipulate than dense features (Ng et al., 2011). However, existing VAE
models for sparse representations trade off generation quality to achieve this
sparsity (Mathieu et al., 2019b; Tonolini et al., 2020; Barello et al., 2018).
Here, we show that InteL-VAEs can instead _simultaneously_ increase feature
sparsity and generation quality. Moreover, they are able to achieve state-of-
the-art scores on sparsity metrics.
Compared with our previous examples, the $g_{\psi}$ here needs to be more
flexible so that it can learn to map points in a data-specific way and induce
sparsity without unduly harming reconstruction. To achieve this, we use a
simple form for the mapping, $g_{\psi}(y)=y\odot\text{DS}_{\psi}(y)$, where
$\odot$ is pointwise multiplication, and DS is a ‘dimension selector’ network
that selects dimensions to deactivate given $y$. DS outputs a value in
$[0,1]$ for each dimension, with $0$ being fully deactivated and $1$ fully
activated; the more dimensions we deactivate, the sparser the representation.
By learning DS during training, this setup allows us to learn a sparse
representation in a data-driven manner. To control the degree of sparsity, we
add a sparsity regularizer, $\operatorname{\mathcal{L}}_{sp}$, to the ELBO
with weighting parameter $\gamma$ (higher $\gamma$ corresponds to more
sparsity). Namely, we optimize
$\operatorname{\mathcal{L}}_{\operatorname{\mathcal{Y}}}(\theta,\phi,\psi)+\gamma\operatorname{\mathcal{L}}_{sp}(\phi,\psi)$,
where
$\displaystyle\operatorname{\mathcal{L}}_{sp}(\phi,\psi):=\mathbb{E}\left[\frac{1}{M}\sum_{i=1}^{M}H\left(\text{DS}(y_{i})\right)-H\left(\frac{1}{M}\sum_{i=1}^{M}\text{DS}(y_{i})\right)\right],$
(7)
$H(v)=-\sum_{i}\left(v_{i}/\|v\|_{1}\right)\log\left(v_{i}/\|v\|_{1}\right)$
is the normalized entropy of a positive vector $v$, and the expectation is
over drawing a minibatch of samples $x_{1},\dots,x_{M}$ and then sampling each
corresponding $y_{i}\sim q_{\phi}(\cdot|x=x_{i})$.
$\operatorname{\mathcal{L}}_{sp}$ encourages DS to deactivate more dimensions,
while also encouraging diversity in which dimensions are activated for
different data points, improving utilization of the latent space. Please see
Sec. C.3 for more details and intuitions. Initial qualitative results are
shown in Fig. 8, where we see that our InteL-VAE is able to learn sparse and
intuitive representations.
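The regularizer in Eq. (7) can be sketched directly in NumPy; the small $\epsilon$ guard is an implementation detail added here for numerical stability, not specified in the text:

```python
import numpy as np

def normalized_entropy(v, eps=1e-12):
    """H(v) for a positive vector v, with v normalized to sum to one."""
    p = v / (np.sum(v, axis=-1, keepdims=True) + eps)
    return -np.sum(p * np.log(p + eps), axis=-1)

def sparsity_loss(ds_out):
    """Eq. (7): mean per-sample entropy minus entropy of the mean.

    ds_out: (M, d) array of dimension-selector outputs in [0, 1].
    Low per-sample entropy -> each sample activates few dimensions;
    high entropy of the mean -> different samples activate different ones.
    """
    per_sample = normalized_entropy(ds_out).mean()
    of_mean = normalized_entropy(ds_out.mean(axis=0))
    return per_sample - of_mean
```

For instance, a minibatch of distinct one-hot selections attains the minimum (each sample is maximally sparse, yet the batch as a whole uses all dimensions), while an all-ones batch scores zero.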
Figure 7: Results on Fashion-MNIST. The left figure shows FID and sparsity
scores. Lower FID scores ($\downarrow$) represent better sample quality while
higher sparse scores ($\rightarrow$) indicate sparser features. The right
figure shows the performance of sparse features from InteL-VAE on downstream
classification tasks. See Sec. C.3 for details and results for MNIST.
Figure 8: Qualitative evaluation of sparsity. [Top] Average magnitude of each
latent dimension for three example classes in Fashion-MNIST; fewer than
$10\%$ of the dimensions are activated for each class. [Bottom] Activated dimensions are
different between classes: (a-c) show the results of separately manipulating
an activated dimension for each class. (a) Trouser separation (Dim 18). (b)
Coat length (Dim 46). (c) Shoe style (formal/sport, Dim 25).
To quantitatively assess the ability of our approach to yield sparse
representations and good quality generations, we compare against vanilla VAEs,
the specially customized sparse-VAE of Tonolini et al. (2020), and the sparse
version of Mathieu et al. (2019b) (DD) on Fashion-MNIST (Xiao et al., 2017)
and MNIST. As shown in Fig. 7 (left), we find that InteL-VAEs increase
sparsity of the representations—measured by the Hoyer metric (Hurley &
Rickard, 2009)—while increasing generative sample quality at the same time.
Indeed, the FID score obtained by InteL-VAE outperforms the vanilla VAE when
$\gamma<3.0$, while the sparsity score substantially increases with $\gamma$,
reaching extremely high levels. By comparison, DD significantly degrades
generation quality and only provides a more modest increase in sparsity, while
its sparsity also drops if the regularization coefficient is set too high. The
level of sparsity achieved by sparse-VAEs was substantially less than both DD
and InteL-VAEs.
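The Hoyer sparsity measure used here takes the value $0$ for a uniform-magnitude vector and $1$ for a one-hot vector; a small NumPy sketch (the $\epsilon$ guard is an assumption for robustness to the zero vector):

```python
import numpy as np

def hoyer(x, eps=1e-12):
    """Hoyer sparsity: 0 for uniform magnitudes, 1 for a one-hot vector."""
    n = x.size
    ratio = np.abs(x).sum() / (np.linalg.norm(x) + eps)  # L1 / L2
    return (np.sqrt(n) - ratio) / (np.sqrt(n) - 1)
```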
To further evaluate the quality of the learned features for downstream tasks,
we trained a classifier to predict class labels from the latent
representations. For this, we choose a random forest (Breiman, 2001) with
maximum depth $4$ as it is well-suited for sparse features. We vary the size
of training data given to the classifier to measure the data efficiency of
each model. Fig. 7 (right) shows that InteL-VAE typically outperforms the
other models, especially in few-shot scenarios.
Method | FID ($\downarrow$) | Sparsity ($\uparrow$)
---|---|---
VAE | 68.6$\pm$1.1 | 0.22$\pm$0.01
Vamp-VAE | 67.5$\pm$1.1 | 0.22$\pm$0.01
VAE with Sylvester NF | 66.3$\pm$0.4 | 0.22$\pm$0.01
Sparse-VAE ($\alpha=0.01$) | 328$\pm$10.1 | 0.25$\pm$0.01
Sparse-VAE ($\alpha=0.2$) | 337$\pm$8.1 | 0.28$\pm$0.01
InteL-VAE ($\gamma=30$) | 64.9$\pm$0.4 | 0.25$\pm$0.01
InteL-VAE ($\gamma=70$) | 68.0$\pm$0.6 | 0.46$\pm$0.02
Table 4: Generation results on CelebA.
Finally, to verify InteL-VAE’s effectiveness on larger and higher-resolution
datasets, we also make comparisons on CelebA (Liu et al., 2015). From Table 4,
we can see that InteL-VAE increases the sparsity score to 0.46 without
sacrificing generation quality. By comparison, the maximal sparsity score
that sparse-VAE achieves is 0.30, and only at unacceptable sample quality.
Interestingly, InteL-VAEs with relatively low regularization $\gamma$
achieved particularly good generative sample quality, outperforming even the
Vamp-VAE and a VAE with a Sylvester NF encoder.
Conclusions In this paper, we proposed InteL-VAEs, a general schema for
incorporating inductive biases into VAEs. Experiments show that InteL-VAEs can
both provide representations with desired properties and improve generation
quality, outperforming a variety of baselines such as directly changing the
prior. This is achieved while maintaining the simplicity and stability of
standard VAEs.
## References
* Abadi et al. (2015) Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dandelion Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. URL https://www.tensorflow.org/. Software available from tensorflow.org.
* Ahmad & Scheinkman (2019) Subutai Ahmad and Luiz Scheinkman. How can we be so dense? the benefits of using highly sparse representations. _arXiv preprint arXiv:1903.11257_ , 2019.
* Alemi et al. (2018) Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dillon, Rif A Saurous, and Kevin Murphy. Fixing a broken elbo. In _International Conference on Machine Learning_ , pp. 159–168. PMLR, 2018.
* Ansari & Soh (2019) Abdul Fatir Ansari and Harold Soh. Hyperprior induced unsupervised disentanglement of latent representations. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pp. 3175–3182, 2019.
* Bahdanau et al. (2014) Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. _arXiv preprint arXiv:1409.0473_ , 2014.
* Barello et al. (2018) Gabriel Barello, Adam S. Charles, and Jonathan W. Pillow. Sparse-coding variational auto-encoders. Preprint, Neuroscience, August 2018.
* Bauer & Mnih (2019) Matthias Bauer and Andriy Mnih. Resampled priors for variational autoencoders. In _The 22nd International Conference on Artificial Intelligence and Statistics_ , pp. 66–75. PMLR, 2019.
* Belghazi et al. (2018) Mohamed Ishmael Belghazi, Sai Rajeswar, Olivier Mastropietro, Negar Rostamzadeh, Jovana Mitrovic, Aaron Courville, and AI Element. Hierarchical adversarially learned inference. _stat_ , 1050:4, 2018.
* Bengio et al. (2013) Yoshua Bengio, Aaron Courville, and Pascal Vincent. Representation learning: A review and new perspectives. _IEEE transactions on pattern analysis and machine intelligence_ , 35(8):1798–1828, 2013.
* Berg et al. (2018) Rianne van den Berg, Leonard Hasenclever, Jakub M Tomczak, and Max Welling. Sylvester normalizing flows for variational inference. _arXiv preprint arXiv:1803.05649_ , 2018.
* Breiman (2001) Leo Breiman. Random forests. _Machine learning_ , 45(1):5–32, 2001.
* Burda et al. (2016) Yuri Burda, Roger B Grosse, and Ruslan Salakhutdinov. Importance weighted autoencoders. In _ICLR (Poster)_ , 2016.
* Casale et al. (2018) Francesco Paolo Casale, Adrian V Dalca, Luca Saglietti, Jennifer Listgarten, and Nicoló Fusi. Gaussian process prior variational autoencoders. In _NeurIPS_ , 2018.
* Caterini et al. (2020) Anthony L Caterini, Robert Cornish, Dino Sejdinovic, and Arnaud Doucet. Variational inference with continuously-indexed normalizing flows. 2020.
* Davidson et al. (2018a) Tim R. Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, and Jakub M. Tomczak. Hyperspherical variational auto-encoders. _arXiv:1804.00891 [cs, stat]_ , September 2018a. URL http://arxiv.org/abs/1804.00891.
* Davidson et al. (2018b) Tim R. Davidson, Luca Falorsi, Nicola De Cao, Thomas Kipf, and Jakub M. Tomczak. Hyperspherical variational auto-encoders. _34th Conference on Uncertainty in Artificial Intelligence (UAI-18)_ , 2018b.
* De la Fuente & Aduviri (2019) Alfredo De la Fuente and Robert Aduviri. Replication/machine learning. 2019.
* Dilokthanakul et al. (2016) Nat Dilokthanakul, Pedro AM Mediano, Marta Garnelo, Matthew CH Lee, Hugh Salimbeni, Kai Arulkumaran, and Murray Shanahan. Deep unsupervised clustering with gaussian mixture variational autoencoders. _arXiv preprint arXiv:1611.02648_ , 2016.
* Dinh et al. (2017) Laurent Dinh, Jascha Sohl-Dickstein, and Samy Bengio. Density estimation using real nvp. _arXiv:1605.08803 [cs, stat]_ , February 2017. URL http://arxiv.org/abs/1605.08803.
* Falorsi et al. (2018) Luca Falorsi, Pim de Haan, Tim R. Davidson, Nicola De Cao, Maurice Weiler, Patrick Forré, and Taco S. Cohen. Explorations in homeomorphic variational auto-encoding. _arXiv:1807.04689 [cs, stat]_ , July 2018. URL http://arxiv.org/abs/1807.04689.
* Glorot et al. (2011) Xavier Glorot, Antoine Bordes, and Yoshua Bengio. Deep sparse rectifier neural networks. In _Proceedings of the fourteenth international conference on artificial intelligence and statistics_ , pp. 315–323. JMLR Workshop and Conference Proceedings, 2011.
* Gopalakrishnan et al. (2018) Soorya Gopalakrishnan, Zhinus Marzi, Upamanyu Madhow, and Ramtin Pedarsani. Combating adversarial attacks using sparse representations. _arXiv preprint arXiv:1803.03880_ , 2018.
* Grathwohl et al. (2018) Will Grathwohl, Ricky T. Q. Chen, Jesse Bettencourt, Ilya Sutskever, and David Duvenaud. FFJORD: Free-form continuous dynamics for scalable reversible generative models. _arXiv:1810.01367 [cs, stat]_ , October 2018. URL http://arxiv.org/abs/1810.01367.
* Gulrajani et al. (2016) Ishaan Gulrajani, Kundan Kumar, Faruk Ahmed, Adrien Ali Taiga, Francesco Visin, David Vazquez, and Aaron Courville. Pixelvae: A latent variable model for natural images. _arXiv preprint arXiv:1611.05013_ , 2016.
* Gupta & Mishra (2018) Ved Prakash Gupta and Mukund Madhav Mishra. On the topology of certain matrix groups. _THE MATHEMATICS STUDENT_ , pp. 61, 2018.
* Heusel et al. (2017) Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. Gans trained by a two time-scale update rule converge to a local nash equilibrium. In _Proceedings of the 31st International Conference on Neural Information Processing Systems_ , pp. 6629–6640, 2017.
* Higgins et al. (2018) Irina Higgins, David Amos, David Pfau, Sebastien Racaniere, Loic Matthey, Danilo Rezende, and Alexander Lerchner. Towards a definition of disentangled representations. _arXiv preprint arXiv:1812.02230_ , 2018.
* Hochreiter & Schmidhuber (1997) Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. _Neural computation_ , 9(8):1735–1780, 1997.
* Hoffman & Johnson (2016) Matthew D Hoffman and Matthew J Johnson. Elbo surgery: yet another way to carve up the variational evidence lower bound. 2016.
* Hou et al. (2017) Xianxu Hou, Linlin Shen, Ke Sun, and Guoping Qiu. Deep feature consistent variational autoencoder. In _2017 IEEE Winter Conference on Applications of Computer Vision (WACV)_ , pp. 1133–1141. IEEE, 2017.
* Huang et al. (2018) Chin-Wei Huang, David Krueger, Alexandre Lacoste, and Aaron Courville. Neural autoregressive flows. pp. 10, 2018.
* Huang & Aviyente (2006) Ke Huang and Selin Aviyente. Sparse representation for signal classification. _Advances in neural information processing systems_ , 19:609–616, 2006.
* Hurley & Rickard (2009) Niall Hurley and Scott Rickard. Comparing measures of sparsity. _IEEE Transactions on Information Theory_ , 55:4723–4741, 2009.
* Jiang et al. (2017) Zhuxi Jiang, Yin Zheng, Huachun Tan, Bangsheng Tang, and Hanning Zhou. Variational deep embedding: An unsupervised and generative approach to clustering. In _IJCAI_ , 2017.
* Johnson et al. (2017) Matthew J. Johnson, David Duvenaud, Alexander B. Wiltschko, Sandeep R. Datta, and Ryan P. Adams. Composing graphical models with neural networks for structured representations and fast inference. _arXiv:1603.06277 [stat]_ , July 2017. URL http://arxiv.org/abs/1603.06277.
* Kim & Mnih (2018) Hyunjik Kim and Andriy Mnih. Disentangling by factorising. In Jennifer Dy and Andreas Krause (eds.), _Proceedings of the 35th International Conference on Machine Learning_ , volume 80 of _Proceedings of Machine Learning Research_ , pp. 2649–2658. PMLR, 10–15 Jul 2018. URL http://proceedings.mlr.press/v80/kim18b.html.
* Kingma & Welling (2014) Diederik P Kingma and Max Welling. Auto-encoding variational bayes. In _International Conference on Learning Representations_ , 2014.
* (38) Alexej Klushyn, Nutan Chen, Richard Kurle, Botond Cseke, and Patrick van der Smagt. Learning hierarchical priors in vaes.
* Kumar et al. (2018) Abhishek Kumar, Prasanna Sattigeri, and Avinash Balakrishnan. Variational inference of disentangled latent concepts from unlabeled observations. In _International Conference on Learning Representations_ , 2018.
* Le et al. (2018) Tuan Anh Le, Maximilian Igl, Tom Rainforth, Tom Jin, and Frank Wood. Auto-encoding sequential monte carlo. In _International Conference on Learning Representations_ , 2018.
* LeCun et al. (1989) Yann LeCun, Bernhard Boser, John S Denker, Donnie Henderson, Richard E Howard, Wayne Hubbard, and Lawrence D Jackel. Backpropagation applied to handwritten zip code recognition. _Neural computation_ , 1(4):541–551, 1989.
* Liu et al. (1997) Bing Liu, Liang-Ping Ku, and Wynne Hsu. Discovering interesting holes in data. In _Proceedings of the Fifteenth international joint conference on Artifical intelligence-Volume 2_ , pp. 930–935, 1997.
* Liu et al. (2015) Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep learning face attributes in the wild. In _Proceedings of International Conference on Computer Vision (ICCV)_ , December 2015.
* Maddison et al. (2017) Chris J Maddison, John Lawson, George Tucker, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, and Yee Whye Teh. Filtering variational objectives. In _NIPS_ , 2017.
* Mardia & Jupp (2000) K. V. Mardia and Peter E. Jupp. _Directional Statistics_. Wiley Series in Probability and Statistics. J. Wiley, Chichester ; New York, 2000. ISBN 978-0-471-95333-3.
* Mathieu et al. (2019a) Emile Mathieu, Charline Le Lan, Chris J. Maddison, Ryota Tomioka, and Yee Whye Teh. Continuous hierarchical representations with Poincaré variational auto-encoders. January 2019a. URL https://arxiv.org/abs/1901.06033v3.
* Mathieu et al. (2019b) Emile Mathieu, Tom Rainforth, N Siddharth, and Yee Whye Teh. Disentangling disentanglement in variational autoencoders. In _International Conference on Machine Learning_ , pp. 4402–4412. PMLR, 2019b.
* Naesseth et al. (2018) Christian Naesseth, Scott Linderman, Rajesh Ranganath, and David Blei. Variational sequential monte carlo. In _International Conference on Artificial Intelligence and Statistics_ , pp. 968–977. PMLR, 2018.
* Nagano et al. (2019) Yoshihiro Nagano, Shoichiro Yamaguchi, Yasuhiro Fujita, and Masanori Koyama. A wrapped normal distribution on hyperbolic space for gradient-based learning. _arXiv:1902.02992 [cs, stat]_ , May 2019. URL http://arxiv.org/abs/1902.02992.
* Ng et al. (2011) Andrew Ng et al. Sparse autoencoder. _CS294A Lecture notes_ , 72(2011):1–19, 2011.
* Ovinnikov (2019) Ivan Ovinnikov. Poincaré Wasserstein autoencoder. January 2019. URL https://arxiv.org/abs/1901.01427v2.
* Papamakarios et al. (2018) George Papamakarios, Theo Pavlakou, and Iain Murray. Masked autoregressive flow for density estimation. _arXiv:1705.07057 [cs, stat]_ , June 2018. URL http://arxiv.org/abs/1705.07057.
* Papamakarios et al. (2019) George Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, and Balaji Lakshminarayanan. Normalizing flows for probabilistic modeling and inference. _arXiv preprint arXiv:1912.02762_ , 2019.
* Ranganath et al. (2016) Rajesh Ranganath, Dustin Tran, and David Blei. Hierarchical variational models. In _International Conference on Machine Learning_ , pp. 324–333. PMLR, 2016.
* Razavi et al. (2019) Ali Razavi, Aaron van den Oord, and Oriol Vinyals. Generating diverse high-fidelity images with VQ-VAE-2. In _NeurIPS_ , 2019.
* Rey et al. (2019) Luis A. Pérez Rey, Vlado Menkovski, and Jacobus W. Portegies. Diffusion variational autoencoders. _arXiv:1901.08991 [cs, stat]_ , March 2019. URL http://arxiv.org/abs/1901.08991.
* Rezende & Mohamed (2015) Danilo Rezende and Shakir Mohamed. Variational inference with normalizing flows. In _International Conference on Machine Learning_ , pp. 1530–1538. PMLR, 2015.
* Sason (2019) Igal Sason. On data-processing and majorization inequalities for f-divergences with applications. _Entropy_ , 21(10):1022, October 2019. ISSN 1099-4300. doi: 10.3390/e21101022.
* Scaman & Virmaux (2018) Kevin Scaman and Aladin Virmaux. Lipschitz regularity of deep neural networks: analysis and efficient estimation. In _Proceedings of the 32nd International Conference on Neural Information Processing Systems_ , pp. 3839–3848, 2018.
* Shi et al. (2020) Wenxian Shi, Hao Zhou, Ning Miao, and Lei Li. Dispersed exponential family mixture VAEs for interpretable text generation. In _International Conference on Machine Learning_ , pp. 8840–8851. PMLR, 2020.
* Skopek et al. (2019) Ondrej Skopek, Octavian-Eugen Ganea, and Gary Bécigneul. Mixed-curvature variational autoencoders. November 2019. URL https://arxiv.org/abs/1911.08411v2.
* Sønderby et al. (2016) Casper Kaae Sønderby, Tapani Raiko, Lars Maaløe, Søren Kaae Sønderby, and Ole Winther. Ladder variational autoencoders. In _NIPS_ , 2016.
* Song & Li (2013) Tiecheng Song and Hongliang Li. Wavelbp based hierarchical features for image classification. _Pattern Recognition Letters_ , 34(12):1323–1328, 2013.
* Tomczak & Welling (2018) Jakub M Tomczak and Max Welling. VAE with a VampPrior. In _21st International Conference on Artificial Intelligence and Statistics, AISTATS 2018_ , 2018.
* Tonolini et al. (2020) Francesco Tonolini, Bjørn Sand Jensen, and Roderick Murray-Smith. Variational sparse coding. In _Uncertainty in Artificial Intelligence_ , pp. 690–700. PMLR, 2020.
* Vahdat & Kautz (2020) Arash Vahdat and Jan Kautz. NVAE: A deep hierarchical variational autoencoder. _arXiv preprint arXiv:2007.03898_ , 2020.
* van den Oord et al. (2017) Aaron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. Neural discrete representation learning. In _Proceedings of the 31st International Conference on Neural Information Processing Systems_ , pp. 6309–6318, 2017.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _NIPS_ , 2017.
* Webb et al. (2018) Stefan Webb, Adam Golinski, Robert Zinkov, Siddharth Narayanaswamy, Tom Rainforth, Yee Whye Teh, and Frank Wood. Faithful inversion of generative models for effective amortized inference. In _NeurIPS_ , 2018.
* Wright et al. (2009) John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, and Yi Ma. Robust face recognition via sparse representation. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 31(2):210–227, 2009. doi: 10.1109/TPAMI.2008.79.
* Xiao et al. (2017) Han Xiao, Kashif Rasul, and Roland Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. _arXiv preprint arXiv:1708.07747_ , 2017.
* Yip & Sussman (1997) Kenneth Yip and Gerald Jay Sussman. Sparse representations for fast, one-shot learning. In _Proceedings of the fourteenth national conference on artificial intelligence and ninth conference on Innovative applications of artificial intelligence_ , pp. 521–527, 1997.
* Zhao et al. (2017) Shengjia Zhao, Jiaming Song, and Stefano Ermon. Learning hierarchical features from generative models. In _International Conference on Machine Learning_ , 2017.
* Zhao et al. (2019) Shengjia Zhao, Jiaming Song, and Stefano Ermon. Infovae: Balancing learning and inference in variational autoencoders. In _Proceedings of the aaai conference on artificial intelligence_ , volume 33, pp. 5885–5892, 2019.
## Appendix A Proofs
See 1
###### Proof.
We first prove the inequality from Eq. 3, then we show that Eq. 3 is actually
an equality when $g_{\psi}$ is invertible, and finally we prove that the
reconstruction term is unchanged by $g_{\psi}$.
Let us denote by $\mathcal{F}$ and $\mathcal{G}$ the sigma-algebras of
respectively $\operatorname{\mathcal{Y}}$ and $\operatorname{\mathcal{Z}}$,
and we have by construction a measurable map
$g_{\psi}:(\operatorname{\mathcal{Y}},\mathcal{F})\rightarrow(\operatorname{\mathcal{Z}},\mathcal{G})$.
We can actually define the measurable space
$(\operatorname{\mathcal{Z}},\mathcal{G})$ as the image of
$(\operatorname{\mathcal{Y}},\mathcal{F})$ by $g_{\psi}$, then $g_{\psi}$ is
automatically both surjective and measurable.111We recall that $g_{\psi}$ is
said to be measurable if and only if for any $A\in\mathcal{G}$,
$g_{\psi}^{-1}(A)\in\mathcal{F}$. We also assume that there exists a measure
on $\operatorname{\mathcal{Y}}$, which we denote $\xi$, and denote with $\nu$
the corresponding pushforward measure by $g_{\psi}$ on
$\operatorname{\mathcal{Z}}$. We further have $\nu(A)=\xi(g_{\psi}^{-1}(A))$
for any $A\in\mathcal{G}$.222The notation $g_{\psi}^{-1}(A)$ does not imply
that $g_{\psi}$ is invertible, but denotes the preimage of $A$ which is
defined as
$g_{\psi}^{-1}(A)=\{y\in\operatorname{\mathcal{Y}}~{}|~{}g_{\psi}(y)\in A\}$.
We start by proving Eq. 3, where the Kullback-Leibler (KL) divergence between
the two pushforward measures333We denote the pushforward of a probability
measure $\chi$ along a map $g$ by $\chi\circ g^{-1}$. $q_{\phi,\psi}\triangleq
q_{\phi}\circ g_{\psi}^{-1}$ and $p_{\psi}\triangleq p\circ g_{\psi}^{-1}$ is
upper bounded by $D_{\textrm{KL}}\left(q_{\phi}(y|x)\;\|\;p(y)\right)$, where
here we have $p(y)=\mathcal{N}(y;0,I)$ but we will use $p$ as a convenient
shorthand. At a high level, Eq. 3 follows directly from the data-processing
inequality (Sason, 2019) applied with the deterministic kernel
$z=g_{\psi}(y)$. Nonetheless, we develop in what follows a proof which
additionally gives sufficient conditions for when this inequality becomes an
equality. We can assume that
$D_{\textrm{KL}}\left(q_{\phi}(y|x)\;\|\;\mathcal{N}(y;0,I)\right)$ is finite,
as otherwise the result is trivially true, which in turn implies $q_{\phi}\ll
p$.444We denote the absolute continuity of measures with $\ll$, where $\mu$ is
said to be absolutely continuous w.r.t. $\nu$, i.e. $\mu\ll\nu$, if for any
measurable set $A$, $\nu(A)=0$ implies $\mu(A)=0$. For any $A\in\mathcal{G}$,
we have that if $p_{\psi}(A)=p\circ g_{\psi}^{-1}(A)=p(g_{\psi}^{-1}(A))=0$
then this implies $q_{\phi}(g_{\psi}^{-1}(A))=q_{\phi}\circ
g_{\psi}^{-1}(A)=q_{\phi,\psi}(A)=0$. As such, we have that $q_{\phi,\psi}\ll
p_{\psi}$, and so
$D_{\textrm{KL}}\left(q_{\phi,\psi}(z|x)\;\|\;p_{\psi}(z)\right)$ is also well
defined.
Our next significant step is to show that
$\displaystyle\operatorname{\mathbb{E}}_{p(y)}\left[\frac{q_{\phi}}{p}\>\Big{|}\>\sigma(g_{\psi})\right]=\frac{q_{\phi}\circ
g_{\psi}^{-1}}{p\circ g_{\psi}^{-1}}\circ g_{\psi},$ (A.1)
where $\sigma(g_{\psi})$ denotes the sigma-algebra generated by the function
$g_{\psi}$. To do this, let
$h:(\operatorname{\mathcal{Z}},\mathcal{G})\rightarrow(\operatorname{\mathbb{R}}_{+},\mathcal{B}(\operatorname{\mathbb{R}}_{+}))$
be a measurable function such that
$\operatorname{\mathbb{E}}_{p(y)}\left[\frac{q_{\phi}}{p}\>\Big{|}\>\sigma(g_{\psi})\right]=h\circ
g_{\psi}$; such an $h$ exists because the conditional expectation is
$\sigma(g_{\psi})$-measurable. To establish Eq. A.1, we show that both sides
induce the same measure when integrated over an arbitrary set
$A\in\mathcal{G}$:
$\displaystyle\int_{\operatorname{\mathcal{Z}}}\mathds{1}_{A}~{}\frac{q_{\phi}\circ
g_{\psi}^{-1}}{p\circ g_{\psi}^{-1}}~{}p\circ g_{\psi}^{-1}~{}d\nu$
$\displaystyle=\int_{\operatorname{\mathcal{Z}}}\mathds{1}_{A}~{}q_{\phi}\circ
g_{\psi}^{-1}~{}d\nu=\int_{\operatorname{\mathcal{Z}}}\mathds{1}_{A}~{}d(q_{\phi}\circ
g_{\psi}^{-1})$
$\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}\int_{\operatorname{\mathcal{Y}}}(\mathds{1}_{A}\circ
g_{\psi})~{}dq_{\phi}=\int_{\operatorname{\mathcal{Y}}}(\mathds{1}_{A}\circ
g_{\psi})~{}q_{\phi}~{}d\xi$
$\displaystyle\stackrel{{\scriptstyle(b)}}{{=}}\int_{\operatorname{\mathcal{Y}}}(\mathds{1}_{A}\circ
g_{\psi})~{}\frac{q_{\phi}}{p}~{}p~{}d\xi$
$\displaystyle\stackrel{{\scriptstyle(c)}}{{=}}\int_{\operatorname{\mathcal{Y}}}(\mathds{1}_{A}\circ
g_{\psi})~{}\operatorname{\mathbb{E}}_{p(y)}\left[\frac{q_{\phi}}{p}\>\Big{|}\>\sigma(g_{\psi})\right]~{}p~{}d\xi$
$\displaystyle\stackrel{{\scriptstyle(d)}}{{=}}\int_{\operatorname{\mathcal{Y}}}(\mathds{1}_{A}\circ
g_{\psi})~{}(h\circ
g_{\psi})~{}p~{}d\xi=\int_{\operatorname{\mathcal{Y}}}(\mathds{1}_{A}\circ
g_{\psi})~{}(h\circ g_{\psi})~{}dp$
$\displaystyle\stackrel{{\scriptstyle(e)}}{{=}}\int_{\operatorname{\mathcal{Z}}}\mathds{1}_{A}~{}h~{}d(p\circ
g_{\psi}^{-1})=\int_{\operatorname{\mathcal{Z}}}\mathds{1}_{A}~{}h~{}(p\circ
g_{\psi}^{-1})~{}d\nu,$
where we have leveraged the definition of pushforward measures in (a & e); the
absolute continuity of $q_{\phi}$ w.r.t. $p$ in (b); the conditional
expectation definition in (c); and the definition of $h$ in (d). By equating
terms, we have that $q_{\phi}\circ g_{\psi}^{-1}/p\circ g_{\psi}^{-1}=h$,
almost-surely with respect to $q_{\phi}\circ g_{\psi}^{-1}$ and thus that Eq.
A.1 is verified.
Let us define $f:x\mapsto x\log(x)$, which is strictly convex on $[0,\infty)$
(extending it continuously with $f(0)=0$). We have the following
$\displaystyle
D_{\textrm{KL}}\left(q_{\phi,\psi}(z|x)\;\|\;p_{\psi}(z)\right)$
$\displaystyle\stackrel{{\scriptstyle(a)}}{{=}}\int_{\operatorname{\mathcal{Z}}}\log\left(\frac{q_{\phi,\psi}}{p_{\psi}}\right)q_{\phi,\psi}~{}d\nu$
$\displaystyle\stackrel{{\scriptstyle(b)}}{{=}}\int_{\operatorname{\mathcal{Z}}}\log\left(\frac{q_{\phi,\psi}}{p_{\psi}}\right)\frac{q_{\phi,\psi}}{p_{\psi}}~{}p_{\psi}~{}d\nu$
$\displaystyle\stackrel{{\scriptstyle(c)}}{{=}}\int_{\operatorname{\mathcal{Z}}}f\left(\frac{q_{\phi,\psi}}{p_{\psi}}\right)~{}p_{\psi}~{}d\nu=\int_{\operatorname{\mathcal{Z}}}f\left(\frac{q_{\phi,\psi}}{p_{\psi}}\right)~{}d(p\circ
g_{\psi}^{-1})$
$\displaystyle\stackrel{{\scriptstyle(d)}}{{=}}\int_{\operatorname{\mathcal{Y}}}f\left(\frac{q_{\phi,\psi}}{p_{\psi}}\circ
g_{\psi}\right)~{}dp\stackrel{{\scriptstyle}}{{=}}\int_{\operatorname{\mathcal{Y}}}f\left(\frac{q_{\phi}\circ
g_{\psi}^{-1}}{p\circ g_{\psi}^{-1}}\circ g_{\psi}\right)p~{}d\xi$
$\displaystyle\stackrel{{\scriptstyle(e)}}{{=}}\int_{\operatorname{\mathcal{Y}}}f\left(\operatorname{\mathbb{E}}_{p(y)}\left[\frac{q_{\phi}}{p}\>\Big{|}\>\sigma(g_{\psi})\right]\right)p~{}d\xi$
$\displaystyle\stackrel{{\scriptstyle(f)}}{{\leq}}\int_{\operatorname{\mathcal{Y}}}\operatorname{\mathbb{E}}_{p(y)}\left[f\left(\frac{q_{\phi}}{p}\right)\>\Big{|}\>\sigma(g_{\psi})\right]p~{}d\xi$
$\displaystyle\stackrel{{\scriptstyle(g)}}{{=}}\int_{\operatorname{\mathcal{Y}}}f\left(\frac{q_{\phi}}{p}\right)p~{}d\xi$
$\displaystyle\stackrel{{\scriptstyle(h)}}{{=}}\int_{\operatorname{\mathcal{Y}}}\log\left(\frac{q_{\phi}}{p}\right)\frac{q_{\phi}}{p}~{}p~{}d\xi$
$\displaystyle\stackrel{{\scriptstyle(i)}}{{=}}\operatorname{\mathbb{E}}_{q_{\phi}(y|x)}\left[\log\left(\frac{q_{\phi}(y|x)}{p(y)}\right)\right]$
$\displaystyle\stackrel{{\scriptstyle(j)}}{{=}}D_{\textrm{KL}}\left(q_{\phi}(y|x)\;\|\;p(y)\right),$
where we leveraged the definition of the KL divergence in (a & j); the
absolute continuity of $q_{\phi}$ w.r.t. $p$ in (b & i); the definition of $f$
in (c & h); the definition of the pushforward measure in (d); Eq. A.1 in (e);
the conditional Jensen inequality in (f) and the law of total expectation in
(g). Note that this proof holds not only for the KL divergence, but for any
$f$-divergence, since these are defined as in (b) with $f$ convex.
To prove Eq. 4, we now need to show that line (f) above becomes an equality
when $g_{\psi}$ is invertible. As $f$ is strictly convex, this happens if and
only if
$\frac{q_{\phi}}{p}=\operatorname{\mathbb{E}}_{p(y)}\left[\frac{q_{\phi}}{p}\>\Big{|}\>\sigma(g_{\psi})\right]$.
A sufficient condition for this to be true is for $\frac{q_{\phi}}{p}$ to be
measurable w.r.t. $\sigma(g_{\psi})$ which is satisfied when
$g_{\psi}:\operatorname{\mathcal{Y}}\mapsto\operatorname{\mathcal{Z}}$ is
invertible as $\sigma(g_{\psi})\supseteq\mathcal{F}$, as required. We have
thus shown that the KL divergences are equal when using an invertible
$g_{\psi}$.
For the reconstruction term, we instead have
$\displaystyle\operatorname{\mathbb{E}}_{q_{\phi}(y|x)}[\log
p_{\theta}(x|g_{\psi}(y))]$
$\displaystyle=\int_{\operatorname{\mathcal{Y}}}\log
p_{\theta}(x|g_{\psi}(y))q_{\phi}(y|x)d\xi$
$\displaystyle=\int_{\operatorname{\mathcal{Z}}}\log
p_{\theta}(x|z)q_{\phi,\psi}(z|x)d\nu$
$\displaystyle=\operatorname{\mathbb{E}}_{q_{\phi,\psi}(z|x)}[\log
p_{\theta}(x|z)].$
Eq. 4 now follows from the fact that both the reconstruction and KL terms are
equal.
∎
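As an aside (not part of the proof), both statements can be checked numerically on discrete distributions, where pushforwards along a deterministic map are exact sums: the KL divergence can only shrink under a non-invertible map, and is preserved under an invertible one. The distributions below are arbitrary stand-ins for $q_{\phi}$ and $p$.

```python
import math

def kl(p, q):
    # KL divergence between two discrete distributions (lists of probabilities)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def pushforward(dist, g, n_out):
    # Pushforward of a discrete distribution along a deterministic map g
    out = [0.0] * n_out
    for i, pi in enumerate(dist):
        out[g(i)] += pi
    return out

q = [0.5, 0.3, 0.1, 0.1]       # plays the role of q_phi(y|x)
p = [0.25, 0.25, 0.25, 0.25]   # plays the role of p(y)
kl_full = kl(q, p)

# Non-invertible map merging states {0,1} and {2,3}: KL can only shrink (Eq. 3)
g_merge = lambda i: i // 2
kl_push = kl(pushforward(q, g_merge, 2), pushforward(p, g_merge, 2))
assert kl_push <= kl_full

# Invertible map (a permutation): KL is preserved exactly (Eq. 4)
g_perm = lambda i: (i + 1) % 4
kl_perm = kl(pushforward(q, g_perm, 4), pushforward(p, g_perm, 4))
assert abs(kl_perm - kl_full) < 1e-12
```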
## Appendix B Hierarchical Representations
Figure B.1: Graphical model for hierarchical InteL-VAE
The isotropic Gaussian prior in standard VAEs assumes that representations are
independent across dimensions (Kumar et al., 2018). However, this assumption
is often unrealistic (Belghazi et al., 2018; Mathieu et al., 2019b). For
example, in Fashion-MNIST, high-level features, such as object category, may
affect low-level features, such as shape or height. Separately extracting such
global and local information can be beneficial for visualization and data
manipulation (Zhao et al., 2017). To try and capture this, we introduce an
inductive bias that is tailored to model and learn hierarchical features. We
note here that our aim is not to try and provide a state-of-the-art
hierarchical VAE approach, as a wide variety of highly–customized and powerful
approaches are already well–established, but to show how easily the InteL-VAE
framework can be used to induce hierarchical representations in a simple,
lightweight, manner.
#### Mapping design
Following existing ideas from hierarchical VAEs (Sønderby et al., 2016; Zhao
et al., 2017), we propose a hierarchical mapping $g_{\psi}$. As shown in Fig.
B.1, the intermediary Gaussian variable $y$ is first split into a set of $N$
layers $[y_{0},y_{1},...,y_{N}]$. The mapping $z=g_{\psi}(y)$ is then
recursively defined as $z_{i}=\text{NN}_{i}(z_{i-1},y_{i})$, where
$\text{NN}_{i}$ is a neural network combining information from higher-level
feature $z_{i-1}$ and new information from $y_{i}$. As a result, we get a
hierarchical encoding $z=[z_{0},z_{1},...,z_{N}]$, where high-level features
influence low-level ones but not vice-versa. This $g_{\psi}$ thus endows
InteL-VAEs with hierarchical representations.
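The recursion above can be sketched in a few lines; the layer sizes and the form of each $\text{NN}_{i}$ (a random affine map with a tanh here) are illustrative assumptions, not the paper's architecture. The assertions check the key structural property: perturbing a low-level $y_{i}$ leaves all higher-level $z_{j}$, $j<i$, unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_nn(in_dim, out_dim):
    # Stand-in for NN_i: a random affine map plus tanh (an illustrative
    # choice; the paper does not specify the architecture of NN_i)
    W = rng.standard_normal((out_dim, in_dim)) * 0.1
    return lambda x: np.tanh(W @ x)

def hierarchical_g(y_layers, nns):
    # z_0 = NN_0(y_0); z_i = NN_i(z_{i-1}, y_i) for i >= 1
    z = [nns[0](y_layers[0])]
    for i in range(1, len(y_layers)):
        z.append(nns[i](np.concatenate([z[i - 1], y_layers[i]])))
    return z

dims = [4, 4, 4]  # three levels of 4 dimensions each (assumed sizes)
nns = [make_nn(dims[0], dims[0])] + [
    make_nn(dims[i - 1] + dims[i], dims[i]) for i in range(1, len(dims))
]
y = [rng.standard_normal(d) for d in dims]
z = hierarchical_g(y, nns)

# Perturbing the lowest-level y_2 changes z_2 but not z_0 or z_1
y_pert = [y[0], y[1], y[2] + 1.0]
z_pert = hierarchical_g(y_pert, nns)
assert np.allclose(z[0], z_pert[0]) and np.allclose(z[1], z_pert[1])
assert not np.allclose(z[2], z_pert[2])
```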
Figure B.2: Manipulating representations of a hierarchical InteL-VAE. The
features are split into 5 levels, with each of (a) [highest] to (e) [lowest]
corresponding to an example feature from each. We see that high-level features
control more complex properties, such as class label or topological structure,
while low-level features control simpler details, (e.g. (d) controls collar
shape).
#### Experiments
While conventional hierarchical VAEs, e.g. (Sønderby et al., 2016; Zhao et
al., 2017; Vahdat & Kautz, 2020), use hierarchies to try and improve
generation quality, our usage is explicitly from the representation
perspective, with our experiments set up accordingly. Fig. B.2 shows some
hierarchical features learned by InteL-VAE on Fashion-MNIST. We observe that
high-level information such as categories have indeed been learned in the top-
level features, while low-level features control more detailed aspects.
To provide more quantitative investigation, we also consider the CelebA
dataset (Liu et al., 2015) and investigate performance on downstream tasks,
comparing to vanilla-VAEs with different latent dimensions. For this, we train
a linear classifier to predict all $40$ binary labels from the learned
features for each method. In order to eliminate the effect of latent
dimensions, we compare InteL-VAE (with fixed latent dimension $128$) and
vanilla VAE with different latent dimensions ($1,2,4,8,16,32,64,128$). We show
experiment results on some labels as well as the average accuracy on all
labels in Table B.1 and Fig. B.3. We first find that the optimal latent
dimension for the vanilla VAEs increases with the number of data points, but
that they always perform worse than the InteL-VAE. Notably, the accuracy of
the InteL-VAE is quite robust even as the number of data points becomes
dramatically low, indicating high data efficiency.
To the best of our knowledge, this is the
first result showing that a hierarchical inductive bias in VAE is beneficial
to feature quality.
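The linear-probe protocol can be sketched as follows, with synthetic stand-ins for the frozen encoder features and a single binary attribute; the real experiment uses the trained encoder, CelebA's 40 labels, and the data sizes in Table B.1, so sizes and the label model here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for frozen latent features and one binary attribute label
n, latent_dim = 2000, 128
feats = rng.standard_normal((n, latent_dim))
w_true = rng.standard_normal(latent_dim)
labels = (feats @ w_true > 0).astype(float)

# Linear probe: least-squares fit on a train split, accuracy on a test split
f_tr, y_tr = feats[:1600], labels[:1600]
f_te, y_te = feats[1600:], labels[1600:]
w, *_ = np.linalg.lstsq(f_tr, y_tr - 0.5, rcond=None)
acc = float(np.mean((f_te @ w > 0) == (y_te > 0.5)))
assert acc > 0.85  # labels are linearly separable by construction
```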
Related work Hierarchical VAEs (Vahdat & Kautz, 2020; Ranganath et al., 2016;
Sønderby et al., 2016; Klushyn et al., ; Zhao et al., 2017) seek to improve
the fit and generation quality of VAEs by recursively correcting the
generative distributions. However, they require careful design of neural
layers, and the hierarchical KL divergence makes training deep hierarchical
VAEs unstable (Vahdat & Kautz, 2020). In comparison, InteL-VAE with
hierarchical mappings is extremely easy to implement without causing any
computational instabilities, while its aims also differ noticeably: our
approach successfully learns hierarchical _representations_ —something that is
rarely mentioned in prior works.
Figure B.3: InteL-VAE’s performance on attribute prediction for the CelebA dataset. Each column shows results for the same feature at different data sizes, and each row shows results for different features. In each graph, the test accuracy of the vanilla VAE at different latent dimensions is shown in blue, and that of the InteL-VAE with the hierarchical prior in red. Our method (red line) achieves comparable or better results than the vanilla VAE at all latent dimensions.

Table B.1: Average accuracy in predicting all 40 binary labels of CelebA. The overall best accuracy is shown in bold and the best results of the vanilla VAEs are underlined for comparison. Each experiment is repeated 10 times and differences are significant at the $5\%$ level for data sizes $\leq 1000$.

Model | Latent dim | Data size
---|---|---
| | 50 | 100 | 500 | 1000 | 5000 | 10000
VAE | 8 | 0.791 | 0.799 | 0.814 | 0.815 | 0.819 | 0.819
| 16 | 0.788 | 0.801 | 0.820 | 0.824 | 0.829 | 0.831
| 32 | 0.769 | 0.795 | 0.825 | 0.832 | 0.842 | 0.846
| 64 | 0.767 | 0.794 | 0.826 | 0.832 | 0.849 | 0.855
| 128 | 0.722 | 0.765 | 0.817 | 0.825 | 0.830 | 0.852
InteL-VAE | 64 | 0.817 | 0.824 | 0.841 | 0.846 | 0.854 | 0.857
## Appendix C Full Method and Experiment Details
In this section, we first provide complete details of the mapping designs used
for our different InteL-VAE realizations along with some additional
experiments. We then provide other general information about datasets, network
structures, and experiment settings to facilitate results reproduction.
### C.1 Multiple-connectivity
#### Mapping design
Full details for this mapping were given in the main paper. Fig. C.1 provides
a further illustration of the gluing process. Additional results, including
the Vamp-VAE baseline, are shown in Fig. C.2.
(a) Circular prior with $h=1$
(b) Glue point pair
(c) Implied prior with $h=2$
Figure C.1: An illustration of the glue function in multiply-connected
mappings.
### C.2 Multi-modality
#### Mapping design
In Sec. 6.2, we see the general idea of designing clustered mappings. In this
part, we delve into the details of mapping design as well as extending it to 1
dimensional and high-dimensional cases. For simplicity’s sake let us
temporarily assume that the dimension of $\operatorname{\mathcal{Y}}$ is $2$.
Our approach is based on splitting the original space into $K$ equally sized
sectors, where $K$ is the number of clusters we wish to create, as shown in
Fig. 5b. For any point $y$, we can get its component (sector) index
$\text{ci}(y)$ as well as its distance from the sector boundary
$\text{dis}(y)$. By further defining the radius direction for the $k$-th
sector (cf Fig. 5c) as
$\Delta(k)=\left(\cos\left(\frac{2\pi}{K}\left(k+\frac{1}{2}\right)\right),\sin\left(\frac{2\pi}{K}\left(k+\frac{1}{2}\right)\right)\right)\quad\forall
k\in\{1,\dots,K\},$
we can in turn define $g(y)$ as:
$\displaystyle\text{r}(y)$ $\displaystyle=\Delta(\text{ci}(y)),$ (C.1)
$\displaystyle g(y)$
$\displaystyle=y+{c_{1}}\text{dis}(y)^{c_{2}}\text{r}(y),$ (C.2)
where $c_{1}$ and $c_{2}$ are constants, set to 5 and 0.2 in our experiments.
We keep $g$ continuous by ensuring $g(y)=y$ on sector boundaries.
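A minimal 2-D sketch of this mapping is given below. Interpreting $\text{dis}(y)$ as the angular distance to the nearest sector boundary is our assumption, since the text leaves its exact form open; the constants match Eq. C.2.

```python
import numpy as np

def cluster_map(y, K, c1=5.0, c2=0.2):
    # Clustered mapping g of Eqs. C.1-C.2 for 2-D y: push each point along
    # its sector's radial direction Delta(ci(y)), scaled by its (assumed
    # angular) distance dis(y) from the nearest sector boundary.
    theta = np.arctan2(y[1], y[0]) % (2 * np.pi)
    width = 2 * np.pi / K
    k = int(theta // width)                       # sector index ci(y)
    dis = min(theta - k * width, (k + 1) * width - theta)
    r = np.array([np.cos(width * (k + 0.5)),      # radial direction Delta(k)
                  np.sin(width * (k + 0.5))])
    return y + c1 * dis ** c2 * r

# g is the identity on sector boundaries, so it stays continuous
b = np.array([1.0, 0.0])
assert np.allclose(cluster_map(b, K=4), b)

# Interior points are pushed outward, away from the boundaries
inner = np.array([1.0, 1.0])
assert np.linalg.norm(cluster_map(inner, K=4)) > np.linalg.norm(inner)
```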
(a) Real distribution
(b) VAE
(c) Vamp-VAE
(d) InteL-VAE
Figure C.2: Extension of Fig. 4 showing Vamp-VAE baseline and additional
circular target distribution (top row, uses the same single hole $g_{\psi}$ as
the second and third rows).
When the dimension of $\operatorname{\mathcal{Y}}$ is greater than $2$, we
have more diverse choices for $g$. When $K$ is decomposable, i.e.,
$K=\prod_{i}K_{i}$, we can separately cut the plane spanned by
$\operatorname{\mathcal{Y}}_{2i}$ and $\operatorname{\mathcal{Y}}_{2i+1}$ into
$K_{i}$ sectors using Eq. C.1. As a result, $\operatorname{\mathcal{Y}}$ is
split into $K=\prod_{i}K_{i}$ clusters. When $K=2$, we find that $g$ only
changes the first dimension of $\operatorname{\mathcal{Y}}$, so it can also be
applied to cases where the latent dimension is $1$.
#### Learnable proportions
We can also make the mapping more flexible by learning rather than assigning
the cluster proportions. To do so, we keep a learnable value $u_{i}$ for each
cluster and set the angle of the $i$-th sector as $2\pi\text{Softmax}(u)_{i}$.
Things are simpler for the 1-dimensional case where we can uniformly translate
$y$ by a learnable bias $b$ before splitting the space from the origin.
### C.3 Sparsity
#### Relationship to soft attention
We note that our setup for the sparsity mapping shares some similarities with
a soft attention layer (Bahdanau et al., 2014). However, there are also some
important points of difference. Firstly, soft attention aims to find the
weights to blend features from different time steps (for sequential data) or
different positions (for image data). In contrast, the dimension selector (DS)
selects which dimensions to activate or deactivate for the same latent vector.
Secondly, the weights of features are usually calculated by inner products of
features for soft attention, while DS relies on a network to directly output
the logits.
#### Sparsity regularizer
Our sparsity regularizer term, $\operatorname{\mathcal{L}}_{sp}$, is used to
encourage our dimensionality selector network (DS) to produce sparse mappings.
It is defined using a mini-batch of samples $\\{y_{i}\\}_{i=1}^{M}$ drawn
during training as per (7). During training, the first term of
$\operatorname{\mathcal{L}}_{sp}$ decreases the number of activated dimensions
for each sample, while the second term prevents the samples from all using the
same set of activated dimensions, which would cause the model to degenerate to
a vanilla VAE with a lower latent dimensionality.
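Since Eq. (7) is not reproduced here, the following is only an illustrative stand-in with the two described ingredients: a per-sample term that rewards sparse gate rows, and a batch term that rewards diversity in which dimensions are active. The Hoyer-style $\ell_{1}/\ell_{2}$ ratios are our own choice, not the paper's definition.

```python
import numpy as np

def lsp(w, eps=1e-8):
    # Illustrative stand-in for the sparsity regularizer L_sp; w is an
    # (M, D) batch of nonnegative DS gate weights.
    # First term: each sample should be sparse (small L1/L2 ratio).
    per_sample = (np.abs(w).sum(1) / (np.linalg.norm(w, axis=1) + eps)).mean()
    # Second term: the batch-average pattern should stay dense (large
    # ratio), so samples do not all activate the same dimensions.
    mean_pat = np.abs(w).mean(0)
    batch = mean_pat.sum() / (np.linalg.norm(mean_pat) + eps)
    return per_sample - batch

# Sparse samples activating *different* dimensions score lower (better)
w_div = np.eye(4)
# ...than samples all activating the *same* single dimension
w_same = np.tile(np.array([1.0, 0.0, 0.0, 0.0]), (4, 1))
assert lsp(w_div) < lsp(w_same)
```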
We note that $\operatorname{\mathcal{L}}_{sp}$ alone is not expected to induce
sparsity without also using the carefully constructed $g_{\psi}$ of the
suggested InteL-VAE. We confirm this empirically by performing an ablation
study on MNIST where we apply this regularization directly to a vanilla VAE.
We find that even when using very large values of $\gamma>30.0$, we can only
slightly increase the sparsity score ($0.230\rightarrow 0.235$). Moreover,
unlike for the InteL-VAE, this substantially deteriorates generation quality,
with the FID score rising to more than $80.0$.
#### Sparsity metric
We use the Hoyer extrinsic metric (Hurley & Rickard, 2009) to measure the
sparsity of representations. For a representation
$z\in\operatorname{\mathbb{R}}^{D}$,
$\displaystyle\text{Hoyer}(z)=\frac{\sqrt{D}-||\hat{z}||_{1}/||\hat{z}||_{2}}{\sqrt{D}-1}.$
(C.3)
Here, following Mathieu et al. (2019b), we crucially first normalize each
dimension $d$ of $z$ to have standard deviation $1$,
$\hat{z}_{d}=z_{d}/\sigma_{d}$, to ensure that we only measure sparsity that
varies between data points (as is desired), rather than any tendency to
uniformly ‘switch off’ certain latent dimensions (which is tangential to our
aims). In other words, this normalization is necessary to avoid giving high
scores to representations whose length scales vary between dimensions, but
which are not really sparse.
By averaging $\text{Hoyer}(z)$ over all representations, we obtain the
sparsity score of a method. In the sparsest case, where each representation
has a single activated dimension, the sparsity score is $1$. As
representations get denser, $||\hat{z}||_{2}$ becomes smaller relative to
$||\hat{z}||_{1}$, leading to lower sparsity scores.
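A direct implementation of this score, including the per-dimension normalization, might look as follows (a sketch, not the paper's code):

```python
import numpy as np

def sparsity_score(Z, eps=1e-8):
    # Average Hoyer metric (Eq. C.3) over a batch Z of shape (N, D).
    # Each dimension is first normalized to unit standard deviation, so
    # uniformly 'switched-off' dimensions do not inflate the score.
    zhat = Z / (Z.std(axis=0) + eps)
    D = Z.shape[1]
    ratio = np.abs(zhat).sum(axis=1) / (np.linalg.norm(zhat, axis=1) + eps)
    return float(np.mean((np.sqrt(D) - ratio) / (np.sqrt(D) - 1)))

rng = np.random.default_rng(0)
sparse = np.eye(8) * rng.random(8)     # one active dimension per sample
dense = rng.standard_normal((8, 8))    # all dimensions active
assert sparsity_score(sparse) > 0.99
assert sparsity_score(dense) < sparsity_score(sparse)
```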
Figure C.3: Results on MNIST. The left figure shows FID and sparsity scores.
Lower FID scores ($\downarrow$) represent better sample quality while higher
sparse scores ($\rightarrow$) indicate sparser features. The right figure
shows the performance of sparse features from InteL-VAE on downstream
classification tasks. See Sec. 6.3 for details and results for MNIST.
#### Reproduction of Sparse-VAE
We tried two different code bases for Sparse-VAE (Tonolini et al., 2020). The
official code base555https://github.com/ftonolini45/Variational_Sparse_Coding
gives higher sparsity scores for MNIST and Fashion-MNIST (though still lower
than InteL-VAE), but is very unstable during training, with runs regularly
failing after diverging and producing NaNs. This issue is even more severe on
CelebA, where failures occur after only a few training steps, preventing us
from training anything meaningful at all. To account for this, we switched to the
codebase666https://github.com/Alfo5123/Variational-Sparse-Coding from De la
Fuente & Aduviri (2019) that looked to replicate the results of the original
paper. We report the results from this code base because it solves the
instability issue and achieves reasonable results on CelebA. Interestingly,
though its generation quality is good on MNIST and Fashion-MNIST, it fails to
achieve a sparse score significantly higher than vanilla-VAE. As the original
paper does not provide any quantitative evaluation of the achieved sparsity,
it is difficult to know if this behavior is expected. We note though that the
qualitative results shown in the paper appear to be substantially less sparse
than those we show for the InteL-VAE, cf their Figure 5 compared to the top
row of our Fig. 8. In particular, their representation seems to mostly ‘switch
off’ some latents entirely, rather than having diversity between datapoints
that is needed to score well under the Hoyer metric.
Parameters | Synthetic | MNIST | Fashion-MNIST | MNIST-01 | CelebA
---|---|---|---|---|---
Dataset sizes | Unlimited | 55k/5k/10k | 55k/5k/10k | 10k/1k/2k | 163k/20k/20k
Input space | $\mathbf{R}^{2}$ | Binary 28x28 | Binary 28x28 | Binary 28x28 | RGB 64x64x3
Encoder net | MLP | CNN | CNN | CNN | CNN
Decoder net | MLP | CNN | CNN | CNN | CNN
Latent dimension | 2-10 | 50 | 50 | 1-10 | 1-128
Batch size | 10-500 | 100 | 100 | 100 | 100
Optimizer | Adam | Adam | Adam | Adam | Adam
Learning rate | 1e-3 | 1e-3 | 1e-3 | 1e-3 | 1e-3
Table C.1: Hyperparameters used for different experiments.
Encoder
---
Input 64 x 64 x 3
4x4 conv. 64 stride 2 & BN & LReLU
4x4 conv. 128 stride 2 & BN & LReLU
4x4 conv. 256 stride 2 & BN & LReLU
Dense ($dim$)
Decoder
---
Input $dim$
Dense (8x8x256) & BN & ReLU
4x4 upconv. 256 stride 2 & BN & ReLU
4x4 upconv. 128 stride 2 & BN & ReLU
4x4 upconv. 3 stride 2
Table C.2: Encoder and Decoder structures for CelebA, where $dim$ is the
latent dimension.
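As a quick consistency check on Table C.2, three stride-2 $4\times 4$ convolutions take the $64\times 64$ input down to $8\times 8$, matching the decoder's Dense($8\times 8\times 256$) reshape; a padding of 1 is assumed here, since the table does not state it.

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    # Spatial output size of one convolution layer; padding of 1 is an
    # assumption, since Table C.2 does not state it
    return (size + 2 * pad - kernel) // stride + 1

sizes = [64]
for _ in range(3):  # the three stride-2 4x4 conv layers of the encoder
    sizes.append(conv_out(sizes[-1]))
print(sizes)  # [64, 32, 16, 8] -- the final 8x8 matches Dense(8x8x256)
```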
### C.4 Additional Experiment Details
#### Datasets
Both synthetic and real datasets are used in this paper. All synthetic
datasets (sphere, square, star, and mixture of Gaussian) are generated by
generators provided in our codes. For real datasets, We load MNIST, Fashion-
MNIST, and CelebA directly from Tensorflow (Abadi et al., 2015), and we resize
images from CelebA to $64$x$64$ following Hou et al. (2017). For experiments
with a specified number of training samples, we randomly select a subset of
the training data. We use the same random seed for each model in the same
experiment and different random seeds when repeating experiments.
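The binarization and seeded subsetting can be sketched as below; the threshold of $0.5$ and the use of NumPy's generator are assumptions standing in for the actual pipeline, and the stand-in array replaces the real MNIST pixels.

```python
import numpy as np

def make_subset(images, n, seed=0):
    # Binarize pixels and draw a seeded fixed-size training subset, as used
    # for the experiments with a specified number of training samples
    # (the 0.5 threshold is an assumption)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=n, replace=False)
    return (images[idx] > 0.5).astype(np.float32)

# Stand-in for MNIST pixel intensities in [0, 1]
fake = np.random.default_rng(1).random((100, 28, 28))
sub = make_subset(fake, 10, seed=0)
assert sub.shape == (10, 28, 28)
assert set(np.unique(sub)) <= {0.0, 1.0}
# The same seed yields the same subset, matching the protocol of sharing
# seeds across models within one experiment
assert np.array_equal(sub, make_subset(fake, 10, seed=0))
```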
#### Model structure
For low-dimensional data, the encoder and decoder are both simple multilayer
perceptrons with 3 hidden layers (10-10-10) and ReLU (Glorot et al., 2011)
activation. For MNIST and Fashion-MNIST, we use the same encoder and decoder
as Mathieu et al. (2019b). For CelebA, the structures of the convolutional
networks are shown in Table C.2.
#### Experiment settings
Other hyperparameters are shown in Table C.1. All experiments are run on a
GTX-1080-Ti GPU.
# A Survey on Causal Discovery Methods for I.I.D. and Time Series Data
Uzma Hasan <EMAIL_ADDRESS>
Causal AI Lab
Department of Information Systems
University of Maryland, Baltimore County
Baltimore, MD, USA

Emam Hossain <EMAIL_ADDRESS>
Causal AI Lab
Department of Information Systems
University of Maryland, Baltimore County
Baltimore, MD, USA

Md Osman Gani <EMAIL_ADDRESS>
Causal AI Lab
Department of Information Systems
University of Maryland, Baltimore County
Baltimore, MD, USA
###### Abstract
The ability to understand causality from data is one of the major milestones
of human-level intelligence. Causal Discovery (CD) algorithms can identify the
cause-effect relationships among the variables of a system from related
observational data with certain assumptions. Over the years, several methods
have been developed primarily based on the statistical properties of data to
uncover the underlying causal mechanism. In this study, we present an
extensive discussion on the methods designed to perform causal discovery from
both independent and identically distributed (I.I.D.) data and time series
data. For this purpose, we first introduce the common terminologies used in
causal discovery literature and then provide a comprehensive discussion of the
algorithms designed to identify causal relations in different settings. We
further discuss some of the benchmark datasets available for evaluating the
algorithmic performance, off-the-shelf tools or software packages to perform
causal discovery readily, and the common metrics used to evaluate these
methods. We also evaluate some widely used causal discovery algorithms on
multiple benchmark datasets and compare their performances. Finally, we
conclude by discussing the research challenges and the applications of causal
discovery algorithms in multiple areas of interest.
## 1 Introduction
The identification of the cause-effect relationships among the variables of a
system from the corresponding data is called Causal Discovery (CD). A major
part of the causal analysis involves unfolding the _cause and effect
relationships_ among the entities in complex systems that can help us build
better solutions in health care, earth science, politics, business, education,
and many other diverse areas (Peyrot (1996), Nogueira et al. (2021)). The
_causal explanations_, precisely the causal factors obtained from a causal
analysis, play an important role in decision-making and policy formulation, as
well as in foreseeing the consequences of interventions without actually
performing them. Causal discovery algorithms enable the _discovery of the underlying
causal structure_ given a set of observations. The underlying causal structure
also known as a causal graph (CG) is a representation of the cause-effect
relationships between the variables in the data (Pearl (2009)). Causal graphs
represent the causal relationships with directed arrows from the cause to the
effect.
Figure 1: Causal Discovery: Identification of a causal graph from data.
Discovering the causal relations, and thereby, the estimation of their effects
would enable us to understand the underlying data generating mechanism (DGM)
better, and take necessary interventional actions. However, traditional
Artificial Intelligence (AI) applications rely solely on predictive models and
often ignore causal knowledge. Systems without the knowledge of causal
relationships often cannot make rational and informed decisions (Marwala
(2015)). The result may be devastating when correlation is mistaken for
causation, because two variables can be highly correlated and yet have no
causal influence on each other. There may be a third variable, often called
a latent confounder or hidden factor, that causes both of them (see
Figure 2 (a)). Thus, _embedding the knowledge of causal relationships_ in
black-box AI systems is important to improve their explainability and
reliability (Dubois & Prade (2020), Ganguly et al. (2023)). In multiple fields
such as healthcare, politics, economics, climate science, business, and
education, the ability to understand causal relations can facilitate the
formulation of better policies with a greater understanding of the data.
(a) (b)
Figure 2: (a) Latent confounder $L$ causes both variables $S$ and $C$, and the
association between $S$ and $C$ is denoted by ?, which can be mistaken for
causation. The graph in (b) is a causal graph depicting the causes and effects
of cancer (Korb & Nicholson (2010)).
The standard approach to discover the cause-effect relationships is to perform
randomized controlled trials (RCTs) (Sibbald & Roland (1998)). However, RCTs are
often infeasible to conduct due to high costs and ethical concerns (Resnik
(2008)). As a result, over the last few decades, researchers have developed a
variety of methods to unravel causal relations from purely observational data
(Glymour et al. (2019), Vowels et al. (2021)). These methods are often based
on some assumptions about the data and the underlying mechanism. The _outcome_
of any causal discovery method is a causal graph or a causal adjacency matrix
where the cause and effect relations among the entities or variables are
represented. The structure of a causal graph is often similar to a _directed
acyclic graph (DAG)_ where directed edges from one variable to another
represent the cause-effect relationship between them. Figure 2 (b) represents
a causal graph showing the factors that are responsible for causing Cancer.
This type of structural representation of the underlying data-generating
mechanism is beneficial for understanding how the system entities interact
with each other.
There exists a wide range of approaches for performing causal discovery under
different settings or assumptions. Some approaches are designed particularly
for _independent and identically distributed (I.I.D.) data_ (Spirtes et al.
(2000b), Chickering (2002)) i.e. non-temporal data while others are focused on
_time series data_ (Runge et al. (2019), Hyvärinen et al. (2010)) or temporal
data. Since in real-world settings, both types of data are available in
different problem domains, it is essential to have approaches to perform
causal structure recovery from both of these. Recently, there has been a
growing body of research that considers _prior knowledge incorporation_ for
recovering the causal relationships (Mooij et al. (2020), Hasan & Gani (2022),
Hasan & Gani (2023)). Although there exist some surveys (see Table 1) on
causal discovery approaches (Heinze-Deml et al. (2018), Glymour et al. (2019),
Guo et al. (2020), Vowels et al. (2021), Assaad et al. (2022b)), none of these
present a comprehensive review of the different approaches designed for
structure recovery from both I.I.D. and time series data. Also, these surveys
do not discuss the approaches that perform causal discovery in the presence of
background knowledge. Hence, the goal of this survey is to provide an overview
of the wide range of existing approaches for performing causal discovery from
I.I.D. as well as time series data under different settings. Existing surveys
lack a combined overview of the approaches available for both I.I.D. and time
series data, so in this survey we introduce the reader to the methods
available in both domains. We discuss prominent methods based on the
different approaches such as conditional independence (CI) testing, score
function usage, functional causal models (FCMs), continuous optimization
strategy, prior knowledge infusion, and miscellaneous ones. These methods
differ from each other primarily in the core strategy they follow.
Apart from introducing the different causal discovery approaches and
algorithms for I.I.D. and time series data, we also discuss the different
tools, metrics, and benchmark datasets used for performing CD and the
challenges and applications of CD in a wide range of areas.
Table 1: Comparison among the existing surveys for causal discovery approaches. A discussion on the different approaches can be found in section 3 and section 4. Survey | Focused Approaches | I.I.D. Data | Time Series Data
---|---|---|---
Heinze-Deml et al. (2018) | Constraint, Score, Hybrid & FCM-based approaches. | $\checkmark$ | $\times$
Glymour et al. (2019) | Traditional Constraint-based, Score-based, & FCM-based approaches. | $\checkmark$ | $\times$
Guo et al. (2020) | Constraint-based, Score-based, & FCM-based approaches. | $\checkmark$ | $\times$
Vowels et al. (2021) | Continuous Optimization-based. | $\checkmark$ | $\times$
Assaad et al. (2022b) | Constraint-based, Score-based, FCM-based, etc. approaches for time series data. | $\times$ | $\checkmark$
This study | Constraint-based, Score-based, FCM-based, Hybrid-based, Continuous-Optimization-based, Prior-Knowledge-based, and Miscellaneous. | $\checkmark$ | $\checkmark$
To summarize, the structure of this paper is as follows: First, we provide a
brief introduction to the common terminologies in the field of causal
discovery (section 2). Second, we discuss the wide range of causal discovery
approaches that exist for both I.I.D. (section 3) and time-series data
(section 4). Third, we briefly overview the common evaluation metrics (section
5) and datasets (section 6) used for evaluating the causal discovery
approaches, and report the performance comparison of some causal discovery
approaches in section 7. Fourth, we list the different technologies and open-
source software (section 8) available for performing causal discovery. Fifth,
we discuss the challenges (section 9.1) and applications (section 9.2) of
causal discovery in multiple areas such as healthcare, business, social
science, economics, and so on. Lastly, we conclude by discussing the scopes of
improvement in future causal discovery research, and the importance of
causality in improving the existing predictive AI systems which can thereby
impact informed and reliable decision-making in different areas of interest
(section 10).
## 2 Preliminaries of Causal Discovery
In this section, we briefly discuss the important terminologies and concepts
that are widely used in causal discovery. Some common notations used to
explain the terminologies are presented in Table 2.
Table 2: Common notations. Notation | Description
---|---
$G$ | A graph or DAG or ground-truth graph
$G^{\prime}$ | An estimated graph
$X,Y,Z,W$ | Observational variables
$X$ — $Y$ | An unoriented or undirected edge between $X$ and $Y$
$X$ → $Y$ | A directed edge from $X$ to $Y$ where $X$ is the cause and $Y$ is the effect
$X$ $\not\to$ $Y$ | Absence of an edge or causal link between $X$ and $Y$
$X$ → $Z$ ← $Y$ | V-structure or Collider where $Z$ is the common child of $X$ and $Y$
$\perp\\!\\!\\!\perp$ | Independence or d-separation
$X$ $\perp\\!\\!\\!\perp$ $Y$ $|$ $Z$ | $X$ is d-separated from $Y$ given $Z$
### 2.1 Graphical Models
A graph G = (V, E) consists of a set of vertices (nodes) V and a set of edges
E where the edges represent the relationships among the vertices. Figure 3 (a)
represents a graph $G$ with vertices $V=[X,Y,Z]$ and edges
$E=[(X,Y),(X,Z),(Z,Y)]$. There can be different types of edges in a graph such
as directed edges (→), undirected edges (-), bi-directed edges
($\leftrightarrow$), etc. (Colombo et al. (2012)). A graph that consists of
only undirected edges (-) between the nodes which represent their adjacencies
is called a _skeleton graph_ $S_{G}$. This type of graph is also known as an
_undirected graph_ (Figure 3 (b)). A graph that has a mixture of different
types of edges is known as a _mixed graph_ $M_{G}$ (Figure 3 (c)). A _path_
$p$ between two nodes $X$ and $Y$ is a sequence of edges beginning from $X$
and ending at $Y$. A _cycle_ $c$ is a path that begins and ends at the same
vertex. A graph with no cycle $c$ is called an _acyclic graph_, and a
directed graph (all edges directed, →) that contains no cycle is
called a directed acyclic graph (DAG). In a DAG $G$, a directed path from $X$
to $Y$ implies that $X$ is an ancestor of $Y$, and $Y$ is a descendant of $X$.
The graph $G$ in Figure 3 (a) is a DAG as it is acyclic, and consists of
directed edges.
Figure 3: (a) A graph $G$, (b) its _skeleton_ graph $S_{G}$, (c) a _mixed
graph_ $M_{G}$ with directed & undirected edges.
There can be different kinds of DAGs based on the type of edges they contain.
A class of DAG known as partially directed acyclic graph (PDAG) contains both
directed ($\rightarrow$) and undirected (-) edges. The mixed graph of Figure 3
(c) is also a PDAG. A completed PDAG (CPDAG) consists of directed
($\rightarrow$) edges that exist in every DAG $G$ having the same conditional
dependencies, and undirected (-) edges that are reversible in $G$. An
extension of DAGs that retain many of the significant properties that are
associated with DAGs is known as ancestral graphs (AGs). Two different DAGs
may lead to the same ancestral graph (Richardson & Spirtes (2002a)). Often
there are hidden confounders and selection biases in real-world data.
Ancestral graphs can represent the data-generating mechanisms that may involve
latent confounders and/or selection bias, without explicitly modeling the
unobserved variables. There exist different types of ancestral graphs. A
maximal ancestral graph (MAG) is a mixed graph that can have both directed (→)
and bidirectional ($\leftrightarrow$) edges (Richardson & Spirtes (2002b)). A
partial ancestral graph (PAG) can have four types of edges such as directed
(→), bi-directed ($\leftrightarrow$), partially directed (o→), and undirected
($-$) (Spirtes (2001)). That is, edges in a PAG can have three kinds of
endpoints: $-$, o, or $>$. An ancestral graph without bi-directed edges
($\leftrightarrow$) is a DAG (Triantafillou & Tsamardinos (2016)).
### 2.2 Causal Graphical Models
A causal graphical model (CGM) or causal graph (CG) is a DAG $G$ that
represents a joint probability distribution $P$ over a set of random variables
$X=(X_{1},X_{2},…,X_{d})$ where $P$ is Markovian with respect to $G$. In a
CGM, the nodes represent variables $X$, and the arrows represent causal
relationships between them. The joint distribution $P$ can be factorized as
follows where $pa(x_{i},G)$ denotes the parents of $x_{i}$ in $G$.
$P(x_{1},…,x_{d})=\prod_{i=1}^{d}P(x_{i}|pa(x_{i},G))$ (1)
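Equation 1 can be made concrete with a small sketch over the DAG of Figure 3 (a) ($X\rightarrow Y$, $X\rightarrow Z$, $Z\rightarrow Y$); all probability values below are invented for the example:

```python
from itertools import product

# Conditional tables for the DAG X -> Y, X -> Z, Z -> Y (binary variables).
# The numbers are illustrative values, not taken from the survey.
p_x = {0: 0.6, 1: 0.4}
p_z_given_x = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
p_y_given_xz = {(0, 0): {0: 0.9, 1: 0.1}, (0, 1): {0: 0.5, 1: 0.5},
                (1, 0): {0: 0.4, 1: 0.6}, (1, 1): {0: 0.1, 1: 0.9}}

def joint(x, y, z):
    """Equation 1: P(x, y, z) = P(x) * P(z | x) * P(y | x, z),
    i.e. each factor conditions only on the node's parents in G."""
    return p_x[x] * p_z_given_x[x][z] * p_y_given_xz[(x, z)][y]

# A valid factorization must sum to one over all configurations.
total = sum(joint(x, y, z) for x, y, z in product((0, 1), repeat=3))
```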
Causal graphs are often used to study the underlying data-generating mechanism
in real-world problems. For any dataset $D$ with variables $X$, causal graphs
can encode the cause-effect relationships among the variables using directed
edges ($\rightarrow$) from cause to the effect. Most of the time causal graphs
take the form of a DAG. In Figure 3 (a), $X$ is the cause that affects both
$Y$ and $Z$ (i.e. $Y$ $\leftarrow$ $X$ $\rightarrow$ $Z$). Also, $Z$ is a
cause of $Y$ (i.e. $Z$ $\rightarrow$ $Y$). The mechanism that enables the
estimation of a causal graph $G$ from a dataset $D$ is called _causal
discovery (CD)_ (Figure 1). The outcome of any causal discovery algorithm is a
causal graph $G$ where the directed edges ($\rightarrow$) represent the cause-
and-effect relationship between the variables $X$ in $D$. However, some
approaches have different forms of graphs (PDAGs, CPDAGs, ancestral graphs,
etc.) as the output causal graph. Table 3 lists the output causal graphs of
some common approaches which are discussed in section 3.
Table 3: List of some CD algorithms with their output causal graphs. A detailed discussion of the algorithms is in section 3. The cells with $\checkmark$ represent the type of graph produced by the corresponding algorithm. Algorithms | DAG | PDAG | CPDAG | MAG | PAG
---|---|---|---|---|---
PC | | | $\checkmark$ | |
FCI | | | | | $\checkmark$
RFCI | | | | | $\checkmark$
GES | | | $\checkmark$ | |
GIES | | $\checkmark$ | | |
MMHC | $\checkmark$ | | | |
LiNGAM | $\checkmark$ | | | |
NOTEARS | $\checkmark$ | | | |
GSMAG | | | | $\checkmark$ |
#### 2.2.1 Key Structures in Causal Graphs
There are three fundamental building blocks (key structures) commonly observed
in the graphical models or causal graphs, namely, _Chain_ , _Fork_ , and
_Collider_. Any graphical model consisting of at least three variables is
composed of these key structures. We discuss these basic building blocks and
their implications in dependency relationships below.
###### Definition 1 (Chain)
A chain $X\rightarrow Y\rightarrow Z$ is a graphical structure or a
configuration of three variables $X$, $Y$, and $Z$ in graph $G$ where $X$ has
a directed edge to $Y$ and $Y$ has a directed edge to $Z$ (see Figure 4 (a)).
Here, $X$ causes $Y$ and $Y$ causes $Z$, and $Y$ is called a mediator.
###### Definition 2 (Fork)
A fork $Y\leftarrow X\rightarrow Z$ is a triple of variables $X$, $Y$, and $Z$
where one variable is the common parent of the other two variables. In Figure
4 (b), the triple ($X$, $Y$, $Z$) is a fork where $X$ is a common parent of
$Y$ and $Z$.
###### Definition 3 (Collider/V-structure)
A v-structure or collider $X\rightarrow Z\leftarrow Y$ is a triple of
variables $X$, $Y$, and $Z$ where one variable is a common child of the other
two variables which are non-adjacent. In Figure 4 (c), the triple ($X$, $Y$,
$Z$) is a v-structure where $Z$ is a common child of $X$ and $Y$, but $X$ and
$Y$ are non-adjacent in the graph. Figure 4 (d) is also a collider with a
descendant $W$.
Figure 4: Fundamental building blocks in causal graphical models.
#### 2.2.2 Conditional Independence in Causal Graphs
Testing for conditional independence (CI) between the variables is one of the
most important techniques to find the causal relationships among the
variables. Conditional independence between two variables $X$ and $Y$ results
when they are independent of each other given a third variable $Z$ (i.e. $X$
$\perp\\!\\!\\!\perp$ $Y$ $|$ $Z$). In the case of causal discovery, CI
testing allows deciding if any two variables are causally connected or
disconnected. An important criterion for CI testing is the d-separation
criterion which is formally defined below.
###### Definition 4 (d-separation)
(Pearl (1988)) A path $p$ in $G$ is blocked by a set of nodes $N$ if either
1. i.
$p$ contains a chain of nodes $X\rightarrow Y\rightarrow Z$ or a fork
$X\leftarrow Y\rightarrow Z$ such that the middle node $Y$ is in $N$,
2. ii.
$p$ contains a collider $X\rightarrow Y\leftarrow Z$ such that the collision
node $Y$ is not in $N$, and no descendant of $Y$ is in $N$.
If $N$ blocks every path between two nodes, then they are d-separated,
conditional on $N$, and thus are independent conditional on $N$.
In d-separation, _d_ stands for directional. The d-separation criterion
provides a set of rules to check if two variables are independent when
conditioned on a set of variables. The conditioning variable can be a single
variable or a set of variables. However, two variables with a directed edge
($\rightarrow$) between them are always dependent. The set of testable
implications provided by _d-separation_ can be benchmarked with the available
data $D$. If a graph $G$ might have been generated from a dataset $D$, then
_d-separation_ tells us which variables in $G$ must be independent conditional
on other variables. If every _d-separation_ condition matches a conditional
independence in data, then no further test can refute the model (Pearl
(1988)). If there is at least one path between two variables that is
unblocked, then they are _d-connected_. If two variables are _d-connected_ ,
then they are most likely dependent (except intransitive cases) (Pearl
(1988)). The d-separation or conditional independence between the variables in
the key structures (Figure 4) or building blocks of causal graphs follow some
rules which are discussed below:
1. i.
_Conditional Independence in Chains:_ If there is only one unidirectional path
between variables $X$ and $Z$ (Figure 4 (a)), and $Y$ is any variable or set
of variables that intercept that path, then $X$ and $Z$ are conditionally
independent given $Y$, i.e. $X$ $\perp\\!\\!\\!\perp$ $Z$ $|$ $Y$.
2. ii.
_Conditional Independence in Forks:_ If a variable $X$ is a common cause of
variables $Y$ and $Z$, and there is only one path between $Y$ and $Z$, then
$Y$ and $Z$ are independent conditional on $X$ (i.e. $Y$ $\perp\\!\\!\\!\perp$
$Z$ $|$ $X$) (Figure 4(b)).
3. iii.
_Conditional Independence in Colliders:_ If a variable $Z$ is the collision
node between two variables $X$ and $Y$ (Figure 4(c)), and there is only one
path between $X$ and $Y$, then $X$ and $Y$ are unconditionally independent
(i.e. $X$ $\perp\\!\\!\\!\perp$ $Y$). But, they become dependent when
conditioned on $Z$ or any descendants of $Z$ (Figure 4(d)).
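The collider rule (iii) can be checked numerically: in a simulated $X \rightarrow Z \leftarrow Y$ system, $X$ and $Y$ are marginally uncorrelated but become strongly dependent once $Z$ is conditioned on (here via regression residuals). The generating equations below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)                 # X and Y are generated independently
y = rng.normal(size=n)
z = x + y + 0.1 * rng.normal(size=n)   # Z is a collider: X -> Z <- Y

# Marginally, X and Y are (close to) uncorrelated.
r_marginal = np.corrcoef(x, y)[0, 1]

# Conditioning on Z: regress X and Y on Z, then correlate the residuals.
rx = x - np.polyval(np.polyfit(z, x, 1), z)
ry = y - np.polyval(np.polyfit(z, y, 1), z)
r_conditional = np.corrcoef(rx, ry)[0, 1]  # strongly negative
```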
#### 2.2.3 Markov Equivalence in Causal Graphs
A set of causal graphs having the same set of conditional independencies is
known as a Markov equivalence class (MEC). Two DAGs that are Markov equivalent
have the (i) same skeleton (the underlying undirected graph) and (ii) same
v-structures (colliders) (Verma & Pearl (2022)). That is, all DAGs in a MEC
share the same edges, regardless of the direction of those edges, and the same
colliders whose parents are not adjacent. Chain and Fork share the same
independencies, hence, they belong to the same MEC (Figure 5).
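The skeleton-plus-v-structures criterion can be made concrete with a short sketch; graphs are encoded as hypothetical child → parents dictionaries:

```python
def skeleton(parents):
    """Undirected adjacencies of a DAG given as a child -> parents dict."""
    return {frozenset((child, p)) for child, ps in parents.items() for p in ps}

def v_structures(parents):
    """Colliders a -> child <- b whose parents a, b are non-adjacent."""
    skel = skeleton(parents)
    vs = set()
    for child, ps in parents.items():
        for a in ps:
            for b in ps:
                if a < b and frozenset((a, b)) not in skel:
                    vs.add((a, child, b))
    return vs

def markov_equivalent(g1, g2):
    """Verma & Pearl criterion: same skeleton and same v-structures."""
    return skeleton(g1) == skeleton(g2) and v_structures(g1) == v_structures(g2)

chain = {"X": [], "Y": ["X"], "Z": ["Y"]}       # X -> Y -> Z
fork = {"X": ["Y"], "Y": [], "Z": ["Y"]}        # X <- Y -> Z
collider = {"X": [], "Y": ["X", "Z"], "Z": []}  # X -> Y <- Z
```

The chain and the fork share a skeleton and have no v-structures, so they fall in the same MEC (Figure 5); the collider does not.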
###### Definition 5 (Markov Blanket)
For any variable X, its Markov blanket (MB) is the set of variables such that
X is independent of all other variables given MB. The members in the Markov
blanket of any variable will include all of its parents, children, and
spouses.
Figure 5: Markov Equivalence in Chains and Fork.
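A minimal sketch of computing a Markov blanket directly from the definition (parents, children, and the children's other parents), on a small hypothetical DAG:

```python
# A small hypothetical DAG, encoded as a child -> parents dictionary:
# A -> X -> C -> D and B -> C.
parents = {"A": [], "B": [], "X": ["A"], "C": ["X", "B"], "D": ["C"]}

def children(node):
    return [v for v, ps in parents.items() if node in ps]

def markov_blanket(node):
    mb = set(parents[node])                             # parents
    for ch in children(node):
        mb.add(ch)                                      # children
        mb.update(p for p in parents[ch] if p != node)  # spouses
    return mb

# The blanket of X is its parent A, its child C, and C's other parent B.
```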
Markov equivalence in different types of DAGs may vary. A partially directed
acyclic graph (PDAG), a.k.a. an essential graph (Perković et al. (2017)), can represent an equivalence
class of DAGs. Each equivalent class of DAGs can be uniquely represented by a
PDAG. A completed PDAG or CPDAG represents the union (over the set of edges)
of Markov equivalent DAGs, and can uniquely represent an MEC (Malinsky &
Spirtes (2016b)). More specifically, in a CPDAG, an undirected edge between
any two nodes $X$ and $Y$ indicates that some DAG in the equivalence class
contains the edge $X$→$Y$ and some DAG may contain $Y$→$X$. Figure 6 shows a
CPDAG and the DAGs ($G$ and $H$) belonging to an equivalence class.
Markov equivalence in the case of ancestral graphs works as follows. A maximal
ancestral graph (MAG) represents a DAG where all hidden variables are
marginalized out and preserves all conditional independence relations among
the variables that are true in the underlying DAG. That is, MAGs can model
causality and conditional independencies in causally insufficient systems
(Triantafillou & Tsamardinos (2016)). Partial ancestral graphs (PAGs)
represent an equivalence class of MAGs where the edge marks shared by all
members of the class are displayed, and circles are placed at the marks that
are not shared. PAGs represent all of the observed d-separation
relations in a DAG. Different PAGs that represent distinct equivalence classes
of MAGs involve different sets of conditional independence constraints. An MEC
of MAGs can be represented by a PAG (Malinsky & Spirtes (2016b)).
Figure 6: DAGs $G$ and $H$ belong to the same MEC. The leftmost graph is a
CPDAG of $G$ and $H$ with an undirected edge ($-$) between $X$ and $Z$, and
the rest of the edges same as in $G$ and $H$.
### 2.3 Structural Causal Models
Pearl (2009) defined a _class of models_ for formalizing structural knowledge
about the _data-generating process_ known as the structural causal models
(SCMs). The SCMs are valuable tools for reasoning and decision-making in
causal analysis since they are capable of representing the underlying causal
story of data (Kaddour et al. (2022)).
###### Definition 6 (Structural Causal Model)
(Pearl (2009)) A structural causal model is a 4-tuple $M=\langle
U,V,F,P(u)\rangle$, where
1. i.
$U$ is a set of background variables (also called exogenous) that are
determined by factors outside the model.
2. ii.
$V$ is a set $\\{V_{1},V_{2},\ldots,V_{n}\\}$ of endogenous variables that are
determined by variables in the model, viz. variables in $U\cup V$.
3. iii.
$F$ is a set of functions $\\{f_{1},f_{2},\ldots,f_{n}\\}$ such that each
$f_{i}$ is a mapping from the respective domains of $U_{i}\cup{PA}_{i}$ to
$V_{i}$ and the entire set $F$ forms a mapping from $U$ to $V$. In other
words, each $f_{i}$ assigns a value to the corresponding $V_{i}\in V$,
$v_{i}\leftarrow f_{i}(pa_{i},u_{i}),$ for $i=1,2,\ldots n$.
4. iv.
$P(u)$ is a probability function defined over the domain of $U$.
Each SCM $M$ is associated with a causal graphical model $G$ that is a DAG,
and a set of functions $f_{i}$. Causation in SCMs can be interpreted as
follows: a variable $Y$ is directly caused by $X$ if $X$ is in the function
$f$ of $Y$. In
the SCM of Figure 7, $X$ is a direct cause of $Y$ as $X$ appears in the
function that assigns $Y$’s value. That is, if a variable $Y$ is the child of
another variable $X$, then $X$ is a direct cause of $Y$. In Figure 7, $U_{X}$,
$U_{Y}$ and $U_{Z}$ are the exogenous variables; $X$, $Y$ and $Z$ are the
endogenous variables, and $f_{X}$, $f_{Y}$ & $f_{Z}$ are the functions that
assign values to the variables in the system. Any variable is an exogenous
variable if $(i)$ it is an unobserved or unmeasured variable and $(ii)$ it
cannot be a descendant of any other variables. Every endogenous variable is a
descendant of at least one exogenous variable.
Figure 7: A Structural Causal Model (SCM) with causal graph $G$ and functions
$f_{X}$, $f_{Y}$ and $f_{Z}$, which denote how the variables $X$, $Y$, and $Z$
are generated respectively.
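Sampling from an SCM amounts to drawing the exogenous terms from $P(u)$ and then evaluating the structural assignments in topological order. In the sketch below the noise distributions and the functional forms of $f_X$, $f_Y$, $f_Z$ are illustrative choices of ours, not the ones in Figure 7:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Exogenous variables U_X, U_Y, U_Z drawn from P(u) (illustrative: standard normal).
u_x, u_y, u_z = (rng.normal(size=n) for _ in range(3))

# Structural assignments F: each endogenous variable gets its value from its
# parents and its own exogenous term, v_i <- f_i(pa_i, u_i).
x = u_x               # f_X: X has no endogenous parents
y = 2.0 * x + u_y     # f_Y: X appears in f_Y, so X is a direct cause of Y
z = -1.0 * y + u_z    # f_Z: Y appears in f_Z, so Y is a direct cause of Z
```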
### 2.4 Causal Assumptions
Often, the available data provide only partial information about the
underlying causal story. Hence, it is essential to make some assumptions about
the world for performing causal discovery (Lee & Honavar (2020)). Following
are the common assumptions usually made by causal discovery algorithms.
1. i.
_Causal Markov Condition (CMC):_ The causal Markov assumption states that a
variable $X$ is independent of every other variable (except its descendants)
conditional on all of its direct causes (Scheines (1997)). That is, the CMC
requires that every variable in the causal graph is independent of its non-
descendants conditional on its parents (Malinsky & Spirtes (2016a)). In Figure
8, $W$ is the only descendant of $X$. As per the CMC, $X$ is independent of
$Z$ conditioned on its parent $Y$ ($X$ $\perp\\!\\!\\!\perp$ $Z$ $|$ $Y$).
Figure 8: Illustration of the causal Markov condition (CMC) among four
variables.
2. ii.
_Causal Faithfulness Condition (CFC):_ The faithfulness assumption states that
except for the variables that are d-separated in a DAG, all other variables
are dependent. More specifically, for a set of variables $V$ whose causal
structure is represented by a DAG $G$, no conditional independence holds
unless entailed by the causal Markov condition (Ramsey et al. (2012)). That
is, the CFC, a.k.a. the Stability condition, is a converse principle of the CMC.
CFC can be also explained in terms of d-separation as follows: For every three
disjoint sets of variables $X$, $Y$, and $Z$, if $X$ and $Y$ are not
d-separated by $Z$ in the causal DAG, then $X$ and $Y$ are not independent
conditioned on $Z$ (Ramsey et al. (2012)). The faithfulness assumption may
fail in certain scenarios. For example, it fails whenever there exist two
paths with equal and opposite effects between variables. It also fails in
systems with deterministic relationships among variables, and also, when there
is a failure of transitivity along a single path (Weinberger (2018)).
3. iii.
_Causal Sufficiency:_ The causal sufficiency assumption states that there
exist no latent/hidden/unobserved confounders, and all the common causes are
measured. Thus, the assumption of causal sufficiency is satisfied only when
all the common causes of the measured variables are measured. This is a strong
assumption as it restricts the search space of all possible DAGs that may be
inferred. However, real-world datasets frequently contain hidden confounders
that violate this assumption. Algorithms
that rely on causal sufficiency may suffer degraded performance
in such scenarios. The causal insufficiency in real-world datasets may be
overcome by leveraging domain knowledge in the discovery pipeline. The CMC
tends to fail for a causally insufficient set of variables.
4. iv.
_Acyclicity:_ It is the most common assumption which states that there are no
cycles in a causal graph. That is, a graph needs to be acyclic in order to be
a causal graph. As per the acyclicity condition, there can be no directed
paths starting from a node and ending back to itself. This resembles the
structure of a directed acyclic graph (DAG). A recent approach (Zheng et al.
(2018)) has formulated a new function (Equation 2) to enforce the acyclicity
constraint during causal discovery in continuous optimization settings. The
weighted adjacency matrix $W$ in Equation 2 is a DAG if it satisfies the
following condition where $\circ$ is the Hadamard product, $e^{W\circ W}$ is
the matrix exponential of $W\circ W$, and $d$ is the total number of vertices.
$h(W)=tr(e^{W\circ W})-d=0$ (2)
5. v.
_Data Assumptions:_ There can be different types of assumptions about the
data. Data may have linear or nonlinear dependencies and can be continuously
valued or discrete valued in nature. Data can be independent and identically
distributed (I.I.D.) or the data distribution may shift with time (e.g. time-
series data). Also, the data may belong to different noise distributions such
as Gaussian, Gumbel, or Exponential noise. Occasionally, some other data
assumptions such as the existence of selection bias, missing variables, hidden
confounders, etc. are found. However, in this survey, we do not focus much on
the methods with these assumptions.
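The acyclicity function of Equation 2 is easy to evaluate with NumPy alone, using the fact that $tr(e^{A})$ equals the sum of $e^{\lambda_i}$ over the eigenvalues $\lambda_i$ of $A$; the weight matrices below are illustrative:

```python
import numpy as np

def h(W):
    # h(W) = tr(exp(W ∘ W)) - d, with ∘ the Hadamard (elementwise) product.
    # tr(exp(A)) is computed as the sum of exp over A's eigenvalues.
    A = W * W
    return np.sum(np.exp(np.linalg.eigvals(A))).real - W.shape[0]

# Acyclic weights (edges 0 -> 1 and 1 -> 2 only): h is zero.
W_dag = np.array([[0.0, 1.5, 0.0],
                  [0.0, 0.0, 2.0],
                  [0.0, 0.0, 0.0]])

# Cyclic weights (0 -> 1 and 1 -> 0): h is strictly positive.
W_cyc = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
```

This is what makes the constraint usable in continuous optimization: $h(W)=0$ exactly characterizes DAGs, and $h$ is differentiable in $W$.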
Causal Discovery Algorithms (for I.I.D. data):

- Constraint-based: PC (3.1.1), FCI (3.1.2), Anytime FCI (3.1.3), RFCI (3.1.4), FCI with TBK∗ (3.1.5), PC-stable (3.1.6), PKCL∗ (3.1.7)
- Score-based: GES (3.2.1), FGS (3.2.2), SGES (3.2.3), RL-BIC (3.2.4), A-star search (3.2.5), Triplet A-star (3.2.6), KCRL∗ (3.2.7)
- FCM-based: LiNGAM (3.3.1), ANM (3.3.2), PNL (3.3.3), DirectLiNGAM∗ (3.3.4), SAM (3.3.5), CGNN (3.3.6), CAM (3.3.7)
- Gradient-based: NOTEARS⋄ (3.4.1), GraN-DAG⋄ (3.4.2), GAE (3.4.3), DAG-GNN⋄ (3.4.4), GOLEM⋄ (3.4.5), DAG-NoCurl (3.4.6), ENCO⋄ (3.4.7)
- Miscellaneous: FRITL (3.5.2), HCM (3.5.3), ETIO∗ (3.5.6), JCI∗ (3.5.8), Kg2Causal∗ (3.5.9), Meta-RL (3.5.11), LFCM (3.5.13)
Figure 9: Taxonomy of some causal discovery approaches for I.I.D. data. The
approaches are classified based on their core contribution or the primary
strategy they adopt for causal structure recovery. The approaches that
leverage prior knowledge are marked by an $\ast$ symbol. Some of the gradient-
based optimization approaches that use a score function are indicated by a
$\diamond$ symbol. They are primarily classified as gradient-based methods
because of the use of gradient descent for optimization. However, they can be
a score-based method too as they compute data likelihood scores on the way.
## 3 Causal Discovery Algorithms for I.I.D. Data
Causal graphs are essential as they represent the underlying causal story
embedded in the data. There are two very common approaches to recovering the
causal structure from observational data, _i) Constraint-based_ (Spirtes et
al. (2000b), Spirtes (2001), Colombo et al. (2012)) and _ii) Score-based_
(Chickering (2002)). Among the other types of approaches, _functional causal
models (FCMs)-based_ (Shimizu et al. (2006), Hoyer et al. (2008)) approaches
and _hybrid_ approaches (Tsamardinos et al. (2006)) are noteworthy. Recently,
some _gradient-based_ approaches have been proposed based on neural networks
(Abiodun et al. (2018)) and a modified definition (Equation 2) of the
acyclicity constraint (Zheng et al. (2018), Yu et al. (2019)). Other
approaches include those that prioritize the use of _background knowledge_
and provide ways to incorporate prior knowledge and expert opinion into the
search process (Wang et al. (2020); Sinha & Ramsey (2021)). In this section,
we provide an overview of the causal discovery algorithms for I.I.D. data
based on the different types of approaches mentioned above. The algorithms
are distinguished primarily by the core approach they follow to perform
causal discovery. We further discuss noteworthy similar approaches
specialized for non-I.I.D. or time series data in Section 4.
### 3.1 Constraint-based
Testing for conditional independence (CI) is a core objective of constraint-
based causal discovery approaches. Conditional independence tests can be used
to recover the causal skeleton if the probability distribution of the observed
data is faithful to the underlying causal graph (Marx & Vreeken (2019)). Thus,
constraint-based approaches conduct CI tests between the variables to check
for the presence or absence of edges. These approaches infer the conditional
independencies within the data using the _d-separation criterion_ to search
for a DAG that entails these independencies, and detect which variables are
d-separated and which are d-connected (Triantafillou & Tsamardinos (2016)).
In Figure 10 (a), $X$ is conditionally independent of $Z$ given $Y$, i.e. $X$
$\perp\\!\\!\\!\perp$ $Z$ $|$ $Y$; in Figure 10 (b), $X$ and $Z$ are
independent, but are not conditionally independent given $Y$.
Table 4 lists different types of CI tests used by constraint-based causal
discovery approaches.
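For illustration, the partial-correlation (Fisher z) CI test commonly used for linear-Gaussian data can be sketched as follows. This is a minimal sketch under a linear-Gaussian assumption; the function name and default significance level are our own choices, not taken from any particular toolbox:

```python
import numpy as np
from scipy import stats

def ci_test_fisher_z(data, x, y, cond, alpha=0.05):
    """Test X _||_ Y | cond via partial correlation and Fisher's z-transform.

    data: (n_samples, n_vars) array; x, y: column indices; cond: list of
    conditioning indices. Returns True when independence is NOT rejected.
    """
    n = data.shape[0]
    idx = [x, y] + list(cond)
    # Partial correlation from the inverse of the correlation sub-matrix.
    corr = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.pinv(corr)
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])
    r = np.clip(r, -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r))           # Fisher z-transform
    stat = np.sqrt(n - len(cond) - 3) * abs(z)    # ~ N(0, 1) under H0
    p_value = 2 * (1 - stats.norm.cdf(stat))
    return p_value > alpha
```

On data generated from the chain $X \rightarrow Y \rightarrow Z$, such a test accepts $X$ $\perp\\!\\!\\!\perp$ $Z$ $|$ $Y$ while rejecting marginal independence of $X$ and $Z$.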
Figure 10: (a) $X$ $\perp\\!\\!\\!\perp$ $Z$ $|$ $Y$ and (b) $X$ and $Z$ are not conditionally independent given $Y$.
Table 4: Types of conditional independence (CI) tests. Please refer to the study Runge (2018) for a detailed discussion on CI tests.
| Conditional Independence Test | Ref.
---|---|---
1. | Conditional Distance Correlation (CDC) test | Wang et al. (2015)
2. | Momentary Conditional Independence (MCI) | Runge et al. (2019)
3. | Kernel-based CI test (KCIT) | Zhang et al. (2012)
4. | Randomized Conditional Correlation Test (RCoT) | Strobl et al. (2019)
5. | Generative Conditional Independence Test (GCIT) | Bellot & van der Schaar (2019)
6. | Model-Powered CI test | Sen et al. (2017)
7. | Randomized Conditional Independence Test (RCIT) | Strobl et al. (2019)
8. | Kernel Conditional Independence Permutation Test | Doran et al. (2014)
9. | Gaussian Processes and Distance Correlation-based (GPDC) | Rasmussen et al. (2006)
10. | Conditional mutual information estimated with a k-nearest neighbor estimator (CMIKnn) | Runge (2018)
#### 3.1.1 PC
The Peter-Clark (PC) algorithm (Spirtes et al. (2000b)) is one of the oldest
constraint-based algorithms for causal discovery. To learn the underlying
causal structure, this approach depends largely on conditional independence
(CI) tests. This is because it is based on the concept that two statistically
independent variables are not causally linked. The outcome of the PC algorithm
is a CPDAG. It learns the CPDAG of the underlying DAG in three steps: _Step 1
- Skeleton identification, Step 2 - V-structures determination, and Step 3 -
Edge orientations_. It starts with a fully connected undirected graph using
every variable in the dataset, then eliminates the unconditionally and
conditionally independent edges (skeleton detection), then it finds and
orients the v-structures or colliders (i.e. X → Y ← Z) based on the
d-separation set of node pairs, and finally orients the remaining edges based
on two aspects: i) availability of no new v-structures, and ii) not allowing
any cycle formation. The assumptions made by the PC algorithm include
acyclicity, causal faithfulness, and causal sufficiency. It is computationally
more feasible for sparse graphs. An implementation of this algorithm can be
found in the CDT repository
(https://github.com/ElementAI/causal_discovery_toolbox) and also, in the
gCastle toolbox (Zhang et al. (2021a)). A number of the constraint-based
approaches namely FCI, RFCI, PCMCI, PC-stable, etc. use the PC algorithm as a
backbone to perform the CI tests.
Figure 11: Step-by-step workflow of the PC (Spirtes et al. (2000b))
algorithm.
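The skeleton-identification step (Step 1) can be sketched as follows. This is a simplified illustration only, not the full PC algorithm: it uses a Fisher-z partial-correlation test (a linear-Gaussian assumption) and stops before the v-structure and edge-orientation steps:

```python
import numpy as np
from itertools import combinations
from scipy import stats

def _indep(data, x, y, cond, alpha):
    # Fisher-z partial-correlation CI test (linear-Gaussian assumption).
    n = data.shape[0]
    sub = np.corrcoef(data[:, [x, y] + list(cond)], rowvar=False)
    prec = np.linalg.pinv(sub)
    r = np.clip(-prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1]),
                -0.999999, 0.999999)
    z = 0.5 * np.log((1 + r) / (1 - r))
    p = 2 * (1 - stats.norm.cdf(np.sqrt(n - len(cond) - 3) * abs(z)))
    return p > alpha

def pc_skeleton(data, alpha=0.01):
    """Step 1 of PC: prune a complete undirected graph with CI tests.

    Returns (adjacency sets, separating sets); the separating sets are
    what Step 2 would later use to orient v-structures.
    """
    d = data.shape[1]
    adj = {i: set(range(d)) - {i} for i in range(d)}
    sepset = {}
    level = 0  # size of the conditioning sets tested at this pass
    while any(len(adj[i]) - 1 >= level for i in range(d)):
        for i in range(d):
            for j in list(adj[i]):
                if j not in adj[i]:
                    continue  # edge already removed this pass
                others = adj[i] - {j}
                if len(others) < level:
                    continue
                for S in combinations(sorted(others), level):
                    if _indep(data, i, j, list(S), alpha):
                        adj[i].discard(j)
                        adj[j].discard(i)
                        sepset[(i, j)] = sepset[(j, i)] = set(S)
                        break
        level += 1
    return adj, sepset
```

On chain data $X \rightarrow Y \rightarrow Z$, the sketch recovers the skeleton $X - Y - Z$ and records $\{Y\}$ as the separating set of $X$ and $Z$.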
#### 3.1.2 FCI
The Fast Causal Inference (FCI) algorithm (Spirtes et al. (2000a)) is a
variant of the PC algorithm that can infer conditional independencies and
learn causal relations in the presence of many arbitrary latent and selection
variables. As a result, it is accurate in the large sample limit with high
probability even when there exist hidden variables and selection bias (Berk
(1983)). The first step of the FCI algorithm is similar to that of the PC
algorithm,
where it starts with a complete undirected graph to perform the skeleton
determination. After that, it requires additional tests to learn the correct
skeleton and has additional orientation rules. In the worst case, the number
of conditional independence tests performed by the algorithm grows
exponentially with the number of variables in the dataset. This can affect
both the speed and the accuracy of the algorithm in the case of small data
samples. To improve the algorithm, particularly in terms of speed, there exist
different variants such as the RFCI (Colombo et al. (2012)) and the Anytime
FCI (Spirtes (2001)) algorithms.
#### 3.1.3 Anytime FCI
Anytime FCI (Spirtes (2001)) is a modified and faster version of the FCI
(Spirtes et al. (2000a)) algorithm. The number of CI tests required by FCI
makes it infeasible if the model has a large number of variables. Moreover,
when the FCI requires independence tests conditional on a large set of
variables, the accuracy decreases for a small sample size. The outer loop of
the FCI algorithm performs independence tests conditional on the increasing
size of variables. In the anytime FCI algorithm, the authors showed that this
outer loop can be stopped anytime during the execution for any smaller
variable size. As the number of variables in the conditional set reduces,
anytime FCI becomes much faster for the large sample size. More importantly,
it is also more reliable on limited samples since the statistical tests with
the lowest power are discarded. To support the claim, the authors provided
proof for the change in FCI that guarantees good results despite the
interruption. The result of the interrupted anytime FCI algorithm is still
valid, but because it may leave more questions unanswered, it can be less
informative than the result obtained when the algorithm is allowed to run
uninterrupted.
#### 3.1.4 RFCI
Really Fast Causal Inference (RFCI) (Colombo et al. (2012)) is a much faster
variant of the traditional FCI for learning PAGs that uses fewer CI tests than
FCI. Like FCI, RFCI does not assume that causal sufficiency holds. To ensure
soundness, RFCI performs some additional tests before orienting v-structures
and discriminating paths. It conditions only on subsets of the adjacency sets
and unlike FCI, avoids the CI tests given subsets of possible d-separation
sets which can become very large even for sparse graphs. As a result, the
number of these additional tests and the size of their conditioning sets are
small for sparse graphs, which makes RFCI much faster and more computationally
feasible than FCI for high-dimensional sparse graphs. Also, the lower
computational complexity of RFCI leads to high-dimensional consistency results
under weaker conditions than FCI.
#### 3.1.5 FCI with Tiered Background Knowledge
Andrews et al. (2020) show that the Fast Causal Inference (FCI) algorithm
(Spirtes et al. (2000a)) is sound and complete with tiered background
knowledge (TBK). _Tiered background knowledge_ means any knowledge by which
the variables may be partitioned into two or more mutually exclusive and
exhaustive subsets among which there is a known causal order. Tiered
background knowledge may arise in many different situations, including but not
limited to instrumental variables, data from multiple contexts and
interventions, and temporal data with contemporaneous confounding. The proof
that FCI is complete with TBK suggests that the algorithm is able to find all
of the causal relationships that are identifiable from tiered background
knowledge and observational data under the typical assumptions.
#### 3.1.6 PC-stable
The independence tests in the original PC method are prone to errors when only
a few samples are available. Additionally, because the graph is updated
dynamically, maintaining or deleting an edge incorrectly will affect the
neighboring sets of other nodes. As a result, the sequence in which the CI
tests are run will affect the output graph. Despite the fact that this order
dependency is not a significant issue in low-dimensional situations, it is a
severe problem in high-dimensional settings. To solve this problem, Colombo et
al. (2014) suggested changing the original PC technique to produce a stable
output skeleton that is independent of the input dataset’s variable ordering.
This approach, known as the stable-PC algorithm, queries and maintains the
neighbor (adjacent) sets of every node at each distinct level. Since the
conditioning sets of the other nodes are unaffected by an edge deletion at one
level, the outcome is independent of the variable ordering. They demonstrated
that this updated version greatly outperforms the original algorithm in high-
dimensional settings while matching the original algorithm's performance in
low-dimensional settings. However, this modification further lengthens the
algorithm's runtime by requiring additional CI checks to be done at each
level. The R-package pcalg contains the source code for PC-stable.
#### 3.1.7 PKCL
Wang et al. (2020) proposed an algorithm, Prior-Knowledge-driven Local Causal
Structure Learning (PKCL), to discover the underlying causal mechanism between
bone mineral density (BMD) and its factors from clinical data. It first
discovers the neighbors of the target variables and then detects the
MaskingPCs to eliminate their effect. After that, it finds the spouse of
target variables utilizing the neighbors set. This way the skeleton of the
causal network is constructed. In the global stage, PKCL leverages the _Markov
blanket (MB)_ sets learned in the local stage to learn the global causal
structure in which prior knowledge is incorporated to guide the global
learning phase. Specifically, it learns the causal direction between feature
variables and target variables by combining the constraint-based and score-
based structure search methods. Also, in the learning phase, it automatically
adds causal directions according to the available prior knowledge.
### 3.2 Score-based
Score-based causal discovery algorithms search over the space of all possible
DAGs to find the graph that best explains the data. Typically, any score-based
approach has two main components: (i) a search strategy to explore the
possible search states or space of candidate graphs $G^{{}^{\prime}}$, and
(ii) a score function to assess the candidate causal graphs. The search
strategy along with a score function helps to optimize the search over the
space of all possible DAGs. More specifically, a score function
$S(G^{{}^{\prime}},D)$ maps causal graphs $G^{{}^{\prime}}$ to a numerical
score, based on how well $G^{{}^{\prime}}$ fits a given dataset $D$. A
commonly used score function to select causal models is the Bayesian
Information Criterion (BIC) (Schwarz (1978a)) which is defined below:
$\mathcal{S}(G^{{}^{\prime}},D)=-2\log\mathcal{L}(G^{{}^{\prime}},D)+k\log n,$ (3)
where $n$ is the number of samples in $D$, $k$ is the dimension of
$G^{{}^{\prime}}$ and $\mathcal{L}$ is the maximum-likelihood function
associated with the candidate graph $G^{{}^{\prime}}$. The lower the BIC
score, the better the model. BDeu, BGe, MDL, etc. (listed in Table 5) are some
of the other commonly used score functions. These objective functions are
optimized through a heuristic search for model selection. After evaluating the
quality of the candidate causal graphs using the score function, the score-
based methods output one or more causal graphs that achieve the highest score
(Huang et al. (2018b)). We discuss some of the well-known approaches in this
category below.
Figure 12: General components of a score-based causal discovery approach.
Table 5: Some commonly used score functions for causal discovery. Please refer to the study Huang et al. (2018a) for a detailed discussion of the score functions.
Score Function/Criterion | Ref.
---|---
Minimum description length (MDL) | Schwarz (1978b)
Bayesian information criterion (BIC) | Schwarz (1978a)
Akaike information criterion (AIC) | Akaike (1998)
Bayesian Dirichlet equivalence score (BDeU) | Buntine (1991)
Bayesian metric for Gaussian networks (BGe) | Geiger & Heckerman (1994)
Factorized normalized maximum likelihood (fNML) | Silander et al. (2008)
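For a linear-Gaussian model, Equation 3 can be instantiated by fitting each node on its parents with least squares. This is only one common instantiation; the parameter count $k$ below (one coefficient per edge plus one noise variance per node) is one convention among several:

```python
import numpy as np

def bic_score(data, parents):
    """BIC (Equation 3) of a candidate DAG under a linear-Gaussian model.

    data: (n, d) array; parents: dict mapping node -> list of parent indices
    (nodes absent from the dict have no parents). Lower scores are better.
    """
    n, d = data.shape
    loglik = 0.0
    k = 0
    for j in range(d):
        pa = parents.get(j, [])
        y = data[:, j]
        if pa:
            X = np.column_stack([data[:, pa], np.ones(n)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
        else:
            resid = y - y.mean()
        var = resid.var()
        # Gaussian maximum log-likelihood of node j given its parents.
        loglik += -0.5 * n * (np.log(2 * np.pi * var) + 1)
        k += len(pa) + 1  # edge coefficients plus one noise variance
    return -2 * loglik + k * np.log(n)
```

On data from the chain $X \rightarrow Y \rightarrow Z$, the true parent sets score lower (better) than both the empty graph and a graph with a mis-specified parent.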
#### 3.2.1 GES
Greedy Equivalence Search (GES) (Chickering (2002)) is one of the oldest
score-based causal discovery algorithms that perform a greedy search over the
space of equivalence classes of DAGs. Each search state is represented by a
CPDAG where some insert and delete operators allow for single-edge additions
and deletions respectively. Primarily, GES works in two phases: i) Forward
Equivalence Search (FES), and ii) Backward Equivalence Search (BES). In the
first phase, FES starts with an empty CPDAG (no-edge model), and greedily adds
edges by taking into account every single-edge addition that could be
performed to every DAG in the current equivalence class. After an edge
modification is done to the current CPDAG, a score function is used to score
the model. The modification is kept only if the new score is better than the
current score. When the forward phase reaches a local maximum, the
second phase, BES starts where at each step, it takes into account all single-
edge deletions that might be allowed for all DAGs in the current equivalence
class. The algorithm terminates once the local maximum is found in the second
phase. Implementation of GES is available at the following Python packages:
Causal Discovery Toolbox or CDT (Kalainathan & Goudet (2019)) and gCastle
(Zhang et al. (2021a)). GES assumes that the score function is decomposable
and can be expressed as a sum of the scores of individual nodes and their
parents. A summary workflow of GES is shown in Figure 13.
Figure 13: Different stages in the GES algorithm.
#### 3.2.2 FGS
Fast Greedy Search (FGS) (Ramsey (2015)) is another score-based method that is
an optimized version of the GES algorithm (Chickering (2002)). This optimized
algorithm is based on the faithfulness assumption and uses an alternative
method to reduce scoring redundancy. An ascending list $L$ is introduced which
stores the score difference of arrows. After making a thorough search, the
first edge e.g. $X$ $\rightarrow$ $Y$ is inserted into the graph and the graph
pattern is reverted. For variables that are adjacent to $X$ or $Y$ with
positive score differences, new edges are added to $L$. This process repeats
in the forward phase until $L$ becomes empty. Then the reverse phase starts,
refilling the list $L$ and continuing until $L$ is empty again. In the
reported experiments, GES was able to search over 1000 samples with
50,000 variables in 13 minutes using a 4-core processor and 16GB RAM computer.
Following the new scoring method, FGS was able to complete the task with 1000
samples on 1,000,000 variables for sparse models in 18 hours using a
supercomputer having 40 processors and 384GB RAM at the Pittsburgh
Supercomputing Center. The code for FGS is available on GitHub as a part of
the Tetrad project: https://github.com/cmu-phil/tetrad.
#### 3.2.3 SGES
Selective Greedy Equivalence Search (SGES) (Chickering & Meek (2015)) is
another score-based causal discovery algorithm that is a restrictive variant
of the GES algorithm (Chickering (2002)). By assuming perfect generative
distribution, SGES provides a polynomial performance guarantee yet maintains
the asymptotic accuracy of GES. It does so while preserving the algorithm's
large-sample guarantees, by ignoring all but a small fraction of the backward
search operators that GES considers. In the forward phase, SGES
uses a polynomial number of insert operation calls to the score function. In
the backward phase, it consists of only a subset of delete operators of GES
which include, consistent operators to preserve GES’s consistency over large
samples. The authors demonstrated that, for a given set of graph-theoretic
complexity features, such as maximum-clique size, the maximum number of
parents, and v-width, the number of score assessments by SGES can be
polynomial in the number of nodes and exponential in these complexity
measurements.
#### 3.2.4 RL-BIC
RL-BIC is a score-based approach that uses _Reinforcement Learning (RL)_ and a
BIC score to search for the DAG with the best reward (Zhu et al. (2019)). For
data-to-graph conversion, it uses an _encoder-decoder architecture_ that takes
observational data as input and generates graph adjacency matrices that are
used to compute rewards. The reward incorporates a BIC score function and two
penalty terms for enforcing acyclicity. The _actor-critic RL algorithm_ is
used as a _search strategy_ and the final output is the causal graph that
achieves the best reward among all the generated graphs. The approach is
applicable to small and medium graphs of up to 30 nodes. However, dealing with
large and very large graphs remains a challenge. The authors mention that
their future work involves developing a more efficient and effective
score function since computing scores is much more time-consuming than
training NNs. The original implementation of the approach is available at:
https://github.com/huawei-noah/trustworthyAI.
Figure 14: Components of the RL-BIC (Zhu et al. (2019)) approach.
#### 3.2.5 A* search
Xiang & Kim (2013) proposed a one-stage method for learning sparse network
structures with continuous variables using the A* search algorithm with lasso
in its scoring system. This method increased the computational effectiveness
of popular exact methods based on dynamic programming. The study demonstrated
how the proposed approach achieved comparable or better accuracy with
significantly faster computation time when compared to two-stage approaches,
including L1MB and SBN. Along with that, a heuristic approach was added that
increased A* lasso’s effectiveness while maintaining the accuracy of the
outcomes. In high-dimensional spaces, this is a promising approach for
learning sparse Bayesian networks.
#### 3.2.6 Triplet A*
Lu et al. (2021) uses the _A* exhaustive search_ (Yuan & Malone (2013))
combined with an optimal BIC score that requires milder assumptions on data
than conventional CD approaches to guarantee its asymptotic correctness. The
optimal BIC score combined with the exhaustive search finds the MEC of the
true DAG if and only if the true DAG satisfies the optimal BIC Condition. To
gain scalability, they also developed an approximation algorithm for complex
large systems based on the A* method. This extended approach is named Triplet
A* which can scale up to more than 60 variables. This extended method is
rather general and can be used to scale up other exhaustive search approaches
as well. Triplet A* can particularly handle linear Gaussian and non-Gaussian
networks. It works in the following way. Initially, it makes a guess about the
parents and children of each variable. Then for each variable $X$ and its
neighbors $(Y,Z)$, it forms a cluster consisting of $X,Y,Z$ with their direct
neighbors and runs an exhaustive search on each cluster. Lastly, it combines
the results from all clusters. The study shows that empirically Triplet A*
outperforms GES for large dense networks.
#### 3.2.7 KCRL
Prior Knowledge-based Causal Discovery Framework with Reinforcement Learning
a.k.a. KCRL (Hasan & Gani (2022)) is a framework for causal discovery that
utilizes prior knowledge as constraints and penalizes the search process for
violation of these constraints. This utilization of background knowledge
significantly improves performance by reducing the search space, and also,
enabling a faster convergence to the optimal causal structure. KCRL leverages
reinforcement learning (RL) as the search strategy where the RL agent is
penalized each time for the violation of any imposed knowledge constraints. In
the KCRL framework (Figure 15), at first, the observational data is fed to an
RL agent. Here, data-to-adjacency matrix conversion is done using an encoder-
decoder architecture which is a part of the RL agent. At every iteration, the
agent produces an equivalent adjacency matrix of the causal graph. A
comparator compares the generated adjacency matrix with the true causal edges
in the prior knowledge matrix $P_{m}$, and thereby, computes a penalty $p$ for
the violation of any ground truth edges in the produced graph. Each generated
graph is also scored using a standard scoring function such as BIC. A reward
$R$ is estimated as a sum of the BIC score $S_{BIC}$, the penalty for
acyclicity $h(W)$, and $\beta$ weighted prior knowledge penalty $\beta p$.
Finally, the entire process halts when the stopping criterion $S_{c}$ is
reached, and the best-rewarded graph is the final output causal graph.
Although originally KCRL was designed for the healthcare domain, it can be
used in any other domain for causal discovery where some prior knowledge is
available. Code for KCRL is available at https://github.com/UzmaHasan/KCRL.
$R=S_{BIC}+\beta p+h(W)$ (4)
Figure 15: The KCRL (Hasan & Gani (2022)) framework.
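The reward in Equation 4 can be sketched as below. Two assumptions are made for illustration: the acyclicity term $h(W)$ is taken to be the NOTEARS-style characterization $\operatorname{tr}(e^{W\circ W})-d$, and the prior-knowledge penalty $p$ is taken to count required edges absent from the candidate graph; the exact definitions in KCRL may differ:

```python
import numpy as np
from scipy.linalg import expm

def acyclicity_penalty(W):
    """h(W) = tr(exp(W o W)) - d, where o is the elementwise product.
    Zero iff the weighted adjacency W corresponds to a DAG (assumed
    NOTEARS-style characterization)."""
    d = W.shape[0]
    return np.trace(expm(W * W)) - d

def kcrl_reward(s_bic, W, prior_matrix, beta=1.0):
    """Reward of Equation 4: R = S_BIC + beta*p + h(W).

    prior_matrix holds 1 where an edge is known to exist; p counts known
    edges that are missing from the candidate adjacency W (an assumed,
    simplified form of the knowledge-violation penalty)."""
    A = (np.abs(W) > 0).astype(float)
    p = float(((prior_matrix == 1) & (A == 0)).sum())
    return s_bic + beta * p + acyclicity_penalty(W)
```

A candidate graph that contains the known edge incurs no knowledge penalty, while one that omits it is penalized by $\beta$ per violated edge.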
Another recent method called KGS (Hasan & Gani (2023)) leverages prior causal
information such as the presence or absence of a causal edge to guide a greedy
score-based causal discovery process towards a more restricted and accurate
search space. It demonstrates how the search space as well as scoring
candidate graphs can be reduced when different edge constraints are leveraged
during a search over equivalence classes of causal networks. It concludes that
any type of edge information is useful to improve the accuracy of the graph
discovery as well as the run time.
#### 3.2.8 ILP-based structure learning
Bartlett & Cussens (2017) looked into the application of integer linear
programming (ILP) to the structure learning problem. To boost the
effectiveness of ILP-based Bayesian network learning, they suggested adding
auxiliary implied constraints. Experiments were conducted to determine the
effect of each constraint on the optimization process. It was discovered that
the most effective configuration of these constraints could significantly
boost the effectiveness and speed of ILP-based Bayesian network learning. The
study made a significant contribution to the field of structure learning and
showed how much ILP-based learning can gain from such auxiliary, logically
implied constraints.
### 3.3 Functional Causal Model-based
Functional Causal Model (FCM) based approaches describe the causal
relationship between variables in a specific functional form. FCMs represent
variables as a function of their parents (direct causes) together with an
independent noise term $E$ (see Equation 5) (Zhang et al. (2015)). FCM-based
methods can distinguish among different DAGs in the same equivalence class by
imposing additional assumptions on the data distributions and/or function
classes (Zhang et al. (2021b)). Some of the noteworthy FCM-based causal
discovery approaches are listed below.
$X=f(PA_{X})+E$ (5)
Figure 16: A functional causal model (FCM) with four variables.
#### 3.3.1 L$i$NGAM
Linear Non-Gaussian Acyclic Model (LiNGAM) aims to discover the causal
structure from observational data under the assumptions that the data
generating process is linear, there are no unobserved confounders, and noises
have non-Gaussian distributions with non-zero variances (Shimizu et al.
(2006)). It uses the statistical method known as independent component
analysis (ICA) (Comon (1994)), and states that when the assumption of non-
Gaussianity is valid, the complete causal structure can be estimated. That is,
the causal direction is identifiable if the variables have a linear relation,
and the noise ($\varepsilon$) distribution is non-Gaussian in nature. Figure
17 depicts three scenarios where when $X$ and $\varepsilon$ are Gaussian (case
1), the predictor and regression residuals are independent of each other. For
the other two cases, $X$ and $\varepsilon$ are non-Gaussian, and we see that
for the regression in the anti-causal or backward direction ($X$ given $Y$),
the regression residual and the predictor are not independent as earlier. That
is, for the non-Gaussian cases, independence between regression residual and
predictor occurs only for the correct causal direction. A LiNGAM has three
properties. First, the variables $x_{i}\in\\{x_{1},x_{2},...,x_{n}\\}$ are
arranged in a causal order $k(i)$ such that the cause always precedes the
effect.
Second, each variable $x_{i}$ is assigned a value as per the Equation 6 where
$e_{i}$ is the noise/disturbance term and $b_{ij}$ denotes the causal strength
between $x_{i}$ and $x_{j}$. Third, the exogenous noises $e_{i}$ follow
non-Gaussian distributions with zero mean and non-zero variance, and are
independent of each other, which implies that there is no hidden confounder.
Python implementation of the LiNGAM algorithm is available at
https://github.com/cdt15/lingam as well as in the gCastle package (Zhang et
al. (2021b)). Any standard ICA algorithm which can estimate independent
components of many different distributions can be used in LiNGAM. However, the
original implementation uses the FastICA (Hyvarinen (1999)) algorithm.
$x_{i}=\sum_{k(j)<k(i)}b_{ij}x_{j}+e_{i}$ (6)
Figure 17: Causal asymmetry between two variables having a linear relation
(Glymour et al. (2019)). Here, the causal direction is from $X$ to $Y$. A
total of three scenarios are depicted where both $X$ and $\varepsilon$ follow
the i) Gaussian, ii) Uniform, or iii) Super-Gaussian distribution for each of
the scenarios.
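The asymmetry in Figure 17 can be reproduced numerically. The sketch below uses simulated linear data with uniform (non-Gaussian) noise, regresses in both directions, and measures residual-predictor dependence with a crude correlation-of-squares proxy; real LiNGAM implementations use ICA or proper independence tests, so this is an illustration only:

```python
import numpy as np

def dependence(u, v):
    """Crude dependence measure: correlation between squared values.
    (OLS residuals are always uncorrelated with the predictor, so the
    dependence shows up only in higher moments.)"""
    return abs(np.corrcoef(u ** 2, v ** 2)[0, 1])

def direction_score(x, y):
    """Residual-independence asymmetry for a linear non-Gaussian pair.
    Returns (forward, backward) dependence; the causal direction is the
    one whose regression residual is MORE independent of the predictor."""
    bf = np.cov(x, y)[0, 1] / x.var()   # OLS slope of y ~ x
    bb = np.cov(x, y)[0, 1] / y.var()   # OLS slope of x ~ y
    return dependence(x, y - bf * x), dependence(y, x - bb * y)

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 20000)             # non-Gaussian cause
y = 2 * x + rng.uniform(-1, 1, 20000)     # linear mechanism + uniform noise
fwd, bwd = direction_score(x, y)          # expect fwd near 0, bwd clearly larger
```

In the causal (forward) direction the dependence is near zero, while the anti-causal regression leaves a residual that is visibly dependent on its predictor.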
#### 3.3.2 ANM
Hoyer et al. (2008) performs causal discovery with additive noise models
(ANMs) and provides a generalization of the linear non-Gaussian causal
discovery framework to deal with nonlinear functional dependencies where the
variables have an additive noise. It mentions that nonlinear causal
relationships typically help to break the symmetry between the observed
variables and help in the identification of causal directions. ANM assumes
that the data generating process of the observed variables is as per the
Equation 7 where a variable $x_{i}$ is a function of its parents and the noise
term $e_{i}$ which is an independent additive noise. An implementation of ANM
is available in the gCastle package (Zhang et al. (2021a)).
$x_{i}=f(PA_{x_{i}})+e_{i}$ (7)
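The same residual-independence idea underlies direction identification with ANMs, now with a nonlinear fit. In the sketch below, a polynomial regression stands in for the nonparametric regression used in practice, and a correlation-of-squares proxy stands in for an HSIC-style independence test; both substitutions are simplifications for illustration:

```python
import numpy as np

def anm_residual_dependence(a, b, deg=3):
    """Fit b ~ f(a) with a degree-`deg` polynomial and measure how strongly
    the residual depends on the predictor (via correlation of squares).
    Small values suggest a -> b is consistent with an ANM."""
    coef = np.polyfit(a, b, deg)
    resid = b - np.polyval(coef, a)
    return abs(np.corrcoef(a ** 2, resid ** 2)[0, 1])

rng = np.random.default_rng(2)
x = rng.normal(size=20000)
y = x + 0.5 * x ** 3 + 0.3 * rng.normal(size=20000)  # nonlinear mechanism
fwd = anm_residual_dependence(x, y)  # causal direction: near-independent residual
bwd = anm_residual_dependence(y, x)  # anti-causal: residual depends on predictor
```

Here the nonlinearity breaks the symmetry even with Gaussian noise: only the causal direction yields a residual that behaves like independent noise.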
#### 3.3.3 PNL
Post-nonlinear (PNL) acyclic causal model with additive noise (Zhang &
Hyvärinen (2010)) is a highly realistic model in which each observed
continuous variable is generated as a nonlinear function of its parents plus
additive noise, followed by a further nonlinear distortion. The influence of
sensor
distortions, which are frequently seen in practice, is taken into account by
the second stage’s nonlinearity. A two-step strategy is proposed to separate
the cause from the effect in a two-variable situation, consisting of
restricted nonlinear ICA followed by statistical independence tests. The PNL
model was able to effectively separate causes from effects when applied to
solve the "CauseEffectPairs" task proposed by Mooij & Janzing (2010) in the
Pot-luck challenge. That is, it successfully distinguished the cause from the
effect, even if the nonlinear function of the cause is not invertible.
#### 3.3.4 Direct-L$i$NGAM
Shimizu et al. (2011) proposed DirectLiNGAM, a direct method for learning a
linear non-Gaussian structural equation model (SEM) which is a direct method
to estimate causal ordering and connection strengths based on non-Gaussianity.
This approach estimates a causal order of variables by successively reducing
each independent component from given data in the model which is completed in
steps equal to the number of the variables in the model. Once the causal order
of variables is identified, their connection strengths are estimated using
conventional covariance-based methods such as least squares and maximum
likelihood approaches. If the data strictly follows the model i.e. if all the
model assumptions are met and the sample size is infinite, it converges to the
right solution within a small number of steps. If prior knowledge about part
of the structure is available, the authors suggest using it for more efficient
learning. Doing so will reduce the number of causal orders and connection
strengths to be estimated. Its implementation can be found at:
https://github.com/huawei-noah/trustworthyAI/tree/master/gcastle.
#### 3.3.5 SAM
Kalainathan et al. (2018) proposed the algorithm known as Structural Agnostic
Modeling (SAM) that uses an _adversarial learning_ approach to find the causal
graphs. Particularly, it searches for an FCM using _Generative Adversarial
Neural-networks (GANs)_ and enforces the discovery of sparse causal graphs
through adequate regularization terms. A learning criterion that combines
distribution estimation, sparsity, and acyclicity constraints is used to
enforce the end-to-end optimization of the graph structure and parameters
through stochastic gradient descent. SAM leverages both conditional
independencies and distributional asymmetries in the data to find the
underlying causal mechanism. It aims to achieve an optimal complexity/fit
trade-off while modeling the causal mechanisms. SAM enforces the acyclicity
constraint of a DAG using the function in Equation 8, where $A$ is the
adjacency matrix of the candidate graph $G$, and $d$ denotes the total
number of nodes in $G$. The latest implementation of SAM is available in the
CDT package (Kalainathan & Goudet (2019)). Also, an older version of SAM is
available at https://github.com/Diviyan-Kalainathan/SAM.
$\sum_{i=1}^{d}\frac{\operatorname{tr}(A^{i})}{i!}=0$ (8)
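Equation 8 can be computed directly from matrix powers. Since $\operatorname{tr}(A^{i})$ sums the (weighted) directed cycles of length $i$, every term vanishes exactly when $A$ is the adjacency matrix of a DAG; for a DAG, $A$ is nilpotent, so the truncated series coincides with $\operatorname{tr}(e^{A})-d$:

```python
import numpy as np

def sam_acyclicity(A):
    """Equation 8: sum_{i=1..d} tr(A^i) / i!.

    tr(A^i) counts weighted directed cycles of length i, so the sum is
    zero iff the graph encoded by A has no directed cycles."""
    d = A.shape[0]
    total, power, fact = 0.0, np.eye(d), 1.0
    for i in range(1, d + 1):
        power = power @ A   # A^i
        fact *= i           # i!
        total += np.trace(power) / fact
    return total

# A DAG (strictly upper-triangular) versus a 3-cycle, for illustration.
A_dag = np.array([[0., 1., 1.], [0., 0., 1.], [0., 0., 0.]])
A_cyc = np.array([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]])
```

For the 3-cycle, only the $i=3$ term is non-zero ($\operatorname{tr}(A^{3})=3$), giving $3/3!=0.5$; the DAG evaluates exactly to zero.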
#### 3.3.6 CGNN
Causal Generative Neural Networks (CGNN) is an FCM-based framework that uses
_neural networks (NNs)_ to learn the joint distribution of the observed
variables (Goudet et al. (2018)). Particularly, it uses a generative model
that minimizes the _maximum mean discrepancy_ (MMD) between the generated and
observed data. CGNN has a high computational cost. However, it proposes an
approximate learning criterion to scale the computational cost to linear
complexity in the number of observations. This framework can also be used to
simulate interventions on multiple variables in the dataset. An implementation
of CGNN in Pytorch is available at
https://github.com/FenTechSolutions/CausalDiscoveryToolbox.
#### 3.3.7 CAM
Causal Additive Model (CAM) is a method for estimating high-dimensional
additive structural equation models which are logical extensions of linear
structural equation models (Bühlmann et al. (2014)). In order to address the
difficulties of computation and statistical accuracy in the absence of prior
knowledge about underlying structure, the authors established consistency of
the maximum likelihood estimator and developed an effective computational
algorithm. The technique was demonstrated on both simulated and real data and
made use of sparse regression techniques. The authors also
discussed identifiability problems and the enormous size of the space of
potential models, which presents significant computational and statistical
accuracy challenges.
#### 3.3.8 CAREFL
Causal Autoregressive Flows (CAREFL) uses _autoregressive flow models_ (Huang
et al. (2018c)) for causal discovery by interpreting the ordering of variables
in an autoregressive flow based on structural equation models (SEMs)
(Khemakhem et al. (2021)). In general, SEMs define a generative model for data
based on causal relationships. CAREFL shows that affine flows, in particular,
define a new class of causal models where the noise is modulated by the cause.
For such models, it proves a new causal identifiability result that
generalizes additive noise models. To learn the causal structure efficiently,
it selects the ordering with the highest test log-likelihood and reports a
measure of causal direction based on the likelihood ratio for non-linear SEMs.
Autoregressive flow models also enable CAREFL to evaluate interventional
queries by fixing the interventional variable while sampling from the flow.
Moreover, the invertible property of autoregressive flows facilitates
counterfactual queries as well. Code implementation of CAREFL is available at
https://github.com/piomonti/carefl.
### 3.4 Gradient-based
Some of the recent studies in causal discovery formulate the structure
learning problem as a continuous optimization task using the least squares
objective and an algebraic characterization of DAGs (Zheng et al. (2018), Ng
et al. (2020)). Specifically, the combinatorial structure learning problem has
been transformed into a continuous one and solved using gradient-based
optimization methods (Ng et al. (2019)). These methods leverage gradients of
an objective function with respect to a parametrization of a DAG matrix. Apart
from the usage of well-studied gradient-based solvers, they also leverage GPU
acceleration which has changed the nature of the task (Ng et al. (2020)).
Furthermore, to accelerate the task they often employ deep learning models
that are capable of capturing complex nonlinear mappings (Yu et al. (2019)).
As a result, they usually have a faster training time as deep learning is
known to be highly parallelizable on GPU, which gives a promising direction
for causal discovery with gradient-based methods (Ng et al. (2019)). In
general, these methods are more global than other approximate greedy methods.
This is because they update all edges at each step based on the gradient of
the score as well as the acyclicity constraint.
#### 3.4.1 NOTEARS
DAGs with NO TEARS (Zheng et al. (2018)) is a recent breakthrough in the field
of causal discovery that formulates the structure learning problem as a purely
continuous constrained optimization task. It leverages an algebraic
characterization of DAGs and provides a novel characterization of acyclicity
that allows for a smooth global search, in contrast to a combinatorial local
search. The full form of the acronym NOTEARS is Non-combinatorial Optimization
via Trace Exponential and Augmented lagRangian for Structure learning, which
particularly handles linear DAGs. It assumes a linear dependence between
random variables and thus models data $D$ as a structural equation model. To
discover the causal structure, it imposes the proposed acyclicity function
(Equation 10) as a constraint combined with a weighted adjacency matrix $W$
with least squares loss. The algorithm aims to convert the traditional
combinatorial optimization problem into a continuous constrained optimization
task by leveraging an algebraic characterization of DAGs via the trace
exponential acyclicity function as follows:
$\min_{W\in\mathbb{R}^{d\times d}}F(W)\ \text{subject to }G(W)\in\text{DAGs}\iff\min_{W\in\mathbb{R}^{d\times d}}F(W)\ \text{subject to }h(W)=0,$ (9)
where $G(W)$ is a graph with $d$ nodes induced by the weighted adjacency
matrix $W$, $F:\mathbb{R}^{d\times d}\rightarrow\mathbb{R}$ is a regularized
score function with a least-square loss $\ell$, and $h:\mathbb{R}^{d\times
d}\rightarrow\mathbb{R}$ is a smooth function over real matrices that enforces
acyclicity. Overall, the approach is simple and can be executed in about 50
lines of Python code. Its implementation in Python is publicly available at
https://github.com/xunzheng/notears. The acyclicity function proposed in
NOTEARS is as follows where $\circ$ is the Hadamard product and $e^{W\circ W}$
is the matrix exponential of $W\circ W$.
$h(W)=\operatorname{tr}(e^{W\circ W})-d=0$ (10)
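Equation 10 is straightforward to evaluate with a matrix exponential; a minimal sketch using SciPy (not the authors' reference implementation, whose function name may differ):

```python
import numpy as np
from scipy.linalg import expm

def notears_h(W: np.ndarray) -> float:
    """h(W) = tr(exp(W ∘ W)) - d; equals 0 iff W encodes a DAG."""
    d = W.shape[0]
    return float(np.trace(expm(W * W)) - d)

dag = np.array([[0., 1.5], [0., 0.]])   # acyclic weights: h(W) = 0
cyc = np.array([[0., 1.0], [1.0, 0.]])  # 2-cycle: h(W) > 0
print(notears_h(dag))
print(notears_h(cyc))
```

Because h is smooth in W, its gradient is available to standard solvers, which is what makes the continuous formulation work.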
#### 3.4.2 GraN-DAG
Gradient-based Neural DAG Learning (GraN-DAG) is a causal structure learning
approach that uses _neural networks (NNs)_ to deal with non-linear causal
relationships (Lachapelle et al. (2019)). It uses a stochastic gradient method
to train the NNs to improve scalability and allow implicit regularization. It
formulates a _novel characterization of acyclicity_ for NNs based on NOTEARS
(Zheng et al. (2018)). To ensure acyclicity in non-linear models, it uses an
argument similar to NOTEARS and applies it first at the level of neural
network paths and then at the graph paths level. For regularization, GraN-DAG
uses a procedure called preliminary neighbors selection (PNS) to select a set
of potential parents for each variable. It uses a final pruning step to remove
the false edges. The algorithm works well mostly in the case of non-linear
Gaussian additive noise models. An implementation of GraN-DAG can be found at
https://github.com/kurowasan/GraN-DAG.
#### 3.4.3 GAE
The Graph Autoencoder (GAE) approach is a gradient-based method for causal
structure learning that uses a _graph autoencoder framework_ to handle
nonlinear structural equation models (Ng et al. (2019)). GAE is a special case
of the causal additive model (CAM) that provides an alternative generalization
of NOTEARS for handling nonlinear causal relationships. GAE is easily
applicable to vector-valued variables. The architecture of GAE consists of a
variable-wise encoder and decoder which are basically multi-layer perceptrons
(MLPs) with shared weights across all variables $X_{i}$. The encoder-decoder
framework allows the reconstruction of each variable $X_{i}$ to handle the
nonlinear relations. The final goal is to optimize the reconstruction error of
the GAE with $l_{1}$ penalty where the optimization problem is solved using
the augmented Lagrangian method (Nemirovsky (1999)). The approach is
competitive in terms of scalability as it has a near-linear training time when
scaling up the graph size to 100 nodes. Also, in terms of time efficiency, GAE
performs well, with an average training time of less than 2 minutes even for
graphs of 100 nodes. Its implementation can be found at the gCastle (Zhang et
al. (2021a)) repository.
#### 3.4.4 DAG-GNN
DAG Structure Learning with Graph Neural Networks (DAG-GNN) is a graph-based
deep generative model that tries to capture the sampling distribution faithful
to the ground-truth DAG (Yu et al. (2019)). It leverages variational inference
and a parameterized pair of _encoder-decoders_ with specially designed _graph
neural networks (GNN)_. Particularly, it uses _Variational Autoencoders
(VAEs)_ to capture complex data distributions and sample from them. The
weighted adjacency matrix $W$ of the ground-truth DAG is a learnable parameter
with other neural network parameters. The VAE model naturally handles various
data types both continuous and discrete in nature. In this study, the authors
also propose a _variant of the acyclicity function_ (Equation 11) which is
more suitable and practically convenient for implementation with the existing
deep learning methods. In the acyclicity function, $d$ is the number of nodes,
$\alpha$ is a hyperparameter, and $I$ is the identity matrix. An implementation
of the DAG-GNN algorithm is available at https://github.com/fishmoon1234/DAG-
GNN.
$\operatorname{tr}[(I+\alpha W\circ W)^{d}]-d=0$ (11)
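Equation 11 swaps the matrix exponential for a $d$-th matrix power, which is convenient in deep learning frameworks; a minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def daggnn_acyclicity(W: np.ndarray, alpha: float = 1.0) -> float:
    """tr[(I + α · W ∘ W)^d] - d; equals 0 iff W corresponds to a DAG."""
    d = W.shape[0]
    M = np.eye(d) + alpha * (W * W)
    return float(np.trace(np.linalg.matrix_power(M, d)) - d)

dag = np.array([[0., 1.], [0., 0.]])   # acyclic: constraint is 0
cyc = np.array([[0., 1.], [1., 0.]])   # 2-cycle: constraint is positive
```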
#### 3.4.5 GOLEM
Gradient-based Optimization of DAG-penalized Likelihood for learning linear
DAG Models (GOLEM) is a _likelihood-based_ causal structure learning approach
with _continuous unconstrained optimization_ (Ng et al. (2020)). It studies
the asymptotic role of the sparsity and DAG constraints for learning DAGs in
both linear Gaussian and non-Gaussian cases. It shows that when the
optimization problem is formulated using a likelihood-based objective instead
of least squares (used by NOTEARS), then instead of a hard DAG constraint,
applying only soft sparsity and DAG constraints is enough for learning the
true DAG under mild assumptions. Particularly, GOLEM tries to optimize the
score function in Equation 12 w.r.t. the weighted adjacency matrix $B$
representing a directed graph. Here, $L(B;x)$ is the maximum likelihood
estimator, $R_{sparse}(B)$ is a penalty to encourage sparsity (i.e. fewer
edges), and $R_{DAG}(B)$ is the penalty that enforces DAGness on $B$.
$S(B;x)=L(B;x)+R_{sparse}(B)+R_{DAG}(B)$ (12)
On denser graphs, GOLEM tends to outperform NOTEARS since it can
reduce the number of optimization iterations which makes it robust in terms of
scalability. With gradient-based optimization and GPU acceleration, it can
easily handle thousands of nodes while retaining high accuracy. An
implementation of GOLEM can be found at the gCastle (Zhang et al. (2021a))
repository.
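A simplified sketch of the score in Equation 12 is given below; the equal-variance Gaussian likelihood term, the $\ell_1$ sparsity penalty, the NOTEARS-style trace-exponential DAGness term, and the coefficients `lam1`, `lam2` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np
from scipy.linalg import expm

def golem_score(B, X, lam1=0.01, lam2=5.0):
    """S(B; x) = L(B; x) + R_sparse(B) + R_DAG(B) (simplified sketch)."""
    n, d = X.shape
    resid = X - X @ B
    # Gaussian log-likelihood term (equal noise variances, up to constants)
    L = 0.5 * d * np.log(np.sum(resid ** 2)) - np.linalg.slogdet(np.eye(d) - B)[1]
    R_sparse = lam1 * np.abs(B).sum()            # soft sparsity penalty
    R_dag = lam2 * (np.trace(expm(B * B)) - d)   # soft DAGness penalty
    return float(L + R_sparse + R_dag)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
B = np.triu(rng.normal(size=(3, 3)), k=1)  # acyclic candidate: R_DAG is 0
score = golem_score(B, X)
```

Because both penalties are soft, the score can be minimized directly with an unconstrained gradient-based optimizer, which is the key contrast with NOTEARS' hard constraint.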
#### 3.4.6 DAG-NoCurl
DAG-NoCurl, also known as DAGs with No Curl, uses a two-step procedure for the
causal DAG search (Yu et al. (2021)). At first, it finds an initial cyclic
solution to the optimization problem and then employs the _Hodge
decomposition_ (Bhatia et al. (2012)) of graphs to learn an acyclic graph by
projecting the cyclic graph to the gradient of a potential function. The goal
of this study is to investigate how the causal structure can be learned
without any explicit DAG constraints by directly optimizing the DAG space. To
do so, it proposes the method DAG-NoCurl based on the graph Hodge theory that
implicitly enforces the acyclicity of the learned graph. As per the Hodge
theory on graphs (Lim (2020)), a DAG is a sum of three components: a curl-
free, a divergence-free, and a harmonic component. The curl-free component is
an acyclic graph that motivates the naming of this approach. An implementation
of the method can be found at the link https://github.com/fishmoon1234/DAG-
NoCurl.
#### 3.4.7 ENCO
Efficient Neural Causal Discovery without Acyclicity Constraints (ENCO) _uses
both observational and interventional data_ by modeling a probability for
every possible directed edge between pairs of variables (Lippe et al. (2021)).
It formulates the graph search as an optimization of independent edge
likelihoods, with the edge orientation being modeled as a separate parameter.
This approach guarantees convergence when interventions on all variables are
available and does not require explicitly constraining the score function with
respect to acyclicity. However, the algorithm works on partial intervention
sets as well. Experimental results suggest that ENCO is robust in terms of
scalability, and is able to detect latent confounders. When applied to large
networks having 1000 nodes, it is capable of recovering the underlying
structure due to the benefit of its low-variance gradient estimators. The
source code of ENCO is available at this site:
https://github.com/phlippe/ENCO.
Figure 18: Graph optimization mechanism of ENCO.
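As a rough, hypothetical sketch of ENCO's edge parameterization (the use of $\gamma$ for edge existence and an antisymmetric $\theta$ for orientation follows the paper's description above, but the exact parameterization in the official code may differ):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def edge_probabilities(gamma, theta):
    """Probability of edge i -> j = P(edge exists) * P(it points i -> j).
    gamma[i, j]: log-odds that an edge exists between i and j;
    theta[i, j] = -theta[j, i]: log-odds that the edge points i -> j."""
    p = sigmoid(gamma) * sigmoid(theta)
    np.fill_diagonal(p, 0.0)  # no self-loops
    return p

def sample_graph(gamma, theta, rng):
    """Draw a binary adjacency matrix, one independent Bernoulli per edge."""
    p = edge_probabilities(gamma, theta)
    return (rng.random(p.shape) < p).astype(int)

rng = np.random.default_rng(0)
d = 3
gamma = rng.normal(size=(d, d))
theta = rng.normal(size=(d, d))
theta = theta - theta.T  # enforce antisymmetric orientation parameters
A = sample_graph(gamma, theta, rng)
```

Treating every edge as an independent parameter is what lets ENCO avoid an explicit acyclicity constraint on the score.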
#### 3.4.8 MCSL
Masked Gradient-based Causal Structure Learning (MCSL) (Ng et al. (2022))
utilizes a reformulated structural equation model (SEM) for causal discovery
using gradient-based optimization that leverages the _Gumbel-Softmax approach_
(Jang et al. (2016)). Gumbel-Softmax, often used to approximate samples from a
categorical distribution, is here used to approximate a binary adjacency
matrix. MCSL reformulates the SEM with additive noises in a form
parameterized by the binary graph adjacency matrix. It states that, if the
original SEM is identifiable, then the adjacency matrix can be identified up
to super-graphs of the true causal graph under some mild conditions. For
experimentation, MCSL uses a 4-layer multi-layer perceptron (MLP) as the model
function, denoted as MCSL-MLP. An implementation
of the approach can be found in the gCastle (Zhang et al. (2021a)) package.
#### 3.4.9 DAGs with No Fears
Wei et al. (2020) provides an in-depth analysis of the NOTEARS framework for
causal structure learning. The study proposed a local search post-processing
algorithm that significantly increased the precision of NOTEARS and other
algorithms and deduced Karush-Kuhn-Tucker (KKT) optimality conditions for an
equivalent reformulation of the NOTEARS problem. Additionally, the authors
compared the effectiveness of NOTEARS and Abs-KKTS on various graph types and
discovered that Abs-KKTS performed better than NOTEARS in terms of accuracy
and computational efficiency. The authors concluded that this work improved
the understanding of optimization-based causal structure learning and may
result in further advancements in precision and computational effectiveness.
The code implementation is available at https://github.com/skypea/DAG_No_Fear.
### 3.5 Miscellaneous Approaches
Apart from the types of approaches mentioned so far, there are some other
causal discovery approaches that use some specialized or unique techniques to
search for the graph that best describes the data. There also exist some
methods that are specialized to handle latent or unobserved confounders. Also,
there are some approaches that are hybrid in nature, i.e. they are based on
the combination of constraint-based, score-based, FCM-based, gradient-based,
etc. causal discovery approaches. For example, some approaches integrate
conditional independence testing along with score functions to design a hybrid
approach for causal discovery. A detailed discussion can be found below.
#### 3.5.1 MMHC
Max-Min Hill Climbing (MMHC) is a hybrid causal discovery technique that
incorporates the concepts from both score-based and constraint-based
algorithms (Tsamardinos et al. (2006)). A challenge in causal discovery is the
identification of causal relationships within a reasonable time in the
presence of thousands of variables. MMHC can reliably learn the causal
structure in terms of time and quality for high-dimensional settings. MMHC is
a two-phase algorithm that assumes faithfulness. In the first phase, MMHC uses
Max-Min Parents and Children (MMPC) (Tsamardinos et al. (2003)) to initially
learn the skeleton of the network. In the second phase, using a greedy
Bayesian hill-climbing search, the skeleton is oriented. In the sample limit,
MMHC’s skeleton identification phase is reliable, but the orientation phase
offers no theoretical assurances. From the results of the experiments
performed, MMHC outperformed PC (Spirtes et al. (2000b)), Sparse Candidate
(Friedman et al. (2013)), Optimal Reinsertion (Moore & Wong (2003)), and GES
(Chickering (2002)) in terms of computational efficiency. Considering the
quality of reconstruction, MMHC performs better than all the above-mentioned
algorithms except for GES when the sample size is 1000. The authors also
proved the correctness of the results. The implementation of MMHC is available
at http://www.dsl-lab.org/supplements/mmhcpaper/mmhcindex.html as part of
Causal Explorer 1.3, a library of Bayesian network learning and local causal
discovery methods.
#### 3.5.2 FRITL
To discover causal relationships in linear and non-Gaussian models, Chen et
al. (2021) proposed a hybrid model named FRITL. FRITL works in the presence or
absence of latent confounders by incorporating independent noise-based
techniques and constraint-based techniques. FRITL makes the causal Markov,
causal faithfulness, linear acyclic non-Gaussianity, and one-latent-confounder
assumptions. In the first phase of FRITL,
the FCI algorithm is used to generate asymptotically accurate results.
Unfortunately, relatively few unconfounded direct causal relations are
normally determined by the FCI since it always reveals the presence of
confounding factors. In the second phase, FRITL identifies the unconfounded
causal edges between observable variables within just those neighboring
pairings that have been influenced by the FCI results. The third stage can
identify confounders and the relationships that cause them to affect other
variables by using the Triad condition (Cai et al. (2019)). If further causal
relationships remain, Independent Component Analysis (ICA) is finally applied
to a notably reduced group of graphs. The authors also theoretically proved
that the results obtained from FRITL are efficient and accurate. FRITL
produces results that are in close accord with neuropsychological opinion and
in exact agreement with a causal link that is known from the experimental
design when applied to real functional MRI (fMRI) data and the SACHS (Sachs
et al. (2005)) dataset.
Figure 19: Stages of the FRITL model.
#### 3.5.3 HCM
Most of the causal discovery algorithms are applicable only to either discrete
or continuous data. However, in reality, we often have to work with mixed-type
data (e.g., people's shopping behavior), which has not received enough
attention in causal discovery. Li et al. (2022) proposed the approach _Hybrid
Causal Discovery on Mixed-type Data (HCM)_ to identify causal relationships with
mixed variables. HCM works under the causal faithfulness and causal Markov
assumptions. HCM has three phases, where in the first phase, the skeleton graph
is learned in order to limit the search space. To do this, they used the PC-
stable approach along with their proposed Mixed-type Randomized Causal
Independence Test (MRCIT) which can handle mixed-type data. They also
introduced a generalized score function called Cross-Validation based Mixed
Information Criterion (CVMIC). In the second phase, starting with an empty
DAG, they add edges to the DAG based on the highest CVMIC score. In order to
reduce false positives, the learned causal structure is pruned using MRCIT
once again in the final phase with a slightly bigger conditional set. They
compared their approach with other causal discovery approaches for mixed data
and showed HCM's superiority. However, they did not consider any unobserved
confounders in the dataset, which leaves room for further improvement. They made the
code available on the following GitHub site: https://github.com/DAMO-DI-
ML/AAAI2022-HCM.
Figure 20: Different phases of the method HCM.
#### 3.5.4 SADA
One of the biggest limitations of the traditional causal discovery methods is
that these models cannot identify causal relations when the problem domain is
large or there is a small number of samples available. To solve this problem,
Cai et al. (2013) proposed a _Split-and-Merge_ causal discovery method named
SADA which assumes causal faithfulness. Even in situations when the sample
size is substantially less than the total number of variables, SADA can
reliably identify the causal factors. SADA divides the main problem into two
subproblems and works in three phases. Initially, SADA separates the variables
of the causal model into two sets $V_{1}$ and $V_{2}$ using a causal cut set
$C$ where all paths between $V_{1}$ and $V_{2}$ are blocked by $C$. This
partitioning is continued until the variables in each subproblem are less than
some threshold. In the next phase, any arbitrary causal algorithm is applied
to both subproblems and the causal graphs are generated. Here, they used
LiNGAM as the causal algorithm. Then these graphs are merged in the final
step. To handle conflicts while merging, whenever multiple causal paths existed
between two variables in opposite directions, they kept only the most
significant edge and eliminated the others. They compared
the performance of SADA against baseline LiNGAM (without splitting and
merging), and the results showed that SADA achieved better performance in
terms of the metrics precision, recall, and F1 score.
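The split-and-merge recursion can be sketched as follows; `find_causal_cut` and `base_algorithm` stand in for the paper's components (the paper uses LiNGAM as the base algorithm), and the toy stubs below are purely illustrative:

```python
# Hypothetical sketch of SADA's split-and-merge recursion.
def sada(variables, data, threshold, find_causal_cut, base_algorithm):
    """Recursively split the variable set, solve subproblems, then merge."""
    if len(variables) <= threshold:
        return base_algorithm(variables, data)       # e.g. LiNGAM
    C, V1, V2 = find_causal_cut(variables, data)     # C blocks all V1--V2 paths
    g1 = sada(V1 | C, data, threshold, find_causal_cut, base_algorithm)
    g2 = sada(V2 | C, data, threshold, find_causal_cut, base_algorithm)
    return merge(g1, g2)

def merge(g1, g2):
    """Union of weighted edge sets; on an opposite-direction conflict between
    two variables, keep only the more significant (higher-weight) edge."""
    edges = dict(g1)
    for (u, v), w in g2.items():
        if (v, u) in edges:                  # conflicting directions
            if w > edges[(v, u)]:
                del edges[(v, u)]
                edges[(u, v)] = w
        else:
            edges[(u, v)] = max(w, edges.get((u, v), w))
    return edges

# Illustrative stubs: cut on the middle variable; chain the rest.
def toy_cut(variables, data):
    vs = sorted(variables)
    mid = len(vs) // 2
    return {vs[mid]}, set(vs[:mid]), set(vs[mid + 1:])

def toy_base(variables, data):
    vs = sorted(variables)
    return {(vs[i], vs[i + 1]): 1.0 for i in range(len(vs) - 1)}

graph = sada(set(range(5)), None, threshold=2,
             find_causal_cut=toy_cut, base_algorithm=toy_base)
```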
#### 3.5.5 CORL
Ordering-based Causal Discovery with Reinforcement Learning (CORL) formulates
the ordering search problem as a _multi-step Markov decision process_ (MDP) to
learn the causal graph (Wang et al. (2021)). It implements the ordering
generating process with an _encoder-decoder architecture_ and finally uses RL
to optimize the proposed model based on the reward mechanisms designed for
each order. A generated ordering is then processed using variable selection to
obtain the final causal graph. According to the empirical results, CORL
performs better than existing RL-based causal discovery approaches. This is
likely because, by using ordering search, CORL does not need to compute the
matrix exponential term with $O(d^{3})$ cost. CORL is also good in
terms of scalability and has been applied to graphs with up to 100 nodes. The
gCastle package contains an implementation of CORL.
#### 3.5.6 ETIO
ETIO is a versatile _logic-based_ causal discovery algorithm specialized for
business applications (Borboudakis & Tsamardinos (2016)). Its features include
i) the ability to utilize prior causal knowledge, ii) addressing selection
bias, hidden confounders, and missing values in data, and iii) analyzing data
from pre and post-interventional distribution. ETIO follows a _query-based
approach_, where the user queries the algorithm about the causal relations of
interest. In the first step, ETIO performs several CI tests on the input
dataset. Particularly, it performs non-Bayesian tests that return p-values of
the null hypothesis of conditional independencies. Then it employs an
empirical Bayesian method that converts the p-values of dependencies and
independencies into probabilities. Later, to resolve conflicts, it selects a
consistent subset of the dependence and prior-knowledge constraints, ranked in
order of confidence. In particular, ETIO imposes an m-separation constraint if
a given independence is more probable than the corresponding dependence. The
imposed constraints are those that correspond to test results, taken in order
of probability, while conflicting test results are removed.
Finally, it identifies all invariant features based on input queries using the
well-known declarative programming language, answer set programming (Gelfond &
Lifschitz (1988)).
#### 3.5.7 $b$QCD
Discovering causal relationships from observational data has been a
challenging task, especially in the bivariate case, as it is difficult to
determine whether there actually exists a cause-effect relationship or whether
it is the effect of a hidden confounder. Tagasovska et al. (2020) proposed the
approach bivariate Quantile Causal Discovery (bQCD) to determine causal
relationships in bivariate settings. Although they made no assumptions on the
class of causal mechanisms, they did assume that there exists no confounder,
feedback, or selection bias. They utilized _quantile scoring_ in place of
Kolmogorov complexity (Kolmogorov (1963)), using conditional quantiles with the
pinball loss instead of the conditional mean with squared loss. The approach
bQCD performs comparably to state-of-the-art techniques but is much less
computationally expensive. Also, the usage of quantile conditioning
instead of mean conditioning makes bQCD more robust to heavy tails as the mean
is more susceptible to outliers than the quantile. Moreover, not making any
assumptions about the parametric class allows bQCD to be applied to a variety
of processes where baseline methods perform significantly poorly when the
assumptions do not hold. The source code of bQCD written in R is available on
this site: https://github.com/tagas/bQCD.
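The pinball loss that bQCD substitutes for squared error is simple to state; a minimal NumPy sketch (the bQCD implementation itself is in R, so this is only an illustration of the loss):

```python
import numpy as np

def pinball_loss(y, y_hat, q):
    """Quantile (pinball) loss at level q in (0, 1); q = 0.5 gives half the
    mean absolute error, i.e. conditional-median regression."""
    diff = y - y_hat
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

y = np.array([1.0, 2.0, 3.0])
under = pinball_loss(y, y - 1.0, 0.9)  # predictions one unit below y
over = pinball_loss(y, y + 1.0, 0.9)   # predictions one unit above y
# At q = 0.9, under-prediction is penalized 9x more than over-prediction,
# which is what pulls the fitted function toward the 0.9 quantile.
```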
#### 3.5.8 JCI
Joint Causal Inference (JCI) leverages prior knowledge by combining data from
multiple datasets from different contexts (Mooij et al. (2020)). Particularly,
JCI is a _causal modeling framework_ rather than a specific algorithm, and it
can be implemented using any causal discovery algorithm that can take into
account some background knowledge. The main idea of JCI is to first, consider
auxiliary context variables that describe the context of each data set, then,
pool all the data from different contexts, including the values of the context
variables, into a single data set, and finally apply standard causal discovery
methods to the pooled data, incorporating appropriate background knowledge on
the causal relationships involving the context variables. The framework is
simple and easily applicable as it deals with latent confounders, cycles (if
the causal discovery method supports this), and various types of interventions
in a unified way. The JCI framework also facilitates analysis of data from
almost arbitrary experimental designs, allowing researchers to trade off the
number and complexity of experiments against the reliability of the causal
discovery analysis.
Figure 21: Workflow of the JCI framework.
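The pooling step at the heart of JCI can be illustrated with a toy pandas sketch (the column names and the two regimes are hypothetical):

```python
import pandas as pd

# Datasets from two contexts (say, observational vs. one intervention regime)
# are stacked with an auxiliary context variable; the pooled table can then be
# handed to any standard causal discovery method, together with background
# knowledge that system variables cannot cause the context variable.
obs = pd.DataFrame({"X": [0.1, 0.2], "Y": [1.0, 1.1]})
intv = pd.DataFrame({"X": [0.9, 1.1], "Y": [0.2, 0.1]})
obs["C_intervened"] = 0    # context variable: observational regime
intv["C_intervened"] = 1   # context variable: interventional regime
pooled = pd.concat([obs, intv], ignore_index=True)
```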
#### 3.5.9 Kg2Causal
Kg2Causal (Sinha & Ramsey (2021)) uses a large-scale general-purpose
biomedical knowledge graph as a prior for data-driven causal discovery. With a
set of observed nodes in a dataset and some relationship edges between the
nodes derived from a knowledge graph, Kg2Causal uses the knowledge graph-
derived edges to guide the data-driven discovery of a causal graph. The main
ideas of this approach are: first, mapping each variable in the dataset to a
node in the knowledge graph and querying relationships between them; next,
extracting a subgraph containing the connected variables with edges between
them; and finally, using this edge set as prior knowledge to guide an
optimization-based scoring step for inferring the causal graph. An implementation of Kg2Causal is
available at https://github.com/meghasin/Kg2Causal in the R language.
#### 3.5.10 C-MCMC
Constrained MCMC (C-MCMC) introduces _prior knowledge_ into the _Markov chain
Monte Carlo (MCMC)_ algorithm for structure learning (Xu et al. (2015)).
C-MCMC uses the following _three types of prior knowledge_: the existence of
parent nodes, absence of parent nodes, and distribution knowledge including
the conditional probability distribution (CPD) of edges and the probability
distribution (PD) of nodes. All prior knowledge should be given by domain
experts. Existence knowledge means that for any node $X_{i}$, a node-set
$pa(X_{i})$ includes all parent nodes of $X_{i}$. Absence knowledge
means that for a node $X_{i}$, a node-set $pa(X_{i})$ does not include any
parent node of $X_{i}$. PD/CPD knowledge means that the PD of a node and the
CPD of an edge are known. Considering that the prior knowledge may not be
consistent and reliable, domain experts assign a confidence _lambda_, ranging
from 0 to 1, to each piece of prior knowledge. This denotes the certainty level
of the prior knowledge; a _lambda_ value of 1 indicates very high confidence in
this knowledge.
#### 3.5.11 Meta-RL
Meta-RL is a _meta-learning algorithm_ in a Reinforcement Learning (RL)
setting where the agent learns to _perform interventions_ to construct a
causal graph (Sauter et al. (2022)). The goal is to be able to use previous
learning experiences during training to generalize in unseen environments.
This approach has some strong assumptions such as i) each environment is
defined by an acyclic SCM, ii) every observable variable can be intervened on,
iii) for each environment in the training set, the underlying SCM is given,
and iv) intervention can be performed on at most one variable at a time. Meta-
RL has two phases: i) Training, and ii) Application. The training phase starts
by randomly choosing an SCM from a set of environments. There are mainly two
sets of actions that an agent performs: _a) interventional actions_, and _b)
structure actions_. In each step, any one action can be performed on the set
of variables to generate a PDAG. The _agent policy is updated_ via the
_interventional actions_ in each step. However, in case of the structural
actions (e.g. add, delete, or reverse), the agent policy only gets updated at
the end of the training procedure where a reward is sent to the agent. The
reward is computed by comparing the Hamming distance of the generated PDAG to
the true causal structure when the training is completed. A _recurrent LSTM
layer_ enables the policy to remember samples from the post-interventional
distributions in the earlier steps. This should help to better identify causal
relations since the results of sequential interventions can be used to
estimate the distribution. Once trained, Meta-RL can then be applied to
environments that have a structure unseen during training. For training, 24
SCMs with 3 observable variables, and 542 SCMs with 4 observable variables
were created. Code to reproduce experiments or run Meta-RL is available at
https://github.com/sa-and/interventional_RL. One limitation of this approach
is that it needs modification in terms of scalability. Also, in real-world
scenarios, every variable might not be accessible for intervention.
Figure 22: Training phase of the Meta-RL algorithm (Sauter et al. (2022)).
#### 3.5.12 Tabu search for SEM
Marcoulides et al. (1998) presents an approach to structural equation modeling
(SEM) specification search that makes use of Tabu search, a heuristic
optimization algorithm. The Tabu search technique avoids local optima by
examining the neighborhood of the current solution and, to prevent cycling,
assigns recently involved attributes a tabu status. A number of definitions
and parameters, such as the
neighborhood definition and the model selection criterion, are necessary to
implement the Tabu search procedure for SEM specification search. The authors
conclude that Tabu search is a promising strategy for SEM specification search
after demonstrating its efficacy in a number of example analyses.
#### 3.5.13 LFCM
Latent Factor Causal Models (LFCMs) (Squires et al. (2022)) perform causal
discovery in the _presence of latent variables_. These models are motivated by
gene regulatory networks. LFCMs work in three stages where they discover: (i)
clusters of observed nodes, (ii) a partial ordering over clusters, and (iii)
finally, the entire structure over both observed and latent nodes. A graph $G$
is called a latent factor causal model (LFCM) if it satisfies the following
conditions: (a) Unique cluster assumption: Each observed node has exactly one
latent parent, (b) Bipartite assumption: There are no edges between pairs of
observed nodes or between pairs of latent nodes, (c) Triple-child assumption:
Each latent node has at least 3 observed children and (d) Double-parent
assumption. In addition, LFCMs allow non-exogenous
latent variables. For cluster formation, LFCMs rely on t-separation (Sullivant
et al. (2010)). When two ordered pairs of variables [e.g. ($X_{i}$, $X_{j}$)
and ($X_{u}$, $X_{v}$)] are t-separated, then they belong to the same cluster.
LFCMs are a biologically motivated class of causal models with latent
variables. The limitations of LFCMs include their applicability to only a
linear Gaussian SEM, some major structural restrictions, and that it can fail
when the true graph violates the double parent assumption.
Figure 23: The graph $G$ on the left is a latent factor causal model (LFCM),
and the graph on the right is the latent graph $L(G)$ for $G$ (Squires et al.
(2022)).
There are some other noteworthy methods that are specialized to handle latent
variables or unobserved confounders. Kocaoglu et al. (2017) presented a non-
parametric algorithm for learning a causal graph in the presence of hidden
variables. The study took a stage-by-stage approach, first to learn the
induced graph between observational variables, and then use it to discover the
existence and location of the latent variables. The authors further proposed
an algorithm to discover the latent structure between variables depending on
the adjacency. To identify ancestral relationships and transitive closure of
the causal graph, the algorithm employed a pairwise independence test under
interventions. Then Kocaoglu et al. (2019) addressed causal discovery by
linking conditional independencies from observed data to graphical constraints
using the d-separation criterion. It broadened the application of this
strategy to scenarios involving numerous experimental and observational
distributions. The authors proposed that CIs and d-separation constraints are
just a subset of broader constraints obtained from comparing various
distributions, which is especially useful in the context of do-calculus for
soft interventions. They introduced the notion of an interventional equivalence
class of causal graphs with latent variables, which linked graphical
structures to groups of interventional distributions that adhered to do-
calculus. Two causal graphs are interventionally equivalent if they produce
identical interventional distributions that can not be distinguished by
invariances.
Sometimes complex systems require the knowledge of both observations and
experiments for recovering the underlying causal relationships. Utilizing both
observational and interventional data from various domains, Li et al.
proposed a novel approach for identifying causal structures in
semi-Markovian systems with latent confounders. They drew a link between learning
from interventional data within a single domain and learning from
observational data across domains. They introduced the idea of S-Markov, a
property connecting multi-domain distributions to pairs of causal graphs and
interventional targets, to navigate the complexities of observational and
experimental data. A new causal discovery algorithm called S-FCI was
introduced that builds on the S-Markov property and is capable of effectively
learning from a mixture of observational and interventional data from various
domains.
Jaber et al. (2020) integrated soft experimental and observational data to
find the structure in non-Markovian systems with latent variables. In this
context, where the intervention targets are unknown, they introduced the idea
of $\Psi$-Markov, which links a causal graph $G$ and a list of
interventional targets $I$ to the causal invariances found in both observational
and interventional data distributions. They also introduced a graphical method
for evaluating equivalence between causal graphs with various interventional
targets and an algorithm for learning the equivalence class.
Recently, Bellot et al. (2022) studied structure learning in discrete models
with arbitrary latent dependencies, and proposed a new score based on the
asymptotic expansion of the marginal likelihood to capture both equality and
inequality constraints in observational data. The authors claim this is the
first score-based method to learn causal models with latent variables.
The different methods discussed in this section so far use a variety of
strategies to perform causal discovery under diverse settings and assumptions.
Therefore, we present a comparative analysis of some of the common methods in
Table LABEL:table-IID-comparison based on their assumptions, output causal
graph, techniques used, advantages, and disadvantages. This comparative
analysis will help readers identify similar and dissimilar methods, and
also help in deciding which method is appropriate for performing causal
discovery given the data and its assumptions.
Table 6: Comparison among some causal discovery algorithms for I.I.D. data.

| Method | Assumptions | Outcome | Technique used | Advantages | Disadvantages |
| --- | --- | --- | --- | --- | --- |
| PC | Faithfulness, sufficiency | CPDAG | Conditional independence (CI) tests | Computationally more feasible for sparse graphs | Lacks scalability as it is less feasible for denser graphs |
| FCI | Causal Markov condition, faithfulness | PAG | CI tests; skeleton-finding step same as PC | Handles latent and selection variables | In the worst case, the number of CI tests performed grows exponentially with the number of variables |
| RFCI | CMC, faithfulness | PAG | Conditional independence tests | Faster variant of FCI; uses fewer CI tests; computationally feasible for high-dimensional sparse graphs | Performs some additional tests before orienting v-structures and discriminating paths |
| GES | Decomposable score function | CPDAG | Score-based greedy search | Faster run time than constraint-based methods | Search space can grow exponentially with the growing number of variables |
| FGS | Weak faithfulness | DAG | Score-based greedy search | Faster variant of GES; reduces scoring redundancy; enables parallelization | Requires high-power computing to run |
| RL-BIC | Decomposable score function, acyclicity | DAG | BIC score-based reinforcement learning search | Model gets feedback to update/correct its search strategy | Scalable only up to a few (around 30) variables |
| Triplet A* | Acyclicity | DAG | A* search combined with BIC score | Can handle both linear Gaussian and non-Gaussian networks; scales up to more than 60 variables | Has complexity issues |
| KCRL | Decomposable score function, unbiased prior knowledge | DAG | Score-based RL search strategy | Considers prior knowledge constraints | Lacks scalability |
| LiNGAM | Linear DGP, no unobserved confounders, non-Gaussian noises | DAG | FCM-based; uses independent component analysis | Determines the direction of every causal arrow; does not require the faithfulness assumption | Estimated results may vary in case of mixed (categorical) data |
| SAM | Acyclicity | DAG | Searches for an FCM using GANs | Good for sparse causal graphs | Not suitable for dense graphs |
| Tabu search | Finite search space, well-defined objective function | DAG | Iteratively modifies the model and evaluates its fit to the data by examining a neighborhood of the current solution | Considers avoiding local optima and also cycle avoidance | Quality of the results depends on the quality of the initial solution and the choice of parameters |
| CAM | Sufficiency, acyclicity | DAG | FCM-based; uses sparse regression techniques and decouples order search among the variables | Establishes consistency of the maximum likelihood estimator for low- and high-dimensional cases | Faces performance and computational challenges as the number of variables increases |
| NOTEARS | Acyclicity, linear dependence between variables | DAG | Models data as a SEM; uses a regularized score function with a least-squares loss | Simple method; easy to implement | Works well mostly for continuous data |
| GAE | Structure learning under additive noise models | DAG | Gradient-based approach that uses a graph autoencoder framework | Can scale up to 100 nodes; lower run time; easily applicable to vector-valued variables; handles non-linear relations well | May not work well for linear causal relations |
| DAG-GNN | Faithfulness, sufficiency | DAG | Gradient-based; uses black-box stochastic optimization solvers to solve the subproblem of maximizing the ELBO | Can handle both discrete and vector-valued variables; capable of capturing complex nonlinear mappings | Assumes acyclic causal relationships, which may not always be the case in real-world scenarios |
| MMHC | Faithfulness, sufficiency, decomposable score function | DAG | Hybrid: uses constraint-based MMPC to initially learn the skeleton of the network, then greedy Bayesian hill-climbing search | Good computational efficiency and scalability; applicable in high-dimensional settings | Scales up better only with a large number of samples |
| FRITL | Faithfulness, non-Gaussianity, latent-confounder assumption | PAG | Hybrid; uses FCI, the Triad condition, and ICA | Works in the presence or absence of latent confounders | Not generalizable to non-linear Gaussian cases |
| Kg2Causal | Presence of a knowledge graph | DAG | Prior knowledge constraints added to a score-based method | Leverages information from existing literature or the domain | Requires a knowledge graph |
## 4 Causal Discovery Algorithms for Time Series Data
Time series data arise when observations are collected over a period of time.
So far the methods that we have discussed are specialized for causal discovery
from I.I.D. or time-independent data. However, often, real-world data in
different domains can be a time series (non-I.I.D. data). For this type of
data, there are different specialized causal discovery approaches based on CI
testing, SEM/FCMs, Granger causality (Granger (1969)), or deep neural
networks. In this section, first, we provide a brief introduction to some of
the common terminologies related to time-series data and temporal causal
discovery. Then, we discuss the notable causal discovery approaches for time-
series data.
###### Definition 7 (Time Series Data)
Time series data is a collection of observations measured over consistent
intervals of time. The observation of a time series variable $X^{j}$ at time
$t$ is denoted by $X^{j}_{t}$.
Examples of time series data include retail sales, stock prices, climate data,
heart rate of patients, brain activity recordings, temperature readings, etc.
Any time series data may have the following _properties_ :
i. _Trend:_ When the data show a long-term rise or fall, a trend is present.
Such long-term increases or decreases in the data might not always be linear.
The trend is said to be changing direction when it switches from an upward
trend to a downward trend.

ii. _Seasonality:_ It refers to the seasonal characteristics of time series data.
Seasonality exists when the data regularly fluctuate over particular time
spans (e.g., daily/weekly/monthly/quarterly/yearly). An example is temperature
data, where the temperature is mostly higher in the summer and lower in the
winter. Time series analyses usually take advantage of the seasonality in
data to develop more robust models.

iii. _Autocorrelation:_ Autocorrelation or self-correlation is the degree of
similarity between a given time series and a lagged version of itself over
successive time intervals. Time series data are usually autocorrelated, i.e.,
the past influences the present and future (Lawton et al. (2001)).

iv. _Stationarity & non-stationarity:_ Stationarity means that the joint probability
distribution of the stochastic process does not change when shifted in time. A
time series is stationary if its causal links are such that for variables
$X^{i}$ and $X^{j}$, if $X^{i}$ $\rightarrow$ $X^{j}$ at any timestamp $t$, then $X^{i}$ $\rightarrow$
$X^{j}$ also holds for all $t^{\prime}$ $\not=$ $t$. This condition does not
hold for a non-stationary time series, where $X^{i}$ $\rightarrow$ $X^{j}$ at a particular
time $t$ need not necessarily hold at any other time stamp $t^{\prime}$.
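The autocorrelation and stationarity properties above can be illustrated numerically. The following is a minimal numpy sketch (the AR coefficient, series length, and trend slope are arbitrary illustrative choices): it measures the lag-1 autocorrelation of an AR(1) process and shows first-order differencing, the standard transformation for removing a linear trend before further analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) process: stationary but autocorrelated.
n = 2000
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal()

def lag1_autocorr(series):
    """Correlation between the series and itself shifted by one step."""
    return np.corrcoef(series[:-1], series[1:])[0, 1]

# For this AR(1) process the lag-1 autocorrelation is close to 0.8.
rho = lag1_autocorr(x)

# Adding a linear trend makes the series non-stationary;
# first-order differencing removes the trend again.
trended = x + 0.05 * np.arange(n)
diffed = np.diff(trended)
```

Second-order differencing (`np.diff(trended, n=2)`) would similarly remove a quadratic trend.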
Let $X_{1:t}$ = {$X^{1}_{1:t}$, $X^{2}_{1:t}$, …, $X^{n}_{1:t}$} be a
multivariate time series with $n$ variables and $t$ time steps. At any
particular timestamp $t$, the state of the $n$ variables can be represented as
$X_{t}$ = {$X^{1}_{t}$, $X^{2}_{t}$, …, $X^{n}_{t}$}. The past of a
variable $X^{j}_{t}$ is denoted by $X^{j}_{1:t-1}$. The parent set of a
variable includes all the nodes with an edge towards it. The goal of any
temporal causal discovery approach is to discover the causal relationships
between the time series variables. Any time series causal graph may have the
following _types of causal relationships/edges_ : (i) Instantaneous edges, and
(ii) Lagged edges.
Figure 24: Types of causal relationships: Instantaneous edges (red), Lagged
edges (blue), and Changing modules (green).
###### Definition 8 (Instantaneous Causal Effect)
When the delay between cause and effect is 0 timesteps, i.e. causal effects
are of the form $X^{i}_{t}$ $\rightarrow$ $X^{j}_{t}$ or $X^{i}_{t}$
$\rightarrow$ $X^{i}_{t}$ (self-causation), then it is known as an
instantaneous or contemporaneous causal relationship/effect (Nauta et al.
(2019)).
###### Definition 9 (Lagged Causal Effect)
When the delay between cause and effect is at least 1 or more timesteps (i.e.
causal effects of the form $X^{i}_{t-}$ $\rightarrow$ $X^{j}_{t}$ or
$X^{i}_{t-}$ $\rightarrow$ $X^{i}_{t}$), then it is known as a lagged causal
relationship/effect. That is, a lagged causal effect occurs when a variable
causes another variable or itself with a time lag = 1 or more.
In Figure 24, the red-colored edges represent the instantaneous causal effect
(relationships among the variables at the same time step), and the blue edges
represent the lagged causal effect. The green edges represent a special form
of temporal causal relationships known as the _changing modules_ (CM). The CMs
represent the direct effect of a time stamp on a variable (e.g. $t\rightarrow
X^{1}_{t}$ in Figure 24). Details on CM are available in Ferdous et al.
(2023b).
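As a toy illustration of the two edge types in Definitions 8 and 9 (the coefficients and variable names below are arbitrary choices, not taken from any cited method), one can simulate a three-variable system with one lagged and one instantaneous edge and check where the dependence actually appears:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Toy structural model with both edge types:
#   lagged:        X1_{t-1} -> X2_t   (delay of one timestep)
#   instantaneous: X2_t     -> X3_t   (delay of zero timesteps)
x1 = rng.normal(size=n)
x2 = np.zeros(n)
x3 = np.zeros(n)
for t in range(1, n):
    x2[t] = 0.9 * x1[t - 1] + 0.1 * rng.normal()
    x3[t] = 0.9 * x2[t] + 0.1 * rng.normal()

# The lagged dependence shows up only at the correct time shift:
lagged = np.corrcoef(x1[:-1], x2[1:])[0, 1]          # strong: X1_{t-1}, X2_t
contemporaneous = np.corrcoef(x1[1:], x2[1:])[0, 1]  # weak:   X1_t,     X2_t
instantaneous = np.corrcoef(x2[1:], x3[1:])[0, 1]    # strong: X2_t,     X3_t
```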
The causal graphs produced by different temporal causal discovery algorithms
vary based on the details of the relationships they represent. Any temporal
causal discovery algorithm may produce any of the following two types of
temporal causal graph as its outcome: a full-time causal graph or a summary
causal graph (see Figure 25).
###### Definition 10 (Full-time Causal Graph)
A full-time causal graph represents both the instantaneous ($X^{i}_{t}$
$\rightarrow$ $X^{j}_{t}$ or $X^{i}_{t}$ $\rightarrow$ $X^{i}_{t}$) and time-
lagged ($X^{i}_{t-}$ $\rightarrow$ $X^{j}_{t}$ or $X^{i}_{t-}$ $\rightarrow$
$X^{i}_{t}$) causal edges where the lag between a cause and effect is
specified in the graph. A full-time causal graph may sometimes present the
changing modules as well.
Figure 25: Full-time causal graph (Left) & Summary causal graph (Right)
Figure 25 (left) represents a full-time causal graph where both instantaneous
relations (e.g. $X^{1}_{t}$ $\rightarrow$ $X^{2}_{t}$, $X^{1}_{t-1}$
$\rightarrow$ $X^{2}_{t-1}$), and lagged relations (e.g. $X^{2}_{t-1}$
$\rightarrow$ $X^{3}_{t}$, $X^{3}_{t-1}$ $\rightarrow$ $X^{3}_{t}$) among the
variables are depicted.
###### Definition 11 (Summary Causal Graph)
A summary causal graph is a reduced version of a full-time causal graph where
each lagged node represents the entire past ($X^{j}_{t-}$) of its
corresponding instantaneous node ($X^{j}_{t}$), and the exact time lag between
the cause and effect is not specified in the graph.
In the following subsections, we describe briefly some of the notable causal
discovery algorithms that focus on time series data. Figure 26 presents a
taxonomy of some of the discussed approaches.
forked edges, for tree=draw,align=left,l=1.7cm,edge=-latex [Causal Discovery
Algorithms
(for time series data) [Constraint-based [tsFCI (4.1.1)
PCMCI (4.1.2)
LPCMCI∗ (4.1.4)
CDANs (4.1.6)] ] [FCM-based [VarLiNGAM (4.2.1)
TiMINo (4.2.2)] ] [Gradient-based [DYNOTEARS⋄ (4.3.1)
NTS-NOTEARS⋄∗ (4.3.2)] ] [Granger Causality-based [GVAR (4.4.1)
NAVAR (4.4.2)
ACD (4.4.3)] ] [Miscellaneous [oCSE (4.5.1)
TCDF (4.5.2)
NBCB (4.5.3)
PCTMI (4.5.4)
] ] ]
Figure 26: Taxonomy of some of the discussed causal discovery approaches for
time series data. The approaches are classified based on their core
contribution or the primary strategy they adopt for causal structure recovery.
The approaches that can leverage prior knowledge are marked by an $\ast$
symbol. Some of the gradient-based approaches that use a score function are
indicated by a $\diamond$ symbol. They are classified as gradient-based
primarily because they use gradient descent for optimization; however, they
can also be viewed as score-based methods since they compute data-likelihood
scores along the way.
### 4.1 Constraint-based
#### 4.1.1 $ts$FCI
The algorithm time series FCI or tsFCI (Entner & Hoyer (2010)) adapts the Fast
Causal Inference (Spirtes et al. (2000a)) algorithm (developed for the causal
analysis of non-temporal variables) to infer causal relationships from time
series data. It works in two phases: (i) an adjacency phase, and (ii) an
orientation phase. It makes use of temporal priority and consistency
throughout time to orient edges and restrict conditioning sets. It provides a
window causal graph, and an advantage is that it _can detect lagged hidden
confounders_. However, a disadvantage is that it cannot model cyclic
contemporaneous causation or instantaneous relationships. A code
package that implements tsFCI is available at
https://sites.google.com/site/dorisentner/publications/tsfci.
#### 4.1.2 PCMCI
A problem with large-scale time series data is that, although adding more
variables can make causal analysis more interpretable, additional variables
that have no significant effect on the causal model reduce the power of the
analysis, and genuine causal relations may be overlooked. Moreover, at large
dimensions, certain nonlinear tests even lose
their ability to limit false positive rates (FPRs). Runge et al. (2019)
proposed a two-stage algorithm PCMCI that can overcome this problem. In
Step-1, the model selects conditions using $PC_{1}$ (a variant of the skeleton
discovery part of the PC algorithm) to remove irrelevant variables, which
addresses the issue of low power in the causal discovery process. In Step-2, the
momentary conditional independence (MCI) test is used which helps to reduce
the FPR even when the data is highly correlated. The MCI test measures if two
variables are independent or not given their parent sets (see Equation 13).
$X^{i}_{t-\tau} \perp\!\!\!\perp X^{j}_{t} \mid P_{A}(X^{j}_{t}), P_{A}(X^{i}_{t-\tau})$ (13)
PCMCI assumes that the data is stationary, has time-lagged dependencies, and
also assumes causal sufficiency. Even when the stationary assumption is
violated (probably by obvious confounders), PCMCI still provides a more robust
performance than Lasso regression or the PC algorithm. However, for highly
predictable systems where little new information is produced at each time
step, PCMCI is not a good fit. Python implementation of PCMCI is available in
the Tigramite package (https://github.com/jakobrunge/tigramite).
Figure 27: Steps involved in the PCMCI method for time series causal
discovery.
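For linear-Gaussian data, the CI statement in Equation 13 reduces to a vanishing partial correlation given the two parent sets. The sketch below uses a static causal chain to stand in for the time-lagged parents (the Tigramite package implements the actual MCI test; all names and coefficients here are illustrative) and shows the residual-based partial-correlation idea behind such linear CI tests:

```python
import numpy as np

def partial_corr(a, b, cond):
    """Correlation of a and b after regressing out the conditioning
    variables (the arrays in `cond`) from both -- the workhorse behind
    linear CI tests such as partial correlation in the PCMCI framework."""
    Z = np.column_stack([np.ones(len(a))] + list(cond))
    ra = a - Z @ np.linalg.lstsq(Z, a, rcond=None)[0]
    rb = b - Z @ np.linalg.lstsq(Z, b, rcond=None)[0]
    return np.corrcoef(ra, rb)[0, 1]

rng = np.random.default_rng(2)
n = 4000
# Chain X -> Y -> Z: given its parent Y, Z is independent of X,
# analogous to conditioning on parent sets in the MCI test.
x = rng.normal(size=n)
y = 0.8 * x + 0.2 * rng.normal(size=n)
z = 0.8 * y + 0.2 * rng.normal(size=n)

raw = np.corrcoef(x, z)[0, 1]       # clearly nonzero without conditioning
mci_like = partial_corr(x, z, [y])  # close to zero given the parent
```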
#### 4.1.3 PCMCI+
PCMCI+ (Runge (2020)) is an extension of the PCMCI algorithm to discover
contemporary or instantaneous causal links. PCMCI+ also assumes causal
sufficiency like the PCMCI algorithm. It is also a two-stage algorithm where
in the first stage, irrelevant edges from the causal model are eliminated.
Unlike PCMCI, the edges are removed separately for lagged and contemporary
conditioning sets where the contemporary phase employs more CI tests than the
lagged phase. In the second stage, PCMCI+ employs the notion of momentary
conditional independence (MCI) to improve the selection of conditioning sets
for the various CI tests, improving their autocorrelation calibration, and
boosting their detection power. The results show that when there is high
autocorrelation in the data, PCMCI+ can achieve better performance in terms of
higher recall, lower false positives, and faster execution compared to the PC
algorithm. For lower autocorrelation, PCMCI+ performs almost similarly to PC.
Implementation of PCMCI+ is also available in the Tigramite package
(https://github.com/jakobrunge/tigramite).
#### 4.1.4 LPCMCI
Latent PCMCI (LPCMCI) is a constraint-based causal discovery algorithm to
determine causal relationships from large-scale time series data (Gerhardus &
Runge (2020)). This is another extension of the PCMCI algorithm as it can
discover causal relationships even in the presence of latent confounders.
Moreover, it gives the flexibility to use the model when the data is linear or
nonlinear, and also when the data has lagged or contemporary conditioning
sets. The authors identified that when the CI tests have a low effect size,
existing techniques like FCI suffer from low recall in the presence of
autocorrelation. They demonstrated that this issue can be solved by including
causal parents in the conditioning sets. By utilizing the orientation rules,
these parents can be identified as early as in the edge removal stage. The
results show that the proposed LPCMCI method can achieve higher recall than
the baseline model SVAR-FCI. However, LPCMCI cannot differentiate all members
of the Markov equivalence class, and when the faithfulness assumption does not
hold, LPCMCI might lead to an incorrect conclusion. Along with PCMCI and PCMCI+, the
Python code of LPCMCI is also available in the Tigramite GitHub package.
#### 4.1.5 CD-NOD
Many existing approaches assume that the causal model is static, and
therefore, there will be a fixed joint distribution of the observed data.
However, these methods fail when the underlying data changes over time, and
causal parameters vary during the period. Huang et al. (2020) proposed a
causal discovery method that assumes that the parameter of the causal model
can change over time or different datasets, and they named the method CD-NOD,
Constraint-based Causal Discovery from Heterogeneous/Nonstationary Data. The
proposed method can determine causal direction by taking advantage of
distribution shifts, and these distribution changes, in the presence of
stationary confounders, are helpful for causal discovery. The distribution
shifts can be either time or domain indexes and are denoted by a surrogate
variable $C$. Broadly, CD-NOD has two phases where in the first phase it
recovers the causal skeleton $S_{G}$, and in the second phase it orients the
edges as per some orientation rules. Given that the causal model offers a
concise summary of how the joint distribution changes, they demonstrated that
distribution shift contains important information for causal discovery.
Recently, researchers discovered that this idea could help solve machine
learning problems of domain adaptation and forecasting in nonstationary
situations (Schölkopf et al. (2012); Zhang et al. (2013)). The conducted
experiments in this study demonstrate the changes of causal influence between
the different states of brain functions, and the empirical results show that
CD-NOD has improved precision and F1 score. However, they did not consider that
the causal directions might flip, and that the power of conditional independence
tests may be reduced by the distribution shifts. The algorithm’s source
code is available in the following link: https://github.com/Biwei-
Huang/Causal-Discovery-from-Nonstationary-Heterogeneous-Data.
Figure 28: Illustration of CD-NOD’s phase-1.
#### 4.1.6 CDANs
Ferdous et al. (2023b) introduced a constraint-based causal discovery approach
called CDANs for autocorrelated and non-stationary time series data that
handles high-dimensionality issues.
instantaneous causal edges along with changing modules that vary over time. By
optimizing the conditioning sets in a constraint-based search, and also
considering lagged parents instead of conditioning on the entire past, it
tries to address the high-dimensionality problem. CDANs first detects the
lagged adjacencies, then identifies the changing modules and instantaneous
adjacencies, and finally determines the causal directions. The code to implement
this method is available at https://github.com/hferdous/CDANs. An extended
version of this study is presented in Ferdous et al. (2023a), where the method
called eCDANs is introduced that is capable of detecting lagged and
instantaneous causal relationships along with temporal changes. The method
eCDANs addresses high dimensionality by optimizing the conditioning sets while
conducting CI tests and identifies the changes in causal relations by
introducing a proxy variable to represent time dependency.
### 4.2 Functional Causal Model (FCM)-based
#### 4.2.1 VarLiNGAM
VarLiNGAM (Hyvärinen et al. (2010)) combines the non-Gaussian instantaneous
models with autoregressive models and shows that a non-Gaussian model is
identifiable without prior knowledge of network structure. It estimates both
instantaneous and lagged causal effects in models that are an example of
structural vector autoregressive (SVAR) models. These models are a combination
of structural equation models (SEM) and vector autoregressive (VAR) models.
VarLiNGAM also shows that taking instantaneous influences into account can
change the values of the time-lagged coefficients to a great extent. Thus,
neglecting instantaneous influences can lead to misleading interpretations of
causal effects. It also assesses the significance of the estimated causal
relations. An implementation of this method is available at:
https://lingam.readthedocs.io/en/latest/tutorial/var.html.
#### 4.2.2 T$i$MINo
Time-series Models with Independent Noise (TiMINo) (Peters et al. (2013))
studies a class of restricted structural equation models (SEMs) for time-
series data that include nonlinear and instantaneous effects. It assumes
$X_{t}$ to be a function of all direct causes and some noise variable, the
collection of which is supposed to be jointly independent. The algorithm is
based on unconditional independence tests and is applicable to multivariate,
linear, nonlinear, and instantaneous interactions. If the model assumptions
are not satisfied by the data, TiMINo remains mostly undecided instead of
making wrong causal decisions. While methods like Granger causality are built
on the asymmetry of time direction, TiMINo additionally takes into account
identifiability emerging from restricted SEMs. This leads to a straightforward
way of dealing with unknown time delays in different time series. An
implementation of TiMINo is available in this repository:
https://github.com/ckassaad/causal_discovery_for_time_series.
### 4.3 Gradient-based
#### 4.3.1 DYNOTEARS
Pamfil et al. (2020) proposed the Dynamic NOTEARS (DYNOTEARS) which is a
structure learning approach for dynamic data that simultaneously estimates
contemporaneous (intra-slice) and time-lagged (inter-slice) relationships
between variables in a time-series. DYNOTEARS revolves around minimizing a
penalized loss subject to an acyclicity constraint. The optimization finds the
conditional dependencies that are best supported by the data. It leverages
insight from the approach NOTEARS (Zheng et al. (2018)) which uses an
algebraic characterization of acyclicity in directed graphs for static data.
The assumptions made by DYNOTEARS include that the structure of the network is
fixed through time, and is identical for all time series in the data. This
approach is scalable to high-dimensional datasets. An implementation of this
approach is available in the CausalNex library
(https://github.com/quantumblacklabs/causalnex), and also at
https://github.com/ckassaad/causal_discovery_for_time_series.
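DYNOTEARS reuses the NOTEARS algebraic acyclicity characterization $h(W) = \mathrm{tr}(e^{W \circ W}) - d$, which is zero exactly when the weighted adjacency matrix $W$ contains no directed cycle. Below is a small numpy sketch using a truncated power series rather than the matrix exponential (the truncation is an assumption of this sketch; it is exactly zero for DAGs, since $W \circ W$ is then nilpotent, and positive whenever a directed cycle exists):

```python
import numpy as np

def acyclicity(W):
    """Truncated power-series version of the NOTEARS constraint
    h(W) = tr(exp(W o W)) - d. For a d-node graph, every term
    tr((W o W)^k) vanishes when the graph is acyclic (W o W is
    nilpotent), so the returned value is 0 iff W has no cycle."""
    d = W.shape[0]
    A = W * W                      # Hadamard square: nonnegative weights
    term = np.eye(d)
    h = 0.0
    fact = 1.0
    for k in range(1, d + 1):
        term = term @ A
        fact *= k
        h += np.trace(term) / fact
    return h

dag = np.array([[0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [0.0, 0.0, 0.0]])   # 1 -> 2 -> 3, acyclic
cyc = dag.copy()
cyc[2, 0] = 1.0                     # adding 3 -> 1 creates a cycle
```

During optimization, NOTEARS-style methods drive this quantity to zero via an augmented-Lagrangian scheme while minimizing the data-fit loss.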
#### 4.3.2 NTS-NOTEARS
NTS-NOTEARS (Sun et al. (2021)) is a causal discovery method for time series
data that uses 1-D convolutional neural networks (CNNs) to capture linear,
nonlinear, lagged, and instantaneous relations among variables in a time
series data along with ensuring the acyclicity property of a DAG. It extends
the continuous optimization-based approach NOTEARS for learning nonparametric
instantaneous DAGs, and adapts the acyclicity constraint from that approach.
It assumes that there are no latent confounders in the data, and the
underlying data-generating process is fixed and stationary over time. NTS-NOTEARS
is faster than constraint-based methods that rely on
nonlinear conditional independence tests. It incorporates prior knowledge into
the learning process through optimization constraints on the
convolutional layers for better causal discovery. Its implementation is
available at: https://github.com/xiangyu-sun-789/NTS-NOTEARS/.
### 4.4 Granger Causality (GC)-based
Granger (1969) investigated the causal relationships between the variables in
a time series data which is known as Granger Causality (GC). It is based on
the basic assumption that _causes precede their effects_. The author defines
GC as follows: A time series variable $X^{i}$ causes $X^{j}$, if the
probability of $X^{j}$ conditional on its own past, and the past of $X^{i}$
(besides the set of the available information) does not equal the probability
of $X^{j}$ conditional on its own past alone. The GC test cannot be performed
directly on non-stationary data; such data must first be made stationary,
e.g., by first-order or second-order differencing. Granger causality can be
used when there are no
latent confounders, and also, no instantaneous effects exist, i.e., no
variable causes another variable at the same time stamp.
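In its simplest bivariate, linear form, the GC test above compares the residual sum of squares of a restricted autoregression (the effect's own past) with that of a full model that also includes the lagged cause. A minimal numpy sketch follows (the lag order, coefficients, and series length are illustrative choices, not taken from Granger (1969)):

```python
import numpy as np

def granger_fstat(cause, effect, lags=2):
    """F-statistic for the Granger test: does the past of `cause`
    improve an autoregressive prediction of `effect` beyond what
    the effect's own past already explains?"""
    n = len(effect)
    y = effect[lags:]
    own = np.column_stack([effect[lags - k:n - k] for k in range(1, lags + 1)])
    other = np.column_stack([cause[lags - k:n - k] for k in range(1, lags + 1)])
    ones = np.ones((len(y), 1))

    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid

    rss_restricted = rss(np.hstack([ones, own]))       # own past only
    rss_full = rss(np.hstack([ones, own, other]))      # + past of cause
    dof = len(y) - (1 + 2 * lags)
    return ((rss_restricted - rss_full) / lags) / (rss_full / dof)

rng = np.random.default_rng(3)
n = 3000
x = rng.normal(size=n)
y_ts = np.zeros(n)
for t in range(1, n):
    y_ts[t] = 0.5 * x[t - 1] + 0.3 * y_ts[t - 1] + 0.5 * rng.normal()

forward = granger_fstat(x, y_ts)   # X Granger-causes Y: large statistic
reverse = granger_fstat(y_ts, x)   # no causation this way: near the null
```

In practice one would compare the statistic against an F distribution (e.g., via `statsmodels.tsa.stattools.grangercausalitytests`) rather than eyeballing magnitudes.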
#### 4.4.1 GVAR
Generalized Vector AutoRegression (GVAR) (Marcinkevičs & Vogt (2021)) is a
framework for inferring multivariate Granger causality under nonlinear
dynamics based on autoregressive modeling with self-explaining neural
networks. It allows the detection of signs of Granger-causal effects and
inspection of their variability over time in addition to relational inference.
It focuses on two aspects: first, inferring Granger-causal relationships in
multivariate time series under nonlinear dynamics, and second, inferring signs
of Granger-causal relationships. A reproducible code of the approach is
available at: https://github.com/i6092467/GVAR.
#### 4.4.2 NAVAR
Bussmann et al. (2021) proposed the approach Neural Additive Vector
AutoRegression (NAVAR) which is a causal discovery approach for capturing
nonlinear relationships using _neural networks_. It is particularly trained
using deep neural networks that extract the (additive) Granger causal
influences from the time evolution in multivariate time series. NAVAR assumes
an additive structure where the predictions depend linearly on independent
nonlinear functions of the individual input variables. These nonlinear
functions are modeled using neural networks. The additive structure of NAVAR
allows scoring and ranking the causal relationships. Currently, NAVAR is
implemented with MLPs and LSTMs as the backbone using Python which is
available at: https://github.com/bartbussmann/NAVAR. However, more complex
architectures such as dilated CNNs and transformers can also be used to model
NAVAR.
#### 4.4.3 ACD
Most causal discovery algorithms applied for time-series analysis find a
causal graph for the data, and then refit the model whenever new samples do
not fit the underlying causal graph. But in many cases, samples share
connections among them, for example, the brain activity of different regions
at different times. When the algorithms fit a new model, this dynamic nature
between the samples is lost, and the actual causal relations can no longer be
identified. To solve this problem, Löwe et al. (2022) proposed the Amortized
Causal Discovery (ACD) technique which can identify the causal relations when
samples are from different causal graphs but share common dynamics. ACD
consists of an encoder and a decoder. The encoder predicts the causal graph’s
# Unsupervised Domain Adaptation of Black-Box Source Models
Haojian Zhang1, Yabin Zhang2, Kui Jia1, Lei Zhang2
1South China University of Technology 2The Hong Kong Polytechnic University
eehjzhang<EMAIL_ADDRESS>
kuijia<EMAIL_ADDRESS>
###### Abstract
Unsupervised domain adaptation (UDA) aims to learn models for a target domain
of unlabeled data by transferring knowledge from a labeled source domain. In
the traditional UDA setting, labeled source data are assumed to be available
for adaptation. Due to increasing concerns for data privacy, source-free UDA
is highly appreciated as a new UDA setting, where only a trained source model
is assumed to be available, while labeled source data remain private. However,
trained source models may also be unavailable in practice since source models
may have commercial values and exposing source models brings risks to the
source domain, e.g., problems of model misuse and white-box attacks. In this
work, we study a subtly different setting, named Black-Box Unsupervised Domain
Adaptation (B2UDA), where only the application programming interface of source
model is accessible to the target domain; in other words, the source model
itself is kept as a black-box one. To tackle B2UDA, we propose a simple yet
effective method, termed Iterative Learning with Noisy Labels (IterLNL). With
black-box models as tools of noisy labeling, IterLNL conducts noisy labeling
and learning with noisy labels (LNL), iteratively. To facilitate the
implementation of LNL in B2UDA, we estimate the noise rate from model
predictions of unlabeled target data and propose category-wise sampling to
tackle the unbalanced label noise among categories. Experiments on benchmark
datasets show the efficacy of IterLNL. Given neither source data nor source
models, IterLNL performs comparably with traditional UDA methods that make
full use of labeled source data.
## 1 Introduction
Although deep models have achieved success on various tasks, it is difficult
to generalize the model learned from labeled training data to a target domain
of slightly shifted data distribution. At the same time, it is expensive to
collect a new target dataset with a large number of labeled training data.
Therefore, unsupervised domain adaptation (UDA) [30, 24, 5] is introduced to
learn the target model by transferring knowledge from the labeled source
domain to the unlabeled target domain. Motivated by seminal theories [3, 49],
popular UDA methods [24, 5, 36, 25, 50] aim to learn domain-invariant feature representations. The underlying motivation is that the source
classifier could be safely applied to the target domain once domain invariant
feature representations are achieved. In UDA, labeled source data are assumed
to be available for the target domain.
Although remarkable success has been achieved in UDA, increasing concerns for
data privacy pose new challenges to this task. Specifically, data of source
and target domains are typically captured and stored on different devices and
contain private information. Thus it is risky to expose source data to the
target domain and vice versa. In other words, labeled source data may not be
available for the target domain, impeding the application of popular UDA
methods [24, 5, 36, 25, 50]. For this reason, a novel task, source-free UDA,
is introduced [45, 13, 23] to facilitate the model adaptation and protect the
source data privacy simultaneously.
Unlike the vanilla UDA, a well-trained source model, instead of labeled source
data, is provided to unlabeled target domain in the source-free UDA [45, 23].
Specifically, a white-box source model is available for the target domain;
thus, we term this task as white-box unsupervised domain adaptation (WBUDA) to
distinguish it from our investigated one in later paragraphs. In WBUDA, the
adaptation could be achieved by fine-tuning the source model on unlabeled
target data with well-designed objectives [45, 23].
However, the white-box source model is not always given in practice. Most
valuable models on cloud services (e.g., Google Cloud) are sealed as
application programming interface (API), where only the input-output interface
of a model is available and the model itself is kept as a black-box one. As
stated in [29], releasing an API instead of a white-box model could
commercialize the technology, reduce model misuse, and make the model convenient for the public to use; white-box attacks [38, 39] may also be avoided. For all the reasons mentioned above, white-box source models are
probably unavailable in practice, which hinders the application of WBUDA
methods.
In this work, we study a subtly different setting of source-free UDA, where
only the API of the source model is accessible for the target domain. In other
words, the source model itself is kept as a black-box one; thus, we term this
task as black-box unsupervised domain adaptation (B2UDA). A few recent
attempts [46, 4, 27] have been made to tackle the B2UDA problem, but they achieve less satisfactory results. In this work, we propose a simple yet effective
algorithmic framework, termed Iterative Learning with Noisy Labels (IterLNL).
With black-box models as tools of noisy labeling, IterLNL conducts noisy
labeling and LNL iteratively. Specifically, we first get model predictions of
target data based on the black-box model and obtain their noisy labels as the
category with the maximum prediction probability. We note that the label noise
via the black-box model is highly unbalanced among categories (cf. Figure
2(c)), which is significantly different from the simulated and balanced ones
in LNL [40, 10]; such unbalanced label noise hinders the application of state-
of-the-art LNL methods [15, 10], inspiring the category-wise sampling
strategy. To facilitate the implementation of LNL in B2UDA, we also estimate
the noise rate from model predictions of unlabeled target data. Experiments on
benchmark datasets confirm the efficacy of our method. Notably, our IterLNL
performs comparably with methods of traditional UDA setting where labeled
source data are fully available.
## 2 Related Work
Source Free UDA. Traditional UDA [24, 5] assumes that labeled source data are
available for the target domain. Due to increasing concerns for data privacy,
source-free UDA [23, 22, 13, 17, 18, 45] is highly appreciated as a new UDA
setting, where only a source model is available for the target domain while
labeled source data remain private. Source free UDA methods typically fine-
tune the source model for the target domain with unlabeled target data [23,
17, 45, 22]. Specifically, Liang _et al._ [23] fine-tune the source model with
pseudo-labeling and information maximization between target data and their
predictions; a weight constraint is adopted in [22] to encourage similarity
between the source model and adapted target model. Additionally, source data
and source-style target data are respectively generated in [18] and [13] using
the statistics stored in the source model. The white-box source model
is required in the methods above, but it may be unavailable due to commercial and/or safety considerations [29]. To this end, we study a subtly
different B2UDA setting, where only the API of source model is accessible for
the target domain; in other words, the source model itself is kept as a black-
box one. We note that several attempts have been made on the B2UDA problem
recently. Based on pre-trained features, a denoising auto-encoder is used for
prediction of target labels in [4] and an encoder-decoder framework is used in
[46] where encoded target features are aligned to reference Gaussian
distributions; however, both methods obtain less satisfactory results on benchmark datasets. Morerio _et al._ [27] first train a conditional
Generative Adversarial Network (cGAN) with unlabeled target data and their
source predictions, and then learn the target model with samples generated by
cGAN; its performance is conditioned on the high-quality samples generated by
cGAN, thus limiting its general usage in UDA tasks. In general, the B2UDA
problem is not well addressed yet. In the present work, we propose IterLNL and
conduct thorough experiments on popular UDA benchmarks; results show that our
proposed method works successfully for B2UDA.
Figure 1: An illustration of different UDA settings. Source data and source
model are respectively required in the traditional UDA and WBUDA settings. In
contrast, B2UDA requires a black-box access to the source model only, which is
the least restrictive condition to apply domain adaptation to the unsupervised
target data.
Learning with Noisy Labels (LNL). LNL aims to learn models with noisy labeled
data robustly. Seminal LNL methods include estimating noise transition matrix
to transfer observed noisy labels to latent clean ones [37, 41], refining the
objective function [7, 51], and avoiding overfitting noisy labels with
memorization effects of neural networks [15, 10], as summarized in [9].
Although we also adopt LNL technical tools for the B2UDA problem, the task
divergence between LNL and B2UDA raises new problems. Specifically, as
predictions of black-box models are adopted as noisy labels, the noise rate is
unknown and the label noise is unbalanced among categories in B2UDA (cf.
Figure 2(c)), while simulated and balanced noisy labels with known noise rate
are typically adopted in LNL [40, 8, 10]. Besides, noisy labels are given and
fixed in LNL, while we obtain noisy labels from predictions of black-box
models and could update noisy labels via updating models, inspiring the
iterative learning strategy.
Hypothesis Transfer Learning (HTL). HTL [19] utilizes source hypotheses (i.e.,
source models) to assist learning in the target domain. Different from the
B2UDA task, HTL methods [19, 47, 1] demand at least a small number of labeled target samples and typically require source hypotheses to be white-box ones, hindering their application to B2UDA tasks.
## 3 Problems and the Proposed Method
Given unlabeled target data
$\mathcal{T}=\\{\mathbf{x}^{t}_{i}\\}_{i=1}^{n^{t}}$ sampled from a
distribution $\mathcal{Q}$, our problem of interest is to learn a model
$F:\mathcal{X}^{t}\to[0,1]^{K}$ such that the empirical target risk
$\frac{1}{n^{t}}\sum_{i=1}^{n^{t}}[\mathcal{L}(F(\mathbf{x}_{i}^{t}),y_{i}^{t})]$
(or ideally, the expected risk
$\mathbb{E}_{(\mathbf{x}^{t},y^{t})\in\mathcal{Q}}[\mathcal{L}(F(\mathbf{x}^{t}),y^{t})]$)
could be minimized, where $K$ is the category number, $\mathcal{L}$ is the
loss function of the task, and $y^{t}_{i}\in\\{1,\dots,K\\}$,
$i=1,\dots,n^{t}$, is the target label to be estimated. Depending on how much
knowledge one may have from a source domain, the problem can fall into different
established realms of unsupervised learning [42], unsupervised domain
adaptation (UDA) [24, 5], and source-free UDA [45, 23]. While the first one assumes no source knowledge and belongs to the foundations of machine learning, in this work we focus on the different problem settings of UDA.
Unsupervised Domain Adaptation. Labeled source data
$\mathcal{S}=\\{\mathbf{x}^{s}_{i},y^{s}_{i}\\}_{i=1}^{n^{s}}$ sampled from a
distribution $\mathcal{P}$ are assumed to be available to help the learning
with unlabeled target data $\mathcal{T}$ in UDA, leading to the following
objective:
$F^{t}\Leftarrow\mathcal{O}_{\textrm{UDA}}(\mathcal{T},\mathcal{S}),$ (1)
where $F^{t}$ is the expected target model. The most popular methods [24, 5]
for UDA (1) are to learn domain invariant feature representations, then the
classifier learned from labeled source data $\mathcal{S}$ could be safely
applied to target data.
White-box Unsupervised Domain Adaptation. Source-free UDA [22, 23] has been proposed recently due to increasing concerns for data privacy. Indeed, we are in an era
of cloud computing, and source and target data are usually captured and
privately stored on different devices; it is thus risky to expose source data
for transferable use to the target domain. Source-free UDA typically proposes
to use a trained white-box source model $F^{s}$, instead of the labeled source
data $\mathcal{S}$, to accomplish the UDA objective, which is formalized as:
$\displaystyle
F^{t}\Leftarrow\mathcal{O}_{\textrm{WBUDA}}(\mathcal{T},F^{s}),$ (2)
where $F^{s}=\operatorname*{arg\,min}_{F}\mathcal{L}_{task}(F,\mathcal{S})$
and the task loss $\mathcal{L}_{task}$ is typically instantiated as:
$\mathcal{L}_{task}(F,\mathcal{S})=\frac{1}{n^{s}}\sum_{i=1}^{n^{s}}-\log(F_{y_{i}^{s}}(\mathbf{x}_{i}^{s})),$
(3)
where $F_{k}(\mathbf{x})$ stands for the $k^{th}$ entry of $F(\mathbf{x})$.
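As a minimal NumPy-style sketch of the task loss in (3), assuming `F` is a callable that returns one row of class probabilities per sample (an illustrative stand-in, not the paper's actual model):

```python
import numpy as np

def task_loss(F, xs, ys):
    """Empirical task loss of eq. (3): mean negative log-probability
    that model F assigns to the ground-truth class of each sample."""
    probs = F(xs)                      # (n, K) rows of class probabilities
    n = probs.shape[0]
    return float(-np.log(probs[np.arange(n), ys]).mean())
```

For a batch with probabilities `[[0.9, 0.1], [0.2, 0.8]]` and labels `[0, 1]`, this averages `-log(0.9)` and `-log(0.8)`.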
Since white-box source model $F^{s}$ is required in (2), we term it as white-
box unsupervised domain adaptation (WBUDA) to distinguish it from our
investigated one in follow paragraphs. In WBUDA, the target model is typically
achieved by fine-tuning the white-box source model $F^{s}$ on unlabeled target
data $\mathcal{T}$ using proper objectives, e.g., the pseudo labeling and
information maximization losses in [23].
(a) Pair ($\epsilon$=0.45)
(b) Symmetry ($\epsilon$=0.5)
(c) VisDA-2017
(d) Rescale curve (9)
Figure 2: (a)-(c): Transition matrices [31, 34] of different noise types,
where the simulated (a) pair flipping [10] and (b) symmetry flipping [40] are
widely adopted in LNL works [10, 44], and (c) presents the realistic noise
matrix in the VisDA-2017 dataset based on the black-box source model
$\widehat{F}^{s}$. The value in row $r$, column $c$ represents the probability
with which samples of category $r$ are assigned with label $c$. In all
figures, deeper colour indicates a larger value and all values are rounded to
the level of $0.01$ (zoom in to see the exact values). The label noise in
(c) VisDA-2017 is significantly unbalanced among categories, where the noise
rates of $4$-th, $7$-th and $9$-th categories are less than 0.2 while the
noise rates of $6$-th and $12$-th categories are more than 0.96. (d)
Illustration of the rescale curve (9) with different $\kappa$.
Black-box Unsupervised Domain Adaptation. Although source data remain private
in the WBUDA mentioned above [22, 23], the required white-box source model may
not be available in practice. Most valuable models on cloud services (e.g.,
Google Cloud) are sealed as APIs, where only input-output interfaces are
available. As stated in [29], releasing an API instead of a white-box model
could commercialize the technology, reduce model misuse, and make the model convenient for the public to use; white-box attacks [38, 39] could also be avoided.
Considering above advantages of releasing APIs, we investigate a subtly
different setting of source-free UDA, where only the API of a source model is
accessible for the target domain and the source model itself is kept as a
black-box one. We denote this task as black-box unsupervised domain adaptation
(B2UDA), which is formulated as:
$\displaystyle
F^{t}\Leftarrow\mathcal{O}_{\textrm{B}^{2}\textrm{UDA}}(\mathcal{T},\widehat{F}^{s}),$
(4)
where $\widehat{F}^{s}$ is the API of $F^{s}$; specifically, we could get the
output of $F^{s}(\mathbf{x})$ with respect to any sample $\mathbf{x}$ via
$\widehat{F}^{s}(\mathbf{x})$ and the source model $F^{s}$ itself is kept as a
black-box one. As there are many model APIs on cloud services, B2UDA is a
promising way to improve their adaptability, presenting broad practical value.
Labeled source data $\mathcal{S}$ and white-box source model $F^{s}$ are
respectively required in UDA and WBUDA methods, impeding their applications in
the B2UDA task. To tackle the challenging B2UDA, we propose an Iterative
Learning with Noisy Labels (IterLNL) framework by conducting noisy labeling
and LNL iteratively, which are introduced as follows.
Figure 3: Framework of our proposed Iterative Learning with Noisy Labels
(IterLNL), where we conduct noisy labeling and LNL iteratively. Note that we
introduce noisy labels based on predictions of the black-box model.
### 3.1 Noisy Labeling
Given a black-box model $\widehat{F}$ (e.g., the black-box source model
$\widehat{F}^{s}$) in B2UDA, we could get label predictions of target data
$\\{\mathbf{x}_{i}^{t}\\}_{i=1}^{n^{t}}$ with $\widehat{F}$ as:
$\widehat{\mathcal{Y}}=\\{\widehat{F}(\mathbf{x}_{i}^{t})\\}_{i=1}^{n^{t}}.$
(5)
The corresponding pseudo label of target sample $\mathbf{x}_{i}^{t}$ is
defined as
$\widehat{y}_{i}^{t}=\operatorname*{arg\,max}_{k}\widehat{F}_{k}(\mathbf{x}_{i}^{t})$.
The pseudo labels $\\{\widehat{y}_{i}^{t}\\}_{i=1}^{n^{t}}$ could be highly
noisy due to the divergence across source and target domains. Furthermore, we
emphasize that the label noise in $\\{\widehat{y}_{i}^{t}\\}_{i=1}^{n^{t}}$
could be significantly unbalanced among categories; for example, the noise
rate could be extremely high for some categories and extremely low for the
others, as illustrated in Figure 2(c). Such unbalanced label noise via domain
shift is substantially different from the simulated ones [40, 10] in many LNL
works, as compared in Figure 2. In addition, the noise rate, which is usually required by LNL algorithms, is unknown in B2UDA, whereas it is typically assumed to be given in LNL [40, 10]. In the following, we propose strategies to
estimate the noise rate and tackle unbalanced label noise, which support the
successful target model learning with noisy labels.
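The labeling step of eq. (5) can be sketched as below, assuming the black-box API `F_hat` returns a probability vector for each query (the function name is illustrative):

```python
import numpy as np

def noisy_labeling(F_hat, xs):
    """Eq. (5): query the black-box API F_hat on each target sample and
    take the arg-max class as its (noisy) pseudo label."""
    preds = np.stack([F_hat(x) for x in xs])  # (n, K) probability vectors
    return preds.argmax(axis=1)
```

Only the input-output interface of `F_hat` is used, matching the B2UDA assumption that the source model itself stays opaque.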
### 3.2 Learning with Noisy Labels
Given the noisy target label predictions $\widehat{\mathcal{Y}}$ from Section 3.1, we
resort to LNL to learn the target model. State-of-the-art LNL methods [15, 10,
12] usually combat noisy labels by selecting ‘clean’ samples from each mini-
batch for training, which is achieved by utilizing the memorization effects of
neural networks [2]. Before going into detail, we denote $R(n)$ as the
percentage of instances selected for training in the mini-batch of $n$-th
iteration. LNL methods [15, 10, 12] typically keep more instances in the mini-
batch (i.e., $R(n)$ is large) at the beginning, and then gradually drop noisy
samples (i.e., $R(n)$ becomes smaller) as training proceeds; by using more
training instances at the beginning, a relatively reliable model could be achieved since deep models learn clean and easy patterns first [2];
with the reliable model, noisy instances could be filtered out by gradually
dropping instances with larger losses.
We also adopt the aforementioned LNL strategy in our method since it presents
high robustness even with extremely noisy labels [10]. In LNL [10, 44], the
selection percentage $R(n)$ depends on the noise rate, which is either
assumed to be known in advance [10] or estimated with few labeled clean data
[48, 33]. However, realistic noisy labels are introduced by domain shift in
B2UDA, where the unavailability of labeled target data and the unknown noise rate impede the design of $R(n)$.
To this end, we propose a simple yet efficient strategy to estimate noise
rate. We first follow [10] to define $R(n)$ as:
$R(n)=1-\min{(\frac{n}{n_{k}}\epsilon,\epsilon)},$ (6)
where $n_{k}$ is a hyperparameter and $\epsilon$ is the noise rate.
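The schedule in eq. (6) is a one-liner; a direct sketch:

```python
def keep_rate(n, n_k, eps):
    """R(n) of eq. (6): keep all samples at first, then linearly decay
    the kept fraction to 1 - eps over the first n_k iterations."""
    return 1.0 - min(n / n_k * eps, eps)
```

With `eps = 0.4` and `n_k = 100`, the kept fraction starts at 1.0, reaches 0.8 at iteration 50, and plateaus at 0.6 from iteration 100 onward.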
Noise rate Estimation. We first present the empirical noise rate $\epsilon$
as:
$\epsilon=1-\frac{1}{n^{t}}\sum_{i=1}^{n^{t}}\mathcal{I}[\mathbf{x}^{t}_{i},\widehat{y}_{i}^{t}],$
(7)
where $\mathcal{I}[\mathbf{x}^{t}_{i},\widehat{y}_{i}^{t}]\in\\{0,1\\}$ is a
binary indicator; $\mathcal{I}[\mathbf{x}^{t}_{i},\widehat{y}_{i}^{t}]=1$ if
$\widehat{y}_{i}^{t}$ is the correct label of $\mathbf{x}^{t}_{i}$ and 0
otherwise. It is obvious that the empirical noise rate $\epsilon$ (7) is closely correlated with the classification accuracy of the black-box model
$\widehat{F}$. In the meantime, there is a correlation between the
classification accuracy and maximum prediction probability, as observed in
[21, 52, 12]. Although the prediction probability may be overconfident and misleading when viewed in isolation, the statistics of prediction probabilities are often sufficient to reflect the overall classification accuracy [12, 6], and hence the noise rate $\epsilon$.
To estimate the noise rate $\epsilon$, we calculate the proportion of target
data $\mathcal{T}$ with high prediction probability as:
$\rho^{\prime}=\frac{1}{n^{t}}\sum_{i=1}^{n^{t}}\mathbb{I}[\max(\widehat{F}(\mathbf{x}_{i}^{t}))>\gamma],$
(8)
where $\gamma\in[0,1]$ is the threshold and
$\mathbb{I}[var]=\begin{cases}1,&var=True\\\ 0,&Otherwise\end{cases}$. One may
opt for approximating noise rate $\epsilon$ with $1-\rho^{\prime}$. However,
considering that deep models are prone to make over-confident predictions [6],
we rescale $\rho^{\prime}\in[0,1]\to\rho\in[0,1]$ as:
$\rho=\begin{cases}0.5(2\rho^{\prime})^{1/\kappa},&\rho^{\prime}<0.5\\\
-0.5(2-2\rho^{\prime})^{1/\kappa}+1,&Otherwise.\end{cases}$ (9)
Although equation (9) seems complicated, there is only one hyperparameter $\kappa$ controlling the curvature, and it degenerates to
$\rho=\rho^{\prime}$ if $\kappa=1$, as illustrated in Figure 2(d). The design
of (9) is further investigated in Section 4.1. We finally approximate noise
rate $\epsilon$ as:
$\epsilon=1-\rho.$ (10)
Although the estimated noise rate $\epsilon$ (10) is not precise, we find that
our method is robust to the noise rate $\epsilon$; such an estimation of the noise rate works well and achieves results close to those obtained with the ground-truth noise rate, as presented in Section 4.1.
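Eqs. (8)-(10) combine into a short estimation routine; the sketch below assumes `probs` holds the black-box model's probability outputs for all target samples (function and argument names are illustrative):

```python
import numpy as np

def estimate_noise_rate(probs, gamma=0.9, kappa=2.0):
    """Noise-rate estimate of eqs. (8)-(10): the fraction of confident
    predictions, rescaled to counteract over/under-confidence."""
    rho_p = float((probs.max(axis=1) > gamma).mean())        # eq. (8)
    if rho_p < 0.5:                                          # eq. (9)
        rho = 0.5 * (2 * rho_p) ** (1 / kappa)
    else:
        rho = -0.5 * (2 - 2 * rho_p) ** (1 / kappa) + 1
    return 1.0 - rho                                         # eq. (10)
```

Note that `kappa=1` makes the rescale the identity, recovering the plain estimate $\epsilon = 1 - \rho^{\prime}$, and that $\rho^{\prime}=0.5$ maps to $\rho=0.5$ for any $\kappa$, as in Figure 2(d).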
Category-wise Sampling. Given the estimated noise rate $\epsilon$ (10), we
could conduct LNL by selecting $R(n)$ (6) percent samples with smaller loss
for training in the mini-batch of $n$-th iteration. However, as we state in
Section 3.1, the label noise is unbalanced among categories in B2UDA (cf.
Figure 2(c)); thus samples in categories with higher noise rate are prone to
present larger loss and be rejected for training, leading to worse results for
these categories, as presented in Table 1.
To this end, we propose to individually sample the $R(n)$ (6) percent samples
with smaller loss for each category. Technically, we introduce a probability
queue buffer $\mathbf{u}_{k}$ for category $k\in\{1,\dots,K\}$,
where $\mathbf{u}_{k}$ is initialized as a vector filled with positive
infinity values and $h$ is the buffer length. For any instance
$\mathbf{x}^{t}$ in the $n$-th iteration, we obtain its corresponding noisy
label
$\widehat{y}^{t}=\operatorname*{arg\,max}_{k}\widehat{F}_{k}(\mathbf{x}^{t})$
and loss
$\mathcal{L}(F(\mathbf{x}^{t}),\widehat{y}^{t})=-\log(F_{\widehat{y}^{t}}(\mathbf{x}^{t}))$,
where $F$ is the current model in learning. We propose an indicator
$\mathbf{I}(\mathcal{L}(F(\mathbf{x}^{t}),\widehat{y}^{t}),\mathbf{u}_{\widehat{y}^{t}},n)$
to decide whether $\mathbf{x}^{t}$ should be adopted in training, which is
formulated as:
$\mathbf{I}(\mathcal{L}(F(\mathbf{x}^{t}),\widehat{y}^{t}),\mathbf{u}_{\widehat{y}^{t}},n)=\begin{cases}1,&\mathcal{L}(F(\mathbf{x}^{t}),\widehat{y}^{t})\leq
L_{R(n)}(\mathbf{u}_{\widehat{y}^{t}})\\\ 0,&Otherwise,\end{cases}$ (11)
where $L_{R(n)}(\mathbf{u}_{\widehat{y}^{t}})$ is the $\lceil R(n)\cdot h\rceil$-th largest value in $\mathbf{u}_{\widehat{y}^{t}}$ (i.e., $R(n)$ denotes a fraction of the buffer length $h$). We utilize the loss
$\mathcal{L}(F(\mathbf{x}^{t}),\widehat{y}^{t})$ to update current model $F$
if
$\mathbf{I}(\mathcal{L}(F(\mathbf{x}^{t}),\widehat{y}^{t}),\mathbf{u}_{\widehat{y}^{t}},n)=1$
and drop it otherwise.
We also update the queue buffers $\\{\mathbf{u}_{k}\\}_{k=1}^{K}$ with all
samples in the $n$-th iteration. Specifically, given an instance
$\mathbf{x}^{t}$ and its corresponding noisy label $\widehat{y}^{t}$ and loss
$\mathcal{L}(F(\mathbf{x}^{t}),\widehat{y}^{t})$ defined above, we push
$\mathcal{L}(F(\mathbf{x}^{t}),\widehat{y}^{t})$ into the queue buffer
$\mathbf{u}_{\widehat{y}^{t}}$ and pop the oldest value from
$\mathbf{u}_{\widehat{y}^{t}}$ simultaneously. In this way, we adopt samples
with the $R(n)$ percent smallest losses in each category for training in
$n$-th iteration.
Remarks. Note that our method is fundamentally different from existing DA
methods based on pseudo labels [26, 53]. Although we also adopt the categories
with maximum prediction probability on the black-box model as noisy labels, we
do not use them directly as accurate ones. We only use samples with noisy
labels to update the model if these samples present small losses based on the
current model; in other words, we treat noisy labels as accurate ones only if
they are consistent with the current model’s predictions. Moreover, source
data are not involved in model training, avoiding the misleading of source
data on target predictions.
### 3.3 Iterative Learning Strategy
With the noisy labeling in Section 3.1 and LNL in Section 3.2, we could get a
more reliable target model than the original black-box one (i.e., the one that introduces the noisy labels), as illustrated in Figure 4(a). In other words, the
achieved target model could produce improved noisy labels over the original
noisy labels. It is a natural idea to conduct noisy labeling with the achieved
target model (or the black-box counterpart of the achieved target model), and
conduct LNL on the new noisy labels again. Finally, we define our algorithmic
framework by conducting noisy labeling and LNL iteratively, leading to the
Iterative Learning with Noisy Labels (IterLNL). We summarize IterLNL in
Algorithm 1 and illustrate its framework in Figure 3.
Remarks. The black-box source model and noisy labels are introduced
respectively in B2UDA and LNL tasks to help the learning with unlabeled data,
making the two tasks fundamentally different. Although we utilize the black-
box source model to assign noisy labels and learn the target model in a LNL
manner, there are three differences. First, the noise rate is unknown and to
be estimated in B2UDA and no labeled target data are provided for the noise
rate estimation. Second, the label noise is quite unbalanced among categories
in B2UDA. Third, different from the given and fixed noisy labels in LNL, the
noisy labels in B2UDA are introduced by black-box models and could be updated
as the model updates, inspiring the iterative learning strategy.
Algorithm 1 Iterative Learning with Noisy Labels.
1:Black-box source model $\widehat{F}^{s}$, target data $\mathcal{T}$
2:Target model $F$
3:Initialize $\widehat{F}$ with $\widehat{F}^{s}$
4:for $m=1$ to $M$ do $\triangleright$ For each iterative step
5: Acquire noisy labels $\widehat{\mathcal{Y}}$ with $\mathcal{T}$ and
$\widehat{F}$ using (5)
6: Estimate noise rate $\epsilon$ with (8), (9) and (10)
7: Initialize target model $F$ and buffers $\\{\mathbf{u}_{k}\\}_{k=1}^{K}$
8: for $n=1$ to $N$ do $\triangleright$ For each iteration
9: Acquire $R(n)$ with (6)
10: Update model $F$ using data selected with (11)
11: Update buffers $\\{\mathbf{u}_{k}\\}_{k=1}^{K}$
12: end for
13: Update $\widehat{F}$ as the API (i.e., black-box model) of $F$
14:end for
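The outer loop of Algorithm 1 can be sketched as follows, where `train_lnl` is a hypothetical callback standing in for steps 5-12 (noisy labeling, noise-rate estimation, and LNL training):

```python
def iter_lnl(F_hat_s, targets, M, train_lnl):
    """Sketch of the outer loop of Algorithm 1: alternate noisy labeling
    and LNL for M iterative steps, starting from the source API."""
    F_hat = F_hat_s                     # step 3: initialize with source API
    for _ in range(M):                  # step 4: each iterative step
        F = train_lnl(F_hat, targets)   # steps 5-12: noisy labeling + LNL
        F_hat = F                       # step 13: re-seal F as a black box
    return F_hat
```

Each pass relabels the target data with the latest model, so the label noise shrinks across iterative steps until convergence (Figure 4(a)).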
Methods | plane | bcycl | bus | car | horse | knife | mcycl | person | plant | sktbrd | train | truck | Avg
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Source Model [16] | 76.4 | 26.4 | 60.2 | 80.2 | 73.5 | 2.3 | 89.4 | 8.2 | 79.7 | 25.6 | 69.4 | 3.8 | 49.6
IterLNL (w/o Iter) | 91.4 | 68.9 | 69.6 | 89.9 | 88.0 | 25.1 | 92.8 | 26.1 | 93.1 | 59.1 | 83.3 | 16.3 | 67.0
IterLNL (w/o CateS) | 96.7 | 88.3 | 89.0 | 94.1 | 96.7 | 0.0 | 89.4 | 48.4 | 96.0 | 93.3 | 84.8 | 0.0 | 73.1
IterLNL (w/o Rescale) | 91.7 | 84.8 | 86.6 | 88.7 | 79.4 | 77.9 | 92.8 | 46.3 | 90.1 | 89.1 | 88.2 | 40.5 | 79.7
IterLNL | 77.0 | 84.6 | 85.1 | 92.0 | 92.1 | 74.1 | 92.6 | 49.1 | 89.1 | 91.7 | 84.5 | 49.4 | 80.1
IterLNL (with Val) | 89.0 | 79.5 | 84.3 | 81.0 | 87.7 | 88.1 | 92.5 | 38.7 | 87.1 | 96.9 | 78.8 | 67.0 | 80.9
Table 1: Ablation study on VisDA-2017 dataset, where all experiments are based
on a ResNet-101 model. Please refer to the main text for definitions.
## 4 Experiment
Office-31 [35] is the most popular benchmark dataset for UDA. There are
$4,110$ samples shared by three domains: amazon (A), webcam (W), and dslr (D).
VisDA-2017 [32] aims to transfer knowledge from synthetic images to real-world
ones, which is a challenging task with significant domain divergence. There
are $152K$ synthetic images and $55K$ real-world images shared by $12$
classes. Datasets of MNIST [20], Street View House Numbers (SVHN) [28], and
USPS [14] constitute the Digits task, which includes $10$ classes. There are
$50,000$ training samples, $10,000$ validation samples and $10,000$ test
samples in the MNIST dataset (M), where all images are black-and-white
handwritten digits; the SVHN dataset (S) contains $73,257$ training and
$26,032$ test images with colored backgrounds; the USPS dataset (U) contains
$7,291$ training and $2,007$ test images with black backgrounds.
Implementation Details. For experiments on datasets of Office-31 and
VisDA-2017, we employ the pre-trained ResNet model [11] as the backbone and
replace the last fully connected (FC) layer with a task-specific FC classifier
following [24, 5, 23]. We introduce the source model $F^{s}$ by fine-tuning
the constructed model on source data following [16] and then seal $F^{s}$ as
the black-box $\widehat{F}^{s}$, i.e., only the input-output interface of
$F^{s}$ is available. For experiments on Digits dataset, we follow [36] to
introduce the source model $F^{s}$ with convolutional layers and FC layers.
Following [5], we utilize the SGD optimizer and adopt the learning rate
strategy as $\eta_{n}=\frac{\eta_{0}}{(1+10\zeta)^{0.75}}$, where
$\eta_{0}=0.01$ and $\zeta$ is the training progress, changing linearly from $0$ to $1$ over the training iterations. We set the batch size as $64$, $\kappa=2$ in (9),
$\gamma=0.9$ in (8), buffer length $h=100$, and $n_{k}=0.5N$ ($N$ is the total
number of training iterations defined in Algorithm 1) in (6) for all
experiments; these hyperparameters are analyzed in Section 4.2.
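The learning-rate schedule above can be sketched as below, with `n / N` playing the role of $\zeta$ (the function name is illustrative):

```python
def lr_schedule(n, N, eta0=0.01):
    """SGD learning rate eta_n = eta0 / (1 + 10*zeta)^0.75, where
    zeta = n/N is the training progress in [0, 1]."""
    zeta = n / N
    return eta0 / (1 + 10 * zeta) ** 0.75
```

The rate starts at `eta0` and decays monotonically to roughly `eta0 / 11**0.75` by the end of training.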
(a) Acc. in iterative process
(b) Noise rate $\epsilon$
Figure 4: Illustration of (a) the accuracy of noisy labels
$\\{\widehat{y}_{i}^{t}\\}_{i=1}^{n^{t}}$ and target model $F$ via LNL, and
(b) the estimated noise rate in different iterative steps. ‘GT’, ‘with Val’,
‘IterLNL’ and ‘w/o Rescale’ indicate the noise rate calculated by all labeled
target data (7), labeled validation data, our strategy (10), and
$1-\rho^{\prime}$, respectively. Note that the noise rate estimation and
accuracy of noisy labels with $m=1$ are based on predictions of the black-box
source model $\widehat{F}^{s}$.
### 4.1 Ablation Study
We introduce several variants of IterLNL to investigate the individual
components in IterLNL. Specifically, following [10], we replace the category-
wise sampling (11) by simply using the $R(n)$ percent samples with smaller
losses in the $n$-th iteration for model learning, leading to ‘IterLNL (w/o
CateS)’. We also present the results of IterLNL by conducting noisy labeling
and learning with noisy labels only once (i.e., setting $M=1$ in Algorithm 1),
resulting in ‘IterLNL (w/o Iter)’. To investigate the noise rate estimation
strategy, we remove the rescale strategy (9) and set
$\epsilon=1-\rho^{\prime}$, leading to ‘IterLNL (w/o Rescale)’; we also
introduce a labeled target validation set by randomly selecting $30$ samples
per class (less than $1$% of the entire target data in VisDA-2017); we
estimate the noise rate $\epsilon$ by calculating the classification accuracy
$\alpha$ of $\widehat{F}$ on the validation set and approximating the noise
rate $\epsilon$ as $1-\alpha$, which is termed as ‘IterLNL (with Val)’.
As illustrated in Table 1, IterLNL improves over the IterLNL (w/o CateS) and
IterLNL (w/o Iter) significantly, justifying the efficacy of category-wise
sampling (11) and iterative learning strategy. Specifically, without the
category-wise sampling (11), the accuracy of samples in the knife and truck
categories drops to zero due to the initial high noise rate (i.e., low
accuracy) in Source Model. We also intuitively visualize the accuracy
improvement of model $F$ in the iterative learning process (i.e., with
different $m\in[1,M]$). As illustrated in Figure 4(a), the accuracy of $F$ via
LNL indeed improves over that of initial noisy labels in the beginning; the
improvement is gradually reduced as the iterative step $m$ increases, leading
to the final convergence. Additionally, IterLNL improves over IterLNL (w/o
Rescale) and approximates the results of IterLNL (with Val); this is intuitively
explained in Figure 4(b), where IterLNL (with Val) presents the most accurate
noise rate estimation, and IterLNL achieves more accurate estimation than
IterLNL (w/o Rescale) in most cases. The source model (i.e., $m$=1) tends to
make un-confident predictions on target data due to the domain divergence
while target models (i.e., $m$>1) are prone to make over-confident
predictions, motivating us to introduce the rescale curve (9) in Figure 2(d).
Settings | Methods | U$\to$M | S$\to$M | M$\to$U
---|---|---|---|---
B2UDA | Source Model | 82.0 | 69.4 | 79.4
sMDA [4] | 83.4 | 69.9 | 81.2
PLR [27] | 91.8 | 97.3 | 89.3
IterLNL | 96.7$\pm$0.3 | 98.0$\pm$0.7 | 97.4$\pm$0.1
WBUDA | SDDA [18] | – | 75.5 | 89.9
3C-GAN [22] | 99.3$\pm$0.1 | 99.4$\pm$0.1 | 97.3$\pm$0.2
SHOT [23] | 98.4 | 98.9 | 98.0
UDA | DANN [5] | 86.3$\pm$0.3 | 85.5$\pm$0.4 | 84.9$\pm$0.6
MCD [36] | – | 96.2$\pm$0.4 | 96.5$\pm$0.3
CDAN [25] | 98.0 | 89.2 | 95.6
RWOT [43] | 97.5$\pm$0.2 | 98.8$\pm$0.1 | 98.5$\pm$0.2
Table 2: Results on Digits dataset.

Tasks | Method | A$\to$D | A$\to$W | D$\to$A | D$\to$W | W$\to$A | W$\to$D | Avg
---|---|---|---|---|---|---|---|---
B2UDA | Source Model [16] | 79.7 | 78.1 | 64.9 | 96.0 | 65.4 | 99.2 | 80.1
sMDA [4] | 80.5 | 79.3 | 65.7 | 96.3 | 67.3 | 99.2 | 81.4
IterLNL | 91.3$\pm$0.1 | 89.8$\pm$0.7 | 74.3$\pm$0.8 | 98.7$\pm$0.2 | 73.7$\pm$0.1 | 99.3$\pm$0.3 | 87.9
WBUDA | SDDA [18] | 85.3 | 82.5 | 66.4 | 99.0 | 67.7 | 99.8 | 83.5
SHOT [23] | 94.0 | 90.1 | 74.7 | 98.4 | 74.3 | 99.9 | 88.6
3C-GAN [22] | 92.7$\pm$0.4 | 93.7$\pm$0.2 | 75.3$\pm$0.5 | 98.5$\pm$0.1 | 77.8$\pm$0.1 | 99.8$\pm$0.2 | 89.6
UDA | DANN [5] | 79.7$\pm$0.4 | 82.0$\pm$0.4 | 68.2$\pm$0.4 | 96.9$\pm$0.2 | 67.4$\pm$0.5 | 99.1$\pm$0.1 | 82.2
CDAN [25] | 92.9$\pm$0.2 | 94.1$\pm$0.1 | 71.0$\pm$0.3 | 98.6$\pm$0.1 | 69.3$\pm$0.3 | 100.0$\pm$.0 | 87.7
SymNets [50] | 93.9$\pm$0.5 | 90.8$\pm$0.1 | 74.6$\pm$0.6 | 98.8$\pm$0.3 | 72.5$\pm$0.5 | 100.0$\pm$.0 | 88.4
RWOT [43] | 94.5$\pm$0.2 | 95.1$\pm$0.2 | 77.5$\pm$0.1 | 99.5$\pm$0.2 | 77.9$\pm$0.3 | 100.0$\pm$.0 | 90.8
Table 3: Results on Office31 dataset, where all methods are based on a ResNet-50 model.

Tasks | Methods | plane | bcycl | bus | car | horse | knife | mcycl | person | plant | sktbrd | train | truck | Avg
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
B2UDA | Source Model [16] | 76.4 | 26.4 | 60.2 | 80.2 | 73.5 | 2.3 | 89.4 | 8.2 | 79.7 | 25.6 | 69.4 | 3.8 | 49.6
sMDA [4] | 77.8 | 39.3 | 66.1 | 73.7 | 74.3 | 4.2 | 87.9 | 16.7 | 79.8 | 36.9 | 71.4 | 9.8 | 53.1
SoFA [46] | – | – | – | – | – | – | – | – | – | – | – | – | 60.4
IterLNL | 77.0 | 84.6 | 85.1 | 92.0 | 92.1 | 74.1 | 92.6 | 49.1 | 89.1 | 91.7 | 84.5 | 49.4 | 80.1
WBUDA | 3C-GAN [22] | 94.8 | 73.4 | 68.8 | 74.8 | 93.1 | 95.4 | 88.6 | 84.7 | 89.1 | 84.7 | 83.5 | 48.1 | 81.6
SHOT [23] | 94.3 | 88.5 | 80.1 | 57.3 | 93.1 | 94.9 | 80.7 | 80.3 | 91.5 | 89.1 | 86.3 | 58.2 | 82.9
UDA | DANN [5] | 81.9 | 77.7 | 82.8 | 44.3 | 81.2 | 29.5 | 65.1 | 28.6 | 51.9 | 54.6 | 82.8 | 7.8 | 57.4
MCD [36] | 87.0 | 60.9 | 83.7 | 64.0 | 88.9 | 79.6 | 84.7 | 76.9 | 88.6 | 40.3 | 83.0 | 25.8 | 71.9
RWOT [43] | 95.1 | 80.3 | 83.7 | 90.0 | 92.4 | 68.0 | 92.5 | 82.2 | 87.9 | 78.4 | 90.4 | 68.2 | 84.0
Table 4: Results on VisDA-2017 dataset, where all methods are based on a
ResNet-101 model.
(a) Results with various $\gamma$
(b) Results with various $\kappa$
Figure 5: IterLNL’s results with various values of $\gamma$ (8) and $\kappa$
(9).
### 4.2 Analysis
Analyses on $\gamma$ and $\kappa$. We investigate the hyper-parameters
$\gamma$ (8) and $\kappa$ (9) on the Digits datasets, chosen for their fast
experimental turnaround. As illustrated in Figure 5, IterLNL performs robustly
under a wide range of values. We adopt $\gamma=0.9$ and $\kappa=2.0$ in all
experiments; these simple settings work well but do not necessarily lead to
the best results. Similar observations hold in the studies of the buffer
length $h$ in Section 3.2 and of $n_{k}$ (6), as presented in the appendices.
Comparison with LNL Method. We run the LNL method Co-teaching [10] with our
estimated noise rate $\epsilon$ (10) and the noisy labels
$\\{\widehat{y}_{i}^{t}\\}_{i=1}^{n^{t}}$ introduced by the black-box source
model $\widehat{F}^{s}$ on the S$\to$M task. The resulting accuracy of
$91.0\%$ with Co-teaching is lower than the $98.0\%$ of IterLNL, justifying
the advantage of IterLNL over the vanilla LNL method on B2UDA.
### 4.3 Results
The results of IterLNL on datasets of Digits, Office31, and VisDA-2017 are
illustrated in Table 2, Table 3, and Table 4, respectively. Most comparable
results are directly reported from their original papers, except the Source
Model [16] and sMDA [4], which are implemented by ourselves. Taking advantage
of feature learning, our IterLNL improves significantly over existing B2UDA
methods [4, 46]; for example, IterLNL improves over sMDA [4] and SoFA [46]
by $27.0\%$ and $19.7\%$ on the VisDA-2017 dataset, respectively; our IterLNL
also improves over [27] by presenting good generalization on various benchmark
datasets. Although we use only the black-box source model for transfer to the
target domain, IterLNL achieves results comparable to methods of WBUDA,
where the white-box source model is required, and traditional UDA, where
labeled source data are required.
## 5 Conclusion and Broader Impact
Considering that the source model itself may not be available due to
commercial and/or safety considerations [29], we investigate the B2UDA task,
and propose a baseline algorithm of IterLNL. We verify its efficacy on popular
DA benchmarks with thorough experiments. The B2UDA task is of broad practical
value since it could further improve the utilities of APIs as cloud services,
pushing the DA research closer to practical applications.
## References
* [1] Sk Miraj Ahmed, Aske R Lejbolle, Rameswar Panda, and Amit K Roy-Chowdhury. Camera on-boarding for person re-identification using hypothesis transfer learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12144–12153, 2020.
* [2] Devansh Arpit, Stanisław Jastrzębski, Nicolas Ballas, David Krueger, Emmanuel Bengio, Maxinder S Kanwal, Tegan Maharaj, Asja Fischer, Aaron Courville, Yoshua Bengio, et al. A closer look at memorization in deep networks. arXiv preprint arXiv:1706.05394, 2017.
* [3] Shai Ben-David, John Blitzer, Koby Crammer, Alex Kulesza, Fernando Pereira, and Jennifer Wortman Vaughan. A theory of learning from different domains. Machine learning, 79(1-2):151–175, 2010.
* [4] Boris Chidlovskii, Stephane Clinchant, and Gabriela Csurka. Domain adaptation in the absence of source domain data. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 451–460, 2016.
* [5] Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096–2030, 2016.
* [6] Chuan Guo, Geoff Pleiss, Yu Sun, and Kilian Q Weinberger. On calibration of modern neural networks. arXiv preprint arXiv:1706.04599, 2017.
* [7] Bo Han, Gang Niu, Xingrui Yu, Quanming Yao, Miao Xu, Ivor Tsang, and Masashi Sugiyama. Sigua: Forgetting may make learning with noisy labels more robust. In International Conference on Machine Learning, pages 4006–4016. PMLR, 2020.
* [8] Bo Han, Jiangchao Yao, Gang Niu, Mingyuan Zhou, Ivor Tsang, Ya Zhang, and Masashi Sugiyama. Masking: A new perspective of noisy supervision. NIPS, 2018.
* [9] Bo Han, Quanming Yao, Tongliang Liu, Gang Niu, Ivor W Tsang, James T Kwok, and Masashi Sugiyama. A survey of label-noise representation learning: Past, present and future. arXiv preprint arXiv:2011.04406, 2020.
* [10] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. In Advances in neural information processing systems, pages 8527–8537, 2018.
* [11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
* [12] Dan Hendrycks and Kevin Gimpel. A baseline for detecting misclassified and out-of-distribution examples in neural networks. ICLR, 2017.
* [13] Yunzhong Hou and Liang Zheng. Source free domain adaptation with image translation. arXiv preprint arXiv:2008.07514, 2020.
* [14] Jonathan J. Hull. A database for handwritten text recognition research. IEEE Transactions on pattern analysis and machine intelligence, 16(5):550–554, 1994.
* [15] Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In International Conference on Machine Learning, pages 2304–2313. PMLR, 2018.
* [16] Junguang Jiang, Bo Fu, and Mingsheng Long. Transfer-Learning-Library. https://github.com/thuml/Transfer-Learning-Library, 2020.
* [17] Youngeun Kim, Sungeun Hong, Donghyeon Cho, Hyoungseob Park, and Priyadarshini Panda. Domain adaptation without source data. arXiv preprint arXiv:2007.01524, 2020.
* [18] Vinod K Kurmi, Venkatesh K Subramanian, and Vinay P Namboodiri. Domain impression: A source data free domain adaptation method. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 615–625, 2021.
* [19] Ilja Kuzborskij and Francesco Orabona. Stability and hypothesis transfer learning. In International Conference on Machine Learning, pages 942–950. PMLR, 2013.
* [20] Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
* [21] Dong-Hyun Lee. Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks. In Workshop on challenges in representation learning, ICML, volume 3, page 2, 2013.
* [22] Rui Li, Qianfen Jiao, Wenming Cao, Hau-San Wong, and Si Wu. Model adaptation: Unsupervised domain adaptation without source data. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2020.
* [23] Jian Liang, Dapeng Hu, and Jiashi Feng. Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation. arXiv preprint arXiv:2002.08546, 2020.
* [24] Mingsheng Long, Yue Cao, Jianmin Wang, and Michael I. Jordan. Learning transferable features with deep adaptation networks. In Proceedings of the 32Nd International Conference on International Conference on Machine Learning - Volume 37, ICML’15, pages 97–105. JMLR.org, 2015.
* [25] Mingsheng Long, Zhangjie Cao, Jianmin Wang, and Michael I Jordan. Conditional adversarial domain adaptation. In Advances in Neural Information Processing Systems, pages 1640–1650, 2018.
* [26] Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan. Deep transfer learning with joint adaptation networks. arXiv preprint arXiv:1605.06636, 2016.
* [27] Pietro Morerio, Riccardo Volpi, Ruggero Ragonesi, and Vittorio Murino. Generative pseudo-label refinement for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 3130–3139, 2020.
* [28] Yuval Netzer, Tao Wang, Adam Coates, Alessandro Bissacco, Bo Wu, and Andrew Y Ng. Reading digits in natural images with unsupervised feature learning. 2011.
* [29] OpenAI. Why did openai choose to release an api instead of open-sourcing the models? [EB/OL]. https://openai.com/blog/openai-api/ Accessed March 4, 2021.
* [30] Sinno Jialin Pan, Qiang Yang, et al. A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345–1359, 2010.
* [31] Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1944–1952, 2017.
* [32] Xingchao Peng, Ben Usman, Neela Kaushik, Judy Hoffman, Dequan Wang, and Kate Saenko. Visda: The visual domain adaptation challenge. arXiv preprint arXiv:1710.06924, 2017.
* [33] Harish Ramaswamy, Clayton Scott, and Ambuj Tewari. Mixture proportion estimation via kernel embeddings of distributions. In International conference on machine learning, pages 2052–2060. PMLR, 2016.
* [34] Scott Reed, Honglak Lee, Dragomir Anguelov, Christian Szegedy, Dumitru Erhan, and Andrew Rabinovich. Training deep neural networks on noisy labels with bootstrapping. arXiv preprint arXiv:1412.6596, 2014.
* [35] Kate Saenko, Brian Kulis, Mario Fritz, and Trevor Darrell. Adapting visual category models to new domains. In European conference on computer vision, pages 213–226. Springer, 2010.
* [36] Kuniaki Saito, Kohei Watanabe, Yoshitaka Ushiku, and Tatsuya Harada. Maximum classifier discrepancy for unsupervised domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3723–3732, 2018.
* [37] Sainbayar Sukhbaatar, Joan Bruna, Manohar Paluri, Lubomir Bourdev, and Rob Fergus. Training convolutional networks with noisy labels. ICLRW, 2015.
* [38] Simen Thys, Wiebe Van Ranst, and Toon Goedemé. Fooling automated surveillance cameras: adversarial patches to attack person detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pages 0–0, 2019.
* [39] Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, and Patrick McDaniel. Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204, 2017.
* [40] Brendan Van Rooyen, Aditya Krishna Menon, and Robert C Williamson. Learning with symmetric label noise: The importance of being unhinged. NIPS, 2015.
* [41] Brendan Van Rooyen and Robert C Williamson. A theory of learning with corrupted labels. J. Mach. Learn. Res., 18(1):8501–8550, 2017.
* [42] Markus Weber, Max Welling, and Pietro Perona. Unsupervised learning of models for recognition. In European conference on computer vision, pages 18–32. Springer, 2000.
* [43] Renjun Xu, Pelen Liu, Liyan Wang, Chao Chen, and Jindong Wang. Reliable weighted optimal transport for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4394–4403, 2020.
* [44] Hansi Yang, Quanming Yao, Bo Han, and Gang Niu. Searching to exploit memorization effect in learning from corrupted labels. arXiv preprint arXiv:1911.02377, 2019.
* [45] Shiqi Yang, Yaxing Wang, Joost van de Weijer, and Luis Herranz. Unsupervised domain adaptation without source data by casting a bait. arXiv preprint arXiv:2010.12427, 2020.
* [46] Hao-Wei Yeh, Baoyao Yang, Pong C Yuen, and Tatsuya Harada. Sofa: Source-data-free feature alignment for unsupervised domain adaptation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 474–483, 2021.
* [47] Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. How transferable are features in deep neural networks? arXiv preprint arXiv:1411.1792, 2014.
* [48] Xiyu Yu, Tongliang Liu, Mingming Gong, Kayhan Batmanghelich, and Dacheng Tao. An efficient and provable approach for mixture proportion estimation using linear independence assumption. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4480–4489, 2018.
* [49] Yuchen Zhang, Tianle Liu, Mingsheng Long, and Michael Jordan. Bridging theory and algorithm for domain adaptation. In International Conference on Machine Learning, pages 7404–7413, 2019.
* [50] Yabin Zhang, Hui Tang, Kui Jia, and Mingkui Tan. Domain-symmetric networks for adversarial domain adaptation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5031–5040, 2019.
* [51] Zhilu Zhang and Mert R Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. arXiv preprint arXiv:1805.07836, 2018.
* [52] Tianyi Zhou, Shengjie Wang, and Jeff Bilmes. Time-consistent self-supervision for semi-supervised learning. In International Conference on Machine Learning (ICML), 2020.
* [53] Yang Zou, Zhiding Yu, BVK Kumar, and Jinsong Wang. Unsupervised domain adaptation for semantic segmentation via class-balanced self-training. In Proceedings of the European conference on computer vision (ECCV), pages 289–305, 2018.
# Weak Supervision with Incremental Source Accuracy Estimation
Richard Correro
###### Abstract
Motivated by the desire to generate labels for real-time data, we develop a
method to estimate the dependency structure and accuracy of weak supervision
sources incrementally. Our method first estimates the dependency structure
associated with the supervision sources and then uses this to iteratively
update the estimated source accuracies as new data is received. Using both
off-the-shelf classification models trained using publicly-available datasets
and heuristic functions as supervision sources we show that our method
generates probabilistic labels with an accuracy matching that of existing off-
line methods.
###### Index Terms:
Weak Supervision, Transfer Learning, On-line Algorithms.
## I Introduction
Weak supervision approaches obtain labels for unlabeled training data using
noisier or higher-level sources than traditional supervision [1]. These sources
may be heuristic functions, off-the-shelf models, knowledge-base lookups, etc.
[2]. By combining multiple supervision sources and modeling their dependency
structure we may infer the true labels based on the outputs of the supervision
sources.
### Problem Setup
In the weak supervision setting we have access to a dataset
$X=\\{x_{1},\dots,x_{n}\\}$ associated with unobserved labels
$Y=\\{y_{1},\dots,y_{n}\\},\ \ y_{i}\in\\{1,\dots,k\\}$ and a set of weak
supervision sources $p_{i}(y|x),i=1,\dots,m$.
We denote the outputs of the supervision sources by
$\lambda_{1},\dots,\lambda_{m}$ and let $\mathbf{\lambda_{j}}=[\lambda_{1}\
\lambda_{2}\ \dots\ \lambda_{m}]^{T}$ denote the vector of labels associated
with example $x_{j}$. The objective is to learn the joint density
$f(y,\mathbf{\lambda})$
over the sources and the latent label. Using this we may estimate the
conditional density
$\displaystyle
f_{Y\mid\Lambda}(y|\mathbf{\lambda})=\frac{f_{Y,\Lambda}(y,\mathbf{\lambda})}{f_{\Lambda}(\lambda)},\quad
f_{\Lambda}(\lambda)>0.$ (1)
These sources may take many forms but we restrict ourselves to the case in
which $\lambda_{i}\in\\{0,\dots,k\\}$ and thus the label functions generate
labels belonging to the same domain as $Y$. Here $\lambda_{i}=0$ indicates the
$i^{th}$ source has not generated a label for this example. Such supervision
sources may include heuristics such as knowledge base lookups, or pre-trained
models.
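As a concrete illustration of the conventions above, a naive baseline ignores source accuracies entirely and takes a majority vote over the non-abstaining sources. A minimal sketch (the tie-breaking rule and example values are our own illustrative choices):

```python
import numpy as np

def majority_vote(lam):
    """Majority vote over one example's source outputs lam[i] in {0, ..., k},
    where lam[i] = 0 means the i-th source abstained.
    Ties are broken in favor of the smallest label."""
    votes = [l for l in lam if l != 0]  # drop abstentions
    if not votes:
        return 0  # every source abstained; no label generated
    values, counts = np.unique(votes, return_counts=True)
    return int(values[np.argmax(counts)])

# Three sources labeling one example from k = 3 classes:
print(majority_vote([2, 2, 1]))  # -> 2
print(majority_vote([0, 0, 3]))  # -> 3 (two abstentions)
```

Modeling the joint density $f(y,\mathbf{\lambda})$ improves on this baseline precisely by weighting sources according to their estimated accuracies and dependencies.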
## II Related Work
Varma et al. [3] and Ratner et al. [4] model the joint distribution of
$\lambda_{1},\dots,\lambda_{m},Y$ in the classification setting as a Markov
Random Field
$f_{G}(\lambda_{1},\dots,\lambda_{m},y)=\frac{1}{Z}\exp\left(\sum_{\lambda_{i}\in
V}\theta_{i}\lambda_{i}+\sum_{(\lambda_{i},\lambda_{j})\in
E}\theta_{i,j}\lambda_{i}\lambda_{j}+\theta_{Y}y+\sum_{\lambda_{i}\in
V}\theta_{Y,i}y\lambda_{i}\right)$
associated with graph $G=(V,E)$, where $\theta_{i,j},\ 1\leq i,j\leq m+1$, denote
the canonical parameters associated with the supervision sources and $Y$, and
$Z$ is a partition function [here
$V=\\{\lambda_{1},\dots,\lambda_{m}\\}\cup\\{Y\\}$]. If $\lambda_{i}$ is not
independent of $\lambda_{j}$ conditional on $Y$ and all sources $\lambda_{k},\
k\in\\{1,\dots,m\\}\setminus\\{i,j\\}$, then $(\lambda_{i},\lambda_{j})$ is an
edge in $E$.
Let $\Sigma$ denote the covariance matrix of the supervision sources and $Y$.
To learn $G$ from the labels
$O=\\{\lambda_{i}:\lambda_{i}=[\lambda_{1},\dots,\lambda_{m}]^{T};i=1,\dots,n\\}$
and without the ground truth labels, Varma et al. assume that $G$ is sparse
and therefore that the inverse covariance matrix $\Sigma^{-1}$ associated with
$\lambda_{1},\dots,\lambda_{m},Y$ is graph-structured. Since $Y$ is a latent
variable the full covariance matrix $\Sigma$ is unobserved. We may write the
covariance matrix in block-matrix form as follows:
$Cov[O\cup S]:=\Sigma=\begin{bmatrix}\Sigma_{O}&\Sigma_{OS}\\\
\Sigma_{OS}^{T}&\Sigma_{S}\end{bmatrix}$
Inverting $\Sigma$, we write
$\Sigma^{-1}=\begin{bmatrix}K_{O}&K_{OS}\\\ K_{OS}^{T}&K_{S}\end{bmatrix}$
$\Sigma_{O}$ may be estimated empirically:
$\hat{\Sigma}_{O}=\frac{\Lambda\Lambda^{T}}{n}-\nu\nu^{T}$
where
$\Lambda=[\mathbf{\lambda}_{1}\mathbf{\lambda}_{2},\dots,\mathbf{\lambda}_{n}]$
denotes the $m\times n$ matrix of labels generated by the sources and
$\nu=\hat{E}[O]\in\mathbb{R}^{m}$ denotes the observed labeling rates.
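This empirical estimate is a one-line computation once the label matrix is in hand; a minimal numpy sketch (the toy label matrix is an illustrative assumption):

```python
import numpy as np

# Hypothetical label matrix Lambda: m = 3 sources (rows), n = 5 examples (columns).
Lam = np.array([
    [1, 2, 1, 1, 2],
    [1, 2, 2, 1, 2],
    [1, 1, 1, 2, 2],
], dtype=float)

m, n = Lam.shape
nu = Lam.mean(axis=1)                             # observed labeling rates, E-hat[O]
Sigma_O_hat = Lam @ Lam.T / n - np.outer(nu, nu)  # empirical covariance Sigma-hat_O
```

This agrees with the biased sample covariance (normalized by $n$) of the rows of $\Lambda$.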
Using the block-matrix inversion formula, Varma et al. show that
$K_{O}=\Sigma_{O}^{-1}+c\Sigma_{O}^{-1}\Sigma_{OS}\Sigma_{OS}^{T}\Sigma_{O}^{-1}$
where
$c=(\Sigma_{S}-\Sigma_{OS}^{T}\Sigma_{O}^{-1}\Sigma_{OS})^{-1}\in\mathbb{R}^{+}$.
Letting $z=\sqrt{c}\Sigma_{O}^{-1}\Sigma_{OS}$, they write
$\Sigma_{O}^{-1}=K_{O}-zz^{T}$
where $K_{O}$ is sparse and $zz^{T}$ is low-rank and positive semidefinite.
Because $\Sigma_{O}^{-1}$ is the sum of a sparse matrix and a low-rank matrix
we may use Robust Principal Components Analysis [5] to solve the following:
$\displaystyle(\hat{S},\hat{L})=\text{argmin}_{(S,L)}||L||_{*}+\gamma||S||_{1}$
$\displaystyle s.t.\quad\quad S-L=\hat{\Sigma}^{-1}_{O}$
Varma et al. then show that we may learn the structure of $G$ from $K_{O}$
and we may learn the accuracies of the sources from $z$ using the following
algorithm:
Result: $\hat{G}=(V,\hat{E}),\ \hat{L}$
$\mathbf{Inputs:}$ Estimate of covariance matrix $\hat{\Sigma}_{O}$, parameter
$\gamma$, threshold $T$
$\mathbf{Solve:}\quad(\hat{S},\hat{L})=\text{argmin}_{(S,L)}||L||_{*}+\gamma||S||_{1}$
s.t. $S-L=\hat{\Sigma}^{-1}_{O}$
$\hat{E}\xleftarrow{}\\{(i,j):i<j,\hat{S}_{i,j}>T\\}$
Algorithm 1 Weak Supervision Structure Learning and Source Estimation Using
Robust PCA (From [3])
Note that $\hat{L}=zz^{T}$.
Ratner et al. [4] show that we may estimate the source accuracies
$\hat{\mu}$ from $z$, and they propose a simpler algorithm for estimating $z$
when the graph structure is already known: given $E$, we may construct a
dependency mask $\Omega=\\{(i,j):(\lambda_{i},\lambda_{j})\not\in
E\\}$. They use this in the following algorithm:
Result: $\hat{\mu}$
$\mathbf{Inputs:}$ Observed labeling rates $\hat{\mathbb{E}}[O]$ and
covariance $\hat{\Sigma}_{O}$; class balance $\hat{\mathbb{E}}[Y]$ and
variance $\hat{\Sigma}_{S}$; dependency mask $\Omega$
$\hat{z}\xleftarrow{}\text{argmin}_{Z}||\hat{\Sigma}_{O}^{-1}+zz^{T}||_{\Omega}$
$\hat{c}\xleftarrow{}\Sigma_{S}^{-1}(1+\hat{z}^{T}\hat{\Sigma}_{O}\hat{z})$
$\hat{\Sigma}_{OS}\xleftarrow{}\hat{\Sigma}_{O}\hat{z}/\sqrt{\hat{c}}$
$\hat{\mu}\xleftarrow{}\hat{\Sigma}_{OS}+\hat{\mathbb{E}}[Y]\hat{\mathbb{E}}[O]$
Algorithm 2 Source Estimation for Weak Supervision (From [4])
Snorkel, an open-source Python package, provides an implementation of
algorithm 2 [6].
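The masked objective $\text{argmin}_{z}||\hat{\Sigma}_{O}^{-1}+zz^{T}||_{\Omega}$ at the heart of algorithm 2 can be handed to a generic optimizer. The sketch below is our own illustration on a synthetic problem (the diagonal $K_{O}$, the chosen $z$, and the all-off-diagonal mask are assumptions, and $\hat{z}$ is identified only up to a global sign):

```python
import numpy as np
from scipy.optimize import minimize

def estimate_z(Sigma_O_inv, mask):
    """Minimize the Frobenius norm of (Sigma_O^{-1} + z z^T) restricted to the
    entries of the dependency mask Omega (where mask == 1)."""
    m = Sigma_O_inv.shape[0]
    obj = lambda z: np.sum(((Sigma_O_inv + np.outer(z, z)) * mask) ** 2)
    return minimize(obj, x0=np.ones(m), method="L-BFGS-B").x

# Synthetic setup: 3 conditionally independent sources, so K_O is diagonal,
# Sigma_O^{-1} = K_O - z z^T, and Omega contains every off-diagonal pair.
z_true = np.array([0.8, 0.6, 0.7])
Sigma_O_inv = np.diag([2.0, 2.0, 2.0]) - np.outer(z_true, z_true)
mask = 1.0 - np.eye(3)

z_hat = estimate_z(Sigma_O_inv, mask)  # matches z_true up to a global sign
```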
## III Motivating Our Approach
Although the algorithm proposed by Varma et al. may be used to determine the
source dependency structure and source accuracies, it requires a robust
principal components decomposition of the matrix $\hat{\Sigma}_{O}$, which is
equivalent to a convex Principal Components Pursuit (PCP) problem [5]. Using
current state-of-the-art solvers, such problems have time complexity
$O(\epsilon^{-2})$, where $\epsilon$ denotes the solver convergence tolerance
[5]. For reasonable choices of $\epsilon$ this may be a very expensive
computation.
In the single-task classification setting, algorithm 2 may be solved by least-
squares and is therefore much less expensive to compute than algorithm 1. Both
algorithms, however, require the observed labeling rates and covariance
estimates of the supervision sources over the entire dataset and therefore
cannot be used in an on-line setting.
We therefore develop an on-line approach which estimates the structure of $G$
using algorithm 1 on an initial ”minibatch” of unlabeled examples and then
iteratively updates the source accuracy estimate $\hat{\mu}$ using a
modified implementation of algorithm 2.
## IV Methods
Given an initial batch $b_{1}$ of unlabeled examples
$X_{b_{1}}=\\{x_{1},\dots,x_{k}\\}$ we estimate $G$ by first soliciting labels
$\mathbf{\lambda}_{1},\dots,\mathbf{\lambda}_{k}$ for $X_{b_{1}}$ from the
sources. We then calculate estimated labeling rates $\hat{E}[O]$ and
covariances $\hat{\Sigma}_{Ob_{1}}$ which we then input to algorithm 1,
yielding $\hat{G}=(V,\hat{E})$ and $\hat{L}$. From $\hat{E}$ we create the
dependency mask
$\hat{\Omega}=\\{(i,j):(\lambda_{i},\lambda_{j})\not\in\hat{E}\\}$ which we
will use with future data batches. Using the fact that $\hat{L}=zz^{T}$ we
recover $\hat{z}$ by first calculating
$|\hat{z}|=\sqrt{diag(\hat{L})}$
We then break the symmetry using the method in [4]. Note that if a source
$\lambda_{i}$ is conditionally independent of the others then the sign of
$z_{i}$ determines the sign of all other elements of $z$.
Using $\hat{z},\ \hat{E}[O],\ \hat{\Sigma}_{Ob_{1}}$, class balance prior
$\hat{E}[Y]$ and class variance prior $\hat{\Sigma}_{S}$ we calculate
$\hat{\mu}$, an estimate of the source accuracies [if we have no prior beliefs
about the class distribution then we simply substitute uninformative priors
for $\hat{E}[Y]$ and $\hat{\Sigma}_{S}$].
For each following batch $b_{p}$ of unlabeled examples $X_{b_{p}}$ we estimate
$\Sigma_{Ob_{p}}$ and $E[O]_{b_{p}}$. Using these along with $\hat{E}[O]$ and
$\hat{\Sigma}_{Ob_{1}}$ we calculate $\hat{\mu}_{b_{p}}$, an estimate of the
source accuracies over the batch. We then update $\hat{\mu}$ using the
following update rule:
$\hat{\mu}:=(1-\alpha)\hat{\mu}+\alpha\hat{\mu}_{b_{p}}$
where $\alpha\in[0,1]$ denotes the mixing parameter. Our method thus models
the source accuracies using an exponentially-weighted moving average of the
estimated per-batch source accuracies.
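The update itself is a one-line exponentially weighted moving average; a minimal sketch (the accuracy vectors are illustrative):

```python
import numpy as np

def update_mu(mu, mu_batch, alpha=0.05):
    """EWMA update of the running source-accuracy estimate mu."""
    return (1.0 - alpha) * np.asarray(mu) + alpha * np.asarray(mu_batch)

mu = np.array([0.8, 0.7, 0.9])       # running estimate mu-hat
mu_b = np.array([0.6, 0.8, 0.9])     # per-batch estimate for the newest batch
mu = update_mu(mu, mu_b, alpha=0.5)  # -> [0.7, 0.75, 0.9]
```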
Using the estimated source accuracies and dependency structure we may estimate
$p(y,\mathbf{\lambda})$ which we may then use to estimate
$p(y|\mathbf{\lambda})$ by (1).
Result: $\hat{\mu}$
$\mathbf{Inputs:}$ Observed labeling rates $\hat{\mathbb{E}}[O]_{b}$ and
covariance $\hat{\Sigma}_{Ob}$; class balance $\hat{\mathbb{E}}[Y]$ and
variance $\hat{\Sigma}_{S}$
for _each batch b_ do
if _is initial batch_ then
Use algorithm 1 to calculate $\hat{G}$ and $\hat{L}$
$|\hat{z}|\xleftarrow{}\sqrt{diag(\hat{L})}$
Determine the sign of the entries of $z$ using method from [4]
else
$\hat{z}\xleftarrow{}\text{argmin}_{z}||\hat{\Sigma}_{Ob}^{-1}+zz^{T}||_{\Omega}$
end if
$\hat{c}\xleftarrow{}\Sigma_{S}^{-1}(1+\hat{z}^{T}\hat{\Sigma}_{Ob}\hat{z})$
$\hat{\Sigma}_{OS}\xleftarrow{}\hat{\Sigma}_{Ob}\hat{z}/\sqrt{\hat{c}}$
$\hat{\mu}_{b}\xleftarrow{}\hat{\Sigma}_{OS}+\hat{\mathbb{E}}[Y]\hat{\mathbb{E}}[O]_{b}$
if _is initial batch_ then
$\hat{\mu}\xleftarrow{}\hat{\mu}_{b}$
else
$\hat{\mu}\xleftarrow{}(1-\alpha)\hat{\mu}+\alpha\hat{\mu}_{b}$
end if
end for
Algorithm 3 Incremental Source Accuracy Estimation
## V Tests
### Supervision Sources
We test our model in an on-line setting using three supervision sources. Two
of the sources are off-the-shelf implementations of Naïve Bayes classifiers
trained to classify text by sentiment. Each was trained using openly-available
datasets. The first model was trained using a subset of the IMDB movie reviews
dataset which consists of a corpus of texts labeled by perceived sentiment
[either ”positive” or ”negative”]. Because the labels associated with this
dataset are binary the classifier generates binary labels.
The second classifier was trained using another openly-available dataset, this
one consisting of a corpus of text extracted from tweets associated with air
carriers in the United States and labeled according to sentiment. The labels
in this dataset belong to three separate classes [”positive”, ”neutral”, and
”negative”], and therefore the model trained using this dataset classifies
examples according to these classes.
The final supervision source is the Textblob Pattern Analyzer. This is a
heuristic function which classifies text by polarity and subjectivity using a
lookup-table consisting of strings mapped to polarity/subjectivity estimates.
To generate discrete labels for an example using this model we threshold the
polarity/subjectivity estimates associated with the label as follows:
* •
If polarity is greater than 0.33 we generate a positive label
* •
If polarity is less than or equal to 0.33 but greater than -0.33 we generate a
neutral label
* •
If polarity is less than or equal to -0.33 we generate a negative label
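A direct implementation of these thresholds (the function name is our own; polarities are assumed to lie in $[-1,1]$, as TextBlob produces):

```python
def polarity_to_label(polarity):
    """Threshold a continuous polarity score into a discrete sentiment label."""
    if polarity > 0.33:
        return "positive"
    elif polarity > -0.33:
        return "neutral"
    else:
        return "negative"

print(polarity_to_label(0.5))   # -> positive
print(polarity_to_label(0.0))   # -> neutral
print(polarity_to_label(-0.5))  # -> negative
```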
### Test Data
We test our incremental model using a set of temporally-ordered text data
extracted from tweets associated with a 2016 GOP primary debate labeled by
sentiment [”positive”, ”neutral”, or ”negative”]. We do so by soliciting labels
$\mathbf{\lambda}_{1},\dots,\mathbf{\lambda}_{n}$ associated with the $n$
examples from the three supervision sources.
### Weak Supervision as Transfer Learning
Note that this setting is an example of a transfer learning problem [7].
Specifically, since we are using models pre-trained on datasets similar to the
target dataset we may view the Naive Bayes models as transferring knowledge
from those two domains [Tweets associated with airlines and movie reviews,
respectively] to provide supervision signal in the target domain [7]. The
Pattern Analyzer may be viewed through the same lens as it uses domain
knowledge gained through input from subject-matter experts.
### Test Setup
Because our model is generative we cannot use a standard train-validation-test
split of the dataset to determine model performance. Instead, we compare the
labels generated by the model with the ground-truth labels over separate folds
of the dataset.
#### Data Folding Procedure
We split the text corpus into five folds. The examples are not shuffled, to
preserve temporal order within folds. Using these folds we perform 5 separate
tests, each using four of the five folds in order. For example, the fifth test
uses fold 5 and folds 1–3, in that order.
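The rotation over folds can be sketched as follows (zero-based fold indices; the helper name is our own):

```python
def rolling_fold_tests(n_folds=5, n_use=4):
    """For each test, return the indices of n_use folds taken in temporal
    order, wrapping around the end of the dataset."""
    return [[(start + i) % n_folds for i in range(n_use)]
            for start in range(n_folds)]

# The fifth test (start index 4) uses fold 5 followed by folds 1-3:
print(rolling_fold_tests()[4])  # -> [4, 0, 1, 2]
```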
#### Partition Tests
For each set of folds we further partition the data into $k=100$ batches of
size $q$ which we refer to as ”minibatches” [as they are subsets of the
folds]. For each minibatch we solicit labels
$\mathbf{\lambda}_{1},\dots,\mathbf{\lambda}_{q},\
$\mathbf{\lambda}_{i}\in\mathbb{R}^{3}$ from the two pretrained models and the
Pattern Analyzer. Note that both pretrained classifiers first transform the
text by tokenizing the strings and then calculating the term-frequency to
inverse document frequency (Tf-idf) for each token. We store these labels in
an array $\mathbf{L}$ for future use. We then calculate $\hat{E}[O]_{b}$ and
$\hat{\Sigma}_{Ob}$ for the minibatch, which we use with algorithm 3 to
generate $\hat{\mu}_{b}$ and the dependency graph $\hat{G}$. Using these we
generate labels corresponding to the examples contained within the minibatch.
Using the ground-truth labels associated with the examples contained within
the minibatch we calculate the accuracy of our method by comparing the
generated labels $\mathbf{\hat{y}}$ with the ground-truth labels $\mathbf{y}$:
$\texttt{accuracy}(\mathbf{y},\mathbf{\hat{y}})=\frac{1}{q}\sum_{i=0}^{q-1}\mathbf{1}(\mathbf{\hat{y}}_{i}=\mathbf{y}_{i})$
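This metric is the standard 0-1 accuracy; a minimal numpy sketch:

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of generated labels that match the ground truth."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.mean(y_true == y_pred))

print(accuracy([1, 2, 3, 1], [1, 2, 2, 1]))  # -> 0.75
```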
We then average the accuracy scores associated with each minibatch over the
number of minibatches used in each test to calculate the average per-test
accuracy [calculated using four of the five folds of the overall dataset].
We then compare the average accuracies of the labels produced using our
incremental method to the accuracies of the labels produced by an existing
off-line source accuracy estimation method based on algorithm 2 [6]. Since
this method works in an off-line manner it requires access to the entire set
$\mathbf{L}$ of labels generated by the supervision sources. Using these, this
method generates its own set of labels $\mathbf{\hat{y}}_{baseline}$,
with which we then calculate the baseline accuracy using the accuracy metric
above.
Finally, we compare the accuracy of the labels generated by our method with
the accuracy of the labels generated by each of the supervision sources.
#### Comparing Values of $\alpha$
We then follow the same procedure as above to generate labels for our method,
except this time we use different values of $\alpha$.
## VI Results
Our tests demonstrate the following:
1. Our model generates labels which are more accurate than those generated by the baseline [when averaged over all 5 tests].
2. Both our method and the baseline generate labels which are more accurate than those generated by each of the supervision sources.
3. Our tests of the accuracy of labels generated by our method using different values of $\alpha$ yield an optimal value of $\alpha=0.05$ and show convexity over the values tested.
These tests show that the average accuracy of the incremental model
qualitatively appears to increase as the number of samples seen grows. This
result is not surprising, as we would expect our source accuracy estimate
to approach the true accuracy ($\hat{\mu}\to\mu$) as the number of
examples seen increases. This implies that the incremental approach we propose
generates more accurate labels as a function of the number of examples seen,
unlike the supervision sources which are pre-trained and therefore do not
generate more accurate labels as the number of labeled examples grows.
These tests also suggest that an optimal value for $\alpha$ for this problem
is approximately $0.05$ which is in the interior of the set of values tested
for $\alpha$. Since we used $100$ minibatches in each test of the incremental
model this implies that choosing an $\alpha$ which places greater weight on
more recent examples yields better performance, although more tests are
necessary to make any stronger claims.
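The paper does not spell out the exact update rule here, but one common scheme consistent with "$\alpha$ places greater weight on more recent examples" is an exponentially weighted moving average of the per-minibatch estimates. A minimal sketch under that assumption (the function name and inputs are illustrative):

```python
def ewma_accuracy_estimate(batch_estimates, alpha):
    """Exponentially weighted running estimate of a source accuracy.

    Assumption (not stated explicitly in the paper): each new minibatch
    estimate is blended into the running value with weight alpha, so a
    larger alpha discounts older minibatches faster.
    """
    mu = batch_estimates[0]
    for m in batch_estimates[1:]:
        mu = (1.0 - alpha) * mu + alpha * m
    return mu

# With alpha = 0.05 and 100 minibatches, the oldest batch retains
# weight (1 - 0.05)**99, i.e. well under 1% of its initial influence.
print(round((1 - 0.05) ** 99, 4))  # 0.0062
```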
Finally, we note that none of the models tested here are in themselves highly
accurate as classification models. This is not unexpected, as the supervision
sources were intentionally chosen to be "off-the-shelf" models and no feature
engineering was performed on the underlying text data, neither for the
datasets used in pre-training the two classifier supervision sources nor for
the test set [besides Tf-idf vectorization]. The intention in this test was to
compare the relative accuracies of the two generative methods, not to design
an accurate discriminative model.
## VII Conclusion
We develop an incremental approach for estimating weak supervision source
accuracies. We show that our method generates labels for unlabeled data which
are more accurate than those generated by pre-existing non-incremental
approaches. We frame our specific test case in which we use pre-trained models
and heuristic functions as supervision sources as a transfer learning problem
and we show that our method generates labels which are more accurate than
those generated by the supervision sources themselves.
Figure 1: Comparison of incremental and non-incremental model accuracy over minibatches.
Figure 2: Average model accuracy over minibatches.
Figure 3: Average per-batch accuracies for different values of $\alpha$.

TABLE I: Average Accuracy of Incremental Model For Different Alpha Values

Alpha | 0.001 | 0.01 | 0.025 | 0.05 | 0.1 | 0.25
---|---|---|---|---|---|---
Accuracy | 0.61245 | 0.61287 | 0.61832 | 0.61709 | 0.61498 | 0.61473
## References
* [1] Alexander Ratner, Stephen Bach, Paroma Varma, Chris Ré (2017). "Weak Supervision: The New Programming Paradigm for Machine Learning". Snorkel Blog.
* [2] Mayee Chen, Frederic Sala, Chris Ré. "Lecture Notes on Weak Supervision". CS 229 Lecture Notes, Stanford University.
* [3] Paroma Varma, Frederic Sala, Ann He, Alexander Ratner, Christopher Ré (2019). "Learning Dependency Structures for Weak Supervision Models". Preprint.
* [4] Alexander Ratner, Braden Hancock, Jared Dunnmon, Frederic Sala, Shreyash Pandey, Christopher Ré (2018). "Training Complex Models with Multi-Task Weak Supervision". Preprint.
* [5] Emmanuel J. Candès, Xiaodong Li, Yi Ma, John Wright. "Robust principal component analysis?" Journal of the ACM, Vol. 58, Issue 11.
* [6] Alexander Ratner, Stephen H. Bach, Henry Ehrenberg, Jason Fries, Sen Wu, Christopher Ré. "Snorkel: Rapid Training Data Creation with Weak Supervision." Preprint.
* [7] Sinno Jialin Pan, Qiang Yang (2009). "A Survey on Transfer Learning". IEEE Transactions on Knowledge and Data Engineering, Vol. 22, Issue 10.
# On the Suppression and Enhancement of Thermal Chemical Rates in a Cavity
Jing Sun, Oriol Vendrell
###### Abstract
The observed modification of thermal chemical rates in Fabry-Perot cavities
remains a poorly understood effect theoretically. Recent breakthroughs explain
some of the observations through the Grote-Hynes theory, where the cavity
introduces friction with the reaction coordinate, thus reducing the
transmission coefficient and the rate. The regime of rate enhancement, the
observed sharp resonances at varying cavity frequencies, and the survival of
these effects in the collective regime remain mostly unexplained. In this
paper, we consider the _cis_ -_trans_ isomerization of HONO atomistically
using an _ab-initio_ potential energy surface. We evaluate the transmission
coefficient using the reactive flux method and identify the conditions for
rate acceleration. In the underdamped, low-friction regime of the reaction
coordinate, the cavity coupling enhances the rate with increasing coupling
strength until reaching the Kramers turnover point. Sharp resonances in this
regime are related to cavity-enabled energy redistribution channels.
Theoretische Chemie, Physikalisch-Chemisches Institut, Universität Heidelberg,
69120 Heidelberg, Germany
Vibrational strong coupling (VSC) has emerged as a very active front in the
field of polaritonic chemistry since the pioneering demonstrations of Rabi
splitting in the infrared domain in Fabry-Perot configurations 1, 2, 3. At the
core of this sub-discipline lies the promise of modifying 4, and ultimately
controlling 5 the mechanisms and rates of thermal chemical reactions using the
vacuum fields of cavities 6. Besides important breakthroughs in the linear and
non-linear spectroscopy of VSC systems 4, 6, 5, 7, the more spectacular
results remain the experiments reporting the modification of chemical rates in
cavities by the Ebbesen group and others 8, 9, 10, 11, 12, 13. These
experiments have triggered the proposal of several theoretical models to
explain how the cavity modifies the ground electronic state structure 14 and
spectroscopy 15 and, more recently, how it modifies reaction rates 16, 17, 18,
19, 20, 21.
Theoretical models based on the Grote-Hynes theory 22 predict the suppression
of the transmission coefficient with increasing cavity coupling due to
increased friction at the top of the reaction barrier 19, 20, 21. How cavities
can enhance chemical reactions 9, how sharp resonances of the cavity with
vibrational modes affect the mechanism 9, 10, and how these effects survive in
the collective VSC regime, have remained poorly understood questions.
Here, we simulate the rate of a realistic isomerization reaction atomistically
using an _ab initio_ potential, both for one and several HONO molecules, in
the VSC regime. Our simulations explain how the cavity enhances chemical rates
in the underdamped regime and capture the turnover from the underdamped to the
damped regime as a function of the cavity coupling strength. Moreover, we
explain how, in the underdamped regime, sharp resonances of the cavity with
vibrational modes can strongly affect the reaction rate. Finally, our results
show how, in the collective VSC regime, the strong direct coupling to the
reaction coordinate ceases to be the determining factor in the cavity effect.
Figure 1: _cis_ -_trans_ isomerization reaction in HONO. The axes indicate
the body-fixed frame of the molecules in the simulation. The presence of the
cavity is indicated schematically and is not to scale. HONO is characterized
by 6 vibrational coordinates: $3$ stretching modes, O$-$H, O$-$N and N$=$O;
$2$ bending modes, H$-$O$-$N and O$-$N$=$O; $1$ torsion mode $\tau$, the
isomerization reaction coordinate.
Our starting point is the Hamiltonian for a molecular ensemble coupled to one
or several cavity modes
$\displaystyle\hat{H}$
$\displaystyle=\sum_{l=1}^{N}\hat{H}_{mol}^{(l)}+\hat{H}_{cav}$ (1)
with
$\displaystyle\hat{H}_{mol}^{(l)}$
$\displaystyle=\sum_{j_{l}=1}^{F}\frac{\hat{P}_{j_{l}}^{2}}{2M_{j_{l}}}+\hat{V}(R_{1_{l}}\ldots
R_{F_{l}}),$ (2) $\displaystyle\hat{H}_{cav}$
$\displaystyle=\frac{1}{2}\left[\hat{p}_{cav}^{2}+\omega_{cav}^{2}\left(\hat{q}_{cav}+\frac{\boldsymbol{\lambda}}{\omega_{cav}}\cdot\sum^{N}_{l=1}\hat{\boldsymbol{\mu}}^{(l)}\right)^{2}\right],$
(3)
and where $V(R_{1_{l}}\ldots R_{F_{l}})$ denotes the ground electronic state
potential energy surface (PES) of the $l$-th molecule with momenta $P_{j_{l}}$
and positions $R_{j_{l}}$. Hence, the Born-Oppenheimer (BO) approximation is
assumed within each molecule, and
$\boldsymbol{\mu}^{(l)}\equiv\boldsymbol{\mu}^{(l)}(R_{1_{l}}\ldots
R_{F_{l}})$ is the permanent dipole vector of the $l$-th molecule.
$\hat{H}_{cav}$ can be reached from the Coulomb-gauge light-matter interaction
Hamiltonian by taking the long-wavelength approximation followed by a unitary
transformation to the length form 23, 24, 17. For later convenience, we write
$\hat{H}_{cav}$ in its position-momentum representation ($\hat{q}_{cav}$,
$\hat{p}_{cav}$). $\omega_{cav}$ corresponds to the cavity mode frequency.
This form of the light-matter interaction has become standard in most
theoretical studies of VSC 17, 25, 19, 20, and details on its derivation 26,
23 and properties 27 can be found elsewhere. The parameter
$\boldsymbol{\lambda}$ equals the coupling strength
$\lambda=\sqrt{1/\epsilon_{0}V}$ times the unit polarization vector
$\boldsymbol{\epsilon}$ of the cavity mode, and $V$ represents the cavity
volume 23. Similarly to other studies and to facilitate comparisons, we
introduce the coupling parameter $g=\lambda\sqrt{\hbar\omega_{cav}/2}$, which
has units of electric field (using this relation and
$\hat{q}_{cav}=\sqrt{\hbar/2\omega_{cav}}(\hat{a}^{\dagger}+\hat{a})$, the
linear coupling term in $\hat{H}_{cav}$ reads
$g\,\boldsymbol{\epsilon}\cdot\boldsymbol{\mu}\,(\hat{a}^{\dagger}+\hat{a})$).
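For concreteness, the relation $g=\lambda\sqrt{\hbar\omega_{cav}/2}$ can be evaluated numerically. A small sketch in atomic units ($\hbar=1$); the numeric inputs are placeholder values for illustration, not parameters from the paper:

```python
import math

def coupling_g(lambda_, omega_cav):
    """g = lambda * sqrt(hbar * omega_cav / 2) in atomic units (hbar = 1).

    lambda_ = sqrt(1 / (eps0 * V)); g then has units of electric field.
    """
    return lambda_ * math.sqrt(omega_cav / 2.0)

# 852 cm^-1 converted to hartree (1 cm^-1 ~ 4.5563e-6 Eh).
omega_cav = 852.0 * 4.5563e-6
print(coupling_g(0.01, omega_cav))
```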
The simplicity of the unimolecular reaction mechanism in HONO makes it an
ideal benchmark system to understand how dynamical cavity effects can modify
chemical rates as compared, e.g., to bimolecular reactions in solution 10, 28.
We base our study on the CCSD(T)-quality _ab initio_ potential energy surface
(PES) of Richter et al. 29, which features a reaction barrier height of about
0.51 eV (49 kJ/mol) and where the _trans_ isomer is 11 meV more stable than
the _cis_ one. Quantum dynamics studies of the HONO isomerization triggered by
strong laser pulses have been based on this PES 30, 31. Despite its
simplicity, this chemical reaction constitutes a fully coupled and rich
dynamical system. Similarly to other isomerization reactions, e.g. involving
hydrocarbons 32, 33, 34, it takes place in the underdamped regime. Throughout
this work, the molecules are kept at a fixed orientation with respect to the
polarization direction of the cavity mode. In this way, we focus on the
coupling of the $\mu_{x}$ dipole component to the cavity polarization. As shown
in the SI, this component of the molecular dipole has the strongest modulation
at the isomerization barrier configuration.
The molecule-cavity system is considered within the classical approximation,
thus transforming all coordinate operators in Hamiltonians 2 and 3 to
classical functions. A classical description of the VSC regime is not new and
has been successfully applied to bulk systems described by force-field
potentials 35 and model Hamiltonians 19, 36. The _cis_ -_trans_ reaction rate
is described with the reactive flux method for the classical rate constant 32,
33, 37, 38, 34
$\displaystyle K(t)$
$\displaystyle=x_{cis}^{-1}\langle\dot{\tau}(0)\,\delta[\tau(0)-\tau^{{\ddagger}}]\,\theta[\tau(t)]\rangle,$
(4)
where $x_{cis}$ is the equilibrium fraction of HONO at the _cis_ geometry,
$\dot{\tau}(0)$ is the initial velocity of a phase-space point perpendicular
to the dividing surface between reactants and products, and
$\tau^{{\ddagger}}$ is the torsion angle corresponding to the transition state
(TS) geometry. The brackets indicate the canonical ensemble average over
trajectories, where we considered a temperature of 300 K throughout. The
Heaviside function $\theta[\tau]$ is defined to be one for the _trans_
configurations, and zero otherwise. The exact reactive flux is obtained in the
limit $t\to\infty$, in practice when the plateau for $K(t)$ is reached 38.
This occurs when all classical trajectories starting from the dividing surface
become trapped at either the reactants or products side. For example, for the
isomerization reaction of $n$-butane in the low-friction environment of a van
der Waals liquid this relaxation time is about 1 ps 33. Now, since 37
$\displaystyle\lim_{t\to 0^{+}}K(t)=K_{TST},$ (5)
one can introduce a transmission coefficient $\kappa(t)=K(t)/K_{TST}$ as the
quotient of the numerically exact reactive flux and the reactive flux without
recrossing, i.e. the TST assumption. $K_{TST}$ can be evaluated conveniently
using Eyring’s equation 39, 40, while $\kappa(t)$ is obtained from classical
trajectories. As shown in the SI, and as has been discussed in other works 17,
18, $K_{TST}$ is, to a very good approximation, insensitive to cavity effects.
Therefore, we consider $K_{TST}$ to be completely cavity-independent and
describe the cavity effect on the rate as
$\displaystyle K_{cav}=\kappa_{cav}\kappa_{0}K_{TST},$ (6)
where $K_{0}\equiv\kappa_{0}K_{TST}$ is the formally exact classical rate
outside the cavity. Here and in the following, transmission coefficients and
rate constants without a time argument refer to their plateau value. Clearly,
both $\kappa$ and $\kappa_{0}$ lie in the $[0,1]$ range but $\kappa_{cav}$ can
be both larger or smaller than one, corresponding to a chemical rate
enhancement or suppression, respectively.
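The ratio $\kappa(t)=K(t)/K_{TST}$ admits a simple trajectory estimator. A minimal sketch, assuming trajectories are sampled from the constrained canonical ensemble at the dividing surface (function and variable names are illustrative, not from the authors' code):

```python
import numpy as np

def transmission_coefficient(v0, is_trans_at_t):
    """Estimate kappa(t) from trajectories launched at tau(0) = tau_dagger.

    v0            : initial reaction-coordinate velocities tau_dot(0)
    is_trans_at_t : boolean theta[tau(t)] for each trajectory at time t
    kappa(t) is the flux-weighted fraction ending on the trans side,
    normalized by the TST flux (all forward movers assumed reactive).
    """
    v0 = np.asarray(v0, dtype=float)
    h = np.asarray(is_trans_at_t, dtype=float)
    flux = np.mean(v0 * h)                     # reactive-flux correlation
    tst_flux = np.mean(np.clip(v0, 0, None))   # t -> 0+ (TST) limit
    return float(flux / tst_flux)

# Toy example: 4 trajectories; one forward mover recrosses back to cis,
# so only half of the TST flux survives.
v0 = [1.0, 1.0, -1.0, -1.0]
ended_trans = [True, False, False, False]
print(transmission_coefficient(v0, ended_trans))  # 0.5
```

Note that a trajectory with negative initial velocity that nonetheless ends on the trans side contributes negatively to the flux, exactly as in Eq. 4.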
In the following, we theoretically demonstrate that both enhancement and
suppression of reaction rates are possible within a cavity for realistic
chemical processes. Although we rely on classical rate theory 40, we note that
tunneling corrections for hydrogen abstraction reactions at 300 K result in
variations of the rate within the same order of magnitude 41. For reactions
involving heavier elements, quantum corrections to the rates are even more
insignificant. Along these lines, there is no reason to assume, a priori, that
photonic modes with frequencies similar to the atomic vibrations, and in
thermal equilibrium, shall result in significant quantum effects that affect
the general conclusions derived from classical rate theories for VSC systems.
This does not exclude situations where quantum effects may be important for
quantitative descriptions of cavity-modified rates in reactions involving
light atoms, as it is sometimes the case for rates outside cavities 42, 43,
41.
Figure 2: a) Transmission coefficient $\kappa(t)$ for various cavity-coupling
strengths $g$ (V/nm) for $N=1$ and the cavity polarization aligned with HONO’s
$x$-axis. b) same as a) but with the HONO molecules coupled to a bath (see
SI). The shaded area on top of the solid lines indicates the standard
deviation of the average over trajectory runs. c) Asymptotic
$\kappa_{cav}=\kappa/\kappa_{0}$ for the curves in a) (red) and b) (green). d)
$\kappa_{cav}$ for increasing number of molecules at constant total
polaritonic coupling (see text for details).
Let us consider a single HONO molecule coupled to a cavity mode with $x$
polarization with respect to the molecular frame. In this case, the variation
of the permanent dipole is largest at the transition state (TS) of the
reaction coordinate, $\tau^{\ddagger}\approx\pi/2$. Outside the cavity,
$\kappa_{0}\approx 0.35$ at 300 K, the plateau value of the black curve in
Fig. 2a. This relatively low transmission is caused by a slow rate of intra-
molecular vibrational energy redistribution (IVR) of the activated
trajectories in the underdamped regime. As the cavity-coupling increases, one
sees how the plateau value stabilizes at a larger total transmission $\kappa$
in the red, blue and green curves. The cavity accelerates the chemical
reaction by increasing the total transmission coefficient compared to
$\kappa_{0}$, i.e. $\kappa_{cav}>1$. This is illustrated by the red trace in
Fig. 2c in the coupling regime where $\kappa_{cav}$ increases, and it is well
understood within our theoretical framework: the cavity provides an extra
energy redistribution pathway for a system with a low-friction reaction
coordinate. Recrossing events are increasingly suppressed and the transmission
increases. Nonetheless, as the cavity coupling to the torsion coordinate
further increases, a turning point is reached for $g>3$ V/nm. The amount of
recrossing at the barrier keeps increasing as well, finally reversing the
trend and decreasing the transmission again. This is the well-known Kramers
turnover 22, which, e.g., was predicted long ago for the isomerization of
cyclohexane as a function of solvent viscosity 34. Figure 2a illustrates its
origin in the quick drop of $\kappa(t)$ at short times for the strongest
cavity coupling.
When adding an external bath to HONO (see SI for details), the regime of
validity of the GH theory is restored. Figure 2b shows how now $\kappa(t)$
quickly reaches the plateau value within a few tens of femtoseconds, meaning
that activated trajectories visit the region of the TS only once or twice.
Since the plateau is reached quickly, the cavity can only have a short-time
effect close to the top of the barrier, where it can increase the amount of
recrossing and thus reduce the transmission coefficient. As illustrated in Fig. 2c
by the green trace, now $\kappa_{cav}<1$ and the chemical rate is reduced for
all coupling strengths. This is the regime captured in Refs. 19, 20, 21.
Figure 3: Potential energy surface cut for a) $1$ and b) $100$ HONO molecules
as a function of the reaction coordinate $\tau$ and the cavity displacement
$q_{cav}-q_{cav}^{{\ddagger}}$. For $N=1$ the light-matter coupling is $g=8$
V/nm. The cavity coupling in b) is scaled by ${1/\sqrt{N}}$ to keep a constant
overall light-matter interaction. The color levels start at $0$ for the
lightest tone and increase in steps of $0.2$ eV. The red line indicates the
minimum energy path. The vertical dashed line separates the _cis_ and _trans_
configurations.
A question still remains in connection with the collective VSC regime,
where most experiments reporting modifications of chemical rates in Fabry-
Perot configurations operate. To shed some light on this issue, we have
performed trajectory calculations of the transmission coefficient for an
increasing number of molecules $N$ coupled to the cavity, again without an
extra bath. The coupling per molecule is scaled as usual by a factor
${N}^{-1/2}$ as a means to keep the overall light-matter coupling constant 44.
Starting from $N=1$, $g=1$ V/nm, and $\omega_{cav}=852$ cm$^{-1}$, one sees in
Fig. 2d how, for increasing $N$, the cavity effect gradually fades away.
Responsible for the gradual trend $\kappa_{cav}\to 1$ is the decoupling of the
reaction coordinate from the cavity displacement with increasing $N$, as seen
by comparing the curvature of the minimum energy path (MEP) in Figs. 3a,
$N=1$, and 3b, with $N=100$. This reduction of the MEP curvature as $N$
increases, and thus the reduced friction caused by the cavity, implies that in
the large $N$ limit the cavity is not able to “cage” the TS and induce a
decrease of the transmission coefficient through this mechanism.
Figure 4: Transmission coefficient $\kappa$ for various coupling strengths
$g_{\omega}=g\mu_{x}^{{\ddagger}}/\omega_{cav}$ as a function of
$\omega_{cav}$. Vertical bars represent standard deviations over the trajectory
runs. $\omega_{cav}$ is chosen to be resonant with the fundamental modes of the
molecule: $\omega_{ONO}=609$ cm$^{-1}$, $\omega_{\tau}=640$ cm$^{-1}$,
$\omega_{ON}=852$ cm$^{-1}$, $\omega_{HON}=1263$ cm$^{-1}$,
$\omega_{NO}=1641$ cm$^{-1}$, $\omega_{OH}=3426$ cm$^{-1}$, and with the average
of every consecutive pair. The black dashed line indicates $\kappa_{0}$
outside the cavity with its standard deviation.
Finally, we address the question of sharp resonant effects, meaning when the
modification of chemical rates is particularly pronounced at specific cavity
frequencies. Through trajectory calculations it has been observed that the
outcome of reactive events can depend on the resonance between the cavity and
vibrational modes of the molecule, but a link to the actual modification of
chemical rates has not been established 25, 45. Our simulations of the
transmission coefficient in the underdamped, slow IVR regime reveal sharp
resonances in the rate constant effect as a function of $\omega_{cav}$. As
already discussed, in this regime the effect of the cavity is to introduce
extra energy redistribution pathways, whereby the effect at short times while
passing the TS barrier region is not so important. Thus, when the cavity is
resonant with a vibrational mode that happens to be strongly coupled to the
reaction coordinate, the enhancement of the rate is more prominent. As seen in
Fig. 4, $\kappa_{cav}\approx 2$ when $\omega_{cav}$ is resonant with the O-N
stretching mode at 852 cm$^{-1}$. This is not surprising. It is well-known that the
O-N stretch is strongly coupled to the torsion coordinate in HONO 29, 30:
Selective laser excitations of this mode result in an enhanced probability of
isomerization out of equilibrium 31. This brings us to the observed cavity
catalysis of a unimolecular dissociation in solution by the Ebbesen group 9. A
plausible explanation is that the strong resonance of the cavity mode with a
carbonyl group could stabilize the hot nascent products inside the solvent
pocket, in this way preventing recrossing events at early times after passage
over the TS. Further studies will be required to test our hypothesis.
Let us recapitulate and place our findings in the context of the current
literature on thermal rate models in cavities. The regime considered in model
studies is the one in which the cavity is strongly coupled to the reaction
coordinate, while reactants and products are strongly damped by a bath 19 or
effectively by a short propagation time that hinders recrossing 14. In this
case, the reaction pathway becomes curved, as seen in the Shin-Metiu and
other 1D models 14, 19, 21. The regime in which the coupling is strong at the
barrier top and the reactants and products are damped by a bath is well
captured by the Grote-Hynes (GH) theory 22 and has been the basis of
theoretical proposals for the reduction of chemical rates in cavities 19, 21,
20. In this regime there are no sharp resonance effects, although there is a
continuous dependence of $\kappa$ on $\omega_{cav}$ related to the frequency
of the inverted reaction barrier 21. Similarly to our findings, existing
theories report a reduction of the transmission coefficient in the collective
regime as $N$ increases (cf. Fig. 2a in Ref. 21). Reference 20 attributes the
persistence of the modification of chemical rates in the large $N$ collective
regime to the assumption that “the polariton state is thermally activated to
yield collective barrier crossing” 20. While a coherent superposition of
activated complexes may display this behavior, a physical mechanism by which a
polaritonic system spontaneously results in coherence in the dark and under
thermal equilibrium of the material and photonic modes, has not been put
forward.
Concluding, our main contribution has been to identify dynamical effects
played by the cavity in the low-friction (or underdamped) regime of the
reaction coordinate. In this regime, the cavity effect is twofold: (1) It
_accelerates_ the chemical rate by increasing the friction compared to the
cavity-free system. This reduces the recrossing due to trajectories that,
otherwise, would visit reactants and products several times, thus increasing
the transmission coefficient compared to $\kappa_{0}$. As the cavity coupling
keeps increasing, the overall increased friction can introduce again more
coupling at the barrier and the trend can overturn. This is the well-known
Kramers turnover situation, and this regime can exist in the condensed phase
22, 34. (2) In the low-friction regime, sharp resonant effects are possible.
These are related to the new IVR pathways offered by the cavity. On the
products side, they dissipate energy from the nascent hot products. On the
reactants side, the resonances funnel energy into the nascent activated
complexes. Numerically, the former is captured by trajectories starting
towards the products-side and being effectively captured there. The latter is
captured by the trajectories initially moving towards reactants and being
effectively captured as well. If this reactants-side capture were
ineffective, they would be counted as products but with a negative
contribution to the flux, in this way lowering the transmission (cf. Eq. 4).
Finally, when a bath is added to the HONO molecule and the overall friction is
sufficiently increased, the model reverts to the already known GH regime where
the cavity only affects the recrossings at the top of the reaction barrier.
Our findings shed important new light onto the question of cavity-modified
reactivity. However, it still remains for future work to better understand how
these cavity effects can survive in actual liquid phases and in the collective
regime for truly macroscopic numbers of molecules. It is plausible that the
more detailed answers lie beyond models of independent molecules and may
require studies of the transmission coefficient with full consideration of
environmental effects in the bulk 46.
## References
* Hutchison et al. 2012 Hutchison, J. A.; Schwartz, T.; Genet, C.; Devaux, E.; Ebbesen, T. W. _Angew. Chem._ 2012, _124_ , 1624–1628
* Shalabney et al. 2015 Shalabney, A.; George, J.; Hutchison, J.; Pupillo, G.; Genet, C.; Ebbesen, T. W. _Nat Commun_ 2015, _6_ , 5981
* Ebbesen 2016 Ebbesen, T. W. _Acc. Chem. Res._ 2016, _49_ , 2403–2412
* Dunkelberger et al. 2016 Dunkelberger, A. D.; Spann, B. T.; Fears, K. P.; Simpkins, B. S.; Owrutsky, J. C. _Nat. Commun._ 2016, _7_ , 13504
* Yang et al. 2020 Yang, Z.; Xiang, B.; Xiong, W. _ACS Photonics_ 2020, _7_ , 919–924
* Dunkelberger et al. 2018 Dunkelberger, A. D.; Davidson, R. B.; Ahn, W.; Simpkins, B. S.; Owrutsky, J. C. _J. Phys. Chem. A_ 2018, _122_ , 965–971
* Fassioli et al. 2021 Fassioli, F.; Park, K. H.; Bard, S. E.; Scholes, G. D. _J. Phys. Chem. Lett._ 2021, _12_ , 11444–11459
* Thomas et al. 2016 Thomas, A. et al. _Angew. Chem. Int. Ed._ 2016, _55_ , 11462–11466
* Lather et al. 2019 Lather, J.; Bhatt, P.; Thomas, A.; Ebbesen, T. W.; George, J. _Angew. Chem. Int. Ed._ 2019, _58_ , 10635–10638
* Thomas et al. 2019 Thomas, A. et al. _Science_ 2019, _363_ , 615–619
* Vergauwe et al. 2019 Vergauwe, R. M. A.; Thomas, A.; Nagarajan, K.; Shalabney, A.; George, J.; Chervy, T.; Seidel, M.; Devaux, E.; Torbeev, V.; Ebbesen, T. W. _Angew. Chem. Int. Ed._ 2019, _58_ , 15324–15328
* Thomas et al. 2020 Thomas, A.; Jayachandran, A.; Lethuillier-Karl, L.; Vergauwe, R. M. A.; Nagarajan, K.; Devaux, E.; Genet, C.; Moran, J.; Ebbesen, T. W. _Nanophotonics_ 2020, _9_ , 249–255
* Imperatore et al. 2021 Imperatore, M. V.; Asbury, J. B.; Giebink, N. C. _J. Chem. Phys._ 2021, _154_ , 191103
* Galego et al. 2019 Galego, J.; Climent, C.; Garcia-Vidal, F. J.; Feist, J. _Phys. Rev. X_ 2019, _9_ , 021057
* del Pino et al. 2015 del Pino, J.; Feist, J.; Garcia-Vidal, F. J. _New J. Phys._ 2015, _17_ , 053040
* Campos-Gonzalez-Angulo et al. 2019 Campos-Gonzalez-Angulo, J. A.; Ribeiro, R. F.; Yuen-Zhou, J. _Nat Commun_ 2019, _10_ , 4685
* Li et al. 2020 Li, T. E.; Nitzan, A.; Subotnik, J. E. _J. Chem. Phys._ 2020, _152_ , 234107
* Vurgaftman et al. 2020 Vurgaftman, I.; Simpkins, B. S.; Dunkelberger, A. D.; Owrutsky, J. C. _J. Phys. Chem. Lett._ 2020, 3557–3562
* Li et al. 2021 Li, X.; Mandal, A.; Huo, P. _Nat Commun_ 2021, _12_ , 1315
* Yang and Cao 2021 Yang, P.-Y.; Cao, J. _J. Phys. Chem. Lett._ 2021, _12_ , 9531–9538
* Mandal et al. 2022 Mandal, A.; Li, X.; Huo, P. _J. Chem. Phys._ 2022, _156_ , 014101
* Hynes 1986 Hynes, J. _J. Stat. Phys._ 1986, _42_ , 149–168
* Flick et al. 2017 Flick, J.; Ruggenthaler, M.; Appel, H.; Rubio, A. _PNAS_ 2017, _114_ , 3026–3034
* Haugland et al. 2021 Haugland, T. S.; Schäfer, C.; Ronca, E.; Rubio, A.; Koch, H. _J. Chem. Phys._ 2021, _154_ , 094113
* Sidler et al. 2020 Sidler, D.; Ruggenthaler, M.; Appel, H.; Rubio, A. _J. Phys. Chem. Lett._ 2020, _11_ , 7525–7530
* Power et al. 1959 Power, E. A.; Zienau, S.; Massey, H. S. W. _Philos. Trans. R. Soc. Lond. Ser. Math. Phys. Sci._ 1959, _251_ , 427–454
* Schäfer et al. 2020 Schäfer, C.; Ruggenthaler, M.; Rokaj, V.; Rubio, A. _ACS Photonics_ 2020, _7_ , 975–990
* Climent and Feist 2020 Climent, C.; Feist, J. _Phys. Chem. Chem. Phys._ 2020, _22_ , 23545–23552
* Richter et al. 2004 Richter, F.; Hochlaf, M.; Rosmus, P.; Gatti, F.; Meyer, H.-D. _J. Chem. Phys._ 2004, _120_ , 1306–1317
* Richter et al. 2004 Richter, F.; Rosmus, P.; Gatti, F.; Meyer, H.-D. _J. Chem. Phys._ 2004, _120_ , 6072–6084
* Richter et al. 2007 Richter, F.; Gatti, F.; Léonard, C.; Le Quéré, F.; Meyer, H.-D. _J. Chem. Phys._ 2007, _127_ , 164315
* Montgomery et al. 1979 Montgomery, J. A.; Chandler, D.; Berne, B. J. _J. Chem. Phys._ 1979, _70_ , 4056–4066
* Rosenberg et al. 1980 Rosenberg, R. O.; Berne, B. J.; Chandler, D. _Chemical Physics Letters_ 1980, _75_ , 162–168
* Kuharski et al. 1988 Kuharski, R. A.; Chandler, D.; Montgomery, J. A.; Rabii, F.; Singer, S. J. _J. Phys. Chem._ 1988, _92_ , 3261–3267
* Li et al. 2020 Li, T. E.; Subotnik, J. E.; Nitzan, A. _Proc. Natl. Acad. Sci._ 2020, _117_ , 18324–18331
* Wang et al. 2021 Wang, D. S.; Neuman, T.; Yelin, S. F.; Flick, J. _ArXiv210906631 Nlin Physicsphysics Physicsquant-Ph_ 2021
* Chandler 1987 Chandler, D. _Introduction to Modern Statistical Mechanics_ ; Oxford University Press: New York, 1987
* Berne et al. 1988 Berne, B. J.; Borkovec, M.; Straub, J. E. _J. Phys. Chem._ 1988, _92_ , 3711–3725
* Eyring 1935 Eyring, H. _J. Chem. Phys._ 1935, _3_ , 107–115
* Hänggi et al. 1990 Hänggi, P.; Talkner, P.; Borkovec, M. _Rev. Mod. Phys._ 1990, _62_ , 251–341
* Masgrau et al. 2002 Masgrau, L.; González-Lafont, À.; Lluch, J. M. _J. Phys. Chem. A_ 2002, _106_ , 11760–11770
* Miller 1979 Miller, W. H. _J. Am. Chem. Soc._ 1979, _101_ , 6810–6814
* Matzkies and Manthe 1998 Matzkies, F.; Manthe, U. _J. Chem. Phys._ 1998, _108_ , 4828–4836
* Vendrell 2018 Vendrell, O. _Phys. Rev. Lett._ 2018, _121_ , 253001
* Schäfer et al. 2021 Schäfer, C.; Flick, J.; Ronca, E.; Narang, P.; Rubio, A. _ArXiv210412429 Phys. Physicsquant-Ph_ 2021
* Li et al. 2021 Li, T. E.; Nitzan, A.; Subotnik, J. E. _Angew. Chem. Int. Ed._ 2021, _60_ , 15533–15540
AMS 2010 Mathematics Subject Classification: 53C25, 53D15.
Key words and phrases: Almost Kenmotsu manifold, Ricci soliton.
# Non-existence of Ricci solitons in almost Kenmotsu manifolds
U. C. De
Department of Pure Mathematics, University of Calcutta, 35, Ballygunge
Circular Road, Kol-700019, West Bengal, INDIA
###### Abstract.
In this short note we prove that there does not exist a Ricci soliton
$(g,\xi)$ on an almost Kenmotsu manifold.
## 1\. Introduction
A Ricci soliton is a generalization of an Einstein metric. In a Riemannian
manifold $(M,g)$ a Ricci soliton is a triplet $(g,V,\lambda)$, with $g$, a
Riemannian metric, $V$ a smooth vector field (called the potential vector
field) and $\lambda$ a constant such that
(1.1) $\pounds_{V}g+2S-2\lambda g=0,$
where $\pounds_{V}g$ is the Lie derivative of $g$ along a vector field $V$ and
$S$ is the Ricci tensor of type $(0,2)$. If $V$ is zero or Killing, then the
soliton is an Einstein metric. The Ricci soliton is said to be shrinking,
steady or expanding according as $\lambda$ is positive, zero or negative,
respectively. Compact Ricci solitons are the fixed points of the Ricci flow
$\frac{\partial}{\partial t}g=-2S$ projected from the space of metrics onto
its quotient modulo diffeomorphisms and scalings. They often arise as blow-up
limits for the Ricci flow on compact manifolds. Metrics satisfying (1.1) are
very useful in physics. Theoretical physicists have also been studying the
Ricci soliton equation in relation to string theory. A Ricci soliton on a
compact manifold has constant curvature in dimension $2$ (Hamilton [14]) and
also in dimension $3$ (Ivey [15]).
Recently Cho [5] obtained some results about Ricci solitons in almost contact
and contact geometry. Also Ricci solitons have been studied by several authors
such as Bejan and Crasmareanu [1], Calin and Crasmareanu [4], Chow and Knopf
[6], De et al. [7], De and Mandal [10], Wang and Liu [19] and many others.
The purpose of this paper is to study Ricci solitons in almost Kenmotsu
manifolds. The present paper is organized as follows: Section 2 contains some
preliminary results on almost Kenmotsu manifolds. Finally, we prove that there
does not exist a Ricci soliton on an almost Kenmotsu manifold.
## 2\. Almost Kenmotsu manifolds
A differentiable $(2n+1)$-dimensional manifold $M$ is said to have a
$(\phi,\xi,\eta)$-structure or an almost contact structure, if it admits a
$(1,1)$ tensor field $\phi$, a characteristic vector field $\xi$ and a 1-form
$\eta$ satisfying ([2],[3])
(2.1) $\phi^{2}=-I+\eta\otimes\xi,\;\eta(\xi)=1,$
where $I$ denotes the identity endomorphism. Here also $\phi\xi=0$ and
$\eta\circ\phi=0$; both can be derived easily from $(2.1)$.
If a manifold $M$ with a $(\phi,\xi,\eta)$-structure admits a Riemannian
metric $g$ such that
$g(\phi X,\phi Y)=g(X,Y)-\eta(X)\eta(Y),$
for any vector fields $X$, $Y$ of $T_{p}M^{2n+1}$, then $M$ is said to have an
almost contact structure $(\phi,\xi,\eta,g)$. The fundamental 2-form $\Phi$ on
an almost contact metric manifold is defined by $\Phi(X,Y)=g(X,\phi Y)$ for
any $X$, $Y$ of $T_{p}M^{2n+1}$. The condition for an almost contact metric
manifold to be normal is equivalent to the vanishing of the $(1,2)$-type torsion
tensor $N_{\phi}$, defined by $N_{\phi}=[\phi,\phi]+2d\eta\otimes\xi$, where
$[\phi,\phi]$ is the Nijenhuis torsion of $\phi$ [2]. Recently, in
([11],[12],[13]), almost contact metric manifolds such that $\eta$ is closed
and $d\Phi=2\eta\wedge\Phi$ have been studied; they are called almost Kenmotsu
manifolds. Obviously, a normal almost Kenmotsu manifold is a Kenmotsu
manifold. Kenmotsu manifolds can also be characterized by
$(\nabla_{X}\phi)Y=g(\phi X,Y)\xi-\eta(Y)\phi X,$
for any vector fields $X$, $Y$. It is well known [16] that a Kenmotsu manifold
$M^{2n+1}$ is locally a warped product $I\times_{f}N^{2n}$, where $N^{2n}$ is a
Kähler manifold, $I$ is an open interval with coordinate $t$, and the warping
function $f$ is defined by $f=ce^{t}$ for some positive constant $c$. Let us
denote by $\mathcal{D}$ the distribution orthogonal to $\xi$, defined by
$\mathcal{D}=Ker(\eta)=Im(\phi)$. In an almost Kenmotsu manifold, since $\eta$
is closed, $\mathcal{D}$ is an integrable distribution. Let $M^{2n+1}$ be an
almost Kenmotsu manifold. We denote by $h=\frac{1}{2}\pounds_{\xi}\phi$ and
$l=R(\cdot,\xi)\xi$ on $M^{2n+1}$. The tensor fields $l$ and $h$ are symmetric
operators and satisfy the following relations:
(2.2) $h\xi=0,\;l\xi=0,\;tr(h)=0,\;tr(h\phi)=0,\;h\phi+\phi h=0,$
(2.3) $\nabla_{X}\xi=-\phi^{2}X-\phi hX\;(\Rightarrow\nabla_{\xi}\xi=0),$
(2.4) $\phi l\phi-l=2(h^{2}-\phi^{2}),$
(2.5) $R(X,Y)\xi=\eta(X)(Y-\phi hY)-\eta(Y)(X-\phi hX)+(\nabla_{Y}\phi h)X-(\nabla_{X}\phi h)Y,$
for any vector fields $X$, $Y$. The $(1,1)$-type symmetric tensor field
$h^{\prime}=h\circ\phi$ anticommutes with $\phi$ and satisfies $h^{\prime}\xi=0$.
Also it is clear that ([11], [18])
(2.6) $h=0\Leftrightarrow h^{\prime}=0,\;\;h^{\prime 2}=(k+1)\phi^{2}\;(\Leftrightarrow h^{2}=(k+1)\phi^{2}).$
Almost Kenmotsu manifolds have been studied by several authors such as Dileo
and Pastore ([11], [12], [13]), Wang and Liu ([17], [18], [20]), De and Mandal
([8], [9]) and many others.
Here we state a lemma which will be used later.
###### Lemma 2.1.
[5] If $(g,V)$ is a Ricci soliton on a Riemannian manifold, then we have
(2.7) $\displaystyle\frac{1}{2}||\pounds_{V}g||^{2}=dr(V)+2div(\lambda V-QV),$
where $r$ denotes the scalar curvature and $Q$ denotes the Ricci operator
defined by $S(X,Y)=g(QX,Y)$.
## 3\. Ricci solitons
In this section we characterize almost Kenmotsu manifolds admitting a Ricci
soliton. Suppose that an almost Kenmotsu manifold admits a Ricci soliton
whose potential vector field is the Reeb vector field $\xi$. Hence
(3.1) $\displaystyle\pounds_{\xi}g+2S-2\lambda g=0.$
i.e.,
(3.2)
$\displaystyle\frac{1}{2}[g(\nabla_{X}\xi,Y)+g(\nabla_{Y}\xi,X)]+S(X,Y)-\lambda
g(X,Y)=0.$
Using $(2.1)$ and $(2.3)$ in $(3.2)$ yields
$\displaystyle\frac{1}{2}[g(X-\eta(X)\xi-\phi hX,Y)+g(Y-\eta(Y)\xi-\phi
hY,X)]$ (3.3) $\displaystyle+S(X,Y)-\lambda g(X,Y)=0.$
Simplifying $(3.3)$, using the symmetry of the operator $\phi h$, gives
$\displaystyle\frac{1}{2}[2g(X,Y)-2\eta(X)\eta(Y)-2g(\phi
hX,Y)]+S(X,Y)-\lambda g(X,Y)=0,$
that is,
(3.4) $\displaystyle g(X,Y)-\eta(X)\eta(Y)-g(\phi hX,Y)+S(X,Y)-\lambda
g(X,Y)=0.$
From the above equation it follows that
(3.5) $\displaystyle QX=(\lambda-1)X+\eta(X)\xi+\phi hX.$
Replacing $X$ by $\xi$ in $(3.5)$ we have
(3.6) $\displaystyle Q\xi=\lambda\xi.$
Let $\{e_{1},e_{2},\ldots,e_{2n+1}\}$ be a local orthonormal basis of the
tangent space at a point of the manifold $M$. Then putting $X=Y=e_{i}$ in
$(3.4)$ and summing over $i$, $1\leqslant i\leqslant 2n+1$, we
have
(3.7) $r=\lambda(2n+1)-2n.$
Therefore the scalar curvature is constant as $\lambda$ is constant.
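In detail, summing $(3.4)$ over the orthonormal basis gives
$(2n+1)-1-tr(\phi h)+r-\lambda(2n+1)=0,$
and since $tr(\phi h)=tr(h\phi)=0$ by $(2.2)$, this is exactly $(3.7)$.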
Since
(3.8) $\displaystyle\frac{1}{2}||\pounds_{V}g||^{2}=dr(V)+2div(\lambda V-QV),$
so replacing $V$ by $\xi$ and using the fact that the scalar curvature $r$ is
constant, we get
(3.9) $\displaystyle\frac{1}{2}||\pounds_{\xi}g||^{2}=2div(\lambda\xi-Q\xi).$
Making use of $(3.6)$ in $(3.9)$ we obtain
$||\pounds_{\xi}g||^{2}=0,$
which implies that $\xi$ is a Killing vector field. But in an almost Kenmotsu
manifold $\xi$ can never be a Killing vector field.
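Indeed, by $(2.1)$, $(2.2)$ and $(2.3)$,
$div\,\xi=\sum_{i=1}^{2n+1}g(\nabla_{e_{i}}\xi,e_{i})=tr(-\phi^{2}-\phi h)=2n\neq 0,$
whereas every Killing vector field on a Riemannian manifold is divergence-free.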
This leads to the following:
###### Theorem 3.1.
There does not exist a Ricci soliton $(g,\xi)$ on an almost Kenmotsu manifold.
## References
* [1] Bejan, C. L. and Crasmareanu, M., Second order parallel tensors and Ricci solitons in $3$-dimensional normal paracontact geometry, Ann. Global Anal. Geom., 46(2014), 117–127.
* [2] Blair, D. E., Contact manifolds in Riemannian geometry, Lecture Notes in Mathematics, 509, Springer, Berlin, (1976).
* [3] Blair, D. E., Riemannian geometry of contact and symplectic manifolds, Progr. Math., (2010), (Birkhäuser).
* [4] Calin, C. and Crasmareanu, M., From the Eisenhart problem to Ricci solitons in $f$-Kenmotsu manifolds, Bull. Malays. Math. Sci. Soc. 33(2010), 361–368.
* [5] Cho, J. T., Notes on contact Ricci solitons, Proc. Edinb. Math. Soc. 54(2011), 47–53.
* [6] Chow, B. and Knopf, D., The Ricci flow: An introduction, Mathematical surveys and Monographs, American Math. Soc. 110(2004).
* [7] De, U. C., Deshmukh, S. and Mandal, K., On three-dimensional $N(k)$-paracontact metric manifolds and Ricci solitons, Bull. Iranian Math. Soc. 43(2017), 1571–1583.
* [8] De, U. C. and Mandal, K., On $\phi$-Ricci recurrent almost Kenmotsu manifolds with nullity distribution, Int. Electron. J. Geom., 9(2016), 70-79.
* [9] De, U. C. and Mandal, K., Ricci solitons on almost Kenmotsu manifolds, An. Univ. Oradea Fasc. Mat., 2(2016), 109-116.
* [10] De, U.C. and Mandal, K., Certain results on generalized $(k,\mu)$-contact metric manifolds, J. Geom., 108 (2017), 611–621.
* [11] Dileo, G., and Pastore, A. M., Almost Kenmotsu manifolds and nullity distributions, J. Geom. 93(2009), 46-61.
* [12] Dileo, G., and Pastore, A. M., Almost Kenmotsu manifolds with a condition of $\eta$-parallelism, Differential Geom. Appl. 27(2009), 671-679.
* [13] Dileo, G., and Pastore, A. M., Almost Kenmotsu manifolds and local symmetry, Bull. Belg. Math. Soc. Simon Stevin 14(2007), 343-354.
* [14] Hamilton, R. S., The Ricci flow on surfaces, Mathematics and general relativity (Santa Cruz, CA, 1986), Contemp. Math. 71, American Math. Soc.,(1988), 237–262.
* [15] Ivey, T., Ricci solitons on compact $3$-manifolds, Diff. Geom. Appl., 3(1993), 301–307.
* [16] Kenmotsu, K., A class of almost contact Riemannian manifolds, Tohoku Math. J., 24(1972), 93-103.
* [17] Wang, Y. and Liu, X., Second order parallel tensors on almost Kenmotsu manifolds satisfying the nullity distributions, Filomat 28(2014), 839-847.
* [18] Wang, Y. and Liu, X., Riemannian semisymmetric almost Kenmotsu manifolds and nullity distributions, Ann. Polon. Math. 112(2014), 37-46.
* [19] Wang, Y. and Liu, X., Ricci solitons on three-dimensional $\eta$-Einstein almost Kenmotsu manifolds, Taiwanese J. Math., 19(2015), 91–100.
* [20] Wang, Y. and Liu, X., On $\phi$-recurrent almost Kenmotsu manifolds, Kuwait J. Sci.42(2015), 65-77.
# Eliciting and Understanding Cross-Task Skills
with Task-Level Mixture-of-Experts
Qinyuan Ye Juan Zha Xiang Ren
University of Southern California
{qinyuany, juanzha<EMAIL_ADDRESS>
###### Abstract
Recent works suggest that transformer models are capable of multi-tasking on
diverse NLP tasks and adapting to new tasks efficiently. However, the
potential of these multi-task models may be limited as they use the same set
of parameters for all tasks. In contrast, humans tackle tasks in a more
flexible way, by making proper presumptions on what skills and knowledge are
relevant and executing only the necessary computations. Inspired by this, we
propose to use task-level mixture-of-experts models, which have a collection of
transformer layers (i.e., experts) and a router component that chooses from
these experts dynamically and flexibly. We find that these models help improve
the average relative performance gain (ARG) metric by 2.6% when adapting to unseen
tasks in the few-shot setting and by 5.6% in the zero-shot generalization
setting. Further, we show that the learned routing decisions partly rediscover
human categorization of NLP tasks – certain experts are strongly associated
with extractive tasks, some with classification tasks, and some with tasks
requiring world knowledge. (Our code will be released at
https://github.com/INK-USC/CrossTaskMoE.)
## 1 Introduction
Pre-trained transformer models Devlin et al. (2019); Liu et al. (2019b) have
demonstrated remarkable capabilities in natural language processing (NLP) in
recent years. Moreover, generative transformers can be viewed as a universal
model that can be optimized for any language task primed into text-to-text
format Raffel et al. (2020). Recently, researchers found that training these
transformer models to multi-task on a diverse collection of NLP tasks is
beneficial – not only are they better at handling seen tasks Aghajanyan et al.
(2021); Aribandi et al. (2022), but also at generalizing and adapting to
unseen tasks Wei et al. (2021); Sanh et al. (2022).
However, little is known about how multi-tasking capabilities and cross-task
generalization are achieved, especially given that the exact same set of weights
is applied, and the same computation is executed, for very different tasks.
Humans, on the other hand, do not exhaust their brain capacity for every task
at hand. Humans develop skill sets and accumulate knowledge during learning,
and can reuse and recompose them when facing a new task. Inspired by this, we
hypothesize that a model that explicitly emulates skill and knowledge sharing
may help improve multi-task performance and generalization to new tasks. A
natural fit for this goal would be task-level mixture-of-expert models Jacobs
et al. (1991); Kudugunta et al. (2021), where the model computation is
conditioned on the task at hand. More specifically, the model contains a
collection of experts and a router that chooses from the experts and composes
the final model (Fig. 1-2).
Figure 1: Illustration of Task-level Mixture-of-Expert Models. In this work,
we train such models to multi-task on diverse NLP tasks, aiming at modeling
skill sharing explicitly and understanding the learned patterns.
In this paper, we first empirically investigate several key design choices for
effectively training task-level mixture-of-experts models (§5). We further
test the model’s task-level generalization capabilities by testing it on
unseen tasks (§6). Compared to a multi-task BART-Base Lewis et al. (2020)
baseline, our final method leads to a 2.6% improvement in the average relative
performance gain (ARG) metric when adapting to 18 unseen tasks Ye et al.
(2021) in the few-shot learning setting. Further, a gain of 5.6% in ARG is
obtained in the zero-shot setting with P3 dataset Sanh et al. (2022). Lastly,
we conduct a detailed analysis quantifying the correlations between the
learned routes and the characteristics of tasks (§7). We find that the routing
decisions, though learned purely from multi-tasking without prior knowledge,
strongly correlate with human understanding of task characteristics, such as
the task being a classification task, the task being extractive, or the task
requiring world knowledge.
## 2 Related Work
#### Massive Multi-task Learning.
Multi-task learning Caruana (1997) has been continuously explored in NLP and
is shown to be beneficial McCann et al. (2018); Liu et al. (2019a). Recently,
multi-task learning in NLP has been brought to a new scale by using significantly
larger collections of tasks and examples Aghajanyan et al. (2021); Aribandi et
al. (2022); Khashabi et al. (2020); Hendrycks et al. (2021). These works
demonstrate that multi-task learning improves the learning of text
representations and thus boosts the performance of seen tasks. Moreover, these
models also exhibit strong adaptability to unseen tasks, in both few-shot Ye
et al. (2021) and zero-shot settings Wei et al. (2021); Sanh et al. (2022);
Mishra et al. (2021). Despite their effectiveness in terms of performance, how
a model learns and spontaneously develops language skills during multi-task
learning is a relatively under-explored topic. In our work, we try to
investigate this question by training task-level MoE models and interpreting
them. We additionally discuss contemporary works Ponti et al. (2022); Gupta et
al. (2022); Asai et al. (2022) in Appendix D.
#### Mixture-of-Experts in NLP.
Mixture-of-experts models (Jacobs et al., 1991) divide the problem space into
several sub-spaces and allow experts to be specialized in each subspace.
Recently, this concept has been successfully applied to NLP Shazeer et al. (2017),
enabling models of billion or even trillion parameter scale Fedus et al.
(2021); Du et al. (2021); Artetxe et al. (2021); Zoph et al. (2022). However,
these applications mainly focus on the scaling aspects. Besides, most of them
select experts on a per-example or per-token basis. In this work we are
interested in multi-task learning with per-task gating decisions Rosenbaum et
al. (2018); Kudugunta et al. (2021), and mainly focus on understanding and
interpreting task transferability.
#### Task Transferability in NLP.
Phang et al. (2018) explored supplementary training on intermediate tasks
(STILT), i.e., training on a data-rich intermediate task before fine-tuning on
the target task. STILT improves performance on the target task and stabilizes
the fine-tuning process. Pruksachatkun et al. (2020) and Vu et al. (2020)
further investigated when and why intermediate task transfer works. These
studies mainly focus on transferability between specific source-target pairs,
while we consider a more general setting of transferring within and beyond a
group of NLP tasks.
## 3 Problem Setting
Our goal is to better understand multi-task learning with mixture-of-experts
models with an explicit routing mechanism. We also hypothesize that such
models help improve the model’s capability to generalize/adapt to new tasks.
Our problem setting closely resembles CrossFit Ye et al. (2021). In the
following, we introduce data usage (§3.1), training procedure (§3.2), and
evaluation protocol (§3.3).
### 3.1 Data Usage
Assume that we have a collection of diverse NLP tasks $\mathcal{T}$,
partitioned into two non-overlapping sets
$(\mathcal{T}_{train},\mathcal{T}_{test})$. These sets are also referred to as
(Meta-Train, Meta-Test). $\mathcal{T}_{train}$ is mainly used for multi-task
learning; $\mathcal{T}_{test}$ is used to quantify the model’s adaptability to
new tasks. Each task $T\in\mathcal{T}$ has three subsets, i.e.,
$T=(D_{train},D_{dev},D_{test})$. Additionally, we assume that all tasks are
cast to a unified text-to-text format, i.e., $D=\\{(x,y)\\}$, where $x$ is the
input text sequence, and $y$ is the output text sequence.
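As an illustration, a natural language inference example might be cast into this unified $D=\{(x,y)\}$ format as below; the template and label strings here are assumptions for illustration, not necessarily the original CrossFit templates.

```python
# Hypothetical sketch of casting an NLI example into the text-to-text format
# D = {(x, y)}: the input x is a single text sequence, the output y is the
# label verbalized as text. The template is an illustrative assumption.
def cast_nli(premise, hypothesis, label):
    x = f"premise: {premise} hypothesis: {hypothesis}"
    y = label  # e.g. "entailment" / "contradiction" / "neutral"
    return (x, y)

pair = cast_nli("A man plays guitar.", "Someone makes music.", "entailment")
```

Any task with a textual input and a verbalizable output can be cast this way, which is what lets a single encoder-decoder model multi-task across formats.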
### 3.2 Training Procedure
The training procedure has two stages: (1) an upstream learning stage for
multi-task learning on $T_{train}$, to develop the skills that are needed to
solve different tasks; and (2) a downstream fine-tuning stage on $T_{test}$,
for evaluating the model’s ability to adapt to new tasks. During the upstream
learning stage, the model is expected to be trained for multi-task learning
with the $D_{train}$ from tasks in $\mathcal{T}_{train}$. $D_{dev}$ for tasks
in $\mathcal{T}_{train}$ will be used for hyperparameter tuning and model
selection. During the downstream fine-tuning stage, the model will be fine-
tuned on each task in $\mathcal{T}_{test}$ respectively. $D_{train}$ will be
used for fine-tuning, $D_{dev}$ for validation, and $D_{test}$ for reporting
the final performance.
### 3.3 Evaluation Protocol
Each task in $\mathcal{T}$ has a pre-defined evaluation metric. For example,
F1 score for classification tasks, and accuracy for multi-choice QA tasks.
During the upstream learning stage, for simplicity, the model is validated on
the average $D_{dev}$ performance on all tasks in $\mathcal{T}_{train}$, and
we report average $D_{dev}$ performance and $D_{test}$ performance. During the
downstream fine-tuning stage, we compare the model’s performance to the
baseline of fine-tuning a vanilla transformer (without upstream learning), and
compute the average relative performance gain (ARG) as our evaluation metric.
More details about the baselines and ARG are deferred to §6.
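The ARG metric described above can be sketched as follows: the mean, over held-out tasks, of the relative improvement over a baseline that fine-tunes the vanilla transformer directly. The task names and scores below are made up for illustration.

```python
# Sketch of the average relative gain (ARG) metric: for each unseen task,
# compute the relative gain of the model over a vanilla fine-tuning baseline,
# then average across tasks. All task names and numbers are illustrative.
def average_relative_gain(model_scores, baseline_scores):
    gains = [
        (model_scores[t] - baseline_scores[t]) / baseline_scores[t]
        for t in baseline_scores
    ]
    return 100.0 * sum(gains) / len(gains)  # reported as a percentage

baseline = {"task_a": 50.0, "task_b": 40.0}
model = {"task_a": 55.0, "task_b": 38.0}
arg = average_relative_gain(model, baseline)  # (+10% - 5%) / 2 = +2.5
```

Because each task has its own metric scale, the relative (rather than absolute) gain keeps easy and hard tasks comparable before averaging.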
## 4 Task-level MoE Transformers
Recall that our goal is to better elicit transferable skills during multi-task
learning, and to understand how those skills contribute to model performance.
For this purpose we develop a mixture-of-experts variant of text-to-text
transformer models, conditioning on task representations. The model contains
two major components: (1) a router that selects and decides which experts to
use for each task in each layer, based on its task representation; (2) a
collection of experts that are dynamically composed into a final model based
on the router selection. See Fig. 2 for a detailed illustration.
In the following, we introduce the router and the experts in more detail.
Note that we provide a general description in this section, and leave specific
design choices in §5.3 for empirical comparison.
#### Collection of Experts.
In an original implementation of text-to-text models Raffel et al. (2020);
Lewis et al. (2020), there are $n$ transformer layers stacked and executed
sequentially. The first $n/2$ layers are encoder layers and the last $n/2$
layers are decoder layers. In our variant of transformer models, we copy
each layer $m$ times, resulting in $m*n$ experts in total. We refer to the
$j$-th expert in the $i$-th layer as $E^{(i,j)}$. Note that we assume that
each transformer block is an expert, which is different from Kudugunta et al.
(2021). This is to make the whole model dynamic and compositional.
#### Router.
For a given task $T_{k}\in\mathcal{T}$, with $k$ as its task index, the router
first takes the task representation ($\mathbf{T}_{k}$) from a look-up
embedding table ($\mathbf{T}$). The router network outputs a matrix
$\mathbf{L}\in\mathbb{R}^{m\times n}$, where $\mathbf{L}_{i,j}$ represents the
logits of using expert $E^{(i,j)}$ in layer $i$. $\mathbf{L}$ goes through a
selection function $f$ to normalize the routing decisions in each layer,
resulting in a final decision matrix $\mathbf{D}\in\mathbb{R}^{m\times n}$.
#### Task-level MoE Transformers.
We use the decision matrix $\mathbf{D}$ from the router to control the
computation conducted by the experts. More specifically, in layer $i$, given
input hidden states $\mathbf{h}^{(i)}_{in}$, the output
$\mathbf{h}^{(i)}_{out}$ would be the weighted sum of all experts in the
layer, and the weights are specified in $\mathbf{D}_{i,\cdot}$, i.e.,
$\mathbf{h}^{(i)}_{out}=\sum_{j=1}^{m}\mathbf{D}_{i,j}E^{(i,j)}(\mathbf{h}^{(i)}_{in})$
(1)
Figure 2: Task-level Mixture-of-experts Transformer models used in this study.
Right: a router takes in a task representation and makes decisions on expert
selection. Left: the weighted sum of the outputs from all experts in a layer is
taken as that layer's final output.
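Eq. (1) can be sketched in code. This is an illustrative toy, assuming scalar-function experts and hand-set routing weights, not the paper's actual implementation.

```python
import numpy as np

# Toy sketch of one task-level MoE layer implementing Eq. (1): the layer
# output is the weighted sum of expert outputs, with weights taken from the
# router's decision row D_{i,.}. Experts here are simple callables.
class MoELayer:
    def __init__(self, experts):
        self.experts = experts  # one callable per expert E^{(i,j)} in this layer

    def forward(self, h_in, weights):
        # weights is the row D_{i,.} of the routing decision matrix
        return sum(w * expert(h_in) for w, expert in zip(weights, self.experts))

# Three stand-in "experts" (m = 3): identity, doubling, negation.
layer = MoELayer([lambda h: h, lambda h: 2 * h, lambda h: -h])
h = np.array([1.0, 2.0])

# Average routing (D_{i,j} = 1/3) mixes all experts equally ...
out_avg = layer.forward(h, [1 / 3, 1 / 3, 1 / 3])
# ... while a one-hot routing decision recovers a single expert.
out_hard = layer.forward(h, [0.0, 1.0, 0.0])
```

The same weighted-sum form covers both the soft (average) and discrete (one-hot) routing regimes discussed later.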
## 5 Applying Task-level MoE Models to Multi-task Learning
In our pilot studies, we found it non-trivial to train these
mixture-of-experts models properly and effectively. In this section, we present a
detailed empirical study on baselines and design choices. We first introduce
experiment details in §5.1. We then start with investigating simple baselines
such as random or average routing (§5.2), which will help navigate our
experiments on learning task-level MoE models. In §5.3 we introduce different
variants we experiment with for learning task-level MoEs, and we summarize our
findings in §5.4.
### 5.1 Experiment Details
#### Data.
We previously discussed that a collection of diverse NLP tasks is required for
the purpose of our study (§3.1). In our experiments, we use the task
collection in CrossFit Ye et al. (2021), which contains NLP tasks covering a
wide range of task formats, goals and domains. We use its random task
partition, with 120 tasks in $\mathcal{T}_{train}$ and 18 tasks in
$\mathcal{T}_{test}$. All tasks are converted to a unified text-to-text format
and sub-sampled to be few-shot (for classification tasks, there are 16 examples
per task in $D_{train}$; for non-classification tasks, $D_{train}$ has 32
examples).
Details about the tasks are listed in Appendix E-F.
#### Model and Its Initialization.
We previously introduced the model architecture of task-level MoEs in §4. In
our experiments, the model is instantiated with the pre-trained BART-Base
model Lewis et al. (2020), a 12-layer encoder-decoder transformer model
($n=12$). All $m$ experts in layer $i$ are initialized from the $i$-th layer
of the BART-Base model. Additionally, we add Gaussian noise with a variance of
1e-8 to the weights of each expert to break symmetry. We manually set the
number of experts per layer to $m=3$ to allow sufficient flexibility while
maintaining a tractable model size.
#### Training Details.
Deferred to Appendix B.1.
### 5.2 Investigation on Baselines
Before we experiment with learning routers, we first launch a series of
baseline experiments related to the task-level MoE architecture. The goal is
to get insights to help us better design our final model. We experiment with
(1) Vanilla transformer, where mixture-of-experts are not involved; (2)
Instance-level random routing, where the routes are randomly sampled for each
instance during the forward pass; (3) Task-level random routing, where routes
are sampled for each task once before training; (4) Average routing, where
each expert is assigned the same weight in Eq. (1), i.e.,
$\mathbf{D}_{i,j}=1/3$. For (2) and (3), we try randomly selecting either one or
two out of the three experts in each layer (denoted as “1/3” and “2/3”). In
the case of “2/3”, the output is the average of the outputs produced by the
activated experts.
#### Findings.
Performance of these baseline models is shown in the top two sections of Table 1. We also
plot the dev loss and performance curves during vanilla baseline training in
Fig. 6 in Appendix C.1. We have the following findings.
(1) In Fig. 6, we found that dev losses dip in the early phase of training,
then gradually rise. Meanwhile, the dev performance continues to increase. This
is an important lesson learned for comparing different design choices: the
simple and faster heuristic of model selection based on dev loss may be sub-
optimal. We hypothesize this is because the text generation loss may not align
well with the final evaluation metric (this finding is in line with Csordás et
al. (2021), which advocates a proper validation protocol).
(2) All random routing methods (except for “Random Task 2/3”) lead to
worsened performance compared to vanilla transformer baselines. This suggests
that introducing sparsity and routing mechanism into transformer models
naively can in fact hurt performance. This may be due to underfitting (the
number of examples routed to each expert is reduced) or asynchronism in
optimization (a different collection of experts is activated and updated at
each optimization step).
(3) The observation that Random Task Routing (2/3) is better than Vanilla and
Average Routing suggests that task interference exists in multi-task models
with fully shared parameters, and allowing task-specific computations (as in
Random Task 2/3) can be helpful. The observation that Random Task 2/3 is
better than 1/3 suggests that performance is highly sensitive to the portion
of shared vs. task-specific parameters. There is a fine line between the MoE
mechanism being helpful and being intrusive, which adds difficulty to training MoE
models.
### 5.3 Investigation on Design Choices
In the following we describe the key design choices we compared in training
task-level MoEs.
#### Expert Selection.
The selection function $f$ is responsible for normalizing and discretizing (if
necessary) the logit output of the router network into final decisions. We
consider three variants: (a) Softmax, the default design in most MoE models.
(b) Gumbel-Softmax Jang et al. (2016), which adds Gumbel-distributed noise to
the logits and promotes discrete decisions. (c) Gumbel-Softmax ST, where ST
stands for straight-through estimator. For (b) and (c), we apply the
temperature annealing mechanism to encourage exploration at the beginning of
training.
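The straight-through variant (c) can be sketched as below. This NumPy version only illustrates the forward pass (NumPy has no autograd), and the temperature and epsilon constants are illustrative assumptions.

```python
import numpy as np

# Sketch of Gumbel-Softmax with a straight-through (ST) estimator for one
# layer's expert selection. In an autograd framework the forward pass would
# use the hard one-hot choice while gradients flow through the soft
# probabilities; here we return both to illustrate the idea.
def gumbel_softmax_st(logits, temperature=1.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    # Sample Gumbel(0, 1) noise and perturb the logits.
    u = rng.uniform(size=logits.shape)
    gumbel = -np.log(-np.log(u + 1e-20) + 1e-20)
    y = (logits + gumbel) / temperature
    soft = np.exp(y - y.max())  # numerically stable softmax
    soft = soft / soft.sum()
    hard = np.zeros_like(soft)  # discretize: pick the argmax expert
    hard[np.argmax(soft)] = 1.0
    return hard, soft

rng = np.random.default_rng(0)
hard, soft = gumbel_softmax_st(np.array([2.0, 0.5, -1.0]), temperature=0.5, rng=rng)
```

Lowering the temperature sharpens `soft` toward `hard`, which is what the annealing mechanism exploits.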
#### Router Architecture.
The router is a key component of our MoE model; it computes the logits of
selecting experts based on input task representations (see §4). We consider
three router architectures with different complexities: (d) MLP, which contains
two dense layers separated by GELU activation. (e) Bi-LSTM, which takes the
sum of the task representation and a positional embedding as input at each
time step (i.e., layer). One linear layer is used to project the LSTM states
to routing decisions. (f) Transformer Vaswani et al. (2017), which takes the
same input as Bi-LSTM and applies one single transformer encoder layer.
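A minimal sketch of the MLP router (d) is below: two dense layers with a GELU in between, mapping a task representation to a layers-by-experts matrix of logits. The hidden width and initialization scale are assumptions, not the paper's values.

```python
import numpy as np

# Illustrative MLP router: task representation -> expert logits L, one row of
# logits per transformer layer. Weight shapes/initialization are assumptions.
def gelu(x):
    # tanh approximation of GELU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

class MLPRouter:
    def __init__(self, d_task, n_layers, n_experts, rng, hidden=64):
        self.w1 = rng.normal(scale=0.02, size=(d_task, hidden))
        self.w2 = rng.normal(scale=0.02, size=(hidden, n_layers * n_experts))
        self.shape = (n_layers, n_experts)

    def __call__(self, task_repr):
        logits = gelu(task_repr @ self.w1) @ self.w2
        return logits.reshape(self.shape)  # one row of expert logits per layer

rng = np.random.default_rng(0)
router = MLPRouter(d_task=768, n_layers=12, n_experts=3, rng=rng)
L = router(rng.normal(size=768))  # shape (12, 3) for BART-Base with m = 3
```

Each row of `L` is then passed through the selection function $f$ from the previous paragraph to obtain that layer's routing decision.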
#### Task Representations.
Vu et al. (2020) suggest that pre-computed task representations contain rich
information for predicting task transferability. Here we consider
incorporating these task representations as the initialization for the look-up
embedding table $\mathbf{T}$ in our model (§4). In particular, we consider:
(g) Random, which initializes every task representation with a randomly
initialized 768d vector. (h) TextEmb, which is produced by encoding the input
text with a pre-trained BART-Base model and taking the representations of the
last encoder layer. We tried both the average representation of all tokens in
the sequence (AVG) and the BOS token representation. (i) FT-TextEmb, which is
mostly identical to (h), except that the BART-Base model is first fine-tuned
on the $D_{train}$ of the current task. (j) Fisher-TaskEmb Vu et al. (2020),
which is the diagonal of the Fisher information of the trainable parameters in a
model. We use adapter Houlsby et al. (2019) fine-tuning on $D_{train}$ and
compute the Fisher information on these adapter parameters to avoid expensive
computations.
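The Fisher-TaskEmb idea can be illustrated on a toy model: the embedding is the per-parameter mean squared gradient of the log-likelihood over the task's training data. Here a tiny logistic model stands in for the adapter parameters, and all data and names are made up.

```python
import numpy as np

# Toy sketch of a diagonal-Fisher task embedding: estimate the Fisher
# information diagonal as the mean squared log-likelihood gradient over the
# task's training data. A logistic model replaces the adapter parameters.
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fisher_diagonal(w, X, y):
    # For logistic regression, d log p(y|x; w) / dw = (y - sigmoid(w.x)) * x
    grads = (y - sigmoid(X @ w))[:, None] * X
    return (grads**2).mean(axis=0)  # one non-negative value per parameter

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))          # 32 examples, 4 "adapter" parameters
y = (X[:, 0] > 0).astype(float)
w = np.zeros(4)
task_embedding = fisher_diagonal(w, X, y)  # 4-dim task representation
```

The intuition is that tasks stressing the same parameters get similar Fisher profiles, so the embedding carries information about task relatedness.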
#### Freezing Task Representations.
Since adaptability to unseen tasks will be considered in later parts of this
study, we further compare (k) not freezing with (l) freezing the task
representations during multi-task learning. We conjecture that the structure
of seen task representations may be changed after multi-task learning, while
the unseen task representations may not reflect the change; hence the freezing
variant.
Model | Compute | Dev (%) | Test (%)
---|---|---|---
Vanilla Transformers
(1) BART-Base | 1x | 54.47$\pm$0.05 | 48.93$\pm$0.23
(1) BART-Large | - | 58.10$\pm$0.20 | 54.06$\pm$0.22
Baselines
(2) Random Inst. Routing (1/3) | 1x | 47.50$\pm$0.20 | 41.87$\pm$0.76
(2) Random Inst. Routing (2/3) | 2x | 44.81$\pm$1.76 | 38.48$\pm$1.00
(3) Random Task Routing (1/3) | 1x | 52.89$\pm$0.57 | 47.27$\pm$0.35
(3) Random Task Routing (2/3) | 2x | 55.35$\pm$0.23 | 50.44$\pm$0.29
(4) Average Routing (3/3) | 3x | 54.61$\pm$0.11 | 50.02$\pm$0.19
Task-level Mixture-of-Experts
(c)+(d)+(g)+(k)+(n) | 1x | 55.28$\pm$0.12 | 50.52$\pm$0.38
(c)+(d)+(j)+(k)+(m) | 1x | 53.07$\pm$0.45 | 48.16$\pm$0.34
(c)+(d)+(j)+(l)+(m) | 1x | 53.06$\pm$0.19 | 47.64$\pm$0.79
(c)+(d)+(j)+(l)+(n) | 1x | 55.40$\pm$0.08 | 50.39$\pm$0.68
Table 1: Performance of baselines and selected models. Average performance on
$D_{dev}$/$D_{test}$ over tasks in $\mathcal{T}_{train}$ is reported. Averages
and standard deviations are computed based on runs with three different random
seeds.
Model | Dev (%) | Model | Dev (%)
---|---|---|---
Expert Selection | | Task Repr. (cont.) |
(a) Softmax | 40.93 | (i) FT-TextEmb-BOS | 52.93
(b) Gumbel-Softmax | 52.02 | (i) FT-TextEmb-AVG | 53.29
(c) Gumbel-Softmax ST | 53.14 | (j) Fisher-TaskEmb | 53.51
Router Architecture | | Freeze Task Repr. |
(d) MLP | 53.14 | (k) Not Freezing | 53.51
(e) LSTM | 53.55 | (l) Freezing | 53.37
(f) Transformer | 53.13 | - | -
Task Repr. | | Two-stage Training |
(g) Random | 53.14 | (m) Use one stage | 53.51
(h) TextEmb-BOS | 52.51 | (n) Use two stages | 55.36
(h) TextEmb-AVG | 53.30 | - | -
Table 2: Investigation on Design Choices. By default the model uses
(c)+(d)+(g)+(k)+(m) when comparing different choices in each colored section.
#### Two-stage Training.
In §5.2, we find that introducing a routing mechanism naively may lead to
worsened performance. Also, average routing is stable and achieves competitive
performance. Based on these observations, we design a two-stage training
strategy to combine the benefits of both methods. In the first stage, the
model jointly learns the router and the experts. In the second stage, the
experts are re-initialized from BART’s pre-trained weights, and the routes
gradually transform from average routing to the learned routes by controlling
the temperature used in the softmax function. At the beginning of training,
the temperature is set high, so the router functions like average routing; as
training proceeds, the temperature decreases gradually, and the router gives
increasingly discrete routing decisions.
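The annealing idea can be sketched as below: at high temperature the softmax over expert logits is near-uniform (average routing), and as the temperature decays the routing becomes increasingly discrete. The exponential schedule and its endpoint temperatures are illustrative assumptions, not the paper's exact values.

```python
import math

# Sketch of temperature annealing for the second training stage: a softmax
# over expert logits, sharpened over time by an exponentially decaying
# temperature. All constants here are illustrative.
def softmax(logits, temperature):
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def annealed_temperature(step, t_start=10.0, t_end=0.1, total_steps=1000):
    # Exponential interpolation from t_start (step 0) to t_end (final step).
    return t_start * (t_end / t_start) ** (step / total_steps)

logits = [2.0, 0.5, -1.0]
early = softmax(logits, annealed_temperature(0))    # near-uniform: ~average routing
late = softmax(logits, annealed_temperature(1000))  # near one-hot: discrete routing
```

This gives the experts a warm-up phase in which all of them receive gradient signal before the routes commit.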
### 5.4 Results and Findings
We first present the performance of variants mentioned above in Table 2. For
the best-performing model variants, we run three times with different random
seeds to reduce variance in performance (Table 1, Bottom). We have the
following observations. (1) What helps? We found that the choice of selection
function and the two-stage learning procedure are important for training task-
level MoEs. Gumbel-Softmax with straight-through estimator achieves the best
performance among the three choices (see Appendix C.3 for further
investigation). Two-stage training helps improve performance by 1.8%. (We also
use heterogeneous batching Aghajanyan et al. (2021) and two-speed learning
rates Ponti et al. (2022) in our model, as recommended by these works.) (2) What
doesn’t help? We did not observe significant differences among choices in router
architecture or task representation initialization. Supposedly, LSTMs and
transformers are able to capture relations more complicated than MLPs, and
pre-computed task representations carry richer information about the task than
random initialization. This unexpected observation suggests that the router
struggle to leverage task-level information with the current training methods
and supervision signals. (3) Comparing with the baselines. Our best task-level
MoE using random initialized task representations ((c)+(d)+(g)+(k)+(n)) can
rival the best baselines in §5.2 (Random Task Routing 2/3), while using half
of its computation in a forward pass. With careful design, task-level MoEs are
beneficial for multi-task learning.
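For reference, the forward pass of the Gumbel-Softmax selection function with a straight-through estimator can be sketched as below. This is a numpy illustration of the sampling logic only; in practice an autograd framework carries the gradient through the soft sample (e.g. `hard + (soft - soft.detach())` in PyTorch), and the logits here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_st(logits, tau=1.0):
    """Forward pass of straight-through Gumbel-Softmax: perturb logits with
    Gumbel(0, 1) noise, compute a soft (differentiable) sample, then emit a
    hard one-hot routing decision based on its argmax."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    z = (logits + g) / tau
    z = z - z.max()                                        # numerical stability
    y_soft = np.exp(z)
    y_soft = y_soft / y_soft.sum()
    y_hard = np.zeros_like(y_soft)
    y_hard[np.argmax(y_soft)] = 1.0
    return y_hard, y_soft

hard, soft = gumbel_softmax_st(np.array([2.0, 0.5, -1.0]))  # hypothetical logits
```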
Figure 3: Few-shot Performance on Unseen Tasks. Bar heights represent relative
performance gain over directly fine-tuning a pre-trained BART-Base model. The
right-most bars are the average performance gain.
Main models | anli_r3 | HellaSwag | cb | wic | wsc | winogrande | arc-chan. | obqa | piqa | SQuADv2 | AVG | ARG
---|---|---|---|---|---|---|---|---|---|---|---|---
Multi-task BART-Base | 27.6 | 22.0 | 44.6 | 43.1 | 57.5 | 52.7 | 23.1 | 26.2 | 26.4 | 14.6 | 33.7 | -
Random Task Routing (2/3) | 23.7 | 14.5 | 19.3 | 37.8 | 45.2 | 49.0 | 14.5 | 18.5 | 3.5 | 9.1 | 23.5 | -33.6
(c)+(d)+(h)+(l)+(m) | 33.7 | 20.7 | 43.6 | 40.4 | 50.2 | 46.8 | 11.8 | 18.1 | 22.4 | 18.6 | 32.2 | -8.3
(c)+(d)+(h)+(l)+(n) | 32.0 | 23.7 | 44.3 | 43.4 | 56.5 | 52.2 | 21.1 | 28.5 | 30.2 | 17.6 | 34.9 | 5.6
Table 3: Zero-shot Performance on Unseen Tasks. Accuracy (%) on the test set
of 10 unseen tasks. We compare the AVG and calculate the ARG of routing model
(c)+(d)+(h)+(l)+(m) and (c)+(d)+(h)+(l)+(n) over multi-task BART-Base. The
former routing model uses one-stage training while the latter uses two-stage
training.
## 6 Generalizing to Unseen Tasks
We hypothesize that task-level MoE models can recombine the learned skills
effectively when they encounter new tasks. In §6.1 we evaluate the models
obtained in §5 on adapting to new tasks in a few-shot learning setting. In
§6.2 we further extend our method to a zero-shot learning setting and test it
on the P3 dataset Sanh et al. (2022).
### 6.1 Few-shot Adaptation
#### Compared Methods.
We use the following models as initialization for few-shot fine-tuning on
unseen tasks ($T_{test}$). (1) Direct Fine-tuning. For each unseen task, we
fine-tune the off-the-shelf BART-Base model with its $D_{train}$. (2) Multi-
task BART. We take the multi-task BART-Base from §5 as initialization and
fine-tune the model on $D_{train}$. (3) Baseline Routing BART. We re-use the
models using random task routing (1/3, 2/3) and average routing in §5. (4)
Learned Routing BART. We take the (c)+(d)+(j)+(l)+(n) model from §5. This
model uses Fisher information as the task representation (j), and the
representations for seen tasks are frozen (l) during multi-task learning. For
each unseen task, we first compute its Fisher information based on $D_{train}$
and feed it to the learned router to select experts. We then fine-tune the
selected experts on $D_{train}$.
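A minimal sketch of computing a diagonal empirical Fisher as a task representation, using a toy logistic-regression model in place of BART; the model, the data, and the per-parameter averaging here are illustrative assumptions, not our exact procedure.

```python
import numpy as np

def diagonal_fisher(X, y, w):
    """Empirical diagonal Fisher for a logistic-regression stand-in model:
    the mean squared per-example gradient of the log-likelihood w.r.t. each
    parameter, yielding one non-negative value per parameter."""
    p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
    per_example_grads = (y - p)[:, None] * X  # d log-lik / d w, per example
    return (per_example_grads ** 2).mean(axis=0)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))                      # toy D_train inputs
y = (rng.uniform(size=64) < 0.5).astype(float)    # toy labels
task_repr = diagonal_fisher(X, y, w=np.zeros(4))  # vector fed to the router
```

The resulting vector summarizes which parameters the task's data exercises most, which is the intuition behind using it as a task embedding.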
#### Data and Evaluation.
We use the 18 unseen tasks specified in CrossFit random partition in Ye et al.
(2021)666We exclude Free-base QA and Yelp Polarity from the evaluation as
performance is unusually unstable on these tasks.. We first obtain the
performance of fine-tuning the pre-trained BART-Base model as the baseline.
Then we compute and report the average relative gain (ARG) over pre-trained
BART for the multi-task BART and routing BART methods. For example, if fine-
tuning pre-trained BART achieves 50% accuracy on task A and 80% F1 on task B,
and fine-tuning multi-task BART achieves 80% accuracy on task A and 60% F1 on
task B, the ARG would be the average of $(80\%-50\%)/50\%$ and
$(60\%-80\%)/80\%$, which equals $17.5\%$.
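The ARG computation described above can be written directly:

```python
def average_relative_gain(baseline_scores, model_scores):
    """ARG: mean of per-task relative gains over the baseline."""
    gains = [(m - b) / b for b, m in zip(baseline_scores, model_scores)]
    return sum(gains) / len(gains)

# Worked example from the text: task A 50% -> 80%, task B 80% -> 60%.
arg = average_relative_gain([0.50, 0.80], [0.80, 0.60])  # 0.175, i.e. 17.5%
```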
#### Results.
We present the performance gains on individual tasks and their average in Fig.
3. Multi-task BART remains a strong baseline, achieving an ARG of 9.74%.
The random task routing (2/3) and average routing baselines achieve 10.21% and
8.06% respectively. Our task-level MoE model (c)+(d)+(j)+(l)+(n) achieves the
best average performance gain (12.30%), which is 2.6% higher than multi-task
BART. For many tasks, negative transfer is alleviated and few-shot performance
is improved compared to the baselines. This suggests that our task-level MoE
model is learning reusable experts and meaningful routes.
### 6.2 Zero-shot Generalization
In this section, we adapt our proposed method to the zero-shot learning
setting, where each unseen task has no labeled data. We use the Public Pool of
Prompts (P3)
dataset as our testbed (Sanh et al., 2022).
#### Data.
Following Sanh et al. (2022); Bach et al. (2022), we use prompt templates to
convert texts from various NLP tasks into a unified text-to-text format.
Specifically, we have 36 upstream tasks for $\mathcal{T}_{train}$, and 10
tasks for $\mathcal{T}_{test}$. We use accuracy as the evaluation metric. We
report both the average performance on $\mathcal{T}_{test}$ (AVG), and the
average performance gain (ARG) described in §6.1.
#### Compared Methods.
For all the models, we train on the $D_{train}$ for all tasks in
$\mathcal{T}_{train}$, and directly test the model on $D_{test}$ for each task
in $\mathcal{T}_{test}$. We mainly compare four methods: (1) Multi-task BART-
Base. (2) Random Task Routing (2/3). (3) We train a new (c)+(d)+(h)+(l)+(m)
model on P3 data. (4) Similar to (3), we train a model with the configuration
of (c)+(d)+(h)+(l)+(n). Note that in the zero-shot setting, we cannot use pre-
computed task representations for unseen tasks based on labeled examples (as
described in §5.3). Therefore for (h) TextEmb used in (3) and (4), we encode
prompt templates as the auxiliary task information. More details are in
Appendix B.3.
#### Results.
We present the results in Table 3. Our findings are: (1) Compared to the
multi-task BART-base baseline with an AVG of 33.7%, our routing model (4)
achieves a higher AVG (34.9%) and a positive ARG (5.6%). This demonstrates the
model’s improved generalization ability to novel tasks in the zero-shot
setting. (2) The gap between model (3) and model (4) shows that the two-stage
training strategy is essential in the zero-shot setting as well. (3) Different
from the findings in the few-shot setting, Random Task Routing (2/3) has a
negative ARG (-33.6%). Without labeled data in unseen tasks, random routing
cannot actively select relevant experts or update model parameters, resulting
in worsened performance. In contrast, task-level MoE has the flexibility to
select relevant experts and achieves better performance.
## 7 Interpreting the Routes and Experts
### 7.1 Learning Dynamics of the Routes
We visualized the learned routing decisions of the (c)+(d)+(g)+(k)+(m) model
trained on CrossFit data in Fig. 4. Note that (g) represents that the task
representations are randomly initialized and learned spontaneously during
multi-task learning. We observe that distinct patterns for classification and
generation tasks emerge early in training (step 3000). These patterns then
gradually transition from coarse-grained to fine-grained over the course of
training. These observations align with our expectation that task-
level MoEs are learning to share parameters for similar tasks and avoid
interference among dissimilar tasks.
### 7.2 Correlation with Task Features
To better understand the learned routing decisions, we investigate the
relation between the routing decisions and manually-defined task features. In
the following, we first describe the methodology of computing correlation,
then describe the features we investigate, and finally describe our findings.
#### Method.
For each task in $\mathcal{T}_{train}$, we first compute the routing decisions
$\mathbf{D}\in\mathbb{R}^{m\times n}$ using the learned model. For each expert
$E^{(i,j)}$, we consider the routing decision $\mathbf{D}_{i,j}$ of all tasks
as a feature. Altogether, we have $m\times n$ features of dimension
$|\mathcal{T}_{train}|$ (the number of tasks). Additionally, we have $t$
manually-defined features on all tasks, giving $t$ features of dimension
$|\mathcal{T}_{train}|$. We compute Pearson correlation coefficient between
each pair of learned routing decisions and manual feature, resulting in a
$\mathbb{R}^{mn\times t}$ matrix quantifying the correlation between $m\times
n$ experts and $t$ manual features.
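This computation can be sketched with numpy as below; the dimensions used ($m\times n=6$ experts, $t=4$ features, 20 tasks) are placeholders, not the actual counts from our setup.

```python
import numpy as np

def route_feature_correlation(routes, features):
    """Pearson correlation between each expert's routing decision across tasks
    (rows of `routes`, shape (m*n, |T_train|)) and each manual feature
    (rows of `features`, shape (t, |T_train|)); returns an (m*n, t) matrix."""
    k = routes.shape[0]
    full = np.corrcoef(np.vstack([routes, features]))  # joint correlation matrix
    return full[:k, k:]                                # routes-vs-features block

rng = np.random.default_rng(0)
n_tasks = 20                                # |T_train| (placeholder)
routes = rng.normal(size=(6, n_tasks))      # e.g. m=2 layers x n=3 experts
features = rng.normal(size=(4, n_tasks))    # t=4 hypothetical manual features
C = route_feature_correlation(routes, features)  # (6, 4), entries in [-1, 1]
```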
Figure 4: Routing Decisions Learned During Multi-task Learning
((c)+(d)+(g)+(k)+(m)). The router is able to distinguish classification tasks
from other types of tasks after 3000 training steps. It then gradually
learns more fine-grained patterns.
Feature Name | Example | Description
---|---|---
Task Format
Extractive | SQuAD, Race | Output is always a substring of the input
Sentence Completion | HellaSwag, LAMA-Probes | Requires the model to fill in a blank in the input or continue to generate based on the input
Required Skills and Knowledge
Linguistic | Blimp, CoLA | Tasks focusing on grammatical correctness, semantic equivalence and linguistic phenomenon
Commonsense | CommonsenseQA | Tasks testing commonsense knowledge and reasoning capabilities
Co-reference | Wino_grande | Tasks requiring co-reference resolution
Multi-hop Reasoning | DROP | Tasks requiring multi-hop/multi-step reasoning
Implicit Knowledge | TriviaQA | Tasks requiring world knowledge (acquired during pre-training)
Synthesize | Break, XSum | Combining ideas and allowing an evolving understanding of text
Table 4: Additional Features on Format, High-level Skills and Knowledge.
#### Manual Features.
We consider the following features in our correlation study777We admit that
several categorization criteria are subjective and they are by no means
exhaustive for fully describing a task. We use these features mainly to
quantify the relation between human understanding of tasks and the learned
routes.. The final feature table ($t\times|\mathcal{T}_{train}|$) is in Table
9.
* •
Task Format. We use the task categories provided in Ye et al. (2021). The top-
level labels include Classification, Question Answering, Conditional
Generation, and Others. Tasks in each category are divided into sub-
categories. For example, QA tasks are further categorized into machine reading
comprehension (MRC), multiple-choice QA, closed-book QA, etc.
* •
Input/Output Length. We classify tasks into three features based on their
average input length: hasShortInput (shortest 25%), hasLongInput (longest
25%), hasMediumInput (remainder). We also classify tasks into three features
based on their average output length: hasShortOutput ($<$ 3 tokens),
hasLongOutput ($>$ 10 tokens), and hasMediumOutput (remainder).
Figure 5: Pearson Correlation Between Learned Routes and Selected Manual
Features. Correlations with $p<0.01$ are visualized. “L0E1” stands for expert
1 in layer 0. The correlation is computed based on a (c)+(d)+(g)+(k) model,
where (g) means the task embedding table $\mathbf{T}$ is randomly initialized.
This suggests that without prior knowledge of the tasks, the router can
partially rediscover human categorization of tasks during multi-task learning.
* •
Text Domain. We categorize tasks into domains such as Science &
Technology, Social Network, News, Web, Bio-Medical, Review, Dialog, and Books.
* •
Granularity. We categorize tasks into Span-level (e.g., acronym
identification); Sentence-level (e.g., tweet classification); Paragraph-level
(e.g., news summarization) based on their main focus. This is different from
input length.
* •
Additional Features: Format, High-level Skills and Knowledge888These features
are mostly inspired by dataset papers such as SQuAD Rajpurkar et al. (2016),
BLiMP Warstadt et al. (2020), MNLI Williams et al. (2018), HotpotQA Yang et
al. (2018), CommonsenseQA Talmor et al. (2019).. We additionally describe
several common task characteristics in Table 4. These include whether a task
is Extractive, requires Sentence Completion, or requires high-level skills
such as Co-reference.
#### Findings.
Results on selected features are visualized in Fig. 5. Visualizations of the
complete set of expert-feature pairs are in Fig. 8-8. We have the following
observations: (1) There exists strong correlation between several pairs of
routing decisions and manual features. For example, L1E2, L3E1, L6E1 are
positively correlated with the feature of Classification, suggesting that
these experts are likely to be selected for classification tasks. (2) The
correlations are strongest with the top-level task category features (i.e.,
Classification, QA, Conditional Generation), suggesting that the router may
understand and categorize tasks in a way similar to us. (3) However,
correlation does not imply causal relationships. The correlation patterns of
Classification and hasShortOutput are similar, the same applies to Conditional
Generation and hasLongOutput. We cannot conclude whether the router is making
routing decisions based on output length, task format, or other hidden
aspects.
Manual Feature | Top3 Exp | Task | All | Top1 | Top3
---|---|---|---|---|---
Classification | L1E2 L6E1 L3E1 | imdb | 92.49 | 91.87 | 88.70
sms spam | 63.54 | 63.54 | 62.88
emo | 82.06 | 65.46 | 16.22
Conditional Generation | L9E2 L5E3 L7E2 | gigaword | 30.00 | 26.51 | 17.91
aeslc | 14.52 | 15.31 | 14.76
kilt_wow | 6.39 | 6.01 | 4.73
Closed-book QA | L3E2 L4E2 L6E3 | kilt_trex | 31.85 | 25.63 | 28.13
kilt_zsre | 13.13 | 11.25 | 9.38
numer_sense | 34.38 | 33.75 | 20.00
Table 5: Performance when top correlated experts are disabled. “Top1” means
the most positively correlated expert is disabled. Performance gradually drops
as more experts are disabled.
Task | All | Top1 | Top3 | Rand1 | Rand3 | Least1 | Least3
---|---|---|---|---|---|---|---
imdb | 92.49 | 91.87 | 88.70 | 92.49 | 91.66 | 92.49 | 92.49
sms spam | 63.54 | 63.54 | 62.88 | 63.54 | 63.53 | 63.54 | 63.54
emo | 82.06 | 65.46 | 16.22 | 82.06 | 64.13 | 82.06 | 82.06
Table 6: Disabling top/least correlated experts and random experts. The
experts that positively correlate (Top1/Top3) with the “classification”
feature contribute more to the performance than randomly selected or least
correlated experts (Least1/Least3).
### 7.3 Expert Disabling Experiments
We further examine the learned task-level MoE models by disabling experts
during evaluation. By “disabling”, we simply set the pre-softmax logit to be
$-\infty$, so that the second-best expert in that layer will be selected
instead. We hypothesize that if an expert corresponds to a critical skill
required by a certain type of tasks, then disabling it should bring
significant performance drop. (1) We select three manual features
(Classification, Conditional Generation, Closed-book QA) and, for each, three
tasks belonging to that category. We select the top 3 experts that positively
correlate with these features, and disable them during evaluation. Results are
listed in Table 5. As expected, these correlated experts are indispensable for
the task performance. Performance gradually drops as more experts are disabled
(All $\rightarrow$ Top1 $\rightarrow$ Top3). (2) For the three classification
tasks we select, we further compare the performance when disabling most/least
correlated experts and random experts. Results are presented in Table 6.
Results suggest experts that are positively correlated with the classification
feature are more important to the final performance. (3) We further take two
classification tasks ($\diamondsuit$) and two closed-book QA tasks
($\heartsuit$), and consider disabling experts correlated with the
Classification and Closed-book QA features. Results are shown in Table 7.
Performance is not influenced significantly when experts relevant to other
features are disabled. To conclude, this set of experiments suggests that
experts positively correlated with a specific type of task are irreplaceable:
they contribute greatly to the performance of that task type.
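The disabling operation itself is simple to sketch: with the pre-softmax logit of a disabled expert set to $-\infty$, the argmax falls to the next-best expert in that layer. The logit values below are hypothetical.

```python
import numpy as np

def select_expert(logits, disabled=()):
    """Pick the argmax expert after setting disabled experts' pre-softmax
    logits to -inf, so the next-best expert in the layer takes over."""
    masked = np.asarray(logits, dtype=float).copy()
    masked[list(disabled)] = -np.inf
    return int(np.argmax(masked))

logits = [3.0, 1.0, 2.0]  # hypothetical per-expert logits in one layer
best = select_expert(logits)                    # expert 0
fallback = select_expert(logits, disabled=[0])  # expert 2 (second-best)
```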
Task | All | $\diamondsuit$ Top1 | $\diamondsuit$ Top3 | $\heartsuit$ Top1 | $\heartsuit$ Top3
---|---|---|---|---|---
$\diamondsuit$ imdb | 92.49 | 91.87 | 88.70 | 92.49 | 92.49
$\diamondsuit$ emo | 82.06 | 65.46 | 16.22 | 82.06 | 82.06
$\heartsuit$ kilt_zsre | 13.13 | 13.13 | 12.50 | 11.25 | 9.38
$\heartsuit$ numer_sense | 34.38 | 34.38 | 34.38 | 33.75 | 20.00
Table 7: Disabling experts associated with different task categories.
$\diamondsuit$=Classification, $\heartsuit$=Closed-book QA. Performance does
not drop significantly when experts relevant to other features are disabled
(red area).
## 8 Conclusions
Inspired by how humans accumulate skills from past experience and re-use them
to solve new tasks, in this paper we develop and conduct extensive experiments
with transformer-based task-level mixture-of-experts (MoE) models, in the hope
of providing new insights on multi-task learning and cross-task generalization
in NLP. First, we empirically investigate important design choices and
quantify their influence on the final model. Second, in both few-shot and
zero-shot settings, we demonstrate that task-level mixture-of-experts models
are better at generalizing to new tasks. Finally, by conducting a detailed
analysis of the routing decisions, we find they correlate strongly with
human-defined task characteristics, even when the decisions are learned
spontaneously without prior knowledge such as pre-computed task
representations. We hope our work provides useful advice on training and
interpreting multi-task models in NLP and inspires future work on improving
multi-task learning and cross-task generalization.
## Limitations
Although we have analyzed the correlation between learned routes and task
characteristics extensively, it remains challenging to (1) ground each expert
in human-understandable language skills and (2) understand their causal
relationships. Much more needs to be discussed on how to systematically define
the atomic/basic skills used in solving NLP tasks. In terms of model
optimization, we find that we cannot achieve the best performance with the
one-stage training strategy, and our best method takes more training time and
needs more delicate hyper-parameter selection than the vanilla multi-task
model. We hypothesize that there are optimization challenges in training
task-level mixture-of-experts models, and we hope future work can investigate
and address this problem.
## Acknowledgments
We thank the authors and crowd-workers of all datasets used in our study. We
thank the Hugging Face Datasets team Lhoest et al. (2021) for making NLP
datasets more accessible. We thank the anonymous reviewers and members of the
USC INK Lab and USC NLP
community for their valuable feedback. This work is supported in part by the
Office of the Director of National Intelligence (ODNI), Intelligence Advanced
Research Projects Activity (IARPA), via Contract No. 2019-19051600007; the
DARPA MCS program under Contract No. N660011924033; the Defense Advanced
Research Projects Agency with award W911NF-19-20271; NSF IIS 2048211.
## References
* Achille et al. (2019) Alessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless C. Fowlkes, Stefano Soatto, and Pietro Perona. 2019. Task2vec: Task embedding for meta-learning. In _2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019_ , pages 6429–6438. IEEE.
* Aghajanyan et al. (2021) Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. 2021. Muppet: Massive multi-task representations with pre-finetuning. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 5799–5811, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Almeida et al. (2011) Tiago A. Almeida, José María G. Hidalgo, and Akebo Yamakami. 2011. Contributions to the study of sms spam filtering: New collection and results. In _Proceedings of the 11th ACM Symposium on Document Engineering_ , DocEng ’11, page 259–262, New York, NY, USA. Association for Computing Machinery.
* Amini et al. (2019) Aida Amini, Saadia Gabriel, Shanchuan Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. 2019. MathQA: Towards interpretable math word problem solving with operation-based formalisms. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 2357–2367, Minneapolis, Minnesota. Association for Computational Linguistics.
* Aribandi et al. (2022) Vamsi Aribandi, Yi Tay, Tal Schuster, Jinfeng Rao, Huaixiu Steven Zheng, Sanket Vaibhav Mehta, Honglei Zhuang, Vinh Q. Tran, Dara Bahri, Jianmo Ni, Jai Gupta, Kai Hui, Sebastian Ruder, and Donald Metzler. 2022. Ext5: Towards extreme multi-task scaling for transfer learning. In _International Conference on Learning Representations_.
* Artetxe et al. (2021) Mikel Artetxe, Shruti Bhosale, Naman Goyal, Todor Mihaylov, Myle Ott, Sam Shleifer, Xi Victoria Lin, Jingfei Du, Srinivasan Iyer, Ramakanth Pasunuru, Giri Anantharaman, Xian Li, Shuohui Chen, Halil Akin, Mandeep Baines, Louis Martin, Xing Zhou, Punit Singh Koura, Brian O’Horo, Jeff Wang, Luke Zettlemoyer, Mona T. Diab, Zornitsa Kozareva, and Ves Stoyanov. 2021. Efficient large scale language modeling with mixtures of experts. _CoRR_ , abs/2112.10684.
* Asai et al. (2022) Akari Asai, Mohammadreza Salehi, Matthew E Peters, and Hannaneh Hajishirzi. 2022. Attentional mixtures of soft prompt tuning for parameter-efficient multi-task knowledge sharing. _arXiv preprint arXiv:2205.11961_.
* Bach et al. (2022) Stephen Bach, Victor Sanh, Zheng Xin Yong, Albert Webson, Colin Raffel, Nihal V. Nayak, Abheesht Sharma, Taewoon Kim, M Saiful Bari, Thibault Fevry, Zaid Alyafeai, Manan Dey, Andrea Santilli, Zhiqing Sun, Srulik Ben-david, Canwen Xu, Gunjan Chhablani, Han Wang, Jason Fries, Maged Al-shaibani, Shanya Sharma, Urmish Thakker, Khalid Almubarak, Xiangru Tang, Dragomir Radev, Mike Tian-jian Jiang, and Alexander Rush. 2022. PromptSource: An integrated development environment and repository for natural language prompts.
* Bar-Haim et al. (2006) Roy Bar-Haim, Ido Dagan, Bill Dolan, Lisa Ferro, Danilo Giampiccolo, Bernardo Magnini, and Idan Szpektor. 2006. The second pascal recognising textual entailment challenge. In _Proceedings of the second PASCAL challenges workshop on recognising textual entailment_ , volume 6, pages 6–4. Venice.
* Barbieri et al. (2020) Francesco Barbieri, Jose Camacho-Collados, Luis Espinosa Anke, and Leonardo Neves. 2020. TweetEval: Unified benchmark and comparative evaluation for tweet classification. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pages 1644–1650, Online. Association for Computational Linguistics.
* Bartolo et al. (2020) Max Bartolo, Alastair Roberts, Johannes Welbl, Sebastian Riedel, and Pontus Stenetorp. 2020. Beat the AI: Investigating adversarial human annotation for reading comprehension. _Transactions of the Association for Computational Linguistics_ , 8:662–678.
* Bentivogli et al. (2009) Luisa Bentivogli, Peter Clark, Ido Dagan, and Danilo Giampiccolo. 2009. The fifth pascal recognizing textual entailment challenge. In _TAC_.
* Berant et al. (2013) Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In _Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing_ , pages 1533–1544, Seattle, Washington, USA. Association for Computational Linguistics.
* Bhagavatula et al. (2020) Chandra Bhagavatula, Ronan Le Bras, Chaitanya Malaviya, Keisuke Sakaguchi, Ari Holtzman, Hannah Rashkin, Doug Downey, Wen tau Yih, and Yejin Choi. 2020. Abductive commonsense reasoning. In _International Conference on Learning Representations_.
* Bisk et al. (2020) Yonatan Bisk, Rowan Zellers, Ronan LeBras, Jianfeng Gao, and Yejin Choi. 2020. PIQA: reasoning about physical commonsense in natural language. In _The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020_ , pages 7432–7439. AAAI Press.
* Boratko et al. (2020) Michael Boratko, Xiang Li, Tim O’Gorman, Rajarshi Das, Dan Le, and Andrew McCallum. 2020. ProtoQA: A question answering dataset for prototypical common-sense reasoning. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1122–1136, Online. Association for Computational Linguistics.
* Botha et al. (2018) Jan A. Botha, Manaal Faruqui, John Alex, Jason Baldridge, and Dipanjan Das. 2018. Learning to split and rephrase from Wikipedia edit history. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 732–737, Brussels, Belgium. Association for Computational Linguistics.
* Caruana (1997) Rich Caruana. 1997. Multitask learning. _Machine learning_ , 28(1):41–75.
* Chatterjee et al. (2019) Ankush Chatterjee, Kedhar Nath Narahari, Meghana Joshi, and Puneet Agrawal. 2019. SemEval-2019 task 3: EmoContext contextual emotion detection in text. In _Proceedings of the 13th International Workshop on Semantic Evaluation_ , pages 39–48, Minneapolis, Minnesota, USA. Association for Computational Linguistics.
* Chen et al. (2020a) Anthony Chen, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. 2020a. MOCHA: A dataset for training and evaluating generative reading comprehension metrics. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 6521–6532, Online. Association for Computational Linguistics.
* Chen et al. (2019) Michael Chen, Mike D’Arcy, Alisa Liu, Jared Fernandez, and Doug Downey. 2019. CODAH: An adversarially-authored question answering dataset for common sense. In _Proceedings of the 3rd Workshop on Evaluating Vector Space Representations for NLP_ , pages 63–69, Minneapolis, USA. Association for Computational Linguistics.
* Chen et al. (2020b) Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2020b. Tabfact: A large-scale dataset for table-based fact verification. In _8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020_. OpenReview.net.
* Clark et al. (2019) Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. 2019. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 2924–2936, Minneapolis, Minnesota. Association for Computational Linguistics.
* Clark et al. (2018) Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. 2018. Think you have solved question answering? try arc, the ai2 reasoning challenge. _ArXiv preprint_ , abs/1803.05457.
* Cohan et al. (2019) Arman Cohan, Waleed Ammar, Madeleine van Zuylen, and Field Cady. 2019. Structural scaffolds for citation intent classification in scientific publications. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 3586–3596, Minneapolis, Minnesota. Association for Computational Linguistics.
* Csordás et al. (2021) Róbert Csordás, Kazuki Irie, and Juergen Schmidhuber. 2021. The devil is in the detail: Simple tricks improve systematic generalization of transformers. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 619–634, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Dagan et al. (2005) Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The pascal recognising textual entailment challenge. In _Machine Learning Challenges Workshop_ , pages 177–190. Springer.
* Dasigi et al. (2019) Pradeep Dasigi, Nelson F. Liu, Ana Marasović, Noah A. Smith, and Matt Gardner. 2019. Quoref: A reading comprehension dataset with questions requiring coreferential reasoning. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 5925–5932, Hong Kong, China. Association for Computational Linguistics.
* Davidson et al. (2017) Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In _Proceedings of the 11th International AAAI Conference on Web and Social Media_ , ICWSM ’17, pages 512–515.
* de Gibert et al. (2018) Ona de Gibert, Naiara Perez, Aitor García-Pablos, and Montse Cuadros. 2018. Hate speech dataset from a white supremacy forum. In _Proceedings of the 2nd Workshop on Abusive Language Online (ALW2)_ , pages 11–20, Brussels, Belgium. Association for Computational Linguistics.
* de Marneffe et al. (2019) Marie-Catherine de Marneffe, Mandy Simons, and Judith Tonhauser. 2019. The commitmentbank: Investigating projection in naturally occurring discourse. _Proceedings of Sinn und Bedeutung_ , 23(2):107–124.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Diggelmann et al. (2020) T. Diggelmann, Jordan L. Boyd-Graber, Jannis Bulian, Massimiliano Ciaramita, and Markus Leippold. 2020. Climate-fever: A dataset for verification of real-world climate claims. _ArXiv preprint_ , abs/2012.00614.
* Dinan et al. (2019) Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In _7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019_. OpenReview.net.
* Dolan and Brockett (2005) William B. Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In _Proceedings of the Third International Workshop on Paraphrasing (IWP2005)_.
* Du et al. (2021) Nan Du, Yanping Huang, Andrew M. Dai, Simon Tong, Dmitry Lepikhin, Yuanzhong Xu, Maxim Krikun, Yanqi Zhou, Adams Wei Yu, Orhan Firat, Barret Zoph, Liam Fedus, Maarten Bosma, Zongwei Zhou, Tao Wang, Yu Emma Wang, Kellie Webster, Marie Pellat, Kevin Robinson, Kathleen S. Meier-Hellstern, Toju Duke, Lucas Dixon, Kun Zhang, Quoc V. Le, Yonghui Wu, Z. Chen, and Claire Cui. 2021. Glam: Efficient scaling of language models with mixture-of-experts. _ArXiv_ , abs/2112.06905.
* Dunn et al. (2017) Matthew Dunn, Levent Sagun, Mike Higgins, V. U. Güney, Volkan Cirik, and Kyunghyun Cho. 2017. Searchqa: A new q&a dataset augmented with context from a search engine. _ArXiv preprint_ , abs/1704.05179.
* Dušek et al. (2019) Ondřej Dušek, David M. Howcroft, and Verena Rieser. 2019. Semantic noise matters for neural natural language generation. In _Proceedings of the 12th International Conference on Natural Language Generation_ , pages 421–426, Tokyo, Japan. Association for Computational Linguistics.
* Dušek et al. (2020) Ondřej Dušek, Jekaterina Novikova, and Verena Rieser. 2020. Evaluating the State-of-the-Art of End-to-End Natural Language Generation: The E2E NLG Challenge. _Computer Speech & Language_, 59:123–156.
* Elsahar et al. (2018) Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Frederique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In _Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)_ , Miyazaki, Japan. European Language Resources Association (ELRA).
* Fabbri et al. (2019) Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 1074–1084, Florence, Italy. Association for Computational Linguistics.
* Fan et al. (2019) Angela Fan, Yacine Jernite, Ethan Perez, David Grangier, Jason Weston, and Michael Auli. 2019. ELI5: Long form question answering. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 3558–3567, Florence, Italy. Association for Computational Linguistics.
* Faruqui and Das (2018) Manaal Faruqui and Dipanjan Das. 2018. Identifying well-formed natural language questions. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 798–803, Brussels, Belgium. Association for Computational Linguistics.
* Fedus et al. (2021) William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. _arXiv preprint arXiv:2101.03961_.
* Giampiccolo et al. (2007) Danilo Giampiccolo, Bernardo Magnini, Ido Dagan, and Bill Dolan. 2007. The third PASCAL recognizing textual entailment challenge. In _Proceedings of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing_ , pages 1–9, Prague. Association for Computational Linguistics.
* Gliwa et al. (2019) Bogdan Gliwa, Iwona Mochol, Maciej Biesek, and Aleksander Wawer. 2019. SAMSum corpus: A human-annotated dialogue dataset for abstractive summarization. In _Proceedings of the 2nd Workshop on New Frontiers in Summarization_ , pages 70–79, Hong Kong, China. Association for Computational Linguistics.
* Gordon et al. (2012) Andrew Gordon, Zornitsa Kozareva, and Melissa Roemmele. 2012. SemEval-2012 task 7: Choice of plausible alternatives: An evaluation of commonsense causal reasoning. In _*SEM 2012: The First Joint Conference on Lexical and Computational Semantics – Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012)_ , pages 394–398, Montréal, Canada. Association for Computational Linguistics.
* Gupta et al. (2022) Shashank Gupta, Subhabrata Mukherjee, Krishan Subudhi, Eduardo Gonzalez, Damien Jose, Ahmed H Awadallah, and Jianfeng Gao. 2022. Sparsely activated mixture-of-experts are robust multi-task learners. _arXiv preprint arXiv:2204.07689_.
* Gurulingappa et al. (2012) Harsha Gurulingappa, Abdul Mateen Rajput, Angus Roberts, Juliane Fluck, Martin Hofmann-Apitius, and Luca Toldo. 2012. Development of a benchmark corpus to support the automatic extraction of drug-related adverse effects from medical case reports. _Journal of Biomedical Informatics_ , 45(5):885–892. Text Mining and Natural Language Processing in Pharmacogenomics.
* He et al. (2015) Luheng He, Mike Lewis, and Luke Zettlemoyer. 2015. Question-answer driven semantic role labeling: Using natural language to annotate natural language. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 643–653, Lisbon, Portugal. Association for Computational Linguistics.
* Hendrycks et al. (2021) Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. 2021. Measuring massive multitask language understanding. _Proceedings of the International Conference on Learning Representations (ICLR)_.
* Hoffart et al. (2011) Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In _Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing_ , pages 782–792, Edinburgh, Scotland, UK. Association for Computational Linguistics.
* Houlsby et al. (2019) Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin de Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. 2019. Parameter-efficient transfer learning for NLP. In _ICML_.
* Hovy et al. (2001) Eduard Hovy, Laurie Gerber, Ulf Hermjakob, Chin-Yew Lin, and Deepak Ravichandran. 2001. Toward semantics-based answer pinpointing. In _Proceedings of the First International Conference on Human Language Technology Research_.
* Huang et al. (2019) Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2019. Cosmos QA: Machine reading comprehension with contextual commonsense reasoning. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 2391–2401, Hong Kong, China. Association for Computational Linguistics.
* Jacobs et al. (1991) R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. 1991. Adaptive mixtures of local experts. _Neural Computation_ , 3:79–87.
* Jang et al. (2016) Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical reparameterization with gumbel-softmax. _arXiv preprint arXiv:1611.01144_.
* Jiang et al. (2020) Chao Jiang, Mounica Maddela, Wuwei Lan, Yang Zhong, and Wei Xu. 2020. Neural CRF model for sentence alignment in text simplification. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7943–7960, Online. Association for Computational Linguistics.
* Jiang et al. (2019) Kelvin Jiang, Dekun Wu, and Hui Jiang. 2019. FreebaseQA: A new factoid QA data set matching trivia-style question-answer pairs with Freebase. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 318–323, Minneapolis, Minnesota. Association for Computational Linguistics.
* Khashabi et al. (2018) Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 252–262, New Orleans, Louisiana. Association for Computational Linguistics.
* Khashabi et al. (2020) Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UNIFIEDQA: Crossing format boundaries with a single QA system. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pages 1896–1907, Online. Association for Computational Linguistics.
* Khot et al. (2020) Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. QASC: A dataset for question answering via sentence composition. _Proceedings of the AAAI Conference on Artificial Intelligence_ , 34(05):8082–8090.
* Khot et al. (2018) Tushar Khot, Ashish Sabharwal, and Peter Clark. 2018. Scitail: A textual entailment dataset from science question answering. In _Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018_ , pages 5189–5197. AAAI Press.
* Kim et al. (2019) Byeongchang Kim, Hyunwoo Kim, and Gunhee Kim. 2019. Abstractive summarization of Reddit posts with multi-level memory networks. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 2519–2531, Minneapolis, Minnesota. Association for Computational Linguistics.
* Kingma and Ba (2014) Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_.
* Kotonya and Toni (2020) Neema Kotonya and Francesca Toni. 2020. Explainable automated fact-checking for public health claims. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 7740–7754, Online. Association for Computational Linguistics.
* Kudugunta et al. (2021) Sneha Kudugunta, Yanping Huang, Ankur Bapna, Maxim Krikun, Dmitry Lepikhin, Minh-Thang Luong, and Orhan Firat. 2021. Beyond distillation: Task-level mixture-of-experts for efficient inference. In _Findings of the Association for Computational Linguistics: EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 16-20 November, 2021_ , pages 3577–3599. Association for Computational Linguistics.
* Kwiatkowski et al. (2019) Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. _Transactions of the Association for Computational Linguistics_ , 7:452–466.
* Lai et al. (2017) Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding comprehension dataset from examinations. In _Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing_ , pages 785–794, Copenhagen, Denmark. Association for Computational Linguistics.
* Lebret et al. (2016) Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural text generation from structured data with application to the biography domain. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , pages 1203–1213, Austin, Texas. Association for Computational Linguistics.
* Lehmann et al. (2015) Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, D. Kontokostas, Pablo N. Mendes, Sebastian Hellmann, M. Morsey, Patrick van Kleef, S. Auer, and C. Bizer. 2015. DBpedia - a large-scale, multilingual knowledge base extracted from Wikipedia. _Semantic Web_ , 6:167–195.
* Levesque et al. (2012) Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2012. The Winograd schema challenge. In _Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning_ , KR’12, page 552–561. AAAI Press.
* Levy et al. (2017) Omer Levy, Minjoon Seo, Eunsol Choi, and Luke Zettlemoyer. 2017. Zero-shot relation extraction via reading comprehension. In _Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017)_ , pages 333–342, Vancouver, Canada. Association for Computational Linguistics.
* Lewis et al. (2020) Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7871–7880, Online. Association for Computational Linguistics.
* Lhoest et al. (2021) Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, Gunjan Chhablani, Bhavitvya Malik, Simon Brandeis, Teven Le Scao, Victor Sanh, Canwen Xu, Nicolas Patry, Angelina McMillan-Major, Philipp Schmid, Sylvain Gugger, Clément Delangue, Théo Matussière, Lysandre Debut, Stas Bekman, Pierric Cistac, Thibault Goehringer, Victor Mustar, François Lagunas, Alexander Rush, and Thomas Wolf. 2021. Datasets: A community library for natural language processing. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations_ , pages 175–184, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Li and Roth (2002) Xin Li and Dan Roth. 2002. Learning question classifiers. In _COLING 2002: The 19th International Conference on Computational Linguistics_.
* Lin et al. (2020a) Bill Yuchen Lin, Seyeon Lee, Rahul Khanna, and Xiang Ren. 2020a. Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-Trained Language Models. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 6862–6868, Online. Association for Computational Linguistics.
* Lin et al. (2022) Bill Yuchen Lin, Kangmin Tan, Chris Miller, Beiwen Tian, and Xiang Ren. 2022. Unsupervised cross-task generalization via retrieval augmentation. _arXiv preprint arXiv:2204.07937_.
* Lin et al. (2020b) Bill Yuchen Lin, Wangchunshu Zhou, Ming Shen, Pei Zhou, Chandra Bhagavatula, Yejin Choi, and Xiang Ren. 2020b. CommonGen: A constrained text generation challenge for generative commonsense reasoning. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pages 1823–1840, Online. Association for Computational Linguistics.
* Lin et al. (2019) Kevin Lin, Oyvind Tafjord, Peter Clark, and Matt Gardner. 2019. Reasoning over paragraph effects in situations. In _Proceedings of the 2nd Workshop on Machine Reading for Question Answering_ , pages 58–62, Hong Kong, China. Association for Computational Linguistics.
* Ling et al. (2017) Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. 2017. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 158–167, Vancouver, Canada. Association for Computational Linguistics.
* Liu et al. (2019a) Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4487–4496, Florence, Italy. Association for Computational Linguistics.
* Liu et al. (2019b) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. _arXiv preprint arXiv:1907.11692_.
* Louis et al. (2020) Annie Louis, Dan Roth, and Filip Radlinski. 2020. “I’d rather just go to bed”: Understanding indirect answers. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 7411–7425, Online. Association for Computational Linguistics.
* Maas et al. (2011) Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_ , pages 142–150, Portland, Oregon, USA. Association for Computational Linguistics.
* Malo et al. (2014) Pekka Malo, Ankur Sinha, Pekka Korhonen, Jyrki Wallenius, and Pyry Takala. 2014. Good debt or bad debt: Detecting semantic orientations in economic texts. _J. Assoc. Inf. Sci. Technol._ , 65(4):782–796.
* Manotas et al. (2020) Irene Manotas, Ngoc Phuoc An Vo, and Vadim Sheinin. 2020. LiMiT: The literal motion in text dataset. In _Findings of the Association for Computational Linguistics: EMNLP 2020_ , pages 991–1000, Online. Association for Computational Linguistics.
* Marelli et al. (2014) Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, and Roberto Zamparelli. 2014. A SICK cure for the evaluation of compositional distributional semantic models. In _Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14)_ , pages 216–223, Reykjavik, Iceland. European Language Resources Association (ELRA).
* Mathew et al. (2020) Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, and Animesh Mukherjee. 2020. Hatexplain: A benchmark dataset for explainable hate speech detection. _ArXiv preprint_ , abs/2012.10289.
* McAuley and Leskovec (2013) Julian J. McAuley and Jure Leskovec. 2013. Hidden factors and hidden topics: understanding rating dimensions with review text. In _Seventh ACM Conference on Recommender Systems, RecSys ’13, Hong Kong, China, October 12-16, 2013_ , pages 165–172. ACM.
* McCann et al. (2018) Bryan McCann, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. The natural language decathlon: Multitask learning as question answering. _arXiv preprint arXiv:1806.08730_.
* McCreery et al. (2020) Clara H. McCreery, Namit Katariya, Anitha Kannan, Manish Chablani, and Xavier Amatriain. 2020. Effective transfer learning for identifying similar questions: Matching user questions to COVID-19 faqs. In _KDD ’20: The 26th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Virtual Event, CA, USA, August 23-27, 2020_ , pages 3458–3465. ACM.
* Mihaylov et al. (2018) Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 2381–2391, Brussels, Belgium. Association for Computational Linguistics.
* Mishra et al. (2021) Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2021. Cross-task generalization via natural language crowdsourcing instructions. _arXiv preprint arXiv:2104.08773_.
* Mollas et al. (2020) Ioannis Mollas, Zoe Chrysopoulou, Stamatis Karlos, and Grigorios Tsoumakas. 2020. ETHOS: an online hate speech detection dataset. _ArXiv preprint_ , abs/2006.08328.
* Nallapati et al. (2016) Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Çağlar Gülçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In _Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning_ , pages 280–290, Berlin, Germany. Association for Computational Linguistics.
* Nangia et al. (2020) Nikita Nangia, Clara Vania, Rasika Bhalerao, and Samuel R. Bowman. 2020. CrowS-pairs: A challenge dataset for measuring social biases in masked language models. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1953–1967, Online. Association for Computational Linguistics.
* Napoles et al. (2012) Courtney Napoles, Matthew Gormley, and Benjamin Van Durme. 2012. Annotated Gigaword. In _Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX)_ , pages 95–100, Montréal, Canada. Association for Computational Linguistics.
* Narayan et al. (2018) Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don’t give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 1797–1807, Brussels, Belgium. Association for Computational Linguistics.
* Nie et al. (2020) Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 4885–4901, Online. Association for Computational Linguistics.
* Othman and Jemni (2012) A. Othman and M. Jemni. 2012. English-ASL gloss parallel corpus 2012: ASLG-PC12. In _English-ASL Gloss Parallel Corpus 2012_.
* Pang and Lee (2005) Bo Pang and Lillian Lee. 2005. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In _Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL’05)_ , pages 115–124, Ann Arbor, Michigan. Association for Computational Linguistics.
* Pappas et al. (2020) Dimitris Pappas, Petros Stavropoulos, Ion Androutsopoulos, and Ryan McDonald. 2020. BioMRC: A dataset for biomedical machine reading comprehension. In _Proceedings of the 19th SIGBioMed Workshop on Biomedical Language Processing_ , pages 140–149, Online. Association for Computational Linguistics.
* Petroni et al. (2020) Fabio Petroni, Patrick Lewis, Aleksandra Piktus, Tim Rocktäschel, Yuxiang Wu, Alexander H. Miller, and Sebastian Riedel. 2020. How context affects language models’ factual predictions. In _Automated Knowledge Base Construction_.
* Petroni et al. (2019) Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 2463–2473, Hong Kong, China. Association for Computational Linguistics.
* Phang et al. (2018) Jason Phang, Thibault Févry, and Samuel R Bowman. 2018. Sentence encoders on stilts: Supplementary training on intermediate labeled-data tasks. _arXiv preprint arXiv:1811.01088_.
* Pilehvar and Camacho-Collados (2019) Mohammad Taher Pilehvar and Jose Camacho-Collados. 2019. WiC: the word-in-context dataset for evaluating context-sensitive meaning representations. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 1267–1273, Minneapolis, Minnesota. Association for Computational Linguistics.
* Ponti et al. (2022) Edoardo M Ponti, Alessandro Sordoni, and Siva Reddy. 2022. Combining modular skills in multitask learning. _arXiv preprint arXiv:2202.13914_.
* Pouran Ben Veyseh et al. (2020) Amir Pouran Ben Veyseh, Franck Dernoncourt, Quan Hung Tran, and Thien Huu Nguyen. 2020. What does this acronym mean? introducing a new dataset for acronym identification and disambiguation. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 3285–3301, Barcelona, Spain (Online). International Committee on Computational Linguistics.
* Pruksachatkun et al. (2020) Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman. 2020. Intermediate-task transfer learning with pretrained language models: When and why does it work? In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 5231–5247, Online. Association for Computational Linguistics.
* Raffel et al. (2020) Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. _Journal of Machine Learning Research_ , 21(140):1–67.
* Rahman and Ng (2012) Altaf Rahman and Vincent Ng. 2012. Resolving complex cases of definite pronouns: The Winograd schema challenge. In _Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning_ , pages 777–789, Jeju Island, Korea. Association for Computational Linguistics.
* Rajani et al. (2019) Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! leveraging language models for commonsense reasoning. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4932–4942, Florence, Italy. Association for Computational Linguistics.
* Rajpurkar et al. (2016) Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In _Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing_ , pages 2383–2392, Austin, Texas. Association for Computational Linguistics.
* Rashkin et al. (2019) Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open-domain conversation models: A new benchmark and dataset. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 5370–5381, Florence, Italy. Association for Computational Linguistics.
* Rogers et al. (2020) Anna Rogers, Olga Kovaleva, Matthew Downey, and Anna Rumshisky. 2020. Getting closer to AI complete question answering: A set of prerequisite real tasks. _Proceedings of the AAAI Conference on Artificial Intelligence_ , 34(05):8722–8731.
* Rosenbaum et al. (2018) Clemens Rosenbaum, Tim Klinger, and Matthew Riemer. 2018. Routing networks: Adaptive selection of non-linear functions for multi-task learning. In _International Conference on Learning Representations_.
* Saha et al. (2018) Amrita Saha, Rahul Aralikatte, Mitesh M. Khapra, and Karthik Sankaranarayanan. 2018. DuoRC: Towards complex language understanding with paraphrased reading comprehension. In _Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)_ , pages 1683–1693, Melbourne, Australia. Association for Computational Linguistics.
* Sakaguchi et al. (2020) Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. 2020. WinoGrande: An adversarial Winograd schema challenge at scale. _Proceedings of the AAAI Conference on Artificial Intelligence_ , 34(05):8732–8740.
* Sanh et al. (2022) Victor Sanh, Albert Webson, Colin Raffel, Stephen Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Fevry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M Rush. 2022. Multitask prompted training enables zero-shot task generalization. In _International Conference on Learning Representations_.
* Sap et al. (2019) Maarten Sap, Hannah Rashkin, Derek Chen, Ronan Le Bras, and Yejin Choi. 2019. Social IQa: Commonsense reasoning about social interactions. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 4463–4473, Hong Kong, China. Association for Computational Linguistics.
* Saravia et al. (2018) Elvis Saravia, Hsien-Chi Toby Liu, Yen-Hao Huang, Junlin Wu, and Yi-Shin Chen. 2018. CARER: Contextualized affect representations for emotion recognition. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 3687–3697, Brussels, Belgium. Association for Computational Linguistics.
* Shazeer et al. (2017) Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc V. Le, Geoffrey E. Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In _ICLR (Poster)_. OpenReview.net.
* Sheng and Uthus (2020) Emily Sheng and David Uthus. 2020. Investigating societal biases in a poetry composition system. In _Proceedings of the Second Workshop on Gender Bias in Natural Language Processing_ , pages 93–106, Barcelona, Spain (Online). Association for Computational Linguistics.
* Sileo et al. (2019) Damien Sileo, Tim Van De Cruys, Camille Pradel, and Philippe Muller. 2019. Mining discourse markers for unsupervised sentence representation learning. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 3477–3486, Minneapolis, Minnesota. Association for Computational Linguistics.
* Socher et al. (2013) Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In _Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing_ , pages 1631–1642, Seattle, Washington, USA. Association for Computational Linguistics.
* Sun et al. (2019) Kai Sun, Dian Yu, Jianshu Chen, Dong Yu, Yejin Choi, and Claire Cardie. 2019. DREAM: A challenge data set and models for dialogue-based reading comprehension. _Transactions of the Association for Computational Linguistics_ , 7:217–231.
* Tafjord et al. (2019a) Oyvind Tafjord, Peter Clark, Matt Gardner, Wen-tau Yih, and Ashish Sabharwal. 2019a. QuaRel: A dataset and models for answering questions about qualitative relationships. _Proceedings of the AAAI Conference on Artificial Intelligence_ , 33(01):7063–7071.
* Tafjord et al. (2019b) Oyvind Tafjord, Matt Gardner, Kevin Lin, and Peter Clark. 2019b. QuaRTz: An open-domain dataset of qualitative relationship questions. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 5941–5946, Hong Kong, China. Association for Computational Linguistics.
* Talmor et al. (2019) Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. 2019. CommonsenseQA: A question answering challenge targeting commonsense knowledge. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4149–4158, Minneapolis, Minnesota. Association for Computational Linguistics.
* Tandon et al. (2019) Niket Tandon, Bhavana Dalvi, Keisuke Sakaguchi, Peter Clark, and Antoine Bosselut. 2019. WIQA: A dataset for “what if…” reasoning over procedural text. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 6076–6085, Hong Kong, China. Association for Computational Linguistics.
* Thorne et al. (2018) James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 809–819, New Orleans, Louisiana. Association for Computational Linguistics.
* Vajjala and Lučić (2018) Sowmya Vajjala and Ivana Lučić. 2018. OneStopEnglish corpus: A new corpus for automatic readability assessment and text simplification. In _Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications_ , pages 297–304, New Orleans, Louisiana. Association for Computational Linguistics.
* Vaswani et al. (2017) Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. _Advances in neural information processing systems_ , 30.
* Vu et al. (2020) Tu Vu, Tong Wang, Tsendsuren Munkhdalai, Alessandro Sordoni, Adam Trischler, Andrew Mattarella-Micke, Subhransu Maji, and Mohit Iyyer. 2020. Exploring and predicting transferability across NLP tasks. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 7882–7926, Online. Association for Computational Linguistics.
* Wang et al. (2021) Jixuan Wang, Kuan-Chieh Wang, Frank Rudzicz, and Michael Brudno. 2021. Grad2task: Improved few-shot text classification using gradients for task representation. In _Advances in Neural Information Processing Systems_.
* Wang (2017) William Yang Wang. 2017. “liar, liar pants on fire”: A new benchmark dataset for fake news detection. In _Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)_ , pages 422–426, Vancouver, Canada. Association for Computational Linguistics.
* Warstadt et al. (2020) Alex Warstadt, Alicia Parrish, Haokun Liu, Anhad Mohananey, Wei Peng, Sheng-Fu Wang, and Samuel R. Bowman. 2020. BLiMP: The benchmark of linguistic minimal pairs for English. _Transactions of the Association for Computational Linguistics_ , 8:377–392.
* Warstadt et al. (2019) Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. _Transactions of the Association for Computational Linguistics_ , 7:625–641.
* Wei et al. (2021) Jason Wei, Maarten Bosma, Vincent Y Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language models are zero-shot learners. _arXiv preprint arXiv:2109.01652_.
* Welbl et al. (2017) Johannes Welbl, Nelson F. Liu, and Matt Gardner. 2017. Crowdsourcing multiple choice science questions. In _Proceedings of the 3rd Workshop on Noisy User-generated Text_ , pages 94–106, Copenhagen, Denmark. Association for Computational Linguistics.
* Welbl et al. (2018) Johannes Welbl, Pontus Stenetorp, and Sebastian Riedel. 2018. Constructing datasets for multi-hop reading comprehension across documents. _Transactions of the Association for Computational Linguistics_ , 6:287–302.
* Williams et al. (2018) Adina Williams, Nikita Nangia, and Samuel Bowman. 2018. A broad-coverage challenge corpus for sentence understanding through inference. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)_ , pages 1112–1122, New Orleans, Louisiana. Association for Computational Linguistics.
* Wolfson et al. (2020) Tomer Wolfson, Mor Geva, Ankit Gupta, Matt Gardner, Yoav Goldberg, Daniel Deutch, and Jonathan Berant. 2020. Break it down: A question understanding benchmark. _Transactions of the Association for Computational Linguistics_ , 8:183–198.
* Xiong et al. (2019) Wenhan Xiong, Jiawei Wu, Hong Wang, Vivek Kulkarni, Mo Yu, Shiyu Chang, Xiaoxiao Guo, and William Yang Wang. 2019. TWEETQA: A social media focused question answering dataset. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 5020–5031, Florence, Italy. Association for Computational Linguistics.
* Yang et al. (2015) Yi Yang, Wen-tau Yih, and Christopher Meek. 2015. WikiQA: A challenge dataset for open-domain question answering. In _Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing_ , pages 2013–2018, Lisbon, Portugal. Association for Computational Linguistics.
* Yang et al. (2018) Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. 2018. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 2369–2380, Brussels, Belgium. Association for Computational Linguistics.
* Ye et al. (2021) Qinyuan Ye, Bill Yuchen Lin, and Xiang Ren. 2021. CrossFit: A few-shot learning challenge for cross-task generalization in NLP. In _Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing_ , pages 7163–7189, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
* Yu et al. (2018) Tao Yu, Rui Zhang, Kai Yang, Michihiro Yasunaga, Dongxu Wang, Zifan Li, James Ma, Irene Li, Qingning Yao, Shanelle Roman, Zilin Zhang, and Dragomir Radev. 2018. Spider: A large-scale human-labeled dataset for complex and cross-domain semantic parsing and text-to-SQL task. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 3911–3921, Brussels, Belgium. Association for Computational Linguistics.
* Zellers et al. (2018) Rowan Zellers, Yonatan Bisk, Roy Schwartz, and Yejin Choi. 2018. SWAG: A large-scale adversarial dataset for grounded commonsense inference. In _Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing_ , pages 93–104, Brussels, Belgium. Association for Computational Linguistics.
* Zellers et al. (2019) Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. 2019. HellaSwag: Can a machine really finish your sentence? In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4791–4800, Florence, Italy. Association for Computational Linguistics.
* Zhang et al. (2020) Hao Zhang, Jae Ro, and Richard Sproat. 2020. Semi-supervised URL segmentation with recurrent neural networks pre-trained on knowledge graph entities. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 4667–4675, Barcelona, Spain (Online). International Committee on Computational Linguistics.
* Zhang and Tetreault (2019) Rui Zhang and Joel Tetreault. 2019. This email could save your life: Introducing the task of email subject line generation. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 446–456, Florence, Italy. Association for Computational Linguistics.
* Zhang et al. (2018) Sheng Zhang, X. Liu, J. Liu, Jianfeng Gao, Kevin Duh, and Benjamin Van Durme. 2018. ReCoRD: Bridging the gap between human and machine commonsense reading comprehension. _ArXiv preprint_ , abs/1810.12885.
* Zhang et al. (2015) Xiang Zhang, Junbo Jake Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In _Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada_ , pages 649–657.
* Zhang et al. (2019) Yuan Zhang, Jason Baldridge, and Luheng He. 2019. PAWS: Paraphrase adversaries from word scrambling. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 1298–1308, Minneapolis, Minnesota. Association for Computational Linguistics.
* Zhong et al. (2017) Victor Zhong, Caiming Xiong, and Richard Socher. 2017. Seq2SQL: Generating structured queries from natural language using reinforcement learning. _ArXiv preprint_ , abs/1709.00103.
* Zhou et al. (2019) Ben Zhou, Daniel Khashabi, Qiang Ning, and Dan Roth. 2019. “going on a vacation” takes longer than “going for a walk”: A study of temporal commonsense understanding. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3363–3369, Hong Kong, China. Association for Computational Linguistics.
* Zoph et al. (2022) Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, and William Fedus. 2022. Designing effective sparse expert models. _arXiv preprint arXiv:2202.08906_.
## Appendix A Computing Task Representations
In the following, we describe the method to construct the task representations
used in §5.3.
#### TaskEmb.
Task2Vec (Achille et al., 2019) is a method for generating task embeddings for
visual classification tasks based on the Fisher information matrix (FIM). It
was later extended to the NLP domain (Vu et al., 2020; Wang et al., 2021) and
found to be useful there. Following Vu et al. (2020), we compute the empirical
Fisher and use it as the task representation. Specifically, given a model
$P_{\theta}$ parameterized by $\theta$ (e.g., a BART-Base model) and a set of
labeled examples $\\{(x,y)\\}$, we first fine-tune the model on the examples,
then compute the Fisher information matrix:
${F_{\theta}=\frac{1}{n}\sum_{i=1}^{n}\left[\nabla_{\theta}\log{P}_{\theta}\left(y^{i}|x^{i}\right)\nabla_{\theta}\log{P}_{\theta}\left(y^{i}|x^{i}\right)^{T}\right]}$
(2)
To reduce the computational cost, (1) we use only the diagonal entries of
$F_{\theta}$, following Achille et al. (2019) and Vu et al. (2020); (2) we use
a parameter-efficient fine-tuning method, adapter fine-tuning (Houlsby et al.,
2019), and compute the FIM only with respect to the adapter parameters; and
(3) we use PCA to reduce the dimension to $d=768$ (the same as TextEmb), since
these representations serve as input to the router in the task-level MoE
model.
#### TextEmb and FT-TextEmb.
For TextEmb, we first concatenate the input sequence $x$ and the output
sequence $y$ into a single longer sequence and feed it to the BART encoder to
obtain token-level representations. For TextEmb-AVG, we average over tokens
within each example and then over all examples to obtain a single vector as
the task representation. For TextEmb-BOS, we average the BOS representation
across all examples (we later found this to be less meaningful, since BART
pre-training does not train the BOS tokens with any special objective). For a
fair comparison with TaskEmb, which fine-tunes the model on labeled examples
and may thus gain extra information in the process, we also include
FT-TextEmb-AVG and FT-TextEmb-BOS in our comparison. In these two variants,
the BART model is first fine-tuned on the labeled examples $\\{(x,y)\\}$.
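The two-level averaging in TextEmb-AVG can be sketched as below. This is a hedged illustration: `textemb_avg` is a hypothetical name, and random arrays stand in for the BART encoder's token-level outputs.

```python
import numpy as np

def textemb_avg(token_reps_per_example):
    """TextEmb-AVG: average token representations within each example,
    then average the per-example vectors across the whole dataset."""
    per_example = [reps.mean(axis=0) for reps in token_reps_per_example]
    return np.stack(per_example).mean(axis=0)

rng = np.random.default_rng(1)
# Two toy "examples" with different sequence lengths, hidden size 8.
reps = [rng.normal(size=(5, 8)), rng.normal(size=(3, 8))]
task_vec = textemb_avg(reps)   # one fixed-size vector per task
```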
## Appendix B Additional Experiment Details
### B.1 Multi-task Learning Experiments
We concatenate the $D_{train}$ sets of the 120 tasks in $\mathcal{T}_{train}$
into one large dataset and use it for multi-task learning. We adopt
heterogeneous batching (Aghajanyan et al., 2021), i.e., each batch contains
examples from different tasks. For the vanilla multi-task baseline, we train
the model for 30,000 steps with a batch size of 32 and a learning rate of
3e-5. For BART-Large we use the same setting, except that the learning rate is
set to 1e-5. We run validation every 3,000 steps and select the best model
based on validation performance.
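Heterogeneous batching can be sketched as pooling all (task, example) pairs and sampling mixed batches from the pool. This is an illustrative sketch with hypothetical names (`heterogeneous_batches`, toy string "examples"), not the paper's actual data loader.

```python
import random

def heterogeneous_batches(task_datasets, batch_size, num_batches, seed=0):
    """Yield batches that mix examples from different tasks by sampling
    from a single pool of (task, example) pairs."""
    rng = random.Random(seed)
    pool = [(task, ex) for task, data in task_datasets.items() for ex in data]
    for _ in range(num_batches):
        yield rng.sample(pool, batch_size)

datasets = {"qa": ["q1", "q2", "q3"], "cls": ["c1", "c2", "c3"]}
batches = list(heterogeneous_batches(datasets, batch_size=4, num_batches=3))
# With batch_size=4 and only 3 examples per task, every batch mixes tasks.
```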
The task-level MoE models are trained with a base learning rate of 1e-5,
while the router uses a larger learning rate of 1e-3, based on our pilot
experiments and following Ponti et al. (2022). For the task representations,
we use a learning rate of 1e-2 when they are randomly initialized and 1e-3
when they are initialized from pre-computed representations. We train these
models for 60,000 steps, since the routes and experts take longer to
stabilize. All models are trained with the Adam optimizer (Kingma and Ba,
2014).
### B.2 Few-shot Adaptation Experiments
For few-shot fine-tuning we largely follow the experimental setting in Ye et
al. (2021). Each task has five different few-shot samples of
$(D_{train},D_{dev})$. We train on $D_{train}$ for 1,000 steps and validate on
$D_{dev}$ every 100 steps. For each few-shot sample, we run a grid search over
learning rates {1e-5, 2e-5, 5e-5} and batch sizes {2, 4, 8}. The model with
the best $D_{dev}$ performance is then evaluated on $D_{test}$, and we report
that performance.
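The grid search over learning rates and batch sizes can be sketched as below. The helper names and the stand-in evaluation function are hypothetical; in the paper, each configuration corresponds to a full fine-tuning run scored on $D_{dev}$.

```python
import itertools

def grid_search(train_eval_fn, lrs=(1e-5, 2e-5, 5e-5), batch_sizes=(2, 4, 8)):
    """Try every (lr, batch_size) pair and return the best-scoring one."""
    best = None
    for lr, bs in itertools.product(lrs, batch_sizes):
        score = train_eval_fn(lr, bs)   # dev-set metric for this config
        if best is None or score > best[0]:
            best = (score, lr, bs)
    return best

# Stand-in for "fine-tune, then evaluate on D_dev" (peaks at lr=2e-5, bs=4).
fake_eval = lambda lr, bs: -abs(lr - 2e-5) * 1e5 - abs(bs - 4)
score, lr, bs = grid_search(fake_eval)
```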
### B.3 Zero-shot Experiments
#### Data.
Following Sanh et al. (2022) and Lin et al. (2022), we use the prompt
templates in the Public Pool of Prompts (P3; Bach et al., 2022) to convert
texts from various NLP tasks into a unified text-to-text format. To save
compute, we use a sub-sampled version of the P3 dataset: following Lin et al.
(2022), we use up to 5k examples for $D_{train}$ and 1k examples each for
$D_{dev}$ and $D_{test}$ for all tasks. We use the same 36 upstream tasks as
T0 for $\mathcal{T}_{train}$ and 10 unseen tasks as $\mathcal{T}_{test}$. The
$D_{train}$ sets of tasks in $\mathcal{T}_{train}$ are used for upstream
learning; the $D_{test}$ sets of tasks in $\mathcal{T}_{test}$ are used for
reporting performance. For simplicity, we only keep prompts that can be
evaluated with accuracy, and we report the mean accuracy over all tasks in
$\mathcal{T}_{test}$.
#### Training.
(1) For Multi-task BART-Base and Random Task Routing (2/3), we use a learning
rate of 1e-5, a training batch size of 16, and 200k total training steps. (2)
For the (c)+(d)+(h)+(l)+(m) model, we use a base learning rate of 1e-5 for the
experts and 1e-3 for the router, and train for 200k steps. (3) For the
(c)+(d)+(h)+(l)+(n) model, we use a base learning rate of 1e-5 for the experts
and 1e-3 for the router; we train for 60k steps in the first learning stage
and 200k steps in the second. Both MoE models use a batch size of 4. In this
zero-shot setting, the task representation is computed by applying TextEmb-AVG
(h) to the prompt templates.
## Appendix C Extended Results and Analysis
### C.1 Loss and Performance Discrepancy
In Fig. 6, we plot the $D_{dev}$ loss and performance during multi-task
learning. We conclude that the $D_{dev}$ loss does not align well with the
final metrics, and thus validation should use the final metrics.
Figure 6: Dev loss and dev performance discrepancy when training multi-task
transformer baselines. We found that a smaller dev loss does not guarantee
better dev performance. Dev losses tend to plunge and then rise, while dev
performance continues to increase. BART-Large outperforms BART-Base despite a
larger dev loss.
### C.2 Full Manual Feature Correlation Results
We show the full results of the Pearson correlation between learned routes and
manual features in Figures 7 and 8. Figure 7 is based on routes in the
(c)+(d)+(g)+(k) model, and Figure 8 on the (c)+(d)+(j)+(k) model.
Figure 7: Pearson Correlation Between Learned Routes and Manual Features.
Correlation with $p$<$0.01$ are visualized. The correlation is based on a
(c)+(d)+(g)+(k) model.
Figure 8: Pearson Correlation Between Learned Routes and Manual Features.
Correlation with $p$<$0.01$ are visualized. The correlation is based on a
(c)+(d)+(j)+(k) model.
### C.3 Further Investigation on Selection Functions
In our initial experiments, our softmax implementation did not include
temperature annealing. When we add this trick, its performance is comparable
to Gumbel-Softmax ST.
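For reference, a straight-through Gumbel-Softmax selection step can be sketched as below. This is a minimal NumPy sketch (the function name and the clipping constant are illustrative); a real implementation would route the gradient through the soft sample.

```python
import numpy as np

def gumbel_softmax_st(logits, tau=1.0, rng=None):
    """Straight-through Gumbel-Softmax: a soft (differentiable) sample
    plus a hard one-hot selection used in the forward pass."""
    rng = rng or np.random.default_rng()
    u = np.clip(rng.uniform(size=logits.shape), 1e-12, 1 - 1e-12)
    gumbel = -np.log(-np.log(u))            # Gumbel(0, 1) noise
    soft = np.exp((logits + gumbel) / tau)  # temperature-scaled softmax
    soft /= soft.sum()
    hard = np.zeros_like(soft)              # one-hot forward selection
    hard[soft.argmax()] = 1.0
    return hard, soft

logits = np.array([2.0, 0.5, -1.0])
hard, soft = gumbel_softmax_st(logits, tau=0.5, rng=np.random.default_rng(0))
```

Annealing the temperature `tau` toward small values makes the soft distribution increasingly peaked, which is the trick referred to above.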
## Appendix D Discussion on Contemporary Works
Training dynamic models that condition their computation on task information
is a growing and active research area. Several contemporary works (Ponti et
al., 2022; Gupta et al., 2022; Asai et al., 2022) study this problem. We share
similar motivations with these works, but they differ from ours in methodology
and research focus. We highlight that (1) we conduct extensive analysis to
interpret the learned routes and experts in §7; and (2) we use 120 seen tasks
and 18 unseen tasks, a more diverse and thus more challenging learning
setting. We hope our findings are useful to the EMNLP community.
## Appendix E Tasks Used and References
We list all the tasks used in this paper in Table 8, with their corresponding
manual feature labels in Table 9.
Table 8: Tasks used in this work. Task Name | Ontology | Reference
---|---|---
acronym_identification | other | Pouran Ben Veyseh et al. 2020
ade_corpus_v2-classification | cls/other | Gurulingappa et al. 2012
ade_corpus_v2-dosage | other/slot filling | Gurulingappa et al. 2012
ade_corpus_v2-effect | other/slot filling | Gurulingappa et al. 2012
adversarialqa | qa/machine reading comprehension | Bartolo et al. 2020
aeslc | cg/summarization | Zhang and Tetreault 2019
ag_news | cls/topic | Gulli (link)
ai2_arc | qa/multiple-choice qa | Clark et al. 2018
amazon_polarity | cls/sentiment analysis | McAuley and Leskovec 2013
anli | cls/nli | Nie et al. 2020
app_reviews | other/regression | Missing
aqua_rat | qa/multiple-choice qa | Ling et al. 2017
art (abductive nli) | other | Bhagavatula et al. 2020
aslg_pc12 | other | Othman and Jemni 2012
biomrc | qa/machine reading comprehension | Pappas et al. 2020
blimp-anaphor_gender_agreement | other/linguistic phenomenon | Warstadt et al. 2020
blimp-anaphor_number_agreement | other/linguistic phenomenon | Warstadt et al. 2020
blimp-determiner_noun_agreement_with_adj_irregular_1 | other/linguistic phenomenon | Warstadt et al. 2020
blimp-ellipsis_n_bar_1 | other/linguistic phenomenon | Warstadt et al. 2020
blimp-ellipsis_n_bar_2 | other/linguistic phenomenon | Warstadt et al. 2020
blimp-existential_there_quantifiers_1 | other/linguistic phenomenon | Warstadt et al. 2020
blimp-irregular_past_participle_adjectives | other/linguistic phenomenon | Warstadt et al. 2020
blimp-sentential_negation_npi_licensor_present | other/linguistic phenomenon | Warstadt et al. 2020
blimp-sentential_negation_npi_scope | other/linguistic phenomenon | Warstadt et al. 2020
blimp-wh_questions_object_gap | other/linguistic phenomenon | Warstadt et al. 2020
boolq | qa/binary | Clark et al. 2019
break-QDMR | other | Wolfson et al. 2020
break-QDMR-high-level | other | Wolfson et al. 2020
circa | cls/other | Louis et al. 2020
climate_fever | cls/fact checking | Diggelmann et al. 2020
codah | qa/multiple-choice qa | Chen et al. 2019
common_gen | other | Lin et al. 2020b
commonsense_qa | qa/multiple-choice qa | Talmor et al. 2019
cos_e | other/generate explanation | Rajani et al. 2019
cosmos_qa | qa/multiple-choice qa | Huang et al. 2019
crawl_domain | other | Zhang et al. 2020
crows_pairs | other | Nangia et al. 2020
dbpedia_14 | cls/topic | Lehmann et al. 2015
definite_pronoun_resolution | other | Rahman and Ng 2012
discovery | cls/other | Sileo et al. 2019
dream | qa/multiple-choice qa | Sun et al. 2019
duorc | qa/machine reading comprehension | Saha et al. 2018
e2e_nlg_cleaned | other | Dušek et al. 2020, 2019
eli5-askh | qa/long-form qa | Fan et al. 2019
eli5-asks | qa/long-form qa | Fan et al. 2019
eli5-eli5 | qa/long-form qa | Fan et al. 2019
emo | cls/emotion | Chatterjee et al. 2019
emotion | cls/emotion | Saravia et al. 2018
empathetic_dialogues | cg/dialogue | Rashkin et al. 2019
ethos-directed_vs_generalized | cls/hate speech detection | Mollas et al. 2020
ethos-disability | cls/hate speech detection | Mollas et al. 2020
ethos-gender | cls/hate speech detection | Mollas et al. 2020
ethos-national_origin | cls/hate speech detection | Mollas et al. 2020
ethos-race | cls/hate speech detection | Mollas et al. 2020
ethos-religion | cls/hate speech detection | Mollas et al. 2020
ethos-sexual_orientation | cls/hate speech detection | Mollas et al. 2020
financial_phrasebank | cls/sentiment analysis | Malo et al. 2014
freebase_qa | qa/closed-book qa | Jiang et al. 2019
gigaword | cg/summarization | Napoles et al. 2012
glue-cola | cls/other | Warstadt et al. 2019
glue-mnli | cls/nli | Williams et al. 2018
glue-mrpc | cls/paraphrase | Dolan and Brockett 2005
glue-qnli | cls/nli | Rajpurkar et al. 2016
glue-qqp | cls/paraphrase | (link)
glue-rte | cls/nli | Dagan et al. 2005; Bar-Haim et al. 2006; Giampiccolo et al. 2007; Bentivogli et al. 2009
glue-sst2 | cls/sentiment analysis | Socher et al. 2013
glue-wnli | cls/nli | Levesque et al. 2012
google_wellformed_query | cls/other | Faruqui and Das 2018
hate_speech18 | cls/hate speech detection | de Gibert et al. 2018
hate_speech_offensive | cls/hate speech detection | Davidson et al. 2017
hatexplain | cls/hate speech detection | Mathew et al. 2020
health_fact | cls/fact checking | Kotonya and Toni 2020
hellaswag | qa/multiple-choice qa | Zellers et al. 2019
hotpot_qa | qa/machine reading comprehension | Yang et al. 2018
imdb | cls/sentiment analysis | Maas et al. 2011
jeopardy | qa/closed-book qa | (link)
kilt_ay2 | other/entity linking | Hoffart et al. 2011
kilt_fever | cls/fact checking | Thorne et al. 2018
kilt_hotpotqa | qa/closed-book qa | Yang et al. 2018
kilt_nq | qa/closed-book qa | Kwiatkowski et al. 2019
kilt_trex | qa/closed-book qa | Elsahar et al. 2018
kilt_wow | cg/dialogue | Dinan et al. 2019
kilt_zsre | qa/closed-book qa | Levy et al. 2017
lama-conceptnet | qa/closed-book qa | Petroni et al. 2019, 2020
lama-google_re | qa/closed-book qa | Petroni et al. 2019, 2020
lama-squad | qa/closed-book qa | Petroni et al. 2019, 2020
lama-trex | qa/closed-book qa | Petroni et al. 2019, 2020
liar | cls/fact checking | Wang 2017
limit | other | Manotas et al. 2020
math_qa | qa/multiple-choice qa | Amini et al. 2019
mc_taco | qa/binary | Zhou et al. 2019
medical_questions_pairs | cls/paraphrase | McCreery et al. 2020
mocha | other/regression | Chen et al. 2020a
multi_news | cg/summarization | Fabbri et al. 2019
numer_sense | qa/closed-book qa | Lin et al. 2020a
onestop_english | cls/other | Vajjala and Lučić 2018
openbookqa | qa/multiple-choice qa | Mihaylov et al. 2018
paws | cls/paraphrase | Zhang et al. 2019
piqa | other | Bisk et al. 2020
poem_sentiment | cls/sentiment analysis | Sheng and Uthus 2020
proto_qa | other | Boratko et al. 2020
qa_srl | other | He et al. 2015
qasc | qa/multiple-choice qa | Khot et al. 2020
quail | qa/multiple-choice qa | Rogers et al. 2020
quarel | qa/multiple-choice qa | Tafjord et al. 2019a
quartz-no_knowledge | qa/multiple-choice qa | Tafjord et al. 2019b
quartz-with_knowledge | qa/multiple-choice qa | Tafjord et al. 2019b
quoref | qa/machine reading comprehension | Dasigi et al. 2019
race-high | qa/multiple-choice qa | Lai et al. 2017
race-middle | qa/multiple-choice qa | Lai et al. 2017
reddit_tifu-title | cg/summarization | Kim et al. 2019
reddit_tifu-tldr | cg/summarization | Kim et al. 2019
ropes | qa/machine reading comprehension | Lin et al. 2019
rotten_tomatoes | cls/sentiment analysis | Pang and Lee 2005
samsum | cg/summarization | Gliwa et al. 2019
scicite | cls/other | Cohan et al. 2019
sciq | qa/multiple-choice qa | Welbl et al. 2017
scitail | cls/nli | Khot et al. 2018
search_qa | qa/closed-book qa | Dunn et al. 2017
sick | cls/nli | Marelli et al. 2014
sms_spam | cls/other | Almeida et al. 2011
social_i_qa | qa/multiple-choice qa | Sap et al. 2019
spider | cg/other | Yu et al. 2018
squad-no_context | qa/closed-book qa | Rajpurkar et al. 2016
squad-with_context | qa/machine reading comprehension | Rajpurkar et al. 2016
superglue-cb | cls/nli | de Marneffe et al. 2019
superglue-copa | qa/multiple-choice qa | Gordon et al. 2012
superglue-multirc | qa/multiple-choice qa | Khashabi et al. 2018
superglue-record | qa/machine reading comprehension | Zhang et al. 2018
superglue-rte | cls/nli | Dagan et al. 2005; Bar-Haim et al. 2006; Giampiccolo et al. 2007; Bentivogli et al. 2009
superglue-wic | cls/other | Pilehvar and Camacho-Collados 2019
superglue-wsc | cls/other | Levesque et al. 2012
swag | qa/multiple-choice qa | Zellers et al. 2018
tab_fact | cls/fact checking | Chen et al. 2020b
trec | cls/other | Li and Roth 2002; Hovy et al. 2001
trec-finegrained | cls/other | Li and Roth 2002; Hovy et al. 2001
tweet_eval-emoji | cls/emotion | Barbieri et al. 2020
tweet_eval-emotion | cls/emotion | Barbieri et al. 2020
tweet_eval-hate | cls/emotion | Barbieri et al. 2020
tweet_eval-irony | cls/emotion | Barbieri et al. 2020
tweet_eval-offensive | cls/emotion | Barbieri et al. 2020
tweet_eval-sentiment | cls/emotion | Barbieri et al. 2020
tweet_eval-stance_abortion | cls/emotion | Barbieri et al. 2020
tweet_eval-stance_atheism | cls/emotion | Barbieri et al. 2020
tweet_eval-stance_climate | cls/emotion | Barbieri et al. 2020
tweet_eval-stance_feminist | cls/emotion | Barbieri et al. 2020
tweet_eval-stance_hillary | cls/emotion | Barbieri et al. 2020
tweet_qa | qa/machine reading comprehension | Xiong et al. 2019
web_questions | qa/closed-book qa | Berant et al. 2013
wiki_auto | cls/other | Jiang et al. 2020
wiki_bio | cg/other | Lebret et al. 2016
wiki_qa | cls/other | Yang et al. 2015
wiki_split | cg/other | Botha et al. 2018
wikisql | cg/other | Zhong et al. 2017
wino_grande | qa/multiple-choice qa | Sakaguchi et al. 2020
wiqa | qa/multiple-choice qa | Tandon et al. 2019
xsum | cg/summarization | Narayan et al. 2018
yahoo_answers_topics | cls/topic | (link)
yelp_polarity | cls/sentiment analysis | Zhang et al. 2015; (link)
yelp_review_full | other/regression | Zhang et al. 2015; (link)
cnn_dailymail | cg/summarization | Nallapati et al. 2016
wiki_hop | qa/multiple-choice qa | Welbl et al. 2018
## Appendix F Random Task Partition
Different from the original random task partition used in Ye et al. (2021), we
remove yelp_polarity and freebase_qa from $\mathcal{T}_{test}$ because we
observe unusual instability when doing few-shot fine-tuning on these tasks.
{
  "train": ['glue-mrpc', 'math_qa', 'quarel', 'e2e_nlg_cleaned', 'tweet_eval-stance_atheism',
    'lama-squad', 'tab_fact', 'aqua_rat', 'tweet_eval-emoji', 'glue-wnli', 'codah',
    'tweet_eval-offensive', 'wiki_qa', 'blimp-ellipsis_n_bar_1', 'openbookqa', 'sms_spam',
    'acronym_identification', 'blimp-determiner_noun_agreement_with_adj_irregular_1',
    'ethos-national_origin', 'spider', 'definite_pronoun_resolution', 'hellaswag',
    'superglue-wsc', 'numer_sense', 'ade_corpus_v2-dosage', 'blimp-ellipsis_n_bar_2',
    'kilt_ay2', 'squad-no_context', 'google_wellformed_query', 'xsum', 'wiqa',
    'tweet_eval-stance_abortion', 'reddit_tifu-tldr', 'ade_corpus_v2-effect', 'qa_srl',
    'ethos-religion', 'commonsense_qa', 'jeopardy', 'biomrc', 'superglue-multirc',
    'ethos-race', 'eli5-askh', 'glue-qqp', 'paws', 'ethos-directed_vs_generalized',
    'glue-sst2', 'mocha', 'tweet_eval-hate', 'glue-rte', 'blimp-anaphor_number_agreement',
    'lama-conceptnet', 'hate_speech_offensive', 'superglue-wic', 'boolq', 'kilt_hotpotqa',
    'quartz-no_knowledge', 'aslg_pc12', 'sick', 'tweet_eval-stance_climate',
    'tweet_eval-sentiment', 'crows_pairs', 'glue-mnli', 'medical_questions_pairs',
    'break-QDMR-high-level', 'qasc', 'imdb', 'ethos-gender', 'trec-finegrained',
    'adversarialqa', 'onestop_english', 'web_questions', 'duorc', 'yelp_review_full',
    'swag', 'proto_qa', 'scitail', 'tweet_eval-stance_feminist', 'limit', 'common_gen',
    'scicite', 'blimp-irregular_past_participle_adjectives', 'social_i_qa', 'anli',
    'kilt_zsre', 'cosmos_qa', 'superglue-record', 'squad-with_context', 'emotion',
    'blimp-existential_there_quantifiers_1', 'race-middle', 'kilt_wow', 'sciq',
    'wino_grande', 'rotten_tomatoes', 'superglue-cb', 'poem_sentiment', 'ropes',
    'reddit_tifu-title', 'piqa', 'climate_fever', 'lama-google_re', 'search_qa',
    'wiki_auto', 'mc_taco', 'blimp-wh_questions_object_gap', 'hotpot_qa', 'emo',
    'kilt_nq', 'kilt_trex', 'quartz-with_knowledge', 'dbpedia_14', 'yahoo_answers_topics',
    'app_reviews', 'superglue-copa', 'blimp-anaphor_gender_agreement', 'hate_speech18',
    'gigaword', 'multi_news', 'aeslc', 'quail'],
  "dev": ['cos_e', 'kilt_fever', 'eli5-asks', 'trec', 'eli5-eli5', 'art',
    'empathetic_dialogues', 'tweet_qa', 'wikisql', 'lama-trex', 'tweet_eval-stance_hillary',
    'discovery', 'tweet_eval-emotion', 'liar', 'wiki_bio', 'dream',
    'ade_corpus_v2-classification', 'health_fact', 'samsum', 'financial_phrasebank'],
  "test": ['quoref', 'wiki_split', 'ethos-disability', 'superglue-rte', 'glue-cola',
    'ethos-sexual_orientation', 'blimp-sentential_negation_npi_scope', 'ai2_arc',
    'amazon_polarity', 'race-high', 'blimp-sentential_negation_npi_licensor_present',
    'tweet_eval-irony', 'break-QDMR', 'crawl_domain', 'glue-qnli', 'hatexplain',
    'ag_news', 'circa'],
}
## Appendix G Manually-Defined Features
Task Name | Science Technology | Social Network | News | Web | Bio-Medical | Review | Dialog | Books | Financial | Phrase | Sentence | Paragraph | Extractive | Linguistic | Commonsense | Co-reference | World Knowledge | Multi-hop | Sentence Completion | Synthesize
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
acronym_identification | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0
ade_corpus_v2-classification | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ade_corpus_v2-dosage | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ade_corpus_v2-effect | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0
adversarialqa | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0
aeslc | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 1
ag_news | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
ai2_arc | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 1 | 0 | 0 | 0
amazon_polarity | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
anli | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
app_reviews | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0
aqua_rat | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0
art | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0
aslg_pc12 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 1
biomrc | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 1 | 0
blimp-anaphor_gender_agreement | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 0 | 0
blimp-anaphor_number_agreement | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
blimp-determiner_noun_agreement_with_adj_irregular_1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
blimp-ellipsis_n_bar_1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
blimp-ellipsis_n_bar_2 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
blimp-existential_there_quantifiers_1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
blimp-irregular_past_participle_adjectives | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0
# Non-minimally coupled scalar field and scaling symmetry in a cosmological
background
Malik<EMAIL_ADDRESS>
(Department of Theoretical Physics, Faculty of Science, University of
Mazandaran, 47416-95447, Babolsar, Iran)
###### Contents
1. Abstract
2. Introducing a Scaling Symmetric Lagrangian in Non-Minimally Coupled Scalar Field in FRW spacetime
3. Critical points and scaling symmetry breaking
4. Slow Rolling Solutions
5. Stability of ground state value of scalar field and energy
6. Summary and Conclusion
## 1 Abstract
We study scaling symmetry in a class of non-minimally coupled scalar field
models in a Friedmann-Robertson-Walker (FRW) background, using a non-minimal
coupling of the form $RL^{(\varphi)}$. We find the conserved charge associated
with that symmetry, examine its role in cosmology, and investigate how the
symmetry may break and with what consequences. A potential
$V(\varphi)=\varphi^{2}/2$ is adopted for the scalar field; it is required to
obtain a scaling symmetric Lagrangian for the system comprising the scalar
field, the non-minimal coupling to the Ricci scalar $RL^{(\varphi)}$, and dark
matter dust. We study the evolution of the scalar field in the phase space of
the model and examine the stability of the resulting critical point. In this
way we derive a single identity relating the cosmological constant and the
gravitational constant, which reflects the breaking of scaling symmetry in the
space $(a,\varphi)$, and we relate the cosmological constant to the vacuum
expectation value of the potential energy of $\varphi$. Finally, we study the
stability of that vacuum expectation value.
Keywords: Scalar Field Cosmology; Non-minimal Gravitational Coupling; Noether
Symmetry; Cosmic Speed Up.
## 2 Introducing a Scaling Symmetric Lagrangian in Non-Minimally Coupled
Scalar Field in FRW spacetime
A scaling symmetry in the space $(a,\varphi)$ amounts to a kind of unification
of the real scalar field $\varphi(t)$ (representing dark energy) with the
universal scale factor $a(t)$ (representing the spatially homogeneous FRW
metric): the existence of one implies the existence of the other, so that
energy and geometry can be combined in a single identity. That is, we can
collect them into one field, such as $\Phi=(a,\varphi)$, on a manifold without
introducing any geometry, and then construct a Lagrangian in terms of $\Phi$
that respects the corresponding symmetry (although we do not pursue that
construction in this paper). In this way the relation between energy and
geometry is explained within a more general concept, namely symmetry, with
gravity described through the scaling symmetry group. Note the contrast with
the symmetry of general relativity, whose transformations leave $\varphi$
unchanged since it is a scalar field; under the scaling symmetry, the scalar
field changes along with the metric.
In this paper we consider a scalar field coupled non-minimally to gravity
through the term $RL^{(\varphi)}$. Working with the FRW metric, we find a
Lagrangian with a global scaling symmetry in the space $(a,\varphi)$, whose
breaking yields the usual Lagrangian of the non-minimally coupled scalar
field. The scaling symmetric Lagrangian in the space $(a,\varphi)$ implies the
existence of a globally conserved quantity (charge) which can be used for a
global classification of the cosmological solutions: two solutions with
unequal charges cannot be related to each other by any coordinate
transformation. We examine the role of the charge in the solutions for
$\varphi$ and show that, during the universal positively accelerated expansion
(with the scale factor $a$ increasing exponentially), the field $\varphi$
decreases exponentially until it reaches a critical point with
$\dot{\varphi}=0$ and $\varphi=\varphi_{0}\neq 0$, at which the global scaling
symmetry breaks and the universe expands at an approximately constant rate
$H=H_{0}$.
The evidence for symmetry breaking is the violation of the conservation of
the corresponding charge, $dQ/dt\neq 0$. We will find that the symmetry
breaking occurs at the critical point $\dot{\varphi}=0$,
$\varphi=\varphi_{0}\neq 0$. The non-vanishing constant value $\varphi_{0}$ at
that critical point is needed to satisfy the constraint equation $\delta
S/\delta N=0$. We find that the critical point $\dot{\varphi}=0$,
$\varphi=\varphi_{0}\neq 0$ is unique and stable (there are no other critical
points). As a result, we can relate the cosmological constant and the
gravitational constant to the same identity, namely the breaking of scaling
symmetry in the space $(a,\varphi)$, and relate the cosmological constant to
the vacuum expectation value of the potential energy of $\varphi$. If we
regard the vacuum expectation value of $\varphi$ as a quantum phenomenon
independent of any metric, we obtain a universal constant vacuum energy
(cosmological constant). In this way we relate the cosmological constant to a
quantum phenomenon, with the field specified here by the scaling symmetric
Lagrangian (5).
The Lagrangian of gravity plus the scalar field can be written, in the
background of the spatially flat FRW metric
$ds^{2}=-N(t)dt^{2}+a^{2}(t)(dx^{2}+dy^{2}+dz^{2})$, as a point-like
Lagrangian up to boundary terms as [1]
$\begin{split}\sqrt{-g}L(N,a,\dot{a},\varphi,\dot{\varphi})&=\frac{1}{{16\pi
G}}\sqrt{-g}R+\sqrt{-g}L^{(\varphi)}\\\
&=-3m_{pl}^{2}Na^{3}\left(\frac{{\dot{a}^{2}}}{N^{2}a^{2}}\right)+Na^{3}\left({\frac{1}{2}\frac{{\dot{\varphi}^{2}}}{N^{2}}-V\left(\varphi\right)}\right)+{\text{boundary
terms}}\,.\end{split}$ (1)
Here $N(t)$ is the lapse function and $a(t)$ is the cosmic scale factor. We
note that $a$ and $\varphi$ are dynamical variables, while $N(t)$ is
non-dynamical; it merely represents the symmetry in the time direction, so we
set $N(t)=1$ after deriving the equations of motion.
In this paper we study scalar fields coupled non-minimally to gravity through
$RL^{(\varphi)}$, where $R$ is the Ricci scalar and $L^{(\varphi)}$ is the
Lagrangian of the scalar field $\varphi$. Let us introduce a non-minimally
coupled scalar field to gravity by
$\begin{split}\sqrt{-g}L(N,a,\dot{a},\varphi,\dot{\varphi})=-3m_{pl}^{2}a\frac{{\dot{a}^{2}}}{N}&+a^{3}\left({\frac{1}{2}\frac{{\dot{\varphi}^{2}}}{N}-NV\left(\varphi\right)}\right)\\\
&+3kNa\frac{{\dot{a}^{2}}}{{N^{2}}}\left({\frac{1}{2}\frac{{\dot{\varphi}^{2}}}{N^{2}}-V\left(\varphi\right)}\right)\,,\end{split}$
(2)
in which a constant $k>0$ has been introduced to keep the units consistent.
Here the non-minimal interaction (the third term) of the scalar field with
gravity is represented by the product of the scalar
$Na\left({\dot{a}/N}\right)^{2}$ with the scalar field Lagrangian
$\dot{\varphi}^{2}/2N^{2}-V\left(\varphi\right)$. There is no problem with
this coupling: both $Na\left({\dot{a}}/N\right)^{2}$ and $L^{(\varphi)}$ are
scalars, so their product is also a scalar and preserves all of their
symmetries.
For a more general case, we add a dust matter (visible and dark) density
$\rho_{m}\left(a\right)=\rho_{0}^{(m)}/a^{3}$, where $\rho_{0}^{(m)}$ is a
constant, namely the matter density at the scale factor $a_{0}=1$, and a
cosmological constant $\Lambda$, which will arise only as a result of the
global scaling symmetry breaking of the Lagrangian (5). The dust matter does
not affect the results concerning that global symmetry and its breaking, and
setting $\rho_{0}^{(m)}=0$ is possible; we include it merely to indicate that
dark matter dust can exist in the phase of global scaling symmetry. Adopting
the scalar field potential $V\left(\varphi\right)=\varphi^{2}/2$, which is
needed to obtain the scaling symmetric Lagrangian (5), we get
$\begin{split}\sqrt{-g}L(N,a,\dot{a},\varphi,\dot{\varphi},\rho_{m})=&-3m_{pl}^{2}a\frac{{\dot{a}^{2}}}{N}+a^{3}\left({\frac{1}{2}\frac{{\dot{\varphi}^{2}}}{N}-\frac{1}{2}N\varphi^{2}}\right)\\\
&+3ka\frac{{\dot{a}^{2}}}{{N^{2}}}\left({\frac{1}{2}\frac{{\dot{\varphi}^{2}}}{N}-\frac{1}{2}N\varphi^{2}}\right)-Na^{3}\rho_{m}\left(a\right)-Na^{3}\Lambda\,.\end{split}$
(3)
Thus we obtain a point-like Lagrangian for gravity + NMC term + scalar field
+ matter density in the minisuperspace $(a,\varphi)$ of the model. Measuring
the variables in units of the Planck mass, we set $m_{pl}=1$ to get
$\begin{split}\sqrt{-g}L(N,a,\dot{a},\varphi,\dot{\varphi},\rho_{m})=&-3a\frac{{\dot{a}^{2}}}{N}+\frac{{a^{3}}}{2}\left({\frac{{\dot{\varphi}^{2}}}{N}-N\varphi^{2}}\right)\\\
&+\frac{{3k}}{2}\frac{{a\dot{a}^{2}}}{{N^{2}}}\left({\frac{{\dot{\varphi}^{2}}}{N}-N\varphi^{2}}\right)-N\rho_{0}^{(m)}-Na^{3}\Lambda\,.\end{split}$
(4)
This Lagrangian can be regarded as arising from another Lagrangian that has a
global scaling symmetry in the space $(a,\varphi)$, namely
$\sqrt{-g}L\left({N,a,\dot{a},\varphi,\dot{\varphi}}\right)=\frac{{a^{3}}}{2}\left({\frac{{\dot{\varphi}^{2}}}{N}-N\varphi^{2}}\right)+\frac{{3k}}{2}\frac{{a\dot{a}^{2}}}{{N^{2}}}\left({\frac{{\dot{\varphi}^{2}}}{N}-N\varphi^{2}}\right)-N\rho_{0}^{(m)}\,.$
(5)
This Lagrangian comprises a scalar field Lagrangian, its interaction with
gravity, and a dark matter density term $\rho_{m}(a)=\rho_{0}^{(m)}/a^{3}$.
It has a global scaling symmetry in the space of the dynamical variables $a$
and $\varphi$, represented by the transformations
$a\to e^{2\alpha}a,\quad\textrm{and}\quad\varphi\to e^{-3\alpha}\varphi\,,$
(6)
for an arbitrary real constant parameter $\alpha$, which can be either
positive or negative. Thus the Lagrangian (5) has the global scaling symmetry
$L\left({e^{2\alpha}a,e^{-3\alpha}\varphi}\right)=L\left({a,\varphi}\right)$
in the space $(a,\varphi)$, but this symmetry is broken when there is a
non-vanishing ground state value of $\varphi^{2}$, i.e.
$\left\langle\Omega\right|\varphi^{2}\left|\Omega\right\rangle=\varphi_{0}^{2}\neq
0$ for a ground state $\left|\Omega\right\rangle$ of the Lagrangian (5). This
implies replacing $\varphi^{2}$ with $\varphi^{2}+\varphi^{2}_{0}$ near the
minimum energy state $\left|\Omega\right\rangle$ in the Lagrangian (5), which
gives
$\begin{split}L&\left({N,a,\dot{a},\varphi,\dot{\varphi}}\right)\\\
&=\frac{{a^{3}}}{2}\left({\frac{{\dot{\varphi}^{2}}}{N}-N\varphi^{2}-N\varphi_{0}^{2}}\right)+\frac{{3k}}{2}\frac{{a\dot{a}^{2}}}{{N^{2}}}\left({\frac{{\dot{\varphi}^{2}}}{N}-N\varphi^{2}-N\varphi_{0}^{2}}\right)-N\rho_{0}^{(m)}\\\
&=-\varphi_{0}^{2}\frac{{3k}}{2}\frac{{a\dot{a}^{2}}}{N}+\frac{{a^{3}}}{2}\left({\frac{{\dot{\varphi}^{2}}}{N}-N\varphi^{2}}\right)+\frac{{3k}}{2}\frac{{a\dot{a}^{2}}}{{N^{2}}}\left({\frac{{\dot{\varphi}^{2}}}{N}-N\varphi^{2}}\right)\\\
&-N\rho_{0}^{(m)}-\frac{{\varphi_{0}^{2}}}{2}Na^{3}\,.\end{split}$ (7)
If we choose $k$ and $\varphi_{0}$ such that
$k\varphi_{0}^{2}/2=1,\quad\textrm{and}\quad\varphi_{0}^{2}/2=\Lambda,\quad\textrm{for}\quad
N(t)=1\,,$ (8)
we recover the Lagrangian (4) with broken scaling symmetry. In terms of the
Planck mass we get $k\varphi_{0}^{2}/2=m^{2}_{pl}$, so $k\Lambda=m^{2}_{pl}$,
which unifies the gravitational constant with the cosmological constant via
the scaling symmetry breaking, while $\varphi_{0}^{2}/2=\Lambda$ relates the
cosmological constant $\Lambda$ to the vacuum energy $\varphi_{0}^{2}/2$ of
the scalar field. Actually, the equation $k\varphi_{0}^{2}/2=1$ ensures that
$k>0$; otherwise we would not recover the usual general relativity of the FRW
metric as a result of scaling symmetry breaking in the space
$\left(a,\varphi\right)$. As we will see, the symmetry breaking occurs at the
critical point $\dot{\varphi}=0$, $\varphi=\varphi_{0}\neq 0$, and this
critical point is stable and unique.
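The invariance of the Lagrangian (5) under the transformations (6) can be checked symbolically. The following is a minimal sketch using Python's `sympy` (an illustrative check, not part of the original derivation); `ad` and `phd` stand for $\dot{a}$ and $\dot{\varphi}$, which scale like $a$ and $\varphi$ since $\alpha$ is constant in time.

```python
import sympy as sp

# a, adot, phi, phidot as plain symbols; since alpha is constant in time,
# the velocities scale exactly like their coordinates
a, ad, ph, phd, k, alpha, rho0 = sp.symbols('a adot phi phidot k alpha rho_0', real=True)

def L(a_, ad_, ph_, phd_):
    # Lagrangian (5) in the gauge N(t) = 1
    return a_**3/2*(phd_**2 - ph_**2) + sp.Rational(3, 2)*k*a_*ad_**2*(phd_**2 - ph_**2) - rho0

# apply a -> e^{2 alpha} a, phi -> e^{-3 alpha} phi and compare
scaled = L(sp.exp(2*alpha)*a, sp.exp(2*alpha)*ad, sp.exp(-3*alpha)*ph, sp.exp(-3*alpha)*phd)
print(sp.simplify(scaled - L(a, ad, ph, phd)))  # 0
```

Each term scales homogeneously: $a^{3}\varphi^{2}$-type terms pick up $e^{6\alpha}e^{-6\alpha}=1$, and the coupling term picks up $e^{2\alpha}e^{4\alpha}e^{-6\alpha}=1$.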
Now we derive the conserved charge and the equations of motion of the
Lagrangian (5). Since
$L\left({e^{2\alpha}a,e^{-3\alpha}\varphi}\right)=L\left({a,\varphi}\right)$,
the action $S=\int Ldt$ is also invariant. Therefore,
$\delta_{\alpha}S=\int{dt\delta_{\alpha}L}=\int{dt\left({L\left({e^{2\alpha}a,e^{-3\alpha}\varphi}\right)-L\left({a,\varphi}\right)}\right)}=0\,.$
If we use an infinitesimal transformation $\alpha\ll 1$, we get
$\delta_{\alpha}a=e^{2\alpha}a-a\approx\left({1+2\alpha}\right)a-a=2\alpha
a\,,$
and
$\delta_{\alpha}\varphi=e^{-3\alpha}\varphi-\varphi\approx\left({1-3\alpha}\right)\varphi-\varphi=-3\alpha\varphi\,.$
Using these results in the following relation
$\begin{split}\delta_{\alpha}S&=\int{dt\delta_{\alpha}L}\\\
&=-\int{dt\left(\frac{d}{{dt}}\frac{{\partial
L}}{{\partial\dot{a}}}-\frac{{\partial L}}{{\partial
a}}\right)}-\int{dt\left(\frac{d}{{dt}}\frac{{\partial
L}}{{\partial\dot{\varphi}}}-\frac{{\partial L}}{{\partial\varphi}}\right)}\\\
&+\int{dt\frac{d}{{dt}}\left({\frac{{\partial
L}}{{\partial\dot{a}}}\delta_{\alpha}a+\frac{{\partial
L}}{{\partial\dot{\varphi}}}\delta_{\alpha}\varphi}\right)}=0\,,\end{split}$
(9)
and applying the equations of motion, we obtain a conserved charge as
$Q=\frac{{\partial L}}{{\partial\dot{a}}}\left({2a}\right)+\frac{{\partial
L}}{{\partial\dot{\varphi}}}\left({-3\varphi}\right),\quad\frac{dQ}{{dt}}=0\,.$
Therefore, using the gauge $N(t)=1$, we get
$\begin{split}Q&=3ka\dot{a}\left({\dot{\varphi}^{2}-\varphi^{2}}\right)\left({2a}\right)+\left({a^{3}+3ka\dot{a}^{2}}\right)\dot{\varphi}\left({-3\varphi}\right)\\\
&=6ka^{3}H\left({\dot{\varphi}^{2}-\varphi^{2}}\right)-3a^{3}\left({1+3kH^{2}}\right)\frac{d}{{dt}}\left({\frac{{\varphi^{2}}}{2}}\right)\\\
&=12ka^{3}Hp-\frac{3}{2}a^{3}\left({1+3kH^{2}}\right)\left({\dot{\rho}-\dot{p}}\right)=constant\,.\end{split}$
(10)
Here we have used $H\equiv\dot{a}/a$, the energy density
$\rho\equiv\dot{\varphi}^{2}/2+\varphi^{2}/2$, and the pressure density
$p\equiv\dot{\varphi}^{2}/2-\varphi^{2}/2$ of $\varphi$.
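The statement $dQ/dt=0$ rests on the off-shell Noether identity $dQ/dt=2a\,E_{a}-3\varphi\,E_{\varphi}$, where $E_{a}$ and $E_{\varphi}$ are the Euler-Lagrange expressions of (5); on any solution both vanish, so $Q$ is conserved. This identity can be checked symbolically; a `sympy` sketch (illustrative, not from the paper):

```python
import sympy as sp

t, k, rho0 = sp.symbols('t k rho_0', positive=True)
a = sp.Function('a', positive=True)(t)
ph = sp.Function('phi')(t)

# Lagrangian (5) in the gauge N(t) = 1
L = a**3/2*(ph.diff(t)**2 - ph**2) \
    + sp.Rational(3, 2)*k*a*a.diff(t)**2*(ph.diff(t)**2 - ph**2) - rho0

def euler_lagrange(L, q):
    return sp.diff(L.diff(q.diff(t)), t) - L.diff(q)

# Noether charge of the scaling symmetry, with variations delta a = 2a, delta phi = -3 phi
Q = L.diff(a.diff(t))*2*a + L.diff(ph.diff(t))*(-3*ph)

# off-shell identity: dQ/dt equals the Euler-Lagrange expressions contracted
# with the variations, hence dQ/dt = 0 on any solution
identity = sp.simplify(Q.diff(t) - (euler_lagrange(L, a)*2*a + euler_lagrange(L, ph)*(-3*ph)))
print(identity)  # 0
```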
We note that for a solution like $H=H_{0}$, $\dot{\rho}=\dot{p}=0$ and
$\varphi=\varphi_{0}=constant\neq 0$ (that is, $\dot{\varphi}=0$), we have
$\left.{\frac{{dQ}}{{dt}}}\right|_{c}=-12ka^{3}\left({3H_{0}^{2}+\dot{H}_{0}}\right)\rho_{0}\neq
0\,,$ (11)
where the right-hand side does not vanish: the slow-rolling condition gives
$\dot{H}_{0}\approx 0$, but the $3H_{0}^{2}$ term remains. Thus in this case
the scaling symmetry of the Lagrangian (5) breaks, and by that we obtain the
Lagrangian (4). Indeed, we will find that the point $H=H_{0}=constant$,
$\varphi=\varphi_{0}=constant\neq 0$ is a stable critical point of the
dynamical system of the Lagrangian (5), and that it is the unique critical
point.
The equation of motion for $a$ from the Lagrangian (5), $\delta S/\delta
a=0$ (using the gauge $N(t)=1$), is
$\frac{d}{{dt}}\left({\frac{{\partial
L}}{{\partial\dot{a}}}}\right)-\frac{{\partial L}}{{\partial a}}=0\,,$
which yields
$\begin{split}&3k\dot{a}\dot{a}\left({\dot{\varphi}^{2}-\varphi^{2}}\right)+3ka\ddot{a}\left({\dot{\varphi}^{2}-\varphi^{2}}\right)+6ka\dot{a}\frac{d}{{dt}}\left({\frac{{\dot{\varphi}^{2}}}{2}-\frac{{\varphi^{2}}}{2}}\right)\\\
&-\frac{{3a^{2}}}{2}\left({\dot{\varphi}^{2}-\varphi^{2}}\right)-\frac{{3k}}{2}\dot{a}^{2}\left({\dot{\varphi}^{2}-\varphi^{2}}\right)=0\,,\end{split}$
(12)
and by using $H=\dot{a}/a$, $\ddot{a}/a=\dot{H}+H^{2}$ and the momentum
density $p=\dot{\varphi}^{2}/2-\varphi^{2}/2$, the last equation becomes
$\left({6k\dot{H}+9kH^{2}-3}\right)p+6kH\frac{{dp}}{{dt}}=0\,.$
Using a dimensionless time parameter defined as
$\eta=\ln\left({a/a_{0}}\right)$ which regards the scale factor $a$ as a
cosmological time, we have $d/dt=Hd/d\eta$. The last equation becomes
$\left({6kHH^{\prime}+9kH^{2}-3}\right)p+6kH^{2}p^{\prime}=0\,,$
or
$\left({h^{\prime}+3h-3}\right)p+2hp^{\prime}=0\,,$ (13)
where the prime denotes the derivative with respect to the dimensionless time
$\eta$, and we have introduced the dimensionless function $h=3kH^{2}$.
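As a cross-check of this reduction, the Euler-Lagrange equation (12) for $a$ can be compared symbolically with equation (13) written back in cosmic time, $(6k\dot{H}+9kH^{2}-3)p+6kH\dot{p}=0$; the two differ only by an overall factor $a^{2}$. A `sympy` sketch (illustrative):

```python
import sympy as sp

t, k = sp.symbols('t k', positive=True)
a = sp.Function('a', positive=True)(t)
ph = sp.Function('phi')(t)

# scalar-field plus non-minimal-coupling part of Lagrangian (5), gauge N(t) = 1
L = a**3/2*(ph.diff(t)**2 - ph**2) + sp.Rational(3, 2)*k*a*a.diff(t)**2*(ph.diff(t)**2 - ph**2)
EL_a = sp.diff(L.diff(a.diff(t)), t) - L.diff(a)   # equation (12)

H = a.diff(t)/a
p = ph.diff(t)**2/2 - ph**2/2                      # pressure density of phi

# equation (13) in cosmic time: (6k*Hdot + 9k*H^2 - 3)*p + 6k*H*pdot = 0;
# EL_a is a^2 times its left-hand side
target = (6*k*sp.diff(H, t) + 9*k*H**2 - 3)*p + 6*k*H*sp.diff(p, t)
print(sp.simplify(EL_a - a**2*target))  # 0
```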
The equation of motion for $\varphi$ from the Lagrangian (5), $\delta
S/\delta\varphi=0$ (using the gauge $N(t)=1$), is
$\left({3a^{2}\dot{a}+3k\dot{a}\dot{a}^{2}+6ka\dot{a}\ddot{a}}\right)\dot{\varphi}+\left({a^{3}+3ka\dot{a}^{2}}\right)\ddot{\varphi}+a^{3}\varphi+3ka\dot{a}^{2}\varphi=0\,.$
Following the same steps as for $a$, we obtain
$\left({h^{\prime}+3h+3}\right)\left(\rho+p\right)+\left({1+h}\right)\rho^{\prime}=0\,,$
(14)
where we used the energy density $\rho=\dot{\varphi}^{2}/2+\varphi^{2}/2$ of
$\varphi$.
We note that the above equations for $\rho$ and $p$ both contain
$h^{\prime}$; eliminating it reduces the two equations (13) and (14) to a
single equation.
The constraint equation of the Lagrangian (5), $\delta S/\delta N=0$, implies
$\frac{{a^{3}}}{2}\left({-\frac{{\dot{\varphi}^{2}}}{{N^{2}}}-\varphi^{2}}\right)-3k\frac{{a\dot{a}^{2}}}{{N^{3}}}\left({\frac{{\dot{\varphi}^{2}}}{N}-N\varphi^{2}}\right)+\frac{{3k}}{2}\frac{{a\dot{a}^{2}}}{{N^{2}}}\left({-\frac{{\dot{\varphi}^{2}}}{{N^{2}}}-\varphi^{2}}\right)-\rho_{0}^{(m)}=0\,.$
Using the gauge $N(t)=1$, we obtain
$\frac{{a^{3}}}{2}\left({\dot{\varphi}^{2}+\varphi^{2}}\right)+\frac{{9k}}{2}a\dot{a}^{2}\dot{\varphi}^{2}-\frac{{3k}}{2}a\dot{a}^{2}\varphi^{2}+\rho_{0}^{(m)}=0\,,$
or
$2a^{3}\rho+3a^{3}h\left({\rho+p}\right)-a^{3}h\left({\rho-p}\right)+2\rho_{0}^{(m)}=0\,.$
Therefore we obtain the energy constraint equation
$\rho+h\left({\rho+2p}\right)+\frac{{\rho_{0}^{(m)}}}{{a^{3}}}=0\,.$ (15)
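The passage from $\delta S/\delta N=0$ to the constraint (15) can also be checked symbolically; the following `sympy` sketch (illustrative) differentiates the Lagrangian (5), with the lapse kept explicit, and compares the result at $N=1$ with (15) multiplied by $-a^{3}$:

```python
import sympy as sp

k, N, rho0 = sp.symbols('k N rho_0', positive=True)
a, ad, ph, phd = sp.symbols('a adot phi phidot', real=True)

# Lagrangian (5) with the lapse N kept explicit
L = a**3/2*(phd**2/N - N*ph**2) \
    + sp.Rational(3, 2)*k*a*ad**2/N**2*(phd**2/N - N*ph**2) - N*rho0

constraint = L.diff(N).subs(N, 1)   # delta S / delta N = 0, then gauge N = 1

rho = phd**2/2 + ph**2/2
p = phd**2/2 - ph**2/2
h = 3*k*ad**2/a**2                  # h = 3 k H^2 with H = adot/a

# equation (15) multiplied by -a^3: -(a^3 rho + a^3 h (rho + 2p) + rho_0^(m))
target = -(a**3*rho + a**3*h*(rho + 2*p) + rho0)
print(sp.simplify(constraint - target))  # 0
```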
We note that the energy constraint (15) does not involve any critical energy
(such as $3H^{2}$). But it does impose conditions: since
$\rho=\dot{\varphi}^{2}/2+\varphi^{2}/2$, $h=3kH^{2}$ and $\rho_{0}^{(m)}\neq
0$ are always positive, the condition $\rho+2p<0$ must always be satisfied.
Therefore the pressure $p=\dot{\varphi}^{2}/2-\varphi^{2}/2$ must always be
negative, $p<0$, and cannot vanish. Such a negative pressure is precisely what
is needed to obtain the universal expansion.
This means that the potential energy $\varphi^{2}/2$ is always larger than
the kinetic energy $\dot{\varphi}^{2}/2$. Hence the kinetic energy cannot grow
while the potential energy vanishes; the opposite is possible, namely the
potential energy grows while the kinetic energy decreases until it vanishes.
Thus the solution $\dot{\varphi}=0$, $\varphi=\varphi_{0}=constant\neq 0$ is
possible.
Since $\rho+2p<0$ and $p<0$, we obtain $\rho+3p<0$, which according to the
Friedmann equations implies a universal accelerated expansion.
We also note that the case $\rho=0$ (i.e. $\varphi=0$) cannot occur, since it
implies $p=0$, and then the constraint gives
$0+{{\rho_{0}^{(m)}}}/{{a^{3}}}=0$, which cannot be satisfied unless
$\rho_{0}^{(m)}=0$. Therefore the acceptable minimum energy is $\rho_{0}\neq
0$ (at $\dot{\varphi}=0$), and this value corresponds to the vacuum
expectation value of $\varphi^{2}$, as discussed just after equation (6).
The energy constraint equation (15) does not give $h$ as a function of $\rho$
and $p$ alone, since it also involves $a(t)$. We therefore eliminate
$h^{\prime}$ from the two equations (13) and (14) to get
$\left({1+h}\right)p\rho^{\prime}-2h\left({\rho+p}\right)p^{\prime}+6p\left({\rho+p}\right)=0\,.$
(16)
The same equation is obtained if we determine $h^{\prime}$ from the
constraint equation (15) and use it in the equations (13) and (14).
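The elimination of $h^{\prime}$ between (13) and (14) leading to (16) is a short algebraic step that can be verified symbolically; a `sympy` sketch (illustrative), with `hp`, `rp`, `pp` standing for $h^{\prime}$, $\rho^{\prime}$, $p^{\prime}$:

```python
import sympy as sp

rho, p, h, hp, rp, pp = sp.symbols('rho p h hp rp pp')

eq13 = (hp + 3*h - 3)*p + 2*h*pp               # equation (13)
eq14 = (hp + 3*h + 3)*(rho + p) + (1 + h)*rp   # equation (14)

# solve each equation for h', equate, and clear the denominators p and (rho + p)
elim = (sp.solve(eq13, hp)[0] - sp.solve(eq14, hp)[0]) * p * (rho + p)

target16 = (1 + h)*p*rp - 2*h*(rho + p)*pp + 6*p*(rho + p)  # equation (16)
print(sp.simplify(sp.expand(elim) - target16))  # 0
```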
In order to get another equation for $\rho^{\prime}$ and $p^{\prime}$, we
eliminate $1/a^{3}$ between the charge equation (10) and the constraint
equation (15). We obtain
obtain
$12kHp-\frac{3}{2}\left({1+h}\right)H\left({\rho^{\prime}-p^{\prime}}\right)+c\rho+ch\left({\rho+2p}\right)=0,\quad\textrm{for}\quad
c=\frac{Q}{{\rho_{0}^{(m)}}}\,,$
which gives
$\rho^{\prime}-p^{\prime}=\frac{{8kp}}{{\left({1+h}\right)}}+\frac{{2c\rho}}{{3H\left({1+h}\right)}}+\frac{{2ch}}{{3H\left({1+h}\right)}}\left({\rho+2p}\right)\,.$
(17)
Since neither $H$ nor $h>0$ can vanish for any solution, the factor
$H\left({1+h}\right)$ in the denominator of the last equation causes no
problem.
By that we have two equations, (16) and (17), that include $\rho^{\prime}$,
$p^{\prime}$, $\rho$, $p$ and $h=3kH^{2}$. From these equations, we obtain
$\begin{split}&\left[2h\left({\rho+p}\right)-\left({1+h}\right)p\right]\rho^{\prime}\\\
&=\left({\rho+p}\right)\left[{\frac{{16khp}}{{\left({1+h}\right)}}+\frac{{4ch\rho}}{{3H\left({1+h}\right)}}+\frac{{4ch^{2}}}{{3H\left({1+h}\right)}}\left({\rho+2p}\right)+6p}\right]\\\
&=\left({\rho+p}\right)\left[{{4ckH}\rho+\frac{{16khp}}{{\left({1+h}\right)}}+\frac{{8ch^{2}}}{{3H\left({1+h}\right)}}p+6p}\right]\,,\end{split}$
(18)
and
$\left[2h\left({\rho+p}\right)-\left({1+h}\right)p\right]p^{\prime}=8kp^{2}+\frac{{2c}}{{3H}}p\rho+2ckHp\left({\rho+2p}\right)+6p\left({\rho+p}\right)\,.$
(19)
We have $p<0$, $h>0$ and $\rho+p=\dot{\varphi}^{2}\geq 0$; therefore
$[2h\left({\rho+p}\right)-\left({1+h}\right)p]>0$ always, and it does not
vanish. Thus there is no problem in multiplying $\rho^{\prime}$ and
$p^{\prime}$ by $[2h\left({\rho+p}\right)-\left({1+h}\right)p]$.
## 3 Critical points and scaling symmetry breaking
We note that the constraint equation (15) does not involve any critical
energy (such as $3H^{2}$), so we do not need to divide $\rho$ and $p$ by any
reference energy; and since we set $m_{pl}=1$, the variables $\rho$, $p$, $H$,
$a$ and $\eta=\ln(a)$ are dimensionless. Thus the critical points of the
equations (18) and (19) are found as the points with
$\rho^{\prime}=p^{\prime}=0$, at a time $\eta_{0}=\ln(a_{0})$, in the space
$(\rho,p)$, where $H$ can be written in terms of these quantities. We note
that the time $\eta_{0}=\ln(a_{0})$ does not mean that the universal expansion
stops; it is just a point in the space $(\rho,p)$, near which the velocity
$(\rho^{\prime}(\eta),p^{\prime}(\eta))$ decreases until it vanishes at
$\eta_{0}=\ln(a_{0})$. So this is simply one moment of the expansion (at
$a_{0}=a(t_{0})$). Since the velocity $(\rho^{\prime},p^{\prime})$ is
infinitesimal in the vicinity of a point with $\rho^{\prime}=p^{\prime}=0$,
the evolution of the system is slowest there, and most of the time is spent in
the vicinity of the critical points. Therefore the solutions near the critical
points characterize the solutions of the system to a good approximation, i.e.,
the solutions at $t=\pm\infty$ or at $t=t_{0}$.
We note that, since the scale factor $a(t)$ is assumed to be always
increasing, the energy density $\rho$ of the scalar field decreases until it
reaches its smallest possible value at $\rho^{\prime}=p^{\prime}=0$
($\eta_{0}=\ln(a_{0})$). We denote by $(\rho_{0},p_{0})$ a critical point
($\rho^{\prime}=p^{\prime}=0$); it belongs to a trajectory in the space
$(\rho,p)$ parameterized by the time parameter $\eta=\ln(a)$, so the critical
point $(\rho_{0},p_{0})$ is determined by the time $\eta_{0}=\ln(a_{0})$ on
that trajectory. Thus each critical point ($\rho^{\prime}=p^{\prime}=0$) is
characterized by the quantities $\rho_{0}$, $p_{0}$, $H_{0}$ and
$\eta_{0}=\ln(a_{0})$. As we will show, there is only one critical point, and
it is associated with the scaling symmetry breaking of the Lagrangian (5).
The condition $\rho^{\prime}=0$ (equation (18)) gives the following two
equations,
$4ckH\rho+\frac{{16khp}}{{\left({1+h}\right)}}+\frac{{8ch^{2}}}{{3H\left({1+h}\right)}}p+6p=0\,,$
(20)
and
$\rho+p=0\,.$ (21)
While the condition $p^{\prime}=0$ (equation (19)), with $p\neq 0$, gives
only one equation,
$8kp^{2}+\frac{{2c}}{{3H}}p\rho+2ckHp\left({\rho+2p}\right)+6p\left({\rho+p}\right)=0\,.$
(22)
The energy constraint (15) implies (at $\rho^{\prime}=p^{\prime}=0$)
$\left.{h^{\prime}}\right|_{c}\left({\rho_{0}+2p_{0}}\right)-\frac{{3\rho_{0}^{(m)}}}{{a_{0}^{4}}}\left.{a^{\prime}}\right|_{c}=0\,,$
and by using
$a^{\prime}=\frac{{\partial a}}{{\partial\eta}}=\frac{{\partial
a}}{{\partial\ln\left(a\right)}}=a\frac{{\partial a}}{{\partial a}}=a\,,$
we get the equation
$\left.{h^{\prime}}\right|_{c}\left({\rho_{0}+2p_{0}}\right)-\frac{{3\rho_{0}^{(m)}}}{{a_{0}^{3}}}=0\,,$
(23)
which determines $h^{\prime}$ at the critical point
$\rho^{\prime}=p^{\prime}=0$. Note that $\left.{h^{\prime}}\right|_{c}=0$ is
satisfied only when $\rho_{0}^{(m)}=0$ (giving the de Sitter solution).
However, if we assume that ${\rho_{0}^{(m)}}/{a_{0}^{3}}$ is small enough,
which implies $\left.{h^{\prime}}\right|_{c}\approx 0$ (so $\dot{H}\approx
0$), we obtain solutions close to the de Sitter solution (as we will find
under the slow-rolling condition).
In fact, the two equations (20) and (22) are incompatible, so the critical
points are given only by the two equations (21) and (22). To see this
incompatibility, multiply equation (20) by $3H\left({1+h}\right)/2\neq 0$ to
get
$2c\rho h+2ch^{2}\left({\rho+2p}\right)+24khpH+9pH\left({1+h}\right)=0\,.$
(24)
Similarly, multiplying equation (22) by $3Hh\neq 0$, dividing it by $p\neq
0$, and using $h=3kH^{2}$, we find
$24kHhp+2c\rho h+2ch^{2}\left({\rho+2p}\right)+18Hh\left({\rho+p}\right)=0\,.$
(25)
Now subtracting equation (24) from equation (25), we obtain
$-9pH\left({1+h}\right)+18Hh\left({\rho+p}\right)=0\,.$ (26)
But as we have seen, the pressure $p$ in this setup is always negative and non-
vanishing, $p<0$ (which follows from the conditions $\rho\neq 0$ and
$(\rho+2p)<0$), and $H>0$ also does not vanish, while $(\rho+p)\geq 0$. Thus
the left-hand side of the last equation is a sum of a strictly positive term
and a non-negative term, so it cannot vanish, and the equation cannot be
satisfied. Therefore equations (20) and (22) are inconsistent, and the
critical points $\rho^{\prime}=p^{\prime}=0$ are described only by the two
equations (21) and (22).
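The inconsistency hinges on the algebraic identity that subtracting equation (24) from equation (25) leaves exactly the left-hand side of equation (26). This is a polynomial identity in the symbols, so it can be checked numerically at arbitrary sample points; the following is our own verification sketch, not part of the paper:

```python
import random

# Check that Eq.(25) - Eq.(24) reduces to -9pH(1+h) + 18Hh(rho+p), i.e. Eq.(26),
# for arbitrary values of the symbols (a polynomial identity).
random.seed(1)
for _ in range(100):
    c, k, H, h, rho, p = (random.uniform(-3, 3) for _ in range(6))
    eq24 = 2*c*rho*h + 2*c*h**2*(rho + 2*p) + 24*k*h*p*H + 9*p*H*(1 + h)
    eq25 = 24*k*H*h*p + 2*c*rho*h + 2*c*h**2*(rho + 2*p) + 18*H*h*(rho + p)
    eq26 = -9*p*H*(1 + h) + 18*H*h*(rho + p)
    assert abs((eq25 - eq24) - eq26) < 1e-9
```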
From equation (21) we get $p_{0}=-\rho_{0}<0$; using this in equation (22),
we obtain
$3ckH^{2}_{0}+12kH_{0}-c=0\,.$
Its positive solution is
$H_{0}=\frac{{-2}}{c}+\sqrt{\frac{4}{{c^{2}}}+\frac{1}{{3k}}}=\frac{{-2\rho_{0}^{(m)}}}{Q}+\sqrt{\left({\frac{{2\rho_{0}^{(m)}}}{Q}}\right)^{2}+\frac{1}{{3k}}}\,.$
From the equation of the charge (10), we get
$Q=12ka^{3}Hp-\frac{3}{2}a^{3}\left({1+3kH^{2}}\right)\left({\dot{\rho}-\dot{p}}\right)=-12ka_{0}^{3}H_{0}\rho_{0}\,.$
But the quantities $a_{0}$, $H_{0}$, and $\rho_{0}$ are all positive,
therefore $Q$ is negative. Thus we replace $Q\to-Q$ to work with a positive
quantity in what follows. In this manner we obtain the expansion rate at
the critical point as
$H_{0}=\frac{{2\rho_{0}^{(m)}}}{Q}+\sqrt{\left({\frac{{2\rho_{0}^{(m)}}}{Q}}\right)^{2}+\frac{1}{{3k}}}>0\,.$
We note that for $\rho_{0}^{(m)}\ll\rho_{0}$, this expansion rate approximates
to $H_{0}=1/\sqrt{3k}$, which agrees with the slow-rolling solution.
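As a sanity check (our own numerical sketch, with arbitrary sample values), the positive root of $3ckH_{0}^{2}+12kH_{0}-c=0$ obtained from the quadratic formula matches the closed form above, and tends to the de Sitter value $1/\sqrt{3k}$ when $2/c=2\rho_{0}^{(m)}/Q$ is small:

```python
import math

def H0_root(c, k):
    """Positive root of 3*c*k*H^2 + 12*k*H - c = 0 (quadratic formula)."""
    return (-12*k + math.sqrt(144*k**2 + 12*c**2*k)) / (6*c*k)

c, k = 0.7, 1.3  # arbitrary positive sample values
H0 = H0_root(c, k)
assert abs(3*c*k*H0**2 + 12*k*H0 - c) < 1e-9                    # solves the quadratic
assert abs(H0 - (-2/c + math.sqrt(4/c**2 + 1/(3*k)))) < 1e-9    # closed form above
# for 2/c -> 0 the root approaches the de Sitter value 1/sqrt(3k)
assert abs(H0_root(1e6, k) - 1/math.sqrt(3*k)) < 1e-4
```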
Now we show that the conservation of the charge (10) is broken at this
critical point. We have
$\begin{split}\left.{Q^{\prime}}\right|_{c}&=\left.{\frac{{dQ}}{{d\eta}}}\right|_{c}=-12ka^{3}\left.{\left({3H+H^{\prime}}\right)}\right|_{c}\rho_{0}=-\frac{{12ka^{3}}}{{3kH_{0}}}\left.{\left({9kH^{2}+3kHH^{\prime}}\right)}\right|_{c}\rho_{0}\\&=-\frac{{12ka^{3}}}{{3kH_{0}}}\left.{\left({3h+\frac{1}{2}h^{\prime}}\right)}\right|_{c}\rho_{0}\,,\end{split}$
(27)
where we have used $\rho^{\prime\prime}=p^{\prime\prime}=0$: since
$\rho^{\prime}\sim(\rho-\rho_{0})$ and $p^{\prime}\sim(p-p_{0})$,
differentiating gives $\rho^{\prime\prime}\sim\rho^{\prime}$ and
$p^{\prime\prime}\sim p^{\prime}$, and therefore
$\rho^{\prime\prime}=p^{\prime\prime}=0$ at the critical point
$(\rho_{0},p_{0})$.
From the equations (15) and (23), we obtain
$\left.h\right|_{c}=1+\frac{{\rho_{0}^{(m)}}}{{\rho_{0}a^{3}_{0}}},\quad\textrm{and}\quad\left.{h^{\prime}}\right|_{c}=-\frac{{3\rho_{0}^{(m)}}}{{\rho_{0}a^{3}_{0}}}\,.$
(28)
Using these relations in $Q^{\prime}$, we get
$\left.{Q^{\prime}}\right|_{c}=-\frac{{12ka_{0}^{3}}}{{3kH_{0}}}\left({3+\frac{{3\rho_{0}^{(m)}}}{{2\rho_{0}a_{0}^{3}}}}\right)\rho_{0}\neq
0\,.$
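The last expression follows by substituting the values (28) into equation (27): with $x=\rho_{0}^{(m)}/(\rho_{0}a_{0}^{3})$, one has $3h+h^{\prime}/2=3+3x-3x/2=3+\tfrac{3}{2}x$. A quick numerical check of this step (our own sketch):

```python
import random

# Substituting h|_c = 1 + x and h'|_c = -3x (x = rho0^(m)/(rho0*a0^3))
# into the bracket of Eq. (27): 3h + h'/2 = 3 + (3/2)x, as in Q'|_c above.
random.seed(2)
for _ in range(50):
    x = random.uniform(0, 1)
    h_c, hp_c = 1 + x, -3*x
    assert abs((3*h_c + hp_c/2) - (3 + 1.5*x)) < 1e-12
```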
In this situation, the scaling symmetry of the Lagrangian (5) is broken at the
critical point $\dot{\varphi}=0$, $\varphi(a_{0})=\varphi_{0}\neq 0$, at time
$\eta_{0}=\ln(a_{0})$, thus we get the Lagrangian (4). We note that for
$\rho_{0}^{(m)}\ll\rho_{0}$, we have $\left.{h^{\prime}}\right|_{c}\approx 0$,
implying $H=H_{0}=\text{constant}$, which agrees with the slow-rolling solution
and indicates that near the critical point $\dot{\varphi}\approx 0$,
$\varphi_{0}\neq 0$, the universal expansion rate becomes constant and we
obtain a de Sitter solution.
We note that the quantities $\varphi_{0}$ and $H_{0}$ need not depend on
$\eta_{0}=\ln(a_{0})$; indeed, only the dust matter density $\rho^{(m)}\sim 1/a^{3}$
depends on $a_{0}$. Thus we are free to choose $a_{0}$ so as to get a suitable
$\rho_{0}^{(m)}$ at the point of scaling symmetry breaking.
Now we show that the critical point $\dot{\varphi}=0$, $\varphi=\varphi_{0}>0$
is stable. We first find the first-order approximation of $\rho^{\prime}$ and
$p^{\prime}$ near the critical point $(\rho_{0},p_{0})$, where $\rho_{0}+p_{0}=0$.
According to equations (28), and with
$\rho_{0}^{(m)}\ll\rho_{0}$, we can neglect perturbations of $h$ and hence of
$H$: $\delta H\sim 1/a_{0}^{3}\ll 1$.
From equations (18) and (19), we have
$\begin{split}&\left[2h\left({\rho+p}\right)-\left({1+h}\right)p\right]\rho^{\prime}\\&=\left({\rho+p}\right)\left[{{4ckH}\rho+\frac{{16khp}}{{\left({1+h}\right)}}+\frac{{8ch^{2}}}{{3H\left({1+h}\right)}}p+6p}\right]\,,\end{split}$
(29)
and
$\left[2h\left({\rho+p}\right)-\left({1+h}\right)p\right]p^{\prime}=8kp^{2}+\frac{{2c}}{{3H}}p\rho+2ckHp\left({\rho+2p}\right)+6p\left({\rho+p}\right)\,.$
(30)
Multiplying the first equation by ${3H\left({1+h}\right)/2}$ and using
$h=3kH^{2}$, we obtain
$\begin{split}&\frac{{3H\left({1+h}\right)}}{2}\left[{2h\left({\rho+p}\right)-\left({1+h}\right)p}\right]\rho^{\prime}\\&=\left({\rho+p}\right)\left[{2ch\left({1+h}\right)\rho+24kHhp+4ch^{2}p+9H\left({1+h}\right)p}\right]\\&=\left({\rho+p}\right)\left[{2ch\rho+2ch^{2}\rho+24kHhp+4ch^{2}p+9H\left({1+h}\right)p}\right]\\&=\left({\rho+p}\right)\left[{24kHhp+2ch\rho+2ch^{2}\left({\rho+2p}\right)+9H\left({1+h}\right)p}\right]\,.\end{split}$
(31)
Thus, near $\rho_{0}+p_{0}=0$ and using equation (25) (the equation of
$p^{\prime}=0$), we get to first order
$\begin{split}\left({1+h_{0}}\right)^{2}\rho_{0}\rho^{\prime}&=\left({\Delta\rho+\Delta p}\right)\left[{-12h\left({\rho_{0}+p_{0}}\right)-6\left({1+h_{0}}\right)\rho_{0}}\right]\\&\to\left({\Delta\rho+\Delta p}\right)\left[{-6\left({1+h_{0}}\right)\rho_{0}}\right]\,,\end{split}$ (32)
so
$\left({1+h_{0}}\right)\rho_{0}\rho^{\prime}=\left({\Delta\rho+\Delta
p}\right)\left({-6\rho_{0}}\right)\,\Rightarrow\rho^{\prime}=\frac{{-6}}{{1+h_{0}}}\left({\Delta\rho+\Delta
p}\right)\,,$
for $\Delta\rho=\rho-\rho_{0}\ll 1$ and $\Delta p=p-p_{0}\ll 1$. Using this
in the first-order perturbation of equation (17), we get
$p^{\prime}=\frac{{-2}}{{1+h_{0}}}\left[{3\Delta\rho+\left({3+4k}\right)\Delta
p}\right]\,.$
From the last two equations, we obtain the eigenvalues
$(\lambda_{1},\lambda_{2})$ of the linearized system for
$(\rho^{\prime},p^{\prime})$ near $(\rho_{0},p_{0})$:
$\lambda_{1}=-\frac{1}{{1+h_{0}}}\left({6+4k-2\sqrt{4k^{2}+9}}\right)\approx-\left({3+2k-\sqrt{4k^{2}+9}}\right)\,,$
and
$\lambda_{2}=-\frac{1}{{1+h_{0}}}\left({6+4k+2\sqrt{4k^{2}+9}}\right)\approx-\left({3+2k+\sqrt{4k^{2}+9}}\right)\,.$
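These eigenvalues can be cross-checked numerically (our own sketch, with sample values for $k$ and $h_{0}$): the linearized system $\rho^{\prime}=-\tfrac{6}{1+h_{0}}(\Delta\rho+\Delta p)$, $p^{\prime}=-\tfrac{2}{1+h_{0}}\left[3\Delta\rho+(3+4k)\Delta p\right]$ has a $2\times 2$ Jacobian whose characteristic roots match the formulas above.

```python
import math

k, h0 = 0.8, 1.0  # sample positive values; h0 ~ 1 near the critical point

# Jacobian of (rho', p') with respect to (d_rho, d_p) from the linearized equations
a, b = -6/(1 + h0), -6/(1 + h0)
c2, d = -6/(1 + h0), -2*(3 + 4*k)/(1 + h0)

tr, det = a + d, a*d - b*c2
disc = math.sqrt(tr**2 - 4*det)
lam_small, lam_big = (tr + disc)/2, (tr - disc)/2  # lam_small is less negative

lam1 = -(6 + 4*k - 2*math.sqrt(4*k**2 + 9))/(1 + h0)
lam2 = -(6 + 4*k + 2*math.sqrt(4*k**2 + 9))/(1 + h0)
assert abs(lam_small - lam1) < 1e-9 and abs(lam_big - lam2) < 1e-9
assert lam1 < 0 and lam2 < 0  # both negative for k > 0: stable critical point
```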
Since $k>0$ (regarding equation (8)), we always have
$(6+4k-2\sqrt{4k^{2}+9})>0$; therefore both $\lambda_{1}$ and $\lambda_{2}$
are negative, and thus the critical point $(\rho_{0},p_{0})$ with
$\rho_{0}+p_{0}=0$, $\rho_{0}>0$ is stable.
Therefore the global scaling symmetry breaking is inevitable, and the critical
point is global since it depends on the vacuum energy of the scalar field
$\varphi(t)$, which can be related to quantum phenomena (i.e., quantization,
bosonic fields, …).
## 4 Slow Rolling Solutions
According to equation (23), at all critical points we have
$\left.{h^{\prime}}\right|_{c}\approx 0$ when ${\rho_{0}^{(m)}}/{a_{0}^{3}}$
is small enough (${\rho_{0}^{(m)}}/{a_{0}^{3}}\ll 1$). This condition leads
to the slow-rolling conditions
$\left|{\ddot{\varphi}}\right|\ll\left|{\varphi}\right|$ and
$\left|{\dot{\varphi}}\right|\ll\left|{\varphi}\right|$, which hold
near the critical point $\dot{\varphi}=0$, $\varphi(a_{0})=\varphi_{0}\neq
0$ of the scaling-symmetric Lagrangian (equation (5)) and yield solutions
close to the de Sitter solution (universal expansion with constant rate
$H$). The importance of the slow-rolling solutions is that they capture
the behaviour of all variables near the critical point $\dot{\varphi}=0$,
$\varphi(a_{0})=\varphi_{0}\neq 0$, before the scaling symmetry breaking.
As usual, we get the equation for the expansion rate $H$ from the energy
constraint equation (15). We obtain
$\begin{split}h=3kH^{2}&=\frac{{-\rho-\frac{{\rho_{0}^{(m)}}}{{a^{3}}}}}{{\rho+2p}}=\frac{{-\dot{\varphi}^{2}-\varphi^{2}-\frac{{\rho_{0}^{(m)}}}{{a^{3}}}}}{{\dot{\varphi}^{2}+\varphi^{2}+2\dot{\varphi}^{2}-2\varphi^{2}}}\\&=\frac{{-\dot{\varphi}^{2}-\varphi^{2}-\frac{{\rho_{0}^{(m)}}}{{a^{3}}}}}{{3\dot{\varphi}^{2}-\varphi^{2}}}\approx\frac{{-\varphi^{2}-\frac{{\rho_{0}^{(m)}}}{{a^{3}}}}}{{-\varphi^{2}}}=1+\frac{{\rho_{0}^{(m)}}}{{\varphi^{2}a^{3}}}\approx 1+\frac{{\rho_{0}^{(m)}}}{{\varphi_{0}^{2}a^{3}}}\,,\end{split}$ (33)
where we have used the slow-rolling condition
$\left|{\dot{\varphi}}\right|\ll\left|{\varphi}\right|$. If we further impose
${{\rho_{0}^{(m)}}}/{{\varphi_{0}^{2}a^{3}}}\ll 1$, which holds at large
scale factor values $a\gg 1$ and with $\rho_{0}^{(m)}\ll\varphi_{0}^{2}/2$,
then the universe is dominated by the ground-state energy of $\varphi$
(vacuum energy), which plays the role of a cosmological constant; indeed, we
identified the energy $\varphi_{0}^{2}/2$ with the cosmological constant in
formulas (8). We then obtain $3kH^{2}\approx 1$, and therefore an
approximately constant expansion rate $H_{0}=1/\sqrt{3k}$.
Note that this phase occurs at late times $a\gg 1$ of the universal expansion.
Using $h^{\prime}=0$, $h=1$ in equation (14), we obtain
$6\dot{\varphi}^{2}+2\left({\dot{\varphi}\ddot{\varphi}+\varphi\dot{\varphi}}\right)=0\,\,\,\Rightarrow\,\,\,6\dot{\varphi}+2\left({\ddot{\varphi}+\varphi}\right)=0\,,$
which has the solution
$\varphi\left(t\right)=Ae^{-0.4t}+Be^{-2.6t}\,.$
for some real constants $A$ and $B$ (the exact exponents are
$(-3\pm\sqrt{5})/2\approx-0.38,\,-2.62$). It is clear that in this
approximation, the field $\varphi$ decreases in time until it vanishes.
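The decaying exponents follow from the characteristic equation of $\ddot{\varphi}+3\dot{\varphi}+\varphi=0$ (the equation above divided by $2$); a quick check of the roots (our own sketch):

```python
import math

# phi'' + 3 phi' + phi = 0  (from 6 phi' + 2(phi'' + phi) = 0, divided by 2)
# characteristic equation: r^2 + 3r + 1 = 0
r_plus = (-3 + math.sqrt(5))/2
r_minus = (-3 - math.sqrt(5))/2
assert abs(r_plus**2 + 3*r_plus + 1) < 1e-12
assert abs(r_minus**2 + 3*r_minus + 1) < 1e-12
# both roots are negative decaying exponents, approximately -0.38 and -2.62
assert abs(r_plus - (-0.382)) < 1e-3 and abs(r_minus - (-2.618)) < 1e-3
```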
However, $t$ is measured in units of the Planck mass, so $t=1$ corresponds to
the time $m_{pl}^{-1}$, which is a large value; thus the slow-rolling period
is long. But if a vacuum expectation value $\left\langle
0\right|\varphi^{2}\left|0\right\rangle=\varphi_{0}^{2}\neq 0$ appears, the
scaling symmetry breaks and the new Lagrangian (equation (4)) takes its
place.
## 5 Stability of ground state value of scalar field and energy
(This section is not included in the published edition.)
We have seen that there is a non-zero positive value of the energy density of
the scalar field $\varphi$; this value $\rho_{0}>0$ is attained at the
critical point $\dot{\varphi}=0$. But in order to relate $\varphi_{0}\neq 0$
to quantum phenomena (i.e., a vacuum expectation value), we need $\rho_{0}$
to be stable and independent of the time $\eta=\ln\left(a(t)\right)$. Then
we can regard $\varphi_{0}\neq 0$ as a global constant value given by
$\varphi_{0}^{2}=\left\langle\Omega\right|\hat{\varphi}^{2}\left|\Omega\right\rangle>0$,
for a ground state $\left|\Omega\right\rangle$. But we need to relate
$\hat{\varphi}$ and $\left|\Omega\right\rangle$ to a quantum phenomenon which
is global and does not depend on any geometry.
From the charge equation (10) and constraint equation (15), we obtain at the
critical point $\dot{\varphi}=0$, $\varphi=\varphi_{0}\neq 0$ the relations
$\left.Q\right|_{c}=Q=-12ka_{0}^{3}H_{0}\rho_{0}\Rightarrow\rho_{0}=-\frac{Q}{{12ka_{0}^{3}H_{0}}}\,;\quad
Q<0\,,$
and
$\rho_{0}-h_{0}\rho_{0}+\frac{{\rho_{0}^{(m)}}}{{a^{3}_{0}}}=0\Rightarrow\rho_{0}=\frac{{\rho_{0}^{(m)}}}{{\left({h_{0}-1}\right)a_{0}^{3}}}\,.$
(34)
These two equations imply
$-\frac{Q}{{12kH_{0}}}=\frac{{\rho_{0}^{(m)}}}{{\left({h_{0}-1}\right)}}\,,$
and by using $h=3kH^{2}$, we obtain
$H_{0}=-\frac{{2\rho_{0}^{(m)}}}{Q}+\sqrt{\left({\frac{{2\rho_{0}^{(m)}}}{Q}}\right)^{2}+\frac{1}{{3k}}}>0\,;\quad-Q>0\,.$
(35)
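As a numerical consistency check (our own sketch, with arbitrary sample values), the value (35) indeed satisfies $-Q/(12kH_{0})=\rho_{0}^{(m)}/(h_{0}-1)$ with $h_{0}=3kH_{0}^{2}$:

```python
import math

k, Q, rho0m = 1.2, -0.5, 0.05  # arbitrary sample values with Q < 0
H0 = -2*rho0m/Q + math.sqrt((2*rho0m/Q)**2 + 1/(3*k))  # Eq. (35)
h0 = 3*k*H0**2
assert H0 > 0
# the two expressions for rho0 agree: -Q/(12 k H0) == rho0m/(h0 - 1)
assert abs(-Q/(12*k*H0) - rho0m/(h0 - 1)) < 1e-9
```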
It is clear that $H_{0}$ does not depend on the scale factor $a_{0}$;
moreover, it is global through its dependence only on the constants $k$, $Q$
and $\rho_{0}^{(m)}$, which are global in the sense that they classify the
solutions (they do not depend on time).
Therefore $H_{0}$ is a global constant value. On the other hand, we have
$a\left(t\right)=a\left(0\right)e^{\int{H\left(t\right)dt}}\,.$
Regarding the scaling symmetry transformations (6), and before reaching the
critical point $\dot{\varphi}=0$, $\varphi_{0}=\varphi(a_{0})\neq 0$ (in its
vicinity), we have the more general solution
$a\left(t\right)=a\left(0\right)e^{2\alpha+\int{H\left(t\right)dt}}\,,$
for any real arbitrary constant $\alpha$. According to equations (28),
$h^{\prime}\approx 0$ and hence $H^{\prime}\approx 0$ when
$\rho^{(m)}_{0}/(a^{3}_{0}\rho_{0})\ll 1$. Thus in the vicinity of the critical
point $\dot{\varphi}=0$, $\varphi_{0}=\varphi(a_{0})\neq 0$, we use the value
(35) of $H_{0}$ to approximate $a\left(t\right)$ by
$a\left(t\right)=Ae^{2\alpha+H_{0}t}\,,$
for some constant $A>0$. If the critical point $\dot{\varphi}=0$,
$\varphi(a_{0})=\varphi_{0}\neq 0$ is reached at time $t=t_{0}$, we obtain
$a_{0}=a\left({t_{0}}\right)=Ae^{2\alpha+H_{0}t_{0}}\,.$
Now we can write
$2\alpha+H_{0}t_{0}=H_{0}T_{0}$
and choose $\alpha$ such that $T_{0}=1$, so that we obtain
$a_{0}=Ae^{H_{0}}\,,$
in the critical point $\dot{\varphi}=0$, $\varphi(a_{0})=\varphi_{0}\neq 0$.
But according to equation (35), $H_{0}=H(k,Q,\rho_{0}^{(m)})$, which implies
that $a_{0}$ depends only on the global constants $k$, $Q$ and
$\rho_{0}^{(m)}$. Thus $a_{0}(k,Q,\rho_{0}^{(m)})$ is also a global constant
and likewise classifies the solutions. Consequently, the energy density
$\rho_{0}$, equation (34), depends only on the global constants $k$, $Q$ and
$\rho_{0}^{(m)}$, so it too is a global, non-geometrical constant, and it
does not change under the universal expansion after passing the critical
point $a=a_{0}$ ($\dot{\varphi}=0$). Therefore
$\varphi_{0}^{2}=\left\langle\Omega\right|\hat{\varphi}^{2}\left|\Omega\right\rangle$
and $\left|\Omega\right\rangle$ are global structures, where
$\rho_{0}=\varphi_{0}^{2}/2$.
According to this discussion, we can regard $\rho_{0}$ as determined by the
vacuum expectation value of $\hat{\varphi}^{2}$, where $\hat{\varphi}$ is a
quantum field that does not depend on any geometry, as is the quantum ground
state $\left|\Omega\right\rangle$. Thereby the equality
$\varphi_{0}^{2}/2=\Lambda$ (equations (8)) is well defined, and the
cosmological constant $\Lambda$ in this view is a globally stable value: it
is not tied to the universal expansion, i.e., it does not change under the
universal expansion after passing the critical point
$a=a_{0}$ ($\dot{\varphi}=0$).
## 6 Summary and Conclusion
In this paper, we have studied some novel aspects of cosmological dynamics of
a quintessence scalar field non-minimally coupled to gravity in a spatially
flat FRW background via the Noether Symmetry approach. We considered the non-
minimal coupling between the scalar field and gravitational sector as
$RL^{(\varphi)}$, that is essentially a subclass of the general Horndeski
gravity and reduces to non-minimal derivative coupling in the case of kinetic
dominance of the scalar field. We applied the Noether symmetry approach to the
Lagrangian of the model and derived the corresponding Noether charge by
exploring the status of the scaling symmetry in this framework. We adopted a
suitable potential of the scalar field $\varphi$ and estimated the behaviour
of the scale factor via scaling symmetry breaking in this setup. We treated
the role of the Noether charge in the solutions of the scalar field and we
have shown that by the universal positively accelerated expansion (especially
an exponential expansion), the field $\varphi$ is always exponentially
decreasing until reaching a critical point at $\dot{\varphi}=0$, that is, when
$\varphi=\varphi_{0}\neq 0$, in which the global scaling symmetry breaks and
the universal expansion is approximately in a constant rate $H=H_{0}$.
The scaling symmetry breaking violates the conservation of the corresponding
charge, that is, $dQ/dt\neq 0$ at the critical point $\dot{\varphi}=0$,
$\varphi=\varphi_{0}\neq 0$. The existence of a non-vanishing constant
positive value $\varphi_{0}$ at the critical point $\dot{\varphi}=0$ is
necessary for fulfilling the constraint equation $\delta S/\delta N=0$. We
have demonstrated that the critical point $\dot{\varphi}=0$,
$\varphi=\varphi_{0}\neq 0$ is unique and stable in this setup, and as an
important result we were able to relate the cosmological constant and the
gravitational constant via an identity arising from the scaling symmetry
breaking in the space $(a,\varphi)$. Finally, we argued that the ground-state
energy density $\rho_{0}$ is related to quantum phenomena and is globally
stable.
Funding and/or Conflicts of interests/Competing interests:
There is no funding and/or conflict of interests/competing interests regarding
this manuscript.
Data Availability Statement:
No data are associated with the manuscript.
## References
* [1] G. N. Remmen, S. M. Carroll, _Attractor Solutions in Scalar-Field Cosmology_ , Phys. Rev. D 88 (2013) 083518.
@lineto{25.01212pt}{-7.11313pt}\pgfsys@lineto{25.47093pt}{-7.60297pt}\pgfsys@lineto{25.93037pt}{-8.25551pt}\pgfsys@lineto{26.4027pt}{-8.90355pt}\pgfsys@lineto{26.89413pt}{-9.37956pt}\pgfsys@lineto{27.40128pt}{-9.56055pt}\pgfsys@lineto{27.91556pt}{-9.40135pt}\pgfsys@lineto{28.42651pt}{-8.94676pt}\pgfsys@lineto{28.92627pt}{-8.31944pt}\pgfsys@lineto{29.41306pt}{-7.68704pt}\pgfsys@lineto{29.89201pt}{-7.21716pt}\pgfsys@lineto{30.37335pt}{-7.03232pt}\pgfsys@lineto{30.86768pt}{-7.17711pt}\pgfsys@lineto{31.38338pt}{-7.60614pt}\pgfsys@lineto{31.92067pt}{-8.19652pt}\pgfsys@lineto{32.46999pt}{-8.78085pt}\pgfsys@lineto{33.01665pt}{-9.19218pt}\pgfsys@lineto{33.54202pt}{-9.3089pt}\pgfsys@lineto{34.0321pt}{-9.08733pt}\pgfsys@lineto{34.48315pt}{-8.57303pt}\pgfsys@lineto{34.9019pt}{-7.88884pt}\pgfsys@lineto{35.30739pt}{-7.20132pt}\pgfsys@lineto{35.72533pt}{-6.67622pt}\pgfsys@lineto{36.18068pt}{-6.43388pt}\pgfsys@lineto{36.68985pt}{-6.51718pt}\pgfsys@lineto{37.25443pt}{-6.88036pt}\pgfsys@lineto{37.85922pt}{-7.40147pt}\pgfsys@lineto{38.47525pt}{-7.91539pt}\pgfsys@lineto{39.06618pt}{-8.25854pt}\pgfsys@lineto{39.60104pt}{-8.31204pt}\pgfsys@lineto{56.90564pt}{0.00003pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} }{{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@beginscope\pgfsys@invoke{ } { {}{}{}}{}{{}} {}{{{{{}}{
{}{}}{}{}{{}{}}}}}{{}}{}{}{}\pgfsys@moveto{-17.4551pt}{18.77892pt}\pgfsys@lineto{0.0pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
}{\pgfsys@beginscope\pgfsys@invoke{ }
{}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}}{{{{}{}{{}}
}}{{}}}{{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}}\pgfsys@beginscope\pgfsys@invoke{
} {\pgfsys@beginscope\pgfsys@invoke{ } {{}}
{{}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.68082}{-0.73245}{0.73245}{0.68082}{-13.74097pt}{14.7831pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}}
{{{}}} }{{}{}}{{}{}}{{{{}{}{{}} }}{{}} {{{}}} }
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} { {}{}{}}{}{{}} {}{{{{{}}{
{}{}}{}{}{{}{}}}}}{{}}{}{}{}\pgfsys@moveto{-17.4551pt}{-18.77892pt}\pgfsys@lineto{0.0pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
}{\pgfsys@beginscope\pgfsys@invoke{ }
{}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}}{{{{}{}{{}}
}}{{}}}{{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}}\pgfsys@beginscope\pgfsys@invoke{
} {\pgfsys@beginscope\pgfsys@invoke{ } {{}}
{{}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.68082}{0.73245}{-0.73245}{0.68082}{-13.74097pt}{-14.7831pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}}
{{{}}} }{{}{}}{{}{}}{{{{}{}{{}} }}{{}} {{{}}} }
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope
\pgfsys@beginscope\pgfsys@invoke{ } {{}}{}{{}}{}{ {}{}{}} {{{{{}}{
{}{}}{}{}{{}{}}}}}{}{}{}{}\pgfsys@moveto{56.90552pt}{0.0pt}\pgfsys@lineto{73.69656pt}{18.77892pt}\pgfsys@stroke\pgfsys@invoke{
}{\pgfsys@beginscope\pgfsys@invoke{ }
{}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}}{{{{}{}{{}}
}}{{}}}{{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}}\pgfsys@beginscope\pgfsys@invoke{
} {\pgfsys@beginscope\pgfsys@invoke{ } {{}}
{{}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.66655}{0.74545}{-0.74545}{0.66655}{65.4599pt}{9.56702pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}}
{{{}}} }{{}{}}{{}{}}{{{{}{}{{}} }}{{}} {{{}}} }
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope} {{}}{}{{}}{}{ {}{}{}}
{{{{{}}{
{}{}}{}{}{{}{}}}}}{}{}{}{}\pgfsys@moveto{56.90552pt}{0.0pt}\pgfsys@lineto{73.69656pt}{-18.77892pt}\pgfsys@stroke\pgfsys@invoke{
}{\pgfsys@beginscope\pgfsys@invoke{ }
{}{{}{}}{}{}{}{{}}{{}}{{}{}}{{}{}}{{{{}{}{{}} }}{{}}}{{{{}{}{{}}
}}{{}}}{{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}}\pgfsys@beginscope\pgfsys@invoke{
} {\pgfsys@beginscope\pgfsys@invoke{ } {{}}
{{}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.66655}{-0.74545}{0.74545}{0.66655}{65.4599pt}{-9.56702pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}}}{{{{}{}{{}} }}{{}}
{{{}}} }{{}{}}{{}{}}{{{{}{}{{}} }}{{}} {{{}}} }
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope
\pgfsys@beginscope\pgfsys@invoke{ } {{}}{}{{}}{} {{}{}}{{}{}}{{}}
{{{}}{{}}}{{}}{{}{}}{{{}}{{}}}{{}}{}{{}}{}{}{}{}{}{}{}{{}}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@curveto{0.0pt}{37.7255pt}{56.90552pt}{37.7255pt}{56.90552pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
}{\pgfsys@beginscope\pgfsys@invoke{ }
{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{{}{}}{{}{}}{{}{}}{{}{}{}{}{{}}{}{{}}}{{}{}{}{}{{}}{}{{}}}{{}{}{}{}{{}}{}{{}}}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{{}{}}{{}{}{}{}{{}}{}{{}}\pgfsys@beginscope\pgfsys@invoke{
} {\pgfsys@beginscope\pgfsys@invoke{ } {{}}
{{}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.94124}{0.33774}{-0.33774}{0.94124}{13.35678pt}{24.85806pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{{}{}}{{}{}{}{}{{}}{}{{}}}{{}{}{}{}{{}}{}{{}}}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{{}{}}{{}{}{}{}{{}}{}{{}}\pgfsys@beginscope\pgfsys@invoke{
} {\pgfsys@beginscope\pgfsys@invoke{ } {{}}
{{}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.73982}{-0.6728}{0.6728}{0.73982}{44.73386pt}{24.20703pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{{}{}}{{}{}{}{}{{}}{}{{}}}{{}{}{}{}{{}}{}{{}}}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{{}{}}{{}{}{}{}{{}}{}{{}}
{{{}}} }{{}{}}{{}{}}{{}{}{}{}{{}}{}{{}} {{{}}} }
\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ }}{
} {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{17.3304pt}{33.63155pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{{$r_{1}$}}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}}{}
{{}{}}{{}{}}{{}}
{{{}}{{}}}{{}}{{}{}}{{{}}{{}}}{{}}{}{{}}{}{}{}{}{}{}{}{{}}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@curveto{0.0pt}{-37.7255pt}{56.90552pt}{-37.7255pt}{56.90552pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
}{\pgfsys@beginscope\pgfsys@invoke{ }
{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{{}{}}{{}{}}{{}{}}{{}{}{}{}{{}}{}{{}}}{{}{}{}{}{{}}{}{{}}}{{}{}{}{}{{}}{}{{}}}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{{}{}}{{}{}{}{}{{}}{}{{}}\pgfsys@beginscope\pgfsys@invoke{
} {\pgfsys@beginscope\pgfsys@invoke{ } {{}}
{{}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.94124}{-0.33774}{0.33774}{0.94124}{13.35678pt}{-24.85806pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{{}{}}{{}{}{}{}{{}}{}{{}}}{{}{}{}{}{{}}{}{{}}}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{{}{}}{{}{}{}{}{{}}{}{{}}\pgfsys@beginscope\pgfsys@invoke{
} {\pgfsys@beginscope\pgfsys@invoke{ } {{}}
{{}{{{}}{{{}}{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{0.73982}{0.6728}{-0.6728}{0.73982}{44.73386pt}{-24.20703pt}\pgfsys@invoke{
}\pgfsys@invoke{ \lxSVG@closescope }\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}}{{}}}} \pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{{}{}}{{}{}{}{}{{}}{}{{}}}{{}{}{}{}{{}}{}{{}}}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{}{{}{}}{{}{}{}{}{{}}{}{{}}
{{{}}} }{{}{}}{{}{}}{{}{}{}{}{{}}{}{{}} {{{}}} }
\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{}}{}
{{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{17.3304pt}{-36.13264pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{${r_{2}}$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope \par{
{}{}{}}{}{}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{
{}{}} {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{28.45276pt}{42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {
{}{}{}}{}{}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{
{}{}} {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{28.45276pt}{-42.67914pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {
{}{}{}}{}{}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{
{}{}} {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{15.7337pt}{12.17688pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$k$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {
{}{}{}}{}{}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{}{{ {}{}}}{
{}{}} {{}{{}}}{{}{}}{}{{}{}} { }{{{{}}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@transformcm{1.0}{0.0}{0.0}{1.0}{13.56pt}{-16.8452pt}\pgfsys@invoke{
}\hbox{{\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{ }\hbox{{$r_{X}$}}
}}\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} \par{{}}{}{}{{}}{}{{{}}
{}{}{}{}{}{}{}{} }\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{1,1,1}\definecolor[named]{.}{rgb}{1,1,1}\definecolor[named]{pgfstrokecolor}{rgb}{1,1,1}\pgfsys@color@gray@stroke{1}\pgfsys@invoke{
}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@moveto{8.0pt}{0.0pt}\pgfsys@curveto{8.0pt}{4.41833pt}{4.41833pt}{8.0pt}{0.0pt}{8.0pt}\pgfsys@curveto{-4.41833pt}{8.0pt}{-8.0pt}{4.41833pt}{-8.0pt}{0.0pt}\pgfsys@curveto{-8.0pt}{-4.41833pt}{-4.41833pt}{-8.0pt}{0.0pt}{-8.0pt}\pgfsys@curveto{4.41833pt}{-8.0pt}{8.0pt}{-4.41833pt}{8.0pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{{}}{}{}{{}}{}{{{}}
{}{}{}{}{}{}{}{} }\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{0.4,0.5,1}\pgfsys@color@rgb@fill{0.4}{0.5}{1}\pgfsys@invoke{
}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@moveto{8.0pt}{0.0pt}\pgfsys@curveto{8.0pt}{4.41833pt}{4.41833pt}{8.0pt}{0.0pt}{8.0pt}\pgfsys@curveto{-4.41833pt}{8.0pt}{-8.0pt}{4.41833pt}{-8.0pt}{0.0pt}\pgfsys@curveto{-8.0pt}{-4.41833pt}{-4.41833pt}{-8.0pt}{0.0pt}{-8.0pt}\pgfsys@curveto{4.41833pt}{-8.0pt}{8.0pt}{-4.41833pt}{8.0pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\par{{}}{}{}{{}}{}{{{}}
{}{}{}{}{}{}{}{} }\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{tikz@color}{rgb}{1,1,1}\definecolor[named]{.}{rgb}{1,1,1}\definecolor[named]{pgfstrokecolor}{rgb}{1,1,1}\pgfsys@color@gray@stroke{1}\pgfsys@invoke{
}\pgfsys@color@gray@fill{1}\pgfsys@invoke{
}{}\pgfsys@moveto{56.90552pt}{0.0pt}\pgfsys@moveto{64.90552pt}{0.0pt}\pgfsys@curveto{64.90552pt}{4.41833pt}{61.32385pt}{8.0pt}{56.90552pt}{8.0pt}\pgfsys@curveto{52.48718pt}{8.0pt}{48.90552pt}{4.41833pt}{48.90552pt}{0.0pt}\pgfsys@curveto{48.90552pt}{-4.41833pt}{52.48718pt}{-8.0pt}{56.90552pt}{-8.0pt}\pgfsys@curveto{61.32385pt}{-8.0pt}{64.90552pt}{-4.41833pt}{64.90552pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{56.90552pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{{}}{}{}{{}}{}{{{}}
{}{}{}{}{}{}{}{} }\pgfsys@beginscope\pgfsys@invoke{
}\definecolor[named]{pgffillcolor}{rgb}{0.4,0.5,1}\pgfsys@color@rgb@fill{0.4}{0.5}{1}\pgfsys@invoke{
}{}\pgfsys@moveto{56.90552pt}{0.0pt}\pgfsys@moveto{64.90552pt}{0.0pt}\pgfsys@curveto{64.90552pt}{4.41833pt}{61.32385pt}{8.0pt}{56.90552pt}{8.0pt}\pgfsys@curveto{52.48718pt}{8.0pt}{48.90552pt}{4.41833pt}{48.90552pt}{0.0pt}\pgfsys@curveto{48.90552pt}{-4.41833pt}{52.48718pt}{-8.0pt}{56.90552pt}{-8.0pt}\pgfsys@curveto{61.32385pt}{-8.0pt}{64.90552pt}{-4.41833pt}{64.90552pt}{0.0pt}\pgfsys@closepath\pgfsys@moveto{56.90552pt}{0.0pt}\pgfsys@fillstroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope\par
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope { {}{}{}}{}{ {}{}{}}
{{{{{}}{ {}{}}{}{}{{}{}}}}}{}{{{{{}}{
{}{}}{}{}{{}{}}}}}{{}}{}{}{}\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@setdash{3.0pt,3.0pt}{0.0pt}\pgfsys@invoke{
}{}\pgfsys@moveto{28.45276pt}{39.14613pt}\pgfsys@lineto{28.45276pt}{-39.14613pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{{
{}{}{}{}{}}{{{}}{{}}}}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}},$
which demonstrates that we can think of the expectation value as the weighted
cut of a loop amplitude. As $X$ can be empty, the lowest-order contribution
arises from the weighted cut of a two-loop amplitude.
##### 4.2.1 Conservation of momentum
The expectation of the radiated momentum is not independent of the impulse. In
fact the relation between these quantities is physically rich. In the
classical electrodynamics of point particles, for example, the impulse is given
by the time integral of the usual Lorentz force, equation (2.21a). However, when
the particles emit radiation the point-particle approximation leads to well-
known issues. This is a celebrated problem in classical field theory. Problems
arise because of the singular nature of the point-particle source. In
particular, the electromagnetic field at the position of a point charge is
infinite, so to make sense of the Lorentz force acting on the particle the
traditional route is to subtract the particle’s own field from the full
electromagnetic field in the force law. The result is a well-defined force,
but conservation of momentum is lost.
Conservation of momentum is restored by including another force, the
Abraham–Lorentz–Dirac (ALD) force [118, 119, 120, 121, 122], acting on the
particles. This gives rise to an impulse on particle 1 in addition to the
impulse due to the Lorentz force. The Lorentz force exchanges momentum between
particles 1 and 2, while the radiation reaction impulse,
$\Delta{p^{\mu}_{1}}_{\rm ALD}=\frac{e^{2}Q_{1}^{2}}{6\pi
m_{1}}\int_{-\infty}^{\infty}\!d\tau\left(\frac{d^{2}p_{1}^{\mu}}{d\tau^{2}}+\frac{p_{1}^{\mu}}{m_{1}^{2}}\frac{dp_{1}}{d\tau}\cdot\frac{dp_{1}}{d\tau}\right),$
(4.8)
accounts for the irreversible loss of momentum due to radiation. Of course,
the ALD force is a notably subtle issue in the classical theory.
In the quantum theory of electrodynamics there can be no question of violating
conservation of momentum, so the quantum observables we have defined must
already include all the effects which would classically be attributed to both
the Lorentz and ALD forces. This must also hold for the counterparts of these
forces in any other theory. In particular, it must be the case that our
definitions respect conservation of momentum; it is easy to demonstrate this
formally to all orders using our definitions. Later, in section 4.4.2, we will
indicate how the radiation reaction is included in the impulse more
explicitly.
Our scattering processes involve two incoming particles. Consider, then,
$\displaystyle\langle\Delta p_{1}^{\mu}\rangle+\langle\Delta
p_{2}^{\mu}\rangle$
$\displaystyle=\langle\Psi|i[\mathbb{P}_{1}^{\mu}+\mathbb{P}_{2}^{\mu},T]|\Psi\rangle+\langle\Psi|T^{\dagger}[\mathbb{P}_{1}^{\mu}+\mathbb{P}_{2}^{\mu},T]|\Psi\rangle$
(4.9)
$\displaystyle=\bigl{\langle}\Psi\big{|}i\bigl{[}\textstyle{\sum_{\alpha}}\mathbb{P}_{\alpha}^{\mu},T\bigr{]}\big{|}\Psi\bigr{\rangle}+\langle\Psi|T^{\dagger}[\mathbb{P}_{1}^{\mu}+\mathbb{P}_{2}^{\mu},T]|\Psi\rangle\,,$
where the sum $\sum\mathbb{P}_{\alpha}^{\mu}$ is now over all momentum
operators in the theory, not just those for the two initial particles. The
second equality above holds because $\mathbb{P}_{\alpha}^{\mu}|\Psi\rangle=0$
for $\alpha\neq 1,2$; only quanta of fields 1 and 2 are present in the
incoming state. Next, we use the fact that the total momentum is time
independent, or in other words
$\Bigl{[}\sum\mathbb{P}_{\alpha}^{\mu},T\Bigr{]}=0\,,$ (4.10)
where the sum extends over all fields. Consequently,
$\langle\Psi|i[\mathbb{P}_{1}^{\mu}+\mathbb{P}_{2}^{\mu},T]|\Psi\rangle=\bigl{\langle}\Psi\big{|}i\bigl{[}\textstyle{\sum_{\alpha}}\mathbb{P}_{\alpha}^{\mu},T\bigr{]}\big{|}\Psi\bigr{\rangle}=0\,.$
(4.11)
Thus the first term $\langle\Psi|i[\mathbb{P}_{1}^{\mu},T]|\Psi\rangle$ in the
impulse (3.6) describes only the exchange of momentum between particles 1 and
2; in this sense it is associated with the classical Lorentz force (which
shares this property) rather than with the classical ALD force (which does
not). The second term in the impulse, on the other hand, includes radiation.
To make the situation as clear as possible, let us restrict attention to the
case where the only other momentum operator is $\mathbb{K}^{\mu}$, the
momentum operator for the messenger field. Then we know that
$[\mathbb{P}_{1}^{\mu}+\mathbb{P}_{2}^{\mu}+\mathbb{K}^{\mu},T]=0$, and
conservation of momentum at the level of expectation values is easy to
demonstrate:
$\langle\Delta p_{1}^{\mu}\rangle+\langle\Delta
p_{2}^{\mu}\rangle=-\langle\Psi|T^{\dagger}[\mathbb{K}^{\mu},T]|\Psi\rangle=-\langle\Psi|T^{\dagger}\mathbb{K}^{\mu}T|\Psi\rangle=-\langle
k^{\mu}\rangle=-R^{\mu}\,,$ (4.12)
once again using the fact that there are no messengers in the incoming state.
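The all-orders argument can be illustrated in a finite-dimensional toy model (a numerical sketch, not part of the derivation: the basis states, momenta, and block structure are invented for illustration). An $S$-matrix that is block-diagonal in the total momentum commutes with $\mathbb{P}_{1}^{\mu}+\mathbb{P}_{2}^{\mu}+\mathbb{K}^{\mu}$, and for an incoming state containing no messengers one finds $\langle\Delta p_{1}\rangle+\langle\Delta p_{2}\rangle=-\langle k\rangle$ exactly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "Fock space": each basis state carries momenta (p1, p2, k) for
# particles 1, 2 and a messenger, chosen so that several states share
# the same total momentum P = p1 + p2 + k.
states = [
    (3.0, 2.0, 0.0), (4.0, 1.0, 0.0), (2.0, 2.0, 1.0),  # total P = 5
    (1.0, 1.0, 0.0), (0.5, 1.5, 0.0), (0.5, 0.5, 1.0),  # total P = 2
]
p1 = np.array([s[0] for s in states])
p2 = np.array([s[1] for s in states])
k = np.array([s[2] for s in states])
P = p1 + p2 + k

# Build S block-diagonal in total momentum: a random unitary within each
# block, so that [P, S] = 0 holds exactly (momentum conservation).
n = len(states)
S = np.zeros((n, n), dtype=complex)
for tot in np.unique(P):
    idx = np.where(P == tot)[0]
    m = len(idx)
    q, _ = np.linalg.qr(rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m)))
    S[np.ix_(idx, idx)] = q

# Incoming state |Psi>: support only on messenger-free states (k = 0).
psi = np.zeros(n, dtype=complex)
psi[[0, 1, 3, 4]] = [0.6, 0.2, 0.5, 0.3]
psi /= np.linalg.norm(psi)

out = S @ psi

def expect(op, v):
    # Expectation value of a diagonal operator with eigenvalues `op`.
    return float(np.real(np.conj(v) @ (op * v)))

dp1 = expect(p1, out) - expect(p1, psi)   # impulse on particle 1
dp2 = expect(p2, out) - expect(p2, psi)   # impulse on particle 2
R = expect(k, out)                        # radiated momentum; <k> = 0 initially
print(dp1 + dp2 + R)                      # vanishes up to floating point
```

Since $S$ conserves the total momentum, the two impulses and the radiated momentum sum to zero, mirroring equation (4.12).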
In the classical theory, radiation reaction is a subleading effect, entering
for two-body scattering at order $e^{6}$ in perturbation theory in
electrodynamics. This is also the case in the quantum theory. To see why, we
again expand the operator product in the second term of equation (3.6) using a
complete set of states:
$\langle\Psi|\,T^{\dagger}[\mathbb{P}_{1}^{\mu},T]\,|\Psi\rangle=\sum_{X}\int\\!d\Phi(r_{1})d\Phi(r_{2})d\mu(\zeta_{1})d\mu(\zeta_{2})\;\\\
\times\langle\Psi|\,T^{\dagger}|r_{1}\,\zeta_{1};r_{2}\,\zeta_{2};X\rangle\langle
r_{1}\,\zeta_{1};r_{2}\,\zeta_{2};X|[\mathbb{P}_{1}^{\mu},T]\,|\Psi\rangle\,.$
(4.13)
The sum over $X$ is over all states, including an implicit integral over their
momenta and a sum over any other quantum numbers. The inserted-state momenta
of particles 1 and 2 (necessarily present) are labeled by $r_{\alpha}$, and
the corresponding integrations over these momenta by $d\Phi(r_{\alpha})$.
These will ultimately become integrations over the final-state momenta in the
scattering. To make the loss of momentum due to radiation explicit at this
level, we note that
$\langle\Psi|\,T^{\dagger}[\mathbb{P}_{1}^{\mu}+\mathbb{P}_{2}^{\mu},T]\,|\Psi\rangle=-\sum_{X}\int\\!d\Phi(r_{1})d\Phi(r_{2})d\mu(\zeta_{1})d\mu(\zeta_{2})\;\\\
\times\langle\Psi|\,T^{\dagger}|r_{1}\,\zeta_{1};r_{2}\,\zeta_{2};X\rangle\langle
r_{1}\,\zeta_{1};r_{2}\,\zeta_{2};X|\,\mathbb{P}_{X}^{\mu}T\,|\Psi\rangle\,,$
(4.14)
where $\mathbb{P}_{X}^{\mu}$ is the sum of the momentum operators of all
quantum fields other than the scalars 1 and 2. The sum over all states $X$ will
contain, for example, terms where the state $X$ includes messengers of
momentum $k^{\mu}$ along with other massless particles. We can further
restrict attention to the contributions of the messenger’s momentum to
$\mathbb{P}_{X}^{\mu}$. This contribution produces a net change of momentum of
particle 1 given by
$-\sum_{X}\int\\!d\Phi(k)d\Phi(r_{1})d\Phi(r_{2})d\mu(\zeta_{1})d\mu(\zeta_{2})\;k^{\mu}\,\\\
\times\langle\Psi|\,T^{\dagger}|k;r_{1}\,\zeta_{1};r_{2}\,\zeta_{2};X\rangle\langle
k;r_{1}\,\zeta_{1};r_{2}\,\zeta_{2};X|\,T\,|\Psi\rangle=-\langle
k^{\mu}\rangle\,,$ (4.15)
with the help of equation (4.3). Thus we explicitly see the net loss of
momentum due to radiating messengers. In any theory this quantity is
suppressed by factors of the coupling $\tilde{g}$ because of the additional
state. The lowest-order case corresponds to $X=\emptyset$; as there are two
quanta in $|\Psi\rangle$, we must compute the modulus squared of a five-point
tree amplitude. This term is proportional to $\tilde{g}^{6}$, where $\tilde{g}$
is the coupling of an elementary three-point amplitude; as far as the impulse
is concerned, it is a next-to-next-to-leading order (NNLO) effect. Other
particles in the state $X$, and other contributions to its momentum, describe
higher-order effects.
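The counting can be made explicit (a sketch, using only the statement above that $\tilde{g}$ is the coupling of an elementary three-point amplitude):

```latex
\mathcal{A}^{(0)}(p_1\,, p_2 \rightarrow r_1\,, r_2\,, k) \sim \tilde{g}^{3}
\quad\Longrightarrow\quad
\bigl|\mathcal{A}^{(0)}\bigr|^{2} \sim \tilde{g}^{6}\,,
```

since a five-point tree built from three-point vertices contains three of them, while the leading-order impulse is linear in the four-point tree amplitude and so of order $\tilde{g}^{2}$. Radiation reaction is therefore suppressed by $\tilde{g}^{4}$ relative to the leading impulse; in electrodynamics $\tilde{g}^{6}\sim e^{6}$, matching the classical counting quoted above.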
#### 4.3 Classical radiation
Following our intensive study of the classical limit of the impulse in the
previous chapter, the avenue leading to the classical limit of $R^{\mu}$ is
clear: provided we work with the wavefunctions of chapter 2 in the
Goldilocks zone $\ell_{c}\ll\ell_{w}\ll\ell_{s}$, we can simply adopt the
rules of section 3.2. In particular the radiated momentum $k$ will scale as a
wavenumber in the classical region. This is enforced by the energy-momentum-
conserving delta function in equation (4.5), rewritten in terms of momentum
transfers $w_{\alpha}=r_{\alpha}-p_{\alpha}$:
$\hat{\delta}^{(4)}(w_{1}+w_{2}+k+r_{X})\,.$ (4.16)
The arguments given after equation (3.41) then ensure that the typical values
of all momenta in its argument should again be scaled by $1/\hbar$ and
replaced by wavenumbers.
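As a bookkeeping sketch, under the rescalings $q\rightarrow\hbar\bar{q}$, $w_{\alpha}\rightarrow\hbar\overline{w}_{\alpha}$ and $k\rightarrow\hbar\bar{k}$ the measures and delta functions behave as

```latex
\hat{d}^{4}q = \hbar^{4}\,\hat{d}^{4}\bar{q}\,,\qquad
\hat{\delta}(2p\cdot q+q^{2})
 = \frac{1}{\hbar}\,\hat{\delta}(2p\cdot\bar{q}+\hbar\,\bar{q}^{2})
 \simeq \frac{1}{\hbar}\,\hat{\delta}(2p\cdot\bar{q})\,,\qquad
\hat{\delta}^{(4)}(w_{1}+w_{2}+k+r_{X})
 = \frac{1}{\hbar^{4}}\,\hat{\delta}^{(4)}(\overline{w}_{1}+\overline{w}_{2}+\bar{k}+\bar{r}_{X})\,,
```

with the remaining powers of $\hbar$ removed along with the couplings in the amplitudes, as in the preceding chapter.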
With no new work required on the formalities of the classical limit, let us
turn to explicit expressions for the classical radiated momentum in terms of
amplitudes. Recall that our expressions for the total emitted radiation in
section 4.2 depended on $q$, which represents a momentum mismatch rather than
a momentum transfer. However, we expect the momentum transfers to play an
important role in the classical limit, and so it is convenient to change
variables from the $r_{\alpha}$ to make use of them:
$\displaystyle R^{\mu}$
$\displaystyle=\sum_{X}\int\\!d\Phi(k)\prod_{\alpha=1,2}d\Phi(p_{\alpha})\hat{d}^{4}w_{\alpha}\hat{d}^{4}q\;\hat{\delta}(2p_{\alpha}\cdot
w_{\alpha}+w_{\alpha}^{2})\Theta(p_{\alpha}^{0}+w_{\alpha}^{0})$ (4.17)
$\displaystyle\times\hat{\delta}(2p_{1}\cdot q+q^{2})\hat{\delta}(2p_{2}\cdot
q-q^{2})\Theta(p_{1}{}^{0}+q^{0})\Theta(p_{2}{}^{0}-q^{0})\,\varphi_{1}(p_{1})\varphi_{2}(p_{2})$
$\displaystyle\qquad\times\varphi_{1}^{*}(p_{1}+q)\varphi_{2}^{*}(p_{2}-q)\,k_{X}^{\mu}\,e^{-ib\cdot
q/\hbar}\hat{\delta}^{(4)}(w_{1}+w_{2}+k+r_{X})$
$\displaystyle\qquad\qquad\times\langle\mathcal{A}^{*}(p_{1}+q\,,p_{2}-q\rightarrow
p_{1}+w_{1}\,,p_{2}+w_{2}\,,k\,,r_{X})$
$\displaystyle\qquad\qquad\qquad\times\mathcal{A}(p_{1}\,,p_{2}\rightarrow
p_{1}+w_{1}\,,p_{2}+w_{2}\,,k\,,r_{X})\rangle\,.$
We can now recast this expression in the notation of equation (3.42):
$\displaystyle R^{\mu}_{\textrm{cl}}$
$\displaystyle=\sum_{X}\,\biggl{\langle}\\!\\!\\!\biggl{\langle}\int\\!d\Phi(k)\prod_{\alpha=1,2}\hat{d}^{4}w_{\alpha}\,\hat{d}^{4}q\;\hat{\delta}(2p_{\alpha}\cdot
w_{\alpha}+w_{\alpha}^{2})\Theta(p_{\alpha}^{0}+w_{\alpha}^{0})\,k_{X}^{\mu}$
(4.18) $\displaystyle\times\hat{\delta}(2p_{1}\cdot
q+q^{2})\hat{\delta}(2p_{2}\cdot
q-q^{2})\hat{\delta}^{(4)}(w_{1}+w_{2}+k+r_{X})\Theta(p_{1}{}^{0}+q^{0})$
$\displaystyle\times\Theta(p_{2}{}^{0}-q^{0})\,e^{-ib\cdot
q/\hbar}\,\mathcal{A}^{*}(p_{1}+q,p_{2}-q\rightarrow
p_{1}+w_{1}\,,p_{2}+w_{2}\,,k\,,r_{X})$
$\displaystyle\qquad\qquad\qquad\times\mathcal{A}(p_{1},p_{2}\rightarrow
p_{1}+w_{1}\,,p_{2}+w_{2}\,,k\,,r_{X})\,\biggr{\rangle}\\!\\!\\!\biggr{\rangle}\,.$
We will determine the classical limit of this expression using precisely the
same logic as in the preceding chapter. Let us again focus on the leading
contribution, with $X=\emptyset$. Once again, rescale
$q\rightarrow\hbar\bar{q}$, and drop the $q^{2}$ inside the on-shell delta
functions. Here, remove an overall factor of $\tilde{g}^{6}$ and accompanying
$\hbar$’s from the amplitude and its conjugate. In addition, rescale the
momentum transfers $w\rightarrow\hbar\overline{w}$ and the radiation momenta,
$k\rightarrow\hbar\bar{k}$. At leading order there is no sum, so there will be
no hidden cancellations, and we may drop the $w_{\alpha}^{2}$ inside the on-
shell delta functions to obtain
$\displaystyle R^{\mu,(0)}_{\textrm{cl}}$
$\displaystyle=\tilde{g}^{6}\biggl{\langle}\\!\\!\\!\biggl{\langle}\hbar^{4}\\!\int\\!d\Phi(\bar{k})\prod_{\alpha=1,2}\hat{d}^{4}\overline{w}_{\alpha}\hat{d}^{4}\bar{q}\,\hat{\delta}(2\overline{w}_{\alpha}\cdot
p_{\alpha})\hat{\delta}(2\bar{q}\cdot p_{1})\hat{\delta}(2\bar{q}\cdot
p_{2})\,e^{-ib\cdot\bar{q}}$ (4.19)
$\displaystyle\qquad\times\bar{k}^{\mu}\,\mathcal{\bar{A}}^{(0)*}(p_{1}+\hbar\bar{q},p_{2}-\hbar\bar{q}\rightarrow
p_{1}+\hbar\overline{w}_{1}\,,p_{2}+\hbar\overline{w}_{2}\,,\hbar\bar{k})$
$\displaystyle\qquad\times\mathcal{\bar{A}}^{(0)}(p_{1},p_{2}\rightarrow
p_{1}+\hbar\overline{w}_{1}\,,p_{2}+\hbar\overline{w}_{2}\,,\hbar\bar{k})\,\hat{\delta}^{(4)}(\overline{w}_{1}+\overline{w}_{2}+\bar{k})\,\biggr{\rangle}\\!\\!\\!\biggr{\rangle}\,.$
We will make use of this expression below to verify that momentum is conserved
as expected.
One disadvantage of this expression for the leading-order radiated momentum is
that it is no longer in the form of an integral over a perfect square, as in
equation (4.6). Nevertheless, we can recast equation (4.18) in such a form. To
do so, we perform a change of variables, including in the (momentum-space)
wavefunctions. To begin, it is helpful to write equation (4.18) as
$\displaystyle R^{\mu}_{\textrm{cl}}=$
$\displaystyle\,\sum_{X}\prod_{\alpha=1,2}\int\\!d\Phi(p_{\alpha})\,|\varphi_{\alpha}(p_{\alpha})|^{2}\int\\!d\Phi(k)d\Phi(w_{\alpha}+p_{\alpha})d\Phi(q_{\alpha}+p_{\alpha})\;$
(4.20)
$\displaystyle\times\hat{\delta}^{(4)}(w_{1}+w_{2}+k+r_{X})\hat{\delta}^{(4)}(q_{1}+q_{2})\,e^{-ib\cdot
q_{1}/\hbar}\,k_{X}^{\mu}\,$
$\displaystyle\qquad\times\langle\mathcal{A}^{*}(p_{1}+q_{1}\,,p_{2}+q_{2}\rightarrow
p_{1}+w_{1}\,,p_{2}+w_{2}\,,k\,,r_{X})$
$\displaystyle\qquad\qquad\times\mathcal{A}(p_{1}\,,p_{2}\rightarrow
p_{1}+w_{1}\,,p_{2}+w_{2}\,,k\,,r_{X})\rangle\,\,.$
We will now re-order the integration and perform a change of variables. Let us
define $\tilde{w}_{\alpha}=-w_{\alpha}$,
$\tilde{p}_{\alpha}=p_{\alpha}-\tilde{w}_{\alpha}$, and
$\tilde{q}_{\alpha}=q_{\alpha}+\tilde{w}_{\alpha}$, changing variables from $p_{\alpha}$ to
$\tilde{p}_{\alpha}$, from $q_{\alpha}$ to $\tilde{q}_{\alpha}$, and from
$w_{\alpha}$ to $\tilde{w}_{\alpha}$:
$\displaystyle R^{\mu}_{\textrm{cl}}=$
$\displaystyle\,\sum_{X}\prod_{\alpha=1,2}\int\\!d\Phi(\tilde{p}_{\alpha})d\Phi(k)d\Phi(\tilde{w}_{\alpha}+\tilde{p}_{\alpha})d\Phi(\tilde{q}_{\alpha}+\tilde{p}_{\alpha})|\varphi_{\alpha}(\tilde{p}_{\alpha}+\tilde{w}_{\alpha})|^{2}\;$
(4.21)
$\displaystyle\times\hat{\delta}^{(4)}(\tilde{w}_{1}+\tilde{w}_{2}-k-r_{X})\hat{\delta}^{(4)}(\tilde{q}_{1}+\tilde{q}_{2}-k-r_{X})\,e^{-ib\cdot(\tilde{q}_{1}-\tilde{w}_{1})/\hbar}\,k_{X}^{\mu}$
$\displaystyle\qquad\times\langle\mathcal{A}^{*}(\tilde{p}_{1}+\tilde{q}_{1}\,,\tilde{p}_{2}+\tilde{q}_{2}\rightarrow\tilde{p}_{1}\,,\tilde{p}_{2}\,,k\,,r_{X})$
$\displaystyle\qquad\qquad\times\mathcal{A}(\tilde{p}_{1}+\tilde{w}_{1}\,,\tilde{p}_{2}+\tilde{w}_{2}\rightarrow\tilde{p}_{1}\,,\tilde{p}_{2}\,,k\,,r_{X})\rangle\,\,.$
As the $\tilde{w}_{\alpha}$ implicitly carry a factor of $\hbar$, just as
argued in section 2.3.1 for the momentum mismatch $q$, we may neglect the
shift in the wavefunctions. Dropping the tildes, and associating the
$w_{\alpha}$ integrals with $\mathcal{A}$ and the $q_{\alpha}$ integrals with
$\mathcal{A}^{*}$, our expression is revealed as an integral over a perfect
square,
$\displaystyle R^{\mu}_{\textrm{cl}}$
$\displaystyle=\sum_{X}\prod_{\alpha=1,2}\biggl{\langle}\\!\\!\\!\biggl{\langle}\int\\!d\Phi(k)\,k_{X}^{\mu}\biggl{|}\int\\!d\Phi(w_{\alpha}+p_{\alpha})\;\hat{\delta}^{(4)}(w_{1}+w_{2}-k-r_{X})$
(4.22) $\displaystyle\qquad\times e^{ib\cdot
w_{1}/\hbar}\,\mathcal{A}(p_{1}+w_{1},p_{2}+w_{2}\rightarrow
p_{1}\,,p_{2}\,,k\,,r_{X})\biggr{|}^{2}\biggr{\rangle}\\!\\!\\!\biggr{\rangle}\,.$
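The factorisation that turns equation (4.21) into the perfect square of equation (4.22) is, stripped of the phase-space details, the elementary identity $\sum_{q,w}e^{-ib(q-w)}A^{*}(q)A(w)=\big|\sum_{w}e^{ibw}A(w)\big|^{2}$. A toy numerical check in one dimension (the arrays standing in for the amplitude are arbitrary illustrative data):

```python
import numpy as np

rng = np.random.default_rng(0)
n, b = 50, 1.7
w = np.linspace(-3, 3, n)                          # stand-in for the w integration variable
A = rng.normal(size=n) + 1j * rng.normal(size=n)   # toy amplitude values A(w)

# Double-sum form, mimicking the structure of eq. (4.21):
# sum over q and w of e^{-ib(q-w)} A*(q) A(w).
double_sum = sum(
    np.exp(-1j * b * (q_ - w_)) * A[i].conj() * A[j]
    for i, q_ in enumerate(w)
    for j, w_ in enumerate(w)
)

# Perfect-square form, mimicking eq. (4.22): |sum over w of e^{ibw} A(w)|^2.
square = abs(np.sum(np.exp(1j * b * w) * A)) ** 2

# The double sum factorises into conj(S) * S, so it is real and equals the square.
assert np.isclose(double_sum.real, square)
assert abs(double_sum.imag) < 1e-6 * max(1.0, square)
```

The full expression differs only in carrying the on-shell delta functions and the messenger phase space along for the ride; the factorisation itself is this Fourier identity.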
The perfect-square structure allows us to define a radiation kernel,
$\displaystyle\mathcal{R}(k,r_{X})$
$\displaystyle\equiv\hbar^{3/2}\prod_{\alpha=1,2}\int\\!d\Phi(p_{\alpha}+w_{\alpha})\;\hat{\delta}^{(4)}(w_{1}+w_{2}-k-r_{X})$
(4.23) $\displaystyle\qquad\qquad\times e^{ib\cdot
w_{1}/\hbar}\,\mathcal{A}(p_{1}+w_{1},p_{2}+w_{2}\rightarrow
p_{1}\,,p_{2}\,,k\,,r_{X}),$ $\displaystyle=$
$\displaystyle\hbar^{3/2}\prod_{\alpha=1,2}\int\\!\hat{d}^{4}w_{\alpha}\;\hat{\delta}(2p_{\alpha}\cdot
w_{\alpha}+w_{\alpha}^{2})\,\hat{\delta}^{(4)}(w_{1}+w_{2}-k-r_{X})$
$\displaystyle\quad\times\Theta(p_{\alpha}^{0}+w_{\alpha}^{0})\,e^{ib\cdot
w_{1}/\hbar}\,\mathcal{A}(p_{1}+w_{1},p_{2}+w_{2}\rightarrow
p_{1}\,,p_{2}\,,k\,,r_{X})\,,$
so that
$\displaystyle R^{\mu}_{\textrm{cl}}$
$\displaystyle=\sum_{X}\hbar^{-3}\biggl{\langle}\\!\\!\\!\biggl{\langle}\int\\!d\Phi(k)\,k_{X}^{\mu}\left|\mathcal{R}(k,r_{X})\right|^{2}\biggr{\rangle}\\!\\!\\!\biggr{\rangle}\,.$
(4.24)
The prefactor, along with the normalisation of $\mathcal{R}$, is again chosen
so that the classical limit of the radiation kernel will be of
$\mathcal{O}(\hbar^{0})$. Let us now focus once more on the leading
contribution, with $X=\emptyset$. As usual, rescale
$w\rightarrow\hbar\overline{w}$, and remove an overall factor of
$\tilde{g}^{6}$ and accompanying $\hbar$’s from the amplitude and its
conjugate. Then the LO radiation kernel is
$\displaystyle\mathcal{R}^{(0)}(\bar{k})$
$\displaystyle\equiv\hbar^{2}\prod_{\alpha=1,2}\int\\!\hat{d}^{4}\overline{w}_{\alpha}\,\hat{\delta}(2p_{\alpha}\cdot\overline{w}_{\alpha}+\hbar\overline{w}_{\alpha}^{2})\,\hat{\delta}^{(4)}(\overline{w}_{1}+\overline{w}_{2}-\bar{k})e^{ib\cdot\overline{w}_{1}}$
(4.25) $\displaystyle\qquad\times\mathcal{\bar{A}}^{(0)}(p_{1}+\hbar\overline{w}_{1},p_{2}+\hbar\overline{w}_{2}\rightarrow
p_{1}\,,p_{2}\,,\hbar\bar{k})\,,$
ensuring that the leading-order momentum radiated is simply
$\displaystyle R^{\mu,(0)}_{\textrm{cl}}$
$\displaystyle=\tilde{g}^{6}\biggl{\langle}\\!\\!\\!\biggl{\langle}\int\\!d\Phi(\bar{k})\,\bar{k}^{\mu}\left|\mathcal{R}^{(0)}(\bar{k})\right|^{2}\biggr{\rangle}\\!\\!\\!\biggr{\rangle}\,.$
(4.26)
###### Conservation of momentum
Conservation of momentum certainly holds to all orders, as we saw in section
4.2.1. However, it is worth making sure that we have not spoiled this critical
physical property in our previous discussion, or indeed in our discussion of
the classical impulse in section 3.2.3. One might worry, for example, that
there is a subtlety with the order of limits.
There is no issue at LO and NLO for the impulse, because
$\Delta p_{1}^{\mu,(0)}+\Delta p_{2}^{\mu,(0)}=0,\quad\Delta
p_{1}^{\mu,(1)}+\Delta p_{2}^{\mu,(1)}=0.$ (4.27)
These follow straightforwardly from the definitions of the observables,
equations (3.44) and (3.47). The essential point is that the
amplitudes entering at these orders in the impulse conserve the momentum of the
two particles. At LO, for example, using equation (3.44) the impulse on particle 2
can be written as
$\Delta
p_{2}^{\mu,(0)}=\frac{i\tilde{g}^{2}}{4}\biggl{\langle}\\!\\!\\!\biggl{\langle}\hbar^{2}\\!\int\\!\hat{d}^{4}\bar{q}_{1}\hat{d}^{4}\bar{q}_{2}\;\hat{\delta}(\bar{q}_{1}\cdot
p_{1})\hat{\delta}(\bar{q}_{1}\cdot
p_{2})\hat{\delta}^{(4)}(\bar{q}_{1}+\bar{q}_{2})\\\ \times
e^{-ib\cdot\bar{q}_{1}}\,\bar{q}_{2}^{\mu}\,\mathcal{\bar{A}}^{(0)}(p_{1},\,p_{2}\rightarrow
p_{1}+\hbar\bar{q}_{1},p_{2}+\hbar\bar{q}_{2})\,\biggr{\rangle}\\!\\!\\!\biggr{\rangle}.$
(4.28)
In this equation, conservation of momentum at the level of the four-point
amplitude $\mathcal{\bar{A}}^{(0)}(p_{1},\,p_{2}\rightarrow
p_{1}+\hbar\bar{q}_{1},p_{2}+\hbar\bar{q}_{2})$ is expressed by the presence
of the four-fold delta function $\hat{\delta}^{(4)}(\bar{q}_{1}+\bar{q}_{2})$.
Using this delta function, we may replace $\bar{q}_{2}^{\mu}$ with
$-\bar{q}_{1}^{\mu}$ and then integrate over $\bar{q}_{2}$, once again using
the delta function. The result is manifestly $-\Delta p_{1}^{\mu,(0)}$,
equation (3.44). A similar calculation goes through at NLO.
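The delta-function manipulation used here, replacing $\bar{q}_{2}^{\mu}$ by $-\bar{q}_{1}^{\mu}$ before integrating $\bar{q}_{2}$ out, can be illustrated in one dimension with sympy (the test function $f$ is an arbitrary stand-in for the amplitude and phase factors):

```python
import sympy as sp

q1, q2 = sp.symbols('q1 q2', real=True)
f = sp.Function('f')

# Integrating q2 * f(q2) against delta(q1 + q2) sets q2 -> -q1,
# mirroring the use of the four-fold delta function in eq. (4.28).
result = sp.integrate(q2 * f(q2) * sp.DiracDelta(q1 + q2), (q2, -sp.oo, sp.oo))

assert sp.simplify(result + q1 * f(-q1)) == 0   # result == -q1 * f(-q1)
```

In four dimensions the same substitution turns the integrand for $\Delta p_{2}^{\mu,(0)}$ into minus the integrand for $\Delta p_{1}^{\mu,(0)}$.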
In this sense, the scattering is conservative at LO and at NLO. At NNLO,
however, we must take radiative effects into account. This backreaction is
entirely described by the quadratic part of the impulse, $I_{(2)}^{\mu}$. As
indicated in equation (4.11), $I_{(1)}^{\mu}$ is always conservative. From our
perspective here, this is because it involves only four-point amplitudes. Thus
to understand conservation of momentum we need to investigate $I_{(2)}^{\mu}$.
The lowest order case in which a five-point amplitude can enter
$I_{(2)}^{\mu}$ is at NNLO. Let us restrict attention to this lowest order
case, taking the additional state $X$ to be a messenger.
For $I_{(2)}^{\mu}$, the lowest order term involving one messenger is, in the
classical regime,
$\displaystyle I_{(2),\textrm{cl}}^{\mu,(\textrm{rad})}=$
$\displaystyle\,\tilde{g}^{6}\biggl{\langle}\\!\\!\\!\biggl{\langle}\hbar^{4}\\!\int\\!d\Phi(\bar{k})\prod_{\alpha=1,2}\hat{d}^{4}\overline{w}_{\alpha}\,\hat{d}^{4}\bar{q}_{1}\hat{d}^{4}\bar{q}_{2}\;\hat{\delta}(2\overline{w}_{\alpha}\cdot
p_{\alpha}+\overline{w}_{\alpha}^{2})$ (4.29)
$\displaystyle\times\hat{\delta}(2\bar{q}_{1}\cdot
p_{1})\hat{\delta}(2\bar{q}_{2}\cdot
p_{2})\,e^{-ib\cdot\bar{q}_{1}}\,\overline{w}_{1}^{\mu}\,\hat{\delta}^{(4)}(\overline{w}_{1}+\overline{w}_{2}+\bar{k})\,\hat{\delta}^{(4)}(\bar{q}_{1}+\bar{q}_{2})$
$\displaystyle\quad\times\mathcal{\bar{A}}^{(0)}(p_{1}\,,p_{2}\rightarrow
p_{1}+\hbar\overline{w}_{1}\,,p_{2}+\hbar\overline{w}_{2},\hbar\bar{k})$
$\displaystyle\qquad\times\mathcal{\bar{A}}^{(0)*}(p_{1}+\hbar\bar{q}_{1}\,,p_{2}+\hbar\bar{q}_{2}\rightarrow
p_{1}+\hbar\overline{w}_{1}\,,p_{2}+\hbar\overline{w}_{2},\hbar\bar{k})\,\biggr{\rangle}\\!\\!\\!\biggr{\rangle}\,.$
To see that this balances the radiated momentum, we use equation (4.19). The
structures of the two expressions are almost identical; conservation of momentum
holds because the factor $\bar{k}^{\mu}$ in equation (4.19) is balanced by
$\overline{w}_{1}^{\mu}$ in equation (4.29) and by $\overline{w}_{2}^{\mu}$ in
the equivalent expression for particle 2.
Thus conservation of momentum continues to hold in our expressions once we
have passed to the classical limit, at least through NNLO. At this order there
is non-zero momentum radiated, so momentum conservation is non-trivial from
the classical point of view. We will see by explicit calculation in QED that
our classical impulse correctly incorporates the impulse from the ALD force in
addition to the Lorentz force.
##### 4.3.1 Perspectives from classical field theory
Before jumping into examples, it is useful to reflect on the total radiated
momentum, expressed in terms of amplitudes, by digressing into classical field
theory. To do so we must classically describe the distribution and flux of
energy and momentum in the radiation field itself. Although our final
conclusions also hold in YM theory and gravity, let us work in electrodynamics
for simplicity. Here the relevant stress-energy tensor is
$T^{\mu\nu}(x)=F^{\mu\alpha}(x)F_{\alpha}{}^{\nu}(x)+\frac{1}{4}\eta^{\mu\nu}F^{\alpha\beta}(x)F_{\alpha\beta}(x)\,.$
(4.30)
In particular, the (four-)momentum flux through a three-dimensional surface
$\partial\Omega$ with surface element $d\Sigma_{\nu}$ is
$K^{\mu}=\int_{\partial\Omega}\\!\\!d\Sigma_{\nu}T^{\mu\nu}(x)\,.$ (4.31)
We are interested in the total momentum radiated as two particles scatter. At
each time $t$, we therefore surround the two particles with a large sphere.
The instantaneous flux of momentum is measured by integrating over the surface
area of the sphere; the total momentum radiated is then the integral of this
instantaneous flux over all times. It is straightforward to determine the
momentum radiated by direct integration over these spheres using textbook
methods — see appendix D of [1].
A simpler but more indirect method is the following. We wish to use the Gauss
theorem to write
$K^{\mu}=\int_{\partial\Omega}\\!\\!d\Sigma_{\nu}T^{\mu\nu}(x)=\int\\!d^{4}x\,\partial_{\nu}T^{\mu\nu}(x)\,.$
(4.32)
However, the spheres surrounding our particles are not the boundary of all
spacetime: they do not include the timelike future and past boundaries. To
remedy this, we use a trick due to Dirac [122].
The radiation we have in mind is causal, so we solve the Maxwell equation with
retarded boundary conditions. We denote these fields by
$F^{\mu\nu}_{\textrm{ret}}(x)$. We could equivalently solve the Maxwell
equation using the advanced Green’s function. If we wish to determine
precisely the same fields $F^{\mu\nu}_{\textrm{ret}}(x)$ but using the
advanced Green’s function, we must add a homogeneous solution of the Maxwell
equation. Fitting the boundary conditions in this way requires subtracting the
incoming radiation field $F^{\mu\nu}_{\textrm{in}}(x)$, which is present in the
advanced solution (but not in the retarded solution), and adding the outgoing
radiation field (which is present in the retarded solution, but not in the
advanced solution). In other words,
$F^{\mu\nu}_{\textrm{ret}}(x)-F^{\mu\nu}_{\textrm{adv}}(x)=-F^{\mu\nu}_{\textrm{in}}(x)+F^{\mu\nu}_{\textrm{out}}(x)\,.$
(4.33)
Now, the radiated momentum $K^{\mu}$ in which we are interested is described
by $F^{\mu\nu}_{\textrm{out}}(x)$. The field $F^{\mu\nu}_{\textrm{in}}(x)$
transports the same total amount of momentum in from infinity, i.e. it
transports momentum $-K^{\mu}$ out. Therefore the difference between the
momenta transported out to infinity by the retarded and by the advanced fields
is simply $2K^{\mu}$. This is useful, because the contributions of the point-
particle sources cancel in this difference.
The relationship between the momentum transported by the retarded and advanced
field is reflected at the level of the Green’s functions themselves. The
difference in the Green’s function takes an instructive form:
$\displaystyle\tilde{G}_{\textrm{ret}}(\bar{k})-\tilde{G}_{\textrm{adv}}(\bar{k})$
$\displaystyle=\frac{(-1)}{(\bar{k}^{0}+i\epsilon)^{2}-\boldsymbol{{\bar{k}}}^{2}}-\frac{(-1)}{(\bar{k}^{0}-i\epsilon)^{2}-\boldsymbol{{\bar{k}}}^{2}}$
(4.34)
$\displaystyle=i\left(\Theta(\bar{k}^{0})-\Theta(-\bar{k}^{0})\right)\hat{\delta}(\bar{k}^{2})\,.$
In this equation, $\boldsymbol{{\bar{k}}}$ denotes the spatial components of
the wavenumber four-vector $\bar{k}$. This difference is a homogeneous solution of
the wave equation since it is supported on $\bar{k}^{2}=0$. The two terms
correspond to positive and negative angular frequencies. As we will see, the
relative sign ensures that the momenta transported to infinity add.
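Equation (4.34) can be checked numerically. Combining the two propagators over a common denominator gives a purely imaginary difference, $4i\epsilon\bar{k}^{0}/\big((\bar{k}^{0\,2}-\boldsymbol{\bar{k}}^{2}-\epsilon^{2})^{2}+4\epsilon^{2}\bar{k}^{0\,2}\big)$, which at small $\epsilon$ integrates against a smooth test function to $(\pi/|\boldsymbol{\bar{k}}|)\big(f(|\boldsymbol{\bar{k}}|)-f(-|\boldsymbol{\bar{k}}|)\big)$, exactly what the right-hand side of equation (4.34) predicts. A sketch in 1+1 dimensions (the regulator, wavenumber, and test function are illustrative choices):

```python
import numpy as np
from scipy.integrate import quad

eps, kvec = 1e-4, 1.0   # small regulator and |spatial wavenumber|

def diff_imag(k0):
    # Im[G_ret - G_adv] after combining the two propagators over a
    # common denominator: 4*eps*k0 / ((k0^2 - kvec^2 - eps^2)^2 + 4*eps^2*k0^2).
    D = k0**2 - kvec**2 - eps**2
    return 4 * eps * k0 / (D**2 + 4 * eps**2 * k0**2)

f = lambda k0: np.exp(-(k0 - 0.5)**2)   # smooth test function

# Integrate through the sharp peaks at k0 = +-kvec.
val, _ = quad(lambda k0: f(k0) * diff_imag(k0), -10, 10,
              points=[-kvec, kvec], limit=200)

# Prediction from i*(Theta(k0) - Theta(-k0)) * 2*pi*delta(k0^2 - kvec^2):
expected = (np.pi / kvec) * (f(kvec) - f(-kvec))
assert abs(val - expected) < 1e-2 * abs(expected)
```

The opposite signs of the two peaks at $\bar{k}^{0}=\pm|\boldsymbol{\bar{k}}|$ are the relative sign referred to above.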
With this in mind, we return to the problem of computing the momentum radiated
and write
$2K^{\mu}=\int_{\partial\Omega}\\!\\!d\Sigma_{\nu}\Big{(}T^{\mu\nu}_{\textrm{ret}}(x)-T^{\mu\nu}_{\textrm{adv}}(x)\Big{)}\,.$
(4.35)
In this difference, the contributions of the sources at timelike infinity
cancel, so we may regard the surface $\partial\Omega$ as the boundary of
spacetime. Therefore,
$2K^{\mu}=\int\\!d^{4}x\,\partial_{\nu}\\!\left(T^{\mu\nu}_{\textrm{ret}}(x)-T^{\mu\nu}_{\textrm{adv}}(x)\right)=-\int\\!d^{4}x\left(F^{\mu\nu}_{\textrm{ret}}(x)-F^{\mu\nu}_{\textrm{adv}}(x)\right)J_{\nu}(x)\,,$
(4.36)
where the last equality follows from the equations of motion. We now pass to
momentum space, noting that
$F^{\mu\nu}(x)=-i\\!\int\\!\hat{d}^{4}\bar{k}\left(\bar{k}^{\mu}\tilde{A}^{\nu}(\bar{k})-\bar{k}^{\nu}\tilde{A}^{\mu}(\bar{k})\right)e^{-i\bar{k}\cdot
x}\,.$ (4.37)
Using conservation of the current, $\bar{k}\cdot\tilde{J}(\bar{k})=0$, the radiated momentum becomes
$\displaystyle 2K^{\mu}$
$\displaystyle=i\\!\int\\!\hat{d}^{4}\bar{k}\;\bar{k}^{\mu}\left(\tilde{A}^{\nu}_{\textrm{ret}}(\bar{k})-\tilde{A}^{\nu}_{\textrm{adv}}(\bar{k})\right)\tilde{J}_{\nu}^{*}(\bar{k}),$
(4.38)
$\displaystyle=-\int\\!\hat{d}^{4}\bar{k}\;\bar{k}^{\mu}\left(\Theta(\bar{k}^{0})-\Theta(-\bar{k}^{0})\right)\hat{\delta}(\bar{k}^{2})\tilde{J}^{\nu}(\bar{k})\tilde{J}_{\nu}^{*}(\bar{k})\,.$
The two different $\Theta$ functions arise from the outgoing and incoming
radiation fields. Setting $\bar{k}^{\prime\mu}=-\bar{k}^{\mu}$ in the second term, and
then dropping the prime, it is easy to see that the two terms add as
anticipated. We arrive at a simple general result for the momentum radiated:
$\displaystyle K^{\mu}$
$\displaystyle=-\int\\!\hat{d}^{4}\bar{k}\,\Theta(\bar{k}^{0})\hat{\delta}(\bar{k}^{2})\,\bar{k}^{\mu}\,\tilde{J}^{\nu}(\bar{k})\tilde{J}_{\nu}^{*}(\bar{k})$
(4.39)
$\displaystyle=-\int\\!d\Phi(\bar{k})\,\bar{k}^{\mu}\,\tilde{J}^{\nu}(\bar{k})\tilde{J}_{\nu}^{*}(\bar{k})\,.$
It is now worth pausing to compare this general classical formula for the
radiated momentum to the expression we derived previously in equation (4.24).
Evidently the radiation kernel we defined in equation (4.23) is related to the
classical current $\tilde{J}^{\mu}(\bar{k})$. This fact was anticipated in
ref. [123]. Indeed, if we introduce a basis of polarisation vectors
$\varepsilon^{h}_{\mu}(\bar{k})$ associated with the wavevector $\bar{k}$ with
helicity $h$, we may write the classical momentum radiated as
$K^{\mu}=\sum_{h}\int\\!d\Phi(\bar{k})\,\bar{k}^{\mu}\,\left|\varepsilon^{h}\cdot\tilde{J}(\bar{k})\right|^{2}\,,$
(4.40)
where we have written the sum over helicities explicitly. Similar
expressions hold in classical YM theory and gravity [103].
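Equations (4.39) and (4.40) agree because, for a null wavevector and a conserved current, the gauge-dependent terms in the polarisation completeness relation drop out, leaving $\sum_{h}\varepsilon^{h}_{\mu}\varepsilon^{h*}_{\nu}\rightarrow-\eta_{\mu\nu}$. A quick numerical check with explicit (illustrative) choices of $\bar{k}$, current, and transverse polarisation vectors:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])          # mostly-plus-time metric
k = np.array([1.0, 0.0, 0.0, 1.0])              # null wavevector, k.k = 0
J = np.array([2.0, 1.0 + 1.0j, 3.0, 2.0])       # current with k.J = J0 - J3 = 0

dot = lambda a, b: a @ eta @ b                   # Minkowski inner product

assert np.isclose(dot(k, k), 0) and np.isclose(dot(k, J), 0)

# Two transverse polarisation vectors for k along the z-axis
# (linear polarisations suffice for the sum).
eps1 = np.array([0.0, 1.0, 0.0, 0.0])
eps2 = np.array([0.0, 0.0, 1.0, 0.0])

helicity_sum = sum(abs(dot(e, J))**2 for e in (eps1, eps2))
minus_JJstar = -dot(J, J.conj()).real            # -J.J* as in eq. (4.39)

assert np.isclose(helicity_sum, minus_JJstar)    # both equal 11 for these choices
```

The gauge terms $\propto\bar{k}_{\mu}$ in the completeness relation are annihilated by current conservation, which is why the replacement is exact here.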
#### 4.4 Examples
At leading order, the amplitude appearing in the radiation kernel in equation
(4.25) is a five-point tree amplitude (figure 4.1) that can readily be
computed. In Yang–Mills theory,
$\displaystyle\bar{\mathcal{A}}^{(0)}(\bar{k}^{a})$
$\displaystyle=\sum_{D}\mathcal{C}^{a}(D)\bar{A}^{(0)}_{D}(p_{1}+w_{1},p_{2}+w_{2}\rightarrow
p_{1},p_{2};k,h)$ (4.41)
$\displaystyle=\Big{[}\,\mathcal{C}^{a}(D_{1})\,A_{D_{1}}+\mathcal{C}^{a}(D_{2})\,A_{D_{2}}+\mathcal{C}^{a}(D_{3})\,A_{D_{3}}+(1\leftrightarrow 2)\Big{]}-i\,\mathcal{C}^{a}(D_{4})\,A_{D_{4}}\,,$
[The inline diagrams labelling each colour factor $\mathcal{C}^{a}(D)$ and partial amplitude $A_{D}$ did not survive extraction. Here $D_{1},\dots,D_{4}$ stand for the five-point tree topologies of figure 4.1: the bracketed terms correspond to emission of the messenger from the legs of particle 1, with $(1\leftrightarrow 2)$ supplying the particle-2 emissions, while the final term corresponds to emission from the exchanged messenger itself.]
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-7.11319pt}{2.84544pt}\pgfsys@lineto{0.0pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{7.11319pt}{2.84544pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-7.11319pt}{-11.38135pt}\pgfsys@lineto{0.0pt}{-8.5359pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{-8.5359pt}\pgfsys@lineto{7.11319pt}{-11.38135pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{-4.26773pt}\pgfsys@lineto{7.82433pt}{-4.26773pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{0.0pt}{-8.5359pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{{
{}{}{}{}{}}{{{}}{{}}}}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}}}}\,.$
Explicitly, the colour factors are given by
$\begin{gathered}\mathcal{C}^{a}\!\left(\text{[diagram 1]}\right)=(C_{1}^{a}\cdot C_{1}^{b})C_{2}^{b}\,,\qquad\mathcal{C}^{a}\!\left(\text{[diagram 2]}\right)=(C_{1}^{b}\cdot C_{1}^{a})C_{2}^{b}\,,\\
\mathcal{C}^{a}\!\left(\text{[diagram 3]}\right)=\frac{1}{2}\,\mathcal{C}^{a}\!\left(\text{[diagram 1]}\right)+\frac{1}{2}\,\mathcal{C}^{a}\!\left(\text{[diagram 2]}\right),\qquad\mathcal{C}^{a}\!\left(\text{[diagram 4]}\right)=i\hbar f^{abc}C_{1}^{b}C_{2}^{c}\,,\end{gathered}$ (4.42)
with the replacement $1\leftrightarrow 2$ for diagrams with gluon emission
from particle 2. Just as in the 4-point case at 1-loop, this set of colour
structures is overcomplete as a basis, because
$\displaystyle\mathcal{C}^{a}\!\left(\text{[diagram 2]}\right)=(C_{1}^{a}\cdot C_{1}^{b})C_{2}^{b}+i\hbar f^{bac}C_{1}^{c}C_{2}^{b}=\mathcal{C}^{a}\!\left(\text{[diagram 1]}\right)+\mathcal{C}^{a}\!\left(\text{[diagram 4]}\right).$ (4.43)
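The relation (4.43) can be made explicit by reordering the two charge insertions on particle 1. The intermediate step below is a sketch; it assumes the colour charges obey the standard algebra $[C_{1}^{a},C_{1}^{b}]=i\hbar f^{abc}C_{1}^{c}$, which is consistent with the $i\hbar f^{bac}$ factor appearing in (4.43):

```latex
% Reordering the two charge insertions on particle 1,
% assuming the colour-charge algebra [C_1^a, C_1^b] = i\hbar f^{abc} C_1^c:
(C_1^b \cdot C_1^a)\, C_2^b
  = (C_1^a \cdot C_1^b)\, C_2^b + [C_1^b,\, C_1^a]\, C_2^b
  = (C_1^a \cdot C_1^b)\, C_2^b + i\hbar\, f^{bac}\, C_1^c\, C_2^b\,.
```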
Figure 4.1: The amplitude
$\mathcal{A}^{(0)}(p_{1}+w_{1}\,,p_{2}+w_{2}\rightarrow p_{1}\,,p_{2}\,,k)$
appearing in the radiation kernel at leading order. [Figure: five-point tree
diagram with external momenta $p_{1}+w_{1}$, $p_{1}$, $k$, $p_{2}+w_{2}$,
$p_{2}$.]
Hence the full basis of colour factors is only three-dimensional, and the
colour decomposition of the 5-point tree is
$\bar{\mathcal{A}}^{(0)}(\bar{k}^{a})=\mathcal{C}^{a}\!\left(\text{[diagram 1]}\right)\Big(A_{\text{[diagram 1]}}+
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-7.11319pt}{2.84544pt}\pgfsys@lineto{-3.5566pt}{1.42271pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-3.5566pt}{1.42271pt}\pgfsys@lineto{0.0pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{7.11319pt}{2.84544pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-7.11319pt}{-11.38135pt}\pgfsys@lineto{0.0pt}{-8.5359pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{-8.5359pt}\pgfsys@lineto{7.11319pt}{-11.38135pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-3.5566pt}{1.42271pt}\pgfsys@lineto{-7.11319pt}{-2.13388pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{0.0pt}{-8.5359pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{{
{}{}{}{}{}}{{{}}{{}}}}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys<EMAIL_ADDRESS>\leavevmode\hbox to15.03pt{\vbox
to15.03pt{\pgfpicture\makeatletter\hbox{\hskip 7.51318pt\lower-11.78134pt\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{
}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@setlinewidth{0.8pt}\pgfsys@invoke{ }
\pgfsys@beginscope\pgfsys@invoke{ }
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-7.11319pt}{2.84544pt}\pgfsys@lineto{0.0pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{7.11319pt}{2.84544pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-7.11319pt}{-11.38135pt}\pgfsys@lineto{0.0pt}{-8.5359pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{-8.5359pt}\pgfsys@lineto{7.11319pt}{-11.38135pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{7.11319pt}{-4.26773pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{0.0pt}{-8.5359pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{{
{}{}{}{}{}}{{{}}{{}}}}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}}}}\Big{)}\\\
+\frac{1}{2}\mathcal{C}^{a}\\!\left(\leavevmode\hbox to15.74pt{\vbox
to15.03pt{\pgfpicture\makeatletter\hbox{\hskip 7.51318pt\lower-11.78134pt\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{
}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@setlinewidth{0.8pt}\pgfsys@invoke{ }
\pgfsys@beginscope\pgfsys@invoke{ }
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-7.11319pt}{2.84544pt}\pgfsys@lineto{0.0pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{7.11319pt}{2.84544pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-7.11319pt}{-11.38135pt}\pgfsys@lineto{0.0pt}{-8.5359pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{-8.5359pt}\pgfsys@lineto{7.11319pt}{-11.38135pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{-4.26773pt}\pgfsys@lineto{7.82433pt}{-4.26773pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{0.0pt}{-8.5359pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{{
{}{}{}{}{}}{{{}}{{}}}}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys@endscope\hss}}\lxSVG@closescope\endpgfpicture}}\right)\Big{(}-iA_{\scalebox{0.5}{
\leavevmode\hbox to15.74pt{\vbox
to15.03pt{\pgfpicture\makeatletter\hbox{\hskip 7.51318pt\lower-11.78134pt\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{
}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@setlinewidth{0.8pt}\pgfsys@invoke{ }
\pgfsys@beginscope\pgfsys@invoke{ }
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-7.11319pt}{2.84544pt}\pgfsys@lineto{0.0pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{7.11319pt}{2.84544pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-7.11319pt}{-11.38135pt}\pgfsys@lineto{0.0pt}{-8.5359pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{-8.5359pt}\pgfsys@lineto{7.11319pt}{-11.38135pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{-4.26773pt}\pgfsys@lineto{7.82433pt}{-4.26773pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{0.0pt}{-8.5359pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{{
{}{}{}{}{}}{{{}}{{}}}}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys<EMAIL_ADDRESS>\leavevmode\hbox to15.03pt{\vbox
to15.03pt{\pgfpicture\makeatletter\hbox{\hskip 7.51318pt\lower-11.78134pt\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{
}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@setlinewidth{0.8pt}\pgfsys@invoke{ }
\pgfsys@beginscope\pgfsys@invoke{ }
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-7.11319pt}{2.84544pt}\pgfsys@lineto{-3.5566pt}{1.42271pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-3.5566pt}{1.42271pt}\pgfsys@lineto{0.0pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{7.11319pt}{2.84544pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-7.11319pt}{-11.38135pt}\pgfsys@lineto{0.0pt}{-8.5359pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{-8.5359pt}\pgfsys@lineto{7.11319pt}{-11.38135pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-3.5566pt}{1.42271pt}\pgfsys@lineto{-7.11319pt}{-2.13388pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{0.0pt}{-8.5359pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{{
{}{}{}{}{}}{{{}}{{}}}}{}{}\hss}\pgfsys@discardpath\pgfsys@invoke{\lxSVG@closescope
}\pgfsys<EMAIL_ADDRESS>\leavevmode\hbox to15.03pt{\vbox
to15.03pt{\pgfpicture\makeatletter\hbox{\hskip 7.51318pt\lower-11.78134pt\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\definecolor{pgfstrokecolor}{rgb}{0,0,0}\pgfsys@color@rgb@stroke{0}{0}{0}\pgfsys@invoke{
}\pgfsys@color@rgb@fill{0}{0}{0}\pgfsys@invoke{
}\pgfsys@setlinewidth{0.4pt}\pgfsys@invoke{ }\nullfont\hbox
to0.0pt{\pgfsys@beginscope\pgfsys@invoke{
}\pgfsys@setlinewidth{0.8pt}\pgfsys@invoke{ }
\pgfsys@beginscope\pgfsys@invoke{ }
{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}}
{{}}{}{}{{}}{{}}\hbox{\hbox{{\pgfsys@beginscope\pgfsys@invoke{ }{{}{{}}{}{}}
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope}}} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-7.11319pt}{2.84544pt}\pgfsys@lineto{0.0pt}{0.0pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{7.11319pt}{2.84544pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{-7.11319pt}{-11.38135pt}\pgfsys@lineto{0.0pt}{-8.5359pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{-8.5359pt}\pgfsys@lineto{7.11319pt}{-11.38135pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{7.11319pt}{-4.26773pt}\pgfsys@stroke\pgfsys@invoke{
} {{}}{}{{}}{}{{}}
{}{}{}\pgfsys@moveto{0.0pt}{0.0pt}\pgfsys@lineto{0.0pt}{-8.5359pt}\pgfsys@stroke\pgfsys@invoke{
} \pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope
\pgfsys@invoke{\lxSVG@closescope }\pgfsys@endscope{{
|
# Emergent Non-Abelian Gauge Theory in Coupled Spin-Electron Dynamics
Nicolas Lenzing
I. Institute of Theoretical Physics, Department of Physics, University of Hamburg, Notkestraße 9-11, 22607 Hamburg, Germany
Alexander I. Lichtenstein
I. Institute of Theoretical Physics, Department of Physics, University of Hamburg, Notkestraße 9-11, 22607 Hamburg, Germany
The Hamburg Centre for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg, Germany
Michael Potthoff
I. Institute of Theoretical Physics, Department of Physics, University of Hamburg, Notkestraße 9-11, 22607 Hamburg, Germany
The Hamburg Centre for Ultrafast Imaging, Luruper Chaussee 149, 22761 Hamburg, Germany
###### Abstract
A clear separation of the time scales governing the dynamics of “slow” and
“fast” degrees of freedom often serves as a prerequisite for the emergence of
an independent low-energy theory. Here, we consider (slow) classical spins
exchange coupled to a tight-binding system of (fast) conduction electrons. The
effective equations of motion are derived under the constraint that the
quantum state of the electron system at any instant of time $t$ lies in the
$n$-dimensional low-energy subspace for the corresponding spin configuration
at $t$. The effective low-energy theory unfolds itself straightforwardly and
takes the form of a non-abelian gauge theory with the gauge freedom given by
the arbitrariness of the basis spanning the instantaneous low-energy sector.
The holonomic constraint generates a gauge covariant spin-Berry curvature
tensor in the equations of motion for the classical spins. In the non-abelian
theory for $n>1$, as opposed to the $n=1$ adiabatic spin dynamics theory, the
spin-Berry curvature is generically nonzero, even for time-reversal symmetric
systems. Its expectation value in the representation of the electron state
is gauge invariant and gives rise to an additional geometrical spin torque.
Besides anomalous precession, the $n\geq 2$ theory also captures spin
nutation, which is usually considered a retardation effect. This
is demonstrated by proof-of-principle numerical calculations for a minimal
model with a single classical spin. Already for $n=2$ and in parameter regimes
where the $n=1$ adiabatic theory breaks down, we find good agreement with
results obtained from the full (unconstrained) theory.
## I Introduction
Classical spin models Nowak (2007); Bertotti _et al._ (2009) are a highly
useful and widely employed tool to understand the non-equilibrium dynamics of
magnetic materials. At the expense of disregarding the quantum nature of the
magnetic moments and related phenomena, such as the Kondo effect Kondo (1964);
Hewson (1993), they provide a numerically tractable framework for spin
dynamics on an atomistic length scale Tatara _et al._ (2008); Skubic _et
al._ (2008); Fähnle and Illg (2011); Evans _et al._ (2014). Typically,
classical spin models may comprise a short-range isotropic Heisenberg-type
exchange, various anisotropic couplings and long-range, e.g., dipole
interactions. The classical equations of motion are usually supplemented by
Gilbert-damping terms to account for dissipation effects.
Spin-only models can actually be seen as effective low-energy theories
emerging from a more fundamental level of modelling, where the local magnetic
moments (classical spins ${\boldsymbol{S}}_{i}$) at sites $i$ of a lattice are
coupled to the local spins ${\boldsymbol{s}}_{i}$ of a system of conduction
electrons via a local exchange coupling $J$. Such quantum-classical spin-
electron hybrid models are necessary to explain various phenomena, including
indirect spin exchange interactions, like the Ruderman-Kittel-Kasuya-Yosida
(RKKY) interaction rkk, Gilbert spin damping due to coupling to electronic
degrees of freedom llg , spin inertia effects (nutation) Butikov (2006);
Wegrowe and Ciornei (2012), and other more strongly retarded effective spin-
spin interactions mediated by the conduction-electron system.
The standard formal approach Onoda and Nagaosa (2006); Umetsu _et al._
(2012); Bhattacharjee _et al._ (2012); Sayad and Potthoff (2015); Bajpai and
Nikolic (2019) that achieves the derivation of the effective spin-only theory
is based on the (usually realistic) assumption that the local exchange
coupling $J$ is weak as compared to the typical energy scales of the electron
system. Consider the $s$-$d$ model vz with Hamiltonian $H=H_{\rm
el.}+J\sum_{i}{\boldsymbol{s}}_{i}{\boldsymbol{S}}_{i}$ as a prototype. The
torque on the classical spin at site $i$ is given by
$J\langle{\boldsymbol{s}}_{i}\rangle_{t}\times{\boldsymbol{S}}_{i}$, where the
expectation value of the local electron spin ${\boldsymbol{s}}_{i}$ at site
$i$ is obtained from the many-body state $|\Psi(t)\rangle$ of the electron
system (Hamiltonian $H_{\rm el.}$) at time $t$. Since the electron state
itself must be computed in the presence of the local exchange interaction term
$\propto J$ for the (time-dependent) classical spin configuration
$\\{{\boldsymbol{S}}\\}$, a retarded mutual effective interaction emerges.
This is uncovered, for example, by linear-response theory, i.e., by
lowest-order time-dependent perturbation theory in $J$. This leads to an
integro-differential equation of motion for ${\boldsymbol{S}}_{i}$,
$\dot{{\boldsymbol{S}}}_{i}(t)=J^{2}\sum_{i^{\prime}}\int_{0}^{t}dt^{\prime}\underline{\chi}_{ii^{\prime}}(t-t^{\prime})\boldsymbol{S}_{i^{\prime}}(t^{\prime})\times{\boldsymbol{S}}_{i}(t)$
(1)
which involves the retarded magnetic susceptibility tensor with elements
${\chi}_{ii^{\prime}}^{(\alpha\alpha^{\prime})}(t-t^{\prime})$ of the electron
ground state as the integral kernel ($\alpha,\alpha^{\prime}=x,y,z$). The
resulting spin dynamics is non-conservative; Eq. (1) describes an open
quantum system, as is known from Redfield theory Breuer and Petruccione
(2002).
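For reference, the integral kernel in Eq. (1) is the retarded spin susceptibility of the unperturbed electron system; with a standard Kubo convention (our assumption, not spelled out in the text) it reads
${\chi}_{ii^{\prime}}^{(\alpha\alpha^{\prime})}(t-t^{\prime})=-i\,\theta(t-t^{\prime})\,\langle\Psi_{0}|[\hat{s}^{\alpha}_{i}(t),\hat{s}^{\alpha^{\prime}}_{i^{\prime}}(t^{\prime})]|\Psi_{0}\rangle\>,$
where the operators evolve with $H_{\rm el.}$ alone and $|\Psi_{0}\rangle$ is the electron ground state; one factor of $J$ comes from the torque $J\langle{\boldsymbol{s}}_{i}\rangle_{t}\times{\boldsymbol{S}}_{i}$ and one from the perturbation, giving the overall $J^{2}$ in Eq. (1).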
Assuming that $\underline{\chi}_{ii^{\prime}}(t-t^{\prime})$ is strongly
peaked at $t^{\prime}=t$, we can replace
${\boldsymbol{S}}_{i^{\prime}}(t^{\prime})$ by the first few terms in its
Taylor expansion around $t^{\prime}=t$, i.e.,
${\boldsymbol{S}}_{i^{\prime}}(t^{\prime})\approx{\boldsymbol{S}}_{i^{\prime}}(t)+\dot{{\boldsymbol{S}}}_{i^{\prime}}(t)(t^{\prime}-t)+\ddot{{\boldsymbol{S}}}_{i^{\prime}}(t)(t^{\prime}-t)^{2}/2$.
Keeping the first term on the right-hand side only and extending the
integration over $t^{\prime}$ to infinity, one obtains an effective
Hamiltonian equation of motion for the spins ${\boldsymbol{S}}_{i}$, which
involves the instantaneous spin-spin interaction mediated by the RKKY coupling
$J_{ii^{\prime}}^{\rm(RKKY)}=J^{2}\chi_{ii^{\prime}}(\omega=0)$. Including the
second term in addition gives rise to a (non-local) Gilbert damping tensor
$\underline{\alpha}_{ii^{\prime}}=-iJ^{2}\partial_{\omega}\underline{\chi}_{ii^{\prime}}(\omega)|_{\omega=0}$,
while the third term leads to spin-inertia effects, i.e., additional nutation
of the spins. This derivation has been put forward in Refs. Bhattacharjee _et
al._ (2012); Sayad and Potthoff (2015) and can be employed in the context of
strongly correlated electron models Sayad _et al._ (2016a) or, when combined
with band-structure theory, for an ab initio computation of the Gilbert
damping Antropov _et al._ (1995); Kuneš and Kamberský (2002); Capelle and
Gyorffy (2003); Ebert _et al._ (2011). Nutation effects, as have been
discussed in Refs. Fähnle _et al._ (2011); Kikuchi and Tatara (2015); Sayad
_et al._ (2016b), for example, find a natural explanation in the same
framework set by Eq. (1). Furthermore, at least in principle, systematic
extensions of the resulting low-energy spin-only theory can be achieved by
taking into account terms of higher order in the expansion. One may also drop
the approximation on the $t^{\prime}$-integration range. This leads to a time-
dependent RKKY coupling $J^{(RKKY)}_{ii^{\prime}}(t)$ and a time-dependent
Gilbert damping $\underline{\alpha}_{ii^{\prime}}(t)$, as has been mentioned
in Refs. Sayad and Potthoff (2015); Bajpai and Nikolic (2019).
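Spelled out term by term (a sketch; the Fourier convention $\underline{\chi}_{ii^{\prime}}(\omega)=\int_{0}^{\infty}d\tau\,e^{i\omega\tau}\underline{\chi}_{ii^{\prime}}(\tau)$ is our assumption, chosen to match the signs quoted above), inserting the Taylor expansion into Eq. (1) with $\tau=t-t^{\prime}$ and extending the $\tau$-integration to infinity gives
$\dot{\boldsymbol{S}}_{i}(t)\approx\sum_{i^{\prime}}\Big[J^{\rm(RKKY)}_{ii^{\prime}}{\boldsymbol{S}}_{i^{\prime}}(t)-\underline{\alpha}_{ii^{\prime}}\dot{\boldsymbol{S}}_{i^{\prime}}(t)+\tfrac{1}{2}J^{2}\Big(\int_{0}^{\infty}d\tau\,\tau^{2}\,\underline{\chi}_{ii^{\prime}}(\tau)\Big)\ddot{\boldsymbol{S}}_{i^{\prime}}(t)\Big]\times{\boldsymbol{S}}_{i}(t)\>,$
with $J^{\rm(RKKY)}_{ii^{\prime}}=J^{2}\int_{0}^{\infty}d\tau\,\underline{\chi}_{ii^{\prime}}(\tau)=J^{2}\underline{\chi}_{ii^{\prime}}(\omega=0)$ and $\underline{\alpha}_{ii^{\prime}}=J^{2}\int_{0}^{\infty}d\tau\,\tau\,\underline{\chi}_{ii^{\prime}}(\tau)=-iJ^{2}\partial_{\omega}\underline{\chi}_{ii^{\prime}}(\omega)|_{\omega=0}$, reproducing the three effects (RKKY exchange, Gilbert damping, spin inertia) in the stated order.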
The above-sketched standard theory misses, however, an important effect
pointed out recently Stahl and Potthoff (2017): The slow dynamics of the
classical spins results in a non-trivial Berry curvature of the electronic
quantum system, as has long been known Berry (1984); Xiao _et al._
(2010). Quite generally, however, this Berry curvature in turn has a feedback
on the classical spin dynamics Wen and Zee (1988); Niu and Kleinman (1998);
Bohm _et al._ (2003); Niu _et al._ (1999); Stahl and Potthoff (2017).
Namely, there is a geometrical spin torque which comes with the same prefactor
$J^{2}$ as the RKKY coupling and the Gilbert damping. This torque can give
rise to unconventional spin dynamics as has been demonstrated Stahl and
Potthoff (2017); Bajpai and Nikolić (2020) not only for a quantum-classical
system as is considered here as well, but also for slow classical spins
locally exchange coupled to a system of fast classical spins Elbracht _et
al._ (2020); Michel and Potthoff (2021) and even for the dynamics of a quantum
spin in a Kondo model Stahl and Potthoff (2017).
This geometrical spin torque emerges in an effective low-energy spin-only
theory that is derived by starting from the full theory of classical spins coupled
to conduction electrons by imposing the constraint that, at any instant of
time $t$, the electron system is in its ground state, i.e.,
$|\Psi(t)\rangle=|\Psi_{0}(\\{{\boldsymbol{S}}(t)\\})\rangle$, for the spin
configuration $\\{{\boldsymbol{S}}(t)\\}$ at time $t$. This is analogous to
molecular dynamics approaches Marx and Hutter (2000); Bohm _et al._ (2003);
Zhang and Wu (2006) where the slow nuclear coordinates are treated
classically. If the exchange coupling $J$ is weak, the classical spin dynamics
is slow compared to the typical energy scales of the electron system. The
adiabatic spin dynamics (ASD) thus addresses the same parameter regime as the
standard perturbative linear-response approach discussed above.
With the present paper we explore a systematic extension of the ASD by
relaxing the adiabatic constraint. The impact of electronic low-energy
excitations from the instantaneous ground state
$|\Psi_{0}(\\{{\boldsymbol{S}}(t)\\})\rangle$ on the classical spin dynamics
can be taken into account by imposing, as a weaker constraint, that the
electron state $|\Psi(t)\rangle$ be at time $t$ in the subspace of the Fock
space spanned by the first $n>1$ eigenstates of the Hamiltonian for the spin
configuration $\\{{\boldsymbol{S}}(t)\\}$ at $t$. This beyond-adiabatic
constraint leads to a non-abelian Berry connection and curvature Xiao _et
al._ (2010); Wilczek and Zee (1984). Here, we will work out the general
formalism of the non-abelian gauge theory that emerges as the effective low-
energy theory. The formally correct incorporation of the constraint is
achieved within conventional Lagrange formalism. A simple toy model will be
considered and solved numerically to study the effect of the geometric torque
on the classical spin dynamics in the non-abelian case. We discuss the
anomalies in the precessional spin dynamics and demonstrate that spin nutation
arises naturally in our framework. The previously developed ASD represents the
$n=1$ limit of our non-abelian spin-dynamics (NA-SD) theory. In the ASD for a
single classical spin, the presence of an anomalous precession frequency has
been found Stahl and Potthoff (2017) for an odd number of conduction electrons
only, while the full solution of the coupled equations of motion for spin and
electron dynamics yields an anomalous frequency for both odd and even
electron numbers. In the broader framework of NA-SD we can resolve this open
issue.
The paper is organized as follows: The next section II presents the general
Hamiltonian and Lagrangian formulation of the theory. The equations of motion
of the non-abelian gauge theory in the instantaneous low-energy sector are
worked out in Sec. III, and various formal aspects of the theory are discussed
in Sec. IV. Sections V and VI are particularly devoted to a discussion of the
impact of time-reversal symmetry and of gauge transformations, respectively. A
minimal model, suitable for proof-of-principle studies, is introduced in Sec.
VII. In Sec. VIII we present and discuss the results of numerical
calculations. Conclusions are given in Sec. IX.
## II General Theory
Geometric forces or torques originate in the adiabatic limit of hybrid systems
consisting of quantum degrees of freedom interacting with classical degrees of
freedom. Here, we consider a quantum lattice model of $N$ conduction electrons
interacting with $M$ classical “spins” ${\boldsymbol{S}}_{m}$ of unit length
$|{\boldsymbol{S}}_{m}|=1$. The system dynamics is governed by a quantum-
classical Hamiltonian of the form
$\hat{H}(\\{\boldsymbol{S}\\})=\hat{H}_{\text{qu}}+{H}_{\text{cl}}(\\{\boldsymbol{S}\\})+\hat{H}_{\rm
int}(\\{\boldsymbol{S}\\})\>.$ (2)
The quantum Hamiltonian $\hat{H}_{\text{qu}}$ is constructed in terms of
fermion creation and annihilation operators
$c^{\dagger}_{{\boldsymbol{r}}\sigma}$ and $c_{{\boldsymbol{r}}\sigma}$, where
${\boldsymbol{r}}$ refers to the sites of the lattice and
$\sigma=\uparrow,\downarrow$ is the spin projection. Additional orbital
degrees of freedom may be considered as well. The formulation of the theory is
largely independent of $\hat{H}_{\text{qu}}$ but requires a well-defined local
quantum spin ${\boldsymbol{s}}_{{\boldsymbol{r}}}$ at lattice site
${\boldsymbol{r}}$:
${\boldsymbol{s}}_{{\boldsymbol{r}}}=\frac{1}{2}\sum_{\sigma\sigma^{\prime}}c^{\dagger}_{{\boldsymbol{r}}\sigma}{\boldsymbol{\sigma}}_{\sigma\sigma^{\prime}}c_{{\boldsymbol{r}}\sigma^{\prime}}\>.$
(3)
Here, ${\boldsymbol{\sigma}}$ is the vector of $2\times 2$ Pauli matrices (and
$\hbar\equiv 1$).
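Written out in components, Eq. (3) gives the familiar expressions $s^{z}_{{\boldsymbol{r}}}=\frac{1}{2}(n_{{\boldsymbol{r}}\uparrow}-n_{{\boldsymbol{r}}\downarrow})$ with $n_{{\boldsymbol{r}}\sigma}=c^{\dagger}_{{\boldsymbol{r}}\sigma}c_{{\boldsymbol{r}}\sigma}$, and $s^{x}_{{\boldsymbol{r}}}=\frac{1}{2}(c^{\dagger}_{{\boldsymbol{r}}\uparrow}c_{{\boldsymbol{r}}\downarrow}+c^{\dagger}_{{\boldsymbol{r}}\downarrow}c_{{\boldsymbol{r}}\uparrow})$, $s^{y}_{{\boldsymbol{r}}}=\frac{1}{2i}(c^{\dagger}_{{\boldsymbol{r}}\uparrow}c_{{\boldsymbol{r}}\downarrow}-c^{\dagger}_{{\boldsymbol{r}}\downarrow}c_{{\boldsymbol{r}}\uparrow})$.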
The dynamics of the subsystem of $M$ classical spins
$\\{{\boldsymbol{S}}\\}\equiv\\{{\boldsymbol{S}}_{1},...,{\boldsymbol{S}}_{M}\\}$
derives from a classical Hamilton function
${H}_{\text{cl}}(\\{\boldsymbol{S}\\})$ and may comprise an external magnetic
field and isotropic or anisotropic spin exchange couplings. The third term in
Eq. (2) represents a quantum-classical interaction term. Here, we choose an
isotropic local exchange interaction
$\hat{H}_{\text{int}}(\\{\boldsymbol{S}\\})=J\sum_{m=1}^{M}\boldsymbol{S}_{m}\boldsymbol{s}_{{\boldsymbol{r}}_{m}}\>,$
(4)
between the $m$-th classical spin ${\boldsymbol{S}}_{m}$ and the local spin
${\boldsymbol{s}}_{{\boldsymbol{r}}_{m}}$ of the conduction-electron system at
the site ${\boldsymbol{r}}_{m}$. The coupling strength is $J>0$. The theory is
developed for an arbitrary number of classical spins $M$, but we will later
focus on a single-classical-spin Kondo model ($M=1$) for the sake of
simplicity.
If the classical spins $\\{{\boldsymbol{S}}\\}$ were replaced by quantum
spins, Eq. (2) would represent the Hamiltonian of the multi-impurity or
lattice Kondo model. With the classical-spin approximation we disregard
typical correlation effects, such as Kondo screening and heavy-fermion
behavior, and hence we are essentially working on a mean-field-type level. The
approximation may be justified in cases where there are well-formed spin
moments which are stable on time scales exceeding all remaining time scales of
the problem, e.g., in cases, where the Kondo effect is suppressed by magnetism
or in case of quantum spins with large spin quantum numbers. An example has
been given in Ref. Sayad _et al._ (2016a), where anomalous quantum-classical
dynamics due to a geometrical torque has also been found in the corresponding
full quantum system. A consistent theory for a system that is entirely quantum
with at least two largely different time scales has yet to be developed. This
means that the presence of slow classical degrees of freedom is necessarily
required for the very concept of geometrical forces and torques. The classical
degrees of freedom are required to define the smooth manifold onto which the
quantum dynamics is restricted in the adiabatic limit.
A pure state of the quantum-classical hybrid system at time $t$ is specified
by a Hilbert-space vector $|\Psi(t)\rangle$ and by the classical spin
configuration $\\{{\boldsymbol{S}}(t)\\}$, see Refs. Heslot (1985); Hall
(2008); Elze (2012) for a general discussion of hybrid dynamics. The
trajectory of the system state is obtained as the solution of a system of
coupled ordinary differential equations. These consist of the Schrödinger
equation, involving the quantum Hamiltonian and the interaction term, which
depends on the classical-spin configuration,
$i\partial_{t}\ket{\Psi(t)}=[\hat{H}_{\text{qu}}+\hat{H}_{\rm
int}(\\{\boldsymbol{S}(t)\\})]\ket{\Psi(t)}\>,$ (5)
and the Hamilton equations of motion for the classical-spin configuration,
involving the classical Hamilton function and the expectation value of the
interaction term in the quantum state $|\Psi(t)\rangle$:
$\dot{{\boldsymbol{S}}}_{m}(t)=\Big{\\{}{\boldsymbol{S}}_{m}(t),{H}_{\text{cl}}(\\{\boldsymbol{S}(t)\\})+\langle\hat{H}_{\rm
int}(\\{\boldsymbol{S}(t)\\})\rangle\Big{\\}}_{S}\>.$ (6)
Here, the dot denotes the time derivative, and $\\{\cdot,\cdot\\}_{S}$ is the
Poisson bracket. In the case of spin systems, the latter is defined for two
arbitrary functions $A(\\{\boldsymbol{S}\\})$ and $B(\\{\boldsymbol{S}\\})$ as
Bulgac and Kusnezov (1990)
$\\{A,B\\}_{S}=\sum_{m}\frac{\partial
A}{\partial{\boldsymbol{S}}_{m}}\times\frac{\partial
B}{\partial{\boldsymbol{S}}_{m}}\cdot{\boldsymbol{S}}_{m}\>.$ (7)
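The bracket of Eq. (7) is easily evaluated numerically. The following minimal sketch (our own illustration, not part of the paper; the function name and array layout are arbitrary) checks the familiar single-spin relation $\{S_{x},S_{y}\}_{S}=S_{z}$:

```python
import numpy as np

def spin_poisson_bracket(grad_A, grad_B, S):
    """Evaluate {A,B}_S = sum_m (dA/dS_m x dB/dS_m) . S_m, cf. Eq. (7).

    grad_A, grad_B, S: arrays of shape (M, 3) holding the gradients
    dA/dS_m, dB/dS_m and the classical spin configuration."""
    return float(np.sum(np.cross(grad_A, grad_B) * S))

# single classical spin (M = 1) of unit length
S = np.array([[0.3, -0.4, np.sqrt(1.0 - 0.25)]])
grad_Sx = np.array([[1.0, 0.0, 0.0]])   # gradient of A(S) = S_x
grad_Sy = np.array([[0.0, 1.0, 0.0]])   # gradient of B(S) = S_y
# {S_x, S_y}_S = (e_x x e_y) . S = S_z
print(spin_poisson_bracket(grad_Sx, grad_Sy, S))
```

The antisymmetry of the bracket follows directly from the antisymmetry of the cross product.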
The coupled equations of motion, Eq. (5) and Eq. (6), are generated as Euler-
Lagrange equations by requiring stationarity of an action functional ${\cal
S}=\int Ldt$ with the Lagrangian
$L=L(\\{\boldsymbol{S}\\},\\{\dot{\boldsymbol{S}}\\},\ket{\Psi},\dot{\ket{\Psi}},\bra{\Psi},\dot{\bra{\Psi}})$:
$L=\sum_{m}\boldsymbol{A}(\boldsymbol{S}_{m})\dot{\boldsymbol{S}}_{m}+\bra{\Psi(t)}i\partial_{t}-\hat{H}\ket{\Psi(t)}\>.$
(8)
Here, ${\boldsymbol{A}}({\boldsymbol{S}})$ is a function satisfying
$\nabla\times{\boldsymbol{A}}({\boldsymbol{S}})=-{\boldsymbol{S}}/S^{3}$,
which can thus be interpreted as the vector potential of a unit magnetic
monopole located at ${\boldsymbol{S}}=0$. We have
${\boldsymbol{A}}({\boldsymbol{S}})=-\frac{1}{S^{2}}\frac{{\boldsymbol{e}}\times{\boldsymbol{S}}}{1+{\boldsymbol{e}}{\boldsymbol{S}}/S}\;,$
(9)
with a unit vector ${\boldsymbol{e}}$. In the standard gauge Dirac (1931) this
is chosen as ${\boldsymbol{e}}={\boldsymbol{e}}_{z}$. In this gauge, another
representation is
${\boldsymbol{A}}({\boldsymbol{S}})=-(1/S)\tan(\vartheta/2){\boldsymbol{e}}_{\varphi}$,
using spherical coordinates $(S,\vartheta,\varphi)$. For details of deriving
Eq. (5) and Eq. (6) from $\delta{\cal S}=0$, see Ref. Elbracht _et al._
(2020) (supplemental material).
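The monopole property $\nabla\times{\boldsymbol{A}}({\boldsymbol{S}})=-{\boldsymbol{S}}/S^{3}$ of Eq. (9) can be checked numerically. The sketch below (our own illustration; names and step sizes are arbitrary) compares a central-difference curl of Eq. (9) in the standard gauge ${\boldsymbol{e}}={\boldsymbol{e}}_{z}$ with the monopole field:

```python
import numpy as np

e = np.array([0.0, 0.0, 1.0])           # gauge vector e = e_z (standard gauge)

def A(S):
    """Vector potential of Eq. (9): A(S) = -(1/S^2) (e x S) / (1 + e.S/S)."""
    Snorm = np.linalg.norm(S)
    return -np.cross(e, S) / (Snorm**2 * (1.0 + np.dot(e, S) / Snorm))

def curl_A(S, h=1e-5):
    """Central-difference curl (nabla_S x A)(S)."""
    J = np.zeros((3, 3))                # J[i, j] = dA_i / dS_j
    for j in range(3):
        dS = np.zeros(3); dS[j] = h
        J[:, j] = (A(S + dS) - A(S - dS)) / (2.0 * h)
    return np.array([J[2, 1] - J[1, 2], J[0, 2] - J[2, 0], J[1, 0] - J[0, 1]])

S = np.array([0.6, 0.5, 0.4])           # any point away from the Dirac string
print(curl_A(S), -S / np.linalg.norm(S)**3)   # the two should agree
```

The check fails only on the string singularity at ${\boldsymbol{e}}{\boldsymbol{S}}/S=-1$, where the gauge is ill-defined.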
We will address the parameter regime of the Hamiltonian, where the system
dynamics is characterized by two strongly different time scales, a slow spin
dynamics and a fast dynamics of the electron state, which almost
instantaneously follows the motion of the spins. In the extreme adiabatic
limit, the quantum many-body state $|\Psi(t)\rangle$ of the electron system at
time $t$ is given by the ground state,
$|\Psi_{0}(\\{{\boldsymbol{S}}(t)\\})\rangle$ of
$\hat{H}_{\text{qu}}+\hat{H}_{\text{int}}(\\{\boldsymbol{S}(t)\\})$, for the
spin configuration $\\{{\boldsymbol{S}}(t)\\}$ at time $t$. When approaching
the adiabatic limit in parameter space, the fast electron dynamics will be
more and more constrained to the ground manifold
$\\{|\Psi_{0}(\\{{\boldsymbol{S}}(t)\\})\rangle\\}$. Adiabatic spin-dynamics
(ASD) theory Stahl and Potthoff (2017); Elbracht _et al._ (2020); Michel and
Potthoff (2021) assumes that the dynamics is perfectly constrained to the
ground-state manifold and employs
$|\Psi(t)\rangle=|\Psi_{0}(\\{{\boldsymbol{S}}(t)\\})\rangle$ (10)
as a holonomic constraint to completely eliminate the electron degrees of
freedom from the Lagrangian Eq. (8). In this way, one arrives at a spin-only
effective Lagrangian $L_{\rm
eff}(\\{{\boldsymbol{S}}\\},\\{\dot{{\boldsymbol{S}}}\\})$, and the resulting
effective equations of motion include the geometrical spin torque as an
holonomy effect Stahl and Potthoff (2017). The unconventional spin dynamics
originating from the corresponding geometrical spin torque is missed by other
approaches, such as the standard linear-response approach to a spin-only
theory that has been discussed in the introduction. On the other hand,
retardation effects, e.g., nutational motion, are excluded within ASD by the
very construction.
The validity of the basic assumption, Eq. (10), strongly depends on the
specific system and on the parameter range considered. Even for
gapped systems, however, the strict adiabatic approximation is never perfectly
satisfied, and the true slow spin dynamics will be affected to some degree by
admixtures from (low-energy) excited electron states. As a systematic
generalization of ASD, we therefore propose to relax the constraint Eq. (10)
and to replace it by the weaker constraint
$\ket{\Psi(t)}=\sum_{i=0}^{n-1}\alpha_{i}(t)\ket{\Psi_{i}(\\{\boldsymbol{S}(t)\\})}\>.$
(11)
Here, $\ket{\Psi_{i}(\\{\boldsymbol{S}(t)\\})}$ is the $i$-th excited state of
$\hat{H}_{\text{qu}}+\hat{H}_{\text{int}}(\\{\boldsymbol{S}(t)\\})$, i.e., we
assume that at any instant of time $t$ the conduction-electron state
$\ket{\Psi(t)}$ is contained in the low-energy subspace ${\cal
E}_{n}(\\{{\boldsymbol{S}}\\})$ spanned by the instantaneous ground state and
the lowest $n-1$ instantaneous eigenstates for the spin configuration
$\\{{\boldsymbol{S}}\\}=\\{{\boldsymbol{S}}(t)\\}$ at time $t$. Choosing a
fixed orthonormal basis
$\\{\ket{\Psi_{i}(\\{\boldsymbol{S}\\})}\,|\,{i=0,...,n-1}\\}$ (12)
of ${\cal E}_{n}(\\{{\boldsymbol{S}}\\})$ for any spin configuration, the
electron state at time $t$ is fully specified by the set of expansion
coefficients $\\{\alpha(t)\\}\equiv\\{\alpha_{0}(t),...,\alpha_{n-1}(t)\\}$
via Eq. (11).
For $n=1$, we recover conventional ASD, and thus obtain a true spin-only
theory. For small $n>1$, the effective Lagrangian is obtained from Eq. (8) by
substituting $\ket{\Psi}$, $\partial_{t}|{\Psi}\rangle$, $\bra{\Psi}$,
$\partial_{t}\langle{\Psi}|$ using Eq. (11). It thereby becomes a function of
$\\{{\boldsymbol{S}}\\}$ and $\\{\dot{{\boldsymbol{S}}}\\}$ and furthermore a
function of the set of expansion coefficients $\\{\alpha\\}$, i.e., we get
$L_{\text{eff}}=L_{\text{eff}}(\\{\boldsymbol{S}\\},\\{\dot{\boldsymbol{S}}\\},\\{\alpha\\},\\{\alpha^{\ast}\\},\\{\dot{\alpha}\\},\\{\dot{\alpha}^{\ast}\\})$.
Hence, besides the spin degrees of freedom, the resulting low-energy theory
contains a few electronic degrees of freedom as well.
We also define the eigenenergies $E_{i}=E_{i}(\\{{\boldsymbol{S}}\\})$ of
$\hat{H}_{\text{qu}}+\hat{H}_{\text{int}}(\\{\boldsymbol{S}(t)\\})$
corresponding to the basis states $|\Psi_{i}(\\{\boldsymbol{S}\\})\rangle$.
$E_{i}(\\{{\boldsymbol{S}}\\})$ is the analog of the $i$-th potential-energy
(Born-Oppenheimer) surface known from molecular-dynamics theory Marx and
Hutter (2000); Bohm _et al._ (2003). The spin configuration
$\\{{\boldsymbol{S}}\\}$ takes the role of the configuration of atomic nuclei.
Note that the strict adiabatic approximation, Eq. (10), becomes invalid if
the trajectory of the spin configuration $\\{{\boldsymbol{S}}(t)\\}$ passes a
configuration $\\{{\boldsymbol{S}}_{\rm cr}\\}$, at which there is a crossing
of the ground state with the first excited state, i.e.,
$E_{0}(\\{{\boldsymbol{S}}_{\rm cr}\\})=E_{1}(\\{{\boldsymbol{S}}_{\rm
cr}\\})$, since this is in conflict with the adiabatic theorem Kato (1950);
Avron and Elgart (1999); Comparat (2009).
For $n>1$, the relaxed condition (11) corresponds to a generalized adiabatic
theorem, see Ref. Kato (1950), stating that the condition is respected if the
low-energy sector ${\cal E}_{n}(\\{{\boldsymbol{S}}\\})$ and its orthogonal
complement (the “high-energy sector”) remain gapped for all
$\\{{\boldsymbol{S}}(t)\\}$ and, of course, if the electron dynamics is
sufficiently slow. In other words, for a given $n$, NA-SD applies if there is
no crossing $E_{n-1}(\\{{\boldsymbol{S}}_{\rm
cr}\\})=E_{n}(\\{{\boldsymbol{S}}_{\rm cr}\\})$, while crossings of states
within the low-energy sector are irrelevant. One should note, however, that a
crossing of two states belonging to the low- and the high-energy sector,
respectively, is in fact unproblematic if the expansion coefficient
$\alpha_{n-1}(t)=0$ for all $t$, since in this case the $(n-1)$-th excited
eigenstate would not contribute to $|\Psi(t)\rangle$ anyway. This argument can
be extended to $k<n-1$, as long as there are crossings between “unoccupied”
states with $\alpha_{i}(t)=0$ and $\alpha_{j}(t)=0$ for $k\leq i,j\leq n$
only. We conclude that the relaxed condition (11) for $n>1$ also implies a
less severe, relaxed approximation.
## III Effective equations of motion
The effective Lagrangian that is obtained by using the constraint Eq. (11) to
eliminate $\ket{\Psi(t)}$ from the original Lagrangian Eq. (8), is given by:
$L_{\text{eff}}=L_{\text{eff}}(\\{\boldsymbol{S}\\},\\{\dot{\boldsymbol{S}}\\},\\{\alpha\\},\\{\alpha^{\ast}\\},\\{\dot{\alpha}\\},\\{\dot{\alpha}^{\ast}\\})=\sum_{m}\boldsymbol{A}_{m}(\boldsymbol{S}_{m})\dot{\boldsymbol{S}}_{m}+i\sum_{ij}\alpha_{i}^{\ast}\bra{\Psi_{i}}\partial_{t}(\alpha_{j}\ket{\Psi_{j}})-\sum_{ij}\alpha_{i}^{\ast}\alpha_{j}\bra{\Psi_{i}}\hat{H}\ket{\Psi_{j}}\>,$
(13)
where $\ket{\Psi_{i}}=\ket{\Psi_{i}(\\{\boldsymbol{S}_{m}\\})}$, and where the
$\\{\dot{{\boldsymbol{S}}}\\}$-dependence, besides the first term, is due to
$\bra{\Psi_{i}}\partial_{t}\ket{\Psi_{j}}=\sum_{m}\bra{\Psi_{i}}\partial_{{\boldsymbol{S}}_{m}}\ket{\Psi_{j}}\dot{{\boldsymbol{S}}}_{m}$.
The Euler-Lagrange equation $\partial_{t}(\partial
L_{\text{eff}}/\partial\dot{\alpha}^{\ast}_{i})-\partial
L_{\text{eff}}/\partial\alpha^{\ast}_{i}=0$ for the “wave function”
$\alpha_{i}$ is straightforwardly obtained as:
$i\partial_{t}\alpha_{i}=\sum_{j}\bra{\Psi_{i}}(\hat{H}_{\text{qu}}+\hat{H}_{\text{int}})\ket{\Psi_{j}}\alpha_{j}-i\sum_{m}\sum_{j}\alpha_{j}\bra{\Psi_{i}}\partial_{{\boldsymbol{S}}_{m}}\ket{\Psi_{j}}\dot{{\boldsymbol{S}}}_{m}\>.$
(14)
The complex conjugate of this equation is just the equation of motion that is
obtained for $\alpha_{i}^{\ast}$.
Note that the second term involves the non-abelian spin-Berry connection
$\underline{{\boldsymbol{C}}}_{m}=\underline{{\boldsymbol{C}}}_{m}(\\{{\boldsymbol{S}}\\})$.
As opposed to the (abelian) spin-Berry connection
${\boldsymbol{C}}_{m}=i\langle\Psi_{0}|\partial_{{\boldsymbol{S}}_{m}}|\Psi_{0}\rangle$
of the (abelian) ASD, this is, for each $m$, a matrix-valued vector with
elements:
${\boldsymbol{C}}^{(ij)}_{m}=i\bra{\Psi_{i}}\partial_{{\boldsymbol{S}}_{m}}\ket{\Psi_{j}}=i\sum_{\gamma}\bra{\Psi_{i}}\partial_{S_{m\gamma}}\ket{\Psi_{j}}{\boldsymbol{e}}_{\gamma}=\sum_{\gamma}C^{(ij)}_{m\gamma}{\boldsymbol{e}}_{\gamma}\>.$
(15)
The matrix dimension is given by the dimension of the low-energy subspace
$n=\dim{\cal E}_{n}(\\{{\boldsymbol{S}}\\})$. It is easy to see that, for each
$m$ and $\gamma$, $\underline{C}_{m\gamma}$ is a Hermitian matrix. Its
transformation behavior under gauge transformations will be
discussed in Sec. VI.
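As an illustration of Eq. (15) (our own toy example, not from the paper), the off-diagonal elements of the spin-Berry connection can be computed for a single classical spin coupled to one quantum spin-$1/2$, $\hat{H}=J{\boldsymbol{S}}{\boldsymbol{\sigma}}/2$, via the standard identity $\langle\Psi_{i}|\partial_{{\boldsymbol{S}}_{m}}\Psi_{j}\rangle=\langle\Psi_{i}|\partial_{{\boldsymbol{S}}_{m}}\hat{H}|\Psi_{j}\rangle/(E_{j}-E_{i})$, valid for $E_{i}\neq E_{j}$. The sketch confirms the Hermiticity $C^{(ij)}_{m\gamma}=(C^{(ji)}_{m\gamma})^{\ast}$:

```python
import numpy as np

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = [sx, sy, sz]

J = 0.7
S = np.array([0.6, -0.3, 0.5])                  # classical spin configuration

H = J * sum(S[g] * pauli[g] for g in range(3)) / 2.0
E, V = np.linalg.eigh(H)                        # instantaneous eigenbasis

def C_element(i, j, gamma):
    """Off-diagonal spin-Berry connection C^(ij)_gamma, Eq. (15), evaluated
    as i <Psi_i| dH/dS_gamma |Psi_j> / (E_j - E_i)  (valid for E_i != E_j)."""
    dH = J * pauli[gamma] / 2.0
    return 1j * (V[:, i].conj() @ dH @ V[:, j]) / (E[j] - E[i])

for g in range(3):
    # Hermiticity: C^(01)_gamma equals the complex conjugate of C^(10)_gamma
    print(g, C_element(0, 1, g), np.conj(C_element(1, 0, g)))
```

Only the off-diagonal elements are accessible this way; the diagonal elements are gauge-dependent and require a fixed smooth phase convention.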
We proceed by deriving the second set of equations of motion from the
effective Lagrangian $\partial_{t}(\partial L_{\rm
eff}/\partial\dot{{\boldsymbol{S}}}_{m})-\partial L_{\rm
eff}/\partial{\boldsymbol{S}}_{m}=0$. With Eq. (13) we straightforwardly find:
$\frac{\partial
L_{\text{eff}}}{\partial\boldsymbol{S}_{m}}=\frac{\partial}{\partial{\boldsymbol{S}}_{m}}({\boldsymbol{A}}_{m}\dot{{\boldsymbol{S}}}_{m})+i\sum_{k}\sum_{ij}\alpha^{\ast}_{i}\alpha_{j}\frac{\partial}{\partial{\boldsymbol{S}}_{m}}(\bra{\Psi_{i}}\partial_{{\boldsymbol{S}}_{k}}\ket{\Psi_{j}}\dot{{\boldsymbol{S}}}_{k})-\sum_{ij}\alpha^{\ast}_{i}\alpha_{j}\frac{\partial}{\partial{\boldsymbol{S}}_{m}}(\bra{\Psi_{i}}\hat{H}\ket{\Psi_{j}})\>,$
(16)
and with $\partial
L_{\text{eff}}/\partial\dot{\boldsymbol{S}}_{m}={\boldsymbol{A}}_{m}+i\sum_{ij}\alpha^{\ast}_{i}\alpha_{j}\bra{\Psi_{i}}\partial_{{\boldsymbol{S}}_{m}}\ket{\Psi_{j}}$,
$\frac{\text{d}}{\text{d}t}\frac{\partial
L_{\text{eff}}}{\partial\dot{\boldsymbol{S}}_{m}}=\sum_{\gamma}\frac{\partial{\boldsymbol{A}}_{m}}{\partial
S_{m\gamma}}\dot{S}_{m\gamma}+i\sum_{ij}[(\partial_{t}\alpha^{\ast}_{i})\alpha_{j}+\alpha^{\ast}_{i}(\partial_{t}\alpha_{j})]\bra{\Psi_{i}}\partial_{{\boldsymbol{S}}_{m}}\ket{\Psi_{j}}+i\sum_{k}\sum_{ij}\alpha^{\ast}_{i}\alpha_{j}(\dot{{\boldsymbol{S}}}_{k}\partial_{{\boldsymbol{S}}_{k}})(\bra{\Psi_{i}}\partial_{{\boldsymbol{S}}_{m}}\ket{\Psi_{j}})\>.$
(17)
Both Eq. (16) and Eq. (17) involve the spin-Berry connection. The third term
in Eq. (16) can be rewritten using the identity
$\frac{\partial}{\partial{\boldsymbol{S}}_{m}}(\bra{\Psi_{i}}\hat{H}\ket{\Psi_{j}})=\bra{\Psi_{i}}(\partial_{{\boldsymbol{S}}_{m}}\hat{H})\ket{\Psi_{j}}-(E_{j}-E_{i})\bra{\Psi_{i}}\partial_{{\boldsymbol{S}}_{m}}\ket{\Psi_{j}}\>,$
(18)
and for the second term in Eq. (17) it is convenient to get rid of the time
derivatives by using
$i[(\partial_{t}\alpha^{\ast}_{i})\alpha_{j}+\alpha^{\ast}_{i}(\partial_{t}\alpha_{j})]=i\sum_{k}\sum_{l}\left[\alpha^{\ast}_{l}\alpha_{j}\bra{\Psi_{l}}\partial_{{\boldsymbol{S}}_{k}}\ket{\Psi_{i}}-\alpha^{\ast}_{i}\alpha_{l}\bra{\Psi_{j}}\partial_{{\boldsymbol{S}}_{k}}\ket{\Psi_{l}}\right]\dot{{\boldsymbol{S}}}_{k}+\alpha^{\ast}_{i}\alpha_{j}(E_{j}-E_{i})\>,$
(19)
which directly follows from the equation of motion for the wave functions Eq.
(14). Therewith, we arrive at
$\displaystyle 0$ $\displaystyle=\frac{\text{d}}{\text{d}t}\frac{\partial
L_{\text{eff}}}{\partial\dot{\boldsymbol{S}}_{m}}-\frac{\partial
L_{\text{eff}}}{\partial\boldsymbol{S}_{m}}$
$\displaystyle=\sum_{\beta\gamma}\left(\frac{\partial A_{m\beta}}{\partial
S_{m\gamma}}-\frac{\partial A_{m\gamma}}{\partial
S_{m\beta}}\right)\dot{S}_{m\gamma}\hat{e}_{\beta}+\sum_{ij}\alpha^{\ast}_{i}\alpha_{j}\bra{\Psi_{i}}(\partial_{{\boldsymbol{S}}_{m}}\hat{H})\ket{\Psi_{j}}$
$\displaystyle+i\sum_{k}\sum_{ij}\sum_{\gamma}\alpha^{\ast}_{i}\alpha_{j}\left[\partial_{S_{k\gamma}}(\bra{\Psi_{i}}\partial_{{\boldsymbol{S}}_{m}}\ket{\Psi_{j}})-\partial_{{\boldsymbol{S}}_{m}}(\bra{\Psi_{i}}\partial_{S_{k\gamma}}\ket{\Psi_{j}})\right]\dot{S}_{k\gamma}$
$\displaystyle+i\sum_{k}\sum_{ijl}\sum_{\gamma}\left[\alpha^{\ast}_{l}\alpha_{j}\bra{\Psi_{l}}\partial_{S_{k\gamma}}\ket{\Psi_{i}}-\alpha^{\ast}_{i}\alpha_{l}\bra{\Psi_{j}}\partial_{S_{k\gamma}}\ket{\Psi_{l}}\right]\bra{\Psi_{i}}\partial_{{\boldsymbol{S}}_{m}}\ket{\Psi_{j}}\dot{S}_{k\gamma}\>.$
(20)
The first term on the right-hand side is a twofold cross product,
$-\dot{\boldsymbol{S}}_{m}\times(\nabla_{\boldsymbol{S}_{m}}\times\boldsymbol{A}_{m})$,
and with Eq. (9) and with the normalization $|{\boldsymbol{S}}_{m}|=1$, the
curl can be written as
$\nabla\times{\boldsymbol{A}}({\boldsymbol{S}})=-{\boldsymbol{S}}$. The second
term is the expectation value $\langle\partial_{{\boldsymbol{S}}_{m}}\hat{H}\rangle$
of the “effective field” $\partial_{{\boldsymbol{S}}_{m}}\hat{H}$ in the state of
the electron system $|\Psi\rangle$, see Eq. (11). With Eq. (15), the third
term reads
$\sum_{k}\sum_{ij}\sum_{\gamma}\alpha^{\ast}_{i}\alpha_{j}\left[\partial_{S_{k\gamma}}{\boldsymbol{C}}^{(ij)}_{m}-\partial_{{\boldsymbol{S}}_{m}}C^{(ij)}_{k\gamma}\right]\dot{S}_{k\gamma}$.
Its $\beta$-th component involves the “curl”
$\underline{\Omega}^{\rm(A)}_{k\gamma,m\beta}=\partial_{S_{k\gamma}}\underline{C}_{m\beta}-\partial_{S_{m\beta}}\underline{C}_{k\gamma}$
(21)
of the spin-Berry connection. Here, the underlines indicate that the spin-
Berry connection and its curl are matrices in the indices $i,j$ labelling the
basis of the low-energy subspace for given spin configuration.
$\underline{\Omega}^{\rm(A)}$ has the form of the spin-Berry curvature in the
abelian ($n=1$) theory. We refer to this as the “abelian spin-Berry
curvature”. Again with Eq. (15), the $\beta$-th component of the fourth term
in Eq. (20) reads
$-i\sum_{k}\sum_{ijl}\sum_{\gamma}\left[\alpha^{\ast}_{l}\alpha_{j}C_{k\gamma}^{(li)}-\alpha^{\ast}_{i}\alpha_{l}C_{k\gamma}^{(jl)}\right]C_{m\beta}^{(ij)}\dot{S}_{k\gamma}$.
This involves the commutator
$[\underline{C}_{k\gamma},\underline{C}_{m\beta}]$ of the spin-Berry
connection.
We define the (non-abelian) spin-Berry curvature
$\underline{\Omega}_{k\gamma,m\beta}=\partial_{S_{k\gamma}}\underline{C}_{m\beta}-\partial_{S_{m\beta}}\underline{C}_{k\gamma}-i[\underline{C}_{k\gamma},\underline{C}_{m\beta}]=\underline{\Omega}^{\rm(A)}_{k\gamma,m\beta}-i[\underline{C}_{k\gamma},\underline{C}_{m\beta}]\>,$
(22)
which differs from the abelian one by the additional commutator. Furthermore,
we define the “expectation value” of the spin-Berry curvature in the state
given by the wave function $\\{\alpha\\}$ as:
$\langle{\Omega}\rangle_{k\gamma,m\beta}=\sum_{ij}\alpha_{i}^{\ast}\Omega^{(ij)}_{k\gamma,m\beta}\alpha_{j}\>.$
(23)
With this, the effective equation of motion (20) for the classical-spin
configuration can be written in the compact form
$0=\frac{\text{d}}{\text{d}t}\frac{\partial
L_{\text{eff}}}{\partial\dot{\boldsymbol{S}}_{m}}-\frac{\partial
L_{\text{eff}}}{\partial\boldsymbol{S}_{m}}=\dot{\boldsymbol{S}}_{m}\times\boldsymbol{S}_{m}+\langle\partial_{{\boldsymbol{S}}_{m}}\hat{H}\rangle+\sum_{k}\sum_{\beta\gamma}\dot{S}_{k\gamma}\langle{\Omega}\rangle_{k\gamma,m\beta}{\boldsymbol{e}}_{\beta}\>,$
(24)
or, exploiting the structure of the quantum-classical Hamiltonian Eq. (2) and
the normalization of the wave functions, $\sum_{i}|\alpha_{i}|^{2}=1$,
$0=\dot{\boldsymbol{S}}_{m}\times\boldsymbol{S}_{m}+\langle\partial_{{\boldsymbol{S}}_{m}}\hat{H}_{\rm
int}\rangle+\partial_{{\boldsymbol{S}}_{m}}H_{\rm
cl}+\sum_{k}\sum_{\beta\gamma}\dot{S}_{k\gamma}\langle{\Omega}\rangle_{k\gamma,m\beta}{\boldsymbol{e}}_{\beta}\>.$
(25)
This equation is an implicit equation for $\dot{\boldsymbol{S}}_{m}$. An
explicit form is derived in Appendix A. Finally, we rewrite Eq. (14) using the
definition of the spin-Berry connection Eq. (15):
$i\partial_{t}\alpha_{i}=\sum_{j}\bra{\Psi_{i}}(\hat{H}_{\text{qu}}+\hat{H}_{\text{int}})\ket{\Psi_{j}}\alpha_{j}-\sum_{m}\sum_{j}\dot{{\boldsymbol{S}}}_{m}{\boldsymbol{C}}_{m}^{(ij)}\alpha_{j}\>.$
(26)
Eqs. (25) and (26) represent a closed coupled set of non-linear first-order
differential equations for the effective many-body wave function
$\\{\alpha\\}$ and for the classical spin configuration
$\\{{\boldsymbol{S}}\\}$.
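Eqs. (25) and (26) can be integrated with standard ODE methods. The following sketch (a deliberately simplified toy setup of our own: $\underline{H}$ and the connection matrices are frozen to constant random Hermitian matrices and $\dot{{\boldsymbol{S}}}$ is prescribed as constant, whereas in the full problem all of these depend on $\\{{\boldsymbol{S}}(t)\\}$) propagates the wave function $\\{\alpha\\}$ via Eq. (26) and verifies that $\sum_{i}|\alpha_{i}|^{2}$ is conserved, since the effective generator is Hermitian:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4                                    # dimension of the low-energy sector

def rand_herm(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2.0

Hmat = rand_herm(n)                      # <Psi_i|H_qu + H_int|Psi_j>  (frozen toy matrix)
C = [rand_herm(n) for _ in range(3)]     # connection components C^(ij)_gamma (M = 1)
Sdot = np.array([0.2, -0.1, 0.05])       # prescribed constant spin velocity

def rhs(alpha):
    """Right-hand side of Eq. (26): i d/dt alpha = (H - Sdot.C) alpha."""
    Heff = Hmat - sum(Sdot[g] * C[g] for g in range(3))
    return -1j * (Heff @ alpha)

alpha = np.zeros(n, dtype=complex); alpha[0] = 1.0
dt = 1e-3
for _ in range(2000):                    # fourth-order Runge-Kutta propagation
    k1 = rhs(alpha); k2 = rhs(alpha + dt/2*k1)
    k3 = rhs(alpha + dt/2*k2); k4 = rhs(alpha + dt*k3)
    alpha = alpha + dt/6*(k1 + 2*k2 + 2*k3 + k4)

print(np.sum(np.abs(alpha)**2))          # stays at 1: unitary low-energy dynamics
```

In the full scheme, Eq. (25) would be solved simultaneously for $\dot{{\boldsymbol{S}}}_{m}$ at every time step, and $\underline{H}$ and the connection would be re-evaluated along the trajectory.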
## IV Discussion
The respective last terms in the equations of motion (25) and (26) originate
from the strict treatment of the holonomic constraint (11). Although the first
time derivative of the local spins is reminiscent of a dissipative Gilbert-
like damping, the resulting dynamics is strictly conserving, i.e., the total
energy given by the expectation value of the total Hamiltonian (2) with the
quantum state of the conduction-electron system is a constant of motion.
Unlike the standard approach discussed in the introduction, the equations of
motion thus describe the dynamics of a closed quantum system (at low
energies).
For the derivation of the equations of motion, we have treated all components
of the spins and of the wave function as independent and have thereby
disregarded the normalization conditions for the length of the classical spin
and for the norm of the wave function
$\absolutevalue{\boldsymbol{S}_{m}(t)}=1\;,\quad\sum_{i}\absolutevalue{\alpha_{i}(t)}^{2}=1\;,$
(27)
which must hold at any instant of time $t$. One can easily check directly,
however, that these are respected. The normalization condition for the wave
function can also be derived by noting that the effective Lagrangian is
invariant under global $U(1)$ phase transformations. Noether’s theorem yields
$Q=\sum_{i}\absolutevalue{\alpha_{i}(t)}^{2}$ as a conserved charge.
Alternatively, the conditions can be treated as additional constraints via
appropriate Lagrange multipliers. As is shown in Appendix B, the resulting
Euler-Lagrange equations are in fact unchanged.
Adiabatic spin dynamics (ASD) theory Stahl and Potthoff (2017) is recovered
for $n=1$, where the conduction-electron dynamics is constrained to the
ground-state manifold $\ket{\Psi(t)}=\ket{\Psi_{0}(\\{\boldsymbol{S}(t)\\})}$
and where the wave function is $\alpha_{0}\equiv 1$ trivially, see Eq. (11).
In this case, the spin-Berry connection
${\boldsymbol{C}}^{(ij)}_{m}=i\bra{\Psi_{i}}\partial_{{\boldsymbol{S}}_{m}}\ket{\Psi_{j}}$
with $i,j=0,...,n-1$, reduces to a vector with scalar entries only,
${\boldsymbol{C}}_{m}=i\bra{\Psi_{0}}\partial_{{\boldsymbol{S}}_{m}}\ket{\Psi_{0}}$.
Hence, the commutator in Eq. (22) vanishes, and the spin-Berry curvature
$\underline{\Omega}_{k\gamma,m\beta}$ reduces to the corresponding expression
$\underline{\Omega}^{\rm(A)}_{k\gamma,m\beta}$, Eq. (21), of (abelian) ASD
theory.
In the opposite extreme case, i.e., when $n$ is chosen as the dimension of the
full many-electron Fock space $\cal H$, Eq. (11) is actually no longer a
constraint but rather represents the expansion of the electron state
$|\Psi(t)\rangle$ with respect to a complete orthonormal system of time-
dependent basis states $\\{|\Psi_{i}(t)\rangle\\}$ with
$|\Psi_{i}(t)\rangle=|\Psi_{i}(\\{{\boldsymbol{S}}(t)\\})\rangle$. In this
case, it is straightforward to see that Eq. (26) is just Schrödinger’s
equation $i\partial_{t}|\Psi(t)\rangle=\hat{H}|\Psi(t)\rangle$, i.e., Eq. (5),
but formulated for the coefficients $\alpha_{i}(t)$ of $|\Psi(t)\rangle$ in
that basis. The spin-Berry connection merely takes care of the fact that the
basis changes smoothly with the parameters $\\{{\boldsymbol{S}}\\}$. Eq. (25)
trivializes as well in this case: We can rewrite the (non-abelian) spin-Berry
curvature in the form (see Appendix C):
$\Omega^{(ij)}_{k\gamma,m\beta}=i\left[\bra{\partial_{S_{k\gamma}}\Psi_{i}}\mathcal{Q}_{n}\ket{\partial_{S_{m\beta}}\Psi_{j}}-(k\gamma\leftrightarrow
m\beta)\right]\>,$ (28)
where
$\mathcal{Q}_{n}:=\mathbb{1}-\sum_{i=0}^{n-1}\ket{\Psi_{i}}\bra{\Psi_{i}}$
projects onto the orthogonal complement of the low-energy space ${\cal
E}_{n}(\\{{\boldsymbol{S}}\\})$. If $n=\dim\cal H$, the complement is zero,
and the spin-Berry curvature vanishes identically, so that
$0=\dot{\boldsymbol{S}}_{m}\times\boldsymbol{S}_{m}+\langle\partial_{{\boldsymbol{S}}_{m}}\hat{H}_{\rm
int}\rangle+\partial_{{\boldsymbol{S}}_{m}}H_{\rm cl}\>.$ (29)
Taking the cross product with ${\boldsymbol{S}}_{m}$ from the right on both
sides of Eq. (29) and exploiting the normalization condition for the spin
length, we get:
$\dot{\boldsymbol{S}}_{m}=\left\langle\frac{\partial\hat{H}(\\{\boldsymbol{S}\\})}{\partial{\boldsymbol{S}}_{m}}\right\rangle\times{\boldsymbol{S}}_{m}\>.$
(30)
This is just the explicit form of Eq. (6).
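As a minimal numerical illustration of Eq. (30) (our own toy sketch; the constant field ${\boldsymbol{B}}$ and the Hamilton function $H_{\rm cl}={\boldsymbol{B}}{\boldsymbol{S}}$ are hypothetical), a purely classical Zeeman term yields uniform Larmor precession $\dot{\boldsymbol{S}}={\boldsymbol{B}}\times{\boldsymbol{S}}$, which conserves the spin length:

```python
import numpy as np

B = np.array([0.0, 0.0, 1.0])        # effective field dH/dS for H_cl = B.S (toy)

def sdot(S):
    """Eq. (30) for a purely classical field: dS/dt = B x S."""
    return np.cross(B, S)

S = np.array([1.0, 0.0, 0.0])
dt = 1e-3
nsteps = int(2 * np.pi / dt)         # one full precession period
for _ in range(nsteps):              # fourth-order Runge-Kutta step
    k1 = sdot(S); k2 = sdot(S + dt/2*k1)
    k3 = sdot(S + dt/2*k2); k4 = sdot(S + dt*k3)
    S = S + dt/6*(k1 + 2*k2 + 2*k3 + k4)

print(S)   # returns (approximately) to (1, 0, 0), with |S| = 1 conserved
```

The spin length is conserved here because $\dot{{\boldsymbol{S}}}\perp{\boldsymbol{S}}$, consistent with the constraint Eq. (27).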
Some general properties of the spin-Berry curvature can be derived from Eq.
(28). One immediately notes the antisymmetry
$\Omega^{(ij)}_{k\gamma,m\beta}=-\Omega^{(ij)}_{m\beta,k\gamma}$ (31)
for fixed $i,j$. Furthermore, complex conjugation yields
$\Omega^{(ij)^{*}}_{k\gamma,m\beta}=-\Omega^{(ji)}_{m\beta,k\gamma}\>.$ (32)
With these properties, one can immediately conclude that
$\langle{\Omega}\rangle_{k\gamma,m\beta}=\sum_{ij}\alpha_{i}^{\ast}\Omega^{(ij)}_{k\gamma,m\beta}\alpha_{j}=\langle{\Omega}\rangle_{k\gamma,m\beta}^{\ast}\>,$
(33)
i.e., the expectation value, which enters the effective equation of motion Eq.
(25), is real.
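For a concrete check (our own toy model, not from the paper), consider a single classical spin coupled to one quantum spin-$1/2$ with $\hat{H}={\boldsymbol{S}}{\boldsymbol{\sigma}}$. Evaluating Eq. (28) through the sum-over-states structure of Eq. (35), which is exact when the exact eigenstates and eigenenergies are used, reproduces for $n=1$ the monopole-like curvature $\Omega^{(00)}_{\gamma,\beta}=\varepsilon_{\gamma\beta\delta}S_{\delta}/(2S^{3})$ and the properties (31) and (32):

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
pauli = [sx, sy, sz]

S = np.array([0.4, -0.7, 0.5])                 # classical spin (toy, M = 1)
Snorm = np.linalg.norm(S)

H = sum(S[g] * pauli[g] for g in range(3))     # toy coupling H = S.sigma
E, V = np.linalg.eigh(H)

def Omega(i, j, g, b, n=1):
    """Spin-Berry curvature Omega^(ij)_{g,b} from the sum-over-states form of
    Eq. (28): i sum_{l>=n} [ <i|dH/dS_g|l><l|dH/dS_b|j> / ((E_i-E_l)(E_j-E_l))
    - (g <-> b) ],  with dH/dS_g = sigma_g here."""
    out = 0.0 + 0.0j
    for l in range(n, len(E)):
        denom = (E[i] - E[l]) * (E[j] - E[l])
        t1 = (V[:, i].conj() @ pauli[g] @ V[:, l]) * (V[:, l].conj() @ pauli[b] @ V[:, j]) / denom
        t2 = (V[:, i].conj() @ pauli[b] @ V[:, l]) * (V[:, l].conj() @ pauli[g] @ V[:, j]) / denom
        out += 1j * (t1 - t2)
    return out

# abelian case (n = 1): monopole field Omega_{g,b} = eps_{g,b,d} S_d / (2 S^3)
print(Omega(0, 0, 0, 1), S[2] / (2 * Snorm**3))
```

The diagonal element is real, in accordance with Eqs. (31)-(33), and antisymmetric under exchange of the component indices.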
Quite generally, the (abelian) Berry connection and Berry curvature arise in
the adiabatic problem, where a quantum Hamiltonian
$\hat{H}=\hat{H}({\boldsymbol{\lambda}})$ depends on a family of slowly
varying parameters ${\boldsymbol{\lambda}}$ and has a non-degenerate ground
state for all ${\boldsymbol{\lambda}}$. This gives rise to the famous Berry
phase Berry (1984), which the ground state picks up during a closed loop in
parameter space and which can be computed, e.g., as an integral of the Berry
curvature over the surface bounded by the loop. Mathematically, the phase is a
holonomy, i.e., it results from a twist of the line bundle
$\\{({\boldsymbol{\lambda}},|\Psi_{0}\rangle)\,|\,\hat{H}({\boldsymbol{\lambda}})|\Psi_{0}\rangle=E_{0}(\\{{\boldsymbol{\lambda}}\\})|\Psi_{0}\rangle\\}$
Simon (1983). The Berry phase is gauge invariant and thus observable and
depends on the geometry of the closed loop only. Similarly, non-abelian gauge
fields arise in the adiabatic time evolution of an $n>1$-fold degenerate
ground state of a quantum system Wilczek and Zee (1984) and produce a non-
trivial phase after completing a loop in parameter space.
Here, we consider a quantum system coupled to dynamical classical degrees of
freedom (classical spins). In case of a clear time-scale separation between
the slow classical and the fast quantum dynamics, the classical spins induce a
spin-Berry curvature in the quantum conduction-electron system. Generically,
it is highly unlikely, however, that the classical state evolves along a
closed path. The essential observation is that there is an
additional feedback of the Berry curvature on the classical spin dynamics,
seen in the last term in Eq. (25) for $\Omega=\Omega^{\rm(A)}$. Already in the
abelian case $n=1$, this leads to an anomalous geometrical spin torque Stahl
and Potthoff (2017). This geometric feedback on slow classical dynamics has
been pointed out Wen and Zee (1988); Niu and Kleinman (1998); Bohm _et al._
(2003); Niu _et al._ (1999); Stahl and Potthoff (2017); Elbracht _et al._
(2020); Bajpai and Nikolić (2020); Michel and Potthoff (2021) but has not yet
been studied for spin dynamics in the non-abelian case $1<n=\dim{\cal
E}_{n}(\\{{\boldsymbol{S}}\\})\ll\dim{\cal H}$.
## V Time reversal
Time-reversal symmetry plays an important role for the presence of a finite
spin-Berry curvature in the adiabatic case ($n=1$) Stahl and Potthoff (2017).
For $n>1$, however, this is entirely different:
We assume that the electron system is time-reversal symmetric, i.e., that the
Hamiltonian $\hat{H}_{\text{qu}}$ commutes with the anti-unitary operator for
time reversal $\Theta$. The interaction term, Eq. (4), on the other hand, is
odd under time reversal, $\Theta\hat{H}_{\rm
int}\Theta^{\dagger}=-\hat{H}_{\rm int}$, since
$\Theta{\boldsymbol{s}}_{{\boldsymbol{r}}_{m}}\Theta^{\dagger}=-{\boldsymbol{s}}_{{\boldsymbol{r}}_{m}}$.
The local spins ${\boldsymbol{S}}_{m}$ are classical degrees of freedom, which
act as local magnetic fields and explicitly break time-reversal symmetry of
the quantum system.
This effect, however, can be disregarded in the weak-$J$ regime, where the
spin-Berry curvature, in the spirit of linear-response theory, is a physical
property of the electron system $\hat{H}_{\text{qu}}$ only. Namely, expanding
$E_{i}=E_{i0}+{\cal O}(J)$ and $|\Psi_{l}\rangle=|\Psi_{l}^{0}\rangle+{\cal
O}(J)$ and using the identity
$\langle\Psi_{l}|\partial_{{\boldsymbol{S}}_{m}}\Psi_{j}\rangle=\frac{\langle\Psi_{l}|\partial_{{\boldsymbol{S}}_{m}}\hat{H}(\\{\boldsymbol{S}\\})|\Psi_{j}\rangle}{E_{j}-E_{l}}\>,$
(34)
which holds for $E_{j}\neq E_{l}$, Eq. (28) can be rewritten as
$\displaystyle\Omega^{(ij)}_{k\gamma,m\beta}$ $\displaystyle=$ $\displaystyle
i\sum_{l\geq
n}\Bigg{[}\frac{\langle\Psi^{0}_{i}|\partial_{S_{k\gamma}}\hat{H}|\Psi_{l}^{0}\rangle}{E_{i0}-E_{l0}}\frac{\langle\Psi^{0}_{l}|\partial_{S_{m\beta}}\hat{H}|\Psi_{j}^{0}\rangle}{E_{j0}-E_{l0}}$
(35) $\displaystyle-$ $\displaystyle(k\gamma\leftrightarrow
m\beta)\Bigg{]}+{\cal O}(J^{3})\>,$
since $\partial_{S_{k\gamma}}\hat{H}_{\rm int}=J{s}_{{\boldsymbol{r}}_{k}\gamma}={\cal
O}(J)$, so that the spin-Berry curvature is of order $J^{2}$ for weak $J$ and
expressed in terms of the eigenstates and eigenenergies of
$\hat{H}_{\text{qu}}$ only. Note that $0\leq i,j\leq n-1$ in Eq. (35).
For a system with an even number of spin-$1/2$ electrons, the time-reversal
operator squares to unity, $\Theta^{2}=+1$. In this case, we can choose an
orthonormal basis of time-reversal-symmetric energy eigenstates
$|\Psi^{0}_{i}\rangle=\Theta|\Psi^{0}_{i}\rangle$. This implies that the
matrix elements,
$\displaystyle\langle\Psi^{0}_{i}|\partial_{S_{k\gamma}}\hat{H}|\Psi^{0}_{l}\rangle$
$\displaystyle=$
$\displaystyle-\langle\Psi^{0}_{i}|\Theta^{\dagger}\partial_{S_{k\gamma}}\hat{H}\Theta|\Psi^{0}_{l}\rangle$
(36) $\displaystyle=$
$\displaystyle-(\langle\Theta\Psi^{0}_{i}|\partial_{S_{k\gamma}}\hat{H}|\Theta\Psi^{0}_{l}\rangle)^{\ast}$
$\displaystyle=$
$\displaystyle-(\langle\Psi^{0}_{i}|\partial_{S_{k\gamma}}\hat{H}|\Psi^{0}_{l}\rangle)^{\ast}\;,$
are purely imaginary. Note that only the (odd) interaction term $\hat{H}_{\rm
int}(\\{{\boldsymbol{S}}\\})$ contributes. Using this in Eq. (35) shows that
$\Omega^{(ij)}_{k\gamma,m\beta}$ is purely imaginary. With Eq. (32) we can
conclude that
$\Omega^{(ij)}_{k\gamma,m\beta}=\Omega^{(ji)}_{m\beta,k\gamma}\>.$ (37)
In particular, Eq. (31) and Eq. (37) imply that the $i=j$ elements of the
spin-Berry curvature must vanish in the weak-$J$ limit for $\Theta^{2}=+1$.
This is important for the abelian case $n=1$. For $i=j=0$ we have
$\Omega^{(00)}_{k\gamma,m\beta}=0$ and, hence, there is no geometrical spin
torque in the weak-$J$ limit for a time-reversal-symmetric system with
$\Theta^{2}=+1$. In the general non-abelian case, on the other hand, we find
with Eq. (33) that
$\langle{\Omega}\rangle_{k\gamma,m\beta}=-\sum_{ij}\mbox{Im}(\alpha_{i}^{\ast}\alpha_{j})\mbox{Im}\Omega^{(ij)}_{k\gamma,m\beta}\>,$
(38)
since $\Omega^{(ij)}_{k\gamma,m\beta}$ is imaginary. Generically, the
coefficients $\alpha_{i}=\alpha_{i}(t)$ in the expansion Eq. (11) will be
complex and oscillatory functions of time. The expression above thus shows
that even in the weak-$J$ limit and for a time-reversal symmetric system, the
geometrical spin torque in the equation of motion (25) is generally finite.
Let us briefly discuss the case of an odd electron number with
$\Theta^{2}=-1$. Here, the basis states can be grouped in orthogonal and
energy-degenerate Kramers pairs
$\\{|\Psi^{0}_{i}\rangle,|\overline{\Psi}_{i}^{0}\rangle\\}$ with
$|\overline{\Psi}^{0}_{i}\rangle\equiv\Theta|\Psi_{i}^{0}\rangle$ for
$i=0,...,(n/2)-1$. An even number of states must be included in formulating
the constraint (11). For the matrix elements, we have
$\displaystyle\langle\Psi^{0}_{i}|\partial_{S_{k\gamma}}\hat{H}|\Psi^{0}_{l}\rangle$
$\displaystyle=$
$\displaystyle-\langle\Psi^{0}_{i}|\Theta^{\dagger}\partial_{S_{k\gamma}}\hat{H}\Theta|\Psi^{0}_{l}\rangle$
(39) $\displaystyle=$
$\displaystyle-(\langle\Theta\Psi^{0}_{i}|\partial_{S_{k\gamma}}\hat{H}|\Theta\Psi^{0}_{l}\rangle)^{\ast}$
$\displaystyle=$
$\displaystyle-(\langle\overline{\Psi}^{0}_{i}|\partial_{S_{k\gamma}}\hat{H}|\overline{\Psi}^{0}_{l}\rangle)^{\ast}\;.$
This can be used in Eq. (35), since with each term in the $l$-sum its Kramers
partner is also included. We find
$\Omega^{(ij)}_{k\gamma,m\beta}=\Omega^{(\overline{j}\,\overline{i})}_{m\beta,k\gamma}\>,$
(40)
where the index $\overline{i}$ refers to the Kramers partner of
$|\Psi^{0}_{i}\rangle$, and, furthermore,
$(\Omega^{(ij)}_{k\gamma,m\beta})^{\ast}=-\Omega^{(\overline{i}\,\overline{j})}_{k\gamma,m\beta}$.
As for the case $\Theta^{2}=+1$, time-reversal symmetry does not lead to a
vanishing spin-Berry curvature or a vanishing expectation value
$\langle{\Omega}\rangle_{k\gamma,m\beta}$. Note that for $\Theta^{2}=-1$ the
adiabatic theory is not applicable anyway (in the weak-coupling limit), since
the ground state is at least twofold Kramers degenerate.
## VI Gauge transformations
The effective Lagrangian Eq. (13) can be written in a compact form as
$\displaystyle L_{\text{eff}}$ $\displaystyle=$
$\displaystyle\sum_{m}\boldsymbol{A}_{m}({\boldsymbol{S}}_{m})\dot{\boldsymbol{S}}_{m}+i\boldsymbol{\alpha}^{\dagger}\partial_{t}\boldsymbol{\alpha}$
(41) $\displaystyle+$
$\displaystyle\sum_{m}\boldsymbol{\alpha}^{\dagger}[\underline{{\boldsymbol{C}}}_{m}(\\{{\boldsymbol{S}}\\})\dot{\boldsymbol{S}}_{m}]\boldsymbol{\alpha}-\boldsymbol{\alpha}^{\dagger}\underline{H}(\\{{\boldsymbol{S}}\\})\boldsymbol{\alpha}\;,$
where ${\boldsymbol{\alpha}}=(\alpha_{0},...,\alpha_{n-1})^{T}$ and where
$\underline{H}$ is the Hamilton matrix with elements
$H_{ij}=\bra{\Psi_{i}}\hat{H}\ket{\Psi_{j}}$ and the local basis states
$|\Psi_{j}\rangle=|\Psi_{j}(\\{{\boldsymbol{S}}\\})\rangle$. We consider a
gauge transformation
$\displaystyle|\Psi_{j}(\\{{\boldsymbol{S}}\\})\rangle$ $\displaystyle\mapsto$
$\displaystyle|\Psi^{\prime}_{j}(\\{{\boldsymbol{S}}\\})\rangle=\sum_{i}U_{ij}^{\dagger}|\Psi_{i}(\\{{\boldsymbol{S}}\\})\rangle$
$\displaystyle{\boldsymbol{\alpha}}$ $\displaystyle\mapsto$
$\displaystyle{\boldsymbol{\alpha}}^{\prime}=\underline{U}{\boldsymbol{\alpha}}\>,$
(42)
where $\underline{U}$ (with elements $U_{ij}$) is the defining matrix
representation of SU(n) on the local low-energy subspace ${\cal
E}_{n}(\\{{\boldsymbol{S}}\\})$ for given spin configuration
$\\{{\boldsymbol{S}}\\}$. This transformation must leave observables
invariant, since Eq. (42) merely amounts to a rotation of the basis in ${\cal
E}_{n}(\\{{\boldsymbol{S}}\\})$, which leaves the quantum state
$|\Psi\rangle=\sum_{j=0}^{n-1}\alpha_{j}|\Psi_{j}(\\{{\boldsymbol{S}}\\})\rangle$,
and thus the constraint Eq. (11), invariant, provided that the expansion
coefficients (the wave function) are rotated accordingly. We distinguish between global
SU(n) and local SU(n) transformations. For the latter, the transformation
matrix $\underline{U}=\underline{U}(\\{{\boldsymbol{S}}\\})$ is an arbitrary
but smooth function of the spin configuration $\\{{\boldsymbol{S}}\\}$. The
effective Lagrangian is invariant under both global and local gauge
transformations.
Note that the Hamilton matrix transforms in a covariant way,
$\underline{H}\mapsto\underline{H}^{\prime}=\underline{U}\,\underline{H}\,\underline{U}^{\dagger}\>,$
(43)
while the Berry connection transforms covariantly under a global gauge
transformation only. For a local gauge transformation we rather have:
$\underline{{\boldsymbol{C}}}_{m}\mapsto\underline{{\boldsymbol{C}}}_{m}^{\prime}=\underline{U}\,\underline{{\boldsymbol{C}}}_{m}\,\underline{U}^{\dagger}+i\underline{U}\partial_{{\boldsymbol{S}}_{m}}\underline{U}^{\dagger}\>.$
(44)
The non-abelian Berry curvature, as opposed to its abelian part (21), transforms
covariantly:
$\underline{\Omega}_{k\gamma,m\beta}\mapsto\underline{\Omega}^{\prime}_{k\gamma,m\beta}=\underline{U}\,\underline{\Omega}_{k\gamma,m\beta}\,\underline{U}^{\dagger}\,\>,$
(45)
so that its expectation value in the state given by the wave function
$\alpha_{i}$ is invariant:
$\langle{\Omega^{\prime}}\rangle^{\prime}_{k\gamma,m\beta}=\langle{\Omega}\rangle_{k\gamma,m\beta}$.
Hence, Eq. (25) is invariant under local gauge transformations. The
Schrödinger-type equation Eq. (26), on the other hand, is form-invariant under
local transformations, i.e.,
$i\partial_{t}\alpha^{\prime}_{i}=\sum_{j}\bra{\Psi^{\prime}_{i}}(\hat{H}_{\text{qu}}+\hat{H}_{\text{int}})\ket{\Psi^{\prime}_{j}}\alpha^{\prime}_{j}-\sum_{mj}\dot{{\boldsymbol{S}}}_{m}{{\boldsymbol{C}}_{m}^{(ij)}}^{\prime}\alpha^{\prime}_{j}\>,$
(46)
and the spin-Berry connection term on the right-hand side is necessary to
compensate the extra term appearing on the left-hand side in case of an
$\\{{\boldsymbol{S}}\\}$-dependent transformation.
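As a concrete illustration, the covariance law Eq. (44) can be checked numerically for a one-parameter family of basis states. The sketch below is our own construction, not part of the original derivation: it assumes the sign convention $C_{ij}=i\langle\Psi_{i}|\partial_{s}\Psi_{j}\rangle$ for a single real parameter $s$ (standing in for one component of the spin configuration), generates the frame and the local SU(n) matrix from random Hermitian generators, and compares a finite-difference evaluation of the transformed connection with the right-hand side of Eq. (44).

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, s0, ds = 6, 3, 0.3, 1e-5   # embedding dim., subspace dim., base point, step

def herm(m):
    return (m + m.conj().T) / 2

G = herm(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))  # frame generator
A = herm(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))  # gauge generator

def unitary(H, t):
    """exp(i*t*H) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * t * w)) @ V.conj().T

def basis(t):
    """Orthonormal frame of n basis states |Psi_j(t)> in a d-dim. space."""
    return unitary(G, t)[:, :n]

def U(t):
    """Local SU(n) gauge transformation, smooth in the parameter t."""
    return unitary(A, t)

def connection(frame, t):
    """C_ij = i <Psi_i(t)| d/dt |Psi_j(t)>, by central differences."""
    dpsi = (frame(t + ds) - frame(t - ds)) / (2 * ds)
    return 1j * frame(t).conj().T @ dpsi

# transformed basis |Psi'_j> = sum_i (U^dagger)_{ij} |Psi_i>
C = connection(basis, s0)
Cp = connection(lambda t: basis(t) @ U(t).conj().T, s0)
dUdag = (U(s0 + ds).conj().T - U(s0 - ds).conj().T) / (2 * ds)
law = U(s0) @ C @ U(s0).conj().T + 1j * U(s0) @ dUdag  # right-hand side of Eq. (44)
# Cp and law agree up to the finite-difference error
```

The inhomogeneous term $i\underline{U}\partial\underline{U}^{\dagger}$ is essential here; dropping it breaks the agreement whenever $\underline{U}$ depends on the parameter.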
Concluding, the effective Lagrangian emerging in the low-energy sector of
hybrid spin-electron dynamics represents a non-abelian SU(n) gauge theory.
This is reminiscent of standard quantum field theories Peskin and Schroeder
(1996), where the Lagrangian is invariant under simultaneous transformations
of coupled matter and gauge fields, and where these gauge transformations
involve a gauge group, like SU(n), and are local in space-time. There are a
couple of differences though: Within non-abelian spin-dynamics theory, space-
time is not only replaced by a compact parameter manifold, namely the
Cartesian product of classical Bloch spheres representing the space of the
spin configurations, but furthermore the spin configurations have their own
dynamics. The theory is thus much more related to gauge theories that have
been devised for molecular physics Bohm _et al._ (2003), where the state
space of the nuclei, when treated classically, defines a dynamical parameter
manifold, and where the role of the gauge field is played by the non-abelian
Berry connection.
Finally, it is worth mentioning that there is a second, less important class
of gauge freedom. This concerns the vector potential
${\boldsymbol{A}}({\boldsymbol{S}}_{m})$, see the first term of $L$ in Eq.
(8), i.e., already in the full Lagrangian. Any transformation of the unit
vector ${\boldsymbol{e}}\mapsto{\boldsymbol{e}}^{\prime}$ leads to a
transformed potential
${\boldsymbol{A}}({\boldsymbol{S}}_{m})\mapsto{\boldsymbol{A}}^{\prime}({\boldsymbol{S}}_{m})$
but leaves its curl invariant. This even includes “local”, $m$-dependent
transformations
${\boldsymbol{A}}({\boldsymbol{S}}_{m})\mapsto{\boldsymbol{A}}^{\prime}_{m}({\boldsymbol{S}}_{m})$
resulting from ${\boldsymbol{e}}\mapsto{\boldsymbol{e}}^{\prime}_{m}$.
However, since only the curl
$\nabla_{{\boldsymbol{S}}}\times{\boldsymbol{A}}({\boldsymbol{S}}_{m})$ enters
the equations of motion resulting from the full or from the effective
Lagrangian, see Eq. (20) for instance, these are invariant.
## VII Minimal model
Figure 1: Sketch of the minimal model studied numerically. A classical spin
${\boldsymbol{S}}$ of length $|{\boldsymbol{S}}|=1$ is antiferromagnetically
exchange coupled with coupling strength $J>0$ to the local spin moment
${\boldsymbol{s}}_{i_{0}}$ at the first site $i_{0}=1$ of a system of
conduction electrons on a one-dimensional chain with open boundaries. $T$ is
the nearest-neighbor hopping. Real-time dynamics is initiated by a sudden
change of the direction of a local magnetic field ${\boldsymbol{B}}$ coupled
to ${\boldsymbol{S}}$.
For a further discussion of non-abelian spin-dynamics theory, we will present
numerical results for a minimal model, which includes a few degrees of freedom
only but is sufficient to illustrate several key aspects. Our intention is to
demonstrate by example how our theoretical approach can be evaluated in
practice, how the numerical results compare with the full solution of the
equations of motion, and what improvements the theory offers over the purely
adiabatic (abelian) version. This may also be seen as a preparation for future
applications to more realistic but also more complicated physical systems,
where various secondary issues become important.
The Hamiltonian of our toy model is given by
$\hat{H}=-T\sum_{\langle
i,j\rangle,\sigma}c_{i\sigma}^{\dagger}c_{j\sigma}+J\boldsymbol{s}_{i_{0}}\boldsymbol{S}-\boldsymbol{B}\boldsymbol{S}\>.$
(47)
It describes a single classical spin ($M=1$) locally exchange coupled
(coupling constant $J>0$) to a non-interacting tight-binding model in an open
chain geometry with a small number of sites $L$ hosting $N=L$ electrons, i.e.,
a half-filled conduction-electron system. The spin is coupled to the first
site of the chain $i_{0}=1$. This is the $s$-$d$ model discussed in the
introduction and the same model as in Ref. Stahl and Potthoff (2017). Energy
and time units are fixed by setting the nearest-neighbor hopping amplitude to
$T=1$. In addition, the Hamiltonian includes a local magnetic field of
strength $B$ coupling to the classical spin ${\boldsymbol{S}}$. The model is
visualized in Fig. 1.
The field term is employed to initiate the real-time dynamics: At time $t=0$
the system is prepared in the ground state of $\hat{H}$ with the field in $x$
direction, i.e., the spin ${\boldsymbol{S}}=S{\boldsymbol{e}}_{x}$ is aligned
to ${\boldsymbol{B}}=B{\boldsymbol{e}}_{x}$, and the conduction-electron state
is the ground state $|\Psi(t=0)\rangle=|\Psi_{0}({\boldsymbol{S}})\rangle$.
Time propagation for $t>0$ is driven by the same Hamiltonian but with the
field pointing in $z$ direction. Dynamics is thus initiated by a sudden change
of the field direction from the $x$ to the $z$ direction.
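The preparation step can be made concrete with a short numerical sketch. The code below is our own illustration, not taken from the original work: it freezes ${\boldsymbol{S}}=S{\boldsymbol{e}}_{x}$ with $|{\boldsymbol{S}}|=1$, builds the corresponding one-particle Hamiltonian of Eq. (47), fills the $N=L$ lowest orbitals, and reads off the induced local moment $\langle{\boldsymbol{s}}_{i_{0}}\rangle$. The basis ordering (site, spin), the convention ${\boldsymbol{s}}=\boldsymbol{\sigma}/2$, and the hopping sign are our assumptions.

```python
import numpy as np

# Pauli matrices; the local electron spin is s = sigma / 2 (our convention)
SX = np.array([[0, 1], [1, 0]], complex)
SY = np.array([[0, -1j], [1j, 0]], complex)
SZ = np.array([[1, 0], [0, -1]], complex)

def h_one_particle(S, L, J, T=1.0, i0=0):
    """One-particle Hamiltonian of Eq. (47) for a frozen classical spin S.
    Basis ordering (site, spin); open chain, nearest-neighbor hopping -T."""
    hop = np.zeros((L, L))
    for i in range(L - 1):
        hop[i, i + 1] = hop[i + 1, i] = -T
    h = np.kron(hop, np.eye(2)).astype(complex)
    h[2 * i0:2 * i0 + 2, 2 * i0:2 * i0 + 2] += (J / 2) * (
        S[0] * SX + S[1] * SY + S[2] * SZ)
    return h

def ground_state_dm(S, L, J, N):
    """One-particle density matrix of the N-electron ground state."""
    _, V = np.linalg.eigh(h_one_particle(S, L, J))
    occ = V[:, :N]
    return occ @ occ.conj().T

def local_moment(rho, i0=0):
    """<s_{i0}> = (1/2) Tr[sigma rho], restricted to site i0."""
    b = rho[2 * i0:2 * i0 + 2, 2 * i0:2 * i0 + 2]
    return np.array([np.trace(p @ b).real for p in (SX, SY, SZ)]) / 2

# preparation at t = 0: S aligned with the initial field along x
L = 10
rho0 = ground_state_dm(np.array([1.0, 0.0, 0.0]), L=L, J=1.0, N=L)
s0 = local_moment(rho0)   # antiparallel to S for antiferromagnetic J > 0
```

For $J>0$ the induced moment comes out antiparallel to ${\boldsymbol{S}}$ and grows toward its maximum $1/2$ with increasing coupling.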
For $t>0$ one expects that the spin starts precessing around the $z$ axis. In
the adiabatic approximation with $n=1$, the electron system will follow the
respective spin direction instantaneously, and its state at time $t$ would be
the instantaneous ground state $|\Psi_{0}({\boldsymbol{S}}(t))\rangle$. The
time scale on which the precession takes place is given by the inverse of the
Larmor frequency $\omega_{\rm L}=B$. Depending on the field strength, this
time scale $\tau_{\rm L}=1/\omega_{\rm L}=1/B$ can be much shorter than the
inverse of the finite-size gap $\Delta={\cal O}(T/L)$. With $T=1$ we thus
expect that the adiabatic approximation breaks down for $B\gg T/L$ and that
excited states $|\Psi_{j}({\boldsymbol{S}})\rangle$ with $0<j<n-1$ will be
populated. The number of states $n$ included in the
${\boldsymbol{S}}$-dependent basis controls the accuracy of the non-abelian
spin-dynamics approach.
For the single-classical-spin model the effective equations of motion Eq. (25)
and Eq. (26) simplify somewhat. For $M=1$ we can drop the index $m$ and
take the cross product with ${\boldsymbol{S}}$ on both sides of Eq. (25).
Furthermore, we have $\langle\partial_{{\boldsymbol{S}}}\hat{H}_{\rm
int}\rangle=J\langle{\boldsymbol{s}}_{i_{0}}\rangle-{\boldsymbol{B}}$ and
$\partial_{{\boldsymbol{S}}}H_{\rm cl}=0$. We thus obtain
$\dot{\boldsymbol{S}}=\frac{J\langle\boldsymbol{s}_{i_{0}}\rangle\times\boldsymbol{S}-\boldsymbol{B}\times\boldsymbol{S}}{1-\boldsymbol{S}\langle\boldsymbol{\Omega}\rangle}\>,$
(48)
where
$\langle\boldsymbol{\Omega}\rangle=\sum_{ij}\alpha_{i}^{\ast}{\boldsymbol{\Omega}}^{(ij)}\alpha_{j}$
is the expectation value of the pseudovector ${\boldsymbol{\Omega}}^{(ij)}$
with components
$\Omega^{(ij)}_{\alpha}=\frac{1}{2}\sum_{\beta\gamma}\varepsilon_{\alpha\beta\gamma}\Omega^{(ij)}_{\beta\gamma}$
that can be constructed for $M=1$ due to the antisymmetry of the Berry
curvature tensor under $\beta\leftrightarrow\gamma$ for each pair $(ij)$, see
Eq. (31). Furthermore,
$\langle\boldsymbol{s}_{i_{0}}\rangle=\langle\boldsymbol{s}_{i_{0}}\rangle_{t}=\sum_{ij}\alpha_{i}^{\ast}(t)\langle\Psi_{i}({\boldsymbol{S}}(t))|{\boldsymbol{s}}_{i_{0}}|\Psi_{j}({\boldsymbol{S}}(t))\rangle\alpha_{j}(t)$.
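The bookkeeping between the antisymmetric curvature tensor and its pseudovector can be sketched in a few lines. This is our own illustration (indices $0,1,2$ stand for $x,y,z$):

```python
import numpy as np

# Levi-Civita tensor eps[alpha, beta, gamma]
EPS = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    EPS[a, b, c], EPS[a, c, b] = 1.0, -1.0

def to_pseudovector(Omega_t):
    """Omega_alpha = (1/2) sum_{beta,gamma} eps_{alpha beta gamma} Omega_{beta gamma}."""
    return 0.5 * np.einsum('abc,bc->a', EPS, Omega_t)

def to_tensor(Omega_v):
    """Inverse map: Omega_{beta gamma} = sum_alpha eps_{alpha beta gamma} Omega_alpha."""
    return np.einsum('abc,a->bc', EPS, Omega_v)
```

The two maps are mutually inverse on antisymmetric $3\times 3$ tensors, which is exactly the content of the definition above.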
Remarkably, there is a renormalization of the precession frequency resulting
from the geometrical spin torque, which has already been studied for the
adiabatic case Stahl and Potthoff (2017); Bajpai and Nikolić (2020); Elbracht
_et al._ (2020); Michel and Potthoff (2021). This manifests itself as an
additional factor $1/(1-\boldsymbol{S}\langle\boldsymbol{\Omega}\rangle)$ in
Eq. (48). In the adiabatic case $n=1$, the expectation value
$\langle\boldsymbol{\Omega}\rangle$ is strictly parallel or antiparallel to
the classical-spin orientation for symmetry reasons Stahl and Potthoff (2017).
For $\boldsymbol{S}\uparrow\uparrow\langle\boldsymbol{\Omega}\rangle$ this
results in a faster precessional dynamics, whose sense of rotation is even
reversed if $\boldsymbol{S}\langle\boldsymbol{\Omega}\rangle>1$, while for
$\boldsymbol{S}\uparrow\downarrow\langle\boldsymbol{\Omega}\rangle$ the
precession is slowed down. Exactly at
$\boldsymbol{S}\langle\boldsymbol{\Omega}\rangle=1$ the right-hand side of Eq.
(48) becomes singular. This is linked to a divergence of the precession
frequency which, however, becomes relevant in an extreme case only: For the
adiabatic case and $L=1$, it was found in Ref. Stahl and Potthoff (2017) that
singular dynamics can in principle be approached, if the length of the
classical spin $|{\boldsymbol{S}}|\to\frac{1}{2}$. At the same time, however,
to stay in the adiabatic regime of the model, it was necessary to consider an
ever-increasing coupling strength, i.e., $J\to\infty$.
Here, we see that the same type of singularity is in principle also present in
the non-adiabatic case (for $M=1$). Generally, however, we find
$0<\boldsymbol{S}\langle\boldsymbol{\Omega}\rangle<1$ (for antiferromagnetic
exchange coupling $J>0$): A possible singularity is regularized for $n>1$ due
to contributions from excited states and partly also due to the fact that
$\langle{\boldsymbol{\Omega}}\rangle$ and ${\boldsymbol{S}}$ are no longer
necessarily collinear.
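As a minimal sketch of how Eq. (48) enters a numerical scheme, its right-hand side can be coded directly; here $\langle\boldsymbol{s}_{i_{0}}\rangle$ and $\langle\boldsymbol{\Omega}\rangle$ are assumed to be supplied by the electronic part of the calculation, and the function and variable names are ours:

```python
import numpy as np

def spin_rhs(S, s_loc, B, Omega_avg, J):
    """Right-hand side of Eq. (48); the geometrical spin torque enters as
    the scalar renormalization factor 1 / (1 - S . <Omega>)."""
    torque = J * np.cross(s_loc, S) - np.cross(B, S)
    return torque / (1.0 - np.dot(S, Omega_avg))

# illustration: for <Omega> parallel to S with S.<Omega> = 1/2, the value
# predicted by the effective two-spin model, the precession is twice as fast
S = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 0.1])
bare = spin_rhs(S, np.zeros(3), B, np.zeros(3), J=1.0)
renorm = spin_rhs(S, np.zeros(3), B, 0.5 * S, J=1.0)
```

With the field along $z$ and ${\boldsymbol{S}}$ along $x$, the bare torque is $(0,-B,0)$, and the renormalization doubles it, illustrating the frequency enhancement discussed in the text.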
The following NA-SD studies of the minimal model are based on a numerical
solution of the coupled effective equations of motion Eq. (48) for the
classical spin ${\boldsymbol{S}}$ and Eq. (26) for the wave function
$\\{\alpha\\}$. For the computation of the expectation value of the spin-Berry
curvature $\langle{\boldsymbol{\Omega}}\rangle$ we exploit simplifications
that hold for a non-interacting conduction-electron system. These are detailed
in Appendix D.
We also compare the results of the NA-SD theory with the full solution of the
fundamental equations of motion (5) and (6), which is obtained independently.
More explicitly, Eq. (5) for the minimal model reads:
$\dot{\boldsymbol{S}}=J\langle\boldsymbol{s}_{i_{0}}\rangle_{t}\times\boldsymbol{S}-\boldsymbol{B}\times\boldsymbol{S}\>.$
(49)
Furthermore, in case of a non-interacting electron system, Eq. (6) can be
replaced by the equation of motion
$i\frac{d}{dt}\underline{\rho}=\commutator{\underline{T}^{(\text{eff})}}{\underline{\rho}}$
(50)
for the one-particle reduced density matrix $\underline{\rho}$ with elements
$\rho_{ii^{\prime}\sigma\sigma^{\prime}}(t)=\langle
c^{\dagger}_{i^{\prime}\sigma^{\prime}}c_{i\sigma}\rangle$, and where the
elements of the effective hopping matrix $\underline{T}^{(\text{eff})}$ are
given by:
$T^{(\text{eff})}_{ii^{\prime}\sigma\sigma^{\prime}}=T\delta_{\langle
ii^{\prime}\rangle}\delta_{\sigma\sigma^{\prime}}+\frac{J}{2}{\boldsymbol{\sigma}}_{\sigma\sigma^{\prime}}{\boldsymbol{S}}\,\delta_{ii_{0}}\delta_{i^{\prime}i_{0}}\;.$
(51)
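For reference, the full theory, Eqs. (49)-(51), can be integrated directly. The following numpy sketch is our own minimal implementation, not the authors' code: the basis ordering (site, spin), the convention ${\boldsymbol{s}}=\boldsymbol{\sigma}/2$, the hopping sign taken from Eq. (47), and the choice of a standard RK4 integrator are our assumptions. It implements the quench protocol of Sec. VII for $L=10$, $J=1$, $B=0.1$.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], complex)
SY = np.array([[0, -1j], [1j, 0]], complex)
SZ = np.array([[1, 0], [0, -1]], complex)

def t_eff(S, L, J, T=1.0, i0=0):
    """Effective hopping matrix, Eq. (51); basis (site, spin)."""
    hop = np.zeros((L, L))
    for i in range(L - 1):
        hop[i, i + 1] = hop[i + 1, i] = -T
    h = np.kron(hop, np.eye(2)).astype(complex)
    h[2*i0:2*i0+2, 2*i0:2*i0+2] += (J / 2) * (S[0]*SX + S[1]*SY + S[2]*SZ)
    return h

def s_local(rho, i0=0):
    """Local electron moment <s_{i0}> from the density matrix."""
    b = rho[2*i0:2*i0+2, 2*i0:2*i0+2]
    return np.array([np.trace(p @ b).real for p in (SX, SY, SZ)]) / 2

def rhs(S, rho, B, L, J):
    dS = J * np.cross(s_local(rho), S) - np.cross(B, S)   # Eq. (49)
    h = t_eff(S, L, J)
    drho = -1j * (h @ rho - rho @ h)                      # Eq. (50)
    return dS, drho

def rk4(S, rho, B, L, J, dt):
    k1S, k1r = rhs(S, rho, B, L, J)
    k2S, k2r = rhs(S + 0.5*dt*k1S, rho + 0.5*dt*k1r, B, L, J)
    k3S, k3r = rhs(S + 0.5*dt*k2S, rho + 0.5*dt*k2r, B, L, J)
    k4S, k4r = rhs(S + dt*k3S, rho + dt*k3r, B, L, J)
    return (S + dt/6 * (k1S + 2*k2S + 2*k3S + k4S),
            rho + dt/6 * (k1r + 2*k2r + 2*k3r + k4r))

# quench protocol: ground state for B || x, then propagate with B || z
L, J, Bz, dt = 10, 1.0, 0.1, 0.05
S = np.array([1.0, 0.0, 0.0])
_, V = np.linalg.eigh(t_eff(S, L, J))
rho = V[:, :L] @ V[:, :L].conj().T        # half filling, N = L
B = np.array([0.0, 0.0, Bz])
for _ in range(100):                      # propagate to t = 5
    S, rho = rk4(S, rho, B, L, J, dt)
```

Conserved quantities (the spin length and the particle number) provide a simple consistency check on the integration, and the precession about the $z$ axis sets in immediately after the quench.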
## VIII Numerical results
### VIII.1 Full theory
The precession around the $z$ axis defined by the local magnetic field is
expected to be the dominant effect in the classical spin dynamics. In fact,
this is the main phenomenon found by solving the full set of equations of
motion (49) and (50). Fig. 2 displays numerical results obtained with the full
theory for a system with $L=10$ sites at half-filling $N=L$, and for generic
parameter values $J=1$ and $B=0.1$. The $x$ component of the classical spin
undergoes a quite regular oscillation with a period close to $2\pi/\omega_{\rm
L}=2\pi/B\approx 62.8$. The $y$ component exhibits the same, but phase-shifted,
dynamics. We note that, for the selected parameter set, the geometrical spin
torque is too small to produce a sizeable renormalization of the precession
frequency.
Figure 2: Time evolution of the $x$ and the $z$ component of the classical
spin as obtained from the full theory for a system with $L=10$ sites at half-
filling $N=L$. Parameters: $J=1$, $B=0.1$. The energy and time units are set
by fixing the nearest-neighbor hopping at $T=1$.
Damping of the spin dynamics and eventual alignment of the classical spin with
the field ${\boldsymbol{B}}=B{\boldsymbol{e}}_{z}$ is typically a weaker
effect, which takes place on a much longer time scale, see e.g. the discussion
in Refs. Bhattacharjee _et al._ (2012); Sayad and Potthoff (2015). For
a closed, finite, and, with $L=10$ sites, rather small system as considered here, relaxation
will be imperfect anyway, and even in the long-time limit, the system cannot
fully approach its ground state locally, in the vicinity of $i_{0}$.
Uncovering this type of relaxation dynamics requires much larger systems, as
discussed in Refs. Elbracht and Potthoff (2020, 2021), for example.
Fig. 2 also displays the $z$ component of the spin. In case of a perfect
precessional motion, one would expect a constant $S_{z}$. As is seen in the
figure, however, an almost oscillatory motion of $S_{z}$ with some additional
irregularities is found instead. This nutation of the spin is reminiscent of
gyroscope theory Butikov (2006); Wegrowe and Ciornei (2012), but is not
easily understood. An explanation in terms of linear-response theory (see Eq.
(1)), i.e., Redfield theory for open quantum systems, involves the second-
order term in the Taylor expansion of the memory kernel Bhattacharjee _et
al._ (2012); Sayad _et al._ (2016b). For the parameters considered here, the
nutation effect is at least an order of magnitude smaller than the
precessional dynamics (see Fig. 2). There are cases, however, where
precessional and nutational oscillations can be of the same order of
magnitude. The additional “irregularities” on top of the nutation are even
more subtle. At this level of detail, the complexity of the
dynamics caused by the nonlinearity of the quantum-classical equations of
motion appears to prohibit a simple explanation.
### VIII.2 Anomalous precession
Figure 3: Time evolution of the angle enclosed by the classical spin
$\boldsymbol{S}$ and the expectation value of the local spin of the electron
system at the impurity site $\langle\boldsymbol{s}_{i_{0}}\rangle$. Results as
obtained by the full theory for $J=1$ (blue) and $J=15$ (orange). Other
parameters as in Fig. 2.
In the case of strong exchange coupling $J\gg T$, the classical spin
${\boldsymbol{S}}$ and the local magnetic moment
$\langle{\boldsymbol{s}}_{i_{0}}\rangle$ at $i_{0}$ are tightly bound
together. In this regime one would thus expect that
$\langle{\boldsymbol{s}}_{i_{0}}\rangle$ follows the classical-spin direction
almost instantaneously such that $\langle{\boldsymbol{s}}_{i_{0}}\rangle$ is
almost perfectly aligned antiferromagnetically to ${\boldsymbol{S}}$. The time
evolution of the angle enclosed by ${\boldsymbol{S}}$ and
$\langle{\boldsymbol{s}}_{i_{0}}\rangle$ is shown in Fig. 3. For $J=1$ the
mean deviation of the angle from $180^{\circ}$ is in fact about $2^{\circ}$
only, and it shrinks with increasing $J$, see the result for $J=15$. On the
other hand, the absolute value of the local moment
$\langle{\boldsymbol{s}}_{i_{0}}\rangle$ of the conduction-electron system
that is induced by ${\boldsymbol{S}}$ increases from
$|\langle{\boldsymbol{s}}_{i_{0}}\rangle|\approx 0.18$ at $J=1$ to
$|\langle{\boldsymbol{s}}_{i_{0}}\rangle|\approx 0.49$ at $J=15$. The net
effect, however, is that the spin torque on ${\boldsymbol{S}}$ originating
from the exchange term,
$J\langle{\boldsymbol{s}}_{i_{0}}\rangle\times{\boldsymbol{S}}$, is weak
compared to the torque due to the field
$-{\boldsymbol{B}}\times{\boldsymbol{S}}$. Following naive adiabatic theory
one would therefore expect a precessional motion of ${\boldsymbol{S}}$ in the
$x$-$y$ plane with a frequency $\omega_{\rm p}$ close to the Larmor frequency
$\omega_{\rm L}=B$. However, this naive picture in principle disregards the
effect due to the geometrical spin torque, which can be sizeable. It is thus
instructive to compare the naive expectation as well as adiabatic spin
dynamics theory (ASD) with the full solution of the fundamental equations of
motion.
Numerical results for a strong coupling $J=15$ are displayed in Fig. 4. The
full theory (see red curve) does predict an oscillatory motion of $S_{x}$ as
expected for precessional dynamics. However, the precession is not perfect:
Note, e.g., that $S_{x}$ does not reach its minimum value $S_{x}=-1$, while
$S_{x}\approx+1$ after a full period. In fact, the precession does not take
place in the $x$-$y$ plane but within a plane that is somewhat tilted and,
furthermore, the plane normal
$\boldsymbol{n}\propto{\boldsymbol{S}}\times\dot{{\boldsymbol{S}}}$ is
slightly time dependent.
The most important effect seen in Fig. 4, however, is the strongly enhanced
precession frequency $\omega_{\rm p}\approx 0.19$, which is close to twice the
Larmor frequency $\omega_{\rm L}=B=0.1$. This anomalous precession frequency
$\omega_{\rm p}$ is clearly at variance with the naive expectation and must
therefore result from the renormalization factor
$1/(1-\boldsymbol{S}\langle\boldsymbol{\Omega}\rangle)$ in Eq. (48). In fact,
the full theory (red) almost perfectly agrees with the prediction of the non-
abelian spin-dynamics (NA-SD) theory (blue), when spanning the low-energy
subspace ${\cal E}_{n}(\\{{\boldsymbol{S}}\\})$ by the instantaneous ground
and first excited state, i.e., for $n=2$.
Figure 4: Time dependence of the $x$-component of the classical spin for
$L=10$, $J=15$, $T=1$, $B=0.1$. Results as obtained from ASD ($n=1$, orange),
NA-SD with $n=2$ (blue), and the full theory (red).
Fig. 5 presents the same results of the NA-SD (blue curve) and the full theory
(red) in a classical-Bloch-sphere representation. At $t=0$, the motion of
${\boldsymbol{S}}$ starts at ${\boldsymbol{S}}=(1,0,0)$ (see blue dot) and
completes about three full periods up to the maximum propagation time $t=100$.
The dynamics is close to a planar precession but the instantaneous plane
normal ${\boldsymbol{n}}$ (green curve) exhibits a weak time dependence and
precesses itself around an axis that is somewhat tilted against the $z$ axis.
The full theory exhibits some additional wiggles which can also be seen in
Fig. 4 already and which are absent in the NA-SD. A low-energy subspace with
more than $n=2$ dimensions would be necessary to capture this effect. Apart
from that, however, there is an almost perfect agreement of the NA-SD results
with the results of the full theory.
Figure 5: The same as in Fig. 4 for the NA-SD (blue curve) and the full
theory (red) but displayed on a classical Bloch sphere. The blue dot marks the
spin position at time $t=0$. Green curve: unit vector ${\boldsymbol{n}}$
normal to the instantaneous precession plane. The trajectories are shown for
$0\leq t\leq 100$.
While this is very satisfying and underpins the construction of the NA-SD,
there is an interesting problem remaining: Comparing with the $n=1$ theory,
i.e., with ASD, there is a strong discrepancy. ASD (see orange curve in Fig. 4)
does in fact yield the same result as the naive adiabatic picture for the
present setup since the ($n=1$) spin-Berry curvature vanishes identically:
${\boldsymbol{\Omega}}=0$. This has been noted in Ref. Stahl and Potthoff
(2017) already, and the anomalous precession frequency has been explained by
referring to an effective two-spin model $H_{\rm two-
spin}=J{\boldsymbol{s}}_{i_{0}}{\boldsymbol{S}}-{\boldsymbol{B}}{\boldsymbol{S}}$
which disregards the presence of the sites $i\neq i_{0}$, an approximation that
can be argued to be justified in the strong-$J$ regime. The two-spin model indeed predicts
${\boldsymbol{\Omega}}=\frac{1}{2}{\boldsymbol{S}}$, so that the
renormalization factor $1/(1-{\boldsymbol{\Omega}}{\boldsymbol{S}})=2$, which
is in reasonable agreement with the results of the full theory.
The remaining problem is to clarify why, for the full model (47), the $n=1$
spin-Berry curvature vanishes. One should note that there is actually an odd-
even effect. For an odd number of sites $L$, the spin-Berry curvature is in
fact finite, and the agreement with the full theory is satisfying already at
the $n=1$ level, while extending the effective theory to $n=2$ yields only
minor corrections.
The odd-even effect can in fact be explained by a combination of time-reversal
symmetry and the fact that a local spin-dependent perturbation applied to a
non-magnetic ground state cannot induce a finite spin polarization in one
dimension. For $J=0$, the ground state $|\Psi_{0}\rangle$ of a
spin-SU(2)-symmetric tight-binding model is a total-spin singlet. For $J>0$, we
have $|\Psi_{0}\rangle=|\Psi_{0}({\boldsymbol{S}})\rangle$, where the
${\boldsymbol{S}}$ dependence is induced by the local perturbation
$J{\boldsymbol{S}}{\boldsymbol{s}}_{i_{0}}$. Assuming, without loss of
generality, that ${\boldsymbol{S}}=S{\boldsymbol{e}}_{z}$, it is given by a
Slater determinant of the form
$|\Psi_{0}({\boldsymbol{S}})\rangle=\prod_{k}\prod_{k^{\prime}}c^{\dagger}_{k\uparrow}c^{\dagger}_{k^{\prime}\downarrow}|\mbox{vac.}\rangle$,
where $k,k^{\prime}$ refer to the occupied spin-$\uparrow$ and
spin-$\downarrow$ eigenstates of the full Hamiltonian, including the
perturbation, with eigenenergies $\varepsilon_{\uparrow}(k)$ and
$\varepsilon_{\downarrow}(k)$, respectively.
For a one-dimensional particle-hole symmetric tight-binding model at half-
filling, a local spin-dependent but spin-diagonal perturbation
$JS_{z}s_{i_{0}z}$ does not change the number of $\uparrow$ and of
$\downarrow$ eigenstates with eigenenergies
$\varepsilon_{\uparrow}(k),\varepsilon_{\downarrow}(k)<0$, for arbitrary
coupling strength $J$ Kulkarni _et al._ (1999). This implies that for even
$L$ and at half-filling $N=L$, we must have $N_{\uparrow}=N_{\downarrow}=N/2$.
Consequently, the number of factors in the Slater determinant, labelled by $k$
and $k^{\prime}$, is the same, and thus $|\Psi_{0}({\boldsymbol{S}})\rangle$
is still a total-spin singlet (constructed from ${\boldsymbol{S}}$-dependent
one-particle states), irrespective of the strength of the perturbation $J$.
This argument holds for any direction of ${\boldsymbol{S}}$ and thus implies
that
$\Theta|\Psi_{0}({\boldsymbol{S}})\rangle=|\Psi_{0}({\boldsymbol{S}})\rangle$,
i.e., the ground state is invariant under time reversal $\Theta$ for all
${\boldsymbol{S}}$. Hence, the same holds for its ${\boldsymbol{S}}$
derivative:
$\Theta|\partial_{{\boldsymbol{S}}}\Psi_{0}({\boldsymbol{S}})\rangle=|\partial_{{\boldsymbol{S}}}\Psi_{0}({\boldsymbol{S}})\rangle$.
Some details on the invariance under time reversal are given in Appendix E.
Specializing Eq. (28) to the adiabatic case $n=1$ we thus have
${\boldsymbol{\Omega}}=\frac{1}{2}\sum_{\alpha\beta\gamma}\varepsilon_{\alpha\beta\gamma}{\boldsymbol{e}}_{\alpha}\Omega_{\beta\gamma}$
with
$\displaystyle\Omega_{\beta\gamma}$ $\displaystyle=$ $\displaystyle
i\left[\bra{\partial_{S_{\beta}}\Psi_{0}}\ket{\partial_{S_{\gamma}}\Psi_{0}}-(\beta\leftrightarrow\gamma)\right]$
(52) $\displaystyle=$
$\displaystyle-2\,\mbox{Im}\bra{\partial_{S_{\beta}}\Psi_{0}}\ket{\partial_{S_{\gamma}}\Psi_{0}}$
$\displaystyle=$
$\displaystyle-2\,\mbox{Im}\bra{\partial_{S_{\beta}}\Psi_{0}}\Theta^{\dagger}\Theta\ket{\partial_{S_{\gamma}}\Psi_{0}}^{\ast}$
$\displaystyle=$
$\displaystyle-2\,\mbox{Im}\bra{\partial_{S_{\beta}}\Psi_{0}}\ket{\partial_{S_{\gamma}}\Psi_{0}}^{\ast}=0\>,$
where we have exploited the anti-unitarity of $\Theta$. In an extension of the
discussion of Sec. V for the weak-$J$ case, we can thus infer that the abelian
spin-Berry curvature must vanish for even $L$ and arbitrary $J$ in one
dimension. Let us emphasize that the argument cannot be transferred to the
non-abelian case. For $n>1$, we have $\langle{\boldsymbol{\Omega}}\rangle\neq
0$ in general.
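The even-odd alternation can also be checked numerically in a gauge-invariant way, without fixing the phase of the ground state: the abelian curvature $\Omega_{xy}$ at ${\boldsymbol{S}}={\boldsymbol{e}}_{z}$ can be discretized on a small plaquette of spin configurations, with Slater-determinant overlaps computed from the occupied one-particle orbitals. The sketch below is our own construction (site/spin conventions as in Eq. (47), ${\boldsymbol{s}}=\boldsymbol{\sigma}/2$; the overall sign of the plaquette estimate is not fixed here):

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], complex)
SY = np.array([[0, -1j], [1j, 0]], complex)
SZ = np.array([[1, 0], [0, -1]], complex)

def occupied(S, L, J, N, T=1.0, i0=0):
    """Occupied one-particle orbitals of the N-electron ground state."""
    hop = np.zeros((L, L))
    for i in range(L - 1):
        hop[i, i + 1] = hop[i + 1, i] = -T
    h = np.kron(hop, np.eye(2)).astype(complex)
    h[2*i0:2*i0+2, 2*i0:2*i0+2] += (J / 2) * (S[0]*SX + S[1]*SY + S[2]*SZ)
    return np.linalg.eigh(h)[1][:, :N]

def overlap(U1, U2):
    """Slater-determinant overlap <Psi(U1)|Psi(U2)> = det(U1^dagger U2)."""
    return np.linalg.det(U1.conj().T @ U2)

def omega_xy(L, J, N, delta=1e-3):
    """Plaquette estimate of the abelian curvature Omega_xy at S = e_z.
    The product of overlaps around the plaquette is gauge invariant, so no
    phase fixing of the ground state is needed."""
    ez = np.array([0.0, 0.0, 1.0])
    corners = [ez,
               ez + np.array([delta, 0.0, 0.0]),
               ez + np.array([delta, delta, 0.0]),
               ez + np.array([0.0, delta, 0.0])]
    U = [occupied(S, L, J, N) for S in corners]
    prod = (overlap(U[0], U[1]) * overlap(U[1], U[2])
            * overlap(U[2], U[3]) * overlap(U[3], U[0]))
    return -np.angle(prod) / delta**2   # sign convention left open
```

For $L=1$ this reduces to the two-spin limit, where $|\Omega_{xy}|\to 1/2$; for even $L$ at half filling the estimate vanishes to numerical precision, in line with the time-reversal argument above, while for odd $L$ it stays finite.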
### VIII.3 Nutation
Apart from the precessional motion, the classical-spin dynamics also exhibits
nutational oscillations with a frequency that is in general different from the
precession frequency. The nutation is most easily seen in an oscillatory
behavior of the $z$ component of the classical spin: The field points into the
$z$ direction, ${\boldsymbol{B}}=B{\boldsymbol{e}}_{z}$, such that the $z$
component of the torque on ${\boldsymbol{S}}$ due to the field must vanish,
$({\boldsymbol{B}}\times{\boldsymbol{S}})_{z}=0$. A nonzero time derivative
$\dot{S}_{z}\neq 0$ is, therefore, solely due to the exchange coupling and
directly proportional to
$J(\langle{\boldsymbol{s}}_{i_{0}}\rangle\times{\boldsymbol{S}})_{z}$.
As such, a nutational motion cannot be captured by $n=1$ adiabatic spin-
dynamics (ASD) theory: The adiabatic constraint and a simple symmetry argument
immediately imply that the ground-state local moment
$\langle{\boldsymbol{s}}_{i_{0}}\rangle=\bra{\Psi_{0}({\boldsymbol{S}})}{\boldsymbol{s}}_{i_{0}}\ket{\Psi_{0}({\boldsymbol{S}})}$
must be strictly antiparallel (for $J>0$) to ${\boldsymbol{S}}$, which in turn
implies that $S_{z}$ is a constant of motion. The adiabatic spin dynamics is
thus perfectly precessional albeit, opposed to naive adiabatic theory, with a
renormalized precession frequency, as already discussed above.
Numerical results for $L=11$ as obtained from non-abelian spin-dynamics theory
with $n=2$, see Fig. 6 (blue curve), show that there can be a considerable
variation of the amplitude of the $z$ component of ${\boldsymbol{S}}$. The
nutational oscillation is perfectly harmonic, and $S_{z}$ stays non-negative
when starting with $S_{z}=0$ at $t=0$. As compared with the $S_{z}$ dynamics
predicted by the full theory (red curve), the step from $n=1$ (ASD) to $n=2$
(the simplest variant of NA-SD) is in fact the essential one, and the results
for $n=2$ are already close to those of the full theory. The latter, however,
predicts a slight deviation from perfectly harmonic nutational motion, which
is not reproduced with $n=2$ but can be captured with an improved ($n=4$)
approximation within the NA-SD (dashed blue curve). A further increase of $n$
becomes technically more and more involved and, moreover, has been found to
improve the results only non-monotonically. It is thus very fortunate that the
main improvement of the $n=1$ ASD is already achieved with $n=2$ NA-SD. For
the rest of the discussion, we will therefore stick to the $n=2$ case.
Figure 6: Time dependence of the $z$-component of the classical spin as
obtained from $n=2$ (solid blue curve) and from $n=4$ (dashed blue curve) NA-
SD, compared to the result (red curve) of the full theory. $L=11$, $J=1$,
$B=0.1$.
The physical cause of the nutation can be traced back to the time-dependent
admixture of the first excited state $|\Psi_{1}({\boldsymbol{S}}(t))\rangle$
to the instantaneous ground state $|\Psi_{0}({\boldsymbol{S}}(t))\rangle$.
Fig. 7 for $J=1$ (blue curve) displays the absolute square of the ground-state
coefficient $|\alpha_{0}|^{2}$ as a function of propagation time corresponding
to the $n=2$ NA-SD result for $S_{z}$ in Fig. 6. Note that we have
$|\alpha_{1}|^{2}=1-|\alpha_{0}|^{2}$ for $n=2$. Since at $t=0$ the
conduction-electron system is prepared in the ground state of $\hat{H}$, the
ground-state weight is $|\alpha_{0}|^{2}=1$ initially. In the course of time, there is a
weight transfer to the first excited state, which results in a significant
reduction of the ground-state weight down to a minimal value of
$|\alpha_{0}|^{2}\approx 0.72$. Within the $n=2$ NA-SD, the time-dependent
weight transfer is perfectly harmonic, and its frequency is exactly the same
as the nutation frequency of $S_{z}$ (see Fig. 6).
Figure 7: Time dependence of the ground-state weight $|\alpha_{0}|^{2}$ as
obtained from NA-SD with $n=2$ for a system with $L=11$, for $B=0.1$, and for
various coupling strengths $J=1$ (blue), $J=2$ (orange), and $J=10$ (green).
Figure 8: The same as Fig. 7 but for the time dependence of $S_{z}$. Coupling
strengths: $J=1$ (blue), $J=2$ (orange), and $J=10$ (green). Figure 9: The
same as Fig. 7 but for $L=10$. Note that the same color coding is used.
Coupling strengths: $J=1$ (blue), $J=2$ (orange), and $J=10$ (green). Figure
10: The same as Fig. 9 but for the time dependence of $S_{z}$. Coupling
strengths: $J=1$ (blue), $J=2$ (orange), and $J=10$ (green).
Increasing the coupling strength $J$ results in a weaker admixture of the
first excited state, as can be seen from the results for $J=2$ (orange) and
$J=10$ (green) in Fig. 7. This is accompanied by an increasing frequency of
the time-dependent weight transfer. Again, this frequency is precisely the
nutation frequency that is observed in the time dependence of $S_{z}$, which
is displayed in Fig. 8 for the different coupling strengths. We also note that
this is unrelated to the precession frequency, which is much less $J$
dependent. Furthermore, the $J$ dependence of the minimal (maximal)
oscillation amplitude shows the same trend for both the ground-state weight
and $S_{z}$.
Compared to the standard perturbative linear-response approach discussed in
the introduction, our approach thus provides an alternative explanation of
nutational spin dynamics. As in the standard theory, nutation is the first phenomenon found in a systematic expansion around the adiabatic limit: a Taylor expansion in the retardation time in the standard theory, and an expansion in the dimension of the instantaneous low-energy subspace in our approach. Another important difference is that the NA-SD is formulated for a closed system, while the standard theory relies on a formalism for open quantum systems. This also explains why the standard approach necessarily predicts a non-conserving Gilbert damping accompanying the nutational motion.
Figure 11: $J$ dependence of the single-particle eigenenergies for $L=10$ (right) and $L=11$ (left); $B=0.1$.
Figure 12: The finite-size energy gap $\Delta E$ between the ground state and the first excited state as a function of $J$ for $L=10$ (blue) and $L=11$ (orange); $B=0.1$.
The time dependence of the weight $|\alpha_{0}|^{2}$, as shown in Figs. 7 and
9, is reminiscent of the Rabi oscillations of the ground-state occupation in a
simple two-level system driven by an oscillatory time-dependent external
field. In our case the driving is due to the classical spin which is
precessing around the axis of the magnetic field. However, the case is more
complicated. As opposed to the standard Rabi setup Bellac (2006), the “two-level system” emerging in the ($n=2$) NA-SD is itself time dependent, it feeds back on the classical spin through the geometrical torque induced by the spin-Berry curvature, and ${\boldsymbol{S}}$ couples locally rather than globally to a time-dependent and in general only partially polarized local magnetic moment.
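For contrast, the textbook Rabi problem referred to above can be sketched in a few lines; the Hamiltonian, the circularly polarized drive, and all parameter values are illustrative and are not taken from the model of this work.

```python
import numpy as np

# The textbook Rabi problem for contrast: a static splitting w0 plus a
# circularly polarized drive of amplitude g and frequency w (hbar = 1).
# All numbers are illustrative.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
w0, g = 1.0, 0.1

def H(t, w):
    return 0.5 * w0 * sz + 0.5 * g * (np.cos(w * t) * sx + np.sin(w * t) * sy)

def evolve(w, tmax=200.0, dt=0.01):
    """RK4 for i d|psi>/dt = H(t)|psi>, starting from the upper level;
    returns the occupation of the other level over time."""
    psi = np.array([1.0, 0.0], dtype=complex)
    flip, t = [], 0.0
    while t < tmax:
        f = lambda tt, p: -1j * H(tt, w) @ p
        k1 = f(t, psi)
        k2 = f(t + dt / 2, psi + dt / 2 * k1)
        k3 = f(t + dt / 2, psi + dt / 2 * k2)
        k4 = f(t + dt, psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        flip.append(abs(psi[1]) ** 2)
    return np.array(flip)

p = evolve(w=w0)   # resonant drive: full, harmonic transfer ~ sin^2(g t / 2)
print(np.max(p))   # ~ 1 on resonance
```

On resonance the occupation is transferred completely and oscillates harmonically with the Rabi frequency $g$; in the NA-SD case, by contrast, the "driving" spin itself reacts back on the emergent two-level system.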
Let us return to the results for the ground-state weight for $L=11$ shown in
Fig. 7. It is tempting to interpret the decrease of the amplitude of the
oscillations of $|\alpha_{0}|^{2}$ with increasing $J$ as a consequence of
approaching the adiabatic limit, where $|\alpha_{0}|^{2}=1$. In fact, this
trend is consistent with the time-averaged angle enclosed by
${\boldsymbol{S}}$ and $\langle{\boldsymbol{s}}_{i_{0}}\rangle$ approaching
$180^{\circ}$ with increasing $J$ (see Fig. 3). However, for a tight-binding chain with an even number of sites ($L=10$), see the data in Fig. 9, we find that the oscillation amplitude of $|\alpha_{0}|^{2}$ grows with increasing $J$. We
conclude that there is an odd-even effect not only with respect to the
precessional but also to the nutational dynamics.
For an explanation of the effect, we consider the $2L$ single-particle
eigenenergies $\varepsilon_{k}$ of the minimal model Eq. (47). Their $J$
dependence is shown in Fig. 11 for $L=10$ (right, blue lines) and $L=11$
(left, orange lines). Only at $J=0$ are the eigenenergies spin-degenerate; any finite $J>0$ immediately lifts this degeneracy. Consistent with analytical
results available for tridiagonal pseudo-Toeplitz matrices Kulkarni _et al._
(1999), we find that the $\varepsilon_{k}(J)$ curves do not intersect and that
a finite “critical” coupling $J\approx 2$ is necessary to split off a pair of
bound states, localized in the vicinity of $i_{0}$, from the “continuum” of
delocalized states. Importantly, however, we note that the finite-size gap
$\Delta E$ between the highest occupied and the lowest unoccupied eigenenergy,
right below and right above $\varepsilon=0$, respectively, shows opposite
trends for $L=10$ and $L=11$.
The $J$ dependence of the gap is displayed in Fig. 12. We note that $\Delta E$
monotonically shrinks with $J$ for $L=10$ (blue lines) and grows with $J$ for $L=11$. This behavior is characteristic, in general, of systems with an even and an odd number of sites, respectively. According to the adiabatic theorem Bellac
(2006), the real-time dynamics is close to adiabatic if the gap size is large
compared to the inverse $\tau^{-1}$ of the typical time scale $\tau$. Here,
this can be estimated as $\tau^{-1}\sim B=0.1$. For the case $L=11$, this indeed implies that the adiabatic limit is approached with increasing $J$, while for $L=10$ a decreasing $J$ favors adiabatic dynamics. This also
explains the different $J$ dependence of the amplitudes of the nutational
oscillations of $S_{z}$ shown in Figs. 8 and 10, respectively.
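The gap criterion can be checked directly on a sketch of the minimal model. The sign conventions, the impurity site $i_{0}=1$, the hopping $t=1$, and half filling below are assumptions made for illustration, not taken from Eq. (47) verbatim.

```python
import numpy as np

# Single-particle spectrum of a sketch of the minimal model: an open
# tight-binding chain with the classical spin S = e_z coupled to the
# first site via J * S . s_{i0}.  With s = sigma/2 this decouples into
# two spin sectors with on-site energies +/- J/2 at the impurity site.
def spectra(L, J, t=1.0):
    chain = -t * (np.eye(L, k=1) + np.eye(L, k=-1))
    imp = np.zeros((L, L))
    imp[0, 0] = J / 2
    return np.linalg.eigvalsh(chain + imp), np.linalg.eigvalsh(chain - imp)

def gap(L, J):
    """Gap between the highest occupied and the lowest unoccupied level
    at half filling (L electrons in 2L levels)."""
    e = np.sort(np.concatenate(spectra(L, J)))
    return e[L] - e[L - 1]

# At J = 0 all levels are spin-degenerate; any J > 0 lifts the degeneracy.
# With tau^{-1} ~ B = 0.1, adiabaticity requires gap >> 0.1: increasing J
# helps for odd L but hurts for even L.
print(gap(10, 0.5), gap(10, 2.0))   # shrinks with J for L = 10
print(gap(11, 0.5), gap(11, 2.0))   # grows with J for L = 11
```

The opposite even/odd trends of the gap reproduce the behavior of Fig. 12 at the single-particle level.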
## IX Concluding discussion
Systems of a single or a few quantum spins coupled to an extended lattice
fermion model pose notoriously difficult quantum many-body problems. Here, by
treating the impurity spins as classical objects with a dynamics that is slow
as compared to the typical electronic time scales, we have concentrated on a
simplified case with the ambition to exactly trace out the high-energy scales
and to arrive at an effective low-energy theory that, apart from the classical
spins, includes a minimal number of electronic degrees of freedom. Our
approach in fact represents a systematic extension of the previously proposed
adiabatic spin dynamics (ASD) theory Stahl and Potthoff (2017), where
unconventional spin dynamics was observed to result from a geometrical spin
torque.
For systems where the typical spin-dynamics time scale is much slower than the
time scale of the electron dynamics, the adiabatic theorem, in case of gapped
systems, tells us that the electron state at an instant of time $t$ is the
ground state of the electronic Hamiltonian for the given spin configuration at
$t$. Alternatively, and more generally, one may argue that adiabatic dynamics is due to fast electronic relaxation processes dissipating the excess energy to the bulk of the system or to external baths. These standard arguments and more
explicit criteria, which typically motivate a purely adiabatic theory, are
rarely controllable and hardly ever fully met in applications to realistic
systems. In most practical cases, it is a priori extremely difficult to decide
whether or not the dynamics is adiabatic. Our approach therefore aims at a
straightforward way to improve the adiabatic spin-dynamics theory in an, at
least in principle, systematic manner.
As the central and sole approximation we assume that the electronic state at
any instant of time $t$ lies in the $n$-dimensional low-energy sector spanned
by the instantaneous ground state, realized for the classical-spin
configuration at time $t$, and the corresponding lowest $n-1$ instantaneous
excited states of the electron system. The approximation is implemented as a
holonomic constraint within a Lagrange formalism. We have seen that the
effective low-energy theory unfolds itself straightforwardly and naturally
takes the form of a non-abelian gauge theory, where the non-abelian spin-Berry
connection and spin-Berry curvature enter the resulting effective equations of
motions for the electronic state and for the spins. The gauge freedom is given
by the arbitrary choice of an orthonormal basis in the instantaneous low-
energy subspace of the electron system. SU(n) gauge transformations leave
observables invariant. The number $n$ of states considered in the non-abelian
spin dynamics (NA-SD) theory can be seen as a control parameter, so that
comparing results for different $n$ allows us to check the validity of the
approach, at least in principle.
The physically interesting point of the emergent low-energy theory is that the
spin dynamics is crucially affected by the gauge-invariant expectation value
of the (gauge-covariant) spin-Berry curvature, i.e., by an additional
geometrical spin torque. In the ASD ($n=1$) a non-zero spin-Berry curvature is
obtained for systems with broken time-reversal symmetry only. As opposed to ASD ($n=1$), however, the non-abelian spin dynamics (NA-SD) theory incorporates a spin-Berry curvature tensor, the elements of which are generically non-zero even in the more common time-reversal-symmetric case, both for the antiunitary time-reversal operator squaring to $+1$ and to $-1$. The NA-SD
formalism also provides an elegant and straightforward explanation for the
odd-even effect observed as function of the system size in the simpler ASD
Stahl and Potthoff (2017).
Applications of the NA-SD theory are promising in cases where (i) the
classical-spin approximation is reasonable, e.g., for magnetic atoms with high
spin quantum numbers or, more generally, with well-developed local magnetic
moments, which are stable on time scales exceeding all other relevant time
scales of the full system. This excludes, e.g., Kondo systems with a fast
screening of the local moment. Strong magnetic anisotropies at surfaces or
interfaces, on the other hand, can favor extremely stable magnetic moments
with respect to both longitudinal and transverse spin fluctuations
Wiesendanger (2009).
(ii) As regards the electron system, the amount of energy pumped in with the
initial excitation must be small compared to the lowest electron excitation
energies, such that a low-dimensional instantaneous low-energy subspace can
fully capture the essential dynamics. Such situations could be realized in
case of magnetic atoms coupled to tight-binding systems with essentially a finite number of orbitals, e.g., to metallic nanoislands supported by an insulating substrate Wiesendanger (2009) or to nanowires Bajpai and Nikolić (2020). Correlated molecular magnetic systems are interesting as
well, particularly in cases with a degenerate ground-state manifold (see Ref.
R. Rausch and Karrasch (2022) for an instructive example), which naturally
defines the low-energy subspace. In case of formally infinite, e.g.,
condensed-matter systems, NA-SD may be applicable whenever there is a low-
energy sector with a finite gap to excited states at higher energies, such as
insulating systems with a symmetry-induced degenerate ground state.
Topological insulators with gapless edge modes, e.g., Chern or Z2 insulators,
represent another class of systems worth considering, and the study of the relation between the different Berry curvatures, namely the spin-Berry curvature considered here and the conventional Berry curvature of topological band theory, is expected to be particularly instructive. The real-time dynamics
of classical spins coupled to the edge of a one-dimensional spinful Su-Schrieffer-Heeger model Elbracht and Potthoff (2021) and to a two-dimensional
spinful Kane-Mele model Quade and Potthoff (2022) have been discussed
recently. In the former case, the low-energy subspace (at one edge) is spanned
by two quantum states only. For the Z2 Kane-Mele nanoribbon, the helical edge
modes form a continuum but with an extremely small phase space for spin
excitations, which suggests that considering a finite number of basis states
for the low-energy sector could be a reasonably good approximation.
For classical spins coupled to gapless metallic bulk systems, any low-energy
sector is formally infinite-dimensional. While the adiabatic theorem does not
apply to this case, one still expects that a low-energy subspace defined by a
certain maximum excitation energy $\Delta E$ above the many-electron ground
state could reliably capture the electron dynamics, depending on the initial
excitation energy pumped into the system. If the electron system may be
treated in the independent-electron approximation, the application of NA-SD is
well conceivable, since it merely involves diagonalization of the single-
electron hopping matrix and computation of matrix elements of two-electron
operators with two-electron and two-hole excited states above the Fermi sea
(see Appendix D). By varying $\Delta E$, the reliability of the approximation
can be tested.
Here, as a proof of principle, we performed numerical calculations for a
minimal but non-trivial model consisting of a single impurity spin coupled to
the first site of a one-dimensional non-interacting tight-binding model with a
small number of $L$ sites. The real-time dynamics is initiated by a sudden
change of the direction of a local magnetic field coupled to the impurity spin
only. Results obtained from ASD ($n=1$) and NA-SD (for $n=2$ and $n=4$) have
been checked against results obtained from the numerical solution of the full,
unconstrained set of equations of motion for the coupled spin-electron system.
We find that the NA-SD reproduces the anomalous precession frequency that is
already predicted by ASD for systems with an odd number of sites $L$. For even
$L$, NA-SD correctly predicts anomalous precession, which is absent in the
purely adiabatic approach. This deficiency of the ASD can be explained by a
symmetry analysis. Depending on the coupling strength $J$, the dynamics of the
impurity spin can exhibit a considerable nutational motion. As judged by
comparison with the full theory, this more subtle effect is almost
quantitatively covered by NA-SD for $n=2$. NA-SD calculations for $n=4$ show
an even closer agreement with the full theory.
###### Acknowledgements.
We acknowledge support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through FOR 5249-449872909 (Project P8) and the European
Research Council via Synergy Grant 854843-FASTCORR.
## Appendix A Explicit form of the equation of motion for the classical spins
The equations of motion Eq. (25) for the classical spins derived in Sec. III
are implicit differential equations. An explicit form, however, is more
convenient for the numerical evaluation. Here, we briefly discuss a
corresponding reformulation. We start with Eq. (25) and apply
$\times\boldsymbol{S}_{m}$ from the right. This yields
$\dot{\boldsymbol{S}}_{m}=\langle\partial_{{\boldsymbol{S}}_{m}}\hat{H}_{\rm
int}\rangle\times\boldsymbol{S}_{m}+\partial_{{\boldsymbol{S}}_{m}}H_{\rm
cl}\times\boldsymbol{S}_{m}+\sum_{\delta}\sum_{k\gamma}\left(\sum_{\alpha\beta}\varepsilon_{\alpha\beta\delta}S_{m\beta}\langle{\Omega}\rangle_{k\gamma,m\alpha}\right)\dot{S}_{k\gamma}\boldsymbol{e}_{\delta}\>.$
(53)
Next, we combine the components of all $M$ spins in a single $3M$-dimensional
column,
$\boldsymbol{\mathcal{S}}:=(\boldsymbol{S}_{1},\boldsymbol{S}_{2},\dots)^{T}=\sum_{m=1}^{M}\boldsymbol{e}^{M}_{m}\otimes\boldsymbol{S}_{m}\>.$
(54)
Here $\boldsymbol{e}^{M}_{m}$ is the $m$-th canonical $M$-dimensional unit
vector and $\otimes$ denotes the Kronecker product. Writing
$\chi_{m\delta,k\gamma}=\sum_{\alpha\beta}\varepsilon_{\alpha\beta\delta}S_{m\beta}\langle{\Omega}\rangle_{k\gamma,m\alpha}$
for short, the last term on the right-hand side of Eq. (53) can be written as
$\sum_{\delta}(\underline{\chi}\dot{\boldsymbol{\mathcal{S}}})_{m\delta}\boldsymbol{e}_{\delta}$,
and we find the explicit form of the $3M$-dimensional system of differential
equations of motion:
$\dot{\boldsymbol{\mathcal{S}}}=\Big{(}\mathbb{1}-\underline{\chi}\Big{)}^{-1}\cdot\left(\sum_{m}\boldsymbol{e}^{M}_{m}\otimes\left(\langle\partial_{{\boldsymbol{S}}_{m}}\hat{H}_{\rm
int}\rangle\times\boldsymbol{S}_{m}+\partial_{{\boldsymbol{S}}_{m}}H_{\rm
cl}\times\boldsymbol{S}_{m}\right)\right)\>.$ (55)
This involves an inversion of the $3M$-dimensional matrix
$\mathbb{1}-\underline{\chi}$.
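The structure of this linear system can be illustrated for a single spin ($M=1$). Here $\langle\Omega\rangle$ and the torque are random placeholders rather than quantities computed from an actual electronic model; the point is the inversion of $\mathbb{1}-\underline{\chi}$ and the built-in conservation of the spin length.

```python
import numpy as np

# Explicit equation of motion for one classical spin (M = 1).  <Omega>
# and the torque T are random stand-ins, not derived from an electronic
# model.
rng = np.random.default_rng(1)
eps = np.zeros((3, 3, 3))                 # Levi-Civita symbol
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

S = rng.normal(size=3)
S /= np.linalg.norm(S)                    # |S| = 1
A = rng.normal(size=(3, 3))
Omega = A - A.T                           # antisymmetric <Omega>
T = rng.normal(size=3)                    # <dH_int/dS> + dH_cl/dS

# chi_{delta gamma} = sum_{alpha beta} eps_{alpha beta delta} S_beta <Omega>_{gamma alpha}
chi = np.einsum('abd,b,ga->dg', eps, S, Omega)

# (1 - chi) dS/dt = T x S  ->  solve the 3x3 linear system
Sdot = np.linalg.solve(np.eye(3) - chi, np.cross(T, S))
print(S @ Sdot)                           # ~ 0: spin length conserved
```

Since $\boldsymbol{S}^{T}\underline{\chi}=0$ identically (it involves $\sum_{\beta\delta}\varepsilon_{\alpha\beta\delta}S_{\beta}S_{\delta}=0$), the geometrical torque cannot change the spin length, which the printed scalar product confirms.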
## Appendix B Normalisation conditions
The equations of motion Eq. (25) and Eq. (26) respect the normalisation
conditions Eq. (27). We start with the wave-function normalization. Eq. (26)
implies
$i\sum_{i}\alpha^{\ast}_{i}(\partial_{t}\alpha_{i})=\sum_{ij}\alpha^{\ast}_{i}\alpha_{j}\bra{\Psi_{i}}\hat{H}\ket{\Psi_{j}}-i\sum_{ij}\alpha^{\ast}_{i}\alpha_{j}\bra{\Psi_{i}}\partial_{t}\ket{\Psi_{j}}=-i\sum_{i}(\partial_{t}\alpha^{\ast}_{i})\alpha_{i}\>.$
(56)
This yields $\partial_{t}\sum_{i}|\alpha_{i}|^{2}=0$ as required. Conservation
of the length of the classical spins can be verified directly from their
equations of motion, Eq. (25), or, more conveniently by taking the scalar
product of both sides of Eq. (53) with ${\boldsymbol{S}}_{m}$. This yields
${\boldsymbol{S}}_{m}\dot{{\boldsymbol{S}}}_{m}=0$ as required. However,
conservation of the spin length has been exploited already in deriving Eq.
(25), directly after Eq. (20).
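The norm conservation can also be checked numerically: whenever $\bra{\Psi_{i}}\hat{H}\ket{\Psi_{j}}$ is Hermitian and $\bra{\Psi_{i}}\partial_{t}\ket{\Psi_{j}}$ is anti-Hermitian, the generator of the coefficient dynamics is anti-Hermitian and $\sum_{i}|\alpha_{i}|^{2}$ stays fixed. The matrices below are random placeholders for the instantaneous matrix elements.

```python
import numpy as np

# Norm conservation of the coefficient dynamics of the type of Eq. (26):
# i d(alpha)/dt = (H - i A) alpha, with H Hermitian and A anti-Hermitian
# (both random stand-ins for the instantaneous matrix elements).
rng = np.random.default_rng(0)
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (M + M.conj().T) / 2                      # Hermitian part
A = (M - M.conj().T) / 2                      # anti-Hermitian part

alpha = rng.normal(size=n) + 1j * rng.normal(size=n)
alpha /= np.linalg.norm(alpha)

G = -1j * (H - 1j * A)        # d(alpha)/dt = G alpha; G is anti-Hermitian
dt, steps = 1e-3, 2000
for _ in range(steps):        # RK4 time stepping
    k1 = G @ alpha
    k2 = G @ (alpha + dt / 2 * k1)
    k3 = G @ (alpha + dt / 2 * k2)
    k4 = G @ (alpha + dt * k3)
    alpha = alpha + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

print(np.linalg.norm(alpha))  # stays 1 up to the RK4 error
```

The anti-Hermiticity of the generator, not any property of the specific matrices, is what protects $\sum_{i}|\alpha_{i}|^{2}$.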
Alternatively, we may thus explicitly take care of the normalization
conditions $\boldsymbol{S}^{2}_{m}=1$ by treating them as additional
constraints when deriving the equations of motion from the Lagrangian (13).
This is done with $M$ Lagrange multipliers $\lambda_{m}$, i.e., we replace the
Lagrangian by
$L^{\prime}_{\text{eff}}(\\{\boldsymbol{S}\\},\\{\dot{\boldsymbol{S}}\\},\\{\alpha\\},\\{\alpha^{\ast}\\},\\{\dot{\alpha}\\},\\{\dot{\alpha}^{\ast}\\},\\{\lambda\\})=L_{\text{eff}}(\\{\boldsymbol{S}\\},\\{\dot{\boldsymbol{S}}\\},\\{\alpha\\},\\{\alpha^{\ast}\\},\\{\dot{\alpha}\\},\\{\dot{\alpha}^{\ast}\\})-\sum_{m}\lambda_{m}(\boldsymbol{S}^{2}_{m}-1)\>,$
(57)
such that the Euler-Lagrange equation for $\lambda_{m}$ reads
$\boldsymbol{S}^{2}_{m}=1$. Further, the equation of motion for a classical
spin $\boldsymbol{S}_{m}$ is modified as
$0=\frac{1}{\absolutevalue{\boldsymbol{S}_{m}}^{3}}\dot{\boldsymbol{S}}_{m}\times\boldsymbol{S}_{m}+\langle\partial_{{\boldsymbol{S}}_{m}}\hat{H}_{\rm
int}\rangle+\partial_{{\boldsymbol{S}}_{m}}H_{\rm
cl}+\sum_{k}\sum_{\beta\gamma}\dot{S}_{k\gamma}\langle{\Omega}\rangle_{k\gamma,m\beta}{\boldsymbol{e}}_{\beta}+2\lambda_{m}\boldsymbol{S}_{m}\>.$
(58)
Acting on both sides of the equation with $\times\boldsymbol{S}_{m}$ and with
$\cdot\boldsymbol{S}_{m}$, respectively, gives a system of two equations, which is equivalent to Eq. (58):
$\displaystyle 0$ $\displaystyle=$
$\displaystyle(\dot{\boldsymbol{S}}_{m}\times\boldsymbol{S}_{m})\times\frac{\boldsymbol{S}_{m}}{\absolutevalue{\boldsymbol{S}_{m}}^{3}}+\langle\partial_{{\boldsymbol{S}}_{m}}\hat{H}_{\rm
int}\rangle\times\boldsymbol{S}_{m}+\partial_{{\boldsymbol{S}}_{m}}H_{\rm
cl}\times\boldsymbol{S}_{m}+\sum_{k}\sum_{\beta\gamma}\dot{S}_{k\gamma}\langle{\Omega}\rangle_{k\gamma,m\beta}{\boldsymbol{e}}_{\beta}\times\boldsymbol{S}_{m}\>,$
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\langle\partial_{{\boldsymbol{S}}_{m}}\hat{H}_{\rm
int}\rangle\cdot\boldsymbol{S}_{m}+\partial_{{\boldsymbol{S}}_{m}}H_{\rm
cl}\cdot\boldsymbol{S}_{m}+\sum_{k}\sum_{\beta\gamma}\dot{S}_{k\gamma}\langle{\Omega}\rangle_{k\gamma,m\beta}{\boldsymbol{e}}_{\beta}\cdot\boldsymbol{S}_{m}+2\lambda_{m}\boldsymbol{S}^{2}_{m}\>.$
(59)
Exploiting $\boldsymbol{S}^{2}_{m}=1$ in the second equation fixes the
Lagrange multipliers as
$\lambda_{m}=-\frac{1}{2}\left(\langle\partial_{{\boldsymbol{S}}_{m}}\hat{H}_{\rm
int}\rangle\cdot\boldsymbol{S}_{m}+\partial_{{\boldsymbol{S}}_{m}}H_{\rm
cl}\cdot\boldsymbol{S}_{m}+\sum_{k}\sum_{\beta\gamma}\dot{S}_{k\gamma}\langle{\Omega}\rangle_{k\gamma,m\beta}{\boldsymbol{e}}_{\beta}\cdot\boldsymbol{S}_{m}\right)\>,$
(60)
while using it in the first equation reproduces the familiar equation of
motion (25).
## Appendix C Spin-Berry curvature in terms of a projection operator
To prove Eq. (28) we start from the definition (22) of the non-abelian spin-
Berry curvature and insert the definition for the spin-Berry connection Eq.
(15). This gives
$\displaystyle\Omega^{(ij)}_{k\gamma,m\beta}$ $\displaystyle=$ $\displaystyle
i\left[\bra{\partial_{S_{k\gamma}}\Psi_{i}}\ket{\partial_{S_{m\beta}}\Psi_{j}}-\bra{\partial_{S_{m\beta}}\Psi_{i}}\ket{\partial_{S_{k\gamma}}\Psi_{j}}\right]$
(61) $\displaystyle+$ $\displaystyle
i\sum_{l=0}^{n-1}\left[\bra{\Psi_{i}}\partial_{S_{k\gamma}}\ket{\Psi_{l}}\bra{\Psi_{l}}\partial_{S_{m\beta}}\ket{\Psi_{j}}-\bra{\Psi_{i}}\partial_{S_{m\beta}}\ket{\Psi_{l}}\bra{\Psi_{l}}\partial_{S_{k\gamma}}\ket{\Psi_{j}}\right]\>,$
where we have exploited the commutativity of the derivatives
$\partial_{S_{k\gamma}}$ and $\partial_{S_{m\beta}}$. Using the completeness
relation and inserting a unity,
${\mathbb{1}}=\mathcal{Q}_{n}+\sum_{l=0}^{n-1}\ket{\Psi_{l}}\bra{\Psi_{l}}\>,$
(62)
where $\mathcal{Q}_{n}=\sum_{i\geq n}\ket{\Psi_{i}}\bra{\Psi_{i}}$ is the projector onto
the orthogonal complement of the low-energy space ${\cal
E}_{n}(\\{{\boldsymbol{S}}\\})$, we find
$\displaystyle\Omega^{(ij)}_{k\gamma,m\beta}$ $\displaystyle=$ $\displaystyle
i\left[\bra{\partial_{S_{k\gamma}}\Psi_{i}}\mathcal{Q}_{n}\ket{\partial_{S_{m\beta}}\Psi_{j}}-\bra{\partial_{S_{m\beta}}\Psi_{i}}\mathcal{Q}_{n}\ket{\partial_{S_{k\gamma}}\Psi_{j}}\right]$
(63) $\displaystyle+$ $\displaystyle
i\sum_{l=0}^{n-1}\left[\bra{\partial_{S_{k\gamma}}\Psi_{i}}\ket{\Psi_{l}}\bra{\Psi_{l}}\ket{\partial_{S_{m\beta}}\Psi_{j}}-\bra{\partial_{S_{m\beta}}\Psi_{i}}\ket{\Psi_{l}}\bra{\Psi_{l}}\ket{\partial_{S_{k\gamma}}\Psi_{j}}\right]$
$\displaystyle+$ $\displaystyle
i\sum_{l=0}^{n-1}\left[\bra{\Psi_{i}}\partial_{S_{k\gamma}}\ket{\Psi_{l}}\bra{\Psi_{l}}\partial_{S_{m\beta}}\ket{\Psi_{j}}-\bra{\Psi_{i}}\partial_{S_{m\beta}}\ket{\Psi_{l}}\bra{\Psi_{l}}\partial_{S_{k\gamma}}\ket{\Psi_{j}}\right]\>.$
Noting that
$\bra{\partial_{S_{m\beta}}\Psi_{i}}\ket{\Psi_{j}}=-\bra{\Psi_{i}}\partial_{S_{m\beta}}\ket{\Psi_{j}}$,
we see that the last two terms on the right-hand side cancel, and thus
$\Omega^{(ij)}_{k\gamma,m\beta}=i\left[\bra{\partial_{S_{k\gamma}}\Psi_{i}}\mathcal{Q}_{n}\ket{\partial_{S_{m\beta}}\Psi_{j}}-\bra{\partial_{S_{m\beta}}\Psi_{i}}\mathcal{Q}_{n}\ket{\partial_{S_{k\gamma}}\Psi_{j}}\right]\>.$
(64)
## Appendix D Numerical computation of spin-Berry curvature and connection
The equations of motion Eq. (25) and Eq. (26) form a coupled, non-linear set
of ordinary differential equations, which can be solved numerically by
standard techniques. Making use of the fact that the conduction-electron
system is non-interacting, however, is essential for an efficient computation
of the key quantities of the electron system, namely the spin-Berry curvature
and connection.
We start by specializing Eqs. (23) and (28) to the single-spin case $M=1$,
$\langle\Omega\rangle_{\beta\gamma}=i\sum_{i,j=0}^{n-1}\alpha_{i}^{\ast}\alpha_{j}\left(\bra{\partial_{\beta}\Psi_{i}}\mathcal{Q}_{n}\ket{\partial_{\gamma}\Psi_{j}}-\bra{\partial_{\gamma}\Psi_{i}}\mathcal{Q}_{n}\ket{\partial_{\beta}\Psi_{j}}\right)=2\sum_{i,j=0}^{n-1}\sum_{l\geq
n}\Im{\alpha_{i}^{\ast}\alpha_{j}\bra{\Psi_{i}}\partial_{\beta}\ket{\Psi_{l}}\bra{\Psi_{l}}\ket{\partial_{\gamma}\Psi_{j}}}\>,$
(65)
and use the identity
$\bra{\Psi_{i}}\partial_{\beta}\ket{\Psi_{l}}=\frac{\bra{\Psi_{i}}\frac{\partial\hat{H}}{\partial
S_{\beta}}\ket{\Psi_{l}}}{E_{l}-E_{i}}\qquad(E_{i}\neq E_{l})$ (66)
to express $\langle\boldsymbol{\Omega}\rangle$ in the form
$\displaystyle\langle\Omega\rangle_{\beta\gamma}$ $\displaystyle=$
$\displaystyle-2\imaginary\sum_{ij}\sum_{l}^{E_{l}\neq
E_{i},E_{j}}\alpha_{i}^{\ast}\alpha_{j}\frac{\bra{\Psi_{i}}\frac{\partial\hat{H}}{\partial
S_{\beta}}\ket{\Psi_{l}}\bra{\Psi_{l}}\frac{\partial\hat{H}}{\partial
S_{\gamma}}\ket{\Psi_{j}}}{(E_{i}-E_{l})(E_{j}-E_{l})}=-2\imaginary
J^{2}\sum_{ij}\sum_{l}^{E_{l}\neq
E_{i},E_{j}}\frac{\alpha_{i}^{\ast}\alpha_{j}\bra{\Psi_{i}}s_{i_{0}\beta}\ket{\Psi_{l}}\bra{\Psi_{l}}s_{i_{0}\gamma}\ket{\Psi_{j}}}{(E_{i}-E_{l})(E_{j}-E_{l})}\>.$
The matrix elements can be computed by plugging in the definition of the local
spin
$\boldsymbol{s}_{i}=\frac{\hbar}{2}\sum_{\sigma\sigma^{\prime}}c_{i\sigma}^{\dagger}\boldsymbol{\sigma}_{\sigma\sigma^{\prime}}c_{i\sigma^{\prime}}$
and by transforming to the eigenstates of the effective hopping matrix:
$c^{\dagger}_{i\sigma}=\sum_{k\tilde{\sigma}}U^{\dagger}_{k\tilde{\sigma},i\sigma}c^{\dagger}_{k\tilde{\sigma}}\;,\qquad
c_{i\sigma}=\sum_{k\tilde{\sigma}}U_{i\sigma,k\tilde{\sigma}}c_{k\tilde{\sigma}}\>.$
(68)
This yields
$\displaystyle\sum_{l}^{E_{l}\neq
E_{i},E_{j}}\frac{\bra{\Psi_{i}}s_{i_{0}\beta}\ket{\Psi_{l}}\bra{\Psi_{l}}s_{i_{0}\gamma}\ket{\Psi_{j}}}{(E_{i}-E_{l})(E_{j}-E_{l})}$
$\displaystyle=$
$\displaystyle\frac{1}{4}\sum_{\sigma\sigma^{\prime}\tau\tau^{\prime}}{\sum}^{\prime\prime}_{kk^{\prime}qq^{\prime}\atop\tilde{\sigma}\tilde{\sigma}^{\prime}\tilde{\tau}\tilde{\tau}^{\prime}}U^{\dagger}_{k\tilde{\sigma},i_{0}\sigma}\sigma^{(\beta)}_{\sigma\sigma^{\prime}}U_{i_{0}\sigma^{\prime},k^{\prime}\tilde{\sigma}^{\prime}}U^{\dagger}_{q\tilde{\tau},i_{0}\tau}\sigma^{(\gamma)}_{\tau\tau^{\prime}}U_{i_{0}\tau^{\prime},q^{\prime}\tilde{\tau}^{\prime}}\times$
(69) $\displaystyle\times$
$\displaystyle\frac{\bra{\Psi_{i}}c^{\dagger}_{k\tilde{\sigma}}c_{k^{\prime}\tilde{\sigma}^{\prime}}c^{\dagger}_{q\tilde{\tau}}c_{q^{\prime}\tilde{\tau}^{\prime}}\ket{\Psi_{j}}}{(E_{i}-E_{j}+\varepsilon_{q^{\prime}\tilde{\tau}^{\prime}}-\varepsilon_{q\tilde{\tau}})(\varepsilon_{q^{\prime}\tilde{\tau}^{\prime}}-\varepsilon_{q\tilde{\tau}})}\>,$
where $\sum^{\prime\prime}$ means that the indices
$k,k^{\prime},q,q^{\prime},\tilde{\sigma},\tilde{\sigma}^{\prime},\tilde{\tau},\tilde{\tau}^{\prime}$
can only take values such that
$c^{\dagger}_{k^{\prime}\tilde{\sigma}^{\prime}}c_{k\tilde{\sigma}}\ket{i}$
and $c^{\dagger}_{q\tilde{\tau}}c_{q^{\prime}\tilde{\tau}^{\prime}}\ket{j}$
are not contained in the low-energy subspace. For the summation indices it is
required that
$(k,\tilde{\sigma})\neq(k^{\prime},\tilde{\sigma}^{\prime})\quad\mbox{and}\quad(q,\tilde{\tau})\neq(q^{\prime},\tilde{\tau}^{\prime})$
(70)
since $i\neq l$ and $j\neq l$. Plugging this into the above formula for $\langle\Omega\rangle_{\beta\gamma}$ gives an expression that can be evaluated straightforwardly by numerical means.
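As an elementary consistency check of the sum-over-states expression, consider a single spin-1/2 in the exchange field of the classical spin, $H=\boldsymbol{S}\cdot\boldsymbol{\sigma}$ with all prefactors absorbed (a toy stand-in, not the lattice model). For $n=1$, only the ground state enters ($i=j=0$) and the formula reduces to the abelian Berry curvature of the lower level, which is known in closed form.

```python
import numpy as np

# Toy check of the sum-over-states formula: one spin-1/2 in the exchange
# field of the classical spin, H = S . sigma (prefactors absorbed).
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]

S = np.array([0.3, -0.5, 0.8])
E, V = np.linalg.eigh(sum(S[b] * sig[b] for b in range(3)))  # E[0]: ground state

def omega(beta, gamma):
    # <Omega>_{beta gamma} = -2 Im <0|dH/dS_beta|1><1|dH/dS_gamma|0> / (E_0 - E_1)^2
    m1 = V[:, 0].conj() @ sig[beta] @ V[:, 1]
    m2 = V[:, 1].conj() @ sig[gamma] @ V[:, 0]
    return -2.0 * np.imag(m1 * m2) / (E[0] - E[1]) ** 2

# Known closed form for the lower level (monopole form):
# eps_{beta gamma delta} S_delta / (2 |S|^3)
print(omega(0, 1), S[2] / (2 * np.linalg.norm(S) ** 3))
```

The matrix-element products are gauge invariant, so the arbitrary eigenvector phases returned by the diagonalization drop out, exactly as in the many-body expression above.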
We also have to compute the Berry connection, i.e., the matrix elements
$\bra{\Psi_{i}}\partial_{\beta}\ket{\Psi_{j}}$ in Eq. (26) (see also Eq.
(15)). For $i\neq j$ we can again use (66), since the single-particle energies
are generically nondegenerate for finite $J$ and since this implies that
states $\ket{\Psi_{i}}$ and $\ket{\Psi_{j}}$ with $E_{i}=E_{j}$ must differ in
more than one single-particle eigenstate. For $i=j$, on the other hand, $\bra{\Psi_{i}}\partial_{\beta}\ket{\Psi_{i}}$ must be computed differently.
We exploit that the many-particle state $\ket{\Psi_{i}}$ is a Slater
determinant:
$\ket{\Psi_{i}}=c^{\dagger}_{n_{1}}c^{\dagger}_{n_{2}}\cdots
c^{\dagger}_{n_{N}}\ket{\text{vac}}\>.$ (71)
Therewith, we get
$\partial_{S_{\beta}}\ket{\Psi_{i}}=\sum_{p=1}^{N}c^{\dagger}_{n_{1}}\cdots(\partial_{S_{\beta}}c^{\dagger}_{n_{p}})\cdots
c^{\dagger}_{n_{N}}\ket{\text{vac}}$ (72)
with
$\displaystyle\partial_{S_{\beta}}c^{\dagger}_{n_{p}}=\partial_{S_{\beta}}\sum_{j\sigma}U_{j\sigma,n_{p}}c^{\dagger}_{j\sigma}=\sum_{j\sigma}(\partial_{S_{\beta}}U_{j\sigma,n_{p}})c^{\dagger}_{j\sigma}=\sum_{j\sigma}\sum_{m}(\partial_{S_{\beta}}U_{j\sigma,n_{p}})U^{\dagger}_{m,j\sigma}c^{\dagger}_{m}=\sum_{m}(U^{\dagger}\partial_{S_{\beta}}U)_{mn_{p}}c^{\dagger}_{m}\>.$
(73)
Multiplying Eq. (72) with $\bra{\Psi_{i}}$ from the left yields
$\displaystyle\bra{\Psi_{i}}\partial_{\beta}\ket{\Psi_{i}}$ $\displaystyle=$
$\displaystyle\sum_{p=1}^{N}\sum_{m}(U^{\dagger}\partial_{S_{\beta}}U)_{mn_{p}}\underbrace{\bra{\text{vac}}c_{n_{N}}\cdots
c_{n_{p}}\cdots c_{n_{1}}c^{\dagger}_{n_{1}}\cdots c^{\dagger}_{m}\cdots
c^{\dagger}_{n_{N}}\ket{\text{vac}}}_{\delta_{n_{p}m}}$ (74) $\displaystyle=$
$\displaystyle\sum_{p=1}^{N}(U^{\dagger}\partial_{S_{\beta}}U)_{n_{p}n_{p}}={\sum_{n}}^{\prime}(U^{\dagger}\partial_{S_{\beta}}U)_{nn}=\sum_{n}(U^{\dagger}\partial_{S_{\beta}}U)_{nn}\bra{\Psi_{i}}\hat{n}_{n}\ket{\Psi_{i}}\>,$
where $\sum_{n}^{\prime}$ indicates that the sum only contains those single-
particle states that are occupied in the many-particle state $\ket{\Psi_{i}}$.
The derivative of the $U$-matrix can be computed by standard numerical means.
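A finite-difference sketch of $(U^{\dagger}\partial_{S_{\beta}}U)$ is given below; the real symmetric hopping matrix is an illustrative stand-in depending on a single parameter, and the gauge is fixed by sign-aligning the eigenvector columns between neighbouring parameter values.

```python
import numpy as np

# Finite-difference evaluation of (U^dagger dU/dS)_{nn} for a real
# symmetric "hopping matrix" depending on one parameter S; the local
# exchange shift 0.5 * S on the first site is an illustrative stand-in.
def hopping(S, L=6):
    h = -1.0 * (np.eye(L, k=1) + np.eye(L, k=-1))
    h[0, 0] = 0.5 * S
    return h

dS = 1e-6
E0, U0 = np.linalg.eigh(hopping(0.7))
_, U1 = np.linalg.eigh(hopping(0.7 + dS))
U1 = U1 * np.sign(np.sum(U0 * U1, axis=0))   # smooth-gauge sign alignment
D = U0.T @ (U1 - U0) / dS                    # U^T dU/dS

# For an orthogonal U in a smooth gauge, U^T dU is antisymmetric: the
# diagonal connection elements vanish, and the off-diagonal elements
# agree with <m|dH/dS|n> / (E_n - E_m), as in Eq. (66).
print(np.max(np.abs(np.diag(D))))            # ~ 0
print(D[0, 1], 0.5 * U0[0, 0] * U0[0, 1] / (E0[1] - E0[0]))
```

For a real orthogonal $U$ the diagonal connection elements vanish identically in a smooth gauge, which is consistent with the $\sum_{n}^{\prime}$ expression above being purely a bookkeeping device for occupied orbitals.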
## Appendix E Time-reversal symmetric ground state
We consider the minimal model with Hamiltonian $H$ given by Eq. (47). For
$J=0$ the (electronic part of the) model is invariant under SU(2) spin
rotations. For a given direction of the classical spin, say
${\boldsymbol{S}}=S{\boldsymbol{e}}_{z}$, and for $J>0$ the symmetry breaks down to a U(1) symmetry under spin rotations around the $z$ axis. As argued in
the main text, the local spin-dependent perturbation is not strong enough to
spin-polarize the system, irrespective of the coupling strength $J$. In this
case the ground state of $H$ is invariant under time reversal, as is shown in
the following:
The antiunitary operator $\Theta$ representing time reversal in Fock space is
defined via its action on the creation and annihilation operators as
$\Theta
c^{\dagger}_{i\uparrow}\Theta^{\dagger}=c^{\dagger}_{i\downarrow}\;,\quad\Theta
c^{\dagger}_{i\downarrow}\Theta^{\dagger}=-c^{\dagger}_{i\uparrow}\>,$ (75)
where $i$ refers to lattice sites and $\sigma=\uparrow,\downarrow$ to the spin
projection with respect to the $z$ axis. Due to the remaining U(1) symmetry,
the Hamiltonian can be diagonalized in the spin-$\uparrow$ and
spin-$\downarrow$ sectors separately, i.e., the single-particle eigenstates
$c^{\dagger}_{k\sigma}|\mbox{vac}\rangle$ of $H$ are obtained via a spin-
diagonal and spin-independent unitary transformation:
$c^{\dagger}_{k\sigma}=\sum_{i}U_{ik}c^{\dagger}_{i\sigma}\>.$ (76)
For the model Eq. (47) with ${\boldsymbol{S}}=S{\boldsymbol{e}}_{z}$, the
effective hopping matrix Eq. (51) is real and symmetric, and we can thus
assume a real and orthogonal transformation matrix $U$. The creation operators
referring to the eigenbasis of $H$ in the one-particle subspace thus transform
as
$\Theta
c^{\dagger}_{k\uparrow}\Theta^{\dagger}=c^{\dagger}_{k\downarrow}\;,\quad\Theta
c^{\dagger}_{k\downarrow}\Theta^{\dagger}=-c^{\dagger}_{k\uparrow}\>$ (77)
under time reversal.
For even $N$, the ground state of $H$ is the Slater determinant
$\ket{\Psi_{0}}=\prod^{\rm occ.}_{k}c^{\dagger}_{k\uparrow}\prod^{\rm
occ.}_{k^{\prime}}c^{\dagger}_{k^{\prime}\downarrow}\ket{\text{vac}}\>,$ (78)
where $\ket{\text{vac}}$ is the time-reversal invariant vacuum,
$k=1,...,N_{\uparrow}$, and $k^{\prime}=1,...,N_{\downarrow}$ with
$N_{\uparrow}=N_{\downarrow}=N/2$, as the ground state is unpolarized.
Applying $\Theta$ yields
$\Theta\ket{\Psi_{0}}=(-1)^{N_{\downarrow}}\prod^{\rm
occ.}_{k}c^{\dagger}_{k\downarrow}\prod^{\rm
occ.}_{k^{\prime}}c^{\dagger}_{k^{\prime}\uparrow}\ket{\text{vac}}\>,$ (79)
and, after reordering,
$\Theta\ket{\Psi_{0}}=(-1)^{N_{\uparrow}N_{\downarrow}}(-1)^{N_{\downarrow}}\prod^{\rm
occ.}_{k^{\prime}}c^{\dagger}_{k^{\prime}\uparrow}\prod^{\rm
occ.}_{k}c^{\dagger}_{k\downarrow}\ket{\text{vac}}\>.$ (80)
For $N_{\uparrow}=N_{\downarrow}=N/2$, however, the total sign is $(-1)^{(N/2)^{2}}(-1)^{N/2}=(-1)^{(N/2)(N/2+1)}=+1$, since the product of two consecutive integers is even. Hence the ground state is time-reversal symmetric,
$\Theta\ket{\Psi_{0}}=\ket{\Psi_{0}}\>.$ (81)
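The sign bookkeeping in Eqs. (79) and (80) can be double-checked combinatorially: the reordering sign is the parity of the permutation that moves the block of spin-up operators in front of the spin-down block.

```python
# Combinatorial check of the reordering sign in Eq. (80): bringing the
# N_up spin-up operators in front of the N_dn spin-down operators is a
# permutation of parity (-1)^(N_up * N_dn).
def parity(perm):
    """(-1)^(number of inversions) of a permutation of 0..len-1."""
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return (-1) ** inv

for n_up in range(5):
    for n_dn in range(5):
        # operator order after applying Theta: [down block | up block];
        # target order: [up block | down block]
        start = list(range(n_up, n_up + n_dn)) + list(range(n_up))
        assert parity(start) == (-1) ** (n_up * n_dn)

# Together with the extra (-1)^{N_dn} from Theta, the total exponent for
# N_up = N_dn = m is m^2 + m = m(m + 1), which is always even:
print(all((-1) ** (m * m + m) == 1 for m in range(100)))   # True
```

This confirms that the total sign is $+1$ for every half-filled, unpolarized Slater determinant, independent of $N$.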
## References
* Nowak (2007) U. Nowak, “Classical spin models,” in _Handbook of Magnetism and Advanced Magnetic Materials_ (Wiley, 2007).
* Bertotti _et al._ (2009) G. Bertotti, I. D. Mayergoyz, and C. Serpico, _Nonlinear Magnetization Dynamics in Nanosystems_ (Elsevier, Amsterdam, 2009).
* Kondo (1964) J. Kondo, Prog. Theor. Phys. 32, 37 (1964).
* Hewson (1993) A. C. Hewson, _The Kondo Problem to Heavy Fermions_ (Cambridge University Press, Cambridge, 1993).
* Tatara _et al._ (2008) G. Tatara, H. Kohno, and J. Shibata, Physics Reports 468, 213 (2008).
* Skubic _et al._ (2008) B. Skubic, J. Hellsvik, L. Nordström, and O. Eriksson, J. Phys.: Condens. Matter 20, 315203 (2008).
* Fähnle and Illg (2011) M. Fähnle and C. Illg, J. Phys.: Condens. Matter 23, 493201 (2011).
* Evans _et al._ (2014) R. F. L. Evans, W. J. Fan, P. Chureemart, T. A. Ostler, M. O. A. Ellis, and R. W. Chantrell, J. Phys.: Condens. Matter 26, 103202 (2014).
* (9) M. A. Ruderman and C. Kittel, Phys. Rev. 96, 99 (1954); T. Kasuya, Prog. Theor. Phys. 16, 45 (1956); K. Yosida, Phys. Rev. 106, 893 (1957).
* (10) L. D. Landau and E. M. Lifshitz, Physik. Zeits. Sowjetunion 8, 153 (1935); T. Gilbert, Phys. Rev. 100, 1243 (1955); T. Gilbert, IEEE Trans. Magn. 40, 3443 (2004).
* Butikov (2006) E. Butikov, European Journal of Physics 27, 1071 (2006).
* Wegrowe and Ciornei (2012) J.-E. Wegrowe and M.-C. Ciornei, Am. J. Phys. 80, 607 (2012).
* Onoda and Nagaosa (2006) M. Onoda and N. Nagaosa, Phys. Rev. Lett. 96, 066603 (2006).
* Umetsu _et al._ (2012) N. Umetsu, D. Miura, and A. Sakuma, J. Appl. Phys. 111, 07D117 (2012).
* Bhattacharjee _et al._ (2012) S. Bhattacharjee, L. Nordström, and J. Fransson, Phys. Rev. Lett. 108, 057204 (2012).
* Sayad and Potthoff (2015) M. Sayad and M. Potthoff, New J. Phys. 17, 113058 (2015).
* Bajpai and Nikolic (2019) U. Bajpai and B. K. Nikolic, Phys. Rev. B 99, 134409 (2019).
* (18) S. V. Vonsovsky, Zh. Éksp. Teor. Fiz. 16, 981 (1946); C. Zener, Phys. Rev. 81, 440 (1951); S. V. Vonsovsky and E. A. Turov, Zh. Éksp. Teor. Fiz. 24, 419 (1953).
* Breuer and Petruccione (2002) H. Breuer and F. Petruccione, _The Theory of Open Quantum Systems_ (Oxford University Press, New York, 2002).
* Sayad _et al._ (2016a) M. Sayad, R. Rausch, and M. Potthoff, Phys. Rev. Lett. 117, 127201 (2016a).
* Antropov _et al._ (1995) V. P. Antropov, M. I. Katsnelson, M. van Schilfgaarde, and B. N. Harmon, Phys. Rev. Lett. 75, 729 (1995).
* Kuneš and Kamberský (2002) J. Kuneš and V. Kamberský, Phys. Rev. B 65, 212411 (2002).
* Capelle and Gyorffy (2003) K. Capelle and B. L. Gyorffy, Europhys. Lett. 61, 354 (2003).
* Ebert _et al._ (2011) H. Ebert, S. Mankovsky, D. Ködderitzsch, and P. J. Kelly, Phys. Rev. Lett. 107, 066603 (2011).
* Fähnle _et al._ (2011) M. Fähnle, D. Steiauf, and C. Illg, Phys. Rev. B 84, 172403 (2011).
* Kikuchi and Tatara (2015) T. Kikuchi and G. Tatara, Phys. Rev. B 92, 184410 (2015).
* Sayad _et al._ (2016b) M. Sayad, R. Rausch, and M. Potthoff, Europhys. Lett. 116, 17001 (2016b).
* Stahl and Potthoff (2017) C. Stahl and M. Potthoff, Phys. Rev. Lett. 119, 227203 (2017).
* Berry (1984) M. V. Berry, Proc. R. Soc. London A 392, 45 (1984).
* Xiao _et al._ (2010) D. Xiao, M.-C. Chang, and Q. Niu, Rev. Mod. Phys. 82, 1959 (2010).
* Wen and Zee (1988) X. G. Wen and A. Zee, Phys. Rev. Lett. 61, 1025 (1988).
* Niu and Kleinman (1998) Q. Niu and L. Kleinman, Phys. Rev. Lett. 80, 2205 (1998).
* Bohm _et al._ (2003) A. Bohm, A. Mostafazadeh, H. Koizumi, Q. Niu, and J. Zwanziger, _The Geometric Phase in Quantum Systems_ (Springer, Berlin, 2003).
* Niu _et al._ (1999) Q. Niu, X. Wang, L. Kleinman, W. Liu, D. M. C. Nicholson, and G. M. Stocks, Phys. Rev. Lett. 83, 207 (1999).
* Bajpai and Nikolić (2020) U. Bajpai and B. K. Nikolić, Phys. Rev. Lett. 125, 187202 (2020).
* Elbracht _et al._ (2020) M. Elbracht, S. Michel, and M. Potthoff, Phys. Rev. Lett. 124, 197202 (2020).
* Michel and Potthoff (2021) S. Michel and M. Potthoff, Phys. Rev. B 103, 024449 (2021).
* Marx and Hutter (2000) D. Marx and J. Hutter, _Ab initio molecular dynamics: Theory and Implementation, In: Modern Methods and Algorithms of Quantum Chemistry_, NIC Series, Vol. 1, Ed. by J. Grotendorst, p. 301 (John von Neumann Institute for Computing, Jülich, 2000).
* Zhang and Wu (2006) Q. Zhang and B. Wu, Phys. Rev. Lett. 97, 190401 (2006).
* Wilczek and Zee (1984) F. Wilczek and A. Zee, Phys. Rev. Lett. 52, 2111 (1984).
* Heslot (1985) A. Heslot, Phys. Rev. D 31, 1341 (1985).
* Hall (2008) M. J. W. Hall, Phys. Rev. A 78, 042104 (2008).
* Elze (2012) H. Elze, Phys. Rev. A 85, 052109 (2012).
* Bulgac and Kusnezov (1990) A. Bulgac and D. Kusnezov, Ann. Phys. (N.Y.) 199, 187 (1990).
* Dirac (1931) P. A. M. Dirac, Proc. R. Soc. London A 133, 60 (1931).
* Kato (1950) T. Kato, J. Phys. Soc. Jpn. 5, 435 (1950).
* Avron and Elgart (1999) J. E. Avron and A. Elgart, Commun. Math. Phys. 203, 445 (1999).
* Comparat (2009) D. Comparat, Phys. Rev. A 80, 012106 (2009).
* Simon (1983) B. Simon, Phys. Rev. Lett. 51, 2167 (1983).
* Peskin and Schroeder (1996) M. E. Peskin and D. V. Schroeder, _An introduction to quantum field theory_ , 3rd ed., edited by Array, Graduate Texts in Mathematics (Addison-Wesley, Reading u.a., 1996).
* Elbracht and Potthoff (2020) M. Elbracht and M. Potthoff, Phys. Rev. B 102, 115434 (2020).
* Elbracht and Potthoff (2021) M. Elbracht and M. Potthoff, Phys. Rev. B 103, 024301 (2021).
* Kulkarni _et al._ (1999) D. Kulkarni, D. Schmidt, and S.-K. Tsui, _Eigenvalues of tridiagonal pseudo-Toeplitz matrices_ , Linear algebra and its applications, Vol. 297 (Elsevier, 1999).
* Bellac (2006) M. L. Bellac, _Quantum Physics_ (Cambridge University Press, Cambridge, 2006).
* Wiesendanger (2009) R. Wiesendanger, Rev. Mod. Phys. 81, 1495 (2009).
* R. Rausch and Karrasch (2022) C. P. R. Rausch, M. Peschke and C. Karrasch, SciPost Phys. 12, 143 (2022).
* Quade and Potthoff (2022) R. Quade and M. Potthoff, Phys. Rev. B 105, 035406 (2022).
|
# Particle acceleration by magnetic reconnection in relativistic jets: the
transition from small to large scales
Tania E. Medina-Torrejón Elisabete M. de Gouveia Dal Pino Universidade de São
Paulo, Instituto de Astronomia, Geofísica e Ciências Atmosféricas,
Departamento de Astronomia, 1226 Matão Street, São Paulo, 05508-090, Brasil
Grzegorz Kowal Escola de Artes, Ciências e Humanidades - Universidade de São
Paulo, Av. Arlindo Béttio, 1000 – Vila Guaraciaba, CEP: 03828-000, São Paulo -
SP, Brazil
(Accepted May 16, 2023)
###### Abstract
Several MHD works, in particular the recent one by Medina-Torrejon et al.
(2021) based on three-dimensional MHD simulations of relativistic jets, have
evidenced that particle acceleration by magnetic reconnection driven by the
turbulence in the flow occurs from the resistive scale up to the large
injection scale of the turbulence. Particles experience Fermi-type
acceleration up to ultra-high energies, predominantly of the velocity
component parallel to the local magnetic field, in the reconnection layers at
all scales, due to the ideal electric fields of the background fluctuations
($V\times B$, where $V$ and $B$ are the velocity and magnetic field of the
fluctuations, respectively). In this work, we present MHD-particle-in-cell
(MHD-PIC) simulations following the early stages of the particle acceleration
in the relativistic jet, which confirm these previous results and demonstrate
the strong potential of magnetic reconnection driven by turbulence to
accelerate relativistic particles to extreme energies in magnetically
dominated flows. Our results also show that the dynamical time variations of
the background magnetic fields do not influence the acceleration of the
particles in this process.
acceleration of particles - magnetic reconnection - magnetohydrodynamics (MHD)
- particle-in-cell - methods: numerical
Journal: ApJ
## 1 Introduction
The role of magnetic reconnection in the acceleration of energetic particles
has lately gained tremendous importance in high energy astrophysics (de
Gouveia Dal Pino & Lazarian, 2005; Giannios et al., 2009; de Gouveia Dal Pino
et al., 2010; Zhang & Yan, 2011; Hoshino & Lyubarsky, 2012; McKinney &
Uzdensky, 2012; Arons, 2013; Kadowaki et al., 2015; Singh et al., 2015; Zhang
& Li, 2015; Zhang et al., 2018). It is now regarded as a strong candidate for
the production of ultra-high energy cosmic rays (UHECRs) (e.g. Medina-Torrejón
et al., 2021) and very high energy (VHE) flares in the magnetically dominated
regions of relativistic sources (i.e., where the magnetic energy is of the
order or exceeds the rest mass energy of the particles) (e.g., Cerutti et al.,
2013; Yuan et al., 2016; Lyutikov et al., 2018; Petropoulou et al., 2016;
Christie et al., 2019; Mehlhaff et al., 2020; Kadowaki et al., 2021).
The comprehension of particle acceleration driven by magnetic reconnection has
greatly improved thanks to both particle-in-cell (PIC) simulations
(predominantly performed in two-dimensions - 2D) (e.g., Zenitani & Hoshino,
2001; Drake et al., 2006; Zenitani & Hoshino, 2007, 2008; Lyubarsky & Liverts,
2008; Drake et al., 2010; Clausen-Brown & Lyutikov, 2012; Cerutti et al.,
2012, 2014; Li et al., 2015; Werner et al., 2018, 2019; Lyutikov et al., 2017;
Sironi & Spitkovsky, 2014; Guo et al., 2015, 2016, 2020; Sironi et al., 2015;
Ball et al., 2018; Kilian et al., 2020; Sironi, 2022), and MHD simulations
(generally performed in 3D) (e.g., Kowal et al., 2011, 2012; del Valle et al.,
2016; Beresnyak & Li, 2016; Guo et al., 2019; Medina-Torrejón et al., 2021).
They both have established reconnection as an efficient process of
acceleration.
Our understanding is that particles are predominantly accelerated in
reconnection sites by a Fermi-type mechanism in ideal electric fields (de
Gouveia Dal Pino & Lazarian, 2005; Drake et al., 2006; Kowal et al., 2012; Guo
et al., 2019). They undergo multiple crossings between the two converging
magnetic fluxes of opposite polarity moving toward each other at the
reconnection velocity ($V_{rec}$), thereby gaining energy from head-on
interactions with background
magnetic irregularities (see also Lazarian et al., 2012; de Gouveia Dal Pino &
Kowal, 2015; Lazarian et al., 2020, for reviews). In order to produce fast
reconnection and hence efficient particle acceleration, the ubiquitous
turbulence in astrophysical MHD flows is acknowledged as one of the main
driving mechanisms. The wandering of the magnetic field lines in the turbulent
flow allows for many simultaneous events of reconnection and the enlargement
of the outflow regions, removing the reconnected flux more efficiently. These
two factors result in the reconnection rate being a substantial fraction of
the Alfvén speed and independent of the microscopic magnetic resistivity
(i.e., independent of the Lundquist number and depending only on the
parameters of the turbulence) (Lazarian & Vishniac, 1999; Kowal et al., 2009;
Eyink et al., 2013; Takamoto et al., 2015; Santos-Lima et al., 2010, 2020;
Lazarian et al., 2020). The intrinsic 3D nature of the turbulent reconnection
and the particle acceleration that it entails makes the process more efficient
than the acceleration in the 2D shrinking plasmoids and X-points that are
usually excited by tearing mode instability in PIC (Hoshino & Lyubarsky, 2012;
Drake et al., 2006; Sironi & Spitkovsky, 2014) and in resistive MHD (e.g.,
Kowal et al., 2011; Puzzoni et al., 2022) simulations. Moreover, 2D plasmoids
are nothing but the cross section of 3D reconnecting magnetic flux tubes, and
particle acceleration in nature cannot be confined to 2D plasmoids. This has
been successfully verified in 3D MHD simulations considering the injection of
thousands of test particles in a current sheet with embedded forced turbulence
(Kowal et al., 2012; del Valle et al., 2016). In these simulations, the
formation of a thick volume filled with a large number of converging
reconnecting layers covering the entire inertial range of the turbulence, from
the resistive to the injection scale, allows particle acceleration up to the
very large scales of the system and to high energies. These are crucial
differences with regard to PIC simulations which can probe only the kinetic
small (resistive) scales of the acceleration process, dealing with large
intrinsic resistivity wherein particles are predominantly accelerated by non-
ideal electric fields and only up to a few thousand times their rest mass
energy. Due to these differences one has to be very cautious when
extrapolating the results of particle acceleration from PIC simulations to the
macroscopic scales of real systems (see e.g. review in Lazarian et al., 2012).
The MHD studies mentioned above (Kowal et al., 2012; del Valle et al., 2016)
considered particle acceleration in non-relativistic domains of 3D
reconnection. More recently, Medina-Torrejón et al. (2021) (hereafter MGK+21)
and Kadowaki et al. (2021) (hereafter KGM+21), motivated by current
debates related to the origin of cosmic ray acceleration and VHE variable
emission in relativistic jets, and especially in blazars (e.g., Aharonian et
al., 2007; Ackermann et al., 2016; Britto et al., 2016; Aartsen et al., 2018),
investigated particle acceleration in a 3D relativistic magnetically dominated
jet subject to current driven kink instability (CDKI), by means of
relativistic MHD simulations (using the RAISHIN code; Mizuno et al., 2012;
Singh et al., 2016). The instability drives turbulence and fast magnetic
reconnection in the jet flow. Its growth and saturation cause the excitation
of large amplitude wiggles along the jet and the disruption of the initial
helical magnetic field configuration, leading to the formation of several
sites of fast reconnection. The turbulence developed follows approximately a
Kolmogorov spectrum (KGM+21). Test protons injected into nearly stationary
snapshots of the jet experience an exponential acceleration in time,
predominantly of the momentum component parallel to the local field, up to a
maximum energy. For a background magnetic field of $B\sim 0.1$ G, this
saturation energy is $\sim 10^{16}$ eV, while for $B\sim 10$ G it is $\sim
10^{18}$ eV. There is a clear association of the accelerated particles with
the regions of fast reconnection and largest current density. The particles
interact with magnetic fluctuations from the small dissipative scales up to
the injection scales of the turbulence, which is of the order of the size of
the jet diameter. For this reason, the Larmor radius of the particles
attaining the saturation energy, which gives the maximum size of the
acceleration region, is also of the same order. Beyond the saturation value,
the particles suffer further acceleration to energies up to 100 times larger,
but at a slower rate, due to drift in the largest scale non-reconnecting
fields. The energy spectrum of the accelerated particles develops a high
energy tail with a power-law index $p\sim -1.2$ at the beginning of the
acceleration, in agreement with earlier works (MGK+21).
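The linear scaling between the saturation energy and the field strength is consistent with the Larmor radius of the saturating particles reaching a fixed acceleration-region size (of order the jet diameter, as noted above). A quick, purely illustrative gyroradius check, using the two quoted $(E_{sat},B)$ pairs:

```python
# Saturation energies and fields quoted above (E in eV, B in gauss)
cases = [(1e16, 0.1), (1e18, 10.0)]
e_esu = 4.8032e-10        # proton charge (esu)
ERG_PER_EV = 1.6022e-12

def gyroradius_cm(E_eV, B_G):
    """Ultrarelativistic gyroradius r_L = E / (e B), in cm."""
    return E_eV * ERG_PER_EV / (e_esu * B_G)

radii = [gyroradius_cm(E, B) for E, B in cases]
print([f"{r:.2e} cm" for r in radii])  # both ~3.3e14 cm: E_sat scales linearly with B
```

Both cases give the same gyroradius, illustrating that a hundredfold stronger field yields a hundredfold larger saturation energy for a fixed acceleration-region size.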
In this work, we present results of 3D MHD-PIC simulations of relativistic
jets (using the PLUTO code; Mignone et al., 2018), considering in most of the
tests the same initial jet setup as in MGK+21 and KGM+21. Our main goals here
are: (i) to test the early stages of the acceleration of the particles
evolving at the same time that the jet develops the turbulence driven by the
CDKI; (ii) to compare with these previous studies, which were performed with
test particles launched in the MHD jet after it achieved a nearly steady state
regime of fully developed turbulence; and (iii) to investigate potential
effects of the background magnetic field dynamical time evolution on particle
acceleration. We find that the results are very similar to the previous
studies. Particles are accelerated by the ideal electric field of the
background fluctuations in the reconnection layers of the turbulent flow, from
the small resistive scale up to the large injection scales of the turbulence.
Furthermore, the time evolution of the background fields does not affect their
acceleration.
The paper is organized as follows: in Section 2 we describe the numerical
method and setup; in Section 3 we present the results obtained from the
numerical simulations; and in Section 4 we discuss the results and draw our
conclusions.
## 2 Numerical Method and Setup
We performed 3D relativistic MHD-PIC simulations of a jet using the PLUTO code
with no explicit resistivity (Mignone et al., 2018). We employed the HLLD
Riemann solver to calculate the fluxes (Mignone, Ugliano, & Bodo, 2009), a
flux-interpolated constrained transport to control the divergence $\nabla\cdot
B=0$ (Mignone et al., 2019), and a second-order TVD Runge–Kutta scheme to
advance the equations in time.
We have used a similar setup as in MGK+21 and KGM+21, considering a rotating
relativistic jet with initial force-free helical magnetic field and initial
decreasing radial density profile (for more details, see MGK+21; see also
Mizuno et al., 2012; Singh et al., 2016).
The computational domain in Cartesian coordinates $(x,y,z)$ has dimensions
$10L\times 10L\times 6L$, where $L$ is the length scale unit. The larger
domain adopted in the x and y directions is due to the fact that the jet
structure exceeds the boundaries of the box at evolved times. We have imposed
outflow boundaries in the transverse directions x and y and periodic
boundaries in the z direction. In most of the simulations we have considered a
grid resolution of $256$ cells in each direction (implying a cell size of
$\sim$0.02 L in the z direction and 0.04 L in the x and y directions), but in
order to test the convergence of the results we have also run a model with
$426$ cells in the x and y directions and 256 in the z direction (implying a
cell size of $\sim$0.02 L in all directions).
The code unit (c.u.) for the velocity is the light speed $c$, for time is
$L/c$, for density is $\rho_{0}=$1, for magnetic field is
$\sqrt{4\pi\rho_{0}c^{2}}$, and for pressure is $\rho_{0}c^{2}$.
We have considered two different initial values of the magnetization parameter
$\sigma_{0}=B_{0}^{2}/\gamma^{2}\rho h\sim 0.6$ and $10$ at the jet axis,
corresponding to a magnetic field $B_{0}=0.7$ and density $\rho=0.8$, and
$B_{0}=4.0$ and $\rho=1.6$, respectively, where $\gamma$ is the Lorentz factor
and $h$ is the specific enthalpy (with $\gamma\sim 1$ and $h\sim 1$ at the
axis). Hereafter, we will refer to these models simply as the $\sigma\sim 1$
and $\sigma\sim 10$ models.
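As an illustrative translation of these code units into physical numbers (a sketch only, not part of the simulation, assuming the fiducial density $n_{cgs}=1$ cm$^{-3}$ adopted later in this section):

```python
import math

rho_cgs = 1.67e-24        # fiducial density, g/cm^3 (n = 1 cm^-3 of protons)
c = 2.9979e10             # speed of light, cm/s

# Code unit of magnetic field: sqrt(4 pi rho_0 c^2)
B_unit = math.sqrt(4.0 * math.pi * rho_cgs * c**2)
print(f"B unit = {B_unit:.3f} G")   # ~0.14 G

# sigma_0 = B_0^2 / (gamma^2 rho h), with gamma ~ h ~ 1 at the jet axis
for B0, rho in [(0.7, 0.8), (4.0, 1.6)]:
    sigma = B0**2 / rho
    print(f"B0 = {B0} c.u. ({B0 * B_unit:.2f} G): sigma = {sigma:.2f}")
```

With this density, the $\sigma\sim 1$ model corresponds to $B_{0}\sim 0.1$ G, of the order of the weaker field considered in MGK+21.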
In order to drive turbulence in the jet, we allow for the development of the
current-driven-kink instability (CDKI) by imposing an initial perturbation in
the radial velocity profile as in MGK+21 (equation 7; see also Mizuno et al.,
2012; Singh et al., 2016).
In the MHD-PIC mode, the test particle trajectories are integrated in the
time-evolving plasma fields (velocity and magnetic field) using the Boris
pusher method (Boris, 1970), which requires the definition of the
charge-to-mass ratio for the particles. We have adopted here $e/mc=$ 20,000,
which implies a physical length scale relation in cgs units:
$\left(\frac{e}{mc}\right)=\left(\frac{e}{mc}\right)_{cgs}L_{cgs}\sqrt{\rho_{cgs}}$
(1)
where $e$ and $m$ are the particle charge and mass, respectively. We have
adopted $\rho_{cgs}=1.67\times 10^{-24}$ g cm$^{-3}$ (or $n_{cgs}=1$
cm$^{-3}$), which results in a physical length scale $L_{cgs}\sim 5.2\times
10^{-7}$ pc. In most of the models, we integrated the trajectories of 10,000 -
50,000 protons with initial uniform space distribution inside the domain, and
initial kinetic energies between $(\gamma_{p}-1)\sim$ 1 and 200, where
$\gamma_{p}$ is the particle Lorentz factor, with velocities randomly
generated by a Gaussian distribution.
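For concreteness, equation (1) can be inverted to recover the physical length scale quoted above. This is a consistency check only, using the standard cgs proton charge and mass (not part of the simulation itself):

```python
import math

e_cgs = 4.8032e-10        # proton charge (esu)
m_p = 1.6726e-24          # proton mass (g)
c = 2.9979e10             # speed of light (cm/s)
PC = 3.0857e18            # cm per parsec

eomc_code = 2.0e4         # adopted (e/mc) in code units
rho_cgs = 1.67e-24        # adopted density (g/cm^3)

# Invert eq. (1): (e/mc)_code = (e/mc)_cgs * L_cgs * sqrt(rho_cgs)
eomc_cgs = e_cgs / (m_p * c)
L_cgs = eomc_code / (eomc_cgs * math.sqrt(rho_cgs))
print(f"L = {L_cgs:.2e} cm = {L_cgs / PC:.2e} pc")   # ~5.2e-7 pc
```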
Besides employing the MHD-PIC mode of the PLUTO code to investigate particle
acceleration, we have also considered a model where we injected test particles
after the full development of turbulence in the jet flow, as in MGK+21. This
test was performed with the GACCEL code (Kowal et al., 2012; Medina-Torrejón
et al., 2021).
We further notice that, in order to make direct comparisons of the MHD-PIC
simulations with the previous work involving test particle injections in
frozen-in-time MHD fields, we did not account for the accelerated particles
feedback on the background plasma, which will be considered in forthcoming
work.
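A minimal sketch of the Boris particle update used above (relativistic form with $c=1$; field interpolation to the particle position and PLUTO's internal details are omitted, and the function name is illustrative):

```python
import numpy as np

def boris_push(u, E, B, qm, dt):
    """One relativistic Boris step for the 4-velocity u = gamma*v (units c = 1).

    u, E, B : 3-vectors (particle 4-velocity, electric and magnetic fields
    interpolated to the particle position); qm : charge-to-mass ratio.
    """
    # First half of the electric-field kick
    u_minus = u + 0.5 * qm * dt * E
    # Magnetic rotation (preserves |u|, hence the particle energy)
    gamma = np.sqrt(1.0 + np.dot(u_minus, u_minus))
    t = 0.5 * qm * dt * B / gamma
    s = 2.0 * t / (1.0 + np.dot(t, t))
    u_prime = u_minus + np.cross(u_minus, t)
    u_plus = u_minus + np.cross(u_prime, s)
    # Second half of the electric-field kick
    return u_plus + 0.5 * qm * dt * E

# With E = 0 the magnetic rotation conserves the Lorentz factor exactly:
u0 = np.array([0.5, 0.0, 0.0])
u1 = boris_push(u0, E=np.zeros(3), B=np.array([0.0, 0.0, 1.0]), qm=1.0, dt=0.1)
print(abs(np.linalg.norm(u1) - np.linalg.norm(u0)))  # ~ 0 (machine precision)
```

The split into two half electric kicks around an energy-conserving magnetic rotation is what makes the scheme well suited to distinguishing acceleration by the (ideal) electric field from mere gyration.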
## 3 Results
Figure 1: Three-dimensional view of the $\sigma\sim 1$ jet evolved with the
MHD-PIC mode at t = 20 (top) and t = 45 L/c (bottom). Left panels: the black
lines represent the magnetic field, and the circles the distribution of the
50,000 particles. The color and size of the circles indicate the value of their
kinetic energy normalized by the rest mass energy ($\gamma_{p}-1$). Right
panels: the orange color represents iso-surfaces of half of the maximum of the
current density intensity $|J|$, the black lines the magnetic field, and the
green squares correspond to the positions of the fastest magnetic reconnection
events, with reconnection rate $\geq 0.05$. See text for more details.
Figure 1 shows the $\sigma\sim 1$ jet evolved with the MHD-PIC mode of the
PLUTO code (with a resolution $256^{3}$) for two snapshots. A total of 50,000
particles were initially injected in the system. The dynamical evolution of
the jet is very similar to the one obtained in MGK+21 and KGM+21 with the
RAISHIN MHD code. With the growth of the CDKI, the initial helical magnetic
field structure starts to wiggle (see $t=20$ L/c) and then turbulence
develops, entirely distorting the field lines and driving fast magnetic
reconnection sites, as we see in the right panel for $t=45L/c$. We note that
there are already a few particles being accelerated in the wiggling jet spine
at $t=20L/c$ (left top panel). This is due to curvature drift acceleration, as
detected also in the PIC simulations by Alves et al. (2018), and in MGK+21
with test particles injected in a similar snapshot of the background MHD jet
(see their Figure 6). Nevertheless, massive particle acceleration takes place
only later on, when turbulence and fast reconnection fully develop in the
system, as indicated in the left bottom panel at $t=45L/c$. The correlation of
the accelerated particles (represented by the red circles with increasing
diameter as the energy increases) with the sites of high current density and
fast reconnection (right bottom panel) is evident. A very similar result was
obtained for the $\sigma\sim$ 1 jet model run with larger resolution
($426^{2}\times 256$). In the next paragraphs, we will further quantify these
associations.
Figure 2 shows the time evolution of the volume-averaged kinetic energy
density transverse to the z-axis (upper panel), and the volume-averaged total
relativistic electromagnetic energy density ($E_{m}$) (bottom panel) for the
$\sigma\sim 1$ jet, as the CDKI grows (see also Mizuno et al., 2012; Singh et
al., 2016; Medina-Torrejón et al., 2021). For this jet model, these quantities
are presented for two different resolutions, $256^{3}$ (solid red lines) and
$426^{2}\times 256$ (dot-dashed black lines), and the results are both very
similar. These curves are also compared with those obtained by MGK+21 (and
KGM+21, ) using the RAISHIN code for the same jet model (labeled as MGK+21 in
Figure 2), and with the $\sigma\sim 10$ jet. Note that $E_{m}$ is presented in
the linear scale, while the kinetic energy is in the log scale. The results of
both $\sigma\sim 1$ jet models are comparable. As the CDKI develops, $E_{m}$
is converted into kinetic energy. For the $\sigma\sim 1$ models, the initial
relaxation of the system to equilibrium leads to a hump in the kinetic and
$E_{m}$ curves. After this relaxation, there is an initial growth of $E_{m}$
caused by the increasing wiggling distortion of the magnetic field structure
in the jet spine due to the initial growth of the CDKI. The kinetic energy,
after a slower increase, undergoes an exponential growth which is a little
more advanced in time in the PLUTO run, which starts around $\sim 25$ L/c,
than in the RAISHIN run (MGK+21), which starts around $\sim 30$ L/c. This
causes the jet model in this work to achieve a turbulent state earlier than in
the model of MGK+21, with a time delay $\Delta t\sim 5$ L/c between them (we
attribute this small delay to intrinsic numerical differences between the two
codes and to the slight difference in the grid resolution; the $\sigma\sim 1$
jet model run with the RAISHIN code by MGK+21 has a cell size of $\sim 0.03$ L
in the three directions). After the exponential growth, the kinetic energy reaches
approximately a plateau while $E_{m}$ decreases. This coincides with the full
development of the turbulence and the increase of the number of fast
reconnection events in Figure 1 (bottom right; see also Figure 4). In fact, this plateau
characterizes the achievement of saturation of the CDKI and a nearly steady-
state turbulent regime in the system (see Figure 3). A similar behaviour has
been identified in MGK+21 and KGM+21. We also notice that there is a
difference of at most $30\%$ in the amplitude of $E_{m}$ between the two
models. In the $\sigma\sim 10$ jet, the CDKI clearly grows faster, achieving
saturation much earlier, at about half the time of the $\sigma\sim 1$ jet.
Since the two models with different resolution for the $\sigma\sim 1$ jet are
so similar, in the rest of the manuscript we consider only the $256^{3}$
resolution model.
Figure 2: Top: time evolution of the volume-averaged kinetic energy density
transverse to the z-axis within a cylinder of radius $R\leq 3.0L$ for the
$\sigma\sim 1$ jet (red solid line for the model with resolution $256^{3}$ and
dashed-dotted black line for the model with resolution $426^{2}\times 256$),
and for the $\sigma\sim 10$ jet (blue solid line). Bottom: volume-averaged
relativistic electromagnetic energy density for the same models. For
comparison, the results obtained in MGK+21 for the $\sigma\sim 1$ jet are also
plotted with dashed red lines. The kinetic energy is presented in log scale,
while $E_{m}$ is in linear scale.
To quantify the development of the turbulence, we have evaluated the three-
dimensional power spectra of the magnetic and kinetic energy densities in the
jet, considering averages in spherical or ellisoidal shells between $k$ and
$k+dk$ (where $k=\sqrt{k_{x}^{2}+k_{y}^{2}+k_{z}^{2}}$ in the Fourier space)
(KGM+21). Figure 3 depicts these power spectra at different times for both the
$\sigma\sim 1$ and $\sigma\sim 10$ jets. A 3D-Kolmogorov spectrum slope
($\propto k^{-11/3}$; red dotted line) was included for comparison. The
diagrams show inertial ranges both for the kinetic
$|\sqrt{\rho}\bm{v}(\bm{k})|^{2}$ and for the magnetic $|\bm{B}(\bm{k})|^{2}$
energy density spectra between $0.2\lesssim k\lesssim 25$ (in units of 1/L) in
agreement with a Kolmogorov-like spectrum, after $t\simeq 30$L/c for the
$\sigma\sim 1$ jet and $t\simeq 10$L/c for the $\sigma\sim 10$ jet. This
indicates a turbulent energy cascade between an injection scale $\sim 5$L and
a small resistive scale $\sim 0.11$L. The magnetic energy spectrum shows a
slightly steeper slope, probably due to the strong (guiding) magnetic field of
the background plasma (see, e.g., Kowal et al., 2007; Kadowaki et al., 2021).
As expected, the $\sigma\sim 10$ jet has maximum magnetic energy density 10
times larger than the $\sigma\sim 1$ jet. The results are comparable to those
obtained in KGM+21 for the $\sigma\sim 1$ jet, as shown in the left diagrams
of the figure (we note that the turbulent power spectra of the kinetic and
magnetic energy densities of the $\sigma\sim 1$ jet presented in KGM+21 were
produced with a distinct normalization from the one used in Figure 3; for this
reason, we have reproduced them here for direct comparison with the other
spectra of Figure 3).
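The shell averaging described above can be computed along the following lines (a schematic numpy version for a periodic cube, illustrative only; the actual analysis in KGM+21 operates on the jet domain and may use a different normalization, and the kinetic spectrum is obtained by passing $\sqrt{\rho}\,\bm{v}$ instead of $\bm{B}$):

```python
import numpy as np

def shell_spectrum(field, L=1.0):
    """Shell-averaged 3D power spectrum of a vector field of shape (3, N, N, N)."""
    N = field.shape[-1]
    # Power in Fourier space, summed over the three vector components
    power = sum(np.abs(np.fft.fftn(comp))**2 for comp in field) / N**6
    k1d = np.fft.fftfreq(N, d=L / N)                  # wavenumbers in units of 1/L
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    # Sum the power over spherical shells |k| in [(n - 1/2), (n + 1/2)]/L
    edges = (np.arange(N // 2 + 1) + 0.5) / L
    idx = np.digitize(kmag.ravel(), edges)
    spec = np.bincount(idx, weights=power.ravel(), minlength=N // 2 + 2)
    return np.arange(1, N // 2 + 1) / L, spec[1:N // 2 + 1]

# Exercise the routine on random data (a real B field would come from the run)
rng = np.random.default_rng(0)
B = rng.standard_normal((3, 32, 32, 32))
k, Pk = shell_spectrum(B)
print(len(k), Pk[:3])
```

With this normalization, summing the shell spectrum (plus the $k=0$ mode and the corner modes beyond the last shell) recovers the cell-averaged $|\bm{B}|^{2}$, which makes Kolmogorov-slope comparisons such as Figure 3 straightforward.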
Figure 3: Power spectrum of the magnetic (left) and kinetic (right) energy
densities for the $\sigma\sim 1$ jet model of KGM+21 (upper row), the
$\sigma\sim 1$ (middle row) and $\sigma\sim 10$ (bottom row) jet models of
this work, for different times in units of L/c. The red dotted line
corresponds to a $k^{-11/3}$ 3D-Kolmogorov spectrum and its extension gives
the inertial range of the turbulence for evolved times $>30$ L/c for the
$\sigma\sim 1$ models, and $>10$ L/c for the $\sigma\sim 10$ model. The
wavenumber is in units of 1/L.
In order to identify fast magnetic reconnection sites in the turbulent flow of
the relativistic jet and quantify their reconnection velocities, we have used
the same algorithm employed in KGM+21 wherein the method is described in
detail (see also, Zhdankin et al., 2013; Kadowaki et al., 2018). The time
evolution of the magnetic reconnection rate, ${V}_{rec}$, for all identified
sites and the time evolution of the average value, $\langle{V}_{rec}\rangle$
(blue line in the upper and middle panels), in units of the Alfvén velocity,
are shown in Figure 4. The evolution of $\langle{V}_{rec}\rangle$ changes more
abruptly after $t\sim 25$ in the $\sigma\sim 1$ jet and $t\sim 10$ in the
$\sigma\sim 10$ jet, when the CDKI starts to grow exponentially (Figure 2).
After that, as the CDKI tends to saturation, the average reconnection rate
also attains a value $\langle{V}_{rec}\rangle\sim 0.03\pm 0.02$ for the
$\sigma\sim 1$ jet, in agreement with KGM+21 (see their reference model
m240ep0.5 and their Figure 8). For the $\sigma\sim 10$ jet, it is still
rising toward a plateau at a similar average value (middle diagram),
$\langle{V}_{rec}\rangle\sim 0.02\pm 0.02$. A peak reconnection rate of the
order $\sim 0.9$ (not shown in the figure) is obtained for the $\sigma\sim 1$
jet, while a peak value $\sim 0.6$ is attained for the $\sigma\sim 10$ jet.
The bottom diagram directly compares the evolution of the average reconnection
speed of both models, including their respective variances, which are
similar. (We note that the slightly smaller mean value of the reconnection
rate for the larger-$\sigma$ model is compatible with the fact that the
wandering of the field lines by the turbulence needed to drive
fast reconnection naturally becomes more difficult the larger the strength of
the magnetic field; Lazarian & Vishniac, 1999.)
Figure 4: Histogram of the reconnection rate evolution for the $\sigma\sim 1$
(top) and $\sigma\sim 10$ jet (middle). The blue line gives the average
reconnection rate evolution. Bottom diagram compares the average reconnection
rate evolution of the two models and the colored shades correspond to the
standard deviations of each model.
In MGK+21, test particles were injected with an initial Maxwellian distribution
(with initial mean kinetic energy $\left<E_{p}\right>\sim 10^{-2}m_{p}c^{2}$)
in the simulated $\sigma\sim 1$ jet with already fully developed turbulence
(with the RAISHIN code), and accelerated by magnetic reconnection up to VHEs.
Figure 5 (upper panel) depicts the kinetic energy growth as a function of time
for 1,000 particles injected (with the GACCEL code) in the snapshot $t=50$ L/c
of their model (see also the bottom panel of Figure 5 in MGK+21). The lower
panel of Figure 5 shows a similar plot, but obtained for particles injected
(also with the GACCEL code) in the fully turbulent jet simulated in this work
with the PLUTO code, at $t=45$ L/c. As remarked previously in Figure 2, the
model run here develops turbulence earlier, with an advance in time of $\Delta
t\sim 5$ L/c and thus, in order to compare with MGK+21 results, we have
considered the corresponding earlier snapshot. The results are very similar,
as expected. As in MGK+21, particles are accelerated exponentially in the
magnetic reconnection sites in all scales of the turbulence driven by the CDKI
up to $\sim 10^{7}mc^{2}$, which corresponds to a Larmor radius comparable to
the diameter of the jet and the size of the largest turbulent magnetic
structures (see the plot in the inset). As we see in the figure, beyond this
energy, particles suffer further acceleration at a smaller rate, which is
attributed to drift in the large scale non-reconnected fields. We also see
that the parallel component of the velocity is predominantly accelerated in
the exponential regime, as expected in a Fermi-type process, while in the
drift regime, it is the perpendicular component that prevails (see MGK+21 for
more details).
\begin{overpic}[scale={0.44}]{Raishint50_240_1000_oB-1_en_vpervpar.pdf}
\put(14.0,50.0){\includegraphics[scale={0.23}]{Raishint50_240_1000_oB-1_gy.pdf}}
\end{overpic}\begin{overpic}[scale={0.44}]{Plutot45_256_1000_oB-1_en_vpervpar.pdf}
\put(14.0,50.0){\includegraphics[scale={0.23}]{Plutot45_256_1000_oB-1_gy.pdf}}
\end{overpic}
Figure 5: Kinetic energy evolution, normalized by the proton rest mass energy,
for 1,000 particles injected into the fully turbulent snapshot $t=50$ L/c of
the $\sigma\sim 1$ jet run by MGK+21 (top). The same for particles injected
into the snapshot $t=45$ L/c of the $\sigma\sim 1$ jet in this work
(bottom). The colors indicate which velocity component is being accelerated
(red or blue for the parallel or perpendicular component to the local magnetic
field, respectively). The insets in the upper left corner show the time
evolution of the particles' gyroradius. The color bars indicate the number of
particles. The horizontal grey stripe is bounded on the upper part by the jet
diameter ($4L$) and on the lower part by the cell size of the simulated background
jet. In these particle simulations, the particle acceleration time is given in
hours and the adopted physical size for $L$ is the same as in MGK+21 for
comparison, $L=3.5\times 10^{-5}$ pc.
The figures described above evidence the similarity of the results obtained
with the two MHD codes and reinforce the results of MGK+21 and KGM+21.
Figure 6 shows the first stages of the kinetic energy evolution of the
particles evolving together with the background jet as obtained with the
present model (i.e., employing the MHD-PIC mode) both for the $\sigma\sim 1$
and $\sigma\sim 10$ jet. In the very beginning, while the CDKI is still
developing, particles only suffer drift in the background magnetic fields.
Then, as the jet column starts to wiggle around $t\sim 20$ L/c in the
$\sigma\sim 1$ jet, and around $t\sim 7$ L/c in the $\sigma\sim 10$ jet, due to
the kink instability (Figure 1), the particles suffer curvature drift
acceleration. Note that at these times, fast reconnection driven by turbulence
is not developed yet (Figure 4). As stressed earlier, curvature drift
acceleration has also been detected in the $\sigma\sim 1$ jet by MGK+21, for a
similar resolution, around a similar jet dynamical time (more precisely, at
$t\sim 25$ L/c, due to the time delay between the two runs; see their Figure
6), and by Alves et al. (2018) in PIC simulations of the early stages of the
development of the kink instability.
After $t\sim 30$ L/c in the $\sigma\sim 1$ jet (and $t\sim 15$ L/c in the
$\sigma\sim 10$ jet), which coincides with the nonlinear growth and saturation
of the CDKI leading to fully developed turbulence in the jet (Figures 2 and
3), the particles in Figure 6 start exponential acceleration, as in Figure 5.
The maximum achieved energy is about 10 times larger for the jet with the
correspondingly larger $\sigma$. The entire dynamical time of the system
evolution is of only $60$ L/c for the $\sigma\sim 1$ jet (and half this time
for the $\sigma\sim 10$ jet). For the particles, the physical time elapsed is
only $\sim 60$ L/c $\sim 1$ hr (and a half-hour for the $\sigma\sim 10$ jet, for the
adopted $L=5.2\times 10^{-7}$ pc in physical units), which is much smaller
than the several hundred hours that particles can accelerate in the nearly
steady state jet snapshot of Figure 5 where they can re-enter the system
several times through the periodic boundaries of the jet in the z direction
until they reach the saturation energy (see also MGK+21). This explains why
particles do not achieve the maximum possible energy by acceleration in the
largest turbulent magnetic reconnection structures of the order of the jet
diameter ($\sim 4L$), as we see in the inset in the figure, which depicts the
particles' Larmor radius distribution. For this value of the Larmor radius
($R_{max}\sim 4L$), the particles would achieve an energy $E_{sat}\sim
e\,B\,R_{max}\sim 200,000$ $m_{p}c^{2}$ in the $\sigma\sim 1$ jet, and $\sim
1,000,000$ $m_{p}c^{2}$ in the $\sigma\sim 10$ jet, if the jet were allowed to
evolve for a dynamical time about one hundred times larger (where $R_{max}\sim
4L=2.1\times 10^{-6}$ pc, and $B\sim 0.1$ G and $\sim 0.6$ G for the
$\sigma\sim 1$ and $\sigma\sim 10$ jets, respectively, considering the
physical units employed in the MHD-PIC simulations). Nonetheless, the results
in these early stages of particle acceleration follow the same trend depicted
in Figure 5, indicating that particles are accelerated exponentially by
magnetic reconnection in the turbulent flow, from the small resistive scales
up to the large scales of the turbulence in the ideal electric field of the
magnetic reconnecting structures. These results also indicate that the time
evolution of the background magnetic fields does not influence the
acceleration of the particles since they enter the exponential regime of
acceleration in the same jet dynamical times in which turbulence becomes fully
developed, as obtained in the MHD simulations with test particles of Figure 5.
At the more evolved dynamical times, particularly in the $\sigma\sim 10$ jet,
we also identify particles having their perpendicular velocity component
accelerated, suggesting the presence of drift acceleration as well, as in the late
stages of particle acceleration in Figure 5.
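The saturation energies and elapsed physical times quoted above can be checked with a quick back-of-the-envelope computation, $E_{sat}\sim e\,B\,R_{max}$ in Gaussian cgs units. The sketch below uses standard, rounded physical constants and only the numbers given in the text:

```python
# Order-of-magnitude check of the quoted numbers (Gaussian cgs units).
e_esu = 4.803e-10      # elementary charge, esu
c = 2.998e10           # speed of light, cm/s
pc = 3.086e18          # parsec, cm
mp_c2 = 1.503e-3       # proton rest energy, erg

R_max = 2.1e-6 * pc    # Larmor radius ~ 4L, cm
# Saturation energy in units of m_p c^2 for the two quoted field strengths.
E_sat = {B: e_esu * B * R_max / mp_c2 for B in (0.1, 0.6)}
print({B: f"{E:.1e}" for B, E in E_sat.items()})

L = 5.2e-7 * pc        # adopted physical jet radius, cm
t_hr = 60 * L / c / 3600.0   # dynamical time 60 L/c in hours
print(f"60 L/c ~ {t_hr:.2f} hr")
```

The results reproduce the quoted $E_{sat}\sim 200{,}000\,m_{p}c^{2}$ ($B\sim 0.1$ G), $\sim 1{,}000{,}000\,m_{p}c^{2}$ ($B\sim 0.6$ G), and $60$ L/c $\sim 1$ hr.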
\begin{overpic}[scale={0.44}]{PICMHD_s01_p50000emc020000c1v200.0_en_vpervpar.pdf}
\put(14.0,52.0){\includegraphics[scale={0.19}]{PICMHD_s01_p50000emc020000c1v200.0_gy.pdf}}
\end{overpic}\begin{overpic}[scale={0.44}]{PICMHD_r256s10_p50000emc020000c1v200.0_en_vpervpar.pdf}
\put(15.0,51.0){\includegraphics[scale={0.19}]{PICMHD_s10_p50000emc020000c1v200.0_gy.pdf}}
\end{overpic}
Figure 6: Kinetic energy evolution for 50,000 particles evolved in the MHD-PIC
simulation for the $\sigma\sim 1$ (top) and for the $\sigma\sim 10$ (bottom)
jet. Particles are initially injected with energy $\left<E_{p}\right>\sim
1-200m_{p}c^{2}$. The colors indicate which velocity component of the
particles is being accelerated (red or blue for the parallel or perpendicular
component to the local magnetic field, respectively). The inset panels depict
the evolution of the particles' gyroradius, and the red horizontal lines
correspond to the jet diameter ($4L$) (top) and the cell size of the simulated
jet (bottom).
We have also run the MHD-PIC model for the $\sigma\sim 1$ and 10 jets with the
higher resolution of $426^{2}\times 256$, and the results we obtained for particle
acceleration evolution are very similar to those shown in Figure 6. The only
difference is that fewer particles re-enter the system and thus the histogram
has comparatively fewer accelerated particles. In particular, there are almost
no particles undergoing curvature drift in the very early times (around $t\sim
20$ L/c), but the exponential regime, with a dominance of the acceleration of
the parallel component of the velocity, is clearly detected, as in Figure 6
(top). (The absence of particles accelerated by curvature drift in this case
can be explained by the fact that this acceleration is experienced only
by particles with a Larmor radius large enough to "feel" the curvature of the
field (Alves et al. 2018; MGK+21). When we increase the resolution of the
MHD domain (and thus decrease the cell size), particles with the same (still
small) Larmor radius, at the same dynamical time step around $t\sim 20$ L/c as
in the lower-resolution simulation (Figure 6), see no field curvature
when moving from one cell to the next and thus experience only linear
drift, as at much earlier times.)
In Figure 7 we show the particle energy spectrum for the $\sigma\sim 1$ and
$\sigma\sim 10$ jets, for different time steps in these early stages of the
acceleration. The initial distribution is represented by a red line. As
particles accelerate, they start to populate the high energy tail in the
distribution, which becomes flatter as time evolves. In the $\sigma\sim 1$
jet, we note the formation of two slopes at more evolved times, with a smooth
transition between them, which may be an indication of the two different
regimes of acceleration coexisting, especially at larger energies: the
reconnection and later drift acceleration regimes we identified in Figure 6.
Interestingly, the power-law tail of the flatter part of the spectrum for
$t=45$ L/c, when the $\sigma\sim 1$ jet develops a fully turbulent regime, is
very similar to the slope obtained in the snapshot $t=50$ L/c in MGK+21 which
is in a similar dynamical state of the background jet (see their Figure 11).
For the $\sigma\sim 10$ jet, the transition is more abrupt and characterized
by large humps around 6000 and 10000 $E_{p}/m_{p}c^{2}$. Examining the
particles' energy evolution in Figure 6, these humps seem to concentrate a
substantial number of particles with predominantly parallel-component
acceleration, but the two regimes of acceleration also seem to coexist at
these large energies, as indicated by the presence of particles also with the
perpendicular component dominating the acceleration. Clearly, for this model
the amount of particles accelerated in this short dynamical time is
comparatively smaller. Since the acceleration of the particles is still in
very early stages and far from reaching the saturation energy by reconnection,
the large energy tails of these spectra are clearly still under development.
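The power-law fits to the high-energy tails can be reproduced with a log-log least-squares fit to a logarithmically binned spectrum. The sketch below is a generic, unweighted version of such a fit (not the authors' fitting code); the Pareto draw is a synthetic stand-in with a known index $p=1.5$ used only to validate the procedure:

```python
import numpy as np

def fit_tail_slope(energies, emin):
    """Fit a power-law index p (dN/dE ~ E^{-p}) to the spectrum tail above emin
    via a least-squares line in log-log space on a logarithmic histogram."""
    bins = np.logspace(np.log10(emin), np.log10(energies.max()), 20)
    counts, edges = np.histogram(energies, bins=bins)
    centers = np.sqrt(edges[:-1] * edges[1:])      # geometric bin centers
    dnde = counts / np.diff(edges)                 # differential spectrum
    good = counts >= 10                            # drop noisy sparse bins
    slope, _ = np.polyfit(np.log(centers[good]), np.log(dnde[good]), 1)
    return -slope

# Synthetic check with a known index: dN/dE ~ E^{-1.5} for E >= 1.
rng = np.random.default_rng(2)
E = rng.pareto(0.5, size=200_000) + 1.0
p = fit_tail_slope(E, 2.0)
print(f"fitted p ~ {p:.2f}")
```

With real data one would restrict the fit to the developed part of the tail, as done for the late-time spectra in Figure 7.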
Figure 7: Particle energy spectrum evolution as a function of the normalized
kinetic energy for the particles evolved in the MHD-PIC simulation for the
$\sigma\sim 1$ (top) and $\sigma\sim 10$ (bottom) jet. The solid red line
corresponds to the initial distribution. The high-energy tails at more evolved
times of the system are fitted by power laws.
Figure 8: Power-law index $\alpha=\Delta(\log t)/\Delta(\log E_{p})$ of the
acceleration time as a function of the particle kinetic energy normalized by
the proton rest mass
energy. The minimum in the curves, $\alpha\sim 0.1$, indicates the nearly
exponential regime of particle acceleration. Depicted are the models with
steady-state turbulent background of Figure 5, namely the $\sigma\sim 1$ jet
at t=50 L/c run by MGK+21 (black line) and the $\sigma\sim 1$ jet at t = 45
L/c run in this work (blue line). Also shown is $\alpha$ for the nearly
exponential regime (between $30L/c<t<50L/c$) of the $\sigma\sim 1$ MHD-PIC
model of the top of Figure 6 where particles evolved with the background
plasma (red curve).
Finally, we can quantify and compare the particle acceleration, in particular,
in the nearly exponential regime, by evaluating the acceleration time directly
from the diagrams of the particles' kinetic energy versus time, in a similar way as
performed previously in del Valle, de Gouveia Dal Pino, & Kowal (2016) and
MGK+21. Specifically, we compute the slope of the logarithmic diagrams in
Figures 5 and 6 (top), $\alpha=\Delta(\log t)/\Delta(\log E_{p})$, which gives
the acceleration time dependence with particle energy, $t_{acc}\propto
E_{p}^{\alpha}$. The result is shown in Figure 8. We find that the slope
$\alpha$ has essentially the same minimum value in all models, which
corresponds to the nearly exponential regime of the acceleration of the
particles, i.e., $\alpha\sim 0.1$, implying an acceleration time
$t_{acc}\propto E_{p}^{0.1}$, as found in MGK+21, with very weak dependence on
the energy, as expected in this regime. The increase in $\alpha$ (and thus in
the acceleration time) around $E_{p}/m_{p}c^{2}\sim 10^{3}$ for the MHD-PIC
model is due to the contribution of several particles that are already
experiencing drift and thus slower acceleration at this energy (see the blue
points in Figure 6 that correspond to the perpendicular momentum component,
predominant in drift acceleration).
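The index in Figure 8 is the local log-log slope of the mean energy-versus-time track. A minimal sketch follows, using a synthetic exponential-acceleration track (the track and the e-folding time $\tau$ are illustrative, not simulation output); for $E\propto e^{t/\tau}$ one gets $\alpha=\tau/t$, which drops to $\sim 0.1$ at late times, the value discussed in the text:

```python
import numpy as np

def acc_time_index(t, E):
    """Local power-law index alpha = d(log t)/d(log E_p) along a mean
    energy-vs-time track, so that t_acc ~ E_p^alpha."""
    return np.gradient(np.log(t), np.log(E))

# Synthetic exponential-acceleration track: E(t) = E0 * exp(t / tau).
t = np.linspace(1.0, 50.0, 500)
E = 1e-2 * np.exp(t / 5.0)
alpha = acc_time_index(t, E)
print(f"alpha at late times: {alpha[-1]:.3f}")
```

`np.gradient` with a coordinate argument handles the non-uniform spacing in $\log E$, so the same helper applies directly to binned mean-energy tracks extracted from Figures 5 and 6.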
## 4 Discussion and Conclusions
In this work, we have investigated the early stages of the acceleration of the
particles in 3D Poynting flux dominated jets with magnetization $\sigma\sim$ 1
and 10, subject to CDKI, using the MHD-PIC mode of the PLUTO code, in order to
follow the evolution of the particles along with the flow. The CDKI drives
turbulence and fast magnetic reconnection which we find to be the dominant
mechanism of particle acceleration.
Our results are very similar to those of MGK+21 which were carried out with
test particles launched in the simulated MHD relativistic jet after it
achieved a regime of fully developed turbulence. Particles are accelerated by
the ideal electric field ($V\times B$) of the background fluctuations, over
the entire inertial range of the turbulence, starting in the small, resistive
scales up to the large injection scales (Figure 3). The connection of the
accelerated particles with the magnetic reconnection layers is clear (Figure
1). During this regime, the particles' energy grows nearly exponentially and the
velocity component parallel to the local magnetic field is the one that is
preferentially accelerated, both as expected in a Fermi-type process. In the test
particle simulations of MGK+21 (see also Figure 5), particles re-enter the
system several times through the periodic boundaries of the nearly steady
state turbulent jet and are accelerated in the reconnection sites up to the
saturation energy that is achieved when their Larmor radius becomes of the
order of the size of the acceleration region, or the jet diameter. This takes
several hundred hours in the $\sigma\sim 1$ jet, and the particles' energy
becomes as large as $\sim 10^{7}$ $m_{p}c^{2}$. Beyond this energy, particles
still experience further acceleration, but at a smaller rate due to drift in the
large scale non-reconnected fields. In the MHD-PIC simulations, we can follow
particle acceleration only during the dynamical time evolution of the MHD jet
which lasts $\sim 60$ L/c and $\sim 35$ L/c for the $\sigma\sim 1$ and
$\sigma\sim 10$ jet, respectively, and corresponds to only $\sim 1$ hr and
a half-hour, respectively, in physical units for the particles. During this
time, the particles obviously do not reach the maximum possible (saturation)
energy, but follow the same exponential acceleration trend as in the test
particle simulations (Figure 6).
At later times, when turbulence is fully developed, the particle energy
spectrum develops a power law tail with two slopes (better defined in the
$\sigma\sim 1$ jet), suggesting the presence of the two different regimes of
acceleration, the reconnection and the drift regimes (Figure 7). The slope of
the power-law tail of the flatter part of the spectrum for $t=45$ L/c in the
$\sigma\sim 1$ jet is the same as that obtained for particles accelerating in the
snapshot $t=50$ L/c in MGK+21, which has a similar state of the background jet
(see their Figure 11). These slopes are also comparable to previous studies of
particle acceleration both in MHD flows (Kowal et al., 2012; del Valle et al.,
2016) and PIC simulations (e.g., Comisso & Sironi, 2018; Werner et al., 2018).
However, we expect that in realistic systems, the presence of radiative losses
and dynamical feedback of the accelerated particles into the plasma will lead
to steepening of the spectra (e.g., MGK+21).
Our results also indicate that the time evolution of the background magnetic
field ($\partial B/\partial t$) does not influence the acceleration of the
particles. They enter the exponential regime of acceleration in the same
dynamical times of the jet in which turbulence becomes fully developed ($\sim
30$ L/c for the $\sigma\sim 1$ jet, and $\sim 15$ L/c for the $\sigma\sim 10$,
respectively; Figure 6), in agreement with the results of the MHD simulations
with test particles injected in the nearly steady state turbulent jet in
MGK+21 (see also Figure 5). The particles also undergo curvature drift
acceleration in the initial stage of the CDKI when the jet column starts to
wiggle at a similar dynamical time, both in the test particle $+$ MHD and in the
MHD-PIC simulations. The background magnetic field time evolution effect, also
known as betatron acceleration, has been found to affect particle acceleration
in purely turbulent flows only by a factor of two in the acceleration rate (e.g.,
de Gouveia Dal Pino & Kowal, 2015). Therefore, while it can be substantial in
very early times when particles are still undergoing linear drift
acceleration, it is negligible in the more advanced times when exponential
acceleration takes over.
The increase of the jet magnetization by a factor of 10 speeds up the growth of
the CDKI, which attains saturation in nearly half the time (see Figure 2),
and particles are accelerated to energies about 10 times larger, as also
expected from PIC simulations (e.g. Werner et al., 2018).
The results above indicate that particle acceleration by fast magnetic
reconnection in a Fermi process can be dominant in magnetically dominated
flows from the injection (large) to the resistive (small) scales of the
turbulence. These results (and those produced in earlier MHD works with test
particles; e.g. Kowal et al., 2012; del Valle et al., 2016; Medina-Torrejón et
al., 2021) are in contrast with recent studies based on 3D PIC simulations
that suggest that acceleration by reconnection would be dominant only in the
very early stages of particle energizing (e.g., Comisso & Sironi, 2019; Sironi
et al., 2021; Sironi, 2022; Comisso & Sironi, 2022). This apparent
inconsistency is essentially due to the intrinsic difference in scales and in
the accelerating electric fields that prevail in the two regimes. While in
these PIC simulations, plasmoid-like reconnection acceleration occurs at the
small kinetic, resistive scales and is dominated by the resistive electric
field ($\eta J$, where $\eta$ is the resistivity and $J$ the current density),
in our collisional MHD turbulent flow simulations where resistivity is
naturally small (the ubiquitous Ohmic resistivity is mimicked by the numerical
truncation error), the reconnection layers persist up to the large injection
scales and particles are accelerated by the ideal electric fields ($V\times
B$) of the fluctuations at these sites. Therefore, these intrinsic differences
(inherent to the scale and the accelerating electric field) indicate that direct
extrapolation from the resistive small scales probed by PIC simulations
(wherein non-ideal accelerating electric fields generally prevail), to the
large MHD scales should be taken with caution (see also Guo et al., 2019,
2022).
The same applies to the recent study of Puzzoni, Mignone, & Bodo (2022) who
examined the impact of resistive electric fields on particle acceleration in
reconnection layers. The authors claimed that their results are in
contradiction with earlier MHD works (Kowal, de Gouveia Dal Pino, & Lazarian,
2011, 2012; Medina-Torrejón, de Gouveia Dal Pino, Kadowaki, Kowal, Singh, &
Mizuno, 2021). However, they are clearly exploring a different regime of
reconnection endowed with extremely high artificial resistivity, which is much
larger than the Ohmic resistivity expected in most astrophysical MHD flows and
in particular, in turbulent ones. In other words, they are exploring the
resistive, kinetic scales well below the inertial range of the turbulence
that is explored in the works above and in the present one. While in the
present simulations, and those of the previous works mentioned above, particles
are predominantly accelerated by the ideal electric fields of the magnetic
fluctuations in the reconnection layers, in the simulations of Puzzoni et al.
(2022) the dominant component is the resistive electric field,
which prevails at the kinetic scales. Therefore, there is no contradiction
with the MHD (non-resistive) works above. One may still inquire how the
results of the present study would change if we had included an explicit
resistivity in the flow. As remarked above, this would affect only the very
small scales of the flow, of the order of a few grid cells size (e.g. Santos-
Lima et al., 2010). In the integration of the particles' equation of motion, we
accounted only for the ideal electric fields of the magnetic fluctuations that
persist in the entire range of the turbulence. Still, the non-ideal term could
be important for the small-scale topology of the velocity and magnetic fields,
especially in the vicinity of the reconnection regions, indirectly affecting
the particles’ evolution before they reach a gyroradius of the order of a few
cells size. Therefore, if we had included an initial small explicit
resistivity of the typical strength of Ohmic resistivity (as expected in
astrophysical turbulent flows), the results for particle acceleration would be
the same as in the present work. On the other hand, if we had adopted an
artificial much larger explicit resistivity, well above the Ohmic resistivity,
this would kill all the turbulence in the range of scales smaller than this
resistive scale and particle acceleration by turbulent reconnection would be
possible only in a more limited inertial range of turbulent structures, from
the injection scale down to the resistive scale.
Future studies exploring in depth both regimes and scales, and also including
particle feedback into the plasma are required. Our present study, combining
PIC and MHD in a relativistic jet with turbulence induced by an
instability, is a first attempt in this direction, and the results in general
confirm the predictions of previous MHD studies with test particles, which show
that turbulent reconnection acceleration prevails in most of the scales of the
system. As stressed, e.g. in MGK+21, the implications of these results for
particle acceleration and the origin of VHE emission phenomena in Poynting
flux dominated systems, like the relativistic jets in microquasars, AGNs, and
GRBs, are rather important.
The authors acknowledge very useful discussions with L. Kadowaki. TEMT and
EMdGDP acknowledge support from the Brazilian Funding Agency FAPESP (grant
13/10559-5), EMdGDP also acknowledges support from CNPq (grant 308643/2017-8),
and G.K. from FAPESP (grants 2013/10559-5, 2019/03301-8, and 2021/06502-4).
The simulations presented in this work were performed in the cluster of the
Group of Plasmas and High-Energy Astrophysics (GAPAE), acquired with support
from FAPESP (grant 2013/10559-5), and the computing facilities of the
Laboratory of Astroinformatics (IAG/USP, NAT/Unicsul), whose purchase was also
made possible by FAPESP (grant 2009/54006-4) and the INCT-A.
## References
* Aartsen et al. (2018) Aartsen, M., Ackermann, M., Adams, J., et al. 2018, Science, 361, 147. https://science.sciencemag.org/content/361/6398/147
* Ackermann et al. (2016) Ackermann, M., Anantua, R., Asano, K., et al. 2016, ApJ, 824, L20
* Aharonian et al. (2007) Aharonian, F., Akhperjanian, A. G., Bazer-Bachi, A. R., et al. 2007, ApJ, 664, L71
* Alves et al. (2018) Alves, E. P., Zrake, J., & Fiuza, F. 2018, Phys. Rev. Lett., 121, 245101
* Arons (2013) Arons, J. 2013, in Particle Acceleration in Cosmic Plasmas. Series: Space Sciences Series of ISSI, ed. A. Balogh, A. Bykov, R. P. Lin, J. Raymond, & M. Scholer, Vol. 45, 341–367
* Ball et al. (2018) Ball, D., Sironi, L., & Özel, F. 2018, ApJ, 862, 80
* Beresnyak & Li (2016) Beresnyak, A., & Li, H. 2016, ApJ, 819, 90
* Boris (1970) Boris, J. P. 1970, Proceeding of Fourth Conference on Numerical Simulations of Plasmas
* Britto et al. (2016) Britto, R. J., Bottacini, E., Lott, B., Razzaque, S., & Buson, S. 2016, ApJ, 830, 162
* Cerutti et al. (2012) Cerutti, B., Uzdensky, D. A., & Begelman, M. C. 2012, ApJ, 746, 148
* Cerutti et al. (2013) Cerutti, B., Werner, G. R., Uzdensky, D. A., & Begelman, M. C. 2013, ApJ, 770, 147
* Cerutti et al. (2014) —. 2014, Physics of Plasmas, 21, 056501
* Christie et al. (2019) Christie, I. M., Petropoulou, M., Sironi, L., & Giannios, D. 2019, MNRAS, 482, 65
* Clausen-Brown & Lyutikov (2012) Clausen-Brown, E., & Lyutikov, M. 2012, MNRAS, 426, 1374
* Comisso & Sironi (2018) Comisso, L., & Sironi, L. 2018, Phys. Rev. Lett., 121, 255101
* Comisso & Sironi (2019) —. 2019, ApJ, 886, 122
* Comisso & Sironi (2022) —. 2022, ApJ, 936, L27
* de Gouveia Dal Pino & Kowal (2015) de Gouveia Dal Pino, E. M., & Kowal, G. 2015, Astrophysics and Space Science Library, Vol. 407, Particle Acceleration by Magnetic Reconnection, ed. A. Lazarian, E. M. de Gouveia Dal Pino, & C. Melioli (Springer Berlin Heidelberg), 373
* de Gouveia Dal Pino & Lazarian (2005) de Gouveia Dal Pino, E. M., & Lazarian, A. 2005, A&A, 441, 845
* de Gouveia Dal Pino et al. (2010) de Gouveia Dal Pino, E. M., Piovezan, P. P., & Kadowaki, L. H. S. 2010, A&A, 518, 5
* del Valle et al. (2016) del Valle, M. V., de Gouveia Dal Pino, E. M., & Kowal, G. 2016, MNRAS, 463, 4331
* Drake et al. (2010) Drake, J. F., Opher, M., Swisdak, M., & Chamoun, J. N. 2010, ApJ, 709, 963
* Drake et al. (2006) Drake, J. F., Swisdak, M., Che, H., & Shay, M. A. 2006, Nature, 443, 553
* Eyink et al. (2013) Eyink, G., Vishniac, E., Lalescu, C., et al. 2013, Nature, 497, 466
* Giannios et al. (2009) Giannios, D., Uzdensky, D. A., & Begelman, M. C. 2009, MNRAS, 395, L29
* Guo et al. (2016) Guo, F., Li, H., Daughton, W., Li, X., & Liu, Y.-H. 2016, Physics of Plasmas, 23, 055708
* Guo et al. (2019) Guo, F., Li, X., Daughton, W., et al. 2019, ApJ, 879, L23
* Guo et al. (2015) Guo, F., Liu, Y.-H., Daughton, W., & Li, H. 2015, ApJ, 806, 167
* Guo et al. (2020) Guo, F., Liu, Y.-H., Li, X., et al. 2020, Physics of Plasmas, 27, 080501
* Guo et al. (2022) Guo, F., Li, X., French, O., et al. 2022, arXiv e-prints, arXiv:2208.03435
* Hoshino & Lyubarsky (2012) Hoshino, M., & Lyubarsky, Y. 2012, Space Sci. Rev., 173, 521
* Kadowaki et al. (2021) Kadowaki, L. H. S., de Gouveia Dal Pino, E. M., Medina-Torrejón, T. E., Mizuno, Y., & Kushwaha, P. 2021, ApJ, 912, 109
* Kadowaki et al. (2015) Kadowaki, L. H. S., de Gouveia Dal Pino, E. M., & Singh, C. B. 2015, ApJ, 802, 113
* Kadowaki et al. (2018) Kadowaki, L. H. S., de Gouveia Dal Pino, E. M., & Stone, J. M. 2018, ApJ, 864, 52
* Kilian et al. (2020) Kilian, P., Li, X., Guo, F., & Li, H. 2020, ApJ, 899, 151
* Kowal et al. (2011) Kowal, G., de Gouveia Dal Pino, E. M., & Lazarian, A. 2011, ApJ, 735, 102
* Kowal et al. (2012) —. 2012, Phys. Rev. Lett., 108, 241102
* Kowal et al. (2007) Kowal, G., Lazarian, A., & Beresnyak, A. 2007, ApJ, 658, 423
* Kowal et al. (2009) Kowal, G., Lazarian, A., Vishniac, E. T., & Otmianowska-Mazur, K. 2009, ApJ, 700, 63
* Lazarian et al. (2020) Lazarian, A., Eyink, G. L., Jafari, A., et al. 2020, Physics of Plasmas, 27, 012305
* Lazarian & Vishniac (1999) Lazarian, A., & Vishniac, E. T. 1999, ApJ, 517, 700
# Efficient equidistribution of periodic nilsequences and applications
James Leng Department of Mathematics, UCLA, Los Angeles, CA 90095, USA
<EMAIL_ADDRESS>
###### Abstract.
This is a companion paper to [13]. We deduce an equidistribution theorem for
periodic nilsequences and use this theorem to give two applications in
arithmetic combinatorics. The first application is quasi-polynomial bounds for
a certain complexity one polynomial progression, improving the iterated
logarithm bound previously obtained. The second application is a proof of the
quasi-polynomial $U^{4}[N]$ inverse theorem. In work with Sah and Sawhney, we
obtain improved bounds for sets lacking nontrivial $5$-term arithmetic
progressions.
## 1\. Introduction
In [13], the author gave a proof of improved bounds for the equidistribution
of nilsequences. The author found that the Ratner-type factorization theorem
was inefficient for quantitative higher order Fourier analysis. In attempting
to salvage that theorem, the author proved an equidistribution theorem for a
$G_{(s)}$-vertical character. An informal version of that equidistribution
theorem is as follows (the precise statement can be found in [13, Theorem 3]).
###### Theorem 1 (Informal Equidistribution Theorem).
Let $G/\Gamma$ be a nilmanifold and let $F(g(n)\Gamma)$ be a nilsequence with $F$
a $G_{(s)}$-vertical character with nonzero frequency $\xi$. Suppose
$\left|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)\right|\geq\delta.$
Then $F$ is “morally” a step $\leq s-1$ nilsequence with “good bounds.”
Here, “good bounds” denotes bounds whose exponents are polynomial in the
dimension of the nilmanifold. Since nilsequences may be Fourier expanded into
nilcharacters via [13, Lemma A.7], it follows that such a theorem can recover
an equidistribution theorem for all nilsequences with good bounds. In this
article, we shall consider the analogue of this equidistribution theorem for
periodic nilsequences along with applications of that theorem.
### 1.1. Main Results
Our first main result is the following equidistribution theorem with better
bounds (see Section 2 and [13, Section 2] for undefined notions).
###### Theorem 2.
mainresult1 Let $\delta\in(0,1/10)$, $N>100$ prime, and $G/\Gamma$ be a
nilmanifold of step $s$, degree $k$, dimension $d$, and complexity at most
$M$. Furthermore, let $F(g(n)\Gamma)$ be a periodic nilsequence modulo $N$
with $F$ a $G_{(s)}$-vertical character with frequency $\xi$ satisfying
$|\xi|\leq M/\delta$. Suppose
$|\mathbb{E}_{n\in\mathbb{Z}/N\mathbb{Z}}F(g(n)\Gamma)|\geq\delta.$
Then either $N\ll(M/\delta)^{O_{k}(d^{O_{k}(1)})}$ or there exist horizontal
characters $\eta_{1},\dots,\eta_{r}$ of size at most
$(M/\delta)^{O_{k}(d^{O_{k}(1)})}$ with $1\leq r\leq\mathrm{dim}(G/[G,G])$
such that
$\|\eta_{i}\circ g\|_{C^{\infty}[N]}=0$
and such that any $s$ elements $w_{1},\dots,w_{s}\in
G^{\prime}:=\bigcap_{i=1}^{r}\mathrm{ker}(\eta_{i})$ satisfy
$\xi([[[w_{1},w_{2}],w_{3}],\dots,w_{s}])=0.$
###### Remark 1.1.
This theorem is proven by black-boxing [13, Theorem 3] (via an argument due to
Candela and Sisask [2]; see also [12, 10, 11]). However, one main motivation
of this article is to discuss the proof in three simpler cases (with self-contained
proofs) which were crucial in developing the proof in [13].
Our second main result is the following.
###### Theorem 3.
mainresult2 Let $N$ be a large prime, $P,Q\in\mathbb{Z}[x]$ be linearly
independent with $P(0)=Q(0)=0$, and let $A\subseteq\mathbb{Z}_{N}$ be a subset
such that $A$ contains no configuration of the form
$(x,x+P(y),x+Q(y),x+P(y)+Q(y))$ with $y\neq 0$. Then
$|A|\ll_{P,Q}N\exp(-\log^{c_{P,Q}}(N)).$
This will be proven using a similar method as in [12], with input from
mainresult1 rather than [7]. Our final result is the quasi-polynomial
$U^{4}(\mathbb{Z}/N\mathbb{Z})$ inverse theorem.
###### Theorem 4.
mainresult4 Suppose $f\colon\mathbb{Z}/N\mathbb{Z}\to\mathbb{C}$ is a one-
bounded function with
$\|f\|_{U^{4}(\mathbb{Z}/N\mathbb{Z})}\geq\delta.$
Then there exists a nilmanifold $G/\Gamma$ with degree at most $3$, dimension
$\log(1/\delta)^{O(1)}$, complexity $O(1)$, and a nilsequence $F(g(n)\Gamma)$
on the nilmanifold with $F$ $1$-Lipschitz such that
$|\mathbb{E}_{n\in[N]}f(n)F(g(n)\Gamma)|\geq\exp(-\log(1/\delta)^{O(1)}).$
Since the initial version of this paper, this result and work with Sah and
Sawhney on improved bounds for $5$-term arithmetic progressions [15] have been
generalized in work with Sah and Sawhney [17] to the quasi-polynomial
$U^{s+1}[N]$ inverse theorem and sets lacking nontrivial $k$-term arithmetic
progressions [16]. The article can thus be treated as a “stepping-stone” for
the more general [13] and [17].
### 1.2. Discussion on the proof of the $U^{4}(\mathbb{Z}/N\mathbb{Z})$
inverse theorem
mainresult4 will be proven using a very similar method as in [8], with input
from twostepcase instead of [7] which simplifies the argument somewhat. The
advantage of the approach of [8] is that it avoids the use of “$1\%$
quadratics” that Gowers and Milicevic [4] and Manners [18] consider. We
instead consider a “$1\%$ linear” equation, which is substantially simpler.
Our primary improvement over [8] is the “sunflower” and “linearization” steps
which correspond to Sections 7 and 8 of that article. In [8], the authors
prove these steps by invoking the _Ratner-type factorization theorem_ , which
given a nilmanifold of dimension $d$ involves $O(d)$ iterations of the
Leibman-type theorem of Green and Tao [7, Theorem 2.9]. The sunflower and
linearization steps are also proven iteratively, with the number of iterations
being $O(d)$. Altogether, their proof involves $O(d^{2})$ many iterations of
the quantitative Leibman theorem of [7]. We are able to do each of the
“sunflower” and “linearization” steps in one single application of
twostepcase. The proofs given here are refinements of the proofs given in [8].
For instance, the proof of the sunflower step (sunflower) relies on the
Furstenberg-Weiss commutator argument used in [8] (but adapted to the setting
of [13, Theorem 8]) and the proof of linearization step (linearizationstep) is
similar to that of [8] in that it also relies on the Balog-Szemerédi-Gowers
theorem and a Freiman type theorem, though we use the refinement of the theory
due to Sanders [21].
### 1.3. Organization of the paper
In Section 2, we will define notation we need in addition to the notation in
[13]. Sections 3-5 will delve into examples and simple cases of the proof of
mainresult1, in order to motivate the proof of [13, Theorem 3]. In Sections 3,
we will prove an equidistribution theorem for degree two periodic bracket
polynomials. In Section 4, we will prove the degree two periodic case of
mainresult1. In Section 5, we will prove the full two-step case of
mainresult1. We will prove mainresult1 in Section 6 (by black-boxing [13,
Theorem 3]). We will prove mainresult2 in Section 7, and mainresult4 in
Section 8.
In Appendix A, we will deduce some auxiliary lemmas, used in Sections 3 and 4.
In Appendix B, we will collect and prove some lemmas on bracket polynomials;
these will be useful in Sections 3 and 8. Finally, in Appendix C, we will
include a different proof than the one suggested in Section 3 of a key lemma
of the equidistribution theory: the _refined bracket polynomial lemma_. We
believe this proof is somewhat more intuitive and offers more motivation for
the statement of the lemma than the proof presented in [13], which is cleaner.
### 1.4. Acknowledgements
We would like to thank Terry Tao for advisement and for suggesting a simpler
proof of the refined bracket polynomial lemma than the proof the author
initially came up with, which is present in Appendix C. We would also like to
thank Ashwin Sah and Mehtaab Sawhney for their interest in the author’s work
and being helpful and extremely supportive of the author. In particular, the
persistent questions of Ashwin Sah and Mehtaab Sawhney in the author’s work
led the author to realize a mistake in Section 8 in an earlier version of the
document. When the author presented a fix to them, they pointed out that there
is a more efficient way to “Pigeonhole” in the proof of sunflower which led
to the quasi-polynomial bounds in mainresult4 instead of bounds of the shape
of $\exp(-\exp(O(\log\log(1/\delta)^{2})))$ that the author had in a previous
version. We are immensely grateful to them for communicating this point to
the author and allowing the author to write up their argument here. We would
in addition like to thank Ben Green and Sarah Peluse for helpful discussions.
The author is supported by an NSF Graduate Research Fellowship Grant No.
DGE-2034835.
## 2\. Notation
We shall use the notation in [13], with a few differences, which we shall
describe below.
###### Definition 2.1 (Periodic nilsequences).
Given a nilmanifold $G/\Gamma$ and an integer $N$, a polynomial sequence
$g\in\text{poly}(\mathbb{Z},G)$, and a Lipschitz function
$F:G/\Gamma\to\mathbb{C}$, we say $F(g(n)\Gamma)$ is a _periodic nilsequence
modulo $N$_ if $g(n+N)\Gamma=g(n)\Gamma$ for each $n\in\mathbb{Z}$.
###### Definition 2.2 (Smoothness norms).
Given a polynomial $p\colon\mathbb{Z}\to\mathbb{R}$ with
$p(n)=\sum_{i=0}^{d}\alpha_{i}n^{i}$, we write
$\|p\|_{C^{\infty}[N]}=\sup_{1\leq i\leq
d}N^{i}\|\alpha_{i}\|_{\mathbb{R}/\mathbb{Z}}.$
The reason we work with the above definition of $C^{\infty}$ norm rather than
the definition in [13] involving binomial coefficients is that this definition
is better adapted to polynomials $p\colon\mathbb{Z}/N\mathbb{Z}\to\mathbb{T}$,
which appear in the analysis of periodic nilsequences.
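As an illustrative aside (ours, not part of the original development), when the coefficients $\alpha_{i}$ are rational this norm can be computed exactly; the following Python sketch, with helper names of our own choosing, implements the definition above with exact arithmetic.

```python
from fractions import Fraction

def dist_to_int(x: Fraction) -> Fraction:
    """||x||_{R/Z}: distance from x to the nearest integer."""
    f = x % 1
    return min(f, 1 - f)

def smoothness_norm(coeffs, N):
    """C^infty[N] norm of p(n) = sum_i coeffs[i] * n**i (the i = 0 term is ignored)."""
    return max(N**i * dist_to_int(a) for i, a in enumerate(coeffs) if i >= 1)
```

For example, $p(n)=n^{2}/101+n/2$ has $\|p\|_{C^{\infty}[101]}=\max(101\cdot\tfrac{1}{2},101^{2}\cdot\tfrac{1}{101})=101$, while any polynomial with integer coefficients has norm $0$.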
We shall also use $c(\delta)$ as in [13] as any quantity
$\gg(\delta/M)^{O_{k,\ell}(d)^{O_{k,\ell}(1)}}$; here, since we will be
working with single parameter nilsequences, we will always have $\ell=1$.
## 3\. Bracket polynomial heuristics
In this section, we shall deduce an equidistribution theorem for a periodic
degree two bracket polynomial. Such a polynomial is of the form
$e(\phi(n)):=e(-(\alpha_{1}n[\beta_{1}n]+\cdots+\alpha_{d}n[\beta_{d}n])+P(n))$
with $\phi(n+N)\equiv\phi(n)\pmod{1}$ and $\alpha_{i},\beta_{i}$ are rational
with denominator $N$. Such a function is _not_ a nilsequence but can be
written as $F(g(n)\Gamma)$ where $F\colon G/\Gamma\to\mathbb{C}$ is only
piecewise Lipschitz on a degree two nilmanifold $G/\Gamma$. To realize this,
we require the following definitions.
###### Definition 3.1 (Elementary two-step nilmanifold).
We shall define the _elementary two-step nilmanifold_ of dimension $2d+1$.
Define
$G=\begin{pmatrix}1&\mathbb{R}&\cdots&\cdots&\mathbb{R}\\\
0&1&\cdots&0&\mathbb{R}\\\ 0&0&1&\cdots&\mathbb{R}\\\ 0&0&0&\ddots&\vdots\\\
0&0&0&\cdots&1\end{pmatrix}$
and
$\Gamma=\begin{pmatrix}1&\mathbb{Z}&\cdots&\cdots&\mathbb{Z}\\\
0&1&\cdots&0&\mathbb{Z}\\\ 0&0&1&\cdots&\mathbb{Z}\\\ 0&0&0&\ddots&\vdots\\\
0&0&0&\cdots&1\end{pmatrix}$
equipped with the lower central series filtration. We write this in
coordinates as
$(x_{1},\dots,x_{d},y_{1},\dots,y_{d},z).$
We furthermore define the _horizontal component_ as
$\psi_{\mathrm{horiz}}(x_{1},\dots,x_{d},y_{1},\dots,y_{d},z):=(\vec{x},\vec{y})$
and the _vertical component_ as
$\psi_{\mathrm{vert}}(x_{1},\dots,x_{d},y_{1},\dots,y_{d},z):=z$.
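In these coordinates the group law is the standard one, $(\vec{x},\vec{y},z)\cdot(\vec{x}^{\prime},\vec{y}^{\prime},z^{\prime})=(\vec{x}+\vec{x}^{\prime},\vec{y}+\vec{y}^{\prime},z+z^{\prime}+\vec{x}\cdot\vec{y}^{\prime})$, and commutators land in the central $z$-coordinate. A quick sketch checking this (the coordinate convention is standard; the function names are ours):

```python
def mult(g, h):
    """Group law on the elementary two-step nilmanifold in coordinates (x, y, z)."""
    (x1, y1, z1), (x2, y2, z2) = g, h
    cross = sum(a * b for a, b in zip(x1, y2))
    return (tuple(a + b for a, b in zip(x1, x2)),
            tuple(a + b for a, b in zip(y1, y2)),
            z1 + z2 + cross)

def inv(g):
    x, y, z = g
    cross = sum(a * b for a, b in zip(x, y))
    return (tuple(-a for a in x), tuple(-a for a in y), -z + cross)

def comm(g, h):
    """Commutator [g, h] = g h g^{-1} h^{-1}; it is central here."""
    return mult(mult(g, h), mult(inv(g), inv(h)))
```

One checks that $[(\vec{x},\vec{y},z),(\vec{x}^{\prime},\vec{y}^{\prime},z^{\prime})]$ has zero horizontal component and vertical component $\vec{x}\cdot\vec{y}^{\prime}-\vec{x}^{\prime}\cdot\vec{y}$.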
We also define an elementary bracket quadratic below.
###### Definition 3.2 (Elementary bracket quadratic).
Consider $G/\Gamma$ an elementary two-step nilmanifold of dimension $2d+1$.
Given real numbers $\alpha_{1},\dots,\alpha_{d},\beta_{1},\dots,\beta_{d}$,
and $P$ a quadratic polynomial, we define the _elementary polynomial sequence_
associated to $(\vec{\alpha},\vec{\beta},P)$ via
$g(n)=(\alpha_{1}n,\dots,\alpha_{d}n,\beta_{1}n,\dots,\beta_{d}n,P(n))$. Note
that $G/\Gamma$ has the fundamental domain of $(-1/2,1/2]^{2d+1}$ via the map
$(x_{1},\dots,x_{d},y_{1},\dots,y_{d},z)\mapsto(\\{x_{1}\\},\dots,\\{x_{d}\\},\\{y_{1}\\},\dots,\\{y_{d}\\},\\{z-\vec{x}\cdot[\vec{y}]\\}).$
We can thus define the function $F:G/\Gamma\to\mathbb{C}$ as
$F((x_{1},\dots,x_{d},y_{1},\dots,y_{d},z)\Gamma)=e(-\sum_{i=1}^{d}x_{i}[y_{i}]+z).$
We see that
$F(g(n)\Gamma)=e(-(\alpha_{1}n[\beta_{1}n]+\cdots+\alpha_{d}n[\beta_{d}n])+P(n));$
we define an _elementary bracket quadratic_ associated to
$(\vec{\alpha},\vec{\beta},P)$ as $F(g(n)\Gamma)$. We say that the two-step
bracket quadratic is _periodic_ modulo $N$ if $g(n+N)\Gamma=g(n)\Gamma$.
We next define the asymmetric bilinear form of an elementary two-step
nilmanifold.
###### Definition 3.3 (Bilinear form of an elementary two-step nilmanifold).
Given an elementary two-step nilmanifold $G/\Gamma$ of dimension $2d+1$, we can
define the associated _asymmetric bilinear form_
$\omega\colon(\mathbb{R}^{2d})^{2}\to\mathbb{R}$ as follows. If
$u_{1}=(\vec{x},\vec{y})$ and $u_{2}=(\vec{z},\vec{w})$ are elements of $\mathbb{R}^{2d}$, we
define
$\omega(u_{1},u_{2})=\vec{x}\cdot\vec{w}-\vec{y}\cdot\vec{z}.$
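A minimal sketch of this form (plain Python, names ours), checking the antisymmetry $\omega(u_{1},u_{2})=-\omega(u_{2},u_{1})$ that is exploited later when the commutator matrix $C-C^{t}$ appears:

```python
def omega(u1, u2, d):
    """Asymmetric bilinear form: omega((x, y), (z, w)) = x . w - y . z."""
    x, y = u1[:d], u1[d:]
    z, w = u2[:d], u2[d:]
    return sum(a * b for a, b in zip(x, w)) - sum(a * b for a, b in zip(y, z))
```

In particular $\omega(u,u)=0$ for every $u$, consistent with $\omega$ being symplectic on $\mathbb{R}^{2d}$ as noted in Remark 3.1.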
Finally, we require the following lemma, whose proof we omit (see periodic).
###### Lemma 3.1.
periodiclemma If $F(g(n)\Gamma)$ is an $N$-periodic elementary bracket
quadratic, and if $g(n)=(\vec{\alpha}n,\vec{\beta}n,P(n))$ with $P(n)=an^{2}+bn+c$,
then $\vec{\alpha},\vec{\beta}$ are rational with denominator $N$, and $a$ is
rational with denominator $2N$.
We are now ready to state the equidistribution theorem for degree two bracket
polynomials.
###### Theorem 5.
twostepbracket Let $\delta\in(0,1/10)$, let $N>10$ be prime, let $G/\Gamma$ be an
elementary two-step nilmanifold of dimension $2d+1$ with bilinear form
$\omega$, and let $F(g(n)\Gamma)$ be an elementary two-step bracket quadratic.
Suppose
$|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)|\geq\delta.$
Then one of the following holds.
* •
$N\ll\delta^{-O(d^{O(1)})}$;
* •
or there exist some $0\leq r\leq 2d$ and vectors $w_{1},\dots,w_{r}$ and
$\eta_{1},\dots,\eta_{2d-r}$, all linearly independent, in
$\mathbb{Z}^{2d}$ with $\langle w_{i},\eta_{j}\rangle=0$ and such that
$\|\eta_{i}\cdot\psi_{\mathrm{horiz}}(g)\|_{C^{\infty}[N]}=0\text{ and
}\|\omega(w_{i},\psi_{\mathrm{horiz}}(g))\|_{C^{\infty}[N]}=0.$
###### Remark 3.1.
Observing that $\omega$ is nondegenerate on the first $2d$ coordinates, we see
that $\omega$ is a _symplectic form_ on $\mathbb{R}^{2d}$. This theorem then
states that $\psi_{\mathrm{horiz}}(g)$ lies in an _isotropic subspace_ $V$ of
$\omega$; that is, a subspace $V$ with $\omega(V,V)=0$. To see this, note that
if $v,w\in V$, it suffices to show that $\omega(v,w)=0$. Since $\eta_{i}$
annihilates $v$, it may be spanned by $w_{j}$’s, and because
$\omega(w_{j},w)=0$, it follows that $\omega(v,w)=0$.
### 3.1. Proof of twostepbracket
Let
$g(n)=(\alpha_{1}n,\dots,\alpha_{d}n,\beta_{1}n,\dots,\beta_{d}n,P(n)).$
Since
$\alpha_{i}n[\beta_{i}n]=(\\{\alpha_{i}\\}+[\alpha_{i}])n[(\\{\beta_{i}\\}+[\beta_{i}])n]=\\{\alpha_{i}\\}n[\\{\beta_{i}\\}n]+\\{\alpha_{i}\\}[\beta_{i}]n^{2}\pmod{1},$
we may reduce to the case where $|\alpha_{i}|,|\beta_{i}|\leq\frac{1}{2}$ by
modifying $P$. Applying the van der Corput inequality, there exist
$\delta^{O(1)}N$ many $h\in[N]$ such that
(3.1) $|\mathbb{E}_{n\in[N]}e(\phi(n+h)-\phi(n))|\geq\delta^{O(1)}.$
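For $N$-periodic one-bounded $f$, the cyclic form of the van der Corput step is in fact exact: $|\mathbb{E}_{n\in\mathbb{Z}/N\mathbb{Z}}f(n)|^{2}=\mathbb{E}_{h}\mathbb{E}_{n}f(n+h)\overline{f(n)}\leq\mathbb{E}_{h}|\mathbb{E}_{n}f(n+h)\overline{f(n)}|$. A numerical sanity check on a toy quadratic phase (parameters of our own choosing):

```python
import cmath

def e(x):
    return cmath.exp(2j * cmath.pi * x)

N = 97
# Periodic mod N: 3(n+N)^2/N = 3n^2/N + 6n + 3N, an integer shift mod 1.
f = [e(3 * n * n / N) for n in range(N)]
mean = sum(f) / N
corrs = [sum(f[(n + h) % N] * f[n].conjugate() for n in range(N)) / N
         for h in range(N)]
identity = sum(corrs) / N               # equals |mean|^2 exactly
vdc_bound = sum(abs(c) for c in corrs) / N
```

Here `identity` recovers $|\mathbb{E}_{n}f(n)|^{2}$ and `vdc_bound` dominates it, illustrating why a large average forces many $h$ with large correlation.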
We next observe that if $\alpha$ and $\beta$ are arbitrary, then
$\alpha(n+h)[\beta(n+h)]-\alpha n[\beta n]=\alpha n[\beta h]+\alpha h[\beta
n]+[\text{Lower order terms}]$
where “lower order terms” denote a sum of $O(1)$ terms of the form $\\{\alpha
n\\}\\{\beta n\\},\\{\alpha h\\}\\{\beta n\\}$, $\\{\alpha h\\}\\{\beta h\\}$,
$\\{\alpha n\\}\\{\beta h\\}$. The key observation is that each of the
functions
$e(\\{\alpha n\\}\\{\beta n\\}),e(\\{\alpha h\\}\\{\beta n\\}),\text{ and
}e(\\{\alpha n\\}\\{\beta h\\})$
are functions on $\mathbb{T}^{2}$ in $(\\{\alpha n\\},\\{\beta n\\})$,
$(\\{\alpha h\\},\\{\beta n\\})$, and $(\\{\alpha n\\},\\{\beta h\\})$,
respectively, and thus may be Fourier expanded into terms such as (say)
$e(k_{1}\\{\alpha n\\}+k_{2}\\{\beta n\\})=e(k_{1}\alpha n+k_{2}\beta n)$.
Note that we have “removed” a bracket. Such a heuristic is made precise in
onevarfouriercomplexity and bilinearfouriercomplexity.
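Both manipulations here can be verified exactly in rational arithmetic. The sketch below (our own helpers, `frac` for $\\{\cdot\\}$ and `ip` for $[\cdot]$) checks the bracket-removal identity $e(k_{1}\\{\alpha n\\}+k_{2}\\{\beta n\\})=e(k_{1}\alpha n+k_{2}\beta n)$ for integers $k_{1},k_{2}$, together with the reduction $\alpha n[\beta n]\equiv\\{\alpha\\}n[\\{\beta\\}n]+\\{\alpha\\}[\beta]n^{2}\pmod{1}$ used at the start of the proof.

```python
from fractions import Fraction
from math import floor

def frac(x):  # {x}, the fractional part
    return x - floor(x)

def ip(x):    # [x], the integer part
    return floor(x)

alpha, beta = Fraction(7, 5), Fraction(-13, 4)
k1, k2 = 3, -2
for n in range(-20, 21):
    # k1*{alpha n} + k2*{beta n} differs from k1*alpha*n + k2*beta*n by an integer
    assert (k1 * frac(alpha * n) + k2 * frac(beta * n)
            - (k1 * alpha * n + k2 * beta * n)) % 1 == 0
    # alpha n [beta n] = {alpha} n [{beta} n] + {alpha}[beta] n^2 (mod 1)
    lhs = alpha * n * ip(beta * n)
    rhs = frac(alpha) * n * ip(frac(beta) * n) + frac(alpha) * ip(beta) * n * n
    assert (lhs - rhs) % 1 == 0
```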
We may further analyze
$\alpha n[\beta h]=\alpha n(\beta h-\\{\beta h\\}),\quad\alpha h[\beta n]\equiv\\{\alpha h\\}(\beta n-\\{\beta n\\})\pmod{1}$
and so
$\alpha(n+h)[\beta(n+h)]-\alpha n[\beta n]=\alpha\beta nh+[(\alpha n,\beta
n),(\\{\alpha h\\},\\{\beta h\\})]+[\text{Lower order terms}]$
where $[(x,y),(z,w)]=xw-yz$. Thus, letting $\beta$ denote the coefficient of the
quadratic term of the vertical component of $g$, we have
(3.2) $|\mathbb{E}_{n\in[N]}e(2\beta
nh-n\omega(\psi_{\mathrm{horiz}}(g),\\{\psi_{\mathrm{horiz}}(g)\\})+\text{[Lower
order terms]})|\geq\delta^{O(1)}.$
By onevarfouriercomplexity and bilinearfouriercomplexity, we thus have, for some
real $\gamma$ with denominator $N$, that
$|\mathbb{E}_{n\in\mathbb{Z}_{N}}e(\gamma
nh-n\omega(\psi_{\mathrm{horiz}}(g),\\{\psi_{\mathrm{horiz}}(g)\\})+\gamma
n)|\geq\delta^{O(d^{O(1)})}.$
By periodiclemma and letting $\alpha=\psi_{\mathrm{horiz}}(g)$ and $a$ equal
to the vector induced by $a\cdot y=\omega(\psi_{\mathrm{horiz}}(g),y)$, we have
(3.3) $\|a\cdot\\{\alpha h\\}+\beta+\gamma h\|_{\mathbb{R}/\mathbb{Z}}=0$
for $\delta^{O(d^{O(1)})}N$ many $h\in[N]$. In the next subsection, we prove a
### 3.2. The refined bracket polynomial lemma
We have the following lemma.
###### Lemma 3.2 (Refined bracket polynomial lemma).
periodicrefined Let $\delta\in(0,1/10)$ and $N>100$ be prime. Suppose
$a,\alpha\in\mathbb{R}^{d}$, $a$ and $\alpha$ are rational with denominator
$N$, $|a|\leq M$, and
$\|\beta+a\cdot\\{\alpha h\\}\|_{\mathbb{R}/\mathbb{Z}}=0$
for $\delta N$ many $h\in[N]$. Then either $N\ll(\delta/d^{d}M)^{-O(1)}$ or
else there exist linearly independent $w_{1},\dots,w_{r}$
and $\eta_{1},\dots,\eta_{d-r}$ in $\mathbb{Z}^{d}$ of size at most
$(\delta/d^{d}M)^{-O(1)}$ such that $\langle w_{i},\eta_{j}\rangle=0$ and
$\|\eta_{j}\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}=0\text{ and }w_{i}\cdot a=0.$
###### Remark 3.2.
The name “refined bracket polynomial lemma” is derived from the analogous
“bracket polynomial lemma” of [7, Proposition 5.3].
This lemma can be proved via the following lemma by taking $K=1/N^{2}$: since $a$ is rational with denominator $N$, a bound $|w_{i}\cdot a|\ll(\delta/M)^{-O(1)}d^{O(d)}/N^{3}$ forces $w_{i}\cdot a=0$ unless $N$ is small.
###### Lemma 3.3.
magicargument Let $\delta\in(0,1/10)$ and $N>100$ be prime. Suppose
$a,\alpha\in\mathbb{R}^{d}$ and $\alpha$ is rational with denominator $N$,
$|a|\leq M$, and
$\|\beta+a\cdot\\{\alpha h\\}\|_{\mathbb{R}/\mathbb{Z}}<K/N$
for $\delta N$ many $h\in[N]$. Then either $N\ll(\delta/d^{d}M)^{-O(1)}$ or
$K/N\geq 1/10$ or else there exists linearly independent $w_{1},\dots,w_{r}$
and $\eta_{1},\dots,\eta_{d-r}$ in $\mathbb{Z}^{d}$ with size at most
$(\delta/d^{d}M)^{-O(1)}$ such that $\langle w_{i},\eta_{j}\rangle=0$,
$\|\eta_{j}\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}=0,\text{ and }|w_{i}\cdot
a|\leq\frac{(\delta/M)^{-O(1)}d^{O(d)}K}{N}.$
magicargument is proved in [13, Lemma 3.4]. We will give another proof of
this lemma with worse bounds in Appendix C. Although our bounds there are
worse, they are sufficient for twostepbracket and for any application of the
refined bracket polynomial lemma in (the one-variable case of) [13]. A
corollary of this lemma is the following.
###### Corollary 3.1.
bracketpolynomialcorollary Let $\delta\in(0,1/10)$ and $N>100$ be prime.
Suppose $a,\alpha\in\mathbb{R}^{d}$ are rational with denominator $N$, $|a|\leq M$,
$\beta,\gamma\in\mathbb{R}$ with $\beta$ rational with denominator $N$, and
$\|\gamma+a\cdot\\{\alpha h\\}+\beta h\|_{\mathbb{R}/\mathbb{Z}}=0$
for $\delta N$ many $h\in[N]$. Then either $N\ll(d\delta/M)^{-O(d)^{O(1)}}$ or
there exists linearly independent $w_{1},\dots,w_{r}$ and
$\eta_{1},\dots,\eta_{d-r}$ such that $\langle w_{i},\eta_{j}\rangle=0$ and
$\|\eta_{j}\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}=0,\hskip
7.22743pt\|w_{i}\cdot a\|_{\mathbb{R}/\mathbb{Z}}=0.$
###### Proof.
We define $\tilde{a}=(a,1)\in\mathbb{R}^{d+1}$ and
$\tilde{\alpha}=(\alpha,\beta)\in\mathbb{R}^{d+1}$. Invoking periodicrefined,
there exists $w_{1},\dots,w_{r}$ and $\eta_{1},\dots,\eta_{d+1-r}$ such that
$|w_{i}(\tilde{a})|=0$,
$\|\eta_{j}(\tilde{\alpha})\|_{\mathbb{R}/\mathbb{Z}}=0$. We denote
$w_{i}=(u_{i},v_{i})$ and $\eta_{j}=(\mu_{j},\nu_{j})$ where
$u_{i}\in\mathbb{R}^{d}$ and $\mu_{j}\in\mathbb{R}^{d}$ for each $i$ and $j$.
Suppose $\nu_{1}\neq 0$. Let $\tilde{\eta_{j}}=\nu_{j}\mu_{1}-\mu_{j}\nu_{1}$.
We see that $\|\tilde{\eta_{j}}\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}=0$. We claim
that the $\tilde{\eta_{j}}$’s are independent of each other. Suppose there
exists some $a_{i}$ such that
$\sum_{i\neq 1}a_{i}(\nu_{i}\mu_{1}-\mu_{i}\nu_{1})=0.$
We can rewrite this sum as
$\mu_{1}\left(\sum_{i\neq 1}a_{i}\nu_{i}\right)+\sum_{i\neq
1}(-a_{i}\nu_{1})\mu_{i}=0.$
Letting these coefficients of $\mu_{i}$ be $c_{i}$, we see that
$\sum_{i}c_{i}\nu_{i}=\nu_{1}\left(\sum_{i\neq
1}a_{i}\nu_{i}\right)-\sum_{i\neq 1}a_{i}\nu_{1}\nu_{i}=0.$
Thus, each of these coefficients are zero, and since $\nu_{1}$ is nonzero,
$a_{i}=0$. Thus, $\tilde{\eta_{j}}$’s are independent of each other. We next
claim that $\tilde{\eta_{j}}$ are orthogonal to the $u_{i}$’s. This is because
$\tilde{\eta_{j}}\cdot u_{i}=\nu_{j}\mu_{1}\cdot u_{i}-\nu_{1}\mu_{j}\cdot u_{i},$
$\eta_{j}\cdot w_{i}=\mu_{j}\cdot u_{i}+\nu_{j}v_{i}=0,$
$\eta_{1}\cdot w_{i}=\mu_{1}\cdot u_{i}+\nu_{1}v_{i}=0,$
so substituting the second and third equations into the first shows that it is
equal to zero. Finally, we claim that the $u_{i}$’s are linearly independent
of each other. To see this, note that $(u_{i},v_{i})$ are orthogonal to
$(\tilde{\eta_{j}},0)$ and $(\mu_{1},\nu_{1})$. Since $(0,1)$ is not
orthogonal to $(\mu_{1},\nu_{1})$, it follows that $(u_{i},v_{i})$ cannot span
$(0,1)$, so $(u_{i},v_{i}),(0,1)$ are linearly independent of each other,
which implies that $u_{i}$ are linearly independent of each other.
If $\nu_{i}\neq 0$ for some $i$, we let $\nu_{i}$ play the role of $\nu_{1}$
in the above argument. If $\nu_{i}=0$ for all $i$, we observe that $\mu_{j}$
are all linearly independent and that
$\|\mu_{j}\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}=0$. In addition, since
$v_{i}\cdot 1\in\mathbb{Z}$, we have
$\|w_{i}\cdot\tilde{a}\|_{\mathbb{R}/\mathbb{Z}}=\|u_{i}\cdot
a\|_{\mathbb{R}/\mathbb{Z}}=0$. Hence, choosing a linearly independent subset
of the $u_{i}$’s, we finish. ∎
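The vector manipulations in this proof can be sanity-checked on a toy example (numbers of our own choosing): with $d=2$, take $w_{1}=(u_{1},v_{1})$ orthogonal to $\eta_{1}=(\mu_{1},\nu_{1})$ and $\eta_{2}=(\mu_{2},\nu_{2})$ with $\nu_{1}\neq 0$, and verify that $\tilde{\eta}_{2}=\nu_{2}\mu_{1}-\nu_{1}\mu_{2}$ annihilates $u_{1}$.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

d = 2
w1   = (1, 0, 2)     # (u1, v1) with u1 = (1, 0), v1 = 2
eta1 = (-2, 0, 1)    # (mu1, nu1) with nu1 = 1 != 0
eta2 = (-4, 1, 2)    # (mu2, nu2)
assert dot(w1, eta1) == 0 and dot(w1, eta2) == 0  # orthogonality hypotheses

mu1, nu1 = eta1[:d], eta1[d]
mu2, nu2 = eta2[:d], eta2[d]
tilde_eta2 = tuple(nu2 * a - nu1 * b for a, b in zip(mu1, mu2))
assert dot(tilde_eta2, w1[:d]) == 0  # tilde_eta_2 annihilates u1
```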
We are now ready to finish the proof of twostepbracket.
### 3.3. Finishing the proof
We return to (3.3). Applying bracketpolynomialcorollary, we obtain
$w_{1},\dots,w_{r},\eta_{1},\dots,\eta_{d-r}$ such that
$\|w_{i}\cdot a\|_{\mathbb{R}/\mathbb{Z}}=0\text{ and
}\|\eta_{j}\cdot\alpha\|_{\mathbb{R}/\mathbb{Z}}=0.$
Unwinding the definitions of $a$ and $\alpha$ gives twostepbracket.
## 4\. The degree two nilsequence case
Having discussed bracket polynomial heuristics, we now turn to the degree-two
two-step nilsequence case. Our proof follows that of [7] which applies the van
der Corput inequality and performs an analysis of the resulting nilsequence on
the group
$G^{\square}:=G\times_{G_{2}}G:=\\{(g^{\prime},g)\in G^{2}:g^{\prime}g^{-1}\in
G_{2}\\}.$
Properties of $G^{\square}$ can be found in [13, Lemma A.3, Lemma A.4]. The
reader is encouraged to observe parallels between this section and the
previous section. We now state the main theorem of this section:
###### Theorem 6.
twostepcase Let $N$ be a prime, $\delta\in(0,1/10)$, and $G/\Gamma$ a two-step
nilmanifold of dimension $d$, complexity $M$, equipped with the lower central
series filtration. Furthermore, let $F(g(n)\Gamma)$ be a periodic nilsequence
modulo $N$ on $G/\Gamma$ with $F$ a $1$-Lipschitz vertical character of
nonzero frequency $|\xi|\leq M/\delta$. Suppose
$|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)|\geq\delta.$
Then either $N\ll(\delta/M)^{-O(d)^{O(1)}}$ or there exists some integer
$d_{\mathrm{horiz}}\geq r\geq 0$ and elements
$w_{1},\dots,w_{r}\in\Gamma/(\Gamma\cap[G,G])$ and horizontal characters
$\eta_{1},\dots,\eta_{d_{\mathrm{horiz}}-r}$ all bounded by
$(\delta/M)^{-O(d)^{O(1)}}$ such that
* •
$\psi_{\mathrm{horiz}}(w_{i})$’s are linearly independent of each other and
$\eta_{j}$’s are linearly independent of each other and
$\langle\eta_{j},w_{i}\rangle=0$ for all $i$ and $j$.
* •
We have
$\|\xi([w_{i},g])\|_{C^{\infty}[N]}=0$ $\|\eta_{j}\circ
g\|_{C^{\infty}[N]}=0.$
It is worth noting that the subgroup $\tilde{G}=\\{g\in
G:\eta_{j}(g)=0,[w_{i},g]=\mathrm{id}_{G}\ \forall i,j\\}$ is an abelian
subgroup of $G$: given two elements $g,h\in\tilde{G}$, since $\eta_{j}(h)=0$
and the $w_{i}$ and $\eta_{j}$ are orthogonal, the horizontal component of $h$
is spanned by the $w_{i}$’s, so to verify that $[g,h]=\mathrm{id}_{G}$ it
suffices to verify that $[w_{i},g]=\mathrm{id}_{G}$, which is true by
definition. In fact, by [13, Lemma A.9], each abelian rational
subgroup of $G$ is a subgroup of some group of this form. Combining this lemma
with factorization, we obtain the following.
###### Corollary 4.1.
twostepcor Let $N$ be a prime, $0<\delta<\frac{1}{10}$, $G/\Gamma$ be a two-
step nilmanifold of dimension $d$, complexity $M$, and equipped with the
standard filtration. Furthermore, let $F(g(n)\Gamma)$ be a periodic
nilsequence modulo $N$ on $G/\Gamma$ with $F$ a $1$-Lipschitz vertical
character of nonzero frequency $|\xi|\leq M/\delta$. Suppose $G/\Gamma$ has a
one-dimensional vertical torus, and
$|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)|\geq\delta.$
Then either $N\ll(\delta/M)^{-O(d)^{O(1)}}$ or we can write
$g(n)=\epsilon(n)g_{1}(n)\gamma(n)$ where $\epsilon$ is constant, $g_{1}(n)$
lies on an abelian subgroup of $G$ with rationality
$(\delta/M)^{-O(d)^{O(1)}}$ and the image of $\gamma$ lies inside $\Gamma$.
###### Proof of twostepcase.
We first make a few preliminary reductions. By [13, Lemma 2.1], we may reduce
to the case that $g(0)=\mathrm{id}_{G}$ and $|\psi(g(1))|\leq\frac{1}{2}$.
Suppose
$|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)|\geq\delta.$
Using the van der Corput inequality, we see that there are $\delta^{O(1)}N$
many $h$’s such that for each such $h$,
(4.1)
$|\mathbb{E}_{n\in[N]}F(g(n+h)\Gamma)\overline{F(g(n)\Gamma)}|\geq\delta^{O(1)}.$
We recall (once again) from [13, Definition 4.1, 4.2] the definitions
$G\times_{G_{2}}G=\\{(g^{\prime},g):g^{\prime}g^{-1}\in G_{2}\\}=G^{\square}$
$\Gamma\times_{\Gamma\cap
G_{2}}\Gamma=\\{(\gamma,\gamma^{\prime}):\gamma^{\prime}\gamma^{-1}\in
G_{2}\\}:=\Gamma^{\square}$
and $g_{2}(n)=g(n)g(1)^{-n}$. By defining
$F_{h}(x,y)=F(\\{g(1)^{h}\\}x)\overline{F(y)}$, the nonlinear part
$g_{2}(n)=g(n)g(1)^{-n}$, and
$g_{h}(n)=(\\{g(1)^{h}\\}^{-1}g_{2}(n+h)g(1)^{n}\\{g(1)^{h}\\},g_{2}(n)g(1)^{n})$,
we see that
$|\mathbb{E}_{n\in[N]}F_{h}(g_{h}(n)\Gamma)|\geq\delta^{O(1)}.$
One can verify that equipping $G^{\square}$ with the lower central series
filtration, $g_{h}\in\mathrm{poly}(\mathbb{Z},G^{\square})$, and that
$F_{h}(g_{h}(n)\Gamma^{\square})$ is a periodic nilsequence. Since $F_{h}$ is
invariant under $G_{2}^{\triangle}$, the diagonal subgroup of $G_{2}^{2}$, and
since $[G^{\square},G^{\square}]=G_{2}^{\triangle}$ it follows that $F_{h}$
descends to a function $\overline{F_{h}}$ on
$\overline{G^{\square}}:=G^{\square}/G_{2}^{\triangle}$, which is a one-step
nilpotent group. By [13, Lemma A.6] we may approximate $\overline{F_{h}}$ by
$\overline{F_{h}}=\sum_{|\eta|\leq
c(\delta)^{-1}}F_{h,\eta}+O_{L^{\infty}}(c(\delta))$
with $F_{h,\eta}$ $c(\delta)^{-1}$-Lipschitz vertical characters with
frequency $\eta$.
We now analyze characters on $\overline{G^{\square}}$. Such a character lifts
to a horizontal character on $G^{\square}$ which annihilates
$G_{2}^{\triangle}$. By [13, Lemma A.3], we may decompose
$\eta(g_{2}g_{1},g_{1})=\eta(g_{1},g_{1})+\eta(g_{2},id)=\eta^{1}(g_{1})+\eta^{2}(g_{2})$
with $g_{1}\in G$ and $g_{2}\in G_{2}$ where $\eta^{1}$ and $\eta^{2}$ are
horizontal characters of size at most $c(\delta)^{-1}$. In order to emphasize
that $\eta^{2}$ “lies in the $x_{2}-x_{1}$ direction”, we shall write
$\eta^{2}$ as $\eta^{2}\otimes\overline{\eta^{2}}$. Since $F$ has frequency
$\xi$ on $G_{2}$, we expect $F_{h,\eta}$ to also have frequency
$\xi\otimes\overline{\xi}$ on $G_{2}^{2}/G_{2}^{\triangle}$. This may not
necessarily be true as written above, but we may average over
$G_{2}^{2}/G_{2}^{\triangle}$ as follows:
$\overline{F_{h}}(x)=\int_{G_{2}^{2}/G_{2}^{\triangle}}\tilde{F}_{h}(g_{2}x\Gamma)e(-\xi\otimes\overline{\xi}(g_{2}))dg_{2}=\sum_{|\eta^{\prime}|\leq
c(\delta)^{-1}}F_{h,\eta^{\prime}}+O_{L^{\infty}}(c(\delta)).$
The point is that $(\eta^{\prime}-\xi\otimes\overline{\xi})(G_{2}^{2})=0$.
Here, note that we have abusively lifted $\eta^{\prime}$ to a horizontal
character on $G\times_{G_{2}}G/G_{2}^{\triangle}$ and
$\xi\otimes\overline{\xi}$ to a character on $G_{2}^{2}$ that annihilates
$G_{2}^{\triangle}$. By applying the Pigeonhole principle, there exists one
frequency $\eta^{\prime}$ independent of $h$ such that for $c(\delta)N$ many
$h\in\mathbb{Z}/N\mathbb{Z}$,
(4.2)
$|\mathbb{E}_{n\in\mathbb{Z}/N\mathbb{Z}}F_{\eta^{\prime}}(g(n)\Gamma)|\geq
c(\delta).$
Since $G$ is two-step, $\overline{G^{\square}}$ is one-step, so decomposing
$\eta^{\prime}(g^{\prime},g)=\eta_{1}(g)+\xi(g^{\prime}g^{-1})$, (4.2) is
equivalent to the fact that
(4.3)
$|\mathbb{E}_{n\in[N]}e(n\eta_{1}(g(1))+\xi(g_{2}(n+h)g_{2}(n)^{-1})+\xi([\\{g(1)^{h}\\},g(1)^{n}]))|\geq(\delta/M)^{O(d)^{O(1)}}$
for $c(\delta)N$ many elements $h\in[N]$.
We claim that $\xi([g(1),[g(1)^{h}]])$ is rational with denominator $N$. To see
this, note that
$\xi([g(1),\\{g(1)^{h}\\}])=\xi([g(1),[g(1)^{h}]]),$
and since $\xi([g(1),[g(1)^{h}]])^{N}=\xi([g(1)^{N},[g(1)^{h}]])=0$, it
follows that $\xi([g(1),[g(1)^{h}]])$ is rational with denominator $N$. From
(4.3) and periodic, we have for some $\beta,\gamma\in\mathbb{R}$ with $\beta$
rational with denominator $AN$ for some $A=O_{k}(1)$,
$\|\beta n+\gamma+\xi([g(1),\\{g(1)^{h}\\}])\|_{\mathbb{R}/\mathbb{Z}}=0.$
We can write
$\xi([g(1),\\{g(1)^{h}\\}])=\langle(C-C^{t})g(1),\\{g(1)^{h}\\}\rangle=\langle
a,\\{\alpha h\\}\rangle$
where $C-C^{t}$ is the antisymmetric matrix representing the commutator
identity in the horizontal torus. Note that $C-C^{t}$ has height at most $M$.
This suggests that we should have made a change of variables $n\mapsto M_{1}n$
in (4.1) (using the fact that $M_{1}$ has a modular inverse mod $N$ since $N$
is prime), for some $M_{1}\leq AM^{d}$ which is a multiple of $A$ and of the
denominator of each entry of $C-C^{t}$, obtaining
$|\mathbb{E}_{n\in[N]}F(g(M_{1}n)\Gamma)|\geq\delta.$
Thus, after applying the change of variables, we can assume that $a$,
$\alpha$, and $\beta$ have denominator $N$. Applying
bracketpolynomialcorollary and noticing that $C-C^{t}$ is antisymmetric, we
obtain $w_{i}$’s and $\eta_{j}$’s which are linearly independent,
$\eta_{j}(w_{i})=0$, $\|M_{1}\xi([w_{i},g])\|_{\mathbb{R}/\mathbb{Z}}=0$, and
$\|\eta_{j}(g)\|_{\mathbb{R}/\mathbb{Z}}=0$. Using the fact that $N$ is prime,
we obtain $\|\xi([w_{i},g])\|_{\mathbb{R}/\mathbb{Z}}=0$. This completes the
proof of twostepcase. ∎
## 5\. The two-step polynomial sequence case
In the previous section, we observed that in the two-step case, applying van
der Corput once landed us in the group $\overline{G^{\square}}$, which was
one-step. In general, this may not be true, even in the two-step polynomial
sequence case. This is exhibited by the following example in bracket
polynomial formalism. Consider
$\alpha n^{2}[\beta n].$
Differentiating once in $h$ gives a top term of
$2\alpha nh[\beta n]+\alpha n^{2}[\beta h].$
Here, we see that the term $2\alpha nh[\beta n]$ is still a “bracket term” in
$n$. Hence, our proof divides into two cases.
* •
The first case is what happens when $\overline{G^{\square}}$ is one-step. This
corresponds to centralcase and, in bracket polynomial formalism, to the case
when the bracket polynomial is of the form
$\sum_{i=1}^{d}\alpha_{i}n[\beta_{i}n]+P(n)$
for $P$ a (not necessarily degree two) polynomial.
* •
The second case is what happens when $\overline{G^{\square}}$ is not one-step.
We have not isolated a specific lemma for that case, for its proof occupies
much of twosteppolynomial. In bracket polynomial formalism, this corresponds
to a bracket polynomial of the form
$\sum_{i=1}^{d}P_{i}(n)[Q_{i}(n)]+R(n)$
where $P_{i},Q_{i}$ are polynomials with at least one with degree larger than
one and $R$ is a polynomial.
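To make the motivating example concrete, the underlying differencing computation is the following sketch (suppressing bounded and lower-order error terms):
$\alpha(n+h)^{2}[\beta(n+h)]-\alpha n^{2}[\beta n]=\alpha n^{2}([\beta(n+h)]-[\beta n])+2\alpha nh[\beta(n+h)]+\alpha h^{2}[\beta(n+h)].$
Using $[\beta(n+h)]=[\beta n]+[\beta h]+O(1)$, the top-order terms in $n$ are $2\alpha nh[\beta n]+\alpha n^{2}[\beta h]$, matching the top term displayed earlier.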
We now state the main theorem of this section.
###### Theorem 7.
twosteppolynomial Let $\delta\in(0,1/10)$, $N>100$ prime, and $G/\Gamma$ a
two-step nilmanifold of dimension $d$, complexity $M$, and degree $k$.
Furthermore, let $F(g(n)\Gamma)$ be a periodic nilsequence modulo $N$ on
$G/\Gamma$ with $F$ a $1$-Lipschitz vertical character with nonzero frequency
$|\xi|\leq M/\delta$. Suppose
$|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)|\geq\delta.$
Then either $N\ll(\delta/M)^{-O_{k}(d)^{O_{k}(1)}}$ or else there exist some
integer $d_{\mathrm{horiz}}\geq r\geq 1$, elements $w_{1},\dots,w_{r}\in\Gamma$
with $\psi_{\mathrm{horiz}}(w_{1}),\dots,\psi_{\mathrm{horiz}}(w_{r})$ linearly
independent, and linearly independent horizontal characters $\eta_{1},\dots,\eta_{d_{\mathrm{horiz}}-r}$
with $|w_{i}|,|\eta_{j}|\leq(\delta/M)^{-O_{k}(d)^{O_{k}(1)}}$,
$\langle\eta_{j},w_{i}\rangle=0$, and
$\|\xi([w_{i},g])\|_{C^{\infty}[N]},\|\eta_{j}\circ g\|_{C^{\infty}[N]}=0.$
It turns out that the proof of the two-step polynomial sequence case breaks
down naturally into two cases: one where $\xi([G_{2},G])=0$ and one where
$\xi([G_{2},G])=\mathbb{R}$. This is analogous to the two cases considered in
[13, Section 4]. We start with the case $\xi([G_{2},G])=0$.
###### Lemma 5.1.
centralcase Let $\delta\in(0,1/10)$, $N>100$ prime, and $G/\Gamma$ be a two-
step nilmanifold of dimension $d$, complexity $M$, and degree $k$ with
$\xi([G_{2},G])=0$. Furthermore, let $F(g(n)\Gamma)$ be a periodic nilsequence
modulo $N$ on $G/\Gamma$ with $F$ a $1$-Lipschitz vertical character with
nonzero frequency $|\xi|\leq M/\delta$. Suppose
$|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)|\geq\delta.$
Then either $N\ll(M/\delta)^{O_{k}(d^{O_{k}(1)})}$ or else there exists some
integer $d_{\mathrm{horiz}}\geq r\geq 1$, elements
$w_{1},\dots,w_{r}\in\Gamma$ with
$\psi_{\mathrm{horiz}}(w_{1}),\dots,\psi_{\mathrm{horiz}}(w_{r})$ linearly
independent, and linearly independent horizontal characters
$\eta_{1},\dots,\eta_{d_{horiz}-r}$ with
$|w_{i}|,|\eta_{j}|\leq(M/\delta)^{O_{k}(d^{O_{k}(1)})}$,
$\langle\eta_{j},w_{i}\rangle=0$, and
$\|\xi([w_{i},g])\|_{C^{\infty}[N]},\|\eta_{j}\circ g\|_{C^{\infty}[N]}=0.$
###### Proof.
By [13, Lemma 2.2], we may reduce to the case where $g(0)=1$ and
$|\psi(g(1))|\leq 1/2$. We proceed similarly as the proof of twostepcase with
one key difference. Instead of Fourier expanding along the vertical torus, we
Fourier expand along the $G_{2}$-torus. By [13, Lemma A.6], we may decompose
$F=\sum_{|\alpha|\leq c(\delta)^{-1}}F_{\alpha}+O(c(\delta))$
where $\alpha$ is a $G_{2}$-vertical frequency. By averaging over $G_{(2)}$,
we may write
$F(x)=\int F(g_{2}x)e(-\xi(g_{2}))dg_{2}=\sum_{|\beta|\leq
c(\delta)^{-1}}F_{\beta}+O(c(\delta))$
where crucially, $(\beta-\xi)(G_{(2)})=0$.
Once again, we apply the van der Corput inequality
$|\mathbb{E}_{n\in[N]}F(g(n)\Gamma)\overline{F(g(n+h)\Gamma)}|\geq\delta^{2}$
for $\delta^{2}N$ many $h\in[N]$. Then letting $g_{2}(n):=g(n)g(1)^{-n}$ be
the nonlinear part of $g$, we denote
$g_{h}(n)=(\\{g(1)^{h}\\}^{-1}g_{2}(n+h)g(1)^{n}\\{g(1)^{h}\\},g(n))$
and $F_{h}(x,y)=\overline{F}(\\{g(1)^{h}\\}x)F(y)$. By [13, Lemma A.3], we see
that $g_{h}$ lies inside $G^{\square}$ and $F_{h}(g_{h}(n))$ descends to
$\tilde{F}_{h}(\tilde{g}_{h}(n)\overline{\Gamma^{\square}})$ a nilsequence on
$\overline{G^{\square}}$. Since $g(n)\Gamma$ is periodic modulo $N$, it
follows that $g_{h}(n)\Gamma\times\Gamma$ is periodic modulo $N$ and thus
$g_{h}(n)\Gamma\times_{G_{2}\cap\Gamma}\Gamma$ is also periodic modulo $N$,
and so $\tilde{g}_{h}(n)\overline{\Gamma^{\square}}$ is periodic modulo $N$.
The hypothesis then rearranges to
$|\mathbb{E}_{n\in[N]}\tilde{F_{h}}(\tilde{g_{h}}(n)\overline{\Gamma^{\square}})|\geq\delta^{2}$
for $\delta^{2}N$ many $h\in[N]$. Making a change of variables for some
integer $1\leq M_{1}\leq(10^{k}k)!M^{d}$,
$|\mathbb{E}_{n\in[N]}\tilde{F_{h}}(\tilde{g_{h}}(M_{1}n)\overline{\Gamma^{\square}})|\geq\delta^{2}.$
By [13, Lemma A.3], it follows that $F_{h}$ has Lipschitz norm at most
$M^{O(1)}$ on $G^{\square}$, and $G^{\square}$ is abelian; we now repeat a
similar procedure as in the proof of twostepcase, approximating $\tilde{F}_{h}$
by $F_{\eta,h}$ where $(\eta-\xi\otimes\overline{\xi})(G_{(2)})=0$, and
pigeonholing in $\eta$ so that it is $h$-independent.
Again, we decompose via [13, Lemma A.3]
$\eta(g_{2}g_{1},g_{1})=\eta^{1}(g_{1})+\eta^{2}(g_{2})$ with $g_{1}\in G$ and
$g_{2}\in G_{2}$. It thus follows that
$|\mathbb{E}_{n\in\mathbb{Z}/N\mathbb{Z}}e(\eta^{1}(g(M_{1}n))+\eta^{2}(g_{2}(M_{1}n+h))-\eta^{2}(g_{2}(M_{1}n))+\xi([g(1)^{M_{1}n},\\{g(1)^{h}\\}]))|\geq
c(\delta)$
for $c(\delta)N$ many $h\in\mathbb{Z}/N\mathbb{Z}$. Thus, by classical results
in Diophantine approximation (e.g., [13, Lemma A.11]),
$\|\eta^{1}(g(M_{1}n))+\eta^{2}(g_{2}(M_{1}n+h))-\eta^{2}(g_{2}(M_{1}n))+\xi([g(1)^{M_{1}n},\\{g(1)^{h}\\}])\|_{C^{\infty}[N]}=0.$
Expanding $\eta^{2}(g_{2}(M_{1}n+h))$ out, applying the hypothesis, and
applying Vinogradov’s lemma to eliminate the coefficients of $nh^{i}$ for
$i>1$ (see also [13, Lemma 4.2]), we see that there exists $\beta$ and
$\gamma$ such that
$\|\beta
h+\xi([g(1)^{M_{1}n},\\{g(1)^{h}\\}])+\gamma\|_{\mathbb{R}/\mathbb{Z}}=0.$
The point of making the change of variables is so that by periodic, the
coefficients of $g_{2}(M_{1}\cdot)$ have denominator $N$, and
$\xi([g(1)^{M_{1}n},\\{g(1)^{h}\\}])$ consists of $\langle an,\\{\alpha
h\\}\rangle$ where $a$ and $\alpha$ have denominator $N$. Applying
bracketpolynomialcorollary and using the fact that $N$ is prime yields
$w_{i}$’s and $\eta_{j}$’s which satisfy the conclusions of the lemma. ∎
We now address the case where $\xi([G_{2},G])=\mathbb{R}$. The key fact to
keep in mind in the below proof is that if $G_{(2)}$ is one-dimensional, then
$[G_{2},G]=[G,G]$.
###### Proof of twosteppolynomial.
By [13, Lemma 2.3], we may assume that $G_{(2)}$ is one-dimensional. If
$G_{2}$ lies in the center of $G$, we may apply centralcase to finish.
At this point, the proof of [7] Fourier expands $F$ into $G_{k}$-vertical
characters and reduces to a nilsequence on $\overline{G^{\square}}$ of degree
$k-1$. In our proof, we wish to preserve information of $F$ being a
$G_{(2)}$-vertical character in some way. This can be done by modifying the
filtration. For $k\geq\ell>2$, we replace $G_{\ell}$ with $G_{\ell}G_{(2)}$.
Then since $G_{(2)}$ is in the center of $G$, this gives a filtered
nilmanifold; by [13, Lemma B.12], there exists a Mal’cev basis which makes
this filtered nilmanifold of complexity at most $c(\delta)^{-1}$.
As in previous arguments, at the cost of replacing $\delta$ with $c(\delta)$ and
increasing the Lipschitz constant of $F$ to $c(\delta)^{-1}$, we may replace
$F$ with a $G_{k}$-vertical character of frequency $\eta$ with
$(\eta-\xi)(G_{(2)})=0$.
By the van der Corput inequality, we have for $c(\delta)N$ many $h$’s
$|\mathbb{E}_{n\in[N]}F(g(n+h)\Gamma)\overline{F(g(n)\Gamma)}|\geq c(\delta).$
Defining
$g_{h}(n)=(\\{g(1)^{h}\\}^{-1}g_{2}(n+h)g(1)^{n}\\{g(1)^{h}\\},g_{2}(n)g(1)^{n})$
and $F_{h}(x,y)=F(\\{g(1)^{h}\\}x)\overline{F(y)}$, it follows that
$|\mathbb{E}_{n\in[N]}F_{h}(g_{h}(n))|\geq\delta^{O(1)}.$
As before, by [13, Lemma A.3, Lemma A.4], $g_{h}$ is a periodic polynomial
sequence on $G^{\square}$. By [13, Lemma A.3], this group has the filtration
$(G\times_{G_{2}}G)_{i}=G_{i}\times_{G_{i+1}}G_{i}$ with
$(G\times_{G_{2}}G)_{k}=G_{k}^{\triangle}$. However, $F_{h}$ is
$G_{k}^{\triangle}$-invariant, so descends via a quotient by
$G_{k}^{\triangle}$ to a degree $k-1$ nilsequence
$\tilde{F}_{h}(\overline{g_{h}}(n)\overline{\Gamma^{\square}})$. The
observation to make is that $\tilde{F}_{h}$ is a nilcharacter of frequency
$\xi\otimes\overline{\xi}$ on $\overline{G^{\square}}$. This is because
$[G_{2},G]=[G,G]$ so $(\overline{G^{\square}})_{(2)}=G_{(2)}\times
G_{(2)}/G_{k}^{\triangle}$. Applying the induction hypothesis, we obtain
linearly independent horizontal characters $\eta_{1},\dots,\eta_{d^{\prime}}$
and elements
$w_{1},\dots,w_{d_{horiz,\overline{G^{\square}}}-d^{\prime}}\in\overline{\Gamma^{\square}}$
such that
$\psi_{horiz,\overline{G^{\square}}}(w_{1}),\dots,\psi_{horiz,\overline{G^{\square}}}(w_{d_{horiz,\overline{G^{\square}}}-d^{\prime}})$
are linearly independent and such that, for each $i,j$ and for $c(\delta)N$
many $h\in[N]$,
$\|\xi\otimes\overline{\xi}([w_{i},\tilde{g_{h}}(n)])\|_{C^{\infty}[N]}=0$
$\|\eta_{j}\circ\tilde{g_{h}}(n)\|_{C^{\infty}[N]}=0,$
where $\tilde{g_{h}}$ is the projection of $g_{h}$ to $G^{\square}$. Abusively
lifting $\eta_{j}$ to be horizontal characters on $G^{\square}$ and lifting
$w_{j}$ to be elements in $\Gamma^{\square}$ and appending elements
$z_{1},\dots,z_{\ell}$ in $G_{k}^{\triangle}\cap\Gamma^{\square}$ to the set
of $w_{i}$’s such that
$\overline{\exp(z_{1})},\dots,\overline{\exp(z_{\ell})}$ span
$G_{k}^{\triangle}/[G^{\square},G^{\square}]$, we see that
* •
since $G_{k}^{\triangle}/[G^{\square},G^{\square}]$ can be naturally
identified with its Lie algebra which can be identified via the Mal’cev
coordinates as the orthogonal complement within $G_{k}^{\triangle}$ of
$[G^{\square},G^{\square}]$, that we can take $z_{1},\dots,z_{\ell}$ to be
size at most $c(\delta)^{-1}$, and thus
$w_{1},\dots,w_{d_{horiz,G^{\square}}-d^{\prime}}$ also has size at most
$c(\delta)^{-1}$;
* •
Since $\eta_{1},\dots,\eta_{d^{\prime}}$ annihilate $G_{k}^{\triangle}$, they
span the annihilators of $w_{1},\dots,w_{d_{horiz,G^{\square}}-d^{\prime}}$.
Hence, we have for each $i,j$
$\|\xi\otimes\overline{\xi}([w_{i},g_{h}(n)])\|_{C^{\infty}[N]}=0$
$\|\eta_{j}\circ g_{h}(n)\|_{C^{\infty}[N]}=0.$
By [13, Lemma A.3], we may write $w_{i}=(u_{i}v_{i},u_{i})$ and decompose
$\eta_{j}(g^{\prime},g)=\eta_{j}^{1}(g)+\eta_{j}^{2}(g^{\prime}g^{-1})$ so
$\displaystyle\eta_{j}(g_{h}(n))$
$\displaystyle=\eta_{j}^{1}(g(n))+\eta_{j}^{2}(g_{2}(n+h)g_{2}(n)^{-1})$
$\displaystyle\eta_{j}(w_{i})$
$\displaystyle=\eta_{j}^{1}(u_{i})+\eta_{j}^{2}(v_{i})=0$
$\displaystyle\xi\otimes\overline{\xi}([w_{i},g_{h}(n)])$
$\displaystyle=\xi\otimes\overline{\xi}(([u_{i},g(n)],[u_{i}v_{i},g(n)][u_{i}v_{i},g_{2}(n+h)g_{2}(n)^{-1}]))$
$\displaystyle=\xi([v_{i},g(n)]+[u_{i}v_{i},g_{2}(n+h)g_{2}(n)^{-1}]).$
Note that here we are crucially using that $[G_{2},G]=[G,G]$, so $\eta_{j}^{2}$
annihilates $[G,G]$. By expanding out the polynomial
$[u_{i}v_{i},g_{2}(n+h)g_{2}(n)^{-1}]$, and grouping coefficients, and
applying a polynomial Vinogradov-type lemma (e.g., [13, Lemma A.11]; see also
[13, Lemma 4.2]), it follows that
$\|\eta_{j}^{1}\circ
g\|_{C^{\infty}[N]},\|\xi([v_{i},g])\|_{C^{\infty}[N]},\|\eta_{j}^{2}\circ
g_{2}\|_{C^{\infty}[N]},\|\xi([u_{i}v_{i},g_{2}])\|_{C^{\infty}[N]}=0.$
Note that here we must use the fact that $N$ is prime and the fact that
$g_{2}$ has horizontal component with denominator $N$ to eliminate the
binomial coefficients that come from expanding out $g_{2}(n+h)g_{2}(n)^{-1}$.
Let $\tilde{G}=\\{g\in G:\eta_{j}^{1}(g)=0,[v_{i},g]=0\\}$ and
$\tilde{G}_{2}=\\{g\in\tilde{G}\cap
G_{2}:\eta_{j}^{2}(g)=0,[u_{i}v_{i},g]=0\\}$. We note that
$[G,G]\subseteq\tilde{G}\cap G_{2}$ and also that $\eta_{j}^{2}([G,G])=0$.
Hence, (abusing notation) the sequence of subgroups
$\tilde{G}_{i}=\tilde{G}_{2}\cap G_{i}$ for $i>2$ and
$\tilde{G}_{0}=\tilde{G}_{1}=\tilde{G}$ form a filtration.
We claim that $[\tilde{G},\tilde{G}_{2}]=0$. To show this, we let
$\tilde{H}=\tilde{G}^{\square}$ and we claim that $\tilde{H}$ is Abelian. To
see this, note that each element of $\tilde{H}$ of the form $(gg_{2},g)$
satisfies $\eta_{j}^{1}(g)+\eta_{j}^{2}(g_{2})=0$ and
$[v_{i},g]+[u_{i}v_{i},g_{2}]=0$. Since
$\eta_{j}^{1}(u_{i})+\eta_{j}^{2}(v_{i})=0$ for each $i,j$, it follows that
$(gg_{2},g)$ can be generated by $(u_{i}v_{i},u_{i})$ modulo $[G,G]^{2}$.
However, for any other $(hh_{2},h)$ in $\tilde{H}$, we have
$\xi\otimes\overline{\xi}[(u_{i}v_{i},u_{i}),(hh_{2},h)]=0$. Hence $\tilde{H}$
is Abelian. Finally, we have
$\xi\otimes\overline{\xi}([(g,g),(hg_{2},h)])=\xi([g,g_{2}])=0$ whenever
$g\in\tilde{G}$ and $g_{2}\in\tilde{G}_{2}$. This shows that
$\xi([\tilde{G},\tilde{G}_{2}])=0$ and since $[G,G]$ is one-dimensional,
$[\tilde{G},\tilde{G}_{2}]=0$. Thus, by factorization and removerational, we
may write $g(n)=g^{\prime}(n)\gamma^{\prime}(n)$ where $\gamma^{\prime}(n)$
has image in $\Gamma$ and $g^{\prime}(n)$ has image in $G^{\prime}$. We see
that
$g^{\prime}(1)^{n}\equiv g(1)^{n}\gamma^{\prime}(1)^{n}\pmod{[G,G]}$
and by abusing notation,
$g^{\prime}_{2}(n)\equiv g_{2}(n)\gamma^{\prime}_{2}(n)\pmod{[G,G]}.$
Since $\eta_{j}$ and $\xi([u_{i}v_{i},\cdot])$ both annihilate $[G,G]$, we see
that
$\|\eta_{j}\circ
g^{\prime}_{2}\|_{C^{\infty}[N]},\|\xi([u_{i}v_{i},g^{\prime}_{2}])\|_{C^{\infty}[N]}=0.$
Thus, by factorization2, and removerational, we can write
$g(n)\Gamma=g_{1}(n)\Gamma$ with $g_{1}\in\text{poly}(\mathbb{Z},\tilde{G})$.
Finally, we apply centralcase and [13, Lemma A.9] to $g_{1}(n)$ to find
linearly independent $\alpha_{1},\dots,\alpha_{d^{\prime}}$ of size at most
$c(\delta)^{-1}$ with the property that
$\|\alpha_{i}\circ g\|_{C^{\infty}[N]}=\|\alpha_{i}\circ
g_{1}\|_{C^{\infty}[N]}=0$
and such that if $w,w^{\prime}$ are elements in $\Gamma/([G,G]\cap\Gamma)$
that are orthogonal to each of the $\alpha_{i}$’s, then
$\xi([w,w^{\prime}])=0$. To satisfy the second conclusion, we invoke
factorization and removerational to find a factorization
$g_{1}(n)=g_{1}^{\prime}(n)\gamma(n)$ where $g_{1}^{\prime}$ lies in the
kernel of $\alpha_{i}$ for all $i$ and $\gamma$ lies in $\Gamma$. It follows
that $g_{1}^{\prime}$ lies in the subspace generated by orthogonal elements to
$\alpha_{i}$’s. Thus, for any $w\in\Gamma/(\Gamma\cap[G,G])$ orthogonal to all
of the $\alpha_{i}$’s, it follows that
$\|\xi([w,g_{1}^{\prime}(n)\Gamma])\|_{C^{\infty}[N]}=\|\xi([w,g_{1}(n)])\|_{C^{\infty}[N]}=\|\xi([w,g(n)])\|_{C^{\infty}[N]}=0.$
By [13, Lemma A.8], we may choose elements
$w_{1},\dots,w_{d_{horiz}-d^{\prime}}$ inside $\Gamma$ whose size is at most
$c(\delta)^{-1}$ and such that their projections to $\Gamma/([G,G]\cap\Gamma)$
are linearly independent and are orthogonal to the $\alpha_{i}$. To verify
size bounds, we must verify that the projection of the Mal’cev coordinates of
$w_{i}$ to the orthogonal complement of $[\mathfrak{g},\mathfrak{g}]$ is
bounded by $c(\delta)^{-1}$. This follows from invoking [13, Lemma A.7] to
construct a $c(\delta)^{-1}$-rational basis for the orthogonal to
$[\mathfrak{g},\mathfrak{g}]$ and constructing $c(\delta)^{-1}$-rational basis
for $[\mathfrak{g},\mathfrak{g}]$ and rewriting the Mal’cev coordinates of
$w_{i}$ in terms of linear combinations of these bases and projecting to the
dimensions where $[\mathfrak{g},\mathfrak{g}]=0$. By Cramer’s rule we can
ensure that if we write $w_{i}$ in terms of these linear combinations, all
components are rational with height at most $c(\delta)^{-1}$. ∎
## 6\. The general periodic case
Recall the following theorem (see Theorem 2).
###### Proof.
As the direct proof of this theorem is no shorter than the proof in [13], we
shall only give a proof of this assuming [13, Theorem 3]. We see from
hypothesis that
$|\mathbb{E}_{n\in[MN]}F(g(n)\Gamma)|\geq\delta$
for each $M$. Invoking [13, Theorem 3], we find $\eta_{1},\dots,\eta_{r}$ such
that
$\|\eta_{i}\circ g\|_{C^{\infty}[MN]}\leq c(\delta)^{-1}$
and such that any $s$ elements $w_{1},\dots,w_{s}\in
G^{\prime}:=\bigcap_{i=1}^{r}\text{ker}(\eta_{i})$ satisfy
$\xi([w_{1},\dots,w_{s}])=0.$
Sending $M$ to infinity, we see that
$\|\eta_{i}\circ g\|_{C^{\infty}[MN]}=0$
as desired. ∎
## 7\. The complexity one polynomial Szemerédi theorem
We now deduce mainresult2. Fix $P(x)$ and $Q(x)$ to be linearly independent
polynomials. For functions $f,g,k,p\colon\mathbb{Z}/N\mathbb{Z}\to\mathbb{C}$,
define
$\Lambda(f,g,k,p):=\mathbb{E}_{x,y}f(x)g(x+P(y))k(x+Q(y))p(x+P(y)+Q(y)).$
and
$\Lambda^{1}(f,g,k,p):=\mathbb{E}_{x,y,z}f(x)g(x+y)k(x+z)p(x+y+z).$
We will show the following:
###### Theorem 8.
asymptotic There exists some constant $c_{P,Q}>0$ such that for any one-bounded
$f,g,k,p$, we have
$\Lambda(f,g,k,p)=\Lambda^{1}(f,g,k,p)+O\left(\frac{1}{\exp(\log^{c_{P,Q}}(N))}\right).$
To see how this implies mainresult2, suppose the set $A$ has no nontrivial
configuration of the form $(x,x+P(y),x+Q(y),x+P(y)+Q(y))$. Then
$|\Lambda(1_{A},1_{A},1_{A},1_{A})|\leq\frac{1}{N}.$
We have
$|\Lambda^{1}(1_{A},1_{A},1_{A},1_{A})|=\|1_{A}\|_{U^{2}(\mathbb{Z}/N\mathbb{Z})}^{4}\geq\|1_{A}\|_{U^{1}(\mathbb{Z}/N\mathbb{Z})}^{4}=\alpha^{4}$
where $\alpha$ is the density of $A$ in $\mathbb{Z}/N\mathbb{Z}$. asymptotic
then implies that
$\alpha\ll\frac{1}{\exp(\log^{c_{P,Q}}(N))}$
as desired.
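For completeness, the middle identity can be checked on the Fourier side (a standard computation, with the normalization $\hat{f}(\xi)=\mathbb{E}_{x}f(x)e(-\xi x/N)$): orthogonality in $y$ and $z$ collapses the average to
$\Lambda^{1}(f,g,k,p)=\sum_{\xi}\hat{f}(\xi)\hat{g}(-\xi)\hat{k}(-\xi)\hat{p}(\xi),$
so that $\Lambda^{1}(1_{A},1_{A},1_{A},1_{A})=\sum_{\xi}|\hat{1}_{A}(\xi)|^{4}\geq|\hat{1}_{A}(0)|^{4}=\alpha^{4}$. Combining the two displays above gives $\alpha^{4}\leq\frac{1}{N}+O(\exp(-\log^{c_{P,Q}}(N)))$, and taking fourth roots yields the stated bound after adjusting $c_{P,Q}$.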
The proof of asymptotic will closely follow the proof of [12, Theorem 3]. To
prove this theorem, we prove the following inverse-type theorem:
###### Theorem 9.
polynomialinverse Suppose $|\Lambda(f,g,k,p)|\geq\delta$. Then either
$N\ll\exp(\log^{O_{P,Q}(1)}(1/\delta))$ or
$\|p\|_{U^{2}}\gg\exp(-\log^{O_{P,Q}(1)}(1/\delta))$.
Let us assume for a moment that we can prove polynomialinverse. We shall
deduce asymptotic. For arbitrary one-bounded $p$, we invoke [12, Lemma 4.2] to
decompose $p=p_{a}+p_{b}+p_{c}$ with
$\|\hat{p_{a}}\|_{\ell^{1}}\leq\epsilon_{1}^{-1},\|p_{b}\|_{L^{1}}\leq\epsilon_{2},\|p_{c}\|_{L^{\infty}}\leq\epsilon_{3}^{-1},\|\hat{p_{c}}\|_{L^{\infty}}\leq\epsilon_{4}$
where $\epsilon_{1},\dots,\epsilon_{4}$ will be chosen later and satisfy
$\epsilon_{1}\epsilon_{4}^{-1}+\epsilon_{2}^{-1}\epsilon_{3}\leq\frac{1}{2}$.
We thus have
$\Lambda(f,g,k,p)=\Lambda(f,g,k,p_{a})+\Lambda(f,g,k,p_{b})+\Lambda(f,g,k,p_{c}),$
$\Lambda(f,g,k,p_{a})=\Lambda^{1}(f,g,k,p_{a})+O(N^{-\delta}\epsilon_{1}^{-1}),$
and
$|\Lambda(f,g,k,p_{b})|\leq\epsilon_{2}.$
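The second estimate is immediate from the one-boundedness of $f,g,k$, since for each fixed $y$ the map $x\mapsto x+P(y)+Q(y)$ is a bijection of $\mathbb{Z}/N\mathbb{Z}$:
$|\Lambda(f,g,k,p_{b})|\leq\mathbb{E}_{x,y}|p_{b}(x+P(y)+Q(y))|=\|p_{b}\|_{L^{1}}\leq\epsilon_{2}.$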
To control $|\Lambda(f,g,k,p_{c})|$, we invoke polynomialinverse which states
that either
$|\Lambda(f,g,k,p_{c})|\ll\epsilon_{3}^{-1}\exp(-\log^{c_{P,Q}}(N))$
or
$\displaystyle|\Lambda(f,g,k,p_{c})|$
$\displaystyle\leq\epsilon_{3}^{-1}|\Lambda(f,g,k,\epsilon_{3}p_{c})|$
$\displaystyle\leq\epsilon_{3}^{-1}\exp(-\log^{C_{P,Q}}(\|\epsilon_{3}\hat{p_{c}}\|_{L^{\infty}}^{-1/2}))$
$\displaystyle=\epsilon_{3}^{-1}\exp(-\log^{C_{P,Q}}(\epsilon_{3}^{-1/2}\epsilon_{4}^{-1/2})).$
Choosing $\epsilon_{1}=N^{\alpha}$,
$\epsilon_{3}=\exp(-\log^{c^{\prime}_{P,Q}}(N))$,
$\epsilon_{4}=N^{-\alpha^{\prime}}$ for $\alpha^{\prime}>\alpha$, we see that
$\exp(-\log^{C_{P,Q}}(\epsilon_{3}^{-1/2}\epsilon_{4}^{-1/2}))\ll_{\epsilon}\exp(-(\alpha^{\prime}/2-\epsilon)^{C_{P,Q}}\log^{C_{P,Q}}(N)).$
Thus, choosing $\epsilon_{3}$ to be larger than both
$\exp(-\log^{c_{P,Q}}(N))$ and
$\exp(-(\alpha^{\prime}-\epsilon)^{C_{P,Q}}\log^{C_{P,Q}}(N))$, and
$\epsilon_{2}$ to be less than $\epsilon_{3}$ but also of the form
$\exp(-\log^{1/O_{P,Q}(1)}(N))$, we have the desired estimate of
$\Lambda(f,g,k,p)=\Lambda^{1}(f,g,k,p_{a})+O(\exp(-\log^{c_{P,Q}}(N))).$
The decomposition we chose also allows us to prove the same estimates with
$\Lambda$ replaced by $\Lambda^{1}$, one of the estimates now being
$|\Lambda^{1}(f,g,k,p_{c})|\leq\epsilon_{3}^{-1/2}\|\hat{p_{c}}\|_{L^{\infty}}^{1/2}\leq\epsilon_{3}^{-1/2}\epsilon_{4}^{1/2}\ll_{\epsilon}N^{-\alpha^{\prime}/2+\epsilon}.$
Thus, we can also show that
$\Lambda^{1}(f,g,k,p)=\Lambda^{1}(f,g,k,p_{a})+O(\exp(-\log^{c_{P,Q}}(N)))$
which gives
$\Lambda(f,g,k,p)=\Lambda^{1}(f,g,k,p)+O(\exp(-\log^{c_{P,Q}}(N)))$
as desired. It thus remains to prove polynomialinverse.
### 7.1. Proof of polynomialinverse
We will show the following:
###### Proposition 7.1.
degreelowering Given functions $f,g,k:\mathbb{Z}/N\mathbb{Z}\to\mathbb{C}$, we
define
$\mathcal{D}(f,g,k)(x)=\mathbb{E}_{y}f(x-P(y)-Q(y))g(x-Q(y))k(x-P(y)).$
Let $s\geq 2$. If $f,g,k:\mathbb{Z}/N\mathbb{Z}\to\mathbb{C}$ are one-bounded
and
$\|\mathcal{D}(f,g,k)\|_{U^{s+1}(\mathbb{Z}/N\mathbb{Z})}\geq\delta,$
then either $\delta\ll\exp(-\log^{1/O_{P,Q}(1)}(N))$ or
$\|f\|_{U^{s}(\mathbb{Z}/N\mathbb{Z})}\gg\exp(-\log^{O_{P,Q}(1)}(1/\delta))$.
First, assuming this is true, by [12, Lemma 5.1, Lemma 5.2], for some
$s=s_{P,Q}$, we have
$|\Lambda(f,g,k,p)|\leq\|p\|_{U^{s}(\mathbb{Z}/N\mathbb{Z})}+O(N^{-1/O_{P,Q}(1)}).$
Thus,
$|\Lambda(f,g,k,p)|\leq\left(\mathbb{E}_{x}|\mathcal{D}(f,g,k)(x)|^{2}\right)^{1/2}=|\Lambda(\overline{f},\overline{g},\overline{k},\mathcal{D}(f,g,k))|^{1/2}\leq\|\mathcal{D}(f,g,k)\|_{U^{s}(\mathbb{Z}/N\mathbb{Z})}^{O(1)}+O(N^{-\zeta})$
for some $\zeta$ that depends only on $P$ and $Q$. Thus, if we can show
degreelowering, this will in turn imply polynomialinverse by an iterative
argument. For the remainder of the argument, we will now indicate how
to make improvements in [12]. The first improvement we apply is the Sanders
$U^{3}$ inverse theorem (see [14, Appendix A] for how to deduce the improved
$U^{3}$ inverse theorem from [21]), where we end up with
(7.1)
$\mathbb{E}_{h_{1},\dots,h_{s-2}}|\langle\mathcal{D}_{h}(f,g,k),F_{\vec{h}}(m_{h}(x)\Gamma)\rangle|\gg\exp(-\log^{O(1)}(1/\delta))$
instead of the inferior
$\mathbb{E}_{h_{1},\dots,h_{s-2}}|\langle\mathcal{D}_{h}(f,g,k),F_{\vec{h}}(m_{h}(x)\Gamma)\rangle|^{8}\gg\exp(-\delta^{-O(1)}).$
We will then set $\epsilon=\exp(-\log^{O(1)}(1/\delta))$ and continue as usual
in the argument until [12, Lemma 6.1]. We now highlight the improvement to
[12]. Instead of invoking [12, Lemma 6.1], we invoke the following
quantitative improvement to [12, Lemma 6.1]:
###### Lemma 7.1.
Let $\delta\in(0,1/10)$, $N>100$ be prime, and $G/\Gamma$ be a two-step
nilmanifold of dimension $d$, complexity $M$, and degree $k$ equipped with the
standard filtration and let $g(n)\in\mathrm{poly}(\mathbb{Z},G)$ with
$g(n)\Gamma$ periodic modulo $N$. Furthermore, let $F_{1},F_{2},F_{3}$ be
$1$-Lipschitz functions on $G/\Gamma$ with the same nonzero frequency $\xi$.
If
$|\mathbb{E}_{n\in[N]}F_{1}(g(P(n))\Gamma)F_{2}(g(Q(n))\Gamma)\overline{F_{3}(g(P(n)+Q(n))\Gamma)}e(\alpha
P(n)+\beta Q(n))|\geq\delta$
for some frequencies $\alpha,\beta\in\widehat{\mathbb{Z}/N\mathbb{Z}}$, then
either $N\ll(\delta/M)^{-O_{P,Q}(d^{O_{P,Q}(1)})}$ or there exists
$w_{1},\dots,w_{r}\in\Gamma$ with
$\psi_{\mathrm{horiz}}(w_{1}),\dots,\psi_{\mathrm{horiz}}(w_{r})$ linearly
independent and linearly independent horizontal characters
$\eta_{1},\dots,\eta_{d-1-r}$ such that
$|w_{i}|,|\eta_{j}|\leq(\delta/M)^{-O_{P,Q}(d^{O_{P,Q}(1)})}$,
$\langle\eta_{j},w_{i}\rangle=0$ for all $i,j$, and
$\|\xi([w_{i},g])\|_{C^{\infty}[N]}=\|\eta_{j}\circ g\|_{C^{\infty}[N]}=0.$
###### Proof.
The first part of the argument is similar to the first part of the argument in
[12, Lemma 6.1]. We first use the fact that $F$ is a nilcharacter of nonzero
frequency to absorb $\alpha P(n)$ and $\beta Q(n)$ to the vertical component
of $g(P(n))$ and $g(Q(n))$, respectively to obtain $g_{1}(P(n))$ and
$g_{2}(Q(n))$. Since the conclusion only depends on the horizontal component
of $g$, and since the horizontal component of $g_{1}$ and $g_{2}$ agree with
$g$, it follows that we can assume that both $g_{1}$ and $g_{2}$ are $g$ and
$\alpha,\beta=0$. Let $H$ denote the subgroup of $G^{3}$ consisting of
elements $\\{(g_{1},g_{2},g_{3}):g_{1}g_{2}g_{3}^{-1}\in[G,G]\\}$. We claim
that $[H,H]=[G,G]^{3}$. By definition, we see that for $h\in[G,G]$, the elements
$(1,h,h)$, $(h,1,h)$, and $(h,h,h^{4})$ lie inside $[H,H]$ (the last fact
is true because
$[(g_{1},g_{1},g_{1}^{2}),(h_{1},h_{1},h_{1}^{2})]=([g_{1},h_{1}],[g_{1},h_{1}],[g_{1},h_{1}]^{4})$).
This yields that $(1,1,h^{2})$ lies inside $[H,H]$, and because of
connectedness and simple connectedness, it follows that $(1,1,h)\in[H,H]$. We
can verify from there that $[H,H]=[G,G]^{3}$.
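In more detail, the verification combines the displayed generators (a sketch, using that $[G,G]$ is abelian and central):
$(1,h,h)\cdot(h,1,h)=(h,h,h^{2}),\qquad(h,h,h^{4})\cdot(h,h,h^{2})^{-1}=(1,1,h^{2}),$
and since squaring is surjective on the connected, simply connected abelian group $[G,G]$, every $(1,1,h)$ lies in $[H,H]$; together with $(1,h,h)$ and $(h,1,h)$, these generate $[G,G]^{3}$.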
We were given the polynomial sequence
$(g(P(n)),g(Q(n)),g(P(n)+Q(n)))$
on $H$. Since $F_{i}$ are nilcharacters of frequency $\xi$ on $H$,
$F_{1}\otimes F_{2}\otimes F_{3}$ is a nilcharacter on $H$ of frequency
$(\xi,\xi,-\xi)$. Taking a quotient of $H$ by the kernel of $(\xi,\xi,-\xi)$,
which is $(x,y,x+y)$, we obtain that the center is of the form $(x,x,-x)$ with
$(x,y,z)$ being projected to $(x+y-z)(1,1,1)$. Let $H_{1}$ denote the subgroup
with the one-dimensional vertical directions. Applying twosteppolynomial, we
obtain $w_{1},\dots,w_{r}$ and $\eta_{1},\dots,\eta_{d-r}$ such that $\langle
w_{i},\eta_{j}\rangle=0$ and
$\eta_{j}\circ(g(P(n)),g(Q(n)),g(P(n)+Q(n)))\equiv 0\pmod{1}$, and
$\xi([w_{i},(g(P(n)),g(Q(n)),g(P(n)+Q(n)))])\equiv 0\pmod{1}$. Denoting
$\eta_{j}=(\alpha_{j},\beta_{j})$ and $w_{i}=(u_{i},v_{i},u_{i}v_{i})$ and the
action $\eta_{j}(w_{i}):=\alpha_{j}(u_{i})+\beta_{j}(v_{i})$, we see that
$\|\xi([v_{i},g(P(n))])+\xi([u_{i},g(Q(n))])\|_{C^{\infty}[N]}=0$
$\|\alpha_{j}(g(P(n)))+\beta_{j}(g(Q(n)))\|_{C^{\infty}[N]}=0.$
Since $P$ and $Q$ are linearly independent, it follows that there exists some
coefficients $c_{k}x^{k}$, $c_{\ell}x^{\ell}$ of $P$, and $d_{k}x^{k}$ and
$d_{\ell}x^{\ell}$ of $Q$ such that $c_{k}d_{\ell}-d_{k}c_{\ell}\neq 0$. Thus,
the conditions become
$c_{k}\xi([u_{i},g(1)])+d_{k}\xi([v_{i},g(1)])\equiv 0\pmod{1}$
$c_{\ell}\xi([u_{i},g(1)])+d_{\ell}\xi([v_{i},g(1)])\equiv 0\pmod{1}$
which implies, since the relevant coefficients are rational with denominator
$N$ and $N$ is prime, that $\xi([u_{i},g(1)])\equiv 0\pmod{1}$ and
$\xi([v_{i},g(1)])\equiv 0\pmod{1}$.
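Explicitly, writing $x=\xi([u_{i},g(1)])$ and $y=\xi([v_{i},g(1)])$, which are rational with denominator $N$, the two displayed congruences form the linear system
$c_{k}x+d_{k}y\equiv 0\pmod{1},\qquad c_{\ell}x+d_{\ell}y\equiv 0\pmod{1};$
multiplying through by $N$ and using that the determinant $c_{k}d_{\ell}-d_{k}c_{\ell}$ is nonzero, and hence (for $N$ large compared to its height) invertible modulo the prime $N$, forces $Nx\equiv Ny\equiv 0\pmod{N}$, that is, $x\equiv y\equiv 0\pmod{1}$.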
Similarly, we have $\alpha_{j}(g(1))\equiv 0\pmod{1}$ and
$\beta_{j}(g(1))\equiv 0\pmod{1}$. Let $\tilde{G}:=\\{g\in
G:\xi([v_{i},g])=0,\xi([u_{i},g])=0,\alpha_{j}(g)=0,\beta_{j}(g)=0\forall
i,j\\}$. We claim that $\tilde{G}$ is abelian, whence the lemma follows
from an application of [13, Lemma A.9]. This amounts to showing that
for any $g,h\in\tilde{G}$ that $[g,h]=\mathrm{id}_{G}$. For such $g$, $(g,g)$
is annihilated by $(\alpha_{j},\beta_{j})$, and since
$\alpha_{j}(u_{i})+\beta_{j}(v_{i})=0$, it follows that $(g,g)$ can be written
as a combination of $(u_{i},v_{i})$ modulo $[G,G]^{2}$. It follows that
$[(g,g),(h,h)]=\mathrm{id}_{G}$, and thus $[g,h]=\mathrm{id}_{G}$. ∎
We apply this lemma to obtain that the image of $m_{h}$ under the kernel of
$\xi$ lies in an abelian subnilmanifold of rationality at most
$\epsilon^{-O(r)^{O(1)}}$, where this time $r=\log^{O(1)}(1/\epsilon)$. We can
then Fourier expand
$F_{\vec{h}}(m_{h}(x)\Gamma)$ in 7.1 as in the argument after [12, Lemma 6.1]
to eventually obtain
$\|p\|_{U^{s}(\mathbb{Z}/N\mathbb{Z})}\gg\epsilon^{O(r)^{O(1)}}$
which gives the desired estimate for degreelowering. This completes the proof
of asymptotic.
## 8\. The $U^{4}(\mathbb{Z}/N\mathbb{Z})$ inverse theorem
We now prove mainresult4. We restate it for the reader’s convenience. See 4
The hypothesis implies that
$\mathbb{E}_{h}\|\Delta_{h}f\|_{U^{3}}^{8}\geq\delta^{16}.$
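This is the standard recursive identity for the Gowers norms,
$\|f\|_{U^{4}(\mathbb{Z}/N\mathbb{Z})}^{16}=\mathbb{E}_{h}\|\Delta_{h}f\|_{U^{3}(\mathbb{Z}/N\mathbb{Z})}^{8},$
applied to the assumption $\|f\|_{U^{4}(\mathbb{Z}/N\mathbb{Z})}\geq\delta$ of mainresult4.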
An application of the inverse theorem of Sanders [21] combined with the
argument of [5, Theorem 10.9] (see also [14, Appendix A]) gives the following:
###### Theorem 10.
bracketpolynomialu3 There exists some real number $c>0$ with the following
property: if $c>\eta>0$ and $f\colon\mathbb{Z}/N\mathbb{Z}\to\mathbb{C}$ is
one-bounded with
$\|f\|_{U^{3}(\mathbb{Z}/N\mathbb{Z})}\geq\eta,$
then there exist a constant $C>0$, a subset
$S\subseteq\widehat{\mathbb{Z}/N\mathbb{Z}}$ with $|S|\leq\log(1/\eta)^{C}$
and a phase $\phi\colon\mathbb{Z}/N\mathbb{Z}\to\mathbb{R}$ such that
$\phi(n)=\sum_{\alpha,\beta\in S}a_{\alpha,\beta}\\{\alpha\cdot
n\\}\\{\beta\cdot n\\}+\sum_{\alpha\in S}a_{\alpha}\\{\alpha\cdot n\\}$
with $a_{\alpha,\beta},a_{\alpha}\in\mathbb{R}$ and
$|\mathbb{E}_{n\in[N]}f(n)e(\phi(n))|\geq\exp(-\log(1/\eta)^{C}).$
This implies that there is a family of degree two periodic bracket polynomials
$\chi_{h}(n)$, each with at most $\log(1/\delta)^{O(1)}$ many bracketed phases,
and a subset $H\subseteq\mathbb{Z}_{N}$ with $|H|\geq\delta^{O(1)}N$ such that
for any $h\in H$,
$|\mathbb{E}_{n}\Delta_{h}f(n)\chi_{h}(n)|\geq\delta^{\log(1/\delta)^{O(1)}}.$
We will now use [8, Proposition 6.1]:
###### Proposition 8.1.
additivequadruples Let $f_{1},f_{2}\colon\mathbb{Z}/N\mathbb{Z}\to\mathbb{C}$
be one-bounded and $H\subseteq\mathbb{Z}/N\mathbb{Z}$ a set of cardinality
$\eta N$ such that for each $h\in H$,
$|\mathbb{E}_{n\in\mathbb{Z}/N\mathbb{Z}}f_{1}(n)f_{2}(n+h)\chi_{h}(n)|\geq\delta.$
Then for at least $\eta^{8}\delta^{4}N^{3}/2$ many quadruples
$(h_{1},h_{2},h_{3},h_{4})$ in $H$ satisfying $h_{1}+h_{2}=h_{3}+h_{4}$,
$|\mathbb{E}_{n\in\mathbb{Z}/N\mathbb{Z}}\chi_{h_{1}}(n)\chi_{h_{2}}(n+h_{1}-h_{4})\overline{\chi_{h_{3}}(n)\chi_{h_{4}}(n+h_{1}-h_{4})}|\gg\eta^{4}\delta^{2}.$
###### Proof.
We follow [8, Proposition 6.1]. We extend $\chi_{h}$ to be zero for $h\not\in
H$. The condition implies that
$\mathbb{E}_{h}1_{H}(h)|\mathbb{E}_{n}f_{1}(n)f_{2}(n+h)\chi_{h}(n)|^{2}\gg\delta^{2}\eta.$
Expanding out, we obtain
$\mathbb{E}_{h}1_{H}(h)\mathbb{E}_{n,n^{\prime}}f_{1}(n)\overline{f_{1}(n^{\prime})}f_{2}(n+h)\overline{f_{2}(n^{\prime}+h)}\chi_{h}(n)\overline{\chi_{h}(n^{\prime})}\gg\delta^{2}\eta.$
Making a change of variables $h=m-n$, $n^{\prime}=n+k$, we obtain
$\mathbb{E}_{m,n,k}1_{H}(m-n)f_{1}(n)\overline{f_{1}(n+k)}f_{2}(m)\overline{f_{2}}(m+k)\Delta_{k}\chi_{m-n}(n)\gg\delta^{2}\eta.$
Applying Cauchy-Schwarz twice, we obtain
$\mathbb{E}_{k,m,n,m^{\prime},n^{\prime}}1_{H}(m-n)\Delta_{k}\chi_{m-n}(n)$
$\overline{1_{H}(m^{\prime}-n)\Delta_{k}\chi_{m^{\prime}-n}(n)1_{H}(m-n^{\prime})\Delta_{k}\chi_{m-n^{\prime}}(n^{\prime})}1_{H}(m^{\prime}-n^{\prime})\Delta_{k}\chi_{m^{\prime}-n^{\prime}}(n^{\prime})\gg\eta^{4}\delta^{8}.$
This can be rewritten as
$\mathbb{E}_{h_{1}+h_{2}=h_{3}+h_{4}}|\mathbb{E}_{n}1_{H}(h_{1})\chi_{h_{1}}(n)1_{H}(h_{2})\chi_{h_{2}}(n+h_{1}-h_{4})\overline{1_{H}(h_{3})\chi_{h_{3}}(n)1_{H}(h_{4})\chi_{h_{4}}(n+h_{1}-h_{4})}|^{2}\gg\eta^{4}\delta^{8}$
as desired. ∎
We thus have that for at least $\delta^{O(\log(1/\delta))^{O(1)}}N^{3}$ many
additive quadruples, that is, quadruples $(h_{1},h_{2},h_{3},h_{4})$ with
$h_{1}+h_{2}=h_{3}+h_{4}$, we have
$|\mathbb{E}_{n}\chi_{h_{1}}(n)\chi_{h_{2}}(n+h_{1}-h_{4})\overline{\chi_{h_{3}}(n)\chi_{h_{4}}(n+h_{1}-h_{4})}|\geq\delta^{\log(1/\delta)^{O(1)}}.$
The remainder of this section is devoted to the following.
* 1.
Defining nilcharacters and giving constructions of three-step nilpotent Lie
groups. We will show that we can take $\chi_{h}(n)=F(g_{h}(n)\Gamma)$ for some
_Fourier expanded nilcharacter_ $F(g_{h}(n)\Gamma)$ (to be defined in Section
8.1). This occupies Sections 8.1 and 8.2.
* 2.
Analyzing the above inequality to glean structure out of $\chi_{h}$ on some
dense subset of $\mathbb{Z}_{N}$. This will involve a “sunflower-type
decomposition”, and a “linearization argument.” These are analogous to [8,
Section 7, Section 8]. This argument occupies Sections 8.3 and 8.4 and is
almost all of the improvement over [8].
* 3.
Inserting this structure to deduce the inverse theorem. This comprises the
“symmetry and integration steps.” This is analogous to [8, Section 9] and we
essentially follow their argument. This argument will take place in Section
8.5.
Before we proceed, we shall specify some notation.
###### Definition 8.1 (Lower order terms).
lowerorderterms The quantity $[\text{Lower order terms}]$ denotes a sum of
$d^{O(1)}$ quantities of the following form:
* •
$a\\{\alpha_{h}n\\}\\{\beta_{h}n\\}$ or $a\\{\alpha h\\}\\{\beta n\\}$ or
$a\\{\alpha h_{i}\\}\\{\beta n\\}$
* •
$a_{h}\\{\alpha_{h}n\\}$ or $a_{h_{i}}\\{\alpha_{h_{i}}n\\}$
* •
$\alpha h$
where all instances of $\alpha,\beta\in\mathbb{R}$ are rational with
denominator $N$ and $|a|\leq\exp(\log(1/\delta)^{O(1)})$.
The point of lower order terms is that they can be eliminated via
onevarfouriercomplexity and bilinearfouriercomplexity. We will also use the
following shorthand.
###### Definition 8.2 (Equal up to lower order terms).
shorthandequal We say that $a\equiv b$ if $a=be([\text{Lower order terms}])$.
### 8.1. Bracket polynomials and Fourier expanded nilcharacters
In this section, we give precise definitions of the notation for bracket
polynomials that we work with. We refer the reader to [13, Section 2] for various
notions of Mal’cev bases.
###### Definition 8.3 (Periodic Fourier expanded nilcharacter).
Given a degree two, two-step nilmanifold $G/\Gamma$ with one-dimensional
vertical torus, we define a _Fourier expanded nilcharacter_ , which we
shorten to _nilcharacter_ , on $G/\Gamma$ as follows. Write
$\mathcal{X}=\\{X_{1},\dots,X_{d-1},Y\\}$ for the Mal’cev basis of $G/\Gamma$.
Then, letting $g(n)$ be a polynomial sequence on $G/\Gamma$ with $g(n)\Gamma$
periodic modulo $N$ and $\psi(g(n))=(\alpha_{1}n,\dots,\alpha_{d-1}n,P(n))$, we
let (noting, by calculations done in [6, Appendix B], that this is indeed a
function on $G/\Gamma$)
$F(g(n)\Gamma)=e(-k\sum_{i<j}C_{[i,j]}\alpha_{i}n[\alpha_{j}n]+kP(n))$
where $C_{[i,j]}Y=[X_{i},X_{j}]$ with $C_{[i,j]}$ an integer bounded by $Q$.
We call $k$ the _frequency_ of $F$. We define $\omega=k[\cdot,\cdot]$
to be the _asymmetric bilinear form associated_ to $F$ on
$\text{Span}(X_{1},\dots,X_{d-1})$.
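To illustrate the definition in the simplest case (our example, not from the source): take the Heisenberg group, $d=3$, $\mathcal{X}=\\{X_{1},X_{2},Y\\}$ with $[X_{1},X_{2}]=Y$, so $C_{[1,2]}=1$ and all other structure constants vanish:

```latex
% With \psi(g(n)) = (\alpha_1 n, \alpha_2 n, P(n)), the definition gives
F(g(n)\Gamma) = e\bigl(-k\,\alpha_{1}n[\alpha_{2}n] + kP(n)\bigr),
% and since [x] = x - \{x\}, this equals
e\bigl(k\,\alpha_{1}n\{\alpha_{2}n\} - k\,\alpha_{1}\alpha_{2}n^{2} + kP(n)\bigr),
% i.e. a bracket-quadratic phase, with the genuine quadratic term
% absorbable into the polynomial part P(n).
```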
As the proof will perform many changes of basis, we require the following
lemma.
###### Lemma 8.1.
changeofvar Let $Q\geq 2$ and $G/\Gamma$ be a two-step nilmanifold with Mal’cev
basis $\mathcal{X}=\\{X_{1},\dots,X_{d-1},Y\\}$ of complexity $Q$. Let
$\mathcal{X}^{\prime}=\\{X_{1}^{\prime},\dots,X_{d-1}^{\prime},Y\\}$ be a
basis of $\mathfrak{g}$ with $X_{i}^{\prime}$ a $Q$-integer combination of
elements in $\mathcal{X}$. Then the following hold.
* •
Letting
$\tilde{\Gamma}=\\{\exp(t_{1}X_{1}^{\prime})\exp(t_{2}X_{2}^{\prime})\cdots\exp(t_{d-1}X_{d-1}^{\prime})\exp(sY):t_{1},\dots,t_{d-1},s\in\mathbb{Z}\\}$,
$G/\tilde{\Gamma}$ is a nilmanifold equipped with a Mal’cev basis
$\mathcal{X}^{\prime}$ of complexity $Q^{O(d^{O(1)})}$.
* •
If $F(g(n)\Gamma)$ is a Fourier-expanded nilcharacter on $G/\Gamma$, then
there exists a periodic Fourier-expanded nilcharacter
$\tilde{F}(\tilde{g}(n)\tilde{\Gamma})$, with frequency $k=4$, such that
$F(g(2n)\Gamma)=\tilde{F}(\tilde{g}(n)\tilde{\Gamma})e(\text{[Lower order terms]}).$
###### Proof.
We first prove the first item. Note that since $\mathcal{X}$ is a
Mal’cev basis,
$[\exp(X_{i}),\exp(X_{j})]=\exp([X_{i},X_{j}])\in\exp(\mathbb{Z}Y)$, so the
structure constants of the Mal’cev basis $\mathcal{X}$ are integers. This
implies that $\exp([X_{i}^{\prime},X_{j}^{\prime}])\in\exp(\mathbb{Z}Y)$ so
$\tilde{\Gamma}$ is a group. Since we may find a bounded fundamental domain
for $G/\tilde{\Gamma}$, it follows by Bolzano–Weierstrass and the Whitney
embedding theorem that $G/\tilde{\Gamma}$ is compact. Also, by Cramer’s rule,
$\mathcal{X}^{\prime}$ has complexity $Q^{O(d^{O(1)})}$.
We now turn to the second item. Let
$\psi_{\mathcal{X}}(g(n))=(\alpha_{1}n,\dots,\alpha_{d-1}n,P(n))$
and
$\psi_{\mathcal{X}^{\prime}}(g(n))=(\tilde{\alpha}_{1}n,\dots,\tilde{\alpha}_{d-1}n,P(n)).$
Note that we have
$\sum_{i}\tilde{\alpha_{i}}X_{i}^{\prime}=\sum_{i}\tilde{\alpha_{i}}\sum_{j}a_{ij}X_{j}=\sum_{j}\left(\sum_{i}a_{ij}\tilde{\alpha_{i}}\right)X_{j}$
with $a_{ij}$ integers at most $Q$, so
$\alpha_{j}=\sum_{i}a_{ij}\tilde{\alpha_{i}}.$
In addition, we have (for a vector-valued [Lower order terms])
$\sum_{i}\tilde{\alpha_{i}}X_{i}^{\prime}=\sum_{j}\alpha_{j}X_{j}$
and
$\sum_{i}[\tilde{\alpha_{i}}n]X_{i}^{\prime}=\sum_{i}[\tilde{\alpha_{i}}n]\sum_{j}a_{ij}X_{j}=\sum_{j}[\alpha_{j}n]X_{j}+[\text{Lower
order terms}]\cdot\mathcal{X}.$
By the identity
$\alpha n[\beta n]-\beta n[\alpha n]\equiv\alpha\beta n^{2}+2\alpha n[\beta
n]\pmod{1}$
it follows that there exist quadratic polynomials $P_{1}$ and $P_{2}$ such
that
$F(g(n)\Gamma)=e(-k/2\sum_{i,j}[\sum_{i}\alpha_{i}nX_{i},\sum_{i}[\alpha_{i}n]X_{i}]+P_{1}(n)+[\text{Lower
order terms}])$
and
$F(g(n)\Gamma)=e(-k/2\sum_{i,j}[\sum_{i}\tilde{\alpha_{i}}nX_{i}^{\prime},\sum_{i}[\tilde{\alpha_{i}}n]X_{i}^{\prime}]+P_{2}(n)+[\text{Lower
order terms}]).$
It follows that letting $Q=P_{2}-P_{1}$ and
$\psi_{\mathcal{X}^{\prime}}(\tilde{g}(n))=(\tilde{\alpha}_{1}n,\dots,\tilde{\alpha}_{d-1}n,Q(n)),$
we obtain (the purpose of working with $2n$ being that it cancels out with the
factor of $\frac{1}{2}$ present in $k/2$)
$F(g(2n)\Gamma)=\tilde{F}(\tilde{g}(n)\tilde{\Gamma})e(\text{[Lower order terms]}).$
By considering a modular inverse $m$ of $2$, and modifying $\tilde{g}(n)$ to
be
$((\tilde{\alpha}_{1}2m)n,\dots,(\tilde{\alpha}_{d-1}2m)n,Q(2mn))$
it follows that $Q(2m\cdot)$ is divisible by $2$, and since the leading
coefficient of the bracket part of $\tilde{F}(\tilde{g})$ is divisible by $4$,
it follows that we can take our frequency to be $k=4$. ∎
Finally, we will need a lemma that converts a bracket polynomial to a Fourier
expanded nilcharacter.
###### Lemma 8.2.
u3fourierexpandednilcharacter Let
$\alpha_{1},\dots,\alpha_{d},\beta_{1},\dots,\beta_{d}\in\mathbb{R}$ be
rationals with denominator $N$ and define a function
$\phi\colon\mathbb{Z}/N\mathbb{Z}\to\mathbb{R}$ via
$\phi(n)=\sum_{i=1}^{d}a_{i}\\{\alpha_{i}n\\}\\{\beta_{i}n\\}+\sum_{\alpha\in S}a_{\alpha}\\{\alpha n\\}.$
Then there exists a Fourier expanded nilcharacter $F(g(n)\Gamma)$ of
complexity $2$ and frequency $1$ such that
$e(\phi(n))=F(g(n)\Gamma)e([\text{Lower order terms}]).$
###### Proof.
An application of the $U^{3}$ inverse theorem gives us a bracket polynomial of
the form
$\sum_{i}a_{i}\\{\alpha_{i}n\\}\\{\beta_{i}n\\}.$
We write
$a_{i}\\{\alpha_{i}n\\}\\{\beta_{i}n\\}=[a_{i}](\alpha_{i}n-[\alpha_{i}n])(\beta_{i}n-[\beta_{i}n])+\\{a_{i}\\}\\{\alpha_{i}n\\}\\{\beta_{i}n\\}.$
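The splitting above is just the decomposition $a_{i}=[a_{i}]+\\{a_{i}\\}$ together with $\\{x\\}=x-[x]$, written out:

```latex
a_{i}\{\alpha_{i}n\}\{\beta_{i}n\}
  = \bigl([a_{i}]+\{a_{i}\}\bigr)\{\alpha_{i}n\}\{\beta_{i}n\}
  = [a_{i}]\bigl(\alpha_{i}n-[\alpha_{i}n]\bigr)\bigl(\beta_{i}n-[\beta_{i}n]\bigr)
    + \{a_{i}\}\{\alpha_{i}n\}\{\beta_{i}n\}.
```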
Defining
$G=\begin{pmatrix}1&\mathbb{R}&\cdots&\cdots&\mathbb{R}\\\
0&1&\cdots&0&\mathbb{R}\\\ 0&0&1&\cdots&\mathbb{R}\\\ 0&0&0&\ddots&\vdots\\\
0&0&0&\cdots&1\end{pmatrix},$
$\Gamma=\begin{pmatrix}1&\mathbb{Z}&\cdots&\cdots&\mathbb{Z}\\\
0&1&\cdots&0&\mathbb{Z}\\\ 0&0&1&\cdots&\mathbb{Z}\\\ 0&0&0&\ddots&\vdots\\\
0&0&0&\cdots&1\end{pmatrix},$
with $G$ being $(2d+1)$-dimensional, where $X_{1},\dots,X_{d}$ represent the
coordinates (from left to right) of the first row, $X_{d+1},\dots,X_{2d}$ the
coordinates (from top to bottom) of the last column, and $Y$ the top-right
coordinate, we see that a
bracket polynomial of the form
$[a_{i}](\alpha_{i}n-[\alpha_{i}n])(\beta_{i}n-[\beta_{i}n])+\\{a_{i}\\}\\{\alpha_{i}n\\}\\{\beta_{i}n\\}$
can be realized (up to lower order terms periodic in $N$) as a Fourier
expanded nilcharacter on $G/\Gamma$ with $d$ being proportional to the number
of brackets in the sum. ∎
Thus, we may assume that $\chi_{h}(n)=F(g_{h}(n)\Gamma)$ for some Fourier
expanded nilcharacter of frequency $4$.
### 8.2. Three-step nilmanifold constructions
This section gives explicit constructions of (approximate)
nilsequences representing certain degree $3$ bracket polynomials. It is meant
to be skimmed on a first reading.
###### Lemma 8.3.
Let
$\alpha_{1},\dots,\alpha_{k},\beta_{1},\dots,\beta_{k},\gamma_{1},\dots,\gamma_{k},\alpha_{1}^{\prime},\dots,\alpha_{\ell}^{\prime},\beta_{1}^{\prime},\dots,\beta_{\ell}^{\prime}\in\mathbb{R}$.
Consider
$e(-\sum_{j=1}^{k}\alpha_{j}n\\{\beta_{j}n\\}\\{\gamma_{j}n\\}-\sum_{j=1}^{\ell}\alpha_{j}^{\prime}n^{2}\\{\beta_{j}^{\prime}n\\}).$
There exists a nilmanifold $G/\Gamma$ of degree $3$, complexity $O(1)$, and
dimension $O(k)$, and an approximate nilsequence $F(g(n)\Gamma)$ with $F$
$O(d)$-Lipschitz outside of the boundary of the standard fundamental domain
$\psi^{-1}((-1/2,1/2]^{d})$ of $G/\Gamma$ such that
$F(g(n)\Gamma)=e(-\sum_{j=1}^{k}\alpha_{j}n\\{\beta_{j}n\\}\\{\gamma_{j}n\\}-\sum_{j=1}^{\ell}\alpha_{j}^{\prime}n^{2}\\{\beta_{j}^{\prime}n\\}).$
###### Proof.
We give two proofs. The first will follow [8]. We let $G$ be the free
$3$-step Lie group:
$G=\\{e_{1}^{t_{1}}e_{2}^{t_{2}}e_{3}^{t_{3}}e_{21}^{t_{21}}e_{211}^{t_{211}}e_{31}^{t_{31}}e_{311}^{t_{311}}e_{32}^{t_{32}}e_{322}^{t_{322}}e_{212}^{t_{212}}e_{312}^{t_{312}}e_{213}^{t_{213}}e_{313}^{t_{313}}e_{323}^{t_{323}}\\}$
where the $t_{i},t_{ij},t_{ijk}$ range over $\mathbb{R}$ and with the
relations $[e_{i},e_{j}]=e_{ij}$ and $[[e_{i},e_{j}],e_{k}]=e_{ijk}$ and the
Jacobi identity
$[[e_{i},e_{j}],e_{k}][[e_{j},e_{k}],e_{i}][[e_{k},e_{i}],e_{j}]=1.$
We also take the lattice $\Gamma$ to be the subgroup of the above group where
the $t_{i},t_{ij},t_{ijk}$ range over $\mathbb{Z}$. Letting
$s_{i},s_{ij},s_{ijk}$ be the coordinates in the fundamental domain of
$G/\Gamma$, we see that
$s_{i}=\\{t_{i}\\},s_{ij}=\\{t_{ij}-t_{i}[t_{j}]\\},s_{ijk}=\\{t_{ijk}-t_{ik}[t_{j}]-t_{ij}[t_{i}]+t_{i}[t_{j}][t_{k}]\\}.$
Thus, this expresses $e_{1}^{\alpha n}e_{2}^{\beta n}e_{3}^{\gamma n}$ as
$e([\alpha n][\beta n]\gamma n)$ together with lower order terms that we can
handle using the constructions from earlier parts.
The second proof follows [9]. Let $G_{1}$ be the subgroup of the above matrix
group we constructed which is generated by the elements where only the first
(top) row is nonzero. This can be generated by elements $x_{i}$, which
correspond to elements where only the $i$th element (from the left) of the
first row is nonzero, and $z$ which represents the rightmost coordinate of the
first row. Let $y_{i}$ be the $i+1$th element (starting from the top) of the
last column (from the left). The approximate nilsequence we desire to
replicate is thus
$g(n)=x_{1}^{\alpha_{1}n\\{\beta_{1}n\\}}\cdots
x_{k}^{\alpha_{k}n\\{\beta_{k}n\\}}x_{k+1}^{\alpha_{1}^{\prime}n^{2}}\cdots
x_{k+\ell}^{\alpha_{\ell}^{\prime}n^{2}}$ $y_{1}^{\gamma_{1}n}\cdots
y_{k}^{\gamma_{k}n}y_{k+1}^{\beta_{1}^{\prime}n}\cdots
y_{k+\ell}^{\beta_{\ell}^{\prime}n}$
and $F(x\Gamma)=e(\\{z-x[y]\\})$. To define this to be a three-step
nilsequence, we define a group $H=\langle x_{1},x_{2},\dots,x_{k},z\rangle$
and we consider the semidirect product $G\ltimes H$ with the action being
conjugation. We define the action $\rho^{k}$ of $\mathbb{R}^{k}$ on $G\ltimes
H$ by $\rho(t)(g,g_{1}):=(gg_{1}^{t},g_{1})$, and finally consider
$G^{\prime}=\mathbb{R}^{k}\rtimes_{\rho^{k}}(G\ltimes H)$. One can check that
this is a three-step nilpotent Lie group with respect to the lower central
series. We now define
$\Gamma^{\prime}=\mathbb{Z}^{k}\rtimes_{\rho^{k}}(\Gamma\ltimes(H\cap\Gamma))$ and
consider the polynomial sequence
$g^{\prime}(n)=(0,(x_{k+1}^{\alpha_{1}^{\prime}n^{2}}\cdots
x_{k+\ell}^{\alpha_{\ell}^{\prime}n^{2}}y_{1}^{\gamma_{1}n}\cdots
y_{k}^{\gamma_{k}n}\cdots y_{k+\ell}^{\beta_{\ell}^{\prime}n},$
$x_{1}^{\alpha_{1}n}\cdots
x_{k}^{\alpha_{k}n}),(\beta_{1}n,\beta_{2}n,\dots,\beta_{k}n),(id,id)).$
and the function
$F^{\prime}((t,(g,g^{\prime}))\Gamma^{\prime})=F(g\Gamma).$
It follows then that
$g^{\prime}(n)\Gamma^{\prime}=(\\{\beta_{1}n\\},\dots,\\{\beta_{k}n\\},(g(n),x_{1}^{\alpha_{1}n}\cdots
x_{k}^{\alpha_{k}n}))\Gamma^{\prime}$
so that
$F^{\prime}(g^{\prime}(n)\Gamma^{\prime})=e(-\sum_{j}\alpha_{j}n\\{\beta_{j}n\\}[\gamma_{j}n]-\sum_{j}\alpha_{j}^{\prime}n^{2}[\beta_{j}^{\prime}n]).$
Finally, to complete the nilmanifold construction, we use
$[\gamma_{j}n]=\gamma_{j}n-\\{\gamma_{j}n\\}$ and
$[\beta_{j}^{\prime}n]=\beta_{j}^{\prime}n-\\{\beta_{j}^{\prime}n\\}$ and
apply a very similar nilmanifold construction to the remaining terms we get
under this expansion. ∎
### 8.3. Sunflower type decomposition
We will now deduce one of the primary improvements to [8]. This corresponds to
[8, Section 7].
#### 8.3.1. A few reductions
We now make a set of reductions. Our first claim is that
$\chi_{h_{1}}(n)\chi_{h_{2}}(n+h_{1}-h_{4})\overline{\chi_{h_{3}}(n)\chi_{h_{4}}(n+h_{1}-h_{4})}$
is a Fourier expanded nilcharacter. To see this, by standard bracket
polynomial manipulations (see e.g., [8] and Section 3), we have
$\chi_{h_{2}}(n+h_{1}-h_{4})\overline{\chi_{h_{4}}(n+h_{1}-h_{4})}\equiv\chi_{h_{2}}(n)\overline{\chi_{h_{4}}(n)}e(\alpha_{h_{1},h_{2},h_{3},h_{4}}n^{2}+\beta_{h_{1},h_{2},h_{3},h_{4}}n+f(h)).$
Next, we observe that the tensor of two free nilcharacters,
$n\mapsto\chi_{1}(n)\chi_{2}(n)$, forms a free nilcharacter as
well, obtained by taking a product of the corresponding nilsequences
$F_{1}(g_{1}(n)\Gamma_{1})F_{2}(g_{2}(n)\Gamma_{2})=\tilde{F}(\tilde{g}(n)\tilde{\Gamma})$
with $\tilde{F}(x,y)=F_{1}(x)F_{2}(y)$, $\tilde{G}=G_{1}\times
G_{2}/\text{ker}(Y_{1}+Y_{2})$ where $Y_{i}$ is the vertical component of
$G_{i}$. Let $\pi:G_{1}\times G_{2}\to\tilde{G}$ be the quotient map. We note
that the induced asymmetric bilinear form on $\tilde{G}$ is
$k_{1}\omega_{1}+k_{2}\omega_{2}$ where $k_{i}$ is the frequency of $F_{i}$.
We thus have $\tilde{g}=\pi(g_{1}(n),g_{2}(n))$ and
$\tilde{\Gamma}=\pi(\Gamma_{1}\times\Gamma_{2})$. Finally, we observe that
modifying the quadratic term does not change whether or not a function is a
free nilcharacter. Hence, we’ve realized
$\chi_{h_{1}}(n)\chi_{h_{2}}(n+h_{1}-h_{4})\overline{\chi_{h_{3}}(n)\chi_{h_{4}}(n+h_{1}-h_{4})}$
as a free nilcharacter on
$G^{(4)}:=G^{4}/\mathrm{ker}(Y_{1}+Y_{2}-Y_{3}-Y_{4})$.
#### 8.3.2. Sunflower iteration
If $\chi_{h}(n)=F(g_{h}(n)\Gamma)$, we define
$\tilde{\chi_{h}}(n)=F(g_{h}(n)\Gamma)\psi(g_{h}(n)\Gamma)$ where $\psi$ is a
smooth cutoff function on the horizontal torus
$\mathbb{R}^{d}/\mathbb{Z}^{d}\cong[-1/2,1/2)^{d}$ supported on
$[-1/2+\delta^{O(d)^{O(1)}},1/2-\delta^{O(d)^{O(1)}}]^{d}$. It follows that
$\|\chi_{h}-\tilde{\chi}_{h}\|_{L^{1}[N]}\leq\delta^{O(d^{O(1)})},$
so we may replace $\chi_{h}$ with $\tilde{\chi}_{h}$ at the cost of shrinking
the right hand side of the inequality slightly (it will still appear as
$\delta^{O(d)^{O(1)}}$).
For simplicity, we shall relabel the $\delta^{\log(1/\delta)^{O(1)}}$ as just
$\delta$ and $\tilde{\chi}_{h}$ as simply $\chi_{h}$ for the remainder of this
section. Similarly to [8], for some subset $H\subseteq\mathbb{Z}_{N}$ and
$h\in H$, we let:
* •
$\Xi$ denote the phases of the Fourier expanded nilcharacter;
* •
$\Xi_{h}$ be the $h$-dependent phases of the Fourier expanded nilcharacter;
* •
$\Xi_{*}$ be the $h$-independent phases;
* •
$\Xi_{h}^{G}$ be the subgroup of $G$ generated by the $h$-dependent
dimensions;
* •
and $\Xi_{*}^{G}$ the subgroup of $G$ generated by the $h$-independent
dimensions.
Given a nilmanifold $G/\Gamma$ with Mal’cev basis $(X_{1},\dots,X_{d-1},Y)$
for the lower central series filtration with one-dimensional vertical
component, and a polynomial sequence of the form
$\psi(g(n))=(\alpha_{1}n,\dots,\alpha_{d-1}n,P(n))$, we denote
$\psi_{horiz}(g(n))=\sum_{i}\alpha_{i}X_{i}.$
We shall also write
$\psi_{horiz}(g)\equiv\sum_{i}\beta_{i}X_{i}\pmod{\Gamma}$
to signify that $\psi_{horiz}(g)$ is equal to $\sum_{i}\beta_{i}X_{i}$ up to
an element in the $\mathbb{Z}$-span of $(X_{1},\dots,X_{d-1})$. We are finally
ready to state the sunflower step.
###### Lemma 8.4.
sunflower Suppose for $\delta|H|^{3}$ many additive quadruples
$(h_{1},h_{2},h_{3},h_{4})$ inside $H^{4}$ with $|H|\geq\delta N$ that
$|\mathbb{E}_{n\in[N]}\chi_{h_{1}}(n)\chi_{h_{2}}(n+h_{1}-h_{4})\overline{\chi_{h_{3}}(n)\chi_{h_{4}}(n+h_{1}-h_{4})}|\geq\delta.$
Then there exists a subset $H^{\prime}\subseteq H$ such that
$|H^{\prime}|\geq\delta^{O(d)^{O(1)}}|H|$ and for each $h\in H^{\prime}$,
there exists some $r>0$ and fixed $\alpha_{1},\dots,\alpha_{r}$ such that
denoting
$\psi_{horiz}(G)=\mathrm{span}(X_{1},\dots,X_{k},Z_{1},\dots,Z_{\ell})$ the
Mal’cev basis with complexity at most $\delta^{-O(d)^{O(1)}}$, there exists
some integer $|q|\leq\delta^{-O(d)^{O(1)}}$ and some ordering of the Mal’cev
basis $(X_{1},\dots,X_{k},Z_{1},\dots,Z_{\ell})$ such that denoting
$\psi_{horiz}(g_{h})=\sum_{i}\xi_{h}^{i}X_{i}+\sum_{j}\xi_{*}^{j}Z_{j},$
we have
$\psi_{horiz}(g_{h}(q\cdot))\equiv\alpha_{1}X_{1}+\cdots+\alpha_{r}X_{r}+\xi_{h}^{r+1}Y_{r+1}+\cdots+\xi_{h}^{k}Y_{k}+\xi_{*}^{1}Z_{1}+\cdots+\xi_{*}^{\ell}Z_{\ell}\pmod{\Gamma}$
where $X_{1},\dots,X_{r},Y_{r+1},\dots,Y_{k},Z_{1},\dots,Z_{\ell}$ are
linearly independent, each $Y_{j}$ is a $\delta^{-O(d)^{O(1)}}$-rational
combination of the $X_{i}$’s, and the asymmetric bilinear form
$\omega$ restricted to $\mathrm{span}(Y_{r+1},\dots,Y_{k})$ vanishes.
###### Remark 8.1.
This lemma will be used (following the procedure in Section 7.2) to change
bases from $(X_{1},\dots,X_{k},Z_{1},\dots,Z_{\ell})$ to
$(X_{1},\dots,X_{r},Y_{r+1},\dots,Y_{k},Z_{1},\dots,Z_{\ell})$, reordered
so that the $Y_{i}$’s appear before the $X_{i}$’s and the $Z_{i}$’s, with the
“new” $X_{i}$’s consisting of $Y_{r+1},\dots,Y_{k}$ and the “new” $Z_{j}$’s
consisting of $X_{1},\dots,X_{r},Z_{1},\dots,Z_{\ell}$. Additionally, since we
can accomplish this lemma in one single application of [13, Theorem 8], it
follows that we do not need to consider $h$-independent phases in the
hypothesis for our argument of the main theorem.
###### Proof.
Let $\omega$ denote the asymmetric bilinear form associated to $G/\Gamma$
representing the commutator bracket. Let $d=k+\ell+1$ be the dimension of $G$.
Let $G_{(4)}=\\{x\in G^{(4)}:\text{the }Z_{i}\text{ components of }x\text{ are
equal}\\}$. The commutator bracket on $G^{(4)}$ is of the form
$\omega_{1}+\omega_{2}-\omega_{3}-\omega_{4}$ where $\omega_{i}$ represents
the commutator bracket on the $i$th coordinate of $G^{4}$. We let
$\tilde{\omega}$ be the restriction of that commutator bracket to $G_{(4)}$.
Then by twostepcor, and Pigeonholing in $(h_{1},h_{2},h_{3},h_{4})$, one can
find a $\delta^{-O(d)^{O(1)}}$-rational subspace $V$ with
$\tilde{\omega}(V,V)=0$ such that for a set $R$ of at least
$\delta^{O(d)^{O(1)}}N^{3}$ many additive quadruples
$(h_{1},h_{2},h_{3},h_{4})$, (up to an integer shift possibly depending on
$h_{1},h_{2},h_{3},h_{4}$)
$\psi_{horiz}(g_{h_{1}},g_{h_{2}},g_{h_{3}},g_{h_{4}},g_{*})$ lies in $V$. Let
$V_{123}=\pi_{123}(V)$ be the projection into the first three coordinates and
the $h$-independent coordinate and $V_{124}=\pi_{124}(V)$ be the projection of
$V$ into the first, second, and fourth coordinates and the $h$-independent
coordinate. We write
$V_{123}=\\{(g_{1},g_{2},g_{3},g_{*}):\eta_{h_{1}}(g_{1})+\eta_{h_{2}}(g_{2})+\eta_{h_{3}}(g_{3})+\eta_{*}(g_{*})=0\forall\eta=(\eta_{h_{1}},\eta_{h_{2}},\eta_{h_{3}},\eta_{*})\in
S\\}$
where $S$ consists of linearly independent integer vectors. Similarly, we
write
$V_{124}=\\{(g_{1},g_{2},g_{4},g_{*}):\eta_{h_{1}}^{\prime}(g_{1})+\eta_{h_{2}}^{\prime}(g_{2})+\eta_{h_{4}}^{\prime}(g_{4})+\eta_{*}^{\prime}(g_{*})=0\forall\eta^{\prime}=(\eta_{h_{1}}^{\prime},\eta_{h_{2}}^{\prime},\eta_{h_{4}}^{\prime},\eta_{*}^{\prime})\in
S^{\prime}\\}.$
Since $V$ is $\delta^{-O(d)^{O(1)}}$-rational, it follows that $V_{123}$ is
also $\delta^{-O(d)^{O(1)}}$-rational, so by Cramer’s rule (e.g., [13, Lemma
A.8]), it follows that we may take all $\eta_{h_{i}}$,
$\eta_{h_{i}}^{\prime}$, $\eta_{*}$, $\eta_{*}^{\prime}$ to be of size
$\delta^{-O(d)^{O(1)}}$. Let
$W_{1}=\\{v\in\text{span}(X_{1},\dots,X_{k}):(v,0,0,0)\in V_{123}\\}$
$W_{2}=\\{v\in\text{span}(X_{1},\dots,X_{k}):(v,0,0,0)\in V_{124}\\}.$
Note that $v$ lies in $W_{1}$ if and only if $\eta_{h_{1}}(v)=0$ for all
$\eta\in S$ and similarly $w\in W_{2}$ if and only if
$\eta_{h_{1}}^{\prime}(w)=0$ for all $\eta^{\prime}\in S^{\prime}$.
Furthermore, if $v\in W_{1}$ and $w\in W_{2}$, then
$(v,0,0,0)\in V_{123}$ lifts to $(v,0,0,z,0)\in V$ and
$(w,0,0,0)\in V_{124}$ lifts to $(w,0,z^{\prime},0,0)\in V$ for elements
$z,z^{\prime}\in\text{span}(Z_{1},\dots,Z_{\ell})$. Thus, denoting by $\tilde{v}$
and $\tilde{w}$ these lifts, since $\tilde{\omega}(V,V)=0$ we have
$0=\tilde{\omega}(\tilde{v},\tilde{w})=\omega(v,w)$. Hence
$\omega(W_{1},W_{2})=0$. Let $W=W_{1}\cap W_{2}$. Note that an element $w\in
W$ if and only if for each $\eta\in S$ and $\eta^{\prime}\in S^{\prime}$ that
$\eta_{h_{1}}(w)=0$ and $\eta_{h_{1}}^{\prime}(w)=0$.
By the Pigeonhole principle, there are thus $\delta^{O(d)^{O(1)}}N$ many
$h_{1}$ which are each part of at least $\delta^{O(d)^{O(1)}}N^{2}$ additive
quadruples $(h_{1},h_{2},h_{3},h_{4})$ in $R$. Thus, there are at least
$\delta^{O(d)^{O(1)}}N^{5}$ many elements
$(h_{1},h_{2},h_{3},h_{2}^{\prime},h_{4}^{\prime})$ such that
$(h_{1},h_{2},h_{3},h_{1}+h_{2}-h_{3})\in R$ and
$(h_{1},h_{2}^{\prime},h_{1}+h_{2}^{\prime}-h_{4}^{\prime},h_{4}^{\prime})\in
R$. By the Pigeonhole principle, there exist choices $(h_{2},h_{3})$ and
$(h_{2}^{\prime},h_{4}^{\prime})$ such that there are at least
$\delta^{O(d)^{O(1)}}N$ many $h_{1}$ with
$(h_{1},h_{2},h_{3},h_{1}+h_{2}-h_{3})\in R$ and
$(h_{1},h_{2}^{\prime},h_{1}+h_{2}^{\prime}-h_{4}^{\prime},h_{4}^{\prime})\in
R$. Let $H^{\prime}$ denote this set of $h_{1}$’s.
Let the projections to the $h_{1}$ component of elements in $S\cup S^{\prime}$ be
$(\tilde{\eta}^{1},\dots,\tilde{\eta}^{r})$ and suppose without loss of
generality that the $\tilde{\eta}^{i}$ are linearly independent. Assume further,
without loss of generality, that $\tilde{\eta}^{1},\dots,\tilde{\eta}^{r}$
have been Gaussian eliminated and (after possibly reordering the variables) are
in row-reduced echelon form; this can be done while keeping similar bounds for
these elements. Noticing that $h_{2},h_{3},h_{2}^{\prime},h_{4}^{\prime}$ are
fixed, there exists fixed $h$-independent phases $\alpha_{1},\dots,\alpha_{r}$
such that for each $h\in H^{\prime}$, we have
$\tilde{\eta}^{i}(g_{h})+\alpha_{i}\equiv 0\pmod{1}.$
By scaling $g_{h}(\cdot)$ by $q$, we may assume that the first nonzero
component of $\tilde{\eta}^{i}$ is an integer. Thus (since
$\tilde{\eta}^{1},\dots,\tilde{\eta}^{r}$ are in row-reduced echelon form), we
may write
$\psi_{horiz}(g_{h}(q\cdot))\equiv\alpha_{1}X_{1}+\cdots+\alpha_{r}X_{r}+\xi_{h}^{r+1}Y_{r+1}+\cdots+\xi_{h}^{k}Y_{k}+\xi_{*}^{1}Z_{1}+\cdots+\xi_{*}^{\ell}Z_{\ell}\pmod{\Gamma}$
as prescribed in the lemma. We claim that $Y_{j}\in W$ for each $j$. This is
because $\tilde{\eta}^{1},\dots,\tilde{\eta}^{r}$ are in row-reduced echelon
form so if
$g=\sum_{j}g_{j}X_{j}$
and $\tilde{\eta}^{i}(g)=0$ for each $i$, then we may write
$qg=\sum_{j}c_{j}Y_{j}$ for some coefficients $c_{j}$.
Since the $Y_{j}$’s are linearly independent, the set of all such $g$’s has the
same span as the set of the $Y_{j}$’s, and we note that
$W=\bigcap_{i=1}^{r}\text{ker}(\tilde{\eta}^{i})$. Hence the result. ∎
Thus, we have:
###### Proposition 8.2 (Sunflower decomposition).
There exists a set $H$ of size at least
$\delta^{\log(1/\delta)^{O(1)}}N$
and Fourier expanded nilcharacters $\chi_{h}(n)$ associated to $h$
on an ($h$-independent) nilmanifold $G/\Gamma$ such that
$|\mathbb{E}_{n\in\mathbb{Z}_{N}}\Delta_{h}\tilde{f}(n)\chi_{h}(n)|\geq\delta^{\log(1/\delta)^{O(1)}}$
for some function $\tilde{f}=f(y\cdot)$ where $y$ is an integer relatively
prime to $N$, and the frequency set of $\chi_{h}$ decomposes as $\Xi_{h}$, an
$h$-dependent set, and $\Xi_{*}$, an $h$-independent set both of size at most
$\log(1/\delta)^{O(1)}$ such that $\omega(\Xi_{h}^{G},\Xi_{h}^{G})=0$.
### 8.4. Linearization
We begin with the following lemma:
###### Lemma 8.5.
bracketlinear Suppose $H\subseteq\mathbb{Z}_{N}$ and
$f:H\to(\widehat{\mathbb{Z}_{N}})^{d}$ preserves at least $\delta N^{3}$
additive quadruples. Then on a subset of $H$ of size at least
$\exp(-O(\log^{4}(1/\delta)))N$, $f$ can be written as
$f(n)=(f_{1}(n),\dots,f_{d}(n))$
$f_{j}(n)=\sum_{i=1}^{\ell}a_{ij}\\{\alpha_{i}(n-k)\\}$
where $a_{ij}\in\mathbb{R}$, $\alpha_{i}\in\widehat{\mathbb{Z}_{N}}$, $k$ an
integer, and $\ell\leq O(\log^{O(1)}(1/\delta))$.
###### Proof of bracketlinear.
Balog-Szemerédi-Gowers [3, Proposition 7.3] (or rather as it is stated [5,
Theorem 5.2]) shows that we may pass to a subset $H^{\prime}$ of size
$\delta^{O(1)}N$ where denoting $\Gamma$ to be the graph of $f$ restricted to
$H^{\prime}$, we have $|\Gamma+\Gamma|\leq\delta^{-O(1)}|\Gamma|$. To continue
the proof of bracketlinear, we need the following lemma:
###### Lemma 8.6.
expandingfreiman There exists a subset $H^{\prime\prime}\subseteq H^{\prime}$
of cardinality at least $\delta^{O(1)}N$ such that $f$ is an $8$-Freiman
homomorphism on $H^{\prime\prime}$.
###### Proof of expandingfreiman.
We follow the proof of [5, Lemma 9.2]. Let $\Gamma$ be the graph of $f$. Let
$A\subseteq(\widehat{\mathbb{Z}_{N}})^{d}$ be the set of all $\xi$ such that
$(0,\xi)\in 8\Gamma-8\Gamma$. Since $\Gamma$ is a graph, it follows that
$|\Gamma+A|=|\Gamma||A|$. On the other hand,
$|\Gamma+A|\leq|9\Gamma-8\Gamma|\leq\delta^{-O(d)}N$. This tells us that
$|A|\leq\delta^{-O(1)}$. By [5, Lemma 8.3], we may find a set
$T\subseteq\mathbb{Z}_{N}^{d}$ with $|T|\leq O(\log_{2}(1/\delta))$ such that
$A\cap B(T,1/4)=\\{0\\}$ where
$B(T,1/4)=\\{\xi\in\widehat{\mathbb{Z}_{N}}^{d}:\|t(\xi)\|_{\mathbb{R}/\mathbb{Z}}<1/4\text{
for all }t\in T\\}$.
Let $\Psi:\widehat{\mathbb{Z}_{N}}^{d}\to(\mathbb{R}/\mathbb{Z})^{T}$ be the
homomorphism $\Psi(\xi)=(t(\xi))_{t\in T}$. We may cover
$(\mathbb{R}/\mathbb{Z})^{T}$ by $2^{6|T|}\leq 2^{6}\delta^{-O(1)}$ many cubes
of side length $\frac{1}{64}$. Since $|\Gamma|\geq\delta N$, it follows that
there exists some cube $Q$ for which
$\Gamma^{\prime}:=\\{(h,f(h))\in\Gamma:\Psi(f(h))\in Q\\}$
has cardinality $\delta^{O(1)}N$. By linearity of $t$, it follows that if
$(0,\xi)\in 8\Gamma^{\prime}-8\Gamma^{\prime}$ then
$\|t(\xi)\|_{\mathbb{R}/\mathbb{Z}}\leq\frac{16}{64}$, so $\xi\in B(T,1/4)$.
On the other hand, $\xi$ also lies in $A$, so $\xi=0$ by construction of $T$.
This allows us to conclude that $4\Gamma^{\prime}-4\Gamma^{\prime}$ is a
graph, or in other words, $f$ is an $8$-Freiman homomorphism on
$H^{\prime\prime}$, the projection of $\Gamma^{\prime}$ to the first
coordinate. ∎
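The containment $\xi\in B(T,1/4)$ at the end of the argument can be checked directly (a sketch in our notation): if $(0,\xi)\in 8\Gamma^{\prime}-8\Gamma^{\prime}$ with $\xi=\sum_{i=1}^{8}f(h_{i})-\sum_{i=1}^{8}f(h_{i}^{\prime})$, then for each $t\in T$,

```latex
% Each \Psi(f(h)) lies in a fixed cube of side 1/64, so for every t \in T
% any two values t(f(h)), t(f(h')) differ by at most 1/64:
\|t(\xi)\|_{\mathbb{R}/\mathbb{Z}}
  \le \sum_{i=1}^{8}\bigl\|t(f(h_{i}))-t(f(h_{i}^{\prime}))\bigr\|_{\mathbb{R}/\mathbb{Z}}
  \le 8\cdot\tfrac{1}{64}
  = \tfrac{1}{8} < \tfrac{1}{4},
% hence \xi \in B(T,1/4).
```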
Since $f$ is an $8$-Freiman homomorphism on $H^{\prime\prime}$, $f$ is a
Freiman homomorphism on $2H^{\prime\prime}-2H^{\prime\prime}$, which by [21]
and [19, Proposition 27] contains a Bohr set $B=B(S,\rho)$ where
$|S|\leq\log(1/\delta)^{O(1)}$ and $\rho\geq\delta^{C}$ for some constant
$C>0$. Thus, $f$ restricts to a Freiman homomorphism on a Bohr set
$f:B\to(\widehat{\mathbb{Z}/N\mathbb{Z}})^{d}$. We now require the following
lemma which is essentially [5, Proposition 10.8] to finish the proof of
bracketlinear:
###### Lemma 8.7.
bohrbracket Suppose $f:B\to\widehat{\mathbb{Z}/N\mathbb{Z}}$ is a Freiman
homomorphism. Then letting $B_{\epsilon}=B(S,\rho\epsilon)$, it follows that
on $B_{|S|^{-O(|S|)}}$, one can write
$f(n)=\sum_{i}a_{i}\\{\alpha_{i}n\\}+\gamma$
where each $\alpha_{i}\in S$.
###### Proof of bohrbracket.
By [5, Proposition 10.5], $B$ contains a symmetric proper generalized
arithmetic progression
$P=\\{\ell_{1}n_{1}+\cdots+\ell_{d^{\prime}}n_{d^{\prime}}:n_{i}\in[-N_{i},N_{i}]\\}$
with $1\leq d^{\prime}\leq|S|$ such that $P$ contains $B_{|S|^{-O(|S|)}}$ and
the vectors $(\\{\alpha\ell_{i}\\})_{\alpha\in S}\in\mathbb{R}^{S}$ are linearly independent.
It follows that we may write
$f(\ell_{1}n_{1}+\cdots+\ell_{d^{\prime}}n_{d^{\prime}})=n_{1}f(\ell_{1})+\cdots+n_{d^{\prime}}f(\ell_{d^{\prime}})+f(0)=n_{1}k_{1}+\cdots+n_{d^{\prime}}k_{d^{\prime}}+\gamma.$
Let $\Phi:B\to\mathbb{R}^{S}$ be $\Phi(x)=(\\{\alpha x\\})_{\alpha\in S}$. We
see that
$\Phi(\ell_{1}n_{1}+\cdots+\ell_{d^{\prime}}n_{d^{\prime}})=n_{1}\Phi(\ell_{1})+\cdots+n_{d^{\prime}}\Phi(\ell_{d^{\prime}}).$
Since the $\Phi(\ell_{i})$ are linearly independent, it follows that (by scaling
up a vector in the orthogonal complement of $\Phi(\ell_{j})$ for $j\neq i$ but
not in the orthogonal complement of $\Phi(\ell_{i})$) there exists
a vector $u_{i}$ such that
$u_{i}\cdot\Phi(\ell_{1}n_{1}+\cdots+\ell_{d^{\prime}}n_{d^{\prime}})=n_{i}$.
Thus, writing $u_{i}=(u_{i\alpha})_{\alpha\in S}$ and
$x=\ell_{1}n_{1}+\cdots+\ell_{d^{\prime}}n_{d^{\prime}}$, we see that
$n_{i}=\sum_{\alpha\in S}u_{i\alpha}\\{\alpha x\\}.$
The lemma follows from this. ∎
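As a toy case of the lemma just proved (our own illustrative example): take a single frequency $S=\\{\alpha\\}$, $d^{\prime}=1$, $P=\\{\ell n_{1}:|n_{1}|\leq N_{1}\\}$, and suppose $\\{\alpha\ell\\}\neq 0$ and $|n_{1}\\{\alpha\ell\\}|<1/2$ throughout $P$, so the fractional part incurs no wraparound:

```latex
% For x = \ell n_1 \in P, linearity with no wraparound gives
\{\alpha x\} = n_{1}\{\alpha\ell\},
\qquad\text{so}\qquad
n_{1} = u\,\{\alpha x\},\quad u := \{\alpha\ell\}^{-1}.
% The Freiman homomorphism property then yields
f(x) = n_{1}f(\ell) + f(0) = \bigl(f(\ell)\,u\bigr)\{\alpha x\} + \gamma,
% which is the claimed form with a single phase \alpha and \gamma = f(0).
```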
Thus, each component of $f$ agrees with a bracket linear function with at most
$|S|$ many phases on $B_{|S|^{-O(|S|)}}$. By the Pigeonhole principle, there exists
some $x_{0}$ such that $|(x_{0}+B_{1/2})\cap H|\geq\delta^{O(1)}N$ where
$B_{1/2}$ is the $\frac{1}{2}$-dilation of $B$ (say regularized as well). Let
$A=(x_{0}+B)\cap H$. Then for $h,h^{\prime}\in A$, the pair
$(h-h^{\prime},f(h)-f(h^{\prime}))$ lies inside
$\Gamma^{\prime}-\Gamma^{\prime}$, so
$f(x_{0}+h)-f(x_{0}+h^{\prime})=f(h-h^{\prime})$, and hence there exists $\xi_{0}$
such that $f(x_{0}+h)=\xi_{0}+f(h)$. Thus, there exists some set of size at
least $\delta^{O_{\epsilon}(\log^{3+\epsilon}(1/\delta))}N$ on which $f$ agrees
with a bracket linear function with $\log^{3+\epsilon}(1/\delta)$ many bracket
phases. ∎
We will also need the following lemma:
###### Lemma 8.8 (Cauchy-Schwarz inequality for energy).
CSenergy We have the following:
$E(A_{1},A_{2},A_{3},A_{4})\leq
E(A_{1})^{1/4}E(A_{2})^{1/4}E(A_{3})^{1/4}E(A_{4})^{1/4}.$
###### Proof.
Writing $(A,B,C,D)=(A_{1},A_{2},A_{3},A_{4})$, we see that
$E(A_{1},A_{2},A_{3},A_{4})=\frac{1}{N^{3}}\sum_{z\in(A-C)\cap(B-D)}|A\cap(C+z)||B\cap(D+z)|.$
By Cauchy-Schwarz, we may bound this by
$\sqrt{\left(\frac{1}{N^{3}}\sum_{z\in
A-C}|A\cap(C+z)|^{2}\right)\left(\frac{1}{N^{3}}\sum_{z\in
B-D}|B\cap(D+z)|^{2}\right)}.$
We see by [24, Lemma 2.9] that
$\frac{1}{N^{3}}\sum_{z\in A-C}|A\cap(C+z)|^{2}=E(A,C)\leq
E(A)^{1/2}E(C)^{1/2}$
and similarly
$\frac{1}{N^{3}}\sum_{z\in B-D}|B\cap(D+z)|^{2}=E(B,D)\leq
E(B)^{1/2}E(D)^{1/2}.$
∎
We now prove the linearization step:
###### Lemma 8.9.
linearizationstep Suppose
$|\mathbb{E}_{n\in[N]}\chi_{h_{1}}(n)\chi_{h_{2}}(n+h_{1}-h_{4})\overline{\chi_{h_{3}}(n)\chi_{h_{4}}(n+h_{1}-h_{4})}|\geq\delta.$
and that $\omega(\Xi_{h}^{G},\Xi_{h}^{G})=0$. Then there exists a set
$S\subseteq\widehat{\mathbb{Z}_{N}}$ such that $|S|\leq\log(1/\delta)^{O(1)}$
and a subset $H^{\prime}\subseteq H$ with
$|H^{\prime}|\geq\delta^{\log(1/\delta)^{O(1)}}|H|$ such that for each $h\in H^{\prime}$
there exists some $k\in\mathbb{Z}_{N}$ such that denoting $\xi_{h}^{struc}$ as
a generic element of the form
$\xi_{h}^{struc}=\sum_{\alpha\in S}a_{\alpha}\\{\alpha(h-k)\\}$
where $a_{\alpha}\in\mathbb{R}$ so that each $h\mapsto\\{\alpha(h-k)\\}$ is a
Freiman homomorphism on $H^{\prime}$, we may write for some appropriate
integer scalar $q$ with $|q|\leq\delta^{-O(d)^{O(1)}}$ and some $r>0$, such
that if
$\psi_{horiz}(g(n))=\sum_{i}\xi_{h}^{i}nX_{i}+\sum_{j}\xi_{*}^{j}nZ_{j},$
we have (for some ordering of the basis
$(X_{1},\dots,X_{k},Z_{1},\dots,Z_{\ell})$),
$\psi_{horiz}(g_{h}(qn))\equiv\sum_{j\leq
r}\xi_{h}^{struc,j}nX_{j}+\sum_{j>r}\xi_{h}^{j}nY_{j}+\sum_{j\geq
1}\xi_{*}^{j}n\tilde{Z}_{j}\pmod{\Gamma}$
where each $\tilde{Z}_{j}$ is a $\delta^{-d^{O(1)}\log(1/\delta)^{O(1)}}$-integer
combination of the $Z_{j}$’s, each $Y_{j}$ is a
$\delta^{-d^{O(1)}\log(1/\delta)^{O(1)}}$-integer combination of the $X_{i}$’s,
the $X_{i},Y_{i},\tilde{Z}_{i}$ are linearly independent, and
$\omega(Y_{i},\tilde{Z}_{j})=0$ for each $i$ and $j$.
###### Proof.
By using various bracket polynomial identities, we may rewrite
$\chi_{h_{1}}(n)\chi_{h_{2}}(n)\overline{\chi_{h_{3}}(n)\chi_{h_{4}}(n)}$
as a free nilcharacter of the form $\tilde{\chi}_{h_{1},h_{2},h_{3},h_{4}}(n)$,
which has the same form as the free nilcharacter of $\chi_{h}$ except that each
phase $\xi_{h}$ is replaced with
$\xi_{h_{1}}+\xi_{h_{2}}-\xi_{h_{3}}-\xi_{h_{4}}$ and there are no phases of
the form $\xi_{*}^{i}n[\xi_{*}^{j}n]$. Thus, there exists some
$\alpha_{h_{1},h_{2},h_{3},h_{4}}^{\prime}$ and
$\beta_{h_{1},h_{2},h_{3},h_{4}}^{\prime}$ such that
$|\mathbb{E}_{n\in[N]}\tilde{\chi}_{h_{1},h_{2},h_{3},h_{4}}(n)e(\alpha^{\prime}_{h_{1},h_{2},h_{3},h_{4}}n^{2}+\beta^{\prime}_{h_{1},h_{2},h_{3},h_{4}}n)|\geq\delta^{O(d)^{O(1)}}.$
Let $g_{h_{1},h_{2},h_{3},h_{4}}$ denote the nilsequence corresponding to
$\chi_{h_{1},h_{2},h_{3},h_{4}}$ and [13, Theorem 8] shows that (up to an
integer shift), $g_{h_{1},h_{2},h_{3},h_{4}}$ lies in a subspace $V$ such that
$\omega^{\prime}(V,V)=0$ where $\omega$ is the antisymmetric bilinear form
corresponding to $G/\Gamma$ and
$\omega^{\prime}(X_{i},X_{j})=\omega(X_{i},X_{j})$ and
$\omega^{\prime}(X_{i},Z_{j})=\omega(X_{i},Z_{j})$, and
$\omega^{\prime}(Z_{i},Z_{j})=0$ (this last fact reflects there being no terms
of the form $\xi_{*}^{i}[\xi_{*}^{j}]$). Let
$\eta^{i}=(\eta^{i}_{h},\eta^{i}_{*})$ be linearly independent and span the
annihilator of $V$. We have
$\eta^{i}(g_{h_{1},h_{2},h_{3},h_{4}})=\eta^{i}_{h}(\xi_{h_{1}}+\xi_{h_{2}}-\xi_{h_{3}}-\xi_{h_{4}})+\eta^{i}_{*}(\xi_{*})\equiv
0\pmod{1}.$
Thus, by a combination of CSenergy and bracketlinear, there exists a subset
$H^{\prime}$ of $H$ of size at least $\delta^{(d\log(1/\delta))^{O(1)}}|H|$
such that for each $h\in H^{\prime}$, $\eta^{i}_{h}(\xi_{h})$ is of the
prescribed bracket polynomial form (and is a Freiman homomorphism) and such
that for many additive quadruples, we have
$\eta^{i}_{h}(\xi_{h_{1}}+\xi_{h_{2}}-\xi_{h_{3}}-\xi_{h_{4}})+\eta^{i}_{*}(\xi_{*})\equiv
0\pmod{1}.$
Let $w\in\text{span}_{\mathbb{Z}}(X_{1},\dots,X_{k})$ be orthogonal to all of
the $\eta^{i}_{h}$. It follows that $(w,0)$ is orthogonal to $\eta^{i}$ and
thus $w\in V$. Hence
$\omega^{\prime}(w,g_{h_{1},h_{2},h_{3},h_{4}})=\omega(w,g_{*})\equiv
0\pmod{1}$
where $g_{*}$ is the $h$-independent part of $g_{h_{1},h_{2},h_{3},h_{4}}$.
Choosing such $w_{1},\dots,w_{s}$ of size at most
$\delta^{-d^{O(1)}\log(1/\delta)^{O(1)}}$ to be in the span of the orthogonal
subspace of the $\eta_{i}^{h}$’s, and reducing $r$ when necessary, we may
assume that $\omega(w_{i},\cdot)$ are linearly independent relations and that
$\omega(w_{i},g_{*})\equiv 0\pmod{1}$. Let $W$ be the subspace of
$\psi_{horiz}(G)$ defined as $\\{x:\eta^{i}_{h}(x)=0,\omega(w_{j},x)=0\forall
i,j\\}$. It follows that if $x,y\in W$ and $x$ lies in the span of $X_{i}$’s
then $\omega(x,y)=0$. This is because $x\in W$ implies that
$\eta^{i}_{h}(x)=0$, so $x$ can be generated by a combination of the
$w_{j}$’s. However, $\omega(w_{j},y)=0$. Hence, $\omega(x,y)=0$. Writing
$\nu_{1},\dots,\nu_{s}$ to be the relations defined by $\omega(w_{j},\cdot)$,
we have that
$\eta_{i}^{h}(g_{h})+\xi_{h}^{struc,i}\equiv 0\pmod{1}\quad\text{and}\quad\nu_{j}(g_{*})\equiv 0\pmod{1}.$
We assume that the $\eta_{i}$’s and $\nu_{j}$’s are written (after possibly
reordering variables) in row-reduced echelon form. Thus, writing
$\psi_{horiz}(g_{h})=\sum_{i}\xi_{h}^{i}X_{i}+\sum_{j}\xi_{*}^{j}Z_{j}$
and scaling $g_{h}$ by an appropriate integer, it follows that there exists
some integer $q\leq\delta^{-(\log(1/\delta)d)^{O(1)}}$ such that taking the
linear relations above into account, we may write
$\psi_{horiz}(g_{h}q)\equiv\sum_{i\leq
r}\xi_{h}^{struc,i}X_{i}+\sum_{i>r}\xi_{h}^{i}Y_{i}+\sum_{j}\tilde{\xi}_{*}^{j}\tilde{Z}_{j}\pmod{\Gamma}$
where $\tilde{Z}_{j}$ is a $\delta^{-(d\log(1/\delta))^{O(1)}}$-integer
combination of $Z_{j}$’s and $Y_{i}$ is a
$\delta^{-(d\log(1/\delta))^{O(1)}}$-integer combination of the $X_{i}$’s and
that the $X_{i},Y_{i},\tilde{Z}_{i}$ are linearly independent and
$\xi^{struc,i}_{h}$ are of the bracket polynomial form as specified above, and
that $Y_{i}$ and $\tilde{Z}_{j}$ lie inside $W$. The last property we can
guarantee because the subspace
$\\{x\in\text{span}(X_{1},\dots,X_{k}):\eta_{h}^{i}(x)=0\\}$ is spanned by
$Y_{i}$’s and the subspace
$\\{x\in\text{span}(Z_{1},\dots,Z_{\ell}):\nu_{j}(x)=0\\}$ is spanned by
$\tilde{Z}_{j}$’s. The result follows. ∎
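The Freiman-homomorphism property of the bracket maps $h\mapsto\\{\alpha(h-k)\\}$ invoked above can be illustrated concretely: if all of the fractional parts $\\{\alpha h\\}$ on a set $H$ lie in a short window avoiding wraparound, then the map sends every additive quadruple in $H$ to an exact equality. The following sketch (Python, purely illustrative; $k=0$ and the frequency $\alpha$ is a sample rational, not from the argument) verifies this in exact arithmetic:

```python
from fractions import Fraction
from math import floor
from itertools import product

def frac(x):
    """Fractional part {x}."""
    return x - floor(x)

N = 31
alpha = Fraction(5, 31)  # a sample frequency with denominator N

# Restrict to the set H where {alpha*h} avoids wraparound (lies in [0, 1/4)).
H = [h for h in range(N) if frac(alpha * h) < Fraction(1, 4)]
fr = {h: frac(alpha * h) for h in H}

# On H, the map h -> {alpha*h} is a Freiman homomorphism: the difference of the
# two sides is an integer of absolute value < 1/2, hence zero.
for h1, h2, h3, h4 in product(H, repeat=4):
    if h1 + h2 == h3 + h4:
        assert fr[h1] + fr[h2] == fr[h3] + fr[h4]
```

The check passes because $\alpha h_{1}+\alpha h_{2}=\alpha h_{3}+\alpha h_{4}$ exactly, so the fractional parts can only differ by an integer, and the window $[0,1/4)$ forces that integer to vanish.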
###### Remark 8.2.
This proof can be made entirely free of use of “bracket polynomial identities”
and “Fourier complexity lemmas.” In particular, the step where we take our
modified bracket polynomial $g_{h_{1},h_{2},h_{3},h_{4}}$ and replace it with
$g_{h}$ but with $\xi_{h}$ replaced with
$\xi_{h_{1}}+\xi_{h_{2}}-\xi_{h_{3}}-\xi_{h_{4}}$ is unnecessary. One can
quotient by the subspace of
$\\{\xi_{h_{1}}+\xi_{h_{2}}-\xi_{h_{3}}-\xi_{h_{4}}=0\\}$ and argue that any
horizontal character on $G_{(4)}$ that annihilates that subspace must be of
the form
$\eta^{i}(\xi_{h_{1}},\xi_{h_{2}},\xi_{h_{3}},\xi_{h_{4}},\xi_{*})=\eta^{i}_{h}(\xi_{h_{1}}+\xi_{h_{2}}-\xi_{h_{3}}-\xi_{h_{4}})+\eta^{i}_{*}(\xi_{*}).$
By another application of the procedure in Section 8.2, we may thus write
$\psi(g_{h}(n))=(\xi_{h}^{1}n,\dots,\xi_{h}^{\ell}n,\xi_{*}^{1}n,\dots,\xi_{*}^{k}n,0,0,\dots,0,P_{h}(n))$
where $\xi_{h}^{i}$ is of the form specified by the Lemma above and
$\xi_{*}^{j}$ is $h$-independent and the $0$’s represent the coordinates
$Z_{1},\dots,Z_{r_{1}-1}$ which we eliminated in the above lemma.
This linearizes the entire bracket polynomial except for the vertical phase
$P_{h}(n)$. We write $P_{h}(n)=\alpha_{h}n^{2}+\beta_{h}n-P_{h}(1)n-{n\choose
2}\sum_{i<j}C_{[i,j]}g^{i}g^{j}$ where $(g^{1},\dots,g^{k+\ell})$ is the
ordering
$(\xi_{h}^{1},\dots,\xi_{h}^{\ell},\xi_{*}^{1},\dots,\xi_{*}^{k}).$
We note that the Mal’cev coordinates of $g_{h}(1)^{n}$ are
$(\xi_{h}^{1}n,\dots,\xi_{h}^{\ell}n,\xi_{*}^{1}n,\dots,\xi_{*}^{k}n,P_{h}(1)n+\frac{1}{2}{n\choose
2}\sum_{i<j}C_{[i,j]}g^{i}g^{j}).$
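The appearance of the binomial term $\binom{n}{2}\sum_{i<j}C_{[i,j]}g^{i}g^{j}$ in the Mal’cev coordinates of $g_{h}(1)^{n}$ can be checked by hand in the simplest nonabelian example, the integer Heisenberg group. The following sketch (Python; illustrative only, not part of the argument) verifies that the $n$-th power of an upper-triangular element has horizontal coordinates $nx,ny$ and central coordinate $nz+\binom{n}{2}xy$, the binomial term coming from the commutator:

```python
from math import comb

def matmul(A, B):
    """Multiply two 3x3 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def heis(x, y, z):
    """Element of the integer Heisenberg group, in upper-triangular form."""
    return [[1, x, z], [0, 1, y], [0, 0, 1]]

def power(g, n):
    """n-th power of g (n >= 0) by repeated multiplication."""
    P = heis(0, 0, 0)
    for _ in range(n):
        P = matmul(P, g)
    return P

x, y, z = 3, 5, 7
for n in range(12):
    # The horizontal coordinates scale linearly with n, while the central
    # coordinate picks up the extra term comb(n, 2) * x * y produced by the
    # commutator [X, Y] -- the analogue of the binom(n,2) sum in the text.
    assert power(heis(x, y, z), n) == heis(n * x, n * y, n * z + comb(n, 2) * x * y)
```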
To linearize the quadratic term in $P_{h}(n)$, it thus suffices to linearize
$\alpha_{h}$ since if $i$ and $j$ correspond to some basis elements $X_{i^{\prime}}$
and $X_{j^{\prime}}$, then $[X_{i^{\prime}},X_{j^{\prime}}]=C_{[i,j]}=0$. We
aim to linearize $\alpha_{h}$. It turns out that we do not need to linearize
the linear term in $P_{h}(n)$ since we will be able to eliminate it via the
symmetry argument. First we show that $\alpha_{h}$ is rational with
denominator $2N$. To show this, we essentially follow the procedure in
periodic. Consider the sequence
$\tau_{k}(g_{h}(n))\Gamma\times\Gamma=(g_{h}(n+k),g_{h}(n))\Gamma\times\Gamma=(g_{h}(1)^{n}g_{h}^{nlin}(n+k)[\\{g_{h}(1)^{k}\\}^{-1},g_{h}(1)^{-n}],g_{h}(1)^{n})\Gamma\times\Gamma$
where $g_{h}^{nlin}(n)=g_{h}(n)g_{h}(1)^{-n}$. This restricts to a sequence in
$G\times_{G_{2}}G/(\Gamma\times_{G_{2}\cap\Gamma}\Gamma)$. Since
$(g_{h}(n+k),g_{h}(n))\Gamma\times\Gamma$ is periodic, so is
$(g_{h}(1)^{n}g_{h}^{nlin}(n+k)[\\{g_{h}(1)^{k}\\}^{-1},g(1)^{-n}],g_{h}(1)^{n})\Gamma\times\Gamma$
and its restriction. By [13, Lemma A.3], each horizontal character $\eta$ on
$G\times_{G_{2}}G$ decomposes as
$\eta(g,g^{\prime})=\eta_{1}(g)+\eta_{2}(g^{\prime}g^{-1})$ where $\eta_{1}$
is a horizontal character on $G$ and $\eta_{2}$ is a character on
$G_{2}/[G_{2},G]=G_{2}$. Since $\tau_{k}$ is periodic modulo $N$, it follows
that for any $\eta_{2}$, that
$\eta_{2}(g_{h}^{nlin}(n+k))-\eta_{2}(g_{h}^{nlin}(n))+\eta_{2}([\\{g_{h}(1)^{k}\\},g_{h}(1)^{n}])\equiv
0\pmod{1/N}.$
Since $\\{g_{h}(1)^{k}\\}[g_{h}(1)^{k}]=g_{h}(1)^{k}$ and
$[g_{h}(1)^{k},g_{h}(1)^{n}]=0$, it follows that
$\eta_{2}([\\{g_{h}(1)^{k}\\},g_{h}(1)^{n}])=\eta_{2}([[g_{h}(1)^{k}]^{-1},g_{h}(1)^{n}]).$
Since the image of $g_{h}(1)^{n}$ under the quotient map $G\to G/[G,G]$
lies inside $\Gamma/[G,G]$, it follows that
$\eta_{2}([[g_{h}(1)^{k}]^{-1},g_{h}(1)^{n}])\equiv 0\pmod{1/N}.$
Thus,
$\eta_{2}(g_{h}^{nlin}(n+k))-\eta_{2}(g_{h}^{nlin}(n))\equiv\eta_{2}(2\alpha_{h}nk)+f(k)\equiv
0\pmod{1/N}$
for any $\eta_{2}$ and some function $f$. Hence, $\alpha_{h}$ is rational with
denominator $2N$. Thus, using periodicity to scale $n$ by $2n$, we may work
with $g_{h}(2n)$ instead of $g_{h}(n)$ so we may assume that $\alpha_{h}$ is
rational with denominator $N$.
To linearize $\alpha_{h}$, we apply the linearization argument one more time,
obtaining that there exists
$\delta^{\log(1/\delta)^{O(\log\log(1/\delta))}}N^{3}$ many additive
quadruples $(h_{1},h_{2},h_{3},h_{4})$ such that
$|\mathbb{E}_{n\in\mathbb{Z}/N\mathbb{Z}}e(\alpha_{h_{1}}n^{2}+\alpha_{h_{2}}(n+h_{1}-h_{4})^{2}+\beta_{h_{1},h_{2},h_{3},h_{4}}n-\alpha_{h_{3}}n^{2}-\alpha_{h_{4}}(n+h_{1}-h_{4})^{2})|\geq\delta^{\log(1/\delta)^{O(1)}}$
for some phase
$\beta_{h_{1},h_{2},h_{3},h_{4}}\in\widehat{\mathbb{Z}/N\mathbb{Z}}$. This
implies that
$\alpha_{h_{1}}+\alpha_{h_{2}}\equiv\alpha_{h_{3}}+\alpha_{h_{4}}\pmod{1}.$
Arguing as above, this implies that there exists some subset
$H^{\prime}\subseteq H$ and $k\in\mathbb{Z}/N\mathbb{Z}$ such that for each
$h\in H^{\prime}$,
$\alpha_{h}=\sum_{i=1}^{d^{\prime}}a_{i}\\{\alpha_{i}(h-k)\\}$
with $(\alpha_{i})_{i=1}^{d^{\prime}}\in\widehat{\mathbb{Z}/N\mathbb{Z}}$ and
$d^{\prime}\leq\log(1/\delta)^{O(1)}$. This linearizes $\alpha_{h}$. Thus, we
have the following.
###### Lemma 8.10 (Linearization).
linearizationargument There exists a subset
$H^{\prime}\subseteq\mathbb{Z}_{N}$ of size at least
$\delta^{O(\log(1/\delta))^{O(1)}}N$ such that the following holds:
* •
For each $h\in H^{\prime}$, there exists an associated periodic Fourier-
expanded nilcharacter $\chi_{h}$, which is the exponential of a bracket
polynomial with at most $\log(1/\delta)^{O(1)}$ terms; these terms include
$\xi_{*}^{i}n[\xi_{h}^{j}n],\xi_{h}^{j}n[\xi_{*}^{i}n]$
and a single term of the form
$\xi_{h}^{struc}n^{2}$
where $\xi_{h}^{j}$ and $\xi_{h}^{struc}$ are of the form
$\sum_{\alpha\in S}a_{\alpha}\\{\alpha(h-k)\\}$
where $a_{\alpha}\in\mathbb{R}$, $S\subseteq\widehat{\mathbb{Z}_{N}}$ is a
subset of size at most $\log(1/\delta)^{O(1)}$, and
$\xi_{h}^{j},\xi_{h}^{struc},\xi_{*}$ are all rational with denominator $N$
and $H^{\prime}\subseteq k+B(S,\delta^{O(1)})$.
* •
There exists some nonzero $y\in\mathbb{Z}_{N}$ such that
$|\mathbb{E}_{n\in[N]}\Delta_{h}f(yn)\chi_{h}(n)|\geq\delta^{O(\log(1/\delta))^{O(1)}}.$
Note that we may unwind the scaling of $f$ by making a change of variables
$n\mapsto pm$ where $p$ is the modular inverse of $y$. We can then absorb the
factor of $p$ in the frequencies of $\chi_{h}$.
### 8.5. Symmetry and integration argument
Thus, we have obtained that (essentially) $\Delta_{h}f(x)$ (we will assume
here that we have unwound the scaling of $\tilde{f}$, so we work with $f$)
correlates with sums of terms of the form
$\left(\sum_{i=1}^{\ell}a_{i}\\{\alpha_{i}(h-k)\\}n\right)[\beta n]$
and
$\beta n\left[\sum_{i=1}^{\ell}a_{i}\\{\alpha_{i}(h-k)\\}n\right].$
Writing $[\beta n]=\beta n-\\{\beta n\\}$, it follows that, up to lower order
terms modulo $1$, we may write the first one as
$\sum_{i=1}^{\ell}a_{i}\beta n^{2}\\{\alpha_{i}(h-k)\\}-\\{\beta
n\\}\sum_{i=1}^{\ell}a_{i}n\\{\alpha_{i}(h-k)\\}$
and the second one as (up to lower order terms modulo $1$)
$\\{\beta n\\}\sum_{i=1}^{\ell}a_{i}n\\{\alpha_{i}(h-k)\\}.$
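The first rewriting above is an exact algebraic identity, not merely a congruence: writing $A$ for the coefficient $\sum_{i}a_{i}\\{\alpha_{i}(h-k)\\}$, one has $An[\beta n]=A\beta n^{2}-\\{\beta n\\}An$. A quick exact-arithmetic check (Python; $\beta$ and $A$ are arbitrary sample rationals, not values from the argument):

```python
from fractions import Fraction
from math import floor

def frac(x):
    """Fractional part {x}."""
    return x - floor(x)

def ipart(x):
    """Integer part [x] = x - {x}."""
    return floor(x)

A = Fraction(2, 5)      # stands in for the coefficient sum_i a_i {alpha_i (h-k)}
beta = Fraction(3, 7)
for n in range(-10, 11):
    # A n [beta n] = A beta n^2 - {beta n} A n, exactly (not just mod 1),
    # since [beta n] = beta n - {beta n}.
    assert A * n * ipart(beta * n) == A * beta * n**2 - frac(beta * n) * A * n
```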
The point of these computations is to illustrate that for our situation, each
of the $\alpha_{i}$’s and the $\beta$ will lie in $\widehat{\mathbb{Z}_{N}}$,
so the phases in brackets will be periodic modulo $N$. We may also translate
our set $H$ by $k$ to eliminate the $k$ term.
It follows that $\Delta_{h+k}f$ correlates with a bracket polynomial of the
form $e(3T(h,n,n)+B(n)+\theta_{h}n)$ where
$T(x,y,z):=\sum_{j=1}^{d}\\{\alpha_{j}x\\}\frac{\beta_{j}}{6}y\\{\gamma_{j}z\\}+\\{\alpha_{j}x\\}\frac{\beta_{j}}{6}z\\{\gamma_{j}y\\}+\sum_{j=1}^{d}\frac{\alpha_{j}^{\prime}}{3}\\{\beta_{j}^{\prime}x\\}yz$
with $\alpha_{j},\gamma_{j},\beta_{j}^{\prime}\in\widehat{\mathbb{Z}_{N}}$ and
$B(n)$ is a degree two bracket polynomial of at most $2d$ phases and is of the
form $B(n)=\sum_{j=1}^{d}\lambda_{j}n[\mu_{j}n]$ (and $\theta_{h}n$ should be
thought of as a “lower order term”). We wish to show that $f$ correlates with
a degree three bracket polynomial. One important observation is that this form
is trilinear on the Bohr set with frequencies
$S:=\\{1/N,\alpha_{1},\dots,\alpha_{d},\gamma_{1},\dots,\gamma_{d},\beta_{1}^{\prime},\dots,\beta_{d}^{\prime}\\}$
and radius, say $1/10$ ($1/N$ is added to $S$ to ensure that $T$ is genuinely
trilinear on the Bohr set with frequencies of $S$ and radius $1/10$).
We will now give a sketch of this step. By hypothesis, we only have local
linearity in the first variable. Let us assume for simplicity that $T$ is
trilinear. We wish to write $3T(h,n,n)$ as a combination of
$T(n+h,n+h,n+h)-T(n,n,n)$ plus lower order terms in $n$. If this can be done,
then we may absorb the $T(n,n,n)$ and the $T(n+h,n+h,n+h)$ into $f(n)$ and
$f(n+h)$, respectively, obtaining that
$\mathbb{E}_{h}\|f_{1}(n)f_{2}(n+h)\|_{U^{2}}\gg_{\epsilon}1$
for $f_{1}$ a product of $f$ and bracket cubics. We can deduce that $f_{1}$
has large $U^{3}$ norm, so applying the $U^{3}$ inverse theorem, we can
conclude the $U^{4}$ inverse theorem.
The only thing preventing us from showing this is that we have three terms
$T(h,n,n)+T(n,h,n)+T(n,n,h)$, which may not necessarily be equal. Since we
symmetrized the last two variables of $T$, we may assume that the latter two
terms are equal. It is not actually possible to show that all three terms are
equal, but rather, it is possible to show that the difference of two of them
are of “lower order,” in the sense of being a two-step in $n$ and $h$ (or one
step in $n$).
To show that $T$ is symmetric in its first two variables, we go back to the
hypothesis
$\mathbb{E}_{h}|\mathbb{E}_{n}\Delta_{h+k}f(n)\chi_{h}(n)|^{2}\geq\delta^{O(\log(1/\delta))^{O(1)}}.$
For simplicity, we will relabel the right hand side as simply $\delta$. By [8,
Proposition 6.1], it follows that
$\mathbb{E}_{h_{1}+h_{2}=h_{3}+h_{4}}|\mathbb{E}_{n}\chi_{h_{1}}(n)\chi_{h_{2}}(n+h_{1}-h_{4})\overline{\chi_{h_{3}}(n)\chi_{h_{4}}(n+h_{1}-h_{4})}|^{2}\geq\delta^{O(1)}$
or for some one-bounded function $b(h_{1},h_{2},h_{3},h_{4})$:
$\mathbb{E}_{h_{1}+h_{2}=h_{3}+h_{4}}\mathbb{E}_{n}\chi_{h_{1}}(n)\chi_{h_{2}}(n+h_{1}-h_{4})\overline{\chi_{h_{3}}(n)\chi_{h_{4}}(n+h_{1}-h_{4})}b(h_{1},h_{2},h_{3},h_{4})\geq\delta^{O(1)}.$
Writing $\chi_{h}(n)=e(3T(h,n,n)+\theta_{h}n)$ and using trilinearity and
making a change of variables $(h_{1},h_{2},h_{3},h_{4})=(h,h+x+y,h+y,h+x)$, we
obtain eventually (after Pigeonholing in $h$) that
$|\mathbb{E}_{n,x,y}e(-6T(x,y,n))b(n,x)b(n,y)b(n,x+y)b(x,y)|\gg\delta^{O(1)}$
for one-bounded functions $b$ as in Gowers’ notation [3] (where we identify
$\mathbb{Z}/N\mathbb{Z}$ with the interval $[-(N-1)/2,(N-1)/2]$). Applying
Cauchy-Schwarz in $x$ and $n$ gives
$|\mathbb{E}_{n,x,y,y^{\prime}}e(-6T(x,y-y^{\prime},n))b(n,y^{\prime})b(n,y)b(n,x+y)b(n,x+y^{\prime})b(x,y,y^{\prime})|\gg\delta^{O(1)}.$
Making a change of variables $z=x+y+y^{\prime}$, we obtain
$|\mathbb{E}_{n,z,y,y^{\prime}}e(-6T(z-y-y^{\prime},y-y^{\prime},n))b(n,y^{\prime})b(n,y)b(n,z-y)b(n,z-y^{\prime})b(z,y,y^{\prime})|\gg\delta^{O(1)}.$
Pigeonholing in $z$ and using trilinearity again, we obtain
$|\mathbb{E}_{n,y,y^{\prime}}e(-6(T(y,y^{\prime},n)-T(y^{\prime},y,n)))b(n,y)b(n,y^{\prime})b(y,y^{\prime})|\geq\delta^{O(1)}.$
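The bookkeeping in the changes of variables above is easy to get wrong, so it may help to record the elementary checks explicitly: the parametrization $(h_{1},h_{2},h_{3},h_{4})=(h,h+x+y,h+y,h+x)$ produces exactly additive quadruples, with shift $h_{1}-h_{4}=-x$, and the substitution $z=x+y+y^{\prime}$ recovers $x=z-y-y^{\prime}$. A brute-force verification (Python, illustrative only):

```python
from itertools import product

# (h1, h2, h3, h4) = (h, h+x+y, h+y, h+x) parametrizes additive quadruples,
# and the shift h1 - h4 appearing in chi_{h2}(n + h1 - h4) equals -x.
for h, x, y in product(range(-3, 4), repeat=3):
    h1, h2, h3, h4 = h, h + x + y, h + y, h + x
    assert h1 + h2 == h3 + h4
    assert h1 - h4 == -x

# The later substitution z = x + y + y' recovers x as z - y - y'.
for x, y, yp in product(range(-3, 4), repeat=3):
    z = x + y + yp
    assert z - y - yp == x
```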
From here, we see that these one-bounded functions are of “lower order”
compared to the top term $e(T(y,y^{\prime},n)-T(y^{\prime},y,n))$; this allows
us to conclude either with an application of the equidistribution theorem, or
via some Bohr set argument similar to [8] that
$T(y,y^{\prime},n)-T(y^{\prime},y,n)$ is “lower order” as well, implying that
$T$ is symmetric in the first two variables up to lower order. This argument
can either be made rigorous using a combination of bracket polynomial
arithmetic argument and the equidistribution theorem found here, or by a Bohr
set argument similar to the flavor of [8]. We opt for the latter approach,
since it is overall less annoying in the $U^{4}$ case though we imagine that
the other approach generalizes better than the approach we opt for.
###### Remark 8.3.
In [8], the authors work with the density assumption that there exists many
additive quadruples $(h_{1},h_{2},h_{3},h_{4})$ such that
$\mathbb{E}_{n}\chi_{h_{1}}(n)\chi_{h_{2}}(n+h_{1}-h_{4})\overline{\chi_{h_{3}}(n)\chi_{h_{4}}(n+h_{1}-h_{4})}$
is large. Because of this, they need a further tool involving the bilinear
Bogolyubov argument (the author was unable to understand their argument at
the point where they use the bilinear Bogolyubov-type theorem; however, one
can also follow their argument and use [19] to finish and obtain desirable
quantitative bounds). We remove the use of bilinear Bogolyubov by taking
advantage of additional averaging from the hypothesis
$\mathbb{E}_{h_{1}+h_{2}=h_{3}+h_{4}}|\mathbb{E}_{n}\chi_{h_{1}}(n)\chi_{h_{2}}(n+h_{1}-h_{4})\overline{\chi_{h_{3}}(n)\chi_{h_{4}}(n+h_{1}-h_{4})}|^{2}\geq\delta^{O(1)}$
rather than the density hypothesis of additive quadruples.
We begin with the following lemma:
Thus, by applying FourierComplexity2 and onevarfouriercomplexity (or rather
the procedure of onevarfouriercomplexity) and making the above manipulations,
we may simplify
$\mathbb{E}_{h_{1}+h_{2}=h_{3}+h_{4}}|\mathbb{E}_{k,n}\chi_{h_{1}}(n)\chi_{h_{2}}(n+h_{1}-h_{4})\overline{\chi_{h_{3}}(n)\chi_{h_{4}}(n+h_{1}-h_{4})}|^{2}\geq\delta^{O(1)}$
to
$|\mathbb{E}_{n,y,y^{\prime}}e(-6(T(y,y^{\prime},n)-T(y^{\prime},y,n)))b(n,y)b(n,y^{\prime})b(y,y^{\prime})|\geq\delta^{O(d)^{O(1)}}.$
Let us relabel $y^{\prime}$ to $x$, so we instead end up with
$|\mathbb{E}_{n,x,y}e(6(T(x,y,n)-T(y,x,n)))b(n,x)b(n,y)b(x,y)|\geq\delta^{O(d)^{O(1)}}.$
#### 8.5.1. Bohr sets
In the remainder of the proof, we will need to do various arithmetic
operations over Bohr sets. In this section, we formulate notation we will use
for Bohr sets.
###### Definition 8.4.
Given a subset $S\subseteq\widehat{\mathbb{Z}_{N}}$, and a real number
$\rho\in(0,1/2)$, we denote the _Bohr set_ with frequencies $S$ and radius
$\rho$ to be $B(S,\rho):=\\{x\in\mathbb{Z}_{N}:\|\alpha
x\|<\rho\forall\alpha\in S\\}$.
A Bohr set is _regular_ if whenever $\epsilon\leq\frac{1}{100|S|}$,
$|B(S,\rho)|(1-100|S|\epsilon)\leq|B(S,\rho(1+\epsilon))|\leq|B(S,\rho)|(1+100|S|\epsilon).$
It was shown by Bourgain in [1] that regular Bohr sets are ubiquitous in the
sense that there exists $\rho^{\prime}\in[\rho/2,\rho]$ such that
$B(S,\rho^{\prime})$ is regular. If $S$ and $\rho$ are specified, we will
shorten $B(S,\rho)$ to $B$. Given a real number $\epsilon$ such that
$\epsilon\rho\in(0,1)$, we will write $B_{\epsilon}$ for
$B(S,\epsilon\rho)$. We will also define the “norms”
$\|n\|_{S}=\sup_{\alpha\in S}\|\alpha n\|_{\mathbb{R}/\mathbb{Z}}.$
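For concreteness, here is a minimal computational sketch of these definitions (Python, with the dual group $\widehat{\mathbb{Z}_{N}}$ identified with $\mathbb{Z}_{N}$, so that the frequency $1/N$ in the text corresponds to $\alpha=1$ below; the helper names are our own, not from the source):

```python
def circle_norm(t):
    """Distance from t to the nearest integer, i.e. ||t||_{R/Z}."""
    t = t % 1.0
    return min(t, 1.0 - t)

def bohr_set(N, S, rho):
    """B(S, rho) = { x in Z_N : ||alpha * x / N|| < rho for all alpha in S }."""
    return [x for x in range(N) if all(circle_norm(a * x / N) < rho for a in S)]

def S_norm(N, S, n):
    """The 'norm' ||n||_S = sup over alpha in S of ||alpha * n||_{R/Z}."""
    return max(circle_norm(a * n / N) for a in S)

N = 101
S = [1, 7]                 # sample frequencies; alpha = 1 plays the role of 1/N
B = bohr_set(N, S, 0.25)
assert 0 in B                          # Bohr sets always contain 0
assert all((-x) % N in B for x in B)   # and are symmetric about 0
```

Shrinking the radius shrinks the set, which is the nesting $B_{\epsilon}\subseteq B$ used repeatedly below.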
#### 8.5.2. Finishing the symmetry argument
Recall that we defined
$S=\\{1/N,\alpha_{1},\dots,\alpha_{d},\gamma_{1},\dots,\gamma_{d},\beta_{1}^{\prime},\dots,\beta_{d}^{\prime}\\}.$
Let $B=B(S,\rho)$ be the Bohr set whose frequencies are the bracket
frequencies of $T$, where $\rho$ is selected in $[1/10,1/5]$ such that
$B(S,\rho)$ is regular. It follows, by Pigeonholing in one of these
translates, that for some $n_{0},x_{0},y_{0}$ we have
$\left|\sum_{n\in n_{0}+B,x\in x_{0}+B,y\in y_{0}+B}e(6(T(x,y,n)-T(y,x,n)))b(n,x)b(n,y)b(x,y)\right|\gg N^{3}2^{-O(d)}\delta^{O(d)^{O(1)}}.$
Making a change of variables, we have
$|\mathbb{E}_{n,x,y\in B}e(6(T(x-x_{0},y-y_{0},n-n_{0})-T(y-y_{0},x-x_{0},n-n_{0})))b(n,x)b(n,y)b(x,y)|\gg\delta^{O(d)^{O(1)}}.$
Expanding out the bracket polynomials as
$T(x-x_{0},y-y_{0},n-n_{0})=T(x,y,n)+[\text{lower order terms}]$ (with the
point of the lower order terms depending on fewer variables after applying a
Fourier complexity lemma), and similarly for $T(y-y_{0},x-x_{0},n-n_{0})$ and
applying FourierComplexity2, we therefore obtain
$|\mathbb{E}_{n,x,y\in
B}e(6(T(x,y,n)-T(y,x,n)))b(n,x)b(n,y)b(x,y)|\gg\delta^{O(d)^{O(1)}}.$
Fix elements $a,b,c\in B_{1/(100|S|)}=B(S,\rho/(100|S|))$. It then follows that if
$f$ is one-bounded, then
$\mathbb{E}_{n\in B}f(n)=\mathbb{E}_{n\in
B}\mathbb{E}_{k\in[-K,K]}f(n+ka)+O\left(K\frac{|S|\|a\|_{S}}{\rho}\right)$
whenever $K\leq\frac{\rho}{|S|\|a\|_{S}}$. We may thus write for $a,b,c\in
B_{\delta^{O(d)^{O(1)}}/(100|S|)}$ that
$|\mathbb{E}_{n,x,y\in
B}\mathbb{E}_{k_{1}\in[-K_{1},K_{1}],k_{2}\in[-K_{2},K_{2}],k_{3}\in[-K_{3},K_{3}]}e(6(T(x+k_{1}a,y+k_{2}b,n+k_{3}c)$
$-T(y+k_{2}b,x+k_{1}a,n+k_{3}c)))b(n+k_{3}c,x+k_{1}a)b(n+k_{3}c,y+k_{2}b)b(x+k_{1}a,y+k_{2}b)|\gg\delta^{O(d)^{O(1)}}$
for $K_{1}\leq\frac{\delta^{O(d)^{O(1)}}\rho}{\|a\|_{S}}$,
$K_{2}\leq\frac{\delta^{O(d)^{O(1)}}\rho}{\|b\|_{S}}$, and
$K_{3}\leq\frac{\delta^{O(d)^{O(1)}}\rho}{\|c\|_{S}}$. We note that if $f$ is
locally linear on a Bohr set, and if $n+ka$ lies inside the Bohr set for each
$k\in[-K,K]$ (say for $K$ significantly larger than $2$), then we may write
$f(n+ka)=f(n)+kf(a)$. Thus, we may write $T(x+k_{1}a,y+k_{2}b,n+k_{3}c)$ as a
polynomial in the variables $k_{1},k_{2},k_{3}$ with top term
$k_{1}k_{2}k_{3}T(a,b,c)$ and similarly, we may write
$T(y+k_{2}b,x+k_{1}a,n+k_{3}c)$ as a polynomial in $k_{1},k_{2},k_{3}$ with
top term $k_{1}k_{2}k_{3}T(b,a,c)$. We will now Pigeonhole in values $n,x,y$
so that
$|\mathbb{E}_{k_{1}\in[-K_{1},K_{1}],k_{2}\in[-K_{2},K_{2}],k_{3}\in[-K_{3},K_{3}]}e(6(T(x+k_{1}a,y+k_{2}b,n+k_{3}c)-T(y+k_{2}b,x+k_{1}a,n+k_{3}c)))$
$b(k_{1},k_{3})b(k_{1},k_{2})b(k_{2},k_{3})|\gg\delta^{O(d)^{O(1)}}.$
Thus, applying trilinearity of $T$, we obtain
$|\mathbb{E}_{k_{i}\in[-K_{i},K_{i}]}e(6k_{1}k_{2}k_{3}(T(a,b,c)-T(b,a,c)))b(k_{1},k_{3})b(k_{1},k_{2})b(k_{2},k_{3})|\gg\delta^{O(d)^{O(1)}}.$
Let $\alpha=6(T(a,b,c)-T(b,a,c))$. Applying Cauchy-Schwarz three times to
eliminate all of the $b$’s gives
$|\mathbb{E}_{k_{i},k_{i}^{\prime}\in[-K_{i},K_{i}]}e(\alpha
P(k_{1},k_{2},k_{3},k_{1}^{\prime},k_{2}^{\prime},k_{3}^{\prime}))|\gg\delta^{O(d)^{O(1)}}$
where
$P(k_{1},k_{2},k_{3},k_{1}^{\prime},k_{2}^{\prime},k_{3}^{\prime})=k_{1}k_{2}k_{3}-k_{1}^{\prime}k_{2}k_{3}-k_{1}k_{2}^{\prime}k_{3}-k_{1}k_{2}k_{3}^{\prime}+k_{1}^{\prime}k_{2}^{\prime}k_{3}+k_{1}^{\prime}k_{2}k_{3}^{\prime}+k_{1}k_{2}^{\prime}k_{3}^{\prime}-k_{1}^{\prime}k_{2}^{\prime}k_{3}^{\prime}.$
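The reason Pigeonholing in the primed variables leaves the clean top term $\alpha k_{1}k_{2}k_{3}$ is that $P$ factors as $(k_{1}-k_{1}^{\prime})(k_{2}-k_{2}^{\prime})(k_{3}-k_{3}^{\prime})$, so for fixed primed variables $P$ equals $k_{1}k_{2}k_{3}$ plus terms of lower degree in the unprimed variables. A brute-force check of the factorization (Python):

```python
from itertools import product

def P(k1, k2, k3, k1p, k2p, k3p):
    # The polynomial produced by the three applications of Cauchy-Schwarz.
    return (k1 * k2 * k3 - k1p * k2 * k3 - k1 * k2p * k3 - k1 * k2 * k3p
            + k1p * k2p * k3 + k1p * k2 * k3p + k1 * k2p * k3p
            - k1p * k2p * k3p)

# Verify P(k, k') = (k1 - k1')(k2 - k2')(k3 - k3') term by term.
for ks in product(range(-2, 3), repeat=6):
    k1, k2, k3, k1p, k2p, k3p = ks
    assert P(*ks) == (k1 - k1p) * (k2 - k2p) * (k3 - k3p)
```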
Pigeonholing in $k_{1}^{\prime},k_{2}^{\prime},k_{3}^{\prime}$, we obtain
$|\mathbb{E}_{k_{i}\in[-K_{i},K_{i}]}e(\alpha k_{1}k_{2}k_{3}+[\text{Lower
Order Terms}])|\gg\delta^{O(d)^{O(1)}}.$
This gives us by the multidimensional polynomial equidistribution theorem [23,
Proposition 7] that there exists some $q=q_{a,b,c}\leq\delta^{-O(d)^{O(1)}}$
such that
$\|q\alpha\|_{\mathbb{R}/\mathbb{Z}}\ll\frac{\delta^{-O(d)^{O(1)}}}{K_{1}K_{2}K_{3}}.$
This implies, via a geometry of numbers argument as in [6, Lemma 11.4] or in
[14, Lemma 7.5], that, denoting $\epsilon=\delta^{O(d)^{O(1)}}$, if
$a,b,c\in B_{\epsilon^{O(d)^{O(1)}}}$, then for some $q\leq\epsilon^{-1}$
independent of $a,b,c$, we have
$\|6q(T(a,b,c)-T(b,a,c))\|_{\mathbb{R}/\mathbb{Z}}\leq\|a\|_{S}\|b\|_{S}\|c\|_{S}\delta^{-O(d)^{O(1)}}.$
This gives the symmetry result. The rest, now, is putting everything together.
#### 8.5.3. Integrating the result
We go back to the hypothesis
$\mathbb{E}_{h}|\mathbb{E}_{n\in\mathbb{Z}_{N}}\Delta_{h+k}f(n)\chi_{h}(n)|^{2}\geq\delta^{O(d)^{O(1)}}.$
We first claim that there exists some $h_{0}\in H$ such that
$H\cap(h_{0}+B_{\delta^{O(1)}})$ has size at least $\delta^{O(d)^{O(1)}}N$. To
show this, we observe (following [8, p. 33]) that
$\sum_{n\in[N]}1_{H}*1_{B^{\prime}}*1_{B^{\prime}}(n)1_{H}(n)=\sum_{n\in[N]}(1_{H}*1_{B^{\prime}}(n))^{2}\geq\left(\sum_{n\in[N]}1_{H}*1_{B^{\prime}}(n)\right)^{2}/N\geq\delta\rho_{1}^{2d}N^{3}$
with $B^{\prime}=B_{\delta^{O(1)}/2}$. On the other hand, we have that
$1_{B^{\prime}}*1_{B^{\prime}}(n)\leq|B_{\delta^{O(1)}}|1_{B_{\delta^{O(1)}}}(n)\leq
N1_{B_{\delta^{O(1)}}}(n)$. Hence, by the Pigeonhole principle, such $h_{0}$
exists. Thus, taking $H^{\prime}=(H-h_{0})\cap B$, it follows that for each
$h^{\prime}\in H^{\prime}$,
$\mathbb{E}_{h^{\prime}\in
B_{\delta^{O(1)}}}|\mathbb{E}_{n}\overline{f(n)}f(n+k+h_{0}+h^{\prime})e(3T(h_{0}+h^{\prime},n,n)+\theta_{h_{0}+h^{\prime}}n)|^{2}\geq\delta.$
Using local trilinearity of $T$ and relabeling $\overline{f}=f_{1}$ and
$f_{2}=f(h_{0}+k+\cdot)$, we have (after adjusting $\theta$ a bit)
$\mathbb{E}_{h\in
B_{\delta^{O(1)}}}|\mathbb{E}_{n}f_{1}(n)f_{2}(n+h)e(3T(h,n,n)+\theta_{h}n)|^{2}\geq\delta^{O(d)^{O(1)}}.$
Pigeonholing $n$ to be in a Bohr set $B_{\delta^{O(1)}}$ and adjusting the
radius so that it is regular, we have by the various bracket polynomial
manipulations from previous sections that for some phases
$\tilde{\theta}_{h}\in\widehat{\mathbb{Z}_{N}}$,
$\mathbb{E}_{h}|\mathbb{E}_{n\in
B_{\delta^{O(1)}}}\tilde{f}_{1}(n)\tilde{f}_{2}(n+h)e(3T(h,n,n)+\tilde{\theta}_{h}n)|^{2}\geq\delta^{O(d)^{O(1)}}$
where $\tilde{f}_{i}$ are products of $f$ and at most $O(d)$ many degree $\leq
2$ bracket polynomials which are periodic in $n$ and $h$ modulo $N$.
Pigeonholing in $n$ and $h$ in a smaller Bohr set $B_{\delta^{O(d)^{O(1)}}}$
gives
$\mathbb{E}_{h\in B_{\delta^{O(d)^{O(1)}}}}|\mathbb{E}_{n\in
B_{\delta^{O(d)^{O(1)}}}}\tilde{f}_{1}(n+n_{0}+h+h_{0})\tilde{f}_{2}(n+n_{0})e(3T(h+h_{0},n+n_{0},n+n_{0})+\tilde{\theta_{h}}^{1}n)|^{2}\geq\delta^{O(d)^{O(1)}}.$
The point is that on $B_{\delta^{O(1)}}$, $T$ is genuinely trilinear, so by
FourierComplexity2, we may write
$3T(h+h_{0},n+n_{0},n+n_{0})=3T(h,n+n_{0},n+n_{0})$ plus a lower order bracket
quadratic term in $n$, which we can absorb in $\tilde{\theta_{h}}^{1}$ and in
$\tilde{f}_{1}$. We may also write
$3T(h,n+n_{0},n+n_{0})=3T(h,n,n)+3T(h,n_{0},n)+3T(h,n,n_{0})+3T(h,n_{0},n_{0})$,
but noting that $T$ is symmetric in the last two variables, we have
$3T(h,n_{0},n)+3T(h,n,n_{0})=3T(n+h,n+h,n_{0})-3T(n,n,n_{0})-3T(h,h,n_{0}).$
In addition, using that $6q(T(h,n,n)-T(n,h,n))\equiv
O(\delta^{O(d)^{O(1)}})\pmod{1}$, it follows that
$6qT(h,n,n)\equiv 2q(T(n+h,n+h,n+h)-T(n,n,n)-T(h,h,h)-T(h,n,h)$
$-T(n,h,h)-T(h,h,n))+O(\delta^{O(d)^{O(1)}})\pmod{1}.$
This suggests that we should rewrite the argument as follows: let
$\ell\equiv(6q)^{-1}\pmod{N}$. We start with
$\mathbb{E}_{h}|\mathbb{E}_{n}\Delta_{h}f(n)\chi_{h}(n)|^{2}\geq\delta^{O(d)^{O(1)}}.$
Instead of Pigeonholing in $B_{\delta^{O(1)}}$, we Pigeonhole on a translate
of $\ell^{-1}B_{\delta^{O(1)}}$, obtaining:
$\mathbb{E}_{h}|\mathbb{E}_{n\in\ell^{-1}B_{\delta^{O(1)}}}\tilde{f}_{1}(n)\tilde{f}_{2}(n+h)e(3T(6q\ell
h,6q\ell n,6q\ell n)+\tilde{\theta}_{h}n)|^{2}\gg\delta^{O(d)^{O(1)}}.$
We may then argue as before, finally obtaining $e((6q)^{3}3T(\ell h,\ell
n,\ell
n))=e(\tilde{T}(n+h,n+h,n+h)-\tilde{T}(n,n,n)+\tilde{\theta}_{h}^{3}n+\alpha)+O(\delta^{O(d)^{O(1)}})$.
Absorbing the $n+h$ terms into $f(n+h+n_{0}+h_{0})$ and the $n$ terms into
$f(n+n_{0})$ and applying trivialfouriercomplexity on the terms
$\tilde{T}(n,h,h)$, $\tilde{T}(h,n,h)$, and $\tilde{T}(h,h,n)$, we obtain
$\mathbb{E}_{h}\|F_{1}(n+h)F_{2}(n)\|_{U^{2}([N])}^{2}\geq\delta^{O(d)^{O(1)}}$
for functions $F_{1}$ and $F_{2}$ where $F_{2}$ is a product of $f$ and $O(d)$
many degree $\leq 3$ bracket polynomials. Using the Cauchy-Schwarz-Gowers
inequality, combined with the results above on three-step nilmanifold
constructions gives us that there exists some approximate degree $\leq 3$
nilsequence $F(g(n)\Gamma)$ of dimension at most $O(d)^{O(1)}$ and complexity
at most $\delta^{-O(d)^{O(1)}}$ such that
$|\mathbb{E}_{n\in[N]}\Delta_{h}f(n)F(g(n)\Gamma)|\geq\delta^{O(d)^{O(1)}}.$
We may also smooth out $F1_{B}$ as follows: writing
$1_{B_{\delta^{O(d)^{O(1)}}}}(n)=1_{U}((\alpha n)_{\alpha\in S})$ where $U$ is
the open set
$\\{\|x_{i}\|_{\mathbb{R}/\mathbb{Z}}<\rho\delta^{O(d)^{O(1)}}\\}$ where
$B=B(S,\rho)$, we let $\phi$ be a smooth cutoff function on
$\mathbb{R}^{|S|}/\mathbb{Z}^{|S|}$ supported on an $\epsilon$-neighborhood of
$U$. By regularity, (assuming $\epsilon$ is sufficiently small)
$\phi(n)F(g(n)\Gamma)=1_{B}(n)F(g(n)\Gamma)+O_{L^{1}[N]}(\epsilon)$. One can
check that $\phi(n)F(g(n)\Gamma)$ is smooth on a three-step nilmanifold of
complexity $O(1)$ and $\phi$ can be chosen in a way so that
$\phi(n)F(g(n)\Gamma)$ has Lipschitz parameter at most $O(d\epsilon^{-1})$.
Letting $\epsilon=\delta^{O(d)^{O(1)}}$, we obtain the desired correlation
with a Lipschitz-smooth nilsequence of parameter at most
$\delta^{-O(d)^{O(1)}}$. One can then divide by the Lipschitz constant to
ensure a Lipschitz parameter of $1$.
Substituting $\delta$ with $\delta^{O(\log(1/\delta))^{O(1)}}$ and $d$ with
$O(\log(1/\delta))^{O(1)}$ gives us mainresult4.
## Appendix A Further auxiliary lemmas
In this section, we shall state auxiliary lemmas we use in the proof of our
main theorem. Most of these results come from [7]. The first two lemmas are
similar to [7, Proposition 9.2] and [7, Lemma 7.9], respectively.
###### Lemma A.1 (Factorization lemma I).
factorization Let $G/\Gamma$ be a nilmanifold of dimension $d$, complexity
$M$, and degree $k$ and let $g\in\mathrm{poly}(\mathbb{Z},G)$ with
$g(n)\Gamma$ be $N$-periodic. Suppose $\eta_{1},\dots,\eta_{r}$ are a set of
linearly independent nonzero horizontal characters of size at most $L$ with
$\|\eta_{i}\circ g\|_{C^{\infty}[N]}=0$ for each character $i$. Then there
exists a factorization $g(n)=\epsilon(n)g_{1}(n)\gamma(n)$ where $\epsilon$ is
constant, $g_{1}$ is a polynomial sequence in
$\tilde{G}=\bigcap_{i=1}^{r}\operatorname{ker}(\eta_{i}),$
and $\gamma\in\mathrm{poly}(\mathbb{Z},G)$ is
$(ML)^{O_{k}(d^{O_{k}(1)})}$-rational. If $g(0)=\mathrm{id}_{G}$, then we may
take $\epsilon(0)=\gamma(0)=\mathrm{id}_{G}$.
###### Proof.
By setting $\epsilon(n)=g(0)$, we may work with the assumption that
$g(0)=\mathrm{id}_{G}$. We may thus write in coordinates that
$\psi(g(n))=\sum_{i}{n\choose i}t_{i}$
where $t_{i}$ are vectors representing the coordinates of the degree $i$
component of $g$ in Mal’cev coordinates. By Cramer’s rule, we may pick a
rational vector $v$ with height at most $(ML)^{O(d^{2})}$ such that
$\eta_{i}\cdot v=\eta_{i}\cdot\psi(g(n))$. We define a polynomial sequence
$\gamma$ with
$\psi(\gamma(n))=\sum_{i\geq 1}{n\choose i}v_{i}.$
Thus, the polynomial sequence $g_{1}(n):=g(n)\gamma(n)^{-1}$ lies inside
$\tilde{G}$ and by construction $\gamma(n)\Gamma$ is
$(ML)^{O_{k}(d)^{O_{k}(1)}}$-rational. ∎
Before we state the next lemma, we recall the definition of $g_{2}$ from [13,
Definition 4.1]: given a polynomial sequence
$g\in\mathrm{poly}(\mathbb{Z},G)$, we define $g_{2}(n):=g(n)g(1)^{-n}$.
###### Lemma A.2 (Factorization lemma II).
factorization2 Let $G/\Gamma$ be a nilmanifold of dimension $d$, complexity
$M$, and degree $k$ and let $g\in\mathrm{poly}(\mathbb{Z},G)$ with
$g(n)\Gamma$ be $N$-periodic. Suppose $\eta_{1},\dots,\eta_{r}$ are a set of
linearly independent nonzero horizontal characters on $G_{2}$ which annihilate
$[G,G]$ of size at most $L$ with $\|\eta_{i}\circ g_{2}\|_{C^{\infty}[N]}=0$
for each character $i$. Then we may write $g(n)=\epsilon(n)g_{1}(n)\gamma(n)$
where $\epsilon$ is constant, $g_{1}(n)\in\mathrm{poly}(\mathbb{Z},\tilde{G})$
with $\tilde{G}$ given the filtration
$\tilde{G}_{0}=\tilde{G}_{1}=G,\tilde{G}_{2}=\bigcap_{i=1}^{r}\operatorname{ker}(\eta_{i}),\text{
and }\tilde{G}_{i}=\tilde{G}_{2}\cap G_{i}$
for all $i\geq 2$, and $\gamma$ is $(ML)^{O_{k}(d^{O(1)})}$-rational, and
$g_{1}(0)=\gamma(0)=1$.
###### Proof.
By setting $\epsilon(n)=g(0)$, we may assume that $g(0)=\mathrm{id}_{G}$. We
write in Mal’cev coordinates that
$\psi_{G_{2}}(g_{2}(n))=\sum_{i\geq 2}{n\choose i}t_{i}$
where $t_{i}$ are vectors representing the coordinates of the degree $i$
component of $g$ in Mal’cev coordinates. By Cramer’s rule, we may pick a
rational vector $v$ in $G_{2}$ with height at most $(ML)^{O(r)^{2}}$ such that
$\eta_{i}\cdot v=\eta_{i}\cdot\psi(g(n))$ and such that the linear component
of $v$, i.e., the first $d-d_{1}$ components when written as Mal’cev bases, is
zero. We define $\gamma$ via
$\psi(\gamma(n))=\sum_{i\geq 2}{n\choose i}v_{i}.$
By construction, $\gamma$ is $(ML)^{O_{k}(d^{O_{k}(1)})}$-rational. We now
define $g_{1}(n)=g(n)\gamma(n)^{-1}$. We observe that
$g_{1,2}(n)=g_{1}(n)g_{1}(1)^{-n}=g_{2}(n)\gamma(n)^{-1}\pmod{[G,G]}$
so $g_{1}(n)\in\mathrm{poly}(\mathbb{Z},\tilde{G})$ as desired. ∎
###### Lemma A.3 (Removing the Rational Term).
removerational Suppose $N$ is prime and $Q$ is an integer. Suppose $G/\Gamma$
# Self-Correcting Self-Consuming Loops for Generative Model Training
Nate Gillman Michael Freeman Daksh Aggarwal Chia-Hong Hsu Calvin Luo
Yonglong Tian Chen Sun
###### Abstract
As synthetic data becomes higher quality and proliferates on the internet,
machine learning models are increasingly trained on a mix of human- and
machine-generated data. Despite the successful stories of using synthetic data
for representation learning, using synthetic data for generative model
training creates “self-consuming loops” which may lead to training instability
or even collapse, unless certain conditions are met. Our paper aims to
stabilize self-consuming generative model training. Our theoretical results
demonstrate that by introducing an idealized correction function, which maps a
data point to be more likely under the true data distribution, self-consuming
loops can be made exponentially more stable. We then propose self-correction
functions, which rely on expert knowledge (e.g. the laws of physics programmed
in a simulator), and aim to approximate the idealized corrector automatically
and at scale. We empirically validate the effectiveness of self-correcting
self-consuming loops on the challenging human motion synthesis task, and
observe that it successfully avoids model collapse, even when the ratio of
synthetic data to real data is as high as 100%.
Machine Learning, Generative Modeling, Self-Consuming Loops, Data
Contamination, Deep Learning, Artificial Intelligence, Human Motion Synthesis
## 1 Introduction
Figure 1: What happens after iteratively training a text-conditioned
generative model for human motion synthesis for 50 generations? We simulate a
self-consuming loop by creating synthetic data with the latest generative
model, and mixing them with the original data to continue training the next
generative model. We observe that by self-correcting the synthetic data with a
physics simulator, the model can successfully avoid collapse and generate
high-quality human motion. Faded poses represent poses from further back in
time. Our paper provides theoretical and empirical justification for the self-
correcting self-consuming loop.
Generative models have been used to synthesize training data for various
learning tasks, to varying degrees of success. For example, for the tasks of
image classification and contrastive representation learning, recent work
(Azizi et al., 2023; Tian et al., 2023) finds that using data synthesized from
generative models rivals using real data. Unfortunately, there is a gloomier
outlook when attempting to generalize this framework to generative model
training.
On one hand, there is evidence to suggest that training a generative model
with its own outputs in a self-consuming manner will lead to collapse
(Alemohammad et al., 2023). For example, after 50 iterations of self-consuming
training, a human motion diffusion model (Tevet et al., 2023) collapses and
fails to follow the text prompts or the laws of physics (see the two examples
on the left of Figure 1).
On the other hand, evidence suggests that such a framework could avoid
collapse, but only when a “moderate” amount of synthetic data is used
(Bertrand et al., 2024). Worse still, this self-consuming scenario might
occur without our knowledge, and without our being able to quantify how much
synthetic data is being used during training, given how widespread
AI-generated content has become on the internet.
Intuitively, model collapse might be delayed or avoided by incorporating
higher quality human generated data (Alemohammad et al., 2023), or by manually
fixing the “mistakes” in machine created data. Considering the size of
datasets used in practice (Schuhmann et al., 2022), neither of these options
is a scalable solution.
In this paper, we aim to provide a theoretical analysis of how certain
operations would avoid collapse in self-consuming loops, without any
assumptions on the “moderateness” of synthetic data corruption. We introduce
the mathematical abstraction of a _self-correction operation_. This operation
maps synthesized data that are sampled from the generative model to data that
are better representatives from the target probability distribution that the
model is attempting to approximate. Instead of training on a combination of
real data and synthesized data, we propose training on a combination of real
data and synthesized _and then self-corrected_ data. Note that injecting fresh
human generated data can be viewed as a special case of this operation.
Our main theoretical findings (Theorem 4.3):
1. (1)
The self-consuming model with self-correction is exponentially more stable
than the self-consuming model without any self-correction.
2. (2)
The self-correction procedure guarantees less unwanted variance during self-
consuming model training.
In our theoretical study, we assume that correction is _ideal_ in order to
obtain rigorous performance guarantees. In our empirical study, we evaluate
whether the same conclusions hold for _noisy_ self-correction functions. We
propose to automate this “self-correction” process by relying on programmed
expert knowledge rather than a human-in-the-loop, such that the function can
be applied at scale. We focus on the human motion synthesis task (Guo et al.,
2022), and implement the self-correction function with a physics simulator-
based imitation model (Luo et al., 2021). Our empirical results confirm that
our theoretical findings hold in practice:
1. (1)
As illustrated in Figure 1, the self-correcting self-consuming model generates
higher-quality human motion than the one without any self-correction.
2. (2)
The self-correction function allows self-consuming loops to avoid collapse
even at a high synthetic data to real data ratio (e.g. 100%).
We will release all the code associated with this paper (project page:
https://nategillman.com/sc-sc.html).
## 2 Related Work
### 2.1 Learning Representations with Synthetic Data
Real curated datasets are costly to obtain, so there has been much interest in
generating synthetic data as training data for various vision tasks. Azizi et
al. (2023) demonstrates that text-to-image diffusion models such as Imagen
(Saharia et al., 2022) can generate synthetic examples that augment the
ImageNet dataset for better image classification. He et al. (2023) studies how
synthetic data from text-to-image models, when used exclusively, can be used
as training data for image recognition tasks. Similarly, Tian et al. (2023)
finds that using synthetic outputs from a text-to-image model results in
contrastive models whose downstream performance rivals that of CLIP (Radford
et al., 2021) on visual recognition tasks, including dense prediction.
Finally, Jahanian et al. (2021) explored methods for multi-view
representation learning, using the latent space of generative models to
generate multiple “views” of the synthetic data. The above works collectively provide
evidence that _some_ representation learning tasks, when trained on synthetic
data from some _given_ generative models, yield excellent results.
### 2.2 Training Generative Models on Synthetic Data
Another line of research investigates the use of synthetic data for training
_generative_ models. Shumailov et al. (2023) and Martínez et al. (2023) show
that the use of model generated content in generative model training results
in model degradation, likely because self-consuming loops remove low-density
areas from the estimated probability manifold. Alemohammad et al. (2023)
formalize three different kinds of self-consuming generative models: the fully
synthetic loop, the synthetic augmentation loop, and the fresh data loop. In
all of these loops, they iteratively re-train the model from scratch for every
new generation. They find that the former two loops result in model
degradation, and only for the latter one does diversity and quality not suffer
between self-consuming iterations.
Another recent work (Bertrand et al., 2024) considers the problem of
iteratively fine-tuning in the context of synthetic augmentation loops. They
find that self-consuming augmentation loops do not necessarily collapse, so
long as the synthetic augmentation percentage is sufficiently low. The authors
use techniques from the field of performative stability (Perdomo et al., 2020)
to prove the existence of a convergence phenomenon in the space of model
parameters. Our paper differs from prior work as we conduct analysis on self-
consuming generative model training when the synthetic data can be optionally
corrected. The correction can be performed with a human-in-the-loop, or by
incorporating learned or programmed expert knowledge, as explored for natural
language (Saunders et al., 2022; Welleck et al., 2022; Wu et al., 2023) and
human motion (Yuan et al., 2023; Xu et al., 2023). We validate our theory with
a practical self-correcting operation designed for the human motion synthesis
task.
Algorithm 1 Iterative Fine-tuning of a Generative Model With Correction
Input: $\mathcal{D}_{\text{real}}:=\\{x_{i}\\}_{i=1}^{n}$, $\mathcal{A}$,
$\mathcal{A}_{\text{ft}}$, $\pi_{\gamma}$ // ground truth data, learning
procedure, fine-tuning procedure, correction function
Parameters: $T$, $\lambda$, $\gamma$ // number of retraining iterations,
proportion of generated data, correction strength
$p_{\theta_{0}}\leftarrow\mathcal{A}(\mathcal{D}_{\text{real}})$ // learn
generative model from scratch on true data
for $t=1$ to $T$ do
$\mathcal{D}_{\text{synth}}\leftarrow\\{\pi_{\gamma}(\tilde{x}_{i})\\}_{i=1}^{\lfloor\lambda\cdot
n\rfloor}$, with $\tilde{x}_{i}\sim p_{\theta_{t-1}}$ // sample
$\lfloor\lambda\cdot n\rfloor$ synthetic data points, pass through correction
function
$p_{\theta_{t}}\leftarrow\mathcal{A}_{\text{ft}}(\mathcal{D}_{\text{real}}\cup\mathcal{D}_{\text{synth}};p_{\theta_{t-1}})$
// fine-tune previous generation using augmented dataset
end for
Return $[p_{\theta_{0}},p_{\theta_{1}},p_{\theta_{2}},\dots,p_{\theta_{T}}]$
## 3 Overall Training Procedure
We describe our proposed procedure in concise language in Algorithm 1, and we
explain it in more detail here. We train the zero’th generation from scratch
on the ground truth dataset
$\mathcal{D}_{\mathrm{real}}:=\\{x_{i}\\}_{i=1}^{n}$, and we stop training
when the model is close to convergence. For all the following generations, we
fine-tune the previous generation’s latest checkpoint on a combination of the
ground truth dataset $\mathcal{D}_{\mathrm{real}}$, as well as
$\lfloor\lambda\cdot n\rfloor$ synthetic data points which are generated from
the previous generation’s latest checkpoint, and then passed through the
_correction function_ $\pi_{\gamma}$.
###### Example 3.1.
Let us consider the text-to-motion task (Guo et al., 2022) for human motion
generation. Our experimental study in Section 6 is based on this task. In this
case, the $0$’th generation is trained from scratch on $n$ ground-truth
training examples until close to convergence. To train the $t$’th generation,
we synthesize $\lfloor\lambda\cdot n\rfloor$ new motions using
$\lfloor\lambda\cdot n\rfloor$ randomly selected prompts from the training
set, and then we augment the original training set with these synthetic
examples. We then train on these $n+\lfloor\lambda\cdot n\rfloor$ training
examples for a fixed number of batches. This iterative fine-tuning procedure
is a simplified model for data contamination in the age of AI generated
content proliferating on the internet and leaking into training data.
The correction function $\pi_{\gamma}$ is parameterized by the _correction
strength_ $\gamma\in\mathbb{R}_{\geq 0}$, which controls how much influence
the correction function has on the input data points towards increasing a
given point’s likelihood with respect to the target distribution. The other
main hyperparameter $\lambda\in\mathbb{R}_{\geq 0}$ is the _synthetic
augmentation percent_ , and it controls the ratio of synthetic data to real
data in each iteration of fine-tuning. When $\gamma=0$, we recover iterative
re-training with synthetic augmentation considered in (Bertrand et al., 2024).
And if we choose the synthetic augmentation percent to be $\lambda=0$, then
each generation simply corresponds to fine-tuning the model on the same
dataset that it was trained on initially.
We now use _iterative fine-tuning_ interchangeably with the more general term
self-consuming loop. We also consider a broader family of _correction
functions_, which may assume knowledge of the true data distribution (and are
thus “idealized”).
## 4 Theoretical Analysis
### 4.1 Preliminaries
Let us denote by $p_{\mathrm{data}}$ the ground truth probability
distribution that we want to train a generative model to estimate. (We mostly
follow the notation from (Bertrand et al., 2024), except for introducing the
correction function $\pi_{\gamma}$.) Suppose we have some dataset
$\mathcal{D}_{\mathrm{real}}=\\{x_{i}\\}_{i=1}^{n}$ sampled from
$p_{\mathrm{data}}$. We write
$\hat{p}_{\mathrm{data}}=(1/n)\sum_{i=1}^{n}\delta_{x_{i}}$. More generally,
we use a hat to denote the empirical distribution over finitely many samples
from the corresponding distribution.
Suppose that we have a class of generative models parameterized by
$\Theta\subset\mathbb{R}^{d}$. We denote by $p_{\theta}$ a probability
distribution in this class with model parameters $\theta\in\Theta$. We define
the optimal model parameters within this class to be
$\theta^{\star}=\operatorname*{arg\,max}_{\theta^{\prime}\in\Theta}\mathbb{E}_{x\sim
p_{\mathrm{data}}}[\log p_{\theta^{\prime}}(x)],$ (1)
where we break ties by minimizing $\|\theta^{\star}\|$. Typically, such
optimal parameters yield a model $p_{\theta^{\star}}$ which closely
approximates the oracle ground truth distribution $p_{\mathrm{data}}$, but
doesn’t equal it exactly; accordingly, we define the Wasserstein-2 distance
between the distributions to be
$\varepsilon:=d_{W}(p_{\theta^{\star}},p_{\mathrm{data}}).$ (2)
The model weights for the first generation are naturally defined according to
the optimization
$\theta_{0}^{n}:=\operatorname*{arg\,max}_{\theta^{\prime}\in\Theta}[\mathbb{E}_{x\sim\hat{p}_{\mathrm{data}}}[\log
p_{\theta^{\prime}}(x)]].$ (3)
This corresponds to training on the finite subset
$\mathcal{D}_{\mathrm{real}}$. Next, let us suppose that the model weights
from generation $t$ are denoted $\theta_{t}^{n}$. We will formalize a
procedure for updating these weights for the next generation to obtain
$\theta_{t+1}^{n}$. For this, we need to define our correction function, and
then we will use it to define the weight update.
###### Definition 4.1.
For any probability distribution $p_{\theta}$ in our model class and any
$\gamma\in\mathbb{R}_{\geq 0}$, we define the _correction of strength
$\gamma$_ of $p_{\theta}$ to be the distribution
$\pi_{\gamma}p_{\theta}(x):=\frac{{p_{\theta}}(x)+\gamma
p_{\theta^{\star}}(x)}{1+\gamma},$ (4)
where $p_{\theta^{\star}}$ is defined in (1). For any augmentation percentage
$\lambda\geq 0$, we define the _weight update_ mapping to be
$\displaystyle\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta)$
$\displaystyle:=\operatorname*{local\,argmax}_{\theta^{\prime}\in\Theta}\hat{\mathcal{H}}(\theta,\theta^{\prime})$
(5)
$\displaystyle:=\operatorname*{local\,argmax}_{\theta^{\prime}\in\Theta}\Big{[}\mathbb{E}_{x\sim\hat{p}_{\mathrm{data}}}[\log
p_{\theta^{\prime}}(x)]+\lambda\,\mathbb{E}_{x\sim\widehat{\pi_{\gamma}p_{\theta}}}[\log
p_{\theta^{\prime}}(x)]\Big{]},$
where $\hat{p}_{\mathrm{data}}$ and $\widehat{\pi_{\gamma}p_{\theta}}$ are
empirical distributions of size $n$ and $\lfloor\lambda\cdot n\rfloor$
respectively.
To continue our discussion from before, our iterative weight update is defined
as $\theta_{t+1}^{n}:=\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta_{t}^{n})$.
Note that we use a global maximization in (3) when defining the initial
parameters $\theta_{0}^{n}$, but a local maximization when computing
our parameter update in (5). This difference is analogous to the differences
between how model weights update during initial training, where parameter
updates are more global, and during fine-tuning, where parameter updates are
more local.
#### 4.1.1 Understanding the correction $\pi_{\gamma}p_{\theta}(x)$
For $\gamma=0$, the correction mapping in (4) simplifies to
$\pi_{0}p_{\theta}=p_{\theta}$, which is just the original distribution; this
corresponds to no correction at all. For $\gamma=1$, it is
$\pi_{1}p_{\theta}=(p_{\theta}+p_{\theta^{\star}})/2$. And for
$\gamma=\infty$, it is $\pi_{\infty}p_{\theta}=p_{\theta^{\star}}$, which
corresponds to the optimal distribution. So as $\gamma$ increases from $0$ to
$\infty$, the distribution $\pi_{\gamma}p_{\theta}$ has a likelihood profile
that matches $p_{\theta}$ less, and $p_{\theta^{\star}}$ more. As
$p_{\theta^{\star}}$ is the optimal model in our generative model class, this
means that as $\gamma$ increases from $0$ to $\infty$, we have that
$\pi_{\gamma}p_{\theta}(x)$ is a PDF which better represents the target
likelihood that we want to estimate through training the generative model.
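Since $\pi_{\gamma}p_{\theta}$ is a two-component mixture, exact sampling from it reduces to a coin flip: with probability $1/(1+\gamma)$ keep a draw from $p_{\theta}$, otherwise draw from $p_{\theta^{\star}}$. A sketch, where 1-D Gaussians are hypothetical stand-ins (a drifted model at mean $2$, an optimal model at mean $0$):

```python
import random

random.seed(1)

# The corrected distribution pi_gamma p_theta = (p_theta + gamma p_theta*)/(1+gamma)
# is a mixture, so sampling from it is a Bernoulli choice between components.
# The Gaussians below are stand-ins: p_theta has drifted to mean 2, while the
# optimal p_theta* is centered at the target mean 0.

def sample_corrected(sample_model, sample_optimal, gamma):
    if random.random() < 1.0 / (1.0 + gamma):
        return sample_model()      # keep the current model's sample
    return sample_optimal()        # replace it with an "ideal" sample

model = lambda: random.gauss(2.0, 1.0)     # drifted p_theta
optimal = lambda: random.gauss(0.0, 1.0)   # optimal p_theta*

for gamma in (0.0, 1.0, 100.0):
    draws = [sample_corrected(model, optimal, gamma) for _ in range(20000)]
    # The mixture mean should be near (2 + gamma * 0) / (1 + gamma).
    print(gamma, sum(draws) / len(draws))
```

As $\gamma$ grows, the empirical mean moves from the drifted model's mean toward the target's, matching the interpolation described above.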
In our theoretical formulation, we consider correction functions that correct
the probability distribution $p_{\theta}$, rather than the more intuitive (and
practical) case of a correction function that corrects individual points that
the distribution is defined over. In Appendix C, we specify sufficient
conditions under which a pointwise correction function is guaranteed to
correspond to a distribution-wise correction function of the same form as
those which we consider in our theoretical study and therefore can enjoy the
theoretical stability guarantees we prove. We also provide a concrete example
of a projection function, in the Gaussian case, which provably satisfies those
conditions. We conduct a series of experiments on this toy example in Section
5.
#### 4.1.2 Understanding the weight update
$\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta)$
The weight update $\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta)$ in (5) is a
formalization of the intended output of fine-tuning $p_{\theta}$ on
$\mathcal{D}_{\mathrm{real}}\cup\mathcal{D}_{\mathrm{synth}}$, where
$\mathcal{D}_{\mathrm{real}}=\\{x_{i}\\}_{i=1}^{n}$ is the ground truth
dataset of size $n$, and
$\mathcal{D}_{\mathrm{synth}}=\\{\tilde{x}_{i}:\tilde{x}_{i}\sim\widehat{\pi_{\gamma}p_{\theta}}\\}_{i=1}^{\lfloor\lambda\cdot
n\rfloor}$ is the synthesized-and-corrected dataset of size
$\lfloor\lambda\cdot n\rfloor$. In other words, in an ideal run of stochastic
gradient descent fine-tuning, the model weights $\theta$ should update to
$\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta)$, as defined in (5), when
trained on $\mathcal{D}_{\mathrm{real}}\cup\mathcal{D}_{\mathrm{synth}}$.
### 4.2 Assumptions
In order to prove our main result, we need some regularity assumptions about
the learning procedure. Informally speaking, we will assume that the class of
generative models that we consider is smoothly parameterized by its model
weights; the loss landscape is concave near the ideal model weights; and the
class of generative models does an increasingly good job approximating the
target data distribution as the dataset size increases. We formally quantify
and state these hypotheses in Assumption 4.2.
###### Assumption 4.2.
The following are true.
1. 1.
There exists some $L>0$ such that, for all $\theta$ sufficiently close to
$\theta^{\star}$, the mapping $x\mapsto\nabla_{\theta}^{2}\log p_{\theta}(x)$
is $L$-Lipschitz.
2. 2.
The mapping $\theta\mapsto\mathbb{E}_{x\sim p_{\mathrm{data}}}[\log
p_{\theta}(x)]$ is continuously twice differentiable locally around
$\theta^{\star}$, and there exists some $\alpha>0$ such that
$\mathbb{E}_{x\sim p_{\text{data}}}\left[\nabla_{\theta}^{2}\log
p_{\theta}(x)\right]|_{\theta^{\star}}\preceq-\alpha I_{d}\prec 0.$
3. 3.
There exist $a,b,\varepsilon_{\text{OPT}}\geq 0$ and a neighborhood $U$ of
$\theta^{\star}$ such that, for any $\delta\in(0,1)$, with probability
$1-\delta$ over the samplings, we have (the map
$\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}$ is defined similarly to
$\pi_{\gamma}\mathcal{G}_{\lambda}^{n}$ in (5), but with
$\hat{p}_{\mathrm{data}}$ replaced by $p_{\mathrm{data}}$, and with
$\widehat{\pi_{\gamma}p_{\theta}}$ replaced by $\pi_{\gamma}p_{\theta}$; see
Appendix A for details. This estimate is identical to the analogous
Assumption 3 in (Bertrand et al., 2024), the only difference being that it is
applied to our iterative fine-tuning update function; see Appendix B for
further discussion):
$\|\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta)-\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)\|\leq\varepsilon_{\text{OPT}}+\frac{a}{\sqrt{n}}\sqrt{\log\frac{b}{\delta}}$
(6)
for all $\theta\in U$ and $n\in\mathbb{N}$. Denote this bound by $\tau(n)$.
In Assumption 4.2 (2), the notation “$\preceq$” corresponds to the Loewner
order on symmetric matrices: we write that $A\preceq B$ if $B-A$ is positive
semi-definite, and $A\prec B$ if $B-A$ is positive definite. In particular,
Assumption 4.2 (2) implies that the matrix $\mathbb{E}_{x\sim
p_{\text{data}}}\left[\nabla_{\theta}^{2}\log
p_{\theta}(x)\right]|_{\theta^{\star}}$ is negative definite, and its largest
eigenvalue is at most $-\alpha$. And Assumption 4.2 (3) mirrors the main
assumption in (Bertrand et al., 2024); it is motivated by generalization
bounds in deep learning, see e.g. (Jakubovitz et al., 2019; Ji et al., 2021).
The interested reader can consult Appendix B for more details on this
assumption.
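Checking a Loewner inequality $A\preceq B$ numerically amounts to verifying that the smallest eigenvalue of $B-A$ is nonnegative. A minimal sketch for $2\times 2$ symmetric matrices, which have closed-form eigenvalues; the Hessian value `H` below is a hypothetical example, not estimated from any model:

```python
import math

# Loewner order for 2x2 symmetric matrices: A <= B iff B - A is positive
# semi-definite, i.e. its smallest eigenvalue is >= 0.

def eig_sym2(m):
    # Eigenvalues of [[a, b], [b, c]] in closed form, smallest first.
    (a, b), (_, c) = m
    disc = math.sqrt((a - c) ** 2 + 4.0 * b * b)
    return ((a + c - disc) / 2.0, (a + c + disc) / 2.0)

def loewner_leq(A, B, tol=1e-12):
    diff = [[B[i][j] - A[i][j] for j in range(2)] for i in range(2)]
    return eig_sym2(diff)[0] >= -tol

# A hypothetical Hessian of the expected log-likelihood at theta*:
H = [[-2.0, 0.5], [0.5, -1.5]]
alpha = 1.0
neg_alpha_I = [[-alpha, 0.0], [0.0, -alpha]]

# Assumption 4.2(2) asks for H <= -alpha * I (in particular H < 0).
print(loewner_leq(H, neg_alpha_I))
```

Here `H` has eigenvalues roughly $-2.31$ and $-1.19$, both at most $-\alpha=-1$, so the check succeeds.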
### 4.3 Iterative Fine-Tuning with Correction
We now have the language to state our main result, which essentially says that
if the initial parameters $\theta_{0}$ are sufficiently close to the optimal
model parameters $\theta^{\star}$, and if the augmentation percentage
$\lambda$ is sufficiently small, then under iterative fine-tuning with
correction, we can expect our subsequent model parameters to stay close to
$\theta^{\star}$.
###### Theorem 4.3 (Stability of Iterative Fine-Tuning with Correction).
Fix an augmentation percentage $\lambda\in\mathbb{R}_{>0}$ and a correction
strength $\gamma\in\mathbb{R}_{\geq 0}$. Suppose we have an iterative fine-
tuning procedure defined by the rule
$\theta_{t+1}^{n}=\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta_{t}^{n})$, and
suppose that Assumption 4.2 holds. Define the constant
$\displaystyle\rho(\lambda)$
$\displaystyle:=\rho(\lambda;\alpha,\varepsilon,L):=\frac{\lambda(\alpha+\varepsilon
L)}{\alpha-\lambda(\alpha+\varepsilon L)}$
and fix any $\delta\in(0,1)$. If $\theta_{0}$ is sufficiently close to
$\theta^{\star}$, and if $\lambda\left(1+\frac{\varepsilon
L}{\alpha}\right)<\frac{1+\gamma}{2+\gamma}$, then
$\rho(\lambda)/(1+\gamma)<1$, and it follows that the stability estimate holds
with probability $(1-\delta)^{t}$:
$\displaystyle\|\theta_{t}^{n}$ $\displaystyle-\theta^{\star}\|$ (7)
$\displaystyle\leq\tau(n)\sum_{i=0}^{t}\left(\frac{\rho(\lambda)}{1+\gamma}\right)^{i}+\left(\frac{\rho(\lambda)}{1+\gamma}\right)^{t}\|\theta_{0}^{n}-\theta^{\star}\|$
for all $t>0$.
We prove Theorem 4.3 in Appendix A.
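The quantities in Theorem 4.3 are simple to evaluate directly. Under assumed (hypothetical) constants $\alpha$, $\varepsilon$, $L$, $\tau(n)$, the snippet below computes the contraction factor $\rho(\lambda)/(1+\gamma)$, the admissibility threshold for $\lambda$, and the limiting value $\tau(n)/(1-\rho(\lambda)/(1+\gamma))$ of the geometric series in (7):

```python
# Numeric illustration of Theorem 4.3. The constants alpha, eps, L, tau are
# hypothetical placeholders, not values estimated from any real model.

def rho(lam, alpha, eps, L):
    return lam * (alpha + eps * L) / (alpha - lam * (alpha + eps * L))

def contraction(lam, gamma, alpha, eps, L):
    # The factor rho(lambda) / (1 + gamma) appearing in the bound (7).
    return rho(lam, alpha, eps, L) / (1.0 + gamma)

alpha, eps, L, lam, tau = 1.0, 0.1, 1.0, 0.3, 0.01

for gamma in (0.0, 1.0, 10.0):
    # Admissibility: lambda * (1 + eps*L/alpha) < (1+gamma)/(2+gamma);
    # the threshold approaches twice its gamma = 0 value as gamma grows.
    lam_max = (1.0 + gamma) / (2.0 + gamma) / (1.0 + eps * L / alpha)
    c = contraction(lam, gamma, alpha, eps, L)
    # Limiting error floor of the geometric series in (7).
    floor = tau / (1.0 - c)
    print(gamma, lam_max, c, floor)
```

With these placeholder constants, increasing $\gamma$ both enlarges the admissible range of $\lambda$ and shrinks the contraction factor, hence the error floor, which is the content of Corollary 4.4.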
###### Corollary 4.4.
Under the assumptions from Theorem 4.3, iterative fine-tuning with any amount
of correction outperforms iterative fine-tuning without correction–in the
sense that it is exponentially more stable, and it results in better model
weights.
###### Proof of Corollary 4.4.
We apply Theorem 4.3 with $\gamma=0$, which corresponds to no correction, as
well as with $\gamma>0$, which corresponds to any amount of correction. For
any $\gamma>0$, we notice that the RHS of (7) is strictly smaller than when
$\gamma=0$. This guarantees better stability as $t\to\infty$, as well as model
weights $\theta_{t}^{n}$ which are closer to the optimal model weights
$\theta^{\star}$. ∎
###### Remark 4.5.
While Theorem 4.3 addresses the practical setting of finite sampling, we prove
a stronger convergence result _unconditional in probability_ (i.e.,
$\delta=0$) in the case that we assume an infinite sampling budget. This
result is key to our proof of Theorem 4.3; the interested reader is referred
to Theorem A.8 in Appendix A.
###### Example 4.6.
If we apply Theorem 4.3 with correction strength $\gamma=0$, then the
iterative fine-tuning procedure trains successively on a combination of raw
synthetic data that has not been corrected using a correction function and
ground truth data. This is exactly the case considered in (Bertrand et al.,
2024). Accordingly, the bound in (7), applied with $\gamma=0$, recovers their
result. (Readers may notice that this bound differs slightly from its
analogue, Theorem 2 in (Bertrand et al., 2024); this is because our bound
holds with decreasing probability as $t\to\infty$, which corrects an error in
that work. See Appendices A and D.)
###### Example 4.7.
If we apply Theorem 4.3 with correction strength $\gamma\to\infty$, then the
bound (7) in Theorem 4.3 limits to $\tau(n)$. This implies that the practical
iterate $\theta_{t}^{n}$ approaches the ideal model parameters, remaining at
worst some constant away that depends on the error from the optimization
procedure, as well as the statistical error from using finitely many ($n$)
ground truth data samples.
Note that Theorem 4.3 relies on the assumption that the initial model
parameters $\theta_{0}$ are sufficiently close to the ideal model parameters
$\theta^{\star}$, and also that the augmentation percentage $\lambda$ is
sufficiently small. We hypothesize that these assumptions can be relaxed in
the case where a correction function participates in the iterative fine-tuning
procedure–intuitively, the correction function should compensate for errors
that arise from $\theta_{0}^{n}$ being worse, as well as errors that arise
from incorporating more synthetic data. We frame this in the following
conjecture.
###### Conjecture 4.8.
In the case of iterative fine-tuning with correction, we can relax how close
the initial model parameters $\theta_{0}^{n}$ need to be to the optimal model
parameters $\theta^{\star}$, as well as choose a larger synthetic augmentation
percentage $\lambda$, while still retaining the improved stability estimate
(7) in Theorem 4.3.
We provide empirical evidence for Conjecture 4.8 in Section 6 on the human
motion synthesis task. In fact, Theorem 4.3 represents partial progress
towards this conjecture. Namely, according to Theorem 4.3, for large
correction strength $\gamma$, we can effectively choose a synthetic
augmentation percentage that is twice as large as we would be able to without
any correction, and still be able to meet the assumptions of the theorem. This
is because $\lim_{\gamma\to\infty}\frac{1+\gamma}{2+\gamma}=1$, which is twice
as large as the bound when $\gamma=0$.
## 5 Toy Example: Gaussian
We now use a toy Gaussian example to illustrate Theorem 4.3. We take as our
ground truth dataset $50$ points sampled from a $2$-dimensional isotropic
Gaussian centered at the origin. This is our target distribution, i.e.,
$\theta^{\star}=((0,0),I_{2})$. We compute the sample mean and covariance
using these $50$ points. In our notation from Section 4, these initial
parameters correspond to
$\theta_{0}^{50}=(\mu_{0},\Sigma_{0})\in\mathbb{R}^{6}$. We fix our synthetic
augmentation percentage to be $\lambda=0.5$. This means that given the sample
mean and covariance $\theta_{t}^{50}=(\mu_{t},\Sigma_{t})$ from generation
$t$, we synthesize a new dataset
$\tilde{\mathcal{D}}_{\mathrm{synth}}=\\{y_{i}\sim\mathcal{N}(\mu_{t},\Sigma_{t})\\}_{i=1}^{25}$
of size $0.5\cdot 50=25$.
Figure 2: Empirical results from our Gaussian toy example. The graph
demonstrates that increasing the correction strength $\gamma$, with a fixed
augmentation ratio of $\lambda=0.5$, improves performance and stability after
self-consuming iterations.
For the correction function, we first sample $25$ points from the target
distribution, say
$\mathcal{D}_{\mathrm{temp}}=\\{z_{i}\sim\mathcal{N}((0,0),I_{2})\\}_{i=1}^{25}$.
We then compute a minimum total distance matching function
$m:\tilde{\mathcal{D}}_{\mathrm{synth}}\to\mathcal{D}_{\mathrm{temp}}$, and
take the correction of a sampled point to be
$\pi_{\gamma}(y_{i}):=\frac{1}{1+\gamma}\cdot
y_{i}+\frac{\gamma}{1+\gamma}\cdot m(y_{i}).$ Finally, we define
$\mathcal{D}_{\mathrm{synth}}:=\\{\pi_{\gamma}(y_{i})\\}_{i=1}^{25}$. In order
to simulate the case of fine-tuning, we logarithmically accrue synthetic data
points during this procedure. Finally, we obtain the updated model parameters
$\theta_{t}^{50}$ by computing the sample mean and covariance on this
augmented dataset.
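The toy loop above can be sketched as follows. For brevity this sketch is 1-D rather than 2-D and omits the logarithmic accrual of synthetic points; in one dimension, matching sorted lists realizes the minimum-total-distance matching exactly:

```python
import random

random.seed(2)

# 1-D sketch of the Section 5 toy experiment (the paper uses a 2-D isotropic
# Gaussian and accrues synthetic points logarithmically; both are simplified
# away here).

def fit(data):
    # Sample mean and (biased) sample variance.
    n = len(data)
    mu = sum(data) / n
    return mu, sum((x - mu) ** 2 for x in data) / n

def correction(synth, gamma):
    # Match each synthetic point to a fresh draw from the target N(0, 1) by
    # sorting (optimal matching in 1-D), then interpolate toward its match
    # with weight gamma / (1 + gamma).
    temp = [random.gauss(0.0, 1.0) for _ in synth]
    pairs = zip(sorted(synth), sorted(temp))
    return [(y + gamma * z) / (1.0 + gamma) for y, z in pairs]

def toy_loop(T, lam, gamma, n=50):
    real = [random.gauss(0.0, 1.0) for _ in range(n)]
    mu, var = fit(real)
    for _ in range(T):
        k = int(lam * n)                                  # 25 when lam = 0.5
        synth = correction(
            [random.gauss(mu, var ** 0.5) for _ in range(k)], gamma)
        mu, var = fit(real + synth)                       # refit parameters
    return mu, var

for gamma in (0.0, 0.5, 1.0):
    print(gamma, toy_loop(T=50, lam=0.5, gamma=gamma))
```

With $\gamma=0$ the corrector is the identity on the sampled values, and larger $\gamma$ pulls the refit parameters toward the target $(0,1)$.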
We present our results in Figure 2. At each iteration $t$, we present the
Wasserstein distance between the origin-centered isotropic Gaussian target
distribution and the distribution defined by the parameters $\theta_{t}^{n}$.
Our empirical results illustrate how increasing the correction strength adds
stability and results in convergence near better Wasserstein scores in later
generations, in accordance with Theorem 4.3. The experiments also demonstrate
how even a very small correction strength can improve performance over the
baseline, in accordance with our claim of exponential improvement in Corollary
4.4.
## 6 Human Motion Synthesis
Figure 3: Results from our human motion experiments on iterative fine-tuning
with self-correction. These graphs show evaluation metrics for the last
checkpoint for every generation. This is the checkpoint used for sampling in
the iterative fine-tuning experiments, and it is also the checkpoint where
training is resumed with this new partially synthesized dataset. We can see
that with self-correction, the iterative fine-tuning procedure more stably
converges to a better FID score, and more quickly. When the dataset size is
smaller ($n=64$, above) we can see that iterative fine-tuning with no self-
correction has a flat Matching score, as well as diverging FID and Diversity
scores, indicating model collapse. And when the dataset size is larger
($n=2794$, below), there is less collapse for iterative fine-tuning with no
self-correction, although the variance of the FID score is worse, as is the
average FID across generations. In both cases, we see that iterative fine-
tuning with self-correction outperforms iterative fine-tuning with no self-
correction, and is competitive with the baseline after many generations.
Theorem 4.3 states that, in theory, iterative fine-tuning with correction
should be more stable than iterative fine-tuning without correction.
Crucially, the stability estimates that we prove rely on the dataset size, the
synthetic augmentation percentage, how expressible the generative model class
is, and having an idealized correction function. To validate how our theory
works beyond toy examples, we conduct a case study on human motion synthesis
with diffusion models (Tevet et al., 2023). We believe this is a natural
setting to test our iterative fine-tuning with correction framework, because
synthesizing natural motions is a challenging problem, but there is a natural
and intuitive way to automatically correct them at scale–namely, using a
physics simulator.
### 6.1 Generative Model
For our generative model, we use the Human Motion Diffusion Model (MDM) (Tevet
et al., 2023). This is a classifier-free diffusion-based generative model for
the text-to-motion generation task, where the model receives as input a
description of a motion sequence (e.g. “get down on all fours and crawl across
the floor”), and outputs a sequence of skeleton poses which attempt to embody
that prompt. Synthesizing human motion is challenging not only because of the
diverse and compositional text prompts, but also because physical
plausibility (e.g. avoiding foot skating, floating, or penetrating a surface)
is not explicitly enforced by deep generative models.
### 6.2 Physics Simulator as Self-Correction Function
For our self-correction function, we use Universal Humanoid Control (UHC) (Luo
et al., 2021), which is an imitation policy that operates inside the MuJoCo
physics simulator (Todorov et al., 2012). Given an input sequence of humanoid
skeleton poses, UHC attempts to imitate the motion sequence, constrained by
the laws of physics imposed by the simulator, and it outputs a new motion
sequence that is the closest physically valid approximation of the input it
can produce. For example, if an input motion sequence violates the laws of physics by
having a foot penetrate through the floor, then the motion sequence output by
UHC will attempt to remove that physically impossible artifact while
maintaining the semantic integrity of the original input motion. We use VPoser
(Pavlakos et al., 2019) and SMPL (Loper et al., 2015) to translate joint
representations between the human motion generator and the physics simulator.
The physics simulator allows us to self-correct a synthesized motion
automatically. Our underlying assumption is that, by enforcing physical
plausibility (via the simulator) and closeness to the synthesized motion (via
the imitation objective), the self-correction function behaves as closely as
possible to an idealized corrector.
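To make the corrector's role concrete, here is a minimal, self-contained sketch of the idea. The real pipeline uses MDM to synthesize motions and UHC inside MuJoCo to correct them; the toy `correct_motion` below is a hypothetical stand-in that removes only one artifact class (floor penetration) by projecting joint heights onto the valid half-space, while leaving the rest of the motion untouched.

```python
# Toy stand-in for the self-correction function (the paper uses UHC in MuJoCo).
# A motion is a list of frames; each frame is a list of (x, y, z) joint positions.

def correct_motion(motion, floor=0.0):
    """Project every joint height onto the physically valid half-space z >= floor.

    This mimics, in the most minimal way possible, how an imitation policy in a
    physics simulator removes artifacts such as floor penetration while staying
    close to the input motion: joints already above the floor are left untouched.
    """
    return [[(x, y, max(z, floor)) for (x, y, z) in frame] for frame in motion]

# One frame whose first joint (a foot, say) penetrates the floor by 5 cm.
synthesized = [[(0.1, 0.2, -0.05), (0.0, 0.0, 1.7)]]
corrected = correct_motion(synthesized)
# corrected == [[(0.1, 0.2, 0.0), (0.0, 0.0, 1.7)]]
```

Note that this toy corrector is idempotent: applying it to an already-valid motion returns the motion unchanged, which is the behavior one would want from an idealized corrector.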
Figure 4: How does the self-correction operation affect iterative fine-tuning,
qualitatively? Here we present some visualizations. The prompt which describes
the ground truth motion, and which we use to generate the three other motions,
is: “a person stands with feet wide, stretches both hands up over his head and
then swings down by the waist and hangs arms down before standing up”. We can
see that the iterative fine-tuning model produces a motion where the human
moves closer to the camera than the others; this is evidence of model
collapse, as moving feet is irrelevant to the prompt. Additionally, this
motion produces single frames that suddenly snap to a physically impossible
position; note the leg penetration through the ground plane. These negative
artifacts do not exist in the motions synthesized from the ground truth,
baseline model, or iterative fine-tuning with self-correction model. Lastly,
we note that the iterative fine-tuning motion depicted here is semantically
similar to crawling. We observe in our experiments with smaller dataset sizes
that the iterative fine-tuning model generates less diverse outputs than the
baseline model and the iterative fine-tuning with self-correction model, and
that this crawling pattern appears disproportionately often in the iterative
fine-tuning model's outputs. Each snapshot is
taken at exactly frame 105 of their respective videos. The two motions on the
right come from models that were iteratively fine-tuned for 50 generations,
with a train set of size $n=64$, and a synthetic augmentation percentage of
$25\%$. For all pictures of the human, the camera is fixed at the same
position, and for consistency the image is not resized.
### 6.3 Experimental setup
We preprocess the MoVi (Ghorbani et al., 2021) subset of HumanML3D (Guo et
al., 2022) using the official code implementation of HumanML3D. We filter out
movements involving interactions with chairs, as UHC by default does not
handle human-object interactions. We take as our train split the train split
from HumanML3D, intersected with our filtered subset of MoVi, and likewise for
the test split. This procedure yields a train set of size $n=2794$ and a test
set of size $546$. We further randomly select a smaller training set of
$n\in\\{64,128,256\\}$ examples, to simulate the more challenging scenario
when the initial generative model is sub-optimal (due to data scarcity). The
smaller data also enables us to explore larger synthetic augmentation
percentage due to compute constraints. From here, the iterative re-training
procedure follows Algorithm 1, which we spell out for this concrete
experimental setup.
We first train on the ground truth train split until the model is nearly
converged, using all the default hyperparameters from MDM. We evaluate and
save this last checkpoint from generation $0$. From here, for each generation
$t\in\\{1,2,\dots,50\\}$, we run three sets of experiments.
A. _Baseline_ : fine-tune the latest checkpoint from generation $t-1$ for $m$
batches on the ground truth dataset $\mathcal{D}_{\mathrm{real}}$.
B. _Iterative fine-tuning_ : fine-tune the latest checkpoint from generation
$t-1$ on $\mathcal{D}_{\mathrm{real}}\cup\mathcal{D}_{\mathrm{synth},t-1}$ for
$m$ batches. Here, $\mathcal{D}_{\mathrm{synth},t-1}$ is a synthetic dataset
of size $\lfloor\lambda\cdot n\rfloor$ generated from the checkpoint for
generation $t-1$, using randomly chosen prompts from the train split.
C. _Iterative fine-tuning with self-correction_ : fine-tune the latest
checkpoint from generation $t-1$ on
$\mathcal{D}_{\mathrm{real}}\cup\mathrm{UHC}(\mathcal{D}_{\mathrm{synth},t-1})$
for $m$ batches. Here, $\mathrm{UHC}(\mathcal{D}_{\mathrm{synth},t-1})$
denotes a synthetic dataset of size $\lfloor\lambda\cdot n\rfloor$ generated
from the latest checkpoint for generation $t-1$, using randomly chosen prompts
from the train split, which is then corrected by UHC.
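The per-generation dataset construction for the three arms can be sketched as follows. This is our own illustrative stand-in, not the paper's code: `generate` and `correct` are hypothetical placeholders for sampling from the generation-$(t-1)$ checkpoint and for UHC, and a dataset is represented as a list of (prompt, motion) pairs.

```python
import random

def build_training_set(d_real, lam, generate=None, correct=None, seed=0):
    """Return the generation-t training set for one of the three arms.

    - baseline:                     lam == 0 (or no generator) -> D_real only
    - iterative fine-tuning:        D_real + floor(lam * n) synthetic pairs
    - ... with self-correction:     additionally pass each synthetic motion
                                    through `correct` (UHC in the paper).
    """
    if generate is None or lam == 0:
        return list(d_real)
    rng = random.Random(seed)
    n_synth = int(lam * len(d_real))  # floor(lam * n)
    # Randomly chosen prompts from the (real) train split.
    prompts = [rng.choice(d_real)[0] for _ in range(n_synth)]
    synth = [(p, generate(p)) for p in prompts]
    if correct is not None:
        synth = [(p, correct(m)) for p, m in synth]
    return list(d_real) + synth
```

With $n=64$ and $\lambda=1.0$, for instance, the per-generation training set doubles to 128 prompt-motion pairs.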
We experiment with synthetic augmentation percentages
$\lambda\in\\{0.05,0.10,0.15,0.20,0.25\\}$ on the larger dataset; we set the
number of batches seen during generation $0$ to be $3125$, and the number of
batches seen for each later generation to be $m=625$. Separately, we
experiment with synthetic augmentation percentages
$\lambda\in\\{0.25,0.50,0.75,1.00\\}$ on the smaller datasets; we set the
number of batches seen during generation $0$ to be $78k$ for dataset size
$64k$ (so $k\in\\{1,2,4\\}$), and the number of batches seen for each later
generation $t>0$ to be
$m=16$. We choose to control how many data points the model sees across each
generation, rather than controlling some other quantity like the number of
epochs, as this allows each experiment to compare against its baseline in a
controlled way, which in turn allows them to compare against each other in a
controlled way.
We compute each evaluation once per checkpoint using the evaluation
script provided in the original MDM codebase. Regardless of the train split
size, we perform sampling for evaluation using all 546 motion sequences from
the test split, since the FID score is sensitive to generated dataset size. We
use the same hyperparameters as those used for MDM, including batch size $64$,
AdamW (Loshchilov & Hutter, 2019) with learning rate $10^{-4}$, and
classifier-free guidance parameter $2.5$. For UHC, we use the uhc_explicit
model for imitation.
### 6.4 Quantitative Analysis of Results
For each of these experiments we report the metrics from MDM, as used by Guo
et al. (2022): FID measures how similar the distribution of generated motions
is to the ground truth distribution; Diversity measures the variance of the
generated motions; and Matching Score measures how well the generated motions
embody the given text prompt. In Figure 3 we present results from experiments
on our $64$-size dataset with $100\%$ synthetic augmentation, as well as our
$2794$-size dataset with $25\%$ synthetic augmentation.
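For intuition about the FID metric: it is the Fréchet distance between Gaussians fit to feature embeddings of the real and generated data. A minimal one-dimensional sketch (our own illustration, with the embedding network omitted for brevity) reduces to a closed form:

```python
import statistics

def fid_1d(real, generated):
    """Frechet distance between 1-D Gaussian fits of two samples.

    In general, FID = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^{1/2});
    in one dimension this collapses to (mu_r - mu_g)^2 + (s_r - s_g)^2.
    The real metric first maps motions through a pretrained embedding network.
    """
    mu_r, mu_g = statistics.fmean(real), statistics.fmean(generated)
    s_r, s_g = statistics.pstdev(real), statistics.pstdev(generated)
    return (mu_r - mu_g) ** 2 + (s_r - s_g) ** 2

fid_1d([0.0, 1.0, 2.0], [0.0, 1.0, 2.0])  # -> 0.0 for identical samples
```

Lower is better, and FID is zero exactly when the two Gaussian fits coincide; this is why a rising, high-variance FID across generations is read as evidence of collapse.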
Our experimental results confirm our theoretical results, that iterative fine-
tuning with self-correction outperforms iterative fine-tuning without self-
correction, in the sense that the graphs are generally more stable across
generations, and approach better evaluation metric values. In particular,
Theorem 4.3 and Corollary 4.4 claim that any amount of idealized self-
correction will improve the stability bound during iterative fine-tuning. Our
results in Figure 3 demonstrate that the FID score is lower and more stable
across generations when applying self-correction. We conduct experiments
across multiple seeds, and we find empirically that this general phenomenon
holds consistently, where the self-correction technique consistently yields
improved training dynamics over iterative fine-tuning with no correction.
Graphs from these runs can be found in Appendix G.
Our experimental results also provide empirical evidence for Conjecture 4.8.
Observe that in the baseline experiments in Figure 3, the FID score decreases
across generations, which indicates that the initial model parameters
$\theta_{0}^{n}$ are not that close to the optimal model parameters
$\theta^{\star}$; additionally, the augmentation percentages considered in the
graph are $25\%$ and $100\%$. Conjecture 4.8 claims that performing self-
correction during iterative fine-tuning improves performance, even when the
initial model weights are sub-optimal and simultaneously the synthetic
augmentation percentage is large. This claim is confirmed by Figure 3. We
direct the curious reader to Appendix F, where we present graphs for all of
the above listed training set sizes and augmentation percentages, providing
additional empirical evidence for Theorem 4.3, Corollary 4.4, and Conjecture
4.8.
### 6.5 Qualitative Analysis of Results
We visually inspect the generated human motion sequences in order to analyze
what concrete effect the self-correction has on iterative fine-tuning. We find
that the correctness and diversity of synthesized motions are improved by the
self-correction procedure, in agreement with our quantitative analysis in
Subsection 6.4. We present snapshots of our synthesized motions in Figure 4,
and we analyze the motions in the caption. In short, we find that physics-
disobeying artifacts such as floor penetration or floating become more
pronounced without the self-correction. We also find that in the model without
self-correction, the humanoid sometimes performs movements completely
unrelated to the prompt; our model with self-correction fixes these negative
phenomena. We direct the curious reader to Appendix E, where we present more
examples from our qualitative analysis, as well as our project webpage, where
we provide side-by-side video comparisons.
## 7 Conclusion
Our paper investigates the learning of generative models when the training
data includes machine-generated content. We study how self-correction
functions, which automatically correct synthesized data points to be more
likely under the true data distribution, can stabilize self-consuming
generative model training. Our theoretical results show that self-correction
leads to exponentially more stable model training and smaller variance, which
we illustrate with a Gaussian toy example. We then demonstrate how physics
simulators can serve as a self-correction function for the challenging human
motion synthesis task, where models trained with our self-correcting self-
consuming loops generate higher quality motions, and manage to avoid collapse
even at a high synthetic data to real data ratio. Future work includes
exploring self-correcting functions for more diverse applications, such as
text-to-image and text-to-video generation, and investigating when self-
consuming training may lead to overall better generative models.
## Acknowledgments
We would like to thank Stephen H. Bach, Quentin Bertrand, Carsten Eickhoff,
Jeff Hoffstein, Zhengyi Luo, Singh Saluja, and Ye Yuan for useful discussions.
This work is supported by the Samsung Advanced Institute of Technology, Honda
Research Institute, and a Richard B. Salomon Award for C.S. Our research was
conducted using computational resources at the Center for Computation and
Visualization at Brown University.
## References
* Alemohammad et al. (2023) Alemohammad, S., Casco-Rodriguez, J., Luzi, L., Humayun, A. I., Babaei, H., LeJeune, D., Siahkoohi, A., and Baraniuk, R. G. Self-consuming generative models go mad. _arXiv preprint arXiv:2307.01850_ , 2023.
* Azizi et al. (2023) Azizi, S., Kornblith, S., Saharia, C., Norouzi, M., and Fleet, D. J. Synthetic data from diffusion models improves imagenet classification. _arXiv preprint arXiv:2304.08466_ , 2023.
* Bertrand et al. (2024) Bertrand, Q., Bose, A. J., Duplessis, A., Jiralerspong, M., and Gidel, G. On the stability of iterative retraining of generative models on their own data. In _The Twelfth International Conference on Learning Representations_ , 2024. URL https://openreview.net/forum?id=JORAfH2xFd.
* Ghorbani et al. (2021) Ghorbani, S., Mahdaviani, K., Thaler, A., Kording, K., Cook, D. J., Blohm, G., and Troje, N. F. Movi: A large multi-purpose human motion and video dataset. _Plos one_ , 16(6):e0253157, 2021.
* Guo et al. (2022) Guo, C., Zou, S., Zuo, X., Wang, S., Ji, W., Li, X., and Cheng, L. Generating diverse and natural 3d human motions from text. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , pp. 5152–5161, 6 2022.
* He et al. (2023) He, R., Sun, S., Yu, X., Xue, C., Zhang, W., Torr, P., Bai, S., and Qi, X. Is synthetic data from generative models ready for image recognition? In _ICLR_ , 2023.
* Jahanian et al. (2021) Jahanian, A., Puig, X., Tian, Y., and Isola, P. Generative models as a data source for multiview representation learning. _arXiv preprint arXiv:2106.05258_ , 2021.
* Jakubovitz et al. (2019) Jakubovitz, D., Giryes, R., and Rodrigues, M. R. Generalization error in deep learning. In _Compressed Sensing and Its Applications: Third International MATHEON Conference 2017_ , pp. 153–193. Springer, 2019.
* Ji et al. (2021) Ji, K., Zhou, Y., and Liang, Y. Understanding estimation and generalization error of generative adversarial networks. _IEEE Transactions on Information Theory_ , 67(5):3114–3129, 2021.
* Loper et al. (2015) Loper, M., Mahmood, N., Romero, J., Pons-Moll, G., and Black, M. J. SMPL: A skinned multi-person linear model. _ACM Trans. Graphics (Proc. SIGGRAPH Asia)_ , 34(6):248:1–248:16, October 2015.
* Loshchilov & Hutter (2019) Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. In _International Conference on Learning Representations_ , 2019. URL https://openreview.net/forum?id=Bkg6RiCqY7.
* Luo et al. (2021) Luo, Z., Hachiuma, R., Yuan, Y., and Kitani, K. Dynamics-regulated kinematic policy for egocentric pose estimation. In _Advances in Neural Information Processing Systems_ , 2021.
* Martínez et al. (2023) Martínez, G., Watson, L., Reviriego, P., Hernández, J. A., Juarez, M., and Sarkar, R. Towards understanding the interplay of generative artificial intelligence and the internet. _arXiv preprint arXiv:2306.06130_ , 2023.
* Pavlakos et al. (2019) Pavlakos, G., Choutas, V., Ghorbani, N., Bolkart, T., Osman, A. A. A., Tzionas, D., and Black, M. J. Expressive body capture: 3d hands, face, and body from a single image. In _Proceedings IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)_ , 2019.
* Perdomo et al. (2020) Perdomo, J., Zrnic, T., Mendler-Dünner, C., and Hardt, M. Performative prediction. In _International Conference on Machine Learning_ , pp. 7599–7609. PMLR, 2020.
* Radford et al. (2021) Radford, A., Kim, J. W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al. Learning transferable visual models from natural language supervision. In _International conference on machine learning_ , pp. 8748–8763. PMLR, 2021.
* Saharia et al. (2022) Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E. L., Ghasemipour, K., Gontijo Lopes, R., Karagol Ayan, B., Salimans, T., et al. Photorealistic text-to-image diffusion models with deep language understanding. _Advances in Neural Information Processing Systems_ , 35:36479–36494, 2022.
* Saunders et al. (2022) Saunders, W., Yeh, C., Wu, J., Bills, S., Ouyang, L., Ward, J., and Leike, J. Self-critiquing models for assisting human evaluators. _arXiv preprint arXiv:2206.05802_ , 2022.
* Schuhmann et al. (2022) Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., et al. Laion-5b: An open large-scale dataset for training next generation image-text models. _Advances in Neural Information Processing Systems_ , 35:25278–25294, 2022.
* Shumailov et al. (2023) Shumailov, I., Shumaylov, Z., Zhao, Y., Gal, Y., Papernot, N., and Anderson, R. The curse of recursion: Training on generated data makes models forget. _arXiv preprint arXiv:2305.17493_ , 2023.
* Tevet et al. (2023) Tevet, G., Raab, S., Gordon, B., Shafir, Y., Cohen-or, D., and Bermano, A. H. Human motion diffusion model. In _The Eleventh International Conference on Learning Representations_ , 2023. URL https://openreview.net/forum?id=SJ1kSyO2jwu.
* Tian et al. (2023) Tian, Y., Fan, L., Chen, K., Katabi, D., Krishnan, D., and Isola, P. Learning vision from models rivals learning vision from data. _arXiv preprint arXiv:2312.17742_ , 2023.
* Todorov et al. (2012) Todorov, E., Erez, T., and Tassa, Y. Mujoco: A physics engine for model-based control. In _2012 IEEE/RSJ International Conference on Intelligent Robots and Systems_ , pp. 5026–5033, 2012. doi: 10.1109/IROS.2012.6386109.
* Welleck et al. (2022) Welleck, S., Lu, X., West, P., Brahman, F., Shen, T., Khashabi, D., and Choi, Y. Generating sequences by learning to self-correct. _arXiv preprint arXiv:2211.00053_ , 2022.
* Wu et al. (2023) Wu, T.-H., Lian, L., Gonzalez, J. E., Li, B., and Darrell, T. Self-correcting llm-controlled diffusion models. _arXiv preprint arXiv:2311.16090_ , 2023.
* Xu et al. (2023) Xu, S., Li, Z., Wang, Y.-X., and Gui, L.-Y. Interdiff: Generating 3d human-object interactions with physics-informed diffusion. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pp. 14928–14940, 2023.
* Yuan et al. (2023) Yuan, Y., Song, J., Iqbal, U., Vahdat, A., and Kautz, J. Physdiff: Physics-guided human motion diffusion model. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_ , 2023.
## Appendix A Mathematical Theory: The Proof of Theorem 4.3
In this appendix, we provide a full account of the mathematical details of the
theorems and their proofs appearing in the main body of the paper.
### A.1 Mathematical Setup and Notation
###### Definition A.1.
Define the optimal model parameters to be
$\theta^{\star}\in\operatorname*{arg\,max}_{\theta^{\prime}\in\Theta}\mathbb{E}_{x\sim
p_{\mathrm{data}}}[\log p_{\theta^{\prime}}(x)],$ (8)
chosen so that $\|\theta^{\star}\|$ has minimal norm within this set. Let
$\theta$ be any model parameters. Then the _correction of strength $\gamma$_
of distribution $p_{\theta}$ towards $p_{\theta^{\star}}$ is a new
distribution, denoted $\pi_{\gamma}p_{\theta}$, defined according to the rule
$\pi_{\gamma}p_{\theta}(x):=\frac{{p_{\theta}}(x)+\gamma
p_{\theta^{\star}}(x)}{1+\gamma}.$
This is illustrated in Figure 5. Let $\theta_{t}$ be the parameters of the
model trained after $t$ generations. We define the _iterative fine-tuning with
correction update mapping_ to be
$\displaystyle\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)$
$\displaystyle:=\operatorname*{local\,argmax}_{\theta^{\prime}\in\Theta}\mathcal{H}(\theta,\theta^{\prime}):=\operatorname*{local\,argmax}_{\theta^{\prime}\in\Theta}\left[\mathbb{E}_{x\sim
p_{\mathrm{data}}}[\log
p_{\theta^{\prime}}(x)]+\lambda\mathbb{E}_{x\sim\pi_{\gamma}p_{\theta}}[\log
p_{\theta^{\prime}}(x)]\right]$ (9)
$\displaystyle\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta)$
$\displaystyle:=\operatorname*{local\,argmax}_{\theta^{\prime}\in\Theta}\hat{\mathcal{H}}(\theta,\theta^{\prime}):=\operatorname*{local\,argmax}_{\theta^{\prime}\in\Theta}\left[\mathbb{E}_{x\sim\hat{p}_{\mathrm{data}}}[\log
p_{\theta^{\prime}}(x)]+\lambda\mathbb{E}_{x\sim{\widehat{\pi_{\gamma}p_{\theta}}}}[\log
p_{\theta^{\prime}}(x)]\right]$ (10)
Notice that in the finite case, we optimize using samples from an empirical
distribution, and therefore incur statistical error from the finite sampling
budget at each generation; this is the practically relevant case. In contrast,
in the infinite case there is zero statistical error, since the parameter
update is done with access to an infinite sampling budget at each generation
$t$. Since the parameter space of the
generative model class might be limited, there might be a small difference
between the distribution corresponding to the optimal parameters and the
target distribution $p_{\mathrm{data}}$; we capture this difference via the
Wasserstein-2 distance and denote
$\varepsilon:=d_{W}(p_{\theta^{\star}},p_{\mathrm{data}}).$ (11)
Let
$\displaystyle\mathcal{H}_{1}(\theta^{\prime}):=\mathbb{E}_{x\sim
p_{\mathrm{data}}}[\log
p_{\theta^{\prime}}(x)],\qquad\mathcal{H}_{2}(\theta,\theta^{\prime}):=\mathbb{E}_{x\sim\pi_{\gamma}p_{\theta}}[\log
p_{\theta^{\prime}}(x)].$ (12)
and note that
$\mathcal{H}(\theta,\theta^{\prime})=\mathcal{H}_{1}(\theta^{\prime})+\lambda\mathcal{H}_{2}(\theta,\theta^{\prime})$.
We first establish that the correction map is truly a mapping of probability
distributions as well as some of its elementary properties.
###### Lemma A.2.
The correction map has the following properties.
1. $\pi_{\gamma}p_{\theta}$ is a probability distribution.
2. Strengths $0,1,\infty$ correspond to $p_{\theta}$, the average of $p_{\theta}$
and $p_{\theta^{\star}}$, and $p_{\theta^{\star}}$, respectively.
3. For any $x\in\mathbb{R}^{n}$, if $\gamma>1$, then
$\|\pi_{\gamma}p_{\theta}(x)-p_{\theta^{\star}}(x)\|\leq\|\pi_{\gamma}p_{\theta}(x)-p_{\theta}(x)\|,$
and if $\gamma<1$, then the inequality is flipped. In other words,
$\pi_{\gamma}p_{\theta}$ is a better estimate of the ideal distribution
$p_{\theta^{\star}}$ than $p_{\theta}$ is precisely when the correction
strength is more than $1$.
###### Proof.
For the first point, $\pi_{\gamma}p_{\theta}$ is a probability distribution
because it is a convex combination of probability distributions. For example,
we can compute that
$\displaystyle\int_{\mathbb{R}^{d}}\pi_{\gamma}p_{\theta}dx$
$\displaystyle=\frac{1}{1+\gamma}\int_{\mathbb{R}^{d}}{p_{\theta}}(x)dx+\frac{\gamma}{1+\gamma}\int_{\mathbb{R}^{d}}{p_{\theta^{\star}}}(x)dx=\frac{1}{1+\gamma}\cdot
1+\frac{\gamma}{1+\gamma}\cdot 1=1.$
The second point follows immediately from the definition of
$\pi_{\gamma}p_{\theta}$. For the third point, we can estimate that
$\displaystyle\|\pi_{\gamma}p_{\theta}(x)-p_{\theta^{\star}}(x)\|$
$\displaystyle=\left\|\frac{p_{\theta}(x)+\gamma
p_{\theta^{\star}}(x)}{1+\gamma}-\frac{p_{\theta^{\star}}(x)(1+\gamma)}{1+\gamma}\right\|$
$\displaystyle=\frac{1}{1+\gamma}\cdot\|p_{\theta}(x)-p_{\theta^{\star}}(x)\|$
$\displaystyle\leq\frac{\gamma}{1+\gamma}\cdot\|p_{\theta^{\star}}(x)-p_{\theta}(x)\|$
$\displaystyle=\left\|\frac{p_{\theta}(x)+\gamma
p_{\theta^{\star}}(x)}{1+\gamma}-\frac{p_{\theta}(x)(1+\gamma)}{1+\gamma}\right\|$
$\displaystyle=\|\pi_{\gamma}p_{\theta}(x)-p_{\theta}(x)\|$
when $\gamma>1$. The inequality flips when $\gamma<1$. ∎
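The third property can be sanity-checked numerically at a single point $x$. The sketch below is our own toy check, not part of the paper's proofs; density values stand in as plain numbers:

```python
def corrected(p, p_star, gamma):
    """pi_gamma p at a fixed point x: (p(x) + gamma * p_star(x)) / (1 + gamma)."""
    return (p + gamma * p_star) / (1 + gamma)

p, p_star = 0.2, 0.8  # values of p_theta and p_theta_star at some fixed x

# gamma > 1: the corrected density is closer to p_star than to p ...
for gamma in (2.0, 5.0, 100.0):
    q = corrected(p, p_star, gamma)
    assert abs(q - p_star) <= abs(q - p)

# ... and the inequality flips for gamma < 1, exactly as the lemma states.
for gamma in (0.1, 0.5):
    q = corrected(p, p_star, gamma)
    assert abs(q - p_star) >= abs(q - p)
```

The check mirrors the algebra in the proof: $\pi_{\gamma}p_{\theta}(x)-p_{\theta^{\star}}(x)$ carries a factor $\tfrac{1}{1+\gamma}$ while $\pi_{\gamma}p_{\theta}(x)-p_{\theta}(x)$ carries $\tfrac{\gamma}{1+\gamma}$, so their relative size is governed by whether $\gamma$ exceeds $1$.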
Intuitively, it is clear that we cannot hope to prove general results about
generative models without assuming something about the mapping $\theta\mapsto
p_{\theta}$. We now state the two assumptions we require in order to make our
theoretical arguments; note that they are precisely the same assumptions made
in (Bertrand et al., 2024). The first assumption is a local Lipschitzness
property that we will exploit via the Kantorovich-Rubinstein duality:
###### Assumption A.3.
For $\theta$ close enough to $\theta^{\star}$, the mapping
$x\mapsto\nabla_{\theta}\nabla_{\theta}\log p_{\theta}(x)$ is $L$-Lipschitz.
The second assumption is a local regularity and concavity condition:
###### Assumption A.4.
The mapping $\theta\mapsto\mathbb{E}_{x\sim p_{\mathrm{data}}}[\log
p_{\theta}(x)]$ is continuously twice differentiable locally around
$\theta^{\star}$ and $\mathbb{E}_{x\sim
p_{\text{data}}}\left[\nabla_{\theta}\nabla_{\theta}\log
p_{\theta}(x)\right]_{\theta^{\star}}\preceq-\alpha I_{d}\prec 0.$
We next show the existence and uniqueness of
$\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)$ locally around
$\theta^{\star}$.
###### Proposition A.5 (The Local Maximum Likelihood Solution is Unique).
The following are true:
A. There exists an open neighborhood $U\subset\mathbb{R}^{d}$ containing
$\theta^{\star}$ and a continuous function $g:U\to\mathbb{R}^{d}$ such that
$g(\theta^{\star})=\theta^{\star}$, and
$\nabla_{\theta^{\prime}}\mathcal{H}(\theta,\theta^{\prime})|_{\theta,g(\theta)}=0$
(13)
for every $\theta\in U$.
B. Given optimal model parameters $\theta^{\star}$ as in (8) satisfying
Assumptions A.3 and A.4, if $\varepsilon L<\alpha$, then for all $\lambda>0$
and all $\theta$ in a small enough neighborhood $U$ around $\theta^{\star}$,
there exists a unique local maximizer
$\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)$ in $U$.
###### Proof.
We first prove part A. It suffices to apply the Implicit Function Theorem to
the map
$\displaystyle\mathbb{R}^{2d}\to\mathbb{R}^{d}:(\theta,\theta^{\prime})\mapsto\nabla_{\theta^{\prime}}\mathcal{H}(\theta,\theta^{\prime})|_{\theta,\theta^{\prime}}$
(14)
in an open neighborhood of $(\theta^{\star},\theta^{\star})$. To do this, we
need to show the following:
i) The map vanishes at $(\theta^{\star},\theta^{\star})$, i.e.,
$\nabla_{\theta^{\prime}}\mathcal{H}(\theta,\theta^{\prime})|_{\theta^{\star},\theta^{\star}}=0.$
(15)
ii) The Jacobian matrix at $(\theta^{\star},\theta^{\star})$ is invertible, i.e.,
$\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}(\theta,\theta^{\prime})|_{\theta^{\star},\theta^{\star}}\qquad\text{is
invertible.}$ (16)
We first prove i). Recall from the definition (9) that
$\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)=\operatorname*{arg\,max}_{\theta^{\prime}\in\Theta}\mathcal{H}(\theta,\theta^{\prime})$.
This means that for any $\theta$,
$\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)$ is the choice of
$\theta^{\prime}$ which maximizes $\mathcal{H}(\theta,\theta^{\prime})$. In
particular, for $\theta=\theta^{\star}$, we have that
$\theta^{\prime}=\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta^{\star})$
is the choice which maximizes $\mathcal{H}(\theta^{\star},\theta^{\prime})$.
But
$\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta^{\star})=\theta^{\star}$ by
Proposition A.6. This implies that its derivative is zero at
$\theta^{\prime}=\theta^{\star}$, meaning
$\nabla_{\theta^{\prime}}\mathcal{H}(\theta,\theta^{\prime})|_{\theta^{\star},\theta^{\star}}=0$,
as needed.
Now we prove ii). In order to show that the matrix (16) is invertible, it
suffices to show it is close to another matrix which is invertible. A natural
choice is the matrix
$M=(1+\lambda)\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathbb{E}_{x\sim
p_{\text{data}}}[\log p_{\theta^{\prime}}(x)]|_{\theta^{\star}}.$ (17)
First of all, note that this matrix indeed exists; by Assumption A.4, we
know the map $\theta^{\prime}\mapsto\mathbb{E}_{x\sim p_{\text{data}}}[\log
p_{\theta^{\prime}}(x)]$ is continuously twice differentiable locally near
$\theta^{\star}$. We can estimate that the matrices (16) and (17) are indeed
close as follows:
$\displaystyle\|\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}(\theta,\theta^{\prime})|_{\theta^{\star},\theta^{\star}}-(1+\lambda)\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathbb{E}_{x\sim
p_{\text{data}}}[\log p_{\theta^{\prime}}(x)]|_{\theta^{\star}}\|$
$\displaystyle=\|\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}[\mathbb{E}_{x\sim
p_{\mathrm{data}}}\log
p_{\theta^{\prime}}(x)+\lambda\mathbb{E}_{x\sim\pi_{\gamma}p_{\theta}}\log
p_{\theta^{\prime}}(x)]|_{\theta^{\star},\theta^{\star}}-(1+\lambda)\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathbb{E}_{x\sim
p_{\text{data}}}[\log p_{\theta^{\prime}}(x)]|_{\theta^{\star}}\|$
$\displaystyle=\lambda\|[\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathbb{E}_{x\sim\pi_{\gamma}p_{\theta}}\log
p_{\theta^{\prime}}(x)]|_{\theta^{\star},\theta^{\star}}-\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathbb{E}_{x\sim
p_{\text{data}}}[\log p_{\theta^{\prime}}(x)]|_{\theta^{\star}}\|$
$\displaystyle=\lambda\|[\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathbb{E}_{x\sim
p_{\theta^{\star}}}\log
p_{\theta^{\prime}}(x)]_{\theta^{\star}}-[\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathbb{E}_{x\sim
p_{\text{data}}}\log p_{\theta^{\prime}}(x)]_{\theta^{\star}}\|$
$\displaystyle=\lambda\|[\mathbb{E}_{x\sim
p_{\theta^{\star}}}\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\log
p_{\theta^{\prime}}(x)]_{\theta^{\star}}-[\mathbb{E}_{x\sim
p_{\text{data}}}\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\log
p_{\theta^{\prime}}(x)]_{\theta^{\star}}\|$
$\displaystyle\leq\lambda\,\mathbb{E}_{(x,x^{\prime})\sim p_{\theta^{\star}}\times
p_{\mathrm{data}}}\|[\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\log
p_{\theta^{\prime}}(x)]_{\theta^{\star}}-[\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\log
p_{\theta^{\prime}}(x^{\prime})]_{\theta^{\star}}\|$
$\displaystyle\leq\lambda\varepsilon L$
where the first equality follows from the definition of $\mathcal{H}$ in (12);
the second equality follows from cancellation; the third equality follows from
the fact that the derivatives are constant with respect to $\theta$, and
$\pi_{\gamma}p_{\theta^{\star}}=p_{\theta^{\star}}$ by Lemma A.2; in the
fourth equality we exchange the derivative and the expectation using the
Dominated Convergence Theorem, since Assumption A.3 says that
$x\mapsto\nabla_{\theta}\nabla_{\theta}\log p_{\theta}(x)$ is $L$-Lipschitz;
the fifth estimate follows from Kantorovich-Rubinstein duality; and the final
estimate is the definition of the Wasserstein distance (11).
Finally, we verify that $M$ is indeed invertible. Assumption A.4 implies that
the largest eigenvalue of $M$ is at most $-(1+\lambda)\alpha$. Therefore, since
all eigenvalues of $M$ are nonzero, $M$ is invertible. We can now apply the
implicit function theorem to (14), and part A follows immediately.
Next, we prove part B. Let $d_{U}=\sup_{\theta\in
U}d_{W}(p_{\theta^{\star}},p_{\theta})$. To verify that $g(\theta)$ is a local
maximizer of (14), it suffices to show that
$\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}(\theta,g(\theta))\prec
0$. By Assumption A.4, we know
$\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}_{1}(\theta^{\star})\prec-\alpha
I_{d}$ and since
$\theta^{\prime}\mapsto\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}_{1}(\theta^{\prime})$
is continuously twice differentiable locally near $\theta^{\star}$, we also
have
$\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}_{1}(g(\theta))\prec-\alpha
I_{d}$. Thus, we have
$\displaystyle\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}(\theta,g(\theta))$
$\displaystyle=\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}_{1}(g(\theta))+\lambda\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}_{2}(\theta,g(\theta))$
$\displaystyle=(1+\lambda)\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}_{1}(g(\theta))+\lambda(\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}_{2}(\theta,g(\theta))-\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}_{1}(g(\theta)))$
$\displaystyle\preceq-\alpha(1+\lambda)I_{d}+\lambda
L\left(\frac{1}{1+\gamma}d_{W}(p_{\theta},p_{\theta^{\star}})+\varepsilon\right)I_{d},$
where the last step follows from Kantorovich-Rubinstein duality:
$\displaystyle\|\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}$
$\displaystyle\mathcal{H}_{2}(\theta,\theta^{\prime})-\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}_{1}(\theta^{\prime})\|$
$\displaystyle\leq\|\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}_{2}(\theta,\theta^{\prime})-\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}_{2}(\theta^{\star},\theta^{\prime})\|+\|\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}_{2}(\theta^{\star},\theta^{\prime})-\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}_{1}(\theta^{\prime})\|$
$\displaystyle=\|\int_{\mathbb{R}^{d}}\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\log
p_{\theta^{\prime}}(x)\frac{p_{\theta}(x)+\gamma
p_{\theta^{\star}}(x)}{1+\gamma}\,dx-\int_{\mathbb{R}^{d}}\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\log
p_{\theta^{\prime}}(x)p_{\theta^{\star}}(x)\,dx\|$
$\displaystyle\;\;\;+\|\mathbb{E}_{x\sim
p_{\text{data}}}[\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\log
p_{\theta^{\prime}}(x)]-\mathbb{E}_{x\sim
p_{\theta^{\star}}}[\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\log
p_{\theta^{\prime}}(x)]\|$
$\displaystyle\leq\frac{1}{1+\gamma}\|\int_{\mathbb{R}^{d}}\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\log
p_{\theta^{\prime}}(x)\left(p_{\theta}(x)-p_{\theta^{\star}}(x)\right)\,dx\|+L\varepsilon$
$\displaystyle=\frac{1}{1+\gamma}\|\mathbb{E}_{x\sim
p_{\theta}}[\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\log
p_{\theta^{\prime}}(x)]-\mathbb{E}_{x\sim
p_{\theta^{\star}}}[\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\log
p_{\theta^{\prime}}(x)]\|+L\varepsilon$
$\displaystyle\leq\frac{L}{1+\gamma}d_{W}(p_{\theta},p_{\theta^{\star}})+L\varepsilon$
$\displaystyle\leq\frac{L}{1+\gamma}d_{U}+L\varepsilon.$
Thus, to have
$\nabla_{\theta^{\prime}}\nabla_{\theta^{\prime}}\mathcal{H}(\theta,g(\theta))\prec
0$, it is sufficient that
$\displaystyle-\alpha(1+\lambda)+\lambda
L\left(\frac{1}{1+\gamma}d_{U}+\varepsilon\right)<0,$
which is guaranteed for all $\lambda>0$ by $\alpha>L\varepsilon$ and
$d_{U}\leq\frac{\alpha(1+\gamma)}{\lambda L}$. This concludes the proof. ∎
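As a quick numerical sanity check of the last step, one can verify that the sufficient condition is negative across a range of $\lambda$. All constants below are illustrative choices satisfying $\alpha>L\varepsilon$, not values from the paper:

```python
# Illustrative constants with alpha > L * eps (hypothetical values).
alpha, L, eps, gamma = 1.0, 2.0, 0.1, 0.5

def sufficient_condition(lam, d_U):
    """LHS of the sufficient condition: a negative value means the Hessian
    of H(theta, .) at g(theta) is negative definite."""
    return -alpha * (1 + lam) + lam * L * (d_U / (1 + gamma) + eps)

for lam in [0.1, 1.0, 10.0, 100.0]:
    d_U = alpha * (1 + gamma) / (lam * L)  # one admissible neighborhood size
    assert sufficient_condition(lam, d_U) < 0
```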
Further, as we would expect, $\theta^{\star}$ is a fixed point of
$\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}$:
###### Proposition A.6 (The optimal parametric generative model is a fixed
point).
For any given data distribution $p_{\mathrm{data}}$, any $\theta^{\star}$ as
defined by (8), and for all $\lambda>0$, we have
$\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta^{\star})=\theta^{\star}$.
###### Proof.
Unpacking definition (9) shows that
$\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta^{\star})=\mathcal{G}_{\lambda}^{\infty}(\theta^{\star})$,
and we know by Proposition 4 from (Bertrand et al., 2024) that
$\mathcal{G}_{\lambda}^{\infty}(\theta^{\star})=\theta^{\star}$. ∎
### A.2 Convergence of Iterative Fine-tuning with Correction for Infinite
Sampling
We now have the required setup to state and prove a convergence result for
iterative fine-tuning, assuming infinite access to the underlying probability
distributions. We need the following result, which is a technical lemma that
provides a computation of the Jacobian of $\pi_{\gamma}G_{\lambda}^{\infty}$
at $\theta^{\star}$ as well as a spectral bound, both essential for the proof
of Theorem A.8.
###### Lemma A.7.
We define the matrices
$\displaystyle A$
$\displaystyle:=(\nabla_{\theta^{\prime},\theta^{\prime}}^{2}\mathcal{H}_{1}(\theta^{\prime}))|_{\theta^{\star}}$
(18) $\displaystyle B$
$\displaystyle:=\nabla_{\theta,\theta^{\prime}}^{2}\mathbb{E}_{x\sim
p_{\theta}}[\log
p_{\theta^{\prime}}(x)]\big{|}_{\theta^{\star},\theta^{\star}}$ (19)
$\displaystyle C$
$\displaystyle:=\nabla_{\theta^{\prime},\theta^{\prime}}^{2}\mathbb{E}_{x\sim
p_{\theta}}[\log p_{\theta^{\prime}}(x)]\big{|}_{\theta^{*},\theta^{*}}$ (20)
Recall the definition of $\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)$
from (9). Since $\gamma$ and $\lambda$ are fixed, denote
$\pi\mathcal{G}(\theta)=\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta).$
Finally, let
$\mathcal{J}(\pi\mathcal{G}(\theta)):=\nabla_{\theta}\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)|_{\theta}$
denote the Jacobian of $\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)$.
1. I.
There exists an open neighborhood $U\subseteq\Theta$ containing
$\theta^{\star}$ such that for all $\theta\in U$, we have
$\displaystyle\mathcal{J}(\mathcal{\pi G}(\theta))=$
$\displaystyle\,\,-\left(\nabla^{2}_{\theta^{\prime},\theta^{\prime}}\mathcal{H}(\theta,\pi\mathcal{G}(\theta))\right)^{-1}\cdot\lambda\nabla^{2}_{\theta,\theta^{\prime}}\mathcal{H}_{2}(\theta,\pi\mathcal{G}(\theta)).$
(21)
2. II.
We have that
$\nabla^{2}_{\theta,\theta^{\prime}}\mathcal{H}_{2}(\theta^{\star},\theta^{\star})=\frac{B}{1+\gamma}$,
and $B=-C$, so the Jacobian of $\pi\mathcal{G}$ at $\theta^{\star}$ is
$\mathcal{J}(\pi\mathcal{G}(\theta^{\star}))=(I+\lambda
A^{-1}C)^{-1}\cdot\frac{\lambda}{1+\gamma}A^{-1}C$ (22)
3. III.
The spectral norm of $A^{-1}C$ can be bounded as
$\|A^{-1}C\|\leq 1+\frac{L\varepsilon}{\alpha}.$ (23)
###### Proof.
We first prove I. We apply Proposition A.5. Part A of that proposition gives
us a function $g:U\to\mathbb{R}^{d}$ such that
$\nabla_{\theta^{\prime}}\mathcal{H}(\theta,\theta^{\prime})|_{\theta,g(\theta)}=0$.
Part B of that proposition says that there exists a unique local
maximizer inside $U$, and this local maximizer is
$\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)$. This implies that
$\nabla_{\theta^{\prime}}\mathcal{H}(\theta,\theta^{\prime})|_{\theta,\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)}=0$.
Next, we implicitly differentiate this equation with respect to $\theta$.
Recall that when you have an equation of the form $f(x,y)=0$, and implicitly
differentiate it in the form $f(x,g(x))=0$ with respect to $x$, you obtain
$\frac{\partial f}{\partial x}+\frac{\partial f}{\partial y}\frac{\partial
g}{\partial x}=0$, and solving for $\frac{\partial g}{\partial x}$ yields
$\frac{\partial g}{\partial x}=-\left(\frac{\partial f}{\partial
y}\right)^{-1}\frac{\partial f}{\partial x}$. We apply this formula with
$(x,f,g)=(\theta,\theta\mapsto\nabla_{\theta^{\prime}}\mathcal{H}(\theta,\theta^{\prime})_{\theta,\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)},\theta\mapsto\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta))$
and obtain (21), as desired.
Now we prove II. We can compute that
$\displaystyle\nabla^{2}_{\theta^{\prime},\theta}\mathcal{H}_{2}(\theta,\theta^{\prime})$
$\displaystyle=\nabla_{\theta^{\prime}}\nabla_{\theta}\mathbb{E}_{x\sim\pi_{\gamma}p_{\theta}}[\log
p_{\theta^{\prime}}(x)]$ (24)
$\displaystyle=\nabla_{\theta^{\prime}}\nabla_{\theta}\int_{x\in\mathbb{R}^{d}}\log
p_{\theta^{\prime}}(x)\left(\frac{p_{\theta}(x)+\gamma
p_{\theta^{\star}}(x)}{1+\gamma}\right)dx$ (25)
$\displaystyle=\frac{1}{1+\gamma}\nabla_{\theta^{\prime}}\nabla_{\theta}\int_{x\in\mathbb{R}^{d}}\log
p_{\theta^{\prime}}(x)p_{\theta}(x)dx$ (26)
$\displaystyle=\frac{1}{1+\gamma}\nabla^{2}_{\theta^{\prime},\theta}\mathbb{E}_{x\sim
p_{\theta}}[\log p_{\theta^{\prime}}(x)]$ (27)
$\displaystyle=\frac{1}{1+\gamma}B$ (28)
where the third equality holds because the integral containing
$p_{\theta^{\star}}$ is constant with respect to $\theta$. Next, we can
compute that
$\displaystyle B$ $\displaystyle=\int_{X}\nabla_{\theta^{\prime}}\log
p_{\theta^{\prime}}(x)\nabla_{\theta}p_{\theta}(x)dx\Big{|}_{\theta^{*},\theta^{*}}$
(29) $\displaystyle=\int_{X}[\nabla_{\theta}\log
p_{\theta}(x)][\nabla_{\theta}p_{\theta}(x)]dx\Big{|}_{\theta^{*},\theta^{*}}$
(30) $\displaystyle=\int_{X}\nabla_{\theta}[p_{\theta}(x)\nabla_{\theta}\log
p_{\theta}(x)]dx\Big{|}_{\theta^{*},\theta^{*}}-\int_{X}p_{\theta}(x)(\nabla_{\theta}\nabla_{\theta}\log
p_{\theta}(x))dx\Big{|}_{\theta^{*},\theta^{*}}$ (31)
$\displaystyle=\int_{X}\nabla_{\theta}\left[p_{\theta}(x)\frac{\nabla_{\theta}p_{\theta}(x)}{p_{\theta}(x)}\right]dx\Big{|}_{\theta^{*},\theta^{*}}-\nabla_{\theta^{\prime},\theta^{\prime}}^{2}\mathbb{E}_{x\sim
p_{\theta}}[\log p_{\theta^{\prime}}(x)]\Big{|}_{\theta^{*},\theta^{*}}$ (32)
$\displaystyle=-C,$ (33)
where the third equality follows from the product rule for gradients,
$\displaystyle\nabla_{\theta}[p_{\theta}(x)\nabla_{\theta}\log p_{\theta}(x)]$
$\displaystyle=p_{\theta}(x)(\nabla_{\theta}\nabla_{\theta}\log
p_{\theta}(x))+[\nabla_{\theta}p_{\theta}(x)][\nabla_{\theta}\log
p_{\theta}(x)].$ (34)
Finally, we will prove the formula (22) by manipulating (21). We begin with
the rightmost factor in (21). If we apply these equalities that we just
obtained, then we get
$\displaystyle\mathcal{J}(\pi\mathcal{G}(\theta^{\star}))$
$\displaystyle=-\left(\nabla^{2}_{\theta^{\prime},\theta^{\prime}}\mathcal{H}(\theta^{\star},\theta^{\star})\right)^{-1}\cdot\lambda\nabla^{2}_{\theta^{\prime},\theta}\mathcal{H}_{2}(\theta^{\star},\theta^{\star})$
$\displaystyle=-(A+\lambda C)^{-1}\cdot\frac{\lambda}{1+\gamma}B$
$\displaystyle=-(I+\lambda A^{-1}C)^{-1}\cdot\frac{\lambda}{1+\gamma}A^{-1}B$
$\displaystyle=(I+\lambda A^{-1}C)^{-1}\cdot\frac{\lambda}{1+\gamma}A^{-1}C$
where the first equality follows from (21) along with the fixed point
Proposition A.6, and we are using that $A$ is invertible by Assumption A.4,
which implies all eigenvalues of $A$ are nonzero; in the fourth step we used
that $B=-C$. This proves part II.
Now we prove III. We can bound the operator norm $\|A^{-1}C\|$ as follows:
$\displaystyle\|A^{-1}C\|=\|I+A^{-1}(C-A)\|\leq\|I\|+\|A^{-1}\|\cdot\|C-A\|\leq
1+\alpha^{-1}\|C-A\|,$ (35)
where the first estimate comes from subadditivity and submultiplicativity, and
the second comes from the fact that, since $A$ is symmetric,
$\|A\|=\max_{\lambda\in\sigma(A)}|\lambda|$, where $\sigma(A)$ is the spectrum
of $A$. Formally, we know by Assumption A.4 that $A$ has eigenvalues
$e_{1}\leq e_{2}\leq\dots\leq e_{n}\leq-\alpha<0$ and so $|e_{n}|\geq\alpha$. Therefore,
$A^{-1}$ has eigenvalues $1/e_{n}\leq 1/e_{n-1}\leq\dots\leq 1/e_{1}<0$ and thus
$1/|e_{n}|\geq 1/|e_{n-1}|\geq\dots\geq 1/|e_{1}|$, which gives us the bound
$\|A^{-1}\|=1/|e_{n}|\leq 1/\alpha$ on the matrix norm. Next, we can estimate that
$\displaystyle||C-A||$
$\displaystyle=\|\nabla^{2}_{\theta^{\prime},\theta^{\prime}}\mathbb{E}_{x\sim
p_{\theta^{\star}}}[\log
p_{\theta^{\prime}}(x)]|_{\theta^{\star}}-\nabla^{2}_{\theta^{\prime},\theta^{\prime}}\mathbb{E}_{x\sim
p_{\text{data}}}[\log p_{\theta^{\prime}}(x)]|_{\theta^{\star}}\|$
$\displaystyle=\|\mathbb{E}_{x\sim
p_{\theta^{\star}}}[\nabla^{2}_{\theta^{\prime},\theta^{\prime}}\log
p_{\theta^{\star}}(x)]-\mathbb{E}_{x\sim
p_{\text{data}}}[\nabla^{2}_{\theta^{\prime},\theta^{\prime}}\log
p_{\theta^{\star}}(x)]\|$ $\displaystyle\leq
Ld_{W}(p_{\theta^{\star}},p_{\text{data}})$ $\displaystyle=L\varepsilon,$
where in the second equality we exchange the derivative and the expectation in
equation (4) using the Dominated Convergence Theorem, since Assumption A.3
says that $x\mapsto\nabla_{\theta}\nabla_{\theta}\log p_{\theta}(x)$ is
$L$-Lipschitz; and in the last estimate, we used Kantorovich-Rubinstein
duality. This, combined with the estimate (35), yields the bound in (23). ∎
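The spectral bound (23) can also be checked numerically: build a symmetric $A$ with all eigenvalues at most $-\alpha$, and perturb it by a symmetric matrix of spectral norm at most $L\varepsilon$ playing the role of $C-A$. This is a sanity-check sketch with illustrative constants, not part of the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, L, eps, d = 0.5, 2.0, 0.1, 5   # hypothetical constants

# Symmetric A with all eigenvalues <= -alpha, mimicking Assumption A.4.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
A = Q @ np.diag(-alpha - rng.uniform(0.0, 2.0, size=d)) @ Q.T

# Symmetric perturbation E with ||E||_2 <= L * eps, mimicking ||C - A|| <= L * eps.
E = rng.normal(size=(d, d))
E = (E + E.T) / 2
E *= (L * eps) / np.linalg.norm(E, 2)
C = A + E

# The bound (23): ||A^{-1} C|| <= 1 + L * eps / alpha.
norm = np.linalg.norm(np.linalg.inv(A) @ C, 2)
assert norm <= 1 + L * eps / alpha + 1e-12
```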
We are finally ready to prove our theorem that guarantees convergence to the
optimal parameters in the infinite sampling case under certain assumptions,
one being that the initial model parameters $\theta_{0}$ are sufficiently
close to $\theta^{\star}$:
###### Theorem A.8 (Convergence of Iterative Fine-tuning, Infinite Sampling
Case).
Suppose we have an iterative fine-tuning procedure defined by the rule
$\theta_{t+1}^{\infty}=\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta_{t}^{\infty})$.
Let $\theta^{\star}$ be the parameter vector for the optimal generative model,
as in (8). We assume that $\theta^{\star}$ follows Assumptions A.3 and A.4
from (Bertrand et al., 2024). Suppose also that
$\lambda\left(1+\frac{\varepsilon
L}{\alpha}\right)<\frac{1+\gamma}{2+\gamma}$. Then, the Jacobian of
$\pi_{\gamma}G_{\lambda}^{\infty}$ satisfies the following bound:
$\displaystyle\|\nabla_{\theta}\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta^{\star})\|_{2}$
$\displaystyle\leq\frac{1}{1+\gamma}\cdot\frac{\lambda(\alpha+\varepsilon
L)}{\alpha-\lambda(\alpha+\varepsilon L)}<1.$ (36)
Consequently, there exists a $\delta>0$ such that if $\theta_{0}\in\Theta$
satisfies $\|\theta_{0}-\theta^{\star}\|\leq\delta$, then starting training at
$\theta_{0}$ and setting
$\theta_{t+1}=\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta_{t})$, we have
that $\lim_{t\to\infty}\theta_{t}=\theta^{\star}$. Furthermore, if we define
$\displaystyle\rho(\lambda)=\frac{\lambda(\alpha+\varepsilon
L)}{\alpha-\lambda(\alpha+\varepsilon L)},$ (37)
then we obtain the asymptotic stability estimate (we note that (Bertrand et
al., 2024) could have presented their results in this stronger form, without
the big $O$ notation, with very little extra work):
$\displaystyle\|\theta_{t}-\theta^{\star}\|\leq\left(\frac{\rho(\lambda)}{1+\gamma}\right)^{t}\|\theta_{0}-\theta^{\star}\|.$
(38)
###### Proof.
We first prove the Jacobian bound (36). By hypothesis, we know
$\lambda(1+\frac{L\varepsilon}{\alpha})<1$, so by Lemma A.7(III), we have
$\lambda||A^{-1}C||<1$. Thus, we can write
$\displaystyle(I+\lambda A^{-1}C)^{-1}$
$\displaystyle=\sum_{k=0}^{\infty}(-\lambda A^{-1}C)^{k}$
and so
$\displaystyle\|(I+\lambda A^{-1}C)^{-1}\|$
$\displaystyle\leq\sum_{k=0}^{\infty}\lambda^{k}||A^{-1}C||^{k}=\frac{1}{1-\lambda||A^{-1}C||}.$
Applying Lemma A.7(II), we get
$\displaystyle\|\mathcal{J}(\pi\mathcal{G}(\theta^{\star}))\|$
$\displaystyle\leq\|(I+\lambda
A^{-1}C)^{-1}\|\cdot\frac{\lambda}{1+\gamma}\|A^{-1}C\|\leq\frac{\lambda}{1+\gamma}\cdot\frac{\|A^{-1}C\|}{1-\lambda\|A^{-1}C\|}.$
Now, it is straightforward to see the RHS above is at most the bound in (36)
if and only if $\alpha\|A^{-1}C\|\leq\alpha+\varepsilon L$. But this bound holds
because of Lemma A.7(III). This proves the Jacobian bound (36), but does not
prove that the bound is less than $1$. For this, we must show that
$\displaystyle\frac{1}{1+\gamma}\cdot\frac{\lambda(\alpha+\varepsilon
L)}{\alpha-\lambda(\alpha+\varepsilon L)}<1.$ (39)
By clearing denominators and grouping like terms, we can see that this is
equivalent to
$\displaystyle\lambda\left(1+\frac{\varepsilon
L}{\alpha}\right)<\frac{1+\gamma}{2+\gamma},$ (40)
which is precisely guaranteed by our hypothesis.
We now apply the Jacobian bound (36) to prove the asymptotic stability
estimate (38). Assume $\lambda$ is sufficiently small so that
$\rho(\lambda)/(1+\gamma)<1$. Then for every
$\rho^{\prime}\in(\rho(\lambda)/(1+\gamma),1)$, there exists $\delta>0$
sufficiently small so that every $\theta_{0}\in\Theta$ which satisfies
$\|\theta_{0}-\theta^{\star}\|<\delta$ has the property that
$\|\nabla_{\theta}\pi_{\gamma}G_{\lambda}^{\infty}(\theta_{0})\|_{2}<\rho^{\prime}$.
Because the map $\pi_{\gamma}G_{\lambda}^{\infty}$ has Jacobian matrix norm
less than $1$ in the $\delta$-ball around $\theta^{\star}$, it is a
contraction mapping in this neighborhood. Concretely, this means that
$\|\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)-\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta^{\prime})\|\leq\frac{\rho(\lambda)}{1+\gamma}\|\theta-\theta^{\prime}\|,$
(41)
for every $\theta,\theta^{\prime}$ in the $\delta$-ball around
$\theta^{\star}$. In particular, for
$(\theta,\theta^{\prime})=(\theta_{t},\theta^{\star})$ we obtain
$\displaystyle\|\theta_{t+1}-\theta^{\star}\|$
$\displaystyle=\|\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta_{t})-\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta^{\star})\|\leq\frac{\rho(\lambda)}{1+\gamma}\cdot\|\theta_{t}-\theta^{\star}\|.$
By induction, the above estimate implies that if $\theta_{0}$ is in a
$\delta$-ball around $\theta^{\star}$, then so is every successive
$\theta_{t}$. Therefore the desired estimate (38) now follows by induction on
$t$. ∎
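To illustrate the contraction argument, the following sketch iterates a toy linear surrogate for $\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}$ near $\theta^{\star}$ whose Jacobian has spectral norm $\rho(\lambda)/(1+\gamma)<1$, and checks the geometric estimate (38). All quantities are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, rho, d = 0.5, 0.9, 4          # illustrative; r = rho/(1+gamma) = 0.6 < 1
r = rho / (1 + gamma)

theta_star = rng.normal(size=d)
M = rng.normal(size=(d, d))
J = r * M / np.linalg.norm(M, 2)     # Jacobian surrogate with ||J||_2 = r

theta = theta_star + 0.1 * rng.normal(size=d)
e0 = np.linalg.norm(theta - theta_star)
for t in range(1, 30):
    # linearized update: theta_{t+1} = theta* + J (theta_t - theta*)
    theta = theta_star + J @ (theta - theta_star)
    # geometric estimate, as in (38)
    assert np.linalg.norm(theta - theta_star) <= r ** t * e0 + 1e-12
```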
###### Remark A.9.
Taking $\gamma=0$ recovers exactly the result in (Bertrand et al., 2024).
Importantly, the correction function $\pi_{\gamma}$ provides leverage in
determining how large the augmentation percentage $\lambda$ can be: choosing a
larger correction strength $\gamma$ allows us to choose a larger augmentation
percentage $\lambda$ while still retaining theoretical guarantees for
convergence. Additionally, for the same choice of augmentation percentage
$\lambda$, a larger correction strength $\gamma$ provides a guarantee for an
improved rate of convergence. See Conjecture 4.8.
### A.3 Stability of Iterative Fine-tuning with Correction for Finite
Sampling
Finally, we prove a stability result for iterative fine-tuning with correction
in the presence of statistical error. To do this, we require an assumption
that essentially provides probabilistic guarantee that the chosen generative
model learns the underlying distribution increasingly better if it has access
to more samples:
###### Assumption A.10.
There exist $a,b,\varepsilon_{\text{OPT}}\geq 0$ and a neighborhood $U$ of
$\theta^{\star}$ such that, for any $\delta\in(0,1)$, with probability
$1-\delta$ over the samplings, we have
$(\forall\theta\in U)(\forall
n\in\mathbb{N})\qquad\|\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta)-\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)\|\leq\varepsilon_{\text{OPT}}+\frac{a}{\sqrt{n}}\sqrt{\log\frac{b}{\delta}}.$
(42)
See Appendix B for a discussion of this assumption; we investigated whether
to assume a bound similar to the one assumed in (Bertrand et al., 2024), or to
prove our bound from theirs. In Appendix B we show that one can deduce
something nearly as strong as Assumption A.10 from Assumption 3 in their
paper, so we made Assumption A.10 for the sake of a cleaner, more parallel
exposition.
###### Theorem A.11 (Iterative Fine-Tuning Stability Under Correction).
Suppose we have an iterative fine-tuning procedure defined by the rule
$\theta_{t+1}^{n}=\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta_{t}^{n})$. In
words, this means that the augmentation percentage is $\lambda\in(0,\infty)$
and the correction strength is $\gamma\in[0,\infty)$. Under the same
assumptions of Theorem A.8 and Assumption A.10, there exist $0<\rho<1$ and
$\delta_{1}>0$ such that if $\|\theta_{0}^{n}-\theta^{\star}\|\leq\delta_{1}$,
then for any $\delta_{2}\in(0,1)$, with probability $(1-\delta_{2})^{t}$, we
have
$\displaystyle\|\theta_{t}^{n}-\theta^{\star}\|\leq\left(\varepsilon_{\text{OPT}}+\frac{a}{\sqrt{n}}\sqrt{\log\frac{b}{\delta_{2}}}\right)\sum_{i=0}^{t}\left(\frac{\rho(\lambda)}{1+\gamma}\right)^{i}+\left(\frac{\rho(\lambda)}{1+\gamma}\right)^{t}\|\theta_{0}^{n}-\theta^{\star}\|.$
(43)
###### Proof.
By the triangle inequality, we can estimate that
$\displaystyle\|\theta_{t}^{n}-\theta^{\star}\|$
$\displaystyle\leq\|\theta_{t}^{n}-\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta_{t-1}^{n}))\|+\|\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta_{t-1}^{n})-\theta^{\star}\|$
$\displaystyle=\|\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta_{t-1}^{n})-\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta_{t-1}^{n})\|+\|\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta_{t-1}^{n})-\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta^{\star})\|,$
(44)
where we applied the fixed point Proposition A.6. By Assumption A.10, the left
summand in (A.3) is at most
$\varepsilon_{\text{OPT}}+\frac{a}{\sqrt{n}}\sqrt{\log\frac{b}{\delta}}$, with
probability $1-\delta$. Next, recall that in (41) in the proof of Theorem A.8,
we proved that $\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}$ is a contraction
mapping of factor $\rho(\lambda)/(1+\gamma)$ on a neighborhood of
$\theta^{\star}$; this implies that the right summand in (A.3) is at most
$\frac{\rho(\lambda)}{1+\gamma}\|\theta_{t-1}^{n}-\theta^{\star}\|$. Together,
these yield the recurrence estimate
$\displaystyle\mathbb{P}\left(\|\theta_{t}^{n}-\theta^{\star}\|\leq\varepsilon_{\text{OPT}}+\frac{a}{\sqrt{n}}\sqrt{\log\frac{b}{\delta}}+\frac{\rho(\lambda)}{1+\gamma}\|\theta_{t-1}^{n}-\theta^{\star}\|\right)\geq
1-\delta.$ (45)
Iterating this recurrence for successive time steps yields
$\displaystyle\mathbb{P}\left(\|\theta_{t}^{n}-\theta^{\star}\|\leq\left(\varepsilon_{\text{OPT}}+\frac{a}{\sqrt{n}}\sqrt{\log\frac{b}{\delta}}\right)\sum_{i=0}^{t}\left(\frac{\rho(\lambda)}{1+\gamma}\right)^{i}+\left(\frac{\rho(\lambda)}{1+\gamma}\right)^{t}\|\theta_{0}^{n}-\theta^{\star}\|\right)\geq(1-\delta)^{t}.$
(46)
This completes the proof, with $\delta_{2}=\delta$. ∎
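The recurrence (45) and its unrolled form (46) can be sanity-checked with a deterministic worst-case recursion, treating the statistical error term as a constant $\eta$. All values below are illustrative:

```python
# Worst case of the recurrence e_t <= eta + r * e_{t-1}, with
# r = rho(lambda)/(1+gamma) < 1 and eta the statistical error term.
r, eta, e0, T = 0.6, 0.05, 1.0, 50
e = e0
for t in range(1, T + 1):
    e = eta + r * e
    # unrolled bound, as in (43)/(46)
    bound = eta * sum(r ** i for i in range(t + 1)) + r ** t * e0
    assert e <= bound + 1e-12

# In the limit, the error settles near eta / (1 - r).
assert abs(e - eta / (1 - r)) < 1e-6
```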
###### Remark A.12.
Theorem A.11 recovers the result from (Bertrand et al., 2024) in the case
where the correction strength is $\gamma=0$. But for a fixed augmentation
percentage $\lambda$, any correction strength $\gamma>0$ gives
stronger stability guarantees than in (Bertrand et al., 2024).
## Appendix B Discussion about Assumption 4.2
In this section, we show how with a mild boundedness assumption on our
generative model parameter update function, we can deduce our Assumption A.10
(which is the same as Assumption 4.2, part 3) from the following assumption
used in (Bertrand et al., 2024).
###### Assumption B.1.
There exist $a,b,\varepsilon_{\text{OPT}}\geq 0$ and a neighborhood $U$ of
$\theta^{\star}$ such that, for any $\delta\in(0,1)$, with probability
$1-\delta$ over the samplings, we have
$(\forall\theta\in U)(\forall
n\in\mathbb{N})\qquad\|\mathcal{G}_{\lambda}^{n}(\theta)-\mathcal{G}_{\lambda}^{\infty}(\theta)\|\leq\varepsilon_{\text{OPT}}+\frac{a}{\sqrt{n}}\sqrt{\log\frac{b}{\delta}}.$
(47)
Now, if we make the additional assumption that our generative model parameter
update function is locally bounded near $\theta^{\star}$ then we obtain the
following.
###### Proposition B.2.
Suppose Assumption B.1 holds. Suppose also that there exists $B<\infty$ such
that for all $n>0$ and $\theta$ sufficiently close to $\theta^{\star}$,
$\displaystyle\|\mathcal{G}_{\lambda}^{n}(\theta)-\mathcal{G}_{\lambda}^{n}(\theta^{\star})\|<B\|\theta-\theta^{\star}\|.$
Then there exist $a,b,c,\varepsilon_{\text{OPT}}\geq 0$ and a neighborhood $U$
of $\theta^{\star}$ such that, for any $\delta\in(0,1)$, with probability
$1-\delta$ over the samplings, we have
$(\forall\theta\in U)(\forall
n\in\mathbb{N})\qquad\|\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta)-\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)\|\leq
c\cdot
d_{U}+\varepsilon_{\text{OPT}}+\frac{a}{\sqrt{n}}\sqrt{\log\frac{b}{\delta}},$
(48)
where $d_{U}=\sup_{\theta\in U}\|\theta-\theta^{\star}\|.$
###### Proof.
By the triangle inequality, we have
$\displaystyle\|\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta)-\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)\|$
$\displaystyle\leq\|\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta)-\mathcal{G}_{\lambda}^{n}(\theta)\|+\|\mathcal{G}_{\lambda}^{n}(\theta)-\mathcal{G}_{\lambda}^{\infty}(\theta)\|+\|\mathcal{G}_{\lambda}^{\infty}(\theta)-\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)\|.$
(49)
We bound each term in the RHS: first, note that the middle term is bounded by
Assumption B.1. The first term is bounded as follows:
$\displaystyle\|\mathcal{G}_{\lambda}^{n}(\theta)-\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta)\|$
$\displaystyle\leq\|\mathcal{G}_{\lambda}^{n}(\theta)-\mathcal{G}_{\lambda}^{n}(\theta^{\star})\|+\|\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta^{\star})-\pi_{\gamma}\mathcal{G}_{\lambda}^{n}(\theta)\|$
$\displaystyle\leq B\|\theta-\theta^{\star}\|+B\|\theta-\theta^{\star}\|$
$\displaystyle\leq 2Bd_{U},$
where in the first step we used that
$\mathcal{G}_{\lambda}^{\infty}(\theta^{\star})=\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta^{\star})$.
Similarly, the last term is bounded as follows:
$\displaystyle\|\mathcal{G}_{\lambda}^{\infty}(\theta)-\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)\|$
$\displaystyle\leq\|\mathcal{G}_{\lambda}^{\infty}(\theta)-\mathcal{G}_{\lambda}^{\infty}(\theta^{\star})\|+\|\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta^{\star})-\pi_{\gamma}\mathcal{G}_{\lambda}^{\infty}(\theta)\|$
$\displaystyle\leq\rho(\lambda)\|\theta-\theta^{\star}\|+\frac{\rho(\lambda)}{1+\gamma}\|\theta-\theta^{\star}\|$
$\displaystyle=\rho(\lambda)\frac{2+\gamma}{1+\gamma}\|\theta-\theta^{\star}\|$
$\displaystyle\leq\rho(\lambda)\frac{2+\gamma}{1+\gamma}d_{U},$
where in the second step we applied (41). Using these bounds in (49) and
taking $c=2B+\rho(\lambda)\frac{2+\gamma}{1+\gamma}$ completes the proof. ∎
Note that the term $c\cdot d_{U}$, which can be made small by shrinking $U$,
can really be viewed as part of the optimization constant
$\varepsilon_{\text{OPT}}$, since it is controlled by the choice of generative
model class.
## Appendix C Point-wise correction corresponds to distribution-wise
correction
In this section we provide a sufficient condition under which you can
associate a distribution-wise correction mapping (like the one we consider in
the paper, $\pi_{\gamma}$) to a point-wise correction mapping (which is the
one you are more likely to find in the wild).
###### Definition C.1.
Let $X=\\{x_{1},\dots,x_{n}\\}\subset\mathbb{R}^{m}$ and define the empirical
cumulative distribution function $\Phi_{X}$ by
$\displaystyle\Phi_{X}(v):=\Phi_{X}(v;\\{x_{1},\dots,x_{n}\\}):=\frac{1}{n}\sum_{i=1}^{n}\chi_{v}(x_{i}),$
where for $v\in\mathbb{R}^{m}$, $\chi_{v}:\mathbb{R}^{m}\to\\{0,1\\}$ is the
indicator function for the set $\prod_{i=1}^{m}(-\infty,v_{i}]$. For a
continuous distribution, the cumulative distribution function is defined in
the usual way.
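For concreteness, here is a minimal implementation of the empirical CDF from Definition C.1 (the function and variable names are our own):

```python
import numpy as np

def empirical_cdf(X):
    """Phi_X from Definition C.1: the fraction of sample points lying in the
    orthant prod_j (-infty, v_j], i.e. points <= v coordinate-wise."""
    X = np.asarray(X, dtype=float)
    def phi(v):
        return float(np.mean(np.all(X <= np.asarray(v, dtype=float), axis=1)))
    return phi

X = [[0.0, 0.0], [1.0, 1.0], [2.0, -1.0]]
phi = empirical_cdf(X)
assert abs(phi([1.5, 0.5]) - 1/3) < 1e-12   # only (0, 0) is <= v coordinate-wise
assert phi([2.0, 1.0]) == 1.0               # all three points qualify
```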
###### Definition C.2.
Suppose that we have a model $p_{\theta}$ and an arbitrary function
$\Pi:\mathbb{R}^{m}\to\mathbb{R}^{m}$. Then we say that $\Pi$ is a _valid
point-wise correction function_ for $p_{\theta}$ if there exists a
$\gamma\in[0,\infty]$ such that
$\lim_{n\to\infty}\left(\mathbb{E}_{X\sim
p_{\theta}^{n}}\sup_{v\in\mathbb{R}^{m}}\|\Phi_{\Pi(X)}(v)-\Phi_{\pi_{\gamma}p_{\theta}}(v)\|\right)\to
0,$ (50)
almost surely, where the expectation is over all samplings
$X=\\{x_{1},\dots,x_{n}\\}$ of size $n$ from $p_{\theta}$.
###### Intuition C.3.
This is saying that the CDFs for $\pi_{\gamma}p_{\theta}$ and $\Pi(X\sim
p_{\theta}^{n})$ are equal in expectation, for large enough $n$. This is one
way of saying that $\pi_{\gamma}p_{\theta}$ and $\Pi(X\sim p_{\theta}^{n})$,
for large enough $n$, are nearly identical probability distributions.
###### Definition C.4.
If the limit in (50) exists, then we define the _distribution-wise projection
function_ corresponding to $\Pi$ to be
$\pi_{\gamma}p_{\theta}=\frac{1}{1+\gamma}p_{\theta}+\frac{\gamma}{1+\gamma}p_{\theta^{\star}},$
(51)
and we define the _projection strength of the point-wise correction function_
$\Pi$ to be $\gamma$.
So intuitively, (50) implies that the projection function $\Pi$ maps samples
from $p_{\theta}$ to a different space such that they look like they come from
a combination of the original distribution $p_{\theta}$ and
$p_{\theta^{\star}}$, at least at the level of CDFs.
###### Remark C.5.
Such a $\gamma$, if it exists, is unique. Furthermore, if
$p_{\theta}=p_{\theta^{\star}}$, then $\gamma=\infty$.
The limit condition in Definition C.2 is abstract, and can be hard to parse.
We present an example of a simple point-wise correction for the Gaussian toy
example that we consider in Section 5, whose corresponding distribution-wise
correction is exactly what one would expect it to be: the weighted average of
the corresponding Gaussians. Recall that we demonstrated empirically in Figure
2 that Theorem 4.3 holds for that example. The projection function is depicted
in Figure 5.
###### Example C.6.
Let $G_{1}=\mathcal{N}(0,\sigma_{1}^{2}I_{d})$ (initial distribution,
corresponds to $\theta$) and $G_{2}=\mathcal{N}(0,\sigma_{2}^{2}I_{d})$
(target distribution, corresponds to $\theta^{\star}$). Given
$x_{1},\dots,x_{n}\sim G_{1}$, we define $\Pi^{\gamma}$ as follows: let
$y_{1},\dots,y_{n}\sim G_{2}$. Then choose a random $\sigma\in S_{n}$ ($S_{n}$
= group of permutations on $n$ symbols). Define
$\Pi^{\gamma}(x_{i}):=\frac{x_{i}+\gamma y_{\sigma(i)}}{1+\gamma}$
for $1\leq i\leq n$. Note that by construction,
$\Pi^{\gamma}(x_{i})\sim\frac{G_{1}+\gamma G_{2}}{1+\gamma},$
for $1\leq i\leq n$.
Next, we define the projection set $\Pi X:=\\{\Pi^{\gamma}(x_{i})\\}_{1\leq
i\leq n}$, let
$\pi_{\gamma}G_{1}:=\frac{1}{1+\gamma}G_{1}+\frac{\gamma}{1+\gamma}G_{2}$, and
let $\Phi_{\pi_{\gamma}G_{1}}$ represent the cumulative distribution function
of the Gaussian $\pi_{\gamma}G_{1}$. Then, since
$\Pi^{\gamma}(x_{i})\sim\pi_{\gamma}G_{1}$, we have by the uniform law of
large numbers that
$\lim_{n\to\infty}\left(\mathbb{E}_{\\{x_{i}\sim
G_{1}\\}_{i=1}^{n}}\mathrm{sup}_{v\in\mathbb{R}^{m}}\left\|\Phi_{\Pi
X}(v)-\Phi_{\pi_{\gamma}G_{1}}(v)\right\|\right)\to 0$ (52)
almost surely. Therefore $\Pi^{\gamma}$ is a valid point-wise correction
function, and its corresponding distribution-wise projection function is
$\pi_{\gamma}G_{1}$.
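A numerical sketch of the construction in Example C.6 (all constants are illustrative). It checks only the elementary fact used above: each $\Pi^{\gamma}(x_{i})=(x_{i}+\gamma y_{\sigma(i)})/(1+\gamma)$, as a scaled sum of independent zero-mean Gaussians, is zero-mean Gaussian with variance $(\sigma_{1}^{2}+\gamma^{2}\sigma_{2}^{2})/(1+\gamma)^{2}$:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma1, sigma2, gamma, n = 1.0, 3.0, 2.0, 200_000  # illustrative

x = rng.normal(0.0, sigma1, size=n)        # x_1..x_n ~ G1
y = rng.normal(0.0, sigma2, size=n)        # y_1..y_n ~ G2
y = y[rng.permutation(n)]                  # a random sigma in S_n
corrected = (x + gamma * y) / (1 + gamma)  # Pi^gamma(x_i)

# Each corrected sample is N(0, (sigma1^2 + gamma^2 sigma2^2) / (1+gamma)^2).
var_expected = (sigma1**2 + gamma**2 * sigma2**2) / (1 + gamma) ** 2
assert abs(corrected.mean()) < 0.02
assert abs(corrected.var() - var_expected) < 0.1
```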
###### Remark C.7.
In the example we considered in Section 5, we had a total-distance-traveled
minimization condition, but for this proof we do not even need that
hypothesis. (In the proof, this would have corresponded to the additional
assumption that we have chosen a $\sigma\in S_{n}$ such that
$\sum_{i=1}^{n}\|x_{i}-y_{\sigma(i)}\|$ is minimized.) This implies that
different point-wise correction functions can correspond to the same
distribution-wise correction function.
Figure 5: Illustration of the distribution-wise projection function in our
Gaussian toy example. Correcting one Gaussian in the direction of another,
like we consider in Section 5, corresponds to finding the “(weighted) average
Gaussian” that lives between the two.
## Appendix D Mathematical errors in (Bertrand et al., 2024) and their
implications
This section offers a detailed explanation of what we believe are mathematical
errors in (Bertrand et al., 2024) and how they affect their stated results.
All notation throughout this section refers to the notation used in (Bertrand
et al., 2024), even if we have redefined certain variables in our preceding
arguments. Additionally, all references to equations, theorems, etc. are to
those in (Bertrand et al., 2024) as well.
* •
Proof of Proposition 4: Equation (60) should read
$\displaystyle\mathcal{G}_{\lambda}^{\infty}(\theta)=\operatorname*{local\,argmax}_{\theta^{\prime}\in\Theta}[\mathbb{E}_{z\sim
p_{\text{data}}}[\log p_{\theta^{\prime}}(z)]+\lambda\mathbb{E}_{z\sim
p_{\theta}}[\log p_{\theta^{\prime}}(z)]].$
This is not a critical error to the proof since their inequalities (62) and
(63) are still valid and thus imply for all $\theta^{\prime}\in\Theta$ that
$\displaystyle\mathbb{E}_{z\sim p_{\text{data}}}[\log
p_{\theta^{\prime}}(z)]+\lambda\mathbb{E}_{z\sim p_{\theta^{\star}}}[\log
p_{\theta^{\prime}}(z)]\leq\mathbb{E}_{z\sim p_{\text{data}}}[\log
p_{\theta^{\star}}(z)]+\lambda\mathbb{E}_{z\sim p_{\theta^{\star}}}[\log
p_{\theta^{\star}}(z)],$
thus yielding the conclusion
$\mathcal{G}^{\infty}_{\lambda}(\theta^{\star})=\theta^{\star}$.
* •
Statement and proof of Theorem 2: In the statement of their Theorem 2, there
is a $t$ in the $\sqrt{\log\frac{bt}{\delta}}$ term, which is erroneous, as
can be seen for example by comparing with their Assumption 3 (indeed, the
bound on the RHS of (12) diverges to $\infty$ as $t\to\infty$). Additionally,
their proof (cf. inequality (100)) only shows that the bound holds at
generation $t$ with a probability of $(1-\delta)^{t}$ rather than $(1-\delta)$
as claimed. Our proof of Theorem A.11 corrects these errors, and the correct
form of their Theorem 2 can be recovered by setting $\gamma=0$ in (43). This
weakens their result; as the number of generations $t$ increases linearly,
their stability estimate holds with exponentially smaller likelihood, and the
same is true for our stability estimate. However, when applying this result in
practice, if we know a priori that we will only do a moderate number of
self-consuming iterations, then we can choose a sufficiently small $\delta$
and still be guaranteed a high likelihood that the stability bound holds.
For example, $(1-0.001)^{128}\approx 0.88$. The flip side of this analysis is
that as $\delta\to 0$ in the denominator of the bound in Theorem 2, the bound
goes to $\infty$. Our version of their Theorem 2, in the correction function
case, has the same general behavior, but the presence of the $(1+\gamma)^{t}$
factor in the denominator will help cancel out that effect, with increasing
help as $\gamma$ increases. Therefore, although their Theorem 2 holds with
exponentially less likelihood than they claimed, and our Theorem A.11 does as
well, for the values considered in practical settings, the effect generally
isn’t that pronounced.
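The probability calculation above is easy to reproduce (a one-line check of the $(1-\delta)^{t}$ decay):

```python
# Success probability after t generations when each step holds w.p. 1 - delta.
def success_prob(delta, t):
    return (1 - delta) ** t

assert abs(success_prob(0.001, 128) - 0.88) < 0.005   # the example in the text
assert success_prob(0.01, 128) < 0.3                  # decays much faster for larger delta
```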
## Appendix E Additional Human Motion Generation Qualitative Results
In Figures 6, 7, and 8, we present additional qualitative observations and
analysis of our synthesized motions. We present more evidence that iterative
fine-tuning with self-correction yields physically plausible motions
comparable to the baseline, whereas iterative fine-tuning without self-
correction yields motions that are incorrect for various reasons. See the
captions of the referenced figures for analysis of some characteristic failure
modes of the iterative fine-tuning loop without self-correction.
A technical note: for all figures, we render the motions from the same
environment and camera position. We consolidate each render into the same
image without resizing it. This means that if a figure appears larger relative
to the others, the human moved closer to the camera. Some motions will have
transparent frames of past positions; the more transparent the image, the
farther back in the past it was in the motion sequence. Finally, in each
figure, the text prompt for all generated motions was the same, namely the
prompt associated with the ground truth motion in the HumanML3D (Guo et
al., 2022) training data, which we also visualize. Note that the coloring in
the humanoid figures corresponds to the coloring in the graphs.
Figure 6: Here we see the negative _floating_ phenomenon exacerbated by
iterative fine-tuning, whereas iterative fine-tuning with self-correction
generates a motion with floor contact integrity comparable to the ground truth
and baseline. The floating metric is formally defined in (Yuan et al., 2023) as
the distance between the lowest vertex on the human mesh and the floor plane.
All three sequences were generated using the same prompt: person got down and
is crawling across the floor. Each snapshot was taken at exactly frame 87. The
green figure appears larger than the other two only because it is closer to
the camera. The two motions on the right were synthesized after 50 generations
of training with $25\%$ synthetic augmentation on $n=64$ data points.
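The floating metric described in the caption above can be sketched as a per-frame computation. This is our own illustration, assuming the mesh is given as a vertex array with a vertical coordinate; it is not the evaluation code used in the experiments:

```python
import numpy as np

def floating_metric(vertices: np.ndarray, floor_height: float = 0.0) -> float:
    """Distance between the lowest vertex on the mesh and the floor plane.

    vertices: array of shape (num_vertices, 3); we assume the vertical
    coordinate is column 2 and clamp floor penetration to zero.
    """
    lowest = float(vertices[:, 2].min())
    return max(lowest - floor_height, 0.0)

# Toy frame whose lowest vertex hovers 0.05 units above the floor.
frame = np.array([[0.0, 0.0, 0.05], [0.1, 0.0, 0.90], [0.0, 0.1, 1.70]])
print(floating_metric(frame))  # 0.05
```

A value near zero indicates good floor contact integrity; a persistently positive value over a sequence is the floating failure mode shown in Figure 6.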
Figure 7: All four of the above motions correspond to the prompt: a person
raises right hand to face looks around and puts hand down back to side. The
model trained with iterative fine-tuning outputs spurious motion that slides
the figure to the right; in the video for this example, the human also rotates
their forearm unnaturally and forcefully. In contrast, the baseline
and iterative fine-tuning with self-correction models’ motions both accurately
embody the prompt. Each generated snapshot is taken at exactly frame 142 while
the ground truth’s image is frame 70 in its sequence. The two motions on the
right were synthesized after 42 generations with $10\%$ synthetic
augmentation, where the ground truth dataset has size $n=2794$.
Figure 8: Here we observe that iterative fine-tuning fails to produce any
meaningful motion sequence, but the iterative fine-tuning with self-correction
and baseline models generate results consistent with their prompt: walks side
ways but back and forth. Each snapshot for the generated motions was taken at
exactly frame 120 while the ground truth image is a snapshot from frame 69.
These images were synthesized after 50 generations of the model that was
trained on $n=64$ data points at $25\%$ synthetic augmentation.
## Appendix F Additional Human Motion Generation Quantitative Results
See Figures 9, 10, 11 for results when the dataset size is
$n\in\\{64,128,256\\}$ and the synthetic augmentation percentage is
$\lambda\in\\{0.25,0.50,0.75,1.00\\}$. See Figures 12 and 13 for
additional results on our iterative fine-tuning experiments when the dataset
size is $n=2794$ and the synthetic augmentation percentage is
$\lambda\in\\{0.05,0.10,0.15,0.20,0.25\\}$. The graphs provide evidence across
$17$ experiment settings that our iterative fine-tuning procedure with self-
correction yields better training performance than iterative fine-tuning with
no self-correction for the motion synthesis task, in accordance with Theorem
4.3.
Figure 9: Results from our human motion experiments with iterative fine-tuning
with and without self-correction, where the training set has size $64$. These
are graphs for evaluation metrics on the last checkpoint for every generation;
this is the checkpoint used for sampling in the self-consuming loop
experiments, and it is also the checkpoint where training is resumed with this
new partially synthesized dataset. These results demonstrate that iterative
fine-tuning with self-correction generally outperforms iterative fine-tuning,
and is sometimes even competitive with baseline performance.
Figure 10: Results from our human motion experiments with iterative fine-
tuning with and without self-correction, where the training set has size
$128$. These are graphs for evaluation metrics on the last checkpoint for
every generation; this is the checkpoint used for sampling in the self-
consuming loop experiments, and it is also the checkpoint where training is
resumed with this new partially synthesized dataset. These results demonstrate
that iterative fine-tuning with self-correction generally outperforms
iterative fine-tuning, and is sometimes even competitive with baseline
performance. Notably, the performance gain of iterative fine-tuning with self-
correction over iterative fine-tuning is less pronounced than when the dataset
size is $n=64$.
Figure 11: Results from our human motion experiments with iterative fine-
tuning with and without self-correction, where the training set has size
$256$. These are graphs for evaluation metrics on the last checkpoint for
every generation; this is the checkpoint used for sampling in the self-
consuming loop experiments, and it is also the checkpoint where training is
resumed with this new partially synthesized dataset. These results demonstrate
that iterative fine-tuning with self-correction generally outperforms
iterative fine-tuning, and is sometimes even competitive with baseline
performance.
Figure 12: Results from our human motion experiments on iterative fine-tuning
with dataset size $n=2794$. These are graphs for evaluation metrics on the
last checkpoint for every generation; this is the checkpoint used for sampling
in the augmentation loop experiments, and it is also the checkpoint where
training is resumed with this new synthesized dataset. In these results, it
appears that iterative fine-tuning with self-correction has less variance
during training than iterative fine-tuning with no self-correction, and
generally has better FID scores later in training. Notably, these two
curves are closer together than they were in the cases $n\in\\{64,128,256\\}$.
Figure 13: Results from our human motion experiments on iterative fine-tuning
with dataset size $n=2794$. These are graphs of the average evaluation metrics
for every generation. Graphing the average evaluation metrics makes the trend
in the training dynamics over time clearer. With this additional smoothing, it
is easier to see that iterative fine-tuning with self-correction outperforms
iterative fine-tuning with no self-correction, and is competitive with the
baseline after many generations; in fact, it appears to converge to the
baseline (on average) for every synthetic augmentation percentage that we
considered.
## Appendix G Consistency Across Seeds: Additional Human Motion Generation
Quantitative Results
In Figures 14, 15, 16, and 17, we present experimental results from runs
across three more seeds for our human motion experiments when the dataset size
is $n=64$. We find that the self-correction technique consistently yields
improved training dynamics over iterative fine-tuning without correction.
Figure 14: Results from our human motion experiments on iterative fine-tuning,
with dataset size $n=64$ and $25\%$ augmentation percentage. Each row
corresponds to a different random seed. We can see that iterative fine-tuning
with self-correction consistently outperforms iterative fine-tuning with no
self-correction, and the FID score appears to converge to the baseline after
many generations.
Figure 15: Results from our human motion experiments on iterative fine-tuning,
with dataset size $n=64$ and $50\%$ augmentation percentage. Each row
corresponds to a different random seed. We can see that iterative fine-tuning
with self-correction consistently outperforms iterative fine-tuning with no
self-correction, and the FID score appears to converge to the baseline after
many generations.
Figure 16: Results from our human motion experiments on iterative fine-tuning,
with dataset size $n=64$ and $75\%$ augmentation percentage. Each row
corresponds to a different random seed. We can see that iterative fine-tuning
with self-correction consistently outperforms iterative fine-tuning with no
self-correction, and the FID score appears to converge near the baseline after
many generations.
Figure 17: Results from our human motion experiments on iterative fine-tuning,
with dataset size $n=64$ and $100\%$ augmentation percentage. Each row
corresponds to a different random seed. We can see that iterative fine-tuning
with self-correction consistently outperforms iterative fine-tuning with no
self-correction. However, we see less stability than in the runs with a lower
augmentation percentage. This is in accordance with Theorem 4.3.
# SRB measures of singular hyperbolic attractors
###### Abstract.
It is known that hyperbolic maps admitting singularities have at most
countably many ergodic Sinai-Ruelle-Bowen (SRB) measures. These maps include
the Belykh attractor, the geometric Lorenz attractor, and more general Lorenz-
type systems. In this paper, we establish easily verifiable sufficient
conditions guaranteeing that the number of ergodic SRB measures is at most
finite, and provide examples and nonexamples showing that the conditions are
necessary in general.
###### Key words and phrases:
Singular hyperbolic maps, SRB measures, strange attractors, hyperbolic sets,
physical measures.
###### 1991 Mathematics Subject Classification:
Primary: 37C05, 37C40, 37D05, 37D20, 37D45; Secondary: 37C10, 37C70, 37D35.
Dominic Veconi∗
Department of Mathematics
The Pennsylvania State University
University Park, PA 16802, USA
## 1\. Introduction
One primary question in smooth ergodic theory is the existence of “physical
measures” for a smooth dynamical system. Given a compact Riemannian manifold
$M$ and a smooth map $f:U\to M$, $U\subseteq M$ open, a _physical measure_ is
one for which the Birkhoff averages of continuous functions converge to the
corresponding space averages on a set of positive Riemannian volume. In other
words, a probability measure $\mu$ is a
_physical measure_ if
$m\bigg{\\{}x\in
U:\lim_{n\to+\infty}\frac{1}{n}\sum_{k=0}^{n-1}\left(\varphi\circ
f^{k}\right)(x)=\int_{U}\varphi\,d\mu\quad\forall\varphi\in
C^{0}(U)\bigg{\\}}>0,$
where $m$ is the Riemannian volume. Among the most significant physical
measures are the _Sinai-Ruelle-Bowen (SRB) measures_. These are invariant
measures for hyperbolic dynamical systems that have conditional measures on
unstable leaves that are absolutely continuous with respect to the Riemannian
leaf volume. For uniformly hyperbolic dynamical systems (such as transitive
Anosov diffeomorphisms and attractors of Axiom A systems), there is a unique
SRB measure [9], and the existence of SRB measures has been established for
several classes of nonuniformly hyperbolic dynamical systems [10, 14, 18] and
partially hyperbolic dynamical systems [2, 8]. It was further shown in [15]
that if $M$ is a compact Riemannian 2-manifold and $f:M\to M$ is a hyperbolic
diffeomorphism admitting an SRB measure, then this SRB measure is unique.
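As a concrete illustration of the definition (our own example, not taken from the works cited above), the logistic map $f(x)=4x(1-x)$ on $[0,1]$ admits an absolutely continuous invariant measure with density $1/(\pi\sqrt{x(1-x)})$, which is a physical measure; for the observable $\varphi(x)=x$ the space average is $1/2$, and a numerically computed Birkhoff average approaches it:

```python
def birkhoff_average(x0: float, n: int) -> float:
    """Time average (1/n) * sum_{k<n} phi(f^k(x0)) for phi(x) = x
    under the logistic map f(x) = 4x(1 - x)."""
    x, total = x0, 0.0
    for _ in range(n):
        total += x
        x = 4.0 * x * (1.0 - x)
    return total / n

# For Lebesgue-typical starting points the time average approaches the
# space average 1/2, as the definition of a physical measure requires.
print(birkhoff_average(0.1234, 200_000))
```

The set of starting points for which this convergence holds has full Lebesgue measure here, which is exactly the positive-volume condition in the displayed definition.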
Many dynamical systems in engineering and natural sciences exhibit “chaotic”
behavior: their trajectories appear disordered and they are highly sensitive
to initial data. The simplest mathematical examples of such systems are
uniformly hyperbolic and uniformly expanding, and so hyperbolic dynamical
systems have been at the forefront of smooth ergodic theory since at least the
1960s. However, most stochastic dynamical systems arising from physical and
natural phenomena are not uniformly hyperbolic. In these instances, uniqueness
results for uniformly hyperbolic dynamical systems and surface maps (such as
those described in [15]) no longer apply. Examples of such dynamical systems
include the Lorenz attractor model of atmospheric convection, the associated
geometric Lorenz attractor (described in more detail below), and the Belykh
attractor of phase synchronization theory [5, 13, 17]. The latter two are maps
of the unit square that admit highly complex limit sets, so that the resulting
maps do not preserve Lebesgue volume and may not _a priori_ admit a
unique SRB measure.
Although our results concern discrete singular hyperbolic maps, historically
many results about singular hyperbolic attractors come from investigations
into hyperbolic flows, the most famous being the flow generated by the Lorenz
equations. In [11], J. Kaplan and J. Yorke used a Poincaré return map to study
the dynamical behavior of the Lorenz attractor, such as the parameters for
which periodic points are dense. This Poincaré map was later reformulated as
the _geometric Lorenz attractor_ , which is a simplified discrete model of the
Poincaré map of the original Lorenz flow. The more general family of discrete
_Lorenz-type maps_ was introduced in [1]. In the years that followed, the
Lorenz system and related hyperbolic flows have led to active research in
singular hyperbolic attractors (see e.g. [1, 5, 13, 16, 17], and others).
There is also a large body of work on singular hyperbolic and sectional-
hyperbolic flows more generally. In [17], it is shown that singular hyperbolic
flows admit finitely many ergodic physical measures; more recently, it was
shown in [3] that flows of Hölder-$C^{1}$ vector fields admitting a sectional-
hyperbolic attracting set admit finitely many ergodic SRB measures. The proof
in [3] also relies on Poincaré return maps, and so these results extend to
discrete singular hyperbolic maps arising as Poincaré maps of hyperbolic
flows. For a detailed discussion of the ergodic properties of hyperbolic flows
and their attractors, see [4].
In this paper, we consider the class of discrete singular hyperbolic dynamical
systems. These are hyperbolic maps $f:K\setminus N\to K$, where $K\subset M$
is a precompact open subset of a Riemannian manifold $M$, and $N\subset K$ is
a closed subset of singularities on which $f$ fails to be continuous and/or
differentiable. The map $f$ is uniformly hyperbolic on the non-invariant set
$K\setminus N$, but behaves more similarly to the non-uniformly hyperbolic
setting on an invariant set that consists of trajectories passing near the
singular set $N$ at a prescribed rate. Our setting includes systems that
are derived from Poincaré maps of hyperbolic flows, such as the geometric
Lorenz attractor, but also includes singular hyperbolic dynamical systems that
do not arise from flows, such as the Lozi map [12]. In [13], it was shown that
the attractors admitted by singular hyperbolic maps support at most countably
many ergodic SRB measures. In [16], conditions were given under which a
singular hyperbolic attractor admits at most finitely many ergodic SRB
measures. We provide an alternative proof of this result, with somewhat
different conditions that are easy to verify. Namely, if the singular set is a
disjoint union of finitely many embedded submanifolds that transversally
intersect unstable leaves, and if the images of neighborhoods of the singular
set remain separated from the singular set under the dynamics for sufficiently
long but finite time (conditions (SH3), (SH6), and (SH7)), then there are at
most finitely many ergodic SRB measures.
Although most examples of singular hyperbolic attractors in the literature
satisfy conditions (SH3) - (SH7) (see the examples in [1, 5, 7, 12, 13]),
there exist singular hyperbolic attractors that do not satisfy these
conditions and admit infinitely many ergodic components. For this reason,
these conditions are necessary for our statement to be true in full
generality. For example, one can construct a family of Lorenz-type attractors
whose singular sets have infinitely many components, and these examples admit
countably many SRB measures, but these maps are not topologically transitive
(see Section 4.3).
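As a concrete instance of the discrete setting, the Lozi map mentioned above is a piecewise affine map of the plane whose differential is discontinuous on the line $x=0$. A minimal sketch of iterating it (the parameters $a=1.7$, $b=0.5$ are the classical choice from the literature, not taken from this paper):

```python
def lozi_orbit(a: float = 1.7, b: float = 0.5, n: int = 10_000):
    """Iterate the Lozi map L(x, y) = (1 + y - a|x|, b x); the |x| term
    makes dL discontinuous on the singular line x = 0."""
    x, y = 0.1, 0.1
    points = []
    for _ in range(n):
        x, y = 1.0 + y - a * abs(x), b * x
        points.append((x, y))
    return points

orbit = lozi_orbit()
# For these parameters the orbit settles onto a bounded strange attractor.
print(max(abs(px) for px, py in orbit))
```

Plotting the returned points reveals the familiar piecewise-linear strange attractor; the singular line plays the role of the set $N$ in the framework above.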
Once the existence of ergodic SRB measures has been established, a natural
question to ask is when the SRB measure is unique. In [15], it is shown that
in the case of a $C^{1+\alpha}$ diffeomorphism $f$ of a compact surface, if
$f$ is topologically transitive, then $f$ admits at most one SRB measure. A
similar result may be proven for singular hyperbolic attractors, provided a
certain regularity condition on the stable foliation (Theorem 3.5, also [13]).
The regularity condition needed is _local continuity_ , which roughly means
that the smooth functions $E^{s}(x)\to M$ defining the local stable leaf
$W_{\mathrm{loc}}^{s}(x)$ vary continuously with $x\in K\setminus N$, where
$E^{s}(x)\subset T_{x}M$ is the stable subspace at $x$. In this paper, we show
that a singular hyperbolic map for which the stable foliation is locally
continuous admits a unique SRB measure if and only if the map is topologically
transitive.
This paper is structured as follows. Section 2 is devoted to preliminary
constructions and definitions needed to discuss singular hyperbolic dynamical
systems. Our main result is stated and proven in Section 3. Section 4
discusses examples of dynamical systems satisfying the hypotheses of our main
result, as well as examples of systems that fail these hypotheses and
admit infinitely many SRB measures.
## 2\. Preliminaries
We begin by defining singular hyperbolic attractors, and discuss some of their
major properties. We consider a Riemannian manifold $M$, an open, bounded,
connected subset $K\subset M$ with compact closure, and a closed subset
$N\subset K$. We further consider a map $f:K\setminus N\to K$ satisfying:
1. (SH1)
$f$ is a $C^{2}$ diffeomorphism from $K\setminus N$ to $f(K\setminus N)$.
We further define $N^{+}:=N\cup\partial K$ as the _discontinuity set_ for $f$
(on which the function $f$ is discontinuous), and further define
$N^{-}=\left\\{y\in K:\textrm{There are }z\in N^{+}\textrm{ and }z_{n}\in
K\setminus N^{+}\textrm{ s.t. }z_{n}\to z\textrm{ and }f(z_{n})\to
y\right\\}.$
The set $N^{-}$ is referred to as the _discontinuity set_ for $f^{-1}$. We
further assume the map $f$ satisfies:
1. (SH2)
There exist $C_{i}>0$ and $\alpha_{i}\geq 0$, with $i=1,2$, such that
$\displaystyle\left\lVert d^{2}f_{x}\right\rVert$ $\displaystyle\leq
C_{1}\rho(x,N^{+})^{-\alpha_{1}}\quad\textrm{for }x\in K\setminus N,$
$\displaystyle\left\lVert d^{2}f_{x}^{-1}\right\rVert$ $\displaystyle\leq
C_{2}\rho(x,N^{-})^{-\alpha_{2}}\quad\textrm{for }x\in f(K\setminus N)$
where $\rho$ is the Riemannian distance in $M$.
Define the set $K^{+}$ by
$\displaystyle K^{+}=\mathop{\bigcap}_{n=0}^{\infty}\left(K\setminus
f^{-n}(N^{+})\right)=\left\\{x\in K:f^{n}(x)\not\in N^{+}\textrm{ for all
}n\geq 0\right\\},$
so that $K^{+}$ is the largest forward-invariant set on which $f$ is
continuous. Further, define
$D=\mathop{\bigcap}_{n=0}^{\infty}f^{n}(K^{+})\quad\textrm{and}\quad\Lambda=\overline{D}.$
We say $\Lambda$ is the _attractor_ for $f$.
###### Proposition 1.
[13] We have
$D=\Lambda\setminus\mathop{\bigcup}_{n\in\mathbb{Z}}f^{n}(N^{+})$.
Furthermore, $f$ and $f^{-1}$ are well-defined on $D$, and $f(D)=D$ and
$f^{-1}(D)=D$.
Given $z\in M$, $\alpha>0$, and a subspace $P\subset T_{z}M$, we denote the
cone at $z$ around $P$ with angle $\alpha$ by
$C(z,\alpha,P)=\left\\{v\in T_{z}M:\angle(v,P):=\inf_{w\in
P}\angle(v,w)\leq\alpha\right\\}.$
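The infimum defining $\angle(v,P)$ is attained at the orthogonal projection of $v$ onto $P$, so cone membership can be tested numerically. A small sketch (our own illustration, not from the text):

```python
import numpy as np

def angle_to_subspace(v: np.ndarray, P: np.ndarray) -> float:
    """Angle between a vector v and the subspace spanned by the columns
    of P, i.e. the infimum over w in P of the angle between v and w,
    attained at the orthogonal projection of v onto P."""
    Q, _ = np.linalg.qr(P)        # orthonormal basis for the subspace
    proj = Q @ (Q.T @ v)          # orthogonal projection of v onto P
    cos_angle = np.linalg.norm(proj) / np.linalg.norm(v)
    return float(np.arccos(np.clip(cos_angle, 0.0, 1.0)))

# v = (1, 1) against the x-axis in R^2: the angle is pi/4.
v = np.array([1.0, 1.0])
P = np.array([[1.0], [0.0]])      # column spanning the x-axis
print(angle_to_subspace(v, P))    # about 0.7853 (= pi/4)
```

With this in hand, $v\in C(z,\alpha,P)$ is simply the condition `angle_to_subspace(v, P) <= alpha`.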
###### Definition 2.1.
The map $f:K\setminus N\to K$ is _singular hyperbolic_ if there is $C>0$,
$\lambda>1$, a function $\alpha:D\to\mathbb{R}$, and two distributions
$P^{s},P^{u}$ on $K\setminus N^{+}$ of dimensions $\dim P^{s}=p$, $\dim
P^{u}=q=n-p$ (with $n=\dim M$), such that the cones
$C^{s}(z)=C(z,\alpha(z),P^{s}_{z})$ and $C^{u}(z)=C(z,\alpha(z),P^{u}_{z})$,
for $z\in K\setminus N$, satisfy the following conditions:
1. (a)
The angle between $C^{s}(z)$ and $C^{u}(z)$ is uniformly bounded below over
$K\setminus N^{+}$, and in particular, $C^{s}(z)\cap C^{u}(z)=\\{0\\}$;
2. (b)
$df_{z}(C^{u}(z))\subset C^{u}(f(z))$ for $z\in K\setminus N^{+}$, and
$df^{-1}_{z}(C^{s}(z))\subset C^{s}(f^{-1}(z))$ for $z\in f(K\setminus
N^{+})$;
3. (c)
for any $n>0$, we have:
$\displaystyle|df_{z}^{n}v|$ $\displaystyle\geq
C\lambda^{n}|v|\quad\textrm{for }z\in K^{+},v\in C^{u}(z);$
$\displaystyle|df_{z}^{-n}v|$ $\displaystyle\geq
C\lambda^{n}|v|\quad\textrm{for }z\in f^{n}(K^{+}),v\in C^{s}(z).$
In this instance, the set $\Lambda$ defined above is called a _singular
hyperbolic attractor_.
Define the following subsets of $T_{z}M$ for $z\in D$:
$E^{s}_{z}=\mathop{\bigcap}_{n=0}^{\infty}df^{-n}_{f^{n}(z)}C^{s}(f^{n}(z))\quad\textrm{and}\quad
E^{u}_{z}=\mathop{\bigcap}_{n=0}^{\infty}df^{n}_{f^{-n}(z)}C^{u}(f^{-n}(z)).$
###### Proposition 2.
[13] The sets $E^{s}_{z}$ and $E^{u}_{z}$ are subspaces of $T_{z}M$, called
the _stable_ and _unstable subspaces_ at $z$ respectively. They satisfy the
following properties:
1. (a)
the dimensions of these subspaces are the same as the respective subspaces
$P^{s}_{z}$ and $P^{u}_{z}$ around which the cones $C^{s}(z)$ and $C^{u}(z)$
are centered. That is, $\dim E^{s}_{z}=\dim P^{s}_{z}=p$ and $\dim
E^{u}_{z}=\dim P^{u}_{z}=q=n-p$;
2. (b)
$T_{z}M=E^{s}_{z}\oplus E^{u}_{z}$;
3. (c)
the angle between $E^{s}_{z}$ and $E^{u}_{z}$ is bounded below uniformly over
$D$;
4. (d)
for any $n\geq 0$ and $z\in D$, we have
$\displaystyle|df^{n}_{z}v|$ $\displaystyle\leq
C\lambda^{-n}|v|\quad\textrm{for }v\in E^{s}(z),$
$\displaystyle|df^{-n}_{z}v|$ $\displaystyle\leq
C\lambda^{-n}|v|\quad\textrm{for }v\in E^{u}(z).$
The distributions $E^{s}$ and $E^{u}$ on $D$ thus form a uniformly hyperbolic
structure with singularities. In particular, they are the tangent spaces of
stable and unstable foliations on $D$. To rigorously characterize the leaves
of these foliations, we need to define the subsets on which stable and
unstable manifolds may be defined.
For arbitrary $\varepsilon>0$ and $l\in\mathbb{N}$, we denote:
$\displaystyle\widehat{D}_{\varepsilon,l}^{+}$ $\displaystyle=\left\\{z\in
K^{+}:\rho\left(f^{n}(z),N^{+}\right)\geq l^{-1}e^{-\varepsilon n},\>n\geq
0\right\\};$ $\displaystyle D^{-}_{\varepsilon,l}$
$\displaystyle=\left\\{z\in\Lambda:\rho\left(f^{-n}(z),N^{-}\right)\geq
l^{-1}e^{-\varepsilon n},n\geq 0\right\\};$ $\displaystyle
D^{+}_{\varepsilon,l}$
$\displaystyle=\widehat{D}_{\varepsilon,l}^{+}\cap\Lambda;$ $\displaystyle
D^{0}_{\varepsilon,l}$ $\displaystyle=D^{-}_{\varepsilon,l}\cap
D^{+}_{\varepsilon,l};$ $\displaystyle D_{\varepsilon}^{\pm}$
$\displaystyle=\mathop{\bigcup}_{l\geq 1}D_{\varepsilon,l}^{\pm};$
$\displaystyle D_{\varepsilon}^{0}$ $\displaystyle=\mathop{\bigcup}_{l\geq
1}D_{\varepsilon,l}^{0}.$
We note that $\widehat{D}^{+}_{\varepsilon,l}$, $D^{\pm}_{\varepsilon,l}$, and
$D^{0}_{\varepsilon,l}$ are closed, and hence compact. Also observe that
$D^{0}_{\varepsilon}=D_{\varepsilon}^{+}\cap D_{\varepsilon}^{-}\subset D$ for
$\varepsilon>0$, and $D_{\varepsilon}^{0}$ is invariant under both $f$ and
$f^{-1}$. Further, $D_{\varepsilon}^{+}$ and $D_{\varepsilon}^{-}$ are
invariant under $f$ and under $f^{-1}$ respectively.
For the proof of the following proposition, see the discussion in Sections 1.5
and 2.1 of [13].
###### Proposition 3.
There exists $\varepsilon>0$ such that:
1. (a)
for $z\in D^{+}_{\varepsilon}$, there is an embedded (possibly disconnected)
submanifold $W_{\mathrm{loc}}^{s}(z)$ of dimension $p=\dim E^{s}_{z}$ for
which $T_{z}W^{s}_{\mathrm{loc}}(z)=E^{s}_{z}$;
2. (b)
for $z\in D^{-}_{\varepsilon}$, there is an embedded (possibly disconnected)
submanifold $W_{\mathrm{loc}}^{u}(z)$ of dimension $q=n-p=\dim E^{u}_{z}$ for
which $T_{z}W_{\mathrm{loc}}^{u}(z)=E^{u}_{z}$.
Furthermore, define $B^{s}_{z}(y,r)$ to be the ball in
$W^{s}_{\mathrm{loc}}(z)$ of radius $r$ centered at $y\in
W_{\mathrm{loc}}^{s}(z)$, where the distance is the induced distance
$\rho^{s}$ on $W_{\mathrm{loc}}^{s}(z)$. Define $B^{u}_{z}(y,r)$ and
$\rho^{u}$ similarly. Then there is an $\alpha$ with $\lambda^{-1}<\alpha<1$
such that for $r>0$, there is a constant $C=C(r)$ such that:
1. (c)
for $z\in D^{+}_{\varepsilon}$, $y\in W_{\mathrm{loc}}^{s}(z)$, $w\in
B^{s}_{z}(y,r)$, and $n\geq 0$, we have
$\rho^{s}(f^{n}(y),f^{n}(w))\leq C\alpha^{n}\rho^{s}(y,w);$
2. (d)
for $z\in D^{-}_{\varepsilon}$, $y\in W_{\mathrm{loc}}^{u}(z)$, $w\in
B^{u}_{z}(y,r)$, and $n\leq 0$, we have
$\rho^{u}(f^{n}(y),f^{n}(w))\leq C\alpha^{|n|}\rho^{u}(y,w).$
Additionally, for $z\in D_{\varepsilon,l}^{-}$, let $B(z,\delta)$ denote the
ball of $\rho$-radius $\delta$ centered at $z$. Then there are
$\delta_{i}=\delta_{i}(z)>0$, $i=1,2,3$, with
$\delta_{1}>\delta_{2}>\delta_{3}$, so that for $w\in B(z,\delta_{3})$, the
intersection $B^{s}_{z}(z,\delta_{1})\cap W_{\mathrm{loc}}^{u}(w)$ is nonempty
and contains exactly one point, denoted $[w,z]$; and furthermore,
$B^{u}_{w}([w,z],\delta_{2})\subset W_{\mathrm{loc}}^{u}(w)$.
We denote
$W^{u}(x)=\mathop{\bigcup}_{n\geq
0}f^{n}\left(W_{\mathrm{loc}}^{u}(f^{-n}(x))\right)\quad\textrm{for }x\in K$
(1)
and
$W^{s}(x)=\mathop{\bigcup}_{n\geq
0}f^{-n}\left(W_{\mathrm{loc}}^{s}(f^{n}(x))\cap\Lambda\right)\quad\textrm{for
}x\in\Lambda.$ (2)
Given $\delta>0$ and $x\in K$, let $B_{T}^{u}(\delta,x)\subset E^{u}_{x}$
denote the open ball of radius $\delta$ in $E^{u}_{x}$. For $\delta$ less than
the injectivity radius of $M$ at $x$, suppose the connected component of
$\left(\exp_{x}\big{|}_{B_{T}^{u}(\delta,x)}\right)^{-1}(W^{u}(x))\subset
T_{x}M$ containing $0$ is the graph of some smooth function
$\psi:B_{T}^{u}(\delta,x)\to E^{s}_{x}$. If such a $\psi$ exists for a
particular $x\in M$ and $\delta>0$, we denote
$W_{\delta}^{u}(x)=\exp_{x}\left(\left\\{(u,\psi(u)):u\in
B_{T}^{u}(\delta,x)\right\\}\right).$
Such a number $\delta>0$ and such a function $\psi$ exist for each particular
$x\in K$ (in particular they form $W^{u}_{\mathrm{loc}}(x)$), but $\delta$ may
depend on $x$ (and in particular may not have a uniform lower bound). We
define $W^{s}_{\delta}(x)$ similarly.
Given local submanifolds $W_{\mathrm{loc}}^{s}(z_{1})$ and
$W_{\mathrm{loc}}^{s}(z_{2})$, we define the _holonomy map_
$\pi:W_{\mathrm{loc}}^{s}(z_{1})\to W_{\mathrm{loc}}^{s}(z_{2})$ to be
$\pi(w)=[w,z_{2}]=W_{\mathrm{loc}}^{u}(w)\cap W_{\mathrm{loc}}^{s}(z_{2})$.
Let $\nu_{z}^{s}=\nu|_{W_{\mathrm{loc}}^{s}(z)}$ and
$\nu_{z}^{u}=\nu|_{W_{\mathrm{loc}}^{u}(z)}$ denote the induced Riemannian
volumes on $W_{\mathrm{loc}}^{s}(z)$ and $W_{\mathrm{loc}}^{u}(z)$
respectively for $z\in D^{\pm}_{\varepsilon}$.
###### Proposition 4.
The local foliation $W_{\mathrm{loc}}^{s}(z)$ for $z\in D^{0}_{\varepsilon,l}$
is absolutely continuous, in the sense that for any $z_{1},z_{2}\in
D^{0}_{\varepsilon,l}$, the pushforward measure $\pi_{*}\nu_{z_{1}}^{s}$ on
$W_{\mathrm{loc}}^{s}(z_{2})$ is absolutely continuous with respect to
$\nu_{z_{2}}^{s}$.
###### Proof.
This follows from Proposition 10 of [13]. ∎
Generally, maps satisfying (SH1) and (SH2) are dissipative, and so do not
preserve Riemannian volume. Therefore, our interest is in the following class
of measures:
###### Definition 2.2.
A probability measure $\mu$ on $K$ is an _SRB (Sinai-Ruelle-Bowen) measure_ if
$\mu$ is $f$-invariant and if the conditional measures on the unstable leaves
are absolutely continuous with respect to the Riemannian leaf volume.
In [13], the existence of SRB measures for singular hyperbolic attractors is
proven under certain regularity conditions. We will describe their
construction of SRB measures. Let $J^{u}(z)=\det\left(df|_{E^{u}_{z}}\right)$
denote the unstable Jacobian of $f$ at a point $z\in D$. For $y\in W^{u}(z)$
and $n\geq 1$, set
$\kappa_{n}(z,y)=\prod_{j=0}^{n-1}\frac{J^{u}\left(f^{-j}(z)\right)}{J^{u}\left(f^{-j}(y)\right)}.$
The functions $\kappa_{n}$ converge pointwise to a function $\kappa$ (see
[13], Proposition 6(1)). Fix $z\in D^{-}_{\varepsilon}$ and a sufficiently
small $r>0$, and set
$U_{0}:=B^{u}(z,r):=B^{u}_{z}(z,r),\quad\widetilde{U}_{n}:=f(U_{n-1}),\quad
U_{n}:=\widetilde{U}_{n}\setminus N^{+}.$
Further set
$\widetilde{C}_{0}=1\quad\textrm{and}\quad\widetilde{C}_{n}=\widetilde{C}_{n}(z)=\left(\prod_{k=0}^{n-1}J^{u}\left(f^{k}(z)\right)\right)^{-1}.$
For $n\geq 0$, define the measures
$\widetilde{\nu}_{n}=\widetilde{\nu}_{n}(z)$ on $U_{n}$ by
$d\widetilde{\nu}_{n}(y)=\widetilde{C}_{n}(z)\kappa\left(f^{n}(z),y\right)d\nu^{u}_{z}(y),$
and let $\nu_{n}$ be a measure on $\Lambda$ defined by
$\nu_{n}(A)=\widetilde{\nu}_{n}(A\cap U_{n})$ for any Borel
$A\subseteq\Lambda$. Under moderate assumptions, we have that
$\nu_{n}=f^{n}_{*}\nu_{0}$ (see [13], Proposition 8). Consider the sequence of
measures
$\mu_{n}=\frac{1}{n}\sum_{k=0}^{n-1}\nu_{k}.$
These measures admit a subsequence that converges in the weak topology to an
$f$-invariant SRB measure (_a priori_ depending on the reference point $z\in
D^{-}_{\varepsilon}$) on $\Lambda$, proving existence of SRB measures.
## 3\. Main result and proof
In this section, we give conditions under which a singular hyperbolic
attractor admits at most finitely many ergodic SRB measures, as well as
conditions under which the SRB measure is unique.
### 3.1. Main result
We begin by reviewing the assumptions we make on our map $f:K\setminus N\to
K$. Assumption (SH1) is the basic setting, and assumption (SH2) concerns the
regularity of the map. We now complete the assumptions of our setting. Below,
assumption (SH3) concerns the structure of the singular set $N$; assumptions
(SH4) - (SH6) concern the smoothness and hyperbolicity of $f$; assumption
(SH7) is a further technical assumption; and assumption (SH8) is a condition
on the regularity of the stable foliation.
1. (SH3)
The singular set $N$ is the disjoint union of finitely many embedded
submanifolds $N_{i}$ with boundary, of dimension equal to the codimension of
the unstable foliation $W^{u}$.
2. (SH4)
$f$ is continuous and differentiable in $K\setminus N^{+}$.
3. (SH5)
$f$ possesses two families of stable and unstable cones $C^{s}(z)$,
$C^{u}(z)$, for $z\in K\setminus N^{+}$.
4. (SH6)
The assignment $z\mapsto C^{u}(z)$ has a continuous extension in each
$\overline{K}_{i}\subset K$ (where $K_{i}$ are the connected components of
$K\setminus N^{+}$), and there exists $\alpha>0$ such that for $z\in
N\setminus\partial K$ and $v\in C^{u}(z)$, $w\in T_{z}N$, we have
$\angle(v,w)\geq\alpha$.
5. (SH7)
$f^{j}(N^{-})\cap N^{+}=\emptyset$ for $0\leq j<k$, where $\lambda^{k}>2$ and
$\lambda=\inf_{x\in K\setminus N^{+}}\left\lVert df_{x}\right\rVert>1.$
Two general definitions are required before we state assumption (SH8). Let $M$
be a Riemannian manifold, $X\subset M$ a Borel subset, and $\mu$ a measure on
$M$ with $\mu(X)>0$. A partition $\xi$ on $X$ is a _smoothly measurable
foliation_ if for $\mu$-almost every $x\in X$, the element $\xi_{x}$ of $\xi$
containing $x$ has the form $\xi_{x}=W(x)\cap X$, where $W(x)$ is an immersed
$C^{1}$ submanifold in $M$ passing through $X$. Observe, in particular, that
the foliations $W^{s}$ and $W^{u}$ on $K\setminus N$ as above define smoothly
measurable foliations on the attractor $\Lambda$.
Given $x\in M$ and $r>0$, let $B_{r}(x)$ denote the ball of radius $r$ in $M$.
Consider $X\subset M$ a Borel subset, $\xi$ a smoothly measurable foliation on
$X$, and $x\in X$ for which $\xi_{x}=W(x)\cap X$ for some $C^{1}$ submanifold
$W(x)$. There is a radius $r=r(x)$ for which the submanifold $W(x)\cap
B_{r}(x)$ is the image of a $C^{1}$ function $\varphi_{x}:U_{x}\to M$, where
$U_{x}\subset T_{x}W(x)$ is a neighborhood of $0$, $\varphi_{x}(0)=x$, and
$d(\varphi_{x})_{0}(T_{0}T_{x}W(x))=T_{x}W(x)$. We will say $\xi$ is _locally
continuous_ if for $\mu$-almost every $x\in X$, the assignments
$y\mapsto\varphi_{y}$ and $(y,u)\mapsto d(\varphi_{y})_{u}$ are continuous
over $y\in X\cap B_{r(x)}(x)$ (note $\varphi_{y}$ is defined $\mu$-almost
everywhere in $X\cap B_{r(x)}(x)$), where $u\in U_{y}\subset T_{y}W(y)$.
1. (SH8)
The stable foliation $W^{s}$ is locally continuous.
We now state our main result.
###### Theorem 3.1.
Let $\Lambda$ be a singular hyperbolic attractor of a map $f:K\setminus N\to
K$ satisfying conditions (SH1) - (SH7).
1. (a)
There exist at most finitely many ergodic SRB measures of the map
$f:\Lambda\to\Lambda$.
2. (b)
If $f$ satisfies condition (SH8), then there exist a collection of pairwise
disjoint open sets $U_{1},\ldots,U_{n}$, open in $\Lambda$, for which
$\overline{\mathop{\bigcup}_{i=1}^{n}U_{i}}=\Lambda$, and each of which
supports exactly one ergodic SRB measure. In particular, if $f$ satisfies
condition (SH8), then $f|_{\Lambda}:\Lambda\to\Lambda$ is topologically
transitive if and only if $f$ admits exactly one ergodic SRB measure.
###### Remark 1.
The existence of an ergodic SRB measure can be proven under more general
assumptions. In particular, in Theorem 1 of [13], a regularity condition is
given under which a singular hyperbolic attractor admits at least one ergodic
SRB measure, and in Theorem 14 of [13], it is shown that conditions (SH3) -
(SH7) imply this regularity condition.
The remainder of this section is devoted to proving this result. In Section
3.2, we prove existence of SRB measures and show that a singular hyperbolic
attractor is charged by at most finitely many ergodic SRB measures. In Section
3.3, we show that $\Lambda$ admits a kind of measurable partition by open
sets, each element of which is given full measure by exactly one SRB measure,
and thus the SRB measure is unique if (SH8) is satisfied and $f|_{\Lambda}$ is
topologically transitive.
### 3.2. Existence of finitely many SRB measures
The existence of SRB measures for singular hyperbolic attractors follows from
the following result, which follows from Theorem 1 and Theorem 14 in [13].
###### Proposition 5.
Suppose $f:K\setminus N\to K$ is a singular hyperbolic map satisfying
conditions (SH1) - (SH7). Then there exists an SRB measure for $f$.
We now show that there are only finitely many ergodic SRB measures. We have
defined $W^{u}_{\delta}(x)$ to be the image under $\exp_{x}:T_{x}M\to M$ of
the graph of a function $\psi:B^{u}_{T}(\delta,x)\to E^{s}_{x}$, where
$B^{u}_{T}(\delta,x)\subset E^{u}_{x}$ is the open ball of radius $\delta$
centered at $0\in E^{u}_{x}$, provided that such a function $\psi$ and such a
number $\delta>0$ exist. For each $x\in M$, such a $\psi$ and $\delta$ do
exist. However, we may also fix $\delta>0$ and define the set $B^{-}_{\delta}$
to be the set of all $x\in D$ for which there is some $\varepsilon>0$, some
$l\in\mathbb{N}$, and some $y\in D^{-}_{\varepsilon,l}$ so that
$W^{u}_{\delta}(y)$ exists and contains $x$. Note that $x\not\in
B^{-}_{\delta}$ if, for example, $W^{u}(x)$ is not the image under $\exp_{x}$
of a smooth graph in a $\delta$-neighborhood of $0\in T_{x}M$.
###### Proposition 6.
For sufficiently small $\varepsilon>0$, we have that
$D_{\varepsilon}^{0}\neq\emptyset$.
###### Proof.
This follows from Theorem 14 and Proposition 3 of [13]. ∎
Using this proposition, assume $\varepsilon>0$ in $D^{\pm}_{\varepsilon}$ is
chosen so that $D^{0}_{\varepsilon}\neq\emptyset$. This implies, in
particular, that $D^{-}_{\varepsilon}$ has full measure with respect to any
invariant measure.
Observe that if $\delta_{1}<\delta_{2}$, then $B_{\delta_{2}}^{-}\subseteq
B_{\delta_{1}}^{-}$. Indeed, if $x\in B_{\delta_{2}}^{-}$, then $x\in
W^{u}_{\delta_{2}}(y)$ for some $y\in D_{\varepsilon,l}^{-}$. Since $f$ is
regular, $D^{-}_{\varepsilon}$ has full measure, so $D^{-}_{\varepsilon}\cap
W^{u}_{\delta_{2}}(y)$ has full conditional measure. So we can choose
$y^{\prime}\in W^{u}_{\delta_{2}}(y)$ within distance $\delta_{1}$ of $x$
along $W^{u}_{\delta_{2}}(y)$, giving us $x\in
W^{u}_{\delta_{1}}(y^{\prime})$, so $x\in B^{-}_{\delta_{1}}$.
In particular, if $\mu$ is an ergodic SRB measure that charges
$B^{-}_{\delta_{1}}$, then since $B^{-}_{\delta_{2}}\subset
B^{-}_{\delta_{1}}$, either $B_{\delta_{2}}^{-}$ is charged by $\mu$, or has
$\mu$-measure 0. In the latter case, if there is an ergodic SRB measure
$\mu_{1}$ that charges $B^{-}_{\delta_{2}}$, then both $\mu_{1}$ and $\mu$ are
ergodic SRB measures charging $B^{-}_{\delta_{1}}$. To summarize, if
$\delta_{2}>\delta_{1}$, then $B^{-}_{\delta_{1}}$ is charged by at least as
many ergodic SRB measures as $B^{-}_{\delta_{2}}$.
Our proof of Theorem 3.1 (a) has two major components. The first is to show
that there is a $\delta_{0}>0$ for which $B_{\delta_{0}}^{-}$ is charged by
every ergodic SRB measure, and so $B_{\delta}^{-}$ and $B_{\delta_{0}}^{-}$
are charged by the same measures for every $0<\delta<\delta_{0}$. The second
is to show that every set $B_{\delta}^{-}$, and in particular
$B_{\delta_{0}}^{-}$, is charged by only finitely many ergodic SRB measures.
###### Lemma 3.2.
Suppose the singular hyperbolic map $f:K\setminus N\to K$ satisfies (SH1) -
(SH7). There is a $\delta_{0}>0$ such that for every ergodic SRB measure $\mu$
on $\Lambda$, $\mu(B_{\delta_{0}}^{-})>0$.
###### Proof.
Assumption (SH3) states that $N$ is composed of finitely many closed
submanifolds with boundary. Call these submanifolds $N_{i}$. Observe that if
$U$ is a neighborhood of $N$, then $f(U)$ is a neighborhood of $N^{-}$.
Because $f^{j}(N^{-})\cap N=\emptyset$ for $1\leq j<k$ with $\lambda^{k}>2$ by
(SH7), and because $N^{-}$ and its images are closed (as is $N$), there is a
radius $Q>0$ so that the open neighborhoods $B_{Q}(N_{i})$ of each $N_{i}$ of
radius $Q$ are pairwise disjoint and whose first $k$ images do not intersect
$N$. Let $\delta_{0}<Q$.
Fix an ergodic SRB measure $\mu$. Our strategy will be to construct a
“rectangle” $R\subset\Lambda$, formed by the local hyperbolic product
structure of $\Lambda$, with $\mu(R)>0$. Applying the map $f$ to $R$, the
unstable leaves composing $R$ will eventually grow sufficiently large so that
a certain iterate of $R$ lies inside $B^{-}_{\delta_{0}}$. Since $\mu$ is
$f$-invariant and $\mu(R)>0$, this will show that $\mu(B^{-}_{\delta_{0}})>0$
for any ergodic SRB measure $\mu$.
Proposition 6 implies $\mu(D^{-}_{\varepsilon})>0$. Therefore there is a
generic point $x\in D^{-}_{\varepsilon}$, which implies there is an $r>0$ and
an $l\geq 1$ for which $\mu(D^{-}_{\varepsilon,l}\cap B_{r}(x))>0$. By virtue
of the hyperbolic local product structure of $D$ (see Proposition 3), there is
an $\alpha>0$ and a $\beta>0$ for which $W^{s}_{\beta}(x)\subset B_{r}(x)$,
and the local unstable leaves $W^{u}_{\alpha}(y)\subset B_{r}(x)$ are well-
defined for every $y\in D^{-}_{\varepsilon,l}\cap W^{s}_{\beta}(x)$.
Furthermore, on a subset $A\subset W^{s}_{\beta}(x)$ of full conditional
measure, the leaves $W^{u}_{\alpha}(y)$ have positive conditional measure for
every $y\in A$. Therefore, the set
$R_{1}=\mathop{\bigcup}_{y\in A}W^{u}_{\alpha}(y)$
has positive $\mu$-measure and is contained in $B_{r}(x)$.
Suppose $\alpha\geq\delta_{0}$. Let $y_{0}\in A$. Then $W^{u}_{\alpha}(y_{0})$
is the union of finitely many $W^{u}_{\delta_{0}}(y_{i})$, with $y_{i}\in
W^{u}_{\alpha}(y_{0})\cap D^{-}_{\varepsilon}$. So for each $z\in
W^{u}_{\alpha}(y_{0})$, we may take $y=y_{i}$ in the definition of
$B^{-}_{\delta_{0}}$ for one of the $y_{i}$’s, giving $z\in
B^{-}_{\delta_{0}}$. Hence $W^{u}_{\alpha}(y_{0})\subset B^{-}_{\delta_{0}}$ for
every $y_{0}\in A$. In particular, $R_{1}\subset B^{-}_{\delta_{0}}$, so since
$R_{1}$ has positive measure, $\mu(B^{-}_{\delta_{0}})>0$.
Now suppose $\alpha<\delta_{0}$, and let $x_{1}=x$. By compactness of $K$,
there is a time $j_{1}\geq 1$ at which $f^{j_{1}}(W^{u}_{\alpha}(x_{1}))$
intersects $N$ (and, by assumption, it does so transversally). For $1\leq
j\leq j_{1}$, the image $f^{j}(W^{u}_{\alpha}(x_{1}))$ is a local unstable
leaf of size $\alpha_{j}\geq\lambda^{j}\alpha$. If
$\lambda^{j}\alpha\geq\delta_{0}$ for some $j\leq j_{1}$, then using the same
arguments as in the previous paragraph, $f^{j}(W^{u}_{\alpha}(y_{0}))\subset
B^{-}_{\delta_{0}}$ for almost every $y_{0}\in W^{s}_{\beta}(x_{1})$, and more
generally, $f^{j}(R_{1})\subseteq B^{-}_{\delta_{0}}$, yielding the desired
result.
On the other hand, if $\alpha_{j}<\delta_{0}$ for $1\leq j\leq j_{1}$, then
the leaf $f^{j_{1}}(W^{u}_{\alpha}(x_{1}))$ contains a ball in
$W_{\mathrm{loc}}^{u}(f^{j_{1}}(x_{1}))$ of diameter
$\alpha^{\prime}\geq\frac{1}{2}\lambda^{j_{1}}\alpha$ that does not intersect
$N$. Because $\mu$ is an SRB measure, the conditional measure on this ball is
absolutely continuous with respect to the Riemannian measure—in particular,
the set of generic points in this leaf has full $\mu$-conditional measure. So
choose a generic point $x_{2}$ in this ball. As with $x_{1}$, we can use the
hyperbolic product structure induced by $f$ to create a rectangle $R_{2}$ of
positive $\mu$-measure defined by
$R_{2}=\mathop{\bigcup}_{y\in A_{2}}W^{u}_{\alpha^{\prime}}(y),$
where $A_{2}\subset W^{s}_{\beta_{2}}(x_{2})$ is a set of full measure for
some $\beta_{2}>0$. Relabel $\alpha_{j_{1}}=\alpha^{\prime}$ to be the
diameter of the local unstable leaf containing $x_{2}$ and not intersecting
$N$. This leaf (and in fact all of $R_{2}$) lies inside
$B_{Q}(N)=\mathop{\bigcup}_{i}B_{Q}(N_{i})$, and by our construction of
$B_{Q}(N)$, the first $k$ images of this leaf under $f$ will not intersect
$N$. Therefore, either eventually one of the images of this leaf is of
diameter $>\delta_{0}$, or this leaf intersects $N$. In the former case, as
before, we have a rectangle of positive $\mu$-measure lying inside of
$B^{-}_{\delta_{0}}$. In the latter case, the leaf is of diameter
$\geq\lambda^{k}\alpha^{\prime}\geq\frac{1}{2}\lambda^{k+j_{1}}\alpha$. Again,
there is a ball in this leaf of diameter
$\geq\frac{1}{2}\lambda^{k}\alpha^{\prime}\geq\frac{1}{4}\lambda^{k+j_{1}}\alpha$
that does not intersect $N$. As before, we may find a generic point $x_{3}$ in
this ball, and continue iterating this local leaf and resulting positive-
measure rectangle.
Proceeding in this way, we construct a sequence of local unstable leaves
$W^{u}_{\alpha_{j}}(x_{m})$, $j=j_{1}+\cdots+j_{m-1}+l$, where each leaf image
intersects $N$ at time $j_{1}+\cdots+j_{m}$, and thus admits an open ball of
sufficient size and not intersecting $N$. By construction, $j_{i+1}-j_{i}\geq
k$ for each $i$, so each leaf has
$\alpha_{j}=\alpha_{j_{1}+\ldots+j_{m-1}+l}\geq\frac{\lambda^{j}}{2^{m-1}}\alpha\geq\left(\frac{\lambda^{k}}{2}\right)^{m}\frac{\lambda^{j_{1}}}{2}\alpha.$
As $m$ increases, we eventually get
$\left(\lambda^{k}/2\right)^{m}\lambda^{j_{1}}\alpha/2\geq Q>\delta_{0}$ by
(SH7). Once this occurs, we have a rectangle of positive measure contained in
$B^{-}_{\delta_{0}}$. ∎
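The bookkeeping in this proof can be caricatured numerically: each time the growing unstable leaf is cut by $N$, at least half of it survives, and between consecutive cuts its length is multiplied by at least $\lambda^{k}>2$, so the worst-case lower bound still grows by the factor $\lambda^{k}/2>1$ per cut and eventually exceeds $Q$. A toy illustration (not part of the proof; all names are ours):

```python
def cuts_until_length(alpha, lam, k, Q, max_cuts=10_000):
    """Number of cuts by the singular set before the worst-case lower
    bound on the unstable-leaf length exceeds Q.  Between consecutive
    cuts the length is multiplied by at least lam**k (with lam**k > 2
    by (SH7)), and each cut retains a piece of at least half the length,
    so the bound grows by the factor lam**k / 2 > 1 per cut."""
    assert lam ** k > 2.0
    length, m = alpha, 0
    while length <= Q:
        length *= lam ** k / 2.0
        m += 1
        if m > max_cuts:
            return None
    return m

# Starting from a tiny leaf, the bound escapes past Q after finitely many cuts:
print(cuts_until_length(alpha=0.01, lam=2.5, k=1, Q=0.5))  # -> 18
```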
The next lemma shows that $B_{\delta_{0}}^{-}$ is charged by finitely many
ergodic SRB measures. Since $B_{\delta_{0}}^{-}$ is charged by every ergodic
SRB measure, this will prove Theorem 3.1(a).
###### Lemma 3.3.
For sufficiently small $\delta>0$, the set $B_{\delta}^{-}$ admits at most
finitely many ergodic SRB measures. In particular, there is a subset
$\Lambda^{0}\subset B_{\delta}^{-}$ that has full measure with respect to any
invariant measure, and a finite partition of $\Lambda^{0}$ each of whose
elements is charged by exactly one ergodic SRB measure.
###### Proof.
The proof is an adaptation of a Hopf argument. Define the subsets
$\Lambda^{+}\subset K$ and $\Lambda^{-}\subset\Lambda$ respectively to be the
set of points where, for every $\varphi\in C^{0}$, the limits
$\varphi_{+}(x)=\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\varphi\left(f^{k}(x)\right)\quad\textrm{and}\quad\varphi_{-}(x)=\lim_{n\to\infty}\frac{1}{n}\sum_{k=0}^{n-1}\varphi\left(f^{-k}(x)\right)$
exist. By the Birkhoff ergodic theorem, both $\Lambda^{+}$ and $\Lambda^{-}$
have full measure with respect to any invariant measure. Observe that if
$x\in\Lambda^{-}$ and $y\in W^{u}_{\alpha}(x)$ for $\alpha>0$, then
$\varphi_{-}(y)=\varphi_{-}(x)$, so $y\in\Lambda^{-}$, and so
$W^{u}_{\delta}(x)\subseteq\Lambda^{-}$ for every $x\in\Lambda^{-}$.
Similarly, $W^{s}_{\alpha}(x)\subseteq\Lambda^{+}$ for $x\in\Lambda^{+}$,
$\alpha>0$.
Recall that a point $x\in K$ lies in $B_{\delta}^{-}$ if and only if there is
a $y=y(x)\in D^{-}_{\varepsilon,l}$ for some $\varepsilon,l$ for which $x\in
W^{u}_{\delta}(y)$. So let $\Lambda^{0}$ be the set of points $x\in
B_{\delta}^{-}$ for which there is a subset $A\subseteq W^{u}_{\delta}(y)$ of
full conditional measure (with respect to Lebesgue) such that
$A\subseteq\Lambda^{+}$ and $\varphi_{+}|_{A}$ is constant for all $\varphi\in
C^{0}(K)$.
We make the following claims:
* •
the set $\Lambda^{0}$ has full measure in $B^{-}_{\delta}$ with respect to any
invariant measure, and
* •
the set $\Lambda^{0}$ is closed.
Granting these claims for now, using the notation in the above paragraph, for
$x\in\Lambda^{0}$, let $\varphi_{+}(W^{u}_{\mathrm{loc}}(x))=\varphi_{+}(z)$
for $z\in A\subseteq W^{u}_{\delta}(y)$, where $y=y(x)$ is as in the
definition of $B_{\delta}^{-}$. We will say that $x,z\in\Lambda^{0}$ are
_equivalent_ , and write $x\sim z$, if
$\varphi_{+}(W^{u}_{\mathrm{loc}}(x))=\varphi_{+}(W^{u}_{\mathrm{loc}}(z))$
for every continuous $\varphi:K\to\mathbb{R}$.
Note that the stable foliation $W^{s}$ is absolutely continuous (see (2) and
Proposition 4 for the definition of $W^{s}$ and absolute continuity of the
foliation). So if $x\in\Lambda^{0}$ and $z\in W^{s}(x)\cap\Lambda^{0}$, then
$\varphi_{+}(x)=\varphi_{+}(z)$, and there is a set $A^{\prime}$ of full
measure in $W^{u}_{\delta}(y(z))$ on which $\varphi_{+}$ is constant and equal
to $\varphi_{+}(W^{u}_{\mathrm{loc}}(x))$. So $x\sim z$ whenever $z\in
W^{s}(x)\cap\Lambda^{0}$.
Suppose $\Lambda^{0}_{1}$ is an equivalence class, and let
$x\in\Lambda_{1}^{0}$. We claim there is an $\varepsilon>0$ so that if
$y\in\Lambda^{0}$ lies in the $\varepsilon$-ball centered at $x$, then $x\sim
y$. It will follow that each equivalence class is open in $\Lambda^{0}$, and
therefore also closed in $\Lambda^{0}$ since the equivalence classes form a
partition of $\Lambda^{0}$ and $\Lambda^{0}$ itself is closed.
To prove this claim, by virtue of Proposition 7 in [13], there is an
$\varepsilon>0$ for which $B_{\varepsilon}(x)$ has _local hyperbolic product
structure:_ for $z\in W^{u}_{\varepsilon}(x)$ and $y\in
W^{s}_{\varepsilon}(x)$, the intersection $W^{u}_{\varepsilon}(y)\cap
W^{s}_{\varepsilon}(z)$ contains exactly one point, which we denote $[y,z]$.
Let $y\in B_{\varepsilon}(x)\cap\Lambda^{0}$, let $B^{u}_{\varepsilon}(x)$
denote the ball in the local unstable manifold $W^{u}_{\mathrm{loc}}(x)$
centered at $x$ of size $\varepsilon$, and let
$\theta:B^{u}_{\varepsilon}(x)\to W^{u}_{\varepsilon}(y)$ denote the holonomy
map $\theta(z)=[y,z]$.
To show $x\sim y$, let $A\subseteq B_{\varepsilon}^{u}(x)$ be the set of
points $z$ for which $\varphi_{+}(z)=\varphi_{+}(x)$ for every continuous
function $\varphi$. By definition of the set $\Lambda^{0}$, the set $A$ has
full measure in $B_{\varepsilon}^{u}(x)$. By absolute continuity of the stable
foliation, $\theta(A)$ has full measure in
$\theta(B^{u}_{\varepsilon}(x))\subset W^{u}_{\varepsilon}(y)$. Since
$\varphi_{+}$ is constant on stable leaves,
$\varphi_{+}\circ\theta=\varphi_{+}$ for every continuous $\varphi$. Therefore
$\varphi_{+}(z_{1})=\varphi_{+}(x)$ for almost every
$z_{1}\in\theta(B^{u}_{\varepsilon}(x))$. Again, by definition of
$\Lambda^{0}$, $\varphi_{+}(z_{1})=\varphi_{+}(y)$ for every continuous
$\varphi$ and almost every $z_{1}\in\theta(B_{\varepsilon}^{u}(x))$. Therefore
$\varphi_{+}(x)=\varphi_{+}(y)$, and so $x\sim y$.
It follows from these arguments that each equivalence class is open in
$\Lambda^{0}$, and hence is also closed in $\Lambda^{0}$. By closedness of
$\Lambda^{0}$ in $K$, there is an $\varepsilon>0$ such that each pair of
equivalence classes is separated by a distance of at least $\varepsilon$. By
compactness of $K$, it follows that there may only be finitely many such
equivalence classes. Because an ergodic SRB measure must charge exactly one of
these equivalence classes, there may only be finitely many SRB measures.
It remains only to prove our previous two claims. Let $\widehat{\Lambda}$
denote those points $x\in\Lambda^{-}$ such that
$\varphi_{-}(x)=\varphi_{+}(x)$ for every continuous function $\varphi$. By
the Birkhoff ergodic theorem, $\widehat{\Lambda}$ has full measure with
respect to any invariant measure. Further, let $\widehat{\Lambda}^{0}$ denote
the set of points $x\in\widehat{\Lambda}$ such that there is a set $A\subset
W^{u}_{\gamma}(x)$ of full conditional measure such that
$A\subseteq\Lambda^{+}$ and $\varphi_{-}(z)=\varphi_{+}(z)$ for every
continuous function $\varphi$ and all points $z\in A$. Here,
$W^{u}_{\gamma}(x)$ is the connected component of $W^{u}(x)$ intersecting
$\widehat{\Lambda}$ and containing $x$ (and $W^{s}_{\gamma}(x)$ is defined
similarly). Since $\varphi_{-}(z)$ takes the same value for all $z\in
W^{u}_{\varepsilon}(x)$, $\varphi_{-}|_{A}=\varphi_{+}|_{A}$ is constant.
For $x\in\widehat{\Lambda}^{0}$, the set $\widehat{\Lambda}^{0}$ contains the
union over $y\in W^{s}_{\gamma}(x)\cap B^{-}_{\delta}$ of manifolds
$W^{u}_{\varepsilon}(y)$ that contain a subset of full conditional measure (as
this subset lies in $\widehat{\Lambda}$, which has full measure). Because
$W^{s}_{\gamma}(x)$ has full conditional measure in $\widehat{\Lambda}$, it
follows that this union also has full measure, from which it follows that
$\widehat{\Lambda}^{0}$ has full measure.
If a point $x$ has a negative semitrajectory that enters
$\widehat{\Lambda}^{0}$ infinitely often, then one can show $x\in\Lambda^{0}$.
Since $\widehat{\Lambda}^{0}$ has full measure, the set of points whose
negative semitrajectories enter $\widehat{\Lambda}^{0}$ also has full measure,
so $\Lambda^{0}$ has full measure. This proves the first of the two previous
claims.
To show that $\Lambda^{0}$ is closed, suppose $x$ is the limit point of a
sequence $(x_{n})$ in $\Lambda^{0}$. Since the stable foliation is absolutely
continuous and $W^{u}_{\varepsilon}(x_{n})$ converges to
$W^{u}_{\varepsilon}(x)$ for $\varepsilon>0$ small, we can find a set
$A\subset W^{u}_{\delta}(y)$ of full conditional measure, where $y=y(x)$ is as
in the definition of $B_{\delta}^{-}$, on which $\varphi_{+}$ is constant
for all continuous functions $\varphi$. Therefore $x\in\Lambda^{0}$. ∎
###### Proof of Theorem 3.1(a).
Note $B^{-}:=\mathop{\bigcup}_{\delta>0}B^{-}_{\delta}$ is invariant under
$f$, and as we showed in Lemma 3.2, $\mu(B^{-})>0$ for every ergodic SRB
measure $\mu$. Therefore $\mu(B^{-})=1$ for every ergodic SRB measure $\mu$,
and since $\mu(D)=1$ as well, every ergodic SRB measure gives full measure to
$D\cap B^{-}$. If there were infinitely many ergodic SRB measures, then by
Lemma 3.2, there would be a $\delta>0$ for which $B^{-}_{\delta}$ is charged
by infinitely many SRB measures. But this contradicts Lemma 3.3. ∎
### 3.3. Uniqueness and topological transitivity
Generally speaking, a map satisfying (SH1) - (SH7) may admit more than one
ergodic SRB measure (see examples in Section 4). However, under moderate
assumptions on the regularity of the stable foliation $W^{s}$, one can show
that the components of topological transitivity of
$f|_{\Lambda}:\Lambda\to\Lambda$ are in correspondence with the number of
distinct ergodic SRB measures. We formalize this idea in this section.
Given a metric space $X$, we will call a Borel measure $\mu$ on $X$ _locally
positive_ if $\mu(U)>0$ for every nonempty open subset $U\subset X$. A
collection $\\{U_{i}\\}_{i\in I}$ of open subsets $U_{i}\subset X$, together
with a collection of Borel measures $\\{\mu_{j}\\}_{j\in J}$ on $X$, with
$J\subset I$, is an _open partition by measures_ if:
1. (P1)
the open sets $\\{U_{i}\\}_{i\in I}$ are pairwise disjoint;
2. (P2)
$\overline{\mathop{\bigcup}_{i\in I}U_{i}}=X$;
3. (P3)
$\mu_{j}\left(X\setminus U_{j}\right)=0$ for all $j\in J$; and
4. (P4)
$\mu_{j}|_{U_{j}}$ is locally positive for all $j$.
By local positivity, we may assume $I\setminus J$ has a single element, which
we denote $0\in I$. If the open sets $\\{U_{i}\\}_{i\in I}$ and the measures
$\\{\mu_{j}\\}_{j\in J}$ are chosen so that $I=J$, we say that the open
partition by measures is _complete_. If $f:X\to X$ is continuous, and the
measures $\mu_{j}$ are ergodic probability measures, we call an open partition
by the ergodic measures $\mu_{j}$ an _open ergodic partition_ if in addition
to (P1) - (P4) above, we also have
1. (P5)
$\mu(U_{0})=0$ for any ergodic measure $\mu$.
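A simple illustration of these definitions (ours, not from the source): let $X=S^{1}\sqcup S^{1}$ be two disjoint circles, and let $f$ act on each copy of $S^{1}$ as the doubling map $z\mapsto z^{2}$, which is continuous, open, and ergodic with respect to the normalized Lebesgue measure on the circle. Taking $U_{1}$, $U_{2}$ to be the two circles and $\mu_{1}$, $\mu_{2}$ the normalized Lebesgue measures on them, conditions (P1) - (P4) hold immediately, $I=J$ so the partition is complete, and (P5) is vacuous. Lemma 3.4 below then recovers the evident fact that $f$ is topologically transitive on each circle but not on $X$.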
###### Lemma 3.4.
If $X$ is a complete metric space, and $f:X\to X$ is a continuous and open map
admitting an open ergodic partition by the open sets $\\{U_{i}\\}_{i\in I}$
and the ergodic $f$-invariant Borel measures $\\{\mu_{j}\\}_{j\in
I\setminus\\{0\\}}$, then $\overline{U}_{i}$ is $f$-invariant for each $i$. If
the open ergodic partition is complete, then $f|_{\overline{U}_{i}}$ is
topologically transitive for each $i\in I$.
###### Proof.
By (P1) - (P4) and the fact that each measure $\mu_{j}$ is $f$-invariant,
$U_{i}$ is invariant for every $i$, as is $\overline{U}_{i}$. Now suppose the
open ergodic partition is complete. If $V,V^{\prime}\subset\overline{U}_{i}$
are open, then $\mathop{\bigcup}_{n\in\mathbb{Z}}f^{n}(V)$ is open and
$f$-invariant. Thus if $f^{n}(V)\cap V^{\prime}=\emptyset$ for every $n\geq
0$, ergodicity of $\mu_{i}$ implies that either $\mu_{i}(V^{\prime})=0$ or
$\mu_{i}(V)=0$. By local positivity, either $V^{\prime}=\emptyset$ or
$V=\emptyset$. ∎
Theorem 3.1(b) is now an immediate consequence of the following lemma.
###### Lemma 3.5.
If $f:K\setminus N\to K$ is a singular hyperbolic map with attractor
$\Lambda\subset K$ satisfying condition (SH8) in addition to (SH1) - (SH7),
then $\Lambda$ admits a finite and complete open ergodic partition, and the
measures defining this partition are SRB measures. If in addition
$f|_{\Lambda}:\Lambda\to\Lambda$ is topologically transitive, then the open
ergodic partition consists of a single open set and a single measure, and thus
in particular, $f$ admits a unique ergodic SRB measure.
###### Proof.
The fact that $\Lambda$ admits a complete open ergodic partition by SRB
measures follows from Theorems 4 and 6 of [13]. Finiteness of this partition
follows from Theorem 3.1(a). If $f|_{\Lambda}$ is topologically transitive, by
Lemma 3.4, there is at most one ergodic SRB measure; existence of this measure
follows from Proposition 5. ∎
## 4\. Examples
Maps described in Theorem 3.1 do exist, and as the following non-example will
demonstrate, the hypotheses of this result are in general necessary. Examples
of singular hyperbolic attractors include Lorenz-type attractors; however,
this class of singular hyperbolic maps includes cases in which the singular
set has countably many components and which admit countably many ergodic SRB
measures.
### 4.1. Lorenz-type attractors
To begin, we describe the class of singular hyperbolic attractors generated by
Lorenz-type maps of the unit square. The definition of these maps is as
follows. Let $I=(-1,1)$, $K=I^{2}$, and
$-1=a_{0}<a_{1}<\cdots<a_{m}<a_{m+1}=1$. Define the rectangles
$P_{i}=I\times(a_{i},a_{i+1})$ for $0\leq i\leq m$, and
$N=I\times\\{a_{0},\ldots,a_{m+1}\\}$. Let $f:K\setminus N\to K$ be an
injective map given by
$f(x,y)=\big{(}\varphi(x,y),\,\psi(x,y)\big{)},\quad x,y\in I,$
where the functions $\varphi,\psi:K\to\mathbb{R}$ satisfy the following
conditions:
1. (L1)
$\varphi$ and $\psi$ are continuous in $\overline{P}_{i}$ for each $i$, and:
$\displaystyle\lim_{y\to a_{i}^{+}}\varphi(x,y)=\varphi_{i}^{+},$
$\displaystyle\quad\lim_{y\to a_{i}^{+}}\psi(x,y)=\psi_{i}^{+},$
$\displaystyle\lim_{y\to a_{i}^{-}}\varphi(x,y)=\varphi_{i}^{-},$
$\displaystyle\quad\lim_{y\to a_{i}^{-}}\psi(x,y)=\psi_{i}^{-},$
where $\varphi_{i}^{\pm}$, $\psi_{i}^{\pm}$ do not depend on $x$ for each $i$;
2. (L2)
$\psi$ and $\varphi$ have two continuous derivatives in $P_{i}$. Furthermore,
there are positive constants $B_{i}^{1}$, $B_{i}^{2}$, $C_{i}^{1}$, and
$C_{i}^{2}$; constants
$0\leq\nu_{i}^{1},\nu_{i}^{2},\nu_{i}^{3},\nu_{i}^{4}\leq 1$; a sufficiently
small constant $\gamma>0$; and continuous functions $A_{i}^{1}(x,y)$,
$A_{i}^{2}(x,y)$, $D_{i}^{1}(x,y)$, and $D_{i}^{2}(x,y)$ that tend to zero
uniformly over $x$ as $y\to a_{i}$ or $y\to a_{i+1}$; so that for $(x,y)\in
P_{i}$,
$\left.\begin{array}[]{l}d\varphi(x,y)=B_{i}^{1}(y-a_{i})^{-\nu_{i}^{1}}\left(1+A_{i}^{1}(x,y)\right)\\\
d\psi(x,y)=C_{i}^{1}(y-a_{i})^{-\nu_{i}^{2}}\left(1+D_{i}^{1}(x,y)\right)\end{array}\right\\}\quad\textrm{if
}y-a_{i}\leq\gamma;$
$\left.\begin{array}[]{l}d\varphi(x,y)=B_{i}^{2}(a_{i+1}-y)^{-\nu_{i}^{3}}\left(1+A_{i}^{2}(x,y)\right)\\\
d\psi(x,y)=C_{i}^{2}(a_{i+1}-y)^{-\nu_{i}^{4}}\left(1+D_{i}^{2}(x,y)\right)\end{array}\right\\}\quad\textrm{if
}a_{i+1}-y\leq\gamma;$
and additionally,
$\left\lVert\varphi_{xx}\right\rVert,\left\lVert\psi_{xx}\right\rVert,\left\lVert\varphi_{xy}\right\rVert,\left\lVert\psi_{xy}\right\rVert\leq\mathrm{const.}$;
3. (L3)
the following inequalities hold:
$\displaystyle\left\lVert\varphi_{x}\right\rVert,\left\lVert
\psi_{y}^{-1}\right\rVert$ $\displaystyle<1;$ $\displaystyle 1-\left\lVert
\psi_{y}^{-1}\right\rVert\left\lVert\varphi_{x}\right\rVert$
$\displaystyle>2\sqrt{\left\lVert\psi_{y}^{-1}\right\rVert\left\lVert
\psi_{x}\right\rVert\left\lVert\psi_{y}^{-1}\varphi_{y}\right\rVert};$
$\displaystyle\left\lVert\psi_{y}^{-1}\right\rVert\left\lVert\psi_{x}\right\rVert$
$\displaystyle<\left(1-\left\lVert\varphi_{x}\right\rVert\right)\left(1-\left\lVert
\psi_{y}^{-1}\right\rVert\right);$
where $\left\lVert\cdot\right\rVert=\max_{i}\sup_{(x,y)\in P_{i}}|\cdot|$.
This class of maps includes the _geometric Lorenz attractor_ , for which we
have $m=1$, $a_{1}=0$, and
$\displaystyle\varphi(x,y)$
$\displaystyle=\left(-B|y|^{\nu_{0}}+Bx\,\mathrm{sgn}(y)|y|^{\nu}+1\right)\mathrm{sgn}(y),$
$\displaystyle\psi(x,y)$
$\displaystyle=\left((1+A)|y|^{\nu_{0}}-A\right)\mathrm{sgn}(y),$
where $0<A<1$, $0<B<\frac{1}{2}$, $1/(1+A)<\nu_{0}<1$, and $\nu>1$.
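These formulas are concrete enough to experiment with numerically. A minimal sketch (the sample constants $A=0.5$, $B=0.4$, $\nu_{0}=0.75$, $\nu=1.5$ are our choices satisfying the stated constraints, not values from the source), verifying that the map sends $I^{2}\setminus N$ into $I^{2}$ and respects the symmetry $f(-x,-y)=-f(x,y)$:

```python
import math
import random

def lorenz_map(x, y, A=0.5, B=0.4, nu0=0.75, nu=1.5):
    """Geometric Lorenz map on I^2 = (-1,1)^2 with singular line y = 0.
    The defaults are our sample choices satisfying the constraints
    0 < A < 1, 0 < B < 1/2, 1/(1+A) < nu0 < 1, nu > 1."""
    if y == 0.0:
        raise ValueError("(x, 0) lies on the singular set N")
    s = math.copysign(1.0, y)
    ay = abs(y)
    phi = (-B * ay ** nu0 + B * x * s * ay ** nu + 1.0) * s
    psi = ((1.0 + A) * ay ** nu0 - A) * s
    return phi, psi

# Spot-check that the image of I^2 \ N lies in I^2 and that the map is odd.
random.seed(0)
for _ in range(10_000):
    x, y = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
    if y == 0.0:
        continue
    u, v = lorenz_map(x, y)
    assert -1.0 <= u <= 1.0 and -1.0 <= v <= 1.0
    u2, v2 = lorenz_map(-x, -y)
    assert abs(u2 + u) < 1e-12 and abs(v2 + v) < 1e-12
print("image contained in I^2; f(-x,-y) = -f(x,y)")
```

As $y\to 0^{\pm}$ the two branches accumulate on the points $(\pm 1,\mp A)$, which is the discontinuity along $N$ that condition (L1) records.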
###### Theorem 4.1.
Let $f:I^{2}\setminus N\to I^{2}$ be a Lorenz-type map for which one of the
following properties hold:
1. (a)
$\nu_{i}^{j}=0$, for $i=1,\ldots,m$ and $j=1,2,3,4$;
2. (b)
$\rho\left(f^{n}(\varphi_{i}^{\pm},\psi_{i}^{\pm}),N\right)\geq
C_{i}e^{-\gamma n}$ (where $C_{i}$ are constants independent of $n$ and
$\gamma>0$ is sufficiently small).
Then $f$ admits a singular hyperbolic attractor $\Lambda$, which supports at
most finitely many ergodic SRB measures.
###### Remark 2.
This result is also proven in [5], and is also a consequence of the arguments
in both [3] and [16]. We present an additional proof of this result using
Theorem 3.1 directly.
###### Proof.
Properties (SH1) and (SH4)-(SH7) are shown in [13], Theorem 17. The singular
set $N$ is the disjoint union of finitely many horizontal lines
$I\times\\{a_{i}\\}$, $i=1,\ldots,m$, so (SH3) is satisfied. The statement now
follows from Theorem 3.1. ∎
###### Remark 3.
Condition (SH4) is easy to verify for the geometric Lorenz attractor, as the
map $\varphi:I^{2}\setminus\left(I\times\\{0\\}\right)\to\mathbb{R}$ extends
to $\pm 1$ as $y\to 0$ from above or below. In particular, $N^{-}\cap
K=\emptyset$, since the continuations of $\varphi$ to $N$ map $N$ to the
boundary of $K$, so (SH4) is in fact trivial. Moreover, this is true for any
Lorenz-type attractor for which $\varphi_{i}^{\pm}=\pm 1$ or $\mp 1$.
More generally, (SH4) holds if (b) is satisfied in the statement of Theorem
4.1, provided $\gamma>0$ is sufficiently small.
### 4.2. The Belykh attractor
We consider a map $f:K\setminus N\to K$, where $K=[-1,1]^{2}$, and
$N=\\{(x,kx)\in K:-1<x<1\\}$
where $|k|<1$. (More generally one can consider $N=\\{(x,h(x)):-1<x<1\\}$ for
a continuous function $h$.) We then choose constants
$\lambda_{1},\lambda_{2},\mu_{1},\mu_{2}$ so that
$0<\lambda_{1},\mu_{1}<\frac{1}{2}\quad\textrm{and}\quad
1<\lambda_{2},\mu_{2}<\frac{2}{1+|k|},$
and define the map $T:K\setminus N\to\mathbb{R}^{2}$ by
$T(x,y)=\begin{cases}\left(\lambda_{1}(x-1)+1,\>\lambda_{2}(y-1)+1\right)&\textrm{if
}y>kx;\\\ \left(\mu_{1}(x+1)-1,\>\mu_{2}(y+1)-1\right)&\textrm{if
}y<kx.\end{cases}$
This map was first introduced in [7] as a model of phase synchronization
theory.
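The Belykh map is likewise easy to realize numerically. A sketch (the sample constants $k=0.3$, $\lambda_{1}=\mu_{1}=0.4$, $\lambda_{2}=\mu_{2}=1.5$ are our choices, which satisfy the stated inequalities since $1.5<2/1.3$):

```python
import random

def belykh(x, y, k=0.3, lam1=0.4, lam2=1.5, mu1=0.4, mu2=1.5):
    """Belykh map on K = [-1,1]^2 with singular line N = {y = kx}.
    The defaults are our sample choices satisfying
    0 < lam1, mu1 < 1/2 and 1 < lam2, mu2 < 2/(1+|k|)."""
    if y == k * x:
        raise ValueError("(x, y) lies on the singular set N")
    if y > k * x:
        return lam1 * (x - 1.0) + 1.0, lam2 * (y - 1.0) + 1.0
    return mu1 * (x + 1.0) - 1.0, mu2 * (y + 1.0) - 1.0

# T contracts horizontally toward x = +/-1, expands vertically, and its
# image stays inside K:
random.seed(0)
for _ in range(10_000):
    x, y = random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0)
    if y != 0.3 * x:
        u, v = belykh(x, y)
        assert -1.0 <= u <= 1.0 and -1.0 <= v <= 1.0
print("T maps K \\ N into K")
```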
###### Theorem 4.2.
Define $T:K\setminus N\to\mathbb{R}^{2}$ as above.
1. (a)
The map $T$ maps $K\setminus N$ into $K$, and satisfies conditions (SH1) and
(SH3)-(SH6).
2. (b)
For any choice of $\lambda_{2}>1$, and for all but countably many $\mu_{2}>1$
(the countably many exceptions depending on $\lambda_{2}$), there is a
$\delta>0$ so that $T$ satisfies (SH7) when $|k|<\delta$, and thus admits
finitely many ergodic SRB measures for such $k$.
###### Proof.
The first of the above assertions is proven in [13]. To prove the second,
first note that [13] shows that $T$ satisfying (SH3)-(SH6) admits at most
countably many SRB measures. In general, (SH7) may fail; however, we will
show that given $\lambda_{2}>1$, this can happen only for countably many
choices of $\mu_{2}>1$. To see this, note that when $k=0$, (SH7) fails if the
horizontal lines forming $N^{-}$ lie inside $f^{-n}(N)$ for some $n>0$, which
only happens for countably many choices of $\mu_{2}$. Given a pair
$\lambda_{2}$ and $\mu_{2}$ so that $T$ satisfies (SH7) with $k=0$, the line
segments forming $N$ and $f^{j}(N^{-})$ do not intersect for $0\leq j<n$,
where $n$ is chosen so that $\max(\lambda_{2},\mu_{2})^{n}>2$ (we write $n$
here to avoid confusion with the slope $k$ of $N$). Increasing $|k|$ will
rotate these line segments; by continuity, if the change in $|k|$ is
sufficiently small, these line segments will remain disjoint. So, for these
choices of $\lambda_{2}$, $\mu_{2}$, and $k$, $T$ will admit finitely many SRB
measures by Theorem 3.1.
∎
### 4.3. Necessity of assumptions
The singular set $N$ may in principle have countably many components. If this
is the case, then the attractor may admit infinitely many ergodic SRB
measures, as the following example illustrates.
Let $P_{k}=\left(-1,1\right)\times\left(2^{-k}-1,\,2^{-(k-1)}-1\right)$ for
$k\geq 0$. Then $K=I^{2}=\overline{\mathop{\bigcup}_{k}P_{k}}$, and
$N^{1}:=K\setminus\mathop{\bigcup}_{k}P_{k}$ is the countable union of line
segments $(-1,1)\times\\{2^{-k}-1\\}=:N_{k}^{1}$.
Let $f:I^{2}\setminus\big{(}(-1,1)\times\\{0\\}\big{)}\to I^{2}$ be the
geometric Lorenz map, and let
$f_{k}:P_{k}\setminus\left((-1,1)\times\left\\{\frac{2^{-k}+2^{-(k-1)}}{2}-1\right\\}\right)\to P_{k}$
be given by $f_{k}=h_{k}^{-1}\circ f\circ h_{k}$, where $h_{k}:P_{k}\to I^{2}$
is the conjugacy map given by
$h_{k}(x,y)=\left(x,2^{k+1}(y+1)-3\right).$
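The rescaling $h_{k}$ can be sanity-checked numerically: in the $y$-coordinate it is the affine map sending the vertical extent of $P_{k}$ onto $(-1,1)$ while fixing $x$. A sketch (function names are ours), computing the map directly from the endpoints of $P_{k}$:

```python
def p_k_extent(k):
    """Vertical extent (a, b) of P_k = (-1,1) x (2**(-k) - 1, 2**(-(k-1)) - 1)."""
    return 2.0 ** (-k) - 1.0, 2.0 ** (-(k - 1)) - 1.0

def h_k(k, x, y):
    """Conjugacy h_k : P_k -> I^2: fix x, and rescale y affinely so that the
    bottom edge of P_k goes to -1 and the top edge to +1."""
    a, b = p_k_extent(k)
    return x, 2.0 * (y - a) / (b - a) - 1.0

# Endpoint check for the first few rectangles:
for k in range(6):
    a, b = p_k_extent(k)
    assert h_k(k, 0.0, a) == (0.0, -1.0)
    assert h_k(k, 0.0, b) == (0.0, 1.0)
print("h_k sends each P_k onto I^2")
```

Note that the midline of $P_{k}$ is exactly $h_{k}^{-1}$ of the singular line $y=0$ of the geometric Lorenz map, which is why it is removed from the domain of $f_{k}$.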
Now denote
$\displaystyle N$
$\displaystyle=I^{2}\setminus\mathop{\bigcup}_{k\geq 0}\left(P_{k}\setminus\left((-1,1)\times\left\\{\frac{2^{-k}+2^{-(k-1)}}{2}-1\right\\}\right)\right)$
$\displaystyle=(-1,1)\times\mathop{\bigcup}_{k\geq 0}\left\\{2^{-k}-1,\frac{2^{-k}+2^{-(k-1)}}{2}-1\right\\},$
and let $g:I^{2}\setminus N\to I^{2}$ be given by
$g(x,y)=f_{k}(x,y)\quad\textrm{for }(x,y)\in
P_{k}\setminus\left((-1,1)\times\left\\{\frac{2^{-k}+2^{-(k-1)}}{2}-1\right\\}\right).$
Effectively, we have embedded the geometric Lorenz attractor into each
disjoint rectangle $P_{k}$. The map $g$ admits a singular hyperbolic
attractor, and the singular set $N$ is the disjoint union of countably many
submanifolds. Since each orbit of $g$ is entirely contained in one of the
rectangles $P_{k}$, each $P_{k}$ supports a distinct ergodic SRB measure. So
the requirement that there are only finitely many components of the singular
set $N$ is a necessary assumption for our result to hold.
## Acknowledgments
I would like to thank Penn State University and the Anatole Katok Center for
Dynamical Systems and Geometry where this work was done. I also thank my
advisor, Y. Pesin, for introducing me to this problem and for valuable input
over the course of my investigation into singular hyperbolic attractors. I
also thank S. Luzzatto for many helpful remarks concerning the connections
between ergodicity and topological transitivity of singular hyperbolic
attractors.
which is an isomorphism when $R$ is a Noetherian $\mathbb{F}_{p}$-algebra. The
left hand side is the classical de Rham-Witt complex and the right hand side
is the de Rham-Witt complex defined by saturation and completion.
## VII Graded loopspaces in mixed characteristics
In this section we will take $k=\mathbb{Z}_{(p)}$.
### VII.1 Frobenius lift structures
#### VII.1.1 Classical and derived Frobenius lifts
###### Definition VII.1.1.
Let $A$ be a discrete commutative ring. A classical Frobenius lift on $A$ is
a ring endomorphism $F:A\to A$ that induces the Frobenius
morphism on $A/p$, i.e. $F(a)-a^{p}$ is $p$-divisible for every $a\in A$.
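Concretely, on the polynomial ring $\mathbb{Z}[x]$ a ring endomorphism is determined by the image $g$ of $x$, and it is a classical Frobenius lift exactly when $g\equiv x^{p}\pmod p$. A minimal sketch with hand-rolled coefficient-list polynomials (all helper names are ours):

```python
p = 3

def pmul(f, g):
    """Multiply polynomials given as coefficient lists (index = degree)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def ppow(f, n):
    out = [1]
    for _ in range(n):
        out = pmul(out, f)
    return out

def pcompose(f, g):
    """f(g(x)), by Horner's rule on the coefficient list of f."""
    out = [0]
    for c in reversed(f):
        out = pmul(out, g)
        out[0] += c
    return out

def psub(f, g):
    n = max(len(f), len(g))
    return [(f[i] if i < len(f) else 0) - (g[i] if i < len(g) else 0)
            for i in range(n)]

def is_frobenius_lift(g, sample):
    """x -> g is a Frobenius lift iff f(g) = f^p mod p for all f (checked on a sample)."""
    return all(all(c % p == 0 for c in psub(pcompose(f, g), ppow(f, p)))
               for f in sample)

sample = [[0, 1], [1, 1], [0, -2, 1], [5, 1, 0, 3]]   # x, x+1, x^2-2x, 3x^3+x+5
assert is_frobenius_lift([0, 0, 0, 1], sample)         # x -> x^3, the standard lift
assert is_frobenius_lift([0, p, 0, 1], sample)         # x -> x^3 + 3x, another lift
assert not is_frobenius_lift([0, 0, 1], sample)        # x -> x^2 is not a lift for p = 3
```

The first assertion reflects Fermat's little theorem in the form $f(x^{p})\equiv f(x)^{p}\pmod p$.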
###### Definition VII.1.2.
We define the category of derived stacks with a derived, or homotopy,
Frobenius lift $dSt^{Fr}$ as the pullback of categories
$dSt^{endo}\times_{dSt_{\mathbb{F}_{p}}^{endo}}dSt_{\mathbb{F}_{p}}$
where the functor $dSt_{\mathbb{F}_{p}}\to dSt_{\mathbb{F}_{p}}^{endo}$ is the
canonical functor adjoining the Frobenius endomorphism to a derived stack on
$\mathbb{F}_{p}$. Similarly, we define the category of graded derived stacks
endowed with a derived Frobenius lift $dSt^{gr,Fr}$ as the pullback of
categories
$dSt^{gr,endo}\times_{dSt_{\mathbb{F}_{p}}^{endo}}dSt_{\mathbb{F}_{p}}$
where the forgetful functor $dSt^{gr,endo}\to dSt_{\mathbb{F}_{p}}^{endo}$ is
given by taking $0$ weights, that is, $\mathbb{G}_{m}$-fixed points as in
Definition IV.10.9, and taking the fiber over $\mathbb{F}_{p}$.
We will give a more general definition of derived Frobenius lifts.
###### Definition VII.1.3 (Homotopy coherent Frobenius lift).
Let $\mathcal{C}$ be a category endowed with a functor $\mathcal{C}\to
dSt_{\mathbb{F}_{p}}^{endo}$. We define the category of Frobenius lifts on
$\mathcal{C}$ as
$\mathcal{C}^{Fr}\coloneqq\mathcal{C}\times_{dSt_{\mathbb{F}_{p}}^{endo}}dSt_{\mathbb{F}_{p}}$
where the functor $dSt_{\mathbb{F}_{p}}\to dSt_{\mathbb{F}_{p}}^{endo}$ is the
canonical functor adjoining the Frobenius endomorphism to a derived stack on
$\mathbb{F}_{p}$.
###### Example VII.1.4.
We give a series of examples for categories of "algebra-type objects":
* •
with $\mathcal{C}=CRing_{\mathbb{Z}_{(p)}}$, we define the category of
Frobenius lifts on $CRing_{\mathbb{Z}_{(p)}}$:
$CRing_{\mathbb{Z}_{(p)}}^{Fr}\coloneqq(CRing_{\mathbb{Z}_{(p)}}^{endo,op})^{Fr,op}$
The canonical morphism $CRing_{\mathbb{Z}_{(p)}}^{endo,op}\to
dSt^{endo}_{\mathbb{F}_{p}}$ is the derived affine functor modulo $p$.
* •
with $\mathcal{C}=SCR_{\mathbb{Z}_{(p)}}$, we define the category of Frobenius
lifts on $SCR_{\mathbb{Z}_{(p)}}$:
$SCR_{\mathbb{Z}_{(p)}}^{Fr}\coloneqq(SCR_{\mathbb{Z}_{(p)}}^{endo,op})^{Fr,op}$
The canonical morphism $SCR_{\mathbb{Z}_{(p)}}^{endo,op}\to
dSt^{endo}_{\mathbb{F}_{p}}$ is the derived affine functor modulo $p$.
* •
with $\mathcal{C}=SCR_{\mathbb{Z}_{(p)}}^{gr}$, we define the category of
Frobenius lifts on $SCR_{\mathbb{Z}_{(p)}}^{gr}$:
$SCR_{\mathbb{Z}_{(p)}}^{gr,Fr}\coloneqq(SCR_{\mathbb{Z}_{(p)}}^{gr,endo,op})^{Fr,op}$
The canonical morphism $SCR_{\mathbb{Z}_{(p)}}^{gr,endo,op}\to
dSt^{endo}_{\mathbb{F}_{p}}$ is the derived affine functor modulo $p$ after
taking the weight $0$ component defined by Proposition IV.10.10. See Remark
IV.10.14 for details.
###### Remark VII.1.5.
Our definition only requires a derived Frobenius lift to agree, up to homotopy,
with the canonical Frobenius on the $0$-weight part of the reduction modulo
$p$; we do not require any homotopy on the other weights.
###### Proposition VII.1.6.
Let $A$ be a $p$-torsion-free commutative algebra over ${\mathbb{Z}_{(p)}}$. The
space of Frobenius lifts on $A$ is discrete and in bijection with the set of
classical Frobenius lifts on $A$:
$\{\phi:A\to A\mid\phi_{p}=Fr_{p}:A/p\to A/p\}$
###### Proof.
Given an endomorphism of $A$, the data of being a derived Frobenius lift is a
path in
$Map_{SCR_{\mathbb{F}_{p}}}(A\otimes^{\mathbb{L}}\mathbb{F}_{p},A\otimes^{\mathbb{L}}\mathbb{F}_{p})$
between the induced endomorphism and the canonical Frobenius. As $A$ is
$p$-torsion-free we have
$A\otimes^{\mathbb{L}}\mathbb{F}_{p}\simeq A/p$
which is a discrete commutative $\mathbb{F}_{p}$-algebra. Now, we deduce
$Map_{SCR_{\mathbb{F}_{p}}}(A\otimes^{\mathbb{L}}\mathbb{F}_{p},A\otimes^{\mathbb{L}}\mathbb{F}_{p})\simeq
Map_{SCR_{\mathbb{F}_{p}}}(A/p,A/p)\simeq
Hom_{CRing_{\mathbb{F}_{p}}}(A/p,A/p)$
which is a discrete space. Therefore the choice of a homotopy coherent
Frobenius lift is equivalent to the choice of a classical Frobenius lift. ∎
###### Definition VII.1.7.
Let $A$ be a $p$-torsion-free commutative algebra over $\mathbb{Z}_{(p)}$, and
let $M$ and $N$ be $A$-modules. An $(A,F)$-linear map is a morphism of
${\mathbb{Z}_{(p)}}$-modules $M\to N_{F}$, where $N_{F}$ is the
${\mathbb{Z}_{(p)}}$-module $N$ endowed with the $A$-module structure
induced by precomposition with $F$.
###### Proposition VII.1.8.
Let $A$ be a $p$-torsion-free commutative algebra over $\mathbb{Z}_{(p)}$ and
$M$ a projective $A$-module of finite type. The space of Frobenius lifts on the
graded simplicial algebra $Sym_{A}(M[n])$, with $M$ in weight $1$ and $n>0$ an
integer, is discrete and consists of pairs $(F,\phi)$ with $F$ a
Frobenius lift of $A$ and $\phi:M\to M$ an $(A,F)$-linear map. Therefore, once
the Frobenius $F$ is fixed, $\phi$ is the data of a classical $(A,F)$-module
structure on the $A$-module $M$.
###### Proof.
Taking weight $0$, a derived Frobenius lift on $Sym_{A}(M[n])$ induces a
derived Frobenius lift on $A$. From the previous proposition, a Frobenius
structure on $A$ is a classical Frobenius lift; we denote it $F$.
We start by considering the space of choices of endomorphisms on
$Sym_{A}(M[n])$ compatible with $F$, which are morphisms of simplicial
$A$-algebras from $Sym_{A}(M[n])$ to $Sym_{A}(M[n])$ with the former endowed
with the canonical $A$-algebra structure and the latter with the $A$-algebra
structure induced by $F$.
$\displaystyle End_{SCR^{gr}_{A}}(Sym_{A}(M[n]))$ $\displaystyle\simeq
Map_{A-Mod^{gr}}(M[n],Sym_{A}(M[n])_{F})$ $\displaystyle\simeq
Map_{A-Mod}(M[n],M_{F}[n])$ $\displaystyle\simeq Map_{A-Mod}(M,M_{F})$
where $M_{F}$ is $M$ endowed by the $A$-module structure induced by $F$.
Now the weight $0$ part of $Sym_{A}(M[n])$ is simply $A$. Therefore the
homotopy with the canonical Frobenius is uniquely determined by $F$. ∎
###### Proposition VII.1.9.
The space of derived Frobenius lifts on $\mathbb{F}_{p}$ is empty.
Before we move on to the proof of the proposition, we will need a lemma.
###### Lemma VII.1.10.
Let $A$ be a discrete commutative algebra and $a\in A$. There exists a
simplicial algebra $K(A,a)$ with the following universal property: for $B$
a simplicial $A$-algebra, there is a functorial equivalence
$Map_{A-SCR}(K(A,a),B)\to\Omega_{a,0}B$
where the space of paths $\Omega_{a,0}B$ from $a$ to $0$ in $B$ is defined as
the fiber of
$B^{\Delta^{1}}\xrightarrow{ev_{0},ev_{1}}B\times B$
over $(a,0)$.
###### Proof.
The category $SCR$ is presentable and the functor
$B\in SCR\to\Omega_{a,0}B$
preserves limits and is accessible, therefore using [Lur07, Proposition
5.5.2.7], it is representable. ∎
###### Remark VII.1.11.
We can explicitly construct $K(A,a)$ as $Sym_{A}(S^{1})$ where $Sym_{A}(-)$ is
the free simplicial $A$-algebra on a simplicial set and
$S^{1}=\Delta^{1}/\partial\Delta^{1}$ is the simplicial circle.
###### Proof of the proposition.
We first compute the mapping space $Map_{SCR}(\mathbb{F}_{p},\mathbb{F}_{p})$.
We know
$Map_{SCR_{\mathbb{Z}_{(p)}}}(\mathbb{F}_{p},\mathbb{F}_{p})=Hom_{CRing}(\mathbb{F}_{p},\mathbb{F}_{p})$
is contractible. The only endomorphism of $\mathbb{F}_{p}$ as a simplicial
algebra over $\mathbb{Z}_{(p)}$ is the identity up to contractible choice. We
take
$K(\mathbb{Z}_{(p)},p)$
as a cofibrant model of $\mathbb{F}_{p}$, and modding out by $p$,
$K(\mathbb{Z}_{(p)},p)\otimes_{\mathbb{Z}_{(p)}}\mathbb{F}_{p}\simeq
K(\mathbb{F}_{p},0)\simeq\mathbb{F}_{p}[\epsilon]$
is the free simplicial algebra on one generator in degree $1$, as the
construction $K$ is stable under base change. The identity induces the
identity morphism in homology; however, the Frobenius on $\mathbb{F}_{p}[\epsilon]$
sends the degree $1$ generator $\epsilon$ to $\epsilon^{p}=0$ in homology.
Therefore the identity does not have a derived Frobenius lift structure.
∎
###### Lemma VII.1.12.
The forgetful functor $U:dSt^{gr,Frob}\to dSt^{gr}$ preserves the classifying
space construction $B$.
###### Proof.
For a group object $G$ in graded derived stacks with Frobenius lift, $BG$ is
explicitly given as a geometric realization of graded derived stacks of the
form $G^{n}$. We
write $U$ as a composition
$dSt^{Fr}\to dSt^{endo}\to dSt$
the first is the projection from a fiber product of categories, so it
preserves small limits. The second also preserves small limits, since in a
category of presheaves they are computed pointwise. Hence $U$ preserves
finite products. The same decomposition shows that $U$ preserves geometric
realizations, which concludes the proof; see the proof of Proposition VII.2.21
for a similar argument. ∎
###### Remark VII.1.13.
$B\mathbb{G}_{m}$ is the final object in $dSt^{gr}$ and it admits a unique
derived Frobenius lift structure; therefore it is also the final object in
$dSt^{gr,Frob}$.
###### Remark VII.1.14.
By definition
$SCR^{Fr}\coloneqq dAff^{Fr,op}$
We notice there is an equivalence of categories
$SCR^{Fr}\simeq SCR^{endo}\times_{SCR_{p}^{endo}}SCR_{p}$
compatible with the functor
$Spec:SCR\xrightarrow{\sim}dAff^{op}$
#### VII.1.2 Derived Witt functor
###### Proposition VII.1.15.
The forgetful functor
$U:SCR^{Fr}\to SCR$
admits a right adjoint, which we call the derived Witt ring functor.
###### Proof.
From the description of $SCR^{Fr}$ as the fiber product
$SCR^{Fr}\coloneqq SCR^{endo}\times_{SCR^{endo}_{p}}SCR_{p}$
the forgetful functor
$U:SCR^{Fr}\to SCR$
commutes with colimits. We conclude using the adjoint functor theorem [Lur09,
Corollary 5.5.2.9]. ∎
###### Remark VII.1.16.
When $A$ is discrete, $W(A)$ is the discrete commutative ring of Witt vectors.
See [Joy85] for details on the classical adjunctions. We can see the
simplicial algebra $W(A)$ is discrete by using the following identifications
$Map_{SCR}(R,W(A))\simeq Map_{SCR^{Fr}}(L(R),W(A))\simeq Map_{SCR}(L(R),A)$
for any simplicial algebra $R$, where $L$ is left adjoint to the forgetful
functor and $W$ is the right adjoint. The mapping space is discrete for every
simplicial algebra $R$; therefore $W(A)$ is discrete.
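For discrete rings, the ghost components make the Witt vector arithmetic concrete: ghost components are $w_{n}(a)=\sum_{i\leq n}p^{i}a_{i}^{p^{n-i}}$, and Witt addition can be computed by adding ghost components and solving back; integrality of the result is the classical theorem of Witt. A small sketch over $\mathbb{Z}$ with $p=3$ and Witt vectors of length $2$ (all names are ours):

```python
from fractions import Fraction

p = 3

def ghost(a):
    """Ghost components w_n(a) = sum_{i <= n} p^i * a_i^(p^(n-i))."""
    return [sum(p ** i * a[i] ** (p ** (n - i)) for i in range(n + 1))
            for n in range(len(a))]

def witt_add(a, b):
    """Add two Witt vectors by adding ghost components and solving back."""
    n = len(a)
    w = [ga + gb for ga, gb in zip(ghost(a), ghost(b))]
    s = []
    for m in range(n):
        partial = sum(p ** i * s[i] ** (p ** (m - i)) for i in range(m))
        s.append(Fraction(w[m] - partial, p ** m))   # exact rational arithmetic
    return s

a, b = [2, 5], [1, 7]
s = witt_add(a, b)
assert all(x.denominator == 1 for x in s)            # integrality (Witt's theorem)
assert ghost([int(x) for x in s]) == [ga + gb for ga, gb in zip(ghost(a), ghost(b))]
```

The ghost morphism $W(A(0))\to(A(0))^{\mathbb{N}}$ used in the next construction is exactly the map `ghost` computes componentwise.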
###### Construction VII.1.17.
Let $A$ be a graded simplicial algebra which is either positively or
negatively graded. We construct its associated graded simplicial algebra with
Frobenius lift of graded Witt vectors, $W^{gr}(A)$, as follows. As a graded simplicial
algebra with an endomorphism, we define $W^{gr}(A)$ as the fiber product
$A^{\mathbb{N}}\times_{(A(0))^{\mathbb{N}}}W(A(0))$
where
$W(A(0))\to(A(0))^{\mathbb{N}}$
is the ghost morphism, see Remark IV.10.13 for the connection between naive
$0$ weights and $0$ weights. By construction, $W^{gr}(A)$ is positively or
negatively graded and its $0$ weights ring is given by taking naive $0$
weights
$W^{gr}(A)(0)\simeq W(A(0))$
The natural Frobenius structure on $W(A(0))$ endows $W^{gr}(A)$ with the
structure of a graded simplicial algebra with a Frobenius lift.
This construction defines a functor
$W^{gr}:SCR^{gr}\to SCR^{gr,Fr}$
###### Proposition VII.1.18.
Let $A$ be a graded simplicial algebra, which is positively or negatively
graded. The functor
$R\in SCR^{gr,Fr}\mapsto Map_{SCR^{gr}}(R,A)\in\mathcal{S}$
is representable by $W^{gr}(A)$.
###### Proof.
Let $A$ be a graded simplicial algebra and $R$ be a graded simplicial algebra
with Frobenius lift. By the construction of $W^{gr}(A)$, a morphism of graded
simplicial algebras with Frobenius lifts
$R\to W^{gr}(A)$
is given by a morphism of graded simplicial algebras with endomorphisms
$R\to A^{\mathbb{N}}$
and a morphism of graded simplicial algebras with Frobenius lifts
$R\to W(A(0))$
with a compatibility between the two morphisms. The former morphism is simply
given by a morphism of graded simplicial algebras $R\to A$; the latter is
given by a morphism of simplicial algebras with Frobenius lifts
$R(0)\to W(A(0))$
which is simply a morphism of simplicial algebras
$R(0)\to A(0)$
which is fixed by the compatibility requirement. Therefore a morphism of graded
simplicial algebras with Frobenius lifts
$R\to W^{gr}(A)$
is uniquely determined by a morphism of graded simplicial algebras $R\to A$. ∎
#### VII.1.3 Modules on an algebra endowed with a Frobenius lift
###### Definition VII.1.19.
Let $(A,F)$ be a simplicial algebra endowed with an endomorphism. We define
the category of modules on $(A,F)$ as the stabilization of the category of
simplicial algebras over $(A,F)$, see section IV.7 for details of the tangent
category formalism. That is
$(A,F)-Mod^{endo}\coloneqq Stab(SCR^{endo}_{/(A,F)})$
Similarly, for $(A,F,h)$ a simplicial algebra endowed with a Frobenius lift,
we define its category of modules
$(A,F,h)-Mod^{Fr}\coloneqq Stab(SCR^{Fr}_{/(A,F,h)})$
###### Remark VII.1.20.
We can notice that a module on a simplicial algebra with endomorphism $(A,F)$
is simply a not-necessarily-connective $A$-module endowed with an endomorphism
compatible with the action of $F$.
###### Remark VII.1.21.
The category $(A,F,h)-Mod$ can be identified with
$(A,F)-Mod^{endo}\times_{(A_{p},F_{p})-Mod^{endo}}A_{p}-Mod$
where $(A_{p},F_{p})$ is the simplicial algebra obtained by base changing
$(A,F)$ to $\mathbb{F}_{p}$. We deduce this identification from the following
description of $SCR^{Fr}_{/(A,F,h)}$ :
$SCR^{Fr}_{/(A,F,h)}\simeq
SCR^{endo}_{/(A,F)}\times_{SCR^{endo}_{p/(A_{p},F_{p})}}SCR_{p,/(A_{p},F_{p})}$
and the fact that stabilizations commute with the "endomorphism category
construction $(-)^{endo}$" and small limits.
Therefore we can identify an object of $(A,F,h)-Mod$ with an $(A,F)$-module and
a homotopy between $F_{p}$ and $0$.
###### Proposition VII.1.22.
Let $F:(A,F)-Mod^{endo}\to(A,F,h)-Mod^{Fr}$ be the functor sending the pair
$(M,\alpha)$ to $(M,p\alpha)$ with the canonical homotopy. The functor $F$ is
an equivalence of categories.
$F:(A,F)-Mod^{endo}\xrightarrow{\sim}(A,F,h)-Mod^{Fr}$
###### Proof.
We use the exact triangle
$M\xrightarrow{\times p}M\to M_{p}$
where $M_{p}$ denotes $M\otimes\mathbb{F}_{p}$. Given a pair $(M,\alpha)$ in
$(A,F)-Mod^{endo}$, promoting the pair to an $(A,F,h)$-module is given by the
data of a homotopy between
$M\xrightarrow{\alpha}M\to M_{p}$
and zero. Using the exact triangle, this is equivalent to specifying an
element $\alpha^{\prime}:M\to M$ and a homotopy between $p\alpha^{\prime}$ and
$\alpha$, that is to say, this is an element in $F^{-1}((M,\alpha))$. This
shows the essential surjectivity of the functor, and even its full
faithfulness on mapping spaces with equal source and target. We proceed
similarly for general mapping spaces. ∎
###### Remark VII.1.23.
From the previous proposition, a triple $(M,\alpha,t)$ in
$(A,F,h)-Mod^{Fr}$ may be seen as a pair $(M,\beta)$ in $(A,F)-Mod^{endo}$. We
will write $\frac{\alpha}{p}$ for $\beta$; this element is constructed
using the endomorphism $\alpha$ and the homotopy $t$.
###### Definition VII.1.24.
We can define the cotangent complex of a simplicial algebra endowed with an
endomorphism or a Frobenius lift using the formalism of [Lur07].
Let $(A,F)$ be a simplicial algebra endowed with an endomorphism. The
cotangent complex of $(A,F)$ is an object of $(A,F)-Mod$ representing the functor
$Map_{SCR^{endo}/(A,F)}((A,F),-)$
Let $(A,F,h)$ be a simplicial algebra endowed with a Frobenius lift. The
cotangent complex of $(A,F,h)$ is an object of $(A,F,h)-Mod$ representing the functor
$Map_{SCR^{Fr}/(A,F,h)}((A,F,h),-)$
###### Remark VII.1.25.
The cotangent complex of $(A,F)$ is simply given by $(\mathbb{L}_{A},dF)$
where $dF$ is the endomorphism functorially induced from $F$.
The forgetful functor $SCR^{Fr}\to SCR^{endo}$ induces a forgetful functor
$(A,F,h)-Mod^{Fr}\to(A,F)-Mod^{endo}$
###### Proposition VII.1.26.
Let $(A,F)$ be a simplicial algebra endowed with an endomorphism, the
forgetful functor
$SCR^{endo}_{/(A,F)}\to(A,F)-Mod^{endo}$
admits a left adjoint denoted $Sym_{(A,F)}$.
###### Proof.
We use the adjoint functor theorem, see [Lur09, Corollary 5.5.2.9]. ∎
### VII.2 Mixed graded Dieudonné complexes and algebras
#### VII.2.1 The graded circle
###### Definition VII.2.1.
The graded circle $S^{1}_{gr}$ as in [MRT20] is given by
$S^{1}_{gr}\coloneqq Spec^{\Delta}({\mathbb{Z}_{(p)}}[\eta])\simeq BKer$
It is endowed with an endomorphism induced from the multiplication-by-$p$
endomorphism on $Ker$; we will call this endomorphism $[p]$. The morphism $[p]$
sends $\eta$ to $p\eta$ in cohomology.
###### Proposition VII.2.2.
The endomorphism $[p]$ gives $S^{1}_{gr}$ the structure of a group in
$dSt^{Fr}$.
###### Proof.
The stack $S^{1}_{gr}$ is given by applying the $B$ construction to the
derived stack with Frobenius lift $Ker$. ∎
###### Remark VII.2.3.
The morphism
${\mathbb{Z}_{(p)}}[\frac{x^{n}}{n!}]\to{\mathbb{Z}_{(p)}}[\frac{x^{n}}{n!}]$
sending $\frac{x^{n}}{n!}$ to $p^{n}\frac{x^{n}}{n!}$ is canonically a
morphism of graded classical affine schemes endowed with a Frobenius lift. As
$x$ is in weight $1$, the condition of being a Frobenius lift is trivial.
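One can check multiplicativity of this rescaling directly: in the divided power algebra the product is $\frac{x^{i}}{i!}\cdot\frac{x^{j}}{j!}=\binom{i+j}{i}\frac{x^{i+j}}{(i+j)!}$, and scaling the weight-$n$ component by $p^{n}$ respects this because weights add. A minimal sketch (all names are ours):

```python
from math import comb

p = 3

def dp_mul(i, j):
    """x^[i] * x^[j] = C(i+j, i) * x^[i+j] in the divided power algebra."""
    return (comb(i + j, i), i + j)

def bracket_p(coeff, n):
    """The endomorphism [p]: scale the weight-n component by p^n."""
    return (coeff * p ** n, n)

# [p] is multiplicative: the image of a product equals the product of images.
for i in range(6):
    for j in range(6):
        c, n = dp_mul(i, j)
        image_of_product = bracket_p(c, n)
        ci, _ = bracket_p(1, i)
        cj, _ = bracket_p(1, j)
        cij, nij = dp_mul(i, j)
        product_of_images = (ci * cj * cij, nij)
        assert image_of_product == product_of_images
```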
###### Remark VII.2.4.
We also notice that $[p]$ is compatible with the group structure, meaning
$\eta\mapsto p\eta$ is compatible with the coalgebra action $\eta\mapsto
1\otimes\eta+\eta\otimes 1$ and the counit map.
###### Definition VII.2.5 (Graded spheres).
We define variants of spheres as graded affine stacks
$S^{k}_{gr}(n)\coloneqq Spec^{\Delta}\left({\mathbb{Z}_{(p)}}[\eta_{k}]/\eta_{k}^{2}\right)$
with $\eta_{k}$ of weight $n$ and degree $k$. We simply denote
$S^{k}_{gr}\coloneqq S^{k}_{gr}(1)$.
###### Remark VII.2.6.
The graded sphere $S^{1}_{gr}$ is the graded affine stack considered in
[MRT20]. The higher spheres $S^{n}_{gr}$ can be recovered from the
topological spheres as follows:
$S^{n}_{gr}\simeq Spec^{\Delta}(D(H^{*}(S^{n},{\mathbb{Z}_{(p)}})))$
where $H^{*}(S^{n},{\mathbb{Z}_{(p)}})$ is the graded commutative differential
graded algebra of cohomology of the topological sphere $S^{n}$ with the zero
differential and $D$ denotes the denormalization functor.
###### Definition VII.2.7.
For $F$ a pointed graded stack, we define its graded homotopy groups:
$\pi_{k}^{(n)}(F)=Hom_{dSt^{gr,*}}(S^{k}_{gr}(n),F)=\pi_{0}Map_{dSt^{gr,*}}(S^{k}_{gr}(n),F)$
The notation $Hom$ denotes the $\pi_{0}$ set of the associated mapping space.
###### Proposition VII.2.8.
The graded circle endowed with its Frobenius structure can be recovered as the
following pushout
$S^{1}_{gr}\simeq*\sqcup_{Spec({\mathbb{Z}_{(p)}}[\rho])}*$
of graded affine stacks where $\rho$ is a generator in degree $0$ and weight
$1$ which squares to zero. Furthermore, the induced diagram obtained by taking
the product with $Spec(B)$ for $B$ a simplicial algebra
$\begin{array}{ccc}Spec(\mathbb{Z}_{(p)}[\rho])\times Spec(B)&\longrightarrow&Spec(B)\\ \downarrow&&\downarrow\\ Spec(B)&\longrightarrow&S^{1}_{gr}\times Spec(B)\end{array}$
is a pushout diagram against derived affine schemes, meaning it induces a
pullback diagram on the simplicial algebras of functions.
###### Proof.
We follow the proof in [MRT20, Theorem 5.1.3]. We construct a commutative
diagram
$\begin{array}{ccc}Spec({\mathbb{Z}_{(p)}}[\rho])&\longrightarrow&*\\ \downarrow&&\downarrow\\ *&\longrightarrow&BKer\end{array}$
that is, an element $Spec({\mathbb{Z}_{(p)}}[\rho])\to Ker\simeq\Omega BKer$.
We choose
$[\rho]=(\rho,0,0,\ldots)\in Ker({\mathbb{Z}_{(p)}}[\rho])$
It is an element of $Ker({\mathbb{Z}_{(p)}}[\rho])$ since the Frobenius $F$
acts on a Teichmüller element as $F[a]=[a^{p}]$.
We are reduced to showing the natural map
$C_{\Delta}(BKer)={\mathbb{Z}_{(p)}}\oplus{\mathbb{Z}_{(p)}}[-1]\to{\mathbb{Z}_{(p)}}\times_{{\mathbb{Z}_{(p)}}[\rho]}{\mathbb{Z}_{(p)}}$
is an equivalence of graded cosimplicial algebras. We only need to show it is
an equivalence on the underlying complexes which is obvious. The compatibility
with the endomorphism structures in straightforward and the additional
Frobenius structure is trivial since the weight $0$ part of $S^{1}_{gr}$ is
simply $Spec({\mathbb{Z}_{(p)}})$.
We move on to the second part of the proof. We want to show the following
diagram is a pushout diagram against derived affine schemes $X$.
$\begin{array}{ccc}Spec(\mathbb{Z}_{(p)}[\rho])\times Spec(B)&\longrightarrow&Spec(B)\\ \downarrow&&\downarrow\\ Spec(B)&\longrightarrow&S^{1}_{gr}\times Spec(B)\end{array}$
We can reduce to the case of $X=\mathbb{A}^{1}$ using [Lur11, Proposition
4.1.9] since any derived affine scheme can be written as a limit of copies of
$\mathbb{A}^{1}$. Therefore we show the natural morphism
$\mathcal{O}(S^{1}_{gr}\times Spec(B))\to
B\times_{\mathbb{Z}_{(p)}[\rho]\otimes B}B$
is an equivalence. Since the functor of simplicial functions $\mathcal{O}$ is
given by the composition of the Dold-Kan functor with $C(-)$, the functor of
$\mathbb{E}_{\infty}$-functions, we simply show we have an equivalence
$C(S^{1}_{gr}\times Spec(B))\to B\times_{\mathbb{Z}_{(p)}[\rho]\otimes B}B$
of complexes. Now using the finite cohomology property of $S^{1}_{gr}$, see
[MRT20, Lemma 3.4.9] and the base change formula [HLP14, A.1.5-(2)], we have
$C(S^{1}_{gr}\times Spec(B))\simeq C(S^{1}_{gr})\otimes B$
Now as we know
$C(S^{1}_{gr})=H^{*}(S^{1}_{gr},\mathbb{Z}_{(p)})=\mathbb{Z}_{(p)}\oplus\mathbb{Z}_{(p)}[-1]$,
we deduce the result using the first part of the proof.
∎
#### VII.2.2 Mixed graded Dieudonné complexes
###### Definition VII.2.9.
The endomorphism $[p]:S^{1}_{gr}\to S^{1}_{gr}$ induces a pullback morphism
$[p]^{*}:QCoh(BS^{1}_{gr})\to QCoh(BS^{1}_{gr})$
which is an endofunctor of $\epsilon-Mod^{gr}$.
###### Remark VII.2.10.
Given $(M,d,\epsilon)$ a graded mixed complex, $[p]^{*}M$ is given by
$(M,d,p\epsilon)$.
###### Definition VII.2.11.
We define the category of mixed graded Dieudonné complexes, also called
derived Dieudonné complexes, by
$\epsilon-D-Mod^{gr}\coloneqq CFP_{[p]^{*}}(\epsilon-Mod^{gr})$
where the category on the right hand side is the $\infty$-category of colax
fixed points of $[p]^{*}$ on $\epsilon-Mod^{gr}$, as defined in Remark II.2.3.
The colax fixed point morphism $[p]^{*}M\to M$ can be seen as a
morphism of graded mixed complexes
$(M,d,p\epsilon)\to(M,d,\epsilon)$
which is a morphism of graded complexes satisfying the usual Dieudonné
relation
$\epsilon F=pF\epsilon$
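A minimal toy instance of this relation (entirely our own, with $M=\mathbb{Z}^{2}$ concentrated in two degrees): take $\epsilon$ to shift the first generator to the second, and $F$ to act by $p$ and $1$ on the two generators; the relation $\epsilon F=pF\epsilon$ then holds on the nose.

```python
p = 3

def matmul(A, B):
    """Multiply two matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def scale(c, A):
    return [[c * a for a in row] for row in A]

# M = Z^2; eps sends the first basis vector to the second (a square-zero
# "mixed differential"), and F acts diagonally by (p, 1).
eps = [[0, 0],
       [1, 0]]
F = [[p, 0],
     [0, 1]]

assert matmul(eps, eps) == [[0, 0], [0, 0]]          # eps^2 = 0
assert matmul(eps, F) == scale(p, matmul(F, eps))    # the Dieudonne relation
```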
###### Proposition VII.2.12.
We have a natural identification
$\epsilon-D-Mod^{gr}\simeq(k[\epsilon],[p])-Mod^{gr,endo}$
The object $(k[\epsilon],[p])$ is seen here as a commutative algebra object in
$Mod^{gr,endo}$.
###### Proof.
Given a mixed graded module $M\in\epsilon-Mod^{gr}$, promoting it to a
$(k[\epsilon],[p])$-module $(M,F)$ is equivalent to the data of a commutative
square
$\begin{array}{ccc}k[\epsilon]\otimes M&\longrightarrow&M\\ \scriptstyle{[p]\otimes F}\downarrow&&\downarrow\scriptstyle{F}\\ k[\epsilon]\otimes M&\longrightarrow&M\end{array}$
which is equivalent to a colax fixed point structure
$M\to[p]^{*}M$
∎
##### The Beilinson t-structure
###### Definition VII.2.13.
We recall the definition of the Beilinson t-structure on graded mixed complexes.
Let $M$ be a graded mixed complex. Then $M$ is said to be t-connective for the
t-structure when $H^{i}(M(n))=0$ for $i>-n$, and t-coconnective
for the t-structure when $H^{i}(M(n))=0$ for $i<-n$.
###### Remark VII.2.14.
As a mixed graded complex, $Sym_{A}(\mathbb{L}_{A}[1])$ is t-connective, for
$A$ a simplicial algebra. It is in the heart of the t-structure when $A$ is a
smooth classical algebra.
###### Proposition VII.2.15.
The heart of $\epsilon-dg-mod^{gr}$ identifies with the $1$-category of
complexes $\bf{dg-mod_{\mathbb{Z}_{(p)}}}$. We associate to a complex $(M,d)$
the graded mixed complex whose weight $-n$ part is $M_{n}$ with trivial
differential; the differential $d$ defines the mixed structure. This defines a functor
$i:\bf{dg-mod_{\mathbb{Z}_{(p)}}}\to\bf{\epsilon-dg-mod^{gr}}$
which induces an equivalence on the heart.
###### Definition VII.2.16 (Beilinson t-structure).
Following [Rak20, §3.3], we define a t-structure on the category of mixed
Dieudonné complexes by letting a graded mixed Dieudonné complexes be
t-connective, respectively t-connective, when its underlying graded mixed
complex is.
###### Proposition VII.2.17.
The heart of $\epsilon-D-Mod_{\mathbb{Z}_{(p)}}$ identifies with the abelian
category of Dieudonné complexes of [BLM22].
###### Proof.
This follows from the identification without Dieudonné structures, Proposition
VII.2.15. ∎
#### VII.2.3 Mixed graded Dieudonné algebras
We introduce the definition of the main objects we will study.
###### Definition VII.2.18.
Motivated by Proposition VII.2.12, we define mixed graded Dieudonné stacks,
also called derived Dieudonné stacks, by
$\epsilon-D-dSt^{gr}\coloneqq S^{1}_{gr}-dSt^{gr,Frob}$
We also define mixed graded Dieudonné simplicial algebras, also called
Dieudonné simplicial algebras, by
$\epsilon-D-SCR^{gr}\coloneqq(S^{1}_{gr}-dAff^{gr,Frob})^{op}$
An element of $S^{1}_{gr}-dAff^{gr,Frob}$ can be thought of as a morphism
$X\to BS^{1}_{gr}$, in the topos $dSt^{gr,Frob}$, which is relatively affine.
###### Remark VII.2.19.
A derived Dieudonné stack can be seen as a derived stack endowed with an
endomorphism carrying a Frobenius lift structure, a grading, a compatible
$S^{1}_{gr}$-action, and a compatibility condition between the Frobenius lift
and the $S^{1}_{gr}$-action. This compatibility condition is an analogue of
the Dieudonné complex equation $dF=pFd$ for derived stacks.
###### Remark VII.2.20.
We expect $\epsilon-D-SCR^{gr}$ to admit another description as the category
of modules over a monad $LSym$ on $\epsilon-D-Mod^{gr}$, as is done in [BM19]
and [Rak20].
###### Proposition VII.2.21.
The two $\infty$-categories $dSt^{gr,Frob}$ and $S^{1}_{gr}-dSt^{gr,Frob}$ are
$\infty$-topoi.
###### Proof.
The category $dSt^{gr,Frob}$ is, by definition, given by
$\tau^{endo}\times_{\tau_{p}^{endo}}\tau_{p}$
with $\tau=dSt^{gr}$ and $\tau_{p}=dSt^{gr}_{\mathbb{F}_{p}}$. Recalling
[Lur09, Proposition 6.3.2.3], to show that $dSt^{gr,Frob}$ is a topos, it is
enough to show that the projection morphisms $\tau^{endo}\to\tau_{p}^{endo}$
and $\tau_{p}\to\tau_{p}^{endo}$ are left adjoints and they preserve finite
limits, as $\tau^{endo}$, $\tau_{p}^{endo}$ and $\tau_{p}$ are already topoi.
Using [Lur09, Proposition 6.3.5.1], the morphism
$\tau^{endo}\to\tau_{p}^{endo}$ admits a right adjoint and commutes with
finite limits, since it admits a left adjoint: the forgetful functor.
The morphism $\tau_{p}\to\tau_{p}^{endo}$ commutes with finite limits: it
admits a left adjoint, given by sending $(X,F)$ to the homotopy coequalizer
$X_{F\simeq Fr}$ of $F$ and the Frobenius on $X$, and a right adjoint, given
by sending $(X,F)$ to the homotopy equalizer $X^{F\simeq Fr}$ of $F$ and the
Frobenius on $X$; see Proposition B.3.
Therefore, $S^{1}_{gr}-dSt^{gr,Frob}$ is also a topos as the classifying topos
of objects in $dSt^{gr,Frob}$ with an $S^{1}_{gr}$-action. ∎
###### Remark VII.2.22.
From the previous proposition, these categories admit Postnikov
decompositions and an obstruction theory; see [Lur07, Proposition 7.2.1.10] for
details.
##### Dieudonné functions on a Dieudonné stack
###### Construction VII.2.23.
We construct the functor of functions on a Dieudonné derived stack
$C:\epsilon-D-dSt^{gr,op}\to\epsilon-D-Mod^{gr}$
by composition with the forgetful functor
$S^{1}_{gr}-dSt^{gr,Frob}\to S^{1}_{gr}-dSt^{gr,endo}$
so it suffices to construct a functor
$S^{1}_{gr}-dSt^{gr,endo,op}\to S^{1}_{gr}-Mod^{gr,endo}$
The category $S^{1}_{gr}-dSt^{gr,endo}$ identifies with
$(S^{1}_{gr}\rtimes(\mathbb{N}\times\mathbb{G}_{m}))$-equivariant derived
stacks, that is derived stacks over
$B(S^{1}_{gr}\rtimes(\mathbb{N}\times\mathbb{G}_{m}))$. Indeed, an
$\mathbb{N}\times\mathbb{G}_{m}$-action is given by a grading and a graded
endomorphism and an
$(S^{1}_{gr}\rtimes(\mathbb{N}\times\mathbb{G}_{m}))$-action is given by an
additional $S^{1}_{gr}$-action compatible with gradings and the endomorphisms.
Let us consider $X\in\epsilon-D-dSt^{gr}$, with its structure morphism
$\pi:X\to B(S^{1}_{gr}\rtimes(\mathbb{N}\times\mathbb{G}_{m}))$
Now the pushforward of the structure sheaf $\pi_{*}\mathcal{O}_{X}$ defines an
element of
$QCoh(B(S^{1}_{gr}\rtimes(\mathbb{N}\times\mathbb{G}_{m})))$
The category $QCoh(B(S^{1}_{gr}\rtimes(\mathbb{N}\times\mathbb{G}_{m})))$
identifies with $S^{1}_{gr}-QCoh(B(\mathbb{N}\times\mathbb{G}_{m}))$, which
is $\epsilon-D-Mod^{gr}$. We denote this element $\pi_{*}\mathcal{O}_{X}$ of
$\epsilon-D-Mod^{gr}$ by $C(X)$.
###### Remark VII.2.24.
The inclusion $\epsilon-D-SCR^{gr,op}\subset\epsilon-D-dSt^{gr}$ defines
functions on a Dieudonné simplicial algebra by composition
$\epsilon-D-SCR^{gr}\to\epsilon-D-Mod^{gr}$
###### Remark VII.2.25.
Since the pushforward $\pi_{*}$ is canonically lax monoidal, $C(X)$ is in fact
an element of $CAlg(\epsilon-D-Mod)$. We may call $CAlg(\epsilon-D-Mod)$ the
category of mixed Dieudonné $\mathbb{E}_{\infty}$-algebras. This notion could
give the definition of Dieudonné structures for spectral stacks but we will
not explore these notions.
###### Proposition VII.2.26.
The forgetful functor $\epsilon-D-SCR^{gr}\to\epsilon-D-Mod^{gr}$ commutes
with filtered colimits and small limits.
###### Proof.
Since the forgetful functor $\epsilon-D-Mod^{gr}\to Mod$ commutes with
filtered colimits and small limits and is conservative, it suffices to show
that the forgetful functor
$\epsilon-D-SCR^{gr}\to Mod$
preserves the necessary colimits and limits.
Now this functor factors as a composition
$\epsilon-D-SCR^{gr}\to
S^{1}_{gr}-SCR^{gr,endo}\simeq(\mathbb{N}\times\mathbb{G}_{m})\ltimes
S^{1}_{gr}-SCR\to Mod$
where both functors commute with the desired limits and colimits.
∎
###### Proposition VII.2.27.
The forgetful functor $U:dSt^{Fr}\to dSt$ admits a left adjoint denoted $L$.
###### Proposition VII.2.28.
We define the two pullback squares:
${SCR^{gr,Fr}}$${SCR^{gr,endo}}$${SCR}$${CRing^{p-tf,gr,Fr}}$${CRing^{p-tf,gr,endo}}$${CRing^{p-tf}}$${\urcorner}$${\urcorner}$
where $CRing^{p-tf}$ is the category of discrete commutative rings which are
$p$-torsion-free.
Then the morphism
$CRing^{p-tf,gr,Fr}\to CRing^{p-tf,gr,endo}$
is a fully faithful functor of $1$-categories.
###### Remark VII.2.29.
The previous proposition will often be used implicitly throughout this
thesis. In our constructions, the graded simplicial algebras will have
underlying weight $0$ simplicial algebras which are discrete and
$p$-torsion-free. Therefore the choice of a Frobenius lift on such a graded
simplicial algebra is simply the choice of an endomorphism which is equal to
the canonical Frobenius after reduction modulo $p$.
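To fix ideas, here is a standard illustration (my own example, not taken from the text above): on a $p$-torsion-free commutative ring, a Frobenius lift is an endomorphism reducing to the $p$-power map modulo $p$.

```latex
% A Frobenius lift on a p-torsion-free commutative ring A is an endomorphism
%   \varphi : A \to A   with   \varphi(a) \equiv a^p \pmod{pA}.
% Standard example (a hypothetical choice, for illustration only):
%   A = \mathbb{Z}_{(p)}[t], \qquad \varphi(t) = t^p,
% which reduces to the canonical Frobenius of \mathbb{F}_p[t] modulo p.
\varphi : A \longrightarrow A, \qquad \varphi(a) \equiv a^{p} \pmod{pA}
```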
###### Definition VII.2.30.
We define a subcategory $(\epsilon-D-SCR^{gr})_{\leq 0}\subset\epsilon-D-
SCR^{gr}$ on objects which are sent into $(\epsilon-D-Mod^{gr})_{\leq 0}$ when
applying the forgetful functor
$U:\epsilon-D-SCR^{gr}\to\epsilon-D-Mod^{gr}$
We define a subcategory $(\epsilon-D-SCR^{gr})_{\geq 0}\subset\epsilon-D-
SCR^{gr}$ on objects which are sent into $(\epsilon-D-Mod^{gr})_{\geq 0}$ when
applying the forgetful functor
$U:\epsilon-D-SCR^{gr}\to\epsilon-D-Mod^{gr}$
We will abuse terminology and call these objects coconnective, respectively
connective, for a "t-structure" on the non-stable category $\epsilon-D-
SCR^{gr}$.
Let us denote by $(\epsilon-D-SCR^{gr})^{\heartsuit}$ the subcategory of
objects which are both connective and coconnective. We will call this category
the heart of $\epsilon-D-SCR^{gr}$.
###### Remark VII.2.31.
These notions might be formalized using the notion of connectivity structure
on an $\infty$-category, developed in [BL21].
###### Proposition VII.2.32.
The inclusion
$(\epsilon-D-SCR^{gr})_{\leq 0}\subset\epsilon-D-SCR^{gr}$
admits a left adjoint denoted $t_{\leq 0}$.
The left adjoint $t_{\leq 0}$ commutes with the forgetful functor
$U:\epsilon-D-SCR^{gr}\to\epsilon-D-Mod^{gr}$
###### Proof.
The category $(\epsilon-D-SCR^{gr})_{\leq 0}$ is given by the pullback of
categories
${(\epsilon-D-SCR^{gr})_{\leq 0}}$${\epsilon-D-SCR^{gr}}$${(\epsilon-D-
Mod^{gr})_{\leq 0}}$${\epsilon-D-
Mod^{gr}}$$\scriptstyle{i}$$\scriptstyle{U_{0}}$${\lrcorner}$$\scriptstyle{U}$$\scriptstyle{j}$
Using the adjoint functor theorem, see [Lur09, Corollary 5.5.2.9], we are
reduced to proving that $i$ commutes with filtered colimits and small limits,
since $\epsilon-D-SCR^{gr}$ is presentable and $(\epsilon-D-SCR^{gr})_{\leq
0}$ is presentable as a limit of presentable categories. The functor $i$ is a
projection associated to a fiber product, hence we deduce the result from the
fact that $U$ and $j$ commute with the required colimits and limits. ∎
###### Remark VII.2.33.
The subcategory of $(\epsilon-D-SCR^{gr})^{\heartsuit}$ on objects which are
$p$-torsion-free identifies with the $1$-category of classical Dieudonné
algebras, see Remark VI.3.12. This defines a functor
$i:\textbf{DA}\to(\epsilon-D-SCR^{gr})^{\heartsuit}$
###### Remark VII.2.34.
As seen before, the graded derived stack with Frobenius lift $S^{1}_{gr}$
admits a canonical morphism
$S^{1}_{gr}\to B\mathbb{G}_{m}$
which has as a total space the semi-direct product
$\mathcal{H}\coloneqq\mathbb{G}_{m}\ltimes S^{1}_{gr}$
Therefore we can see a graded mixed Dieudonné structure on a derived stack as
a $\mathbb{G}_{m}$-action and an $S^{1}_{gr}$-action which are compatible.
We have identifications
$\epsilon-dSt^{gr}\simeq\mathcal{H}-dSt$
#### VII.2.4 Graded loopspace
###### Definition VII.2.35.
Let $X$ be a derived stack endowed with a Frobenius lift. We define the
Frobenius graded loopspace of $X$ as
$\mathcal{L}^{gr,Fr}(X)\coloneqq\textbf{Map}_{dSt^{Fr}}(S^{1}_{gr},X)$
where $S^{1}_{gr}$ is endowed with its canonical Frobenius action.
The canonical point $*\to S^{1}_{gr}$, defined by the augmentation
${\mathbb{Z}_{(p)}}[\eta]\to{\mathbb{Z}_{(p)}}$, is a morphism of graded
derived stacks with Frobenius structures and induces a morphism of graded
derived stacks with Frobenius structures
$\mathcal{L}^{gr,Fr}(X)\to X$
We also define
$\mathcal{L}^{gr,endo}_{triv}(X)\coloneqq\textbf{Map}_{dSt^{endo}}(S^{1}_{gr},X)$
where $S^{1}_{gr}$ is endowed with the trivial endomorphism structure given by
identity.
We recall the forgetful functor
$U:dSt^{gr,Fr}\to dSt^{gr}$
###### Proposition VII.2.36.
Let $(X,F)$ be an affine derived scheme endowed with a Frobenius structure. We
write $X=Spec(C)$. The underlying graded stack of the graded Frobenius
loopspace identifies with the shifted linear stack:
$U\mathcal{L}^{gr,Fr}(X)\simeq
Spec_{X}Sym_{\mathcal{O}_{X}}\left(\mathbb{L}_{(X,F)}^{tw}[1]\right)$
where $Sym$ denotes the free $(C,F)$-module construction of Proposition
VII.1.26.
The $(C,F)$-module $\mathbb{L}\coloneqq\mathbb{L}_{(X,F)}^{tw}$ fits in a
triangle of $(C,F)$-modules
$\bigoplus_{\mathbb{N}}\mathbb{L}_{(C,F)}\otimes\mathbb{F}_{p}\to\mathbb{L}\to\mathbb{L}_{(C,F)}$
###### Proof.
We use the description of the graded circle as a graded derived stack with a
Frobenius lift from Proposition VII.2.8:
$*\sqcup_{Spec({\mathbb{Z}_{(p)}}[\rho])}*\xrightarrow{\sim}S^{1}_{gr}$
This induces an equivalence
$\displaystyle\mathcal{L}^{gr,Fr}(X)$
$\displaystyle\xrightarrow{\sim}X\times_{\textbf{Map}_{dSt^{Fr}}(Spec({\mathbb{Z}_{(p)}}[\rho]),X)}X$
As $U$ preserves limits, we deduce an equivalence of underlying graded derived
stacks
$\displaystyle U\mathcal{L}^{gr,Fr}(X)$
$\displaystyle\xrightarrow{\sim}UX\times_{U\textbf{Map}_{dSt^{Fr}}(Spec({\mathbb{Z}_{(p)}}[\rho]),X)}UX$
Let us compute the target of this map. We take $B$ a graded simplicial algebra
endowed with a Frobenius structure and $Spec(B)\to X$ a $B$-point of $X$.
Using Corollary B.5, we can reduce to points on derived affine schemes with a
free Frobenius lift. We may assume $B$ to be negatively graded as
$\mathcal{L}^{gr,Fr}(X)$ is negatively graded. We compute the former stack at
$D\coloneqq L(Spec(B))$ over $X$, where $L$ denotes the free graded derived
stack with Frobenius lift construction:
$\displaystyle
Map_{dSt^{gr,Fr}}(L(Spec(B)),\textbf{Map}_{dSt^{Fr}}(Spec({\mathbb{Z}_{(p)}}[\rho]),X))$
$\displaystyle\simeq
Map_{dSt^{gr,Fr}_{D/}}(Spec({\mathbb{Z}_{(p)}}[\rho])\times D,X)$
$\displaystyle\simeq
Map_{SCR^{gr,Fr}_{/W^{gr}(B)}}(C,W^{gr}(B)\otimes\mathbb{Z}_{(p)}[\rho])$
where we have used the following identification
$\mathcal{O}(Spec({\mathbb{Z}_{(p)}}[\rho])\times
D)\simeq\mathcal{O}(Spec({\mathbb{Z}_{(p)}}[\rho]))\otimes\mathcal{O}(D)$
which can be seen on the underlying modules where it follows from base change,
see [HLP14, A.1.5-(2)].
Therefore we want to compute the fiber of the canonical morphism from the
source space
$Map_{SCR^{gr,endo}}(C,R)\times_{Map_{SCR_{p}^{gr,endo}}(C_{p},R(0)_{p})}Map_{SCR_{p}^{gr}}(C_{p},R(0)_{p})$
where $R$ denotes $W^{gr}(B)\otimes\mathbb{Z}_{(p)}[\rho]$, to the target
space
$Map_{SCR^{gr,endo}}(C,R^{\prime})\times_{Map_{SCR_{p}^{gr,endo}}(C_{p},R^{\prime}(0)_{p})}Map_{SCR_{p}^{gr}}(C_{p},R^{\prime}(0)_{p})$
where $R^{\prime}$ denotes $W^{gr}(B)$. This fiber is simply given by the
fiber of
$Map_{SCR^{gr,endo}}(C,R)\to Map_{SCR^{gr,endo}}(C,R^{\prime})$
which is
$Map_{SCR^{gr,endo}_{/W^{gr}(B)}}(C,W^{gr}(B)\otimes\mathbb{Z}_{(p)}[\rho])\simeq
Map_{SCR^{endo}_{/W^{gr}(B)(0)}}(C,(W^{gr}(B)\otimes\mathbb{Z}_{(p)}[\rho])(0))$
as $C$ is concentrated in degree $0$. Explicitly, we have
$(W^{gr}(B)\otimes\mathbb{Z}_{(p)}[\rho])(0)\simeq W(B(0))\oplus
B_{-1}^{\mathbb{N}}$
where $W(B(0))$ is endowed with its canonical endomorphism and
$B_{-1}^{\mathbb{N}}$ is endowed with its twisted endomorphism $pS$, taking
into account the endomorphism of $\mathbb{Z}_{(p)}[\rho]$ sending $\rho$ to $p\rho$.
Therefore, the fiber is computed as
$Map_{SCR^{endo}_{/W(B(0))}}(C,W(B(0))\oplus B_{-1}^{\mathbb{N}})\simeq
Map_{(C,F)-Mod}(\mathbb{L}_{(C,F)},(B_{-1}^{\mathbb{N}},pS))$
###### Lemma VII.2.37.
Let $(M,u)$ and $(N,v)$ be $(A,F,h)$-modules, the fiber of
$Map_{(A,F,h)-Mod^{Fr}}((M,u),(N,v))\to Map_{(A,F)-Mod^{endo}}((M,u),(N,v))$
is given by
$Map_{A-Mod}(M_{p}[1],N)$
###### Proof.
Using Remark VII.1.21, the mapping space $Map_{(A,F,h)-Mod^{Fr}}((M,u),(N,v))$
is given by
$Map_{(A,F)-Mod^{endo}}((M,u),(N,v))\times_{Map_{(A_{p},F_{p})-Mod^{endo}}((M_{p},u_{p}),(N_{p},v_{p}))}Map_{A_{p}-Mod}(M_{p},N_{p})$
Therefore the fiber of the map
$Map_{(A,F,h)-Mod^{Fr}}((M,u),(N,v))\to Map_{(A,F)-Mod^{endo}}((M,u),(N,v))$
coincides with the fiber of
$Map_{A_{p}-Mod}(M_{p},N_{p})\to
Map_{(A_{p},F_{p})-Mod^{endo}}((M_{p},u_{p}),(N_{p},v_{p}))$
Now since $M$ and $N$ have $(A,F,h)$-structures, $u_{p}$ and $v_{p}$ are
homotopic to the zero endomorphisms. Therefore we have an identification
$Map_{(A_{p},F_{p})-Mod^{endo}}((M_{p},u_{p}),(N_{p},v_{p}))\simeq
Map_{A_{p}-Mod}(M_{p},N_{p})\times Map_{A_{p}-Mod}(M_{p}[1],N_{p})$
We deduce the fiber is given by
$\Omega Map_{A_{p}-Mod}(M_{p}[1],N_{p})\simeq Map_{A-Mod}(M_{p}[1],N)$
using the identification
$\underline{Hom}_{A-Mod}(\mathbb{F}_{p},N)\simeq N[-1]$
∎
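The identification used in the proof above can be spelled out as follows (my paraphrase of the standard argument, under the conventions of Remark VII.1.21): a morphism of modules with endomorphism from $(M_{p},0)$ to $(N_{p},0)$ is a map together with a self-homotopy of the zero map, which produces the extra shifted factor.

```latex
% A map (M_p, 0) -> (N_p, 0) of modules with endomorphism is a map
% f : M_p -> N_p together with a homotopy f \circ 0 \simeq 0 \circ f,
% i.e. a self-homotopy of 0, i.e. a point of
% \Omega Map(M_p, N_p) \simeq Map(M_p[1], N_p). Hence:
Map^{endo}\big((M_{p},0),(N_{p},0)\big)
  \simeq Map_{A_{p}-Mod}(M_{p},N_{p}) \times Map_{A_{p}-Mod}(M_{p}[1],N_{p})
```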
We deduce that the fiber of
$Map_{(C,F,h)-Mod}((\mathbb{L}_{(C,F)},dF),(B_{-1}^{\mathbb{N}},pS))\to
Map_{(C,F)-Mod}(\mathbb{L}_{(C,F)},(B_{-1}^{\mathbb{N}},pS))$
is given by
$Map_{C-Mod}(\mathbb{L}_{(C,F)}\otimes\mathbb{F}_{p}[1],B_{-1}^{\mathbb{N}})$
which is also given by
$Map_{(C,F)-Mod}((\bigoplus_{\mathbb{N}}\mathbb{L}_{(C,F)}\otimes\mathbb{F}_{p}[1],S),(B_{-1}^{\mathbb{N}},S))$
Using Proposition VII.1.22, we have a natural identification
$Map_{(C,F,h)-Mod}((\mathbb{L}_{(C,F)},dF),(B_{-1}^{\mathbb{N}},pS))\simeq
Map_{(C,F)-Mod}((\mathbb{L}_{(C,F)},\frac{dF}{p}),(B_{-1}^{\mathbb{N}},S))$
Therefore there exists $\mathbb{L}$ which fits in a triangle of
$(C,F)$-modules
$\bigoplus_{\mathbb{N}}\mathbb{L}_{(C,F)}\otimes\mathbb{F}_{p}\to\mathbb{L}\to\mathbb{L}_{(C,F)}$
such that
$Map_{(C,F)-Mod}(\mathbb{L}_{(C,F)},(B_{-1}^{\mathbb{N}},pS))\simeq
Map_{(C,F)-Mod}(\mathbb{L},(B_{-1}^{\mathbb{N}},S))$
Then we deduce the equivalence
$\mathcal{L}^{gr,Fr}(X)\xrightarrow{\sim}X\times_{\textbf{Map}_{dSt^{Fr}}(Spec({\mathbb{Z}_{(p)}}[\rho]),X)}X\simeq
Spec(Sym_{(C,F)}(\mathbb{L}[1]))$
∎
###### Corollary VII.2.38.
With the notation of the previous proposition, the derived stack associated to
$\mathcal{L}^{gr,Fr}(X)$ is given, after modding out by the $p$-torsion, by
$Spec(Sym(\mathbb{L}_{C}[1]))$
where the endomorphism is induced by
$\frac{dF}{p}:\mathbb{L}_{C}[1]\to\mathbb{L}_{C}[1]$
###### Proof.
The underlying derived stack of $\mathcal{L}^{gr,Fr}(X)$ is given by
$Spec(Sym(\mathbb{L}[1]))$
From Remark VII.1.25, the cotangent complex $\mathbb{L}_{(C,F)}$ has
$\mathbb{L}_{C}$ as an underlying $C$-module. In the description of the
twisted cotangent complex in a triangle of $(C,F)$-modules
$\bigoplus_{\mathbb{N}}\mathbb{L}_{C}\otimes\mathbb{F}_{p}\to\mathbb{L}\to\mathbb{L}_{C}$
where $\mathbb{L}_{C}$ is endowed with the endomorphism $\frac{dF}{p}$, we
notice this sequence splits since the first morphism is zero. This concludes
the proof. ∎
The proof of Proposition VII.2.36 can easily be adapted to prove the following
result.
###### Remark VII.2.39.
The trivial graded endomorphism loopspace identifies with the shifted tangent
stack of $X$:
$\mathcal{L}^{gr,endo}_{triv}(X)\simeq
Spec_{X}Sym_{\mathcal{O}_{X}}((\mathbb{L}_{(X,F)},dF)[1])$
where $X$ is no longer required to have a $p$-torsion-free cotangent complex.
### VII.3 Comparison theorems
#### VII.3.1 Mixed structures classification: the non-Dieudonné case
In this section, we recall a theorem of [To20, Proposition 2.3.1], which was
announced there with an outlined proof. We give a detailed proof so as to
generalize it to a Dieudonné variant.
###### Theorem VII.3.1.
Let $A$ be a smooth commutative ${\mathbb{Z}_{(p)}}$-algebra and $M$ a
projective $A$-module of finite type. We define $X$ as the derived affine
scheme $Spec(Sym_{A}(M[1]))=\mathbb{V}(M[1])$ endowed with its natural
grading. The classifying space of mixed graded structures on $X$ compatible
with its grading is discrete and in bijection with the set of commutative
differential graded algebra structures on the graded commutative
${\mathbb{Z}_{(p)}}$-algebra $\bigoplus_{i}\wedge^{i}_{A}M[-i]$.
###### Proof.
The classifying space of mixed structures is given by the fiber of the
forgetful functor
$(\mathbb{G}_{m}\ltimes S^{1}_{gr})-dSt\to\mathbb{G}_{m}-dSt$
over $X$.
Since $(\mathbb{G}_{m}\ltimes S^{1}_{gr})-dSt$ identifies naturally with
$S^{1}_{gr}-(\mathbb{G}_{m}-dSt)$, the classifying space is given by the
mapping space
$Map_{Mon(dSt^{gr})}(S^{1}_{gr},\textbf{End}_{gr}(X))$
which, by connectedness of $S^{1}_{gr}$, is also
$Map_{Mon(dSt^{gr})}(S^{1}_{gr},\textbf{End}^{0}_{gr}(X))$
where we define
$\textbf{End}^{0}_{gr}(X)\coloneqq
fib(\textbf{End}_{gr}(X)\to\pi_{0}(\textbf{End}_{gr}(X)))$
the fiber over the identity, meaning $\textbf{End}^{0}_{gr}(X)$ is the
substack of $\textbf{End}_{gr}(X)$ on endomorphisms that are homotopy
equivalent to the identity. This space is equivalent to the mapping space of
pointed stacks
$Map_{dSt^{gr,*}}(BS^{1}_{gr},B\textbf{End}^{0}_{gr}(X))$
Since $BS^{1}_{gr}$ is a stack, we may consider $B\textbf{End}^{0}(X)$ as a
stack in $St\subset dSt$. Therefore, we are reduced to the computation of
$Map_{St^{gr,*}}(BS^{1}_{gr},B\textbf{End}^{0}_{gr}(X))$
We will need a Postnikov decomposition of $B\textbf{End}^{0}_{gr}(X)$,
therefore we have to compute its homotopy groups. To study the behaviour over
the base scheme $S=Spec(A)$, we notice that we have a fiber sequence of graded
derived stacks:
${\textbf{End}_{gr,S}^{0}(X)}$${B\mathbb{G}_{m}}$${\textbf{End}^{0}_{gr}(X)}$${\textbf{Hom}^{0}_{gr}(X,S)}$${\lrcorner}$
where $B\mathbb{G}_{m}$ is the final object in
$St^{gr}=St_{/B\mathbb{G}_{m}}$,
$\textbf{End}_{gr,S}(X)$ is the graded derived stack of endomorphisms of $X$
over $S$ and $\textbf{Hom}^{0}_{gr}(X,S)$ is the substack of maps $X\to S$
that are equivalent to the canonical projection $X\to S$. In this diagram, the
bottom arrow is the composition with the projection $X\to S$ and the left
arrow is the inclusion.
Being a relative linear stack, $X\to S$ has a section: the "zero" section.
Therefore
$\textbf{End}^{0}(X)\to\textbf{Hom}^{0}(X,S)$
is surjective on all homotopy groups and the long exact sequence of homotopy
groups splits as short exact sequences of graded sheaves of abelian groups on
$S$
$0\to\pi_{k}\textbf{End}_{S}^{0}(X)\to\pi_{k}\textbf{End}^{0}(X)\to\pi_{k}\textbf{Hom}^{0}(X,S)\to
0$
##### Computation of $\pi_{k}\textbf{End}^{0}_{S}(X)$
Since all the derived stacks are now truncated, we can take $B$ a test graded
commutative ${\mathbb{Z}_{(p)}}$-algebra and $Spec(B)\to S$ a $B$-point of
$S$, which is an $A$-algebra structure on $B$, i.e. $Spec(B)$ is a relative
affine scheme over $S\times B\mathbb{G}_{m}$. We compute, for $k>0$:
$\displaystyle\textbf{End}_{S,gr}(X)(B)$
$\displaystyle=Map_{SCR^{gr}_{A}}(Sym_{A}(M[1]),Sym_{A}(M[1])\otimes_{A}B)$
$\displaystyle=Map_{A-Mod^{gr}}(M[1],Sym_{A}(M[1])\otimes_{A}B)$
From which we deduce
$\displaystyle\pi_{k}\textbf{End}^{0}_{S,gr}(X)$
$\displaystyle=\pi_{k}\textbf{End}_{S,gr}(X)$
$\displaystyle=Hom_{A-Mod^{gr}}(M[k+1],Sym_{A}(M[1])\otimes_{A}B)$
$\displaystyle=Hom_{A-Mod^{gr}}(M,\wedge^{k+1}M\otimes_{A}B)$
$\displaystyle=Hom_{A-Mod^{gr}}(M\otimes_{A}(\wedge^{k+1}M)^{\vee},B)$
Therefore
$\pi_{k}\textbf{End}^{0}_{S}(X)\simeq\mathbb{V}(M\otimes(\wedge^{k+1}M)^{\vee})$
##### Computation of $\pi_{k}Hom^{0}(X,S)$
We have, on $B$-points:
$\textbf{Hom}^{0}(X,S)(B)=Map_{SCR}^{0}(A,Sym_{A}(M[1])\otimes B)$
where the $0$ exponent denotes the connected component of the canonical map
$A\to Sym_{A}(M[1])\to Sym_{A}(M[1])\otimes B$
We can recover the homotopy groups from the Postnikov decomposition:
$\pi_{k}\textbf{Hom}^{0}(X,S)(B)[k]=fib(t_{\leq k}\textbf{Hom}^{0}(X,S)(B)\to
t_{\leq k-1}\textbf{Hom}^{0}(X,S)(B))$
As $Map(A,-)$ preserves Postnikov decompositions,
$\pi_{k}\textbf{Hom}^{0}(X,S)(B)[k]$ is given by the fiber
$fib(Map_{SCR}^{0}(A,Sym_{A}^{\leq k}(M[1])\otimes B)\to
Map_{SCR}^{0}(A,Sym_{A}^{\leq k-1}(M[1])\otimes B))$
which is simply $Map_{A-Mod}(\Omega_{A}^{1},\Lambda^{k}M[k]\otimes B)$.
We find
$\pi_{k}\textbf{Hom}^{0}(X,S)=\mathbb{V}(\Omega_{A}^{1}\otimes_{A}(\Lambda^{k}M)^{\vee})$
When $k$ is strictly greater than the rank $n$ of $M$ as an $A$-module,
$\Lambda^{k}M$ and $\Lambda^{k+1}M$ both vanish, therefore $\pi_{k}=0$; we
deduce that $B\textbf{End}^{0}(X)$ is $(n+1)$-truncated.
We will, in fact, need a more precise description of
$\pi_{k}\coloneqq\pi_{k}\textbf{End}^{0}_{gr}(X)$:
###### Proposition VII.3.2.
For $k\geq 1$, the sheaf of groups $\pi_{k}$ is given on a commutative algebra
$B$ by the discrete set of pairs $(d_{0},d_{1})$ with
$d_{0}:A_{B}\to\bigwedge^{k}_{A_{B}}M_{B}$ a derivation and
$d_{1}:M_{B}\to\bigwedge^{k+1}_{A_{B}}M_{B}$ a ${\mathbb{Z}_{(p)}}$-linear
map, with the compatibility condition
$d_{1}(am)=ad_{1}(m)+d_{0}(a)\wedge m$
for $a\in A_{B}$ and $m\in M_{B}$.
In this proposition, we have defined $A_{B}\coloneqq
A\otimes_{\mathbb{Z}_{(p)}}B$ and $M_{B}\coloneqq
M\otimes_{\mathbb{Z}_{(p)}}B$.
###### Proof.
On a $B$-point,
$Map_{SCR}(Sym_{A}(M[1]),Sym_{A}(M[1]))(B)\simeq
Map_{SCR_{B}}(Sym_{A_{B}}(M_{B}[1]),Sym_{A_{B}}(M_{B}[1]))$
We want to compute
$\pi_{k}Map_{SCR_{B}}^{0}(Sym_{A_{B}}(M[1]),Sym_{A_{B}}(M_{B}[1]))=\pi_{k}Map_{SCR_{B}}(Sym_{A_{B}}(M_{B}[1]),Sym_{A_{B}}(M_{B}[1]))$
which is given by $\pi_{0}$ of the fiber over the identity of
$Map(S^{k},Map_{SCR_{B}}(C_{B},C_{B}))\to Map_{SCR_{B}}(C_{B},C_{B})$
where $C_{B}$ denotes $Sym_{A_{B}}(M_{B}[1])$.
Since $Map(S^{k},Map_{SCR_{B}}(C_{B},C_{B}))$ can be computed as
$Map_{SCR_{B}}(C_{B},C_{B}^{S^{k}})$, we can rewrite
$\pi_{k}Map_{SCR_{B}}^{0}(Sym_{A_{B}}(M_{B}[1]),Sym_{A_{B}}(M_{B}[1]))\simeq
Hom_{SCR_{B}/C_{B}}(C_{B},C_{B}^{S^{k}})$
We will use the following lemma.
###### Lemma VII.3.3.
Let $D$ be a simplicial commutative algebra. We have a natural identification
$\pi_{0}Map_{SCR}(Sym_{A}(M[1]),D)\simeq\left\\{(u,v):u:A\to\pi_{0}(D)\in
CRing,v:M\to\pi_{1}(D)\in A-Mod\right\\}$
where $\pi_{1}(D)$ is endowed with the $A$-module structure induced by $u$.
###### Proof.
We explicitly construct this bijection. An element
$f\in\pi_{0}Map_{SCR}(Sym_{A}(M[1]),D)$
induces by composition a morphism of commutative rings
$A\to Sym_{A}(M[1])\xrightarrow{f}D\to\pi_{0}(D)$
which we call $u$. It also induces
$M[1]\to Sym_{A}(M[1])\to D$
which defines $v$ after taking $\pi_{1}$.
On the other hand, suppose given a pair $(u,v)$ satisfying the
required conditions. By smoothness of $A$, $u$ lifts to a morphism of
simplicial rings
$a:A\to D$
Now giving a morphism $Sym_{A}(M[1])\to D$ under $A$ is equivalent to the data
of an $A$-module morphism
$M[1]\to D$
We check that these maps are mutually inverse.
∎
The lemma can be used for the base-changed versions of $A$ and $M$: $A_{B}$
and $M_{B}$. Taking $D=Sym_{A_{B}}(M_{B}[1])^{S^{k}}$, we know that
$\pi_{0}(D)=A_{B}\oplus\Lambda^{k}M_{B}$
and
$\pi_{1}(D)=M_{B}\oplus\Lambda^{k+1}M_{B}$
Now the set
$Hom_{SCR_{B}/C_{B}}(C_{B},C_{B}^{S^{k}})$
is identified with the set of pairs $(\delta,d)$ with $\delta$ being a
morphism of simplicial algebras
$\delta:A_{B}\to A_{B}\oplus\Lambda^{k}M_{B}$
and $d$ being a morphism of $A_{B}$-modules
$d:M_{B}\to M_{B}\oplus\Lambda^{k+1}M_{B}$
such that they induce the identity $C_{B}\to C_{B}$ after composition.
Therefore $\delta$ is given by a derivation $d_{0}$:
$\delta(a)=a+d_{0}(a)$
and $d$ is given by
$d(m)=m+d_{1}(m)$
Requiring $d$ to be $A_{B}$-linear gives the compatibility condition on
$(d_{0},d_{1})$.
∎
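For $k=1$, the pairs $(d_{0},d_{1})$ of the proposition are exactly the data generating the cdga structures of Theorem VII.3.1; the following standard formula (my spelling-out, under the usual cdga sign conventions) extends such a pair to all exterior powers.

```latex
% A pair (d_0, d_1) with the compatibility d_1(am) = a d_1(m) + d_0(a) \wedge m
% extends by the Leibniz rule to a degree-one derivation d on
% \bigoplus_i \wedge^i_A M[-i]:
d(a) = d_{0}(a), \qquad
d(m_{1}\wedge\dots\wedge m_{i})
  = \sum_{j=1}^{i} (-1)^{j-1}\, m_{1}\wedge\dots\wedge d_{1}(m_{j})\wedge\dots\wedge m_{i}
% The equation d \circ d = 0 is then the cdga condition of Theorem VII.3.1.
```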
##### Cell decomposition on $BS^{1}_{gr}$
We now define a variation of the cellular decomposition for $BS^{1}_{gr}$.
The classical cellular decomposition of the topological space
$BS^{1}\simeq\mathbb{C}P^{\infty}$, from its CW complex description, gives a
tower of topological spaces
$(BS^{1})_{\leq 0}=*\to(BS^{1})_{\leq 1}=*\to(BS^{1})_{\leq 2}\simeq
S^{2}\to(BS^{1})_{\leq 3}\simeq S^{2}\to(BS^{1})_{\leq 4}\to...$
The tower is given by iterated homotopy pushouts :
${(BS^{1})_{\leq 2n}}$${(BS^{1})_{\leq
2n+2}}$${S^{2n+1}}$${D^{2n+2}\simeq*}$${\llcorner}$
We want a similar decomposition for $BS^{1}_{gr}$ as a graded affine stack.
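As background for the definition below (a standard computation, not taken from the text): the skeleta of the classical decomposition have truncated polynomial cohomology, which is the shape mimicked by the graded affine stages.

```latex
% Cohomology of the 2n-skeleton of BS^1 \simeq \mathbb{C}P^\infty:
H^{*}\big((BS^{1})_{\leq 2n}\big) \simeq H^{*}(\mathbb{C}P^{n})
  \simeq \mathbb{Z}[u]/(u^{n+1}), \qquad |u| = 2
% The graded refinement below additionally places u in weight 1.
```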
###### Definition VII.3.4.
Let us define a sequence of affine stacks $(BS^{1}_{gr})_{\leq n}$ for $n\geq
0$ :
$(BS^{1}_{gr})_{\leq 2n}=(BS^{1}_{gr})_{\leq 2n+1}\coloneqq
Spec^{\Delta}({\mathbb{Z}_{(p)}}[u]/(u^{n+1}))$
where $u$ is in weight $1$ and degree $2$.
The canonical projection
${\mathbb{Z}_{(p)}}[u]/(u^{n+2})\to{\mathbb{Z}_{(p)}}[u]/(u^{n+1})$ induces a
morphism of derived stacks $(BS^{1}_{gr})_{\leq 2n}\to(BS^{1}_{gr})_{\leq
2n+2}$.
###### Proposition VII.3.5.
There is a homotopy pushout diagram of graded affine stacks
${(BS^{1}_{gr})_{\leq 2n}}$${(BS^{1}_{gr})_{\leq
2n+2}}$${S^{2n+1}_{gr}(n+1)}$${*}$${\llcorner}$
where $S^{2n+1}_{gr}(n+1)$ is defined as
$Spec^{\Delta}({\mathbb{Z}_{(p)}}\oplus{\mathbb{Z}_{(p)}}[-2n-1]((n+1)))$.
###### Remark VII.3.6.
The diagram is shown to be a pushout diagram in affine stacks, not in derived
stacks.
###### Proof.
This pushout diagram is equivalent to a pullback diagram in graded
cosimplicial algebras:
${{\mathbb{Z}_{(p)}}[u]/(u^{n+1})}$${{\mathbb{Z}_{(p)}}[u]/(u^{n+2})}$${{\mathbb{Z}_{(p)}}\oplus{\mathbb{Z}_{(p)}}[-2n-1]((n+1))}$${{\mathbb{Z}_{(p)}}}$${\llcorner}$
We define a diagram of commutative differential graded algebras:
${{\mathbb{Z}_{(p)}}[u,v]/(u^{n+2},dv=u^{n+1})}$${{\mathbb{Z}_{(p)}}[u]/(u^{n+1})}$${{\mathbb{Z}_{(p)}}[u]/(u^{n+2})}$${{\mathbb{Z}_{(p)}}\oplus{\mathbb{Z}_{(p)}}[-2n-1]((n+1))}$${{\mathbb{Z}_{(p)}}}$$\scriptstyle{f_{1}}$$\scriptstyle{\sim}$$\scriptstyle{f_{2}}$
where $f_{1}$ is a quasi-isomorphism sending $u$ to $u$ and $v$ to $0$, and
$f_{2}$ sends $u$ to $0$ and $v$ to $\epsilon_{2n+1}$, $v$ being in degree
$2n+1$ and weight $n+1$.
Let us denote by $T$ the fiber of
${\mathbb{Z}_{(p)}}[u,v]/(u^{n+2},dv=u^{n+1})\to{\mathbb{Z}_{(p)}}\oplus{\mathbb{Z}_{(p)}}[-2n-1]((n+1))$
over ${\mathbb{Z}_{(p)}}$. We check that sending $u$ to $u$ defines a quasi-
isomorphism
${\mathbb{Z}_{(p)}}[u]/(u^{n+2})\xrightarrow{\sim}T$
which recovers the canonical projection
${\mathbb{Z}_{(p)}}[u]/(u^{n+2})\to{\mathbb{Z}_{(p)}}[u]/(u^{n+1})$
after composing with $f_{1}$.
Therefore this diagram is a homotopy pullback diagram on the underlying
complexes: it is strictly a pullback diagram and $f_{2}$ is a fibration.
Since denormalization $D$ respects quasi-isomorphisms, applying $D$ yields the
required diagram, which is a homotopy pullback diagram since it is one on the
underlying complexes and the forgetful functor from cosimplicial algebras to
complexes is conservative.
∎
###### Lemma VII.3.7.
The diagram on $(\mathbb{N},\leq)$ defined by the tower $(BS^{1}_{gr})_{\leq
n}$ admits $BS^{1}_{gr}$ as a colimit in affine stacks.
###### Proof.
The canonical projections
${\mathbb{Z}_{(p)}}[u]\to{\mathbb{Z}_{(p)}}[u]/(u^{n+1})$, where $u$ is in
weight $1$ and cohomological degree $2$, are morphisms of graded cosimplicial
algebras. This defines a morphism of graded cosimplicial algebras
${\mathbb{Z}_{(p)}}[u]\to lim_{n}{\mathbb{Z}_{(p)}}[u]/(u^{n+1})$
which is an equivalence on the underlying complexes, therefore it is an
equivalence.
Passing to $Spec^{\Delta}$ yields the result, using Proposition IV.12.8.
∎
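The equivalence on underlying complexes used in the proof above can be checked weight by weight (my spelling-out of the argument): in a fixed weight the tower is eventually constant, so the limit loses nothing.

```latex
% In weight k, the component of \mathbb{Z}_{(p)}[u]/(u^{n+1}) is
% \mathbb{Z}_{(p)} \cdot u^k as soon as n \geq k, so the tower is eventually
% constant in each weight and the limit agrees with \mathbb{Z}_{(p)}[u]:
\Big(\lim_{n}\,{\mathbb{Z}_{(p)}}[u]/(u^{n+1})\Big)(k)
  \simeq {\mathbb{Z}_{(p)}}\cdot u^{k}
  \simeq \big({\mathbb{Z}_{(p)}}[u]\big)(k)
```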
As we want to compute
$Map_{St^{gr,*}}(BS^{1}_{gr},B\textbf{End}^{0}_{gr}(X))$, we first need the
following result:
###### Proposition VII.3.8.
The commutative diagram, denoted $(\star)$,
${(BS^{1}_{gr})_{\leq 2n}}$${(BS^{1}_{gr})_{\leq
2n+2}}$${S^{2n+1}_{gr}(n+1)}$${*}$${\llcorner}$
is a pushout against $B\textbf{End}_{gr}^{0}(X)$, meaning
${Map_{St^{gr,*}}((BS^{1}_{gr})_{\leq
2n},B\textbf{End}_{gr}^{0}(X))}$${Map_{St^{gr,*}}((BS^{1}_{gr})_{\leq
2n+2},B\textbf{End}_{gr}^{0}(X))}$${Map_{St^{gr,*}}(S^{2n+1}_{gr}(n+1),B\textbf{End}_{gr}^{0}(X))}$${*}$${\llcorner}$
is a pullback diagram.
The result is not obvious as $B\textbf{End}_{gr}^{0}(X)$ need not be an affine
stack.
###### Lemma VII.3.9.
The diagram $QCoh(\star)$ is a pullback diagram, i.e. the following diagram is
a pullback diagram.
${QCoh((BS^{1}_{gr})_{\leq 2n})}$${QCoh((BS^{1}_{gr})_{\leq
2n+2})}$${QCoh(S^{2n+1}_{gr}(n+1))}$${QCoh(*)}$${\llcorner}$
###### Proof.
We first show the canonical morphism
$QCoh((BS^{1}_{gr})_{\leq 2n+2})\to QCoh((BS^{1}_{gr})_{\leq
2n})\times_{QCoh(S^{2n+1}_{gr}(n+1))}QCoh(*)$
is an equivalence on bounded complexes.
We use the canonical t-structure on $QCoh(A)$ for $A$ an affine stack defined
in Remark IV.2.8. With this dévissage argument, we are reduced to showing the
morphism is an equivalence on hearts and on extension groups between objects
in the heart.
The $1$-connectivity of the stacks, see Proposition B.8, identifies their
quasi-coherent complexes with $Mod_{\mathbb{Z}_{(p)}}$. Therefore we have an
equivalence on the heart.
Invoking Proposition B.1, we deduce
$Map_{QCoh((BS^{1}_{gr})_{\leq 2n})}(M,N)\simeq C((BS^{1}_{gr})_{\leq
2n})\otimes Map_{Mod}(M,N)$
Similar results hold for $QCoh((BS^{1}_{gr})_{\leq 2n+2})$,
$QCoh(S^{2n+1}_{gr}(n+1))$ and $Mod$. We deduce the result from the pullback
diagram $C(\star)$. Now,
adding small limits gives the equivalence on eventually coconnective
complexes, then left-completing the category with respect to the t-structure
recovers the categories of quasi-coherent complexes and yields the
equivalence, see [MRT20, Notation 1.2.12] and [Lur18, §6.2].
∎
###### Lemma VII.3.10.
Let $F$ be a stack such that
* •
$F$ is $1$-connective, i.e. $\pi_{0}(F)=*$ and $\pi_{1}(F)=*$.
* •
$F$ is $n$-truncated for some $n$.
* •
$\pi_{i}(F)$ is quasi-coherent for all $i$.
then the diagram $(\star)$ is a pushout against $F$.
###### Proof.
The Postnikov tower of $F$ exhibits $F_{\leq i+1}$ as the fiber of
$F_{\leq i}\to K(\pi_{i+1},i+2)$
Therefore, we can assume $F$ is an Eilenberg-MacLane stack $K(\pi,i)$. We
have, for $Y$ a stack,
$Map_{St}(Y,F)=Map_{QCoh(Y)}(\mathcal{O}_{Y},p^{*}\pi[i])$
where $p:Y\to Spec({\mathbb{Z}_{(p)}})$ is the canonical projection. Now using
Lemma VII.3.9 and the fact that mapping spaces of fiber products of categories
are given by fiber products of the mapping spaces, we deduce that
$Map_{St}(-,F)$
sends $\star$ to a pullback diagram. ∎
###### Proof of Proposition VII.3.8.
We combine Lemmas VII.3.9 and VII.3.10 and the fact that
$B\textbf{End}_{gr}^{0}(X)$ is truncated. ∎
We now move on to the computation of
$Map_{St^{gr,*}}(BS^{1}_{gr},B\textbf{End}_{gr}^{0}(X))$ using Proposition
VII.3.8. We will compute
$Map_{St^{gr,*}}(S^{2n+1}_{f}(n+1),B\textbf{End}^{0}_{gr}(X))$ using the
Postnikov decomposition of $B\textbf{End}_{gr}^{0}(X)$. By convention,
$\pi_{n}$ denotes $\pi_{n}(\textbf{End}_{gr}^{0}(X))$; it is a graded abelian
sheaf. We will denote by $\Gamma(\pi_{n})$ the graded abelian group of global
sections and by $\Gamma^{\mathbb{G}_{m}}(\pi_{n})$ the abelian group of weight
$0$ sections.
We need a lemma to compute
$Map_{St^{gr,*}}(S^{1}_{gr},B\textbf{End}^{0}_{gr}(X))$.
###### Lemma VII.3.11.
Using the previous notations, we have an equivalence of spaces
$Map_{St^{gr,*}}(S^{2n+1}_{f}(n+1),B\textbf{End}^{0}_{gr}(X))\simeq\left\\{\begin{array}[]{ll}B\Gamma^{\mathbb{G}_{m}}(\pi_{1})&\text{if
}n=0\\\ \Gamma^{\mathbb{G}_{m}}(\pi_{2})&\text{if }n=1\\\ 0&\text{if
}n>1\end{array}\right.$
###### Proof.
The Postnikov tower exhibits the graded stack
$(B\textbf{End}_{gr}^{0}(X))_{\leq m}$ as a homotopy fiber
$(B\textbf{End}_{gr}^{0}(X))_{\leq m}\to(B\textbf{End}_{gr}^{0}(X))_{\leq
m-1}\to K(\pi_{m-1},m+1)$
We deduce a triangle of topological spaces
${Map_{St^{gr,*}}(S^{2n+1}_{f}(n+1),(B\textbf{End}_{gr}^{0}(X))_{\leq
m})}$${Map_{St^{gr,*}}(S^{2n+1}_{f}(n+1),(B\textbf{End}_{gr}^{0}(X))_{\leq
m-1})}$${Map_{St^{gr,*}}(S^{2n+1}_{f}(n+1),K(\pi_{m-1},m+1))}$
Now,
$\pi_{k}Map_{St^{gr,*}}(S^{2n+1}_{f}(n+1),K(\pi_{m-1},m+1))$
can be computed, following [To06, §1.3], as
$\pi_{k}\Gamma(B\mathbb{G}_{m},p_{*}p^{*}\pi_{m-1}[m+1])$
where $p:S^{2n+1}_{f}(n+1)\to B\mathbb{G}_{m}$ is the structure morphism. The
latter is simply given by
$H_{\mathbb{G}_{m}}^{m+1-k}(S^{2n+1}_{f}(n+1),\pi_{m-1})$
which is trivial when $n+1\neq m-1$ as $\pi_{m-1}$ is in weight $m-1$. When
$m=n+2$,
$H_{\mathbb{G}_{m}}^{n+3-k}(S^{2n+1}_{f}(n+1),\pi_{n+1})$
is non-trivial only when $k=2-n$, in which case it is simply
$\Gamma^{\mathbb{G}_{m}}(\pi_{n+1})$.
We deduce
$Map_{St^{gr,*}}(S^{1}_{f}(1),K(\pi_{1},3))\simeq
K(\Gamma^{\mathbb{G}_{m}}(\pi_{1}),2)$
$Map_{St^{gr,*}}(S^{3}_{f}(2),K(\pi_{2},4))\simeq
K(\Gamma^{\mathbb{G}_{m}}(\pi_{2}),1)$
$Map_{St^{gr,*}}(S^{5}_{f}(3),K(\pi_{3},5))\simeq\Gamma^{\mathbb{G}_{m}}(\pi_{3})$
Using the fact that $B\textbf{End}^{0}_{gr}(X)$ is truncated, so that its
Postnikov tower converges, we have
$Map_{St^{gr,*}}(S^{2n+1}_{f}(n+1),B\textbf{End}^{0}_{gr}(X))\simeq\Omega_{*}(Map_{St^{gr,*}}(S^{2n+1}_{f}(n+1),K(\pi_{n+1},n+3)))$
which yields the required result.
∎
###### Lemma VII.3.12.
For $n>1$,
$Map_{St^{gr,*}}((BS^{1}_{gr})_{\leq 2n+2},B\textbf{End}^{0}_{gr}(X))\to
Map_{St^{gr,*}}((BS^{1}_{gr})_{\leq 2n},B\textbf{End}^{0}_{gr}(X))$
is an equivalence.
###### Proof.
The result is easily deduced from Proposition VII.3.8 and Lemma VII.3.11. ∎
We need to compute the fiber product
${Map_{St^{gr,*}}((BS^{1}_{gr})_{\leq
2},B\textbf{End}^{0}_{gr}(X))}$${Map_{St^{gr,*}}((BS^{1}_{gr})_{\leq
4},B\textbf{End}^{0}_{gr}(X))}$${Map_{St^{gr,*}}(S^{3}(2),B\textbf{End}^{0}_{gr}(X))}$${*}$${\llcorner}$
that is
${\Gamma^{\mathbb{G}_{m}}(\pi_{1})}$${Map_{St^{gr,*}}((BS^{1}_{gr})_{\leq
4},B\textbf{End}^{0}_{gr}(X))}$${\Gamma^{\mathbb{G}_{m}}(\pi_{2})}$${*}$${\llcorner}$
Intuitively, we want
$\Gamma^{\mathbb{G}_{m}}(\pi_{1})\to\Gamma^{\mathbb{G}_{m}}(\pi_{2})$ to
send a pair $(\delta,d)$, with $\delta:A\to M$ and $d:M\to\Lambda M$, to its
"composition with itself". The equation $d\circ d=0$ involves $Sym_{A}(M[1])$
as a complex, without the full simplicial algebra structure. We first make the
following reduction.
###### Lemma VII.3.13.
The canonical commutative diagram
${Map_{St^{gr,*}}((BS^{1})_{\leq
4},B\textbf{End}^{0}_{SCR}(X))}$${Map_{St^{gr,*}}((BS^{1})_{\leq
4},B\textbf{End}_{Mod}(X))}$${Map_{St^{gr,*}}(S^{2}_{f}(1),B\textbf{End}^{0}_{SCR}(X))}$${Map_{St^{gr,*}}(S^{2}_{f}(1),B\textbf{End}_{Mod}(X))}$
is a pullback diagram.
###### Proof.
We have the following commutative diagram
${Map_{St^{gr,*}}((BS^{1})_{\leq
4},B\textbf{End}^{0}_{SCR}(X))}$${Map_{St^{gr,*}}((BS^{1})_{\leq
4},B\textbf{End}_{Mod}(X))}$${Map_{St^{gr,*}}(S^{2}_{f}(1),B\textbf{End}^{0}_{SCR}(X))}$${Map_{St^{gr,*}}(S^{2}_{f}(1),B\textbf{End}_{Mod}(X))}$${Map_{St^{gr,*}}(S^{3}_{f}(2),B\textbf{End}^{0}_{SCR}(X))}$${Map_{St^{gr,*}}(S^{3}_{f}(2),B\textbf{End}_{Mod}(X))}$
showing a natural transformation of triangles of spaces. Formality of
$\textbf{End}_{Mod}(X)$ allows for a computation of
$Map_{St^{gr,*}}(S^{3}_{f}(2),B\textbf{End}_{Mod}(X))$ and
$Map_{St^{gr,*}}(S^{2}_{f}(1),B\textbf{End}_{Mod}(X))$ using similar methods
as before. We compute the homotopy groups of $\textbf{End}^{0}_{Mod}(X)$:
$\Gamma^{\mathbb{G}_{m}}(\pi_{k}(\textbf{End}^{0}_{Mod}(X)))=\bigoplus_{n}Hom(\Lambda^{n}M,\Lambda^{n+k}M)$
A Postnikov decomposition argument yields
$Map_{St^{gr,*}}(S^{3}_{f}(2),B\textbf{End}_{Mod}(X))\simeq\Gamma^{\mathbb{G}_{m}}(\pi_{2}(\textbf{End}^{0}_{Mod}(X)))=\bigoplus_{n}Hom(\Lambda^{n}M,\Lambda^{n+2}M)$
is discrete. Similarly,
$Map_{St^{gr,*}}(S^{2}_{f}(1),B\textbf{End}_{Mod}(X))\simeq\Gamma^{\mathbb{G}_{m}}(\pi_{1}(\textbf{End}^{0}_{Mod}(X)))=\bigoplus_{n}Hom(\Lambda^{n}M,\Lambda^{n+1}M)$
is also discrete. To show the commutative diagram of the lemma is a pullback
diagram, we simply show
$Map_{St^{gr,*}}(S^{3}_{f}(2),B\textbf{End}^{0}_{SCR}(X))\to
Map_{St^{gr,*}}(S^{3}_{f}(2),B\textbf{End}_{Mod}(X))$
is injective. But this map is the inclusion
$\Gamma^{\mathbb{G}_{m}}(\pi_{2})\subset\bigoplus_{n}Hom(\Lambda^{n}M,\Lambda^{n+2}M)$.
∎
We need to compute the fiber of the morphism
$Map_{St^{gr,*}}(S^{2}_{f}(1),B\textbf{End}_{Mod}(X))\to
Map_{St^{gr,*}}(S^{3}_{f}(2),B\textbf{End}_{Mod}(X))$
Taking $B$ a test commutative algebra,
$B\textbf{End}_{QCoh(Spec({\mathbb{Z}_{(p)}}))}(X)(B)\simeq
B\textbf{End}_{QCoh(B)}(Sym_{A_{B}}(M_{B}[1]))$
which is the connected component of the space $QCoh(B)$ at
$Sym_{A_{B}}(M_{B}[1])$.
This defines a morphism of stacks
$B\textbf{End}_{QCoh(Spec({\mathbb{Z}_{(p)}}))}(X)\to QCoh(-)$ that is fully
faithful, meaning it induces an equivalence on homotopy sheaves $\pi_{k}$, for
$k>0$.
This induces a commutative diagram
${Map_{St^{gr,*}}(S^{2}_{f}(1),B\textbf{End}_{Mod}(X))}$${Map_{St^{gr,*}}(S^{3}_{f}(2),B\textbf{End}_{Mod}(X))}$${Map_{St^{gr,*}}(S^{2}_{f}(1),QCoh(-))}$${Map_{St^{gr,*}}(S^{3}_{f}(2),QCoh(-))}$
Therefore we identify the fiber of the top arrow with the fiber of the bottom
arrow.
We now compute the fiber of
$QCoh(S^{2}_{f}(1))\simeq{\mathbb{Z}_{(p)}}[u]/u^{2}-Mod\to{\mathbb{Z}_{(p)}}[v]-Mod\simeq
QCoh(S^{3}_{f}(2))$
using a Koszul duality argument.
Using Remark IV.2.8, we can simply compute the fiber of the morphism
${\mathbb{Z}_{(p)}}[u]/u^{2}-Mod\to{\mathbb{Z}_{(p)}}[v]-Mod$
where $u$ is in degree $2$ and $v$ is in degree $3$.
###### Lemma VII.3.14 (Koszul duality).
We have equivalences
${\mathbb{Z}_{(p)}}\\{\alpha\\}-Mod\simeq
IndCoh({\mathbb{Z}_{(p)}}[u]/u^{2}-Mod)$
and
${\mathbb{Z}_{(p)}}\\{\beta\\}-Mod\simeq IndCoh({\mathbb{Z}_{(p)}}[v]-Mod)$
where ${\mathbb{Z}_{(p)}}\\{\alpha\\}$ is the free differential graded algebra on
one generator $\alpha$ in degree $1$, ${\mathbb{Z}_{(p)}}\\{\beta\\}$ is the free
differential graded algebra on one generator $\beta$ in degree $2$, and
$IndCoh(-)$ denotes the completion by filtered colimits applied to the
quasi-coherent complex construction $QCoh(-)$.
Moreover, the induced morphism
${\mathbb{Z}_{(p)}}\\{\alpha\\}-Mod\to{\mathbb{Z}_{(p)}}\\{\beta\\}-Mod$
sends $(M,\alpha:M\to M[-1])$ to $(M,\alpha\circ\alpha:M\to M[-2])$; that is,
it is the forgetful functor induced by
$\beta\in{\mathbb{Z}_{(p)}}\\{\beta\\}\mapsto\alpha\circ\alpha\in{\mathbb{Z}_{(p)}}\\{\alpha\\}$
###### Proof.
The functor $Hom_{{\mathbb{Z}_{(p)}}[u]/u^{2}}({\mathbb{Z}_{(p)}},-)$ gives an
adjunction
$End_{{\mathbb{Z}_{(p)}}[u]/u^{2}}({\mathbb{Z}_{(p)}})-Mod\leftrightarrows
IndCoh({\mathbb{Z}_{(p)}}[u]/u^{2}-Mod)$
which is an equivalence of categories by the Schwede-Shipley-Lurie theorem, see
[Lur17, Theorem 7.1.2.1]. We can take ${\mathbb{Z}_{(p)}}$ as a compact
generator of $IndCoh({\mathbb{Z}_{(p)}}[u]/u^{2}-Mod)$.
Similarly, $Hom_{{\mathbb{Z}_{(p)}}[v]}({\mathbb{Z}_{(p)}},-)$ induces the
equivalence
$End_{{\mathbb{Z}_{(p)}}[v]}({\mathbb{Z}_{(p)}})-Mod\leftrightarrows
IndCoh({\mathbb{Z}_{(p)}}[v]-Mod)$
Now to conclude the proof of the lemma, we only need to show
$Ext^{2}_{{\mathbb{Z}_{(p)}}[v]}({\mathbb{Z}_{(p)}},{\mathbb{Z}_{(p)}})\simeq{\mathbb{Z}_{(p)}}\to{\mathbb{Z}_{(p)}}\simeq
Ext^{2}_{{\mathbb{Z}_{(p)}}[u]/u^{2}}({\mathbb{Z}_{(p)}},{\mathbb{Z}_{(p)}})$
is an isomorphism. The standard resolutions of ${\mathbb{Z}_{(p)}}$ yield the
required result. ∎
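To make that last step explicit, here is a sketch of the two standard resolutions (internal gradings suppressed; the claim about the comparison map should be checked against the conventions above):

```latex
% Periodic resolution of Z_(p) over Z_(p)[u]/u^2:
\cdots \xrightarrow{\;\cdot u\;} \mathbb{Z}_{(p)}[u]/u^{2}
       \xrightarrow{\;\cdot u\;} \mathbb{Z}_{(p)}[u]/u^{2}
       \longrightarrow \mathbb{Z}_{(p)} \longrightarrow 0
% Koszul resolution of Z_(p) over Z_(p)[v]:
0 \longrightarrow \mathbb{Z}_{(p)}[v]
  \xrightarrow{\;\cdot v\;} \mathbb{Z}_{(p)}[v]
  \longrightarrow \mathbb{Z}_{(p)} \longrightarrow 0
% Applying Hom(-, Z_(p)) to either resolution yields a complex with zero
% differentials, so each Ext group in the relevant degree is a free
% Z_(p)-module of rank one, and one checks that the comparison map sends a
% generator to a generator.
```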
The fiber of
${\mathbb{Z}_{(p)}}\\{\alpha\\}-Mod\to{\mathbb{Z}_{(p)}}\\{\beta\\}-Mod$ is
given by the pairs $(M,\alpha:M\to M[-1])$ where $M$ is a complex and
$\alpha\circ\alpha\simeq 0$, which is exactly the relation we wanted.
∎
#### VII.3.2 Classification of mixed structures: the Dieudonné case
###### Theorem VII.3.15.
Let $A$ be a smooth commutative ${\mathbb{Z}_{(p)}}$-algebra and $M$ a projective
$A$-module of finite type. We fix a derived Frobenius lift structure on the
graded simplicial algebra $Sym_{A}(M[1])$, with $M$ in weight $1$. By
Proposition VII.1.8, such a structure is equivalent to the data of a classical
Frobenius lift $F$ on $A$ and a linear map of $A$-modules $\phi:M\to M$. We
define $X$ as the derived affine scheme $Spec(Sym_{A}(M[1]))=\mathbb{V}(M[1])$
endowed with its natural grading, and we regard it as an element of
$dSt^{gr,Frob}$. The classifying space
of Dieudonné mixed graded structures on $X$ compatible with its grading and
Frobenius structure is discrete and in bijection with the set of Dieudonné
algebra structures on the graded commutative ${\mathbb{Z}_{(p)}}$-algebra
$\bigoplus_{i}\wedge^{i}_{A}M[-i]$ endowed with its natural canonical
Frobenius lift structure.
###### Proof.
The classifying space of mixed structures is given by the fiber of the
forgetful functor
$S^{1}_{gr}-dSt^{gr,Frob}\to dSt^{gr,Frob}$
over $X$.
The classifying space is given by the mapping space
$Map_{Mon(dSt^{gr,Fr})}(S^{1}_{gr},\underline{\textbf{End}}_{gr,Fr}(X))$, where
$\underline{\textbf{End}}_{gr,Fr}(X)$ is a monoid in graded derived stacks
endowed with a Frobenius lift. By connectedness of $S^{1}_{gr}$, the classifying
space is equivalent to
$Map_{Mon(dSt^{gr,Fr})}(S^{1}_{gr},\underline{\textbf{End}}^{0}_{gr,Fr}(X))$
which is equivalent to the mapping space of pointed stacks
$Map_{dSt^{gr,Fr,*}}(BS^{1}_{gr},B\underline{\textbf{End}}^{0}_{gr,Fr}(X))$
Here $\underline{\textbf{End}}^{0}_{gr,Fr}(X)$ is the full sub-stack of
$\underline{\textbf{End}}_{gr,Fr}(X)$ on the endomorphisms that are homotopic
to the identity. Since $BS^{1}_{gr}$ is a stack, we may consider
$B\underline{\textbf{End}}^{0}_{gr,Fr}(X)$ as a stack in $St^{gr,Fr}\subset
dSt^{gr,Fr}$.
We first reduce the classification to the computation of mapping spaces
compatible with endomorphisms instead of Frobenius structures. The mapping
space
$Map_{Gp(dSt^{gr})^{Fr}}(S^{1}_{gr},\underline{\textbf{End}}_{gr,Fr}^{0}(X))$
is computed by the fiber product
$Map_{Gp(dSt^{gr})^{endo}}(S^{1}_{gr},Z)\times_{Map_{Gp(dSt_{p}^{gr})^{endo}}(S^{1}_{gr,p}(0),Z_{p}(0))}Map_{Gp(dSt_{p}^{gr})}(S^{1}_{gr,p}(0),Z_{p}(0))$
where $Z\coloneqq\underline{\textbf{End}}_{gr,Fr}^{0}(X)$ and the $p$ index
denotes base change to $\mathbb{F}_{p}$.
We start with a lemma computing $Z_{p}(0)$.
###### Lemma VII.3.16.
We have
$\underline{\textbf{End}}^{0}_{gr,endo}(X)(0)\simeq*$
###### Proof.
By definition of $\textbf{End}^{0}$, we have a triangle of derived stacks
$\underline{\textbf{End}}^{0}_{gr,endo}(X)\to\underline{\textbf{End}}_{gr,endo}(X)\to\pi_{0}(\underline{\textbf{End}}_{gr,endo}(X))$
and since taking $0$-weighted invariants of a graded stack preserves limits
(see Proposition IV.8.3), we have an induced triangle
$\textbf{End}^{0}_{gr,endo}(X)(0)\to\textbf{End}_{gr,endo}(X)(0)\to\pi_{0}(\textbf{End}_{gr,endo}(X))(0)\simeq\pi_{0}(\textbf{End}_{gr,endo}(X)(0))$
We only need to see that $\textbf{End}_{gr,endo}(X)(0)$ is a discrete stack.
Now, by Proposition II.3.4, $\textbf{End}_{gr,endo}(X)$ is computed as an
equalizer
$\textbf{End}_{gr,endo}(X)\simeq
eq(\textbf{End}_{gr}(X)^{\mathbb{N}}\rightrightarrows\textbf{End}_{gr}(X)(0)^{\mathbb{N}})$
which gives, on $0$-weighted invariants:
$\textbf{End}_{gr,endo}(X)(0)\simeq
eq(\textbf{End}_{gr}(X)(0)^{\mathbb{N}}\rightrightarrows\textbf{End}_{gr}(X)(0)^{\mathbb{N}})$
From the proof of Theorem VII.3.1, $\textbf{End}_{gr}(X)(0)$ is discrete,
which concludes the proof. ∎
###### Lemma VII.3.17.
If we denote by $\phi$ the endomorphism of $\textbf{End}^{0}_{gr,Fr}(X)$, by
$h^{\prime}$ the homotopy between $\phi_{p}$ and $Fr_{p}$, and by $\psi$ the
endomorphism of $\textbf{End}^{0}_{gr,endo}(X)$, then we have an equivalence
between the underlying graded derived stacks endowed with endomorphisms
$(\underline{\textbf{End}}^{0}_{gr,Fr}(X),\phi)\simeq(\underline{\textbf{End}}^{0}_{gr,endo}(X),\psi)$
meaning that we forget the "being homotopic to the canonical Frobenius modulo
$p$" part on the left hand side.
###### Proof.
We will promote $(\underline{\textbf{End}}^{0}_{gr,endo}(X),\psi)$ from an
object with an endomorphism to an object with a Frobenius lift in an
essentially unique way, then we will show that this object has the universal
property that holds for $(\underline{\textbf{End}}^{0}_{gr,Fr}(X),\phi)$.
Promoting $(\textbf{End}^{0}_{gr,endo}(X),\psi)$ to a graded derived stack
with a Frobenius has
$Map_{\underline{\textbf{End}}^{0}_{gr,endo}(X)_{p}(0)}(\psi_{p},Fr_{p})$
as a space of choices. By Lemma VII.3.16, this mapping space is discrete and
equivalent to a point, hence contractible, and
$(\textbf{End}^{0}_{gr,endo}(X),\psi)$ can be promoted to an object with
Frobenius in an essentially unique way:
$(\textbf{End}^{0}_{gr,endo}(X),\psi,h)$.
Let $(T,\lambda,h_{T})$ be an object of $dSt^{Fr}$, and write $\mu$ for the
endomorphism of $X$ and $h_{X}$ for the homotopy between $\mu_{p}$ and $Fr_{p}$.
The data of a morphism
$(T,\lambda)\times(X,\mu)\to(X,\mu)$
is equivalent to the data of a morphism
$(T,\lambda)\to(\underline{\textbf{End}}^{0}_{gr,endo}(X),\psi)$
by the universal property of
$(\underline{\textbf{End}}^{0}_{gr,endo}(X),\psi)$, which is equivalent to the
data of a morphism of objects with Frobenius lifts
$(T,\lambda,h_{T})\to(\underline{\textbf{End}}^{0}_{gr,endo}(X),\psi,h)$
since the choice of compatibility of the morphism of endomorphism objects with
the Frobenius lift structures is given by the data of a commutative cube as
follows: a commutative cube with vertices $Y_{p}(0)$, $T_{p}(0)$ and
$Z_{p}(0)$, with edges given by $id$, $\lambda_{p}$, $\phi_{p}$ and $Fr_{p}$,
and with faces filled by the homotopies $h_{T}$ and $h$,
where $Z\coloneqq\underline{\textbf{End}}^{0}_{gr,endo}(X)$. This data lives
in a $2$-simplex in $Map_{gr}(Y_{p}(0),Z_{p}(0))$ and since $Z_{p}(0)$ is
contractible, so is $Map_{gr}(Y_{p}(0),Z_{p}(0))$, therefore the morphism of
endomorphism objects extends in an essentially unique way to a morphism of
objects with Frobenius lifts.
Now a similar argument shows that any morphism
$(T,\lambda)\times(X,\mu)\to(X,\mu)$
extends in an essentially unique way to a morphism of objects with Frobenius
lifts
$(T,\lambda,h_{T})\times(X,\mu,h_{X})\to(X,\mu,h_{X})$
Therefore $(\underline{\textbf{End}}^{0}_{gr,endo}(X),\psi,h)$ satisfies the
universal property of
$(\underline{\textbf{End}}^{0}_{gr,Fr}(X),\phi,h^{\prime})$ and they are
equivalent. ∎
###### Lemma VII.3.18.
We have
$\underline{\textbf{End}}_{gr,Fr}^{0}(X)_{p}(0)\simeq*$
###### Proof.
It is enough to show that $\underline{\textbf{End}}_{gr,Fr}^{0}(X)(0)$ is
contractible. We simply combine Lemma VII.3.16 and Lemma VII.3.17.
∎
We deduce that
$Map_{Gp(dSt^{gr,Fr})}(S^{1}_{gr},\underline{\textbf{End}}_{gr,Fr}^{0}(X))\simeq
Map_{Gp(dSt^{gr,endo})}(S^{1}_{gr},\underline{\textbf{End}}_{gr,endo}^{0}(X))$
We have reduced the study of the Frobenius structure to a mere endomorphism
structure. The previous mapping space is given by the equalizer
$Map_{Gp(dSt^{gr})}(S^{1}_{gr},\underline{\textbf{End}}_{gr,endo}^{0}(X))\rightrightarrows
Map_{Gp(dSt^{gr})}(S^{1}_{gr},\underline{\textbf{End}}_{gr,endo}^{0}(X))$
where the top map is given by precomposing with $[p]$ and the bottom map by
postcomposing with the endomorphism $S$ of
$\underline{\textbf{End}}_{gr,endo}^{0}(X)$.
Now, an element of
$Map_{Gp(dSt^{gr})}(S^{1}_{gr},\underline{\textbf{End}}_{gr,endo}^{0}(X))$ can
be described using Proposition II.3.4 as a family of maps
$f_{k}:S^{1}_{gr}\to\textbf{End}_{gr}^{0}(X)$
with $prec_{\phi_{X}}\circ f_{k+1}=post_{\phi_{X}}\circ f_{k}$, where $prec$
and $post$ are the precomposing and postcomposing morphisms. The equalizer is
then given by a family $(f_{k})$ such that $f_{k+1}=f_{k}\circ[p]$. This
amounts to the data of a single map
$f:S^{1}_{gr}\to\textbf{End}_{gr}^{0}(X)$
such that $prec_{\phi_{X}}\circ f\circ[p]=post_{\phi_{X}}\circ f$.
We now use the classification for classical mixed structures, Theorem VII.3.1,
to identify the element
$f:S^{1}_{gr}\to\textbf{End}_{gr}^{0}(X)$
with a couple $(\delta:A\to M,d:M\to\Lambda^{2}_{A}M)$ satisfying the usual
equations. Unwinding the action of the endomorphisms on $f$, we see that
precomposing with $\phi_{X}$ induces the postcomposition map
$(\delta,d)\mapsto(\phi_{M}\circ\delta,(\phi_{M}\wedge\phi_{M})\circ d)$
Similarly, postcomposing with $\phi_{X}$ induces the precomposition map
$(\delta,d)\mapsto(\delta\circ\phi_{A},d\circ\phi_{M})$
Finally, the action of $[p]$ is multiplication by $p$:
$(\delta,d)\mapsto(p\delta,pd)$
The equations are therefore
$\phi_{M}\circ\delta=p\delta\circ\phi_{A}$
and
$(\phi_{M}\wedge\phi_{M})\circ d=p\,d\circ\phi_{M}$
which are the relations required for $(\delta,d)$ to define a Dieudonné
algebra structure on $\bigoplus_{i}\wedge^{i}_{A}M[-i]$. ∎
###### Corollary VII.3.19.
With the notations of Theorem VII.3.15, the graded derived affine scheme
$Spec(Sym_{A}(\Omega_{A}[1]))$ admits a unique Dieudonné structure induced by
the standard Dieudonné structure on $\bigoplus_{i\geq
0}\Lambda_{A}^{i}\Omega_{A}$.
#### VII.3.3 De Rham-Witt complex
###### Construction VII.3.20.
The functor of "simplicial functions"
$\mathcal{O}:dSt^{op}\to SCR$
induces a functor from $G$-equivariant derived stacks, where $G$ is a group in
$dSt$:
$\mathcal{O}:G-dSt^{op}\to G-SCR$
where $G-SCR$ is abusively defined as the category of relative affine derived
schemes over $BG$. We can therefore see this construction as taking the
scheme-affinization of a derived stack, relative to $BG$, using Proposition
IV.3.4.
By taking $G=S^{1}_{gr}\rtimes(\mathbb{G}_{m}\times\mathbb{N})$, we can check
that we obtain a functor
$\mathcal{O}:\epsilon-D-dSt^{gr,op}\to\epsilon-D-SCR$
###### Definition VII.3.21.
Let $(A,F)$ be a simplicial algebra with a Frobenius lift. We define the
Dieudonné de Rham complex of $(A,F)$ as
$\mathcal{O}(\mathcal{L}^{gr,Fr}(Spec(A),Spec(F)))$
where $\mathcal{O}$ is the functor of "simplicial functions" on derived
Dieudonné stacks
$\epsilon-D-dSt^{gr,op}\to\epsilon-D-SCR^{gr}$
defined in the previous construction. We denote the resulting object by
$DDR(A,F)$.
###### Theorem VII.3.22.
Let $A$ be a smooth discrete $p$-torsion-free algebra. Then the Dieudonné de
Rham complex of $A$ coincides with the de Rham complex endowed with its
classical Dieudonné structure defined in [BLM22].
###### Proof.
It is a simple application of Corollary VII.3.19. ∎
###### Proof (of Corollary VII.3.19).
It is an easy corollary of Theorem VII.3.15. ∎
###### Theorem VII.3.23.
The Dieudonné de Rham functor
$SCR^{Fr}\to\epsilon-D-SCR^{gr}$
is left adjoint to the forgetful functor
$(0):\epsilon-D-SCR^{gr}\to SCR^{Fr}$
###### Proof.
This is an application of the definition of the various categories and the
Dieudonné de Rham functor. ∎
###### Theorem VII.3.24.
For $A$ a commutative algebra, not necessarily smooth, the t-truncation of
$DDR(A)$, with respect to the Beilinson t-structure, is equivalent to the
classical Dieudonné complex. That is
$t_{\geq 0}(DDR(A))\simeq i(\Omega^{\bullet}_{A})$
where $\Omega^{\bullet}_{A}$ is endowed with its canonical classical Dieudonné
structure, see Proposition VI.3.14, and $i$ is the functor
$i:\textbf{DA}\to\epsilon-D-SCR^{gr}$
identifying the category of classical Dieudonné algebras with the heart of
$\epsilon-D-SCR^{gr}$.
###### Proof.
As the t-truncation is compatible with the forgetful map
$\epsilon-D-SCR^{gr}\to\epsilon-Mod^{gr}$
and the forgetful map is conservative, the natural morphism
$Sym_{A}(\Omega_{A}^{\bullet}[1])\to t_{\geq 0}(Sym_{A}(\mathbb{L}_{A}[1]))$
is an equivalence on underlying graded mixed Dieudonné complexes, therefore it
is an equivalence. ∎
### VII.4 Saturation
In this section we study in more detail the properties of saturation and
compare it to the classical saturation for Dieudonné algebras.
#### VII.4.1 The décalage functor
###### Definition VII.4.1.
We define an endofunctor of $\epsilon-Mod^{gr}_{\mathbb{Z}_{(p)}}$ denoted
$[p]^{*}$ by forgetting along
$\epsilon\in{\mathbb{Z}_{(p)}}[\epsilon]\mapsto
p\epsilon\in{\mathbb{Z}_{(p)}}[\epsilon]$
###### Definition VII.4.2.
We define the décalage $\eta_{p}$ functor as an endofunctor of $1$-categories
$\bf{\epsilon-Mod^{gr}_{\mathbb{Z}_{(p)}}}$ sending $M$ to the graded mixed
subcomplex of $M\times M$ on elements $(x,y)$ such that $\epsilon x=py$ and
$\epsilon y=0$.
###### Proposition VII.4.3.
We have an adjunction of $1$-categories
$[p]^{*}:\bf{\epsilon-
Mod^{gr}_{\mathbb{Z}_{(p)}}}\rightleftarrows\bf{\epsilon-
Mod^{gr}_{\mathbb{Z}_{(p)}}}:\eta_{p}$
###### Proposition VII.4.4.
The adjunction is a Quillen adjunction
$[p]^{*}:\bf{(\epsilon-dg-
mod^{gr}_{\mathbb{Z}_{(p)}})_{inj}}\rightleftarrows\bf{(\epsilon-dg-
mod^{gr}_{\mathbb{Z}_{(p)}})_{inj}}:\eta_{p}$
Therefore this adjunction induces an adjunction between the corresponding
$\infty$-categories.
###### Proof.
The injective model structure on $(\epsilon-dg-
mod^{gr}_{\mathbb{Z}_{(p)}})_{inj}$ is defined by transfer along the forgetful
functor $U:\bf{\epsilon-dg-mod^{gr}_{\mathbb{Z}_{(p)}}}\to\bf{dg-
mod_{\mathbb{Z}_{(p)}}}$. We need to verify that $F^{*}_{p}$ sends
cofibrations and trivial cofibrations to cofibrations and trivial
cofibrations. Since for any graded mixed complex $M$, the underlying complex
of $[p]^{*}M$ and $M$ are the same, the result is obvious. ∎
###### Proposition VII.4.5.
Under the equivalence $(\epsilon-
Mod^{gr}_{\mathbb{Z}_{(p)}})^{\heartsuit}\simeq Mod_{\mathbb{Z}_{(p)}}$ of
VII.2.15, the endofunctor $\eta_{p}$ induces an endofunctor on
$Mod_{\mathbb{Z}_{(p)}}$, which, when restricted to $p$-torsion-free complexes,
identifies with the décalage functor $\eta_{p}$ from [BLM22].
###### Proof.
When $M$ is a $p$-torsion free graded mixed complex, an element $x\in M_{n}$
such that $\epsilon x$ is $p$-divisible defines a unique $y\in M_{n+1}$ such
that $\epsilon x=py$, furthermore $\epsilon y=0$ is automatic since $p\epsilon
y=\epsilon^{2}x=0$. ∎
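For comparison, we recall the formula defining the décalage of [BLM22] on a $p$-torsion-free cochain complex (a reminder of the cited construction, up to the evident reindexing):

```latex
% Décalage à la [BLM22]: for (M^*, d) p-torsion-free,
(\eta_{p}M)^{n} \;=\; \bigl\{\, x \in p^{n}M^{n} \;\big|\; dx \in p^{n+1}M^{n+1} \,\bigr\},
\qquad d_{\eta_{p}M} \;=\; d\big|_{\eta_{p}M}.
```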
###### Remark VII.4.6.
From the definition of Dieudonné structures on complexes and the adjunction of
Proposition VII.4.3, the data of a Dieudonné structure on a graded mixed
complex is equivalent to the data of a lax fixed-point structure with respect
to $\eta_{p}$.
###### Proposition VII.4.7.
The functor $\eta_{p}:\epsilon-Mod^{gr}_{\mathbb{Z}_{(p)}}\to\epsilon-
Mod^{gr}_{\mathbb{Z}_{(p)}}$ commutes with filtered colimits.
###### Proof.
The forgetful functor $U:\epsilon-Mod^{gr}_{\mathbb{Z}_{(p)}}\to
Mod_{\mathbb{Z}_{(p)}}$ is conservative and commutes with filtered colimits.
Furthermore the composition $U\circ\eta_{p}$ is given by forming a finite limit
$M\mapsto M\times_{M[1]}M[1]\times_{M[2]}0$
where the arrows are $\epsilon:M\to M[1]$, $-\times p:M[1]\to M[1]$,
$\epsilon:M[1]\to M[2]$ and $0:0\to M[2]$. Since finite limits commute with
filtered colimits, $U\circ\eta_{p}$ commutes with filtered colimits, hence so
does $\eta_{p}$. ∎
###### Proposition VII.4.8.
The fully faithful inclusion of fixed points
$i_{FP}:FP_{\eta_{p}}(\epsilon-Mod^{gr}_{\mathbb{Z}_{(p)}})\subset
LFP_{\eta_{p}}(\epsilon-Mod^{gr}_{\mathbb{Z}_{(p)}})$
admits a left adjoint.
###### Proof.
The category $\epsilon-Mod^{gr}_{\mathbb{Z}_{(p)}}$ is presentable as a module
category. The functor $\eta_{p}$ commutes with filtered colimits (Proposition
VII.4.7) and with small limits (Proposition VII.4.3). We conclude using
Proposition A.3. ∎
###### Definition VII.4.9.
Identifying the category of mixed Dieudonné complexes with the category
$LFP_{\eta_{p}}(\epsilon-Mod^{gr}_{\mathbb{Z}_{(p)}})$, the subcategory of
saturated mixed Dieudonné complexes is defined as $FP_{\eta_{p}}(\epsilon-
Mod^{gr}_{\mathbb{Z}_{(p)}})$, we denote it $\epsilon-D-Mod^{gr,sat}$.
#### VII.4.2 Saturated Dieudonné algebra
###### Definition VII.4.10.
We define the décalage functor for Dieudonné stacks. Let $\pi:X\to
BS^{1}_{gr}$ be a Dieudonné stack; the décalé Dieudonné stack is simply $X$
endowed with the composed structure morphism
$X\xrightarrow{\pi}BS^{1}_{gr}\xrightarrow{[p]}BS^{1}_{gr}$. This décalage
construction defines an endofunctor of $D-dSt$.
###### Remark VII.4.11.
A Dieudonné simplicial algebra, that is a Dieudonné derived stack $X\to
BS^{1}_{gr}$ which is relatively affine, has an associated décalé Dieudonné
derived stack, which is not necessarily a Dieudonné simplicial algebra.
###### Proposition VII.4.12.
The décalage construction has a right adjoint given by taking the fiber
product of the structure map along $[p]:BS^{1}_{gr}\to BS^{1}_{gr}$.
###### Definition VII.4.13.
We define $\epsilon-D-SCR^{gr,sat}$, the category of saturated mixed graded
Dieudonné simplicial algebras, as the subcategory of mixed graded Dieudonné
algebras whose underlying mixed graded Dieudonné complex is saturated; that
is,
$\epsilon-D-SCR^{gr,sat}=\epsilon-D-SCR^{gr}\times_{\epsilon-D-
Mod^{gr}}\epsilon-D-Mod^{gr,sat}$
###### Proposition VII.4.14.
The inclusions
$i:\epsilon-D-Mod^{gr,sat}\subset\epsilon-D-Mod^{gr}$
and
$j:\epsilon-D-SCR^{gr,sat}\subset\epsilon-D-SCR^{gr}$
both admit a left adjoint.
###### Proof.
For the first inclusion, it follows from Remark VII.4.6 and Proposition A.3.
By definition of saturated Dieudonné algebras, we have a pullback diagram :
${\epsilon-D-SCR^{gr,sat}}$${\epsilon-D-SCR^{gr}}$${\epsilon-D-
Mod^{gr,sat}}$${\epsilon-D-
Mod^{gr}}$$\scriptstyle{U_{0}}$$\scriptstyle{j}$${\lrcorner}$$\scriptstyle{U}$$\scriptstyle{i}$
We show $j$ commutes with filtered colimits and small limits. Since $U$
commutes with these colimits and limits and is conservative, it suffices to
show that $U\circ j\simeq i\circ U_{0}$ preserves the required colimits and
limits. The functor $i$ preserves these colimits and limits by the proof of
Proposition A.3, and so does $U_{0}$, since it is the projection of a fiber
product and fiber products commute with filtered colimits and small limits. We
conclude with the adjoint functor Theorem II.1.2.
∎
###### Remark VII.4.15.
We note that the décalé Dieudonné derived stack of a Dieudonné stack has
functions given by the saturation of the functions of the Dieudonné stack;
this construction therefore gives a geometric interpretation of saturation.
## VIII Future perspectives
### VIII.1 Improvements
We outline several possible directions for improvements and generalizations of
our results. In this thesis, we have defined the de Rham-Witt complex of a
simplicial $\mathbb{Z}_{(p)}$-algebra with a Frobenius lift. In order to
define the de Rham-Witt complex of a simplicial $\mathbb{F}_{p}$-algebra $R$,
we will need to construct the Verschiebung functor $V$. We will consider a
completed version, with respect to a $V$-filtration, of the Dieudonné de Rham
algebra of $(W(R),F)$.
##### The $V$ functor and the $V$ filtration
Following the construction of the de Rham-Witt complex in [BLM22], we can
construct a Verschiebung map $V$. The fundamental compatibility
$FV=p$
is expected to hold. From it we deduce, assuming we work with $p$-torsion-free
saturated Dieudonné complexes,
$p\epsilon V=V\epsilon$
That is, we want to construct a decomposition of multiplication by $p$:
$M\xrightarrow{V}[p]^{*}M\xrightarrow{F}M$
The construction of such a map is obvious for $M$ a $p$-torsion-free saturated
derived Dieudonné complex. We assume to have constructed such a map $V$.
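As a sketch of the expected deduction above, on a $p$-torsion-free saturated complex where $F$ is injective and $V=pF^{-1}$ makes sense, and assuming the Dieudonné relation in the form $\epsilon F=pF\epsilon$:

```latex
% From \epsilon F = p F \epsilon, conjugating by F gives
% F^{-1}\epsilon = p\,\epsilon F^{-1}, hence
V\epsilon \;=\; pF^{-1}\epsilon \;=\; p\,(p\,\epsilon F^{-1})
        \;=\; p\,\epsilon\,(pF^{-1}) \;=\; p\,\epsilon V.
```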
###### Definition VIII.1.1.
For $M$ a saturated derived Dieudonné complex, we denote $\mathcal{W}_{r}(M)$
the graded mixed complexes representing the functor
$N\in\epsilon-Mod^{gr}\mapsto Map^{V^{r}}(M,N)$
where $Map^{V^{r}}(M,N)$ is the mapping space of graded mixed morphisms $M\to
N$ such that the composition $M\xrightarrow{V^{r}}M\to N$ is homotopic to
zero. That is, $Map^{V^{r}}(M,N)$ is the fiber of
$Map_{\epsilon-Mod^{gr}}(M,N)\xrightarrow{-\circ V^{r}}Map_{Mod}(M,N)$
We have a canonical restriction map
$res:M\to lim_{r}\mathcal{W}_{r}(M)$
We denote $\mathcal{W}(M)\coloneqq lim_{r}\mathcal{W}_{r}(M)$.
The Dieudonné complex $M$ is said to be strict when $res$ is an equivalence.
###### Remark VIII.1.2.
The usual definition of $\mathcal{W}_{r}(M)$ is given by
$\mathcal{W}_{r}(M)\coloneqq M/(Im(V^{r})+Im(dV^{r}))$
they coincide, as taking the cofiber of $V^{r}:M\to M$ in the category of
graded mixed complexes kills $Im(dV^{r})$ automatically.
We recall an alternative description of the process of strictification.
###### Corollary VIII.1.3.
([BLM22, Corollary 2.8.2]) Let $M$ be a saturated Dieudonné complex. The
restriction map
$M\to\mathcal{W}(M)$
exhibits $\mathcal{W}(M)$ as a $p$-completion of $M$ in the derived category
of abelian groups.
This corollary motivates defining a derived Dieudonné complex to be strict
when it is derived $p$-complete.
We can then extend these definitions to derived Dieudonné algebras: a derived
Dieudonné algebra is said to be strict when it is strict as a derived
Dieudonné complex. However the description of the strictification process
seems more involved; in particular, defining $A/VA$ as a simplicial algebra
seems to be a difficult task in this homotopical context.
We define the category of strict derived Dieudonné complexes and algebras,
respectively $\textbf{DC}_{str}$ and $\textbf{DA}_{str}$. We expect the
inclusion
$\textbf{DA}_{str}\subset\textbf{DA}$
to admit a left adjoint denoted $\mathcal{W}Sat(-)$.
Defining $f:B\in\textbf{DA}_{str}\mapsto B^{0}/VB^{0}\in
SCR_{\mathbb{F}_{p}}$, we can ask ourselves
###### Question VIII.1.4.
Does the functor
$f:\textbf{DA}_{str}\to SCR_{\mathbb{F}_{p}}$
admit a left adjoint given by
$R\mapsto\mathcal{W}Sat(DDR(R^{red}))$
where $R^{red}$ is the reduction of $R$?
###### Remark VIII.1.5.
The construction of the left adjoint might not need to consider the reduction
since our main results hold for simplicial algebras which are not $p$-torsion
free.
##### De Rham-Witt complex of an $\mathbb{F}_{p}$-algebra
To complete the construction of the de Rham-Witt complex, we need a universal
property of the Witt vectors. In [BLM22, Proposition 3.6.3], the Witt vectors
give the following identification
$Hom_{F}(W(R),B)\cong Hom_{\textbf{DA}}(R,B^{0}/VB^{0})$
where $B$ is a strict Dieudonné algebra and $R$ is an $\mathbb{F}_{p}$-algebra.
The set $Hom_{F}(W(R),B)$ consists of ring morphisms commuting with the
Frobenius morphisms.
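As a concrete reminder of the $p$-typical Witt vector formalism used here, the ghost components linearize the Witt vector operations; the shift formula for $V$ on ghost components reflects the relation $FV=p$. This is an illustrative sketch only, and the helper names `ghost` and `verschiebung` are ours:

```python
# p-typical Witt vectors over Z, seen through their ghost components
#   w_m(a) = a_0^{p^m} + p*a_1^{p^(m-1)} + ... + p^m*a_m.
p = 3

def ghost(a):
    """Ghost components (w_0, ..., w_{len(a)-1}) of a truncated Witt vector."""
    return [sum(p**i * a[i]**(p**(m - i)) for i in range(m + 1))
            for m in range(len(a))]

def verschiebung(a):
    """V shifts coordinates: V(a_0, a_1, ...) = (0, a_0, a_1, ...)."""
    return [0] + list(a[:-1])

a = [2, 5, 7]
w, wv = ghost(a), ghost(verschiebung(a))
# On ghost components V multiplies by p and shifts:
# ghost(V(a)) = (0, p*w_0, p*w_1, ...), consistent with F V = p,
# since F acts on ghost components by the shift (w_0, w_1, ...) -> (w_1, w_2, ...).
assert wv == [0, p * w[0], p * w[1]]
```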
We hope to have a similar result in the context of simplicial algebras with
Frobenius lifts.
###### Definition VIII.1.6.
We can define the Dieudonné algebra of a simplicial $\mathbb{F}_{p}$-algebra
$R$ as $\mathcal{W}Sat(\mathbb{L}_{W(R)})$.
##### A more general base
Our main results are given for schemes over $\mathbb{Z}_{(p)}$; we could
develop a theory of graded loopspaces on a derived scheme endowed with a
Frobenius over a general commutative ring $k$. However some precautions have to be
taken, for example, for the crystalline circle
$S^{1}_{gr}=Spec^{\Delta}(k\oplus k[1])$
to have a derived Frobenius lift, we need $k$ to admit a derived Frobenius
lift. When $k=\mathbb{Z}_{(p)}$, such a lift is essentially unique; for
$k=\mathbb{F}_{p}$, there is no such lift, since a nonzero commutative ring in
which $p$ is nilpotent does not admit a $\delta$-ring structure. Therefore the
base commutative ring $k$ must be endowed with a Frobenius lift structure.
Another generalization would be to replace the base $\mathbb{Z}_{(p)}$ by
$\mathbb{Z}$. In this context, all prime numbers have to be considered for the
Frobenius lifts; therefore, we have to use the more general notion of
commuting Frobenius lifts. Furthermore, big Witt vectors need to be used
instead of $p$-typical Witt vectors.
###### Construction VIII.1.7.
Let $\mathcal{C}$ be an $\infty$-category. We define the category of
commuting endomorphisms on $\mathcal{C}$
$\mathcal{C}^{\mathbb{N}^{*}endo}\coloneqq Fun(B\mathbb{N}^{*},\mathcal{C})$
where $(\mathbb{N}^{*},\times)\simeq\mathbb{N}^{(\mathbb{N})}$ is the free
abelian monoid on the set $\mathbb{N}$. Sending $1$ to the $i$-th prime number
$p_{i}$ induces a morphism of monoids
$\mathbb{N}\to\mathbb{N}^{*}$
which gives by restriction
$p_{i}:\mathcal{C}^{\mathbb{N}^{*}endo}\to\mathcal{C}^{endo}$
Taking $\mathcal{C}=SCR_{\mathbb{Z}_{(p)}}$, we define the category of
simplicial algebras with commuting Frobenius lifts as
$SCR^{\mathbb{N}^{*}Fr}\coloneqq\mathcal{C}^{\mathbb{N}^{*}endo}\times_{(SCR_{p}^{endo})^{\mathbb{N}}}(SCR_{p})^{\mathbb{N}}$
An element in this category is a simplicial algebra $A$ endowed with morphisms
$\phi_{p}:A\to A$
commuting up to coherent homotopy, and the data of a homotopy between
$\phi_{p}|_{\mathbb{F}_{p}}$ and the canonical Frobenius $Fr_{p}$.
This definition is close to the notion of cyclic spectra developed in [NS18]
and some connections with topological cyclic homology might be found.
Combining these notions could allow us to work over any general discrete
commutative ring endowed with commuting Frobenius lifts.
##### Study of derived stacks with Frobenius lift
The theory of derived stacks with Frobenius lift has only been quickly
described in this thesis, but a deeper examination could be fruitful. In
particular, a
theory of Postnikov towers could shorten many proofs.
We have proven that $dSt^{Fr}$ is an $\infty$-topos. Extending Proposition B.7,
we can ask ourselves the following question.
###### Question VIII.1.8.
Let $\mathcal{C}_{0}$ be the full subcategory of $dSt^{Fr}$ on objects which
are given by freely adjoining a Frobenius to a derived affine scheme; these
objects are of the form $L(Spec(C))$, with $C$ a simplicial algebra. Does the
category $\mathcal{C}_{0}$ admit a site structure such that the topos of
sheaves on $\mathcal{C}_{0}$ identifies with $dSt^{Fr}$? In this case, we
would have the identification:
$Sh(\mathcal{C}_{0})\simeq dSt^{Fr}$
A precise description of the constructions and results of this thesis, with
endomorphisms instead of Frobenius lifts, could be illuminating. We expect to
be able to recover the construction of the de Rham-Witt complex when considering
$\textbf{Map}_{dSt^{endo}}(S^{1}_{gr},X)$
when $X$ admits a Frobenius lift, instead of
$\textbf{Map}_{dSt^{Fr}}(S^{1}_{gr},X)$
##### HKR-type filtration
Inspired by [MRT20], we could construct a filtration analogous to the one on
Hochschild cohomology. In the classical case, the affinization of the circle
$Aff(S^{1})\simeq BFix$
admits a filtration which has $BKer$ as its associated graded stack.
The stack $BFix$ admits an endomorphism structure induced by the morphism
$\mathbb{Z}\left[\binom{X}{n}\right]=\{Q\in\mathbb{Q}[X]:Q(\mathbb{Z})\subset\mathbb{Z}\}\to\{Q\in\mathbb{Q}[X]:Q(\mathbb{Z})\subset\mathbb{Z}\}=\mathbb{Z}\left[\binom{X}{n}\right]$
sending $Q$ to $Q(pX)$.
This endomorphism is not a Frobenius lift. We denote it $[p]$.
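This failure can be checked directly on the generator $X$, using the identification with integer-valued polynomials above: a Frobenius lift would have to agree with $Q\mapsto Q^{p}$ modulo $p$, but
$[p](X)=pX\equiv 0\mod p$
whereas Fermat's little theorem shows that $\frac{X^{p}-X}{p}$ is again integer-valued, so that in the quotient by $p$ we have
$X^{p}\equiv X\not\equiv 0$
hence $[p]$ does not reduce to the Frobenius modulo $p$.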
###### Definition VIII.1.9.
Let $(X,F)$ be a derived scheme endowed with an endomorphism. We define the
loopspace of $X$ as
$\mathcal{L}^{endo}((X,F))\coloneqq\textbf{Map}_{dSt^{endo}}((BFix,[p]),(X,F))$
Similarly, the graded loopspace is defined as
$\mathcal{L}^{endo}_{gr}((X,F))\coloneqq\textbf{Map}_{dSt^{endo}}((BKer,[p]),(X,F))$
###### Remark VIII.1.10.
The "multiplication by $p$" map of topological spaces
$S^{1}\to S^{1}$
defines an endomorphism of the stack $S^{1}$, and we can expect an
identification of graded affine stacks with endomorphisms
$Aff((S^{1},\times p))\simeq(BFix,[p])$
and also an equivalence
$\textbf{Map}_{dSt^{endo}}((S^{1},\times
p),(X,F))\simeq\textbf{Map}_{dSt^{endo}}((BFix,[p]),(X,F))$
###### Definition VIII.1.11.
When $A$ is a simplicial algebra over $\mathbb{Z}_{(p)}$ and $F$ is an
endomorphism of $A$, we define the Hochschild cohomology of $(A,F)$ as
$HH((A,F))\coloneqq\mathcal{O}_{SCR}(\mathcal{L}^{endo}((Spec(A),Spec(F))))$
and the de Rham algebra of $(A,F)$ as
$DR((A,F))\coloneqq\mathcal{O}_{SCR}(\mathcal{L}^{endo}_{gr}((Spec(A),Spec(F))))$
###### Remark VIII.1.12.
We expect $DR((A,F))$ to be very close to
$Sym_{A}((\mathbb{L}_{A}[1],\frac{dF}{p}))$
at least when $A$ is $p$-torsion free and the endomorphism can be promoted to
a Frobenius lift.
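To illustrate why $\frac{dF}{p}$ makes sense, here is a minimal sketch in the simplest case, which is our gloss rather than a computation from the text. Take $A=\mathbb{Z}_{(p)}[x]$; a Frobenius lift must satisfy $F(x)=x^{p}+pg(x)$ for some $g\in A$, so on differentials
$dF(dx)=d(F(x))=(px^{p-1}+pg'(x))dx$
and dividing by $p$ yields the well-defined map
$\frac{dF}{p}(dx)=(x^{p-1}+g'(x))dx$
on $\Omega^{1}_{A}\simeq\mathbb{L}_{A}$, with no denominators; this is precisely where the $p$-torsion freeness of $A$ is used.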
We can then ask the following question.
###### Question VIII.1.13.
Does the Hochschild cohomology of a simplicial algebra with endomorphism
$(A,F)$ admit a natural filtration which has $DR(A,F)$ as its associated
graded simplicial algebra?
Defining such a filtration can be achieved by constructing a filtered analogue
of our graded circle. However, some precautions have to be taken, since the
natural endomorphism on the affinization of the circle $Aff(S^{1})$ is not a
Frobenius lift. Therefore a careful comparison between the Frobenius graded
loopspace and the endomorphism graded loopspace will probably be necessary.
##### Prismatic circle
We expect our construction of the graded circle to have an analogue over the
prismatic site. We would call this object the prismatic circle. Precisely, for
$(A,I)$ a prism, $A$ is a $\delta$-ring, therefore it has a canonical
Frobenius lift. We can then consider
$\textbf{Map}_{dSt^{Fr}}(S^{1}_{gr},Spec(A))$
We expect taking the levelwise mapping space with the crystalline circle to
define a sheaf on the prismatic site. This sheaf could then recover prismatic
cohomology from a mixed graded complex.
###### Remark VIII.1.14.
The above definition is incomplete and needs to be modified to take into
account the ideal $I$ when the prism is not a crystalline prism.
### VIII.2 Symplectic forms
One of the main motivations behind our work on the graded Dieudonné loopspaces
was to define a theory of shifted symplectic forms in mixed characteristic.
The theory of shifted symplectic structures and shifted Poisson structures is
carried out in [PTVV] and [CPTVV]. Furthermore, these structures are helpful
for the study of deformation quantization.
We recall some definitions from [PTVV].
###### Definition VIII.2.1.
Let $F$ be a derived Artin stack over a commutative ring of characteristic
zero $k$. The space of $n$-shifted $p$-forms on $F$ is
$\mathcal{A}^{p}(F,n)\coloneqq Map_{Mod}(k(p)[-p-n],DR(F))$
and the space of closed $n$-shifted $p$-forms on $F$ is
$\mathcal{A}^{p,cl}(F,n)\coloneqq Map_{\epsilon-Mod^{gr}}(k(p)[-p-n],DR(F))$
We have a natural forgetful functor
$\mathcal{A}^{p,cl}(F,n)\to\mathcal{A}^{p}(F,n)$
and the differential induces
$d_{DR}:\mathcal{A}^{p}(F,n)\to\mathcal{A}^{p+1,cl}(F,n)$
###### Definition VIII.2.2.
Let $\omega$ be a closed $2$-form of degree $n$ on $F$. Since $F$ is a derived
Artin stack, $\mathbb{L}_{F}$ is dualizable, and by adjunction $\omega$
induces a morphism of quasi-coherent sheaves on $F$
$\Theta_{\omega}:\mathbb{T}_{F}\to\mathbb{L}_{F}[n]$
The form $\omega$ is said to be an $n$-shifted symplectic form when
$\Theta_{\omega}$ is an equivalence.
We generalize these notions to the Dieudonné de Rham complex.
###### Definition VIII.2.3.
Let $X$ be a derived scheme over $\mathbb{Z}_{(p)}$, endowed with a Frobenius
lift $F$. The space of $n$-shifted Dieudonné $p$-forms on $X$ is defined as
$\mathcal{A}^{p}(X,F,n)\coloneqq Map_{Mod}(k(p)[-p-n],DDR(X,F))$
and the space of closed $n$-shifted Dieudonné $p$-forms on $X$ as
$\mathcal{A}^{p,cl}(X,F,n)\coloneqq Map_{\epsilon-Mod^{gr}}(k(p)[-p-n],DDR(X,F))$
We have a natural forgetful functor
$\mathcal{A}^{p,cl}(X,F,n)\to\mathcal{A}^{p}(X,F,n)$
and the differential induces
$d_{DR}:\mathcal{A}^{p}(X,F,n)\to\mathcal{A}^{p+1,cl}(X,F,n)$
Furthermore, the endomorphism on $DDR(X,F)$, which we can see as
$\frac{dF}{p}$ on $\mathbb{L}_{X}$, induces natural transformations:
$F_{form}:\mathcal{A}^{p}(X,F,n)\to\mathcal{A}^{p}(X,F,n)$
and
$F_{cl}:\mathcal{A}^{p,cl}(X,F,n)\to\mathcal{A}^{p,cl}(X,F,n)$
We define a variation on the definition which takes the Frobenius structure
into account.
###### Definition VIII.2.4.
The space of $n$-shifted Dieudonné $p$-forms on $X$ is defined as
$\mathcal{A}^{p}_{Fr}(X,F,n)\coloneqq Map_{Mod^{endo}}(k(p)[-p-n],DDR(X,F))$
where we endow $k(p)[-p-n]$ with the "multiplication by $p$" endomorphism, and
the space of closed $n$-shifted Dieudonné $p$-forms on $X$ as
$\mathcal{A}^{p,cl}_{Fr}(X,F,n)\coloneqq Map_{\epsilon-Mod^{gr,endo}}(k(p)[-p-n],DDR(X,F))$
We have a natural forgetful functor
$\mathcal{A}^{p,cl}_{Fr}(X,F,n)\to\mathcal{A}^{p}_{Fr}(X,F,n)$
and the differential induces
$d_{DR}:\mathcal{A}^{p}_{Fr}(X,F,n)\to\mathcal{A}^{p+1,cl}_{Fr}(X,F,n)$
###### Remark VIII.2.5.
We recall the definition of Frobenius-derivations used for Fedosov
quantization in [BK07]. Let $A$ be a commutative ring and $M$ an $A$-module; a
Frobenius-derivation $D:A\to M$ is a morphism of abelian groups such that
$D(1)=0$ and
$D(ab)=a^{p}D(b)+b^{p}D(a)$
This definition suggests yet another notion of $n$-shifted $p$-forms on a
simplicial algebra with Frobenius lift $(A,F)$, based on a notion of
Frobenius-twisted cotangent complex
$\mathbb{L}^{tw}_{(A,F)}\coloneqq\mathbb{L}_{A}\otimes_{A}A$
where the morphism $A\to A$ is $F$.
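A short heuristic, which is our gloss rather than a statement from [BK07], explains the formula. For a ring endomorphism $F:A\to A$, write $F_{*}M$ for $M$ with the $A$-action twisted as $a\cdot m\coloneqq F(a)m$. In characteristic $p$, where $a\mapsto a^{p}$ is a ring endomorphism, the Leibniz rule above says exactly that a Frobenius-derivation $D:A\to M$ is an ordinary derivation $A\to F_{*}M$. For a discrete ring $A$, the universal property of Kähler differentials would then give
$Der_{Fr}(A,M)\simeq Hom_{A}(\Omega^{1}_{A},F_{*}M)\simeq Hom_{A}(\Omega^{1}_{A}\otimes_{A,F}A,M)$
which is what the Frobenius-twisted cotangent complex $\mathbb{L}^{tw}_{(A,F)}$ encodes in the derived setting.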
We expect to be able to define symplectic forms and use them to study
deformation quantizations for derived schemes in mixed characteristic.
The difference between the Frobenius-twisted cotangent complex and the
cotangent complex $\mathbb{L}_{(A,F)}$ studied here seems to be connected to
the difference between $p$-restricted and partition Lie algebras, see [BM19].
### VIII.3 Foliations
We recall from [To20] the definition of derived foliations.
###### Definition VIII.3.1.
Let $X$ be a derived scheme over a commutative ring $k$. A derived foliation
on $X$ is the data of a graded mixed derived stack $\mathcal{F}$ and a
morphism of graded mixed derived stacks $\mathcal{F}\to\mathcal{L}^{gr}(X)$
such that
* •
The projection $\mathcal{F}\to X$ is relatively affine.
* •
The quasi-coherent complex $\mathcal{O}_{\mathcal{F}}(1)$ is in Tor-amplitude
$]-\infty,m]$, for an integer $m$, where $\mathcal{O}_{\mathcal{F}}(1)$ is the
$1$-weighted part of functions of $\mathcal{F}$ relative to $X$.
* •
The natural morphism of graded derived stacks
$\mathcal{F}\to\mathbb{V}(\mathcal{O}_{\mathcal{F}}(1))$
is an equivalence.
In this setting, we were able to define Dieudonné foliations with possibly
non-connective cotangent complexes $\mathbb{L}_{\mathcal{F}}$ using Corollary
V.2.7. Indeed, we keep the expected property that
$\mathcal{F}\mapsto\mathbb{L}_{\mathcal{F}}$
is conservative.
We extend the definition of foliations to our framework.
###### Definition VIII.3.2.
Let $X$ be a derived scheme over $\mathbb{Z}_{(p)}$, endowed with a Frobenius
lift $F$. A Dieudonné foliation on $(X,F)$ is the data of a derived Dieudonné
stack $\mathcal{F}$ and a morphism of derived Dieudonné stacks
$\mathcal{F}\to\mathcal{L}^{gr}(X,F)$ such that
* •
The projection $\mathcal{F}\to X$ is relatively affine.
* •
The quasi-coherent complex $\mathcal{O}_{\mathcal{F}}(1)$ is in Tor-amplitude
$]-\infty,-1]$ where $\mathcal{O}_{\mathcal{F}}(1)$ is the $1$-weighted part
of functions of $\mathcal{F}$ relative to $X$.
* •
The natural morphism of graded derived stacks endowed with a Frobenius lift
$\mathcal{F}\to\mathbb{V}(\mathcal{O}_{\mathcal{F}}(1))$
is an equivalence.
Our classification Theorem VII.3.15 gives a more precise description of
Dieudonné foliations.
###### Theorem VIII.3.3.
Let $A$ be a smooth commutative $k$-algebra and $M$ a projective $A$-module of
finite type. We fix a derived Frobenius lift structure on the graded
simplicial algebra $Sym_{A}(M[1])$, with $M$ in weight $1$. By Proposition
VII.1.8, this is equivalent to the data of a classical Frobenius lift $F$ on
$A$ and a linear map of $A$-modules $\phi:M\to M$. The space of Dieudonné
foliations $\mathcal{F}$ over $A$ having $M$ as a cotangent complex is
discrete and in bijection with the set of Dieudonné algebra structures on the
graded commutative $k$-algebra $\bigoplus_{i}\wedge^{i}_{A}M[-i]$ endowed with
its canonical Frobenius lift structure.
This theorem gives an easier description of Dieudonné foliations, which have a
quite abstract definition.
###### Example VIII.3.4.
We outline the construction of a fundamental example of a Dieudonné foliation
using the previous theorem. Let $(X,F)$ be a smooth derived scheme endowed
with a Frobenius lift. We define the tangent complex $\mathbb{T}_{(X,F)}$ as
the dual of the cotangent complex $\mathbb{L}_{(X,F)}$ in the category of
$(X,F)$-modules. We note that $\mathbb{T}_{(X,F)}$ does not admit
$\mathbb{T}_{X}$ as an underlying $\mathcal{O}_{X}$-module: an element of
$\mathbb{T}_{(X,F)}$ can be thought of as a sequence of elements of
$\mathbb{T}_{X}$ satisfying compatibility conditions with the Frobenius
morphism. We fix a sub-bundle $I$ of $\mathbb{T}_{(X,F)}$ which is stable
under the Lie bracket. The derived stack
$\mathcal{F}\coloneqq\mathbb{V}(I^{\vee}[1])$
has a canonical mixed graded structure and a Frobenius structure. The
canonical map
$I\to\mathbb{T}_{(X,F)}$
induces a morphism
$\mathcal{F}\to\mathcal{L}^{gr}(X,F)$
which makes $\mathcal{F}$ into a Dieudonné foliation.
###### Example VIII.3.5.
Let $f:X\to Y$ be a morphism of derived schemes with Frobenius lifts. Pulling
back along $f$ should define a canonical foliation $f^{*}0_{Y}\in Fol(X)$.
Building on the theory of Dieudonné foliations, we hope to construct, for a
fixed Dieudonné foliation, the de Rham-Witt complex along the leaves of the
foliation, from which we would deduce crystalline cohomology along the
foliation.
From these constructions, we can expect results on Dieudonné foliations
regarding the vanishing of cohomology classes. In [To20], for a crystal $E$ on
a foliation $\mathcal{F}$, cohomology classes $c_{i}(E(0))$ in
$H^{2i}_{DR}(X/S)$ are shown to vanish in $H^{2i}_{DR}(X/\mathcal{F})$ where
$\mathcal{F}$ is a foliation on $X$ relative to $S$. These classes are shown
to come from classes in crystalline cohomology and a similar vanishing theorem
is proven. These classes are, in fact, canonically lifted to rigid cohomology
classes and the theory of Dieudonné foliations could help better understand
the vanishing of the crystalline classes and give similar results for rigid
cohomology classes.
## Appendix A Categorical results
###### Proposition A.1.
Let $\mathcal{C}$ be an $\infty$-topos and $G$ a group in $\mathcal{C}^{endo}$.
Then $G$ is naturally a group in $\mathcal{C}$, endowed with a compatible
endomorphism $\alpha:G\to G$ which induces a forgetful functor
$\alpha^{*}:G-\mathcal{C}\to G-\mathcal{C}$, and there is an equivalence:
$G-\mathcal{C}^{endo}\simeq CFP_{\alpha^{*}}(G-\mathcal{C})$
###### Proposition A.2.
Let $\mathcal{C}_{0}\to\mathcal{C}_{01}\leftarrow\mathcal{C}_{1}$ be a pullback
diagram of $\infty$-categories and $G$ be a group in
$\mathcal{C}_{0}\times_{\mathcal{C}_{01}}\mathcal{C}_{1}$. Then $G$ induces a
group $G_{0}$ in $\mathcal{C}_{0}$, $G_{1}$ in $\mathcal{C}_{1}$ and $G_{01}$
in $\mathcal{C}_{01}$, and the canonical morphism is an equivalence:
$G-(\mathcal{C}_{0}\times_{\mathcal{C}_{01}}\mathcal{C}_{1})\xrightarrow{\sim}(G_{0}-\mathcal{C}_{0})\times_{G_{01}-\mathcal{C}_{01}}(G_{1}-\mathcal{C}_{1})$
###### Proposition A.3.
Let $\mathcal{C}$ be a presentable category and $\eta$ an endofunctor of
$\mathcal{C}$ which commutes with filtered colimits and small limits. Then the
inclusion functor
$U:FP(\mathcal{C})\subset LFP(\mathcal{C})$
admits a left adjoint.
###### Proof.
The functor $U$ commutes with filtered colimits and with small limits, as the
functor $\eta$ does. The categories $FP_{\eta}(\mathcal{C})$ and
$LFP(\mathcal{C})$ are presentable as limits of presentable categories, see
Proposition II.2.1. The adjoint functor theorem, Theorem II.1.2, then
concludes the proof. ∎
## Appendix B Geometrical results
###### Proposition B.1.
Let $F$ be an affine stack with $C(F)$ projective of finite type as a complex
over $k$. We denote by $p:F\to*$ the canonical structure morphism. Let $M$ and
$N$ be $k$-complexes. Then the natural morphism
$C(F)\otimes Map_{Mod}(M,N)\to Map_{QCoh(F)}(p^{*}M,p^{*}N)$
is an equivalence, where $Map_{Mod}(M,N)$ is considered with its $k$-module
structure.
###### Proof.
As a complex, $C(F)$ is a retract of a free complex, therefore tensoring with
$C(F)$ preserves all limits. Now, the morphism
$C(F)\otimes Map_{Mod}(M,N)\to Map_{QCoh(F)}(p^{*}M,p^{*}N)$
is compatible with colimits in $M$ and limits in $N$. We are thus reduced to
proving the result for $M=k$ and $N=k$, where it is obvious. ∎
###### Proposition B.2.
Let $A$ be a simplicial algebra over $\mathbb{F}_{p}$. Endowing a simplicial
algebra over $A$ with its canonical Frobenius endomorphism defines a functor
$B\in SCR_{A}\mapsto(B,Fr_{B})\in SCR_{A}^{endo}$
which admits a right adjoint given by taking the homotopy equalizer
$(B,F)\in SCR_{A}^{endo}\mapsto B^{F\simeq Fr_{B}}\coloneqq eq\big(B\overset{F}{\underset{Fr_{B}}{\rightrightarrows}}B\big)\in SCR_{A}$
and a left adjoint given by taking homotopy coinvariants
$(B,F)\in SCR_{A}^{endo}\mapsto B_{F\simeq Fr_{B}}\coloneqq coeq\big(B\overset{F}{\underset{Fr_{B}}{\rightrightarrows}}B\big)\in SCR_{A}$
###### Proof.
We sketch the proof. Let $B\in SCR_{A}$ and $C\in SCR_{A}^{endo}$.
In the following diagram:
$\begin{array}{ccccc}B&\xrightarrow{f}&C&\xrightarrow{Id}&C\\ {\scriptstyle Fr_{B}}\downarrow&&{\scriptstyle Fr_{C}}\downarrow&&{\scriptstyle F}\downarrow\\ B&\xrightarrow{f}&C&\xrightarrow{Id}&C\end{array}$
the left square is commutative.
For a fixed morphism $f:B\to C$, a homotopy making the outer diagram commute
is equivalent to a homotopy between $Fr_{C}\circ f$ and $F\circ f$, that is, a
homotopy in the diagram
$B\xrightarrow{f}C\overset{Fr_{C}}{\underset{F}{\rightrightarrows}}C$
The latter is then the data of $f$ factoring through
$C^{F\simeq Fr_{C}}\coloneqq eq\big(C\overset{F}{\underset{Fr_{C}}{\rightrightarrows}}C\big)$
The proof for the left adjoint is dual.
∎
A similar proof yields the following proposition.
###### Proposition B.3.
Let $A$ be a simplicial algebra over $\mathbb{F}_{p}$. Endowing a derived
stack over $A$ with its canonical Frobenius endomorphism defines a functor
$X\in dSt_{A}\mapsto(X,Fr_{X})\in dSt_{A}^{endo}$
which admits a right adjoint given by taking the homotopy equalizer
$(X,F)\in dSt_{A}^{endo}\mapsto X^{F\simeq Fr_{X}}\coloneqq eq\big(X\overset{F}{\underset{Fr_{X}}{\rightrightarrows}}X\big)\in dSt_{A}$
and a left adjoint given by taking homotopy coinvariants
$(X,F)\in dSt_{A}^{endo}\mapsto X_{F\simeq Fr_{X}}\coloneqq coeq\big(X\overset{F}{\underset{Fr_{X}}{\rightrightarrows}}X\big)\in dSt_{A}$
###### Proposition B.4.
Let $\mathcal{C}_{0}$ be the full subcategory of $dSt^{Fr}$ on objects which
are given by freely adjoining a Frobenius lift to a derived affine scheme;
these objects are of the form $L(Spec(C))$, with $C$ a simplicial algebra. See
Proposition VII.2.27 for the construction of $L$.
The subcategory $\mathcal{C}_{0}$ generates $dSt^{Fr}$ under colimits.
###### Proof.
The comonad $T$ induced by the adjunction
$L:dSt\rightleftarrows dSt^{Fr}:U$
gives a resolution of $X$, denoted $T^{\bullet+1}(X)$, via the bar
construction, by combining the fact that $T$ commutes with small colimits with
the Barr-Beck theorem, see [Lur17, Theorem 4.7.3.5] and the proof of
Proposition VII.2.21. Therefore $X$ is the geometric realization of
$T^{\bullet+1}(X)$.
Now we consider $Y=L(Z)$ obtained by freely adjoining a Frobenius lift to a
derived stack. Writing $Z$ as a colimit of derived affine schemes, using the
co-Yoneda lemma
$Z\simeq colim_{i}Spec(A_{i})$
we deduce
$Y=L(colim_{i}Spec(A_{i}))\simeq colim_{i}L(Spec(A_{i}))$
∎
###### Corollary B.5.
The restriction of the Yoneda embedding gives a functor
$dSt^{Fr}\to Fun(\mathcal{C}_{0},\mathcal{S})$
which is fully faithful.
Similar arguments provide analogous results for derived stacks endowed with
endomorphisms.
###### Proposition B.6.
Let $\mathcal{C}_{0}$ be the full subcategory of $dSt^{endo}$ on objects which
are given by freely adjoining an endomorphism to a derived affine scheme;
these objects are of the form $L(Spec(C))$, with $C$ a simplicial algebra.
The subcategory $\mathcal{C}_{0}$ generates $dSt^{endo}$ under colimits.
###### Corollary B.7.
The restriction of the Yoneda embedding gives a functor
$dSt^{endo}\to Fun(\mathcal{C}_{0},\mathcal{S})$
which is fully faithful.
###### Proposition B.8.
Let $A\to k$ be an augmented cosimplicial $k$-algebra such that $H^{0}(A)\to
k$ is an isomorphism, where $k$ is a torsion-free principal ideal domain.
Let $n\geq 0$ be such that
$H^{i}(A)=0$
for $1\leq i\leq n$. Note that this condition is vacuous for $n=0$. Then there
is a cofibrant replacement of $A$
$QA\xrightarrow{\sim}A$
such that
* •
All coface maps $QA_{i}\to QA_{j}$ are flat.
* •
All coface maps $QA_{i}\to QA_{0}$ are isomorphisms for $i\leq n-1$.
###### Proof.
We construct a tower of cosimplicial algebras by adding successive cells to
kill the various cohomology groups, using that the model structure on
cosimplicial algebras is cofibrantly generated. Let us define $X\coloneqq
Spec^{\Delta}(A)$. Assume we have constructed a cofibrant model
$X^{m}=Spec^{\Delta}(A_{m})$ with a canonical morphism
$p_{m}:X\to X^{m}$
which is an isomorphism on all cohomology groups $H^{i}$ for $i\leq n$. We
have a natural identification
$H^{m+1}(A)\cong[X,K(\mathbb{G}_{a},m+1)]$
We define $I\coloneqq[X,K(\mathbb{G}_{a},m+1)]$ as a set, and the tautological
morphism
$X\to K(\mathbb{G}_{a},m+1)^{I}$
We define $X^{\prime m}(1)\coloneqq X^{m}\times
K(\mathbb{G}_{a},m+1)^{I}=Spec^{\Delta}(A^{\prime m}(1))$, where $A^{\prime
m}(1)$ is the tensor product of the free cosimplicial algebra on a coproduct
of $I$ copies of $k[-m-1]$ and $A^{m}$. The natural morphism
$A^{\prime m}(1)\to A$
is surjective on the $H^{m+1}$ group. From the isomorphisms
$H^{m+1}(A^{\prime m}(1))\cong H^{m+1}(K(\mathbb{G}_{a},m+1)^{I})\times H^{m+1}(A^{m})\cong End_{Gp}(\mathbb{G}_{a})^{(I)}\times H^{m+1}(A^{m})$
we deduce
$H^{m+1}(A^{\prime m}(1))\cong k^{(I)}\times H^{m+1}(A^{m})$
because $k$ is torsion-free. The kernel of the surjective morphism
$k^{(I)}\times H^{m+1}(A^{m})\to H^{m+1}(A)$
is denoted $J$. We deduce a map
$X^{m}\to X^{\prime m}(1)\to K(\mathbb{G}_{a},m+1)^{J}$
which factors through
$X^{m}(1)\coloneqq X^{\prime
m}(1)\times_{K(\mathbb{G}_{a},m+1)^{J}}E(\mathbb{G}_{a},m+1)^{J}$
We iterate this construction to obtain a tower
$X^{m}\to\dots\to X^{m}(k)\to X^{\prime m}(k)\to\dots\to X^{m}(1)\to X^{\prime m}(1)$
which gives
$\alpha_{m}:X^{m}\to X^{m+1}$
where $X^{m+1}$ is the limit of the tower
$X^{m}(k)\to X^{\prime m}(k)\to\dots\to X^{m}(1)\to X^{\prime m}(1)$
We check that $\alpha_{m}$ is an isomorphism on the $H^{i}$ groups where
$i\leq m+1$ and we define $QA$ by requiring
$Spec^{\Delta}(QA)\coloneqq colim_{m}X^{m}$
where we take the colimit in the category of affine stacks.
The flatness of the transition morphisms comes from the flatness of
$E(\mathbb{G}_{a},i)\to K(\mathbb{G}_{a},i)$
when $i>0$.
Now, to get the last assertion, we choose specific models for
$K(\mathbb{G}_{a},i)$ and $E(\mathbb{G}_{a},i)$. We take the following:
$K(\mathbb{G}_{a},i)\coloneqq Spec^{\Delta}(Free_{coSCR}(k[-m]))$
and
$E(\mathbb{G}_{a},i)\coloneqq Spec^{\Delta}(Free_{coSCR}(k_{m-1}\xrightarrow{Id}k_{m}))$
As the associated cosimplicial algebras of these affine stacks satisfy
$Y_{i}\xrightarrow{\sim}Y_{0}$
where $i<m-1$, this concludes the proof. ∎
###### Remark B.9.
We notice in the proof that if $H^{n+1}(A)$ is a free $k$-module, $A^{\prime
n+1}(1)\to A$ is an isomorphism on $H^{n+1}$ groups; therefore we can take
$X^{n+1}(1)$ to be $X^{\prime n+1}(1)$, and we obtain a cofibrant model such
that $QA_{n}\to QA_{0}$ is also an isomorphism.
From the remark, we deduce:
###### Corollary B.10.
The formal $n$-sphere
$S^{n}_{f}\coloneqq Spec^{\Delta}(DH^{*}(S^{n},k))$
is $(n-1)$-connective.
## References
* [A78] Gert Almkvist. K-theory of endomorphisms. Journal of Algebra, 55(2):308–340, 1978.
* [BDJ11] Bhargav Bhatt, Aise Johan de Jong. Crystalline cohomology and de Rham cohomology. 2011, https://arxiv.org/abs/1110.5001.
* [Ber74] Pierre Berthelot. Cohomologie cristalline des schémas de caractéristique $p>0$. Springer, 1974.
* [BK07] R. Bezrukavnikov, D. Kaledin. Fedosov quantization in positive characteristic. 2007, https://arxiv.org/abs/math/0501247.
* [BL21] Jonathan Beardsley, Tyler Lawson. Skeleta and categories of algebras. 2021, https://arxiv.org/pdf/2110.09595.pdf.
* [BLM22] Bhargav Bhatt, Jacob Lurie, Akhil Mathew. Revisiting the de Rham-Witt complex. 2022, https://arxiv.org/abs/1805.05501.
* [BM19] Lukas Brantner, Akhil Mathew. Deformation Theory and Partition Lie Algebras. 2019, https://arxiv.org/abs/1904.07352.
* [BS22] Bhargav Bhatt, Peter Scholze. Prisms and Prismatic Cohomology. 2022, https://arxiv.org/abs/1905.08229.
* [BZN12] David Ben-Zvi, David Nadler. Loop spaces and connections. J. Topol., 5(2):377–430, 2012, https://arxiv.org/abs/1002.3636.
* [CL91] Antoine Chambert-Loir. Cohomologie cristalline : un survol. 1991, https://webusers.imj-prg.fr/~antoine.chambert-loir/publications/papers/cristal.pdf.
* [CPTVV] D. Calaque, T. Pantev, B. Toën, M. Vaquié, G. Vezzosi. Shifted Poisson Structures and Deformation Quantization. 2017, https://arxiv.org/abs/1506.03699.
* [Dun10] Gregory Dungan. Review of Model Categories. 2010, https://ncatlab.org/nlab/files/DunganModelCategories.pdf.
* [EGAIV] A. Grothendieck. Éléments de Géométrie Algébrique IV. Étude locale des schémas et des morphismes de schémas. Publ. Math. I.H.E.S., 20, 24, 28, 32 (1967), http://www.numdam.org/article/PMIHES_1961__8__5_0.pdf.
* [Haz08] Michiel Hazewinkel. Witt vectors. Part 1. 2008, https://arxiv.org/abs/0804.3888.
* [HKR62] Gerhard Hochschild, Bertram Kostant, Alexander Rosenberg. Differential forms on regular affine algebras. AMS, 102(3):383–408, 1962. DOI:10.2307/1993614.
* [HLP14] Daniel Halpern-Leistner, Anatoly Preygel. Mapping stacks and categorical notions of properness. 2014, https://arxiv.org/abs/1402.3204.
* [Ho91] Mark Hovey. Model categories. 1991, https://people.math.rochester.edu/faculty/doug/otherpapers/hovey-model-cats.pdf.
* [HS98] A. Hirschowitz, C. Simpson. Descente pour les $n$-champs. 1998, https://arxiv.org/abs/math/9807049.
* [Ill71] L. Illusie. Complexe cotangent et déformations I. Springer, 1971.
* [Ill79] Luc Illusie. Complexe de de Rham-Witt et cohomologie cristalline. Annales scientifiques de l'École Normale Supérieure, Série 4, Tome 12 (1979) no. 4, pp. 501–661, http://www.numdam.org/item/10.24033/asens.1374.pdf.
* [Joy85] A. Joyal. $\delta$-anneaux et vecteurs de Witt. C. R. Math. Rep. Acad. Sci. Canada, 7 (1985), no. 3, 177–18.
* [Lur07] Jacob Lurie. Derived Algebraic Geometry IV: Deformation Theory. 2007, https://arxiv.org/abs/0709.3091.
* [Lur09] Jacob Lurie. Higher topos theory. Volume 170 of Annals of Mathematics Studies. Princeton University Press, Princeton, NJ, 2009, http://www.math.harvard.edu/~lurie/papers/HTT.pdf.
* [Lur11] Jacob Lurie. Structured Spaces. 2011, https://people.math.harvard.edu/~lurie/papers/DAG-V.pdf.
* [Lur11b] Jacob Lurie. Spectral Schemes. 2011, https://www.math.ias.edu/~lurie/papers/DAG-VII.pdf.
* [Lur11c] Jacob Lurie. Derived Algebraic Geometry VIII: Quasi-Coherent Sheaves and Tannaka Duality Theorems. November 2011, http://people.math.harvard.edu/~lurie/papers/DAG-VIII.pdf.
* [Lur15] Jacob Lurie. Rotation invariance in algebraic K-theory. 2015, https://people.math.harvard.edu/~lurie/papers/Waldhaus.pdf.
* [Lur17] Jacob Lurie. Higher Algebra. September 2017, http://people.math.harvard.edu/~lurie/papers/HA.pdf.
* [Lur18] Jacob Lurie. Spectral Algebraic Geometry (Under Construction). 2018, https://www.math.ias.edu/~lurie/papers/SAG-rootfile.pdf.
* [LZ03] Andreas Langer, Thomas Zink. De Rham-Witt Cohomology for a Proper and Smooth Morphism. 2003, https://www.math.uni-bielefeld.de/~zink/dRW.pdf.
* [Mon21] Ludovic Monier. A note on linear stacks. 2021, https://arxiv.org/abs/2103.06555.
* [Mon21b] Shubhodip Mondal. $G_{a}^{perf}$-modules and de Rham Cohomology. 2021, https://arxiv.org/abs/2101.03146.
* [Mou19] Tasos Moulinos. The geometry of filtrations. 2019, https://arxiv.org/abs/1907.13562.
* [MRT20] Tasos Moulinos, Marco Robalo, Bertrand Toën. A Universal HKR Theorem. 2020, https://arxiv.org/abs/1906.00118.
* [NS18] Thomas Nikolaus, Peter Scholze. On topological cyclic homology. 2018, https://arxiv.org/abs/1707.01799.
* [PS83] Alexandre Grothendieck. Pursuing Stacks. Unpublished, 1983, https://grothendieck.umontpellier.fr/archives-grothendieck.
* [PTVV] Tony Pantev, Bertrand Toën, Michel Vaquié, Gabriele Vezzosi. Shifted symplectic structures. Publ. Math. Inst. Hautes Études Sci., 117:271–328, 2013, https://arxiv.org/abs/1111.3209.
* [Rab14] Joseph Rabinoff. The Theory of Witt Vectors. 2014, https://arxiv.org/abs/1409.7445.
* [Rak20] Arpon Raksit. Hochschild homology and the derived de Rham complex revisited. 2020, https://arxiv.org/abs/2007.02576.
|
# Evolution of cold gas at $2<z<5$: a blind search for H i and OH absorption
lines towards mid-infrared color selected radio-loud AGNs
N. Gupta Inter-University Centre for Astronomy and Astrophysics, Post Bag 4,
Ganeshkhind, Pune 411 007, India R. Srianand Inter-University Centre for
Astronomy and Astrophysics, Post Bag 4, Ganeshkhind, Pune 411 007, India G.
Shukla Inter-University Centre for Astronomy and Astrophysics, Post Bag 4,
Ganeshkhind, Pune 411 007, India J.-K. Krogager Institut d’astrophysique de
Paris, UMR 7095, CNRS-SU, 98bis bd Arago, 75014 Paris, France Department of
Astronomy, University of Geneva, Chemin Pegasi 51, 1290 Versoix, Switzerland
P. Noterdaeme Institut d’astrophysique de Paris, UMR 7095, CNRS-SU, 98bis bd
Arago, 75014 Paris, France F. Combes Observatoire de Paris, Collège de
France, PSL University, Sorbonne University, CNRS, LERMA, Paris, France R.
Dutta Dipartimento di Fisica G. Occhialini, Università degli Studi di Milano-
Bicocca, Piazza della Scienza 3, 20126 Milano, Italy J. P. U. Fynbo Cosmic
Dawn Center (DAWN), University of Copenhagen, Jagtvej 128, DK-2200, Copenhagen
N, Denmark Niels Bohr Institute, University of Copenhagen, Jagtvej 128,
DK-2200, Copenhagen N, Denmark M. Hilton Astrophysics Research Centre and
School of Mathematics, Statistics and Computer Science, UKZN, Durban 4041,
South Africa E. Momjian National Radio Astronomy Observatory, Socorro, NM
87801, USA K. Moodley Astrophysics Research Centre and School of
Mathematics, Statistics and Computer Science, UKZN, Durban 4041, South Africa
P. Petitjean Institut d’astrophysique de Paris, UMR 7095, CNRS-SU, 98bis bd
Arago, 75014 Paris, France
###### Abstract
We present results from a spectroscopically blind search for H i 21-cm and OH
18-cm absorption lines at $2\leq z\leq 5$ towards 88 AGNs with 1.4 GHz
spectral luminosities of log $L_{\rm 1.4\,GHz}/({\rm W\,Hz^{-1}})=27-29.3$. One 21-cm absorber, which is
associated with M1540$-$1453 at $z_{\rm abs}$$=$ 2.1139, is detected. This is
only the fourth known associated absorber at $z>2$. The detection rate
($1.6^{+3.8}_{-1.4}$%) suggests a low covering factor of the cold neutral medium
(CNM; T$\sim$100 K) associated with these powerful AGNs. The intervening H i
21-cm and OH 18-cm absorption searches, with a sensitivity to detect CNM in
damped Ly$\alpha$ systems (DLAs), have comoving absorption path lengths of
$\Delta$X = 130.1 and 167.7, respectively. Using these we estimate the number
of H i and OH absorbers per unit comoving path length to be $\leq$0.014 and
$\leq$0.011, respectively. The former is at least 4.5 times lower than that of
DLAs and consistent with the CNM cross-section estimated using H$_2$ and C i
absorbers at $z>2$. The AGNs in our sample selected using mid-infrared colors
are optically fainter compared to the optical- and radio-selected quasars used
to search for DLAs. In our optical spectra obtained using SALT and NOT, we
detect 5 DLAs (redshift path $\sim 9.3$) and 2 proximate DLAs (within 3000 km
s$^{-1}$ of the AGN redshift). This is a slight excess compared to the
statistics based on optically selected quasars. The non-detection of H i 21-cm
absorption from these DLAs suggests a small CNM covering fraction around
galaxies at $z>2$.
quasars: absorption lines — interstellar medium
journal: ApJ. facilities: SALT, uGMRT. software: ARTIP, Astropy, CASA and Matplotlib.
## 1 Introduction
H i 21-cm absorption lines in the spectra of radio sources can provide
valuable insights into the cold atomic gas ($T\sim$100 K) associated with
active galactic nuclei (AGNs) and intervening galaxies along the line of
sight. In the former, the matter may be associated with the AGN, its host
galaxy, a nearby companion galaxy, outflows driven by its feedback or
infalling material. In the latter, the absorbing gas corresponds to the
interstellar or circumgalactic medium of an intervening galaxy or intragroup
medium. The strength of the absorption signal does not depend on the distance
to the observer. The H i 21-cm absorption line could thus be an important probe
of the properties of cold gas in distant galaxies and of its role in the cosmic
evolution of the star formation rate (SFR) density and of the AGN luminosity
density, both of which peak at $z\simeq 2$.
Radio telescopes have long had receivers capable of observing the H i
21-cm line up to arbitrarily high redshifts. Indeed, H i 21-cm absorption has
been searched for in AGNs as distant as $z\sim 5.2$ (e.g., Carilli et al., 2007).
But technical limitations imposed by narrow bandwidths, a hostile radio
frequency environment and the limited number of known bright radio AGNs at
high-$z$ have prevented large unbiased radio absorption line surveys.
Consequently, to date, the majority of H i 21-cm absorption line observations
and detections have been based on optically selected samples of AGNs.
For associated absorption, AGNs with known redshifts and, preferably with
compact radio morphology, have been observed to study the circumnuclear gas
which may be fueling the radio activity or impacted by the AGN feedback (e.g.,
Vermeulen et al., 2003; Gupta et al., 2006; Curran et al., 2013; Geréb et al.,
2015; Aditya et al., 2016; Dutta et al., 2019; Grasha et al., 2019). Although
more than 500 AGNs have been searched for H i 21-cm absorption, the vast
majority of observations are at $z<2$ and most of the detections are at $z<1$ (see
Morganti & Oosterloo, 2018, for a review). Only 3 detections at $z>2$ are
known, the highest redshift being 3.53 (Aditya et al., 2021). Overall, the
bulk of detections are towards compact radio sources (detection rate $\sim
30-50\%$) associated with galaxies having mid-infrared (MIR) colors suggesting
gas and dust rich environment (Glowacki et al., 2017; Chandola et al., 2020).
Among detections associated with more powerful AGNs (radio luminosity,
log($L_{\rm 1.4\,GHz}/{\rm W\,Hz}^{-1}$) $>24$), the H i absorption profiles
often show signatures of radio jet-ISM interaction in the form of blue-shifted
components representing outflowing gas (Maccagni et al., 2017).
For intervening H i 21-cm absorption line studies the targets have been sight
lines towards quasars, the most powerful AGNs, selected from the large optical
spectroscopic surveys such as the Sloan Digital Sky Survey (SDSS; York et al.,
2000). Generally, the selected sight lines show indications of a large H i
column density ($N$(H i)), suggested by the presence of a damped ${\rm
Ly}\alpha$ system (DLA; $N$(H i)$>2\times 10^{20}$ cm$^{-2}$; e.g., Srianand et
al., 2012; Kanekar et al., 2014), a strong Mg ii absorber (rest equivalent
width $W_{\rm r}>1$ Å; e.g., Gupta et al., 2012; Dutta et al., 2017a), or a
galaxy at small impact parameter (typically $<$30 kpc; e.g., Carilli & van
Gorkom, 1992; Gupta et al., 2010; Borthakur et al., 2010; Reeves et al., 2016;
Dutta et al., 2017c). The vast majority of the observations are sensitive to
the cold neutral medium (CNM; $T\sim$100 K) in gas with
$N(\mbox{H\,{\sc i}})>5\times 10^{19}$ cm$^{-2}$. The detection rates are typically
10-50%, depending crucially on the sample selection criteria (see, for
example, Dutta et al., 2017b). Although the highest redshift detection is at
$z\sim$3.38 (Kanekar et al., 2007), the bulk of the reported H i 21-cm
detections are associated with gas rich galaxies at $z<2$. These studies also
suggest that the gas traced by DLAs at $z>2$ is predominantly warm ($T>$1000
K).
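The column-density sensitivities quoted above follow from the standard relation between the H i column density, the spin temperature, and the integrated 21-cm optical depth, $N$(H i) $=1.823\times 10^{18}\,T_{\rm s}\int\tau\,{\rm d}v$. A minimal sketch (our illustration, not code from this survey):

```python
def integrated_tau(N_HI, T_s):
    """Integrated 21-cm optical depth (km/s) required to produce a
    column density N_HI (cm^-2) at spin temperature T_s (K), using
    N(HI) = 1.823e18 * T_s * integral(tau dv)."""
    return N_HI / (1.823e18 * T_s)

# CNM (T_s ~ 100 K) at the survey's sensitivity threshold of 5e19 cm^-2
# corresponds to an integrated optical depth of only ~0.27 km/s; warm gas
# (T_s > 1000 K) at the same column produces >10x weaker absorption.
tau_cnm = integrated_tau(5e19, 100.0)
```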
It is reasonable to expect optically selected samples of AGNs to be affected
by dust-bias. Since cold gas is accompanied by dust, the bias is particularly
relevant for H i 21-cm absorption line searches. In the case of associated
absorption, the dust intrinsic to the AGN may remove objects with certain
orientations (Type II) or those going through the very early stages of evolution. In
the case of intervening gas, it can substantially affect our ability to use
optically selected samples of DLAs to detect translucent and dense phases of
the ISM (Krogager et al., 2016; Geier et al., 2019), and influence the
measurements of H i and metal mass densities (Krogager et al., 2019).
The limitations due to dust obscuration can be overcome by selecting AGNs
without resorting to any optical color selection scheme, or by carrying out blind
searches for H i 21-cm absorption. The latter is becoming possible with various
precursor and pathfinder telescopes of the Square Kilometre Array (SKA) equipped
with wideband receivers. In particular, the upcoming large H i 21-cm absorption
line surveys such as the MeerKAT Absorption Line Survey (MALS; Gupta et al.,
2017) and First Large Absorption Survey in H i (FLASH; Allison et al., 2017)
will characterize the evolution of cold gas without possible selection effects
due to dust-bias or from the choice of different methods used to select sight
lines in different redshift ranges (see also Grasha et al., 2020). These will
also simultaneously search for the OH 18-cm main lines, providing additional
constraints on the evolution of diffuse molecular gas in the ISM (Gupta et
al., 2018a; Balashev et al., 2020).
In this paper, we present a spectroscopically blind search for H i 21-cm
absorption at $z>2$ based on a sample of AGNs selected using the mid-infrared
(MIR) colors from Wide-field Infrared Survey Explorer (WISE; Wright et al.,
2010; Cutri & et al., 2014) and having spectroscopically confirmed redshifts
using the Southern African Large Telescope (SALT; 180 hrs) and the Nordic
Optical Telescope (NOT; 3 nights; Krogager et al., 2018). Note that, like the
radio waveband, the infrared wavelengths are largely unaffected by dust
obscuration. These AGNs are being observed as part of MALS, which is a large
project at the MeerKAT array in South Africa, to search for H i 21-cm and OH 18-cm
lines at $z<2$. The upgraded Giant Metrewave Radio Telescope (uGMRT) survey
presented here covers $2<z<5.1$.
The paper is laid out as follows. In Section 2, we present the sample
definition and its properties in the context of previous radio-selected
samples to search for DLAs. The details of uGMRT observations and data
analysis to obtain the radio spectra and spectral line catalog are presented
in Section 3. We provide the details of H i 21-cm absorber detected from the
survey in Section 4. In Sections 5 and 6, we compute the incidences of
intervening and associated H i 21-cm absorption lines, respectively. In
Section 5, we apply the same formalism to also derive the incidence of
intervening OH absorption. The availability of SALT-NOT spectra allows us to
examine the properties of gas along the sight line using ${\rm Ly}\alpha$ and
various metal absorption lines. In particular, for a subset of uGMRT targets
($z_{e}>2.7$) through deeper SALT observations we have discovered 6 DLAs and 1
candidate proximate DLA (PDLA, i.e., a DLA within 3000 km s$^{-1}$ of $z_{q}$). In
Section 5, we also present the properties of these ${\rm Ly}\alpha$ and metal
line absorbers and discuss the nature of multi-phase ISM in the context of
uGMRT survey results. The results and future prospects are summarized in
Section 7.
Throughout this paper we use the $\Lambda$CDM cosmology with
$\Omega_{m}=0.27$, $\Omega_{\Lambda}=0.73$ and $H_{0}=71$ km s$^{-1}$ Mpc$^{-1}$.
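The comoving absorption path lengths $\Delta$X quoted earlier follow from ${\rm d}X/{\rm d}z=(1+z)^{2}H_{0}/H(z)$ in this cosmology. A numerical sketch (our own illustration, not the survey pipeline):

```python
import math

OMEGA_M, OMEGA_L = 0.27, 0.73   # flat LambdaCDM parameters adopted here

def dX_dz(z):
    """dX/dz = (1+z)^2 / E(z), with E(z) = H(z)/H0."""
    return (1.0 + z) ** 2 / math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def delta_X(z1, z2, n=10000):
    """Comoving absorption path between z1 and z2 (trapezoidal rule)."""
    h = (z2 - z1) / n
    edges = 0.5 * (dX_dz(z1) + dX_dz(z2))
    return (edges + sum(dX_dz(z1 + i * h) for i in range(1, n))) * h

# A single sight line searched over z = 2-3 contributes Delta X ~ 3.5,
# so of order 40 such spectra build up a total path of ~130.
```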
## 2 Sample
### 2.1 Definition and properties
Figure 1: Redshift and flux density (1.4 GHz) distributions for our MIR
selected sample. The vertical dashed lines mark the median for each
distribution.
The targets for the uGMRT survey are drawn from the SALT-NOT sample of 299
AGNs constructed for MALS. The SALT-NOT sample is selected on the basis of MIR
colors from WISE. We defined the following color wedge based on the first
three bands of WISE i.e., $W1$ (3.4 $\mu$m), $W2$ (4.6 $\mu$m), $W3$ (12
$\mu$m),
$W_{1}-W_{2}<1.3\times(W_{2}-W_{3})-3.04;\quad W_{1}-W_{2}>0.6.$ (1)
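A direct implementation of this wedge test (the example colors below are illustrative, not drawn from the sample):

```python
def in_mir_wedge(w1, w2, w3):
    """WISE color wedge of Equation 1, using the W1 (3.4 um),
    W2 (4.6 um) and W3 (12 um) magnitudes."""
    return (w1 - w2 < 1.3 * (w2 - w3) - 3.04) and (w1 - w2 > 0.6)

# A red, quasar-like color (W1-W2 = 0.8, W2-W3 = 3.0) falls in the wedge;
# a flat, star-like color (W1-W2 = 0.2) is rejected by the W1-W2 > 0.6 cut.
passes = in_mir_wedge(14.0, 13.2, 10.2)   # True
fails = in_mir_wedge(10.0, 9.8, 9.0)      # False
```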
As shown in Fig. 1 of Krogager et al. (2018), the MIR-wedge defined above is
optimised towards identifying most powerful AGNs (i.e., quasars) at $z>1.4$.
The details of the SALT-NOT target selection process will be presented in a future
paper. In short, we cross-correlated AllWISE catalog (Cutri & et al., 2014)
and radio sources brighter than $200$ mJy in the NRAO VLA Sky Survey (NVSS;
Condon et al., 1998), to identify 2011 high-probability quasar candidates
satisfying the MIR wedge (Equation 1). We restricted the sample to declination
$<+20^{\circ}$ to ensure reasonable observability with the MeerKAT telescope.
A search radius of $10^{\prime\prime}$ for WISE-NVSS cross-matching was used
but all the coincidences were verified using higher spatial resolution quick
look radio images at 3 GHz from the Very Large Array Sky Survey (VLASS; Lacy
et al., 2020). These quick look images have a spatial resolution of $\sim
2.5^{\prime\prime}$ and the positional accuracy is limited to $\sim
0.5^{\prime\prime}$. Consequently, our sample preferentially selects compact,
core-dominated AGNs. We observed 299 candidates using SALT and NOT to measure
redshifts and confirm the AGN nature. This optical spectroscopic campaign has
led to a sample of AGNs which can be split into the following three categories:
(i) with emission lines in the optical spectrum (250 objects with confirmed
redshifts at $0.1<z<5.1$), (ii) with no emission lines in the optical spectrum
(25), and (iii) empty fields, i.e., the radio continuum peak coincides with the MIR
source but neither an emission line nor a continuum source is detected in
optical spectra and images.
The uGMRT Band-3 covers 250 - 500 MHz, which is suitable to search for H i
21-cm absorption over $1.9<z<4.7$. It nicely complements the MALS coverage of
$z<1.4$. For the uGMRT survey presented here we selected all the 98 objects at
$z>2$ from the SALT-NOT sample. In the allocated observing time we observed 88
of these which are listed in Table LABEL:tab:wisesamp. The redshift
(median$\sim$2.5) and 1.4 GHz flux density (median$\sim$288 mJy) distributions
are presented in Fig. 1. The 1.4 GHz spectral luminosities are in the range of
log $L_{\rm 1.4\,GHz}/({\rm W\,Hz^{-1}})\simeq 27-29.3$. The lower end of the
luminosity range is well above the cut-off that separates FR I and FR II radio sources, and
the upper end corresponds to the most luminous radio-loud AGN at $z>5$
discovered from the SALT-NOT survey. All except one are spectroscopically
confirmed quasars. The details of radio galaxy M1540-1453 are presented by
Shukla et al. (2021).
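The quoted spectral luminosities follow from the NVSS flux densities and redshifts. A sketch, assuming the paper's cosmology and a standard K-correction $L_{\nu}=4\pi d_{L}^{2}S_{\nu}(1+z)^{-(1+\alpha)}$ with $\alpha=-0.8$ (the K-correction convention is ours; the text does not spell it out):

```python
import math

H0, OM, OL = 71.0, 0.27, 0.73   # paper's cosmology (H0 in km/s/Mpc)
C_KMS = 299792.458              # speed of light, km/s
MPC_M = 3.0857e22               # metres per Mpc

def E(z):
    return math.sqrt(OM * (1.0 + z) ** 3 + OL)

def lum_distance_m(z, n=20000):
    """Luminosity distance in metres (flat LCDM, trapezoidal rule)."""
    h = z / n
    s = 0.5 * (1.0 + 1.0 / E(z)) + sum(1.0 / E(i * h) for i in range(1, n))
    return (1.0 + z) * (C_KMS / H0) * s * h * MPC_M

def log_L_14(S_mJy, z, alpha=-0.8):
    """log10 of the rest-frame 1.4 GHz luminosity (W/Hz) for a source
    of observed 1.4 GHz flux density S_mJy at redshift z."""
    S = S_mJy * 1e-29   # mJy -> W m^-2 Hz^-1
    L = 4.0 * math.pi * lum_distance_m(z) ** 2 * S * (1.0 + z) ** (-(1.0 + alpha))
    return math.log10(L)

# The median sample values (S ~ 288 mJy, z ~ 2.5) give log L ~ 28.1,
# within the quoted range of 27-29.3.
```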
With the right sample, it is possible to determine the evolution of cold gas
in a dust-unbiased way. Therefore, we next examine the efficacy of our sample
selection strategy by comparing it with samples of DLAs from radio-selected
quasars.
### 2.2 Comparison with radio-selected DLA samples
The three notable DLA samples based on radio-selected quasars are: (i) the
Complete Optical and Radio Absorption Line System (CORALS) survey of 66 QSOs
($z_{em}>2.2$) by Ellison et al. (2001), (ii) the University of California San
Diego (UCSD) survey of 53 QSOs ($z_{em}>2.0$) for DLAs by Jorgenson et al.
(2006), and (iii) the survey of 45 QSOs ($z_{em}>2.4$) selected from the Texas
radio survey (Ellison et al., 2008). These surveys revealed 19, 7 and 9 DLAs
over redshift paths, $\Delta z$, of 57.16, 41.15 and 38.79, respectively. The
number of DLAs per unit redshift, $n_{\rm DLA}$, are estimated to be
$0.31^{+0.09}_{-0.08}$, $0.17^{+0.08}_{-0.07}$ and $0.23^{+0.11}_{-0.07}$,
respectively. The CORALS survey found a slightly higher incidence of DLAs and
suggested that optically-selected DLA samples may be affected by dust-bias.
But overall none of the surveys uncovered a population of dusty DLAs.
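The quoted incidences are essentially the DLA counts divided by the surveyed redshift paths; the asymmetric errors come from the small-number confidence intervals of the original papers. A quick consistency check with symmetric $\sqrt{N}$ errors (the naive CORALS ratio, 0.33, differs slightly from the published 0.31, presumably reflecting details of that analysis):

```python
import math

def incidence(n_dla, delta_z):
    """Number of DLAs per unit redshift, with a sqrt(N) Poisson error."""
    return n_dla / delta_z, math.sqrt(n_dla) / delta_z

# (survey, DLA count, redshift path) for the three radio-selected samples
for name, n, dz in [("CORALS", 19, 57.16), ("UCSD", 7, 41.15), ("Texas", 9, 38.79)]:
    rate, err = incidence(n, dz)
    print(f"{name}: n_DLA = {rate:.2f} +/- {err:.2f}")
```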
Figure 2: Comparison of optical properties of the high-redshift ($z\gtrsim
2$), radio-selected quasar surveys: SALT-NOT $z>2$ (this work), Ellison et al.
(2008), UCSD (Jorgenson et al., 2006), and CORALS (Ellison et al., 2001). The
black line indicates the cumulative distribution of $i$-band magnitudes in the
respective surveys, and the blue line shows the modelled distribution taking
into account the survey radio flux limit and spectroscopic follow-up criterion
of $B<22$ mag (see text). The median $i$-band magnitude of each sample is
given in the upper left corner.
Targets for these three surveys have been selected at different radio
frequencies and to different radio flux limits. While such differences might
be subtle, they may still affect the optical properties of the quasars and
hence the resulting statistics of DLAs. The CORALS survey has been selected at
2.7 GHz down to a flux density limit of 250 mJy; the UCSD sample has been
selected at 4.9 GHz to a flux limit of 350 mJy; and lastly, the survey by
Ellison et al. (2008) has been selected at 356 MHz down to a flux density
limit of 400 mJy.
In order to compare the effects of the radio-selection, we generate a mock
distribution of $i$-band magnitudes for the three samples as well as for the
SALT-NOT sample presented in this work. The intrinsic ultraviolet luminosity
function is assumed to be the same in all cases and is taken from the work by
Manti et al. (2017). We assume a fixed distribution of the optical-to-radio
flux ratio, $R_{i}$, following Baloković et al. (2012) as well as a fixed
radio slope of $\alpha_{\nu}=-0.8$, in order to scale the various survey
limits to the same frequency (1.4 GHz) as used in our survey and as used by
Baloković et al. (2012). Since all the surveys impose roughly the same optical
follow-up strategy in order to detect DLAs in low-resolution spectra, we
impose a final cut on $B<22$ mag. For this cut, we use an average color
correction for high-redshift QSOs: $B=i+0.3$ with a scatter of 0.1 mag (see
color relations by Krogager et al., 2019). The resulting mock magnitude
distribution is shown in Fig. 2 (blue curve) compared to the respective survey
data (in black). While all surveys span a wide range of magnitudes, our survey
more closely samples the underlying luminosity function and hence introduces a
minimal bias in the optical properties of the sample. This is a direct
consequence of the fact that the SALT-NOT survey has targeted optically fainter
quasars (refer to the median $i$-band magnitudes in Fig. 2). An analysis of dust-bias in
the sample using optical-infrared colors will be presented in a future paper.
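The frequency scaling used above assumes $S_{\nu}\propto\nu^{\alpha_{\nu}}$ with $\alpha_{\nu}=-0.8$; the equivalent 1.4 GHz limits computed below are our own illustration:

```python
ALPHA = -0.8   # radio spectral slope assumed in the text, S_nu ~ nu^alpha

def scale_to_14ghz(S_limit_mJy, nu_GHz):
    """Scale a survey flux-density limit from nu_GHz to 1.4 GHz."""
    return S_limit_mJy * (1.4 / nu_GHz) ** ALPHA

# Selection limits quoted above: CORALS (250 mJy at 2.7 GHz), UCSD
# (350 mJy at 4.9 GHz) and the Texas survey (400 mJy at 356 MHz).
for name, S, nu in [("CORALS", 250.0, 2.7), ("UCSD", 350.0, 4.9), ("Texas", 400.0, 0.356)]:
    print(f"{name}: ~{scale_to_14ghz(S, nu):.0f} mJy at 1.4 GHz")
```

With a steep spectrum, the low-frequency Texas selection corresponds to the faintest equivalent 1.4 GHz limit, while the 4.9 GHz UCSD selection corresponds to the brightest.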
Table 1: Sample of $z>2$ MIR-selected radio sources (88) observed with uGMRT.
Source name | F${}_{1.4\,GHz}$ | $z_{em}$ | Obs. run | Beam | F${}_{\rm p,420\,MHz}$ | F${}_{\rm 420\,MHz}$ | F${}_{\rm p,420\,MHz}$/F${}_{\rm 420\,MHz}$ | $\alpha^{1.4}_{0.4}$ | $\alpha_{inband}$ | $\Delta$F | Ncand
---|---|---|---|---|---|---|---|---|---|---|---
| (mJy) | | | | (mJy b$^{-1}$) | (mJy) | | | | (mJy b$^{-1}$) |
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) | (9) | (10) | (11) | (12)
M004243.06$+$124657.6 | 635.0 | 2.150 | 16SEP | $7.0^{\prime\prime}\times 6.3^{\prime\prime},-12.0^{\circ}$ | 1628.8 | 1882.5 | 0.87 | $-$0.87 | $-$0.86 | 1.9 | -
M005315.65$-$070233.4 | 248.2 | 2.130 | 16SEP | $7.3^{\prime\prime}\times 6.5^{\prime\prime},+17.0^{\circ}$ | 546.1 | 536.5 | 1.02 | $-$0.62 | $-$0.61 | 2.8 | -
M013047.38$-$172505.6 | 250.3 | 2.528 | 14SEP | $9.9^{\prime\prime}\times 6.9^{\prime\prime},+40.0^{\circ}$ | 532.4 | 560.2 | 0.95 | $-$0.64 | $-$0.66 | 1.5 | -
M021231.86$-$382256.6 | 244.5 | 2.260 | 14SEP | $17.5^{\prime\prime}\times 6.8^{\prime\prime},+33.0^{\circ}$ | 495.7 | 605.4 | 0.82 | $-$0.72 | $-$0.71 | 1.6 | 1
M022613.72$+$093726.3 | 374.6 | 2.605 | 14SEP | $8.9^{\prime\prime}\times 7.0^{\prime\prime},+87.0^{\circ}$ | 435.1 | 436.6 | 1.0 | $-$0.12 | $-$0.02 | 2.1 | -
M022639.92$+$194110.1 | 209.8 | 2.190 | 14SEP | $10.4^{\prime\prime}\times 6.7^{\prime\prime},-83.0^{\circ}$ | 390.5 | 381.7 | 1.02 | $-$0.48 | $-$0.41 | 1.9 | -
M024939.93$+$044028.9 | 420.5 | 2.008 | 17SEP | $9.1^{\prime\prime}\times 7.6^{\prime\prime},+89.0^{\circ}$ | 927.7 | 992.7 | 0.93 | $-$0.69 | $-$0.67 | 1.6 | -
M025035.54$-$262743.1 | 389.2 | 2.918 | 17SEP | $11.0^{\prime\prime}\times 6.7^{\prime\prime},+25.0^{\circ}$ | 389.6 | 419.0 | 0.93 | $-$0.06 | $+$0.01 | 1.7 | 1
M032808.59$-$015220.2 | 221.9 | 2.679 | 17SEP | $12.0^{\prime\prime}\times 8.0^{\prime\prime},+72.0^{\circ}$ | 370.2 | 527.3 | 0.70 | $-$0.69 | $-$0.63 | 1.3 | -
M041620.54$-$333931.3 | 264.1 | 3.045 | 08SEP | $17.5^{\prime\prime}\times 9.0^{\prime\prime},-42.0^{\circ}$ | 130.0 | 117.8 | 1.1 | $+$0.64 | $+$0.51 | 1.7 | -
M042248.53$-$203456.6 | 224.3 | 2.582 | 08SEP | $11.8^{\prime\prime}\times 8.4^{\prime\prime},-58.0^{\circ}$ | 187.3 | 192.1 | 0.98 | $+$0.12 | $+$0.16 | 1.7 | 1
M044849.48$-$093531.3 | 240.9 | 2.079 | 14SEP | $9.3^{\prime\prime}\times 7.2^{\prime\prime},+46.0^{\circ}$ | 152.5 | 147.3 | 1.04 | $+$0.39 | $+$0.39 | 1.6 | -
M050725.04$-$362442.9 | 212.4 | 2.930 | 08SEP | $15.9^{\prime\prime}\times 9.6^{\prime\prime},-35.0^{\circ}$ | 494.5 | 474.5 | 1.04 | $-$0.64 | $-$0.51 | 1.6 | -
M051240.99$+$151723.8 | 966.5 | 2.568 | 07SEP | $6.8^{\prime\prime}\times 6.5^{\prime\prime},-79.0^{\circ}$ | 560.1 | 595.9 | 0.94 | $+$0.39 | $+$0.27 | 1.8 | -
M051340.03$+$010023.6 | 447.0 | 2.673 | 07SEP | $7.7^{\prime\prime}\times 6.6^{\prime\prime},+64.0^{\circ}$ | 342.5 | 349.7 | 0.98 | $+$0.20 | $-$0.04 | 2.1 | 4
M051511.18$-$012002.4 | 288.8 | 2.287 | 07SEP | $8.9^{\prime\prime}\times 6.4^{\prime\prime},+59.0^{\circ}$ | 412.6 | 641.5 | 0.64 | $-$0.64 | $-$0.68 | 1.6 | 1
M051656.35$+$073252.7 | 231.7 | 2.594 | 07SEP | $10.6^{\prime\prime}\times 6.9^{\prime\prime},+90.0^{\circ}$ | 44.2 | 44.1 | 1.0 | $+$1.32 | $+$1.10 | 1.9 | -
M052318.55$-$261409.6 | 1354.9 | 3.110 | 08SEP | $12.0^{\prime\prime}\times 9.1^{\prime\prime},-52.0^{\circ}$ | 477.7 | 451.8 | 1.06 | $+$0.88 | $+$1.08 | 2.2 | -
M061038.80$-$230145.6 | 360.2 | 2.829 | 14SEP | $11.0^{\prime\prime}\times 7.0^{\prime\prime},+31.0^{\circ}$ | 130.5 | 129.0 | 1.01 | $+$0.82 | $+$0.89 | 1.7 | -
M061856.02$-$315835.2 | 346.1 | 2.134 | 09SEP | $10.8^{\prime\prime}\times 6.3^{\prime\prime},+0.0^{\circ}$ | 877.1 | 828.6 | 1.06 | $-$0.70 | $-$0.35 | 2.0 | 1
M063602.28$-$311312.5 | 262.1 | 2.654 | 09SEP | $19.2^{\prime\prime}\times 11.1^{\prime\prime},+33.0^{\circ}$ | 178.7 | 162.4 | 1.1 | $+$0.38 | $+$0.42 | 3.1 | -
M063613.53$-$310646.3 | 208.0 | 2.757 | 09SEP | $11.6^{\prime\prime}\times 6.9^{\prime\prime},+17.0^{\circ}$ | 436.2 | 474.2 | 0.92 | $-$0.66 | $-$0.73 | 2.1 | -
M065254.73$-$323022.6† | 322.1 | 2.239 | 08SEP | $11.4^{\prime\prime}\times 10.1^{\prime\prime},+5.0^{\circ}$ | 475.9 | 611.1 | 0.78 | $-$0.85 | $-$0.95 | 2.3 | -
| | | | | 279.0 | 328.2 | 0.85 | | | 2.3 | -
M070249.30$-$330205.0 | 314.6 | 2.410 | 08SEP | $12.3^{\prime\prime}\times 9.8^{\prime\prime},+33.0^{\circ}$ | 574.9 | 599.8 | 0.96 | $-$0.52 | $-$0.52 | 2.8 | 1
M073159.01$+$143336.3 | 316.5 | 2.632 | 16SEP | $8.8^{\prime\prime}\times 8.1^{\prime\prime},-49.0^{\circ}$ | 180.0 | 185.4 | 0.97 | $+$0.43 | $+$0.40 | 7.8 | -
M073714.60$-$382841.9 | 219.3 | 2.107 | 14SEP | $14.5^{\prime\prime}\times 6.6^{\prime\prime},+18.0^{\circ}$ | 498.9 | 515.3 | 0.97 | $-$0.68 | $-$0.67 | 2.4 | -
M080804.34$+$005708.2 | 317.0 | 3.133 | 08SEP | $14.3^{\prime\prime}\times 7.1^{\prime\prime},-73.0^{\circ}$ | 434.8 | 450.7 | 0.96 | $-$0.28 | $-$0.26 | 2.1 | -
M081936.62$-$063047.9 | 280.0 | 2.507 | 08SEP | $11.9^{\prime\prime}\times 6.7^{\prime\prime},-73.0^{\circ}$ | 339.0 | 354.9 | 0.96 | $-$0.19 | $-$0.08 | 2.3 | -
M085826.92$-$260721.0 | 404.5 | 2.036 | 09SEP | $17.3^{\prime\prime}\times 8.5^{\prime\prime},-19.0^{\circ}$ | 561.0 | 523.9 | 1.07 | $-$0.21 | $-$0.28 | 2.6 | -
M090910.66$-$163753.8 | 340.1 | 2.475 | 09SEP | $9.0^{\prime\prime}\times 7.3^{\prime\prime},-11.0^{\circ}$ | 776.2 | 909.7 | 0.85 | $-$0.79 | $-$0.94 | 1.7 | -
M091051.01$-$052626.8† | 337.9 | 2.395 | 16SEP | $9.6^{\prime\prime}\times 7.2^{\prime\prime},-44.0^{\circ}$ | 151.4 | 166.3 | 0.91 | $+$0.20 | $+$0.29 | 1.6 | -
| | | | | 80.0 | 97.3 | 0.82 | | | 1.6 | -
M095231.66$-$245349.1 | 209.5 | 2.626 | 14SEP | $10.4^{\prime\prime}\times 7.0^{\prime\prime},+5.0^{\circ}$ | 224.2 | 210.3 | 1.07 | $-$0.00 | $-$0.17 | 1.8 | -
M100715.18$-$124746.7 | 381.1 | 2.113 | 16SEP | $7.7^{\prime\prime}\times 6.2^{\prime\prime},-24.0^{\circ}$ | 476.6 | 470.0 | 1.01 | $-$0.17 | $-$0.21 | 2.1 | -
M101313.10$-$254654.7 | 248.8 | 2.965 | 14SEP | $10.6^{\prime\prime}\times 6.9^{\prime\prime},+10.0^{\circ}$ | 216.7 | 218.6 | 0.99 | $+$0.10 | $-$0.0 | 2.1 | -
M102548.76$-$042933.0 | 363.5 | 2.292 | 16SEP | $6.9^{\prime\prime}\times 6.7^{\prime\prime},+89.0^{\circ}$ | 356.3 | 391.6 | 0.91 | $-$0.06 | $-$0.03 | 2.5 | -
M104314.53$-$232317.5 | 212.1 | 2.881 | 07SEP | $8.9^{\prime\prime}\times 6.5^{\prime\prime},-10.0^{\circ}$ | 460.6 | 505.8 | 0.91 | $-$0.69 | $-$0.79 | 2.2 | -
M111820.61$-$305459.0 | 233.2 | 2.352 | 08SEP | $18.5^{\prime\prime}\times 8.4^{\prime\prime},-51.0^{\circ}$ | 64.8 | 70.0 | 0.93 | $+$0.96 | $+$0.10 | 3.0 | -
M111917.36$-$052707.9 | 1174.4 | 2.651 | 07SEP | $7.1^{\prime\prime}\times 6.3^{\prime\prime},-48.0^{\circ}$ | 1696.6 | 1793.2 | 0.95 | $-$0.34 | $-$0.47 | 3.6 | -
M112402.56$-$150159.1 | 261.9 | 2.551 | 07SEP | $9.0^{\prime\prime}\times 6.9^{\prime\prime},-23.0^{\circ}$ | 194.7 | 196.0 | 0.99 | $+$0.23 | $+$0.01 | 2.6 | -
M114226.58$-$263313.7 | 294.7 | 3.237 | 08SEP | $15.5^{\prime\prime}\times 8.3^{\prime\prime},-56.0^{\circ}$ | 293.6 | 340.7 | 0.86 | $-$0.35 | $-$0.31 | 4.0 | -
M115222.04$-$270126.3 | 238.2 | 2.703 | 08SEP | $13.3^{\prime\prime}\times 8.6^{\prime\prime},-60.0^{\circ}$ | 410.4 | 441.4 | 0.93 | $-$0.49 | $-$0.64 | 2.5 | -
M115306.72$-$044254.5† | 684.8 | 2.591 | 16SEP | $7.0^{\prime\prime}\times 6.5^{\prime\prime},+86.0^{\circ}$ | 929.3 | 899.1 | 1.03 | $-$0.78 | $-$0.89 | 2.7 | -
| | | | | 980.1 | 912.2 | 1.07 | | | 2.7 | -
M120632.23$-$071452.6 | 698.8 | 2.263 | 16SEP | $8.0^{\prime\prime}\times 6.2^{\prime\prime},-9.0^{\circ}$ | 1402.8 | 1267.9 | 1.10 | $-$0.48 | $-$0.60 | 2.7 | -
M121514.42$-$062803.5 | 360.4 | 3.218 | 16SEP | $9.2^{\prime\prime}\times 6.6^{\prime\prime},-8.0^{\circ}$ | 511.1 | 461.5 | 1.11 | $-$0.20 | $-$0.31 | 2.8 | -
M123150.30$-$123637.5 | 276.0 | 2.106 | 07SEP | $7.2^{\prime\prime}\times 6.5^{\prime\prime},-17.0^{\circ}$ | 159.7 | 205.1 | 0.78 | $+$0.24 | $+$0.36 | 1.8 | -
M123410.08$-$332638.5 | 297.9 | 2.820 | 08SEP | $15.8^{\prime\prime}\times 9.7^{\prime\prime},-57.0^{\circ}$ | 665.4 | 710.4 | 0.94 | $-$0.69 | $-$0.72 | 3.4 | -
M124448.99$-$044610.2 | 384.9 | 3.104 | 16SEP | $9.1^{\prime\prime}\times 7.7^{\prime\prime},+28.0^{\circ}$ | 677.2 | 616.1 | 1.1 | $-$0.38 | $-$0.40 | 2.3 | 1
M125442.98$-$383356.4 | 219.2 | 2.776 | 17SEP | $14.7^{\prime\prime}\times 7.0^{\prime\prime},+7.0^{\circ}$ | 297.4 | 300.9 | 0.99 | $-$0.25 | $-$0.25 | 2.3 | -
M125611.49$-$214411.7 | 260.7 | 2.178 | 16SEP | $10.3^{\prime\prime}\times 6.5^{\prime\prime},+13.0^{\circ}$ | 446.8 | 455.4 | 0.98 | $-$0.45 | $-$0.15 | 1.7 | -
M131207.86$-$202652.4 | 778.1 | 5.064 | 16SEP | $10.5^{\prime\prime}\times 6.7^{\prime\prime},+21.0^{\circ}$ | 1727.9 | 1721.9 | 1.0 | $-$0.63 | $-$0.62 | 2.0 | 1
| | | 21APR‡ | $15.8^{\prime\prime}\times 13.8^{\prime\prime},-38.0^{\circ{\ddagger}}$ | - | 2577.0‡ | - | - | - | - | -
M132657.20$-$280831.4 | 404.5 | 2.238 | 16SEP | $15.0^{\prime\prime}\times 7.0^{\prime\prime},+20.0^{\circ}$ | 671.3 | 874.8 | 0.77 | $-$0.62 | $-$0.83 | 2.7 | -
M135131.98$-$101932.9 | 726.1 | 2.999 | 16SEP | $9.2^{\prime\prime}\times 7.0^{\prime\prime},+47.0^{\circ}$ | 1338.3 | 1886.5 | 0.71 | $-$0.76 | $-$0.79 | 2.0 | -
M141327.20$-$342235.1 | 274.7 | 2.812 | 09SEP | $20.9^{\prime\prime}\times 6.7^{\prime\prime},-42.0^{\circ}$ | 78.2 | 76.4 | 1.02 | $+$1.02 | $+$0.81 | 3.9 | -
M143709.04$-$294718.5 | 273.8 | 2.331 | 09SEP | $14.6^{\prime\prime}\times 7.0^{\prime\prime},-44.0^{\circ}$ | 333.0 | 456.1 | 0.73 | $-$0.41 | $-$0.49 | 2.5 | -
M144851.10$-$112215.6 | 455.5 | 2.630 | 07SEP | $7.3^{\prime\prime}\times 6.0^{\prime\prime},-48.0^{\circ}$ | 967.0 | 896.8 | 1.08 | $-$0.54 | $-$0.71 | 3.1 | 1
M145342.95$-$132735.2 | 254.5 | 2.370 | 07SEP | $7.6^{\prime\prime}\times 6.2^{\prime\prime},-33.0^{\circ}$ | 477.3 | 634.7 | 0.75 | $-$0.73 | $-$0.83 | 2.2 | -
M145502.84$-$170014.2 | 294.7 | 2.291 | 07SEP | $8.3^{\prime\prime}\times 6.4^{\prime\prime},-13.0^{\circ}$ | 345.8 | 352.2 | 0.98 | $-$0.14 | $-$0.23 | 2.3 | -
M145625.83$+$045645.2 | 287.9 | 2.134 | 09SEP | $7.1^{\prime\prime}\times 6.8^{\prime\prime},+87.0^{\circ}$ | 800.8 | 813.2 | 0.98 | $-$0.83 | $-$0.82 | 2.6 | -
M145908.92$-$164542.3 | 378.9 | 2.006 | 07SEP | $8.5^{\prime\prime}\times 6.6^{\prime\prime},-9.0^{\circ}$ | 853.1 | 909.6 | 0.94 | $-$0.70 | $-$0.85 | 2.1 | -
M150425.30$+$081858.6 | 210.8 | 2.035 | 09SEP | $8.4^{\prime\prime}\times 7.6^{\prime\prime},-32.0^{\circ}$ | 122.1 | 138.5 | 0.88 | $+$0.34 | $+$0.27 | 4.8 | -
M151129.01$-$072255.3 | 326.3 | 2.582 | 17SEP | $7.8^{\prime\prime}\times 6.8^{\prime\prime},-85.0^{\circ}$ | 672.7 | 624.9 | 1.08 | $-$0.52 | $-$0.71 | 2.7 | -
M151304.72$-$252439.7† | 217.6 | 3.132 | 09SEP | $9.1^{\prime\prime}\times 6.8^{\prime\prime},+1.0^{\circ}$ | 855.7 | 819.4 | 1.04 | $-$1.26 | $-$1.26 | 3.6 | -
| | | | | 268.5 | 242.1 | 1.11 | | | 3.6 | -
M151944.77$-$115144.6 | 441.0 | 2.014 | 17SEP | $9.2^{\prime\prime}\times 7.6^{\prime\prime},+84.0^{\circ}$ | 425.0 | 561.5 | 0.76 | $-$0.19 | $-$0.01 | 3.7 | -
M154015.23$-$145341.5 | 203.3 | 2.098 | 17SEP | $9.8^{\prime\prime}\times 7.5^{\prime\prime},+61.0^{\circ}$ | 642.4 | 595.0 | 1.08 | $-$0.86 | $-$0.98 | 2.6 | 1
M155825.35$-$215511.1 | 206.9 | 2.760 | 09SEP | $13.7^{\prime\prime}\times 6.7^{\prime\prime},-42.0^{\circ}$ | 209.5 | 235.8 | 0.89 | $-$0.10 | $+$0.09 | 3.0 | -
M161907.44$-$093952.5 | 340.3 | 2.891 | 17SEP | $8.9^{\prime\prime}\times 6.8^{\prime\prime},+66.0^{\circ}$ | 757.0 | 695.5 | 1.09 | $-$0.57 | $-$0.43 | 2.2 | -
M162047.94$+$003653.2 | 317.8 | 2.438 | 17SEP | $10.7^{\prime\prime}\times 7.6^{\prime\prime},+2.0^{\circ}$ | 226.9 | 234.3 | 0.97 | $+$0.24 | $+$0.16 | 3.3 | -
M164950.51$+$062653.3 | 389.2 | 2.144 | 17SEP | $13.6^{\prime\prime}\times 9.3^{\prime\prime},-11.0^{\circ}$ | 154.3 | 204.3 | 0.76 | $+$0.51 | $+$0.36 | 2.6 | -
M165038.03$-$124854.5 | 275.5 | 2.527 | 09SEP | $7.3^{\prime\prime}\times 6.2^{\prime\prime},-20.0^{\circ}$ | 729.6 | 675.1 | 1.08 | $-$0.72 | $-$0.52 | 5.0 | -
M165435.38$+$001719.2 | 255.3 | 2.363 | 07SEP | $9.2^{\prime\prime}\times 7.3^{\prime\prime},-14.0^{\circ}$ | 405.7 | 381.5 | 1.06 | $-$0.32 | $-$0.44 | 4.2 | -
M194110.28$-$300720.9 | 315.0 | 2.059 | 17SEP | $23.6^{\prime\prime}\times 8.3^{\prime\prime},+51.0^{\circ}$ | 164.0 | 227.2 | 0.72 | $+$0.26 | $+$0.40 | 3.6 | -
M200209.37$-$145531.8 | 620.3 | 2.192 | 17SEP | $8.6^{\prime\prime}\times 7.1^{\prime\prime},-17.0^{\circ}$ | 942.0 | 896.1 | 1.05 | $-$0.29 | $-$0.31 | 2.9 | -
M201708.96$-$293354.7 | 327.2 | 2.617 | 17SEP | $20.5^{\prime\prime}\times 7.3^{\prime\prime},+50.0^{\circ}$ | 1044.7 | 1201.1 | 0.87 | $-$1.04 | $-$1.02 | 2.7 | 1
M203425.65$-$052332.2 | 419.7 | 2.070 | 17SEP | $8.0^{\prime\prime}\times 7.3^{\prime\prime},+85.0^{\circ}$ | 366.2 | 407.2 | 0.9 | $+$0.02 | $+$0.10 | 4.0 | -
M204737.67$-$184141.2 | 241.7 | 2.994 | 16SEP | $13.8^{\prime\prime}\times 6.7^{\prime\prime},-49.0^{\circ}$ | 273.0 | 284.3 | 0.96 | $-$0.13 | $-$0.17 | 2.2 | -
M205245.03$-$223410.6 | 330.9 | 2.072 | 16SEP | $12.3^{\prime\prime}\times 6.0^{\prime\prime},-41.0^{\circ}$ | 631.8 | 608.3 | 1.04 | $-$0.49 | $-$0.60 | 3.5 | -
M210143.29$-$174759.2 | 959.5 | 2.803 | 16SEP | $9.5^{\prime\prime}\times 5.8^{\prime\prime},-39.0^{\circ}$ | 2554.1 | 2477.5 | 1.03 | $-$0.76 | $-$0.91 | 3.6 | -
M212821.83$-$150453.2 | 245.5 | 2.547 | 17SEP | $9.2^{\prime\prime}\times 6.9^{\prime\prime},+6.0^{\circ}$ | 443.4 | 460.8 | 0.96 | $-$0.50 | $-$0.49 | 2.0 | -
M220127.50$+$031215.6 | 300.5 | 2.181 | 17SEP | $18.3^{\prime\prime}\times 8.6^{\prime\prime},+77.0^{\circ}$ | 180.5 | 191.6 | 0.94 | $+$0.36 | $+$0.33 | 1.7 | -
M222332.81$-$310117.3 | 231.7 | 3.206 | 17SEP | $25.0^{\prime\prime}\times 9.2^{\prime\prime},+40.0^{\circ}$ | 230.6 | 330.2 | 0.7 | $-$0.28 | $-$0.40 | 2.6 | -
M223816.27$-$124036.4 | 213.6 | 2.623 | 16SEP | $12.3^{\prime\prime}\times 8.3^{\prime\prime},-50.0^{\circ}$ | 464.3 | 435.5 | 1.07 | $-$0.57 | $-$0.65 | 3.7 | -
M224111.48$-$244239.0 | 211.4 | 2.242 | 16SEP | $10.9^{\prime\prime}\times 6.1^{\prime\prime},-31.0^{\circ}$ | 253.0 | 254.8 | 0.99 | $-$0.15 | $-$0.17 | 2.2 | -
M224705.52$+$121151.4 | 223.7 | 2.185 | 14SEP | $12.1^{\prime\prime}\times 7.5^{\prime\prime},+83.0^{\circ}$ | 474.3 | 489.1 | 0.97 | $-$0.62 | $-$0.60 | 1.9 | -
M224950.57$-$263459.6 | 228.8 | 2.174 | 16SEP | $9.8^{\prime\prime}\times 5.7^{\prime\prime},-23.0^{\circ}$ | 568.3 | 542.1 | 1.05 | $-$0.69 | $-$0.59 | 2.4 | -
M230036.41$+$194002.9 | 210.4 | 2.160 | 17SEP | $27.7^{\prime\prime}\times 8.2^{\prime\prime},+79.0^{\circ}$ | 475.8 | 556.0 | 0.86 | $-$0.78 | $-$0.65 | 11.2 | -
M231634.61$+$042940.2 | 214.0 | 2.180 | 16SEP | $6.9^{\prime\prime}\times 6.6^{\prime\prime},+47.0^{\circ}$ | 103.3 | 97.9 | 1.06 | $+$0.70 | $+$0.50 | 3.0 | -
M234910.12$-$043803.2 | 206.1 | 2.240 | 16SEP | $7.7^{\prime\prime}\times 6.8^{\prime\prime},-62.0^{\circ}$ | 168.3 | 185.6 | 0.91 | $+$0.08 | $-$0.04 | 5.1 | -
M235722.47$-$073134.3 | 235.5 | 2.764 | 16SEP | $7.0^{\prime\prime}\times 6.0^{\prime\prime},-19.0^{\circ}$ | 372.4 | 398.2 | 0.94 | $-$0.42 | $-$0.40 | 2.4 | -
Note. — Column 1: source name based on right ascension and declination (J2000)
from NVSS. Column 2: 20 cm flux density from NVSS. Column 3: emission line
redshift measured from the SALT-NOT survey. Column 4: observing run (see Table
LABEL:tab:obslog). Note that only M1312-2026 was also observed at Band-2.
Columns 5 - 7: synthesised beam, peak flux density of the most prominent
Gaussian component and total flux density, respectively, from the continuum
image based on the 390-450 MHz range (average 420 MHz). Column 8: ratio of
columns 6 and 7. In a few cases, all of which are sources with a
single-component fit, this ratio marginally ($\sim$5%) exceeds 1, suggesting
that the radio emission may be partially resolved. Column 9: spectral index
derived using the NVSS and 420 MHz flux densities. Column 10: in-band spectral
index. Column 11: observed spectral rms at 420 MHz. Column 12: number of
absorption candidates.
${\dagger}$: The radio source is double lobed in the 420 MHz image (see Section
3.1 for details). ${\ddagger}$: Corresponds to Band-2.
## 3 Observations and data analysis
### 3.1 Observations
We used the recently commissioned Band-2 (120-240 MHz) and Band-3 (250-500
MHz) of uGMRT to observe redshifted associated and intervening H i 21-cm
absorption lines from the sample. The total allocated time including all
overheads for the survey observations was 90 hrs. The Band-3 observations were
split into 6 observing runs in September 2018 (see Table LABEL:tab:obslog).
For these, we used the GMRT Wideband Backend (GWB) with a baseband bandwidth
of 200 MHz covering 300-500 MHz, split into 8192 frequency channels. This
corresponds to a redshift coverage of 1.84 - 3.73 for the H i 21-cm line. The
channel resolution is 24.414 kHz, which at 400 MHz provides a velocity
resolution of 18.3 km s-1. Each target was typically observed for 30-45 mins.
The details of which target sources were visited in which observing run are
summarized in column 4 of Table LABEL:tab:wisesamp.
For the Band-2 observations, which targeted only M1312-2026, the highest redshift
quasar in the sample, the GMRT Software Backend (GSB) was used to configure a
baseband bandwidth of 4.17 MHz split into 512 spectral channels. The observing
band was centered at 234.1 MHz (resolution$\sim$10 km s-1), the redshifted H i
21-cm line frequency of the source. The total on-source time was 4.2 hrs.
Additionally, five absorption candidates identified from the Band-3 survey
observations were reobserved on December 10, 2019 and February 20, 2020. We
used GWB with a bandwidth of 6.25 MHz centered at line frequency (details in
Section 3.3) and split into 4096 channels (resolution$\sim$1 km s-1). Each
candidate was observed for 3 hrs (on-source time $\sim$2.2 hrs).
For all the observations, only the parallel-hand correlations RR and LL were
recorded. During each observing run, 3C48, 3C147 and/or 3C286 were observed
for flux density and bandpass calibration. A complex gain calibrator was also
observed for each target source.
Table 2: Details of uGMRT observations.
Run ID | Band | Date† | Duration‡
---|---|---|---
Survey observations
21APR | Band-2 | 2018 April 21 | 7
07SEP | Band-3 | 2018 September 07 | 11
08SEP | ” | 2018 September 08 | 10
09SEP | ” | 2018 September 09 | 11
14SEP | ” | 2018 September 14 | 10
16SEP | ” | 2018 September 16 | 21
17SEP | ” | 2018 September 17 | 20
Follow-up observations
10DEC | ” | 2019 December 10 | 6
20FEB | ” | 2020 February 20 | 9
Note. — ${\dagger}$: Start date as per Indian Standard Time. ${\ddagger}$: In
hours.
Figure 3: uGMRT radio continuum (420 MHz) contours overlaid on PS1 $yig$
color composite images. For M0636-3106 and M0652-3230, the background image is
PS1 $i$-band and uGMRT 420 MHz, respectively. The contour levels are shown at
20$\times$(-1, 1, 2, 4, 8, …) mJy beam-1. The synthesized beams, shown at the
bottom-left corner of the images, and the peak and total flux densities are
provided in columns 5-7 of Table LABEL:tab:wisesamp, respectively. The
position of the WISE source is marked with a cross.
### 3.2 Data analysis
All the data were edited, calibrated and imaged using the Automated Radio
Telescope Imaging Pipeline (ARTIP) following the steps described in Gupta et
al. (2021). After flagging and calibration, the spectral line processing of
wideband Band-3 data was sped up by partitioning the 200 MHz bandwidth into
four 50 or 60 MHz wide spectral windows with an overlap of 10 MHz between the
adjacent windows. These spectral windows covered: 300-360 MHz, 350-410 MHz,
400-460 MHz and 450-500 MHz. The calibrated visibilities for each spectral
window were processed separately (independently) for RFI flagging, continuum
imaging and self-calibration, and continuum subtraction. The continuum
subtracted visibilities were imaged to obtain RR and LL spectral line cubes.
For this, a ‘common’ synthesized beam corresponding to the lowest frequency
spectral window was used.
The narrow-band datasets from the Band-2 survey and the Band-3 follow-up
observations were processed as a single 4.17 or 6.25 MHz wide spectral window,
respectively.
#### 3.2.1 Continuum analysis
For Band-3, the spectral window covering 390-450 MHz, hereafter identified by
its central reference frequency of 420 MHz, is least affected by RFI, yielding
the best possible continuum images from the data. We used the CASA task IMFIT
to model the radio continuum emission in these images as multiple Gaussian
components. The 9 cases requiring more than one Gaussian component are shown
in Fig. 3. Only in 4 cases, i.e., M0652$-$3230, M0910$-$0526, M1153$-$0442 and
M1513$-$2524, does the second component contain more than 20% of the total
flux density.
In columns 5 and 6 of Table LABEL:tab:wisesamp we list synthesized beams and
peak flux densities of the most prominent Gaussian component. For the above-
mentioned four sources the second component is also listed. In the remaining
cases the additional components are too faint ($<$50 mJy) to be useful for the
objectives of this paper, hence we do not list their individual properties.
The total flux density as estimated from the single or multiple component fit
is provided in column 7.
In Fig. 3, we also show optical images from Pan-STARRS1 (PS1) (Chambers et
al., 2016). Note that M0652-3230 is too far south to be covered by PS1. The
locations of the WISE MIR sources are also shown in the images. Owing to the
MIR-selection wedge described in Section 2, all but one radio source in our
sample are quasars. Indeed, the median spectral index $\alpha^{1.4}_{0.4}$
(the spectral index $\alpha$ is defined by the power law
$S_{\nu}\propto\nu^{\alpha}$), derived using the NVSS 1400 MHz and the uGMRT
420 MHz total flux densities, is $-0.38$ (see column 9 of Table
LABEL:tab:wisesamp and Fig. 4). As expected, this is flatter than the overall
radio source population, which has $\alpha\sim-0.8$. Thus, for our sample,
when the radio emission is dominated by a single component, we expect the AGN
to be located close to the peak of the radio emission. Where two prominent
radio components are present, i.e., a compact symmetric object (CSO; Conway,
2002) morphology, the AGN is expected to be located between them. In all but 2
cases (M0910-0526 and M1513-2524; details below), the optical/MIR counterpart
is at the location of the AGN expected from the radio morphology. As
previously mentioned, we have also verified these coincidences using higher
spatial resolution 3 GHz VLASS images.
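Given the power-law definition of the spectral index, the two-point index follows directly from the ratio of the two flux densities. A quick sketch, using illustrative values rather than numbers from our tables:

```python
import math

def spectral_index(s1_mjy, nu1_mhz, s2_mjy, nu2_mhz):
    """Spectral index alpha defined by S_nu ∝ nu**alpha, estimated
    from flux densities measured at two frequencies."""
    return math.log(s1_mjy / s2_mjy) / math.log(nu1_mhz / nu2_mhz)

# Illustrative values only (not from the tables): a source with
# 300 mJy at 1400 MHz (NVSS) and 475 mJy at 420 MHz (uGMRT).
alpha = spectral_index(300.0, 1400.0, 475.0, 420.0)
print(round(alpha, 2))  # -0.38
```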
Figure 4: Distributions of radio spectral index between 1400 and 420 MHz
($\alpha^{1.4}_{0.4}$) and 5$\sigma$ 21-cm optical depth ($\int\tau$dv). The
vertical dashed lines mark the median for each distribution. The dotted lines
correspond to $\alpha$ = -0.8 and $\int\tau_{21}$dv = 1.1 km s-1.
In the case of M0910-0526, the northern component could be an unrelated radio
source (Fig 3). We will exclude this component from the absorption line
statistics. M1513-2524, the only radio galaxy in the sample presented here, is
among the optically faintest ($r>23$ mag) sources in our survey. Of its two
radio components, one is closer to the MIR source (see Fig 3). We have
tentatively detected faint radio emission, i.e., a radio core, at the location
of the MIR source; for details, see the higher spatial resolution radio images
presented in Shukla et al. (2021). We will consider the eastern and western
radio components to be the two radio lobes of this radio galaxy.
For M1312$-$2026, the only target also observed in Band-2, the properties at
234 MHz are also provided in Table LABEL:tab:wisesamp. The associated radio
emission is compact with a deconvolved size of 15.8′′$\times$13.8′′ (position
angle = $-38.0^{\circ}$). Based on the observed flux densities, M1312-2026
has a spectral luminosity of L(1.4 GHz) = $1.2\times 10^{29}$ W Hz-1, which is
more than three orders of magnitude higher than the radio power cut-off that
separates FRI and FRII radio sources, and greater than the luminosity of any
known radio-loud AGN at $z>5$. Multi-frequency VLA and VLBA observations of
this AGN have been obtained to investigate its radio morphology.
### 3.3 Spectral line analysis
Figure 5: RR and LL spectra of M1206-0714 and continuum fits (dashed line).
Figure 6: The continuum subtracted Stokes-$I$ spectrum of M0422-2034
($z_{em}$=2.582; see arrow at 396.51 MHz marking the redshifted H i 21-cm line
frequency). Shaded regions mark frequency ranges that were masked prior to any
calibration. The median spectra obtained using the full survey and only 08SEP
run are plotted at +0.042 (median-survey) and +0.020 Jy (median-08SEP),
respectively. In the spectrum of M0422-2034, the pixels flagged on the basis
of median spectra are shown in red. The error spectrum (5$\times\sigma_{\rm
rolling}$) is also shown. The dotted and dashed lines at the bottom of panels
3-5 show the frequency ranges valid for the 21-cm line search and those
actually contributing to the sensitivity function ($g(z)$), respectively. The
candidate detections are marked using $\star$.
For spectral line analysis, RR and LL spectra in the heliocentric frame were
extracted from the spectral line cubes at the locations of the radio continuum
peaks. The spectra show systematic oscillations or ripples due to residual
bandpass calibration errors, and numerous positive/negative spikes (for an
example see Fig. 5). The ripple is not identical in the two parallel hands and
also varies from one target source to another. We removed its effect by
iteratively fitting the underlying structure using Savitzky-Golay filtering
with window length = 24 and polynomial order = 3. In each iteration, pixels
deviating beyond the threshold were flagged and excluded from the subsequent
iterations. The continuum was interpolated across the masked pixels and the
process was repeated until no deviant pixels were found.
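For illustration, the iterative continuum-fitting scheme described above can be sketched as follows. Note that scipy's `savgol_filter` requires an odd window length, so 25 is used here in place of the quoted 24; the 5$\sigma$ clipping threshold and the iteration cap are assumptions for this sketch, not the exact pipeline settings:

```python
import numpy as np
from scipy.signal import savgol_filter

def iterative_continuum(flux, window=25, polyorder=3, nsigma=5.0, max_iter=20):
    """Iteratively fit the bandpass ripple with a Savitzky-Golay filter,
    masking deviant pixels and interpolating the continuum estimate
    across them on each pass."""
    work = flux.copy()
    mask = np.zeros(flux.size, dtype=bool)
    x = np.arange(flux.size)
    for _ in range(max_iter):
        cont = savgol_filter(work, window, polyorder)
        resid = flux - cont
        sigma = np.std(resid[~mask])
        new_bad = (np.abs(resid) > nsigma * sigma) & ~mask
        if not new_bad.any():
            break  # no new deviant pixels: converged
        mask |= new_bad
        # carry the fit forward with the masked pixels interpolated over
        work = flux.copy()
        work[mask] = np.interp(x[mask], x[~mask], flux[~mask])
    return cont, mask
```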
The above determined continuum fit was subtracted from the RR and LL spectra,
and an error spectrum was generated by calculating a rolling standard
deviation ($\sigma_{\rm rolling}$; window size=48 channels). For Band-3, an
additional step was to merge the spectra from adjacent spectral windows and
unmask the spikes to obtain the final RR and LL spectra covering the entire
300-500 MHz. These were then averaged with appropriate statistical weights to
obtain the final Stokes-$I$ spectra. The resultant Stokes-$I$ spectra have
flat baselines but numerous positive and negative spikes (for example see Fig.
6).
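A minimal sketch of the error-spectrum and Stokes-$I$ construction follows. Inverse-variance weighting for the RR/LL combination and reflective padding at the spectrum edges are assumptions made for this illustration:

```python
import numpy as np

def rolling_sigma(resid, window=48):
    """Rolling standard deviation of a continuum-subtracted spectrum
    over `window` channels (48, as in the text)."""
    pad = window // 2
    padded = np.pad(resid, pad, mode="reflect")
    views = np.lib.stride_tricks.sliding_window_view(padded, window)
    return views.std(axis=1)[: resid.size]

def stokes_i(rr, ll, sig_rr, sig_ll):
    """Inverse-variance weighted average of the RR and LL spectra."""
    w_rr, w_ll = 1.0 / sig_rr**2, 1.0 / sig_ll**2
    return (w_rr * rr + w_ll * ll) / (w_rr + w_ll)
```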
The Band-2 spectrum of M1312$-$2026 is presented in Fig. 7. These data were
severely affected by radio frequency interference (RFI). The broad-band RFI
mostly affected the shorter baselines ($<$4 k$\lambda$), which were completely
flagged. There were also narrow-band, impulse-like bursts of RFI, which
affected all baselines. Overall, $\sim$55% of the data were flagged due to
antenna/baseline-based problems and RFI. The spectral rms in the Stokes-$I$
spectrum presented here is 8.5 mJy/beam, which for the unsmoothed spectrum
corresponds to a 1$\sigma$ optical depth sensitivity of 0.003. There are
several statistically significant narrow-band features with widths of 1-2
spectral channels detected in the spectrum. However, all of these are
coincident with the spikes present in the RFI spectrum, and are most likely
due to low-level narrow-band RFI that could not be detected on the individual
baselines.
In general, no true emission is expected in our spectra, and only a tiny
fraction of the negative spikes are expected to represent true absorption; the
majority of these spikes are RFI artefacts. The biggest challenge in a
spectral line search at low frequencies is distinguishing true absorption from
RFI artefacts. The rest of this section describes the absorption line search
in the Band-3 spectra.
Figure 7: H i 21-cm absorption spectrum towards the highest redshift quasar
M1312-2026 in our sample. The vertical dashed line marks the redshifted H i
21-cm absorption frequency corresponding to $z_{\rm q}$. The filled histogram
in the background is the ratio of the extent of data flagged due to frequency-
dependent and frequency-independent flags.
The worst RFI in Band-3 spectra (e.g., at 360-380 MHz) was flagged by applying
an initial mask prior to any calibration (see shaded regions in Fig. 6).
Further, to identify artefacts due to weaker but persistent RFI, we generated
median Stokes-$I$ spectra for each observing run and for the full survey. The
median spectra from the survey and from the 08SEP run, in which M0422-2034 was
observed, are shown in Fig. 6. The pixels deviating by more than 5$\sigma$ in
the median spectra were taken to represent RFI artefacts, and we rejected the
corresponding pixels in the individual source spectra. In Fig. 6, such pixels
for M0422-2034 are plotted in red.
After this, we created a list of 550 absorption line candidates using a
detection algorithm which requires: (i) flux density at a pixel $j$, F($j$)
$<-5\times\sigma_{rolling}$($j$), and (ii) heliocentric frequency at $j$,
$\nu$($j$) $\geq\nu_{\rm 21cm}$/(1 + ($z_{em}$ + 0.01)), where $\nu_{\rm
21cm}$ is the rest-frame 21-cm line frequency. The factor of 0.01 in the
denominator, which corresponds approximately to an outflow velocity of
$\sim$3000 km s-1, allows for the possibility of detecting redshifted
absorption associated with the AGN (see Fig. 21 of Gupta et al., 2006).
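The two detection criteria can be expressed compactly as follows (a sketch; the array names are illustrative, not from our pipeline):

```python
import numpy as np

NU_21CM = 1420.405752  # rest-frame H i 21-cm line frequency, MHz

def candidate_pixels(freq_mhz, flux, sigma_rolling, z_em):
    """Pixels satisfying the two criteria: (i) flux density below
    -5 sigma_rolling, and (ii) frequency at or above the 21-cm line
    frequency redshifted by z_em + 0.01, the ~3000 km/s allowance for
    redshifted absorption associated with the AGN."""
    nu_min = NU_21CM / (1.0 + z_em + 0.01)
    deep = flux < -5.0 * sigma_rolling
    in_range = freq_mhz >= nu_min
    return np.where(deep & in_range)[0]
```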
Next, we created a false-detection spectrum by identifying pixels based on the
following two criteria. First, we identified all positive spikes with F($j$)
$>5\times\sigma_{rolling}$($j$). These are unphysical and hence false
detections, because H i emission lines are too weak to be detectable at $z>2$
in our spectra. Second, we identified all negative spikes, i.e., F($j$)
$<-5\times\sigma_{rolling}$, but only at $\nu$($j$) $<\nu_{\rm 21cm}$/(1 +
($z_{em}$ + 0.01)). These are unphysical because the absorbing gas must be in
front of the radio source. In Fig. 6, we mark three candidates using $\star$.
Two of these are clearly false detections, whereas the third one at 395.5 MHz
(approximately +800 km s-1 with respect to $z_{em}$) could be true absorption
associated with the AGN. The cumulative distributions of all the false
absorption (528) and emission (1359) detections from the survey are shown in
Fig. 8. These represent frequency ranges that may be affected by sporadic RFI.
The majority of these are at the edges of the frequency ranges masked in Fig.
10. An updated RFI mask would remove them, and this will certainly be the
preferred strategy for defining the frequency ranges to be used for continuum
imaging. Here, we rejected all the absorption candidates that are within one
frequency channel of any of these false detections. This step reduced the
number of absorption candidates by a factor of $\sim$10.
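The rejection of candidates lying within one channel of a false detection can be sketched as follows (an illustrative helper, not the pipeline code):

```python
import numpy as np

def filter_candidates(cand_idx, false_idx):
    """Reject any absorption candidate within one frequency channel of
    a false detection (a positive spike anywhere, or a negative spike
    blueward of the allowed redshift range)."""
    false_idx = np.asarray(false_idx)
    keep = [j for j in cand_idx
            if not np.any(np.abs(false_idx - j) <= 1)]
    return np.array(keep, dtype=int)
```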
Figure 8: Distribution of false absorption (blue) and emission (red)
detections for the survey. The majority of these are the edges of frequency
regions masked in Fig. 10. The locations of absorption candidates (see column
12 of Table LABEL:tab:wisesamp) are marked by $\star$.
We visually examined the RR and LL spectra of the remaining 48 absorption
candidates for consistency. Specifically, we required that the integrated
optical depths estimated from the RR and LL spectra match within 3$\sigma$,
and that, within the errors, the absorption profiles appear similar.
Table 3: High probability 21-cm absorption candidates
Source name | $z_{em}$ | $z_{abs}$(21-cm) | $\int\tau$dv(21-cm) | $\Delta$V90
---|---|---|---|---
| | | (km s-1) | (km s-1)
(1) | (2) | (3) | (4) | (5)
M0513$+$0100 | 2.673 | 1.9526 | 3.59$\pm$0.67 | 107
M0618$-$3158 | 2.134 | 1.9642 | 0.49$\pm$0.07 | 15
M1244$-$0446 | 3.114 | 2.3871 | 1.61$\pm$0.27 | 70
M1312$-$2026 | 5.064 | 3.0324 | 0.34$\pm$0.08 | 20
M1540$-$1453 | 2.098 | 2.1139 | 9.14$\pm$0.28 | 144
Note. — Column 1: source name. Column 2: emission redshift. Columns 3-5: H i
21-cm absorption redshift, integrated 21-cm optical depth and velocity width
of the absorption profile based on the survey spectra presented in Fig. 9,
respectively. The spectra have been normalized using the corresponding peak
flux densities.
After all the statistical filtering described above, we were left with a total
of 15 candidates, towards the sight lines identified in column 12 of Table
LABEL:tab:wisesamp. We extracted Stokes-$I$ spectra of the corresponding gain
calibrators. For the following candidates: M0212-3822 ($z_{\rm
abs}$=2.1666), M0250-2627 ($z_{\rm abs}$=2.1665), M0422-2034 ($z_{\rm
abs}$=2.5924), M0513+0100 ($z_{\rm abs}$=2.1612, 2.3183, 2.6434), M0515-0120
($z_{\rm abs}$=2.1753), M0702-3302 ($z_{\rm abs}$=2.2769), M1448-1122 ($z_{\rm
abs}$=2.2973) and M2017-2933 ($z_{\rm abs}$=2.0733), we find an ‘absorption’
feature at the same redshifted frequency in the gain calibrator spectrum. The
angular separation between a target source and its gain calibrator is
typically 15 degrees. It is therefore unrealistic that true absorption at
$z>2$ would be present towards both, and we rejected these 10 candidates.
Figure 9: High probability absorption candidates. The survey and reobservation
spectra are shown as dotted (red) and solid (blue) lines, respectively.
Finally, we have 5 high probability candidates, listed in Table
LABEL:tab:abscand. We also estimated the integrated optical depths
($\int\tau$dv) and velocity widths ($\Delta$V90) corresponding to the 5th and
95th percentiles of the apparent optical depth distribution. These are very
similar to the values observed for 21-cm absorption lines detected in various
surveys (see e.g., Gupta et al., 2009; Dutta et al., 2017c).
We reobserved these high probability candidates with uGMRT (see Section 3.1
and Table LABEL:tab:obslog) using a bandwidth of 6.25 MHz centered at H i
21-cm line frequency corresponding to $z_{abs}$(21-cm) given in column 3 of
Table LABEL:tab:abscand. These observations were carried out at night to
reduce the effect of RFI. For better RFI mitigation, the frequency setup was
chosen to provide a spectral resolution of $\sim$1 km s-1. Recall that the
survey observations had a spectral resolution of $\sim$18 km s-1. In Fig. 9,
we present profiles from the survey and reobservation spectra. Clearly, only
M1540-1453 is confirmed; the remaining 4 candidates are due to RFI.
To summarize, based purely on the uGMRT survey spectra and a blind 21-cm line
search, we identified 5 absorption features (4 intervening and 1 associated
system). The follow-up observations confirmed only one of these, i.e., the
absorption associated with the radio source M1540-1453 at $z_{em}$ = 2.098.
The distribution of 5$\sigma$ 21-cm optical depth limits at 420 MHz, estimated
assuming $\Delta v$=25 km s-1, is shown in Fig. 4. The median, 0.535 km s-1,
is well below the sensitivity (1.1 km s-1) required to detect CNM (i.e.,
T$\sim$100 K) in DLAs (i.e., log $N$(H i)$\geq$ 20.3).
## 4 Associated H i 21-cm absorption detection towards M1540-1453
Figure 10: Associated H i 21-cm absorption detection towards M1540-1453. The
zero of the velocity scale is defined with respect to the peak of the
absorption i.e., $z_{\rm abs}$= 2.1139. The solid horizontal line corresponds
to $\Delta V_{90}$.
The H i 21-cm absorption spectrum of M1540-1453 based on the follow-up
observation is presented in Fig. 10. It is normalized by the corresponding
peak continuum flux density of 652 mJy beam-1. The absorption is spread over
$\sim$300 km s-1. We measure a total optical depth $\int\tau$dv =
11.30$\pm$0.07 km s-1, about 90% of which is confined to $\Delta V_{90}$ =
167 km s-1. This translates to an H i column density of $(2.06\pm 0.01)\times
10^{21}({T_{S}\over 100})({1\over f_{c}})$ cm-2, where $T_{\rm s}$ is the spin
temperature in Kelvin and $f_{c}$ is the covering factor of the absorbing gas.
We have assumed $T_{\rm s}$ = 100 K for the CNM (Heiles & Troland, 2003) and
$f_{c}$ = 1. The inferred $N$(H i) will be higher if the gas is warmer and/or
only partially covers the background radio source.
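The quoted column density follows from the standard optically thin relation $N$(H i) $=1.823\times 10^{18}\,(T_{\rm s}/f_{c})\int\tau{\rm d}v$ cm-2; a quick check:

```python
def n_hi(int_tau_dv, t_spin=100.0, f_c=1.0):
    """H i column density (cm^-2) from the integrated 21-cm optical
    depth (km/s) in the optically thin limit:
    N(HI) = 1.823e18 * (T_s / f_c) * integral(tau dv)."""
    return 1.823e18 * (t_spin / f_c) * int_tau_dv

# M1540-1453: integral(tau dv) = 11.30 km/s, T_s = 100 K, f_c = 1
print(f"{n_hi(11.30):.2e}")  # 2.06e+21, matching the value quoted above
```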
We obtained an optical spectrum of M1540-1453 using the Robert Stobie
Spectrograph (RSS; Burgh et al., 2003; Kobulnicky et al., 2003) on SALT as
part of our optical survey summarized in Section 2. The spectrum has a typical
signal-to-noise ratio (SNR) of 7 per pixel. It shows C iv and C iii] emission
lines. The spectral SNR close to the C iii] emission line is poor due to
residuals from the subtraction of sky lines; hence, we focus on the C iv
emission line. The peak of the C iv emission corresponds to $z_{\rm
em}$$\simeq$ 2.113, which is consistent with the 21-cm absorption peak (Fig.
10). The emission line is superimposed with absorption lines, possibly also of
C iv, at a redshift slightly lower than that of the 21-cm absorption.
Our SALT spectrum does not cover ${\rm Ly}\alpha$ absorption for M1540-1453,
but it does cover the rest wavelength range of 1436 to 2414Å. Although this
region is affected by sky lines, in principle we have access to the Fe ii
lines associated with the 21-cm absorption. We detect an absorption feature
exactly at the redshifted wavelength of the Fe ii$\lambda$2383 line. The
redshift and rest equivalent width of the absorption are $z_{\rm abs}$=
2.11404 and $W_{\rm r}$ = $1.05\pm 0.25$Å, respectively. We also find
absorption dips at the expected positions of the Fe ii$\lambda$2344, Fe
ii$\lambda$2374 and Si ii$\lambda$1526 lines. These coincidences are
interesting because metal absorption line ratios can be a reasonable indicator
of H i column density (Rao et al., 2006; Gupta et al., 2012; Dutta et al.,
2017b). A better quality optical spectrum covering Ly$\alpha$ and these metal
lines is needed to extract the physical conditions prevailing in the absorbing
gas.
We follow the method described in Srianand et al. (2008) to constrain the
visual extinction, $A_{V}$. Using our flux calibrated SALT spectrum along with
the Small Magellanic Cloud (SMC) type extinction curve and the average QSO
spectral energy distribution (SED) given in Selsing et al. (2016), we measure,
$A_{V}=0.13\pm 0.01$. The moderate extinction observed towards M1540-1453 is
consistent with the idea that cold atomic gas is accompanied by dust (see Fig.
9 of Dutta et al., 2017b).
To date only three associated H i 21-cm absorbers are known222We note that H i
21-cm absorber ($z_{abs}$ = 1.9436) has been detected towards the QSO PKS
1157+014 at $z_{em}=$ 1.978 (Wolfe et al., 1981) which is slightly below the
redshift cut-off used here. It is suggested that in this case the absorption
originates from a galaxy bound to the cluster containing the QSO. at $z>2$.
These are: $z_{\rm abs}$= 3.3968 towards B2 0902+345 (Uson et al., 1991;
Briggs et al., 1993), $z_{\rm abs}$= 2.6365 towards MG J0414+0534 (Moore et
al., 1999) and $z_{\rm abs}$= 3.52965 towards 8C 0604+728 (Aditya et al.,
2021). Thus, the M1540-1453 absorber reported here is only the fourth
detection at $z>2$. The inferred column densities for reasonably assumed
values of spin temperature and covering factor ($T_{\rm s}$ = 100 K; $f_{c}$ =
1) imply $N$(H i) $\gg 2\times 10^{20}$ cm-2, which is the formal DLA cut-off.
So, in all these cases, if the optical and radio sightlines coincide, one
expects to see a DLA at the 21-cm absorption redshift. We investigate this for
B2 0902+345 and MG J0414+0534, the two sources for which optical spectra are
available in the literature.
B2 0902+345 is associated with a radio galaxy that exhibits ${\rm Ly}\alpha$
emission extended up to 50 kpc. The associated radio continuum emission
($\alpha$ = -0.94) has a highly distorted radio morphology over 6′′ ($\sim$45
kpc at $z_{abs}$) and rotation measure in excess of 1000 rad m-2 (Carilli et
al., 1994). However, no signatures of ${\rm Ly}\alpha$ absorption associated
with the 21-cm absorber are seen. In fact, the 21-cm absorption is found to be
redshifted with respect to the ${\rm Ly}\alpha$ emission (shift $\sim$ +300 km
s-1; see Adams et al., 2009, for details).
MG J0414+0534 is a highly reddened gravitationally lensed quasar
($A_{V}\sim$5; Lawrence et al., 1995b). The weakness of the ${\rm Ly}\alpha$
emission prevents us from searching for a DLA. However, 4 strong associated
Fe ii absorption components are detected in the redshift range 2.6317-2.6447
(Lawrence et al.,
1995a). These are within the range over which CO emission is detected but do
not exactly coincide with the 21-cm absorption redshift, which itself is
shifted by $\sim$200 km s-1 with respect to the peak of the CO emission line.
The 21-cm absorption in this case may actually be towards a steep-spectrum
radio jet component not spatially coinciding with the AGN (Moore et al.,
1999). The same scenario may also apply to B2 0902+345.
In comparison, the radio emission associated with M1540-1453 is compact in
VLASS. It has a deconvolved size of $1.8^{\prime\prime}\times
0.7^{\prime\prime}$ with a position angle of $155^{\circ}$ which corresponds
to $\sim$15 pc at $z_{abs}$. The CNM clouds responsible for H i 21-cm
absorption are generally larger than this (Gupta et al., 2018b). Thus, it is
most likely the case that $f_{c}=1$ for M1540-1453. Further, the radio
continuum peak coincides well with the PS1/MIR counterparts. All of this
explains the coincidence between the 21-cm absorption and the metal absorption
lines observed towards this source. It is also consistent with the profile of
the 21-cm absorption, which is only slightly asymmetric at the base and does
not show signatures of outflows, i.e., blue-shifted absorption (Vermeulen et
al., 2003; Gupta et al., 2006). In low-$z$ samples, such absorption likely
originates
from the circumnuclear disk or gas clouds associated with the host galaxy
(e.g., Geréb et al., 2015; Srianand et al., 2015). In Section 6, we note that
the quasars in our sample are generally hosted in gas and dust poor galaxies.
The CO emission line and millimetre continuum observations of M1540-1453 will
reveal the nature of its host galaxy ISM and shed light on the origin of gas
detected in 21-cm absorption.
## 5 Intervening absorption statistics
In this section, we constrain the occurrence of intervening H i 21-cm
absorbers at $z>2$ using a blind spectroscopic search. We also use ${\rm
Ly}\alpha$ and metal absorption lines detected in our SALT spectra to
interpret these statistics.
### 5.1 Blind H i 21-cm absorption line search
To estimate the incidence of intervening absorbers, we first determine the
sensitivity function, $g({\cal{T}},z)$, as a function of integrated optical
depth (${\cal{T}}$) and redshift ($z$). For this we follow the formalism
provided in Gupta et al. (2021). The two crucial inputs required to determine
$g({\cal{T}},z)$ are spectral weight ($W$) and completeness fraction ($C$).
The former accounts for the possibility that some of the targets in the sample
may not have spectroscopic redshifts. Since all the targets in our sample have
spectroscopic redshifts, we assign $W$ = 1.
Figure 11: Completeness corrected total redshift paths ($\Delta
z({\cal{T}})\equiv g({\cal{T}})$) for the 21-cm line search. The horizontal
dashed line represents total redshift path without completeness correction.
The vertical dashed lines correspond to integrated optical depth, ${\cal{T}}$
= 1.1 km s-1. This corresponds to a 5$\sigma$ detection limit of $N$(H i) =
$2\times 10^{20}$ cm-2 for $T_{s}$ = 100 K.
Figure 12: The sensitivity function, $g(z)$, for H i 21-cm absorbers with
integrated 21-cm optical depth ${\cal{T}}\geq$ 1.1 km s-1. The abrupt dips are
caused by the spectral channels removed from the data due to RFI.
Approximately 30% of the redshift path is lost due to RFI.
The completeness fraction accounts for the detectability of absorption line
features of different line shapes. To determine this, we consider the
absorption profiles of all the intervening absorbers detected from our surveys
in the last 15 years. We inject 200 single Gaussian components, with widths
consistent with the distribution of $\Delta V_{90}$ of these absorbers (see
Fig. 8 of Gupta et al., 2021), at each pixel and apply the detection algorithm
described in Section 3.3 to compute $C({\cal{T}}_{j},z_{k})$ as
$C({\cal{T}}_{j},z_{k})=\frac{1}{N_{inj}}\sum_{i=1}^{N_{inj}}F({\cal{T}}_{j},z_{k},\Delta
V_{i}),$ (2)
where $N_{inj}$ is the number of injected systems and $F=1$ if the injected
system is detected and 0 if not. The total completeness corrected redshift
path of the survey, $g({\cal{T}}_{j}$), considering all sight lines is plotted
in Fig. 11. The redshift path starts falling off rapidly below ${\cal{T}}$ =
1.5 km s-1.
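Equation (2) amounts to a Monte Carlo recovery experiment at each pixel. A simplified sketch follows, with a 5$\sigma$ peak-optical-depth criterion standing in for the full detection algorithm of Section 3.3 (an assumption for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)

def completeness(sigma_rolling, tau_int, widths_kms, n_inj=200):
    """Monte Carlo completeness C(T, z) at one pixel (Eq. 2): inject
    n_inj Gaussian profiles with integrated optical depth tau_int and
    widths drawn from the observed Delta-V90 distribution, and count
    the fraction recovered."""
    detected = 0
    for _ in range(n_inj):
        fwhm = rng.choice(widths_kms)
        sigma_v = fwhm / 2.355
        # peak optical depth of a Gaussian with the given integral
        tau_peak = tau_int / (sigma_v * np.sqrt(2.0 * np.pi))
        # simplified recovery criterion: peak exceeds 5 sigma
        if tau_peak > 5.0 * sigma_rolling:
            detected += 1
    return detected / n_inj
```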
It is of particular interest to consider the detectability of H i 21-cm
absorption in DLAs, i.e., ${\cal{T}}$ = 1.1 km s-1(refer to vertical dashed
line in Fig. 11). The sensitivity function providing the number of spectra in
which it is possible to detect CNM in DLAs is shown in Fig. 12. The total
redshift and comoving path length are $\Delta z=$ 38.3 and $\Delta X=$ 130.1,
respectively. Then, the incidence or number of 21-cm absorbers per unit
redshift and comoving path length are $n_{21}<$ 0.048 and $\ell_{21}<$ 0.014,
respectively. These 1$\sigma$ upper limits are based on small number Poisson
statistics (Gehrels, 1986).
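The 1$\sigma$ upper limits quoted above follow from small-number Poisson statistics: for zero detections, the 84.13% confidence upper limit on the Poisson mean is $\approx$1.841 events (Gehrels, 1986), which is then divided by the surveyed path. A short check, with the path lengths taken from the text:

```python
import math

def poisson_upper_limit(n_obs, cl=0.8413):
    """Upper limit on a Poisson mean given n_obs events (Gehrels 1986),
    found by solving P(<= n_obs | lam) = 1 - cl via bisection."""
    def cdf(lam):
        return sum(math.exp(-lam) * lam**k / math.factorial(k)
                   for k in range(n_obs + 1))
    lo, hi = 0.0, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if cdf(mid) > 1 - cl else (lo, mid)
    return 0.5 * (lo + hi)

lam = poisson_upper_limit(0)        # ~1.841 for zero detections
dz, dX = 38.3, 130.1                # survey path lengths from the text
n21_lim, l21_lim = lam / dz, lam / dX   # -> 0.048 and 0.014
```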
The formalism presented above to search for H i 21-cm absorption lines can also
be applied to the OH main lines. For the stronger OH main line at 1667.359 MHz,
${\cal{T}}$ = 1.1 km s-1 and an excitation temperature of 3.5 K correspond
to $N$(OH) = 8.6$\times 10^{14}$ cm-2. The total redshift and comoving path
length are $\Delta z=$ 44.9 and $\Delta X=$ 167.7, respectively. The
corresponding limits on the number of OH absorbers per unit redshift and
comoving path length are $n_{\rm OH}<$ 0.041 and $\ell_{\rm OH}<$ 0.011,
respectively.
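The quoted OH column-density sensitivity follows from the standard conversion for the 1667 MHz line, $N({\rm OH})=2.24\times 10^{14}\,T_{\rm ex}\int\tau\,dv/f_{c}$ cm-2 (with $\int\tau\,dv$ in km s-1); the coefficient used here is the commonly adopted value, assumed rather than taken from the text:

```python
def n_oh(int_tau_kms, t_ex=3.5, f_c=1.0):
    """OH column density (cm^-2) from the integrated 1667.359 MHz optical
    depth (km/s), using the standard conversion
    N(OH) = 2.24e14 * T_ex * int(tau dv) / f_c  (assumed coefficient)."""
    return 2.24e14 * t_ex * int_tau_kms / f_c

n = n_oh(1.1)   # ~8.6e14 cm^-2, matching the survey threshold quoted above
```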
Besides H i 21-cm absorption, CNM at high-$z$ may also be searched using
absorption lines of H2 and C i (e.g., Srianand et al., 2012; Noterdaeme et
al., 2018). Recently, Krogager & Noterdaeme (2020), using a canonical two-phase
model of atomic gas and the observed statistics of H2 and C i absorbers at
high-$z$, estimated the incidence of CNM to be $n_{\rm CNM}$ = 0.012. The
upper limit of $n_{21}<$ 0.048 obtained through our blind survey is consistent
with this. This is the first comparison of the CNM cross-section of
galaxies at $z>2$ estimated using radio and optical/ultraviolet absorption
lines. The detectability of H2 and C i absorption at optical/ultraviolet
wavelengths and of H i 21-cm absorption at radio wavelengths is affected by
different systematic effects. Indeed, there does not exist a one-to-one correspondence between the
presence of H2 and H i 21-cm absorption, and the difference may be due to
small sizes of H2 bearing clouds (see Srianand et al., 2012, for a
discussion). Much larger radio surveys are needed to disentangle these
possibilities.
### 5.2 Relationship with ${\rm Ly}\alpha$ and metal lines
Figure 13: Voigt profile fits to the 6 DLAs detected in our new radio loud
quasar sample. The profiles given in solid and long-dashed lines correspond to
the best fit and 1$\sigma$ range around it respectively. The absorption
redshift and the best fit $N$(H i) obtained are also provided in each panel.
Table 4: Properties of the DLAs derived from our observations
QSO | $z_{\rm abs}$ | log $N$(H i) | W(Si ii) | Z(P08) | Z | $\int\tau_{21}dv$ | $T_{s}$
---|---|---|---|---|---|---|---
| | | (Å) | | | (km s-1) | (K)
(1) | (2) | (3) | (4) | (5) | (6) | (7) | (8)
M0416-3339 | 2.8561 | 20.50$\pm$0.10 | 0.12$\pm$0.03 | $-$2.2 | $\leq-2.3^{a}$ | RFI | ….
M0610-2301 | 2.3987 | 20.85$\pm$0.10 | 0.66$\pm$0.02 | $-$1.2 | $\leq-1.1^{b}$ | $\leq 0.64$ | $\geq$603
M1013-2546 | 2.6834 | 20.35$\pm$0.15 | $\leq 0.90$ | …. | …. | $\leq 0.72$ | $\geq$169
M1351-0129 | 2.7719 | 20.50$\pm$0.10 | 0.19$\pm$0.03 | $-1.9$ | $\leq-1.2^{c}$ | RFI | ….
M1619-0939 | 2.7934 | 20.55$\pm$0.10 | 1.03$\pm$0.05 | $-0.9$ | $\leq-1.9^{c}$ | RFI | ….
M2154-3826 | 2.7613 | 21.00$\pm$0.10 | 0.53$\pm$0.01 | $-1.3$ | $-1.60\pm 0.11^{d}$ | …. | ….
Note. — Column 1: source name. Columns 2 and 3: DLA redshift and H i column
density. Column 4: Si ii equivalent width. Column 5: metallicity inferred
using the W(Si ii)-metallicity correlation from Prochaska et al. (2008a). Column 6:
metallicity measured using weak Si ii or S ii lines. Column 7: 5$\sigma$ 21-cm
optical depth limit, considering $\Delta v$ = 25 km s-1. Column 8:
corresponding lower limit on the spin temperature.
$a$: Based on Si ii$\lambda 1301$ line; $b$: Based on S ii$\lambda$1250 line;
$c$: Based on Si ii$\lambda$1808 line; $d$: Using the curve of growth
analysis.
Our SALT spectra allow us to search for Fe ii$\lambda\lambda\lambda$2343,
2374, 2383 for $z_{\rm abs}$$\leq$2.15 and DLAs at $z>2.65$. For this search,
we complement our SALT-NOT survey spectra with more sensitive long-slit SALT
observations of 25 quasars at $z_{\rm em}>2.7$ obtained to search for
extended ${\rm Ly}\alpha$ emission halos associated with powerful radio loud
quasars (see Shukla et al., 2021, for an example). In total, we identify 7
DLAs and 1 strong Fe ii absorber in our sample. Implications of the lack of 21-cm
absorption in these high H i column density absorbers are discussed below. In
the redshift range 2.15 $\leq$ $z_{\rm abs}$ $\leq$ 2.65 we also identify 21 C
iv absorbers for which neither Fe ii lines nor a DLA can be searched for in our spectra.
The redshifted H i 21-cm line frequencies corresponding to these C iv
absorbers are unaffected by RFI and no H i absorption is detected. Note that C
iv can trace a wide range of ionization stages and so is not a good indicator
of the presence of a DLA or 21-cm absorption. We will use this only as an
indicator of the possible presence of multi-phase gas along the sight line.
First, we focus on the subset of 23 quasars that have sufficient SNR in the
optical continuum and are, hence, suitable for a ${\rm Ly}\alpha$ absorption
search. The absorption profiles of the 6 DLAs detected from this subsample are
shown in Fig. 13. The measured absorption redshifts are consistent with 5 of
these being intervening systems and the remaining one ($z_{\rm abs}$= 2.7613
towards M2154-3826) being a PDLA.
We also detect strong ${\rm Ly}\alpha$ absorption at the systemic redshift of
M0507$-$3624. While we see evidence of damping wings and a wide range
of absorption from singly ionised species, we also see non-zero flux in the
core of the ${\rm Ly}\alpha$ absorption line. It is possible that this system
is similar to the associated H2 bearing DLAs studied by Noterdaeme et al.
(2019) where the presence of flux in the absorption core is related to the
partial coverage. Based on damping wings we estimate log $N$(H i)$\sim$20.2.
However, with the present spectra we are unable to confirm the origin of
residual flux in the core. A higher spectral resolution dataset covering both
${\rm Ly}\alpha$ and Ly$\beta$ absorption is needed to obtain an accurate
estimate of $N$(H i) for this absorber. For the purpose of this paper, we
consider this as a candidate PDLA.
For a total redshift path of 9.3, the detection of 5 intervening DLAs
corresponds to a number of DLAs per unit redshift of $n_{\rm DLA}$ = 0.54 $\pm$ 0.24.
This is slightly higher than, but given the large uncertainties consistent with, the
measurement of 0.24$\pm$0.02 based on SDSS DLAs by Noterdaeme et al. (2012),
and $0.26^{+0.06}_{-0.05}$ towards radio-loud quasars based on the combined
CORALS and UCSD samples by Jorgenson et al. (2006). Since the quasars in our
sample are fainter than previous surveys (see Fig. 1), it is also possible
that there is indeed a dependence between $n_{\rm DLA}$ and the faintness of
quasars as noted by Ellison et al. (2001) for the CORALS sample.
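The quoted incidence and its uncertainty follow directly from the counts: 5 DLAs over a redshift path of 9.3, with a simple $\sqrt{N}$ Poisson error. A one-line check:

```python
import math

def incidence(n_det, dz):
    """Number of absorbers per unit redshift, with a sqrt(N) Poisson error."""
    return n_det / dz, math.sqrt(n_det) / dz

n_dla, err = incidence(5, 9.3)   # ~0.54 +/- 0.24, as quoted in the text
```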
As discussed in the next section, in comparison to the associated H i 21-cm absorption
detection rates in low-$z$ AGNs, the detection of just 2 PDLAs and one
associated H i 21-cm absorber (M1540-1453) from our sample may seem surprisingly
low. But the detection of 3 PDLAs from our sample is in fact a factor
of 3 larger than what we expect from the statistics of PDLAs observed at
$z\sim 3$ in SDSS (Prochaska et al., 2008a). Interestingly, from the
statistics of damped H2 absorption lines, Noterdaeme et al. (2019) have
suggested that the PDLA fraction in Prochaska et al. (2008b) may have been
underestimated. A complete search of ${\rm Ly}\alpha$ absorption towards all
the targets in our sample will confirm the above-mentioned excesses of DLAs
and PDLAs. Specifically, from the observations of remaining $2<z<2.7$ sources
($\Delta z$ = 20.4), we expect to detect another $\sim$10 DLAs.
Using SALT 1D spectra we measure redshifts, $N$(H i) and rest equivalent widths
of metal absorption lines corresponding to these DLAs. These measurements are
provided in Table 4. The quoted errors in $N$(H i) also include
continuum placement uncertainties. The single-component Voigt profile fits to
the DLAs are shown in Fig. 13. The rest equivalent widths of the Si ii$\lambda$1526
lines are provided in column 4 of Table 4. The metallicities
inferred using the W(Si ii)-metallicity correlation (see equation 1 of
Prochaska et al., 2008a) are given in column 5. We also estimated metallicity
using weak transitions of Si ii or S ii. These are provided in column 6. For
the $z_{\rm abs}$= 2.7613 PDLA towards M2154$-$3826 we detect several weak
transitions of Fe ii, Ni ii and Si ii. We used a single-component curve-of-growth
analysis to measure the metallicity in this case. Overall, the metallicities of
these DLAs are typically less than a tenth of the solar metallicity. However,
given the poor spectral resolution of our data these estimates may suffer from
hidden saturation effects.
Unfortunately, strong persistent RFI at 360-380 MHz prevents an H i 21-cm line
search at $z$ = 2.73-2.94 (see Fig. 12). Thus, we could observe the 21-cm line
only for the $z_{\rm abs}$= 2.3987 DLA towards M0610-2301 and the $z_{\rm abs}$=
2.6834 DLA towards M1013-2546. We do not detect 21-cm absorption from these
systems. The 5$\sigma$ integrated optical depth limits are provided in column 7 of
Table 4. M0610-2301 is a compact radio source in the modest
quality VLASS quick look image. The deconvolved source size is
$1.3^{\prime\prime}\times 0.1^{\prime\prime}$ with a position angle of
$180^{\circ}$ (i.e., size $<$ 11 pc at $z_{\rm abs}$). M1013-2546 also appears
to be a core-dominated source. Thus, we assume complete coverage, i.e.,
$f_{c}=1$. This, together with the observed $N$(H i), translates to a lower
limit on the spin temperature: $T_{S}\geq 603$ K for M0610$-$2301 and $\geq$
169 K for M1013$-$2546. These limiting values of the spin temperature are higher
than the measured median $N$(H i) weighted $T_{S}$ of 70 K for the cold
neutral medium (CNM) in our Galaxy (see Heiles & Troland, 2003). We note that
H i 21-cm observations of 23 DLAs from radio-selected samples of CORALS, UCSD
and Ellison et al. (2008) are available in the literature (Srianand et al.,
2012; Kanekar et al., 2014). The overall 21-cm absorption detection rate of
3/25 (12${}^{+11}_{-7}$%) is consistent with the measurements from optically
selected samples and the conclusion that DLAs at $z>2$ are predominantly warm
and tend to show high spin temperatures (see also Petitjean et al., 2000).
Much larger radio-selected surveys ($\Delta{\rm X}\gtrsim 10^{4}$) are needed
to uncover the population of dusty DLAs.
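The spin-temperature limits above follow from the standard relation $N$(H i) $=1.823\times 10^{18}\,(T_{s}/f_{c})\int\tau\,dv$ cm-2. Inverting it with the values in Table 4 reproduces the quoted limits to within rounding:

```python
def ts_lower_limit(log_nhi, tau_int_kms, f_c=1.0):
    """Lower limit on the spin temperature (K) from
    N(HI) = 1.823e18 * (Ts / fc) * int(tau dv), int(tau dv) in km/s."""
    return f_c * 10.0**log_nhi / (1.823e18 * tau_int_kms)

ts_m0610 = ts_lower_limit(20.85, 0.64)   # M0610-2301: ~600 K
ts_m1013 = ts_lower_limit(20.35, 0.72)   # M1013-2546: ~170 K
```

Small differences from the quoted 603 K and 169 K reflect rounding of the input column densities.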
Our SALT spectra also allow us to detect the Fe ii$\lambda$2383 line for $z_{\rm
abs}$$\leq 2.15$. We detect a strong Fe ii absorber at $z_{\rm abs}$= 2.1404
towards M0652-3230. This system also shows absorption lines from other singly
ionized species (i.e., Si ii and Al ii). All these suggest a high $N$(H i) (Dutta
et al., 2017b). But as can be seen from Fig. 3, the radio emission is
dominated by the double lobe structure. The separation between the two lobes
is $\sim 27^{\prime\prime}$ (250 kpc at $z_{\rm abs}$). Therefore, the radio
and optical sight lines are well separated. This explains the non-detection of
21-cm absorption at the redshift of the Fe ii absorber.
## 6 Associated absorption statistics
Figure 14: Distributions of quasar redshift (filled histogram for all targets
and hatched for those unaffected by RFI) and 5$\sigma$ 21-cm optical depth
limits for associated H i absorption. The vertical dashed lines mark the
median for each distribution. The dotted line corresponds to $\int\tau_{21}dv$ =
1.1 km s-1.
We searched for H i 21-cm absorption within 3000 km s-1 of the quasar emission
redshift. In 28/88 cases (32%), the redshifted frequency is affected by
RFI. For the remaining 60 sources, the distributions of redshift and 5$\sigma$
21-cm optical depth limit for a width of 25 km s-1 are shown in Fig. 14. The
median redshift of this subsample, which includes M1312-2026 ($z=5.064$), is
2.288. For sources with resolved morphology (Fig. 3), we have searched for
absorption towards multiple components but consider only the optical depth
limit for the strongest component for statistics.
The only 21-cm absorption detection from the survey is described in Section 4.
For non-detections, the optical depth limits span a range of 0.103 - 3.6 km
s-1. The detection rate of $1.6^{+3.8}_{-1.4}$% without any sensitivity cut is
among the lowest for a 21-cm absorption line survey. If we adopt the detection
rate based on low-$z$ surveys, we would expect to detect approximately 20 H i
21-cm absorbers from our sample. For example, Maccagni et al. (2017) reported
a 27%$\pm$5% detection rate from the observations of 248 AGNs at
$0.02<z<0.25$. The 3$\sigma$ peak optical depth (resolution $\sim$ 16 km s-1)
limits are typically between 0.005 and 0.2 (median $\sim$ 0.04). For the 25 km s-1
width adopted by us, this range corresponds to a 5$\sigma$ integrated optical
depth, ${\cal{T}}$, of 0.2 - 7.1 km s-1 (median $\sim$ 1.41). In our survey, we
achieve ${\cal{T}}$ better than 1.1 km s-1 in 90% of the cases. Thus, the stark
contrast in detection rates is not due to differences in sensitivity.
At low-$z$, the vast majority of targets have a WISE color $W1-W2$, an
indicator of the AGN accretion rate, of less than 0.6. Among these, a higher
fraction of detections is associated with larger $W2-W3$ colors, an
indicator of star formation rate (Glowacki et al., 2017; Chandola et al.,
2020). The compact radio sources still embedded within such gas and dust rich
environments are well placed to exhibit 21-cm detections. The powerful quasars
identified by our MIR-wedge (see equation 1) do not overlap with the above-
mentioned low-$z$ AGNs in color space defined by $W1-W2$ and $W2-W3$. Thus, we
hypothesize that the low non-detection rate in our sample is a consequence of
the gas poor environment of host galaxies. Since the detectability of H i
21-cm absorption also depends on the thermal state of the gas and the
morphology of radio emission, we validate this hypothesis through ${\rm
Ly}\alpha$ and metal lines covered in our SALT-NOT spectra.
First, we consider the following four sources at 2.7$<z<$3.5: M0507$-$3624,
M1013$-$2546, M1351$-$1019 and M2047$-$1841 (we exclude M1312-2026 at $z=5.064$
as all the lines except ${\rm Ly}\alpha$ are redshifted into the IR and not
covered in our SALT spectrum). For these we have access to both ${\rm
Ly}\alpha$ and metal absorption lines and the 21-cm frequency is not affected
by RFI. In the cases of M1013$-$2546 and M2047$-$1841, we do not detect a PDLA
or any associated C iv absorption system. This suggests the absence of neutral
and metal enriched ionized gas along the sight lines. In the case of
M0507$-$3624, we identified a potential PDLA. For log $N$(H i)$\sim$20.2
inferred from the damping wing of ${\rm Ly}\alpha$ absorption and H i 21-cm
non-detection, we place a lower limit of 216 K for the spin temperature
($f_{c}=1$). In the case of M1351$-$1019, while ${\rm Ly}\alpha$ absorption is
not prominent, we do detect associated C iv absorption. In this case, we also
detect extended ${\rm Ly}\alpha$ emission. All this suggests that these two
sources are associated with gas-rich environments. Therefore, the lack of H i
21-cm absorption here may just be due to a lower CNM filling factor.
In total, 22 quasars in our sample ($\sim$23%) show associated C iv
absorption within 3000 km s-1 of $z_{\rm em}$. In 11 of these cases 21-cm
absorption could not be searched for due to strong RFI. Among the remaining 10, we
also detect H i ${\rm Ly}\alpha$ absorption in 4 cases but none are DLAs. In
the remaining 6 cases, the ${\rm Ly}\alpha$ absorption is not covered in the
spectra but we do not detect corresponding absorption due to any low-
ionization species such as Fe ii (in 4 cases) and Si ii (in 6 cases). Since C
iv may come from a wide range of ionization stages, the absence of strong
${\rm Ly}\alpha$ and low-ionization absorption lines indicates the lack of
sufficient high column density neutral gas along the line of sight.
From the above we conclude that a high fraction of the quasars in our sample are
indeed residing in gas and dust poor environments. An interesting counterpoint
to the lack of H i 21-cm absorption in our sample is provided by the 100%
detection rate of molecular gas through CO emission in a sample of eight
hyper-luminous WISE/SDSS (WISSH) quasars at $z\sim 2-4$ (Bischetti et al.,
2021). We note that a majority (6/8) of these would be selected by our MIR-
wedge (Equation 1). But WISSH quasars are much brighter (by 1.5 Vega magnitudes
compared to our sample) in the $W4$ band (22 $\mu$m) of WISE. The deliberate
selection of WISSH quasars as the most luminous infrared sources ensures that they
are being observed through dust clouds (Weedman et al., 2012), and perhaps
represent an early phase in the evolution of quasars. Only
$\sim$10% of the quasars in our sample have $W4$ magnitudes comparable to the above-
mentioned WISSH quasars with CO detections, and only in 3 cases is ${\cal{T}}<$
1.1 km s-1, i.e., the sensitivity to detect CNM in $N$(H i)$>10^{20}$ cm-2 gas,
achieved. Although this may seem to conflict with our MIR-based selection
strategy, luminous quasars only spend a small fraction of their total
lifetime ($\sim 10^{7}$ yr; Martini & Weinberg, 2001) in the dust-obscured
phase. Therefore, the representation of dust-obscured quasars in our
unbiased sample will also be proportionately small. Considering only the AGNs
with sensitivity to detect CNM in $N$(H i)$>10^{20}$ cm-2 gas, we estimate the
CNM covering factor in the unobscured phase of quasars with radio luminosity
log $L_{\rm 1.4\,GHz}\simeq 27-29.3$ W Hz-1 to be 0.02.
Although the most straightforward explanation for the low detection rate in our
sample is a gas and dust poor environment at host galaxy scales, additional
factors may also play a role, such as high intrinsic radio or ultraviolet
luminosities of the AGN (Curran & Whiting, 2010; Aditya et al., 2016; Grasha et
al., 2019) and the distribution of gas at nuclear scales in the context of the
unification model (Gupta & Saikia, 2006). Unfortunately, none of the CO-detected
quasars from Bischetti et al. (2021) are bright enough at radio wavelengths to
search for H i 21-cm absorption. Interestingly, their available SDSS spectra
do not show the presence of high-column density neutral gas i.e., a PDLA along
the quasar sight line (although ionized gas outflows are present). Thus,
although the molecular gas is distributed in rotating disks (extent 1.7 - 10
kpc; Bischetti et al., 2021), it is oriented such that the cold gas cross-section
towards the quasar sight line is minimal. An extended bright radio source
embedded in such an environment may have still shown H i 21-cm absorption
(e.g., refer to the cases of B2 0902+345 and MG J0414+0534 in Section 4).
Finally, we note the non-detection towards the highest redshift quasar
($z_{\rm em}$= 5.062) in our sample: M1312-2026 (see also Carilli et al.,
2007, for 21-cm non-detections towards two $z\sim$5 AGNs). This brightest
radio loud quasar at $z>5$ has a radio-loudness parameter of R =
$f_{\nu,5GHz}$/$f_{\nu,4400\AA}$ = $1.4\times 10^{4}$. This R value is an
order of magnitude greater than that of any other $z>5$ AGN known to date
(Momjian et al., 2018; Saxena et al., 2018). The host galaxies of quasars at
such high redshifts can be associated with large amounts of dust and molecular
gas ($>10^{10}\,{\rm M_{\odot}}$), and high inferred star formation rates
($>100\,{\rm M_{\odot}\,yr^{-1}}$) (Venemans et al., 2017; Decarli et al.,
2018; Feruglio et al., 2018). Our H i 21-cm non-detection corresponds to a
5$\sigma$ upper limit of $N$(H i)$<4\times 10^{19}$ cm-2 ($T_{s}$ = 100 K; $f_{c}$ = 1
assumed) but can miss narrow absorption components due to the RFI. The current
SALT spectrum also covers only ${\rm Ly}\alpha$ emission. Further
investigation of the nature of this very intriguing non-detection requires IR
spectra and sub-arcsecond scale radio imaging, which are in progress.
## 7 Summary and outlook
This paper described a spectroscopically blind search for H i 21-cm absorption
lines in the wide band uGMRT spectra of 88 AGNs at $2<z<5.1$. We also applied
the same formalism to constrain the occurrence of intervening OH 18-cm main
lines. We show that, compared to previous radio-selected quasar samples used to
search for DLAs, our uGMRT survey sample has targeted fainter objects
(median $i$ = 19.5 mag; see Fig. 2) and is a close representation of the
underlying luminosity function of quasars. Thus, our dust-unbiased sample of
AGNs with median radio spectral index, $\alpha^{1.4}_{0.4}$ = $-0.38$,
redshift, $z$ = 2.5 and spectral luminosity, log $L_{\rm 1.4GHz}$ = $27-29.3$
W Hz-1 is ideally suited to determine the occurrence of cold atomic gas
($T\sim$100 K) towards powerful quasars at $z>2$.
Through a spectroscopically blind search for absorption lines in all the uGMRT
spectra, we detected one new associated H i 21-cm absorber, towards
M1540-1453 at $z_{\rm abs}$= 2.1139. No intervening H i 21-cm absorption line
is detected. Our detection is only the fourth associated H i 21-cm absorber
known at $z>2$. It has an H i column density of $(2.06\pm 0.01)\times
10^{21}({T_{S}\over 100})({1\over f_{c}})$ cm-2. In our SALT spectrum, the
peak of C iv emission and low-ionization metal absorption lines are coincident
with that of the 21-cm absorption. The overall properties of 21-cm absorption
are consistent with it originating from a circumnuclear disk or gas clouds
associated with the host galaxy. The CO emission line observations and optical
spectra covering the ${\rm Ly}\alpha$ absorption (i.e., $\lambda\sim 3785$Å)
will allow us to understand the origin of cold gas detected in 21-cm
absorption.
Our survey is sensitive enough to detect CNM in DLAs over a total
redshift and comoving path length of $\Delta z=$ 38.3 and $\Delta X=$ 130.1,
respectively. Using this, we constrain the incidence, or number of 21-cm
absorbers per unit redshift and comoving path length to be $n_{21}<$ 0.048 and
$\ell_{21}<$ 0.014, respectively. The same formalism applied to OH main line
at 1667.359 MHz corresponds to total redshift and comoving path length of
$\Delta z=$ 44.9 and $\Delta X=$ 167.7, respectively. The number of OH
absorbers per unit redshift and comoving path length are $n_{\rm OH}<$ 0.041
and $\ell_{\rm OH}<$ 0.011, respectively. We note that the number of DLAs per
unit redshift interval, i.e., $n_{\rm DLA}$($z$), at $2.3\leq z\leq 2.9$ is in
the range of 0.21 to 0.29 (Noterdaeme et al., 2012). This implies that the
covering factor of CNM gas in DLAs is $\leq$20 percent. These upper limits are
also consistent with $n_{\rm CNM}$ = 0.012 estimated using H2 and C i
absorbers, also tracers of cold gas, at high-$z$ (Krogager & Noterdaeme,
2020). Our result shows that a moderately larger survey such as MALS with
$\Delta$X$\gtrsim 10^{4}$ is important to precisely characterise the CNM
fraction and its redshift evolution at high-$z$.
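The $\leq$20 percent covering factor quoted above is simply the ratio of the 21-cm incidence limit to the DLA incidence; a quick check against the numbers in the text:

```python
def cnm_covering_limit(n21_limit, n_dla):
    """Upper limit on the fraction of DLAs hosting 21-cm-detectable CNM."""
    return n21_limit / n_dla

# n_DLA(z) spans 0.21-0.29 at 2.3 <= z <= 2.9 (Noterdaeme et al. 2012)
f_hi = cnm_covering_limit(0.048, 0.21)   # ~0.23
f_lo = cnm_covering_limit(0.048, 0.29)   # ~0.17
```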
The low-$z$ AGNs ($z<0.25$) exhibit H i 21-cm absorption detection rates of
$\sim$30% (e.g., Maccagni et al., 2017). Compared to this, the low associated H
i 21-cm absorption detection rate ($1.6^{+3.8}_{-1.4}$%) and the CNM covering
factor of 0.02 from our survey are intriguing. We show that this is most likely
due to the fact that the powerful quasars in our sample are residing in gas
and dust poor environments, and that luminous quasars only spend a small
fraction of their total lifetime in the dust-obscured phase. We use the spectral
coverage of ${\rm Ly}\alpha$ and various metal absorption lines in our optical
spectra to confirm the absence of high column density atomic gas towards the
quasar sight lines.
From our SALT spectra, we report detections of 5 intervening DLAs and 2 PDLAs
in our sample. The measured number of DLAs per unit redshift, $n_{\rm DLA}$ =
0.54 $\pm$ 0.24 is slightly higher but due to large uncertainties consistent
with the measurement based on SDSS DLAs (Noterdaeme et al., 2012) and the
combined CORALS and UCSD sample of radio-selected quasars (Jorgenson et al.,
2006). Interestingly, the PDLA detection fraction is also a factor of 3
larger than expected. Since the quasars in our sample are fainter than in the previous
surveys, there may indeed be a dependence between $n_{\rm DLA}$ and optical
faintness of quasars (Ellison et al., 2001). These results also underline the
need for larger surveys of dust-unbiased DLAs. Due to limited spectral
coverage, we could search for ${\rm Ly}\alpha$ in only 30% of our SALT-NOT
sample presented here. A complete search of ${\rm Ly}\alpha$ absorption
towards all the targets will allow us to examine the above-mentioned excesses
at a higher significance level.
Eventually, much larger radio-selected surveys ($\Delta$X $\gtrsim 10^{4}$)
such as MALS are needed to uncover the population of dusty DLAs. The key
science objectives are summarized in Gupta et al. (2017), and the survey is
well underway. The first L- and UHF-band spectra based on the science
verification data are presented in Gupta et al. (2021) and Combes et al.
(2021), respectively. Each MALS pointing is centered on a bright ($>200$ mJy
at 1 GHz) radio source. Through wideband spectra of the central bright radio
source and the numerous off-axis radio sources, it will sample the column
density distribution ($N$(H i)$>5\times 10^{19}$ cm-2; $T_{s}$ = 100 K)
relevant to characterize the cross-section of cold atomic gas in and around
normal galaxies and AGNs at $0<z<1.4$. Simultaneously, it will also be
sensitive to detect OH main line absorption at $z<1.9$ in gas with
$N$(OH)$>2.4\times 10^{14}$ cm-2 (excitation temperature = 3.5 K). Since the
formation of OH is tightly coupled to H2, the measurement of OH cross-section
at $z<2$ will be a crucial input for understanding the redshift
evolution of the CNM cross-section (Balashev et al., 2020).
We thank the GMRT staff for their support during the observations. GMRT is run by
the National Centre for Radio Astrophysics of the Tata Institute of
Fundamental Research. This work is based on observations made with SALT and
NOT. The uGMRT data were processed using the MALS data processing facility at
IUCAA. The CASA package is developed by an international consortium of
scientists based at the National Radio Astronomy Observatory (NRAO), the
European Southern Observatory (ESO), the National Astronomical Observatory of
Japan (NAOJ), the Academia Sinica Institute of Astronomy and Astrophysics
(ASIAA), the CSIRO division for Astronomy and Space Science (CASS), and the
Netherlands Institute for Radio Astronomy (ASTRON) under the guidance of NRAO.
The National Radio Astronomy Observatory is a facility of the National Science
Foundation operated under cooperative agreement by Associated Universities,
Inc.
## References
* Adams et al. (2009) Adams, J. J., Hill, G. J., & MacQueen, P. J. 2009, ApJ, 694, 314, doi: 10.1088/0004-637X/694/1/314
* Aditya et al. (2021) Aditya, J. N. H. S., Jorgenson, R., Joshi, V., et al. 2021, MNRAS, 500, 998, doi: 10.1093/mnras/staa3306
* Aditya et al. (2016) Aditya, J. N. H. S., Kanekar, N., & Kurapati, S. 2016, MNRAS, 455, 4000, doi: 10.1093/mnras/stv2563
* Allison et al. (2017) Allison, J. R., Moss, V. A., Macquart, J.-P., et al. 2017, MNRAS, 465, 4450, doi: 10.1093/mnras/stw2860
* Balashev et al. (2020) Balashev, S., Gupta, N., & Kosenko, D. 2020, arXiv e-prints, arXiv:2012.12241. https://arxiv.org/abs/2012.12241
* Baloković et al. (2012) Baloković, M., Smolčić, V., Ivezić, Ž., et al. 2012, ApJ, 759, 30, doi: 10.1088/0004-637X/759/1/30
* Bischetti et al. (2021) Bischetti, M., Feruglio, C., Piconcelli, E., et al. 2021, A&A, 645, A33, doi: 10.1051/0004-6361/202039057
* Borthakur et al. (2010) Borthakur, S., Tripp, T. M., Yun, M. S., et al. 2010, ApJ, 713, 131, doi: 10.1088/0004-637X/713/1/131
* Briggs et al. (1993) Briggs, F. H., Sorar, E., & Taramopoulos, A. 1993, ApJ, 415, L99, doi: 10.1086/187042
* Burgh et al. (2003) Burgh, E. B., Nordsieck, K. H., Kobulnicky, H. A., et al. 2003, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4841, Proc. SPIE, ed. M. Iye & A. F. M. Moorwood, 1463–1471
* Carilli et al. (1994) Carilli, C. L., Owen, F. N., & Harris, D. E. 1994, AJ, 107, 480, doi: 10.1086/116870
* Carilli & van Gorkom (1992) Carilli, C. L., & van Gorkom, J. H. 1992, ApJ, 399, 373, doi: 10.1086/171934
* Carilli et al. (2007) Carilli, C. L., Wang, R., van Hoven, M. B., et al. 2007, AJ, 133, 2841, doi: 10.1086/518436
* Chambers et al. (2016) Chambers, K. C., Magnier, E. A., Metcalfe, N., et al. 2016, arXiv e-prints, arXiv:1612.05560. https://arxiv.org/abs/1612.05560
* Chandola et al. (2020) Chandola, Y., Saikia, D. J., & Li, D. 2020, MNRAS, 494, 5161, doi: 10.1093/mnras/staa1029
* Combes et al. (2021) Combes, F., Gupta, N., Muller, S., et al. 2021, arXiv e-prints, arXiv:2101.00188. https://arxiv.org/abs/2101.00188
* Condon et al. (1998) Condon, J. J., Cotton, W. D., Greisen, E. W., et al. 1998, AJ, 115, 1693, doi: 10.1086/300337
* Conway (2002) Conway, J. E. 2002, New A Rev., 46, 263, doi: 10.1016/S1387-6473(01)00191-9
* Curran et al. (2013) Curran, S. J., Allison, J. R., Glowacki, M., Whiting, M. T., & Sadler, E. M. 2013, MNRAS, 431, 3408, doi: 10.1093/mnras/stt438
* Curran & Whiting (2010) Curran, S. J., & Whiting, M. T. 2010, ApJ, 712, 303, doi: 10.1088/0004-637X/712/1/303
* Cutri & et al. (2014) Cutri, R. M., & et al. 2014, VizieR Online Data Catalog, II/328
* Decarli et al. (2018) Decarli, R., Walter, F., Venemans, B. P., et al. 2018, ApJ, 854, 97, doi: 10.3847/1538-4357/aaa5aa
* Dutta et al. (2019) Dutta, R., Srianand, R., & Gupta, N. 2019, MNRAS, 489, 1099, doi: 10.1093/mnras/stz2178
* Dutta et al. (2017a) Dutta, R., Srianand, R., Gupta, N., & Joshi, R. 2017a, MNRAS, 468, 1029, doi: 10.1093/mnras/stx538
* Dutta et al. (2017b) Dutta, R., Srianand, R., Gupta, N., et al. 2017b, MNRAS, 465, 4249, doi: 10.1093/mnras/stw3040
* Dutta et al. (2017c) —. 2017c, MNRAS, 465, 588, doi: 10.1093/mnras/stw2689
* Ellison et al. (2001) Ellison, S. L., Yan, L., Hook, I. M., et al. 2001, A&A, 379, 393, doi: 10.1051/0004-6361:20011281
* Ellison et al. (2008) Ellison, S. L., York, B. A., Pettini, M., & Kanekar, N. 2008, MNRAS, 388, 1349, doi: 10.1111/j.1365-2966.2008.13482.x
* Feruglio et al. (2018) Feruglio, C., Fiore, F., Carniani, S., et al. 2018, A&A, 619, A39, doi: 10.1051/0004-6361/201833174
* Gehrels (1986) Gehrels, N. 1986, ApJ, 303, 336, doi: 10.1086/164079
* Geier et al. (2019) Geier, S. J., Heintz, K. E., Fynbo, J. P. U., et al. 2019, A&A, 625, L9, doi: 10.1051/0004-6361/201935108
* Geréb et al. (2015) Geréb, K., Maccagni, F. M., Morganti, R., & Oosterloo, T. A. 2015, A&A, 575, A44, doi: 10.1051/0004-6361/201424655
* Glowacki et al. (2017) Glowacki, M., Allison, J. R., Sadler, E. M., et al. 2017, MNRAS, 467, 2766, doi: 10.1093/mnras/stx214
* Grasha et al. (2019) Grasha, K., Darling, J., Bolatto, A., Leroy, A. K., & Stocke, J. T. 2019, ApJS, 245, 3, doi: 10.3847/1538-4365/ab4906
* Grasha et al. (2020) Grasha, K., Darling, J., Leroy, A. K., & Bolatto, A. D. 2020, MNRAS, 498, 883, doi: 10.1093/mnras/staa2521
* Gupta et al. (2018a) Gupta, N., Momjian, E., Srianand, R., et al. 2018a, ApJ, 860, L22, doi: 10.3847/2041-8213/aac9cd
* Gupta & Saikia (2006) Gupta, N., & Saikia, D. J. 2006, MNRAS, 370, 738, doi: 10.1111/j.1365-2966.2006.10498.x
* Gupta et al. (2006) Gupta, N., Salter, C. J., Saikia, D. J., Ghosh, T., & Jeyakumar, S. 2006, MNRAS, 373, 972, doi: 10.1111/j.1365-2966.2006.11064.x
* Gupta et al. (2010) Gupta, N., Srianand, R., Bowen, D. V., York, D. G., & Wadadekar, Y. 2010, MNRAS, 408, 849, doi: 10.1111/j.1365-2966.2010.17198.x
* Gupta et al. (2012) Gupta, N., Srianand, R., Petitjean, P., et al. 2012, A&A, 544, A21, doi: 10.1051/0004-6361/201219159
* Gupta et al. (2009) Gupta, N., Srianand, R., Petitjean, P., Noterdaeme, P., & Saikia, D. J. 2009, MNRAS, 398, 201, doi: 10.1111/j.1365-2966.2009.14933.x
  * Gupta et al. (2017) Gupta, N., Srianand, R., Baan, W., et al. 2017, arXiv e-prints. https://arxiv.org/abs/1708.07371
* Gupta et al. (2018b) Gupta, N., Srianand, R., Farnes, J. S., et al. 2018b, MNRAS, 476, 2432, doi: 10.1093/mnras/sty384
* Gupta et al. (2021) Gupta, N., Jagannathan, P., Srianand, R., et al. 2021, ApJ, 907, 11, doi: 10.3847/1538-4357/abcb85
* Heiles & Troland (2003) Heiles, C., & Troland, T. H. 2003, ApJ, 586, 1067, doi: 10.1086/367828
* Jorgenson et al. (2006) Jorgenson, R. A., Wolfe, A. M., Prochaska, J. X., et al. 2006, ApJ, 646, 730, doi: 10.1086/505130
* Kanekar et al. (2007) Kanekar, N., Chengalur, J. N., & Lane, W. M. 2007, MNRAS, 375, 1528, doi: 10.1111/j.1365-2966.2007.11430.x
* Kanekar et al. (2014) Kanekar, N., Prochaska, J. X., Smette, A., et al. 2014, MNRAS, 438, 2131, doi: 10.1093/mnras/stt2338
* Kobulnicky et al. (2003) Kobulnicky, H. A., Nordsieck, K. H., Burgh, E. B., et al. 2003, in Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Vol. 4841, Instrument Design and Performance for Optical/Infrared Ground-based Telescopes, ed. M. Iye & A. F. M. Moorwood, 1634–1644
* Krogager et al. (2019) Krogager, J.-K., Fynbo, J. P. U., Møller, P., et al. 2019, MNRAS, 486, 4377, doi: 10.1093/mnras/stz1120
* Krogager et al. (2016) Krogager, J. K., Fynbo, J. P. U., Noterdaeme, P., et al. 2016, MNRAS, 455, 2698, doi: 10.1093/mnras/stv2346
* Krogager & Noterdaeme (2020) Krogager, J.-K., & Noterdaeme, P. 2020, A&A, 644, L6, doi: 10.1051/0004-6361/202039843
* Krogager et al. (2018) Krogager, J. K., Gupta, N., Noterdaeme, P., et al. 2018, ApJS, 235, 10, doi: 10.3847/1538-4365/aaab51
* Lacy et al. (2020) Lacy, M., Baum, S. A., Chandler, C. J., et al. 2020, PASP, 132, 035001, doi: 10.1088/1538-3873/ab63eb
* Lawrence et al. (1995a) Lawrence, C. R., Cohen, J. G., & Oke, J. B. 1995a, AJ, 110, 2583, doi: 10.1086/117714
* Lawrence et al. (1995b) Lawrence, C. R., Elston, R., Januzzi, B. T., & Turner, E. L. 1995b, AJ, 110, 2570, doi: 10.1086/117713
* Maccagni et al. (2017) Maccagni, F. M., Morganti, R., Oosterloo, T. A., Geréb, K., & Maddox, N. 2017, A&A, 604, A43, doi: 10.1051/0004-6361/201730563
* Manti et al. (2017) Manti, S., Gallerani, S., Ferrara, A., Greig, B., & Feruglio, C. 2017, MNRAS, 466, 1160, doi: 10.1093/mnras/stw3168
* Martini & Weinberg (2001) Martini, P., & Weinberg, D. H. 2001, ApJ, 547, 12, doi: 10.1086/318331
* Momjian et al. (2018) Momjian, E., Carilli, C. L., Bañados, E., Walter, F., & Venemans, B. P. 2018, ApJ, 861, 86, doi: 10.3847/1538-4357/aac76f
* Moore et al. (1999) Moore, C. B., Carilli, C. L., & Menten, K. M. 1999, ApJ, 510, L87, doi: 10.1086/311818
* Morganti & Oosterloo (2018) Morganti, R., & Oosterloo, T. 2018, A&A Rev., 26, 4, doi: 10.1007/s00159-018-0109-x
* Noterdaeme et al. (2019) Noterdaeme, P., Balashev, S., Krogager, J. K., et al. 2019, A&A, 627, A32, doi: 10.1051/0004-6361/201935371
* Noterdaeme et al. (2018) Noterdaeme, P., Ledoux, C., Zou, S., et al. 2018, A&A, 612, A58, doi: 10.1051/0004-6361/201732266
* Noterdaeme et al. (2012) Noterdaeme, P., Petitjean, P., Carithers, W. C., et al. 2012, A&A, 547, L1, doi: 10.1051/0004-6361/201220259
* Petitjean et al. (2000) Petitjean, P., Srianand, R., & Ledoux, C. 2000, A&A, 364, L26
* Prochaska et al. (2008a) Prochaska, J. X., Chen, H.-W., Wolfe, A. M., Dessauges-Zavadsky, M., & Bloom, J. S. 2008a, ApJ, 672, 59, doi: 10.1086/523689
* Prochaska et al. (2008b) Prochaska, J. X., Hennawi, J. F., & Herbert-Fort, S. 2008b, ApJ, 675, 1002, doi: 10.1086/526508
* Rao et al. (2006) Rao, S. M., Turnshek, D. A., & Nestor, D. B. 2006, ApJ, 636, 610, doi: 10.1086/498132
* Reeves et al. (2016) Reeves, S. N., Sadler, E. M., Allison, J. R., et al. 2016, MNRAS, 457, 2613, doi: 10.1093/mnras/stv3011
* Saxena et al. (2018) Saxena, A., Marinello, M., Overzier, R. A., et al. 2018, MNRAS, 480, 2733, doi: 10.1093/mnras/sty1996
* Selsing et al. (2016) Selsing, J., Fynbo, J. P. U., Christensen, L., & Krogager, J. K. 2016, A&A, 585, A87, doi: 10.1051/0004-6361/201527096
* Shukla et al. (2021) Shukla, G., Srianand, R., Gupta, N., et al. 2021, MNRAS, 501, 5362, doi: 10.1093/mnras/staa3977
* Srianand et al. (2015) Srianand, R., Gupta, N., Momjian, E., & Vivek, M. 2015, MNRAS, 451, 917, doi: 10.1093/mnras/stv1004
* Srianand et al. (2012) Srianand, R., Gupta, N., Petitjean, P., et al. 2012, MNRAS, 421, 651, doi: 10.1111/j.1365-2966.2011.20342.x
* Srianand et al. (2008) Srianand, R., Gupta, N., Petitjean, P., Noterdaeme, P., & Saikia, D. J. 2008, MNRAS, 391, L69, doi: 10.1111/j.1745-3933.2008.00558.x
* Uson et al. (1991) Uson, J. M., Bagri, D. S., & Cornwell, T. J. 1991, Phys. Rev. Lett., 67, 3328, doi: 10.1103/PhysRevLett.67.3328
* Venemans et al. (2017) Venemans, B. P., Walter, F., Decarli, R., et al. 2017, ApJ, 851, L8, doi: 10.3847/2041-8213/aa943a
* Vermeulen et al. (2003) Vermeulen, R. C., Pihlström, Y. M., Tschager, W., et al. 2003, A&A, 404, 861, doi: 10.1051/0004-6361:20030468
* Weedman et al. (2012) Weedman, D., Sargsyan, L., Lebouteiller, V., Houck, J., & Barry, D. 2012, ApJ, 761, 184, doi: 10.1088/0004-637X/761/2/184
* Wolfe et al. (1981) Wolfe, A. M., Briggs, F. H., & Jauncey, D. L. 1981, ApJ, 248, 460, doi: 10.1086/159170
* Wright et al. (2010) Wright, E. L., Eisenhardt, P. R. M., Mainzer, A. K., et al. 2010, AJ, 140, 1868, doi: 10.1088/0004-6256/140/6/1868
* York et al. (2000) York, D. G., Adelman, J., Anderson, John E., J., et al. 2000, AJ, 120, 1579, doi: 10.1086/301513
|
B.S.E., University of Michigan (2019)
Department of Physics
Doctor of Philosophy
June 2023
May 23, 2023
© 2023 Nicholas Kamp. All rights reserved.
The author hereby grants to MIT a nonexclusive, worldwide, irrevocable,
royalty-free license to exercise any and all rights under copyright, including
to reproduce, preserve, distribute and publicly display copies of the thesis,
or release the thesis under an open-access license.
Janet M. Conrad, Professor
Lindley Winslow, Associate Department Head of Physics
# Experimental and Phenomenological Investigations of the MiniBooNE Anomaly
Nicholas Kamp
The $4.8\sigma$ excess of electron neutrino-like events reported by the
MiniBooNE experiment at Fermilab’s Booster Neutrino Beam (BNB) is one of the
most significant and longest-standing anomalies in particle physics. This
thesis covers a range of experimental and theoretical efforts to elucidate the
origin of the MiniBooNE low energy excess (LEE). We begin with the follow-up
MicroBooNE experiment, which took data along the BNB from 2016 to 2021. The
detailed images produced by the MicroBooNE liquid argon time projection
chamber enable a suite of measurements that each test a different potential
source of the MiniBooNE anomaly. This thesis specifically presents
MicroBooNE’s search for $\nu_{e}$ charged-current quasi-elastic (CCQE)
interactions consistent with two-body scattering. The two-body CCQE analysis
uses a novel reconstruction process, including a number of deep-learning based
algorithms, to isolate a sample of $\nu_{e}$ CCQE interaction candidates with
$75\%$ purity. The analysis rules out an entirely $\nu_{e}$-based explanation
of the MiniBooNE excess at the $2.4\sigma$ confidence level. We next perform a
combined fit of MicroBooNE and MiniBooNE data to the popular $3+1$ model; even
after the MicroBooNE results, allowed regions in $\Delta
m^{2}$-$\sin^{2}2\theta_{\mu e}$ parameter space exist at the $3\sigma$
confidence level. This thesis also demonstrates that, due to nuclear effects
in the low-energy cross section behavior, the MicroBooNE data are consistent
with a $\overline{\nu}_{e}$-based explanation of the MiniBooNE LEE at the
$<2\sigma$ confidence level. Next, we investigate a phenomenological
explanation of the MiniBooNE excess involving both an eV-scale sterile
neutrino and a dipole-coupled MeV-scale heavy neutral lepton (HNL). It is
shown that a 500 MeV HNL can accommodate the energy and angular distributions
of the LEE at the $2\sigma$ confidence level while avoiding stringent
constraints derived from MINER$\nu$A elastic scattering data. Finally, we
discuss the Coherent CAPTAIN-Mills (CCM) experiment, a 10-ton light-based
liquid argon detector at Los Alamos National Laboratory. The background
rejection achieved from a novel Cherenkov-based reconstruction algorithm will
give CCM world-leading sensitivity to a number of beyond-the-Standard Model
physics scenarios, including dipole-coupled HNLs.
### Acknowledgments
Development from a starry-eyed first year graduate student into a competent
researcher is like the MSW effect: it doesn’t happen in a vacuum. There are
many people who have helped me along the way in both physics and life, without
whom I would never have gotten to the point of writing this thesis.
First and foremost, I owe an immense debt of gratitude to Janet Conrad. Janet
has been an incredible mentor to me during my time as a graduate student; her
advice and wisdom have helped me become the scientist I wanted to be when I
came to MIT four years ago. I’d like to specifically thank Janet for taking my
interests seriously and working with me to develop research projects that
matched them. Creativity, ingenuity, enthusiasm, and kindness run rampant in
the Conrad group; I will always be grateful for being offered a spot in it. I
look forward to many years of fruitful collaboration to come.
To my partner, Wenzer: thank you for the love, support, and patience over the
last two years. Life is not so hard when we can act as a restoring force for
one another; I am grateful for having been able to rely on it while writing
this thesis. I look forward with great excitement to our future adventures
together.
Thank you to my past mentors: Christine Aidala for introducing me to particle
physics through POLARIS, Robert Cooper for asking me about my research at APS
DNP 2016, Bill Louis for answering my many questions about neutrino physics in
my first summer at LANL, Richard Van de Water for teaching me to follow the
data (which greatly influenced my choice of graduate research), and Josh Spitz
for helping me develop confidence as a neutrino physicist on JSNS2. I’d also like
to thank Christopher Mauger and the rest of the Mini-CAPTAIN team for an
introduction to what it takes to run a particle physics experiment at an
accelerator.
Thank you to members of the Conrad group past and present: those I worked
with, Lauren Yates, Adrian Hourlier, Jarrett Moon, Austin Schneider, Darcy
Newmark, Alejandro Diaz, and John Hardin, for being fantastic collaborators,
and those I did not, Loyd Waits, Joe Smolsky, Daniel Winklehner, Philip
Weigel, and Josh Villareal, for making MIT a brighter place. Thank you
especially to Austin for your infinite patience in answering my many questions
on statistics and software; each of our projects has been a great pleasure.
To my MicroBooNE collaborators not mentioned above, Taritree Wongjirad, Katie
Mason, Joshua Mills, Polina Abratenko, Ran Itay, Mike Shaevitz, Georgia
Karagiorgi, Davio Cianci, Rui An, and everyone else: thank you for your
excellent teamwork in putting together the Deep Learning analysis.
To my CCM collaborators not mentioned above, Edward Dunton, Mayank Tripathi,
Adrian Thompson, Will Thompson, Marisol Chávez Estrada, and everyone else:
thank you for the invigorating research and discussion over the last couple of
years, and best of luck with CCM200!
Thank you to Carlos Argüelles, Mike Shaevitz, Matheus Hostert, Stefano
Vergani, and Melissa Uchida for your excellent mentorship and collaboration in
our phenomenological endeavors together. Thank you specifically to Carlos for
giving me a welcome introduction to neutrino phenomenology, Mike for ensuring
the robustness of each analysis, and Matheus for patiently answering my many
model-building questions.
To all of my friends not mentioned above: Jack, Ryan, Alexis, Melissa, Ben,
Bhaamati, Charlie, Vincent, Patrick, Caolan, Ouail, Artur, Sam, Zhiquan,
Rebecca, Felix, Lila, Rahul, Brandon, Field, Kelsey, Woody, Joey, Rory,
Cooper, Daniel, Kaliroë, Elena, and everyone else: thank you for all of the
great memories (climbing, hiking, skiing, playing music, eating, drinking,
commiserating, and laughing) over the past four years. Thank you especially to
the last three for making preparation for the oral exam significantly more
enjoyable.
Thank you to the MIT administrative staff, including (but not limited to)
Lauren Saragosa, Karen Dow, Catherine Modica, Sydney Miller, Alisa Cabral, and
Elsye Luc, for helping make graduate school a more manageable endeavor. Thank
you also to the rest of my thesis committee, Joseph Formaggio and Washington
Taylor, for helping me get through the final part of graduate school.
Finally, thank you to my parents, Jim and Carla, my siblings, Serafina and
Daniel, and the rest of my family. From elementary school science fairs to
Saturday Morning Physics to today, none of this would have been possible
without your love and support.
###### Contents
1. 1 Introduction
1. 1.1 A Brief History of the Neutrino
2. 1.2 Neutrinos in the Standard Model
3. 1.3 Massive Neutrinos
4. 1.4 Anomalies in the Neutrino Sector
2. 2 The MiniBooNE Experiment
1. 2.1 Overview of MiniBooNE
1. 2.1.1 The Booster Neutrino Beam
2. 2.1.2 The MiniBooNE Detector
2. 2.2 The MiniBooNE Low Energy Electron-Like Excess
3. 3 The MicroBooNE Detector
1. 3.1 Liquid Argon Time Projection Chamber
1. 3.1.1 Cryogenics
2. 3.1.2 LArTPC Drift System
3. 3.1.3 Light Collection System
2. 3.2 TPC Signal Processing
1. 3.2.1 Noise Filtering
2. 3.2.2 Deconvolution
4. 4 The MicroBooNE Electron Neutrino Analysis: Overview and Selection
1. 4.1 Dataset and Simulation
2. 4.2 The Electron Low Energy Excess Template
3. 4.3 Philosophy Behind the Two-Body CCQE Analysis
4. 4.4 Reconstruction
1. 4.4.1 Convolutional Neural Networks in LArTPCs
2. 4.4.2 Vertex and Track Reconstruction
3. 4.4.3 Publication: Electromagnetic shower reconstruction and energy validation with Michel electrons and $\pi^{0}$ samples for the deep-learning-based analyses in MicroBooNE
5. 4.5 $1e1p$ Event Selection
1. 4.5.1 Basic Data Selection Criteria
2. 4.5.2 Boosted Decision Tree Ensemble
3. 4.5.3 Particle Identification Cuts
4. 4.5.4 The Final $1e1p$ Sample
5. 5 The MicroBooNE Electron Neutrino Analysis: Results and Discussion
1. 5.1 First Results from the Two-Body CCQE Analysis
1. 5.1.1 Background Estimation
2. 5.1.2 Evaluation of Systematic Uncertainties
3. 5.1.3 Constraint from the $1\mu 1p$ Sample
4. 5.1.4 Blinded Analysis Approach
2. 5.2 Statistical Interpretation
1. 5.2.1 Goodness of Fit
2. 5.2.2 Two Hypothesis Test
3. 5.2.3 Signal Strength Scaling Test
3. 5.3 Discussion and Outlook
1. 5.3.1 Publication: MiniBooNE and MicroBooNE Combined Fit to a $3+1$ Sterile Neutrino Scenario
2. 5.3.2 Publication: Implications of MicroBooNE’s low sensitivity to electron antineutrino interactions in the search for the MiniBooNE excess
6. 6 Neutrissimos: Heavy Neutral Leptons with a Dipole Moment
1. 6.1 Dipole-Portal Neutrissimos
2. 6.2 Overview of the Mixed Model
3. 6.3 Neutrissimos in MiniBooNE
1. 6.3.1 Simulation in LeptonInjector
2. 6.3.2 Fits to the MiniBooNE Excess
4. 6.4 Publication: Dipole-coupled neutrissimo explanations of the MiniBooNE excess including constraints from MINERvA data
7. 7 The Coherent CAPTAIN-Mills Experiment
1. 7.1 The CCM Beamline and Detector
2. 7.2 Cherenkov Light Reconstruction
1. 7.2.1 Simulation-Based Sensitivity Estimation
2. 7.2.2 Identifying Cherenkov Light in Data
3. 7.3 Neutrissimos in CCM
8. 8 Conclusions and Future Prospects
9. A Publication: Convolutional neural networks for shower energy prediction in liquid argon time projection chambers
10. B Signal Event Displays from the Two-Body CCQE Analysis
###### List of Figures
1. 1.1 Telegram from Fred Reines and Clyde Cowan informing Wolfgang Pauli of their detection of neutrinos from a nuclear reactor.
2. 1.2 The deficit of the observed solar $\nu_{e}$ flux compared with the theoretical expectation. The Homestake experiment is shown on the far left; follow-up solar neutrino measurements confirming the deficit are also shown, including the 2002 SNO result which brought forth a solution to the solar neutrino problem. Figure from Ref. [1].
3. 1.3 The up-down asymmetry measured in SuperK as a function of lepton momentum, separated into $e$-like and $\mu$-like events as well as fully-contained (FC) and partially-contained (PC) events. The dashed line indicates the best fit to $\nu_{\mu}\to\nu_{\tau}$ oscillations. Figure from Ref. [2].
4. 1.4 Measurement of the solar $^{8}$B flux from the SNO collaboration, broken down into the $\nu_{e}$ and $\nu_{\mu,\tau}$ sub-components. Measurement of the CC, NC, and ES channels show up as slices in the two-dimensional flux parameter space. Figure from Ref. [3].
5. 1.5 Diagrams contributing to $\nu e^{-}$ elastic scattering
6. 1.6 Diagrams contributing to $\overline{\nu}e^{-}$ elastic scattering
7. 1.7 Diagrams contributing to neutrino-nucleon charged-current quasielastic scattering
8. 1.8 CC inclusive neutrino and antineutrino nucleon scattering cross sections as a function of neutrino energy. Figure from Ref. [4].
9. 1.9 The LSND excess of $\overline{\nu}_{e}$ events on top of the predicted SM background (green and red regions). The blue region indicates the best fit to $\overline{\nu}_{\mu}\to\overline{\nu}_{e}$ oscillations via a sterile neutrino state. Figure from Ref. [5].
10. 1.10 The MiniBooNE electron-like channel data and SM background prediction for the entire neutrino mode dataset, as a function of the reconstructed neutrino energy.
11. 1.11 Data contributing to the reactor antineutrino anomaly, indicating the $\sim 5\%$ flux deficit observed by short-baseline reactor neutrino experiments. The red line indicates the prediction incorporating SM neutrino oscillations only, while the blue line shows an example prediction including a sterile neutrino. Figure from Ref. [6].
12. 1.12 Data contributing to the gallium anomaly, indicating the $\sim 20\%$ deficit in the $^{71}$Ge production rate observed by SAGE, GALLEX, and BEST. Figure from Ref. [7].
13. 1.13 Preferred regions in $\sin^{2}2\theta_{ee}$–$\Delta m^{2}$ parameter space to explain the RAA [8] (green contour) and gallium anomaly [9] (blue regions). The total excluded region from other experiments (grey region) is also shown. Figure from Ref. [9].
14. 1.14 Preferred regions in $\sin^{2}2\theta_{\mu e}$–$\Delta m^{2}$ parameter space to explain the LSND anomaly [5] (filled contours) and MiniBooNE anomaly [10] (open contours). Figure from Ref. [10].
15. 1.15 Graphical representation of the tension observed in 3+1 global fits between different subsets of the experimental landscape. Figure 1.15(a) shows the tension between $\nu_{e}$ appearance experiments and $\nu_{e}$/$\nu_{\mu}$ disappearance experiments observed in Ref. [11]. Figure 1.15(b) shows the tension between allowed regions from $\nu_{e}$ appearance (lower right), $\nu_{e}$ disappearance (upper right), and $\nu_{\mu}$ disappearance (upper left) experiments observed in Ref. [12], which includes the latest results from the BEST experiment.
16. (a) From Ref. [11]
17. (b) From Ref. [12]
18. 2.1 A schematic depiction of the BNB at Fermilab, including the downstream MiniBooNE detector. Figure from Ref. [13].
19. 2.2 Breakdown of the neutrino flux at the BNB in neutrino (left) and antineutrino (right) mode. Figure from Ref. [14].
20. 2.3 The MiniBooNE detector situated in the cylindrical detector hall (left) and an image of the interior of the MiniBooNE detector (right), showing the PMTs in both the signal and veto regions. Figure from Ref. [15].
21. 2.4 Visual representations of particle identification in MiniBooNE. Figure 2.4(a) shows a schematic representation of the detector signature from the three main particle classes in MiniBooNE: muons, electrons, and neutral pions. Figure 2.4(b) shows the MiniBooNE log-likelihood-ratio between the $e$-like and $\mu$-like hypothesis as a function of reconstructed neutrino energy, considering both simulated $\nu_{e}$ CCQE (top) and $\nu_{\mu}$ CCQE (bottom) interactions.
22. (a) From Ref. [76]
23. (b) From Ref. [77]
24. 2.5 The $E_{\nu}^{\rm QE}$ distribution of the MiniBooNE $e$-like excess in the total neutrino mode (figure 2.5(a)) and antineutrino mode (figure 2.5(b)) datasets. The observation and SM prediction in each bin are shown by the data points and colored histograms, respectively.
25. (a) $\nu_{e}$ sample, from Ref. [10]
26. (b) $\overline{\nu}_{e}$ sample, from Ref. [79]
27. 2.6 The lepton visible energy (figure 2.6(a)) and $\cos\theta$ (figure 2.6(b)) distributions of the MiniBooNE $e$-like excess in the total neutrino mode dataset. The observation and SM prediction in each bin are shown by the data points and colored histograms, respectively. Figures from Ref. [10].
28. (a) Lepton $E_{\rm vis}$ distribution
29. (b) Lepton $\cos\theta$ distribution
30. 2.7 The MiniBooNE and LSND excesses as a function of the ratio $L/E$. The MiniBooNE data is separated into neutrino and antineutrino mode. Figure from Ref. [10].
31. 3.1 Schematic depictions of the MicroBooNE LArTPC. Figure 3.1(a) shows the detection process for charged particles from a neutrino interaction in a MicroBooNE-like LArTPC. Figure 3.1(b) shows a cross-sectional view of the MicroBooNE detector along the $-\hat{z}$ direction. Figures from Ref. [16].
32. (a)
33. (b)
34. 3.2 Figure 3.2(a) shows a close-up image of the cathode plane of the MicroBooNE LArTPC. The stainless steel field cage tubes can also be seen surrounding the active volume. Figure 3.2(b) shows a cross-sectional map of the electric field at the edge of the active volume, considering a cathode plane voltage of -128 kV. The legend shows the field strength in units of V/m. Figures from Ref. [16].
35. (a)
36. (b)
37. 3.3 Figure 3.3(a) shows a photograph of a single wire carrier board with 32 mounted wires. Figure 3.3(b) shows the fully-assembled MicroBooNE LArTPC, highlighting the anode plane mounted on the stainless steel frame. Figures from Ref. [16].
38. (a)
39. (b)
40. 3.4 The LAr scintillation light spectrum, TPB ultra-violet absorption spectrum, TPB emission spectrum, PMT quantum efficiency, and PMT surface transmission efficiency as a function of photon wavelength. Figure from Ref. [17].
41. 3.5 Figure 3.5(a) shows a photograph of a single PMT system used in the MicroBooNE detector. The acrylic window here has not yet been coated with TPB. Figure 3.5(b) shows the PMT signal from a stopped cosmic muon (top) that decays to a Michel (bottom). Figures from Ref. [16].
42. (a)
43. (b)
44. 3.6 2D displays of the signal from wires in one of the induction planes in a single data event, before and after the application of the offline noise filters. Figure from Ref. [18].
45. 3.7 A U plane event display of a candidate neutrino interaction in the MicroBooNE data. The impact of the 1D and 2D deconvolution algorithms on the post-noise-filtering signal is shown. Figure from Ref. [19].
46. 4.1 Figure 4.1(a) shows the unfolded $\nu_{e}$ prediction in MiniBooNE calculated using the first $6.46\times 10^{20}$ POT of neutrino mode data [20]. Figure 4.1(b) shows the unfolded eLEE model weights derived from the first $12.84\times 10^{20}$ POT of MiniBooNE neutrino mode data, which constitute the MicroBooNE eLEE model.
47. (a) From Ref. [20]
48. (b)
49. 4.2 The expected distribution of $\nu_{e}$ interactions in MicroBooNE as a function of the true $\nu_{e}$ energy. The dotted line shows the expectation from the MicroBooNE eLEE model of the MiniBooNE excess discussed in section 4.2.
50. 4.3 An example candidate $1e1p$ data event in MicroBooNE, including the raw LArTPC collection plane image (left) and the pixel labels assigned from SparseSSNet (right).
51. 4.4 The relationship between the final state lepton (left) and proton (right) energy and scattering angle, for different neutrino energies, in $\nu_{\mu}$ (top) and $\nu_{e}$ (bottom) CCQE scattering.
52. 4.5 Figure 4.5(a) shows a diagram of the SparseSSNet U-ResNet architecture. Figure 4.5(b) shows the SparseSSNet pixel labels on a simulated image in the training dataset. Figures from Ref. [21].
53. (a)
54. (b)
55. 4.6 Figure 4.6(a) shows a diagram of the MPID architecture. Figure 4.6(b) shows the MPID image scores on an example simulated event. Figures from Ref. [22].
56. (a)
57. (b)
58. 4.7 The fraction of EM showers reconstructed to within 5% of the true deposited energy as a function of the fraction of unresponsive wires in the LArTPC collection plane, considering three different methods. The ResNet and Inception network significantly outperform the traditional linear calibration between charge and energy. Figure from Ref. [23].
59. 4.8 Figure 4.8(a) shows the angular metric that is minimized to find a 3D neutrino vertex candidate. Figure 4.8(b) shows the iterative track reconstruction algorithm, which relies on calculating distances (L1 and L2) and angles ($\theta$ and $\phi$) with respect to the previous point and the end of the track. Figures from Ref. [24].
60. (a)
61. (b)
62. 4.9 An example of an event that fails the shower energy consistency cut, because the EM shower passes through an unresponsive region of the collection plane.
63. 4.10 Figure 4.10(a) shows the distribution of the fractional shower energy consistency variable for events with an old $1e1p$ BDT score above/below 0.7. Figure 4.10(b) shows the efficiency with which events above/below the old $1e1p$ BDT cutoff pass the fractional consistency cut as a function of the chosen upper bound.
64. (a)
65. (b)
66. 4.11 The F score of each variable for one of the $1e1p$ BDTs in the ensemble from run 1, run 2, and run 3.
67. (a) Run 1
68. (b) Run 2
69. (c) Run 3
70. 4.12 Figure 4.12(a) and figure 4.12(b) show the MC distribution of the $1e1p$ BDT ensemble average score over all three run periods for the full [0,1] range and zoomed in to the [0.95,1] range, respectively.
71. (a)
72. (b)
73. 4.13 Figure 4.13(a) and figure 4.13(b) show the predicted $\nu_{e}$ event rate in run period 2 and run period 3, respectively, using both the run period 2 ensemble and the run period 3 ensemble.
74. (a)
75. (b)
76. 4.14 Figure 4.14(a) and figure 4.14(b) show the fractional difference in average BDT score $(S_{n}-S_{0})/S_{0}$ as a function of the number of omitted BDTs $n$ over the simulation from run period 2 and run period 3, respectively. The red histogram shows the actual distribution of the number of omitted BDTs over the run period 2 and run period 3 simulation samples, respectively. Scores are calculated using the run period 3 and run period 2 BDT ensemble, respectively.
77. (a)
78. (b)
79. 4.15 Figure 4.15(a) and figure 4.15(b) show the MPID electron and muon score, respectively, as a function of the reconstructed electron energy in intrinsic $\nu_{e}$ MC events.
80. (a)
81. (b)
82. 4.16 The $E_{\nu}^{\rm range}$ distribution for the $1e1p$ signal sample, showing only the predicted event rate from the MC. The prediction from the eLEE model is shown in the dashed blue line.
83. 4.17 Figure 4.17(a) shows the distribution of the fractional error on the neutrino energy for MC events in the $1e1p$ signal sample, restricted to $200<E_{\nu}^{\rm Range}\;[{\rm MeV}]<1200$. Figure 4.17(b) shows the 2D distribution of fractional error as a function of the true neutrino energy.
84. (a)
85. (b)
86. 4.18 Figure 4.18(a) shows the post-vertex-identification efficiency of true $\nu_{e}$ CCQE selection for subsequent stages of the $1e1p$ cuts. Figure 4.18(b) shows the true $\nu_{e}$ CCQE event rates over the full run 1-3 dataset after subsequent stages of the $1e1p$ cuts.
87. (a)
88. (b)
89. 5.1 Top: pixel intensity (color scale is in PIU as defined in section 4.4); Bottom: SparseSSNet labels; Left to Right: U, V, Y, planes. The white circle indicates the reconstructed vertex. The horizontal axis corresponds to the wire plane direction and the vertical axis corresponds to the electron drift direction, which is measured using the arrival time of charge on the wires.
90. 5.2 The $1e1p$ sample $E_{\nu}$ distribution, comparing data (black points) to the unconstrained prediction (stacked histogram) in the $200<E_{\nu}<1200$ MeV region. The eLEE model prediction is represented by the dashed blue line. The prediction is presented in terms of both interaction type (figure 5.2(a)) and final state topology (figure 5.2(b)).
91. (a)
92. (b)
93. 5.3 Average $1e1p$ BDT ensemble score distribution comparing data to the unconstrained prediction.
94. 5.4 Comparison between data and unconstrained prediction in the $E_{e}$ (figure 5.4(a)), $E_{p}$ (figure 5.4(b)), $\theta_{e}$ (figure 5.4(c)), and $E_{\nu}^{QE-\ell}$ (figure 5.4(d)) distributions of the $1e1p$ sample.
95. (a)
96. (b)
97. (c)
98. (d)
99. 5.5 The fit to the $\nu_{\mu}$ background distribution to the $1e1p$ analysis. The shape fit is performed at a loose BDT score cutoff of 0.7 (figure 5.5(a)) and scaled to the signal cutoff of 0.95 (figure 5.5(b)). Blue points represent the prediction from the simulation, with error bars representing the Gaussian approximation of the statistical error (quadrature sum of event weights). The orange line and corresponding shaded region represent prediction and uncertainty, respectively, coming from the Landau+linear fit.
100. (a)
101. (b)
102. 5.6 The data and MC prediction for events with a $1e1p$ BDT score inside $[0.7,0.95]$. Good agreement is observed between data and prediction. The prediction incorporating the Landau+linear background fit is shown by the red line.
103. 5.7 The uncertainty in each bin of the $E_{\nu}^{\rm range}$ distribution of the $1e1p$ (figure 5.7(a)) and $1\mu 1p$ (figure 5.7(b)) samples.
104. (a)
105. (b)
106. 5.8 The joint covariance (figure 5.8(a)) and correlation (figure 5.8(b)) matrices for the $E_{\nu}^{\rm range}$ distribution of the $1e1p$ and $1\mu 1p$ samples.
107. (a)
108. (b)
109. 5.9 The $E_{\nu}^{\rm range}$ distribution in the $1\mu 1p$ channel, comparing data to the MC prediction.
110. 5.10 Fractional systematic uncertainty in the $1e1p$ $E_{\nu}^{\rm range}$ distribution before and after the $1\mu 1p$ constraint.
111. 5.11 Comparison between data and prediction in the $1e1p$ $E_{\nu}^{\rm range}$ distribution after applying the $1\mu 1p$ constraint procedure.
112. 5.12 Comparison between data and prediction in the $1e1p$ BDT ensemble average score distribution within the range $[0.7,0.95]$.
113. 5.13 Distributions of the $\Delta\chi^{2}$ test statistic defined in equation 5.7 for $H_{0}$ (red) and $H_{1}$ (blue), calculated by generating $10^{5}$ pseudo-experiments under each hypothesis. The $\Delta\chi^{2}$ value of the data is also shown.
114. 5.14 Confidence intervals on $x_{\rm LEE}$ calculating using the Feldman-Cousins procedure. The solid and dotted lines indicate the confidence level with which a given $x_{\rm LEE}$ is disfavored, calculated using the Feldman-Cousins method [25] and Wilks theorem [26], respectively. The MiniBooNE statistical and systematic errors are shown as a band around $x_{\rm LEE}=1$.
115. 5.15 Figure 5.15(a) shows the observation compared to the nominal ($H_{0}$) prediction in all four signal channels from the three MicroBooNE $\nu_{e}$ analyses, including statistical errors on the data points and systematic errors on the prediction. The eLEE prediction ($H_{1}$) is also indicated by the red line. Figure 5.15(b) shows the observed $1\sigma$ and $2\sigma$ confidence intervals on $x_{\rm LEE}$ from all four signal channels. The $2\sigma$ expected sensitivity of each channel is shown in red.
116. (a)
117. (b)
118. 6.1 Feynman diagram depicting the effective dipole operator of equation 6.1.
119. 6.2 New interactions involving the neutrissimo that are enabled by the dipole operator in equation 6.1, including three-body $\pi^{0}$ decay (figure 6.2(a)), Primakoff-like upscattering (figure 6.2(b)), and neutrissimo decay (figure 6.2(c)).
120. (a)
121. (b)
122. (c)
123. 6.3 $3+1$ global fits including MiniBooNE, considering global (left), appearance-only (middle), and disappearance-only (right) experiments. The allowed regions in $3+1$ parameter space at the 90%, 95%, and 99% confidence levels are shown by the red, green, and blue points, respectively. The best-fit point is indicated by the star.
124. 6.4 $3+1$ global fits without MiniBooNE, considering global (left), appearance-only (middle), and disappearance-only (right) experiments. The allowed regions in $3+1$ parameter space at the 90%, 95%, and 99% confidence levels are shown by the red, green, and blue points, respectively. The best-fit point is indicated by the star.
125. 6.5 Schematic depiction of the neutrissimo model in MiniBooNE as simulated using LeptonInjector. Figure 6.5(a) shows the simulation of Primakoff upscattering along the beamline, and figure 6.5(b) shows an example of upscattering, neutrissimo decay, and pair-production within the MiniBooNE detector.
126. (a)
127. (b)
128. 6.6 Allowed regions at the 95% and $3\sigma$ confidence level in $d_{\mu\mathcal{N}}$-$m_{\mathcal{N}}$ obtained through fits to the MiniBooNE excess in the $E_{\nu}^{\rm QE}$ and $\cos\theta$ distributions. Existing $2\sigma$ constraints on this model are indicated by the grey regions.
129. 6.7 Figure 6.7(a) and figure 6.7(b) show the $E_{\nu}^{\rm QE}$ and $\cos\theta$ distributions of the MiniBooNE excess, respectively, compared with the prediction from the neutrissimo model indicated by the black star in figure 6.6. The oscillation contribution from the $3+1$ global fit without MiniBooNE is also shown.
130. (a)
131. (b)
132. 6.8 The added time delay in MiniBooNE for a neutrissimo with the indicated parameters, as calculated in Ref. [27].
133. 7.1 Schematic depiction of the Lujan TMRS. Figure from Ref. [28].
134. 7.2 Figure 7.2(a), from Ref. [29], shows the energy distribution of $\pi^{+}$ decay-at-rest neutrinos from the Lujan beam dump source. Figure 7.2(b), from Ref. [30], shows the timing distribution of particles produced in the Lujan beam dump source after traveling through the TMRS.
135. (a)
136. (b)
137. 7.3 Figure 7.3(a) shows a schematic 3D rendering of the CCM200 detector. Figure 7.3(b) shows an image of the interior of the CCM200 detector.
138. (a)
139. (b)
140. 7.4 Two of the veto PMT assemblies constructed at MIT, including the 1-inch PMT, base circuit board, and TPB-coated acrylic window.
141. 7.5 Figure 7.5(a) shows one of the veto PMTs across from the LED in the light-tight box. Figure 7.5(b) shows the average response of the 20 veto PMTs to 1 V (top) and 2 V (bottom) LED pulses.
142. 7.6 The timing distribution of photons from the Lujan source (solid black line) compared with that of neutrons measured in CCM120 (dashed red line). Figure from Ref. [30].
143. 7.7 Figure 7.7(a) shows the integration in equation 7.1 as a function of $\lambda_{1}$ for $\lambda_{2}=700~{}{\rm nm}$ and $z=1$. Figure 7.7(b) shows the Cherenkov cone angle $\cos\theta_{C}$ for an electron as a function of the photon wavelength and the electron kinetic energy.
144. (a)
145. (b)
146. 7.8 Templates of the average number of p.e. detected in each CCM PMT within the first 8 ns of an electron event. Different templates are shown for electron kinetic energies of $T_{e^{-}}=1~{}{\rm MeV}$ and $T_{e^{-}}=5~{}{\rm MeV}$, both with and without Cherenkov photons. Coated (uncoated) PMTs are indicated by the solid (dashed) circles. Grey PMTs indicate those which registered no hits within the first 8 ns across all simulations. The dimensions on each axis are in units of cm.
147. (a) With Cherenkov photons; $T_{e^{-}}=1~{}{\rm MeV}$
148. (b) Without Cherenkov photons; $T_{e^{-}}=1~{}{\rm MeV}$
149. (c) With Cherenkov photons; $T_{e^{-}}=5~{}{\rm MeV}$
150. (d) Without Cherenkov photons; $T_{e^{-}}=5~{}{\rm MeV}$
151. 7.9 Example event displays showing the total number of p.e. detected in each CCM PMT within the first 8 ns of a single simulated electron event. Displays are shown for electron kinetic energies of $T_{e^{-}}=1~{}{\rm MeV}$ and $T_{e^{-}}=5~{}{\rm MeV}$. Coated (uncoated) PMTs are indicated by the solid (dashed) circles. Grey PMTs indicate those which registered no hits in the first 8 ns of this specific simulation.
152. (a) $T_{e^{-}}=1~{}{\rm MeV}$
153. (b) $T_{e^{-}}=5~{}{\rm MeV}$
154. 7.10 Distributions of the test statistic in equation 7.6 over all $10^{4}$ simulations, considering either all photons or scintillation photons only. Distributions are shown for $T_{e^{-}}=1~{}{\rm MeV}$ and $T_{e^{-}}=5~{}{\rm MeV}$. The vertical line indicates the lower bound requirement which can reject 99% of scintillation-only backgrounds.
155. (a) $T_{e^{-}}=1~{}{\rm MeV}$
156. (b) $T_{e^{-}}=5~{}{\rm MeV}$
157. 7.11 Curves of the efficiency to retain events with Cherenkov light (“Ring efficiency”) vs. the fraction of events without Cherenkov light that can be rejected (“No-Ring Rejection Factor”), generated by considering successively larger lower bounds on $\Delta\log\mathcal{L}$. Different curves are shown for $T_{e^{-}}\in\\{1,2,3,4,5\\}~{}{\rm MeV}$.
158. 7.12 Figure 7.12(a) shows a schematic depiction of the derivative-based pulse definition in the current CCM reconstruction. Figure 7.12(b) shows an example of this pulse finder in a data event, from Ref. [30].
159. (a)
160. (b)
161. 7.13 Two example waveforms from CCM200 beam data. The regions identified by the derivative filter are indicated in red and the result of the fit to equation 7.7 is indicated by the purple curves.
162. 7.14 Figure 7.14(a) shows an image of the six CosmicWatch pairs on top of the CCM detector. Figure 7.14(b) shows a schematic diagram of the cosmic muon trigger in CCM200.
163. (a)
164. 7.15 Figure 7.15(a) shows the summed waveform across all PMTs in CCM for a single example cosmic muon trigger. The delayed signal from a Michel electron can also be seen. Figure 7.15(b) shows the difference in rise times between coated and uncoated PMT signals in the top and bottom halves of the barrel of the detector (labeled “sides-top” and “sides-bottom”, respectively), as described in the text.
165. (a)
166. 7.16 Schematic depiction of prompt $\nu_{\mu}$ from $\pi^{+}$ decay in the Lujan target upscattering to neutrissimos within shielding along the path to CCM200 and decaying to photons in the detector. The pink circle represents the TMRS shown in figure 7.1.
167. 7.17 Figure 7.17(a) shows the distribution of background and signal prediction in CCM200 for $m_{\mathcal{N}}=20.35~{}{\rm MeV}$ and $d_{\mu{\mathcal{N}}}=3\times 10^{-7}~{}{\rm GeV}^{-1}$, considering a background reduction factor of $10^{-3}$ compared to CCM120. Figure 7.17(b) shows the background-subtracted plot, with a red band indicating the expected statistical uncertainty on the background.
168. (a)
169. 7.18 Figure 7.18(a) shows the expected sensitivity of CCM200 to the neutrissimo model, where the blue band corresponds to a background reduction factor between $10^{-4}$ (“CCM200 Low Background”) and $10^{-2}$ (“CCM200 High Background”). The MiniBooNE $E_{\nu}^{\rm QE}$ allowed region (pink) and existing constraints (grey) come from Ref. [31]. Figure 7.18(b) shows the same plot, but considering $\mathcal{N}\to\nu_{\tau}\gamma$ decays with $d_{\tau{\mathcal{N}}}=d_{\mu{\mathcal{N}}}m_{\tau}/m_{\mu}$.
170. (a)
171. (b)
172. 7.19 The time delay of neutrissimo single-photon decays within the CCM detector for the indicated mass and coupling, as calculated using LeptonInjector.
173. B.1 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
174. B.2 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
175. B.3 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
176. B.4 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
177. B.5 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
178. B.6 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
179. B.7 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
180. B.8 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
181. B.9 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
182. B.10 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
183. B.11 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
184. B.12 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
185. B.13 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
186. B.14 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
187. B.15 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
188. B.16 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
189. B.17 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
190. B.18 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
191. B.19 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
192. B.20 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
193. B.21 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
194. B.22 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
195. B.23 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
196. B.24 Top: pixel intensity; Bottom: SparseSSNet labels; Left to right: U, V, Y, planes. The white circle indicates the reconstructed vertex.
###### List of Tables
1. 4.1 The definition of kinematic variables used throughout the two-body CCQE analysis.
2. 4.2 The suite of variables used to isolate and analyze the $1e1p$ and $1\mu 1p$ samples. Variables used in the BDT ensemble for each sample are specified. The “$*$” character indicates that the variable is calculated in the rest frame of the struck nucleon. The mathematical definitions of many of these variables appear in Table 4.1.
3. 4.3 The specific selection criteria used to define the $1\mu 1p$ and $1e1p$ samples. For definitions of kinematic variables, see Table 4.1.
4. 5.1 Breakdown of MC events in the “background” category of figures 5.3 and 5.4 over the range $200<E_{\nu}<1200$ MeV. The events are partitioned both by the interaction channel and the event topology.
5. 5.2 Results from goodness-of-fit tests comparing observed $1e1p$ data to the $H_{0}$ and $H_{1}$ predictions, reported via the $\chi^{2}_{\text{CNP}}$ test statistic and the frequentist $p$-value. The top half of the table considers the nominal prediction and uncertainties before the $1\mu 1p$ constraint described in section 5.1.3, while the bottom half considers the post-constraint prediction and uncertainties.
6. 6.1 Relevant parameters of the ten most abundant nuclei in the Earth’s upper crust, according to Ref. [32].
## Chapter 1 Introduction
We begin with a brief primer on neutrinos, the surprises they have given
physicists throughout recent history, and the mysteries that remain today.
Readers already familiar with the mathematical details of massive neutrinos
and the Standard Model may wish to read only section 1.1 and section 1.4
before continuing.
### 1.1 A Brief History of the Neutrino
Figure 1.1: Telegram from Fred Reines and Clyde Cowan informing Wolfgang Pauli
of their detection of neutrinos from a nuclear reactor.
The first indication of what would come to be known as the neutrino came from
Wolfgang Pauli in 1930 [33]. Addressing the “radioactive ladies and gentlemen”
of Tübingen, Germany, he appealed to the existence of new electrically neutral
particles to save the law of energy conservation in nuclear beta decays. This
idea was developed further by Enrico Fermi in 1934, who calculated the
transition probability for $\beta$-decay with a neutrino in the final state
[34]. Fermi’s theory represents the first study of the weak interaction–the
only Standard Model gauge interaction under which neutrinos are charged.
As the name “weak interaction” suggests, neutrinos interact very feebly with
particles in the Standard Model. Thus, it wasn’t until 1956 that the neutrino
was observed in an experimental setting for the first time. A team of
scientists from Los Alamos Scientific Laboratory, led by Frederick Reines and
Clyde Cowan, detected a free neutrino from a nuclear reactor via the inverse
beta decay interaction ($\bar{\nu}_{e}p\to e^{+}n$) [35, 36]. Though it was
not known at the time, they had detected electron antineutrinos
($\bar{\nu}_{e}$). Electron (anti)neutrinos represent one of the three weak-
flavor eigenstates neutrinos can occupy in the Standard Model–specifically,
the eigenstate that couples to the $e^{\pm}$ charged leptons through the
charged-current weak interaction. Upon confirmation of their discovery, Reines
and Cowan sent the telegram shown in figure 1.1 to Pauli, alerting him of the
definitive existence of the neutral particles he proposed in Tübingen.
Shortly after this, the phenomenon of neutrino oscillations–periodic
transitions between different types of neutrinos–started to appear in the
literature. In 1958, Bruno Pontecorvo discussed the possibility of mixing
between right-handed antineutrinos $\bar{\nu}_{R}$ and “sterile” right-handed
neutrinos $\nu_{R}$, in analogy with $K^{0}$–$\bar{K}^{0}$ mixing observed in
the quark sector [37]. A second possible source of neutrino oscillations came
following the 1962 experimental discovery of a second neutrino weak-flavor
eigenstate–the muon neutrino ($\nu_{\mu}$) [38]. After this, the notion of
mixing between neutrino flavor and mass eigenstates was introduced by Ziro
Maki, Masami Nakagawa, and Shoichi Sakata [39]. In a 1967 paper [40],
Pontecorvo introduced the possibility of vacuum $\nu_{e}$–$\nu_{\mu}$
oscillations, even predicting a factor of two suppression in the total solar
neutrino flux before such a deficit would actually be observed [41].
The aforementioned deficit, known as the “solar neutrino problem”, was
established in 1968 through a now-famous experiment at the Homestake Mine in
South Dakota led by Raymond Davis [42]. Davis and his colleagues detected the
capture of electron neutrinos from the sun on $^{37}$Cl nuclei, allowing a
measurement of the solar $\nu_{e}$ flux. Their result was only about one third
of the leading prediction from John Bahcall [43]. This is
shown in figure 1.2, including confirmations of the deficit following the
Homestake experiment. The solution was not a mistake in the experimental
measurement or theoretical prediction, as physicists expected at the time;
rather, it was a deficiency in our understanding of neutrinos. This was the
first piece of the puzzle that would eventually lead to the discovery of
neutrino oscillations and nonzero neutrino masses.
Figure 1.2: The deficit of the observed solar $\nu_{e}$ flux compared with the
theoretical expectation. The Homestake experiment is shown on the far left;
follow-up solar neutrino measurements confirming the deficit are also shown,
including the 2002 SNO result which brought forth a solution to the solar
neutrino problem. Figure from Ref. [1].
The next piece of the puzzle came from atmospheric neutrinos, i.e. neutrinos
coming from the decay of mesons created from the interactions of primary
cosmic rays in the atmosphere. Around the mid-1980s, two water Cherenkov
detectors, IMB-3 [44] and Kamiokande [45], began to measure the interactions
of atmospheric $\nu_{\mu}$ and $\nu_{e}$ events (initially just as a
background for their main physics goal, the search for nucleon decay). The
ratio of $\nu_{\mu}:\nu_{e}$ interactions was found to be only $\sim 2/3$ of
the theoretical expectation [46]. This was known as the
“atmospheric neutrino anomaly”. The source of this anomaly was not clear at
the time; it could have been a deficit of muon neutrinos, an excess of
electron neutrinos, or some of both. Systematic issues in the flux prediction
or muon identification were also suggested [46]. It was far from clear that
neutrino oscillations could be responsible for the observed deficit.
The solution to the atmospheric neutrino anomaly came from the Super-
Kamiokande (SuperK) experiment [47]. SuperK was a much larger version of the
Kamiokande detector, allowing the detection of higher energy muons (up to
$E_{\mu}\sim 5\;{\rm GeV}$). SuperK also measured the up-down asymmetry of
muon-like and electron-like events in their detector, $(N_{\rm up}-N_{\rm
down})/(N_{\rm up}+N_{\rm down})$. Upward-going events have traveled a much
longer distance than downward-going events before reaching the SuperK
detector–thus positive detection of an asymmetry would be smoking-gun evidence
for a baseline-dependent effect like neutrino oscillations. This is precisely
what SuperK observed [2]. As shown in figure 1.3, an up-down asymmetry is
observed in the muon-like channel, the magnitude of which increases with the
observed muon momentum. Such behavior is consistent with muon neutrino
oscillations to a third flavor eigenstate, $\nu_{\tau}$ (the mathematical
details of neutrino oscillations will be described in section 1.3). No such
effect was observed in the electron-like channel. Thus, the atmospheric
neutrino anomaly is a result of muon neutrino disappearance, specifically
coming from $\nu_{\mu}\to\nu_{\tau}$ oscillations.
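The baseline dependence at the heart of the SuperK measurement can be sketched with the standard two-flavor vacuum oscillation probability, $P(\nu_{\mu}\to\nu_{\tau})=\sin^{2}2\theta\,\sin^{2}(1.27\,\Delta m^{2}[{\rm eV}^{2}]\,L[{\rm km}]/E[{\rm GeV}])$, whose derivation appears in section 1.3. The parameter values below are illustrative modern atmospheric values, not taken from the text:

```python
import math

def p_numu_to_nutau(L_km, E_GeV, sin2_2theta=1.0, dm2_eV2=2.5e-3):
    """Standard two-flavor vacuum oscillation probability:
    P = sin^2(2 theta) * sin^2(1.27 * dm^2 [eV^2] * L [km] / E [GeV])."""
    return sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Downward-going atmospheric neutrinos travel only ~20 km; upward-going
# ones cross the Earth (~10^4 km), so only the latter have time to oscillate.
p_down = p_numu_to_nutau(L_km=20.0, E_GeV=1.0)
p_up = p_numu_to_nutau(L_km=1.0e4, E_GeV=1.0)
```

With these (assumed) parameters the downward-going oscillation probability is below a percent, while the upward-going probability oscillates rapidly with energy, producing the up-down asymmetry SuperK observed.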
The solution to the solar neutrino problem came in 2002 from the Sudbury
Neutrino Observatory (SNO) [48]. The SNO experiment used a heavy water
Cherenkov detector, specifically relying on the use of deuterium target nuclei
to be sensitive to three different neutrino interactions,
$\begin{split}&\nu_{e}+d\to p+p+e^{-}~{}~{}~{}(\rm{CC}),\\\ &\nu_{x}+d\to
p+n+\nu_{x}~{}~{}~{}(\rm{NC}),\\\
&\nu_{x}+e^{-}\to\nu_{x}+e^{-}~{}~{}~{}~{}~{}(\rm{ES}).\end{split}$ (1.1)
Charged-current (CC), neutral-current (NC), and elastic scattering (ES)
interactions were separated based on the visible energy and scattering angle
of the final state particles. NC events were further separated by tagging the
6.25 MeV photon released from neutron capture on deuterium. By measuring all
three channels, SNO was able to measure the $^{8}$B solar neutrino flux broken down
into the $\nu_{e}$ and $\nu_{\mu,\tau}$ components. SNO’s 2002 result showed
that the missing neutrinos from the Homestake experiment were in fact showing
up in the $\nu_{\mu,\tau}$ component [3]. Figure 1.4 shows the flux of each
component as constrained by the measured CC, NC, and ES interaction rate. The
flavor transitions here come not from vacuum oscillations but rather from
matter-enhanced resonant behavior as neutrinos travel through the dense solar
medium–a phenomenon known as the Mikheyev–Smirnov–Wolfenstein (MSW) effect
[49, 50]. The MSW effect still, however, requires mixing between the neutrino
flavor and mass eigenstates as well as non-zero mass-squared differences between
the mass eigenstates. It is worth noting here that the KamLAND reactor neutrino
experiment was essential in determining the oscillation parameters which led
to the SNO observation [51]. Thus, the SNO solution to the solar neutrino
problem and the SuperK solution to the atmospheric neutrino anomaly were both
evidence for the existence of neutrino oscillations and thus non-zero neutrino
masses. The collaborations shared the 2015 Nobel Prize in physics for this
discovery [52, 53].
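The flavor decomposition behind the SNO result can be illustrated with a back-of-the-envelope sketch: the CC rate measures $\phi_{\nu_{e}}$ alone, the NC rate measures the total active flux $\phi_{\nu_{e}}+\phi_{\nu_{\mu,\tau}}$, and the ES rate measures $\phi_{\nu_{e}}+\epsilon\,\phi_{\nu_{\mu,\tau}}$ with $\epsilon\approx 0.15$ (the ratio of the $\nu_{\mu,\tau}e^{-}$ to $\nu_{e}e^{-}$ elastic cross sections in equation 1.7). The flux values below are illustrative numbers close to SNO's published measurements, not exact quoted values:

```python
# Illustrative solar 8B fluxes, in units of 10^6 cm^-2 s^-1 (assumed values,
# chosen to be close to SNO's published measurements)
phi_cc = 1.76   # CC rate measures phi(nu_e) only
phi_nc = 5.09   # NC rate measures phi(nu_e) + phi(nu_mu,tau)

phi_e = phi_cc
phi_mutau = phi_nc - phi_cc   # inferred non-electron-flavor solar flux

# Cross-check: the ES rate sees nu_mu,tau with relative weight ~0.15
eps = 0.15
phi_es_pred = phi_e + eps * phi_mutau
```

The inferred $\phi_{\nu_{\mu,\tau}}$ is nonzero and large, and the predicted ES rate is consistent with the slices shown in figure 1.4.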
Since SuperK and SNO, neutrino oscillations have been measured extensively by
a global program of reactor, accelerator, atmospheric, and solar neutrino
experiments. The mixing angle and mass-squared splittings of the three
Standard Model neutrinos have been measured to few-percent-level precision in
most cases [54, 55, 56]. There are a number of open questions in the standard
three-neutrino mixing paradigm, including the ordering of the three mass
eigenstates and the value of the charge-parity-violating complex phase
$\delta_{CP}$. Though preliminary results exist on both fronts [57, 58, 55,
54, 56], definitive answers to each will come from next-generation neutrino
experiments, including Hyper-K [59], DUNE [60] and JUNO [61].
Figure 1.3: The up-down asymmetry measured in SuperK as a function of lepton
momentum, separated into $e$-like and $\mu$-like events as well as fully-
contained (FC) and partially-contained (PC) events. The dashed line indicates
the best fit to $\nu_{\mu}\to\nu_{\tau}$ oscillations. Figure from Ref. [2].
Figure 1.4: Measurement of the solar $^{8}$B flux from the SNO collaboration,
broken down into the $\nu_{e}$ and $\nu_{\mu,\tau}$ sub-components.
Measurement of the CC, NC, and ES channels show up as slices in the two-
dimensional flux parameter space. Figure from Ref. [3].
### 1.2 Neutrinos in the Standard Model
The arguments and notation presented in this section follow closely from
chapter 2 of Ref. [62].
The interactions of the known fundamental particles of our Universe are
described by a specific quantum field theory known as the Standard Model (SM).
Above the electroweak scale, there are three gauge groups contained within the
SM:
* •
$SU(3)_{c}$, which governs the gluon-mediated “strong interactions” of color-
charged fields.
* •
$SU(2)_{L}$, one part of the “electro-weak interaction”, mediated by the
$W^{\pm}_{\mu}$ and $W^{0}_{\mu}$ vector bosons.
* •
$U(1)_{Y}$, the other part of the “electro-weak interaction”, mediated by the
$B_{\mu}$ gauge boson.
After electro-weak symmetry breaking (EWSB) via the Higgs mechanism, the
$SU(2)_{L}\times U(1)_{Y}$ subgroup breaks down to $U(1)_{Q}$, which describes
the electromagnetic (EM) interactions of charged fields mediated by the
$A_{\mu}$ gauge boson, also known as the photon.
Of the three fundamental interactions of the SM, neutrinos are only charged
under the weak $SU(2)_{L}$ gauge group–they are singlets under the $SU(3)_{c}$
and $U(1)_{Q}$ gauge groups. Thus, neutrinos only appear in the electro-weak
part of the SM Lagrangian, which is given by
$\mathcal{L}=\frac{g}{\sqrt{2}}(J^{\mu}W^{+}_{\mu}+J^{\mu\dagger}W^{-}_{\mu})+\frac{g}{\cos\theta_{W}}K^{\mu}Z_{\mu},$
(1.2)
where $g=e/\sin\theta_{W}$ is the $SU(2)_{L}$ gauge coupling of the $W_{\mu}$
and Higgs fields, $\theta_{W}$ is the Weinberg angle describing the rotation that
occurs during EWSB between the neutral parts of the $SU(2)_{L}$ and $U(1)_{Y}$
gauge boson fields, and $W^{\pm}_{\mu}$ ($Z_{\mu}$) is the charged (neutral)
piece of $SU(2)_{L}$ after EWSB. The currents coupled to $W^{\pm}_{\mu}$ and
$Z_{\mu}$ bosons are given by
$\begin{split}J^{\mu}&=\begin{pmatrix}\overline{u}^{0}&\overline{c}^{0}&\overline{t}^{0}\end{pmatrix}\gamma^{\mu}P_{L}\begin{pmatrix}d^{0}\\\
s^{0}\\\
b^{0}\end{pmatrix}+\begin{pmatrix}\overline{\nu_{e}}&\overline{\nu_{\mu}}&\overline{\nu_{\tau}}\end{pmatrix}\gamma^{\mu}P_{L}\begin{pmatrix}e\\\
\mu\\\ \tau\end{pmatrix}\\\
K^{\mu}&=\sum_{f}\overline{f}\gamma^{\mu}[I_{3L}P_{L}-\sin^{2}\theta_{W}Q_{f}]f\\\
&=\sum_{q}[\epsilon_{L}(q)\overline{q}\gamma_{\mu}P_{L}q+\epsilon_{R}(q)\overline{q}\gamma_{\mu}P_{R}q]\\\
&+\frac{1}{2}\sum_{\alpha\in\\{e,\mu,\tau\\}}[\overline{\nu_{\alpha}}\gamma^{\mu}P_{L}\nu_{\alpha}+\overline{\ell}_{\alpha}\gamma_{\mu}(g_{V}^{\alpha}-\gamma_{5}g_{A}^{\alpha})\ell_{\alpha}],\end{split}$
(1.3)
where $P_{R}(P_{L})=(1\pm\gamma^{5})/2$ is the projection operator onto the
right-handed (left-handed) chiral state, and the subscript $0$ on the quark
fields indicates that these are the weak flavor eigenstates rather than the
mass eigenstates. The first generation coupling constants in $K^{\mu}$, which
derive from the specified EM charge and $SU(2)_{L}$ representation of each
field, are given by
$\begin{split}&\epsilon_{L}(u)=\frac{1}{2}-\frac{2}{3}\sin^{2}\theta_{W}~{}~{}~{}~{}~{}\epsilon_{R}(u)=-\frac{2}{3}\sin^{2}\theta_{W}\\\
&\epsilon_{L}(d)=-\frac{1}{2}+\frac{1}{3}\sin^{2}\theta_{W}~{}~{}~{}\epsilon_{R}(d)=\frac{1}{3}\sin^{2}\theta_{W}\\\
&g_{V}^{e}=-\frac{1}{2}+2\sin^{2}\theta_{W}~{}~{}~{}~{}~{}~{}~{}g_{A}^{e}=-\frac{1}{2}.\end{split}$
(1.4)
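The couplings in equation 1.4 are not independent: each follows from the weak isospin $I_{3}$ and electric charge $Q$ of the field via $\epsilon_{L}=I_{3}-Q\sin^{2}\theta_{W}$ and $\epsilon_{R}=-Q\sin^{2}\theta_{W}$, with $g_{V}=\epsilon_{L}+\epsilon_{R}$ and $g_{A}=\epsilon_{L}-\epsilon_{R}$. A minimal sketch checking this (the numerical $\sin^{2}\theta_{W}$ is an assumed illustrative value):

```python
def eps_L(I3, Q, s2):
    # Left-handed Z coupling: epsilon_L = I_3 - Q sin^2(theta_W)
    return I3 - Q * s2

def eps_R(Q, s2):
    # Right-handed Z coupling: epsilon_R = -Q sin^2(theta_W)
    return -Q * s2

s2 = 0.231  # sin^2(theta_W); illustrative value, not from the text

# Quark couplings of equation 1.4 follow from (I_3, Q):
eps_L_u = eps_L(+0.5, +2/3, s2)   # = 1/2 - (2/3) sin^2(theta_W)
eps_R_u = eps_R(+2/3, s2)         # = -(2/3) sin^2(theta_W)
eps_L_d = eps_L(-0.5, -1/3, s2)   # = -1/2 + (1/3) sin^2(theta_W)
eps_R_d = eps_R(-1/3, s2)         # = (1/3) sin^2(theta_W)

# Electron couplings: g_V = eps_L + eps_R, g_A = eps_L - eps_R
gV_e = eps_L(-0.5, -1.0, s2) + eps_R(-1.0, s2)   # = -1/2 + 2 sin^2(theta_W)
gA_e = eps_L(-0.5, -1.0, s2) - eps_R(-1.0, s2)   # = -1/2
```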
The Lagrangian in equation 1.2 can be used to calculate cross sections for the
various SM interactions of the neutrino. The first term describes the charged-
current interactions of neutrinos such as nuclear beta decay, while the second
term describes neutral current interactions such as $\nu_{\mu}e^{-}$ elastic
scattering. At energy scales below the electro-weak scale, one can integrate
out the $W_{\mu}$ and $Z_{\mu}$ gauge bosons and describe interactions in
terms of the dimensionful Fermi constant
$G_{F}=\frac{g^{2}}{4\sqrt{2}M_{W}^{2}}=1.166\times 10^{-5}\;\rm{GeV}^{-2}.$
(1.5)
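Equation 1.5 can be checked at tree level using the standard relation $e=g\sin\theta_{W}$ and the measured $W$ mass. The numerical inputs below ($\alpha$, $\sin^{2}\theta_{W}$, $M_{W}$) are standard values assumed for illustration; the tree-level result lands within about 10% of the measured $G_{F}$, with radiative corrections and the running of the couplings accounting for the difference:

```python
import math

alpha_em = 1.0 / 137.036   # fine-structure constant (assumed standard value)
sin2_thetaW = 0.2312       # sin^2(theta_W) (assumed standard value)
M_W = 80.4                 # W boson mass in GeV (assumed standard value)

e = math.sqrt(4.0 * math.pi * alpha_em)   # electromagnetic coupling
g = e / math.sqrt(sin2_thetaW)            # SU(2)_L coupling, g = e / sin(theta_W)

# Equation 1.5, in GeV^-2
G_F = g**2 / (4.0 * math.sqrt(2.0) * M_W**2)
```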
The low-energy Lagrangian describing 4-fermion interactions can be derived
from equation 1.2 as
$\mathcal{L}_{4f}=\frac{-4G_{F}}{\sqrt{2}}[J_{\mu}J^{\mu\dagger}+K_{\mu}K^{\mu}].$
(1.6)
As an example, we consider low-energy neutrino electron elastic scattering
(ES) ($\nu e^{-}\to\nu e^{-}$). This is a purely leptonic process and is
therefore relatively clean; specifically, ES models do not need to account for
the complex dynamics of the nuclear medium. The Feynman diagrams for the
contributing interactions are shown in figure 1.5. Both the charged-current
(CC) and neutral-current (NC) diagrams contribute to $\nu_{e}e^{-}$
scattering, while only the NC diagram contributes to $\nu_{\mu,\tau}e^{-}$
scattering. Using the Feynman rules associated with equation 1.6, one can
calculate the cross sections to be [62]
$\begin{split}\sigma_{\nu_{e}e^{-}\to\nu_{e}e^{-}}(E_{\nu})&=\frac{G_{F}^{2}m_{e}E_{\nu}}{2\pi}\bigg{[}(2\sin^{2}\theta_{W}+1)^{2}+\frac{4}{3}\sin^{4}\theta_{W}\bigg{]}\\\
&\approx 0.9\times 10^{-43}\bigg{(}\frac{E_{\nu}}{10\;{\rm MeV}}\bigg{)}{\rm
cm}^{2}\\\
\sigma_{\nu_{\mu,\tau}e^{-}\to\nu_{\mu,\tau}e^{-}}(E_{\nu})&=\frac{G_{F}^{2}m_{e}E_{\nu}}{2\pi}\bigg{[}(2\sin^{2}\theta_{W}-1)^{2}+\frac{4}{3}\sin^{4}\theta_{W}\bigg{]}\\\
&\approx 0.15\times 10^{-43}\bigg{(}\frac{E_{\nu}}{10\;{\rm MeV}}\bigg{)}{\rm
cm}^{2},\end{split}$ (1.7)
which is valid for $E_{\nu}\gg m_{e}$. Similarly, one can calculate the cross
section for antineutrino electron ES
($\overline{\nu}e^{-}\to\overline{\nu}e^{-}$). The diagrams contributing for
this process are shown in figure 1.6, and the cross section is given by [62]
$\begin{split}\sigma_{\overline{\nu}_{e}e^{-}\to\overline{\nu}_{e}e^{-}}(E_{\nu})&=\frac{G_{F}^{2}m_{e}E_{\nu}}{2\pi}\bigg{[}\frac{1}{3}(2\sin^{2}\theta_{W}+1)^{2}+4\sin^{4}\theta_{W}\bigg{]}\\\
&\approx 0.378\times 10^{-43}\bigg{(}\frac{E_{\nu}}{10\;{\rm MeV}}\bigg{)}{\rm
cm}^{2}\\\
\sigma_{\overline{\nu}_{\mu,\tau}e^{-}\to\overline{\nu}_{\mu,\tau}e^{-}}(E_{\nu})&=\frac{G_{F}^{2}m_{e}E_{\nu}}{2\pi}\bigg{[}\frac{1}{3}(2\sin^{2}\theta_{W}-1)^{2}+4\sin^{4}\theta_{W}\bigg{]}\\\
&\approx 0.14\times 10^{-43}\bigg{(}\frac{E_{\nu}}{10\;{\rm MeV}}\bigg{)}{\rm
cm}^{2}.\end{split}$ (1.8)
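As a sanity check on the quoted numerical prefactors, equations 1.7 and 1.8 can be evaluated directly at $E_{\nu}=10~{\rm MeV}$. This is a minimal sketch; the values of $\sin^{2}\theta_{W}$ and the $(\hbar c)^{2}$ factor converting ${\rm GeV}^{-2}$ to ${\rm cm}^{2}$ are standard values assumed here, not taken from the text:

```python
import math

G_F = 1.166e-5       # Fermi constant, GeV^-2 (equation 1.5)
m_e = 0.511e-3       # electron mass, GeV
s2 = 0.2312          # sin^2(theta_W); assumed standard value
HBARC2 = 0.3894e-27  # (hbar c)^2 in GeV^2 cm^2, converts GeV^-2 to cm^2

E_nu = 0.010  # 10 MeV, in GeV
prefactor = G_F**2 * m_e * E_nu / (2.0 * math.pi) * HBARC2  # cm^2

# Bracketed factors from equations 1.7 (neutrinos) and 1.8 (antineutrinos)
brackets = {
    "nu_e":         (2*s2 + 1)**2 + (4/3)*s2**2,
    "nu_mu_tau":    (2*s2 - 1)**2 + (4/3)*s2**2,
    "nubar_e":      (1/3)*(2*s2 + 1)**2 + 4*s2**2,
    "nubar_mu_tau": (1/3)*(2*s2 - 1)**2 + 4*s2**2,
}
sigma = {name: prefactor * b for name, b in brackets.items()}  # cm^2
```

The results reproduce the quoted values to within roughly 10%; the residual spread reflects the choice of $\sin^{2}\theta_{W}$.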
We now turn to the interaction at the core of this thesis: neutrino-nucleon
charged-current quasi-elastic (CCQE) scattering. The relevant Feynman diagrams
for this process are shown in figure 1.7. Unlike ES, models of CCQE do need to
account for the nuclear environment surrounding the target nucleon. As the
final state nucleon travels through the nuclear medium, it may scatter off of
other nucleons and/or produce additional mesons through a process known as
final state interactions (FSIs). As shown in figure 1.8, CCQE is dominant for
$E_{\nu}\lesssim 1\;{\rm GeV}$. Above this energy, nucleon resonance processes
start to take over, in which Delta resonances decay to final state mesons. In
the regime $E_{\nu}\gtrsim 10\;{\rm GeV}$, neutrinos start to undergo deep
inelastic scattering (DIS) off of the constituent quarks within the nucleon.
In order to calculate the CCQE cross section, one considers a theory
containing nucleon degrees of freedom. The original calculation for free
nucleons (i.e., not bound within a nucleus) was carried out by Llewellyn Smith
in 1972; the differential cross section as a function of the squared four-
momentum transfer $Q^{2}$ is given by [4, 63]
$\begin{split}\frac{d\sigma}{dQ^{2}}=\frac{G_{F}^{2}M^{2}|V_{ud}|^{2}}{8\pi
E_{\nu}^{2}}\bigg{[}A\pm\frac{s-u}{M^{2}}B+\frac{(s-u)^{2}}{M^{4}}C\bigg{]},\end{split}$
(1.9)
where +(-) refers to (anti)neutrino scattering, $M$ is the nucleon mass, $m$
is the lepton mass, $(s-u)=4ME_{\nu}-Q^{2}-m^{2}$, and $A$, $B$, and $C$ are
functions of the vector, axial-vector, and pseudoscalar form factors of the
nucleon (see equations 58, 59, and 60 of Ref. [4] for complete expressions).
These form factors describe the composite nature of nucleons under
interactions with different Lorentz structures.
For $E_{\nu}\ll M$, the $\nu_{e}$ CCQE cross section is approximately [62]
$\begin{split}\sigma_{\nu_{e}n\to
e^{-}p}(E_{\nu})&\approx\frac{G_{F}^{2}E_{\nu}^{2}}{\pi}(g_{V}^{2}+3g_{A}^{2})\\\
&\approx 9.75\times 10^{-42}\bigg{[}\frac{E_{\nu}}{10\;{\rm
MeV}}\bigg{]}^{2}\;{\rm cm}^{2}.\end{split}$ (1.10)
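Equation 1.10 can likewise be checked numerically. The couplings $g_{V}=1$ and $g_{A}\approx 1.26$ (the nucleon axial coupling at $Q^{2}=0$) are assumed values not given in the text:

```python
import math

G_F = 1.166e-5       # Fermi constant, GeV^-2
HBARC2 = 0.3894e-27  # (hbar c)^2 in GeV^2 cm^2, converts GeV^-2 to cm^2
g_V, g_A = 1.0, 1.26 # nucleon vector/axial couplings (assumed values)

def sigma_ccqe_lowE(E_nu_GeV):
    """Equation 1.10: low-energy nu_e CCQE cross section, in cm^2."""
    return G_F**2 * E_nu_GeV**2 / math.pi * (g_V**2 + 3*g_A**2) * HBARC2

sigma_10MeV = sigma_ccqe_lowE(0.010)  # close to the quoted 9.75e-42 cm^2
```

Note the quadratic growth with $E_{\nu}$, which distinguishes CCQE from the linear energy dependence of elastic scattering in equations 1.7 and 1.8.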
In the regime $E_{\nu}\gtrsim 1\;{\rm GeV}$, the $\nu_{e}$ and $\nu_{\mu}$
CCQE cross sections are no longer suppressed by threshold effects and are thus
the same, approximately $10^{-38}\;{\rm cm}^{2}$ [4]. This cross section is
significantly larger than the elastic scattering and lower energy $\nu_{e}$
CCQE cross sections and is the dominant neutrino interaction for many
accelerator-based neutrino experiments, including two at the heart of this
thesis: MiniBooNE and MicroBooNE. Finally, we note that the cross section for
antineutrino CCQE tends to be smaller; this will be important in chapter 6.
Figure 1.5: Diagrams contributing to $\nu e^{-}$ elastic scattering
Figure 1.6: Diagrams contributing to $\overline{\nu}e^{-}$ elastic scattering
Figure 1.7: Diagrams contributing to neutrino-nucleon charged-current
quasielastic scattering.
Figure 1.8: CC inclusive neutrino and antineutrino nucleon scattering cross
sections as a function of neutrino energy. Figure from Ref. [4].
### 1.3 Massive Neutrinos
The arguments and notation presented in this section follow closely from
section 2.5, chapter 4, and chapter 5 of Ref. [62] as well as chapter 11 of
Ref. [64].
Neutrinos are massless in the SM. To see this, we will exhaust the two
possible forms for a neutrino mass term in the SM Lagrangian: Dirac and
Majorana. These refer to the two possible fermionic spinor representations in
which neutrinos can be found. Dirac spinors in general have four complex
components, or degrees of freedom, while Majorana spinors have only two. The
critical question is whether the right-handed chiral projection of the
neutrino field, $\nu_{R}$, is the same as $\overline{\nu}_{R}$, the right-
handed chiral projection of the antineutrino field (Majorana case), or if it
is a new degree of freedom (Dirac case).
The definition of a free Dirac fermion field is
$\psi(x)=\int\frac{d^{3}p}{\sqrt{(2\pi)^{3}2E_{p}}}\sum_{s=\pm\frac{1}{2}}\Big{(}f_{s}(\mathbf{p})u_{s}(\mathbf{p})e^{-i\mathbf{p}\cdot\mathbf{x}}+\overline{f_{s}}^{\dagger}(\mathbf{p})v_{s}(\mathbf{p})e^{i\mathbf{p}\cdot\mathbf{x}}\Big{)},$
(1.11)
where $f_{s}(\mathbf{p})$ annihilates a single particle of momentum
$\mathbf{p}$ while $\overline{f_{s}}^{\dagger}$ creates the corresponding
antiparticle state, and $u_{s}(\mathbf{p})$ and $v_{s}(\mathbf{p})$ are
spinors with positive and negative energy, respectively, satisfying the Dirac
equations
$\begin{split}&(\gamma^{\mu}p_{\mu}-m)u_{s}(\mathbf{p})=0\\\
&(\gamma^{\mu}p_{\mu}+m)v_{s}(\mathbf{p})=0,\end{split}$ (1.12)
where $\gamma^{\mu}$ are a set of Lorentz-indexed matrices satisfying
$\\{\gamma^{\mu},\gamma^{\nu}\\}=2g^{\mu\nu}$. There are many possible
representations for the $\gamma$-matrices. We consider the Weyl basis, in
which [64]
$\gamma_{\mu}=\begin{pmatrix}0&\sigma_{\mu}\\\ \overline{\sigma}_{\mu}&0\\\
\end{pmatrix},$ (1.13)
where $\sigma_{\mu}=(\mathbb{1},\vec{\sigma})$,
$\overline{\sigma}_{\mu}=(\mathbb{1},-\vec{\sigma})$, and
$\vec{\sigma}=(\sigma_{1},\sigma_{2},\sigma_{3})$ are the Pauli matrices. This
representation is convenient for understanding the different chiral components
of the Dirac spinor $\psi$. The Lorentz generators
$S^{\mu\nu}\equiv\frac{i}{4}[\gamma^{\mu},\gamma^{\nu}]$ become block
diagonal, such that we can write the Dirac spinor of equation 1.11 as a
doublet of two-component Weyl spinors with a different chiral nature,
$\psi=\begin{pmatrix}\psi_{L}\\\ \psi_{R}\\\ \end{pmatrix},$ (1.14)
which transform under different irreducible representations of the Lorentz
group [64]. Here $\psi_{L}$ and $\psi_{R}$ refer to the left-handed and right-
handed Weyl spinor, respectively. We can isolate the different chiral
components of the Dirac spinor using the matrix $\gamma^{5}\equiv
i\gamma^{0}\gamma^{1}\gamma^{2}\gamma^{3}$, which takes the form
$\gamma^{5}=\begin{pmatrix}-\mathbb{1}&0\\\ 0&\mathbb{1}\\\ \end{pmatrix}$
(1.15)
in the Weyl basis. We can define projection operators
$P_{L}=\frac{1}{2}(1-\gamma^{5})$ and $P_{R}=\frac{1}{2}(1+\gamma^{5})$ such
that $P_{L}\psi=\psi_{L}$ and $P_{R}\psi=\psi_{R}$. It is worth noting that while
the behavior of these projection operators is especially clear in the Weyl
representation, they will isolate the chiral components of $\psi$ in any
representation of $\gamma^{\mu}$.
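The projector algebra above is easy to verify numerically. The following sketch (not part of the original text; NumPy is assumed available) builds the Weyl-basis $\gamma$-matrices of equation 1.13 and checks the Clifford algebra, the form of $\gamma^{5}$ in equation 1.15, and the projector identities:

```python
import numpy as np

I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# sigma_mu = (1, sigma_i) and sigmabar_mu = (1, -sigma_i), as in eq. 1.13
sig_mu = [I2] + sigma
sigbar_mu = [I2] + [-s for s in sigma]

# Weyl-basis gamma matrices: off-diagonal 2x2 blocks
gamma = [np.block([[np.zeros((2, 2)), sig_mu[mu]],
                   [sigbar_mu[mu], np.zeros((2, 2))]]) for mu in range(4)]

g = np.diag([1.0, -1.0, -1.0, -1.0])  # metric g^{mu nu}

# Clifford algebra: {gamma^mu, gamma^nu} = 2 g^{mu nu} * identity
for mu in range(4):
    for nu in range(4):
        anti = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anti, 2 * g[mu, nu] * np.eye(4))

# gamma^5 = i gamma^0 gamma^1 gamma^2 gamma^3 = diag(-1,-1,1,1) in this basis (eq. 1.15)
gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
assert np.allclose(gamma5, np.diag([-1, -1, 1, 1]))

# Chiral projectors isolate the upper (left-handed) and lower (right-handed) components
PL = 0.5 * (np.eye(4) - gamma5)
PR = 0.5 * (np.eye(4) + gamma5)
assert np.allclose(PL @ PL, PL) and np.allclose(PL @ PR, np.zeros((4, 4)))
```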
Dirac mass terms couple left-handed and right-handed chiral states. To see
this, consider the Dirac equation in the Weyl basis, which takes the form [64]
$(\gamma^{\mu}p_{\mu}-m)\psi=\begin{pmatrix}-m&\sigma^{\mu}p_{\mu}\\\
\overline{\sigma}^{\mu}p_{\mu}&-m\\\ \end{pmatrix}\begin{pmatrix}\psi_{L}\\\
\psi_{R}\\\ \end{pmatrix}=0.$ (1.16)
It is evident that this matrix equation mixes the left-handed and right-handed
components of $\psi$. Dirac mass terms take the form
$m\psi_{L}^{\dagger}\psi_{R}$ and $m\psi_{R}^{\dagger}\psi_{L}$, thus
requiring both chiral components. After EWSB, the non-neutrino fermions in the
SM acquire a Dirac mass from their interactions with the Higgs field.
Neutrinos, however, do not have a right-handed chiral state in the SM;
therefore, the SM cannot include a Dirac mass term for neutrinos.
Now we turn to the Majorana mass term. The expression for a Majorana field is
the same as equation 1.11, subject to a condition relating particles and
antiparticles. We see that the expression for $\psi^{*}(x)$ would involve
$f_{s}^{\dagger}(\mathbf{p})$, which creates a particle state, and
$\overline{f_{s}}(\mathbf{p})$, which annihilates an antiparticle state. It
turns out the relationship $\psi(x)=\psi^{*}(x)$ is not Lorentz invariant
[62]. To remedy this, we must define the conjugate Dirac field
$\psi^{C}(x)\equiv\gamma_{0}C\psi^{*}(x),$ (1.17)
where the representation-dependent conjugation matrix $C$ is defined by the
equation
$\begin{split}&\gamma_{0}C\sigma^{*}_{\mu\nu}=-\sigma_{\mu\nu}\gamma_{0}C,\\\
&\sigma_{\mu\nu}\equiv\frac{i}{2}[\gamma_{\mu},\gamma_{\nu}].\end{split}$
(1.18)
In the Weyl representation, for example, $C=i\gamma_{2}\gamma_{0}$. This
requirement for $C$ ensures that $\psi^{C}(x)$ transforms in the same way as
$\psi(x)$ under the Lorentz group [62]. The Lorentz-invariant Majorana
condition specifically requires
$\psi(x)=e^{i\theta}\psi^{C}(x),$ (1.19)
where $\theta$ is an arbitrary phase, which we can take to be $\theta=0$. It
is important to note that this condition can only be satisfied for fields that
carry no additive quantum numbers [64].
In the Weyl basis, equation 1.19 relates the left-handed and right-handed
components of $\psi(x)$ such that [64]
$\psi(x)=\begin{pmatrix}\psi_{L}\\\ i\sigma_{2}\psi^{*}_{L}\\\ \end{pmatrix},$
(1.20)
where the number of degrees of freedom has been reduced from four to two.
Since $i\sigma_{2}\psi^{*}_{L}$ transforms like a right-handed spinor, we can
now write mass terms of the form $im\psi_{L}^{\dagger}\sigma_{2}\psi_{L}^{*}$
and its hermitian conjugate $-im\psi_{L}^{T}\sigma_{2}\psi_{L}$. These are
Majorana mass terms. Note that they couple the same chiral component of the
fermion.
The impossibility of a neutrino Majorana mass term is a bit more nuanced.
Majorana mass terms for neutrinos in the SM contain the bi-linear expression
$\nu_{L}^{T}\sigma_{2}\nu_{L}$. However, $\nu_{L}$ belongs to an $SU(2)_{L}$
doublet in the SM, thus this Majorana mass term transforms as a triplet under
$SU(2)_{L}$. It also breaks lepton number by two units, hence it also violates
baryon minus lepton number ($B-L$), which is conserved to all orders of the SM
gauge couplings [62]. Therefore, neutrinos also cannot have a Majorana mass
term in the SM.
Despite these arguments, neutrino oscillations have given physicists
definitive evidence that at least two of the three SM neutrino masses are
nonzero (as discussed in section 1.1). This requires the presence of physics
beyond the Standard Model (BSM). The minimal extension of the SM which can
accommodate nonzero neutrino masses introduces additional right-handed
neutrino states $N_{R}$ [62, 64]. These fields, which are singlets under the
SM gauge group, can generate both Dirac and Majorana mass terms for neutrinos.
The most general expression for the neutrino mass Lagrangian is then
$-\mathcal{L}_{\rm
mass}=\frac{1}{2}\begin{pmatrix}\overline{\nu}_{L}&\overline{N^{C}_{L}}\end{pmatrix}\begin{pmatrix}0&M\\\
M^{T}&B\\\ \end{pmatrix}\begin{pmatrix}\nu_{R}^{C}\\\ N_{R}\end{pmatrix}+{\rm
h.c.},$ (1.21)
where $M$ and $B$ are the Dirac and Majorana mass matrices of the neutrino
sector, respectively, and $\nu_{L}$ and $N_{R}$ are column vectors containing
the left-handed and right-handed projections of each neutrino generation.
In order to obtain the mass eigenstates of this theory, one must diagonalize
the mass matrix in equation 1.21. If we assume one generation of neutrinos,
the eigenvalues of this mass matrix are
$m_{1,2}=\frac{1}{2}(\sqrt{B^{2}+4M^{2}}\mp B).$ (1.22)
In the limit $B\gg M$, the eigenvalues are approximately given by
$m_{1}\approx\frac{M^{2}}{B},~{}~{}~{}m_{2}\approx B.$ (1.23)
This is the famous “seesaw mechanism” for neutrino mass generation [65]. If
$B$ is at roughly the GUT scale ($10^{16}$ GeV) and $M$ is at roughly the
electroweak scale (100 GeV), then $m_{1}<1$ eV. This is
the right order-of-magnitude regime predicted by neutrino oscillation data and
is consistent with existing upper bounds on the neutrino mass from KATRIN
[66]. Thus, this model is an elegant explanation of the observed neutrino
oscillation phenomenon, though experimental confirmation of right-handed
neutrino fields at the GUT scale is probably not feasible for quite a long
time.
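As a sanity check on the seesaw limit, the one-generation mass matrix of equation 1.21 can be diagonalized numerically and compared with equation 1.23. This sketch is illustrative and not part of the original text; the scales are arbitrary stand-ins chosen so that 64-bit floats comfortably resolve the hierarchy.

```python
import numpy as np

# Dirac (M) and Majorana (B) entries with B >> M, in arbitrary units
M, B = 1.0, 1.0e4
mass_matrix = np.array([[0.0, M],
                        [M,   B]])

# Physical masses are the magnitudes of the eigenvalues (eq. 1.22)
m1, m2 = sorted(abs(lam) for lam in np.linalg.eigvalsh(mass_matrix))

# Compare with the seesaw approximation of eq. 1.23
assert np.isclose(m1, M**2 / B, rtol=1e-4)  # light, suppressed state
assert np.isclose(m2, B, rtol=1e-4)         # heavy, mostly-sterile state
```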
While we do not know the mechanism through which neutrinos acquire mass, it is
relevant to ask whether the resulting mass terms are Dirac or Majorana in
nature. An extensive worldwide experimental program is currently underway to
answer this question by searching for neutrino-less double beta decay, a rare
decay process in which a nucleus undergoes two simultaneous beta decays
without emitting any neutrinos in the final state [67, 68, 69]. A positive
observation would imply that neutrinos are Majorana.
As discussed in section 1.1, perhaps the most famous consequence of massive
neutrinos is the phenomenon of neutrino oscillations [37, 40, 39]. This arises
because the three weak flavor eigenstates $\nu_{\alpha}$ are not aligned with
the three mass eigenstates $\nu_{i}$. The two bases are related by the unitary
Pontecorvo–Maki–Nakagawa–Sakata (PMNS) mixing matrix $U_{\alpha i}$,
$\begin{pmatrix}\nu_{e}\\\ \nu_{\mu}\\\
\nu_{\tau}\end{pmatrix}=\begin{pmatrix}U_{e1}&U_{e2}&U_{e3}\\\ U_{\mu
1}&U_{\mu 2}&U_{\mu 3}\\\ U_{\tau 1}&U_{\tau 2}&U_{\tau 3}\\\
\end{pmatrix}\begin{pmatrix}\nu_{1}\\\ \nu_{2}\\\ \nu_{3}\end{pmatrix}.$
(1.24)
As seen in equation 1.2, neutrinos interact in the weak flavor eigenstates
$\nu_{\alpha}$. Thus, a neutrino produced alongside a charged anti-lepton
$\overline{\ell}$ is in the state
$\ket{\nu(t=0)}=\ket{\nu_{\ell}}=\sum_{i\in\\{1,2,3\\}}U_{\ell
i}\ket{\nu_{i}}.$ (1.25)
Neutrinos propagate, however, in their mass eigenstates. Each mass eigenstate
$\nu_{i}$ is associated with a mass $m_{i}$ and four-momentum
$(p_{i})_{\mu}=(E_{i},\vec{p_{i}})$ satisfying the on-shell requirement
$(p_{i})^{2}=m_{i}^{2}$. Thus, after a time $t$, the neutrino will be in the
state
$\ket{\nu(t)}=\sum_{i}e^{-ip_{i}\cdot x}U_{\ell i}\ket{\nu_{i}}.$ (1.26)
The overlap with a different weak flavor eigenstate
$\nu_{\ell^{\prime}}\neq\nu_{\ell}$ is non-trivial, given by the expression
$\begin{split}\braket{\nu_{\ell^{\prime}}}{\nu(t)}&=\sum_{i,j}\bra{\nu_{j}}U^{\dagger}_{j\ell^{\prime}}e^{-ip_{i}\cdot
x}U_{\ell i}\ket{\nu_{i}}\\\ &=\sum_{i}e^{-ip_{i}\cdot x}U_{\ell
i}U_{\ell^{\prime}i}^{*},\end{split}$ (1.27)
where we have invoked the orthonormality of the mass basis in the last line.
The probability of finding a neutrino in flavor eigenstate
$\nu_{\ell^{\prime}}$ given an initial $\nu_{\ell}$ state is then
$\begin{split}P_{\nu_{\ell}\to\nu_{\ell^{\prime}}}(t)&=|\braket{\nu_{\ell^{\prime}}}{\nu(t)}|^{2}\\\
&=\sum_{i,j}|U_{\ell i}U_{\ell^{\prime}i}^{*}U_{\ell
j}^{*}U_{\ell^{\prime}j}|e^{-i(p_{i}-p_{j})\cdot
x+i\phi_{\ell\ell^{\prime}ij}},\end{split}$ (1.28)
where $\phi_{\ell\ell^{\prime}ij}\equiv\rm{arg}(U_{\ell
i}U_{\ell^{\prime}i}^{*}U_{\ell j}^{*}U_{\ell^{\prime}j})$.
We now make a simplifying assumption, in which all neutrino mass eigenstates
propagate with the same momentum, i.e.
$\vec{p}_{i}=\vec{p}_{j}\equiv\vec{p}\;\forall\;i,j$. This treatment is not
necessarily physical. However, for the parameters relevant to most laboratory
neutrino experiments, it leads to the same result as the correct but
complicated full treatment of the quantum mechanical neutrino wave packet
[70]. Given this assumption along with the approximation that $m_{i}\ll p_{i}$
(which should hold for all existing and near-future experiments), we can show
$\begin{split}(p_{i}-p_{j})\cdot x&=(E_{i}-E_{j})t\\\
&=\Big{(}\sqrt{\vec{p}^{2}+m_{i}^{2}}-\sqrt{\vec{p}^{2}+m_{j}^{2}}\Big{)}t\\\
&\approx\frac{\Delta m_{ij}^{2}t}{2|\vec{p}|},\end{split}$ (1.29)
where $\Delta m_{ij}^{2}=m_{i}^{2}-m_{j}^{2}$. Working in natural units
($c=\hbar=1$), we note that ultra-relativistic neutrinos satisfy
$|\vec{p}|\approx E$ and $t\approx L$, where $L$ is the distance traveled by
the neutrino. Taking only the real part of the exponential in equation 1.28,
we have
$P_{\nu_{\ell}\to\nu_{\ell^{\prime}}}(t)=\sum_{i,j}|U_{\ell
i}U_{\ell^{\prime}i}^{*}U_{\ell
j}^{*}U_{\ell^{\prime}j}|\cos\Big{(}\frac{\Delta
m_{ij}^{2}L}{2E}-\phi_{\ell\ell^{\prime}ij}\Big{)}.$ (1.30)
If we consider a two-neutrino paradigm, the unitary mixing matrix is real and
can be parameterized by a single “mixing angle” $\theta$,
$U\equiv\begin{pmatrix}U_{\ell 1}&U_{\ell 2}\\\
U_{\ell^{\prime}1}&U_{\ell^{\prime}2}\end{pmatrix}=\begin{pmatrix}\cos\theta&\sin\theta\\\
-\sin\theta&\cos\theta\end{pmatrix}.$ (1.31)
In this scenario, summing over the two mass eigenstates as in equation 1.30
gives
$P_{\nu_{\ell}\to\nu_{\ell^{\prime}}}(t)=\sin^{2}2\theta\sin^{2}\Big{(}\frac{\Delta
m^{2}L}{4E}\Big{)}.$ (1.32)
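A minimal sketch (not from the original text) of equation 1.32 in the practical units used later in equation 1.36, where $\Delta m^{2}$, $L$, and $E$ are given in ${\rm eV}^{2}$, km, and GeV, so that the oscillation phase becomes $1.27\,\Delta m^{2}L/E$:

```python
import math

def p_osc(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor vacuum appearance probability, eq. 1.32 in practical units."""
    return sin2_2theta * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# MiniBooNE-like kinematics: L ~ 0.5 km, E ~ 0.5 GeV, dm^2 ~ 1 eV^2,
# with an illustrative (assumed) mixing sin^2(2 theta) = 0.01
p = p_osc(0.01, 1.0, 0.5, 0.5)
assert 0.0 <= p <= 0.01  # probability is bounded by the mixing amplitude
```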
The extension to the standard three neutrino paradigm can be found in any text
on neutrino oscillations. We quote the result here. Three mass eigenstates
lead to two independent mass-squared splittings, $\Delta m_{12}^{2}$ and
$\Delta m_{23}^{2}$. The mixing matrix in equation 1.24 can be parameterized
by three real mixing angles $\theta_{ij}$ and one complex CP-violating phase
$\delta$,
$U=\begin{pmatrix}1&0&0\\\ 0&c_{23}&s_{23}\\\ 0&-s_{23}&c_{23}\\\
\end{pmatrix}\begin{pmatrix}c_{13}&0&s_{13}e^{-i\delta}\\\ 0&1&0\\\
-s_{13}e^{i\delta}&0&c_{13}\\\ \end{pmatrix}\begin{pmatrix}c_{12}&s_{12}&0\\\
-s_{12}&c_{12}&0\\\ 0&0&1\\\ \end{pmatrix}$ (1.33)
where $c_{ij}\equiv\cos\theta_{ij}$ and $s_{ij}\equiv\sin\theta_{ij}$. The
three mixing angles ($\theta_{12}$, $\theta_{13}$, $\theta_{23}$) and two
relevant mass squared splittings $\Delta m_{12}^{2}$ and $|\Delta m_{23}^{2}|$
have been measured to a precision of $\mathcal{O}(1\%-10\%)$ over the past two
decades [54, 55, 56]. An extensive experimental program is planned to measure
$\delta$ to similar precision, as well as the neutrino hierarchy (i.e., the
sign of $\Delta m_{23}^{2}$) and the octant of $\theta_{23}$ [71].
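As an illustration (the angle values below are assumed, approximate inputs, not fit results), the PMNS parameterization of equation 1.33 can be built from its three factors and checked for unitarity:

```python
import numpy as np

def pmns(th12, th13, th23, delta):
    """PMNS matrix from three mixing angles and a CP phase, as in eq. 1.33."""
    c12, s12 = np.cos(th12), np.sin(th12)
    c13, s13 = np.cos(th13), np.sin(th13)
    c23, s23 = np.cos(th23), np.sin(th23)
    R23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
    R13 = np.array([[c13, 0, s13 * np.exp(-1j * delta)],
                    [0, 1, 0],
                    [-s13 * np.exp(1j * delta), 0, c13]], dtype=complex)
    R12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)
    return R23 @ R13 @ R12  # same factor ordering as eq. 1.33

# Angles roughly at their measured magnitudes (illustrative, radians)
U = pmns(0.59, 0.15, 0.84, 1.0)
assert np.allclose(U @ U.conj().T, np.eye(3))  # unitarity of the mixing matrix
```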
### 1.4 Anomalies in the Neutrino Sector
Despite the success of the three-neutrino mixing paradigm, several anomalous
results have appeared. Perhaps the most famous of these is the excess of
$\bar{\nu}_{e}$ candidate events observed by the Liquid Scintillator Neutrino
Detector (LSND) experiment [5]. LSND took data at Los Alamos Meson Physics
Facility (LAMPF) from 1993-1998, observing neutrino interactions from a high-
intensity decay-at-rest (DAR) source. The LSND detector was a 167-ton
cylindrical tank of mineral oil that collected scintillation and Cherenkov
light produced in neutrino interactions. The LAMPF accelerator provided a
$\sim 1\;{\rm mA}$ beam of 798 MeV protons, which were then focused on a water
or high-Z target. This process created a large number of pions, which then
decayed to produce neutrinos. Most $\pi^{-}$ came to rest and were captured by
nuclei in and around the target, and the $\pi^{+}\to e^{+}\nu_{e}$ decay is
helicity-suppressed due to the interplay between angular momentum
conservation and the left-chiral nature of the weak interaction. Thus the
dominant neutrino production process was
$\pi^{+}\to\nu_{\mu}(\mu^{+}\to\bar{\nu}_{\mu}\nu_{e}e^{+})$.
LSND looked specifically for $\bar{\nu}_{\mu}\to\bar{\nu}_{e}$ conversion
using $\bar{\nu}_{\mu}$ from $\mu^{+}$ DAR. The $\bar{\nu}_{e}$ events were
observed via the inverse beta decay (IBD) process. This is a very clean
channel, as one can require a coincidence between the initial positron
emission and the subsequent neutron capture on hydrogen, which releases a
characteristic $2.2\;{\rm MeV}$ photon. The intrinsic $\bar{\nu}_{e}$ flux,
coming predominately from $\pi^{-}$ decay-in-flight (DIF), was suppressed
compared to intrinsic $\bar{\nu}_{\mu}$ by a factor of $\sim 8\times 10^{-4}$.
Any significant excess of $\bar{\nu}_{e}$ would be evidence for
$\bar{\nu}_{\mu}\to\bar{\nu}_{e}$ oscillations. This is exactly what LSND
observed, as shown in figure 1.9. However, the neutrino energies
$\mathcal{O}(30\;{\rm MeV})$ and baselines $\mathcal{O}(30\;{\rm m})$
required a mass-squared splitting of $\Delta m^{2}\sim 1\;{\rm eV^{2}}$. This
is larger than the measured values of $\Delta m^{2}_{12}$ and $|\Delta
m^{2}_{23}|$ by at least three orders of magnitude–therefore, the LSND result
cannot be explained by the standard three-neutrino oscillation paradigm. One
must introduce a fourth neutrino beyond the three SM neutrinos in order to facilitate
such oscillations. Measurements of the invisible width of the $Z$ boson forbid
this neutrino from coupling to the weak force in the same way as the three SM
neutrinos [72]. Thus, this fourth neutrino is typically referred to as a
“sterile neutrino” ($\nu_{s}$). The sterile neutrino paradigm will be
introduced in more detail in section 1.4 and discussed thoroughly throughout
this thesis. The LSND anomaly is currently under direct investigation by the
follow-up JSNS2 experiment [73, 74], which will use a gadolinium-loaded liquid
scintillator detector [75] to measure IBD interactions at the J-PARC Materials
and Life Science Experimental Facility.
Figure 1.9: The LSND excess of $\overline{\nu}_{e}$ events on top of the
predicted SM background (green and red regions). The blue region indicates the
best fit to $\overline{\nu}_{\mu}\to\overline{\nu}_{e}$ oscillations via a
sterile neutrino state. Figure from Ref. [5].
The Mini Booster Neutrino Experiment (MiniBooNE) was designed to follow up on
the LSND anomaly [76]. MiniBooNE took data at Fermilab’s Booster Neutrino Beam
(BNB) from 2002-2019, observing the interactions of neutrinos with energy
$E\sim 500\;{\rm MeV}$ in an 800-metric-ton mineral oil (CH$_{2}$) detector [15].
The Fermilab Booster accelerates protons to a kinetic energy of 8 GeV, at
which point they collide with the beryllium target of the BNB. This produces a
cascade of mesons, predominately pions. The charged mesons are focused using a
magnetic horn and decay in a 50 m decay pipe; in the nominal “neutrino mode”,
the magnetic field is generated to create a flux of mostly muon neutrinos from
$\pi^{+}$ decay-in-flight [14]. The current in the magnetic horns can be
reversed to instead focus $\pi^{-}$ along the beamline, thus creating a beam
of mostly muon antineutrinos–this is referred to as “antineutrino mode”.
MiniBooNE was situated at a baseline of $L\sim 500\;{\rm m}$ from the BNB
target, resulting in a similar characteristic $L/E$ as that of LSND, $\approx
1\;{\rm m/MeV}$. By equation 1.30, this means MiniBooNE would also be
sensitive to oscillations at $\Delta m^{2}\sim 1\;{\rm eV}^{2}$.
In 2007, MiniBooNE began to report results from their flagship analysis: the
search for an excess of $\nu_{e}$ events in the BNB [76]. MiniBooNE relied
primarily on the reconstruction of Cherenkov light from charged final state
particles to identify neutrino interactions. Thus, $\nu_{e}$ CC interactions
would show up as a “fuzzy” Cherenkov ring due to multiple scattering of the
electron as well as the induced EM shower [77]. These fuzzy Cherenkov ring
events are hereafter referred to as “electron-like” events. Starting with the
initial results [76, 78], MiniBooNE has consistently observed an excess of
electron-like events above their expected SM background, the significance of
which has grown over the 17-year data-taking campaign of the experiment [10].
Figure 1.10 shows the $4.8\sigma$ MiniBooNE electron-like excess considering
the total neutrino mode dataset, corresponding to $18.75\times 10^{20}$
protons-on-target (POT) [10]. A similar excess was observed in the
antineutrino mode dataset [79]. The as-yet-unexplained MiniBooNE excess
represents one of the most significant disagreements with the SM to date.
Though the origin of the MiniBooNE excess remains unknown, neutrino physicists
have converged on a number of potential explanations. The most famous
explanation involves sterile neutrino-driven $\nu_{\mu}\to\nu_{e}$
oscillations consistent with the LSND result ($\Delta m^{2}\sim 1\;{\rm
eV^{2}}$). While this model can explain at least some of the MiniBooNE excess,
the excess in the lowest energy region ($E_{\nu}\lesssim 400\;{\rm MeV}$) sits
above even the best-fit sterile neutrino solution. Due to the Cherenkov nature
of the detector, electrons and photons are essentially indistinguishable–both
seed EM showers which appear as fuzzy Cherenkov rings. Thus, the MiniBooNE
excess could also come from a mismodeled photon background. Though not the
subject of this thesis, there have been extensive experimental and theoretical
efforts, both within and outside of the MiniBooNE collaboration, to validate
the MiniBooNE SM photon background prediction [10, 80, 81, 82]. One can also
consider BSM sources of electron-like events in MiniBooNE. Typical models
introduce additional sources of photons and/or $e^{+}e^{-}$ events in
MiniBooNE through couplings to new dark sector particles. Resolution of the
LSND and MiniBooNE anomalies, often referred to as the short baseline (SBL)
anomalies, is a major goal within the particle physics community [83]. This
thesis specifically investigates the MiniBooNE anomaly in further detail,
covering both experimental and phenomenological studies into the origin of the
excess.
Figure 1.10: The MiniBooNE electron-like channel data and SM background
prediction for the entire neutrino mode dataset, as a function of the
reconstructed neutrino energy.
We now briefly touch on two additional classes of anomalies that have surfaced
over the years: the reactor antineutrino anomaly (RAA) and the gallium
anomaly. The RAA [8] is a $\sim 5\%$ deficit in the total $\overline{\nu}_{e}$
rate observed from nuclear reactors compared to the theoretical expectation
from the Huber-Mueller (HM) model [84, 85]. The HM model combines results
using the summation method (summing the contributions of all beta-decay
branches in the reactor) and the conversion method (relying on older
measurements of the $\overline{\nu}_{e}$ flux from the different fissionable
isotopes in the reactor). The data contributing to the RAA mostly come from
reactor neutrino experiments operating at baselines short enough that the
effects of SM neutrino oscillations are negligible. One can interpret the RAA
as due to $\overline{\nu}_{e}$ disappearance via oscillations involving a
sterile neutrino. Coincidentally, due to the relevant neutrino energies and
baselines, such a solution requires $\Delta m^{2}\gtrsim 1\;{\rm eV}^{2}$,
similar to the LSND and MiniBooNE solution [6]. Figure 1.11 shows an overview
of the RAA circa 2012, including the suite of short baseline reactor
experiments which observe a deficit with respect to the HM model with SM
neutrino oscillations (red line), as well as an example sterile neutrino
solution to the RAA (blue line). Recently, the reactor $\overline{\nu}_{e}$
flux calculation has been revisited by various groups, each of which improves
upon some aspect of the summation or conversion method used in the HM flux
model [86, 87, 88, 89]. The significance of the RAA either diminishes or
disappears in some of these models; however, these improved models have
difficulty removing the RAA while also explaining the “5-MeV bump” observed by
most short baseline reactor experiments with respect to the HM model [89].
Thus, while the RAA story is quickly evolving, our understanding of reactor
neutrino fluxes is far from clear.
Figure 1.11: Data contributing to the reactor antineutrino anomaly, indicating
the $\sim 5\%$ flux deficit observed by short-baseline reactor neutrino
experiments. The red line indicates the prediction incorporating SM neutrino
oscillations only, while the blue line shows an example prediction including a
sterile neutrino. Figure from Ref. [6].
The gallium anomaly refers to a series of gallium-based detectors that have
observed a deficit of $\nu_{e}$ capture events on ${}^{71}{\rm Ga}$ with respect to the
theoretical expectation. The original harbingers of the anomaly, SAGE [90] and
GALLEX [91], were designed to measure solar neutrinos using the ${}^{71}{\rm
Ga}+\nu_{e}\to{}^{71}{\rm Ge}+e^{-}$ capture process. Each detector was
calibrated using electron capture $\nu_{e}$ sources, including ${}^{51}{\rm Cr}$ and ${}^{37}{\rm Ar}$.
Combining all available calibration data across both experiments, the observed
${}^{71}{\rm Ge}$ production rate was lower than the expectation by a factor of $0.87\pm
0.05$ [90]. Though the statistical significance of the anomaly was only modest
($2-3\sigma$), the community was already beginning to interpret the anomaly as
$\nu_{e}\to\nu_{s}$ transitions via an eV-scale sterile neutrino [92]. A
follow-up experiment to the SAGE and GALLEX anomaly, BEST [9], released their
first results in 2021. BEST placed a 3.414 MCi ${}^{51}{\rm Cr}$ $\nu_{e}$ source at the
center of two nested ${}^{71}{\rm Ga}$ volumes, each with a different average distance from
the source. The ratio of the observed to the predicted ${}^{71}{\rm Ge}$ production rate was
$R_{in}=0.79\pm 0.05$ ($R_{out}=0.77\pm 0.05$) for the inner (outer) volume,
thus reaffirming the gallium anomaly [9]. No evidence for a difference in the
deficit between the inner and outer volumes was observed, which would have
been a smoking gun signature of a baseline-dependent effect like
$\nu_{e}\to\nu_{s}$ oscillations. However, the statistical significance of the
gallium anomaly is now much stronger; the combined SAGE, GALLEX, and BEST
results give evidence for a deficit at the $5.0\sigma$ level [7]. The datasets
contributing to this anomaly are summarized in figure 1.12.
Figure 1.12: Data contributing to the gallium anomaly, indicating the $\sim
20\%$ deficit in the ${}^{71}{\rm Ge}$ production rate observed by SAGE, GALLEX, and BEST.
Figure from Ref. [7].
As alluded to above, the most common BSM interpretation of the SBL, reactor
antineutrino, and gallium anomalies is the “3+1 model”, which involves the
addition of a new neutrino state–the sterile neutrino–at the eV scale. The
sterile neutrino introduces a fourth weak interaction eigenstate $\nu_{s}$ and
mass eigenstate $\nu_{4}$ to the standard three-neutrino mixing paradigm.
Thus, equation 1.24 becomes
$\begin{pmatrix}\nu_{e}\\\ \nu_{\mu}\\\ \nu_{\tau}\\\
\nu_{s}\end{pmatrix}=\begin{pmatrix}U_{e1}&U_{e2}&U_{e3}&U_{e4}\\\ U_{\mu
1}&U_{\mu 2}&U_{\mu 3}&U_{\mu 4}\\\ U_{\tau 1}&U_{\tau 2}&U_{\tau 3}&U_{\tau
4}\\\ U_{s1}&U_{s2}&U_{s3}&U_{s4}\\\ \end{pmatrix}\begin{pmatrix}\nu_{1}\\\ \nu_{2}\\\ \nu_{3}\\\
\nu_{4}\end{pmatrix}.$ (1.34)
As we are interested in an eV-scale sterile neutrino, the mass-squared
splittings between the three active neutrinos are smaller by at least 2-3
orders of magnitude compared to their mass-squared splittings with the fourth
mass eigenstate. This means that the active neutrino mass splittings are
negligible for short-baseline experiments, i.e. those in which the argument of
the second $\sin^{2}$ term in equation 1.32 is small. Experiments contributing
to the aforementioned anomalies all satisfy this condition. Thus, when
considering sterile neutrino explanations for these anomalies, we can make the
approximation
$\Delta m^{2}_{41}\approx\Delta m^{2}_{42}\approx\Delta m^{2}_{43}\equiv\Delta
m^{2},$ (1.35)
where we hereafter use $\Delta m^{2}$ to refer to the mass-squared splitting
of the fourth mass eigenstate. This approximation holds regardless of the
hierarchy of SM neutrino mass eigenstates.
The experiments discussed in this thesis are sensitive only to
$\overset{\textbf{(---)}}{\nu}_{e}$ and $\overset{\textbf{(---)}}{\nu}_{\mu}$
interactions. The sterile neutrino can facilitate short-baseline oscillations
between these flavor states; the oscillation probability expressions, which
can be derived using equation 1.30 within the 3+1 framework, are given by [93]
$\begin{split}&P_{\nu_{e}\to\nu_{e}}=1-4\sin^{2}2\theta_{ee}\sin^{2}(1.27\Delta
m^{2}L/E)\\\
&P_{\nu_{\mu}\to\nu_{\mu}}=1-4\sin^{2}2\theta_{\mu\mu}\sin^{2}(1.27\Delta
m^{2}L/E)\\\ &P_{\nu_{\mu}\to\nu_{e}}=4\sin^{2}2\theta_{\mu
e}\sin^{2}(1.27\Delta m^{2}L/E),\end{split}$ (1.36)
where $\Delta m^{2}$, $L$, and $E$ are in units of ${\rm eV}^{2}$, km, and GeV,
respectively, and
$\begin{split}&\sin^{2}2\theta_{ee}=4(1-|U_{e4}|^{2})|U_{e4}|^{2}\\\
&\sin^{2}2\theta_{\mu\mu}=4(1-|U_{\mu 4}|^{2})|U_{\mu 4}|^{2}\\\
&\sin^{2}2\theta_{\mu e}=4|U_{\mu 4}|^{2}|U_{e4}|^{2}.\end{split}$ (1.37)
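A small numerical illustration (the $|U_{e4}|^{2}$ and $|U_{\mu 4}|^{2}$ values are assumed, not fitted) of equation 1.37, showing that the appearance amplitude is tied to the product of the two disappearance amplitudes; this coupling underlies the appearance/disappearance tension discussed later in this section:

```python
# Assumed matrix elements for illustration only
Ue4_sq, Umu4_sq = 0.01, 0.015

# Effective amplitudes of eq. 1.37
sin2_2th_ee = 4 * (1 - Ue4_sq) * Ue4_sq
sin2_2th_mumu = 4 * (1 - Umu4_sq) * Umu4_sq
sin2_2th_mue = 4 * Ue4_sq * Umu4_sq

# For small mixings, sin^2(2 th_mue) ~ (1/4) sin^2(2 th_ee) sin^2(2 th_mumu):
# sizable appearance forces nonzero disappearance in both channels
approx = 0.25 * sin2_2th_ee * sin2_2th_mumu
assert abs(sin2_2th_mue - approx) / sin2_2th_mue < 0.05
```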
The first expression in equation 1.36 can potentially explain the deficit of
$\overline{\nu}_{e}$ and $\nu_{e}$ events observed in the RAA and gallium
anomaly, respectively. Though both anomalies stem qualitatively from the same
phenomenon–$\overset{\textbf{(---)}}{\nu}_{e}$ disappearance at short
baseline–the gallium anomaly in general prefers a larger value of
$\sin^{2}2\theta_{ee}$ than the RAA. This is evident in figure 1.13, which
shows the regions in $\sin^{2}2\theta_{ee}$–$\Delta m^{2}$ parameter space
preferred by the RAA and gallium anomalies, as well as global constraints from
other experiments. These constraints come from short-to-medium-baseline
reactor experiments, including NEOS [94], RENO [95], and Daya Bay [96], as
well as very-short-baseline reactor experiments, including STEREO [97], DANSS
[98], and PROSPECT [99]. Each of these experiments searches for
$\overline{\nu}_{e}$ disappearance in a reactor-flux-agnostic way: the former
through comparisons of the reactor $\overline{\nu}_{e}$ spectra measured by
different detectors [100], and the latter through the use of modular or
movable detectors capable of comparing $\overline{\nu}_{e}$ interaction rates
across different baselines. The KATRIN experiment, which is sensitive to the
neutrino mass via an extremely precise measurement of the tritium beta
spectrum endpoint, also places strong constraints on $\sin^{2}2\theta_{ee}$ in
the $\Delta m^{2}\gtrsim 10~{}{\rm eV}^{2}$ region [101].
Figure 1.13: Preferred regions in $\sin^{2}2\theta_{ee}$–$\Delta m^{2}$
parameter space to explain the RAA [8] (green contour) and gallium anomaly [9]
(blue regions). The total excluded region from other experiments (grey region)
is also shown. Figure from Ref. [9].
The second expression in equation 1.36 can potentially explain the SBL
anomalies. This is because both LSND and MiniBooNE operated at accelerator
neutrino sources for which the neutrino flux was generated mainly by charged
pion decay [5, 14]; thus, due to helicity suppression, the flavor composition
was dominated by muon-flavored (anti)neutrinos. This means that even a small
value of $\sin^{2}2\theta_{\mu e}$ could generate an observable level of
$\overset{\textbf{(---)}}{\nu}_{e}$ appearance on top of the SM
$\overset{\textbf{(---)}}{\nu}_{e}$ flux prediction. Figure 1.14 shows the
allowed regions in $\sin^{2}2\theta_{\mu e}$–$\Delta m^{2}$ parameter space
from LSND and MiniBooNE [10]. Strikingly, both anomalies generally prefer the
same region of parameter space. However, as the MiniBooNE excess tends to peak
more sharply at lower energies, the 3+1 fit prefers lower values of $\Delta
m^{2}$ compared to the LSND result.
It is important to note that the fits performed in figure 1.14 account only
for $\overset{\textbf{(---)}}{\nu}_{\mu}\to\overset{\textbf{(---)}}{\nu}_{e}$
oscillations, ignoring any potential $\overset{\textbf{(---)}}{\nu}_{e}$ or
$\overset{\textbf{(---)}}{\nu}_{\mu}$ disappearance in the SM background
prediction. This is a reasonable approximation; however, the inclusion of the
latter effects does indeed impact the MiniBooNE allowed regions. This effect
was only accounted for recently in Ref. [102], which is presented in section
5.3.1 of this thesis.
Figure 1.14: Preferred regions in $\sin^{2}2\theta_{\mu e}$–$\Delta m^{2}$
parameter space to explain the LSND anomaly [5] (filled contours) and
MiniBooNE anomaly [10] (open contours). Figure from Ref. [10].
While there are indications of short baseline
$\overset{\textbf{(---)}}{\nu}_{\mu}\to\overset{\textbf{(---)}}{\nu}_{e}$
appearance and $\overset{\textbf{(---)}}{\nu}_{e}$ disappearance in the global
anomaly picture, direct observation of $\overset{\textbf{(---)}}{\nu}_{\mu}$
disappearance via the third expression in equation 1.36 remains elusive. Long
baseline experiments such as MINOS/MINOS+ [103, 104] and CCFR84 [105] have
searched for muon neutrino disappearance from an accelerator neutrino source.
Additionally, the IceCube experiment has searched for a sterile-induced matter
resonance impacting muon neutrinos as they transit through the Earth [106]. So
far, no definitive evidence for $\overset{\textbf{(---)}}{\nu}_{\mu}$
disappearance has been found (up to a $\sim 2\sigma$ preference in the IceCube
results [106]).
The lack of $\overset{\textbf{(---)}}{\nu}_{\mu}$ disappearance introduces
significant tension when one tries to fit global neutrino data within a
consistent 3+1 model. This conclusion has been reached by multiple 3+1 global
fitting efforts [93, 11, 12]; figure 1.15 shows a representation of the
tension between appearance and disappearance experiments observed in global
fits. This tension persists even with the inclusion of the recent BEST result,
which prefers larger values of $|U_{e4}|^{2}$ (thus allowing lower values of
$|U_{\mu 4}|^{2}$ to fit the $\overset{(-)}{\nu}_{e}$ appearance
anomalies) [12]. Thus, the 3+1 model, while still an important benchmark BSM
scenario, has become disfavored as a solution to all observed anomalies in the
neutrino sector. The state of the sterile neutrino explanation of the SBL
anomalies is discussed in more detail throughout this thesis.
In recent years, neutrino physicists have begun to turn toward alternative
explanations of the anomalies, often involving dark sector particles with
additional interactions. Chapter 6 of this thesis covers one such explanation
of the MiniBooNE anomaly, involving heavy right-handed neutrinos with a
transition magnetic moment coupling to the active neutrinos.
(a) From Ref. [11]
(b) From Ref. [12]
Figure 1.15: Graphical representation of the tension observed in 3+1 global
fits between different subsets of the experimental landscape. Figure 1.15(a)
shows the tension between $\nu_{e}$ appearance experiments and
$\nu_{e}$/$\nu_{\mu}$ disappearance experiments observed in Ref. [11]. Figure
1.15(b) shows the tension between allowed regions from $\nu_{e}$ appearance
(lower right), $\nu_{e}$ disappearance (upper right), and $\nu_{\mu}$
disappearance (upper left) experiments observed in Ref. [12], which includes
the latest results from the BEST experiment.
## Chapter 2 The MiniBooNE Experiment
This chapter is intended to give the reader an overview of the Mini Booster
Neutrino Experiment (MiniBooNE), specifically concerning the excess of
electron-like events observed by MiniBooNE in data taken between 2002 and 2019 at
Fermilab’s Booster Neutrino Beam (BNB). The MiniBooNE excess is at the center
of this thesis; the research presented here covers both experimental follow-up
and theoretical interpretations of this anomaly. Thus, the remaining chapters
require a thorough discussion of the MiniBooNE experiment and its most famous
result.
### 2.1 Overview of MiniBooNE
MiniBooNE was originally designed as an experimental follow-up to the LSND
excess of $\overline{\nu}_{e}$ events observed at the Los Alamos Meson Physics
Facility (LAMPF) [5]. As described in section 1.4, the LAMPF flux was composed
mostly of $\overline{\nu}_{\mu}$, which dominated over the
$\overline{\nu}_{e}$ flux by three orders of magnitude [107]. Because of this,
LSND was able to perform a low-background search for the IBD interaction
$\overline{\nu}_{e}p\to e^{+}n$. An excess of IBD events was observed above
the intrinsic $\overline{\nu}_{e}$ flux prediction from the beam dump
source; this is known as the “LSND anomaly” [5].
The LSND anomaly has traditionally been interpreted as evidence for
$\overline{\nu}_{\mu}\to\overline{\nu}_{e}$ oscillations at $\Delta
m^{2}\approx 1\;{\rm eV}^{2}$. The LSND detector sat relatively close to the
LAMPF nuclear target; the characteristic length-to-energy ratio in the
experiment was $L/E\sim 30\;{\rm m}/30\;{\rm MeV}$. According to equation
1.32, in order to be sensitive to the oscillation-based interpretation of the
LSND anomaly, one must maintain the same ratio $L/E$. This was the design
strategy of the MiniBooNE experiment, which observed the interactions of
neutrinos from the BNB with characteristic energy $E_{\nu}\sim 500\;{\rm
MeV}$, at a baseline of $L\sim 500\;{\rm m}$ from the BNB beryllium target.
The BNB produced mostly $\nu_{\mu}$ from pion decay-in-flight; thus, MiniBooNE
searched for $\nu_{\mu}\to\nu_{e}$ oscillations in the BNB at $\Delta
m^{2}\approx 1\;{\rm eV}^{2}$.
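The design argument above can be sketched numerically with the standard two-flavor short-baseline appearance probability (the form referenced as equation 1.32). The mixing parameters below are purely illustrative, not fit results:

```python
import math

def appearance_prob(sin2_2theta, dm2_ev2, L_km, E_GeV):
    """Two-flavor short-baseline appearance probability:
    P = sin^2(2*theta) * sin^2(1.267 * dm2 [eV^2] * L [km] / E [GeV])."""
    return sin2_2theta * math.sin(1.267 * dm2_ev2 * L_km / E_GeV) ** 2

# Illustrative parameters (not fit values): dm2 = 1 eV^2, sin^2(2theta) = 0.003.
# LSND:      L ~ 30 m,  E ~ 30 MeV  ->  L/E ~ 1 m/MeV = 1 km/GeV
# MiniBooNE: L ~ 500 m, E ~ 500 MeV ->  L/E ~ 1 km/GeV
p_lsnd = appearance_prob(0.003, 1.0, 0.030, 0.030)
p_mb = appearance_prob(0.003, 1.0, 0.500, 0.500)
```

Because the two experiments share the same $L/E$, the oscillation phase, and hence the appearance probability, is identical despite the very different baselines and energies.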
#### 2.1.1 The Booster Neutrino Beam
The BNB, which is still operational, follows the typical design of a neutrino
beamline [107]. Protons are accelerated in a synchrotron up to a momentum of
8.89 GeV, at which point they are kicked out of the synchrotron and interact
within the Be target of the BNB, producing a cascade of secondary particles
[14]. The charged mesons in this cascade are then focused using a toroidal
magnetic field from an aluminum horn. By switching the direction of the
current in the horn (and thus the direction of the magnetic field), one can
choose whether to focus positively-charged mesons and de-focus negatively-
charged mesons (“neutrino mode”), or vice-versa (“antineutrino mode”). After
focusing, charged mesons pass through a concrete collimator and enter a
50-meter-long air-filled region where they decay to neutrinos. The neutrinos
travel through another 474 meters of bedrock before reaching the MiniBooNE
detector. A schematic depiction of this process is shown in figure 2.1.
The MiniBooNE flux is described in detail in Ref. [14]; we summarize the most
important details here. In neutrino (antineutrino) mode, the flux is dominated
by $\nu_{\mu}$ ($\overline{\nu}_{\mu}$) from $\pi^{+}$ ($\pi^{-}$) decay.
Wrong-sign $\overline{\nu}_{\mu}$ ($\nu_{\mu}$), coming mostly from the decay
of oppositely-charged pions, contribute at the 5% (15%) level. Two and three-
body kaon decays also contribute to the $\nu_{\mu}$ and $\overline{\nu}_{\mu}$
flux at the few-percent level. The BNB also produces electron (anti)neutrinos,
which represent $<1\%$ of the total flux in both neutrino and antineutrino
mode. These come from two main sources: the decay of the secondary muon
produced in the original charged pion decay, which is dominant for
$E_{\nu}\lesssim 1\;{\rm GeV}$, and two-body kaon decay, which is dominant for
$E_{\nu}\gtrsim 1\;{\rm GeV}$. The BNB flux breakdowns in neutrino and
antineutrino mode are shown in figure 2.2.
The $\pi^{\pm}$ production rate from p-Be interactions has been measured by
the HARP [108] and BNL E910 [109] experiments. HARP took data at the BNB
proton incident momentum (8.89 GeV/c) with a replica of the BNB beryllium
target, while E910 took data at varying incident proton momenta above and
below the nominal BNB value. These data were used to constrain a Sanford-Wang
parameterization of the $\pi^{\pm}$ differential production cross section in
the BNB [110]. The charged and neutral kaon production rates in p-Be were
constrained by measurements from other experiments at proton momenta around
8.89 GeV/c; the Feynman scaling hypothesis was used to relate these
measurements to the BNB proton momentum [14].
Figure 2.1: A schematic depiction of the BNB at Fermilab, including the
downstream MiniBooNE detector. Figure from Ref. [13].
Figure 2.2: Breakdown of the neutrino flux at the BNB in neutrino (left) and
antineutrino (right) mode. Figure from Ref. [14].
#### 2.1.2 The MiniBooNE Detector
The MiniBooNE detector is an 818-ton, 6.1-meter-radius spherical volume filled
with mineral oil (approximately CH$_{2}$) [15]. It was designed to measure the
Cherenkov light produced from charged particles created in the charged-current
interactions of BNB neutrinos within the detector volume. To do this, the
inner surface of the sphere was instrumented with 1280 photo-multiplier tubes
(PMTs), corresponding to a photocathode coverage of 11.3%. An additional 240
PMTs were used to instrument the surrounding veto region, which rejected
cosmic muons and neutrino interactions outside the detector volume with an
efficiency of $\sim 99.99\%$ [15]. Mineral oil was chosen as the detector
medium due to its high index of refraction (n=1.47), leading to more Cherenkov
light production by electrons traversing the detector volume. The exact
mineral oil mixture, Marcol 7, was selected by optimizing the behavior of
photons with wavelengths between 320 nm and 600 nm (e.g., requiring an
extinction length greater than 20 m) [15]. The detector was situated in a
cylindrical vault just below ground level, under $\sim 3\;{\rm m}$ of dirt. A
schematic depiction of the MiniBooNE detector is shown in figure 2.3.
The reconstruction of the final state from a neutrino interaction in MiniBooNE
relied on the detection of Cherenkov light. Specifically, the collaboration
developed reconstruction algorithms that turned the spatiotemporal
distribution of photon hits on the PMTs into kinematic information on each
observable final state particle [77]. These algorithms used maximum likelihood
estimators to estimate the starting location, direction, and energy of final
state particles using the observed photon hits in each PMT, relying on the
known transport properties of Cherenkov photons within the detector medium.
Cherenkov photons are emitted when a charged particle travels faster than the
speed of light in a medium, at an angle of $\cos\theta_{C}=1/n\beta$ with
respect to the charged particle track. This results in a characteristic ring-
like pattern on the detector wall. Such Cherenkov rings formed the basis of
the MiniBooNE reconstruction algorithm.
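As a rough numerical illustration of these statements, using only the quoted index of refraction and standard particle masses, one can compute the Cherenkov threshold and emission angle in MiniBooNE mineral oil:

```python
import math

N_OIL = 1.47  # index of refraction of MiniBooNE mineral oil

def cherenkov_threshold_ke(mass_mev, n=N_OIL):
    """Kinetic energy [MeV] above which a particle radiates Cherenkov light.
    Threshold condition: beta > 1/n, i.e. gamma > 1/sqrt(1 - 1/n^2)."""
    beta_th = 1.0 / n
    gamma_th = 1.0 / math.sqrt(1.0 - beta_th**2)
    return mass_mev * (gamma_th - 1.0)

def cherenkov_angle_deg(beta, n=N_OIL):
    """Cherenkov emission angle: cos(theta_C) = 1/(n*beta)."""
    return math.degrees(math.acos(1.0 / (n * beta)))

ke_muon = cherenkov_threshold_ke(105.66)   # ~38 MeV
ke_electron = cherenkov_threshold_ke(0.511)  # ~0.2 MeV
angle = cherenkov_angle_deg(1.0)           # ~47 degrees for beta ~ 1
```

The sub-MeV electron threshold is one reason the high index of refraction favors electron detection: essentially all electrons from CCQE interactions radiate, whereas muons must exceed tens of MeV of kinetic energy.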
There were two main classes of observable final state particles in MiniBooNE:
muon-like ($\mu$-like) and electron-like ($e$-like). Each elicits a different
Cherenkov ring pattern [77]. At MiniBooNE energies, muons are typically
minimum-ionizing particles and thus would appear as a uniform ring in the PMT
array. The ring would be filled in if the muon exits the detector volume
before going below the Cherenkov threshold, and would be open otherwise.
Electrons, on the other hand, undergo radiative processes as they travel,
emitting photons via the Bremsstrahlung process, which then undergo pair-
production to $e^{+}e^{-}$, which then emit more photons, and so on. This
process is typically called an “electromagnetic (EM) shower”. The multiple
constituent electrons and positrons in this EM shower would result in a
distorted Cherenkov ring in the PMT array. Importantly, high energy photons
also produced these distorted rings after undergoing an initial pair-
production interaction; thus, electrons and photons were essentially
indistinguishable in MiniBooNE and were both classified as $e$-like. Another
relevant final-state particle in MiniBooNE was the neutral pion, which could
be identified via two separate distorted Cherenkov rings via the
$\pi^{0}\to\gamma\gamma$ decay. It is also important to note that $\pi^{0}$
events could be misclassified as $e$-like if one of the photons was not
reconstructed, which could happen if one of the photons escaped the detector
without pair producing or had energy below the detection threshold. A
schematic diagram of the detector signature of muons, electrons, and neutral
pions in MiniBooNE is shown in figure 2.4(a).
A separate likelihood was calculated for three different final state particle
hypotheses: electron, muon, and neutral pion [77]. Ratios of these likelihoods
were used to distinguish one particle from another in MiniBooNE. As an
example, we show the separation of electron and muon events as characterized
by the log-likelihood-ratio as a function of reconstructed neutrino energy in
figure 2.4(b). This ratio was the main selection tool in selecting $e$-like
events for MiniBooNE’s flagship search for $\nu_{\mu}\to\nu_{e}$ oscillations
in the BNB.
Figure 2.3: The MiniBooNE detector situated in the cylindrical detector hall
(left) and an image of the interior of the MiniBooNE detector (right), showing
the PMTs in both the signal and veto regions. Figure from Ref. [15].
(a) From Ref. [76]
(b) From Ref. [77]
Figure 2.4: Visual representations of particle identification in MiniBooNE.
Figure 2.4(a) shows a schematic representation of the detector signature from
the three main particle classes in MiniBooNE: muons, electrons, and neutral
pions. Figure 2.4(b) shows the MiniBooNE log-likelihood-ratio between the
$e$-like and $\mu$-like hypothesis as a function of reconstructed neutrino
energy, considering both simulated $\nu_{e}$ CCQE (top) and $\nu_{\mu}$ CCQE
(bottom) interactions.
### 2.2 The MiniBooNE Low Energy Electron-Like Excess
As stated above, MiniBooNE was designed to test the LSND excess of
$\overline{\nu}_{e}$ events discussed in section 1.4. To do this, MiniBooNE
isolated a sample of $e$-like events using the likelihood ratios described in
the previous section [76]. This sample was optimized to select $\nu_{e}$ CCQE
interactions within the detector while rejecting $\nu_{\mu}$ interaction
backgrounds, thus maximizing sensitivity to potential $\nu_{\mu}\to\nu_{e}$
oscillations within the BNB. MiniBooNE’s flagship $e$-like analysis, which has
remained stable over the lifetime of the experiment, achieved a $\nu_{e}$ CCQE
efficiency of $\sim 20\%$ while rejecting $\sim 99.9\%$ of $\nu_{\mu}$
backgrounds [10]. The full MiniBooNE dataset consists of $18.75\times 10^{20}$
($11.27\times 10^{20}$) protons-on-target (POT) in neutrino (antineutrino)
mode collected over 17 years of operation. In this dataset, the $e$-like
analysis observes 2870 (478) data events in neutrino (antineutrino) mode,
compared to an SM prediction of 2309.4 (400.6) events [10]. Combining neutrino
and antineutrino mode data, MiniBooNE observes an excess of $638.0\pm
52.1\;({\rm stat.})\pm 122.2\;({\rm sys.})$ $e$-like events, corresponding to
a Gaussian significance of $4.8\sigma$ [10].
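The quoted significance follows from combining the statistical and systematic uncertainties in quadrature; a quick arithmetic check:

```python
import math

excess = 638.0
stat, sys = 52.1, 122.2

# Combine statistical and systematic uncertainties in quadrature,
# then express the excess as a Gaussian significance.
total_unc = math.hypot(stat, sys)      # ~132.8 events
significance = excess / total_unc      # ~4.8 sigma
```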
Figure 2.5 shows the reconstructed neutrino energy distribution of the
MiniBooNE $e$-like excess in both neutrino and antineutrino mode. The stacked
histogram corresponds to the SM prediction from the NUANCE event generator
[44], while the data points correspond to the observed number of $e$-like
events in each bin. The error bars on the stacked histogram correspond to the
systematic uncertainty on the SM prediction, calculated within a covariance
matrix formalism. The dominant sources of systematic uncertainty include
neutrino cross section modeling (derived largely using MiniBooNE’s own cross
section measurements [111, 112, 113, 114]), nuclear effects, detector response
and optical modeling, and BNB flux estimation [79, 10]. The presented error in
each bin of the $\nu_{e}$ and $\overline{\nu}_{e}$ sample incorporates a
constraint from MiniBooNE’s dedicated $\nu_{\mu}$ and $\overline{\nu}_{\mu}$
CCQE samples. The dashed line corresponds to the best fit of the MiniBooNE
excess to the $3+1$ sterile neutrino model described in section 1.4. As one
can see, the excess in data events is strongest in the lowest energy bins; for
this reason, this anomaly is often referred to as the MiniBooNE low-energy
excess (LEE).
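The $\nu_{\mu}$ constraint mentioned above is, schematically, a conditional-Gaussian update of the $\nu_{e}$ prediction and its covariance. The one-bin sketch below uses invented numbers purely to illustrate the mechanics; MiniBooNE's actual implementation is multi-bin and matrix-valued:

```python
def constrain(mu_e, var_e, mu_m, var_m, cov_em, data_m):
    """One-bin sketch of the conditional (covariance-matrix) constraint:
    the nu_e prediction and its variance are updated using the observed
    nu_mu event count and the nu_e/nu_mu covariance."""
    mu_con = mu_e + cov_em / var_m * (data_m - mu_m)
    var_con = var_e - cov_em**2 / var_m
    return mu_con, var_con

# Illustrative numbers only (not MiniBooNE values): if the nu_mu data come
# in above prediction and the errors are positively correlated, the
# constrained nu_e prediction shifts up and its uncertainty shrinks.
mu_con, var_con = constrain(100.0, 25.0, 1000.0, 400.0, 60.0, 1040.0)
```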
As MiniBooNE used a Cherenkov detector, it was not sensitive to the final
state hadronic activity in a neutrino interaction. Thus, kinematic
reconstruction of the original neutrino relied entirely on the final state
lepton. Under the assumption that the neutrino underwent a CCQE interaction
off of a nucleon at rest within the nucleus, the original neutrino energy is
given by [111]
$E_{\nu}^{\rm QE}=\frac{2M_{n}^{\prime}E_{\ell}-\left((M_{n}^{\prime})^{2}+m_{\ell}^{2}-M_{p}^{2}\right)}{2\left[M_{n}^{\prime}-E_{\ell}+\sqrt{E_{\ell}^{2}-m_{\ell}^{2}}\cos\theta_{\ell}\right]},$
(2.1)
where $E_{\ell}$ is the total lepton energy, $\cos\theta_{\ell}$ is the lepton
scattering angle, and $M_{n}$, $M_{p}$, and $m_{\ell}$ are the neutron,
proton, and lepton mass, respectively. The adjusted neutron mass is defined
as $M_{n}^{\prime}\equiv M_{n}-E_{B}$, where $E_{B}$ is the nuclear binding
energy of the initial state neutron. An analogous relation exists for
antineutrino energy reconstruction in a CCQE interaction [115]. This is the
reconstructed energy definition used to generate the histograms in figure 2.5.
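Equation 2.1 can be implemented directly. The snippet below uses standard nucleon masses and an assumed binding energy of 34 MeV purely for illustration:

```python
import math

M_N, M_P = 939.565, 938.272  # neutron, proton mass [MeV]
E_B = 34.0                   # assumed carbon binding energy [MeV]

def e_nu_qe(e_lep, cos_theta, m_lep=0.511):
    """CCQE reconstructed neutrino energy (equation 2.1), energies in MeV.
    Assumes the target neutron is at rest, with adjusted mass M_n' = M_n - E_B."""
    m_n = M_N - E_B
    p_lep = math.sqrt(e_lep**2 - m_lep**2)  # lepton momentum
    num = 2.0 * m_n * e_lep - (m_n**2 + m_lep**2 - M_P**2)
    den = 2.0 * (m_n - e_lep + p_lep * cos_theta)
    return num / den

# A forward-going 500 MeV electron reconstructs to roughly 550 MeV
e_forward = e_nu_qe(500.0, 0.95)
```

Note that, for fixed lepton energy, a larger scattering angle (smaller $\cos\theta_{\ell}$) shrinks the denominator's momentum term and therefore yields a larger reconstructed neutrino energy.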
Figure 2.6 shows the visible energy and $\cos\theta$ distributions of the
final state lepton in MiniBooNE’s $e$-like neutrino mode sample. The visible
energy distribution shows the strongest discrepancy for softer lepton kinetic
energies, as expected for a low-energy excess. For the angular distribution,
it is worth noting that while there is an excess across the entire range, the
largest deviation above the SM prediction comes from the
$\cos\theta\in[0.9,1.0]$ bin. The angular distribution of the MiniBooNE LEE is
an important piece of information for potential solutions to the anomaly; as we
will discuss throughout this thesis, BSM physics models often cannot explain
the energy and angular distributions of the MiniBooNE LEE simultaneously.
The green contributions to the stacked histograms of figure 2.5 represent the
interactions of intrinsic $\nu_{e}$ or $\overline{\nu}_{e}$ in the BNB. At low
energies, these events come mostly from the decay of the secondary muon in the
$\pi^{+}\to\mu^{+}$ or $\pi^{-}\to\mu^{-}$ decay chain, while $\nu_{e}$ and
$\overline{\nu}_{e}$ from kaon decays start to contribute more at higher
energies [14].
The red and brown contributions to the stacked histograms represent
misidentified photon backgrounds that are reconstructed as a single distorted
Cherenkov ring. The largest photon background comes from misidentified
$\pi^{0}$ created via $\nu_{\mu}$ and $\overline{\nu}_{\mu}$ neutral-current
(NC) resonant scattering, in which the initial state nucleon is excited to a
$\Delta$ resonance before decaying to a nucleon-pion pair. The $\pi^{0}$ decays
promptly to a pair of photons, which should nominally appear as a pair of
distorted Cherenkov rings as in figure 2.4(a). However, if one of the photons
exits the detector before converting to an $e^{+}e^{-}$ pair, or if the
original pion energy is distributed asymmetrically such that the visible
energy of one of the photons sits below the reconstruction threshold of 140
MeV [76], the $\pi^{0}$ decay will be misidentified as an $e$-like event. An
enhancement of the NC $\pi^{0}$ background in figure 2.5 could in principle
explain the observed excess. However, MiniBooNE constrained the rate of this
NC $\pi^{0}$ background in situ via a measurement of the two-gamma invariant
mass peak in well-reconstructed NC $\pi^{0}$ events [10]. Additionally, the
radial distribution of the excess peaks toward the center of the detector,
while misidentified NC $\pi^{0}$ backgrounds happen more often toward the edge
of the detector, where it is more likely for a photon to escape before pair-
producing.
The next-largest photon background comes from rare $\Delta\to N\gamma$ decays
in $\nu_{\mu}$ and $\overline{\nu}_{\mu}$ NC resonant scattering interactions.
As this process has never been observed directly, it was not possible for
MiniBooNE to constrain the $\Delta\to N\gamma$ event rate in situ. It was
instead constrained indirectly by the NC $\pi^{0}$ two-gamma invariant mass
distribution [10]. A factor of 3.18 enhancement in $\Delta\to N\gamma$ events
could explain the MiniBooNE LEE; however, this hypothesis has since been
disfavored by recent results from the MicroBooNE experiment [80]. The
MicroBooNE experiment will be covered in more detail in chapters 3, 4 and 5.
MiniBooNE has also studied neutrino interactions outside the detector volume
which result in a single photon entering the detector (the “dirt” backgrounds
in figure 2.5). The timing distribution of the MiniBooNE $e$-like dataset
suggests that the excess comes primarily in time with the beam, while dirt
background events are often delayed by $\sim 10\;{\rm ns}$ [10]. This result
disfavors an enhancement of external neutrino interactions as an explanation
of the MiniBooNE excess.
Therefore, the $4.8\sigma$ MiniBooNE excess remains unexplained. Resolution of
the MiniBooNE LEE is one of the major goals of the neutrino community [83].
(a) $\nu_{e}$ sample, from Ref. [10]
(b) $\overline{\nu}_{e}$ sample, from Ref. [79]
Figure 2.5: The $E_{\nu}^{\rm QE}$ distribution of the MiniBooNE $e$-like
excess in the total neutrino mode (figure 2.5(a)) and antineutrino mode
(figure 2.5(b)) datasets. The observation and SM prediction in each bin are
shown by the data points and colored histograms, respectively.
(a) Lepton $E_{\rm vis}$ distribution
(b) Lepton $\cos\theta$ distribution
Figure 2.6: The lepton visible energy (figure 2.6(a)) and $\cos\theta$ (figure
2.6(b)) distributions of the MiniBooNE $e$-like excess in the total neutrino
mode dataset. The observation and SM prediction in each bin are shown by the
data points and colored histograms, respectively. Figures from Ref. [10].
The MiniBooNE LEE is most commonly interpreted within the context of the 3+1
model introduced in section 1.4. This is primarily because the MiniBooNE
excess has historically been considered alongside the LSND excess, as both
results can be explained by short-baseline $\nu_{\mu}\to\nu_{e}$ and
$\overline{\nu}_{\mu}\to\overline{\nu}_{e}$ appearance. Strikingly, the
MiniBooNE and LSND anomalies both prefer similar regions in sterile neutrino
parameter space, as shown in figure 1.14. This is further supported by figure
2.7, which shows the rising nature of the MiniBooNE and LSND excesses as a
function of the ratio $L/E$, behavior that is consistent with a sterile
neutrino explanation.
There are, however, complications regarding a sterile neutrino explanation of
the MiniBooNE excess. The 3+1 model has difficulty reproducing the lowest
energy and lowest scattering angle region of the excess. Figure 2.6 shows that
the best fit 3+1 prediction, indicated by the dotted line, still falls below
the observed data in the lowest lepton $E_{\rm vis}$ bin and highest lepton
$\cos\theta$ bin. Additionally, as discussed in section 1.4, there is
significant tension between the MiniBooNE and LSND observation of
$\nu_{e}$/$\overline{\nu}_{e}$ appearance and experiments searching for
$\nu_{e}$/$\overline{\nu}_{e}$ and $\nu_{\mu}$/$\overline{\nu}_{\mu}$
disappearance. Finally, the follow-up MicroBooNE experiment has not observed
an excess of $\nu_{e}$ events consistent with the expectation from the
MiniBooNE LEE [116]. The MicroBooNE $\nu_{e}$ analysis is one of the main
results of this thesis and will be explored in more detail in chapters 4 and
5. While the non-observation of a MiniBooNE-like excess of $\nu_{e}$ events in
the BNB does set constraints on $3+1$ parameter space, it does not fully
exclude the MiniBooNE allowed regions [102]. This point will be discussed
further in chapter 5.
These complications with the eV-scale sterile neutrino interpretation of the
MiniBooNE LEE have prompted the community to explore alternative explanations.
Many of these are relatively simple extensions beyond $3+1$, such as $3+N$
models involving $N$ additional sterile neutrino states [93], decaying sterile
neutrino models [117, 118, 119], and sterile neutrinos with altered dispersion
relations from large extra dimensions [120, 121, 122]. Other explanations for
the MiniBooNE LEE introduce a number of new particle species prescribed with
new interactions that create an additional source of photons or $e^{+}e^{-}$
pairs in MiniBooNE. Such models include heavy neutral leptons (HNLs) which
decay to photons via a transition magnetic moment [123, 124, 125, 126, 127,
128, 129, 130, 131, 132, 133, 134, 27, 135, 31] and models with heavy
neutrinos coupled to a “dark sector” involving, for example, new vector or
scalar mediators [136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146].
Chapter 6 of this thesis explores one such explanation of MiniBooNE involving
an HNL with a transition magnetic moment coupling to active neutrinos, which
we refer to as a “neutrissimo”. Neutrissimo decays in MiniBooNE provide
an additional source of single photons which could explain the $e$-like excess
[27, 31].
Thus, there are many potential explanations for the MiniBooNE anomaly.
Distinguishing between these explanations requires careful consideration of
the kinematic distributions of the MiniBooNE excess [147, 31]. Further, these
models are often subject to constraints from existing accelerator neutrino
experiments, such as MINERvA and NA62 [146, 139]. A complete evaluation of
constraints from existing data is essential in determining the most viable
models among the many proposed MiniBooNE explanations. The work presented in
chapter 6 takes a step in this direction by calculating constraints from
MINERvA on the neutrissimo model.
Figure 2.7: The MiniBooNE and LSND excesses as a function of the ratio $L/E$.
The MiniBooNE data is separated into neutrino and antineutrino mode. Figure
from Ref. [10].
## Chapter 3 The MicroBooNE Detector
In order to ascertain the nature of the MiniBooNE excess, one needs a detector
capable of providing more detailed event-by-event information than MiniBooNE’s
Cherenkov detector. This is the concept behind the MicroBooNE experiment. The
MicroBooNE detector is a large-scale liquid argon time projection chamber
(LArTPC) with the ability to record high-resolution images of neutrino
interactions. MicroBooNE recently released its first results investigating the
nature of the MiniBooNE excess [116, 80], which will be presented in chapters
4 and 5. This chapter introduces the detector that made this measurement
possible.
### 3.1 Liquid Argon Time Projection Chamber
MicroBooNE used an 85-metric-ton fiducial volume LArTPC detector to observe
the interactions of neutrinos in the BNB [16, 148]. This made MicroBooNE the
first $\mathcal{O}(100~{}{\rm t})$ LArTPC operated in the United States. The
idea for a LAr-based total absorption detector originated in the 1970s [149].
The introduction of the LArTPC detector concept came from Carlo Rubbia in 1977
[150], extending earlier work from David Nygren [151] and Georges Charpak
[152]. The first operational large-scale LArTPC was the 500-metric-ton active
volume ICARUS T600 detector [153], which came online in 2010. ICARUS observed
cosmic ray and neutrino interactions at the underground Gran Sasso National
Laboratory [154] and even set constraints on $\nu_{\mu}\to\nu_{e}$
interpretations of the LSND and MiniBooNE anomalies using the CERN to Gran
Sasso neutrino beam [155]. On a smaller scale, the ArgoNeuT experiment
operated a 0.25-metric-ton LArTPC at Fermilab’s Neutrino Main Injector
beamline from 2009 to 2010, where it performed the first measurements of
neutrino-argon cross sections [156].
The MicroBooNE detector is situated $70$ m downstream of the MiniBooNE
detector along the BNB and operated from 2015 to 2021, observing a total of
approximately $1.5\times 10^{21}$ POT [148]. MicroBooNE LArTPC data come in
the form of high-resolution three-dimensional images of the ionization energy
deposited by final state charged particles in neutrino interactions. The
information contained in these images allows for the event-by-event separation
of photons and electrons, an essential capability for determining the source of
the MiniBooNE excess. MicroBooNE can also reconstruct hadronic activity in the
final state of the neutrino interaction, which helps further distinguish
between the possible sources of the MiniBooNE excess.
We begin with a brief overview of the MicroBooNE reconstruction procedure.
Charged-current neutrino interactions in the LAr volume produce charged
particles in the final state, which ionize argon atoms as they traverse the
detector. Thus, each charged particle leaves behind a trail of ionized
electrons which, in theory, can drift freely through the noble element
detector medium without being captured. This drift is controlled via an
external electric field with strength $|E|\sim 273$ V/cm, under which the
ionization electrons drift at a velocity $v\sim 0.11$ cm/$\mu$s toward three
anode wire planes [148]. MicroBooNE employs a right-handed coordinate system,
in which BNB neutrinos travel along the $\hat{z}$ direction, ionization
electrons drift along the $-\hat{x}$ direction, and $\hat{y}$ represents the
vertical direction [16]. The anode planes consist of two induction planes and
one collection plane, each containing a series of wires spaced 3 mm apart and
oriented at $\pm 60^{\circ}$ and $0^{\circ}$ with respect to the $\hat{y}$
direction for the induction and collection planes, respectively. Each plane is
biased such that ionization electrons drift past the induction plane wires,
generating a signal via induction, and terminate on the collection plane
wires, generating a signal via direct charge collection. The signals on the
anode wire planes allow for two-dimensional reconstruction of the charged
particle trajectory in the $\hat{y}-\hat{z}$ plane, transverse to the drift
direction. The $\hat{x}$ dimension of the charged particle trajectory can be
reconstructed using the arrival time of signals on the anode wires in
conjunction with the known drift time of the ionization electrons. In order
for this technique to work, one must know the initial time at which the
charged particle entered the detector. This can be established using either an
external beam trigger or an internal trigger from the light collection system,
which operates on much shorter time scales, $\mathcal{O}({\rm ns})$, compared
to characteristic electron drift times of $\mathcal{O}({\rm ms})$. A schematic
of this process is shown in figure 3.1.
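The drift-coordinate reconstruction described above amounts to a single multiplication once $t_{0}$ is known. A sketch using the quoted drift velocity and an assumed 2.56 m full drift length:

```python
DRIFT_VELOCITY_CM_PER_US = 0.11  # electron drift speed at |E| ~ 273 V/cm
DRIFT_LENGTH_CM = 256.0          # assumed cathode-to-anode distance (~2.6 m)

def drift_x_cm(t_hit_us, t0_us):
    """Drift coordinate from the wire-signal arrival time, given the
    interaction time t0 (from the beam trigger or light-collection system)."""
    return (t_hit_us - t0_us) * DRIFT_VELOCITY_CM_PER_US

# Maximum drift time across the full TPC width is O(ms), far longer than
# the O(ns) timing of the light-collection system that supplies t0.
max_drift_us = DRIFT_LENGTH_CM / DRIFT_VELOCITY_CM_PER_US  # ~2300 us
```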
(a)
(b)
Figure 3.1: Schematic depictions of the MicroBooNE LArTPC. Figure 3.1(a) shows
the detection process for charged particles from a neutrino interaction in a
MicroBooNE-like LArTPC. Figure 3.1(b) shows a cross-sectional view of the
MicroBooNE detector along the $-\hat{z}$ direction. Figures from Ref. [16].
#### 3.1.1 Cryogenics
The MicroBooNE detector is relatively large: the LArTPC volume spans 2.6 m, 2.3
m, and 10.4 m in the $\hat{x}$, $\hat{y}$, and $\hat{z}$ direction,
respectively [16]. Thus, ionization electrons must drift through
$\mathcal{O}(m)$ of LAr before reaching the anode wire planes. Reconstruction
of these ionization electrons requires careful control of the drift process.
This is the main objective of the MicroBooNE cryogenic system.
The LArTPC is housed within a larger cylindrical cryostat, which itself is
supported by an argon purification system and nitrogen refrigeration system
[16]. The purification system consists of two recirculation pumps and two
filter skids that remove electronegative impurities from the LAr, mainly
oxygen (O$_{2}$) and water (H$_{2}$O). These impurities must be kept below the 100
parts-per-trillion O$_{2}$-equivalent level in order to maintain electron drift
lengths of at least 2.5 m [157, 16]. Additionally, the nitrogen contamination
must be kept below 2 parts-per-million in order to maintain an argon
scintillation light attenuation length greater than the size of the detector
[158]. Nitrogen cannot be appreciably removed from the argon via the
purification system; rather, the initial nitrogen contamination is fixed by
the quality of the delivered argon, and additional contamination must be
controlled by minimizing the atmosphere leakage rate into the cryostat.
The nitrogen refrigeration system is designed to combat the heat load on the
LAr from the environment and electrical power systems, maintaining thermal
homogeneity throughout the active volume. It consists of two condensers, each
designed to handle a heat load of approximately 9.5 kW [16]. The temperature
of the LAr volume must be stable to $\pm 0.1$ K in order to keep the $\hat{x}$
direction resolution of charged particle tracks below 0.1% [16].
#### 3.1.2 LArTPC Drift System
The drift system inside the LArTPC volume consists of three major subsystems:
the cathode plane, the field cage, and the three anode wire planes. The
purpose of the drift system is to maintain a uniform electric field throughout
the active volume such that ionization electrons are transported to the anode
plane at a stable drift velocity.
The cathode consists of nine stainless steel sheets connected to a supporting
frame to form a single plane. Laser tracker measurements indicate that a
majority of the cathode plane is flat to within $\pm 3$ mm [16]. The cathode
plane is kept at a negative potential of approximately $-70$ kV via a high
voltage feedthrough on the cryostat. The field cage maintains a uniform
electric field between the cathode plane and anode planes. It consists of 64
stainless steel tubes wrapped around the LArTPC active volume. A resistor
divider chain connects each tube to its neighbor, sequentially stepping the
voltage from $-70$ kV to ground in $1.1$ kV increments. The chain provides a
resistance of 250 M$\Omega$ between adjacent tubes such that the current flow
is approximately 4.4 $\mu$A, much larger than the $\mathcal{O}({\rm nA})$
current from signals on anode plane wires [16]. Figure 3.2 shows the
MicroBooNE cathode and field cage, as well as a simulated map of the electric
field within the LArTPC active volume.
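The divider-chain numbers quoted above are mutually consistent, as a quick back-of-the-envelope Python check shows (values taken from the text; variable names are illustrative):

```python
# Back-of-the-envelope check of the field-cage divider chain, using the
# nominal values quoted above (variable names are illustrative).
N_TUBES = 64        # field cage tubes stepping the potential to ground
STEP_V = 1.1e3      # voltage increment between adjacent tubes [V]
R_STEP = 250e6      # divider resistance between adjacent tubes [Ohm]

total_drop = N_TUBES * STEP_V   # total potential graded by the cage
current = STEP_V / R_STEP       # Ohm's law across one divider step

print(f"total drop    = {total_drop / 1e3:.1f} kV")  # ~70 kV cathode potential
print(f"chain current = {current * 1e6:.1f} uA")     # the quoted 4.4 uA
```

Sixty-four steps of 1.1 kV reproduce the roughly $-70$ kV cathode potential, and Ohm's law across a single 250 M$\Omega$ step gives the quoted 4.4 $\mu$A chain current.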
Perhaps the most critical components of the MicroBooNE detector are the three
anode wire planes. The U and V induction planes contain 2400 wires each, while
the Y collection plane contains 3456 wires. As mentioned above, the U and V
plane wires are oriented at $\pm 60^{\circ}$ with respect to the vertical,
while the Y plane is oriented vertically. The U, V, and Y planes are biased at
-200 V, 0 V, and +440 V, respectively, to ensure termination of ionization
electrons on the Y collection plane. Each wire is 150 $\mu$m in diameter and
is spaced 3 mm from its neighbors. The planes themselves are spaced 3 mm from
one another. The wires are held in place by wire carrier boards, which house
16 wires each in the U and V planes and 32 wires in the Y plane. Each wire is
terminated using a semi-automated wrapping procedure around a 3 mm diameter
brass ferrule. On the wire carrier boards, each wire makes contact with a gold
pin that connects to the electronic read-out system. The anode planes are held
in place by a single stainless steel frame, which houses each wire carrier
board via an array of precision alignment pins. Wires are tested to withstand
three times the nominal load of 0.7 kg without breakage, both before and after
placement onto the wire carrier board. Figure 3.3(a) shows an image of a
single Y plane wire carrier board with 32 mounted wires. An image of the
fully-assembled MicroBooNE LArTPC is shown in figure 3.3(b), specifically
highlighting the anode planes mounted on the stainless steel frame.
(a)
(b)
Figure 3.2: Figure 3.2(a) shows a close-up image of the cathode plane of the
MicroBooNE LArTPC. The stainless steel field cage tubes can also be seen
surrounding the active volume. Figure 3.2(b) shows a cross-sectional map of
the electric field at the edge of the active volume, considering a cathode
plane voltage of -128 kV. The legend shows the field strength in units of V/m.
Figures from Ref. [16].
(a)
(b)
Figure 3.3: Figure 3.3(a) shows a photograph of a single wire carrier board
with 32 mounted wires. Figure 3.3(b) shows the fully-assembled MicroBooNE
LArTPC, highlighting the anode plane mounted on the stainless steel frame.
Figures from Ref. [16].
#### 3.1.3 Light Collection System
Liquid argon is an attractive scintillation medium due to its low cost, high
scintillation yield ($\mathcal{O}(10^{4})$ photons per MeV of deposited
energy), and transparency to its own scintillation light [158]. This last
feature comes from the scintillation mechanism in LAr: when argon atoms are
ionized, they combine with one another to form singlet and triplet excimer
states. When these excimer states decay, they emit 128 nm photons which pass
unattenuated through the surrounding atomic argon [159]. The decay of the
singlet (triplet) state happens on timescales of $\mathcal{O}$(ns)
($\mathcal{O}$($\mu$s)) [160, 161]. Thus, scintillation light emission happens
on much shorter timescales than the $\mathcal{O}$(ms) drift time of the
ionization electrons.
The light collection system in MicroBooNE is designed to detect the
scintillation photons produced in a neutrino interaction. It consists of 32
8-inch Hamamatsu R5912-02mod cryogenic PMTs situated behind an acrylic plate
coated with tetraphenyl butadiene (TPB) [16]. An image of one such PMT
assembly is shown in figure 3.5(a). TPB is a wavelength shifter that absorbs
the 128 nm argon scintillation light and re-emits a photon in the visible
range. The necessity of this procedure is shown in figure 3.4, which
demonstrates that, unlike direct LAr scintillation light, TPB emission is well
within the wavelength acceptance range of the PMTs.
# Gravitational wave memory in wormhole spacetimes
Indranil Chakraborty, Department of Physics, Indian Institute of Technology
Kharagpur, Kharagpur 721302, India <EMAIL_ADDRESS>
Soumya Bhattacharya, Department of Astrophysics and High Energy Physics, S.N.
Bose National Center for Basic Sciences, Kolkata 700106, India <EMAIL_ADDRESS>
Sumanta Chakraborty, School of Physical Sciences, Indian Association for the
Cultivation of Science, Kolkata 700032, India <EMAIL_ADDRESS>
###### Abstract
Gravitational wave (GW) memory is studied in the context of a certain class of
braneworld wormholes. Unlike other wormhole geometries, this novel class of
wormholes does not require any exotic matter fields for traversability.
First, we study geodesics in this wormhole spacetime in the presence of a GW
pulse. The resulting evolution of the geodesic separation shows the presence
of displacement and velocity memory effects. Motivated by this, we study the
memory effects at null infinity using the Bondi-Sachs formalism, adapted for
the braneworld wormhole. Our analysis provides a non-trivial change of the
Bondi mass after the passage of a burst of gravitational radiation and hence
manifests the memory effect at null infinity. In both of these exercises, the
presence of the extra dimension and the wormhole nature of the spacetime
geometry get imprinted in the memory effect. Since future GW detectors will be
able to probe the memory effect, the present work provides another avenue to
search for compact objects other than black holes.
## I Introduction
The direct detection of Gravitational Waves (GWs) from binary black hole and
binary neutron star merger events [1, 2], as well as the observations of the
shadow of supermassive compact central objects e.g., the M87* and the SgrA*
[3, 4, 5, 6, 7, 8, 9], are the two major observational breakthroughs in the
field of gravitational physics within the last decade. Both of these
observations depend crucially on the strong field behaviour of gravity and, in
principle, can be used to test the robustness of General Relativity (GR) and
also provide crucial pointers to the nature of the compact objects [10, 11,
12, 13, 14, 15, 16, 17, 18]. Although GR has so far passed these strong field
tests without any scar, from a purely theoretical perspective the inability of
GR to resolve the singularities occurring at the end point of gravitational
collapse, or at the starting point of our modern hot big bang cosmology, poses
a serious challenge to the theory itself. This important shortcoming of GR
makes room for investigating alternative possibilities, vis-a-vis modified
near-horizon geometries. Among the alternatives, one can either look for
higher curvature/higher dimensional theories of gravity, possibly emerging
from some quantum gravity scenarios, or modifications of the black hole
paradigm itself. Such non-black hole compact objects can arise from quantum
gravity-motivated theories, e.g., fuzzballs [19], or can be compact objects
made out of exotic matter fields, known as exotic compact objects (ECOs) [20,
21, 22, 23, 24]. Both of these classes of non-black hole objects behave as
black holes for solar system tests of gravity, but appear to have distinct
phenomenology, in contrast to that of a black hole, when probed using the
strong field tests of gravity, in particular GWs [25, 26, 27, 28, 29, 30, 31,
32, 33, 34, 35, 36].
However, the present generation of GW detectors is not sensitive enough to
detect the modifications in the GW waveform originating from such non-black
hole objects. In the absence of such definitive tests that can either confirm
or nullify the existence of these non-black hole objects, it is necessary to
study the strong-gravity signatures of these exotic objects in order to gain a
better understanding of their physical characteristics. Here we attempt to
study some properties associated with GWs in the background of one of the
oldest and most interesting classes of ECOs, viz., wormholes. These are
spacetimes joining two distinct universes by a throat [37, 38, 39, 40, 41, 42,
43, 44, 45, 46]. If one can travel from one universe to the other, the
wormhole is referred to as traversable, and this, in general, requires exotic
matter fields. However, there exists one class of wormholes which does not
require exotic matter for its existence, known as braneworld wormholes [47],
corresponding to the Randall-Sundrum two-brane scenario [48]. This has two
advantages: (a) the presence of the extra dimension is imprinted on this
wormhole solution and hence can possibly be tested using GWs or black hole
shadow measurements; (b) the wormhole does not require any exotic matter on
the four-dimensional brane, since the contributions from the higher dimension
take care of the exotic nature needed to support traversability of the
wormhole. Therefore, it is worthwhile to study various properties of this
wormhole and hence look for its observational signatures. This is because it
will not only validate the existence of wormholes, but also of compact extra
dimensions, thus providing a unified testing ground for the non-black hole
nature of compact objects, vis-a-vis theories beyond GR. Several aspects of
this braneworld wormhole, e.g., its stability under perturbations due to
various fields, in particular the ringdown structure [49], as well as
implications for black hole shadow measurements [50], have already been
explored. Intriguingly, all of these explorations have provided quite
promising results and, in particular, demonstrate the existence of echoes in
the GW signal. In this work, we wish to explore another direction, which has
remained largely unexplored, but holds immense potential for the future
generations of GW detectors, namely the gravitational memory effect.
The improvement in the detection prospects of future ground-based GW
detectors, together with the launch of the space-based detector LISA in the
near future, may provide us an opportunity to observe the GW memory effect
[51]. The memory effect brings in both the strong-field and the non-linear
aspects of general relativity, which are yet to be observed. It refers to the
lasting change in the relative distance between test particles when a GW
passes through the ambient spacetime [52, 53]. Being a subtle DC shift to the
overall GW amplitude, it has remained undetected in the past observations
taken by LIGO [54]. There have been proposals of stacking the known GW signals
observed by LIGO-Virgo in order to detect this effect [55]. Initially studied
in the context of hyperbolic scattering [56] and gravitational bremsstrahlung
[57], the memory effect was also studied at null infinity [58]. Recent works
on memory effects involve
generalization to electrodynamics [59, 60] and Yang-Mills theories [61, 62],
investigating features of extra dimensions [63, 64, 65], distinguishing
modified theories of gravity from GR, e.g., scalar-tensor theories [66, 67,
68, 69] and Chern-Simons gravity [70]. Moreover, there have also been works
generalizing memory effects to the symmetries associated with near horizon
geometries for black holes [71, 72, 73]. In this work, we wish to study the
gravitational memory effect for braneworld wormholes, a class of ECOs, for the
first time. Our aim, in the present context, is to infer the imprints of the
non-black hole nature of this static wormhole spacetime and the presence of
extra dimensions on the memory effect. We attempt this exercise of finding the
memory effects in two distinct ways: (a) we perform a geodesic analysis (as
worked out in [52, 74]) and comment on the presence of displacement and
velocity memory effect in the background wormhole spacetime by introducing an
additional GW perturbation, (b) we study memory effects at null infinity using
the Bondi-Sachs formalism [75].
The organization of the paper is as follows: in Section II we briefly review
the wormhole geometry in the context of the braneworld scenario. Section III
deals with the study of geodesics and the subsequent analysis of displacement
and velocity memory effects, and finally in Section IV we will study the
influence of a perturbing GW on the wormhole metric in the Bondi-Sachs gauge
and the associated null memory
effects. We conclude with a discussion on the results obtained and provide an
outline of the future directions to pursue.
## II Brief review of wormhole geometry in the braneworld scenario
Let us now discuss briefly the braneworld model under investigation. In this
model, we have a five dimensional spacetime (referred to as the bulk), within
which two four dimensional branes are embedded. The extra dimension is
spacelike in nature and is described by the coordinate $y$. One of the branes
is located at $y=0$ and is dubbed the Planck brane, while the other is located
at $y=\ell$, referred to as the visible brane. The proper distance
between the two branes is given by the integral of the $g_{yy}$ component over
the extent of the extra dimension, yielding $d(x)=e^{\phi(x)}\ell$, where the
field $\phi(x)$ is referred to as the radion field. Since we are interested in
the measurements done by a four-dimensional observer, it will suffice to
consider the low energy effective gravitational field equations. This can be
achieved by projecting the five dimensional Einstein’s equations on the four
dimensional brane, and then expanding the same in the ratio of the
(bulk/brane) curvature length scale. Thus we finally obtain the following
gravitational field equations on the visible brane [76, 47],
$\begin{split}G_{\mu\nu}&=\frac{\kappa_{5}^{2}}{\ell\Phi}T^{\rm
V}_{\mu\nu}+\frac{\kappa_{5}^{2}(1+\Phi)}{\ell\Phi}T^{\rm
P}_{\mu\nu}+\frac{1}{\Phi}\Big{(}\nabla_{\mu}\nabla_{\nu}\Phi-
g_{\mu\nu}\nabla^{\alpha}\nabla_{\alpha}\Phi\Big{)}-\frac{3}{2\Phi(1+\Phi)}\Big{(}\nabla_{\mu}\Phi\nabla_{\nu}\Phi-\frac{1}{2}g_{\mu\nu}\nabla^{\alpha}\Phi\nabla_{\alpha}\Phi\Big{)}~{}.\end{split}$
(1)
Here $g_{\mu\nu}$ is the visible brane metric, $\nabla_{\mu}$ is the covariant
derivative with respect to $g_{\mu\nu}$ and $\kappa_{5}^{2}$ is the five
dimensional gravitational coupling constant. Moreover, $T^{\rm P}_{\mu\nu}$
and $T^{\rm V}_{\mu\nu}$ are the energy momentum tensors on the Planck brane
and the visible brane, respectively. The scalar field $\Phi$, appearing in
Eq. (1), is defined as $\Phi\equiv\exp[2e^{\phi(x)}]-1$, where $\phi(x)$ is the radion
field. The energy density of the on-brane matter field must be small compared
to the brane tension in order for the low-energy effective theory to hold.
Wormholes, being non-singular solutions, have finite energy density and
pressure for the on-brane matter fields everywhere, thereby ensuring the
validity of this theory in the context of the wormhole geometry [47].
In addition, it must be noted that wormholes require exotic matter fields
for their traversability, at least in the context of GR. This is because the
violation of the convergence condition for timelike geodesics leads to a
violation of the energy conditions for the stress-energy tensor sourcing the
wormhole geometry [37]. However, in the braneworld scenario, the total
energy-momentum tensor has contributions from the matter present on the two
3-branes (visible and Planck) and a geometric stress due to the radion field
generated from the bulk spacetime. Hence, one can sustain this wormhole
geometry with the on-brane matter satisfying the energy conditions and the
violations of the energy conditions for the total energy-momentum tensor can
be attributed to the bulk spacetime (similar situations may arise in the
context of scalar coupled Gauss-Bonnet gravity, see e.g., [77]). Thus the
resulting wormhole solution will be constructed out of normal matter fields on
the brane, with exotic matter field on the bulk. Since we cannot access the
energy scale of the bulk spacetime, the existence of such exotic matter in the
bulk is not of much concern to our present analysis. Moreover, such braneworld
wormholes have also been shown to be stable under scalar, electromagnetic and
axial gravitational perturbations in [49].
In order to avoid non-locality in the theory, i.e., since we do not want the
dynamics of the visible brane to be governed by the energy momentum tensor of
the Planck brane, we will work with $T^{\rm P}_{\mu\nu}=0$, i.e., there is no
matter on the Planck brane. With this choice, the field equation for $\Phi$
takes the following form,
$\nabla^{\alpha}\nabla_{\alpha}\Phi=\frac{\kappa_{5}^{2}}{\ell}\frac{T^{\rm
V}}{2\omega+3}-\frac{1}{2\omega+3}\frac{d\omega}{d\Phi}\Big{(}\nabla^{\alpha}\Phi\Big{)}\Big{(}\nabla_{\alpha}\Phi\Big{)}~{},$
(2)
where, $T^{\rm V}$ is the trace of the energy momentum tensor on the visible
brane and the coupling function $\omega(\Phi)$, is defined as,
$\omega(\Phi)=-\frac{3\Phi}{2(1+\Phi)}~{}.$ (3)
Thus, the low energy effective braneworld scenario can be written as a
generalized Brans-Dicke (BD) [78] theory, with a variable BD parameter
$\omega(\Phi)$. Given these, we rewrite the gravitational field equations as,
$G_{\mu\nu}=\frac{\kappa_{5}^{2}}{\ell\Phi}T_{\mu\nu}^{V}+\frac{1}{\Phi}T_{\mu\nu}^{\Phi}~{}.$
(4)
Here, $T_{\mu\nu}^{\Phi}$ is to be identified with the sum of the third and
the fourth terms on the right hand side of Eq. (1), without the $(1/\Phi)$ part.
The above set of equations for the metric functions, as well as for the scalar
field $\Phi$, can be solved assuming a static and spherically symmetric metric
ansatz along with an anisotropic fluid source on the visible brane with
vanishing trace. This simplifies the problem of solving the field equations
considerably, and one arrives at a two-parameter family of solutions with
$R=0$, written in Schwarzschild-like coordinates as [79, 47],
$\displaystyle ds^{2}=-\left(\kappa+\lambda\sqrt{1-\frac{2M}{r}}\right)^{2}dt^{2}+\left(1-\frac{2M}{r}\right)^{-1}dr^{2}+r^{2}\Big{(}d\theta^{2}+\sin^{2}\theta\,d\phi^{2}\Big{)}~{}.$ (5)
Here, in the limit $\kappa=0$ and $\lambda=1$, we get back the Schwarzschild
geometry. Note that in Eq. (5), the $tt$-component of the metric is not given
in the standard asymptotic form, since it does not reduce to unity in the
limit $r\rightarrow\infty$, but rather to $(\kappa+\lambda)^{2}$. Therefore,
we rescale the time coordinate by $t\to t/(\kappa+\lambda)$ and define the
ratio $(\kappa/\lambda)=p$, since this is the only parameter which
characterises the wormhole geometry. Finally, using the coordinate
transformation $u=t-r_{*}$, where $r_{*}$ is the tortoise coordinate, defined
as $(dr_{*}/dr)=(1/\sqrt{-g_{tt}g^{rr}})$, the metric in Eq. (5) becomes,
$ds^{2}=-f(r)~{}du^{2}-2g(r)~{}dudr+r^{2}~{}d\theta^{2}+r^{2}~{}\sin^{2}\theta\,d\phi^{2}~{}.$
(6)
Here we have denoted the $uu$ and $ur$ components of the metric as $f(r)$ and
$g(r)$, with the following expressions for them,
$\displaystyle f(r)$
$\displaystyle\equiv\bigg{(}\frac{p}{p+1}+\frac{1}{1+p}\sqrt{1-\frac{2M}{r}}\bigg{)}^{2}~{},$
(7) $\displaystyle g(r)$
$\displaystyle\equiv\bigg{(}\frac{p}{p+1}+\frac{1}{1+p}\sqrt{1-\frac{2M}{r}}\bigg{)}\bigg{(}1-\frac{2M}{r}\bigg{)}^{-1/2}~{}.$
(8)
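For concreteness, the metric functions of Eqs. (7) and (8) are straightforward to evaluate numerically. The following minimal Python sketch (our own illustrative code, not from the paper) also verifies the Schwarzschild limit $p\to 0$ and the asymptotic flatness obtained after the time rescaling:

```python
import math

def f(r, p, M):
    """uu metric function of Eq. (7)."""
    s = math.sqrt(1.0 - 2.0 * M / r)
    return ((p + s) / (1.0 + p)) ** 2

def g(r, p, M):
    """ur metric function of Eq. (8)."""
    s = math.sqrt(1.0 - 2.0 * M / r)
    return ((p + s) / (1.0 + p)) / s

# Schwarzschild limit: p = 0 (i.e. kappa = 0, lambda = 1)
M = 3.0
print(f(10.0, 0.0, M))         # equals 1 - 2M/r = 0.4

# Asymptotics after the t -> t/(kappa + lambda) rescaling: f, g -> 1
print(f(1e9, 2.0, M), g(1e9, 2.0, M))
```

The first check recovers $g_{uu}=-(1-2M/r)$ for $p=0$, and the second confirms that both $f$ and $g$ approach unity at large $r$, consistent with the asymptotic expansion used later in the paper.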
The scalar field, on the other hand, is best written in the isotropic
coordinate $r^{\prime}$, such that
$\Phi(r^{\prime})=\Big{(}\frac{C_{1}}{M}\log\frac{2r^{\prime}q+M}{2r^{\prime}+M}+C_{4}\Big{)}^{2}-1~{},$
(9)
with the isotropic coordinate $r^{\prime}$ being related to the Schwarzschild
coordinate $r$ through the following relation:
$r=r^{\prime}(1+\frac{M}{2r^{\prime}})^{2}$. Note that the two coordinates $r$
and $r^{\prime}$ become identical in the asymptotic limit. Moreover, $C_{1}$
and $C_{4}$, appearing in Eq. (9), are positive non-zero constants, and
$q=(p+1)/(p-1)$, where $p$ is the wormhole parameter, defined earlier.
Unlike the Schwarzschild spacetime, here the radial coordinate $r$ can be
extended only up to $r=2M$, and it is also evident from Eq. (5) that as long
as $\kappa$ is non-zero and positive, $g_{tt}\neq 0$ for all $r\geq 2M$. This
suggests that there is no event horizon in this spacetime. Although the
surface $r=2M$ is not an event horizon, it is indeed null, as $g^{rr}$
vanishes there; hence $r=2M$ is a special surface, referred to as the throat
of the wormhole. Physically, the above solution depicts two separate universes
connected together at the throat, located at $r=2M$, which is traversable. The
expression for the anisotropic fluid matter
at the throat, necessary for traversability, can be obtained from [47]. We do
at the throat, necessary for traversability can be obtained from [47]. We do
not provide it here, since this is not required for the computation of
gravitational memory, which is the prime goal in the present context. The
above provides a broad introduction to the wormhole geometry we will be
working with and we shall apply these results in order to compute the memory
effect, to be discussed below.
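The throat structure described above can be made concrete numerically. In this short sketch (the parameter values $M=3$, $p=2$ are illustrative), $g^{rr}$ vanishes at $r=2M$ while the rescaled $g_{tt}$ of Eq. (5) stays finite and non-zero for $p>0$, so the surface is null but not a horizon:

```python
import math

def g_tt(r, p, M):
    # rescaled tt component of Eq. (5), written via p = kappa/lambda
    s = math.sqrt(1.0 - 2.0 * M / r)
    return -((p + s) / (1.0 + p)) ** 2

def g_rr_inv(r, M):
    # g^{rr} = 1 - 2M/r in the Schwarzschild-like coordinates of Eq. (5)
    return 1.0 - 2.0 * M / r

M, p = 3.0, 2.0
r_throat = 2.0 * M

print(g_rr_inv(r_throat, M))   # 0.0: the throat is a null surface
print(g_tt(r_throat, p, M))    # -(p/(1+p))^2, non-zero: no horizon for p > 0
```

At the throat $g_{tt}=-(p/(1+p))^{2}=-4/9$ for $p=2$, which only vanishes in the Schwarzschild limit $p\to 0$.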
## III Memory of geodesics
In this section, we will present the analysis of the memory effect vis-a-vis
the geodesic deviation between neighbouring geodesics due to a propagating GW,
with the geodesic separation quantifying the amount of displacement memory
effect. Moreover, if the geodesics do not have a constant separation after the
passage of the GW pulse, one can also associate a velocity memory effect with
these geodesics.
Such effects have been studied in the recent past [80, 74, 81, 82, 83] by
investigating the evolution of geodesics in exact plane GW spacetimes. By
choosing a Gaussian pulse for the polarization (radiative) term in the line
element of the plane GW spacetime, the geodesic equations can be solved
numerically, and then the change in the separation and velocity (the
displacement and the velocity memory, respectively) can be computed due to the
passage of the GW pulse. Lately, the above formalism has also been generalized
in the context of alternative theories of gravity [84, 85] to look for
signatures of such alternative theories in the displacement and velocity
memory effects. In the present work, we will study the evolution of the
geodesics in the wormhole background presented above, in the presence of a GW
pulse and study how the displacement and velocity memory effects depend on the
wormhole nature of the background geometry. For this purpose, we write down
the spacetime metric as a sum of the background wormhole geometry $g_{\mu\nu}$
and a GW perturbation $h_{\mu\nu}$, such that the line element becomes,
$ds^{2}=(g_{\mu\nu}+h_{\mu\nu})\,dx^{\mu}\,dx^{\nu}~{}.$ (10)
As we have already mentioned, $g_{\mu\nu}$ is the wormhole metric given in
Eq. (6) and $h_{\mu\nu}$ is the GW perturbation. Using the wormhole geometry
of Eq. (6) explicitly, the resulting geometry becomes,
$\displaystyle ds^{2}=-f(r)\,du^{2}-2g(r)\,dudr+\left[r^{2}+rH(u)\right]d\theta^{2}+\left[r^{2}-rH(u)\right]\sin^{2}\theta\,d\phi^{2}~{},$ (11)
where, $f(r)$ and $g(r)$ are given by Eqs. (7) and (8), respectively. The function
$H(u)$ corresponds to the GW pulse and we have assumed that $h_{\mu\nu}$ can
be expressed in the transverse-traceless (TT) gauge. In order to generate this
perturbation in the wormhole metric, we need to include in the matter sector
an energy momentum tensor, which can source the GW pulse $H(u)$ in the TT
gauge. Such an energy momentum tensor can arise from an expanding anisotropic
fluid shell. Prior to the perturbation, the fluid was non-dynamical and the
$u$-constant hypersurfaces were spherically symmetric. Due to this expansion,
the GW pulse is generated and it propagates over the wormhole spacetime to
future null infinity. In what follows, we will derive the displacement and the
velocity memory effects as the propagating GW crosses a congruence of timelike
geodesics in the wormhole background, leading to a change in the separation
between these comoving geodesics.
The GW pulse profile described by $H(u)$ is taken to be
$H(u)=A\operatorname{sech}^{2}(u-u_{0})$, where $A$ denotes the amplitude of
the GW pulse, which is centered around $u=u_{0}$. Since the above wormhole
spacetime along with the GW pulse respects the spherical symmetry, we can
choose the equatorial plane, located at $\theta=(\pi/2)$, to be the plane on
which all the geodesics are located. Therefore, on the equatorial plane, the
geodesic equation for the $u$ coordinate becomes,
$\ddot{u}-\frac{f^{\prime}}{2g}\dot{u}^{2}-\frac{H-2r}{2g}\dot{\phi}^{2}=0~{}.$
(12)
Along identical lines one can arrive at the geodesic equations for the other
coordinates on the equatorial plane, in particular, the respective geodesic
equations for the $r$ and the $\phi$ coordinates are given as,
$\displaystyle\ddot{r}+\frac{f}{2g^{2}}f^{\prime}\dot{u}^{2}+\frac{f^{\prime}}{g}\dot{r}\dot{u}+\frac{g^{\prime}}{g}\dot{r}^{2}+\frac{fH-2fr-rgH^{\prime}}{2g^{2}}\dot{\phi}^{2}=0~{},$ (13)
$\displaystyle\ddot{\phi}-\frac{H^{\prime}}{r-H}\dot{\phi}\dot{u}+\frac{2r-H}{r(r-H)}\dot{\phi}\dot{r}=0~{}.$ (14)
As mentioned before, all the above equations are written on the
$\theta=\frac{\pi}{2}$ plane and here ‘overdot’ denotes derivative with
respect to the proper time $\tau$ associated with the geodesics, while ‘prime’
denotes derivative with respect to the argument of the respective function.
For example, $H^{\prime}\equiv(dH/du)$ and $f^{\prime}=(df/dr)$.
(a) Variation of the difference between the null coordinates of the two
timelike geodesics, denoted by $\Delta u$, with respect to the proper time
$\tau$, for different values of the higher dimensional parameter $p$.
(b) Variation of the difference between the radial coordinates of the timelike
geodesics, namely $\Delta r$, against the proper time $\tau$ for different
values of the wormhole parameter $p$.
Figure 1: Variation of the differences of both coordinates between the
timelike geodesics, namely $\Delta u$ and $\Delta r$, plotted against the
proper time for different choices of the wormhole parameter.
(a) Variation of the difference between null coordinate of the two timelike
geodesics, denoted by $\Delta u$, has been presented against the proper time
$\tau$, in the presence of and in the absence of the GW pulse.
(b) Variation of the difference between the radial coordinates of the timelike
geodesics, namely $\Delta r$, has been depicted with the proper time $\tau$ in
the presence as well as in the absence of the GW pulse.
Figure 2: The variation of $\Delta u$ and $\Delta r$ with the proper time of
the geodesics has been presented in the presence and in the absence of the GW
pulse. The plots explicitly demonstrate that both $\Delta u$ and $\Delta r$
encode information about the passage of the GW pulse in the past, and thus
depict the displacement memory effect.
(a) Variation of the difference between the velocities along the null
direction $u$ of the two timelike geodesics, presented against the proper time
$\tau$, in the presence as well as in the absence of the GW pulse.
(b) Variation of the difference between the velocity along the radial
direction $r$ of the timelike geodesics has been presented with the proper
time $\tau$ with and without the GW pulse.
Figure 3: We have explicitly depicted that the expressions for $\Delta\dot{u}$
and $\Delta\dot{r}$ deviate significantly in the presence of a GW pulse. This
immediately suggests the existence of a velocity memory effect.
We have solved the three geodesic equations, presented in Eqs. (12)-(14),
numerically in the symbolic manipulation software Mathematica, and have
analysed the solutions extensively. In particular, we have started by
considering two neighbouring geodesics in the wormhole background and then
have studied the evolution of their coordinate separation in terms of the
proper time $\tau$, which is also an affine parameter, see Fig. 1. Further, we
have depicted in Fig. 2 how the evolution of these coordinate separations is
affected by the presence of the GW pulse and also by the presence of the extra
dimension through non-zero values of $p$. In particular, as Fig. 2
demonstrates, even after the GW pulse has passed, there is a residual effect
which manifests as the GW memory, more specifically, as the displacement
memory effect. It turns out that these geodesics also exhibit velocity memory
effects, which can be seen from Fig. 3.
Let us discuss the above scenario in detail and consider the various boundary
conditions imposed on the geodesics. First of all, in this work we have
considered two neighbouring timelike geodesics, which in the background
wormhole geometry satisfy the following condition on the equatorial plane,
$-f(r)\dot{u}^{2}-2g(r)\dot{u}\dot{r}+r^{2}\dot{\phi}^{2}-rH(u)\dot{\phi}^{2}=-1~{}.$
(15)
We choose the initial conditions as follows: both the geodesics are chosen to
have an initial separation in the radial coordinate $r$, as well as in the
null coordinate $u$, however they both start at the same azimuthal coordinate
$\phi$. These geodesics have fixed initial values of $\dot{r}$ and $\dot{u}$,
while the value of $\dot{\phi}$ will depend on the background geometry through
15. Then we define the following quantities $\Delta u$ and $\Delta r$, as
follows,
$\displaystyle\Delta u=u({\rm Geodesic~{}II})-u(\rm Geodesic~{}I)~{},$
$\displaystyle\Delta r=r({\rm Geodesic~{}II})-r(\rm Geodesic~{}I)~{},$
where $u(\rm Geodesic~{}I)$ and $r(\rm Geodesic~{}I)$ correspond to the
coordinates associated with geodesic I, while $u(\rm Geodesic~{}II)$ and
$r(\rm Geodesic~{}II)$ are the coordinates associated with geodesic II.
All of these are obtained by solving the geodesic equations, using appropriate
initial conditions, as discussed before. We choose our pulse profile such that
$A=1$ and $u_{0}=5$. The results arising out of the time evolution of $\Delta
u$ and $\Delta r$ for different values of the wormhole parameter $p$ have been
depicted in Fig. 1. We consider three cases corresponding to the following
values of $p$: i) $p=2.0$, ii) $p=1.5$ and iii) $p=1.05$. The value of the
mass is set to $M=3$ for obtaining the plots. It is clearly seen from these
plots that $\Delta u$ increases with decreasing values of $p$ (Fig. 1(a)),
whereas $\Delta r$ decreases with decreasing values of $p$ (Fig. 1(b)), i.e.,
$\Delta u$ and $\Delta r$ have opposite behaviours with the change of $p$. As
evident from these figures, the change $\Delta u$ is more pronounced as
compared to $\Delta r$. For clarity, in Fig. 1, the GW pulse is represented by
the filled region. The pulse shown in the plot is not the exact profile of the
GW; rather, we have used a scaled-up version of the original profile. The
affine parameter interval over which the GW pulse remains significant is
unaltered. We will now discuss the emergence of the displacement and velocity
memory effects in the wormhole geometry.
The existence of the displacement memory effect is evident from Fig. 2, where
we have shown that the differences $\Delta u$ and $\Delta r$ between two
neighbouring timelike geodesics depend on whether a GW pulse has passed
through them or not. In other words, the values of $\Delta u$ and $\Delta r$
after the GW has passed through do not return to those of the wormhole
background, and hence provide the displacement memory effect. This memory
effect depends not only on the strength of the GW pulse, but more so on the
background spacetime geometry. Different choices of the parameter $p$ will
lead to different deviations and hence to different memories. In particular,
the memory effect can be a potential candidate to test the existence of
additional hairs in the background spacetime outside a compact object. In the
present scenario, this corresponds to the fact that non-zero $p$ does affect
the displacement memory of neighbouring geodesics.
Finally, in addition to the displacement memory effect, the GW pulse in the
wormhole geometry also produces a velocity memory effect, as is clear from
Fig. 3. Both $\Delta\dot{u}$ and $\Delta\dot{r}$ are non-zero and differ from
their background values in the presence of the GW pulse. This memory effect
also depends on the choice of the wormhole hair $p$, and possibly a combined
study of the displacement and velocity memory effects will lead to an
existential proof of non-zero values of $p$. Therefore, we can conclude that
both displacement and velocity memory effects exist in the case of the
braneworld wormhole and depend crucially on the choice of $p$. This provides
another avenue to search for these non-black hole compact objects, which can
also hint at the existence of an extra spatial dimension.
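The qualitative behaviour described above can be illustrated with a toy model much simpler than the full wormhole geodesic equations: a deviation variable $\xi(u)$ driven by a transient $\operatorname{sech}^{2}$-shaped pulse. The amplitude, integration range, and initial conditions below are illustrative assumptions, not the paper's actual metric functions; the point is only that a pulse of finite duration leaves behind permanent offsets in both $\xi$ and $\dot{\xi}$, i.e. displacement and velocity memory.

```python
import math

def evolve(amplitude=0.05, u0=-15.0, u1=15.0, du=1e-3):
    """Integrate xi'' = amplitude * sech(u)**2 (a toy driving pulse)
    with xi(u0) = 0, xi'(u0) = 0, using simple semi-implicit Euler steps."""
    xi, v, u = 0.0, 0.0, u0
    while u < u1:
        a = amplitude / math.cosh(u) ** 2   # pulse is negligible for |u| >> 1
        v += a * du
        xi += v * du
        u += du
    return xi, v

xi_final, v_final = evolve()
# Velocity memory: v_final tends to amplitude * integral of sech^2 = 2 * amplitude,
# while the displacement xi_final keeps growing linearly after the pulse has passed.
print(xi_final, v_final)
```

The residual velocity (here $2\times 0.05=0.1$) never returns to zero, which is the velocity memory; the ever-growing separation is the displacement memory.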
## IV Bondi-Sachs formalism and memory effect from null infinity
Having discussed the displacement and the velocity memory effects from the
geodesic analysis in the previous section, we now focus our attention on
investigating the memory effect at null infinity [58]. Since the effective
gravitational field equations on the brane, as presented in Eq. (1), are
equivalent to those of the generalized Brans-Dicke theory, with the
Brans-Dicke parameter $\omega$ being also a function of the radion field, we
appropriately reformulate the Bondi-Sachs analysis of the memory effect as
prescribed in [67, 68]. As in the previous section, here also we assume that
the spacetime of interest corresponds to a background wormhole spacetime with
a GW pulse passing through it, ultimately reaching future null infinity and,
as a consequence, modifying the Bondi mass aspect. In particular, we will
derive the functional form of the Bondi mass aspect in terms of the wormhole
parameters, which in turn are related to the physics of the higher-dimensional
spacetime through the background wormhole solution. Note that without the GW
pulse there would be no dynamics associated with the Bondi mass of the
background wormhole metric, since it depicts a static background.
For this purpose, we have to express the background wormhole geometry in null
coordinates, as presented in Eq. (6), in the Bondi-Sachs form, by an expansion
of the same in inverse powers of the radial coordinate $r$. Such an expansion
yields,
$\begin{split}ds^{2}=&-du^{2}-2dudr+r^{2}\gamma_{AB}dx^{A}dx^{B}\\
&+\frac{2M}{r(p+1)}du^{2}-\frac{2Mp}{r(p+1)}dudr+\mathcal{O}(r^{-2})~{},\end{split}$
(16)
where $\gamma_{AB}$ is the metric on the unit two-sphere. The terms in the
first line of Eq. (16) constitute the flat metric at future null infinity, and
the terms in the second line are the desired $(1/r)$ corrections. As evident,
the Bondi mass aspect is simply given by $M/(p+1)$ and is a constant for the
background spacetime. This provides the behaviour of the background static
wormhole geometry at future null infinity; the analogous result for the
matter fields follows next.
The matter fields consist of two parts, the matter on the four-dimensional
brane and the radion field. The dependence of the on-brane matter fields on
the radial coordinate $r$ can be found in [47], and it follows that at future
null infinity $\mathscr{I}^{+}$ the matter components fall off faster than
$1/r^{2}$ and hence need not be considered for the calculation. On the other
hand, the fall-off behaviour of the radion field at future null infinity (in
Eq. (9) we had written the scalar field in terms of an isotropic coordinate
$\tilde{r}$; since in the asymptotic limit $\tilde{r}$ equals $r$, one can
safely expand the scalar field in powers of $1/r$) can be parametrized as:
$\Phi_{\rm
b}=\Phi_{0\textrm{(b)}}+\Phi_{1\textrm{(b)}}/r+\Phi_{2\textrm{(b)}}/r^{2}+\cdots$,
where ‘b’ denotes that these are contributions from the background wormhole
geometry. The term $\Phi_{0\textrm{(b)}}$, which is the leading order term in
the asymptotic expansion of the radion field, reads,
$\displaystyle\Phi_{0\textrm{(b)}}$
$\displaystyle=\bigg{(}\frac{C_{1}}{M}\log\frac{p+1}{p-1}+C_{4}\bigg{)}^{2}-1~{}.$
(17)
In an identical manner, we obtain the coefficients of $(1/r)$ and $(1/r^{2})$
terms, in the asymptotic expansion of the radion field $\Phi$ as,
$\displaystyle\Phi_{1\textrm{(b)}}$
$\displaystyle=-\frac{2C_{1}}{p+1}\bigg{(}\frac{C_{1}}{M}\log\frac{p+1}{p-1}+C_{4}\bigg{)}~{},$
(18) $\displaystyle\Phi_{2\textrm{(b)}}$
$\displaystyle=\frac{C_{1}^{2}}{(p+1)^{2}}+\frac{2MpC_{1}}{(p+1)^{2}}\bigg{(}\frac{C_{1}}{M}\log\frac{p+1}{p-1}+C_{4}\bigg{)}~{},$
(19)
where $C_{1}$ and $C_{4}$ are non-zero constants used to quantify the radion
field [47]. Note that these expansion coefficients can be expressed in terms
of the parameters of the wormhole spacetime; in particular, they depend on
$p$. This fact will be used in later parts of this work.
The above analysis concerns the background spacetime, which is definitely
non-radiative: the metric given in Eq. (16) has a constant, non-dynamical
Bondi mass, since there is no news in the absence of any dynamics, as befits a
non-radiative geometry. Thus, the memory effect requires a propagating GW
pulse on top of this background geometry, leading to a finite radiative term.
Hence, we introduce a GW component to the above wormhole metric, and the final
result is the following axisymmetric line element (see [86]),
$\displaystyle ds^{2}=$ $\displaystyle-
du^{2}-2dudr+\bigg{(}\frac{2M}{r(p+1)}+\frac{2M_{\rm
B}(u,\theta)}{r}\bigg{)}du^{2}$
$\displaystyle-\bigg{(}\frac{2Mp}{r(p+1)}+\frac{b(u,\theta)}{r}\bigg{)}du\,dr-2r^{2}U(u,r,\theta)du\,d\theta$
$\displaystyle+r^{2}h_{AB}dx^{A}dx^{B}+\cdots~{}.$ (20)
Note that in the line element presented above, there is a dynamical Bondi mass
term $M_{\rm B}(u,\theta)$. This is due to the presence of the gravitational
radiation in the background of the wormhole spacetime.
Here, $h_{AB}=\gamma_{AB}+c_{AB}r^{-1}+\mathcal{O}(r^{-2})$ and the terms
$b(u,\theta),M_{\rm B}(u,\theta),U(u,r,\theta)$ and $c_{AB}$ arise due to the
presence of the GW pulse and can be considered as perturbations over and above
the braneworld wormhole metric. Moreover, for the above perturbed metric to be
consistent with the gravitational field equations, the radion field should be
perturbed as well and the resultant field becomes $\Phi=\Phi_{\rm b}+\Phi_{\rm
p}$, where $\Phi_{\rm b}$ denotes the background radion scalar field and
$\Phi_{\rm p}$ denotes the perturbed scalar field due to the scalar part of
the GW pulse. We assume that the leading order term in $\Phi_{\rm p}$ is
$\mathcal{O}(1/r)$, such that the Bondi determinant condition can be expressed
as,
$\det(h_{AB})=(\Phi_{0\textrm{(b)}}/\Phi)^{2}\sin^{2}\theta~{},$ (21)
which yields,
$c_{AB}=\hat{c}_{AB}-\gamma_{AB}(\Phi_{1}/\Phi_{0(\textrm{b})})~{}.$ (22)
Here, $\hat{c}_{AB}$ is the pure gravitational part and corresponds to the
transverse and traceless degrees of freedom. Also, in the above expression,
$\Phi_{1}=\Phi_{1(\textrm{b})}+\Phi_{1(\textrm{p})}$, is the coefficient of
the $\mathcal{O}(1/r)$ term in the asymptotic expansion of the radion field
$\Phi$. Since there is an additional GW pulse being considered here, it will
be described by a tensorial News $N_{AB}$ as well as a scalar News $N$, which
are given by,
$\displaystyle-\partial_{u}\hat{c}_{AB}$ $\displaystyle=N_{AB}$
$\displaystyle=\mathcal{N}_{1}\operatorname{sech}^{2}u\,(_{-2}Y^{20})\,\begin{pmatrix}1&0\\0&-\sin^{2}\theta\end{pmatrix}~{},$ (23)
$\displaystyle N$
$\displaystyle=\mathcal{N}_{2}\operatorname{sech}^{2}u\,(_{-2}Y^{20})\equiv\partial_{u}\Phi_{1(\textrm{p})}~{}.$
(24)
Note that the $\operatorname{sech}^{2}u$ behaviour is assumed in order to be
consistent with the discussion in the previous section; $N_{AB}$ embodies the
gravitational degrees of freedom, while $N$ encodes the scalar degree of
freedom. The amplitudes $\mathcal{N}_{1}$ and $\mathcal{N}_{2}$ are such that
the GW pulse can be considered as a perturbation over the wormhole background.
Now, the Bondi determinant condition, along with integration of the above
relations over the null coordinate $u$, yields the following change in the
Bondi shear $c_{AB}$:
$\begin{split}\triangle
c_{AB}&=\triangle\hat{c}_{AB}-\gamma_{AB}\frac{\triangle\Phi_{1(\textrm{p})}}{\Phi_{0(\textrm{b})}}=-\mathcal{N}_{1}\,(_{-2}Y^{20})\,\begin{pmatrix}1&0\\0&-\sin^{2}\theta\end{pmatrix}\int_{-\infty}^{\infty}\operatorname{sech}^{2}u\,du\\
&-\frac{\mathcal{N}_{2}}{\Phi_{0(\textrm{b})}}(_{-2}Y^{20})\begin{pmatrix}1&0\\0&\sin^{2}\theta\end{pmatrix}\int_{-\infty}^{\infty}\operatorname{sech}^{2}u\,du=-2\,(_{-2}Y^{20})\,\begin{pmatrix}\mathcal{N}_{1}+\dfrac{\mathcal{N}_{2}}{\Phi_{0(\textrm{b})}}&0\\0&\sin^{2}\theta\bigg{(}-\mathcal{N}_{1}+\dfrac{\mathcal{N}_{2}}{\Phi_{0(\textrm{b})}}\bigg{)}\end{pmatrix}\end{split}$
(25)
Here, the term ${}_{-2}Y^{20}$ is the spin-weighted spherical harmonic (since
the system is axisymmetric, the harmonic is chosen such that there is no
dependence on $\phi$), having the expression:
${}_{-2}Y^{20}=(3/4)\sqrt{5/6\pi}~{}\sin^{2}\theta$. The above equation,
namely Eq. (25), shows that the change in the Bondi shear, represented by
$\triangle c_{AB}$, is not traceless, with the trace depending on the presence
of the scalar memory via $\Phi_{0(\textrm{b})}$. Therefore, the existence of a
non-zero trace for the Bondi shear will signal the presence of an additional
scalar degree of freedom in the system, possibly arising from extra
dimensions. Thus, the total gravitational memory in the spacetime has both a
tensorial part and a scalar part, the latter arising from the radion scalar
field of the underlying theory.
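The trace structure of Eq. (22) can be checked numerically. The sketch below uses arbitrarily chosen illustrative values for $\theta$, $\Phi_{0(\textrm{b})}$, $\Phi_{1}$ and the amplitude of the traceless part $\hat{c}_{AB}$ (none of them derived from the wormhole solution): with $c_{AB}=\hat{c}_{AB}-\gamma_{AB}\Phi_{1}/\Phi_{0(\textrm{b})}$, the determinant condition $\det(h_{AB})=(\Phi_{0(\textrm{b})}/\Phi)^{2}\sin^{2}\theta$ holds through $\mathcal{O}(1/r)$, so the residual falls off as $r^{-2}$.

```python
import math

theta = 0.7
s2 = math.sin(theta) ** 2
phi0, phi1 = 2.0, 0.3          # illustrative expansion coefficients of the radion field
a = 0.1                        # amplitude of the traceless part c_hat

def residual(r):
    # gamma_AB = diag(1, sin^2 theta) on the unit two-sphere;
    # c_hat_AB = diag(a, -a sin^2 theta) is traceless w.r.t. gamma_AB
    c_theta = a - phi1 / phi0          # c_AB = c_hat_AB - gamma_AB * phi1/phi0
    c_phi = -a * s2 - s2 * phi1 / phi0
    det_h = (1.0 + c_theta / r) * (s2 + c_phi / r)
    phi = phi0 + phi1 / r
    rhs = (phi0 / phi) ** 2 * s2
    return abs(det_h - rhs)

# the residual should scale as 1/r^2: a tenfold increase in r shrinks it ~100x
r1, r2 = residual(1e3), residual(1e4)
print(r1, r1 / r2)
```

The ratio near 100 confirms that the mismatch is entirely at $\mathcal{O}(1/r^{2})$, i.e. the trace $-2\Phi_{1}/\Phi_{0(\textrm{b})}$ in Eq. (22) is exactly what the determinant condition requires at leading order.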
(a) Variation of the Bondi shear with the wormhole hair $p$ on the
$\theta=(\pi/2)$ plane.
(b) Response of the change in the Bondi shear to the GW pulse, plotted against
$\theta$ for different values of $p$.
Figure 4: Variation of the Bondi shear (also known as the memory tensor) due
to the passage of a GW pulse, presented for various choices of the wormhole
parameters.
The effects described above are clearly depicted in Fig. 4, where the
behaviour of the Bondi shear has been presented with variation in the wormhole
parameter $p$.
Finally, we solve the gravitational field equations order by order to obtain
the change in the Bondi mass aspect of the system, namely $\Delta M_{\rm B}$.
For this purpose, we re-express Eq. (20) in a convenient form, fit for an
axisymmetric system [86], such that,
$\begin{split}ds^{2}=&-\exp\left[\sigma(u,r,\theta)\right]du^{2}-2\exp\left[2\beta(u,r,\theta)\right]dudr-2r^{2}\exp\left[2\left\{\gamma(u,r,\theta)-\delta(u,r,\theta)\right\}\right]U_{1}(u,r,\theta)du\,d\theta\\
&+r^{2}\exp\left[2\left\{\gamma(u,r,\theta)-\delta(u,r,\theta)\right\}\right]d\theta^{2}+r^{2}\exp\left[2\left\{-\gamma(u,r,\theta)-\delta(u,r,\theta)\right\}\right]d\phi^{2}~{}.\end{split}$
(26)
The metric functions $\sigma$, $\beta$, $U_{1}$, $\gamma$ and $\delta$,
appearing in the above expression, are all expanded in the inverse powers of
the radial coordinate $r$, yielding,
$\displaystyle\sigma(u,r,\theta)=\frac{\sigma_{1}(u,\theta)}{r}+\frac{\sigma_{2}(u,\theta)}{r^{2}}+\mathcal{O}(r^{-3})$
(27)
$\displaystyle\beta(u,r,\theta)=\frac{\beta_{1}(u,\theta)}{r}+\frac{\beta_{2}(u,\theta)}{r^{2}}+\mathcal{O}(r^{-3})$
(28) $\displaystyle
U_{1}(u,r,\theta)=\frac{U_{11}(u,\theta)}{r}+\frac{U_{12}(u,\theta)}{r^{2}}+\mathcal{O}(r^{-3})$
(29)
$\displaystyle\gamma(u,r,\theta)=\frac{\gamma_{1}(u,\theta)}{r}+\frac{\gamma_{2}(u,\theta)}{r^{2}}+\mathcal{O}(r^{-3})$
(30)
$\displaystyle\delta(u,r,\theta)=\frac{\delta_{1}(u,\theta)}{r}+\frac{\delta_{2}(u,\theta)}{r^{2}}+\mathcal{O}(r^{-3})$
(31)
These expansions must be compared with the Bondi-Sachs form of the wormhole
metric with a GW pulse, as presented in Eq. (20), from which we find the
following correspondence,
$\displaystyle\sigma_{1}(u,\theta)$ $\displaystyle=$
$\displaystyle-\frac{2M}{p+1}-2M_{\rm B}(u,\theta)~{},$
$\displaystyle\beta_{1}(u,\theta)$ $\displaystyle=$
$\displaystyle\frac{M\,p}{p+1}+\frac{b(u,\theta)}{2}~{},$
$\displaystyle\delta_{1}(u,\theta)$ $\displaystyle=$
$\displaystyle\frac{\Phi_{1}}{2\Phi_{0(\textrm{b})}}~{},$
$\displaystyle\gamma_{1}(u,\theta)$ $\displaystyle=$
$\displaystyle\frac{\hat{c}_{\theta\theta}}{2}\equiv\frac{\hat{c}(u,\theta)}{2}~{},$
$\displaystyle\delta_{2}(u,\theta)$ $\displaystyle=$
$\displaystyle\frac{\Phi_{2}}{2\Phi_{0(\textrm{b})}}~{},$
$\displaystyle U(u,r,\theta)$ $\displaystyle=$
$\displaystyle U_{1}(u,r,\theta)\exp\left[2\left\{\gamma(u,r,\theta)-\delta(u,r,\theta)\right\}\right]~{}.$
Figure 5: Variation of the change in the Bondi mass due to GW pulse has been
presented with the null coordinate $u$, for different choices of $p$.
To proceed further, we use the fact that the field equations in the
Bondi-Sachs formalism follow a nested pattern. One starts with initial data,
prescribed in terms of the functions $\hat{c}_{\theta\theta}(u,\theta)$ and
$\Phi_{1(\textrm{p})}(u,\theta)$ at some value of $u$. Subsequently, the
hypersurface equations $G^{u}\,_{r}$ and $G^{u}\,_{\theta}$ respectively yield
(the following field equations are solved using the RGTC package in the
symbolic manipulation software Mathematica),
$\displaystyle\mathcal{O}(r^{-1})$ $\displaystyle:\quad U_{11}=0~{},$ (33)
$\displaystyle\mathcal{O}(r^{-2})$
$\displaystyle:\quad U_{12}(u,\theta)=-\frac{\partial_{\theta}\hat{c}(u,\theta)}{2}-\hat{c}(u,\theta)\cot\theta~{},$
(34)
and finally, at order $\mathcal{O}(r^{-3})$, we obtain,
$\beta_{1}=-\frac{\Phi_{1}(u,\theta)}{2\Phi_{0(\textrm{b})}}~{};\qquad
b(u,\theta)=-\frac{\Phi_{1}(u,\theta)}{\Phi_{0(\textrm{b})}}-\frac{2M\,p}{p+1}~{}.$
(35)
Similarly, the hypersurface equation in the $G^{u}\,_{u}$ component of
Einstein’s equations yields an identity at the lowest order of
$\mathcal{O}(r^{-2})$, while the supplementary equation for the component
$G^{r}\,_{u}$ at $\mathcal{O}(r^{-2})$ yields the change in the Bondi mass
aspect of the system,
$\begin{split}\partial_{u}\sigma_{1}=&\frac{\left(\partial_{u}\Phi_{1}\right)^{2}}{2\Phi_{0}^{2}}-\frac{3\left(\partial_{u}\Phi_{1}\right)^{2}}{2\Phi_{0}(1+\Phi_{0})}-\frac{\Phi_{1}\partial_{u}^{2}\Phi_{1}}{\Phi_{0}^{2}}\\
&-\frac{\partial_{u}\Phi_{1}}{\Phi_{0}}+\frac{\left(\partial_{u}\hat{c}\right)^{2}}{2}+\partial_{u}\hat{c}-\frac{3}{2}\partial_{u}\partial_{\theta}\hat{c}\cot\theta-\frac{1}{2}\partial_{u}\partial_{\theta}^{2}\hat{c}~{}.\end{split}$
(36)
Integrating the above expression, we obtain the effective Bondi mass of the
system to yield,
$\begin{split}M_{\rm
B}=&-\frac{m}{p+1}-\frac{1}{2}\bigg{[}\frac{a^{2}Y^{2}}{2}+b^{2}Y^{2}\bigg\{\frac{1}{2\Phi_{0}^{2}}-\frac{3}{2\Phi_{0}(1+\Phi_{0})}\bigg\}\bigg{]}\times\bigg{(}\frac{2}{3}\tanh
u+\frac{1}{3}\operatorname{sech}^{2}u\tanh u\bigg{)}\\
&-\frac{1}{3\Phi_{0}^{2}}b^{2}Y^{2}\tanh^{3}u+\frac{1}{2}\tanh
u\bigg{(}\frac{bY}{\Phi_{0}}+aY+\frac{3}{2}aY_{,\theta}\cot\theta+aY_{,\theta\theta}\bigg{)}~{}.\end{split}$
(37)
The evolution of the Bondi mass with variation in $p$ has been depicted in
Fig. 5. In the plot we find that the drop in the Bondi mass is larger as the
value of $p$ decreases.
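As a small consistency check on the $u$-integration leading from Eq. (36) to Eq. (37): the combination $(2/3)\tanh u+(1/3)\operatorname{sech}^{2}u\tanh u$ appearing in Eq. (37) is an antiderivative of $\operatorname{sech}^{4}u$, the profile generated by the News-squared terms (each News factor carries $\operatorname{sech}^{2}u$). A quick numerical sketch confirming this identity:

```python
import math

def F(u):
    # candidate antiderivative appearing in the Bondi mass expression
    return (2.0 / 3.0) * math.tanh(u) + (1.0 / 3.0) * math.tanh(u) / math.cosh(u) ** 2

def f(u):
    return 1.0 / math.cosh(u) ** 4   # sech^4(u), i.e. the squared sech^2 News profile

# a central-difference derivative of F should reproduce f on a grid of u values
h = 1e-5
max_err = max(abs((F(u + h) - F(u - h)) / (2 * h) - f(u))
              for u in [x * 0.1 for x in range(-50, 51)])
print(max_err)
```

The vanishing maximum error verifies $\frac{d}{du}\left[\frac{2}{3}\tanh u+\frac{1}{3}\operatorname{sech}^{2}u\tanh u\right]=\operatorname{sech}^{4}u$, consistent with the $\tanh$-type build-up of the Bondi mass change in Fig. 5.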
## V Conclusions
In this article, we have explored certain aspects of the GW memory in a
wormhole background on the brane. The reason for considering the presence of
an extra spacetime dimension is two-fold: (i) the on-brane gravity theory is a
quasi scalar-tensor theory, with the scalar field capturing the imprints of
the spatial extra dimension, and hence any information about the scalar hair
will translate into possible inputs for the extra dimensions; (ii) in this
class the wormhole geometry is traversable and can be sustained without
invoking any exotic matter fields. In this manner we arrive at a possible
non-black hole compact object without using exotic matter fields, and hence it
provides an interesting, viable alternative to the standard black hole
paradigm in GR. We would like to mention that, besides the wormhole solution
considered here, there are other wormhole solutions, e.g., in the context of
scalar-coupled Einstein-Gauss-Bonnet gravity [77], where the matter field is
also not exotic. It will be interesting to study the stability, and hence the
memory effect, in such wormhole backgrounds as well.
We have first briefly reviewed the geometry of the wormhole spacetime and how
the presence of the extra dimension helps in constructing a traversable
wormhole without any exotic matter. Then we have explored the displacement and
velocity memory effects by analysing neighbouring geodesics in the wormhole
background in the presence of a localised GW pulse. We have shown explicitly
how the geodesic separation evolves before and after the passage of the pulse.
This explicitly establishes the existence of both displacement and velocity
memory effects. In addition, these memory effects depend crucially on the
hairs of this wormhole solution and hence differ from the corresponding memory
effect in the Schwarzschild spacetime. Therefore, memory effects are indeed
future pointers towards exploring the existence of non-black hole compact
objects in our universe, which in this context can also be related to the
existence of extra spatial dimensions.
Having observed that the memory effect indeed exists and depends on the
details of the wormhole geometry, we subsequently proceeded to study the
memory effect using symmetries at null infinity, via the Bondi-Sachs
formalism. For this purpose, we have expressed the spacetime metric of the
wormhole geometry in the Bondi coordinate system and have expanded the radion
field in inverse powers of the radial coordinate $r$. Considering a GW
perturbation in the system that satisfies the Bondi gauge conditions, we have
computed the Bondi shear as well as the Bondi mass aspect, and hence observed
that these also give rise to the memory effect. Again, the memory depends on
the wormhole parameter ($p$) through the leading order contribution from the
radion field.
Since the braneworld scenario considered in the present context very much
resembles a scalar-tensor theory of gravity, we use the formalism given in
[67] and show how the variation in the Bondi mass aspect, related to the
memory effect, depends explicitly on the wormhole parameters. The same
conclusion holds true for the geodesic memory effects as well. This variation
of the Bondi mass aspect is different from the black hole scenario and can
possibly be probed using future GW detectors. Moreover, generalization of the
present result to astrophysically relevant cases of rotating wormholes will be
of significant interest. Besides, this work can also be used as a prospect to
investigate supertranslations and soft hair implants on the throat of a
wormhole geometry (analogous studies for black hole horizons can be found in
[87] and for Rindler horizons in [88]). These issues we wish to study in the
future.
The memory effect encoded in the Bondi mass aspect is computed by solving the
gravitational field equations of the theory order by order in
$\mathcal{O}(1/r)$. Decomposing the field equations into hypersurface and
supplementary equations, we have determined the change in the Bondi mass of
the system analytically. We find that variation in the value of $p$ produces a
change in the evolution of the Bondi mass. This shows that it will be easier
to decipher whether the central compact object has a non-zero value of $p$,
since the presence of a non-zero $p$ will modify the memory significantly.
Therefore, as and when the memory effect is detected by future GW detectors,
it will possibly tell us about the existence of non-black hole compact
objects, and one can independently verify whether the braneworld wormhole is a
viable scenario.
There are several possible future extensions of the present work. First of
all, a complete analysis of the Bondi-Sachs formalism without assuming
axisymmetry will be an interesting and important extension. Moreover, studying
the memory effect for rotating wormholes and other rotating compact objects,
which are more relevant astrophysically, will be another interesting avenue to
explore. Finally, studying GW memory effects for other black hole mimicker
spacetimes, e.g., fuzzballs, will be of significant interest.
## Acknowledgements
I.C. acknowledges the University Grants Commission (UGC), Government of India,
for providing financial assistance through a senior research fellowship
(reference ID: 523711). Research of S.C. is funded by the INSPIRE Faculty
fellowship from DST, Government of India (Reg. No. DST/INSPIRE/04/2018/000893)
and by the Start-Up Research Grant from SERB, DST, Government of India (Reg.
No. SRG/2020/000409). S.C. further thanks the Albert-Einstein Institute, where
a part of this work was carried out and the Max-Planck Society for providing
the Max-Planck-India Mobility Grant.
## References
* Abbott _et al._ [2016] B. Abbott _et al._ (LIGO Scientific, Virgo), Phys. Rev. Lett. 116, 061102 (2016).
* Abbott _et al._ [2017] B. P. Abbott _et al._ (LIGO Scientific, Virgo), Phys. Rev. Lett. 119, 161101 (2017), arXiv:1710.05832 [gr-qc] .
* Akiyama _et al._ [2019a] K. Akiyama _et al._ (Event Horizon Telescope), Astrophys. J. Lett. 875, L1 (2019a), arXiv:1906.11238 [astro-ph.GA] .
* Akiyama _et al._ [2019b] K. Akiyama _et al._ (Event Horizon Telescope), Astrophys. J. Lett. 875, L2 (2019b), arXiv:1906.11239 [astro-ph.IM] .
* Akiyama _et al._ [2019c] K. Akiyama _et al._ (Event Horizon Telescope), Astrophys. J. Lett. 875, L3 (2019c), arXiv:1906.11240 [astro-ph.GA] .
* Akiyama _et al._ [2019d] K. Akiyama _et al._ (Event Horizon Telescope), Astrophys. J. Lett. 875, L4 (2019d), arXiv:1906.11241 [astro-ph.GA] .
* Akiyama _et al._ [2019e] K. Akiyama _et al._ (Event Horizon Telescope), Astrophys. J. Lett. 875, L5 (2019e), arXiv:1906.11242 [astro-ph.GA] .
* Akiyama _et al._ [2022a] K. Akiyama _et al._ (Event Horizon Telescope), Astrophys. J. Lett. 930, L12 (2022a).
* Akiyama _et al._ [2022b] K. Akiyama _et al._ (Event Horizon Telescope), Astrophys. J. Lett. 930, L17 (2022b).
* Yunes and Siemens [2013] N. Yunes and X. Siemens, Living Rev. Rel. 16, 9 (2013), arXiv:1304.3473 [gr-qc] .
  * The LIGO Scientific Collaboration _et al._ [2021] The LIGO Scientific Collaboration, The Virgo Collaboration, The KAGRA Collaboration, and R. Abbott _et al._, arXiv:2112.06861 (2021).
* Krishnendu and Ohme [2021] N. V. Krishnendu and F. Ohme, Universe 7, 497 (2021), arXiv:2201.05418 [gr-qc] .
* Johannsen _et al._ [2016] T. Johannsen, A. E. Broderick, P. M. Plewa, S. Chatzopoulos, S. S. Doeleman, F. Eisenhauer, V. L. Fish, R. Genzel, O. Gerhard, and M. D. Johnson, Phys. Rev. Lett. 116, 031101 (2016), arXiv:1512.02640 [astro-ph.GA] .
* Ayzenberg and Yunes [2018] D. Ayzenberg and N. Yunes, Class. Quant. Grav. 35, 235002 (2018), arXiv:1807.08422 [gr-qc] .
* Psaltis [2019] D. Psaltis, Gen. Rel. Grav. 51, 137 (2019), arXiv:1806.09740 [astro-ph.HE] .
* Banerjee _et al._ [2020a] I. Banerjee, S. Chakraborty, and S. SenGupta, Phys. Rev. D 101, 041301 (2020a), arXiv:1909.09385 [gr-qc] .
* Chakraborty _et al._ [2022] S. Chakraborty, E. Maggio, A. Mazumdar, and P. Pani, (2022), arXiv:2202.09111 [gr-qc] .
* Mishra _et al._ [2019] A. K. Mishra, S. Chakraborty, and S. Sarkar, Phys. Rev. D 99, 104080 (2019), arXiv:1903.06376 [gr-qc] .
* Mathur [2005] S. D. Mathur, Fortsch. Phys. 53, 793 (2005), arXiv:hep-th/0502050 .
* Cardoso _et al._ [2016a] V. Cardoso, E. Franzin, and P. Pani, Phys. Rev. Lett. 116, 171101 (2016a).
* Cardoso and Pani [2017] V. Cardoso and P. Pani, Nature Astron. 1, 586 (2017), arXiv:1709.01525 [gr-qc] .
* Cardoso and Pani [2019] V. Cardoso and P. Pani, Living Rev. Rel. 22, 4 (2019), arXiv:1904.05363 [gr-qc] .
* Mazur and Mottola [2001] P. O. Mazur and E. Mottola, arxiv:gr-qc/0109035 (2001).
* Almheiri _et al._ [2013] A. Almheiri, D. Marolf, J. Polchinski, and J. Sully, Journal of High Energy Physics 2013, 10.1007/jhep02(2013)062 (2013).
* Lemos and Zaslavskii [2008] J. P. S. Lemos and O. B. Zaslavskii, Phys. Rev. D 78, 024040 (2008), arXiv:0806.0845 [gr-qc] .
* Pani _et al._ [2008] P. Pani, V. Cardoso, M. Cadoni, and M. Cavaglia, PoS BHGRS, 027 (2008), arXiv:0901.0850 [gr-qc] .
* Konoplya and Zhidenko [2016] R. A. Konoplya and A. Zhidenko, JCAP 12, 043, arXiv:1606.00517 [gr-qc] .
* Carballo-Rubio _et al._ [2018] R. Carballo-Rubio, F. Di Filippo, S. Liberati, and M. Visser, Phys. Rev. D 98, 124009 (2018).
* Abedi _et al._ [2017] J. Abedi, H. Dykaar, and N. Afshordi, Phys. Rev. D 96, 082004 (2017), arXiv:1612.00266 [gr-qc] .
* Mark _et al._ [2017] Z. Mark, A. Zimmerman, S. M. Du, and Y. Chen, Phys. Rev. D 96, 084002 (2017), arXiv:1706.06155 [gr-qc] .
* Sennett _et al._ [2017] N. Sennett, T. Hinderer, J. Steinhoff, A. Buonanno, and S. Ossokine, Phys. Rev. D 96, 024002 (2017), arXiv:1704.08651 [gr-qc] .
* Oshita and Afshordi [2019] N. Oshita and N. Afshordi, Phys. Rev. D 99, 044002 (2019), arXiv:1807.10287 [gr-qc] .
* Bueno _et al._ [2018] P. Bueno, P. A. Cano, F. Goelen, T. Hertog, and B. Vercnocke, Phys. Rev. D 97, 024040 (2018), arXiv:1711.00391 [gr-qc] .
* Cardoso _et al._ [2016b] V. Cardoso, S. Hopper, C. F. B. Macedo, C. Palenzuela, and P. Pani, Phys. Rev. D 94, 084031 (2016b), arXiv:1608.08637 [gr-qc] .
* Dey _et al._ [2020] R. Dey, S. Chakraborty, and N. Afshordi, Phys. Rev. D 101, 104014 (2020), arXiv:2001.01301 [gr-qc] .
* Dey _et al._ [2021] R. Dey, S. Biswas, and S. Chakraborty, Phys. Rev. D 103, 084019 (2021), arXiv:2010.07966 [gr-qc] .
* Morris _et al._ [1988] M. S. Morris, K. S. Thorne, and U. Yurtsever, Phys. Rev. Lett. 61, 1446 (1988).
* Bronnikov and Kim [2003] K. A. Bronnikov and S.-W. Kim, Phys. Rev. D 67, 064027 (2003), arXiv:gr-qc/0212112 .
* Bronnikov _et al._ [2003] K. A. Bronnikov, V. N. Melnikov, and H. Dehnen, Phys. Rev. D 68, 024025 (2003).
* Tsukamoto _et al._ [2012] N. Tsukamoto, T. Harada, and K. Yajima, Phys. Rev. D 86, 104062 (2012), arXiv:1207.0047 [gr-qc] .
* Ohgami and Sakai [2015] T. Ohgami and N. Sakai, Phys. Rev. D 91, 124020 (2015), arXiv:1704.07065 [gr-qc] .
* Shaikh _et al._ [2019] R. Shaikh, P. Banerjee, S. Paul, and T. Sarkar, Phys. Lett. B 789, 270 (2019), [Erratum: Phys.Lett.B 791, 422–423 (2019)], arXiv:1811.08245 [gr-qc] .
* Banerjee _et al._ [2021] P. Banerjee, S. Paul, R. Shaikh, and T. Sarkar, JCAP 03, 042, arXiv:1912.01184 [astro-ph.HE] .
* Dutta Roy _et al._ [2020] P. Dutta Roy, S. Aneesh, and S. Kar, Eur. Phys. J. C 80, 850 (2020), arXiv:1910.08746 [gr-qc] .
* Bronnikov and Konoplya [2020] K. A. Bronnikov and R. A. Konoplya, Phys. Rev. D 101, 064004 (2020).
* Franzin _et al._ [2022] E. Franzin, S. Liberati, J. Mazza, R. Dey, and S. Chakraborty, arxiv:2201.01650 (2022).
* Kar _et al._ [2015] S. Kar, S. Lahiri, and S. SenGupta, Phys. Lett. B 750, 319 (2015), arXiv:1505.06831 [gr-qc] .
* Randall and Sundrum [1999] L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 3370 (1999), arXiv:hep-ph/9905221 .
* Biswas _et al._ [2022] S. Biswas, M. Rahman, and S. Chakraborty, arxiv:2205.14743 (2022).
* Banerjee _et al._ [2020b] I. Banerjee, S. Chakraborty, and S. SenGupta, Phys. Rev. D 101, 041301 (2020b), arXiv:1909.09385 [gr-qc] .
* Boersma _et al._ [2020] O. M. Boersma, D. A. Nichols, and P. Schmidt, Phys. Rev. D 101, 083026 (2020).
* Braginsky and Grishchuk [1985] V. B. Braginsky and L. P. Grishchuk, Sov. Phys. JETP 62, 427 (1985).
* Favata [2010] M. Favata, Class. Qtm. Grav. 27, 084036 (2010).
* Hübner _et al._ [2021] M. Hübner, P. Lasky, and E. Thrane, Phys. Rev. D 104, 023004 (2021).
* Lasky _et al._ [2016] P. D. Lasky, E. Thrane, Y. Levin, J. Blackman, and Y. Chen, Phys. Rev. Lett. 117, 061102 (2016).
* Zel’dovich and Polnarev [1974] Y. B. Zel’dovich and A. G. Polnarev, Sov. Astron 18, 17 (1974).
* Kovacs and Thorne [1978] S. J. Kovacs and K. S. Thorne, Astrophys. J. 224, 62 (1978).
* Christodoulou [1991] D. Christodoulou, Phys. Rev. Lett. 67, 1486 (1991).
* Bieri and Garfinkle [2013] L. Bieri and D. Garfinkle, Class. Qtm. Grav. 30, 195009 (2013).
* Winicour [2014] J. Winicour, Class. Quant. Grav. 31, 205003 (2014), arXiv:1407.0259 [gr-qc] .
* Pate _et al._ [2017] M. Pate, A.-M. Raclariu, and A. Strominger, Phys. Rev. Lett. 119, 261602 (2017).
* Jokela _et al._ [2019] N. Jokela, K. Kajantie, and M. Sarkkinen, Phys. Rev. D 99, 116003 (2019).
* Hollands _et al._ [2017] S. Hollands, A. Ishibashi, and R. M. Wald, Classical and Quantum Gravity 34, 155005 (2017).
* Satishchandran and Wald [2018] G. Satishchandran and R. M. Wald, Phys. Rev. D 97, 024036 (2018).
* Ferko _et al._ [2022] C. Ferko, G. Satishchandran, and S. Sethi, Phys. Rev. D 105, 024072 (2022), arXiv:2109.11599 [gr-qc] .
* Du and Nishizawa [2016] S. M. Du and A. Nishizawa, Phys. Rev. D 94, 104063 (2016).
* Hou and Zhu [2021] S. Hou and Z.-H. Zhu, JHEP 01, 083, arXiv:2005.01310 [gr-qc] .
* Hou [2020] S. Hou, in _9th International Workshop on Astronomy and Relativistic Astrophysics_ (2020) arXiv:2011.02087 [gr-qc] .
* [69] S. Tahura, D. A. Nichols, A. Saffer, L. C. Stein, and K. Yagi, arXiv:2007.13799 [gr-qc] .
* Hou _et al._ [2022] S. Hou, T. Zhu, and Z.-H. Zhu, Phys. Rev. D 105, 024025 (2022).
* Donnay _et al._ [2018] L. Donnay, G. Giribet, H. A. González, and A. Puhm, Physical Review D 98, 10.1103/physrevd.98.124016 (2018).
* Bhattacharjee _et al._ [2019] S. Bhattacharjee, S. Kumar, and A. Bhattacharyya, Phys. Rev. D 100, 084010 (2019).
* Rahman and Wald [2020] A. A. Rahman and R. M. Wald, Physical Review D 101, 10.1103/physrevd.101.124010 (2020).
* Zhang _et al._ [2017a] P.-M. Zhang, C. Duval, G. W. Gibbons, and P. A. Horvathy, Phys. Rev. D 96, 064013 (2017a).
* Mädler and Winicour [2016] T. Mädler and J. Winicour, Scholarpedia 11, 33528 (2016), arXiv:1609.01731 [gr-qc] .
* Kanno and Soda [2002] S. Kanno and J. Soda, Phys. Rev. D 66, 043526 (2002), arXiv:hep-th/0205188 .
* Antoniou _et al._ [2020] G. Antoniou, A. Bakopoulos, P. Kanti, B. Kleihaus, and J. Kunz, Phys. Rev. D 101, 024033 (2020), arXiv:1904.13091 [hep-th] .
* Faraoni [2004] V. Faraoni, _Cosmology in scalar tensor gravity_ (Kluwer Academic Publishers, Dordecht, The Netherlands, 2004).
* Dadhich _et al._ [2002] N. Dadhich, S. Kar, S. Mukherjee, and M. Visser, Phys. Rev. D 65, 064004 (2002).
* Zhang _et al._ [2017b] P.-M. Zhang, C. Duval, G. W. Gibbons, and P. A. Horvathy, Phys. Lett. B 772, 743 (2017b).
* Flanagan _et al._ [2019] E. E. Flanagan, A. M. Grant, A. I. Harte, and D. A. Nichols, Phys. Rev. D 99, 084044 (2019).
* Chakraborty and Kar [2020a] I. Chakraborty and S. Kar, Phys. Rev. D 101, 064022 (2020a).
* Chakraborty and Kar [2020b] I. Chakraborty and S. Kar, Phys. Lett. B 808, 135611 (2020b).
* Siddhant _et al._ [2021] S. Siddhant, I. Chakraborty, and S. Kar, Eur. Phys. J. C 81, 350 (2021).
* Chakraborty [2022] I. Chakraborty, Phys. Rev. D 105, 024063 (2022).
* Bondi _et al._ [1962] H. Bondi, M. G. J. van der Burg, and A. W. K. Metzner, Proc. Roy. Soc. Lond. A 269, 21 (1962).
* Hawking _et al._ [2016] S. W. Hawking, M. J. Perry, and A. Strominger, Physical Review Letters 116, 10.1103/physrevlett.116.231301 (2016).
* Kolekar and Louko [2017] S. Kolekar and J. Louko, Phys. Rev. D 96, 024054 (2017), arXiv:1703.10619 [hep-th] .
every pair of servers. Let $\Pi_{\sf OT}^{\sf ij}$ denote an instance of
1-out-of-2 OT with $\mathsf{S}_{\sf i}$ being the sender and $\mathsf{S}_{\sf
j}$ being the receiver. Here, $\mathsf{S}_{\sf i}$ inputs the sender messages
$(x_{0},x_{1})$ while $\mathsf{S}_{\sf j}$ inputs the receiver choice bit
$c\in\mathbb{Z}_{2}$ and obtains $x_{c}$ as the output, for
$x_{0},x_{1}\in\mathbb{Z}_{2^{\ell}}$.
Preprocessing:
1. Execute $\Pi_{\sf BitA}^{\sf Pre}([{\lambda}_{\mathsf{b}}]^{\bf B})$ to obtain $[{\lambda}_{\mathsf{b}}]$.
2. Locally generate $([r],\langle r\rangle)$ for a random $r\in\mathbb{Z}_{2^{\ell}}$.
Online:
1. $\mathsf{S}_{j}$, for $j\in[\tau]$, locally computes the following ($\Delta=1$ if $j=1$, else $0$):
   • $[(z-r)]_{j}=\Delta\cdot\mathsf{m}_{\mathsf{b}}+(1-2\mathsf{m}_{\mathsf{b}})\cdot[{\lambda}_{\mathsf{b}}]_{j}-[r]_{j}$.
2. $\mathsf{S}_{j}$, for $j\in[\tau]$, sends $[(z-r)]_{j}$ to $\mathsf{S}_{1}$, who computes $(z-r)$ and sends it to all the servers.
3. Locally compute $\langle z\rangle=\langle(z-r)\rangle+\langle r\rangle$.
Figure 12: Bit-to-arithmetic conversion protocol.
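The correctness of the protocol in Fig. 12 rests on the integer identity $\mathsf{b}=\mathsf{m}_{\mathsf{b}}\oplus{\lambda}_{\mathsf{b}}=\mathsf{m}_{\mathsf{b}}+(1-2\mathsf{m}_{\mathsf{b}}){\lambda}_{\mathsf{b}}$, which lets the servers unmask the bit locally on arithmetic shares. The sketch below is a plain Python simulation with additive sharing modulo $2^{\ell}$; the choices $\tau=3$ servers and $\ell=8$ are illustrative, not mandated by the protocol. It replays the online phase and checks that the reconstructed $z$ equals the original bit $\mathsf{b}$:

```python
import random

ELL = 8            # illustrative ring size 2**ELL
MOD = 2 ** ELL
TAU = 3            # illustrative number of servers

def share(x):
    """Additive sharing of x modulo 2**ELL among TAU servers."""
    parts = [random.randrange(MOD) for _ in range(TAU - 1)]
    parts.append((x - sum(parts)) % MOD)
    return parts

for _ in range(100):
    b = random.randrange(2)          # secret bit
    lam = random.randrange(2)        # Boolean mask lambda_b
    m = b ^ lam                      # public masked value m_b
    lam_sh = share(lam)              # arithmetic shares of lambda_b (preprocessing)
    r = random.randrange(MOD)
    r_sh = share(r)                  # shares of the random mask r (preprocessing)
    # online phase: each server S_j computes its share of (z - r) locally
    zr_sh = [((1 if j == 0 else 0) * m + (1 - 2 * m) * lam_sh[j] - r_sh[j]) % MOD
             for j in range(TAU)]
    z_minus_r = sum(zr_sh) % MOD     # reconstructed by S_1 and broadcast
    z = (z_minus_r + r) % MOD
    assert z == b                    # arithmetic value of z equals the bit b
print("ok")
```

The local step works because $\mathsf{m}_{\mathsf{b}}+(1-2\mathsf{m}_{\mathsf{b}}){\lambda}_{\mathsf{b}}$ evaluates to ${\lambda}_{\mathsf{b}}$ when $\mathsf{m}_{\mathsf{b}}=0$ and to $1-{\lambda}_{\mathsf{b}}$ when $\mathsf{m}_{\mathsf{b}}=1$, i.e. exactly the XOR.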
To generate the arithmetic sharing of ${\lambda}_{\mathsf{b}}$ from its
Boolean shares in $[\cdot]$-shared form, a simple method would be to apply a
3-XOR using a daBit-style approach [107], but this would result in 12
executions of 1-out-of-2 OT. However, as pointed out in Prio+ [2], the cost
can be further optimized owing to the semi-honest security model considered in
this work, as opposed to the malicious model in [107]. Since Prio+ operates
over two MPC servers, we extend their optimized daBit-generation protocol (cf.
[2, $\mathsf{daBitGen}_{p}$]) to our setting with three servers.
Given two bits $\mathsf{b}_{\sf i},\mathsf{b}_{\sf j}\in\mathbb{Z}_{2}$, the
arithmetic share corresponding to their product can be generated using one
instance of $\Pi_{\sf OT}^{\sf ij}$ with $(x_{0}=r,x_{1}=r+\mathsf{b}_{\sf
i})$ as the OT-sender messages and $\mathsf{b}_{\sf j}$ as the OT-receiver
choice bit. With this observation and using Eq. 3.1, servers can compute
$[\cdot]$-shares corresponding to the bit ${\lambda}_{\mathsf{b}}$ using five
OT invocations. The formal details appear in Fig. B.1.2.
OT Instance - I: $[\mathsf{b}]_{1}[\mathsf{b}]_{2}$ 1. $\mathsf{S}_{1}$
samples random $r_{12}\in\mathbb{Z}_{2^{\ell}}$. 2. $\mathsf{S}_{1}$ and
$\mathsf{S}_{2}$ execute $\Pi_{\sf OT}^{\sf 12}((r_{12},r_{12}+[\mathsf{b}]_{1}),[\mathsf{b}]^{\bf B}_{2})$. 3.
$\mathsf{S}_{1}$ sets $y_{12}^{1}=-r_{12}$ and $\mathsf{S}_{2}$ sets the OT
output as $y_{12}^{2}$. OT Instances - II & III:
$[\mathsf{b}]_{1}[\mathsf{b}]_{3}$, $[\mathsf{b}]_{2}[\mathsf{b}]_{3}$ These
are similar to the computation of $[\mathsf{b}]_{1}[\mathsf{b}]_{2}$ discussed
above. OT Instances - IV & V:
$[\mathsf{b}]_{1}[\mathsf{b}]_{2}[\mathsf{b}]_{3}$ 1. Computation can be
broken down to
$([\mathsf{b}]_{1}[\mathsf{b}]_{2})\cdot[\mathsf{b}]_{3}=(y_{12}^{1}+y_{12}^{2})\cdot[\mathsf{b}]_{3}$.
2. Execute $\Pi_{\sf OT}^{\sf 13}$ for $y_{12}^{1}\cdot[\mathsf{b}]^{\bf
B}_{3}$. Let $z_{13}^{1}$ and $z_{13}^{2}$ denote the respective shares of
$\mathsf{S}_{1}$ and $\mathsf{S}_{3}$. 3. Execute $\Pi_{\sf OT}^{\sf 23}$ for
$y_{12}^{2}\cdot[\mathsf{b}]^{\bf B}_{3}$. Let $z_{23}^{1}$ and $z_{23}^{2}$
denote the respective shares of $\mathsf{S}_{2}$ and $\mathsf{S}_{3}$.
Computation of final shares $\mathsf{S}_{1}$:
$[\mathsf{b}]_{1}=\mathsf{b}_{1}-2y_{12}^{1}-2y_{13}^{1}+4z_{13}^{1}$.
$\mathsf{S}_{2}$:
$[\mathsf{b}]_{2}=\mathsf{b}_{2}-2y_{12}^{2}-2y_{23}^{1}+4z_{23}^{1}$.
$\mathsf{S}_{3}$:
$[\mathsf{b}]_{3}=\mathsf{b}_{3}-2y_{13}^{2}-2y_{23}^{2}+4z_{13}^{2}+4z_{23}^{2}$.
Figure 13: Bit-to-arithmetic preprocessing.
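The final shares above instantiate the multilinear expansion $\mathsf{b}_{1}\oplus\mathsf{b}_{2}\oplus\mathsf{b}_{3}=\mathsf{b}_{1}+\mathsf{b}_{2}+\mathsf{b}_{3}-2(\mathsf{b}_{1}\mathsf{b}_{2}+\mathsf{b}_{1}\mathsf{b}_{3}+\mathsf{b}_{2}\mathsf{b}_{3})+4\,\mathsf{b}_{1}\mathsf{b}_{2}\mathsf{b}_{3}$, with each cross term additively split between the two servers that run the corresponding OT. The following plain-Python sketch computes the $y$ and $z$ terms in the clear (standing in for the OT outputs; names are illustrative) and checks that the three final shares reconstruct the XOR:

```python
import random

def xor3_shares(b1, b2, b3, rng):
    """Split each cross term of b1^b2^b3 additively between two parties
    (mimicking the OT outputs of Fig. 13) and return the final shares."""
    r12, r13, r23 = (rng.randrange(1 << 16) for _ in range(3))
    # pairwise products, additively shared as in the first three OT instances
    y12_1, y12_2 = -r12, r12 + b1 * b2
    y13_1, y13_2 = -r13, r13 + b1 * b3
    y23_1, y23_2 = -r23, r23 + b2 * b3
    # triple product via (y12_1 + y12_2) * b3, shared between two OT runs
    s13, s23 = rng.randrange(1 << 16), rng.randrange(1 << 16)
    z13_1, z13_2 = -s13, s13 + y12_1 * b3
    z23_1, z23_2 = -s23, s23 + y12_2 * b3
    sh1 = b1 - 2 * y12_1 - 2 * y13_1 + 4 * z13_1
    sh2 = b2 - 2 * y12_2 - 2 * y23_1 + 4 * z23_1
    sh3 = b3 - 2 * y13_2 - 2 * y23_2 + 4 * z13_2 + 4 * z23_2
    return sh1, sh2, sh3

rng = random.Random(0)
for b1 in (0, 1):
    for b2 in (0, 1):
        for b3 in (0, 1):
            assert sum(xor3_shares(b1, b2, b3, rng)) == b1 ^ b2 ^ b3
```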
For the case of approximate bit conversion discussed in §3.1, the number of OT
instances can be further reduced to three following Eq. 9. Concretely, the
conversion involves computation of just
$[\mathsf{b}]_{1}[\mathsf{b}]_{2}[\mathsf{b}]_{3}$ and hence the OT instances
II & III described in Fig. B.1.2 are no longer needed.
Preprocessing: 1. Execute $\Pi_{\sf BitA}^{\sf
Pre}([{\lambda}_{\vec{\mathbf{M}}}]^{\bf B})$ to obtain
$[{\lambda}_{\vec{\mathbf{M}}}]$. 2. Locally generate $([r],\langle r\rangle)$
for a random $r\in\mathbb{Z}_{2^{\ell}}$. Online: 1. $\mathsf{S}_{j}$, for
$j\in[\tau]$, locally computes the following ($\Delta=1$ if $j=1$, else $0$):
• $[(z-r)]_{j}=\Delta\cdot{\sf Agg\mbox{-}R}(\mathsf{m}_{\vec{\mathbf{M}}})+(1-2\mathsf{m}_{\vec{\mathbf{M}}})\odot[{\lambda}_{\vec{\mathbf{M}}}]_{j}-[r]_{j}$.
2. $\mathsf{S}_{j}$, for $j\in[\tau]$, sends $[(z-r)]_{j}$ to
$\mathsf{S}_{1}$, who computes $(z-r)$ and sends to all the servers. 3.
Locally compute $\langle z\rangle=\langle(z-r)\rangle+\langle r\rangle$.
Figure 14: Bit-to-arithmetic sum protocol.
When computing the sum of bits directly, the online communication can be
optimized following the inner-product protocol, and the resulting protocol
$\Pi_{\sf BitA}^{\sf sum}$ is given in Fig. B.1.2.
Bit Injection Protocol. Given a Boolean vector
$\vec{\mathbf{M}}_{\sf\mathsf{d}\times 1}$ and an arithmetic vector
$\vec{\mathbf{N}}_{\sf\mathsf{d}\times 1}$ in the secret-shared form, protocol
$\Pi_{\sf BI}$ computes the inner product of the two vectors, defined as
$z=\vec{\mathbf{M}}\odot\vec{\mathbf{N}}$. This protocol is similar to the
inner product protocol $\Pi_{\sf IP}$ presented in Fig. B.1.2, with the main
difference being that $\vec{\mathbf{M}}$ is a Boolean vector.
During the preprocessing phase, servers first generate the arithmetic shares
of ${\lambda}_{\vec{\mathbf{M}}}$ from its Boolean shares, similar to the bit-
to-arithmetic protocol $\Pi_{\sf BitA}$ in Fig. B.1.2. The remaining steps are
similar to $\Pi_{\sf IP}$ in Fig. B.1.2 and we omit the details.
Preprocessing: 1. Execute $\Pi_{\sf BitA}^{\sf
Pre}([{\lambda}_{\vec{\mathbf{M}}}]^{\bf B})$ to obtain
$[{\lambda}_{\vec{\mathbf{M}}}]$. 2. Execute $\Pi_{\sf BI}^{\sf
Pre}([\vec{\mathbf{{\lambda}_{M}}}],[\vec{\mathbf{{\lambda}_{N}}}])$ to obtain
$[\vec{\mathbf{{\gamma}_{Q}}}]$ with
$\vec{\mathbf{{\gamma}_{Q}}}=\vec{\mathbf{{\lambda}_{M}}}\circ\vec{\mathbf{{\lambda}_{N}}}$.
3. Execute $\Pi_{\sf Tr}()$ to generate $([r],\langle
r/2^{\mathsf{f}}\rangle)$. Online: 1. $\mathsf{S}_{j}$, for $j\in[\tau]$,
locally computes the following ($\Delta=1$ if $j=1$, else $0$): •
$T_{j}^{1}=\Delta\cdot(\mathsf{m}_{\vec{\mathbf{M}}}\odot\mathsf{m}_{\vec{\mathbf{N}}})\
+\ \mathsf{m}_{\vec{\mathbf{M}}}\odot[{\lambda}_{\vec{\mathbf{N}}}]_{j}$. •
$T_{j}^{2}=((1-2\mathsf{m}_{\vec{\mathbf{M}}})\circ\mathsf{m}_{\vec{\mathbf{N}}})\odot[{\lambda}_{\vec{\mathbf{M}}}]_{j}\
+\ (1-2\mathsf{m}_{\vec{\mathbf{M}}})\odot[{\gamma}_{\vec{\mathbf{Q}}}]_{j}$.
• $[(z-r)]_{j}=T_{j}^{1}+T_{j}^{2}-[r]_{j}$. 2. $\mathsf{S}_{j}$, for
$j\in[\tau]$, sends $[(z-r)]_{j}$ to $\mathsf{S}_{1}$, who computes $(z-r)$
and sends to all the servers. 3. Locally compute $\langle
z\rangle=\langle(z-r)/2^{\mathsf{f}}\rangle+\langle r/2^{\mathsf{f}}\rangle$.
Figure 15: Bit injection (sum) protocol.
### B.2 ScionFL: Additional Details
This section provides additional details of our FL framework ScionFL presented in §4.
#### B.2.1 Efficiency of Approximate Bit Conversion
In this section, we measure the efficiency gains achieved by our approximation
method, discussed in §3.1, by counting the number of cross terms that must be
computed securely using MPC. Cross terms are terms that compute the product of
two or more shares. While the exact amount of computation and communication
varies depending on the MPC protocol and setting (e.g., honest vs. dishonest
majority or semi-honest vs. malicious security), we believe cross terms can
provide a protocol-independent and realistic assessment of scalability. (We acknowledge that the analysis cannot provide an exact comparison, owing to the presence of the product term in the approximation. For example, depending on the underlying MPC setup, the product term ($\mathsf{term_{p}}$) may require more communication than the middle terms ($\mathsf{term_{m}}$), and therefore the effect of the approximation may be diminished.)
Computation | #cross-terms, Exact ($\tilde{\mathsf{b}}$) | #cross-terms, Approximate ($\hat{\mathsf{b}}$)
---|---|---
Bit-to-Arithmetic | $2^{\mathsf{q}}-\mathsf{q}-1$ | $1$
Bit Injection | $2^{\mathsf{q}}+\mathsf{q}^{2}-2\mathsf{q}-1$ | $\mathsf{q}^{2}-\mathsf{q}+1$
Table 4: Efficiency analysis via approximate bit conversion with respect to
the #cross-terms involved.
Tab. 4 provides the details regarding the number of cross terms involved in
obtaining the arithmetic equivalent of
$\mathsf{b}=\oplus_{i=1}^{\mathsf{q}}\mathsf{b}_{i}$. The gains increase
significantly with a higher number of shares $\mathsf{q}$ due to the
exponential growth in the number of cross terms for the exact computation.
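The exact count in Tab. 4 is simply the number of degree-$\geq 2$ monomials in the multilinear expansion of a $\mathsf{q}$-way XOR, i.e., $\sum_{k=2}^{\mathsf{q}}\binom{\mathsf{q}}{k}=2^{\mathsf{q}}-\mathsf{q}-1$. A short Python check of this closed form (illustrative):

```python
from math import comb

def exact_cross_terms(q: int) -> int:
    # monomials of degree >= 2 in the multilinear expansion of a q-way XOR
    return sum(comb(q, k) for k in range(2, q + 1))

for q in range(2, 12):
    assert exact_cross_terms(q) == 2**q - q - 1
print(exact_cross_terms(4))  # -> 11
```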
Tab. 4 also provides the details for a bit injection operation in which the
product of a Boolean bit $\mathsf{b}$ and a scale value $\mathsf{s}$ is
securely computed. Given $\mathsf{s}=\sum_{i=1}^{\mathsf{q}}\mathsf{s}_{i}$,
the value $\mathsf{b}\cdot\mathsf{s}$ can be computed by first computing
either $\tilde{\mathsf{b}}$ or $\hat{\mathsf{b}}$ (depending on whether an
exact or approximate value is required) and then multiplying by $\mathsf{s}$.
#### B.2.2 Secure Bit Aggregation with Global Scales
Here, we consider the secure bit aggregation problem in the context of global scales, as discussed in §2.2. As shown in Eq. (6), the computation
becomes simpler in the case of global scales since all clients utilize the
same set of public scales, denoted by $\mathsf{s}^{min}$ and
$\mathsf{s}^{max}$, to compute their quantized vector that corresponds to the
rows of $\vec{\mathbf{B}}$. Hence, we just need to compute the column-wise
aggregate of the $\vec{\mathbf{B}}$ matrix and use protocol $\Pi_{\sf
BitA}^{\sf sum}$ (Fig. B.1.2 in §B.1.2) to do so. The resulting protocol
$\Pi_{\sf SecAgg}^{\textsf{Global}}$ appears in Fig. B.2.2.
1. Compute $\langle\vec{\mathbf{W}}^{j}\rangle=\Pi_{\sf BitA}^{\sf sum}(\langle\vec{\mathbf{B}}^{j}\rangle^{\bf B})$, for each $j\in[\mathsf{m}]$.
2. Locally compute $\langle\vec{\mathbf{Y}}_{\sf 1\times\mathsf{m}}\rangle=\mathsf{s}^{min}\oplus\left(\langle\vec{\mathbf{W}}_{\sf 1\times\mathsf{m}}\rangle\cdot(\mathsf{s}^{max}-\mathsf{s}^{min})\right)$.
Figure 16: Secure aggregation – Global Scales.
#### B.2.3 Secure Bit Aggregation with Local Scales
In §3.2, we provided an empirical accuracy evaluation for our approximate
secure bit aggregation using global scales.
Figure 17: NMSE comparison between exact and approximation-based aggregation
for SQ, Hadamard SQ (HSQ), and Kashin SQ (KSQ) for local scales with
$\mathsf{q}=4$ shares, various vector dimensions $d$, and number of clients
$\mathsf{n}$.
Here, in Fig. 17, we additionally provide results considering local scales. In
contrast to global scales, we can observe that for stochastic quantization
without rotation the effect on the NMSE is reduced from three to one order of
magnitude. Also, for rotation-based algorithms there are significant concrete
improvements.
#### B.2.4 Detailed Communication Costs
In this section, we provide more detailed insights into the concrete
communication costs for our secure aggregation protocols described in §4.1.
Approach | Offline | Online
---|---|---
Approach-I | $\mathsf{n}\cdot\mathsf{BitA}^{\mathsf{pre}}+\mathsf{n}\cdot\mathsf{Mult}^{\mathsf{pre}}$ | $\mathsf{n}\cdot\mathsf{BitA}^{\mathsf{on}}+\mathsf{n}\cdot\mathsf{Mult}^{\mathsf{on}}$
Approach-II | $\mathsf{n}\cdot\mathsf{BitA}^{\mathsf{pre}}+\mathsf{n}\cdot\mathsf{Mult}^{\mathsf{pre}}$ | $\mathsf{Mult}^{\mathsf{on}}$
Approach-III | $\mathsf{n}\cdot\mathsf{BitA}^{\mathsf{pre}}+~{}~{}~{}~{}\mathsf{Mult}^{\mathsf{pre}}$ | $\mathsf{Mult}^{\mathsf{on}}$
Table 5: Communication costs aggregating quantized vectors with a single
dimension for $\mathsf{n}$ clients. Protocols $\Pi_{\sf BitA}$ and $\Pi_{\sf
Mult}$ are treated as black-boxes, and their costs are represented as
$\mathsf{BitA}$ and $\mathsf{Mult}$, respectively. The superscript pre in the
costs denotes preprocessing and on denotes the online phase. We compare
Approach-I (cf. Fig. 4.1.1 in §4.1.1), Approach-II (cf. Fig. 4.1.2 in §4.1.2),
and Approach-III (cf. Fig. 4.1.3 in §4.1.3).
First, in Tab. 5 we give a theoretical analysis for the communication cost
when aggregating $\mathsf{n}$ quantized vectors with a single dimension.
Clearly, our Approach-III (cf. Fig. 4.1.3 in §4.1.3) is the most efficient,
with the multiplication-related cost being completely independent of the
number of clients $\mathsf{n}$ due to the integration of SepAgg [14].
$\mathsf{n}$ | Method | Offline (Exact) | Online (Exact) | Offline (Approx.) | Online (Approx.)
---|---|---|---|---|---
20 | Approach-I | 644.50 | 12.90 | 620.27 | 12.90
20 | Approach-II | 644.50 | 0.59 | 620.27 | 0.59
20 | Approach-III | 89.77 | 0.59 | 65.54 | 0.59
100 | Approach-I | 3222.51 | 64.51 | 3101.36 | 64.51
100 | Approach-II | 3222.51 | 0.59 | 3101.36 | 0.59
100 | Approach-III | 332.08 | 0.59 | 210.93 | 0.59
500 | Approach-I | 16112.56 | 322.56 | 15506.80 | 322.56
500 | Approach-II | 16112.56 | 0.59 | 15506.80 | 0.59
500 | Approach-III | 1543.62 | 0.59 | 937.85 | 0.59
Table 6: Inter-server communication per round in MiB for our MNIST/LeNet
benchmark for different numbers of clients $\mathsf{n}$ per round. Training is
done using $1$-bit SQ with Kashin’s representation (KSQ). We compare
Approach-I (cf. Fig. 4.1.1 in §4.1.1), Approach-II (cf. Fig. 4.1.2 in §4.1.2),
and Approach-III (cf. Fig. 4.1.3 in §4.1.3). Additionally, we distinguish
between using an exact bit-to-arithmetic conversion and our approximation (cf.
§3.1).
Next, in Tab. 6 we provide the detailed communication costs for the secure
aggregation approaches discussed in §4.1 when training the LeNet architecture
for image classification on the MNIST data set [80] using $1$-bit SQ with
Kashin’s representation [28]. We instantiate the OT instances required in the
preprocessing phase, as discussed in §B.1.2, with silent OT [37], following
Prio+ [2]. Here, we can observe the significant practical impact of including SepAgg [14], with performance improvements of up to $16.6\times$ in the offline phase between Approach-II and Approach-III.
Approach | $\mathsf{n}=10^{2}$ | $\mathsf{n}=10^{3}$ | $\mathsf{n}=10^{4}$ | $\mathsf{n}=10^{5}$
---|---|---|---|---
Prio+ [2] | 9.45 | 94.50 | 945.04 | 9450.44
Approach-III (Exact) | 3.94 | 39.42 | 394.17 | 3941.66
Approach-III (Approx.) | 2.37 | 23.75 | 237.45 | 2374.53
Table 7: Total communication in MiB of our Approach-III (cf. Fig. 4.1.3 in
§4.1.3) compared to Prio+ [2] to calculate the sum of bits for different
numbers of clients $\mathsf{n}$ and dimension $\mathsf{m}=1000$. For our
Approach-III, we distinguish between using an exact bit-to-arithmetic
conversion as in Prio+ [2] and our approximation (cf. §3.1).
Finally, in Tab. 7, we compare the aggregation of bits (i.e., when not
considering quantized inputs that require scale multiplication and hence
without SepAgg [14] being applicable) to Prio+ [2]. For a fair comparison, we translate the approach of Prio+ [2] to our three-party dishonest-majority setting. As we can see, even with exact bit-to-arithmetic conversion, we improve over Prio+ by a factor of $2.4\times$ for $\mathsf{n}=10^{5}$. When applying our approximate bit-to-arithmetic conversion (cf. §3.1), this improvement increases to a factor of $4\times$.
### B.3 ScionFL-Aura: Additional Details
Here, we provide additional details of ScionFL-Aura.
#### B.3.1 Sub-protocols
In this section, we give the sub-protocols for ScionFL-Aura (cf. §5.1.1). Note that, for the sake of simplicity, we do not include the optimizations discussed in §5.1.3.
Alg. 2 computes the aggregation of $\alpha$ quantized vectors. As shown in Eq.
4, the dequantized value of a vector $\vec{Y}$, given its quantized form
$(\vec{\sigma}_{Y},\mathsf{s}_{Y}^{min},\mathsf{s}_{Y}^{max})$, can be
computed as
$\vec{Y}=\mathsf{s}_{Y}^{min}\ \oplus\
\vec{\sigma}_{Y}\circ(\mathsf{s}_{Y}^{max}-\mathsf{s}_{Y}^{min}).$
The above operation essentially places $\mathsf{s}_{Y}^{min}$ in those
positions of the vector $\vec{Y}$ with the corresponding bit in
$\vec{\sigma}_{Y}$ being zero, and the rest with $\mathsf{s}_{Y}^{max}$.
Algorithm 2 Quantized Aggregation
1:procedure
Aggregate($\\{\vec{\sigma}_{Y_{i}},\mathsf{s}_{Y_{i}}^{min},\mathsf{s}_{Y_{i}}^{max}\\}_{i\in\alpha}$)
2: $\vec{Z}\leftarrow\vec{0}$
3: for $k\leftarrow 1$ to $\alpha$ do
4:
$\vec{Z}\leftarrow\vec{Z}+\left(\mathsf{s}_{Y_{k}}^{min}\oplus\vec{\sigma}_{Y_{k}}\circ(\mathsf{s}_{Y_{k}}^{max}-\mathsf{s}_{Y_{k}}^{min})\right)$
5: end for
6: $\vec{Z}\leftarrow\vec{Z}/\alpha$
7: return $\vec{Z}$
8:end procedure
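The dequantization-and-average loop of Alg. 2 can be sketched on cleartext values as follows (plain Python; the names and inputs are illustrative and the secret sharing is omitted):

```python
def dequantize(sigma, s_min, s_max):
    """Map a bit vector back to values: bit 0 -> s_min, bit 1 -> s_max."""
    return [s_min + b * (s_max - s_min) for b in sigma]

def aggregate(quantized):
    """Average the dequantized vectors (cleartext sketch of Alg. 2)."""
    dim = len(quantized[0][0])
    z = [0.0] * dim
    for sigma, s_min, s_max in quantized:
        for i, v in enumerate(dequantize(sigma, s_min, s_max)):
            z[i] += v
    return [v / len(quantized) for v in z]

out = aggregate([([0, 1, 1], -1.0, 2.0), ([1, 0, 1], 0.0, 4.0)])
# out == [1.5, 1.0, 3.0]
```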
Alg. 3 computes the ${\sf L}_{2}$-norm of a quantized vector. As discussed in
§2.4, a quantized vector $\vec{{Y}}_{\sigma}$ consists of a binary vector
$\vec{\sigma}_{Y}$ and the respective min. and max. scales
$\mathsf{s}_{Y}^{min}/\mathsf{s}_{Y}^{max}$. In this case, we observe that the
squared ${\sf L}_{2}$-norm can be obtained by first counting the number of
zeroes and ones in the vector, denoted by $N_{Z}$ and $N_{O}$
respectively, followed by multiplying them with the square of the respective
scales and adding the results, i.e.
$N_{Z}\cdot(\mathsf{s}_{Y}^{min})^{2}+N_{O}\cdot(\mathsf{s}_{Y}^{max})^{2}$.
Furthermore, computing the number of ones $N_{O}$ corresponds to the bit-aggregation of the vector $\vec{\sigma}_{Y}$, for which our aggregation methods discussed in §4.1 can be utilized.
Algorithm 3 L2-Norm Computation (Quantized)
1:procedure
L2-NormQ($\vec{\sigma}_{Y},\mathsf{s}_{Y}^{min},\mathsf{s}_{Y}^{max}$)
2: $\beta\leftarrow\textsc{Len}(\vec{\sigma}_{Y})$ // Dimension of
$\vec{\sigma}_{Y}$
3: $N_{O}\leftarrow\textsc{Sum}(\vec{\sigma}_{Y})$ // Number of ones in
$\vec{\sigma}_{Y}$
4: $N_{Z}\leftarrow\beta-N_{O}$ // Number of zeros in $\vec{\sigma}_{Y}$
5: return
$\sqrt{N_{Z}\cdot(\mathsf{s}_{Y}^{min})^{2}+N_{O}\cdot(\mathsf{s}_{Y}^{max})^{2}}$
6:end procedure
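As a sanity check of the counting shortcut, the following plain-Python sketch (with illustrative inputs) confirms that Alg. 3 matches the norm of the dequantized vector:

```python
import math

def l2_norm_quantized(sigma, s_min, s_max):
    """L2 norm from zero/one counts (Alg. 3), without dequantizing."""
    n_ones = sum(sigma)
    n_zeros = len(sigma) - n_ones
    return math.sqrt(n_zeros * s_min**2 + n_ones * s_max**2)

sigma, s_min, s_max = [1, 0, 1, 1, 0], -0.5, 1.5
direct = math.sqrt(sum((s_min + b * (s_max - s_min)) ** 2 for b in sigma))
assert math.isclose(l2_norm_quantized(sigma, s_min, s_max), direct)
```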
Alg. 4 is used to compute the cosine distance between a quantized vector
$\vec{{Y}}_{\sigma}$ and a reference vector $\vec{S}$. The cosine distance is
given by
$\frac{\vec{{Y}}_{\sigma}\odot\vec{S}}{\lVert\vec{{Y}}_{\sigma}\rVert\cdot\lVert\vec{S}\rVert}$,
where $\lVert\cdot\rVert$ corresponds to the ${\sf L}_{2}$-norm of the input
vector. Using Eq. 4, we can write
$\displaystyle\vec{{Y}}_{\sigma}\odot\vec{S}$
$\displaystyle=(\mathsf{s}_{Y}^{min}\oplus\vec{\sigma}_{Y}\circ(\mathsf{s}_{Y}^{max}-\mathsf{s}_{Y}^{min}))\odot\vec{S}$
$\displaystyle=\mathsf{s}_{Y}^{min}\odot\vec{S}\ +\
(\vec{\sigma}_{Y}\odot\vec{S})\cdot(\mathsf{s}_{Y}^{max}-\mathsf{s}_{Y}^{min}).$
Thus, the inner product computation of $\vec{{Y}}_{\sigma}\odot\vec{S}$
reduces to computing $\vec{\sigma}_{Y}\odot\vec{S}$, followed by two
multiplications.
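This decomposition is easy to verify numerically; a plain-Python sketch with illustrative values (the secret sharing is again omitted):

```python
def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

sigma, s_min, s_max = [1, 0, 1], 0.5, 2.0
s_ref = [3.0, -1.0, 2.0]
y = [s_min + b * (s_max - s_min) for b in sigma]  # dequantized vector Y
lhs = inner(y, s_ref)
# decomposed form: one inner product on the bit vector, two multiplications
rhs = s_min * sum(s_ref) + inner(sigma, s_ref) * (s_max - s_min)
assert abs(lhs - rhs) < 1e-12
```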
Algorithm 4 Cosine Distance Calculation
1:procedure
Cosine($(\vec{\sigma}_{Y},\mathsf{s}_{Y}^{min},\mathsf{s}_{Y}^{max})$,
$\vec{S}$)
2: ${\sf
L}_{2}^{Y}\leftarrow\textsc{L2-NormQ}(\vec{\sigma}_{Y},\mathsf{s}_{Y}^{min},\mathsf{s}_{Y}^{max})$
3: ${\sf L}_{2}^{S}\leftarrow\lVert\vec{S}\rVert$ // Computes ${\sf
L}_{2}$-norm
4: $\alpha\leftarrow\textsc{Sum}(\vec{S})$ // Sum of elements of $\vec{S}$
5: $\beta\leftarrow\textsc{Inner\mbox{-}Product}(\vec{\sigma}_{Y},\vec{S})$
6:
$\gamma=\mathsf{s}_{Y}^{min}\cdot\alpha+\beta\cdot(\mathsf{s}_{Y}^{max}-\mathsf{s}_{Y}^{min})$
7: return $\gamma/({\sf L}_{2}^{Y}\cdot{\sf L}_{2}^{S})$
8:end procedure
Figure 18: Effect of _Min-Max_ attack [112] on training VGG11 with CIFAR10
with and without our defense ScionFL-Aura assuming $20\%$ of $N=50$ clients
being corrupted.
#### B.3.2 Evaluation on VGG11
In addition to our results in §5.1.2, we evaluate the _Min-Max_ attack on
VGG11 trained with CIFAR10. The experimental setup is identical to §5.1.2. The
results are shown in Fig. 18.
As for ResNet9 (cf. Fig. 10), the _Min-Max_ attack substantially
reduces the validation accuracy when training VGG11: We observe drops of up to
$36.8\%$. However, on average, VGG11 is less impacted by the attack.
Concretely, only $15\%$ of the iterations observe a validation accuracy
reduction of about $15\%$ or more when using no compression. One third of the
training rounds are impacted by about $15\%$ or more when using Kashin’s
representation (KSQ) while with the Hadamard transform (HSQ) only very few
training rounds showed a significant accuracy reduction. Thus, HSQ seems to be
inherently more robust against untargeted poisoning.
With ScionFL-Aura, the accuracy reduction is still noticeably smaller for all variants. With HSQ, on average only 0.28 malicious updates are included in the global update, compared to 2.24 without the defense. With respect to the validation accuracy, the difference between having no attack and employing ScionFL-Aura while under attack is less than $4\%$ in almost all training iterations. When using KSQ, a global update includes just 0.44 malicious updates on average, and the attack impact is at least halved in two thirds of the training iterations.
#### B.3.3 Comparison to FLAME [97]
Our _untargeted poisoning_ defense ScionFL-Aura has components with
similarities to the _backdoor_ defense FLAME [97], which has three steps:
density-based clustering, clipping, and noise addition. However, there are
important differences, which we emphasize in the following.
1. 1.
Magnitude Boundary. FLAME’s clipping is done with respect to the median
Euclidean distance of the local updates to the previous global model. However,
especially with non-iid data and in early training phases, each training
iteration may exhibit significant differences even for consecutive iterations.
Hence, using the recent average norm (assuming the majority of updates is
benign) as in ScionFL-Aura (cf. Step 1 in §5.1.1 and Lines 3-14 in Alg. 1)
intuitively gives a better estimation for a benign magnitude in the current
training state.
2. 2.
Filtering. FLAME compares cosine similarity in a pair-wise fashion among
individual updates, i.e., it computes
$\frac{\mathsf{n}\cdot(\mathsf{n}-1)}{2}$ cosine distances per iteration while
ScionFL-Aura does $\mathsf{n}$ (cf. Step 2 in §5.1.1 and Line 16 in Alg. 1).
While ScionFL-Aura sorts local updates based on cosine similarity (cf. Step 2
in §5.1.1 and Line 18 in Alg. 1), FLAME uses secure clustering with low cubic
complexity [25]. FLAME only accepts updates assigned to the largest cluster,
which can exclude benign updates and thus significantly slow down training by removing valuable benign contributions. In
contrast, ScionFL-Aura removes only a fixed number of updates, thereby
enabling an efficient trade-off that reduces the attack’s effect to a
tolerable level (even if a few malicious contributions are not filtered out)
with a low false positive rate.
3. 3.
Differential Privacy. After clipping, FLAME aggregates the updates and adds noise in the clear to create a differentially private new global model. We do not consider differential privacy in our work; however, such noise addition could trivially be added to our system.
For more details on FLAME, we refer the reader to [97].
Stevens Institute of Technology
Crypto Market Analysis & Real-Estate Business Protocol Proposal
Application of Ethereum Blockchain
Sid Bhatia, Samuel Gedal, Himaya Jeyakumar
Grace Lee, Ravinder Chopra, Daniel Roman, Shrijani Chakroborty
26 April 2024
###### Contents
1 Introduction
2 Part I: Crypto Market Analysis
  2.1 Overview
  2.2 Detailed Overview and Crypto Protocol
    2.2.1 Bitcoin (BTC)
    2.2.2 Ethereum (ETH)
    2.2.3 XRP (Ripple)
    2.2.4 Dogecoin (DOGE)
    2.2.5 Tether (USDT)
  2.3 Crypto Selection Rationale
  2.4 Market Analysis
    2.4.1 Price Trend Analysis
    2.4.2 Volatility Analysis
    2.4.3 Correlation Analysis
  2.5 FTX Delta Analysis
    2.5.1 Event Overview
    2.5.2 FTX Impact on 11/11/22
    2.5.3 Immediate Impact
    2.5.4 FTX Impact in November 2022
    2.5.5 Long-Term Impact
    2.5.6 Benchmark and Market Performance Comparison
    2.5.7 FTX Conclusion
  2.6 Crypto Market Analysis Synthesis
    2.6.1 Volatility and Stability
    2.6.2 Market Correlations
    2.6.3 FTX Crisis Impact
    2.6.4 Long-Run Market Behavior
  2.7 Part I Key Takeaways
  2.8 Future Implications
3 Part II: Real-Estate Business Protocol Proposal
  3.1 Business Proposal
    3.1.1 Overview of the Blockchain Protocol
    3.1.2 Transactional Process
    3.1.3 Advantages of Blockchain in Real Estate
  3.2 Target Customers
    3.2.1 Market Strategy and Consumer Segmentation
    3.2.2 Integration of Market Sides
  3.3 Competitive Analysis
    3.3.1 Propy: A Comparative Study
  3.4 Competitive Advantage Analysis
  3.5 Implementation Strategy
  3.6 Economic Viability for Ethereum
  3.7 Comparative Analysis of Blockchain Platforms
  3.8 Synthesis of the Blockchain Real-Estate Protocol
4 Conclusion
  4.1 Synthesis of Findings
  4.2 Prospects and Implications
  4.3 Summary and Forward Outlook
## 1 Introduction
In the dynamic realm of financial technology, blockchain and cryptocurrencies
represent two of the most significant innovations that have reshaped how
transactions are conducted and assets are managed globally
([futurecryptocurrencies2022]). This paper delves into a dual-focused analysis
and proposal. Firstly, we conduct a thorough market analysis of a select group
of cryptocurrencies, each chosen for its unique role and impact within the
broader digital currency landscape. The cryptocurrencies under review include
Bitcoin, often regarded as the progenitor of all digital currencies; Ethereum,
notable for its robust smart contract capabilities; XRP, designed primarily
for rapid financial transactions; Dogecoin, which began as a meme but has
since gained substantial practical application; and Tether, a stablecoin tied
to the US dollar, offering a less volatile refuge within the highly fluctuant
crypto market ([evolutioncryptomarket2017]).
This study not only examines the price trends, volatilities, and inter-
cryptocurrency correlations but also assesses the impact of significant market
events, such as the FTX bankruptcy, on these digital assets
([ftxresponse2023]). The insights garnered from this analysis aim to provide a
granular understanding of how various cryptocurrencies react to internal and
external pressures, influencing investor sentiment and market dynamics.
Following the market analysis, the second focus of this paper introduces an
innovative business proposal leveraging blockchain technology. This proposal
outlines a new protocol for real estate transactions, allowing property deeds
to be securely managed and transferred without the need for traditional
intermediaries such as lawyers and brokers. By employing blockchain
technology, this protocol seeks to revolutionize the real estate market by
enhancing transparency, reducing transaction costs, and simplifying the
transaction process for buyers and sellers across the globe
([blockchainrealestate2021]).
Through comprehensive analysis and forward-thinking proposals, this paper
contributes to the ongoing discussions surrounding the application of
blockchain technology in traditional sectors, proposing not only a new way to
understand cryptocurrencies in relation to the traditional financial markets
but also offering a practical application that addresses real-world challenges
in real estate transactions.
## 2 Part I: Crypto Market Analysis
### 2.1 Overview
This analysis encompasses a selection of five distinct cryptocurrencies, each
representing a unique facet of the current digital currency ecosystem. Our
selected cryptocurrencies include: Bitcoin (BTC), recognized as the original
and most well-known cryptocurrency; Ethereum (ETH), noted for its advanced
smart contract capabilities; XRP, developed by Ripple Labs with a focus on
rapid digital payments; Dogecoin (DOGE), which has evolved from a meme into a
cryptocurrency with practical uses in tipping and donations; and Tether
(USDT), a stablecoin that introduces a measure of stability in the otherwise
volatile cryptocurrency market. This diverse selection aims to cover a broad
spectrum of functionalities, market positions, and technological innovations
within the crypto space, providing a comprehensive overview of its varied
applications and implications.
### 2.2 Detailed Overview and Crypto Protocol
#### 2.2.1 Bitcoin (BTC)
Overview: Introduced in 2009 by an entity under the pseudonym Satoshi
Nakamoto, Bitcoin stands as the inaugural cryptocurrency, designed to operate
as a decentralized digital currency without the oversight of a central
authority. Transactions are conducted directly between users through the peer-
to-peer Bitcoin network.
Protocol: Bitcoin’s network is underpinned by a proof-of-work (PoW) protocol,
wherein miners employ significant computational resources to solve intricate
mathematical problems, thus validating transactions and securing the network,
with new bitcoins awarded as a mining reward.
For more details see [nakamoto2009bitcoin].
#### 2.2.2 Ethereum (ETH)
Overview: Launched in 2015, Ethereum transcends the conventional definition of
a cryptocurrency. It serves as a platform for the development of decentralized
applications (DApps) through smart contracts, aiming to democratize access to
a decentralized financial system.
Protocol: Initially based on a proof-of-work mechanism similar to that of Bitcoin, Ethereum transitioned to a proof-of-stake (PoS) model with its Ethereum 2.0 upgrade (completed with the Merge in September 2022), which promises enhanced scalability and sharply reduced energy consumption.
Refer to [buterin2015ethereum] for additional insights.
#### 2.2.3 XRP (Ripple)
Overview: Created by Ripple Labs in 2012, XRP is central to a digital payment
protocol that surpasses its identity as a mere cryptocurrency. It facilitates
rapid payment settlements across the network.
Protocol: The XRP Ledger utilizes a consensus protocol that does not rely on
the traditional blockchain mining process; instead, it achieves consensus
through a network of independent validating servers that constantly compare
transaction records.
See [ripple2012xrp] for further information.
#### 2.2.4 Dogecoin (DOGE)
Overview: Originating as a humorous take on the cryptocurrency phenomenon in
2013, Dogecoin was inspired by the "Doge" meme featuring a Shiba Inu. It has
since cultivated a community focused on using the cryptocurrency for
charitable contributions and tipping online content creators.
Protocol: Dogecoin operates on a less energy-intensive proof-of-work algorithm
derived from Litecoin, known as Scrypt, facilitating faster transaction
processing.
Detailed information available at [dogecoin2013meme].
#### 2.2.5 Tether (USDT)
Overview: Introduced in 2014, Tether represents a stablecoin that is tethered
to the US dollar, aiming to meld the flexibility of cryptocurrencies with the
stability of fiat currency.
Protocol: Tether supports a hybrid use of protocols, operating on the Omni
Layer of the Bitcoin blockchain and as an ERC-20 token on the Ethereum
blockchain, among other blockchain platforms.
Further details can be found in [tether2014usdt].
These cryptocurrencies were chosen to provide a diverse perspective on the
various applications, market usage, and technological advancements within the
broader cryptocurrency environment. From January 1, 2022, to December 31,
2022, our study observed no missing data, ensuring the completeness and
reliability of the analysis conducted during this period.
### 2.3 Crypto Selection Rationale
The selection of cryptocurrencies for this study was informed by a
multifaceted rationale emphasizing diversity, technological innovation,
community engagement, and market stability. Each cryptocurrency was chosen not
only for its unique position within the market but also for its contribution
to advancing the blockchain technology landscape.
Diversity and Relevance: Bitcoin and Ethereum are selected as foundational
pillars within the cryptocurrency domain, illustrating the broad spectrum of
blockchain applications. Bitcoin, often hailed as the original cryptocurrency,
has pioneered the concept of a decentralized digital currency and enjoys
widespread adoption and recognition. Ethereum, on the other hand, extends the
utility of blockchain beyond mere financial transactions through its support
for smart contracts, thereby catalyzing a plethora of decentralized
applications (DApps). This diversity underscores the significant role these
currencies play in the ongoing development and maturation of the
cryptocurrency market.
Technological Diversity: XRP and Tether were chosen to highlight the
technological diversity within blockchain implementations. XRP, developed by
Ripple, is notable for its rapid transaction capabilities and minimal energy
consumption, diverging from the traditional mining-based consensus used by
currencies like Bitcoin. Similarly, Tether introduces a model of stability in
the highly volatile cryptocurrency market by being pegged to the US dollar,
showcasing a unique application of blockchain technology in creating
stablecoins that mitigate the price volatility typically associated with
cryptocurrencies.
Community and Innovation: Dogecoin exemplifies the impact of community on the
value and adoption of a cryptocurrency. Originating as a meme, Dogecoin has
transcended its initial novelty to foster a robust community that actively
engages in tipping and charitable activities through the currency. This aspect
highlights the role of societal and cultural dynamics in shaping the
cryptocurrency landscape, emphasizing the importance of community-driven
development and innovation.
Market Stability and Innovations: Finally, the inclusion of Tether also
addresses the critical challenge of market stability. By anchoring its value
to a stable fiat currency, Tether offers a pragmatic solution to the issue of
volatility, which is a pervasive concern for investors in cryptocurrencies
like Bitcoin and Ethereum. This approach not only facilitates greater market
stability but also enhances the practicality of cryptocurrencies for everyday
transactions and financial applications.
Collectively, these selections provide a comprehensive overview of the current
state and potential future directions of blockchain technology, illustrating a
spectrum of use cases from foundational cryptocurrencies to innovative
adaptations addressing specific market needs.
### 2.4 Market Analysis
Figure 1: Standardized Daily Prices of Cryptocurrencies for 2022
#### 2.4.1 Price Trend Analysis
The analysis of standardized price trends of Bitcoin (BTC), Ethereum (ETH),
XRP, Dogecoin (DOGE), and Tether (USDT) throughout 2022 reveals several key
insights into the dynamics of the cryptocurrency market:
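The standardization behind Figure 1 can be sketched in a few lines of pandas. This is an illustrative example using synthetic random-walk prices (the real analysis would use the downloaded 2022 closing prices); the `standardize` helper is our own naming, not a library function:

```python
import numpy as np
import pandas as pd

def standardize(prices: pd.DataFrame) -> pd.DataFrame:
    """Z-score each column: subtract its mean, divide by its standard deviation."""
    return (prices - prices.mean()) / prices.std()

# Synthetic illustration: two random-walk price series spanning 2022.
rng = np.random.default_rng(0)
dates = pd.date_range("2022-01-01", "2022-12-31", freq="D")
prices = pd.DataFrame(
    {
        "BTC-USD": 47000 + rng.normal(0, 500, len(dates)).cumsum(),
        "ETH-USD": 3700 + rng.normal(0, 60, len(dates)).cumsum(),
    },
    index=dates,
)

z = standardize(prices)
# Each standardized series now has mean ~0 and std ~1, so assets with
# very different price levels can be plotted on a single axis.
```

Standardizing is what allows a sub-dollar asset like DOGE and a five-figure asset like BTC to be compared on one chart.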
##### Correlated Movements:
The data illustrates that most cryptocurrencies exhibited closely correlated
movements over the course of the year. Such correlation is indicative of the
substantial influence exerted by broader market forces and global economic
events on the cryptocurrency market as a whole, driving collective swings in
investor sentiment—whether bullish or bearish.
##### Volatility Across Assets:
The degree of volatility varied significantly among the analyzed
cryptocurrencies. Bitcoin and Ethereum experienced relatively moderate
fluctuations, maintaining tighter price bands, while Dogecoin displayed higher
volatility, characterized by more pronounced peaks and troughs. This disparity
in volatility underscores the differential market perceptions and investor
bases of these assets.
##### Stablecoin Anomaly:
An unexpected anomaly was observed in the price trend of Tether (USDT),
particularly in May 2022, where it deviated markedly from its expected stable
trajectory. Such a divergence, given the design of stablecoins to maintain
parity with a peg (e.g., USD), suggests potential extraordinary events, data
reporting inaccuracies, or underlying issues with the stability mechanisms
during that period.
##### Market Recovery Ability:
Following significant market dips, the cryptocurrencies demonstrated varying
degrees of recovery. Bitcoin and Ethereum showed robust resilience and
recovery capabilities compared to Dogecoin. This variation could reflect
differing levels of market confidence and inherent stability within these
digital assets.
##### Stablecoin’s Peculiar Trend:
Assuming the accuracy of the observed sharp decline in USDT’s value, this
could represent a period of intense market stress or a temporary disruption in
the stablecoin’s dollar peg. However, such incidents are generally ephemeral,
as corrective mechanisms typically restore stability swiftly, aligning with
the observed rapid return to normalcy.
From the analysis of these price trends, it is evident that while
cryptocurrencies are interconnected and respond collectively to market shifts,
individual assets exhibit distinct behaviors influenced by their specific
market dynamics, investor sentiment, and technological foundations. The
peculiar movement observed in Tether’s price trend during the analyzed period
merits further investigation to ascertain the causes and implications of such
an anomaly.
#### 2.4.2 Volatility Analysis
The study of volatility in cryptocurrency markets provides crucial insights
into the risks and stability of digital assets. By calculating daily returns
and examining their standard deviations, we can gauge the unpredictability
associated with each cryptocurrency and identify the factors contributing to
these dynamics.
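The volatility measure described above can be sketched as follows. The price series here is synthetic and the `daily_volatility` helper is our own, but the computation, the standard deviation of simple daily returns expressed in percent, matches the method described:

```python
import numpy as np
import pandas as pd

def daily_volatility(close: pd.Series) -> float:
    """Standard deviation of simple daily returns, in percent."""
    returns = close.pct_change().dropna()  # (P_t / P_{t-1}) - 1
    return float(returns.std() * 100)

# Synthetic close series standing in for a downloaded price history,
# generated with ~5% daily log-return noise.
rng = np.random.default_rng(1)
close = pd.Series(100 * np.exp(rng.normal(0, 0.05, 365).cumsum()))

vol = daily_volatility(close)  # roughly 5% for this synthetic series
```

Applied to each asset's 2022 closes, this is the statistic that yields the ~5.64% figure for DOGE and ~0.03% for USDT reported below.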
##### Dogecoin (DOGE-USD):
Dogecoin exhibits the highest volatility among the cryptocurrencies analyzed,
with a standard deviation of approximately 5.64%. This elevated volatility can
primarily be attributed to its relatively low price per unit, which renders it
more susceptible to significant percentage changes on a per-unit basis.
Moreover, Dogecoin’s price is notably influenced by social media trends and
possesses comparatively less market liquidity than more established
cryptocurrencies. These elements combine to increase its price volatility,
reflecting the substantial impact of retail investor sentiment and speculative
trading on its market behavior.
##### Tether (USDT-USD):
In stark contrast, Tether shows the lowest volatility, with a standard
deviation near 0.03%. As a stablecoin, Tether is explicitly designed to be
pegged to a fiat currency, specifically the US dollar, and maintains a stable
value through various regulatory and technological mechanisms. This stability
is critical for its role in providing a safe haven during market turbulence
and for facilitating transactions where volatility can be a deterrent.
##### Bitcoin (BTC-USD) and Ethereum (ETH-USD):
Both Bitcoin and Ethereum exhibit moderate levels of volatility, reflecting
their established presence in the market and larger capitalizations. These
factors typically confer higher liquidity and result in less drastic
percentage changes in daily prices.
##### Benchmark Volatility Analysis:
Comparing the volatility of cryptocurrencies with traditional financial
markets, such as the S&P 500, highlights the unique risk profiles inherent to
digital assets. The S&P 500, with a volatility of 1.00%, offers a contrast to
the higher volatility levels seen in cryptocurrencies, underscoring the
potential for greater price stability in traditional equity markets.
##### Market Implications:
This variability in volatility, especially when benchmarked against
traditional indices like the S&P 500, illustrates the diverse nature of
cryptocurrency markets. While stablecoins like Tether aim to minimize price
fluctuations, other cryptocurrencies such as Dogecoin and Bitcoin exhibit a
range of volatilities, heavily influenced by investor sentiment, liquidity,
and their roles within the digital economy.
The higher volatility of cryptocurrencies compared to traditional markets like
the S&P 500 underscores their speculative nature and the heightened risks they
pose, which investors must navigate carefully. This analysis emphasizes the
importance of strategic risk assessment and portfolio diversification to
mitigate the inherent volatility of cryptocurrencies.
#### 2.4.3 Correlation Analysis
Figure 2: Correlation Matrix Heatmap of Cryptocurrencies for 2022
A comprehensive examination of the correlation matrix for daily returns of
Bitcoin (BTC), Ethereum (ETH), XRP, Dogecoin (DOGE), and Tether (USDT) in
tandem with the S&P benchmark (GSPC) elucidates the interrelationships among
these prominent cryptocurrencies:
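A correlation matrix like the one in Figure 2 is a one-line pandas computation once daily returns are assembled in a DataFrame. The returns below are synthetic, built around a shared market factor so that the qualitative pattern (high crypto-to-crypto correlation, near-zero for the stablecoin, moderate for the index) resembles the reported one; the exact coefficients will differ:

```python
import numpy as np
import pandas as pd

n = 365
rng = np.random.default_rng(2)
market = rng.normal(0, 0.02, n)  # common market factor driving risk assets

# Synthetic daily returns standing in for the analyzed assets.
returns = pd.DataFrame({
    "BTC-USD": market + rng.normal(0, 0.01, n),
    "ETH-USD": market + rng.normal(0, 0.01, n),
    "USDT-USD": rng.normal(0, 0.0003, n),   # stablecoin: near-independent
    "GSPC": 0.3 * market + rng.normal(0, 0.01, n),  # weaker market loading
})

corr = returns.corr()  # pairwise Pearson correlations, as used for Figure 2
```

Heavier shared loading on the market factor is exactly what produces the 0.90 BTC-ETH coefficient, while USDT's peg severs that linkage.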
##### High Correlations:
* Bitcoin and Ethereum: Exhibiting a correlation coefficient of 0.90, BTC and ETH demonstrate a very strong positive correlation, indicating that these cryptocurrencies often move in tandem. This strong linkage is primarily due to their predominant positions in the market, where both are frequently influenced by similar economic factors, investor sentiments, and regulatory developments.
* Ethereum and XRP: With a correlation of 0.75, movements in Ethereum frequently correlate closely with those in XRP, suggesting overlapping functionalities and investor bases that react similarly to market stimuli in these two platforms.
##### Moderate Correlations:
* XRP with Bitcoin and Dogecoin: XRP displays moderate correlations of 0.74 with BTC and 0.61 with DOGE. These correlations suggest a level of synchronicity, albeit influenced by distinct market dynamics and external factors specific to each cryptocurrency.
* Dogecoin with Ethereum and Bitcoin: Correlation coefficients of 0.67 with ETH and 0.65 with BTC for Dogecoin indicate a moderate degree of correlation, influenced by broader market trends that impact all cryptocurrencies, though each responds according to its unique market niche and investor behavior.
##### Lower Correlations with Tether:
* All Cryptocurrencies with Tether: Tether, being a stablecoin tied closely to the US dollar, shows significantly lower correlation coefficients with BTC (0.26), ETH (0.25), XRP (0.28), and DOGE (0.27). This fundamental difference in design and purpose, aimed at providing stability, results in less synchronized movements with the more speculative cryptocurrency assets.
##### Benchmark Correlation Comparison:
Comparison with traditional financial markets, specifically through a
correlation study with the S&P 500, reveals additional insights. While
cryptocurrencies such as BTC and ETH show the highest correlation with each
other, they exhibit only moderate correlation levels with the S&P 500, with
BTC showing the highest correlation at 0.52. This suggests that while
cryptocurrencies do move somewhat in sync with traditional financial markets,
they retain distinct market dynamics that set them apart.
##### Market Implications:
These findings highlight the diverse correlation landscapes within the
cryptocurrency markets, where strong intra-crypto correlations contrast with
more moderate interactions with traditional financial indices. This divergence
underscores the necessity for investors to consider the unique correlation
patterns when diversifying portfolios or implementing hedging strategies. The
mixed correlation profiles suggest both opportunities and risks, as
cryptocurrencies can offer portfolio diversification benefits due to their
partial independence from traditional market movements.
### 2.5 FTX Delta Analysis
#### 2.5.1 Event Overview
In mid-November 2022, the cryptocurrency exchange FTX filed for bankruptcy,
triggering significant disturbances across the cryptocurrency markets. This
event was exacerbated by the resignation of its CEO, Sam Bankman-Fried,
further destabilizing the market’s confidence.
#### 2.5.2 FTX Impact on 11/11/22
Ticker | Impact on Nov 11
---|---
BTC-USD | -3.14%
DOGE-USD | -5.46%
ETH-USD | -0.94%
USDT-USD | 0.04%
XRP-USD | -2.92%
GSPC | 1.00%
Table 1: FTX Impact on Cryptocurrency Prices on November 11, 2022
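The single-day impacts in Table 1 are ordinary close-to-close percent changes. A minimal sketch, using hypothetical closes chosen only so the example reproduces the -3.14% Bitcoin figure (these are not the actual market prices):

```python
import pandas as pd

def pct_change_on(close: pd.Series, day: str) -> float:
    """Percent change from the prior close to the close on `day`."""
    closes = close.sort_index()
    pos = closes.index.get_loc(pd.Timestamp(day))
    return float((closes.iloc[pos] / closes.iloc[pos - 1] - 1) * 100)

# Hypothetical closes around the filing date, for illustration only.
idx = pd.to_datetime(["2022-11-10", "2022-11-11"])
btc = pd.Series([17500.0, 16950.5], index=idx)

impact = pct_change_on(btc, "2022-11-11")  # -3.14
```

The monthly figures in Table 2 follow the same formula applied to the first and last closes of November.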
#### 2.5.3 Immediate Impact
The immediate repercussions of the bankruptcy announcement on November 11,
2022, were starkly evident across various cryptocurrencies:
* Bitcoin (BTC) and Ripple (XRP) each faced notable declines, with Bitcoin falling 3.14% and XRP 2.92%.
* Ethereum (ETH) exhibited relative resilience, declining a modest 0.94%, reflecting its robust market presence and investor confidence.
* Dogecoin (DOGE) experienced the steepest drop, 5.46%, illustrating its susceptibility to market shocks.
* Tether (USDT), maintaining its stability, changed by a negligible +0.04%, underscoring its role as a stabilizing force within the volatile cryptocurrency environment.
#### 2.5.4 FTX Impact in November 2022
Ticker | Change in Nov 2022
---|---
BTC-USD | -16.19%
DOGE-USD | -25.05%
ETH-USD | -17.98%
USDT-USD | 0.01%
XRP-USD | -12.02%
GSPC | 5.81%
Table 2: Monthly Impact of FTX Bankruptcy on Cryptocurrency Prices in November
2022
#### 2.5.5 Long-Term Impact
The extended impact throughout November painted a grim picture of recovery
challenges:
* Major cryptocurrencies like BTC, ETH, and XRP recorded substantial declines of 16.19%, 17.98%, and 12.02%, respectively.
* DOGE was hit particularly hard, plummeting 25.05%, the largest loss in the group.
* Conversely, USDT showed remarkable stability with only a 0.01% change, reinforcing its value proposition as a hedge against volatility.
#### 2.5.6 Benchmark and Market Performance Comparison
The correlation and impact studies reveal that while the cryptocurrency market
suffered significant losses in response to the FTX crisis, the traditional
financial markets, as represented by the S&P 500, exhibited contrasting
behavior:
* On November 11, 2022, while cryptocurrencies faced sharp declines, the S&P 500 (GSPC) rose 1.00%, demonstrating a decoupling from cryptocurrency market dynamics.
* Over the entire month of November, the S&P 500 gained 5.81%, further highlighting the resilience and differing risk profiles of traditional equity markets compared to the high-risk cryptocurrency sector.
#### 2.5.7 FTX Conclusion
The FTX bankruptcy served as a critical stress test, revealing the inherent
volatility and risk exposure of speculative cryptocurrencies compared to the
stability offered by stablecoins like Tether and traditional financial indices
like the S&P 500. This event underscores the need for robust risk management
strategies and diversified investment approaches to navigate the complexities
of cryptocurrency investments effectively.
### 2.6 Crypto Market Analysis Synthesis
Part I of this project delved into a comprehensive analysis of the
cryptocurrency ecosystem, with an emphasis on five key cryptocurrencies
(Bitcoin, Ethereum, XRP, Dogecoin, and Tether) and comparisons against the
S&P 500 index. Our study covered daily price behavior, volatility,
correlations, and market responses to major events like the FTX bankruptcy.
The key findings are synthesized below:
#### 2.6.1 Volatility and Stability
* Cryptocurrencies demonstrate substantially higher volatility than traditional markets like the S&P 500. Dogecoin exhibited the highest volatility due to its smaller size and speculative nature.
* Tether’s negligible volatility confirms its role as a stablecoin, offering refuge within the cryptocurrency market.
#### 2.6.2 Market Correlations
* Bitcoin, Ethereum, and other major cryptocurrencies are highly correlated, driven by similar market forces.
* Cryptocurrencies show low-to-moderate correlation with the S&P 500, suggesting some independence and potential diversification benefits.
#### 2.6.3 FTX Crisis Impact
* The FTX bankruptcy severely impacted cryptocurrency prices, while the S&P 500 remained largely unaffected, highlighting sector-specific risks within crypto.
* November 2022’s broader market picture reinforced this divergence. Cryptocurrencies declined significantly (excluding Tether), while the S&P 500 grew, emphasizing a decoupling during cryptocurrency-specific crises.
#### 2.6.4 Long-Run Market Behavior
* Data from 2022 illustrate that, while offering potential for growth, cryptocurrencies also carry substantial risks of sharp declines.
* The S&P 500’s lower volatility and positive November performance underscore the importance of traditional equity investments for risk mitigation in diversified portfolios.
### 2.7 Part I Key Takeaways
##### Diversification:
Cryptocurrencies offer diversification potential, but investors must carefully
manage their high-risk profile.
##### Investment Strategy:
Balancing crypto holdings with safer assets like the S&P 500 can mitigate
losses during downturns.
##### Regulatory and Market Sensitivity:
Staying informed about regulatory developments and sector-specific events is
crucial for navigating the dynamic cryptocurrency market.
### 2.8 Future Implications
These insights are vital for developing robust investment strategies that
maximize the potential of cryptocurrencies while safeguarding against their
risks. Monitoring evolving correlations between cryptocurrencies and
traditional markets will aid in understanding market dynamics and adapting
investment strategies accordingly.
## 3 Part II: Real-Estate Business Protocol Proposal
### 3.1 Business Proposal
#### 3.1.1 Overview of the Blockchain Protocol
Our business proposal introduces an innovative blockchain protocol designed to
revolutionize the real estate sector. This protocol allows homeowners to store
the deeds of their houses on the blockchain and facilitates the sale of
properties without traditional intermediaries such as lawyers, brokers, or
other third parties. This system not only simplifies transactions but also
enhances security, reduces costs, and increases transparency.
#### 3.1.2 Transactional Process
The transactional process under this protocol is streamlined to ensure
efficiency and security:
1. Initiation of Sale: Homeowners list their properties on the blockchain platform directly, bypassing the need for intermediaries. This step significantly reduces the complexity and duration of property transactions.
2. Proof of Ownership: The blockchain inherently provides a clear, immutable record of ownership. This proof is publicly accessible and verifiable, ensuring that the current owner has indisputable title to the property before proceeding with the sale.
3. Payment and Transfer: The buyer pays the seller in cryptocurrency, such as Bitcoin. Following payment confirmation, the property deed is automatically transferred to the buyer’s blockchain address via a smart contract, which also handles the transaction fee typically associated with platforms like Ethereum.
4. Final Ownership: The new owner receives the property deed, securely stored on the blockchain, ensuring both safety and accessibility. This digital deed is resistant to tampering, loss, or theft, providing a permanent record of ownership.
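The four steps above can be modeled conceptually in Python. This is a toy in-memory sketch of the listing and transfer logic, not a deployable smart contract; the `DeedRegistry` class and its fields are our invention for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class DeedRegistry:
    """Toy model of the deed-transfer logic in steps 1-4."""
    owners: dict = field(default_factory=dict)    # property_id -> owner address
    listings: dict = field(default_factory=dict)  # property_id -> asking price

    def list_property(self, prop_id: str, seller: str, price: float) -> None:
        # Step 1 + 2: only the verifiable current owner may list.
        assert self.owners.get(prop_id) == seller, "only the owner may list"
        self.listings[prop_id] = price

    def buy(self, prop_id: str, buyer: str, payment: float) -> None:
        # Step 3 + 4: payment check and deed transfer happen atomically.
        price = self.listings.pop(prop_id)        # also delists the property
        assert payment >= price, "insufficient payment"
        self.owners[prop_id] = buyer

registry = DeedRegistry(owners={"parcel-42": "alice"})
registry.list_property("parcel-42", "alice", price=25.0)  # price in ETH
registry.buy("parcel-42", buyer="bob", payment=25.0)
```

In a real deployment, the atomicity that the `buy` method only simulates would be guaranteed by the smart contract's single-transaction execution.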
#### 3.1.3 Advantages of Blockchain in Real Estate
The integration of blockchain into real estate transactions offers several
improvements over traditional methods:
Transparency:
The blockchain’s immutable ledger ensures that all transactions, including
historical ownership data and property details (e.g., square footage, number
of bedrooms, date of last renovation), are permanently recorded and openly
verifiable. This level of transparency significantly reduces the potential for
fraud and disputes.
Cost Efficiency:
By eliminating the need for various intermediaries and reducing paperwork and
manual verification processes, the blockchain protocol cuts down on
significant transactional costs. These savings make real estate transactions
more economical for both buyers and sellers.
Global Accessibility:
The blockchain protocol enables international transactions without the
complexities of cross-border legalities and financial transactions, opening up
the property market to global participants and investors.
Market Liquidity:
The use of blockchain can enhance market liquidity. Buyers who may not have
immediate access to traditional financing options can leverage decentralized
finance (DeFi) solutions, such as Aave, for quicker funding solutions, thereby
accelerating the buying process.
### 3.2 Target Customers
#### 3.2.1 Market Strategy and Consumer Segmentation
Our blockchain protocol adopts a dual-sided market strategy designed to
address distinct needs within the real estate transaction process. This
approach targets two main customer segments: property sellers (and deed
holders) and property buyers, along with a third, indirect segment involving
financial lenders.
##### Property Sellers and Deed Holders:
The primary market segment consists of current property owners who stand to
benefit substantially from blockchain integration. Traditional methods of deed
storage involve physical documentation, which not only increases the risk of
loss and damage but also complicates the verification and transfer processes.
Our protocol offers a secure and immutable storage solution on the blockchain,
eliminating the need for physical safekeeping.
In tandem, the current real estate market
structure necessitates multiple intermediaries, including real estate agents,
brokers, and legal advisors, each adding significant transaction costs in the
form of commissions and fees. Historical data indicates that commission rates
have remained relatively stable over the past three decades, despite
substantial increases in property values, leading to disproportionately high
costs for sellers (refer to Figure 3, [commratetrends2013]). Moreover, median
housing prices have soared, as vividly depicted in the provided data from the
Federal Reserve Economic Data (FRED, [fredmspus2024]) (see Figure 4).
Our protocol simplifies this process, allowing sellers to initiate and
complete sales directly on the blockchain, thereby reducing or eliminating
traditional commission fees.
Figure 3: Historical Analysis of Average Commission Rates in Real Estate Transactions
Figure 4: Federal Reserve Economic Data (FRED) on Median House Sale Prices
##### Property Buyers:
The second primary target segment includes potential property buyers who
benefit from the streamlined purchase process. Through our protocol, buyers
can directly engage with sellers, conduct swift and secure transactions, and
gain immediate access to verified property deeds, significantly speeding up
the acquisition process. The use of smart contracts ensures that all
conditions of the sale are met before the transaction is finalized, offering
additional security and efficiency.
##### Financial Lenders:
An emerging market segment within our protocol includes financial lenders,
particularly those operating in the decentralized finance (DeFi) space. With
the rise of blockchain technology, platforms like Aave have demonstrated
significant demand for more dynamic lending solutions that offer higher yields
compared to traditional financial products. Our protocol can connect these
lenders directly with real estate buyers, providing a new avenue for secured
lending at competitive interest rates, reflective of the increased risk
profiles associated with cryptocurrency-based transactions (refer to Figure
5).
Figure 5: Top Total Value Locked (TVL) in DeFi; Growth in DeFi Loan Market
Size
#### 3.2.2 Integration of Market Sides
By effectively integrating these two sides of the market—sellers and buyers
with the financial backing of lenders—our blockchain protocol facilitates a
comprehensive ecosystem that enhances liquidity, reduces transaction latency,
and improves overall market efficiency. This integrated approach not only
serves the immediate participants but also introduces a scalable model for
future expansions in global real estate markets.
### 3.3 Competitive Analysis
#### 3.3.1 Propy: A Comparative Study
Propy emerges as a significant player within the blockchain-based real estate
marketplace, providing a global platform that aligns closely with the
decentralized ethos of the blockchain revolution. As a competitor, Propy’s
operational model is built upon eliminating traditional intermediaries from
the property transaction process. By leveraging smart contracts on the
blockchain, Propy ensures secure and efficient property transactions.
##### Operational Model:
The core of Propy’s proposition is its decentralized marketplace, which
facilitates the buying and selling of properties. This innovative approach
circumvents the need for brokers and agents, potentially reducing
transactional friction and cost. Furthermore, Propy offers digital deeds
alongside automated escrow services, thus simplifying real estate transactions
and enhancing user experience.
##### Economic Structure:
Propy’s economic framework incorporates the use of its native cryptocurrency,
PRO, alongside a designated fee for smart contract execution, termed PGas. The
integration of PRO within their platform ecosystem not only facilitates
transactional activities but also extends utility to users engaging with
Propy’s services. The current market valuation of PRO stands at $2.99, which
plays a pivotal role in the cost structure of property sales on Propy’s
platform.
Figure 6: Analysis of Propy’s Transactional Fees for Property Sales
This competitive analysis provides a deeper understanding of Propy’s strategic
positioning within the blockchain-based real estate sector. By dissecting
their transactional fee structure and operational model, we can assess the
potential impact on our blockchain protocol’s market penetration and user
adoption.
### 3.4 Competitive Advantage Analysis
Our protocol presents a unique value proposition in the blockchain real estate
marketplace, offering comprehensive solutions that address various stages of
the real estate transaction process. It capitalizes on the inherent advantages
of blockchain technology to deliver an end-to-end service that simplifies the
complexities traditionally associated with real estate transactions.
##### Comprehensive Transactional Solutions:
At the heart of our protocol is the capability to facilitate complete real
estate transactions on the blockchain. This ranges from secure deed storage to
the actual execution of property sales via cryptocurrency. By providing a
single, unified platform, we significantly reduce the dependency on multiple
services, thereby streamlining the transaction process for all stakeholders
involved.
##### Transparency and Security:
The blockchain’s immutable ledger is a cornerstone feature that enhances our
protocol’s appeal. It serves as an unalterable record of transactions,
ensuring complete transparency and security for the transaction history. This
transparency is a critical factor for buyers and sellers who prioritize trust
and verifiable transparency in their transactions, eliminating the traditional
concerns of fraud and ambiguity in property ownership and history.
##### Cost Efficiency:
By obviating the need for intermediaries such as agents, brokers, and legal
consultants, our protocol minimizes the associated transaction costs. The
conventional commission-based model, which significantly increases transaction
expenses, is replaced by a more cost-effective structure that aligns with the
economic preferences of a market leaning towards efficiency and reduced
overhead.
##### Global Market Accessibility:
Our protocol removes the barriers to entry for international buyers and
sellers, thereby facilitating global transactions. Without the constraints
imposed by legal and regulatory compliance typical of centralized systems, our
platform paves the way for a more inclusive and expansive real estate market,
appealing to a broader investor base and contributing to the diversity of real
estate offerings.
In summary, our competitive advantage stems from a holistic approach that not
only provides practical transactional capabilities but also fosters trust,
reduces costs, and embraces global inclusivity. This strategic positioning is
poised to disrupt the traditional real estate market, leveraging blockchain
technology to its fullest potential.
### 3.5 Implementation Strategy
The implementation of our business model onto the blockchain comprises a
systematic approach, focused on smart contract formulation, asset
tokenization, oracle integration, privacy considerations, and user interface
development.
##### Defining Smart Contracts:
The foundation of our blockchain protocol is the design of smart contracts,
which are digital representations of real estate assets, mortgages, and deed
transfers. The contracts will encapsulate the logic for buying, selling,
transferring ownership, and managing mortgage payments. This will involve:
1. Structuring smart contracts to encapsulate real estate transaction requirements.
2. Integrating functions for various transaction processes within these contracts.
##### Tokenizing Real Estate Assets:
We will transform real estate properties into digital tokens on the Ethereum
blockchain, where each token signifies property ownership. This process will:
1. Utilize established token standards for representing real assets in the digital domain; because each property is unique, a non-fungible standard such as ERC-721 is the natural fit, with fungible ERC-20 tokens better suited to fractional ownership shares.
2. Deploy contracts to issue and regulate these tokens, assigning unique identifiers to each property ([erc20whitepaper]).
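One way to sketch the tokenization step: since every property is unique, the registry below mints one non-fungible token per property (closer in spirit to ERC-721 than to fungible ERC-20 tokens) and derives the unique identifier deterministically from the property's legal description. The class and its naming are hypothetical illustrations, not an on-chain deployment:

```python
import hashlib

class PropertyTokenRegistry:
    """Toy ERC-721-style registry: exactly one token per property."""

    def __init__(self):
        self.token_owner = {}  # token_id -> owner address

    def mint(self, legal_description: str, owner: str) -> str:
        # Derive a deterministic unique identifier from the legal description,
        # so the same parcel can never be tokenized twice.
        token_id = hashlib.sha256(legal_description.encode()).hexdigest()[:16]
        if token_id in self.token_owner:
            raise ValueError("property already tokenized")
        self.token_owner[token_id] = owner
        return token_id

reg = PropertyTokenRegistry()
tid = reg.mint("Lot 7, Block 3, Springfield", owner="alice")
```

Deterministic identifiers make double-minting detectable at issuance time rather than relying on after-the-fact auditing.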
##### Integration of Oracles:
Oracles will be employed to incorporate off-chain data, like property
specifications and legal documents, into the blockchain. These oracles will:
1. Source trusted data essential for executing real estate transactions.
2. Update on-chain records to reflect accurate off-chain information, ensuring the veracity and reliability of data ([blockchainoracle2020]).
##### Privacy Enhancements:
Our protocol will incorporate privacy measures to safeguard sensitive
transaction data, allowing for public verification while maintaining
confidentiality. We will:
1. Develop smart contracts with robust privacy controls.
2. Implement encryption techniques to restrict data access to authorized entities only.
##### User Interface Development:
To facilitate user interaction with our blockchain platform, we will develop
accessible and intuitive interfaces. These interfaces will:
1. Offer a seamless user experience for property searches, transaction initiation, mortgage tracking, and deed management.
2. Provide tools that are comprehensible and efficient for users, irrespective of their familiarity with blockchain technology.
The strategic deployment of these elements will result in a robust blockchain
protocol for real estate, streamlining the transaction process and enhancing
the overall experience for users in the real estate market. The careful
orchestration of smart contracts, tokenization, oracles, privacy
considerations, and user interfaces is an essential component of our strategy
to integrate real estate transactions with blockchain technology effectively.
### 3.6 Economic Viability for Ethereum
Evaluating the economic feasibility of our business proposal on the Ethereum
platform involves careful consideration of various factors, including
scalability, transaction fees, and the complexity of smart contracts.
##### Scalability Concerns:
Ethereum’s scalability is a pivotal concern, particularly as transaction
volumes escalate. As the leading blockchain platform, Ethereum faces
challenges in maintaining performance amid rising demand. The introduction of
Ethereum 2.0 promises to alleviate these issues through sharding and a proof-
of-stake consensus mechanism, potentially enhancing throughput and lowering
transaction costs ([blockchaingasfees2021]).
##### Transaction Fees:
Gas fees on Ethereum are known for their volatility and can constitute a
significant portion of transaction costs. These fees tend to surge during
periods of network congestion, impacting the cost-benefit analysis for users
engaging in real estate transactions. Monitoring the historical trends of
Ethereum’s average gas fee is crucial in forecasting and managing the
financial viability of transactions ([ethereumgasprices2024]).
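The fee arithmetic behind this concern is simple: gas consumed times the gas price (quoted in gwei, i.e. 10^-9 ETH), converted to dollars at the prevailing ETH price. A sketch with illustrative figures; the 150,000 gas, 40 gwei, and $2,000 ETH price are assumptions chosen for the example, not measurements:

```python
GWEI_PER_ETH = 1_000_000_000  # 1 ETH = 10^9 gwei

def tx_cost_usd(gas_used: int, gas_price_gwei: float, eth_usd: float) -> float:
    """Fee in USD: gas consumed x price per gas unit, converted to dollars."""
    return gas_used * gas_price_gwei / GWEI_PER_ETH * eth_usd

# Illustrative: a deed-transfer call consuming 150,000 gas
# at 40 gwei, with ETH trading at $2,000.
cost = tx_cost_usd(gas_used=150_000, gas_price_gwei=40, eth_usd=2_000)
# cost == 12.0 (USD)
```

Because the fee scales linearly with both gas consumed and the gwei price, a congestion spike to 200 gwei would multiply the same transaction's cost fivefold, which is why monitoring the gas-fee trend matters for viability.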
Figure 7: Average Ethereum Gas Fees Over the Last Five Years
##### Smart Contract Complexity:
The smart contracts at the core of our real estate protocol, which will enable
property transfers, loan repayments, and the enforcement of covenants, are
inherently complex. The intricacy of these contracts is directly proportional
to the computational resources required, thus influencing the overall gas fees
incurred. This complexity must be carefully managed to ensure that the
benefits of using the Ethereum platform outweigh the costs for all parties
involved.
The prospective enhancements with Ethereum 2.0 alongside strategic management
of smart contract complexity and monitoring of gas fees are essential for
ensuring the economic viability of deploying our real estate transaction
protocol on the Ethereum blockchain. As we progress, it will be imperative to
remain adaptable to the evolving blockchain landscape to maintain a
competitive and cost-effective platform for real estate transactions.
### 3.7 Comparative Analysis of Blockchain Platforms
Figure 8: Comparative Analysis of Ethereum Versus Alternative Blockchain
Platforms
In exploring the economic viability and technical suitability of our real
estate transaction protocol, we extend our analysis beyond Ethereum to other
leading blockchain platforms, each with distinct attributes and potential
advantages.
##### Binance Smart Chain:
Binance Smart Chain (BSC) emerges as a viable alternative, promising higher
throughput and lower latency compared to Ethereum, which is advantageous for
high-demand scenarios. Despite these benefits, BSC’s more centralized nature
may raise concerns among stakeholders seeking a fully decentralized solution.
##### Solana:
Solana presents a compelling case for applications necessitating rapid
transaction processing, offering superior transaction speeds and scalability
([solana2022]). While Solana provides an efficient alternative to Ethereum,
its developer ecosystem and tooling are less mature, potentially imposing
limitations on development and integration efforts.
##### Polkadot:
The multi-chain framework of Polkadot facilitates cross-chain
interoperability, which can significantly enhance the scope and flexibility of
our protocol. Polkadot’s design allows for seamless integration with a variety
of blockchain networks, potentially expanding the protocol’s reach.
Nevertheless, Polkadot’s infrastructure and tooling are still evolving, which
may introduce challenges during early adoption phases.
This comparative analysis underscores the importance of selecting a blockchain
platform that aligns with our protocol’s requirements for security,
decentralization, transaction speed, and scalability. As we proceed with our
implementation strategy, ongoing evaluation of these platforms’ evolving
capabilities will be imperative to maintain an innovative and user-centric
service in the dynamic real estate market.
### 3.8 Synthesis of the Blockchain Real-Estate Protocol
The business protocol presented in Part II encapsulates an innovative,
blockchain-based approach to real estate transactions. The proposed system
introduces a transformative model that empowers homeowners to engage directly
in the sale and purchase of properties, effectively circumventing the
traditional, intermediary-reliant processes that are often cumbersome and less
secure.
##### Overview and Efficacy of Protocol Application
The cornerstone of the proposed protocol lies in its utilization of
blockchain’s inherent properties such as immutability, transparency, and
distributed consensus. These properties facilitate a seamless transition of
deeds and payments, providing a robust proof of ownership and streamlining
transactions. By leveraging smart contract technology, the protocol ensures
that all prerequisites of a property transaction are automatically met,
heralding a new era of efficiency in property dealings.
##### Enhancing Real Estate Transaction Dynamics
The protocol offers multiple benefits over conventional methods. It provides a
public, transparent ledger for ownership and transaction history, reduces the
costs associated with property transactions by eliminating intermediary fees,
and enables global participation in the real estate market. Additionally, the
protocol introduces greater liquidity to the market by integrating with
decentralized financial platforms, allowing prospective buyers to secure
funding rapidly.
##### Consumer-Centric Market Strategy
This protocol advocates a dual-sided market strategy that aims to reform the
real estate transaction paradigm. Property sellers are afforded the ability to
secure deed storage on the blockchain while benefiting from direct market
access for sales, effectively bypassing intermediary overheads. For property
buyers, the protocol simplifies the purchase process, offering immediate
access to property details and streamlining the transfer of ownership.
Moreover, financial lenders find a new marketplace in which to offer secured
loans, augmented by blockchain’s security features.
##### Competitive Landscape and Advantages
In the competitive landscape, the proposed protocol differentiates itself by
presenting a comprehensive and integrated solution that extends beyond mere
transaction facilitation. It anticipates the current and future needs of the
real estate market, focusing on user experience, transactional integrity, and
market inclusivity. Against competitors like Propy, RealT, and Deedcoin, the
protocol asserts its edge through its amalgamation of transactional
efficiency, cost savings, and market reach.
##### Strategic Implementation and Economic Considerations
The practical implementation of the protocol on the Ethereum blockchain, and
the considerations for its economic viability, are outlined with a foresight
into potential scalability issues and transaction costs. Alternative
blockchains such as Binance Smart Chain, Solana, and Polkadot are appraised
for their suitability, with an emphasis on their comparative advantages in
terms of transaction speed, costs, and infrastructural development.
##### Implications and Future Prospects
In conclusion, the blockchain real-estate protocol promises to not only
revolutionize the manner in which real estate transactions are conducted but
also to serve as a blueprint for future applications of blockchain technology
in other domains. The synthesis of the protocol’s operational model, strategic
implementation, and competitive positioning underscores its potential to offer
a superior alternative to the established real estate transaction processes,
setting a new benchmark in efficiency, security, and global accessibility.
## 4 Conclusion
This paper has presented an in-depth examination of the cryptocurrency market,
followed by a pioneering proposal for a blockchain-based real estate
transaction protocol. Through meticulous analysis, it has provided evidence of
the intricate dynamics governing cryptocurrency price movements, volatility,
and correlations, particularly in the context of the FTX bankruptcy event,
thereby illuminating the vulnerabilities and resilience inherent in the
digital currency landscape. In tandem, it has offered a visionary blueprint
for the utilization of blockchain technology in streamlining real estate
transactions, proposing a model that is poised to redefine the sector.
### 4.1 Synthesis of Findings
The market analysis revealed that cryptocurrencies are not only highly
volatile but are also subject to correlated movements, which can lead to
systemic risks within the digital asset class. Yet, this volatility and
interconnectivity also underscore the potential of cryptocurrencies to
diversify investment portfolios when judiciously balanced with traditional
assets. The FTX bankruptcy served as a litmus test for the market, delineating
the stability offered by stablecoins and the S&P 500 in contrast to the
pronounced volatility of cryptocurrencies like Bitcoin and Dogecoin.
On the frontier of innovation, the blockchain real estate proposal detailed in
Part II of this paper is set to disrupt a long-established industry. By
removing intermediaries, reducing transaction costs, and enhancing
transparency, the protocol demonstrates a tangible application of blockchain
beyond speculative trading, embodying the technology’s transformative
potential in real-world asset management and exchange.
### 4.2 Prospects and Implications
The synthesis of the paper’s findings paints a nuanced picture of the
cryptocurrency market’s complexities and introduces a sophisticated approach
to real estate transactions that capitalizes on blockchain technology’s
strengths. As the cryptocurrency market continues to mature, it is anticipated
that investor strategies will adapt to encompass both traditional and digital
assets, ensuring balanced portfolios that mitigate risk while capitalizing on
growth opportunities.
The real estate blockchain protocol, while nascent, holds promise for a
radical shift in property ownership transfer, marking a significant leap
towards a more interconnected, efficient, and accessible global market. Its
implications extend beyond the real estate sector, signaling the advent of a
broader adoption of blockchain in various facets of commerce and governance,
ushering in a new era of decentralized digital solutions.
### 4.3 Summary and Forward Outlook
In summary, the research presented in this paper contributes meaningfully to
the understanding of cryptocurrencies and offers a progressive application of
blockchain technology. As we witness the convergence of traditional financial
methodologies with groundbreaking digital solutions, the potential for
innovation in both markets and technology is boundless. Future research and
development will undoubtedly continue to expand on these foundations, further
integrating the burgeoning possibilities of blockchain technology into the
fabric of societal and economic structures.
In moving forward, continuous monitoring of market trends, regulatory
developments, and technological advancements will be crucial in optimizing the
strategies and applications discussed herein. The cryptocurrency market’s
evolution and the blockchain real estate protocol’s maturation will
undoubtedly serve as critical barometers for the future trajectory of digital
finance and property transactions. The journey ahead promises to be as
challenging as it is exciting, with the potential to redefine the very essence
of investment, ownership, and exchange in an increasingly digital world.
# Remarks on Fixed Point Assertions in Digital Topology, 5
Laurence Boxer Department of Computer and Information Sciences, Niagara
University, NY 14109, USA; and Department of Computer Science and Engineering,
State University of New York at Buffalo
email: <EMAIL_ADDRESS>
Paper accepted for publication in Applied General Topology
###### Abstract
As in [6, 3, 4, 5], we discuss published assertions concerning fixed points in
“digital metric spaces” - assertions that are incorrect or incorrectly proven,
or reduce to triviality.
MSC: 54H25
Key words and phrases: digital topology, fixed point, metric space
## 1 Introduction
As stated in [3]:
> The topic of fixed points in digital topology has drawn much attention in
> recent papers. The quality of discussion among these papers is uneven; while
> some assertions have been correct and interesting, others have been
> incorrect, incorrectly proven, or reducible to triviality.
Paraphrasing [3] slightly: in [6, 3, 4, 5], we have discussed many
shortcomings in earlier papers and have offered corrections and improvements.
We continue this work in the current paper.
Authors of many weak papers concerning fixed points in digital topology seek
to obtain results in a “digital metric space” (see section 2.1 for its
definition). This seems to be a bad idea. We quote [5]:
> * •
>
> Nearly all correct nontrivial published assertions concerning digital metric
> spaces use either the adjacency of the digital image or the metric, but not
> both.
>
> * •
>
> If $X$ is finite (as in a “real world” digital image) or the metric $d$ is a
> common metric such as any $\ell_{p}$ metric, then $(X,d)$ is uniformly
> discrete as a topological space, hence not very interesting.
>
> * •
>
> Many of the published assertions concerning digital metric spaces mimic
> analogues for subsets of Euclidean ${\mathbb{R}}^{n}$. Often, the authors
> neglect important differences between the topological space
> ${\mathbb{R}}^{n}$ and digital images, resulting in assertions that are
> incorrect, trivial, or trivial when restricted to conditions that many
> regard as essential. E.g., in many cases, functions that satisfy fixed point
> assertions must be constant or fail to be digitally continuous [6, 3, 4].
>
>
Since the publication of [5], additional papers concerning fixed points in
digital metric spaces have come to our attention. This paper continues the
work of [6, 3, 4, 5] in discussing shortcomings of published assertions
concerning fixed points in digital metric spaces.
Many of the definitions and assertions we discuss were written with
typographical and grammatical errors, as well as mathematical flaws. We have
quoted these using images of the originals so that the reader can see the
errors as they appear in their sources (in the images, we have removed
equation and inequality labels, or replaced them with a different style, to
avoid confusion with labels in our text).
## 2 Preliminaries
Much of the material in this section is quoted or paraphrased from [5].
We use ${\mathbb{N}}$ to represent the natural numbers, ${\mathbb{Z}}$ to
represent the integers, and ${\mathbb{R}}$ to represent the reals.
A digital image is a pair $(X,\kappa)$, where $X\subset{\mathbb{Z}}^{n}$ for
some positive integer $n$, and $\kappa$ is an adjacency relation on $X$. Thus,
a digital image is a graph. In order to model the “real world,” we usually
take $X$ to be finite, although there are several papers that consider
infinite digital images. The points of $X$ may be thought of as the “black
points” or foreground of a binary, monochrome “digital picture,” and the
points of ${\mathbb{Z}}^{n}\setminus X$ as the “white points” or background of
the digital picture.
For this paper, we need not specify the details of adjacencies or of digitally
continuous functions.
A fixed point of a function $f:X\to X$ is a point $x\in X$ such that $f(x)=x$.
### 2.1 Digital metric spaces
A digital metric space [9] is a triple $(X,d,\kappa)$, where $(X,\kappa)$ is a
digital image and $d$ is a metric on $X$. The metric is usually taken to be
the Euclidean metric or some other $\ell_{p}$ metric. We are not convinced
that the digital metric space is a notion worth developing. Typically,
assertions in the literature do not make use of both $d$ and $\kappa$, so that
“digital metric space” seems an artificial notion. E.g., for a discrete
topological space $X$, all functions $f:X\to X$ are continuous, although on
digital images, many functions $g:X\to X$ are not digitally continuous
(digital continuity is defined in [2], generalizing an earlier definition
[15]).
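The discreteness claim is easy to check computationally. A small sketch (the point set is a hypothetical digital image; any finite $X\subset{\mathbb{Z}}^{n}$ with an $\ell_{p}$ metric behaves the same way):

```python
# Distinct points of Z^n differ by at least 1 in some coordinate, so all
# pairwise l_p distances in a digital image are >= 1: (X, d) is uniformly
# discrete, and its metric topology is the discrete topology.
from itertools import combinations
from math import dist  # Euclidean distance (Python 3.8+)

X = [(0, 0), (1, 0), (1, 1), (3, 2)]  # a hypothetical digital image in Z^2

min_positive = min(dist(p, q) for p, q in combinations(X, 2))
assert min_positive >= 1  # so open balls of radius 1/2 are singletons

# Hence every subset of X is open, and EVERY f: X -> X is continuous in
# the metric sense -- the metric carries no structure beyond the discrete
# topology, unlike the adjacency-based notion of digital continuity.
```

This is the computational face of the remark above: metric-space fixed point machinery applied to such an $(X,d)$ tends to trivialize.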
## 3 Assertions for contractions in [11]
The paper [11] claims fixed point theorems for several types of digital
contraction functions. Serious errors in the paper are discussed below.
### 3.1 Fixed point for Kannan contraction
Figure 1 shows the definition appearing in [11] of a Kannan digital
contraction.
Figure 1: Definition of Kannan digital contraction in [11] Figure 2: Fixed
point assertion for Kannan digital contractions in [11]
Figure 2 shows a fixed point assertion for Kannan digital contractions in
[11]. The “proof” of this assertion has errors discussed below.
* •
In the fourth and fifth lines of the “proof” is the claim that
$\lambda[d(x_{n},x_{n-1})+d(x_{n-1},x_{n-2})]\leq 2\lambda d(x_{n},x_{n-1}).$
Since $\lambda>0$, this claim implies
$d(x_{n-1},x_{n-2})\leq d(x_{n},x_{n-1}),$
so if any $d(x_{n},x_{n-1})$ is positive, the sequence $\\{x_{n}\\}$ is not a
Cauchy sequence, contrary to a claim that appears later in the argument.
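To make the non-Cauchy conclusion explicit (a short elaboration, using only the inequality above): since the claimed inequality holds for every $n$, the consecutive distances are nondecreasing, so

```latex
% If d(x_{n-1},x_{n-2}) \le d(x_n,x_{n-1}) for every n, then for any fixed n
% with c = d(x_n,x_{n-1}) > 0, induction on k gives
d(x_{n+k},\,x_{n+k-1}) \;\geq\; d(x_{n},\,x_{n-1}) \;=\; c \;>\; 0
\qquad \text{for all } k \geq 0,
% so the consecutive distances do not tend to 0 and \{x_n\} cannot be Cauchy.
```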
* •
Towards the end of the existence argument, the authors claim that a Kannan
digital contraction is digitally continuous. This assumption is contradicted
by Example 4.1 of [6].
In light of these errors, we must conclude that the assertion of Figure 2 is
unproven.
### 3.2 Example of pp. 10769 - 10770
This example is shown in Figure 3. One sees easily the following.
* •
$d(0,1)=d(0,2)=0$, so $d$, contrary to the claim, is not a metric.
* •
$T(1)=1/2\not\in X$.
Figure 3: The example of [11], pp. 10769 - 10770
### 3.3 Fixed point for generalization of Kannan contraction
Figure 4 shows an assertion of a fixed point result on p. 10770 of [11].
Figure 4: A “theorem” of [11], p. 10770
Note “And let $f$ satisfies” should be “and let $T$ satisfy”.
More importantly: In the argument offered as proof of this assertion, the
authors let $x_{0}\in X$, and, inductively,
$x_{n+1}=T(x_{n}),~{}~{}~{}~{}~{}a_{n+1}=d(x_{n},x_{n+1}).$
They claim that by using the statements marked (i) and (ii) in Figure 4, it
follows that
$a_{n+1}=d(x_{n},x_{n+1})\leq\Upsilon(d(x_{n},x_{n-1})+d(x_{n-1},x_{n-2}))$
$<2d(x_{n},x_{n-1})=2a_{n},$ (1)
which, despite the authors’ claim, does not show that the sequence
$\\{a_{n}\\}$ is decreasing. However, what correctly follows from the
statements marked (i) and (ii) in Figure 4 is
$a_{n+1}=d(x_{n},x_{n+1})=d(T(x_{n-1}),T(x_{n}))\leq$
$\Upsilon(d(x_{n-1},T(x_{n-1}))+d(x_{n},T(x_{n})))=\Upsilon(d(x_{n-1},x_{n})+d(x_{n},x_{n+1}))$
$=\Upsilon(a_{n}+a_{n+1})<\frac{a_{n}+a_{n+1}}{2}.$ (2)
From this we see that $a_{n+1}<a_{n}$, so the sequence $\\{a_{n}\\}$ is
decreasing and bounded below by 0, hence tends to a limit $a\geq 0$.
The authors then claim that if $a>0$ then $a_{n+1}\leq\Upsilon(a_{n})$. However,
what we showed in (2) does not support this conclusion, which is not justified
in any obvious way. Since the authors wish to contradict the hypothesis that
$a>0$ in order to derive that the sequence $\\{x_{n}\\}$ is a Cauchy sequence,
we must regard the assertion shown in Figure 4 as unproven.
### 3.4 Fixed point for Zamfirescu contraction
A Zamfirescu digital contraction is defined in Figure 5. This notion is used
in [11, 14], and will be discussed in the current section and in section 6.
Figure 5: Definition 3.2 of [13], used in [11, 14].
Figure 6 shows an assertion found on p. 10770 of [11].
Figure 6: Another “theorem” of [11], p. 10770
The argument offered as “proof” of this assertion considers cases. For an
arbitrary $x_{0}\in X$, a sequence is inductively defined via
$x_{n+1}=T(x_{n})$. For convenience, let us define
$M(x,y)=\max\left\\{d(x,y),\frac{d(x,Tx)+d(y,Ty)}{2},\frac{d(x,Ty)+d(y,Tx)}{2}\right\\}.$
The argument considers several cases.
* •
Case 1 says if $d(x_{n+1},x_{n})=M(x_{n+1},x_{n})$ then, by implied induction,
$d(x_{n+1},x_{n})\leq\lambda^{n}d(x_{1},x_{0}).$
But this argument is based on the unproven assumption that this is also the
case for all indices $i<n$; i.e., that $d(x_{i+1},x_{i})=M(x_{i+1},x_{i})$.
* •
Case 2 says if
$\frac{d(x_{n+1},Tx_{n+1})+d(x_{n},Tx_{n})}{2}=M(x_{n+1},x_{n}),$
then
$d(x_{n+1},x_{n})=d(Tx_{n},Tx_{n-1})\leq\lambda\frac{d(x_{n},x_{n-1})+d(x_{n-1},x_{n-2})}{2}\leq$
$\lambda d(x_{n},x_{n-1}).$
But in order for the second inequality in this chain to be true, we would need
$d(x_{n-1},x_{n-2})\leq d(x_{n},x_{n-1})$, and no reason is given to believe
the latter.
* •
Case 3 says that if
$\frac{d(x_{n+1},Tx_{n})+d(x_{n},Tx_{n+1})}{2}=M(x_{n+1},x_{n})$ then
$d(x_{n+1},x_{n})=d(Tx_{n},Tx_{n-1})\leq\lambda\frac{d(x_{n},x_{n-2})+d(x_{n-1},x_{n-1})}{2}.$
The correct upper bound, according to the definition shown in Figure 5, is
$\lambda\frac{d(x_{n+1},Tx_{n})+d(x_{n},Tx_{n+1})}{2}=\lambda\frac{d(x_{n+1},x_{n+1})+d(x_{n},x_{n+2})}{2}=$
$\lambda\frac{d(x_{n},x_{n+2})}{2}.$
Further, the conclusion reached by the authors for this case, that the
distances $d(x_{n+1},x_{n})$ are bounded above by an expression that tends to
0 as $n\to\infty$, depends on the unproven hypothesis that an analog of this
case holds for all indices $i<n$.
Thus all three cases considered by the authors are handled incorrectly. We
must conclude that the assertion of Figure 6 is unproven.
### 3.5 Fixed point for Rhoades contraction
Figure 7 shows the definition appearing in [11] of a Rhoades digital
contraction. The paper [11] claims the fixed point result shown in Figure 8
for such functions. The argument offered in “proof” of this assertion has
errors that are discussed below.
Figure 7: Definition of Rhoades digital contraction, [11], p. 10769 Figure 8:
Fixed point assertion for Rhoades digital contraction, [11], p. 10771
For convenience, let
$M(x,y)=\max\left\\{d(x,y),~{}\frac{d(x,Tx)+d(y,Ty)}{2},~{}d(x,Ty),~{}d(y,Tx)\right\\}.$
The authors’ argument considers cases corresponding to which of the embraced
expressions above gives the value of $M(x_{n+1},x_{n})$. In each case, the
authors assume without proof that the same case is valid for
$M(x_{i+1},x_{i})$, for all indices $i<n$.
Additional errors:
* •
In case 2, the inequality
$d(Tx_{n},Tx_{n-1})\leq\lambda\frac{d(x_{n},x_{n-1})+d(x_{n-1},x_{n-2})}{2}$
should be, according to Figure 7,
$d(x_{n},x_{n+1})=d(Tx_{n},Tx_{n-1})\leq\lambda\frac{d(x_{n},Tx_{n})+d(x_{n+1},Tx_{n-1})}{2}$
$=\lambda\frac{d(x_{n},x_{n+1})+d(x_{n+1},x_{n})}{2}=\lambda
d(x_{n},x_{n+1}).$
Note this implies
$x_{n}=x_{n+1},$ (3)
which would imply the existence of a fixed point. Also, the authors claim that
$\lambda\frac{d(x_{n},x_{n-1})+d(x_{n-1},x_{n-2})}{2}\leq\lambda
d(x_{n},x_{n-1}),$
which is equivalent to
$d(x_{n-1},x_{n-2})\leq d(x_{n},x_{n-1}).$
No reason is given in support of the latter; further, it undermines the later
claim that $\\{x_{n}\\}$ is a Cauchy sequence, since the authors did not
deduce (3).
* •
In case 3, it is claimed that
$\lambda d(x_{n},Tx_{n-1})\leq\lambda d(x_{n},x_{n-1}).$
This should be corrected to
$\lambda d(x_{n},Tx_{n-1})=\lambda d(x_{n},x_{n})=0,$
which would guarantee a fixed point.
* •
In case 4, we see the claim
$d(x_{n-1},Tx_{n})=d(x_{n-1},x_{n-1})=0.$
This should be corrected to
$d(x_{n-1},Tx_{n})=d(x_{n-1},x_{n+1}).$
In view of these errors, we must regard the assertion shown in Figure 8 as
unproven.
## 4 Assertions for weakly compatible maps in [1]
In this section, we show that the assertions of [1] are trivial or incorrect.
### 4.1 Theorem 3.1 of [1]
###### Definition 4.1.
[8] Let $S,T:X\to X$. Then $S$ and $T$ are weakly compatible or coincidentally
commuting if, for every $x\in X$ such that $S(x)=T(x)$ we have
$S(T(x))=T(S(x))$.
The assertion stated as Theorem 3.1 in [1] is shown in Figure 9. In this
section, we show the assertion is false except in a trivial case.
Note if $d$ is any $\ell_{p}$ metric (including the usual Euclidean metric)
then the requirements of closed subsets of $X$ are automatically satisfied,
since $(X,d)$ is a discrete space.
###### Theorem 4.2.
If functions $G,H,P,Q$ satisfy the hypotheses of Theorem 3.1 of [1] then
$G=H=P=Q$. Therefore, each of the pairs $(P,G)$ and $(Q,H)$ has a unique
common point of coincidence if and only if $N$ consists of a single point.
###### Proof.
We observe the following.
Figure 9: The assertion stated as Theorem 3.1 of [1]
* •
The inequality in the assertion simplifies as
$\Psi(d(Px,Qy))\leq-\,\frac{1}{2}\Psi(d_{G,H}(x,y)).$
Since the function $\Psi$ is non-negative, we have
$\Psi(d(Px,Qy))=\Psi(d_{G,H}(x,y))=0.$ (4)
Therefore, we have
$P(x)=Q(y)\mbox{ for all }x,y\in N.$ (5)
* •
In the equation for $d_{G,H}$ in Figure 9, the pairs of points listed on the
right side should be understood as having $d$ applied, i.e.,
$d_{G,H}(x,y)=\max\left\\{d(G(x),H(y)),\,d(G(x),P(x)),\,d(H(y),Q(y)),\,\frac{1}{2}[d(G(x),Q(y))+d(H(y),P(x))]\right\\}.$ (6)
Since $\Psi(x)=0$ if and only if $x=0$, we have from (4) that
$d_{G,H}(x,y)=0$, so (5) and (6) imply
$G(x)=P(x)=Q(x)=H(x)\mbox{ for all }x\in N.$
We conclude that $(P,G)$ and $(Q,H)$ are pairs of functions whose respective
common points of coincidence are unique if and only if $N$ consists of a
single point. ∎
### 4.2 Example 3.1 of [1]
In Figure 10, we see the assertion presented as Example 3.1 of [1].
Figure 10: The assertion stated as Example 3.1 of [1]
Note the following.
* •
If “$X=[4,40]$” is meant to mean the real interval from 4 to 40, then $X$ is
not a digital image, as all coordinates of members of a digital image must be
integers. Perhaps $X$ is meant to be the digital interval
$[4,40]_{{\mathbb{Z}}}=[4,40]\cap{\mathbb{Z}}$.
* •
The function $G$ is not defined on all of $[4,40]_{{\mathbb{Z}}}$, appears to
be doubly defined for some values of $x$ (notice the incomplete inequality at
the end of the third line), and is not restricted to integer values.
* •
The function $H$ is not defined on all of $[4,40]_{{\mathbb{Z}}}$ and is
doubly defined for $x\in\\{13,14\\}$.
* •
The function $Q$ is not defined for $x\in\\{9,10,11,12\\}$, and is doubly
defined for $x\in\\{14,15\\}$.
* •
The “above theorem” referenced in the last sentence of Figure 10 is the
assertion discredited by our Theorem 4.2, which shows that $P=Q=G=H$. Clearly,
the assertion shown in Figure 10 fails to satisfy the latter.
Thus, Example 3.1 of [1] is not useful.
### 4.3 Corollary 3.2 of [1]
The assertion presented as Corollary 3.2 of [1] is presented in Figure 11.
Notice that “weakly mappings” is an undefined phrase. No proof is given in [1]
for this assertion, and it is not clear how the assertion might follow from
previous assertions of the paper (which, as we have seen above, are also
flawed).
Perhaps “weakly mappings” is intended to be “weakly compatible mappings”. At
any rate, by labeling this assertion as a Corollary, the authors suggest that
it follows from the paper’s flawed “Theorem” 3.1.
The assertion presented as “Corollary” 3.2 of [1] must be regarded as
undefined and unproven.
Figure 11: The assertion stated as Corollary 3.2 of [1]
## 5 Assertion for coincidence and fixed points in [12]
Figure 12: The assertion presented as Theorem 4 of [12]
Figure 12 shows the assertion presented as Theorem 4 of [12]. The assertion as
stated is false. Flaws in this assertion include:
* •
“$D$” apparently should be “$L$”, and “$x_{x}$” apparently should be “$x$”.
* •
No restriction is stated for the value of $h$. Therefore, we can take $h=0$,
leaving the inequality in i) as $b(fx,fy)\geq 0$; since $b$ is a metric, this
inequality is not a restriction. Thus $f$ and $g$ are arbitrary; they need not
have a coincidence point or fixed points.
## 6 Assertion for Zamfirescu contractions in [14]
Let $f:X\to X$, where $(X,d,\kappa)$ is a digital metric space. Recall that a
Zamfirescu digital contraction [13] is defined in Figure 5.
We show the assertion presented as Theorem 4.1 of [14] in Figure 13.
Figure 13: The assertion presented as Theorem 4.1 of [14]
Observe:
* •
The symbol $\theta$ has distinct uses in this assertion. In the first line,
$\theta$ is introduced as both the metric and the adjacency of $X$. Since our
discussion below does not use an adjacency, we will assume $\theta$ is the
metric $d$.
* •
The symbol $\varphi$ seems intended to be a real number satisfying some
restriction, but no restriction is stated. Alternately, it may be that
$\varphi$ is intended to be a function to be applied to the $Max$ value in the
statement, but no description of such a function appears.
Perhaps most important, we have the following.
###### Theorem 6.1.
If $Y$ has more than one point and $d$ is any $\ell_{p}$ metric, then no
function $T$ satisfies the hypotheses shown in Figure 13.
###### Proof.
Suppose there is such a function $T$. By choice of $d$, there exist
$u_{0},v_{0}\in X$ such that
$d(u_{0},v_{0})=\min\\{d(x,y)~{}|~{}x,y\in X,x\neq y\\}.$
By the inequality stated in item 1 of Figure 13, $d(Tu_{0},Tv_{0})=0$. This
contradicts the hypothesis that $T$ is injective. ∎
## 7 Assertion for expansive map in [7]
The paper [7] claims to have a fixed point theorem for digital metric spaces.
However, it is not clear what the authors intend to assert, as the paper has
many undefined and unreferenced terms and many obscuring typographical errors.
The assertion stated as “Preposition” 2.7 of [7] (and also as an unlabeled
Proposition on p. 10769 of [11]) was shown in Example 4.1 of [6] to be false.
Figure 14: The statement presented as Definition 3.1 of [7]
The definition of what this paper calls a Generalized $(\alpha-\phi)$-$\psi$
expansive mapping for random variable is shown in Figure 14. We observe
the following.
* •
This is not the same as a $\beta$-$\psi$-$\phi$ expansive mapping
defined in [10].
* •
Notice the $\rho$ that appears intended to be the adjacency of the digital
metric space. This is significant in our discussion later.
* •
The set $\Psi$ is not defined anywhere in the paper. Perhaps it is meant to be
the set $\Psi$ of [10].
* •
The functions $d$, $\alpha$, and $M$ all have a third parameter $a$ that
appears to be extraneous, since each of these functions is used later with
only two parameters.
* •
One supposes $\mu$ and $\omega$ must be non-negative, but this is not stated.
The assertion presented as Theorem 3.3 of [7] is shown in Figure 15.
Figure 15: The assertion presented as Theorem 3.3 of [7]
In the statement of this assertion:
* •
“DGMS” appears, although it is not defined anywhere in the paper. Perhaps it
represents “digital metric space”.
* •
The term “exclusive RFP” is not defined anywhere in the paper. One supposes
the “FP” is for “fixed point”.
In the argument offered as “Verification” of this assertion, we note the
following.
* •
The second line of the verification contains an undefined operator, “$+$ $n$”,
which perhaps is meant to be “$+$”.
* •
The same line contains part of the phrase “$u_{n}$ is a unique point of $S$.”
What the authors intend by this is unclear.
* •
At the start of the long statement (3e), it is claimed that $M(\xi u_{n},\xi
u_{n+1})$ is the maximum of three expressions. The second term of the
expression for $M(\xi u_{n},\xi u_{n+1})$ applies $\rho$ to a numeric
expression. This makes no sense, since $\rho$ is the adjacency of $Y$ (see
Figure 14). Notice also that Figure 14 shows no such term in its expression
for the function $M$. The use of $\rho$, as a numeric value that has neither
been defined nor restricted to some range of values, propagates through both
of the cases considered.
* •
In the expression for $M(\xi u_{n},\xi u_{n+1})$, the third term, $\omega(\xi
u_{n},\xi u_{n-1})$, should be $\omega d(\xi u_{n},\xi u_{n-1})$ according to
Figure 14. This error repeats several times in statement (3e).
Other errors are present, but we have established enough to conclude that
whatever the authors were trying to prove is unproven.
## 8 Further remarks
We have shown that nearly every assertion introduced in the papers [11, 1, 12,
14, 7] is incorrect, unproven due to errors in the “proofs,” or trivial. These
papers are part of a larger body of highly flawed publications devoted to
fixed point assertions in digital metric spaces, and emphasize our contention
that the digital metric space is not a worthy subject of study.
## References
* [1] S.K. Barve, Q.A. Kabir, and R.D. Daheriya, Unique common fixed point theorem for weakly compatible mappings in digital metric space, International Journal of Scientific Research and Reviews 8 (1) (2019), 2114-2121.
* [2] L. Boxer, A classical construction for the digital fundamental group, Journal of Mathematical Imaging and Vision 10 (1999), 51-62.
* [3] L. Boxer, Remarks on Fixed Point Assertions in Digital Topology, 2, Applied General Topology 20 (1) (2019), 155-175.
* [4] L. Boxer, Remarks on Fixed Point Assertions in Digital Topology, 3, Applied General Topology 20 (2) (2019), 349-361.
* [5] L. Boxer, Remarks on Fixed Point Assertions in Digital Topology, 4, Applied General Topology 21 (2) (2020), 265-284.
* [6] L. Boxer and P.C. Staecker, Remarks on fixed point assertions in digital topology, Applied General Topology 20 (1) (2019), 135-153.
* [7] C. Chauhan, J. Singhal, S. Shrivastava, Q.A. Kabir, and P.K. Jha, Digital topology with fixed point, Materials Today: Proceedings 47 (2021), 7167-7169.
* [8] S. Dalal, Common fixed point results for weakly compatible map in digital metric spaces, Scholars Journal of Physics, Mathematics and Statistics 4 (4) (2017), 196-201.
* [9] O. Ege and I. Karaca, Digital homotopy fixed point theory, Comptes Rendus Mathematique 353 (11) (2015), 1029-1033.
* [10] K. Jyoti and A. Rani, Fixed point theorems for $\beta$-$\psi$-$\phi$-expansive type mappings in digital metric spaces, Asian Journal of Mathematics and Computer Research 24 (2) (2018), 56-66.
* [11] K. Jyoti and A. Rani, Fixed point theorems with digital contractions, International Journal of Current Advanced Research 7 (3(E)) (2018), 10768-10772.
* [12] A. Mishra, P.K. Tripathi, A.K. Agrawal, and D.R. Joshi, A contraction mapping method in digital image processing, International Journal of Recent Technology and Engineering 8 (4S5) (2019), 193-196.
* [13] L.N. Mishra, K. Jyoti, A. Rani, and Vandana, Fixed point theorems with digital contractions image processing, Nonlinear Science Letters A 9 (2) (2018), 104-115.
* [14] K. Rana and A. Garg, Various contraction conditions in digital metric spaces, Advances in Mathematics: Scientific Journal 9 (8) (2020), 5433-5441.
* [15] A. Rosenfeld, ‘Continuous’ functions on digital pictures, Pattern Recognition Letters 4 (1986), 177-184.
|
if $y,y^{\prime}\in M$ and $d_{g}(y,x)=d_{g}(y^{\prime},x)$ for all $x$ in
some open set ${\mathcal{U}}\subset M$, then $y=y^{\prime}$. ∎
We now proceed to prove Proposition 4.7. We first make some reductions using
the continuity of the distance function. By Proposition 4.2, for both $j=1,2$,
$\gamma_{g_{j}}([0,R_{0}],\xi^{\prime}_{0})\subset M_{j}$ are minimizing
segments between ${\bf 0}$ and $\gamma_{g_{j}}(R_{0},\xi^{\prime}_{0})\in
M_{j}$. Taking $R_{0}\in(0,T)$ slightly smaller we may assume without loss of
generality that both
$\displaystyle\gamma_{g_{1}}([0,R_{0}],\xi^{\prime}_{0}),\
\gamma_{g_{2}}([0,R_{0}],\xi^{\prime}_{0})\ {\mbox{are the unique minimizers
and have no conjugate points.}}$ (4.44)
Another observation is that with $y_{1}\in M_{1}$ and $y_{2}\in M_{2}$ fixed,
it suffices to prove (4.43) for a dense subset of $x\in{\mathcal{U}}$. As such
we only need to prove (4.43) for $x\in{\mathcal{U}}$ which is joined to
$y_{1}\in M_{1}$ and $y_{2}\in M_{2}$ by unique minimizing geodesics in
$M_{1}$ and $M_{2}$ which do not contain conjugate points. Furthermore we
assume that
$\displaystyle
x\notin\gamma_{g_{1}}([-\delta_{0},\delta_{0}],\xi^{\prime}_{0})=\gamma_{g_{2}}([-\delta_{0},\delta_{0}],\xi^{\prime}_{0}).$
(4.45)
To this end let $x_{1}\in{\mathcal{U}}$ satisfy the above assumptions. Let
$\gamma_{g_{2}}([0,R_{1}],\xi^{\prime}_{1})$ be the unique minimizing segment
between $x_{1}$ and $y_{2}$ in $M_{2}$ that does not contain conjugate points.
The distance $d_{g_{2}}(x_{1},y_{2})$ is given by
$\displaystyle R_{1}=d_{g_{2}}(x_{1},y_{2}).$ (4.46)
Again, by density, we may without loss of generality assume that
$\displaystyle\dot{\gamma}_{g_{2}}(R_{0},\xi^{\prime}_{0})\neq-\dot{\gamma}_{g_{2}}(R_{1},\xi^{\prime}_{1}).$
(4.47)
By the fact that $\gamma_{g_{2}}([0,R_{0}],\xi^{\prime}_{0})$ is the unique
minimizer between end points, (4.47) implies
$\displaystyle{\bf
0}\notin\gamma_{g_{2}}((0,2R_{0}],\xi^{\prime}_{0})\cup\gamma_{g_{2}}([0,R_{0}+R_{1}],\xi^{\prime}_{1}).$
(4.48)
Note that since both geodesic segments
$\gamma_{g_{2}}([0,R_{0}],\xi^{\prime}_{0})$ and
$\gamma_{g_{2}}([0,R_{1}],\xi^{\prime}_{1})$ are unique minimizers, the two
geodesic segments only intersect at the point $y_{2}$.
We set
$\eta^{\prime}_{l}:=-\dot{\gamma}_{g_{2}}(R_{l},\xi^{\prime}_{l})^{\flat}$ for
$l=0,1$ and use Lemma 2.7 to choose $\eta^{\prime}_{2}\in
S^{*}_{y_{2}}M_{2}\cap\operatorname{span}\\{\eta_{0}^{\prime},\eta_{1}^{\prime}\\}$
arbitrarily close to $\eta^{\prime}_{0}$ so that
$\displaystyle{\rm
Dim}\left(\operatorname{span}\\{-dt-\eta^{\prime}_{0},-dt-\eta^{\prime}_{1},-dt-\eta^{\prime}_{2}\\}\right)=3,\
{\rm
Dim}(\operatorname{span}\\{\eta_{0}^{\prime},\eta_{1}^{\prime},\eta_{2}^{\prime}\\})=2.$
(4.49)
Set $R_{2}:=R_{0}$. By Lemma 2.1 if $\eta^{\prime}_{2}$ is chosen close to
$\eta_{0}^{\prime}$, we can conclude that
$\gamma_{g_{2}}([0,R_{2}],\eta^{\prime}_{2})$ is a unique minimizing segment
containing no conjugate points. Set
$\xi^{\prime}_{2}:=-\dot{\gamma}_{g_{2}}(R_{2},\eta^{\prime}_{2})^{\flat}$
so that
$y_{2}=\gamma_{g_{2}}(R_{0},\xi^{\prime}_{2})=\gamma_{g_{2}}(R_{0},\xi^{\prime}_{0})=\gamma_{g_{2}}(R_{1},\xi^{\prime}_{1}).$
If $\eta^{\prime}_{2}$ is chosen sufficiently close to $\eta^{\prime}_{0}$,
condition (4.47) allows us to assert that
$\displaystyle\dot{\gamma}_{g_{2}}(R_{l},\xi^{\prime}_{l})\neq-\dot{\gamma}_{g_{2}}(R_{k},\xi^{\prime}_{k})$
(4.50)
for $l,k\in\\{0,1,2\\}$ with $l\neq k$.
Due to (4.48), if $\eta^{\prime}_{2}$ is chosen sufficiently close to
$\eta^{\prime}_{0}$ we have
$\displaystyle{\bf
0}\notin\gamma_{g_{2}}((0,2R_{0}],\xi^{\prime}_{0})\cup\gamma_{g_{2}}([0,R_{0}+R_{1}],\xi^{\prime}_{1})\cup\gamma_{g_{2}}([0,2R_{0}],\xi^{\prime}_{2}).$
(4.51)
By Proposition 4.1 the same holds for the $g_{1}$ geodesics:
$\displaystyle{\bf
0}\notin\gamma_{g_{1}}((0,2R_{0}],\xi^{\prime}_{0})\cup\gamma_{g_{1}}([0,R_{0}+R_{1}],\xi^{\prime}_{1})\cup\gamma_{g_{1}}([0,2R_{0}],\xi^{\prime}_{2}).$
(4.52)
Set $t_{0}=t_{2}=0$ and $t_{1}=R_{0}-R_{1}$ so that $t_{l}+R_{l}=R_{0}$ for
all $l=0,1,2$. Note that by (4.46) this means
$\displaystyle d_{g_{2}}(x_{1},y_{2})=R_{0}-t_{1}.$ (4.53)
Also, by the strict triangle inequality, $d_{g_{2}}(x_{0},x_{1})>|t_{1}|$ and
$d_{g_{2}}(x_{2},x_{1})>|t_{1}|$. So
$\displaystyle(t_{l},x_{l})\notin I_{g_{2}}^{+}(t_{k},x_{k}),\ k\neq l.$
(4.54)
Define the lightlike covectors
$\displaystyle\xi_{l}=-dt+\xi^{\prime}_{l}\in
T^{*}_{(t_{l},x_{l})}{\mathcal{M}}_{2}$ (4.55)
for $l=0,1,2$ so that
$\displaystyle(R_{0},y_{2})\in\bigcap_{l=0}^{2}\pi\circ{\rm
FLO}_{g_{2}}^{+}(t_{l},x_{l},\xi_{l})$ (4.56)
By (4.52) there exists an $h_{0}>0$ and $\delta>0$ such that
$\displaystyle B_{G}(2R_{0},{\bf
0};\delta)\cap\left(\bigcup_{l=0}^{2}\pi\circ{\rm FLO}_{g_{1}}^{+}({\rm
ccl}\mathcal{B}_{h_{0}}(\xi_{l})\cap{\rm Char}_{g_{1}})\right)=\emptyset$
(4.57)
Furthermore we can choose the $\delta>0$ in (4.57) to satisfy
###### Lemma 4.9.
i) There exists a $\delta>0$ and $h_{0}>0$ such that if
$\xi\in\mathcal{B}_{h_{0}}(\xi_{0})\cap{\rm Char}_{g_{1}}$ then the lightlike
geodesic segment
$I_{g_{1}}^{-}(B_{G}(2R_{0},{\bf 0};\delta))\cap\pi\circ{\rm
FLO}_{g_{1}}^{+}(0,{\bf 0},\xi)$
is the unique lightlike geodesic segment joining any two points on it.
ii) $I_{g_{1}}^{-}(2R_{0},{\bf 0})\cap\pi\circ{\rm FLO}_{g_{1}}^{+}(0,{\bf
0},\xi_{0})\subset[0,R_{0}]\times M_{1}$.
###### Proof.
i) By assumption $\gamma_{g_{1}}([0,R_{0}];\xi^{\prime}_{0})$ is the unique
minimizer between end points and does not have conjugate points. By Lemma 2.1
there exists an open set $U^{\prime}\subset S^{*}_{\bf 0}{\mathcal{U}}$ and
$\delta^{\prime}>0$ such that if $\xi^{\prime}\in U^{\prime}$ then
$\gamma_{g_{1}}([0,R_{0}+\delta^{\prime}],\xi^{\prime})$ is the unique
minimizer between end points without conjugate points. A consequence of this
is that
$\displaystyle
d_{g_{1}}\left(\gamma_{g_{1}}(R_{0}+\delta^{\prime}/2+t,\xi^{\prime}),{\bf
0}\right)>R_{0}+\delta^{\prime}/2-t$ (4.58)
for all $t\geq 0$ and $\xi^{\prime}\in U^{\prime}$. Furthermore for all
$\xi^{\prime}\in U^{\prime}$, the lightlike geodesic segment
$\\{\pi\circ e_{+}(s,0,{\bf 0},-dt+\xi^{\prime})\mid
s\in[0,R_{0}+\delta^{\prime}]\\}$
is the unique lightlike geodesic joining any two points on it. Due to (4.58),
for $\xi^{\prime}\in U^{\prime}$
$I^{-}_{g_{1}}(2R_{0}+\delta^{\prime}/2,{\bf 0})\cap{\rm
FLO}_{g_{1}}^{+}(0,{\bf
0},-dt+\xi^{\prime})\subset\\{(t,x)\in{\mathcal{M}}_{1}\mid
t\in(0,R_{0}+\delta^{\prime}/2)\\}.$
So if we choose $h_{0}>0$ to satisfy
$\left(\mathcal{B}_{h_{0}}(\xi_{0})\cap{\rm
Char}_{g_{1}}\right)/\mathbb{R}^{+}\subset\\{-dt+\xi^{\prime}\mid\xi^{\prime}\in
U^{\prime}\\},$
then for any $\xi\in\mathcal{B}_{h_{0}}(\xi_{0})\cap{\rm Char}_{g_{1}}$, the
geodesic segment
$I_{g_{1}}^{-}(2R_{0}+\delta^{\prime}/2,{\bf 0})\cap\pi\circ{\rm
FLO}_{g_{1}}^{+}(0,{\bf 0},\xi)$
is the only lightlike geodesic segment joining any two points on it.
Therefore, the lemma is verified if $\delta>0$ is chosen small enough so that
$B_{G}(2R_{0},{\bf 0};\delta)\subset\subset
I_{g_{1}}^{-}(2R_{0}+\delta^{\prime}/2,{\bf 0})$.
ii) This can be seen by taking $\xi^{\prime}=\xi^{\prime}_{0}$ in (4.58) and
observing that
$I_{g_{1}}^{-}(2R_{0},{\bf 0})=\\{(2R_{0}-t,x)\in{\mathcal{M}}_{1}\mid
d_{g_{1}}(x,{\bf 0})\leq t,\ t\geq 0\\}.$
∎
The conditions (4.49), (4.56), (4.54), (4.50) allow us to invoke Proposition
3.7 to conclude
###### Lemma 4.10.
We have that
1. (1)
For each $l=0,1,2$ there exists a sequence $\xi_{l;j}\in
T^{*}_{(t_{l},x_{l})}{\mathcal{M}}_{2}\cap{\rm Char}_{g_{2}}$ converging to
$\xi_{l}$.
2. (2)
For each $l=0,1,2$ and $j\in\mathbb{N}$ large there is an $h_{j}\in(0,h_{0})$
with
$\mathcal{B}_{h_{j}}(\xi_{l;j})\subset\subset\mathcal{B}_{h_{0}}(\xi_{l})$
so that if $h\in(0,h_{j})$ then there exist sources $f_{l;j}(\cdot;h)$ of the
form (2.12) such that
$\xi_{l;j}\in\operatorname{WF}(f_{l;j}(\cdot;h))\subset{\rm
ccl}(\mathcal{B}_{h}(\xi_{l;j})\cup\mathcal{B}_{h}(-\xi_{l;j})),\
\sigma(f_{l;j}(\cdot;h))(t_{l},x_{l},\xi_{l;j})\neq 0.$
3. (3)
There is a sequence $(\tilde{T}_{j},\tilde{x}_{j})\to(2R_{0},{\bf 0})$ with
$\tilde{T}_{j}>2R_{0}$, and $\tilde{x}_{j}\neq{\bf 0}$.
4. (4)
For any $j\in\mathbb{N}$ large and $h\in(0,h_{j})$ as above, and any
$a\in(0,h)$ sufficiently small, we have sources $f_{3;j}(\cdot;a)$ and
$f_{4;j}(\cdot;a)$ of the form (3.55) which are supported in
$B_{G}(\tilde{T}_{j},\tilde{x}_{j};a)$.
If $(t,x)\mapsto v^{j}(t,x;h,a)$ are solutions of (4.1) in ${\mathcal{M}}_{2}$
with source
$f^{j}(\cdot;h,a):=\sum_{l=0}^{2}\epsilon_{l}f_{l;j}(\cdot;h)+\sum_{l=3}^{4}\epsilon_{l}f_{l;j}(\cdot;a)$
then for each $j\in\mathbb{N}$ large
$\displaystyle\tilde{T}_{j}+d_{g_{2}}(\tilde{x}_{j},{\bf 0})\in{\rm
singsupp}(v^{j}_{01234}(\cdot,{\bf 0};h,a)).$ (4.59)
for all $0<a<h<h_{j}$ small.
Note that we now explicitly write out the dependence of the sources $f_{l;j}$
and solution $v^{j}$ on the parameters $0<a<h<h_{j}$.
Let $u^{j}(\cdot;h,a)$ solve (4.1) with metric $g_{1}$ and the same source as
$v^{j}(\cdot;h,a)$. The fivefold interaction of the nonlinear wave
$u^{j}(\cdot;h,a)$ is
$\displaystyle\Box_{g_{1}}u^{j}_{01234}(\cdot;h,a)=\sum_{\sigma\in
S_{5}}u^{j}_{\sigma(0)\sigma(1)\sigma(2)}(\cdot;h,a)u^{j}_{\sigma(3)}(\cdot;h,a)u^{j}_{\sigma(4)}(\cdot;h,a),\
u^{j}_{01234}(\cdot;h,a)\mid_{t<-1}=0$
We write the solution $u^{j}_{01234}(\cdot;h,a)$ of (4.3) as $u^{j}_{\rm
reg}+u^{j}_{\rm sing}(\cdot;h,a)$ where
$\displaystyle\Box_{g_{1}}u^{j}_{\rm reg}=\sum_{\sigma\in S_{5}\backslash
S_{3}}u^{j}_{\sigma(0)\sigma(1)\sigma(2)}(\cdot;h,a)u^{j}_{\sigma(3)}(\cdot;h,a)u^{j}_{\sigma(4)}(\cdot;h,a),\
u^{j}_{\rm reg}\mid_{t<-1}=0$ (4.61)
and
$\displaystyle\Box_{g_{1}}u^{j}_{\rm
sing}(\cdot;h,a)=6u^{j}_{012}(\cdot;h,a)u^{j}_{3}(\cdot;h,a)u_{4}^{j}(\cdot;h,a),\
u^{j}_{\rm sing}(\cdot;h,a)\mid_{t<-1}=0.$ (4.62)
Here $S_{3}\subset S_{5}$ denotes the set of elements of the permutation group
which map the set of three letters $\\{0,1,2\\}$ to itself. Repeating the
argument of Lemma 3.8 we get that
###### Lemma 4.11.
If $j\in\mathbb{N}$ is chosen large enough and $a\in(0,h)$ is chosen small
enough so that $B_{G}(\tilde{T}_{j},\tilde{x}_{j};a)\subset\subset
B_{G}(2R_{0},{\bf 0};\delta)$ and
$B_{G}(\tilde{T}_{j},\tilde{x}_{j};a)\cap(\mathbb{R}\times\\{{\bf
0}\\})=\emptyset$, then
$(\tilde{T}_{j}+d_{g}(\tilde{x}_{j},{\bf 0}),{\bf 0})\notin{\rm
singsupp}(u^{j}_{\rm reg}).$
###### Proof.
This is similar to the proof of Lemma 3.8, so we will only give a brief
sketch. To simplify notation we define
$f_{\rm source}:=\sum_{\sigma\in S_{5}\backslash
S_{3}}u^{j}_{\sigma(0)\sigma(1)\sigma(2)}(\cdot;h,a)u^{j}_{\sigma(3)}(\cdot;h,a)u^{j}_{\sigma(4)}(\cdot;h,a).$
Note that by (3.57)
$u^{j}_{4}=\chi_{a;j}\in
C^{\infty}_{c}(B_{G}(\tilde{T}_{j},\tilde{x}_{j};a)),\
u^{j}_{3}=\chi_{a;j}\langle D\rangle^{-N}\delta_{{\bf T}_{j}}\in
I({\mathcal{M}},N^{*}{\bf T}_{j}).$
Meanwhile by Proposition 2.9, for $l=0,1,2$
$u^{j}_{l}\in I({\mathcal{M}}_{1},T^{*}_{(t_{l},x_{l})}{\mathcal{M}}_{1},{\rm
FLO}_{g_{1}}^{+}({\rm
ccl}(\mathcal{B}_{h_{0}}(\xi_{l})\cup\mathcal{B}_{h_{0}}(-\xi_{l}))\cap{\rm
Char}_{g_{1}}).$
This combined with flowout condition (4.57) ensures that for $l=0,1,2$,
${\rm
singsupp}(u^{j}_{l})\cap\left(\operatorname{supp}(u_{3}^{j})\cup\operatorname{supp}(u_{4}^{j})\right)=\emptyset.$
Using these facts we can proceed as in Lemma 3.8 to conclude that
${\rm FLO}_{g_{1}}^{-}(\tilde{T}_{j}+d_{g}(\tilde{x}_{j},{\bf 0}),{\bf
0},\xi)\cap\operatorname{WF}(f_{\rm source})=\emptyset$
for all $\xi\in T^{*}_{(\tilde{T}_{j}+d_{g}(\tilde{x}_{j},{\bf 0}),{\bf
0})}{\mathcal{U}}\cap{\rm Char}_{g_{1}}$. The lemma then follows from Theorem
23.2.9 of [Hö07]. ∎
With these facts we can now give the
###### Proof of Proposition 4.7.
From (4.59) we can use Lemma 2.16 to deduce
$\tilde{T}_{j}+d_{g_{2}}(\tilde{x}_{j},{\bf 0})\in{\rm
singsupp}\left(\partial^{5}_{\epsilon_{0}\dots\epsilon_{4}}\left(v^{j}(\cdot,{\bf
0};h,a)\right)\mid_{\epsilon_{0}=\cdots=\epsilon_{4}=0}\right).$
By the fact that the source-to-solution maps agree, $t\mapsto u^{j}(t,{\bf
0};h,a)$ is the same as $t\mapsto v^{j}(t,{\bf 0};h,a)$. So we have that
$\tilde{T}_{j}+d_{g_{2}}(\tilde{x}_{j},{\bf 0})\in{\rm
singsupp}\left(\partial^{5}_{\epsilon_{0}\dots\epsilon_{4}}\left(u^{j}(\cdot,{\bf
0};h,a)\right)\mid_{\epsilon_{0}=\cdots=\epsilon_{4}=0}\right).$
Lemma 2.16 then implies $\tilde{T}_{j}+d_{g}(\tilde{x}_{j},{\bf 0})\in{\rm
singsupp}(u^{j}_{01234}(\cdot,{\bf 0};h,a))$. By Lemma 4.11 we have that
$\displaystyle(\tilde{T}_{j}+d_{g}(\tilde{x}_{j},{\bf 0}),{\bf 0})\in{\rm
singsupp}(u^{j}_{\rm sing}(\cdot;h,a)).$ (4.63)
Note that for all $a>0$ small, the distribution $u^{j}_{\rm sing}(\cdot;h,a)$
solves (4.62) and the source term of (4.62) is supported in
$B_{G}(\tilde{T}_{j},\tilde{x}_{j};a)$ since
$\operatorname{supp}(u_{4}^{j}(\cdot;a))\subset
B_{G}(\tilde{T}_{j},\tilde{x}_{j};a)$ by (3.57). Note that by Condition (3) of
Lemma 4.10, $\tilde{x}_{j}\neq{\bf 0}$. The solution $u^{j}_{4}(\cdot;a)$ is smooth
and the wavefront set of $u^{j}_{3}(\cdot;a)$ is spacelike. So in order for
(4.63) to hold,
$B_{G}((\tilde{T}_{j},\tilde{x}_{j}),a)\cap{\rm
singsupp}(u^{j}_{012}(\cdot;h))\neq\emptyset$
for all $a>0$. We remark that $u^{j}_{012}(\cdot;h)$ does not depend on the
parameter $a>0$ since $f_{l;j}(\cdot;h)$ depends only on the parameter $h>0$
for $l=0,1,2$. So taking the intersection over all $a>0$, we conclude that, for
each fixed $j\in\mathbb{N}$ large, if $h\in(0,h_{j})$ is sufficiently small,
$\displaystyle(\tilde{T}_{j},\tilde{x}_{j})\in{\rm
singsupp}(u^{j}_{012}(\cdot;h)).$ (4.64)
Fix $j\in\mathbb{N}$ large and for each $h>0$ sufficiently small
$u^{j}_{012}(\cdot;h)$ solves
$\Box_{g_{1}}u^{j}_{012}(\cdot;h)=-6u^{j}_{0}(\cdot;h)u^{j}_{1}(\cdot;h)u^{j}_{2}(\cdot;h),\
u^{j}_{012}(\cdot;h)\mid_{t<-1}=0$
with each of $u^{j}_{l}(\cdot;h)$ solving a linear inhomogeneous wave equation
with source $f_{l;j}(\cdot;h)$ so that ${\rm
singsupp}(u^{j}_{l}(\cdot;h))\subset\pi\circ{\rm
FLO}_{g_{1}}^{+}(\operatorname{WF}(f_{l;j}(\cdot;h))\cap{\rm Char}_{g_{1}})$
by Proposition 2.9.
Take $j\in\mathbb{N}$ large enough so that $(\tilde{T}_{j},\tilde{x}_{j})\in
B_{G}((2R_{0},{\bf 0}),\delta)$ where $\delta>0$ is chosen so that (4.57) and
the conclusion of Lemma 4.9 hold. By Lemma 2.11 and (4.57), in order for
(4.64) to hold we must have
$\displaystyle
I_{g_{1}}^{-}((\tilde{T}_{j},\tilde{x}_{j}))\cap\bigcap_{l=0}^{2}\pi\circ{\rm
FLO}_{g_{1}}^{+}(\operatorname{WF}(f_{l;j}(\cdot;h))\cap{\rm
Char}_{g_{1}})\neq\emptyset$ (4.65)
for all $h\in(0,h_{j})$.
By Condition (2) of Lemma 4.10,
$\operatorname{WF}(f_{l;j}(\cdot;h))\subset{\rm
ccl}\mathcal{B}_{h}(\pm\xi_{l;j})$ for $l=0,1,2$ so (4.65) becomes
$\displaystyle
I_{g_{1}}^{-}((\tilde{T}_{j},\tilde{x}_{j}))\cap\bigcap_{l=0}^{2}\pi\circ{\rm
FLO}_{g_{1}}^{+}({\rm ccl}\mathcal{B}_{h}(\xi_{l;j})\cap{\rm
Char}_{g_{1}})\neq\emptyset$ (4.66)
for all $h>0$. Furthermore we must have that
$\displaystyle\forall h\in(0,h_{j}),\exists\eta_{h}\in
T_{(t_{h},x_{h})}^{*}{\mathcal{M}}_{1}/\mathbb{R}^{+}\cap{\rm Char}_{g_{1}},\
(t_{h},x_{h})\in\bigcap_{l=0}^{2}\pi\circ{\rm FLO}_{g_{1}}^{+}({\rm
ccl}\mathcal{B}_{h}(\xi_{l;j})\cap{\rm Char}_{g_{1}})$ $\displaystyle{\rm
s.t.}\ (\tilde{T}_{j},\tilde{x}_{j})\in\pi\circ{\rm
FLO}^{+}_{g_{1}}(t_{h},x_{h},\eta_{h})$ (4.67)
for otherwise $(\tilde{T}_{j},\tilde{x}_{j})\notin{\rm
singsupp}(u_{012}^{j}(\cdot;h))$ by Theorem 23.2.9 of [Hö07].
Since this holds for all $h>0$, compactness then dictates that there exists
$(\hat{R}_{j},\hat{y}_{j})\in{\mathcal{M}}_{1}$ such that
$\displaystyle(\hat{R}_{j},\hat{y}_{j})\in
I_{g_{1}}^{-}((\tilde{T}_{j},\tilde{x}_{j}))\cap\bigcap_{l=0}^{2}\pi\circ{\rm
FLO}_{g_{1}}^{+}(t_{l},x_{l},\xi_{l;j}).$ (4.68)
By Lemma 4.9, $(\hat{R}_{j},\hat{y}_{j})$ is the unique element in (4.68).
Therefore we have that
$\displaystyle(\hat{R}_{j},\hat{y}_{j})=I_{g_{1}}^{-}((\tilde{T}_{j},\tilde{x}_{j}))\cap\bigcap_{l=0}^{2}\pi\circ{\rm
FLO}_{g_{1}}^{+}(t_{l},x_{l},\xi_{l;j}).$ (4.69)
And by (4.3), there is a covector $\hat{\eta}_{j}\in
T^{*}_{(\hat{R}_{j},\hat{y}_{j})}{\mathcal{M}}_{1}/\mathbb{R}^{+}\cap{\rm
Char}_{g_{1}}$ such that
$\displaystyle(\tilde{T}_{j},\tilde{x}_{j})\in\pi\circ{\rm
FLO}_{g_{1}}^{+}(\hat{R}_{j},\hat{y}_{j},\hat{\eta}_{j}).$ (4.70)
Taking a converging subsequence as $j\to\infty$ in (4.69) and (4.70) we have,
by Conditions (1) and (3) of Lemma 4.10,
$\displaystyle(\hat{R}_{0},\hat{y}_{0})=I_{g_{1}}^{-}(2R_{0},{\bf
0})\cap\bigcap_{l=0}^{2}{\rm FLO}_{g_{1}}^{+}(t_{l},x_{l},\xi_{l}),$ (4.71)
$\displaystyle(2R_{0},{\bf 0})\in\pi\circ{\rm
FLO}_{g_{1}}^{+}(\hat{R}_{0},\hat{y}_{0},\hat{\eta})$ (4.72)
for some $(\hat{R}_{0},\hat{y}_{0})\in{\mathcal{M}}_{1}$ and $\hat{\eta}\in
T^{*}_{(\hat{R}_{0},\hat{y}_{0})}{\mathcal{M}}_{1}\cap{\rm Char}_{g_{1}}$.
Since $\xi_{0}\in T^{*}_{(0,{\bf 0})}{\mathcal{M}}_{1}\cap{\rm Char}_{g_{1}}$
(see (4.55)), this says that
$\displaystyle(\hat{R}_{0},\hat{y}_{0})\in{\rm FLO}_{g_{1}}^{+}(0,{\bf
0},\xi_{0})$ (4.73)
which in turn implies that
$\displaystyle\hat{y}_{0}=\gamma_{g_{1}}(\hat{R}_{0},\xi^{\prime}_{0}).$
(4.74)
Invoking part ii) of Lemma 4.9 combined with (4.73), we conclude
$\displaystyle\hat{R}_{0}\leq R_{0}.$ (4.75)
By (4.44), the geodesic segment $\gamma_{g_{1}}([0,R_{0}],\xi^{\prime}_{0})$
is the unique minimizer between the end points ${\bf 0}$ and $y_{1}$. So
(4.75) and (4.74) combine to give
$\displaystyle d_{g_{1}}(\hat{y}_{0},y_{1})=R_{0}-\hat{R}_{0}.$ (4.76)
By (4.55), $\xi_{1}\in T^{*}_{(t_{1},x_{1})}{\mathcal{M}}_{1}\cap{\rm
Char}_{g_{1}}$. So since $(\hat{R}_{0},\hat{y}_{0})\in{\rm
FLO}_{g_{1}}^{+}(t_{1},x_{1},\xi_{1})$ by (4.71), we can conclude that
$\displaystyle\hat{y}_{0}=\gamma_{g_{1}}(\hat{R}_{0}-t_{1},\xi_{1}^{\prime}),\
x_{1}=\gamma_{g_{1}}(0,\xi^{\prime}_{1}).$ (4.77)
Combining (4.77) and (4.76) we get
$d_{g_{1}}(x_{1},y_{1})\leq
d_{g_{1}}(x_{1},\hat{y}_{0})+d_{g_{1}}(\hat{y}_{0},y_{1})\leq(\hat{R}_{0}-t_{1})+(R_{0}-\hat{R}_{0})=R_{0}-t_{1}.$
So by (4.53) we get that
$d_{g_{1}}(x_{1},y_{1})\leq d_{g_{2}}(x_{1},y_{2}).$
∎
### 4.4. Recovering the Riemannian Structure from Distance Data
We now complete the proof of Theorem 1.1 with the following proposition. Let
$(M_{i},g_{i})$, $i=1,2$, be complete Riemannian manifolds. For some $x_{i}\in
M_{i}$ and $T>0$, we define
$\displaystyle V_{i}:=\big{\\{}\xi^{\prime}\in T_{x_{i}}M_{i}\ \big{|}\
d_{g_{i}}(x_{i},\exp_{x_{i}}(\xi^{\prime}))=\|\xi^{\prime}\|_{g_{i}}<T\big{\\}}.$
Notice that $V_{i}$ is precisely the set of those tangent vectors
$\xi^{\prime}\in T_{x_{i}}M_{i}$ whose associated geodesic
$\gamma(\cdot,\xi^{\prime}):[0,1]\to M_{i}$,
$\gamma(t,\xi^{\prime})=\exp_{x_{i}}(t\xi^{\prime})$ is length-minimizing.
###### Proposition 4.12.
Assume that, for some open neighborhoods ${\mathcal{U}}_{i}\subset
B_{g_{i}}(x_{i},T)$ of $x_{i}$, there exists an isometry
$\psi:({\mathcal{U}}_{1},g_{1})\to({\mathcal{U}}_{2},g_{2})$ such that
* (i)
$d\psi(x_{1})V_{1}=V_{2}$,
* (ii)
$d_{g_{1}}(y,\exp_{x_{1}}(\xi^{\prime}))=d_{g_{2}}(\psi(y),\exp_{x_{2}}(d\psi(x_{1})\xi^{\prime}))$
for all $y\in{\mathcal{U}}_{1}$ and $\xi^{\prime}\in V_{1}$.
Then, there is an extension of $\psi$ to an isometry
$\psi:(B_{g_{1}}(x_{1},T),g_{1})\to(B_{g_{2}}(x_{2},T),g_{2}).$
###### Proof.
Since the Riemannian manifolds $(M_{i},g_{i})$ are complete, every point of
$B_{g_{i}}(x_{i},T)$ can be connected to $x_{i}$ by means of a length-
minimizing geodesic. Namely,
$\exp_{x_{i}}(V_{i})=B_{g_{i}}(x_{i},T).$
We claim that there exists a continuous bijection $\phi:B_{g_{1}}(x_{1},T)\to
B_{g_{2}}(x_{2},T)$ such that
$\displaystyle\phi(\exp_{x_{1}}(\xi^{\prime}))=\exp_{x_{2}}(d\psi(x_{1})\xi^{\prime}),\qquad\forall\xi^{\prime}\in
V_{1}$
Indeed, since the isometry $\psi$ maps geodesics to geodesics, such a $\phi$
is clearly well defined on a Riemannian ball
$B_{g_{1}}(x_{1},\epsilon)\subset{\mathcal{U}}_{1}$, and indeed
$\phi|_{B_{g_{1}}(x_{1},\epsilon)}=\psi|_{B_{g_{1}}(x_{1},\epsilon)}$. Assume
that there exist two distinct vectors $\xi^{\prime},\eta^{\prime}\in V_{1}$
such that $y_{1}:=\exp_{x_{1}}(\xi^{\prime})=\exp_{x_{1}}(\eta^{\prime})$. All
we need to show in order to have a well defined continuous bijection $\phi$ is
that
$\displaystyle\exp_{x_{2}}(d\psi(x_{1})\xi^{\prime})=\exp_{x_{2}}(d\psi(x_{1})\eta^{\prime}).$
(4.78)
We set $z_{1}:=\exp_{x_{1}}(\delta\eta^{\prime})\in{\mathcal{U}}_{1}$ for some $\delta>0$
small enough, $z_{2}:=\exp_{x_{2}}(d\psi(x_{1})\delta\eta^{\prime})$, and
$y_{2}:=\exp_{x_{2}}(d\psi(x_{1})\xi^{\prime})$. We have
$\displaystyle d_{g_{2}}(x_{2},y_{2})$
$\displaystyle=\|\xi^{\prime}\|_{g_{1}}=d_{g_{1}}(x_{1},y_{1})=d_{g_{1}}(x_{1},z_{1})+d_{g_{1}}(z_{1},y_{1})=d_{g_{2}}(x_{2},z_{2})+d_{g_{2}}(z_{2},y_{2}),$
where the last equality follows by assumption (ii). This implies that $z_{2}$
lies on a length-minimizing geodesic segment joining $x_{2}$ and $y_{2}$, and
therefore (4.78) holds.
We now prove that $\phi$ is a diffeomorphism. Fix an arbitrary point $y_{1}\in
B_{g_{1}}(x_{1},T)$, and consider the unit-speed length-minimizing geodesic
$\gamma_{1}:[0,\ell]\to M_{1}$ joining $x_{1}=\gamma_{1}(0)$ and
$y_{1}=\gamma_{1}(\ell)$. We fix $\delta>0$ small enough so that
$z_{1}:=\gamma_{1}(\delta)\in B_{g_{1}}(x_{1},\epsilon)$. Notice that
$\gamma_{1}|_{[\delta,\ell]}$ is the unique length-minimizing geodesic segment
joining $z_{1}$ and $y_{1}$, and does not contain conjugate points.
Analogously, $\gamma_{2}:=\phi\circ\gamma_{1}|_{[\delta,\ell]}$ is the unique
length-minimizing geodesic segment joining $z_{2}:=\phi(z_{1})$ and
$y_{2}:=\phi(y_{1})$, and does not have conjugate points. Therefore, the
Riemannian distances $d_{g_{i}}$ are smooth in an open neighborhood
$Z_{i}\times Y_{i}$ of $(z_{i},y_{i})$. We take $Z_{i}$ to be small enough so
that $Z_{1}\subset B_{g_{1}}(x_{1},\epsilon)$ and $\phi(Z_{1})=Z_{2}$. For
each $w\in Z_{i}$, the derivative $\partial_{y_{i}}d_{g_{i}}(w,y_{i})\in
S_{y_{i}}^{*}M_{i}$ is precisely the $g_{i}$-dual of the tangent vector to the
unique length-minimizing geodesic segment joining $w$ and $y_{i}$. Therefore,
for a generic triple of points $w_{1},w_{2},w_{3}\in Z_{1}$ sufficiently close
to $z_{1}$, the triple
$\partial_{y_{1}}d_{g_{1}}(w_{1},y_{1}),\partial_{y_{1}}d_{g_{1}}(w_{2},y_{1}),\partial_{y_{1}}d_{g_{1}}(w_{3},y_{1})$
is a basis of $T_{y_{1}}^{*}M_{1}$, and the triple
$\partial_{y_{2}}d_{g_{2}}(\phi(w_{1}),y_{2}),\partial_{y_{2}}d_{g_{2}}(\phi(w_{2}),y_{2}),\partial_{y_{2}}d_{g_{2}}(\phi(w_{3}),y_{2})$
is a basis of $T_{y_{2}}^{*}M_{2}$. This implies that, up to shrinking the
open neighborhoods $Y_{i}$ of $y_{i}$, the maps
$\displaystyle\kappa_{1}:Y_{1}\to\mathbb{R}^{3},\qquad\kappa_{1}(y)$
$\displaystyle=(d_{g_{1}}(w_{1},y),d_{g_{1}}(w_{2},y),d_{g_{1}}(w_{3},y)),$
$\displaystyle\kappa_{2}:Y_{2}\to\mathbb{R}^{3},\qquad\kappa_{2}(y)$
$\displaystyle=(d_{g_{2}}(\phi(w_{1}),y),d_{g_{2}}(\phi(w_{2}),y),d_{g_{2}}(\phi(w_{3}),y))$
are diffeomorphisms onto their images. By assumption (ii), we have
$\kappa_{1}=\kappa_{2}\circ\phi|_{Y_{1}}$, and therefore
$\phi|_{Y_{1}}=\kappa_{2}^{-1}\circ\kappa_{1}$. This proves that $\phi$ is a
local diffeomorphism at $y_{1}$. Moreover, for all $z\in Z_{1}$, by
differentiating the equality
$d_{g_{1}}(z,y_{1})=d_{g_{2}}(\phi(z),\phi(y_{1}))$ with respect to $y_{1}$,
we obtain
$\displaystyle\underbrace{\partial_{y_{1}}d_{g_{1}}(z,y_{1})}_{\in
S^{*}_{y_{1}}M_{1}}=\partial_{y_{2}}d_{g_{2}}(\phi(z),y_{2})d\phi(y_{1})=d\phi(y_{1})^{*}\underbrace{\partial_{y_{2}}d_{g_{2}}(\phi(z),y_{2})}_{\in
S^{*}_{y_{2}}M_{2}}.$
Since this holds for all $z$ in the open set $Z_{1}$, we obtain non-empty open
sets $W_{i}\subset S^{*}_{y_{i}}M_{i}$ such that
$d\phi(y_{1})^{*}W_{2}=W_{1}$. This implies
$d\phi(y_{1})^{*}S^{*}_{y_{2}}M_{2}=S^{*}_{y_{1}}M_{1}$, and therefore
$(\phi^{*}g_{2})|_{y_{1}}=g_{1}|_{y_{1}}$. Since $y_{1}$ is arbitrary, we
conclude that $\phi$ is a Riemannian isometry as claimed. ∎
## References
* [CLOP21] Xi Chen, Matti Lassas, Lauri Oksanen, and Gabriel P. Paternain, _Inverse problem for the Yang–Mills equations_ , Communications in Mathematical Physics (2021), 1–39.
* [CMOP19] X. Chen, M. Lassas, L. Oksanen, and G. P. Paternain, _Detection of Hermitian connections in wave equations with cubic non-linearity_ , arXiv:1902.05711, 02 2019.
* [dHUW19] Maarten de Hoop, Gunther Uhlmann, and Yiran Wang, _Nonlinear responses from the interaction of two progressing waves at an interface_ , Ann. Inst. H. Poincaré Anal. Non Linéaire 36 (2019), no. 2, 347–363. MR 3913189
* [dHUW20] by same author, _Nonlinear interaction of waves in elastodynamics and an inverse problem_ , Math. Ann. 376 (2020), no. 1-2, 765–795. MR 4055177
* [Dui96] J. J. Duistermaat, _Fourier integral operators_ , Progress in Mathematics, vol. 130, Birkhäuser Boston, Inc., Boston, MA, 1996. MR 1362544
* [FO19] Ali Feizmohammadi and Lauri Oksanen, _Recovery of zeroth order coefficients in non-linear wave equations_.
* [GU93] Allan Greenleaf and Gunther Uhlmann, _Recovering singularities of a potential from singularities of scattering data_ , Comm. Math. Phys. 157 (1993), no. 3, 549–572. MR 1243710
* [Hö07] Lars Hörmander, _The analysis of linear partial differential operators. III_ , Classics in Mathematics, Springer, Berlin, 2007, Pseudo-differential operators, Reprint of the 1994 edition. MR 2304165
* [KLOU14] Yaroslav Kurylev, Matti Lassas, Lauri Oksanen, and Gunther Uhlmann, _Inverse problem for Einstein-scalar field equations_.
* [KLU13] Yaroslav Kurylev, Matti Lassas, and Gunther Uhlmann, _Determination of structures in the space-time from local measurements: a detailed exposition_.
* [KLU14] by same author, _Inverse problems in spacetime I: Inverse problems for Einstein equations - extended preprint version_.
* [KLU18] Yaroslav Kurylev, Matti Lassas, and Gunther Uhlmann, _Inverse problems for Lorentzian manifolds and non-linear hyperbolic equations_ , Invent. Math. 212 (2018), no. 3, 781–857. MR 3802298
* [KSU07] Carlos E. Kenig, Johannes Sjöstrand, and Gunther Uhlmann, _The Calderón problem with partial data_ , Ann. of Math. (2) 165 (2007), no. 2, 567–591. MR 2299741
* [LUW17] Matti Lassas, Gunther Uhlmann, and Yiran Wang, _Determination of vacuum space-times from the Einstein-Maxwell equations_.
* [LUW18] Matti Lassas, Gunther Uhlmann, and Yiran Wang, _Inverse problems for semilinear wave equations on Lorentzian manifolds_ , Comm. Math. Phys. 360 (2018), no. 2, 555–609. MR 3800791
* [MN21] M. Nursultanov, L. Oksanen, and L. Tzou, _Single observer determination of Lorentzian manifolds_ , in preparation (2021), 1–39.
* [MU79] R. B. Melrose and G. A. Uhlmann, _Lagrangian intersection and the Cauchy problem_ , Comm. Pure Appl. Math. 32 (1979), no. 4, 483–519. MR 528633
* [OSSU20] Lauri Oksanen, Mikko Salo, Plamen Stefanov, and Gunther Uhlmann, _Inverse problems for real principal type operators_.
* [UW18] Gunther Uhlmann and Yiran Wang, _Convolutional neural networks in phase space and inverse problems_.
* [UW20] Gunther Uhlmann and Yiran Wang, _Determination of space-time structures from gravitational perturbations_ , Comm. Pure Appl. Math. 73 (2020), no. 6, 1315–1367. MR 4156604
* [UZ19] Gunther Uhlmann and Jian Zhai, _On an inverse boundary value problem for a nonlinear elastic wave equation_.
the Nash equilibrium, i.e., a state beyond which neither network can improve
its performance unilaterally [48]. While this cooperative architecture aims to
optimize a global loss function, the optimization problems faced by the
individual networks are fundamentally opposing. Due to this complexity in the
loss function, there can be situations where some minor adjustments in one
network can trigger substantial modifications in the other. Moreover, when
both the networks aim to independently optimize their loss functions without
coordination, attaining the Nash equilibrium can be hard. Such instances of
desynchronization between the networks can lead to instability in the overall
learning process and substantially increase the computation time [221]. To
counter this challenge, recent advancements in GAN architectures have been
focusing on enhancing training stability. The feature matching technique
improves the stability of the GAN framework by introducing an alternative cost
function for $G$ that matches statistics of the discriminator's intermediate
features rather than its final output [202].
Additionally, historical averaging of the parameters [202], unrolled GAN
[219], and gradient penalty [122] strategies mitigate learning instability and
promote convergence of the model.
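As a rough sketch of the feature-matching idea (not the exact formulation of [202]), the generator's cost compares batch statistics of a discriminator feature on real and generated samples instead of the discriminator's final classification output. Here `feature_fn` is a hypothetical stand-in for an intermediate discriminator layer, and the samples are plain scalars:

```python
def feature_mean(samples, feature_fn):
    # Batch mean of a (hypothetical) intermediate discriminator feature.
    feats = [feature_fn(x) for x in samples]
    return sum(feats) / len(feats)

def feature_matching_loss(real_batch, fake_batch, feature_fn):
    # Squared distance between the mean features of the real and generated
    # batches; the generator is trained to drive this toward zero.
    return (feature_mean(real_batch, feature_fn)
            - feature_mean(fake_batch, feature_fn)) ** 2
```

If the generated batch already matches the real batch's feature statistics, the loss vanishes even though the individual samples differ, which makes the objective less brittle than fooling the discriminator sample by sample.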
### VIII-D Stopping Problem
During GAN training, determining the appropriate time at which the networks
are fully optimized is crucial for addressing the problems related to
overfitting and underfitting. However, because of the minimax objective
function, the state of the networks cannot be reliably determined from their
respective loss functions. To address this issue with the GAN stopping
criterion, researchers often employ an early stopping approach, where training
halts based on a predefined threshold or a lack of improvement in evaluation
metrics.
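A minimal sketch of such an early-stopping rule, assuming a hypothetical `eval_metric(epoch)` callback that returns a lower-is-better evaluation score (e.g. an FID-style metric) computed after each epoch:

```python
def train_with_early_stopping(eval_metric, max_epochs=100, patience=5, min_delta=1e-4):
    # Halt when the score fails to improve by at least min_delta for
    # `patience` consecutive epochs; return (last epoch run, best score).
    best_score, epochs_without_improvement = float("inf"), 0
    for epoch in range(max_epochs):
        score = eval_metric(epoch)  # e.g. recompute the metric on a held-out set
        if score < best_score - min_delta:
            best_score, epochs_without_improvement = score, 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                return epoch, best_score  # stopped early
    return max_epochs - 1, best_score
```

In practice the `patience` and `min_delta` thresholds are the "predefined threshold" mentioned above, and the evaluation metric is computed on samples from $G$ rather than from the adversarial losses themselves.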
### VIII-E Internal Distributional Shift
The internal distributional shift, often called internal covariate shift,
refers to the change in the distribution of the network activations of the
current layer relative to the previous layer. In the context of GANs, when the
generator’s parameters are updated, the distribution of its output may change,
leading to internal distributional shifts in subsequent layers and causing the
discriminator’s learning to lag behind. This phenomenon hampers the
convergence of the GAN training process, and countering the shifts
significantly increases the computational complexity of the network. To
address this issue, the batch normalization technique is widely adopted in
various applications of GANs [222].
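The normalization step itself is straightforward; below is a minimal sketch for a batch of scalar activations (real batch-normalization layers also track running statistics and operate per channel, which is omitted here):

```python
def batch_norm(batch, gamma=1.0, beta=0.0, eps=1e-5):
    # Standardize the batch to zero mean and unit variance, then apply
    # the learnable scale (gamma) and shift (beta) parameters.
    n = len(batch)
    mean = sum(batch) / n
    var = sum((x - mean) ** 2 for x in batch) / n
    return [gamma * (x - mean) / (var + eps) ** 0.5 + beta for x in batch]
```

Because each layer then sees inputs with stable first and second moments regardless of how the preceding parameters moved, the downstream network is less affected by the shift.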
## IX DISCUSSION
Over the past decade, GANs have emerged as the foremost and pivotal generative
architecture within the areas of computer vision, natural language processing,
and related fields. To enhance the performance of GAN architecture, numerous
studies have focused on the following: (i) the generation of high-quality
samples, (ii) diversity in the simulated samples, and (iii) stabilizing the
training algorithm. Constant efforts and improvements of the GAN model have
resulted in plausible sample generation, text/image-to-image translations,
data augmentation, style transfer, anomaly detection, and other applied
domains.
Recent advancements in machine learning driven by Diffusion models [22, 223,
224], also known as score-based generative models, have made a strong
impression on a variety of tasks including image denoising, image inpainting,
image super-resolution, and image generation. The primary goal of Diffusion
models is to learn the latent structure of the dataset by modeling the way in
which data points diffuse through the latent space. [225] has shown that
Diffusion models outperform GANs on image synthesis due to their better
stability and absence of mode collapse. However, the cost of synthesizing new
samples and the computational time required to produce realistic images are
shortcomings when Diffusion models are applied to real-time applications [226,
227]. Because GANs need fine-tuning of their hyperparameters, Transformers
[19], which adopt self-attention layers, have been used to enhance the results
of GANs. This helps in designing larger models by replacing the neural network
models of $G$ and $D$ within the GAN structure. TransGAN [228]
introduces a GAN architecture without convolutions by using Transformers in
both $G$ and $D$ of the GAN resulting in improved high-resolution image
generation. [229] presented an intersection of GANs and Transformers to
predict pedestrian paths. Although Transformers and their variants have
several advantages, they suffer from high computational (time and resource)
complexity [230]. More recently, physics-informed neural networks (PINNs)
[20] were introduced as universal function approximators that can incorporate
knowledge of physical laws governing the data into the learning process.
PINNs address the low-data regime [231] in which GANs and Transformers lack
robustness and become ineffective. The Physics-informed Discriminator GAN
(PID-GAN) [232] uses a physics-informed (PI) discriminator for uncertainty
quantification, injecting knowledge of the physics into the learning of both
the $G$ and $D$ models, and does not suffer from an imbalance of generator
gradients arising from multiple losses. Another
architecture namely Physics-informed GAN (PI-GAN) [233] tackles the problem of
sequence generation with limited data. It integrates a transition module in
the generator part that can iteratively construct the sequence with only one
initial point as input. Solving differential equations using GANs that learn
the loss function was presented in the Differential Equation GAN (DEQ-GAN)
model [234]; by combining GANs with PINNs, it achieved solution accuracies
competitive with popular numerical methods.
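The physics-informed idea underlying these hybrids can be illustrated with a toy composite loss: a data-fit term plus a penalty on the residual of a governing equation. The sketch below is illustrative only (the function names are invented, and finite differences stand in for the automatic differentiation PINNs actually use); it takes the ODE $u'(x) = -u(x)$ with $u(0) = 1$ as the governing law:

```python
import numpy as np

def physics_informed_loss(u, x, u_data, x_data, lam=1.0):
    """Toy PINN-style loss: data misfit plus residual of u' + u = 0."""
    du = np.gradient(u, x)                    # finite-difference derivative
    physics = np.mean((du + u) ** 2)          # penalty on the ODE residual
    data = np.mean((np.interp(x_data, x, u) - u_data) ** 2)
    return data + lam * physics

x = np.linspace(0.0, 1.0, 101)
x_data = np.array([0.0, 0.5, 1.0])            # only three measurements
u_data = np.exp(-x_data)                      # noiseless observations
good = physics_informed_loss(np.exp(-x), x, u_data, x_data)  # exact solution
bad = physics_informed_loss(np.ones_like(x), x, u_data, x_data)  # constant guess
```

The exact solution $e^{-x}$ incurs a far smaller loss than the constant guess; it is this physics term that compensates for having only a handful of data points, which is precisely the limited-data setting that motivates PI-GAN and PID-GAN.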
Large language models (LLMs) [21] have become very popular for their ability
to understand and generate human language. LLMs are neural networks
that are trained on massive text datasets to understand the relationship
between words and phrases. This enables LLMs to generate text that is both
coherent and grammatically correct. Recently, LLMs and LLM-based systems such
as ChatGPT have revolutionized natural language processing, question
answering, and creative writing. Additionally, LLMs and their variants are
used to produce creative content such as poems, scripts, and code. GANs and
LLMs are two powerful models that can coexist in one system, with the former
typically used to generate realistic images. Mega-TTS [235] adopts a VQGAN
[169]-based acoustic model and a latent-code language model called
Prosody-LLM (P-LLM) [236] to solve zero-shot text-to-speech at scale with
intrinsic inductive bias. Hybridizing GANs with several other architectures
promises to be a fruitful direction for future research.
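The self-attention layer that Transformers, LLMs, and Transformer-based GANs such as TransGAN all build on reduces to a small computation. A minimal single-head sketch, with toy dimensions and randomly initialized weights purely for illustration:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention: softmax(QK^T / sqrt(d)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)             # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V

rng = np.random.default_rng(1)
X = rng.standard_normal((5, 16))              # 5 tokens, 16 features each
Wq, Wk, Wv = (rng.standard_normal((16, 16)) for _ in range(3))
Y = self_attention(X, Wq, Wk, Wv)             # same shape as X
```

Each output row is a convex combination of the value vectors, with mixing weights given by the row-softmax of $QK^{\top}/\sqrt{d}$; stacking such layers in $G$ and $D$ is what allows TransGAN to dispense with convolutions entirely.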
## X FUTURE RESEARCH DIRECTION
Despite the substantial advancements achieved by GAN-based frameworks over the
past decade, there remain a number of challenges spanning both theoretical and
practical aspects that require further exploration in future research. In this
section, we identify these gaps that necessitate deeper investigation to
enhance our comprehension of GANs. The summary is presented below:
#### Fundamental questions on the theory of GANs
Recent advancements in the theory of GANs by [197, 192, 193] explored the
role of the discriminator family in terms of JS divergence and some
large-sample properties (convergence and asymptotic normality) of the
parameter describing the empirically selected generator. However, the
fundamental question of how well GANs can approximate the target distribution
$p^{*}$ remains largely unanswered. From a theoretical perspective, little is
understood about the role and impact of the discriminator on the quality of
the approximation. The universal consistency and rates of convergence of GANs
and their variants also remain open problems.
#### Improvement of training stability and diversity
Achieving the Nash equilibrium in GAN frameworks, which is essential for the
generator to learn the actual sample distribution, requires stable training
mechanisms [237, 238]. However, attaining this optimal balance between the
generator and discriminator remains challenging. Various approaches have been
explored, such as WGAN [109], SN-GAN [133], One-sided Label Smoothing [203],
and WGAN with gradient penalty (WGAN-GP) [122], to enhance training stability.
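As a concrete example of such a stabilization technique, the WGAN-GP critic loss [122] augments the Wasserstein objective with a gradient penalty evaluated at points $\hat{x}$ interpolated between real and generated samples:

```latex
L_D = \mathbb{E}_{\tilde{x} \sim p_g}\big[D(\tilde{x})\big]
    - \mathbb{E}_{x \sim p^{*}}\big[D(x)\big]
    + \lambda \, \mathbb{E}_{\hat{x}}\Big[\big(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\big)^{2}\Big]
```

The penalty encourages $D$ to be 1-Lipschitz, the same property that spectral normalization in SN-GAN [133] enforces architecturally instead.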
Additionally, addressing mode collapse, a common GAN issue that leads to
limited sample diversity, has prompted strategies like WGAN [109], U-GAN
[219], generator regulating GAN (GRGAN) [239], and Adaptive GAN [240]. Future
research could focus on devising techniques to stabilize GAN training and
alleviate problems like mode collapse through regularization methods,
alternative loss functions, and optimized hyperparameters. Incorporating
methods like multi-modal GANs, designed to generate diverse outputs from a
single input, might contribute to enhancing sample diversity [239].
#### Data scarcity in GAN
Addressing the issue of data scarcity in GANs stands as a crucial research
trajectory. To expand GAN applications, forthcoming investigations could focus
on devising training strategies for scenarios with limited data. Approaches
such as few-shot GANs, transfer learning, and domain adaptation offer the
potential to enhance GAN performance when data is scarce [241, 242]. This
challenge becomes especially pertinent when acquiring substantial datasets
poses difficulties. Additionally, refining training algorithms for maximal
data utility could be pursued. Bolstering GAN effectiveness in low-data
situations holds pivotal significance for broader adoption across various
industries and domains.
#### Ethics and privacy
Since its inception in 2014, GAN development has yielded substantial benefits
in research and real-world applications. However, the inappropriate
utilization of GANs can give rise to latent societal issues such as producing
deceptive content, malicious images, fabricated news, deepfakes, prejudiced
portrayals, and compromising individual safety [243]. To tackle these issues,
the establishment of ethical guidelines and regulations is imperative [244].
Future research avenues might center on developing robust techniques to detect
and alleviate ethical concerns associated with GANs, while also advocating
their ethical and responsible deployment in diverse fields. Essential to this
effort is the creation of forgery detection methods capable of effectively
identifying AI-generated content, including images produced through GANs.
Furthermore, GANs can be susceptible to adversarial attacks, wherein minor
modifications to input data result in visually convincing yet incorrect
outputs [245, 116]. Future investigations could prioritize the development of
robust GANs that can withstand such attacks, alongside methods for identifying
and countering them. Ensuring the integrity and reliability of GANs is of
utmost importance, particularly in contexts like authentication, content
verification, and cybersecurity [246, 216].
#### Real-time implementation and scalability
While GANs have shown immense potential, their resource-intensive nature
hinders real-time usage and scalability. Recent GAN variants like ProGAN [5]
and Att-GAN [148] aim to address this complexity. Future efforts might focus
on crafting efficient GAN architectures capable of generating high-quality
samples in real-time, vital for constrained platforms like mobile devices and
edge computing. Integrating GANs with reinforcement learning, transfer
learning, and supervised learning, as seen in RidgeGAN [10], opens
opportunities for hybrid models with expanded capabilities. Research should
delve into hybrid approaches, leveraging GANs alongside other techniques for
enhanced generative potential. Additionally, exploring multimodal GANs that
produce diverse outputs from multiple modalities can unlock novel avenues for
creating complex data [247].
#### Human-centric GANs
GANs have the potential to enable human-machine creative cooperation [248].
Future research could emphasize human-centric GANs, integrating human
feedback, preferences, and creativity into the generative process. This
direction might pave the way for interactive and co-creative GANs, enabling
the production of outputs aligned with human preferences and needs, while also
involving users in active participation during the generation process.
#### Other innovative applications and industry usage
Initially designed for generating realistic images, GANs have exhibited
impressive performance in computer vision. While their application has
extended to domains like time series generation [103, 102], audio synthesis
[8], and autonomous vehicles [120], their use outside computer vision remains
somewhat constrained. The divergent nature of image and non-image data
introduces challenges, particularly in non-image contexts like NLP, where
discrete values such as words and characters predominate [199]. Future
research can aim to overcome these challenges and enhance GANs’ capabilities
in discrete data scenarios. Furthermore, exploring unique applications of GANs
in fields like finance, education, and entertainment offers the potential to
introduce new possibilities and positively impact various industries [249].
Collaborative efforts across disciplines could also harness diverse expertise,
fostering synergies to enhance GANs’ adaptability across a broad spectrum of
applications [250].
## XI CONCLUSION
In this article, we presented a GAN survey, GAN variants, and a detailed
analysis of the wide range of GAN applications in several applied domains. In
addition, we reviewed recent theoretical developments in the GAN literature
and the most common evaluation metrics. Beyond these, one of the core
contributions of this survey is its discussion of several obstacles faced by
various GAN architectures and their potential solutions for future research.
Overall, we discuss GANs’ potential to facilitate practical applications not
only in image, audio, and text but also in relatively uncommon areas such as
time series analysis, geospatial data analysis, and imbalanced learning. In
the discussion section, apart from GANs’ significant success, we detail the
failures of GANs due to their time complexity and unstable training. Although
GANs have been phenomenal for the generation of hyper-realistic data, current
progress in deep learning depicts an alternative narrative. Recently developed
architectures such as Diffusion models have demonstrated significant success
and outperformed GANs on image synthesis. On the other hand, Transformers, a
deep learning architecture based on a multi-head attention mechanism, have
been used within GAN architectures to enhance their performance. Furthermore,
Large Language Models, widely utilized deep learning structures designed for
comprehending and producing natural language, have been incorporated into GAN
architectures to bolster their effectiveness. The hybridization of PINNs and
GANs, namely PI-GAN, can solve inverse and mixed stochastic problems based on
a limited number of scattered measurements: although GANs ordinarily rely on
large training datasets, embedding physical laws inside GANs in the form of
stochastic differential equations can mitigate the limited-data problem.
Several hybrid approaches combining GANs with other powerful deep learners
are showing great merit and success, as discussed in the discussion section.
Finally, several applications of GANs over the last decade are summarized and
critically assessed throughout the article.
## References
* [1] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial nets,” in _Advances in Neural Information Processing Systems_ , 2014, pp. 2672–2680.
* [2] M. Mirza and S. Osindero, “Conditional generative adversarial nets,” _arXiv preprint arXiv:1411.1784_ , 2014.
* [3] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 2223–2232.
* [4] H. Zhang, T. Xu, H. Li, S. Zhang, X. Wang, X. Huang, and D. N. Metaxas, “Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 5907–5915.
* [5] T. Karras, T. Aila, S. Laine, and J. Lehtinen, “Progressive growing of gans for improved quality, stability, and variation,” _arXiv preprint arXiv:1710.10196_ , 2017.
* [6] T. Karras, S. Laine, and T. Aila, “A style-based generator architecture for generative adversarial networks,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2019, pp. 4401–4410.
* [7] X. Liu, M. Cheng, H. Zhang, and C.-J. Hsieh, “Towards robust neural networks via random self-ensemble,” in _Proceedings of the European Conference on Computer Vision (ECCV)_ , 2018, pp. 369–385.
* [8] L.-C. Yang, S.-Y. Chou, and Y.-H. Yang, “Midinet: A convolutional generative adversarial network for symbolic-domain music generation,” _arXiv preprint arXiv:1703.10847_ , 2017.
* [9] Y. Wu, M. Schuster, Z. Chen, Q. V. Le, M. Norouzi, W. Macherey, M. Krikun, Y. Cao, Q. Gao, K. Macherey, _et al._ , “Google’s neural machine translation system: Bridging the gap between human and machine translation,” _arXiv preprint arXiv:1609.08144_ , 2016.
* [10] R. Thottolil, U. Kumar, and T. Chakraborty, “Prediction of transportation index for urban patterns in small and medium-sized indian cities using hybrid ridgegan model,” _arXiv preprint arXiv:2306.05951_ , 2023.
* [11] K. E. Smith and A. O. Smith, “Conditional gan for timeseries generation,” _arXiv preprint arXiv:2006.16477_ , 2020.
* [12] H.-C. Shin, H. R. Roth, M. Gao, L. Lu, Z. Xu, I. Nogues, J. Yao, D. Mollura, and R. M. Summers, “Deep convolutional neural networks for computer-aided detection: Cnn architectures, dataset characteristics and transfer learning,” _IEEE transactions on medical imaging_ , vol. 35, no. 5, pp. 1285–1298, 2016.
* [13] J. Togelius, N. Shaker, and M. J. Nelson, “Procedural content generation in games: A textbook and an overview of current research,” Berlin: Springer, 2014.
* [14] X. Chen, Y. Duan, R. Houthooft, J. Schulman, I. Sutskever, and P. Abbeel, “Infogan: Interpretable representation learning by information maximizing generative adversarial nets,” _Advances in neural information processing systems_ , vol. 29, 2016.
* [15] M. Arjovsky and L. Bottou, “Towards principled methods for training generative adversarial networks,” _arXiv preprint arXiv:1701.04862_ , 2017.
* [16] D. Wilby, T. Aarts, P. Tichit, A. Bodey, C. Rau, G. Taylor, and E. Baird, “Using micro-ct techniques to explore the role of sex and hair in the functional morphology of bumblebee (bombus terrestris) ocelli,” _Vision Research_ , vol. 158, pp. 100–108, 2019.
* [17] J. Buolamwini and T. Gebru, “Gender shades: Intersectional accuracy disparities in commercial gender classification,” in _Conference on fairness, accountability and transparency_. PMLR, 2018, pp. 77–91.
* [18] J. Zhao, T. Wang, M. Yatskar, V. Ordonez, and K.-W. Chang, “Gender bias in coreference resolution: Evaluation and debiasing methods,” _arXiv preprint arXiv:1804.06876_ , 2018.
* [19] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, “Attention is all you need,” _Advances in neural information processing systems_ , vol. 30, 2017.
* [20] M. Raissi, P. Perdikaris, and G. E. Karniadakis, “Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations,” _Journal of Computational physics_ , vol. 378, pp. 686–707, 2019.
* [21] A. Radford, J. Wu, D. Amodei, D. Amodei, J. Clark, M. Brundage, and I. Sutskever, “Better language models and their implications,” _OpenAI blog_ , vol. 1, no. 2, 2019.
* [22] J. Sohl-Dickstein, E. Weiss, N. Maheswaranathan, and S. Ganguli, “Deep unsupervised learning using nonequilibrium thermodynamics,” in _International conference on machine learning_. PMLR, 2015, pp. 2256–2265.
* [23] A. Radford, L. Metz, and S. Chintala, “Unsupervised representation learning with deep convolutional generative adversarial networks,” _arXiv preprint arXiv:1511.06434_ , 2015.
* [24] Y. Zhang, Z. Yin, Y. Li, G. Yin, J. Yan, J. Shao, and Z. Liu, “Celeba-spoof: Large-scale face anti-spoofing dataset with rich annotations,” in _Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XII 16_. Springer, 2020, pp. 70–85.
* [25] R. Kulkarni, R. Gaikwad, R. Sugandhi, P. Kulkarni, and S. Kone, “Survey on deep learning in music using gan,” _Int. J. Eng. Res. Technol_ , vol. 8, no. 9, pp. 646–648, 2019.
* [26] A. Jabbar, X. Li, and B. Omar, “A survey on generative adversarial networks: Variants, applications, and training,” _ACM Computing Surveys (CSUR)_ , vol. 54, no. 8, pp. 1–49, 2021.
* [27] M. Durgadevi _et al._ , “Generative adversarial network (gan): a general review on different variants of gan and applications,” in _2021 6th International Conference on Communication and Electronics Systems (ICCES)_. IEEE, 2021, pp. 1–8.
* [28] R. Nandhini Abirami, P. Durai Raj Vincent, K. Srinivasan, U. Tariq, and C.-Y. Chang, “Deep cnn and deep gan in computational visual perception-driven image analysis,” _Complexity_ , vol. 2021, pp. 1–30, 2021.
* [29] Z. Wang, Q. She, and T. E. Ward, “Generative adversarial networks in computer vision: A survey and taxonomy,” _ACM Computing Surveys (CSUR)_ , vol. 54, no. 2, pp. 1–38, 2021.
* [30] V. Sampath, I. Maurtua, J. J. Aguilar Martin, and A. Gutierrez, “A survey on generative adversarial networks for imbalance problems in computer vision tasks,” _Journal of big Data_ , vol. 8, pp. 1–59, 2021.
* [31] J. Gui, Z. Sun, Y. Wen, D. Tao, and J. Ye, “A review on generative adversarial networks: Algorithms, theory, and applications,” _IEEE transactions on knowledge and data engineering_ , 2021.
* [32] Y. Li, Q. Wang, J. Zhang, L. Hu, and W. Ouyang, “The theoretical research of generative adversarial networks: an overview,” _Neurocomputing_ , vol. 435, pp. 26–41, 2021.
* [33] W. Xia, Y. Zhang, Y. Yang, J.-H. Xue, B. Zhou, and M.-H. Yang, “Gan inversion: A survey,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2022.
* [34] S. Xun, D. Li, H. Zhu, M. Chen, J. Wang, J. Li, M. Chen, B. Wu, H. Zhang, X. Chai, _et al._ , “Generative adversarial networks in medical image segmentation: A review,” _Computers in biology and medicine_ , vol. 140, p. 105063, 2022.
* [35] S. Ji, X. Yang, and J. Luo, “A survey on deep learning for symbolic music generation: Representations, algorithms, evaluations, and challenges,” _ACM Computing Surveys_ , 2023.
* [36] G. Iglesias, E. Talavera, and A. Díaz-Álvarez, “A survey on gans for computer vision: Recent research, analysis and taxonomy,” _Computer Science Review_ , vol. 48, p. 100553, 2023.
* [37] E. Brophy, Z. Wang, Q. She, and T. Ward, “Generative adversarial networks in time series: A systematic literature review,” _ACM Computing Surveys_ , vol. 55, no. 10, pp. 1–31, 2023.
* [38] C. Vondrick, A. Shrivastava, A. Fathi, S. Guadarrama, and K. Murphy, “Tracking emerges by colorizing videos,” in _Proceedings of the European conference on computer vision (ECCV)_ , 2018, pp. 391–408.
* [39] L. Yu, W. Zhang, J. Wang, and Y. Yu, “Seqgan: Sequence generative adversarial nets with policy gradient,” in _Proceedings of the AAAI conference on artificial intelligence_ , vol. 31, no. 1, 2017.
* [40] J. Tan, L. Jing, Y. Huo, L. Li, O. Akin, and Y. Tian, “Lgan: Lung segmentation in ct scans using generative adversarial network,” _Computerized Medical Imaging and Graphics_ , vol. 87, p. 101817, 2021.
* [41] S. Nema, A. Dudhane, S. Murala, and S. Naidu, “Rescuenet: An unpaired gan for brain tumor segmentation,” _Biomedical Signal Processing and Control_ , vol. 55, p. 101641, 2020.
* [42] Y. Abouelnaga, O. S. Ali, H. Rady, and M. Moustafa, “Cifar-10: Knn-based ensemble of classifiers,” in _2016 International Conference on Computational Science and Computational Intelligence (CSCI)_. IEEE, 2016, pp. 1192–1195.
* [43] B. Recht, R. Roelofs, L. Schmidt, and V. Shankar, “Do imagenet classifiers generalize to imagenet?” in _International conference on machine learning_. PMLR, 2019, pp. 5389–5400.
* [44] M. Z. Alom, T. M. Taha, C. Yakopcic, S. Westberg, P. Sidike, M. S. Nasrin, M. Hasan, B. C. Van Essen, A. A. Awwal, and V. K. Asari, “A state-of-the-art survey on deep learning theory and architectures,” _electronics_ , vol. 8, no. 3, p. 292, 2019.
* [45] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, “Generative adversarial networks,” _Communications of the ACM_ , vol. 63, no. 11, pp. 139–144, 2020.
* [46] I. Goodfellow, Y. Bengio, and A. Courville, _Deep learning_. MIT press, 2016.
* [47] I. Goodfellow, “Nips 2016 tutorial: Generative adversarial networks,” _arXiv preprint arXiv:1701.00160_ , 2016.
* [48] J. Nash, “Non-cooperative games,” _Annals of mathematics_ , pp. 286–295, 1951.
* [49] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, and S. Hochreiter, “Gans trained by a two time-scale update rule converge to a local nash equilibrium,” _Advances in neural information processing systems_ , vol. 30, 2017.
* [50] F. Farnia and A. Ozdaglar, “Do gans always have nash equilibria?” in _International Conference on Machine Learning_. PMLR, 2020, pp. 3029–3039.
* [51] M.-Y. Liu, X. Huang, J. Yu, T.-C. Wang, and A. Mallya, “Generative adversarial networks for image and video synthesis: Algorithms and applications,” _Proceedings of the IEEE_ , vol. 109, no. 5, pp. 839–862, 2021.
* [52] S. W. Kim, Y. Zhou, J. Philion, A. Torralba, and S. Fidler, “Learning to simulate dynamic environments with gamegan,” in _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , 2020, pp. 1231–1240.
* [53] Y.-J. Cao, L.-L. Jia, Y.-X. Chen, N. Lin, C. Yang, B. Zhang, Z. Liu, X.-X. Li, and H.-H. Dai, “Recent advances of generative adversarial networks in computer vision,” _IEEE Access_ , vol. 7, pp. 14985–15006, 2018.
* [54] L. Ma, X. Jia, Q. Sun, B. Schiele, T. Tuytelaars, and L. Van Gool, “Pose guided person image generation,” _Advances in neural information processing systems_ , vol. 30, 2017.
* [55] Y. Yu, Z. Gong, P. Zhong, and J. Shan, “Unsupervised representation learning with deep convolutional neural network for remote sensing images,” in _Image and Graphics: 9th International Conference, ICIG 2017, Shanghai, China, September 13-15, 2017, Revised Selected Papers, Part II 9_. Springer, 2017, pp. 97–108.
* [56] Y. Wang, P. Bilinski, F. Bremond, and A. Dantcheva, “Imaginator: Conditional spatio-temporal gan for video generation,” in _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_ , 2020, pp. 1160–1169.
* [57] S. Tulyakov, M.-Y. Liu, X. Yang, and J. Kautz, “Mocogan: Decomposing motion and content for video generation,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 1526–1535.
* [58] W. Wang, H. Yang, Z. Tuo, H. He, J. Zhu, J. Fu, and J. Liu, “Videofactory: Swap attention in spatiotemporal diffusions for text-to-video generation,” _arXiv preprint arXiv:2305.10874_ , 2023.
* [59] M. Westerlund, “The emergence of deepfake technology: A review,” _Technology innovation management review_ , vol. 9, no. 11, 2019.
* [60] P. Korshunov and S. Marcel, “Vulnerability assessment and detection of deepfake videos,” in _2019 International Conference on Biometrics (ICB)_. IEEE, 2019, pp. 1–6.
* [61] P. Yu, Z. Xia, J. Fei, and Y. Lu, “A survey on deepfake video detection,” _Iet Biometrics_ , vol. 10, no. 6, pp. 607–624, 2021.
* [62] Q. Xie, Z. Dai, E. Hovy, T. Luong, and Q. Le, “Unsupervised data augmentation for consistency training,” _Advances in neural information processing systems_ , vol. 33, pp. 6256–6268, 2020.
* [63] S. R. Bowman, L. Vilnis, O. Vinyals, A. M. Dai, R. Jozefowicz, and S. Bengio, “Generating sentences from a continuous space,” _arXiv preprint arXiv:1511.06349_ , 2015.
* [64] M. Frid-Adar, E. Klang, M. Amitai, J. Goldberger, and H. Greenspan, “Synthetic data augmentation using gan for improved liver lesion classification,” in _2018 IEEE 15th international symposium on biomedical imaging (ISBI 2018)_. IEEE, 2018, pp. 289–293.
* [65] J. Johnson, A. Alahi, and L. Fei-Fei, “Perceptual losses for real-time style transfer and super-resolution,” in _Computer Vision–ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14_. Springer, 2016, pp. 694–711.
* [66] L. A. Gatys, A. S. Ecker, and M. Bethge, “A neural algorithm of artistic style,” _arXiv preprint arXiv:1508.06576_ , 2015.
* [67] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” _Neural computation_ , vol. 9, no. 8, pp. 1735–1780, 1997.
* [68] Y. Zhang, Z. Gan, and L. Carin, “Generating text via adversarial training,” in _NIPS workshop on Adversarial Training_ , vol. 21, 2016, pp. 21–32.
* [69] M. Toshevska and S. Gievska, “A review of text style transfer using deep learning,” _IEEE Transactions on Artificial Intelligence_ , 2021.
* [70] J. Guo, S. Lu, H. Cai, W. Zhang, Y. Yu, and J. Wang, “Long text generation via adversarial training with leaked information,” in _Proceedings of the AAAI conference on artificial intelligence_ , vol. 32, no. 1, 2018.
* [71] Z. Mu, X. Yang, and Y. Dong, “Review of end-to-end speech synthesis technology based on deep learning,” _arXiv preprint arXiv:2104.09995_ , 2021.
* [72] H.-W. Dong, W.-Y. Hsiao, L.-C. Yang, and Y.-H. Yang, “Musegan: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 32, no. 1, 2018.
* [73] M. Civit, J. Civit-Masot, F. Cuadrado, and M. J. Escalona, “A systematic review of artificial intelligence-based music generation: Scope, applications, and future trends,” _Expert Systems with Applications_ , p. 118190, 2022.
* [74] X. Mao, S. Wang, L. Zheng, and Q. Huang, “Semantic invariant cross-domain image generation with generative adversarial networks,” _Neurocomputing_ , vol. 293, pp. 55–63, 2018.
* [75] J. T. Guibas, T. S. Virdi, and P. S. Li, “Synthetic medical images from dual generative adversarial networks,” _arXiv preprint arXiv:1709.01872_ , 2017.
* [76] N. K. Singh and K. Raza, “Medical image generation using generative adversarial networks: A review,” _Health informatics: A computational perspective in healthcare_ , pp. 77–96, 2021.
* [77] C. Wang, G. Yang, G. Papanastasiou, S. A. Tsaftaris, D. E. Newby, C. Gray, G. Macnaught, and T. J. MacGillivray, “Dicyc: Gan-based deformation invariant cross-domain information fusion for medical image synthesis,” _Information Fusion_ , vol. 67, pp. 147–160, 2021.
* [78] A. Kadurin, A. Aliper, A. Kazennov, P. Mamoshina, Q. Vanhaelen, K. Khrabrov, and A. Zhavoronkov, “The cornucopia of meaningful leads: Applying deep adversarial autoencoders for new molecule development in oncology,” _Oncotarget_ , vol. 8, no. 7, p. 10883, 2017.
* [79] A. Kadurin, S. Nikolenko, K. Khrabrov, A. Aliper, and A. Zhavoronkov, “drugan: an advanced generative adversarial autoencoder model for de novo generation of new molecules with desired molecular properties in silico,” _Molecular pharmaceutics_ , vol. 14, no. 9, pp. 3098–3104, 2017.
* [80] Y. Zhao, Y. Wang, J. Zhang, X. Liu, Y. Li, S. Guo, X. Yang, and S. Hong, “Surgical gan: Towards real-time path planning for passive flexible tools in endovascular surgeries,” _Neurocomputing_ , vol. 500, pp. 567–580, 2022.
* [81] S. Ma, Z. Hu, K. Ye, X. Zhang, Y. Wang, and H. Peng, “Feasibility study of patient-specific dose verification in proton therapy utilizing positron emission tomography (pet) and generative adversarial network (gan),” _Medical Physics_ , vol. 47, no. 10, pp. 5194–5208, 2020.
* [82] A. Albert, E. Strano, J. Kaur, and M. C. González, “Modeling urbanization patterns with generative adversarial networks,” _IGARSS 2018 - 2018 IEEE International Geoscience and Remote Sensing Symposium_ , pp. 2095–2098, 2018.
* [83] A. Albert, J. Kaur, E. Strano, and M. Gonzalez, “Spatial sensitivity analysis for urban land use prediction with physics-constrained conditional generative adversarial networks,” _arXiv preprint arXiv:1907.09543_ , 2019.
* [84] W. Zhang, Y. Ma, D. Zhu, L. Dong, and Y. Liu, “Metrogan: Simulating urban morphology with generative adversarial network,” in _Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_ , 2022, pp. 2482–2492.
* [85] L. Mosser, O. Dubrule, and M. J. Blunt, “Reconstruction of three-dimensional porous media using generative adversarial neural networks,” _Physical Review E_ , vol. 96, no. 4, p. 043309, 2017.
* [86] T.-F. Zhang, P. Tilke, E. Dupont, L.-C. Zhu, L. Liang, and W. Bailey, “Generating geologically realistic 3d reservoir facies models using deep learning of sedimentary architecture with generative adversarial networks,” _Petroleum Science_ , vol. 16, pp. 541–549, 2019.
* [87] T. Wang, D. Trugman, and Y. Lin, “Seismogen: Seismic waveform synthesis using gan with application to seismic data augmentation,” _Journal of Geophysical Research: Solid Earth_ , vol. 126, no. 4, p. e2020JB020077, 2021.
* [88] B. Gecer, B. Bhattarai, J. Kittler, and T.-K. Kim, “Semi-supervised adversarial learning to generate photorealistic face images of new identities from 3d morphable model,” in _Proceedings of the European conference on computer vision (ECCV)_ , 2018, pp. 217–234.
* [89] X. Pan, Y. You, Z. Wang, and C. Lu, “Virtual to real reinforcement learning for autonomous driving,” _arXiv preprint arXiv:1704.03952_ , 2017.
* [90] A. Shrivastava, T. Pfister, O. Tuzel, J. Susskind, W. Wang, and R. Webb, “Learning from simulated and unsupervised images through adversarial training,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 2107–2116.
* [91] M. Zhang, Y. Zhang, L. Zhang, C. Liu, and S. Khurshid, “Deeproad: Gan-based metamorphic testing and input validation framework for autonomous driving systems,” in _Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering_ , 2018, pp. 132–142.
* [92] S. Jiang and Y. Fu, “Fashion style generator.” in _IJCAI_ , 2017, pp. 3721–3727.
* [93] X. Han, Z. Wu, Z. Wu, R. Yu, and L. S. Davis, “Viton: An image-based virtual try-on network,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 7543–7552.
* [94] L. Liu, H. Zhang, Y. Ji, and Q. J. Wu, “Toward ai fashion design: An attribute-gan model for clothing match,” _Neurocomputing_ , vol. 341, pp. 156–167, 2019.
* [95] N. Pandey and A. Savakis, “Poly-gan: Multi-conditioned gan for fashion synthesis,” _Neurocomputing_ , vol. 414, pp. 356–364, 2020.
* [96] T. Chakraborty and A. K. Chakraborty, “Hellinger net: A hybrid imbalance learning model to improve software defect prediction,” _IEEE Transactions on Reliability_ , vol. 70, no. 2, pp. 481–494, 2020.
* [97] T. Dam, M. M. Ferdaus, M. Pratama, S. G. Anavatti, S. Jayavelu, and H. Abbass, “Latent preserving generative adversarial network for imbalance classification,” in _2022 IEEE International Conference on Image Processing (ICIP)_. IEEE, 2022, pp. 3712–3716.
* [98] G. Mariani, F. Scheidegger, R. Istrate, C. Bekas, and C. Malossi, “Bagan: Data augmentation with balancing gan,” _arXiv preprint arXiv:1803.09655_ , 2018.
* [99] S. Suh, H. Lee, P. Lukowicz, and Y. O. Lee, “Cegan: Classification enhancement generative adversarial networks for unraveling data imbalance problems,” _Neural Networks_ , vol. 133, pp. 69–86, 2021.
* [100] M. Panja, T. Chakraborty, U. Kumar, and N. Liu, “Epicasting: An ensemble wavelet neural network for forecasting epidemics,” _Neural Networks_ , 2023.
* [101] Y. Li, X. Peng, J. Zhang, Z. Li, and M. Wen, “Dct-gan: dilated convolutional transformer-based gan for time series anomaly detection,” _IEEE Transactions on Knowledge and Data Engineering_ , 2021.
* [102] Y. Li, X. Peng, Z. Wu, F. Yang, X. He, and Z. Li, “M3gan: A masking strategy with a mutable filter for multidimensional anomaly detection,” _Knowledge-Based Systems_ , vol. 271, p. 110585, 2023.
* [103] J. Yang, Y. Shao, and C.-N. Li, “Cnts: Cooperative network for time series,” _IEEE Access_ , vol. 11, pp. 31 941–31 950, 2023.
* [104] A. Geiger, D. Liu, S. Alnegheimish, A. Cuesta-Infante, and K. Veeramachaneni, “Tadgan: Time series anomaly detection using generative adversarial networks,” in _2020 IEEE International Conference on Big Data (Big Data)_. IEEE, 2020, pp. 33–43.
* [105] Y. Liu, J. Peng, J. James, and Y. Wu, “Ppgan: Privacy-preserving generative adversarial network,” in _2019 IEEE 25Th international conference on parallel and distributed systems (ICPADS)_. IEEE, 2019, pp. 985–989.
* [106] A. Torfi and E. A. Fox, “Corgan: correlation-capturing convolutional generative adversarial networks for generating synthetic healthcare records,” _arXiv preprint arXiv:2001.09346_ , 2020.
* [107] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, “Membership inference attacks against machine learning models,” in _2017 IEEE symposium on security and privacy (SP)_. IEEE, 2017, pp. 3–18.
* [108] L. A. Gatys, A. S. Ecker, and M. Bethge, “Image style transfer using convolutional neural networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 2414–2423.
* [109] M. Arjovsky, S. Chintala, and L. Bottou, “Wasserstein generative adversarial networks,” in _International conference on machine learning_. PMLR, 2017, pp. 214–223.
* [110] A. Brock, J. Donahue, and K. Simonyan, “Large scale gan training for high fidelity natural image synthesis,” _arXiv preprint arXiv:1809.11096_ , 2018\.
* [111] A. Odena, C. Olah, and J. Shlens, “Conditional image synthesis with auxiliary classifier gans,” in _International conference on machine learning_. PMLR, 2017, pp. 2642–2651.
* [112] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of neural networks,” _arXiv preprint arXiv:1312.6199_ , 2013.
* [113] J. Xiao, S. Zhang, Y. Yao, Z. Wang, Y. Zhang, and Y.-F. Wang, “Generative adversarial network with hybrid attention and compromised normalization for multi-scene image conversion,” _Neural Computing and Applications_ , vol. 34, no. 9, pp. 7209–7225, 2022.
* [114] E. L. Denton, S. Chintala, R. Fergus, _et al._ , “Deep generative image models using a laplacian pyramid of adversarial networks,” _Advances in neural information processing systems_ , vol. 28, 2015.
* [115] A. Krizhevsky, G. Hinton, _et al._ , _Learning multiple layers of features from tiny images_. Toronto, ON, Canada, 2009.
* [116] M. Lucic, K. Kurach, M. Michalski, S. Gelly, and O. Bousquet, “Are gans created equal? a large-scale study,” _Advances in neural information processing systems_ , vol. 31, 2018.
* [117] A. Makhzani, J. Shlens, N. Jaitly, I. Goodfellow, and B. Frey, “Adversarial autoencoders,” _arXiv preprint arXiv:1511.05644_ , 2015.
* [118] K. Bousmalis, N. Silberman, D. Dohan, D. Erhan, and D. Krishnan, “Unsupervised pixel-level domain adaptation with generative adversarial networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 3722–3731.
* [119] I. Higgins, L. Matthey, A. Pal, C. Burgess, X. Glorot, M. Botvinick, S. Mohamed, and A. Lerchner, “beta-vae: Learning basic visual concepts with a constrained variational framework,” in _International conference on learning representations_ , 2017.
* [120] A. Ghosh, B. Bhattacharya, and S. B. R. Chowdhury, “Sad-gan: Synthetic autonomous driving using generative adversarial networks,” _arXiv preprint arXiv:1611.08788_ , 2016.
* [121] X. Mao, Q. Li, H. Xie, R. Y. Lau, Z. Wang, and S. Paul Smolley, “Least squares generative adversarial networks,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 2794–2802.
* [122] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, and A. C. Courville, “Improved training of wasserstein gans,” _Advances in neural information processing systems_ , vol. 30, 2017.
* [123] Z. Huang, X. Wang, L. Huang, C. Huang, Y. Wei, and W. Liu, “Ccnet: Criss-cross attention for semantic segmentation,” in _Proceedings of the IEEE/CVF international conference on computer vision_ , 2019, pp. 603–612.
* [124] C. Ledig, L. Theis, F. Huszár, J. Caballero, A. Cunningham, A. Acosta, A. Aitken, A. Tejani, J. Totz, Z. Wang, _et al._ , “Photo-realistic single image super-resolution using a generative adversarial network,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 4681–4690.
* [125] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” _IEEE transactions on image processing_ , vol. 13, no. 4, pp. 600–612, 2004.
* [126] L. Mescheder, S. Nowozin, and A. Geiger, “The numerics of gans,” _Advances in neural information processing systems_ , vol. 30, 2017.
* [127] Y. C. M. W. H. Sergio and G. Colmenarejo, “Learning to learn for global optimization of black box functions,” _stat_ , vol. 1050, p. 18, 2016.
* [128] Z. Yi, H. Zhang, P. Tan, and M. Gong, “Dualgan: Unsupervised dual learning for image-to-image translation,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 2849–2857.
* [129] S. R. Hashemi, S. S. M. Salehi, D. Erdogmus, S. P. Prabhu, S. K. Warfield, and A. Gholipour, “Asymmetric loss functions and deep densely-connected networks for highly-imbalanced medical image segmentation: Application to multiple sclerosis lesion detection,” _IEEE Access_ , vol. 7, pp. 1721–1735, 2018\.
* [130] R. Zhang, P. Isola, A. A. Efros, E. Shechtman, and O. Wang, “The unreasonable effectiveness of deep features as a perceptual metric,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 586–595.
* [131] A. v. d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu, “Wavenet: A generative model for raw audio,” _arXiv preprint arXiv:1609.03499_ , 2016.
* [132] H. Chu, R. Urtasun, and S. Fidler, “Song from pi: A musically plausible network for pop music generation,” _arXiv preprint arXiv:1611.03477_ , 2016\.
* [133] T. Miyato, T. Kataoka, M. Koyama, and Y. Yoshida, “Spectral normalization for generative adversarial networks,” _arXiv preprint arXiv:1802.05957_ , 2018\.
* [134] A. Jolicoeur-Martineau, “The relativistic discriminator: a key element missing from standard gan,” _arXiv preprint arXiv:1807.00734_ , 2018.
* [135] G. Gómez-de Segura and R. García-Mayoral, “Turbulent drag reduction by anisotropic permeable substrates–analysis and direct numerical simulations,” _Journal of Fluid Mechanics_ , vol. 875, pp. 124–172, 2019\.
* [136] A. Nguyen, J. Yosinski, and J. Clune, “Multifaceted feature visualization: Uncovering the different types of features learned by each neuron in deep neural networks,” _arXiv preprint arXiv:1602.03616_ , 2016.
* [137] F. Tramèr, A. Kurakin, N. Papernot, I. Goodfellow, D. Boneh, and P. McDaniel, “Ensemble adversarial training: Attacks and defenses,” _arXiv preprint arXiv:1705.07204_ , 2017.
* [138] Y. Choi, M. Choi, M. Kim, J.-W. Ha, S. Kim, and J. Choo, “Stargan: Unified generative adversarial networks for multi-domain image-to-image translation,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2018, pp. 8789–8797.
* [139] Y. Li, C. Fang, J. Yang, Z. Wang, X. Lu, and M.-H. Yang, “Universal style transfer via feature transforms,” _Advances in neural information processing systems_ , vol. 30, 2017.
* [140] X. Huang and S. Belongie, “Arbitrary style transfer in real-time with adaptive instance normalization,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 1501–1510.
* [141] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, “Image-to-image translation with conditional adversarial networks,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 1125–1134.
* [142] J. Thies, M. Zollhofer, M. Stamminger, C. Theobalt, and M. Nießner, “Face2face: Real-time face capture and reenactment of rgb videos,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 2387–2395.
* [143] T. Karras, M. Aittala, J. Hellsten, S. Laine, J. Lehtinen, and T. Aila, “Training generative adversarial networks with limited data,” _Advances in neural information processing systems_ , vol. 33, pp. 12 104–12 114, 2020.
* [144] G. Franceschelli and M. Musolesi, “Creativity and machine learning: A survey,” _arXiv preprint arXiv:2104.02726_ , 2021.
* [145] V. Dumoulin, I. Belghazi, B. Poole, O. Mastropietro, A. Lamb, M. Arjovsky, and A. Courville, “Adversarially learned inference,” _arXiv preprint arXiv:1606.00704_ , 2016.
* [146] T. Iqbal and H. Ali, “Generative adversarial network for medical images (mi-gan),” _Journal of medical systems_ , vol. 42, pp. 1–11, 2018.
* [147] M. Mahmud, M. S. Kaiser, T. M. McGinnity, and A. Hussain, “Deep learning in mining biological data,” _Cognitive computation_ , vol. 13, pp. 1–33, 2021\.
* [148] Z. He, W. Zuo, M. Kan, S. Shan, and X. Chen, “Attgan: Facial attribute editing by only changing what you want,” _IEEE transactions on image processing_ , vol. 28, no. 11, pp. 5464–5478, 2019.
* [149] T. Dai, Y. Feng, B. Chen, J. Lu, and S.-T. Xia, “Deep image prior based defense against adversarial examples,” _Pattern Recognition_ , vol. 122, p. 108249, 2022.
* [150] X. Hou, L. Shen, K. Sun, and G. Qiu, “Deep feature consistent variational autoencoder,” in _2017 IEEE winter conference on applications of computer vision (WACV)_. IEEE, 2017, pp. 1133–1141.
* [151] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee, “Generative adversarial text to image synthesis,” in _International conference on machine learning_. PMLR, 2016, pp. 1060–1069.
* [152] M. Zhu, P. Pan, W. Chen, and Y. Yang, “Dm-gan: Dynamic memory generative adversarial networks for text-to-image synthesis,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2019, pp. 5802–5810.
* [153] K. Li, T. Zhang, and J. Malik, “Diverse image synthesis from semantic layouts via conditional imle,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2019, pp. 4220–4229.
* [154] V. Nair and G. E. Hinton, “Rectified linear units improve restricted boltzmann machines,” in _Proceedings of the 27th international conference on machine learning (ICML-10)_ , 2010, pp. 807–814.
* [155] Y. Bengio, P. Simard, and P. Frasconi, “Learning long-term dependencies with gradient descent is difficult,” _IEEE transactions on neural networks_ , vol. 5, no. 2, pp. 157–166, 1994.
* [156] A. Graves, G. Wayne, and I. Danihelka, “Neural turing machines,” _arXiv preprint arXiv:1410.5401_ , 2014.
* [157] M. D. Zeiler and R. Fergus, “Visualizing and understanding convolutional networks,” in _Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part I 13_. Springer, 2014, pp. 818–833.
* [158] T. R. Shaham, T. Dekel, and T. Michaeli, “Singan: Learning a generative model from a single natural image,” in _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , 2019, pp. 4570–4580.
* [159] D. Berthelot, C. Raffel, A. Roy, and I. Goodfellow, “Understanding and improving interpolation in autoencoders via an adversarial regularizer,” _arXiv preprint arXiv:1807.07543_ , 2018.
* [160] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, _et al._ , “Language models are few-shot learners,” _Advances in neural information processing systems_ , vol. 33, pp. 1877–1901, 2020.
* [161] J. Jordon, J. Yoon, and M. Van Der Schaar, “Pate-gan: Generating synthetic data with differential privacy guarantees,” in _International conference on learning representations_ , 2018.
* [162] G. Rogez, P. Weinzaepfel, and C. Schmid, “Lcr-net++: Multi-person 2d and 3d pose detection in natural images,” _IEEE transactions on pattern analysis and machine intelligence_ , vol. 42, no. 5, pp. 1146–1161, 2019.
* [163] O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in _Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18_. Springer, 2015, pp. 234–241.
* [164] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 770–778.
* [165] S. Zhu, R. Urtasun, S. Fidler, D. Lin, and C. Change Loy, “Be your own prada: Fashion synthesis with structural coherence,” in _Proceedings of the IEEE international conference on computer vision_ , 2017, pp. 1680–1688.
* [166] M. Mameli, M. Paolanti, R. Pietrini, G. Pazzaglia, E. Frontoni, and P. Zingaretti, “Deep learning approaches for fashion knowledge extraction from social media: a review,” _Ieee Access_ , vol. 10, pp. 1545–1576, 2021\.
* [167] Y. Wu, H. Liu, P. Lu, L. Zhang, and F. Yuan, “Design and implementation of virtual fitting system based on gesture recognition and clothing transfer algorithm,” _Scientific Reports_ , vol. 12, no. 1, p. 18356, 2022.
* [168] Z. Pan, F. Yuan, J. Lei, W. Li, N. Ling, and S. Kwong, “Miegan: Mobile image enhancement via a multi-module cascade neural network,” _IEEE Transactions on Multimedia_ , vol. 24, pp. 519–533, 2021.
* [169] P. Esser, R. Rombach, and B. Ommer, “Taming transformers for high-resolution image synthesis,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2021, pp. 12 873–12 883.
* [170] K. Chaitanya, E. Erdil, N. Karani, and E. Konukoglu, “Local contrastive loss with pseudo-label based self-training for semi-supervised medical image segmentation,” _Medical Image Analysis_ , vol. 87, p. 102792, 2023.
* [171] N. Kalchbrenner, A. Oord, K. Simonyan, I. Danihelka, O. Vinyals, A. Graves, and K. Kavukcuoglu, “Video pixel networks,” in _International Conference on Machine Learning_. PMLR, 2017, pp. 1771–1779.
* [172] A. Radford, J. Wu, R. Child, D. Amodei, and I. Sutskever, “Dall-e: Distributed, automated, and learning to generate adversarial networks,” _OpenAI Blog_ , 2021. [Online]. Available: https://openai.com/blog/dall-e/
* [173] A. Ramesh, M. Pavlov, G. Goh, S. Gray, C. Voss, A. Radford, M. Chen, and I. Sutskever, “Zero-shot text-to-image generation,” in _International Conference on Machine Learning_. PMLR, 2021, pp. 8821–8831.
* [174] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, _et al._ , “Learning transferable visual models from natural language supervision,” in _International conference on machine learning_. PMLR, 2021, pp. 8748–8763.
* [175] G. Singh, F. Deng, and S. Ahn, “Illiterate dall-e learns to compose,” _arXiv preprint arXiv:2110.11405_ , 2021.
* [176] G. Marcus, E. Davis, and S. Aaronson, “A very preliminary analysis of dall-e 2,” _arXiv preprint arXiv:2204.13807_ , 2022.
* [177] C. Rudin, “Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead,” _Nature machine intelligence_ , vol. 1, no. 5, pp. 206–215, 2019.
* [178] A. Ramesh, P. Dhariwal, A. Nichol, C. Chu, and M. Chen, “Hierarchical text-conditional image generation with clip latents,” _arXiv preprint arXiv:2204.06125_ , 2022.
* [179] F. Doshi-Velez and B. Kim, “Towards a rigorous science of interpretable machine learning,” _arXiv preprint arXiv:1702.08608_ , 2017.
* [180] E. O. Brigham, _The fast Fourier transform and its applications_. Prentice-Hall, Inc., 1988.
* [181] D. B. Percival and A. T. Walden, _Wavelet methods for time series analysis_. Cambridge university press, 2000, vol. 4.
* [182] T. Schlegl, P. Seeböck, S. M. Waldstein, U. Schmidt-Erfurth, and G. Langs, “Unsupervised anomaly detection with generative adversarial networks to guide marker discovery,” in _International conference on information processing in medical imaging_. Springer, 2017, pp. 146–157.
* [183] T. Zhou, Z. Ma, Q. Wen, X. Wang, L. Sun, and R. Jin, “Fedformer: Frequency enhanced decomposed transformer for long-term series forecasting,” in _International Conference on Machine Learning_. PMLR, 2022, pp. 27 268–27 286.
* [184] V. Vovk, “Kernel ridge regression,” in _Empirical Inference: Festschrift in Honor of Vladimir N. Vapnik_. Springer, 2013, pp. 105–116.
* [185] K. P. Murphy, _Machine learning: a probabilistic perspective_. MIT press, 2012.
* [186] H. Dong, A. Supratak, L. Mai, F. Liu, A. Oehmichen, S. Yu, and Y. Guo, “TensorLayer: A Versatile Library for Efficient Deep Learning Development,” _ACM Multimedia_ , 2017. [Online]. Available: http://tensorlayer.org
* [187] C. Lai, J. Han, and H. Dong, “Tensorlayer 3.0: A deep learning library compatible with multiple backends,” in _2021 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)_. IEEE, 2021, pp. 1–3.
* [188] J.-Y. Zhu, T. Park, P. Isola, and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networkss,” in _Computer Vision (ICCV), 2017 IEEE International Conference on_ , 2017.
* [189] C. Esteban, S. L. Hyland, and G. Rätsch, “Real-valued (medical) time series generation with recurrent conditional gans,” _arXiv preprint arXiv:1706.02633_ , 2017.
* [190] G. Zhang, M. Kan, S. Shan, and X. Chen, “Generative adversarial network with spatial attention for face attribute editing,” in _Proceedings of the European conference on computer vision (ECCV)_ , 2018, pp. 417–432.
* [191] A. Razavi, A. Van den Oord, and O. Vinyals, “Generating diverse high-fidelity images with vq-vae-2,” _Advances in neural information processing systems_ , vol. 32, 2019.
* [192] G. Biau, B. Cadre, M. Sangnier, and U. Tanielian, “Some theoretical properties of gans,” _Ann. Statist._ , vol. 48 (3), pp. 1539 – 1566, 2020.
* [193] G. Biau, M. Sangnier, and U. Tanielian, “Some theoretical insights into wasserstein gans,” _The Journal of Machine Learning Research_ , vol. 22, no. 1, pp. 5287–5331, 2021.
* [194] D. Belomestny, E. Moulines, A. Naumov, N. Puchkin, and S. Samsonov, “Rates of convergence for density estimation with gans,” _arXiv preprint arXiv:2102.00199_ , 2021.
* [195] M. Meitz, “Statistical inference for generative adversarial networks,” _arXiv preprint arXiv:2104.10601_ , 2021.
* [196] S. D. Mbacke, F. Clerc, and P. Germain, “Pac-bayesian generalization bounds for adversarial generative models,” _arXiv preprint arXiv:2302.08942_ , 2023\.
* [197] S. Liu, O. Bousquet, and K. Chaudhuri, “Approximation and convergence properties of generative adversarial learning,” _Advances in Neural Information Processing Systems_ , vol. 30, 2017.
* [198] Z. Lin, V. Sekar, and G. Fanti, “On the privacy properties of gan-generated samples,” in _International Conference on Artificial Intelligence and Statistics_. PMLR, 2021, pp. 1522–1530.
* [199] D. Alvarez-Melis, V. Garg, and A. Kalai, “Are gans overkill for nlp?” _Advances in Neural Information Processing Systems_ , vol. 35, pp. 9072–9084, 2022.
* [200] A. Borji, “Pros and cons of gan evaluation measures,” _Computer vision and image understanding_ , vol. 179, pp. 41–65, 2019.
* [201] J. Xu, X. Ren, J. Lin, and X. Sun, “Diversity-promoting gan: A cross-entropy based generative adversarial network for diversified text generation,” in _Proceedings of the 2018 conference on empirical methods in natural language processing_ , 2018, pp. 3940–3949.
* [202] T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, “Improved techniques for training gans,” _Advances in neural information processing systems_ , vol. 29, 2016.
* [203] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens, and Z. Wojna, “Rethinking the inception architecture for computer vision,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2016, pp. 2818–2826.
* [204] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “Imagenet: A large-scale hierarchical image database,” in _2009 IEEE conference on computer vision and pattern recognition_. Ieee, 2009, pp. 248–255.
* [205] S. Gurumurthy, R. Kiran Sarvadevabhatla, and R. Venkatesh Babu, “Deligan: Generative adversarial networks for diverse and limited data,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 166–174.
* [206] S. Nowozin, B. Cseke, and R. Tomioka, “f-gan: Training generative neural samplers using variational divergence minimization,” _Advances in neural information processing systems_ , vol. 29, 2016.
* [207] G. Daras, A. Odena, H. Zhang, and A. G. Dimakis, “Your local gan: Designing two dimensional local attention mechanisms for generative models,” in _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , 2020, pp. 14 531–14 539.
* [208] Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multiscale structural similarity for image quality assessment,” in _The Thrity-Seventh Asilomar Conference on Signals, Systems & Computers, 2003_, vol. 2. Ieee, 2003, pp. 1398–1402.
* [209] E. L. Lehmann, J. P. Romano, and G. Casella, _Testing statistical hypotheses_. Springer, 1986, vol. 3.
* [210] P. Cunningham and S. J. Delany, “k-nearest neighbour classifiers-a tutorial,” _ACM computing surveys (CSUR)_ , vol. 54, no. 6, pp. 1–25, 2021.
* [211] W. Bounliphone, E. Belilovsky, M. B. Blaschko, I. Antonoglou, and A. Gretton, “A test of relative similarity for model selection in generative models,” _arXiv preprint arXiv:1511.04581_ , 2015.
* [212] V. Volodina and P. Challenor, “The importance of uncertainty quantification in model reproducibility,” _Philosophical Transactions of the Royal Society A_ , vol. 379, no. 2197, p. 20200071, 2021.
* [213] P. Oberdiek, G. Fink, and M. Rottmann, “Uqgan: A unified model for uncertainty quantification of deep classifiers trained via conditional gans,” _Advances in Neural Information Processing Systems_ , vol. 35, pp. 21 371–21 385, 2022.
* [214] W. He and Z. Jiang, “A survey on uncertainty quantification methods for deep neural networks: An uncertainty source perspective,” _arXiv preprint arXiv:2302.13425_ , 2023.
* [215] J. Gawlikowski, C. R. N. Tassi, M. Ali, J. Lee, M. Humt, J. Feng, A. Kruspe, R. Triebel, P. Jung, R. Roscher, _et al._ , “A survey of uncertainty in deep neural networks,” _Artificial Intelligence Review_ , pp. 1–77, 2023\.
* [216] P. Samangouei, M. Kabkab, and R. Chellappa, “Defense-gan: Protecting classifiers against adversarial attacks using generative models,” _arXiv preprint arXiv:1805.06605_ , 2018.
* [217] H. De Meulemeester, J. Schreurs, M. Fanuel, B. De Moor, and J. A. Suykens, “The bures metric for generative adversarial networks,” in _Joint European Conference on Machine Learning and Knowledge Discovery in Databases_. Springer, 2021, pp. 52–66.
* [218] W. Li, L. Fan, Z. Wang, C. Ma, and X. Cui, “Tackling mode collapse in multi-generator gans with orthogonal vectors,” _Pattern Recognition_ , vol. 110, p. 107646, 2021.
* [219] L. Metz, B. Poole, D. Pfau, and J. Sohl-Dickstein, “Unrolled generative adversarial networks,” _arXiv preprint arXiv:1611.02163_ , 2016.
* [220] Z. Zhang, C. Luo, and J. Yu, “Towards the gradient vanishing, divergence mismatching and mode collapse of generative adversarial nets,” in _Proceedings of the 28th ACM International Conference on Information and Knowledge Management_ , 2019, pp. 2377–2380.
* [221] B. Luo, Y. Liu, L. Wei, and Q. Xu, “Towards imperceptible and robust adversarial example attacks against neural networks,” in _Proceedings of the AAAI Conference on Artificial Intelligence_ , vol. 32, no. 1, 2018.
* [222] S. Ioffe and C. Szegedy, “Batch normalization: Accelerating deep network training by reducing internal covariate shift,” in _International conference on machine learning_. pmlr, 2015, pp. 448–456.
* [223] J. Ho, A. Jain, and P. Abbeel, “Denoising diffusion probabilistic models,” _Advances in neural information processing systems_ , vol. 33, pp. 6840–6851, 2020.
* [224] Y. Song and S. Ermon, “Generative modeling by estimating gradients of the data distribution,” _Advances in neural information processing systems_ , vol. 32, 2019.
* [225] P. Dhariwal and A. Nichol, “Diffusion models beat gans on image synthesis,” _Advances in neural information processing systems_ , vol. 34, pp. 8780–8794, 2021.
* [226] F.-A. Croitoru, V. Hondru, R. T. Ionescu, and M. Shah, “Diffusion models in vision: A survey,” _IEEE Transactions on Pattern Analysis and Machine Intelligence_ , 2023.
* [227] C. Saharia, W. Chan, H. Chang, C. Lee, J. Ho, T. Salimans, D. Fleet, and M. Norouzi, “Palette: Image-to-image diffusion models,” in _ACM SIGGRAPH 2022 Conference Proceedings_ , 2022, pp. 1–10.
* [228] Y. Jiang, S. Chang, and Z. Wang, “Transgan: Two transformers can make one strong gan,” _arXiv preprint arXiv:2102.07074_ , vol. 1, no. 3, 2021.
* [229] Z. Lv, X. Huang, and W. Cao, “An improved gan with transformers for pedestrian trajectory prediction models,” _International Journal of Intelligent Systems_ , vol. 37, no. 8, pp. 4417–4436, 2022.
* [230] L. Sasal, T. Chakraborty, and A. Hadid, “W-transformers: A wavelet-based transformer framework for univariate time series forecasting,” in _2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA)_. IEEE, 2022, pp. 671–676.
* [231] Z. Elabid, T. Chakraborty, and A. Hadid, “Knowledge-based deep learning for modeling chaotic systems,” in _2022 21st IEEE International Conference on Machine Learning and Applications (ICMLA)_. IEEE, 2022, pp. 1203–1209.
* [232] A. Daw, M. Maruf, and A. Karpatne, “Pid-gan: A gan framework based on a physics-informed discriminator for uncertainty quantification with physics,” in _Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining_, 2021, pp. 237–247.
* [233] L. Yang, T. Meng, and G. E. Karniadakis, “Measure-conditional discriminator with stationary optimum for gans and statistical distance surrogates,” _arXiv preprint arXiv:2101.06802_ , 2021.
* [234] B. Bullwinkel, D. Randle, P. Protopapas, and D. Sondak, “Deqgan: Learning the loss function for pinns with generative adversarial networks,” _arXiv preprint arXiv:2209.07081_ , 2022.
* [235] Z. Jiang, Y. Ren, Z. Ye, J. Liu, C. Zhang, Q. Yang, S. Ji, R. Huang, C. Wang, X. Yin, _et al._ , “Mega-tts: Zero-shot text-to-speech at scale with intrinsic inductive bias,” _arXiv preprint arXiv:2306.03509_ , 2023.
* [236] Y. Ren, M. Lei, Z. Huang, S. Zhang, Q. Chen, Z. Yan, and Z. Zhao, “Prosospeech: Enhancing prosody with quantized vector pre-training in text-to-speech,” in _ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)_. IEEE, 2022, pp. 7577–7581.
* [237] L. J. Ratliff, S. A. Burden, and S. S. Sastry, “Characterization and computation of local nash equilibria in continuous games,” in _2013 51st Annual Allerton Conference on Communication, Control, and Computing (Allerton)_. IEEE, 2013, pp. 917–924.
* [238] S. Arora and Y. Zhang, “Do gans actually learn the distribution? an empirical study,” _arXiv preprint arXiv:1706.08224_ , 2017.
* [239] J. Wang, J. Lv, X. Yang, C. Tang, and X. Peng, “Multimodal image-to-image translation between domains with high internal variability,” _Soft Computing_ , vol. 24, pp. 18 173–18 184, 2020.
* [240] I. O. Tolstikhin, S. Gelly, O. Bousquet, C.-J. Simon-Gabriel, and B. Schölkopf, “Adagan: Boosting generative models,” _Advances in neural information processing systems_ , vol. 30, 2017.
* [241] B. Hariharan, P. Arbeláez, L. Bourdev, S. Maji, and J. Malik, “Semantic contours from inverse detectors,” in _2011 international conference on computer vision_. IEEE, 2011, pp. 991–998.
* [242] E. Tzeng, J. Hoffman, K. Saenko, and T. Darrell, “Adversarial discriminative domain adaptation,” in _Proceedings of the IEEE conference on computer vision and pattern recognition_ , 2017, pp. 7167–7176.
* [243] D. Afchar, V. Nozick, J. Yamagishi, and I. Echizen, “Mesonet: a compact facial video forgery detection network,” in _2018 IEEE international workshop on information forensics and security (WIFS)_. IEEE, 2018, pp. 1–7.
* [244] A. Taeihagh, “Governance of artificial intelligence,” _Policy and society_ , vol. 40, no. 2, pp. 137–157, 2021.
* [245] J. Liu, J. Huang, Y. Zhou, X. Li, S. Ji, H. Xiong, and D. Dou, “From distributed machine learning to federated learning: A survey,” _Knowledge and Information Systems_ , vol. 64, no. 4, pp. 885–917, 2022.
* [246] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” _arXiv preprint arXiv:1412.6572_ , 2014.
* [247] M. Hausknecht and P. Stone, “Deep recurrent q-learning for partially observable mdps,” in _2015 aaai fall symposium series_ , 2015.
* [248] J. Yang, A. Kannan, D. Batra, and D. Parikh, “Lr-gan: Layered recursive generative adversarial networks for image generation,” _arXiv preprint arXiv:1703.01560_ , 2017.
* [249] G. Antipov, M. Baccouche, and J.-L. Dugelay, “Face aging with conditional generative adversarial networks,” in _2017 IEEE international conference on image processing (ICIP)_. IEEE, 2017, pp. 2089–2093.
* [250] S. Mohamed and B. Lakshminarayanan, “Learning in implicit generative models,” _arXiv preprint arXiv:1610.03483_ , 2016.
|
# DemOpts: Fairness corrections in COVID-19 case prediction models
Naman Awasthi,
Saad Mohammad Abrar,
Daniel Smolyak,
Vanessa Frias-Martinez
###### Abstract
COVID-19 forecasting models have been used to inform decisions around
resource allocation and interventions, e.g., hospital beds or stay-at-home
orders. State-of-the-art deep learning models often use multimodal data,
such as mobility or socio-demographic data, to enhance COVID-19 case
prediction models. Nevertheless, related work has revealed under-reporting
bias in COVID-19 cases as well as sampling bias in mobility data for certain
minority racial and ethnic groups, which could in turn affect the fairness of
the COVID-19 predictions along race labels. In this paper, we show that
state-of-the-art deep learning models output mean prediction errors that are
significantly different across racial and ethnic groups, which could, in
turn, support unfair policy decisions.
We also propose a novel de-biasing method, DemOpts, to increase the fairness
of deep learning based forecasting models trained on potentially biased
datasets. Our results show that DemOpts can achieve better error parity than
other state-of-the-art de-biasing approaches, thus effectively reducing the
differences in the mean error distributions across more racial and ethnic
groups.
## 1 Introduction
Forecasting the number of COVID-19 cases, hospitalizations or deaths is
crucial to inform decision making. For example, COVID-19 forecasts can be used
by hospitals to evaluate medical needs and required resources such as supplies
or beds; or by public health officials to inform closure policies at various
geographical scales. In the US, COVID-19 forecasts have been used at the state
and county levels to inform social distancing or masking policies, such as the
publicly available forecasts on the COVID-19 Forecast Hub that the CDC
routinely uses in its communications (Centers for Disease Control and
Prevention 2023; CDC 2020).
Related work over the past three years has shown a wide variety of COVID-19
forecasting approaches, from compartmental models (Zou et al. 2020; Pei and
Shaman 2020) to statistical (Chiang, Liu, and Mohler 2020; Galasso, Cao, and
Hochberg 2022) or deep learning methods (Arik et al. 2020a; Zhang-James et al.
2021; Le et al. 2020a; Lucas, Vahedi, and Karimzadeh 2023a). These models,
always trained on past COVID-19 case counts publicly available from, for
example, the NYT or JHU (New York Times 2021; Dong, Du, and Gardner 2020),
frequently use complementary datasets with the objective of improving
forecasting accuracy.
In fact, among the forecasting models from over $50$ teams in the COVID-19
Forecast Hub, $39\%$ use demographic data, either directly from the ACS or
indirectly via community vulnerability indices like the CCVI (Smittenaar et
al. 2021); and $52\%$ of the models incorporate human mobility data from
SafeGraph, Google or Apple, among others (Google 2022; Apple 2022; Labs 2023).
The majority of publications focused on COVID-19 case prediction have reported
results around the accuracy of the models i.e., minimizing the difference
between the predicted cases and the actual number of cases reported.
Nevertheless, prior work has shown that the accuracy of COVID-19 predictions
can depend on various social determinants, including race or ethnicity (Gursoy
and Kakadiaris 2022), income, or age (Erfani and Frias-Martinez 2023),
revealing worse performance for protected attributes and pointing to a lack of
COVID-19 predictive fairness that can affect resource allocation and decision
making. This lack of predictive fairness might be related to bias in the
datasets used to train the model i.e., bias in COVID-19 case reporting or bias
in mobility data. In fact, prior work has shown COVID-19 case bias due to
under-reporting issues in minority communities whereby missing racial data or
misclassified race has been a source of errors (Douglas et al. 2021) as well
as inadequate testing for minority groups across the US, such as
Hispanic/Latino communities (Del Rios et al. 2022). Additionally, prior work
has also revealed sampling bias in mobility data with Black and elder
communities being under-represented because of the way mobility data is
collected (via smart phones and mobile app use) (Coston et al. 2021).
Given the presence of bias in the training datasets frequently used by
COVID-19 forecast models, and prior work demonstrating that COVID-19
prediction accuracy can vary across social determinants, we posit that it
becomes critical to devise methods to prevent data biases from percolating
into the COVID-19 forecasts so as to guarantee fair decision making based on
case predictions. Mitigating bias in COVID-19 forecast models can be done
through pre-processing or in-processing approaches i.e., via bias mitigation
in the training datasets, applying correction methods to COVID-19 counts
(Angulo, Finelli, and Swerdlow 2021; Jagodnik et al. 2020); or via de-biasing
methods embedded in the predictive models that attempt to reduce data and
model bias during training (Yan, Seto, and Apostoloff 2022; Wang and Singh
2023; Yang, Soltan, and Clifton 2022; Estiri et al. 2022). In this paper, we
focus on in-processing approaches given their scarcity in the COVID-19
literature, and propose DemOpts (Demographic Optimization), a de-biasing method
designed to achieve COVID-19 case prediction error parity across racial and
ethnic groups in the context of deep learning models i.e., guarantee that
county prediction errors are not significantly different across racial and
ethnic groups. Although there exist a diverse set of COVID-19 predictive
approaches, we focus on deep learning models, because these are the most
frequently used models in the machine learning community (Meraihi et al.
2022); and narrow down our choice to transformer-based architectures in
particular, because they are state of the art in time series predictions (Lim
et al. 2021a).
The main objective of DemOpts is to improve the fairness of the COVID-19 case
predictions at the county level by achieving error parity in a regression
setting (Gursoy and Kakadiaris 2022). DemOpts proposes a novel de-biasing
approach that leverages county racial and ethnic data during training to
modify conventional deep learning loss functions so as to penalize the model
for statistically significant associations between the predictive error and
the race or ethnicity distribution of a county.
Like state of the art de-biasing methods for regression settings (such as
Individual (Berk et al. 2017), Group (Berk et al. 2017) and Sufficiency-based
fairness correction (Shah et al. 2022)), DemOpts can work in multimodal
contexts, allowing for deep learning models to be trained with different types
of input data besides the COVID-19 cases, including the use of mobility or
demographic data, which are frequently used in COVID-19 prediction models.
However, unlike state of the art de-biasing methods for regression, DemOpts is
designed to de-bias predictions based on the relationship between the
prediction errors and the percentage of racial and ethnic groups in that
county, effectively considering multiple protected groups per county in the
de-biasing process, instead of assigning a county to a unique protected race
or ethnicity.
Thus, the main contributions of this paper are:
* •
We present DemOpts, a novel de-biasing method for deep learning architectures,
that attempts to increase the fairness of the COVID-19 county case predictions
by achieving error parity i.e., guarantee that prediction errors are similar
across racial and ethnic groups.
* •
The DemOpts architecture is designed to optimize for error parity across race
and ethnicity using a novel multi-label approach that allows each county to be
characterized by its own racial and ethnic group distribution during the de-
biasing process, instead of by a unique label.
* •
We propose a novel evaluation protocol for the COVID-19 context and we show
that: (i) state of the art COVID-19 county case prediction models based on
transformer architectures lack error parity when no de-biasing method is
applied i.e., prediction errors are statistically significantly different
across racial and ethnic groups; (ii) DemOpts applied to transformer-based
architectures improves the error parity of COVID-19 county case prediction
models, increasing the similarity between mean prediction errors across racial
and ethnic groups, and (iii) DemOpts de-biasing approach performs better than
existing de-biasing methods for regression settings, namely, individual
fairness correction, group fairness correction and sufficiency-based penalty
for fairness correction.
The rest of the paper is structured as follows. We first discuss the related
literature, followed by a description of the DemOpts method to achieve error
parity in transformer based architectures, and its evaluation protocol for the
COVID-19 context, including two metrics to measure error parity. We finalize
the paper presenting the evaluation of DemOpts: first describing the datasets
used, and then showing how DemOpts improves prediction fairness by increasing
error parity across racial and ethnic groups.
## 2 Related Literature
In this section, we cover three areas that are of relevance to the research
proposed in this paper: deep learning models to forecast time series data, the
presence of bias in COVID-19 datasets used by COVID-19 case forecasting
methods; and approaches to measure and improve the fairness of predictions in
regression settings.
### Deep learning based Forecasting models
Deep learning models have started to become popular in time series prediction
tasks. The available methods include: (i) autoregressive models, which are
modifications of recurrent neural networks (RNNs) such as Long Short Term
Memory (LSTM) networks or Gated Recurrent Units (GRU) (Hochreiter and
Schmidhuber 1997); (ii) graph-based neural networks, which encapsulate spatio-
temporal aspects of data in implementations such as Graph Attention Networks
(GATs) (Velickovic et al. 2017), Spatio-temporal Graph Convolution Networks
(ST-GCN) (Yu, Yin, and Zhu 2018a), NBConv (Duan et al. 2017) or GGConv (Yu,
Yin, and Zhu 2018b); and (iii) transformers, which have gained success in various
applications such as computer vision (Dosovitskiy et al. 2021; Kirillov et al.
2023), natural language processing (Radford et al. 2018; Devlin et al. 2018),
speech (Latif et al. 2023) or tabular data (Liu et al. 2021; Wang et al. 2020;
Yin et al. 2020). There is a large body of work that utilizes Transformer-
based architecture models (Vaswani et al. 2017) to forecast time series with
state of the art performance, including LogTrans (Li et al. 2019), Informer
(Zhou et al. 2020), Autoformer (Wu et al. 2021), FEDformer (Zhou et al. 2022),
Pyraformer (Liu et al. 2022), and PatchTST (Nie et al. 2023).
In this paper, we specifically focus on the Temporal Fusion Transformer
architecture (TFT) (Lim et al. 2021b) since it allows us to easily incorporate
exogenous variables (like mobility data) as well as static variables (like
demographic data) on top of the COVID-19 time series.
### Bias in Mobility and COVID-19 Data
Human mobility data has been used to characterize human behaviors in the built
environment (Vieira et al. 2010; Hernandez et al. 2017; Frias-Martinez and
Virseda 2013; Rubio et al. 2010; Wu, Frias-Martinez, and Frias-Martinez 2021),
for public safety (Wu et al. 2022, 2023), during epidemics and disasters
(Wesolowski et al. 2012; Bengtsson et al. 2015; Hong et al. 2017; Isaacman,
Frias-Martinez, and Frias-Martinez 2018; Ghurye, Krings, and Frias-Martinez
2016; Hong and Frias-Martinez 2020), as well as to support decision making for
socio-economic development (Frias-Martinez, Virseda, and Frias-Martinez 2010;
Fu et al. 2018; Frias-Martinez, Virseda, and Gomero 2012; Hong, Frias-
Martinez, and Frias-Martinez 2016; Frias-Martinez et al. 2012). During the
COVID-19 pandemic, human mobility has also played a central role in driving
decision making, acknowledging the impact of human movement on virus
propagation (Arik et al. 2020b; Lucas, Vahedi, and Karimzadeh 2023b; Le et al.
2020b; Erfani and Frias-Martinez 2023; Badr and Gardner 2021; Abrar et al.
2023).
Bias in mobility and COVID-19 data is being increasingly discussed due to the
exponential growth of COVID-19 forecasting models in the literature, and to
publicly available mobility data. Mobility data has been reported to suffer
from sampling bias given that digital traces are being collected from mobile
apps installed on smart phones, which limits the types of individuals from
whom mobility data is being collected. In fact, prior work has revealed
sampling bias in SafeGraph mobility data with Black and elder individuals
being under-represented in the datasets (Coston et al. 2021). Critical to the
research proposed in this paper, prior work has exposed biases in COVID-19
forecasting models (Tsai et al. 2022), and researchers have shown that
COVID-19 county prediction improvements associated to the use of mobility data
tend to take place in counties with lower presence of protected racial and
ethnic groups (Abrar et al. 2023). On the other hand, COVID-19 under-reporting
bias has been discussed in the literature (Douglas et al. 2021; Gross et al.
2020) and points to multiple causes, including inadequate testing across
certain minority groups such as Hispanic/Latinos (Del Rios et al. 2022); or
lack of consistency in reporting race and ethnicity for COVID-19 cases, which
has generated a lot of missing or incorrect racial data in COVID-19 case
statistics, as reported by the CDC (CDC 2023). Reducing the impact of mobility
or COVID-19 case bias in COVID-19 case predictions, as we do in this paper, is
of critical importance to support decision making processes focused on
resource allocation during pandemics, so as to reduce harms and guarantee that
decisions are fair and just across racial and ethnic groups.
### Fairness Metrics and Fairness Corrections in Machine learning models
Transformer-based COVID-19 case forecast models require the use of fairness
metrics for regression settings, given that the loss optimization process in
gradient based deep learning architectures uses real number predictions
instead of classes. Agarwal et al. (Agarwal, Dudík, and Wu 2019), Fitzsimons
et al. (Fitzsimons et al. 2019) or Gursoy et al. (Gursoy and Kakadiaris 2022)
outline the different aspects of fairness in regression settings, and propose
a set of fairness metrics for regression-type models. For this paper, we use
the error parity metric proposed in (Gursoy and Kakadiaris 2022). Error parity
requires error distributions to be statistically independent from racial and
ethnic groups. We expand this definition, and relax the statistical
significance requirement, to be able to also evaluate whether the proposed
DemOpts method can at least reduce the differences in error distributions
across racial and ethnic groups, even when they are still statistically
significantly different.
To correct for bias and unfair performance in deep learning models,
researchers have used pre-processing and in-processing correction approaches.
Pre-processing approaches focus on creating a better input for learning deep
neural network models by removing bias from the datasets (Brunet et al.
2018),(Calmon et al. 2017); and there have been successful efforts focused on
de-biasing under-reporting COVID-19 datasets to estimate actual cases or
deaths before they are fed into predictive models (Jagodnik et al. 2020;
Albani et al. 2021). On the other hand, in-processing approaches to improve
the fairness of deep learning models, like the one we use in this paper, focus
on the model and its regularization, usually adding a bias correction term in
the loss function (Wang and Singh 2023; Das and Dooley 2023; Yan, Seto, and
Apostoloff 2022).
In this paper, we will compare our proposed de-biasing approach against three
state-of-the-art methods for de-biasing in regression settings: Individual
fairness correction (Berk et al. 2017), Group Fairness correction (Berk et al.
2017) and sufficiency based penalty for fairness correction (Shah et al.
2022). Individual and group fairness calculate penalties by determining over-
estimations across different groups and weighting the loss by a factor
proportional to the over-estimations; while sufficiency based regularizers
propose to make the loss independent of sensitive data attributes by
simultaneously training a joint model and subgroup specific networks to
achieve fair predictions (Shah et al. 2022).
## 3 DemOpts Method
Our modeling focus is on deep learning models, which are the most frequently
used approach for COVID-19 county case forecasts in the machine learning
community (Meraihi et al. 2022). We specifically focus on the Temporal Fusion
Transformer (TFT) model introduced in (Lim et al. 2021b) for several reasons.
First, this model is state of the art in interpretable time series prediction
(Lim et al. 2021b). Second, this model allows for the use of static reals as
input to the model (attributes that do not change over the duration of the
training process such as demographic percentages or population statistics);
and third, the model works well with time-dependent features including
COVID-19 cases or mobility data whereby past data influences future
statistics.
Figure 1: Flow Diagram for the DemOpts method.
Training a deep learning model has the following steps: (1) forward pass on
the training data, (2) computation of loss and (3) backward pass to change
weights of the model. DemOpts modifies conventional loss functions to penalize
the model for any statistically significant association between the county
prediction loss (error) and the county racial and ethnic groups. In other
words, DemOpts performs a race-based optimization on the error during model
training using county demographic racial and ethnic data. To achieve that,
DemOpts follows a three step process (see Figure 1 for a diagram and Algorithm
1 for the pseudocode):
### Step 1: Calculate Loss
To thoroughly model errors in time series, we use quantile predictions instead
of point-value predictions. Quantile predictions are measured for seven
quantiles ([0.02, 0.1, 0.25, 0.5, 0.75, 0.9, 0.98]) to gain insights into the
uncertainty ranges and confidence intervals of the COVID-19 county case
predictive models. When using quantile predictions, the error is computed
using the quantile loss, also known as pinball loss (PBL), and defined as
follows:
$PBL_{q}(y_{ip},y_{i})=\begin{cases}q*(y_{i}-y_{ip})&\text{if }y_{i}\geq y_{ip}\\ (q-1)*(y_{i}-y_{ip})&\text{if }y_{i}<y_{ip}\end{cases}$ (1)
For quantile $q$, the PBL for the prediction of a given input $X_{i}$ is
$PBL_{q}(y_{ip},y_{i})$, where $y_{i}$ is the ground truth and $y_{ip}$ is the
predicted value. The average over all quantiles can be represented as
$PBL(y_{ip},y_{i})=\frac{1}{|q|}\sum_{q}PBL_{q}(y_{ip},y_{i})$.
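As a concrete illustration, the pinball loss of Equation 1 and its average over the seven quantiles can be implemented directly (a minimal pure-Python sketch; the function names are ours, not from the paper's code):

```python
# Pinball (quantile) loss for a single prediction, Equation 1:
#   PBL_q = q * (y - y_p)       if y >= y_p  (under-prediction)
#         = (q - 1) * (y - y_p) if y <  y_p  (over-prediction)
def pinball_loss(y_pred, y_true, q):
    diff = y_true - y_pred
    return q * diff if diff >= 0 else (q - 1) * diff

# The seven quantiles used in the paper.
QUANTILES = [0.02, 0.1, 0.25, 0.5, 0.75, 0.9, 0.98]

def avg_pinball_loss(preds_by_quantile, y_true):
    """Average PBL over quantiles; one predicted value per entry of QUANTILES."""
    losses = [pinball_loss(y_p, y_true, q)
              for y_p, q in zip(preds_by_quantile, QUANTILES)]
    return sum(losses) / len(losses)
```

Note that for the median quantile ($q=0.5$) the pinball loss reduces to half the absolute error, so over- and under-predictions are penalized symmetrically there, while the outer quantiles penalize asymmetrically.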
### Step 2: Identify Dependencies between Prediction Errors and Race and
Ethnicity
To achieve error parity i.e., mean errors being independent from racial and
ethnic groups, DemOpts first determines the relationship between errors and
race and ethnic labels. For that purpose, DemOpts fits a regression model
between the prediction losses $PBL(y_{ip},y_{i})$ across datapoints $i$ and
their corresponding county race and ethnicity labels $D_{i}$:
$\displaystyle PBL(y_{ip},y_{i})$ $\displaystyle=\beta*D_{i}+\alpha$ (2)
$\displaystyle\qquad\text{with}\quad D_{i}$
$\displaystyle=\begin{bmatrix}d_{1},d_{2},d_{3},d_{4},\text{lookahead}\end{bmatrix}$
where $d_{i}$ are the corresponding county demographic features extracted from
the U.S. census data and represented as the percentage of each racial and
ethnic group of the county corresponding to datapoint $i$, and lookahead
refers to the number of days into the future the COVID-19 case prediction was
generated for. In matrix representation: $PBL(Y_{ip},Y_{i})=\beta*D+\alpha$.
Once the regression model is fit, both regression coefficients ($\beta$) and
their statistical significance ($p-value$) are passed on to step 3, to modify
the adjusted loss and attempt to decouple race from the errors (loss).
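The regression of Equation 2 can be sketched with an ordinary least-squares fit; here via numpy on synthetic, purely illustrative data (in practice a package such as statsmodels would also return the p-values that Step 3 needs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic batch: 4 demographic percentages plus a lookahead per datapoint.
n = 200
D = np.column_stack([rng.random((n, 4)), rng.integers(1, 15, n)])

# Losses that, by construction, depend on the first demographic column.
true_beta = np.array([2.0, 0.0, 0.0, 0.0, 0.1])
pbl = D @ true_beta + 0.5 + 0.01 * rng.standard_normal(n)

# Fit PBL = beta * D + alpha (design matrix with an intercept column).
X = np.column_stack([D, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, pbl, rcond=None)
beta, alpha = coef[:-1], coef[-1]
```

With low noise the fit recovers the planted coefficients, so the significant column would be flagged by its p-value in the full statsmodels version.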
### Step 3: Adjust the Loss
DemOpts modifies the conventional loss of deep learning models by adjusting
for racial or ethnic bias in the error i.e., the loss is increased whenever a
statistically significant regression coefficient for a race or ethnicity is
found in Step 2 (at level $p-value=0.05$). By increasing the loss, DemOpts
attempts to reduce the dependency between errors and race and ethnicity i.e.,
make the errors similar across racial and ethnic groups. Specifically, the
loss is adjusted by the product of the prior loss $PBL(y_{ip},y_{i})$, the
percentage race or ethnicity $D_{j}$ that holds a significant relationship
with the error, and its coefficient $\beta_{j}$ in absolute value:
$\displaystyle
L_{adj}=PBL(y_{ip},y_{i})+\sum_{j}H(pval_{j})\,(|\beta_{j}|*D_{j}*L)$ (3)
$\displaystyle\text{with}\qquad H(x)=\begin{cases}1&\text{if }x<0.05,\\ 0&\text{if }x\geq 0.05\end{cases}$
where $L$ denotes the prior loss $PBL(y_{ip},y_{i})$.
Algorithm 1 DemOpts Training
1:Input: Training set (X, D, Y), Learning rate (lr), Number of epochs
(epochs), threshold
2:Output: Trained model (M)
3:X : COVID-19 Timeseries data for all counties
4:Y : COVID-19 cases in future for all counties
5:D : Demographic data for all counties
6:Initialize model parameters randomly
7:for epoch in range(0, epochs) do
8: // sample from $X,D,Y$ of size b
9: for $(X_{b},D_{b},Y_{bt})$ in $(X,D,Y)$ do
10: // Forward propagation
11: $Y_{bp}=M(X_{b})$
12: //Calculate QuantileLoss
13: $L_{b}=QuantileLoss(Y_{bp},Y_{bt})$
14: //Find association
15: $olsreg=OLS.fit(D_{b},L_{b})$
16: $pvals,betas=olsreg.pvals,olsreg.coef$
17: // additional penalty on loss
18: for $index$ in $|pvals|$ do
19: $pval_{i},beta_{i}=pvals[index],betas[index]$
20: // Get the corresponding demographic percentage column and all rows
21: $D_{b,idx}=D_{b}[:,index]$
22: if $pval_{i}<threshold$ then //this ensures significant association
23: $L_{b}+=L_{b}*|beta_{i}|*D_{b,idx}$
24: end if
25: end for
26: $backpropagate(M,L_{b})$
27: end for
28:end for
29:return M
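The penalty loop of Algorithm 1 (implementing Equation 3) can be sketched as follows; `betas` and `pvals` would come from the OLS fit of Step 2, and the inputs here are illustrative:

```python
# Adjust each sample's loss by |beta_j| * D_j * L for every demographic
# column j whose regression coefficient is statistically significant.
# Following Equation 3, every penalty is computed from the unadjusted loss.
def demopts_adjust(losses, demographics, betas, pvals, threshold=0.05):
    adjusted = list(losses)
    for j, (beta, pval) in enumerate(zip(betas, pvals)):
        if pval < threshold:  # significant association with the error
            for i in range(len(adjusted)):
                adjusted[i] += losses[i] * abs(beta) * demographics[i][j]
    return adjusted
```

For example, with a single significant column ($\beta=2$, $p=0.01$) a county with demographic share $0.5$ has its loss of $1.0$ inflated to $2.0$, whereas a non-significant column leaves every loss untouched.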
## 4 DemOpts Evaluation Protocol
In this section, we describe a novel protocol to evaluate DemOpts. For that
purpose, we first describe the COVID-19 county case prediction model we use,
and the different de-biasing approaches we evaluate on that prediction model.
Next, we describe the error parity metrics we use to evaluate the fairness of
each prediction model; and finally, we present the approach to analyze whether
DemOpts improves the error parity metrics when compared to other state-of-the-
art de-biasing approaches for regression settings.
### Predictive Model and De-Biasing Approaches
We use the Temporal Fusion Transformer model (TFT) with the conventional
pinball loss function as our baseline model ($TFT_{baseline}$) to predict the
number of COVID-19 county cases for a given day. Input data to the TFT model
include past COVID-19 cases per county, mobility data from SafeGraph and race
and ethnicity data for the county (further details about these datasets are
provided in the next section).
We also train and test another TFT enhanced with the DemOpts de-biasing
method, $TFT_{DemOpts}$, that adjusts the loss computation to attempt to
eliminate or reduce the dependencies between error and race so as to achieve
error parity. In addition, we train and test three more TFTs enhanced with
state-of-the-art de-biasing methods for regression settings namely, individual
fairness $TFT_{Individual}$ (Berk et al. 2017), group fairness $TFT_{Group}$
(Berk et al. 2017), and sufficiency based regularizer $TFT_{Sufficiency}$
(Shah et al. 2022). Individual and group fairness methods calculate penalties
by determining over-estimations across different groups and weighting the loss
by a factor proportional to the over-estimations; while the sufficiency based
regularizer trains a joint model and group-specific networks to achieve fair
predictions. We replicate their methodology and adapt it to the forecasting
setting by keeping TFT as the common network in the training process.
### Measuring Model Fairness
We choose error parity as our fairness metric (Gursoy and Kakadiaris 2022)
with a focus on evaluating whether the distribution of predictive errors at
the county level is independent of county race and ethnicity i.e., prediction
errors are not statistically significantly different across racial and ethnic
groups. To measure the fairness of each of the models $TFT_{baseline}$,
$TFT_{DemOpts}$, $TFT_{Individual}$, $TFT_{Group}$ and $TFT_{Sufficiency}$, we
propose a two-step process.
Table 1: Majority label counts

Majority label | Count
---|---
Asian | 6
Black | 127
Hispanic | 126
White | 2825
Step 1: Associate errors to county race or ethnicity. To carry out the
fairness analysis, we need to associate the PBL error of each county with race
and ethnicity labels. However, that would require access to race-stratified
COVID-19 case data at the county level, unfortunately not available due to
systemic data collection failures during the pandemic (Kader and Smith 2021).
Hence, we propose to associate each county and its error to the majority race:
we label each county with the race or ethnicity that has the highest
population percentage in that county. Following this procedure and using data
from the 2019 U.S. Census, our fairness analysis will consider the following
race and ethnic groups: Asian, Black, Hispanic and White. Table 1 shows the
distribution of U.S. counties into these four racial and ethnic groups, and
Figure 2 shows a color-coded map with the majority racial or ethnic group for
each county. During the fairness analysis, we will refer to majority White
counties as the unprotected group, and majority Black, Hispanic or Asian
counties as the protected groups.
Figure 2: Counties with majority based labels.
In addition, we normalize each county PBL error per 1,000 county residents.
Normalizing by county population scales the errors appropriately, since
higher population counties will have higher case counts and thus higher
magnitude errors; it allows the error per unit population of one county to be
compared fairly with that of another.
$\displaystyle NormPBL(y_{pi},y_{ti})$
$\displaystyle=\frac{1000*PBL(y_{pi},y_{ti})}{pop_{i}}$ (4)
where $y_{ti}$ is the ground truth, $y_{pi}$ is the predicted value, and
$pop_{i}$ is the county population.
Step 2: Compute fairness metric. Once PBLs have been associated with racial
and ethnic groups in the U.S., we can compute the error parity i.e., the
fairness metric focused on evaluating whether the prediction errors are
different across race and ethnicity. We propose two metrics to measure the
error parity of COVID-19 county case predictions: hard error parity and soft
error parity. Next, we explain how we implement them and why both are needed.
Hard Error Parity Metric. Model predictions exhibit hard error parity when no
statistically significant difference exists between county case normalized mean
prediction errors (NormPBL) across racial or ethnic groups. In other words,
normalized mean PBL errors across counties of different racial and ethnic
groups are similar and hence not biased by race or ethnicity. To test for the
hard error parity of a prediction model, we propose to run one-way ANOVA
followed by post-hoc Tukey HSD tests between the county normalized mean error
distributions of all racial and ethnic groups. ANOVA tests have been shown to
be an adequate choice even under violations of normality and in the presence
of unequal sample sizes, like our majority race/ethnicity distributions; thus,
we choose this parametric test due to its greater statistical power (Blanca
Mena et al. 2017; Zimmerman 1987).
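In practice one would use a statistics package for these tests (e.g. scipy's `f_oneway` and statsmodels' `pairwise_tukeyhsd`); the one-way ANOVA F statistic itself is simple enough to sketch directly in pure Python:

```python
# One-way ANOVA F statistic: between-group variance over within-group
# variance, for k groups of (possibly unequal) sizes.
def anova_f(groups):
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (df = k - 1).
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-group sum of squares (df = n - k).
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (relative to the F distribution with $k-1$ and $n-k$ degrees of freedom) rejects the null hypothesis that all group mean errors are equal.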
Rejecting the null hypothesis for ANOVA would point to significantly different
mean error values across some racial or ethnic groups, and to a lack of
perfect hard error parity. The subsequent analysis of the post-hoc Tukey-HSD
test would reveal the pairs of racial and ethnic groups whose mean error
values are significantly different, and their numerical difference. The Tukey
test also highlights the pairs of racial and ethnic groups for which the mean
error is not statistically significantly different, pointing to instances
where hard error parity exists for that model. However, as Table 1 shows, the
number of points in some distributions might not be sufficient to reveal
statistically significant results (Nature 2019). Thus, we also propose a
relaxed definition of error parity: soft error parity.
Soft Error Parity Metric. Instead of measuring the statistical significance of
the relationship between county race labels and county errors, we propose to
use the Accuracy Equity Ratio metric (AER) (Castelnovo et al. 2022). AER
computes the ratio between the errors of the protected and unprotected groups
as follows:
$\displaystyle AER_{pg}$
$\displaystyle=\frac{AvgNormPBL(y_{p},y_{t},pg)}{AvgNormPBL(y_{p},y_{t},unpg)}$
(5)
where subscript $pg$ indicates counties labeled as protected group (Black,
Hispanic or Asian), $unpg$ indicates counties labeled as the unprotected group
(White), and AvgNormPBL is the average of the normalized PBL across all
counties for a given racial group $g$ ($pg$ or $unpg$):
$\displaystyle AvgNormPBL(y_{p},y_{t},g)$ $\displaystyle=\sum_{i\in
c_{g}}\frac{NormPBL(y_{pi},y_{ti})}{\lvert c_{g}\rvert}$ (6)
As defined, the $AER$ metric goes from $0$ to $\infty$. $AER$ values in the
range $0$ to $1$ indicate comparatively lower normalized PBL for protected
groups, which means the model predictions could be biased - have higher errors
- for White majority counties; while $AER$ values larger than one indicate
that the model could be biased against the protected group i.e., the
prediction errors are larger for counties with majority Black, Hispanic or
Asian population. Values close to one indicate parity in error distribution
between the protected group counties and the majority White counties. We claim
that a predictive model achieves soft error parity for a given protected group
when the $AER$ value is close to one, that is, the mean predictive error
between that protected group and the White race is similar.
### DemOpts Over State-of-the-Art
To assess whether DemOpts is a better de-biasing approach than state-of-the-
art methods, we need to compare the error parity metrics of the COVID-19
county case prediction model enhanced with the DemOpts method $TFT_{DemOpts}$
against the error parity metrics of the same prediction model enhanced with
the other de-biasing approaches (individual $TFT_{Individual}$, group
$TFT_{Group}$ or sufficiency $TFT_{Sufficiency}$) as well as with the baseline
COVID-19 county case prediction model without any de-biasing approach,
$TFT_{Baseline}$.
Next we describe how we carry out this analysis for the hard and soft error
parity metrics.
Hard Error Parity Evaluation.
We compute the hard error parity metric for each of the COVID-19 county case
prediction models, using one-way ANOVA and the post-hoc Tukey-HSD test. An
exploration of the statistical significance of the mean error difference for
each pair of racial and ethnic groups will reveal whether applying DemOpts to
the COVID-19 case prediction model produces fewer instances of significant mean
prediction error differences than any of the other de-biasing methods or the
baseline. In other words, a decrease in the number of significantly different
mean PBL errors between races would point to an achievement of hard error
parity for more racial and ethnic groups than other state-of-the-art de-
biasing approaches, or than the baseline.
Soft Error Parity Evaluation. To assess whether DemOpts applied to a COVID-19
case prediction model has higher soft error parity than any of the other
state-of-the-art de-biasing approaches, we propose to compare the AER values
for each protected race and ethnic group across the five models:
$TFT_{DemOpts}$, $TFT_{Individual}$, $TFT_{Group}$, $TFT_{Sufficiency}$ and
$TFT_{Baseline}$. Since AER values represent the quotient between the
normalized mean prediction errors of a protected race/ethnicity versus White
counties, the model with more AER values closer to one will be the approach
with the highest soft error parity. To measure AER’s distance to one, we
compute the $distance=|1-AER_{race}|$ for each race and ethnic group, which
represents the distance to a perfect soft parity error of $1$. Distances
closer to zero reveal better soft parities i.e., soft parity values closer to
one.
## 5 DemOpts Evaluation Results
We first present the datasets and the models used to train and test the
COVID-19 county case prediction models with and without de-biasing approaches.
We finalize discussing the hard and soft error parity analysis for
$TFT_{DemOpts}$ by comparing it against the other state-of-the-art de-biasing
methods and against the baseline.
### Datasets
In this paper, we train TFT COVID-19 county case prediction models for the
U.S. using COVID-19 case data, as well as mobility and demographic data.
Mobility data has been used by prior work in an attempt to inform case
prediction via human mobility behaviors, under the assumption that the way
people move might have an impact on the spreading of the epidemic. On the
other hand, demographic data either raw from the census, or combined in
different types of vulnerability indices, has also been shown to be helpful in
predicting COVID-19 prevalence, given the fact that COVID-19 has heavily
affected vulnerable populations (Gross et al. 2020).
COVID-19 Case Data. We use the COVID-19 case data compiled by the New York
Times at the county level (Times 2021). We account for delayed reporting, by
using the 7-day daily rolling average of COVID-19 cases (computed as the
average of its current value and 6 prior days) instead of raw counts. As
stated in the Introduction, case numbers might not be reflective of the actual
spread of COVID-19 for specific racial and ethnic groups, and such under-
reporting bias could in turn affect the fairness of the COVID-19 predictions
(Douglas et al. 2021; Del Rios et al. 2022).
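The 7-day rolling average described above (the current day averaged with the 6 prior days) can be sketched as follows; how the first six days of the series are handled is our assumption, since the paper does not specify it:

```python
# 7-day trailing rolling average of daily case counts; the first six
# days average over however many days are available so far.
def rolling_avg_7d(cases):
    out = []
    for i in range(len(cases)):
        window = cases[max(0, i - 6): i + 1]
        out.append(sum(window) / len(window))
    return out
```

This smooths out day-of-week reporting artifacts (e.g. weekend dips) that would otherwise leak into the forecasting target.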
Figure 3: (a) Case counts per 1,000 population for all ethnicities and races.
(b) Case counts per 1,000 population using majority based labelling of
counties. Note scale difference in y-axis.
Figure 4: Mobility for all ethnicities and races.
Figure 3(a) reflects the daily COVID-19 reported cases throughout the data
collection period and Figure 3(b) shows the case temporal distribution per
race and ethnicity.
Fairness Method | F(x,y)
---|---
Baseline | 1195.398**
DemOpts | 668.769**
Group | 1455.528**
Individual | 1469.698**
Sufficiency | 1195.651**
Table 2: ANOVA F-test statistic comparing the mean prediction error for each
TFT prediction model: baseline and TFTs enhanced with de-biasing methods. All
tests were significant with p-value $<0.01$ (**) with DF (3,3080).
SafeGraph Mobility Data. SafeGraph open sourced the mobility patterns of smart
phone app users at the onset of the pandemic. These data points are curated by
tracking the movements of millions of pseudonymized users via mobile app SDKs.
Based on the data available, we use the daily O-D (origin-destination) county
to county flows (Kang et al. 2020). O-D flows represent the volume of trips
between pairs of counties across the United States for each day. For O-D
flows, we use only the SafeGraph inflow (i.e., mobility into the county). The
inflow mobility is measured as changes in volumes of flows with respect to a
baseline of normal behavior computed by SafeGraph using mobility data from
February 17, 2020 to March 7, 2020. Prior work has shown sampling
bias in mobility datasets revealing that not all races and ethnicities are
equally represented (Coston et al. 2021; Schlosser et al. 2021). It has also
been shown that sampling bias in the mobility data can negatively impact
downstream tasks such as COVID-19 forecasting (Abrar et al. 2023). While the
addition of mobility could potentially help improve the prediction accuracy
and support better decision making, it introduces bias. Our empirical analysis
of DemOpts aims to understand whether the de-biasing method proposed in this
paper can improve the fairness of COVID-19 county case predictive models when
mobility data is used as input into the predictive model. Figure 4 depicts
the daily average mobility across all counties in the US throughout the data
collection period.
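A minimal sketch of the baseline-relative inflow normalization described above; the relative-change formula, function name, and values are assumptions for illustration (the exact SafeGraph computation is not specified here):

```python
import pandas as pd

def inflow_change(inflows: pd.Series, baseline_start: str, baseline_end: str) -> pd.Series:
    """Express daily county inflow as relative change with respect to the
    mean inflow over a baseline window of 'normal' mobility behavior.
    (An assumed formula, not SafeGraph's exact computation.)"""
    baseline = inflows.loc[baseline_start:baseline_end].mean()
    return (inflows - baseline) / baseline

# Illustrative inflow series: 20 baseline days at 100, then 5 days at 150
idx = pd.date_range("2020-02-17", periods=25, freq="D")
flows = pd.Series([100.0] * 20 + [150.0] * 5, index=idx)
rel = inflow_change(flows, "2020-02-17", "2020-03-07")
# The last 5 days show a +50% change relative to the baseline window
```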
Race and Ethnicity Data. We retrieve the race and ethnicity data from each
county in the U.S. from the American Community Survey (ACS). The ACS survey
collects data annually from all 50 states, Puerto Rico and Washington DC. We
use the population race and ethnicity information for each county, and
consider the following labels: Asian, Black, Hispanic and White.
Group 1 | Group 2 | Baseline | DemOpts | Group | Individual | Sufficiency
---|---|---|---|---|---|---
Asian | Black | -0.118 | 1.327 | -0.202 | -0.126 | -0.119
Asian | Hispanic | -2.302** | -0.771 | -2.659** | -2.507** | -2.297**
Asian | White | -2.064** | -0.968 | -2.515** | -2.517** | -2.061**
Black | Hispanic | -2.184** | -2.098** | -2.457** | -2.381** | -2.178**
Black | White | -1.946** | -2.295** | -2.313** | -2.391** | -1.942**
Hispanic | White | 0.238 | -0.197 | 0.144 | -0.01 | 0.236
Table 3: Hard error parity analysis. Each number represents the difference
between the mean normalized PBL loss for each pair of racial and ethnic
groups, and its statistical significance i.e., whether the difference is
statistically significant or not (with ** p-value $<0.01$, * p-value $<0.1$).
Bolded numbers represent the instances where DemOpts has removed a significant
difference in mean PBL errors, improving over the baseline and all the other
de-biasing methods. DemOpts achieves hard error parity for Asian counties.
### COVID-19 Case Prediction Models
We use COVID-19 case data as well as SafeGraph mobility data from March 18,
2020 to November 30, 2020 for the training (207 days) and testing (49 days) of
the TFT COVID-19 county case prediction models. The forecast task is the
prediction of the number of COVID-19 cases for a given county for day $X+1$ to
$X+49$, i.e., the following 49 days (long-term forecasting with lookahead
values from $1$ to $49$). Specifically, we train and test: (1) the
$TFT_{baseline}$, a TFT prediction model without a de-biasing method; (2) the
$TFT_{Individual}$, $TFT_{Group}$ and $TFT_{Sufficiency}$, TFT prediction
models with state-of-the-art de-biasing methods and (3) $TFT_{DemOpts}$, a TFT
prediction model enhanced with our proposed de-biasing method. All five models
are trained and tested for exactly the same temporal range; and all are
implemented using the pytorch-forecasting library. Although COVID-19 county
case data as well as mobility data are available for longer periods of time,
we decided to limit the period of analysis to a time before COVID-19 vaccines
were available, given that after that event, research has revealed a very
unclear relationship between mobility data and post-vaccines COVID-19 case
volumes (Gatalo et al. 2021). Once all models have been trained, we use the
prediction errors (PBL) per racial and ethnic group to analyze and compare
their hard and soft error parity.
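Assuming PBL denotes the pinball (quantile) loss commonly reported for TFT-style forecasters, a minimal per-county error computation might look like the sketch below; the function name and example values are illustrative:

```python
import numpy as np

def pinball_loss(y_true: np.ndarray, y_pred: np.ndarray, q: float = 0.5) -> float:
    """Pinball (quantile) loss at quantile q. At q = 0.5 it reduces to
    half the mean absolute error. (Assuming PBL in the text refers to
    this standard quantile loss used by TFT-style forecasters.)"""
    diff = y_true - y_pred
    return float(np.mean(np.maximum(q * diff, (q - 1) * diff)))

# Illustrative median forecast (q = 0.5) for one county's test window
y_true = np.array([10.0, 20.0, 30.0])
y_pred = np.array([12.0, 18.0, 30.0])
loss = pinball_loss(y_true, y_pred, q=0.5)  # half of MAE = (4/3)/2 = 2/3
```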
Protected Group | Baseline | DemOpts | Group | Individual | Sufficiency
---|---|---|---|---|---
Asian | 0.811 | 0.248 | 0.842 | 0.850 | 0.811
Black | 0.764 | 0.588 | 0.774 | 0.807 | 0.764
Hispanic | 0.093 | 0.051 | 0.048 | 0.003 | 0.093
Table 4: Soft error parity analysis. Each number represents the distance
($|1-AER_{race}|$) for each protected group and de-biasing method. For each
protected race/ethnicity, distances closer to zero represent higher soft error
parity (signaled in bold font). $TFT_{DemOpts}$ achieves the highest soft
error parity for two of the three protected races under study.
$TFT_{Individual}$ achieves the best soft error parity for the Hispanic
counties when compared to prediction errors in White counties.
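The $|1-AER_{race}|$ distances in Table 4's Baseline column can be reproduced from the baseline mean PBL values reported in Table 5, assuming $AER$ is the ratio of a protected group's mean error to the White group's mean error:

```python
# Baseline mean PBL per racial/ethnic group, taken from Table 5
mean_pbl = {"Asian": 0.482, "Black": 0.600, "Hispanic": 2.784, "White": 2.546}

def soft_parity_distance(group_error: float, white_error: float) -> float:
    """|1 - AER| distance, reading AER as the ratio of a protected group's
    mean error to the White group's mean error (an assumption consistent
    with the numbers in Tables 4 and 5)."""
    return abs(1.0 - group_error / white_error)

distances = {
    group: round(soft_parity_distance(err, mean_pbl["White"]), 3)
    for group, err in mean_pbl.items()
    if group != "White"
}
# Matches Table 4's Baseline column: Asian 0.811, Black 0.764, Hispanic 0.093
```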
### Hard Error Parity Analysis
ANOVA tests of the normalized mean PBL error distributions across race and
ethnic groups for each de-biasing approach were all significant, pointing to a
dependency between race and the normalized prediction errors. Table 2 shows
the F-statistic and test significance for each of the prediction models with
and without de-biasing approaches. The significant ANOVA tests reveal that
perfect hard error parity is not achieved by any of the de-biasing methods. In
other words, for some racial and ethnic groups there exist statistically
significant differences between their mean PBL prediction errors and those of
other racial and ethnic groups; and this effect happens for the
$TFT_{baseline}$ as well as across all the other predictive models enhanced
with a de-biasing approach.
Nevertheless, post-hoc Tukey-HSD tests revealed interesting nuanced results,
showing significant differences in errors only between specific pairs of
racial and ethnic groups.
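The omnibus-plus-post-hoc procedure above can be sketched with SciPy on synthetic data (not the paper's). The paper uses Tukey-HSD for the post-hoc step; pairwise Welch t-tests stand in for it here for brevity, and a full analysis would use, e.g., statsmodels' `pairwise_tukeyhsd`:

```python
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic normalized per-county errors for the four groups (illustrative
# only); Hispanic/White means are shifted higher, echoing Table 3's signs.
errors = {
    "Asian": rng.normal(0.5, 1.0, 200),
    "Black": rng.normal(0.6, 1.0, 200),
    "Hispanic": rng.normal(2.8, 1.0, 200),
    "White": rng.normal(2.5, 1.0, 200),
}

# Omnibus one-way ANOVA: do mean errors depend on the group label?
f_stat, p_val = stats.f_oneway(*errors.values())

# Post-hoc pairwise comparisons (Welch t-tests as a stand-in for Tukey-HSD)
pairwise = {
    (g1, g2): stats.ttest_ind(errors[g1], errors[g2], equal_var=False).pvalue
    for g1, g2 in combinations(errors, 2)
}
```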
Table 3 shows the post-hoc Tukey-HSD test results for each COVID-19 case
predictive model: the baseline and each of the four models enhanced with a de-
biasing approach. Each row represents the output of the post-hoc test i.e.,
the difference between the normalized mean PBL error of Group 1 and Group 2,
i.e., $NormPBL_{Group1}-NormPBL_{Group2}$. If the difference is positive, it
means that the normalized mean predictive error is higher for Group 1; if the
difference is negative, the normalized PBL is higher for Group 2. The
asterisks indicate whether the difference is statistically significant or not.
The first relevant observation in looking at the table is that the baseline
model, focused on predicting COVID-19 county cases with no de-biasing
approach, is highly biased, with statistically significant differences between
the mean normalized errors across all pairs of races, except for the
comparison between Asian and Black counties as well as Hispanic and White
counties, for which there is no statistically significant difference between
the prediction errors. These results reveal that no racial or ethnic group
achieves hard error parity, and they motivate our exploration of
whether state-of-the-art de-biasing methods or our proposed DemOpts can
improve the hard error parity results of the baseline model.
Looking at Table 3, we can observe that predictive models enhanced with the
Individual, Group, or Sufficiency de-biasing methods do not improve the hard
error parity over the baseline. In fact, each pair of racial and ethnic groups
whose prediction error distributions are significantly different for the
baseline (rows with asterisks in the Baseline column), remain significantly
different for the Individual, Group and Sufficiency de-biasing methods (rows
with asterisks in the Individual, Group and Sufficiency columns). Looking at
the significant mean PBL differences between racial and ethnic groups for the
baseline and the state-of-the-art de-biasing models, we observe that all
coefficients have similar values, signaling similar significant mean PBL
differences between racial and ethnic groups (with values between $1.942$ and
$2.659$ error cases per 1,000 population). The sign of the coefficients reveals
higher mean PBL errors for Hispanic and White counties when compared to Asian
counties or Black counties; and higher mean PBL errors for White counties when
compared to Hispanic counties across all five models (baseline and four de-
biasing approaches). For example, under the baseline model, Hispanic and White
counties have mean prediction errors $2.302$ and $2.064$ higher, respectively,
than Asian counties, and $2.184$ and $1.946$ higher than Black counties.
Racial/Ethnic Group | Baseline | DemOpts | Group | Individual | Sufficiency
---|---|---|---|---|---
Asian | 0.482 | 2.938 | 0.472 | 0.444 | 0.479
Black | 0.600 | 1.611 | 0.674 | 0.570 | 0.598
Hispanic | 2.784 | 3.709 | 3.131 | 2.951 | 2.776
White | 2.546 | 3.906 | 2.987 | 2.961 | 2.540
Table 5: Average prediction error (PBL) for each racial and ethnic group and
for each TFT COVID-19 county case prediction model: baseline model, and models
enhanced with a de-biasing method (Individual, Group, Sufficiency and
DemOpts). DemOpts achieves fairness by increasing mean errors for the Asian
and Black groups.
Notably, all predictive models including the baseline and those enhanced with
a de-biasing method ($TFT_{DemOpts}$, $TFT_{Group}$, $TFT_{Individual}$ and
$TFT_{Sufficiency}$) achieve hard error parity between Asian and Black
counties and between Hispanic and White counties i.e., the mean error
difference between these counties is not significant. But even more
interesting is the fact that DemOpts is the only de-biasing method that
achieves hard error parity in more cases than the baseline, effectively
removing some of the associations between race and ethnicity and the
normalized mean error distribution (PBL). Specifically, DemOpts removes the
significant difference between the prediction errors of Asian and White
counties, and of Asian and Hispanic counties (see bolded values in the Table),
effectively achieving hard error parity for Asian counties i.e., the mean PBL
in Asian counties is always similar to the mean error in counties of all the
other racial and ethnic groups. These DemOpts improvements take place while
maintaining the $TFT_{baseline}$ hard error parity between Asian and Black
counties and between Hispanic and White counties, which is also present in the
other three de-biasing methods. In other words, DemOpts improves the hard error
parity of
COVID-19 county case predictions for two more racial and ethnic pairs than any
of the other de-biasing methods.
Finally, when looking specifically into the hard error parity between
protected (Asian, Black and Hispanic) and unprotected groups (White), DemOpts
achieves hard error parity for the Asian and Hispanic groups, i.e., their mean prediction
errors are not significantly different with respect to the White race; while
the baseline and the other three de-biasing methods only achieve hard error
parity for the Hispanic group when compared to White. These findings with
respect to the White group lead us to evaluate the soft error parity of the
different models, to understand, for example, if DemOpts achieves the best
soft error parity for the Black group (since hard error parity was not
achieved), or to see if DemOpts has better soft error parity than other de-
biasing methods for Asian or Hispanic groups. Next, we explore the soft error
parity metric for the TFT baseline and for all TFT models enhanced with de-
biasing approaches.
### Soft Error Parity Analysis
Table 4 shows the distance to the perfect soft error parity for each of the
de-biasing approaches and across all protected racial and ethnic groups. As we
can observe, DemOpts has the smallest values - closest distances to perfect
soft error parity - for Asian and Black counties; while the Individual de-
biasing method almost achieves perfect soft error parity for the Hispanic
counties. In other words, the errors for Asian and Black counties are the
closest to errors in White counties for the proposed DemOpts method, while the
Individual de-biasing model achieves errors for Hispanic counties that are the
closest to the White group. In addition, it is important to highlight that the
Group and Sufficiency de-biasing methods achieve soft error parities that are
similar to the $TFT_{baseline}$ which is not enhanced with any de-biasing
method. Overall, these results reveal that DemOpts is the de-biasing approach
that most improves the soft error parity of COVID-19 county case
prediction models, with errors for Asian and Black counties being the closest
to errors in White counties; while the Individual de-biasing method achieves
the closest errors to the White race for Hispanic counties only.
### Why is DemOpts better?
The results have shown that DemOpts is the only de-biasing approach to achieve
either hard or soft error parity for all three racial minority groups when
compared to the White race. In an attempt to understand why DemOpts succeeds
in increasing both hard and soft error parity in the context of COVID-19
county case predictions, and when compared to other de-biasing methods, we
computed the average PBL for each racial and ethnic group and for each
predictive model enhanced, or not, with a de-biasing method (see Table 5).
We can observe that DemOpts achieves better hard and soft error parity metrics
because it considerably increases the errors for Asian and Black counties with
respect to the baseline, until the differences with Hispanic and White counties
are no longer statistically significant (hard error parity) or are closer to the White
mean errors (soft error parity). This result also points to another
interesting insight: the fact that DemOpts’ optimization could not decrease
prediction errors while trying to improve fairness, when fairness is measured
via statistical significance, showing a fairness-accuracy trade-off that has
been reported previously in the literature (Kim, Chen, and Talwalkar 2020).
Finally, it is also important to clarify that, in practice, the prediction
error increases brought about by DemOpts are not that large, with increases
between $1$ and $2.5$ error cases per 1,000 people. We posit that these small
increases are acceptable if that is the requirement to guarantee hard and soft
error parity across protected and unprotected racial and ethnic groups.
## 6 Conclusion
In the past four years, researchers have worked profusely on the creation of
accurate COVID-19 case prediction models using not only historical COVID-19
cases but also complementary data such as human mobility or socio-demographic
information. However, there exists prior work showing that the accuracy of
COVID-19 predictions can depend on various social determinants, including race
and ethnicity, income, or age, revealing worse performance for protected
attributes and pointing to a lack of COVID-19 predictive fairness that could
affect resource allocation and decision making.
In this paper, we show that state of the art architectures in COVID-19 case
predictions (TFT models) produce unfair prediction error distributions, and
we design a novel de-biasing approach to increase the fairness of the
predictions in the context of COVID-19 county case predictions. The new
proposed de-biasing approach, DemOpts, modifies the loss function in deep
learning models to reduce the dependencies between error distributions and
racial and ethnic labels. Our results show that DemOpts yields the largest
improvements in both hard and soft error parity of COVID-19 county case
predictions when
compared to state-of-the-art de-biasing methods.
## References
* Abrar et al. (2023) Abrar, S. M.; Awasthi, N.; Smolyak, D.; and Frias-Martinez, V. 2023. Analysis of performance improvements and bias associated with the use of human mobility data in COVID-19 case prediction models. _ACM Journal on Computing and Sustainable Societies_.
* Agarwal, Dudík, and Wu (2019) Agarwal, A.; Dudík, M.; and Wu, Z. S. 2019. Fair Regression: Quantitative Definitions and Reduction-based Algorithms.
* Albani et al. (2021) Albani, V.; Loria, J.; Massad, E.; and Zubelli, J. 2021. COVID-19 underreporting and its impact on vaccination strategies. _BMC Infectious Diseases_ , 21: 1–13.
* Angulo, Finelli, and Swerdlow (2021) Angulo, F. J.; Finelli, L.; and Swerdlow, D. L. 2021. Estimation of US SARS-CoV-2 infections, symptomatic infections, hospitalizations, and deaths using seroprevalence surveys. _JAMA network open_ , 4(1): e2033706–e2033706.
* Apple (2022) Apple. 2022. COVID-19 Mobility Trends Reports.
* Arik et al. (2020a) Arik, S.; Li, C.-L.; Yoon, J.; Sinha, R.; Epshteyn, A.; Le, L.; Menon, V.; Singh, S.; Zhang, L.; Nikoltchev, M.; et al. 2020a. Interpretable sequence learning for COVID-19 forecasting. _Advances in Neural Information Processing Systems_ , 33: 18807–18818.
* Arik et al. (2020b) Arik, S.; Li, C.-L.; Yoon, J.; Sinha, R.; Epshteyn, A.; Le, L.; Menon, V.; Singh, S.; Zhang, L.; Nikoltchev, M.; et al. 2020b. Interpretable sequence learning for COVID-19 forecasting. _Advances in Neural Information Processing Systems_ , 33: 18807–18818.
* Badr and Gardner (2021) Badr, H. S.; and Gardner, L. M. 2021. Limitations of using mobile phone data to model COVID-19 transmission in the USA. _The Lancet Infectious Diseases_ , 21(5): e113.
* Bengtsson et al. (2015) Bengtsson, L.; Gaudart, J.; Lu, X.; Moore, S.; Wetter, E.; Sallah, K.; Rebaudet, S.; and Piarroux, R. 2015. Using mobile phone data to predict the spatial spread of cholera. _Scientific reports_ , 5(1): 1–5.
* Berk et al. (2017) Berk, R.; Heidari, H.; Jabbari, S.; Joseph, M.; Kearns, M.; Morgenstern, J.; Neel, S.; and Roth, A. 2017. A convex framework for fair regression. _arXiv preprint arXiv:1706.02409_.
* Blanca Mena et al. (2017) Blanca Mena, M. J.; Alarcón Postigo, R.; Arnau Gras, J.; Bono Cabré, R.; and Bendayan, R. 2017. Non-normal data: Is ANOVA still a valid option? _Psicothema, 2017, vol. 29, num. 4, p. 552-557_.
* Brunet et al. (2018) Brunet, M.-E.; Alkalay-Houlihan, C.; Anderson, A.; and Zemel, R. 2018. Understanding the Origins of Bias in Word Embeddings.
* Calmon et al. (2017) Calmon, F. P.; Wei, D.; Ramamurthy, K. N.; and Varshney, K. R. 2017. Optimized Data Pre-Processing for Discrimination Prevention.
* Castelnovo et al. (2022) Castelnovo, A.; Crupi, R.; Greco, G.; Regoli, D.; Penco, I. G.; and Cosentini, A. C. 2022. A clarification of the nuances in the fairness metrics landscape. _Scientific Reports_ , 12(1): 4209.
* CDC (2020) CDC. 2020. Forecast Hub. https://covid19forecasthub.org/.
* CDC (2023) CDC. 2023. Health Disparities. Last accessed January 2024.
* Chiang, Liu, and Mohler (2020) Chiang, W.-H.; Liu, X.; and Mohler, G. 2020. Hawkes process modeling of COVID-19 with mobility leading indicators and spatial covariates (preprint).
* Coston et al. (2021) Coston, A.; Guha, N.; Ouyang, D.; Lu, L.; Chouldechova, A.; and Ho, D. E. 2021. Leveraging Administrative Data for Bias Audits: Assessing Disparate Coverage with Mobility Data for COVID-19 Policy. In _Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency_ , FAccT ’21, 173–184. New York, NY, USA: Association for Computing Machinery. ISBN 9781450383097.
* Das and Dooley (2023) Das, R.; and Dooley, S. 2023. Fairer and More Accurate Tabular Models Through NAS.
* Del Rios et al. (2022) Del Rios, M.; Puente, S.; Vergara-Rodriguez, P.; and Sugrue, N. 2022. Invisibilidad de los latinos en la pandemia. _AMA Journal of Ethics_ , 289–295.
* Devlin et al. (2018) Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2018. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. _CoRR_ , abs/1810.04805.
* Dong, Du, and Gardner (2020) Dong, E.; Du, H.; and Gardner, L. 2020. An interactive web-based dashboard to track COVID-19 in real time. _The Lancet Infectious Diseases_ , 20(5): 533–534.
* Dosovitskiy et al. (2021) Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv:2010.11929.
* Douglas et al. (2021) Douglas, M. D.; Respress, E.; Gaglioti, A. H.; Li, C.; Blount, M. A.; Hopkins, J.; Baltrus, P. T.; Willock, R. J.; Caplan, L. S.; Dawes, D. E.; et al. 2021. Variation in reporting of the race and ethnicity of COVID-19 cases and deaths across US states: April 12, 2020, and November 9, 2020. _American Journal of Public Health_ , 111(6): 1141–1148.
* Duan et al. (2017) Duan, L.; Hu, T.; Cheng, E.; Zhu, J.; and Gao, C. 2017. Deep convolutional neural networks for spatiotemporal crime prediction. In _Proceedings of the International Conference on Information and Knowledge Engineering (IKE)_ , 61–67. csce.ucmss.com.
* Erfani and Frias-Martinez (2023) Erfani, A.; and Frias-Martinez, V. 2023. A fairness assessment of mobility-based COVID-19 case prediction models.
* Estiri et al. (2022) Estiri, H.; Strasser, Z. H.; Rashidian, S.; Klann, J. G.; Wagholikar, K. B.; McCoy Jr, T. H.; and Murphy, S. N. 2022. An objective framework for evaluating unrecognized bias in medical AI models predicting COVID-19 outcomes. _Journal of the American Medical Informatics Association_ , 29(8): 1334–1341.
* Fitzsimons et al. (2019) Fitzsimons, J.; Al Ali, A.; Osborne, M.; and Roberts, S. 2019. A General Framework for Fair Regression.
* Centers for Disease Control and Prevention (2023) Centers for Disease Control and Prevention. 2023. COVID-19 Forecasting and Mathematical Modeling. Accessed: 2023-12-25.
* Frias-Martinez et al. (2012) Frias-Martinez, V.; Soto, V.; Virseda, J.; and Frias-Martinez, E. 2012. Computing cost-effective census maps from cell phone traces. In _Workshop on pervasive urban applications_.
* Frias-Martinez and Virseda (2013) Frias-Martinez, V.; and Virseda, J. 2013. Cell phone analytics: Scaling human behavior studies into the millions. _Information Technologies & International Development_, 9(2): pp–35.
* Frias-Martinez, Virseda, and Frias-Martinez (2010) Frias-Martinez, V.; Virseda, J.; and Frias-Martinez, E. 2010. Socio-economic levels and human mobility. In _Qual meets quant workshop-QMQ_ , 1–6.
* Frias-Martinez, Virseda, and Gomero (2012) Frias-Martinez, V.; Virseda, J.; and Gomero, A. 2012. Mobilizing education: evaluation of a mobile learning tool in a low-income school. In _Proceedings of the 14th international conference on Human-computer interaction with mobile devices and services_ , 441–450.
* Fu et al. (2018) Fu, C.; McKenzie, G.; Frias-Martinez, V.; and Stewart, K. 2018. Identifying spatiotemporal urban activities through linguistic signatures. _Computers, Environment and Urban Systems_ , 72: 25–37.
* Galasso, Cao, and Hochberg (2022) Galasso, J.; Cao, D. M.; and Hochberg, R. 2022. A random forest model for forecasting regional COVID-19 cases utilizing reproduction number estimates and demographic data. _Chaos, Solitons & Fractals_, 156: 111779.
* Gatalo et al. (2021) Gatalo, O.; Tseng, K.; Hamilton, A.; Lin, G.; and Klein, E. 2021. Associations between phone mobility data and COVID-19 cases. _The Lancet Infectious Diseases_ , 21(5): e111.
* Ghurye, Krings, and Frias-Martinez (2016) Ghurye, J.; Krings, G.; and Frias-Martinez, V. 2016. A framework to model human behavior at large scale during natural disasters. In _2016 17th IEEE International Conference on Mobile Data Management (MDM)_ , volume 1, 18–27. IEEE.
* Google (2022) Google. 2022. COVID-19 Community Mobility Reports.
* Gross et al. (2020) Gross, C. P.; Essien, U. R.; Pasha, S.; Gross, J. R.; Wang, S.-y.; and Nunez-Smith, M. 2020. Racial and ethnic disparities in population-level Covid-19 mortality. _Journal of general internal medicine_ , 35: 3097–3099.
* Gursoy and Kakadiaris (2022) Gursoy, F.; and Kakadiaris, I. A. 2022. Error Parity Fairness: Testing for Group Fairness in Regression Tasks. _arXiv preprint arXiv:2208.08279_.
* Hernandez et al. (2017) Hernandez, M.; Hong, L.; Frias-Martinez, V.; Whitby, A.; and Frias-Martinez, E. 2017. Estimating poverty using cell phone data: evidence from Guatemala.
* Hochreiter and Schmidhuber (1997) Hochreiter, S.; and Schmidhuber, J. 1997. Long short-term memory. _Neural computation_ , 9(8): 1735–1780.
* Hong, Frias-Martinez, and Frias-Martinez (2016) Hong, L.; Frias-Martinez, E.; and Frias-Martinez, V. 2016. Topic models to infer socio-economic maps. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 30.
* Hong and Frias-Martinez (2020) Hong, L.; and Frias-Martinez, V. 2020. Modeling and predicting evacuation flows during hurricane Irma. _EPJ Data Science_ , 9(1): 29.
* Hong et al. (2017) Hong, L.; Fu, C.; Torrens, P.; and Frias-Martinez, V. 2017. Understanding citizens’ and local governments’ digital communications during natural disasters: the case of snowstorms. In _Proceedings of the 2017 ACM on web science conference_ , 141–150.
* Isaacman, Frias-Martinez, and Frias-Martinez (2018) Isaacman, S.; Frias-Martinez, V.; and Frias-Martinez, E. 2018. Modeling human migration patterns during drought conditions in La Guajira, Colombia. In _Proceedings of the 1st ACM SIGCAS conference on computing and sustainable societies_ , 1–9.
* Jagodnik et al. (2020) Jagodnik, K. M.; Ray, F.; Giorgi, F. M.; and Lachmann, A. 2020. Correcting under-reported COVID-19 case numbers: estimating the true scale of the pandemic. _medRxiv_ , 2020–03.
* Kader and Smith (2021) Kader, F.; and Smith, C. L. 2021. Participatory approaches to addressing missing COVID-19 race and ethnicity data. _International Journal of Environmental Research and Public Health_ , 18(12): 6559.
* Kang et al. (2020) Kang, Y.; Gao, S.; Liang, Y.; Li, M.; Rao, J.; and Kruse, J. 2020. Multiscale dynamic human mobility flow dataset in the US during the COVID-19 epidemic. _Scientific data_ , 7(1): 390.
* Kim, Chen, and Talwalkar (2020) Kim, J. S.; Chen, J.; and Talwalkar, A. 2020. FACT: A diagnostic for group fairness trade-offs. In _International Conference on Machine Learning_ , 5264–5274. PMLR.
* Kirillov et al. (2023) Kirillov, A.; Mintun, E.; Ravi, N.; Mao, H.; Rolland, C.; Gustafson, L.; Xiao, T.; Whitehead, S.; Berg, A. C.; Lo, W.-Y.; Dollár, P.; and Girshick, R. 2023. Segment Anything. arXiv:2304.02643.
* Labs (2023) Labs, D. 2023. Mobility changes in response to COVID-19.
* Latif et al. (2023) Latif, S.; Zaidi, A.; Cuayahuitl, H.; Shamshad, F.; Shoukat, M.; and Qadir, J. 2023. Transformers in Speech Processing: A Survey. arXiv:2303.11607.
* Le et al. (2020a) Le, M.; Ibrahim, M.; Sagun, L.; Lacroix, T.; and Nickel, M. 2020a. Neural relational autoregression for high-resolution COVID-19 forecasting. _Facebook AI Research_.
* Le et al. (2020b) Le, M.; Ibrahim, M.; Sagun, L.; Lacroix, T.; and Nickel, M. 2020b. Neural relational autoregression for high-resolution COVID-19 forecasting. _Facebook AI Research_.
* Li et al. (2019) Li, S.; Jin, X.; Xuan, Y.; Zhou, X.; Chen, W.; Wang, Y.-X.; and Yan, X. 2019. Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting. In Wallach, H.; Larochelle, H.; Beygelzimer, A.; d'Alché-Buc, F.; Fox, E.; and Garnett, R., eds., _Advances in Neural Information Processing Systems_ , volume 32. Curran Associates, Inc.
* Lim et al. (2021a) Lim, B.; Ark, S.; Loeff, N.; and Pfister, T. 2021a. Temporal fusion transformers for interpretable multi-horizon time series forecasting. _International Journal of Forecasting_ , 37(4): 1748–1764.
* Lim et al. (2021b) Lim, B.; Arık, S. O.; Loeff, N.; and Pfister, T. 2021b. Temporal Fusion Transformers for interpretable multi-horizon time series forecasting. _International Journal of Forecasting_ , 37(4): 1748–1764.
* Liu et al. (2021) Liu, Q.; Chen, B.; Guo, J.; Lin, Z.; and Lou, J. 2021. TAPEX: Table Pre-training via Learning a Neural SQL Executor. _CoRR_ , abs/2107.07653.
* Liu et al. (2022) Liu, S.; Yu, H.; Liao, C.; Li, J.; Lin, W.; Liu, A. X.; and Dustdar, S. 2022. Pyraformer: Low-Complexity Pyramidal Attention for Long-Range Time Series Modeling and Forecasting. In _International Conference on Learning Representations_.
* Lucas, Vahedi, and Karimzadeh (2023a) Lucas, B.; Vahedi, B.; and Karimzadeh, M. 2023a. A spatiotemporal machine learning approach to forecasting COVID-19 incidence at the county level in the USA. _International Journal of Data Science and Analytics_ , 15(3): 247–266.
* Lucas, Vahedi, and Karimzadeh (2023b) Lucas, B.; Vahedi, B.; and Karimzadeh, M. 2023b. A spatiotemporal machine learning approach to forecasting COVID-19 incidence at the county level in the USA. _International Journal of Data Science and Analytics_ , 15(3): 247–266.
* Meraihi et al. (2022) Meraihi, Y.; Gabis, A. B.; Mirjalili, S.; Ramdane-Cherif, A.; and Alsaadi, F. E. 2022. Machine learning-based research for covid-19 detection, diagnosis, and prediction: A survey. _SN computer science_ , 3(4): 286.
* Nature (2019) Nature. 2019. 5 tips for dealing with non-significant results. Accessed: 2024-1-12.
* Nie et al. (2023) Nie, Y.; H. Nguyen, N.; Sinthong, P.; and Kalagnanam, J. 2023. A Time Series is Worth 64 Words: Long-term Forecasting with Transformers. In _International Conference on Learning Representations_.
* Pei and Shaman (2020) Pei, S.; and Shaman, J. 2020. Initial simulation of SARS-CoV2 spread and intervention effects in the continental US. _MedRxiv_ , 2020–03.
* Radford et al. (2018) Radford, A.; Narasimhan, K.; Salimans, T.; Sutskever, I.; et al. 2018. Improving language understanding by generative pre-training.
* Rubio et al. (2010) Rubio, A.; Frias-Martinez, V.; Frias-Martinez, E.; and Oliver, N. 2010. Human mobility in advanced and developing economies: A comparative analysis. In _2010 AAAI Spring Symposium Series_.
* Schlosser et al. (2021) Schlosser, F.; Sekara, V.; Brockmann, D.; and Garcia-Herranz, M. 2021. Biases in human mobility data impact epidemic modeling.
* Shah et al. (2022) Shah, A.; Bu, Y.; Lee, J. K.; Das, S.; Panda, R.; Sattigeri, P.; and Wornell, G. W. 2022. Selective Regression under Fairness Criteria. In Chaudhuri, K.; Jegelka, S.; Song, L.; Szepesvari, C.; Niu, G.; and Sabato, S., eds., _Proceedings of the 39th International Conference on Machine Learning_ , volume 162 of _Proceedings of Machine Learning Research_ , 19598–19615. PMLR.
* Smittenaar et al. (2021) Smittenaar, P.; Stewart, N.; Sutermaster, S.; Coome, L.; Dibner-Dunlap, A.; Jain, M.; Caplan, Y.; Campigotto, C.; and Sgaier, S. K. 2021. A COVID-19 Community Vulnerability Index to drive precision policy in the US. _medRxiv_.
* Times (2021) The New York Times. 2021. Coronavirus (Covid-19) Data in the United States. https://github.com/nytimes/covid-19-data. Accessed: 2023-12-15.
* Tsai et al. (2022) Tsai, T. C.; Arik, S.; Jacobson, B. H.; Yoon, J.; Yoder, N.; Sava, D.; Mitchell, M.; Graham, G.; and Pfister, T. 2022. Algorithmic fairness in pandemic forecasting: lessons from COVID-19.
* Vaswani et al. (2017) Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention Is All You Need.
* Velickovic et al. (2017) Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y.; et al. 2017. Graph attention networks. _stat_ , 1050(20): 10–48550.
* Vieira et al. (2010) Vieira, M. R.; Frias-Martinez, E.; Bakalov, P.; Frias-Martinez, V.; and Tsotras, V. J. 2010. Querying spatio-temporal patterns in mobile phone-call databases. In _2010 Eleventh International Conference on Mobile Data Management_ , 239–248. IEEE.
* Wang and Singh (2023) Wang, Y.; and Singh, L. 2023. Mitigating demographic bias of machine learning models on social media.
* Wang et al. (2020) Wang, Z.; Dong, H.; Jia, R.; Li, J.; Fu, Z.; Han, S.; and Zhang, D. 2020. Structure-aware Pre-training for Table Understanding with Tree-based Transformers. _CoRR_ , abs/2010.12537.
* Wesolowski et al. (2012) Wesolowski, A.; Eagle, N.; Tatem, A. J.; Smith, D. L.; Noor, A. M.; Snow, R. W.; and Buckee, C. O. 2012. Quantifying the impact of human mobility on malaria. _Science_ , 338(6104): 267–270.
* Wu et al. (2021) Wu, H.; Xu, J.; Wang, J.; and Long, M. 2021. Autoformer: Decomposition Transformers with Auto-Correlation for Long-Term Series Forecasting. _CoRR_ , abs/2106.13008.
* Wu et al. (2022) Wu, J.; Abrar, S. M.; Awasthi, N.; Frias-Martinez, E.; and Frias-Martinez, V. 2022. Enhancing short-term crime prediction with human mobility flows and deep learning architectures. _EPJ Data Science_ , 11(1): 53.
* Wu et al. (2023) Wu, J.; Abrar, S. M.; Awasthi, N.; and Frias-Martinez, V. 2023. Auditing the fairness of place-based crime prediction models implemented with deep learning approaches. _Computers, Environment and Urban Systems_ , 102: 101967.
* Wu, Frias-Martinez, and Frias-Martinez (2021) Wu, J.; Frias-Martinez, E.; and Frias-Martinez, V. 2021. Spatial sensitivity analysis for urban hotspots using cell phone traces. _Environment and Planning B: Urban Analytics and City Science_.
* Yan, Seto, and Apostoloff (2022) Yan, B.; Seto, S.; and Apostoloff, N. 2022. FORML: Learning to Reweight Data for Fairness.
* Yang, Soltan, and Clifton (2022) Yang, J.; Soltan, A. A. S.; and Clifton, D. A. 2022. Algorithmic Fairness and Bias Mitigation for Clinical Machine Learning: A New Utility for Deep Reinforcement Learning. _medRxiv_.
* Yin et al. (2020) Yin, P.; Neubig, G.; tau Yih, W.; and Riedel, S. 2020. TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data. In _Annual Conference of the Association for Computational Linguistics (ACL)_.
* Yu, Yin, and Zhu (2018a) Yu, B.; Yin, H.; and Zhu, Z. 2018a. Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting. In _Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI-18_ , 3634–3640. International Joint Conferences on Artificial Intelligence Organization.
* Yu, Yin, and Zhu (2018b) Yu, B.; Yin, H.; and Zhu, Z. 2018b. Spatio-Temporal Graph Convolutional Networks: A Deep Learning Framework for Traffic Forecasting.(2018).
* Zhang-James et al. (2021) Zhang-James, Y.; Hess, J.; Salkin, A.; Wang, D.; Chen, S.; Winkelstein, P.; Morley, C. P.; and Faraone, S. V. 2021. A seq2seq model to forecast the COVID-19 cases, deaths and reproductive R numbers in US counties. _Research Square_.
* Zhou et al. (2020) Zhou, H.; Zhang, S.; Peng, J.; Zhang, S.; Li, J.; Xiong, H.; and Zhang, W. 2020. Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting. _CoRR_ , abs/2012.07436.
* Zhou et al. (2022) Zhou, T.; Ma, Z.; Wen, Q.; Wang, X.; Sun, L.; and Jin, R. 2022. FEDformer: Frequency Enhanced Decomposed Transformer for Long-term Series Forecasting. _CoRR_ , abs/2201.12740.
* Zimmerman (1987) Zimmerman, D. W. 1987. Comparative power of Student t test and Mann-Whitney U test for unequal sample sizes and variances. _The Journal of Experimental Educational_ , 171–174.
* Zou et al. (2020) Zou, D.; Wang, L.; Xu, P.; Chen, J.; Zhang, W.; and Gu, Q. 2020. Epidemic model guided machine learning for COVID-19 forecasts in the United States. _MedRxiv_ , 2020–05.
|
# Separation of Infrared and Bulk in Thermal QCD
Xiao-Lan Meng1,2, Peng Sun3, Andrei Alexandru4, Ivan Horváth5,6, Keh-Fei Liu6,
Gen Wang7, and Yi-Bo Yang1,2,8,9 ($\chi$QCD & CLQCD Collaboration) 1CAS Key
Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese
Academy of Sciences, Beijing 100190, China
2University of Chinese Academy of Sciences, School of Physical Sciences,
Beijing 100049, China
3Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou, 730000,
China
4The George Washington University, Washington, DC 20052, USA
5Nuclear Physics Institute CAS, 25068 Rez (Prague), Czech Republic
6University of Kentucky, Lexington, KY 40506, USA
7Aix-Marseille Université, Université de Toulon, CNRS, CPT, Marseille, France
8School of Fundamental Physics and Mathematical Sciences, Hangzhou Institute
for Advanced Study, UCAS, Hangzhou 310024, China
9International Centre for Theoretical Physics Asia-Pacific, Beijing/Hangzhou,
China
###### Abstract
A new thermal phase of QCD, featuring scale invariance in the infrared (IR), was
proposed to exist both in the pure-glue (Nf=0) and the “real-world” (Nf=2+1)
settings. A key aspect of the proposal is that the system in this IR
phase separates into two independent components: the scale-invariant IR part
and the non-invariant bulk. Such a scenario requires non-analyticities in the
theory and, in the case of pure-glue QCD, these were found to arise via Anderson-
like mobility edges in Dirac spectra ($\lambda_{\scriptscriptstyle{\text{\rm
IR}}}\\!=\\!0$, $\pm\lambda_{\text{A}}\\!\neq\\!0$) manifested in the dimension
function $d_{\scriptscriptstyle{\text{\rm IR}}}(\lambda)$. Here we present
first evidence that this mechanism is also at work in “real-world QCD” (Nf=2+1
theory at physical quark masses and $a\\!=\\!0.105\,$fm), supporting the
existence of the proposed IR regime. A dimensional jump between zero modes and
the lowest near-zero modes, very close to unity ($d_{\scriptscriptstyle{\text{\rm
IR}}}\\!=\\!3$ vs $d_{\scriptscriptstyle{\text{\rm IR}}}\\!\simeq\\!2$), was
found.
1\. Introduction: Starting with the pre-QCD times of Hagedorn Hagedorn (1965,
1971) and early lattice QCD calculations in the pure-glue setting McLerran and
Svetitsky (1981); Kuti _et al._ (1981); Engels _et al._ (1981), the question
of the thermal transition in strongly-interacting matter has become one of the
most highly researched topics in nuclear and particle physics. Apart from the
well-motivated need to understand strong interactions, interest in the issue
was fueled, to a large extent, by the potential significance of its resolution
for the physics of the early universe.
Hagedorn in fact set the basic scenario, wherein the thermal transformation
process in strong interactions boils down to a single “instability
temperature” which, nowadays in QCD, is commonly referred to as the critical
temperature ($T_{c}$). Due to the non-perturbative nature of the problem,
lattice QCD became the workhorse for investigations in this area. Advances in
lattice QCD techniques and computational resources led to a major conclusion,
namely that true phase transition does not occur in “real-world” QCD. Rather,
an analytic crossover takes place in the temperature range 150-200 MeV, with
$T_{c}\approx 155$ MeV for the case of chiral crossover Aoki _et al._ (2006).
Transitionless outlook meant a setback to QCD’s role in cosmology, but an
important new twist appeared around the same time. Experiments at RHIC and LHC
concluded that the state of strongly interacting matter with properties akin
to near-perfect fluid exists in certain range of temperatures Arsene _et al._
(2005); Back _et al._ (2005); Adams _et al._ (2005); Adcox _et al._ (2005);
Muller _et al._ (2012). Among other things, this invites questions about how
can such an exotic state of matter arise without a true phase transition.
To this end, some of us presented evidence of an unusual change in QCD Dirac
spectra at temperatures well above the chiral crossover Alexandru and Horváth
(2019): the anomalous accumulation of infrared (IR) Dirac modes, first seen in
high-$T$ phase of pure-glue QCD Edwards _et al._ (2000) and shown to persist
into the continuum and infinite-volume limits Alexandru and Horváth (2015),
dramatically increases and starts to exhibit signs of scale-invariant
behavior. This sharp change was found in both the pure-glue and real-world
($N_{f}$ = 2+1 at physical masses) QCD Alexandru and Horváth (2019). It was
proposed that, at the associated temperature $T_{\scriptscriptstyle{\text{\rm
IR}}}$, thermal state of strong interactions reconfigures by forming two
independent components separated by new scale
$\Lambda_{{\scriptscriptstyle{\text{\rm IR}}}}(T)\lesssim T$: the bulk
governing distances $\ell\\!<\\!1/\Lambda_{\scriptscriptstyle{\text{\rm IR}}}$
and the IR part describing $\ell\\!>\\!1/\Lambda_{\scriptscriptstyle{\text{\rm
IR}}}$ via scale-invariant glue Alexandru and Horváth (2019). In pure-glue
case, $T_{\scriptscriptstyle{\text{\rm IR}}}$ coincides with $T_{c}$ of
Polyakov-line phase transition. In real-world QCD, it was also proposed to be
a true phase transformation occurring at
$200\\!<\\!T_{\scriptscriptstyle{\text{\rm IR}}}\\!<\\!250$ MeV Alexandru and
Horváth (2019). Its presence may clarify the physics of near-perfect fluid and
enhance the role of “QCD transition” in cosmology.
The 2-component scenario was first evoked by a clean IR-bulk separation in
Dirac spectra (bimodality of mode density $\rho(\lambda)$), which is very
suggestive of decoupling Alexandru and Horváth (2019). But more detail was
needed. Indeed, how would the scale invariant and non-invariant physics
coexist and would it imply a non-analytic running of the coupling constant at
$\Lambda_{\scriptscriptstyle{\text{\rm IR}}}$? A concrete proposal was presented
in Refs. Alexandru and Horváth (2021, 2022), ascribing the origin of non-
analyticity to two Anderson-like mobility edges (critical points) in Dirac
spectra. The first one at $\lambda_{\scriptscriptstyle{\text{\rm A}}}>0$ was
found previously Garcia-Garcia and Osborn (2006, 2007); Kovacs and Pittler
(2010); Giordano _et al._ (2014); Ujfalusi _et al._ (2015), and its purpose
here is to shield the IR component from the influence of the bulk. Indeed,
bulk fluctuations (including UV) will not affect the IR component owing to the
intervening non-analyticity. The second mobility edge was found recently
Alexandru and Horváth (2022), and is strictly IR
($\lambda_{\scriptscriptstyle{\text{\rm IR}}}\\!=\\!0$). Its role is to
facilitate the long-range physics of the IR component.
A suitable tool to express this scenario is the function
$d_{\scriptscriptstyle{\text{\rm IR}}}(\lambda)$, namely the spatial IR
dimension of Dirac modes at eigenvalue $\lambda$ Alexandru and Horváth (2021).
Indeed, $d_{{\scriptscriptstyle{\text{\rm IR}}}}$ is a proper dimensional
construct to probe the infrared Horváth _et al._ (2023). The key conceptual
step granting its use in quantum theory is the assignment of a meaningful
measure-based dimension to a region effectively selected by probabilities.
This has recently become possible via effective number theory Horváth and
Mendris (2020); Horváth (2021); Horváth and Mendris (2019): replacing ordinary
counting in definition of box-counting dimension for fixed sets by effective
counting for probability distributions leads to such measure-based dimension
Horváth _et al._ (2023). For Dirac modes in thermal QCD the prescription is
as follows. In lattice-regularized Euclidean setup the number of sites
$N(L)\equiv(L/a)^{3}/(aT)$ (UV cutoff $1/a$, IR cutoff $1/L$, temperature $T$)
grows as $L^{3}$ at fixed $a$, conveying that IR dimension of space is
$D_{\scriptscriptstyle{\text{\rm IR}}}\\!=\\!3$. But Dirac eigenmode
$D\psi(x)\\!=\\!\lambda\psi(x)$ entails probabilities
$P\\!=\\!(p_{1},p_{2},\ldots,p_{N})$,
$p_{i}\\!\equiv\\!\psi^{+}(x_{i})\psi(x_{i})$, and sites have to be counted
effectively in order to quantify the volume $\psi$ actually extends into,
namely Horváth and Mendris (2020)
$N\,\longrightarrow\,\mathscr{N}_{\star}[\psi]\,=\,\mathscr{N}_{\star}[P]\,=\,\sum_{i=1}^{N}\min\,\\{Np_{i},1\\}.$ (1)
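The effective counting in Eq. (1) is straightforward to evaluate for a given mode; a minimal sketch (the mode profiles below are illustrative toy inputs, not data from the ensembles studied here):

```python
import numpy as np

def effective_number(psi):
    """Effective count N_star[psi] = sum_i min(N * p_i, 1),
    where p_i = |psi_i|^2 normalized to a probability distribution
    over the N lattice sites."""
    p = np.abs(psi) ** 2
    p = p / p.sum()
    N = p.size
    return np.minimum(N * p, 1.0).sum()

# A uniform mode effectively occupies all sites ...
uniform = np.ones(64)
print(effective_number(uniform))    # -> 64.0

# ... while a mode concentrated on one site has N_star = 1
localized = np.zeros(64)
localized[0] = 1.0
print(effective_number(localized))  # -> 1.0
```

Intermediate degrees of localization interpolate between these extremes, which is what makes the scaling of $\mathscr{N}_{\star}$ with $L$ a meaningful probe.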
The IR scaling of QCD-averaged effective volume at given $\lambda$ then
determines $d_{\scriptscriptstyle{\text{\rm IR}}}(\lambda)$ at UV cutoff $a$,
namely Alexandru and Horváth (2021); Horváth _et al._ (2023)
$\langle\,\mathscr{N}_{\star}\,\rangle_{L,\lambda,a}\,\propto\,L^{d_{\scriptscriptstyle{\text{\rm IR}}}(\lambda,a)}\quad\text{for}\quad L\to\infty.$ (2)
Using the overlap Dirac operator Neuberger (1998) due to its superior chiral
and topological properties, an unusual $d_{\scriptscriptstyle{\text{\rm
IR}}}(\lambda)$ was found in IR phase of pure-glue QCD Alexandru and Horváth
(2019). Indeed, the function is non-analytic at both
$\lambda_{\scriptscriptstyle{\text{\rm IR}}}$ and $\lambda_{A}$, with spectral
region of low-$d$ ($d_{\scriptscriptstyle{\text{\rm IR}}}\\!\leq\\!1$) modes
between them Alexandru and Horváth (2021). Moreover, in contrast to exact zero
modes, which are $d_{\scriptscriptstyle{\text{\rm IR}}}\\!=\\!3$, the lowest
near-zero modes $(\lambda\\!\to\\!0^{+})$ are close to the other topological value
$d_{\scriptscriptstyle{\text{\rm IR}}}\\!=\\!2$. Such jump at
$\lambda_{\scriptscriptstyle{\text{\rm IR}}}\\!=\\!0$ is surprising since the
proposed origin of anomalous IR mode accumulation is the conventional mixing
of topological lumps Edwards _et al._ (2000); Vig and Kovacs (2021) which, in
absence of additional (unknown) effects, leads to
$d_{\scriptscriptstyle{\text{\rm IR}}}\\!=\\!3$ in both cases. The jump could
thus offer valuable clues on IR phase dynamics, and could be used to detect
the transition into IR phase.
In this work, we make a key step toward this proposal becoming a reality: we
present evidence supporting the existence of the above unusual pattern also in
“real-world QCD”. In particular, in $N_{f}\\!=\\!2+1$ theory at physical quark
masses (see setup below) we obtain
$d_{\scriptscriptstyle{\text{\rm IR}}}(0)=2.98(09),\quad\quad d_{\scriptscriptstyle{\text{\rm IR}}}(0^{+})=2.03(16).$ (3)
This finding lends support to topological origin of exotic IR-phase dynamics
Alexandru and Horváth (2021, 2022) (see also Edwards _et al._ (2000); Vig and
Kovacs (2021); Cardinali _et al._ (2021)), and significantly strengthens its
connection to non-analyticities of Anderson-like origin Alexandru and Horváth
(2022). Regarding the latter, the observed topological aspects may have close
ties to critical Anderson dynamics Horváth and Markoš (2022a, 2023, b), which
is entirely unexpected.
2\. Simulation Setup: We lattice-regularize $N_{f}\\!=\\!2+1$ QCD using
tadpole-improved clover fermion action (1-step stout link smearing with
parameter 0.125) and tadpole-improved Symanzik gauge action at
$a=0.105~{}\mathrm{fm}$ and $m_{\pi}\simeq 135$ MeV. At temperature
$T\\!=\\!234\,$MeV, numerous spatial volumes (up to $L\\!=\\!10.1\,$fm) were
simulated by the CLQCD collaboration (see Table 1), allowing for reliable
$d_{\scriptscriptstyle{\text{\rm IR}}}(\lambda)$ calculations. A more detailed
description of the ensembles is given in ref . We note in passing that ensembles with
similar quark and gauge actions were already used in previous zero-temperature
calculations Zhang _et al._ (2021); Liu _et al._ (2022); Xing _et al._
(2022); Liu _et al._ (2023).
Table 1: UV cutoff $a$, pion mass $m_{\pi}$, lattice volumes $n_{L}^{3}\times n_{T}$ and temperature $T$ of the lattice QCD ensembles studied.

$a$ (fm) | $m_{\pi}$ (MeV) | $n_{L}$ | $n_{T}$ | $T$ (MeV)
---|---|---|---|---
0.105 | 135 | 24/28/32/40/48/64/96 | 8 | 234
Glue fields $U$ of this theory will be studied via their effect on the overlap
Dirac operator $D_{\text{ov}}[U]$. We construct $D_{\text{ov}}$ using the
original square-root prescription Neuberger (1998) at $\rho\\!=\\!1.5$ with
1-step HYP smearing of $U$. To determine the low-lying eigensystem, we select
the chiral sector containing zero mode(s), calculate the eigenvectors of
$D_{\rm ov}^{\dagger}D_{\rm ov}$ in it using Arnoldi method, and then
construct non-zero modes Sorensen (1992); Lehoucq and Sorensen (1996); Giusti
_et al._ (2003); Alexandru _et al._ (2011); ref . Transformation $D\equiv
D_{\textrm{ov}}/(1-\frac{a}{2\rho}D_{\textrm{ov}})$ Chiu and Zenkin (1999)
yields purely imaginary eigenvalues
($D\psi_{\lambda}(x)\\!=\\!i\lambda\psi_{\lambda}(x)$) and the associated
spectral density is
$\rho(\lambda)=T\sum_{i}\delta(\lambda-\lambda_{i})/L^{3}$. Further technical
details can be found in the supplementary material ref .
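For orientation, $\rho(\lambda)$ as defined above can be estimated from a set of computed eigenvalues by simple binning; a sketch under the assumption of a flat synthetic spectrum (the function and inputs are illustrative, not the analysis code used here):

```python
import numpy as np

def spectral_density(eigvals, L, T, n_cfg, bins=50):
    """Histogram estimate of rho(lambda) = T * <sum_i delta(lambda - lambda_i)> / L^3.
    `eigvals` collects the eigenvalues from all n_cfg configurations."""
    counts, edges = np.histogram(eigvals, bins=bins)
    widths = np.diff(edges)
    centers = 0.5 * (edges[:-1] + edges[1:])
    rho = T * counts / (n_cfg * widths * L**3)
    return centers, rho

# a flat synthetic spectrum gives a flat density estimate
eigs = np.linspace(0.0, 1.0, 1000)
centers, rho = spectral_density(eigs, L=1.0, T=1.0, n_cfg=1, bins=10)
```

In practice the bin widths and renormalization factors would be chosen to match the physical units of Fig. 1; the sketch only shows the bookkeeping.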
Eigenmodes with $\lambda$ up to $\sim 500\,$MeV were computed for all $L$ in
Table 1. Densities $\rho(\lambda)$ were renormalized in
$\overline{\textrm{MS}}$ at 2 GeV, using
$Z_{m}\\!=\\!Z^{-1}_{S}\\!=\\!0.907(26)$ obtained by interpolating the results
at 11 UV cutoffs He _et al._ (2022). We wish to focus on a temperature in the
range $200\\!<\\!T\\!<\\!250$ MeV, where the system was originally predicted
to reach the IR phase at certain $T_{\scriptscriptstyle{\text{\rm IR}}}$. In
Fig. 1 we show $\rho(\lambda)$ at $T\\!=\\!234~{}$MeV. The striking bimodal
structure exhibits features previously associated with IR phase Alexandru and
Horváth (2019), including a fully-formed region of severe depletion. We also
include the $T\\!\simeq\\!0$ result from identical simulation setup on
$48^{3}\times 96$ lattice and $\rho(\lambda)$ obtained using stochastic
estimator Cossu _et al._ (2016). The difference is indeed quite remarkable.
Note that, for large enough $\lambda$, the two densities come together, which
is expected for all $T\\!\ll\\!1/a$.
Figure 1: Spectral density $\rho(\lambda)$ at $T\\!=\\!234\,$MeV (circles) and
$T\\!\simeq\\!0$ (triangles), both at $L=5.0$ fm.
3\. The Results: We now examine in detail whether the unusual
$d_{\scriptscriptstyle{\text{\rm IR}}}(\lambda)$ in IR phase of pure-glue QCD
Alexandru and Horváth (2021) is also present in real-world QCD at
$T\\!=\\!234~{}$MeV. To that end, we utilize and extend the techniques of
early studies. A useful concept is the “finite-volume”
$d_{\scriptscriptstyle{\text{\rm IR}}}$, namely Alexandru and Horváth (2021)
$d_{\scriptscriptstyle{\text{\rm IR}}}(L,s)=\frac{1}{\ln(s)}\ln\frac{\mathscr{N}_{\star}(L)}{\mathscr{N}_{\star}(L/s)}\,,\quad s>0$ (4)
since then $d_{\scriptscriptstyle{\text{\rm
IR}}}\\!=\\!\lim_{L\to\infty}d_{\scriptscriptstyle{\text{\rm IR}}}(L,s)$
independently of $s$. Estimating the limit from linear extrapolations in $1/L$
works well in Anderson models, at least for extended states and at criticality
Horváth and Markoš (2022a). Here we utilize this, and point out that the
procedure is equivalent to direct fitting of $\mathscr{N}_{\star}(L)$ to the
form $b\,L^{d_{\scriptscriptstyle{\text{\rm IR}}}}e^{-c/L}$ ref , which is
technically more convenient. Using the data from our five largest systems in
such fits, we obtained $d_{\scriptscriptstyle{\text{\rm IR}}}(\lambda)$ shown
in Fig. 2. Despite some differences (see below), its behavior is strikingly
similar to the pure-glue case (Fig. 1 of Ref. Alexandru and Horváth (2021)).
Important commonality is the discontinuity feature at
$\lambda_{\scriptscriptstyle{\text{\rm IR}}}\\!=\\!0$, suggesting that exact
zero-modes ($d_{\scriptscriptstyle{\text{\rm IR}}}(0)\\!\simeq\\!3$) differ
from lowest near-zero modes ($d_{\scriptscriptstyle{\text{\rm
IR}}}(0^{+})\\!\simeq\\!2$) in a robust qualitative manner. This is made
explicit by the inset of Fig. 2, which focuses on the very deep IR and yields
the linearly extrapolated ($\lambda\\!\rightarrow\\!0^{+}$) value
$d_{\scriptscriptstyle{\text{\rm IR}}}(0^{+})\\!=\\!2.03(16)$, which is more
than $5\sigma$ smaller than 3. Explaining this difference in terms of the
underlying IR glue may prove important for deciphering the nature of IR phase.
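The direct fit of $\mathscr{N}_{\star}(L)$ to the form $b\,L^{d_{\scriptscriptstyle{\text{\rm IR}}}}e^{-c/L}$ mentioned above can be sketched with a standard least-squares routine; the data below are synthetic, generated with a known dimension purely to illustrate the procedure (not the measured effective volumes):

```python
import numpy as np
from scipy.optimize import curve_fit

def scaling_form(L, b, d_ir, c):
    """Fit ansatz N_star(L) = b * L^d_IR * exp(-c/L)."""
    return b * L**d_ir * np.exp(-c / L)

# synthetic effective volumes generated with d_IR = 2 (illustration only);
# box sizes loosely mimic a range of spatial extents in fm
L = np.array([3.4, 4.2, 5.0, 6.7, 10.1])
N_star = 7.0 * L**2 * np.exp(-0.5 / L)

popt, pcov = curve_fit(scaling_form, L, N_star, p0=[1.0, 3.0, 0.1])
b_fit, d_fit, c_fit = popt
print(round(d_fit, 2))   # recovers the input dimension d_IR ~ 2
```

With real data, the covariance matrix `pcov` (and resampling over gauge configurations) would supply the quoted uncertainties.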
As in the pure-glue case, we find a clear low-$d$
($d_{\scriptscriptstyle{\text{\rm IR}}}\leq 1$) plateau, here in the range of
about $10-220~{}$MeV. This roughly coincides with the region of strongly
suppressed $\rho(\lambda)$ (Fig. 1 vs Fig. 2). Dimensional structure of
plateaus will be further clarified using the multidimensionality technique of
Ref. Horváth and Markoš (2022b). Such studies are forthcoming.
The onset of the rise toward dimension 3 past $\sim 230\,$MeV confirms the
viability of our scenario with mobility edges
$\lambda_{\scriptscriptstyle{\text{\rm IR}}}$ and $\pm\lambda_{A}$. However,
the discontinuity of $d_{\scriptscriptstyle{\text{\rm IR}}}(\lambda)$ at
presumed $\lambda_{A}$ is not apparent in our data, contrary to both the pure-
glue case Alexandru and Horváth (2021) and the situation in Anderson models
Horváth and Markoš (2022a). Resolution of this difference may provide an
additional new insight into the IR-phase dynamics.
Figure 2: Function $d_{\rm IR}(\lambda)$ at $T\\!=\\!234\,$MeV. The inset
zooms in on deep infrared with $\lambda\to 0^{+}$ extrapolation shown.
To illustrate the quality of scaling in various $\lambda$-regimes shown in
Fig. 2, we plot in Fig. 3 the fraction of volume occupied by the effective
support of the state, namely
$f_{\star}\\!\equiv\\!\mathscr{N}_{\star}/N\\!=\\!\mathscr{N}_{\star}/(n_{\scriptscriptstyle\text{L}}^{3}n_{\scriptscriptstyle\text{T}})$.
Since $\mathscr{N}_{\star}(L)\\!\propto\\!L^{d_{\scriptscriptstyle{\text{\rm
IR}}}}e^{-c/L}$ is used to extract $d_{\scriptscriptstyle{\text{\rm IR}}}$, we
have $f_{\star}(L)\\!\propto\\!L^{d_{\scriptscriptstyle{\text{\rm
IR}}}-3}e^{-c/L}$ and these fits are shown in Fig. 3. The displayed
$\chi^{2}$/dof for modes in different regimes do indeed confirm very good
scaling behavior. Note how functions $f_{\star}(L)$ in Fig. 3 visually
separate the bulk modes and near-bulk modes from IR modes. Indeed, although
zero modes have $d_{\scriptscriptstyle{\text{\rm IR}}}\\!=\\!3$, and hence
occupy a finite fraction of the volume in the thermodynamic limit
($\lim_{L\to\infty}f_{\star}(L)\\!>\\!0$), this fraction is much smaller than
that of typical bulk modes. At the same time, for
$d_{\scriptscriptstyle{\text{\rm IR}}}\\!<\\!3$ modes of IR component the
fraction vanishes ($\lim_{L\to\infty}f_{\star}(L)\\!=\\!0$).
Fig. 3 reveals that the lowest near-zero modes we studied
($\lambda\\!=\\!0.22\,$MeV, $d_{\scriptscriptstyle{\text{\rm IR}}}\\!<\\!3$)
have larger $f_{\star}$ at the studied volumes than zero modes
($\lambda\\!=\\!0$, $d_{\scriptscriptstyle{\text{\rm IR}}}\\!=\\!3$). But
given their $d_{\scriptscriptstyle{\text{\rm IR}}}$, this order has to reverse
at sufficiently large volume. We can read off the graphs in Fig. 3 that this
happens at $L\approx 20\,$fm. Such deep IR thresholds simply do not appear in
other QCD regimes. Only at larger $L$ will modes at $\lambda\\!=\\!0.22\,$MeV
become “sparser” than zero modes. Note that the qualitative difference between
zero and near-zero modes is expressed here by the opposite convexity
properties of their $f_{\star}(L;\lambda)$.
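The reversal scale can be estimated by equating the two scaling forms $f_{\star}(L)\propto L^{d_{\scriptscriptstyle{\text{\rm IR}}}-3}e^{-c/L}$ for the zero and near-zero modes; the coefficients below are invented for the sketch, chosen only so the curves cross at $L=20\,$fm, and are not fitted values from the paper:

```python
import numpy as np
from scipy.optimize import brentq

def f_star(L, b, d_ir, c):
    """Illustrative volume fraction f_star(L) = b * L^(d_IR - 3) * exp(-c/L)."""
    return b * L**(d_ir - 3) * np.exp(-c / L)

zero_mode = dict(b=0.02, d_ir=3.0, c=1.0)  # constant fraction as L -> infinity
near_zero = dict(b=0.40, d_ir=2.0, c=1.0)  # fraction falls like 1/L

# the curves cross where their difference changes sign
cross = brentq(lambda L: f_star(L, **near_zero) - f_star(L, **zero_mode),
               1.0, 100.0)
print(round(cross, 1))  # -> 20.0 ; near-zero modes become sparser beyond this L
```

Beyond the crossing, the $d_{\scriptscriptstyle{\text{\rm IR}}}=3$ zero modes necessarily dominate the volume fraction, as stated in the text.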
Figure 3: Function $f_{\star}(L)$ for various $\lambda$ at $T\\!=\\!234\,$MeV.
Figure 4: Typical color-coded $\log_{10}(\min\,\\{Np(x_{i}),1\\})$ in a 2d
plane containing $x_{i}$ with maximal probability. Modes in different
$\lambda$-regimes are shown. $T\\!=\\!234$ MeV and $L\\!=\\!10.1\,$fm.
Finally, we wish to gain some visual insight into the spatial geometry of
modes. In definition (1) of $\mathscr{N}_{\star}$, uniform probability
$p_{u}\\!=\\!1/N$ enters as a reference value: points $x_{i}$ with
$p(x_{i})\\!=\\!\psi_{\lambda}^{\dagger}(x_{i})\psi_{\lambda}(x_{i})\\!\geq\\!p_{u}$
are guaranteed to be in the effective support, and we refer to them as the “core”. We
wish to set up a sea-level representation that visualizes it sharply. Plotting
$\min\\{Np(x_{i}),1\\}$, namely the contribution of $x_{i}$ to effective
count, accomplishes that. In Fig. 4 we color-code this input (on a logarithmic
scale) and show its typical behavior on a plane containing the global
probability maximum. The black regions mark the core. The panels represent
different $\lambda$-regimes on the same glue background. The bulk mode at
$\lambda\\!=\\!330\,$MeV (right) resembles to some extent modes at low
temperatures. Indeed, its core spreads out contiguously over large distances
and its granularity (composition from distinct lumps) is not very obvious. By
contrast, the plateau mode ($\lambda\\!=\\!100\,$MeV) is usually dominated
by a well-formed lump, as shown. The near-zero modes ($\lambda\\!=\\!0.22\,$MeV)
maintain the granularity, but involve multiple lumpy features forming a larger
spatial structure. The zero-modes (left) at this volume are in fact quite
similar but, due to $d_{\scriptscriptstyle{\text{\rm IR}}}\\!=\\!3$ vs
$d_{\scriptscriptstyle{\text{\rm IR}}}\\!=\\!2$ difference, will become
infinitely more “space-filling” in thermodynamic limit.
Additional results are described in our supplement ref .
4\. Summary and Discussion: A remarkable property of the QCD IR phase Alexandru and
Horváth (2019) is that it requires the presence of non-analyticities not only
at the transition point $T_{\scriptscriptstyle{\text{\rm IR}}}$, but at any
temperature within the phase. It was proposed and verified Alexandru and
Horváth (2021, 2022) that in pure-glue QCD the system arranges for this by
reconfiguring itself into two independent components (IR & bulk), sharply
separated in Dirac spectrum. The needed non-analyticity enters via Anderson-
like mobility edges $\lambda_{\scriptscriptstyle{\text{\rm IR}}}\\!=\\!0$ and
$\pm\lambda_{\text{A}}\\!\neq\\!0$, encoded by dimension function
$d_{\scriptscriptstyle{\text{\rm IR}}}(\lambda)$. Our present results suggest
that key elements of this scenario also materialize in “real-world” QCD. Thus,
in certain regards, the thermal state in the IR phase of strong interactions
resembles the Tisza-Landau two-fluid model of liquid helium. The proposed
2-component nature of the thermal state may in fact be the most essential
attribute of the IR phase.
We wish to point out certain aspects of our results. (i) The computed
dimension $d_{\scriptscriptstyle{\text{\rm IR}}}$ of near-zero modes is in the
close vicinity of the “topological” value 2, thus inviting a systematic inquiry
into its possible origin in some topological feature of the underlying glue
fields. At the same time, recent findings of possible topological behavior in
3d Anderson model Horváth and Markoš (2022b) also involve dimension $2$ but no
glue fields. (ii) In the existing QCD data there is no clear evidence yet for
critical value $d_{\scriptscriptstyle{\text{\rm IR}}}\\!\approx\\!8/3$, which
was suggested to be a generic feature of Anderson models Horváth and Markoš
(2022a). (iii) Unlike in the case of $\lambda_{\scriptscriptstyle{\text{\rm
IR}}}$, in the vicinity of $\lambda_{A}$ we did not find an obvious dimension
jump. This differs from the situation in pure-glue QCD Alexandru and Horváth
(2021) and from that at critical points of Anderson models Horváth and Markoš
(2022a). Taken together, points (i-iii) constitute an intriguing complex
puzzle to be solved by future studies. Indeed, a satisfactory understanding
of Anderson-like features in QCD requires the resolution of these issues. This
resolution will involve control over the usual lattice QCD systematics, which
is important given that some of these effects are possibly enhanced in
dynamical simulations Zhao _et al._ (2023).
Recently a number of lattice QCD papers focused on the same temperature range
as the one investigated here and in other recent IR phase studies (see e.g.
Dick _et al._ (2015); Ding _et al._ (2020); Aoki _et al._ (2021); Kaczmarek
_et al._ (2021); Kotov _et al._ (2021); Chen _et al._ (2022); Kaczmarek _et
al._ (2023)). Their physics goals are mostly different and tend to involve the
chiral limit, such as in studies of the U${}_{\text{A}}$(1) problem or the chiral
phase transition. Other related developments include an approximate color-spin
symmetry Rohrhofer _et al._ (2019); Glozman _et al._ (2022), as well as
recent Refs. Liu (2023); Athenodorou _et al._ (2022); Kehr _et al._ (2023).
Conversely, the present CLQCD data could be used to study these other
problems.
## Acknowledgments
We thank the CLQCD collaboration for providing their gauge configurations,
and are also grateful to Ting-Wai Chiu, Heng-Tong Ding and Jian Liang for valuable
discussions. This work is supported in part by the Strategic Priority Research
Program of Chinese Academy of Sciences, Grant No. XDB34030303 and XDPB15, NSFC
grants No. 12293062, 12293065 and 12047503, and also a NSFC-DFG joint grant
under Grant No. 12061131006 and SCHA 458/22. This work is also supported in
part by the U.S. DOE Grant No. DE-SC0013065, DOE Grant No. DE-FG02-95ER40907,
and DOE Grant No. DE-AC05-06OR23177 which is within the framework of the TMD
Topical Collaboration. This publication received funding from the French
National Research Agency under the contract ANR-20-CE31-0016. The numerical
calculations were carried out on the ORISE Supercomputer through the HIP
programming model Bi _et al._ (2020), and HPC Cluster of ITP-CAS. This
research used resources of the Oak Ridge Leadership Computing Facility at the
Oak Ridge National Laboratory, which is supported by the Office of Science of
the U.S. Department of Energy under Contract No. DE-AC05-00OR22725. This work
used Stampede time under the Extreme Science and Engineering Discovery
Environment (XSEDE), which is supported by National Science Foundation Grant
No. ACI-1053575. We also thank the National Energy Research Scientific
Computing Center (NERSC) for providing HPC resources that have contributed to
the research results reported within this paper. We acknowledge the facilities
of the USQCD Collaboration, used in part for this research, which are funded by
the Office of Science of the U.S. Department of Energy.
## References
* Hagedorn (1965) R. Hagedorn, Nuovo Cim. Suppl. 3, 147 (1965).
* Hagedorn (1971) R. Hagedorn, CERN report 71-12 (1971), 10.5170/CERN-1971-012.
* McLerran and Svetitsky (1981) L. D. McLerran and B. Svetitsky, Phys. Lett. B 98, 195 (1981).
* Kuti _et al._ (1981) J. Kuti, J. Polonyi, and K. Szlachanyi, Phys. Lett. B 98, 199 (1981).
* Engels _et al._ (1981) J. Engels, F. Karsch, H. Satz, and I. Montvay, Phys. Lett. B 101, 89 (1981).
* Aoki _et al._ (2006) Y. Aoki, G. Endrodi, Z. Fodor, S. Katz, and K. Szabo, Nature 443, 675 (2006), arXiv:hep-lat/0611014 [hep-lat] .
* Arsene _et al._ (2005) I. Arsene _et al._ (BRAHMS), Nucl. Phys. A757, 1 (2005), arXiv:nucl-ex/0410020 [nucl-ex] .
* Back _et al._ (2005) B. B. Back _et al._ , Nucl. Phys. A757, 28 (2005).
* Adams _et al._ (2005) J. Adams _et al._ (STAR), Nucl. Phys. A757, 102 (2005), arXiv:nucl-ex/0501009 [nucl-ex] .
* Adcox _et al._ (2005) K. Adcox _et al._ (PHENIX), Nucl. Phys. A757, 184 (2005), arXiv:nucl-ex/0410003 [nucl-ex] .
* Muller _et al._ (2012) B. Muller, J. Schukraft, and B. Wyslouch, Ann. Rev. Nucl. Part. Sci. 62, 361 (2012), arXiv:1202.3233 [hep-ex] .
* Alexandru and Horváth (2019) A. Alexandru and I. Horváth, Phys. Rev. D 100, 094507 (2019), arXiv:1906.08047 [hep-lat] .
* Edwards _et al._ (2000) R. G. Edwards, U. M. Heller, J. E. Kiskis, and R. Narayanan, Phys.Rev. D61, 074504 (2000), arXiv:hep-lat/9910041 [hep-lat] .
* Alexandru and Horváth (2015) A. Alexandru and I. Horváth, Phys. Rev. D92, 045038 (2015), arXiv:1502.07732 [hep-lat] .
* Alexandru and Horváth (2021) A. Alexandru and I. Horváth, Phys. Rev. Lett. 127, 052303 (2021), arXiv:2103.05607 [hep-lat] .
* Alexandru and Horváth (2022) A. Alexandru and I. Horváth, Phys. Lett. B 833, 137370 (2022), arXiv:2110.04833 [hep-lat] .
* Garcia-Garcia and Osborn (2006) A. M. Garcia-Garcia and J. C. Osborn, Nucl. Phys. A 770, 141 (2006), arXiv:hep-lat/0512025 .
* Garcia-Garcia and Osborn (2007) A. M. Garcia-Garcia and J. C. Osborn, Phys. Rev. D 75, 034503 (2007), arXiv:hep-lat/0611019 .
* Kovacs and Pittler (2010) T. G. Kovacs and F. Pittler, Phys. Rev. Lett. 105, 192001 (2010), arXiv:1006.1205 [hep-lat] .
* Giordano _et al._ (2014) M. Giordano, T. G. Kovacs, and F. Pittler, Phys. Rev. Lett. 112, 102002 (2014), arXiv:1312.1179 [hep-lat] .
* Ujfalusi _et al._ (2015) L. Ujfalusi, M. Giordano, F. Pittler, T. G. Kovács, and I. Varga, Phys. Rev. D 92, 094513 (2015), arXiv:1507.02162 [cond-mat.dis-nn] .
* Horváth _et al._ (2023) I. Horváth, P. Markoš, and R. Mendris, Entropy 25, 482 (2023), arXiv:2205.11520 [hep-lat] .
* Horváth and Mendris (2020) I. Horváth and R. Mendris, Entropy 22, 1273 (2020), arXiv:1807.03995 [quant-ph] .
* Horváth (2021) I. Horváth, Quantum Rep. 3, 534 (2021), arXiv:1809.07249 [quant-ph] .
* Horváth and Mendris (2019) I. Horváth and R. Mendris, MDPI Proc. 13, 8 (2019), arXiv:1907.01606 [quant-ph] .
* Neuberger (1998) H. Neuberger, Phys. Lett. B417, 141 (1998), arXiv:hep-lat/9707022 [hep-lat] .
* Vig and Kovacs (2021) R. A. Vig and T. G. Kovacs, Phys. Rev. D 103, 114510 (2021), arXiv:2101.01498 [hep-lat] .
* Cardinali _et al._ (2021) M. Cardinali, M. D’Elia, and A. Pasqui, (2021), arXiv:2107.02745 [hep-lat] .
* Horváth and Markoš (2022a) I. Horváth and P. Markoš, Phys. Rev. Lett. 129, 106601 (2022a), arXiv:2110.11266 [cond-mat.dis-nn] .
* Horváth and Markoš (2023) I. Horváth and P. Markoš, Phys. Lett. A 467, 128735 (2023), arXiv:2207.13569 [cond-mat.dis-nn] .
* Horváth and Markoš (2022b) I. Horváth and P. Markoš, (2022b), arXiv:2212.09806 [cond-mat.dis-nn] .
* (32) Supplementary materials .
* Zhang _et al._ (2021) Q.-A. Zhang, J. Hua, F. Huang, R. Li, Y. Li, C.-D. Lu, P. Sun, W. Sun, W. Wang, and Y.-B. Yang, (2021), arXiv:2103.07064 [hep-lat] .
* Liu _et al._ (2022) H. Liu, J. He, L. Liu, P. Sun, W. Wang, Y.-B. Yang, and Q.-A. Zhang, (2022), arXiv:2207.00183 [hep-lat] .
* Xing _et al._ (2022) H. Xing, J. Liang, L. Liu, P. Sun, and Y.-B. Yang, (2022), arXiv:2210.08555 [hep-lat] .
* Liu _et al._ (2023) H. Liu, L. Liu, P. Sun, W. Sun, J.-X. Tan, W. Wang, Y.-B. Yang, and Q.-A. Zhang, (2023), arXiv:2303.17865 [hep-lat] .
* Sorensen (1992) D. C. Sorensen, SIAM J. Matrix Anal. Appl. 13, 357 (1992).
* Lehoucq and Sorensen (1996) R. B. Lehoucq and D. C. Sorensen, SIAM J. Matrix Anal. Appl. 17, 789 (1996).
* Giusti _et al._ (2003) L. Giusti, C. Hoelbling, M. Luscher, and H. Wittig, Comput. Phys. Commun. 153, 31 (2003), arXiv:hep-lat/0212012 [hep-lat] .
* Alexandru _et al._ (2011) A. Alexandru, M. Lujan, C. Pelissier, B. Gamari, and F. X. Lee, in _Proceedings, SAAHPC’11_ (2011) pp. 123–130, arXiv:1106.4964 [hep-lat] .
* Chiu and Zenkin (1999) T.-W. Chiu and S. V. Zenkin, Phys. Rev. D59, 074501 (1999), arXiv:hep-lat/9806019 [hep-lat] .
* He _et al._ (2022) F. He, Y.-J. Bi, T. Draper, K.-F. Liu, Z. Liu, and Y.-B. Yang ($\chi$QCD), Phys. Rev. D 106, 114506 (2022), arXiv:2204.09246 [hep-lat] .
* Cossu _et al._ (2016) G. Cossu, H. Fukaya, S. Hashimoto, T. Kaneko, and J.-I. Noaki, PTEP 2016, 093B06 (2016), arXiv:1607.01099 [hep-lat] .
* Zhao _et al._ (2023) D.-J. Zhao, G. Wang, F. He, L. Jin, P. Sun, Y.-B. Yang, and K. Zhang ($\chi$QCD), Phys. Rev. D 107, L091501 (2023), arXiv:2207.14132 [hep-lat] .
* Dick _et al._ (2015) V. Dick, F. Karsch, E. Laermann, S. Mukherjee, and S. Sharma, Phys. Rev. D91, 094504 (2015).
* Ding _et al._ (2020) H. T. Ding, S. T. Li, S. Mukherjee, A. Tomiya, X. D. Wang, and Y. Zhang, (2020), arXiv:2010.14836 [hep-lat] .
* Aoki _et al._ (2021) S. Aoki, Y. Aoki, G. Cossu, H. Fukaya, S. Hashimoto, T. Kaneko, C. Rohrhofer, and K. Suzuki (JLQCD), Phys. Rev. D 103, 074506 (2021), arXiv:2011.01499 [hep-lat] .
* Kaczmarek _et al._ (2021) O. Kaczmarek, L. Mazur, and S. Sharma, Phys. Rev. D 104, 094518 (2021).
* Kotov _et al._ (2021) A. Y. Kotov, M. P. Lombardo, and A. Trunin, Physics Letters B 823, 136749 (2021).
* Chen _et al._ (2022) Y.-C. Chen, T.-W. Chiu, and T.-H. Hsieh (TWQCD), Phys. Rev. D 106, 074501 (2022), arXiv:2204.01556 [hep-lat] .
* Kaczmarek _et al._ (2023) O. Kaczmarek, R. Shanker, and S. Sharma, (2023), arXiv:2301.11610 [hep-lat] .
* Rohrhofer _et al._ (2019) C. Rohrhofer, Y. Aoki, G. Cossu, H. Fukaya, C. Gattringer, L. Ya. Glozman, S. Hashimoto, C. B. Lang, and S. Prelovsek, Phys. Rev. D100, 014502 (2019).
* Glozman _et al._ (2022) L. Y. Glozman, O. Philipsen, and R. D. Pisarski, Eur. Phys. J. A 58, 247 (2022), arXiv:2204.05083 [hep-ph] .
* Liu (2023) K.-F. Liu, (2023), arXiv:2302.11600 [hep-ph] .
* Athenodorou _et al._ (2022) A. Athenodorou, C. Bonanno, C. Bonati, G. Clemente, F. D’Angelo, M. D’Elia, L. Maio, G. Martinelli, F. Sanfilippo, and A. Todaro, JHEP 10, 197 (2022), arXiv:2208.08921 [hep-lat] .
* Kehr _et al._ (2023) R. Kehr, D. Smith, and L. von Smekal, (2023), arXiv:2304.13617 [hep-lat] .
* Bi _et al._ (2020) Y.-J. Bi, Y. Xiao, W.-Y. Guo, M. Gong, P. Sun, S. Xu, and Y.-B. Yang, _Proceedings, Lattice 2019_ , PoS LATTICE2019, 286 (2020), arXiv:2001.05706 [hep-lat] .
## I Supplemental Materials
### I.1 Visualization of Spatial Distributions
In the present subsection we wish to extend our visualization of mode
distributions in various $\lambda$-regimes shown in Fig. 4 of the manuscript.
In particular, Fig. 5 shows examples of modes at the same four values of
$\lambda$, but at all IR cutoffs (sizes $L$ of the system) considered in this
work. Note that $\lambda\!=\!0$, 0.22, and 100 MeV belong to the IR part of
the spectrum, while $\lambda\!=\!330\,$MeV lies in the near-bottom part of the
bulk.
An interesting aspect of observing the typical geometry at fixed $\lambda$ for
increasing $L$ is the evolution in the degree of granularity. Indeed, note that
for the plateau mode ($\lambda\!=\!100\,$MeV), increasing $L$ confirms the
picture of a single solid lump present in the volume. On the other hand, for
zero modes and near-zero modes, the apparent (visually observable) degree of
granularity increases with increasing $L$, reflecting that their effective
support keeps spreading out in the volume. In fact, all qualitative aspects
we observe agree with the metal-to-critical picture of the transition to the IR
phase put forward in Ref. Alexandru and Horváth (2022). The associated details
will be worked out in dedicated follow-up publications.
Figure 5: Typical color-coded $\log_{10}(\min\,\{Np(x_{i}),1\})$ in a 2d
plane containing the point $x_{i}$ with maximal probability. Modes in different
$\lambda$-regimes on a given equilibrium glue background at $T\!=\!234$ MeV
are shown for all spatial sizes $L$ studied.
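As a minimal illustration of the color coding defined in the caption (a sketch, not the authors' code; the density values below are synthetic, whereas in the study $p(x)$ comes from the eigenmode itself):

```python
import numpy as np

# Sketch of the Fig. 5 color coding: log10(min{N p(x_i), 1}) for a mode
# probability density p normalized over N lattice sites (synthetic here).
rng = np.random.default_rng(0)
p = rng.random(1000)
p /= p.sum()                        # normalized density on N = 1000 sites
color = np.log10(np.minimum(p.size * p, 1.0))
assert color.max() <= 0.0           # sites at or above uniform density map to 0
```

Sites where the mode is enhanced relative to uniform occupation saturate at 0, while depleted sites map to increasingly negative values, which is what makes the granularity visible.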
### I.2 Accuracy of Overlap Eigenmodes
In this section we focus on the accuracy of the low-lying eigenvectors used in
this study. For efficiency reasons, we compute the low-lying eigenvalues and
eigenvectors of $M\equiv\frac{1}{\rho^{2}}D_{\rm ov}^{\dagger}D_{\rm
ov}=\frac{1}{\rho}(D_{\rm ov}+D_{\rm ov}^{\dagger})$ in the chiral sector
which includes the exact zero mode. The eigenvalues and eigenvectors of $D$
are simply related to the eigenvectors of $M$: non-zero modes of
$D_{\text{ov}}$ come in pairs, $\lambda_{R}\pm i\lambda_{I}$, and they span an
invariant two-dimensional space for $M$ with $\lambda_{M}=\lambda_{R}$ and for
near zero-modes $\lambda_{R}\approx\lambda_{I}^{2}/(2\rho)$. When computing
the spectrum of $M$, zero modes appear as Ritz pairs with eigenvalues of the
order of $\epsilon$, the precision of the sign-function approximation used to
implement $D_{\text{ov}}$. When we have near-zero modes with
$(a\lambda_{I})^{2}<\epsilon$, it is impossible to distinguish them from zero
modes. Using a polynomial approximation for the sign functions, the best
precision we are able to achieve is $\epsilon\approx 10^{-12}$, and
consequently we can only confidently resolve eigenvalues with
$a\lambda_{I}>10^{-6}$, which at $a=0.105$ fm corresponds in physical units to
$\lambda_{I}\gtrsim 2\,\text{keV}$. For the volumes used in this study, the near-zero
eigenmodes satisfy this condition.
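The relation $\lambda_{R}\approx\lambda_{I}^{2}/(2\rho)$ follows from the fact that overlap eigenvalues lie on the Ginsparg-Wilson circle $|z-\rho|=\rho$; a quick numerical self-check (a sketch, with $\rho$ set to 1 in lattice units):

```python
import numpy as np

# Eigenvalues of the overlap operator lie on the circle |z - rho| = rho:
# z = rho(1 - cos phi) + i*rho*sin(phi). For small phi (near-zero modes),
# lambda_R = rho(1 - cos phi) ~ lambda_I^2 / (2 rho), lambda_I = rho sin phi.
rho = 1.0
for phi in (1e-2, 1e-3):
    lam_r = rho * (1.0 - np.cos(phi))
    lam_i = rho * np.sin(phi)
    approx = lam_i**2 / (2.0 * rho)
    assert abs(lam_r - approx) / lam_r < 1e-3  # agrees to better than 0.1%
```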
Another concern is the mixing between nearly-degenerate eigenvectors. For
eigenvector observables (like $f_{\star}$) that are smooth as a function of
$\lambda$, this is less of a concern. However, at discontinuities mixing could
introduce systematic effects. This could potentially be a problem at
$\lambda=0$ since the zero modes and near-zero modes behave differently. We
argue here that this is not the case.
To see this, consider two eigenvectors of the projected operator $D$ with
$Dv_{1}=i\lambda_{1}v_{1}$ and $Dv_{2}=i\lambda_{2}v_{2}$. A mixed vector
$v=v_{1}\cos\theta+v_{2}\sin\theta$ has a Ritz “eigenvalue”
$i\lambda=v^{\dagger}Dv$. The residue for this vector, $\delta=\|Dv-i\lambda
v\|$, equals $|\cos\theta\sin\theta\,(\lambda_{1}-\lambda_{2})|$. For our
case we take $\lambda_{1}\approx 0$, the near-zero modes have
$\lambda_{2}>0.1\,\text{MeV}$, and our residues, even in the worst case, are
$\delta<10^{-7}$. This implies that the mixing angle is at most
$\theta\sim\delta/\lambda_{2}=2\times 10^{-3}$. Given that $f_{\star}$ varies
slowly (the difference between zero modes and near-zero modes is less than a
factor of two), this mixing has a negligible effect relative to our statistical errors.
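The worst-case mixing angle quoted above can be checked in a few lines (a sketch; it assumes the quoted residue is in lattice units and uses the lattice spacing $a=0.105$ fm stated later in this supplement for the conversion):

```python
# Worst-case mixing-angle estimate theta ~ delta / lambda_2 (Sec. I.2).
HBARC_MEV_FM = 197.327   # hbar*c, for converting lattice to physical units
a_fm = 0.105             # lattice spacing used in this study
delta_lat = 1e-7         # worst-case Ritz residue, assumed in lattice units
lam2_mev = 0.1           # scale of the smallest near-zero eigenvalue

delta_mev = delta_lat * HBARC_MEV_FM / a_fm
theta = delta_mev / lam2_mev
print(f"theta ~ {theta:.1e}")   # of order 2e-3, matching the bound in the text
```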
### I.3 Fitting of Effective Fractions
Our procedure to extract $d_{\scriptscriptstyle{\text{\rm IR}}}$ assumes an
approximately linear (in $1/L$) approach of “finite-volume” dimension
$d_{\scriptscriptstyle{\text{\rm IR}}}(L,s)$ in Eq. (4) to its
$L\!\to\!\infty$ limit. This was suggested by Ref. Horváth and Markoš
(2022a) in the context of Anderson models. One can easily check that this is
equivalent to direct fits of $\mathscr{N}_{\star}(L)$ to the form
$\mathscr{N}_{\star}(L)\!\propto\!L^{d_{\scriptscriptstyle{\text{\rm
IR}}}}e^{-c/L}$ and
$f_{\star}(L)\!\propto\!L^{d_{\scriptscriptstyle{\text{\rm IR}}}-3}e^{-c/L}$
for the effective volume fraction
$f_{\star}(L)\!\equiv\!\mathscr{N}_{\star}(L)/N$. Fits shown in the left
panel of Fig. 6 support the validity of our approach.
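A fit of this scaling form can be set up in a few lines (a sketch, not the analysis code; the box sizes and data values below are synthetic and noiseless, chosen only to illustrate the extraction of $d_{\rm IR}$):

```python
import numpy as np
from scipy.optimize import curve_fit

def f_star_model(L, A, d_ir, c):
    # Finite-volume scaling form: f_star(L) = A * L**(d_IR - 3) * exp(-c/L)
    return A * L ** (d_ir - 3.0) * np.exp(-c / L)

# Hypothetical box sizes (fm) and synthetic data generated from the model
L = np.array([3.2, 4.0, 4.8, 5.6, 6.4])
f_data = f_star_model(L, 0.5, 2.0, 1.5)

(A_fit, d_fit, c_fit), _ = curve_fit(f_star_model, L, f_data, p0=(1.0, 1.5, 1.0))
print(f"d_IR = {d_fit:.3f}")   # recovers the input value d_IR = 2
```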
Figure 6: Function $f_{\star}(L)$ (left) and the associated
$\bar{d}_{\scriptscriptstyle{\text{\rm IR}}}(\bar{L})$ (right) for various
$\lambda$ at $T\!=\!234\,$MeV. See text for explanations.
Here we wish to check the nature of the finite-$L$ correction more directly. Given
the above scaling form, the $L$-dependence of the IR dimension can be expressed as
$d_{\scriptscriptstyle{\text{\rm IR}}}(L,s)=d_{\scriptscriptstyle{\text{\rm
IR}}}+\frac{s-1}{\ln(s)}\,\frac{c}{L}\quad\;\;\longrightarrow\quad\;\;\bar{d}_{\scriptscriptstyle{\text{\rm
IR}}}(\bar{L})=d_{\scriptscriptstyle{\text{\rm
IR}}}+\frac{c}{\bar{L}}\quad\,\text{with}\quad\,\bar{d}_{\scriptscriptstyle{\text{\rm
IR}}}(\bar{L}(L,s))\equiv d_{\scriptscriptstyle{\text{\rm
IR}}}(L,s)\;\;,\;\;\bar{L}(L,s)\equiv L\,\frac{\ln(s)}{s-1}\;\;$ (5)
Introduction of the variable $\bar{L}$ thus makes it possible to combine
$d_{\scriptscriptstyle{\text{\rm IR}}}$ results from all pairs of distinct
lattices and to follow their trends. Indeed, according to the above, the value
$\bar{d}_{\scriptscriptstyle{\text{\rm IR}}}(1/\bar{L})$ from each pair should
fall on the indicated straight line, at least near $1/\bar{L}\!=\!0$. To
check this, we show in the left plot of Fig. 6 our $f_{\star}$ data for the five
largest volumes at selected values of $\lambda$, and in the right plot the
associated functions $\bar{d}_{\scriptscriptstyle{\text{\rm IR}}}(1/\bar{L})$.
Note that there are 10 data points for each $\lambda$ in the latter case since
this is the number of possible lattice pairs. The displayed fits are indeed
consistent with the linear behavior of $d_{\scriptscriptstyle{\text{\rm
IR}}}(1/\bar{L})$ near $1/\bar{L}\!=\!0$. The qualitative difference between
exact zero modes and the lowest near-zero modes is expressed in the right plot by
the crossed lines representing $\lambda\!=\!0$ and
$\lambda\!=\!0.22\,$MeV. Their finite-volume corrections are in fact of
opposite sign.
Figure 7: Function $f_{\star}(L)$ for various simulated sizes $L$ at $T\!=\!234$
MeV, for $\lambda\in[0.1,2.0]$ MeV (left), $\lambda\in[20,200]$ MeV (middle),
and $\lambda\in[280,330]$ MeV (right).
We also provide the $f_{\star}(1/L;\lambda)$ data for more values of $\lambda$
in Fig. 7. As shown in the left panel,
$f_{\star}(L>4~{}\mathrm{fm};0<\lambda<2~{}\mathrm{MeV})$ increases as
$\lambda$ decreases, and thus the corresponding $d_{\rm IR}$ is also larger.
This tendency converges at $\lambda\sim 0.2$ MeV (orange band), which
corresponds to $d_{\rm IR}=2$ and is consistent with the $\lambda=0.13$ MeV
case (dark red band) within the uncertainty. However, this limit is
significantly different from $f_{\star}(1/L;0)$, as illustrated by the red band.
Thus, $f_{\star}$, and hence also $d_{\rm IR}$, is discontinuous at $\lambda=0$.
In contrast, the middle panel of Fig. 7 shows that $d_{\rm IR}$ changes
smoothly with $\lambda$ for $\lambda\in[20,200]$ MeV, corresponding to
$d_{\rm IR}\sim 1$ within 2$\sigma$. The change in $d_{\rm IR}$ is also
smooth in the range $\lambda\in[280,330]$ MeV, where $d_{\rm IR}$ converges
to 3 with increasing $\lambda$, as shown in the right panel. Therefore, the
discontinuity in $d_{\rm IR}$ occurs only at $\lambda=0$, given our
statistical precision at $a=0.105$ fm and $T=234$ MeV.
Figure 8: Function $f_{\star}(L)$ at $T\!=\!234\,$MeV in different spectral
regions, for all simulated sizes $L$. Left: zero modes and near-zero modes.
Right: plateau, just below $\lambda_{A}$, and at the bottom of the bulk. Shaded
regions are excluded from the displayed fits.
Finally, we give a justification for using the five largest lattices in our
fits, i.e., systems with $L\!>\!3\,$fm. To that end, we show in Fig. 8 the
functions $f_{\star}(1/L)$ for all simulated volumes, together with the
previously shown fits. Shaded areas mark the volumes excluded from these fits.
One can see that in the case of zero modes and near-zero modes (left plot), the
systems in the shaded region do not follow the fit curves, and they were thus
excluded from the fits in all spectral regions.
# To Rain or Not to Rain: Correlating GOES Flare Class and Coronal Rain
Statistics
E I Mason Predictive Science Inc.
9990 Mesa Rim Rd, Suite 170
San Diego, CA 92121, USA K Kniezewski United States Naval Academy
121 Blake Road
Annapolis, MD 21402, USA
(Received March 8, 2022; Accepted September 21, 2022)
###### Abstract
Post-flare arcades are well-known components of solar flare evolution, which
have been observed for several decades. Coronal rain, cascades of
catastrophically cooled plasma, outlines these loops and provides eye-catching
evidence of the recent flare. These events are acknowledged to be common, but
the scientific literature does not include any statistical overview
documenting just how common the phenomenon actually is. This study reviews
Solar Dynamics Observatory Atmospheric Imaging Assembly (SDO AIA) observations
of 241 flares collected from the Space Weather Prediction Center (SWPC)
database between 2011 and 2018. The flares cover the entire strength range of
the C, M, and X GOES classes, and are distributed evenly across the SDO-
observed majority of Solar Cycle 24. We find that post-flare arcade rain
occurs for nearly all X and most M-class flares, but that it tapers off
rapidly within C-class flares. There appears to be a cut-off point around C5,
below which the occurrence of post-flare arcade rain drops significantly.
There is also a general positive correlation between GOES class and the
average duration of post-flare rain events. Post-flare arcade rain events in
X- and M-class flares appear to track with the sunspot number, providing a
potential new tool for estimating, if not predicting, solar cycle strength.
Furthermore, arcades are observed to persist for up to several days after the
originating flare, transitioning from hosting post-flare rain to typical
quiescent active region condensations. These results open up further avenues
for future research, including new methods to estimate energy deposition and
to gain greater insight into steady active region heating.
Journal: ApJ
## 1 Introduction
This paper addresses coronal rain events in solar flares, investigating the
frequency with which such events occur, their duration, and other salient
details. We correlate these statistics to overarching open questions
concerning the Sun: flare evolution, coronal mass ejection (CME) creation,
solar cycle characteristics, and localized coronal heating.
White-light study of solar flares began in 1859 with the independent but
concurrent observations of Carrington (1859) and Hodgson (1859), and
investigation of these explosive events spread to both longer and shorter
wavelengths – sometimes intentionally, sometimes serendipitously – as
technology advanced. The current model for flares, known as the standard model
and succinctly summarized by Holman (2016), holds that magnetic reconnection
across field lines arrayed along a polarity inversion line releases energy,
heating the plasma to temperatures in excess of 30 MK and accelerating
particles along the newly-rearranged magnetic field. Flares are often cited as
sources of solar energetic particles (e.g., Reames (1990), Cliver et al.
(2012), Reames (2017)), radio bursts (for starters, see Wild & Smerd (1972),
Aschwanden et al. (1985), Nelson & Melrose (1985), and Cane et al. (2002)),
and CMEs (Feynman & Hundhausen (1994), Švestka (2001), Andrews (2003), Yashiro
et al. (2005), Cane & Lario (2006), among many others).
According to the flare Standard Model (Holman (2016)), shearing along a
polarity inversion line (PIL) elongates field lines; eventually, some form of
disturbance can cause these lines to distort, triggering reconnection along
the PIL. This builds up a flux rope (which carries twist, translated upward
into the corona from the shearing) above an arcade of loops that are arrayed
roughly perpendicular with respect to the PIL. If the flare is eruptive, the
flux rope is then ejected into the outer corona and the solar wind as a CME
(Lin & Forbes (2000), Mittal & Narain (2010)); otherwise, it is considered a
confined flare. The conditions which determine whether or not a flare is
confined are not well-known (Sun et al. (2015), Zuccarello et al. (2017),
Mason et al. (2021)). These events carry magnetic field as well as plasma into
the heliosphere, and can strongly affect systems both on the Earth and in
space. Early observations tied their existence almost solely to solar flares,
but it was subsequently found that CMEs also frequently form from more
quiescent events like prominence eruptions (Gosling (1993)). However, strong
(and particularly long-duration) flares are well-correlated with CMEs, as
discussed in Sheeley et al. (1983) and Yashiro et al. (2005).
The arcade loops that form below the flux rope are filled with extremely hot,
dense plasma through electron beam heating, in a process termed “chromospheric
evaporation”. This leaves the loop far outside normal coronal conditions,
rendering it thermally unstable (Parker (1953), Priest
(1978), Antiochos (1980), and Klimchuk (2019), among many others). The
temperature and density in the loop are unsupported by the energy being added
by whatever ambient background or footpoint heating is occurring in the arcade
loop, so the loop begins radiating rapidly. Here we reach the end of what is
well-understood about flare arcades: if the temperature profile of the loop
were uniform, we would expect the arcade loop to cool and evacuate in its
entirety, as was found by Reep et al. (2020) for loops heated purely by the
electron beams. Contrary to this, the recent work of Ruan et al. (2021)
successfully reproduced post-flare rain with a 2.5D MHD simulation without
even including beams. That model gets sufficient heating and condensations
simply from the magnetic energy conversion during flare reconnection.
Observations show that individual condensations occur within arcade loops;
this implies instead that a localized temperature inversion is critical to the
post-flare arcade rain formation mechanism, to seed nascent rain blobs.
In this work, we focus on these diminutive condensations. It is important to
point out and clearly define the phenomenon which we study here, because it
delineates an important distinction for observations. Any typical active
region will host significant numbers of coronal condensations on its loops
during its lifetime, and these may increase in number in the time after the
impulsive phase of a flare. However, since our goal in this paper is to
quantify that rain which is an immediate effect of flare reconnection, the
most natural observational selection was to only consider condensations formed
within the flare arcade. One of the main findings of this study, however, was
the discovery that _the arcades themselves can often become simply other
coronal loops that host quiescent rain, given enough time_ : this frequently
occurs with medium-sized flares, where the arcade grows to a point but does
not vanish. Since the rain which can occur continuously in these loops several
days after the flare is almost certainly not causally linked to the flare,
this rain also cannot be considered. Therefore, in our observations we also
needed a marker for the end of the rain duration. We define it to be at the
point that the raining arcade loops ceased to increase in height (best for
off-limb flares), or when the flare ribbons stopped evolving (best for on-disk
flares). Both of these measures – which are two observables of the same
physical process – signal that the flare reconnection has ceased and that
further rain on the same loops is likely due to another heating source. Using
this narrow definition, we consider only the rain directly tied to flare
reconnection, producing results that are the most relevant to flare dynamics.
This restricted sense is what we mean by the common terms _post-flare rain_ and
_post-flare arcade rain_ throughout this paper.
One of the greatest benefits of the Solar Dynamics Observatory Atmospheric
Imaging Assembly (SDO AIA; Pesnell et al., 2011; Lemen et al., 2011) has been
the sheer number of events which have been captured in detail by the near-
constant view across a broad range of extreme ultraviolet (EUV) passbands.
Much of the flare literature revolves around in-depth case studies of
individual flares, or small collections of similar flares. Here we conduct an
assessment of post-flare rain in a large pool of events. Rain has been
reported for years as a feature of strong flares, but its frequency and
statistical properties have not, to our knowledge, ever been quantified.
While not the brightest signal of the elusive magnetic dynamics involved in
flares, post-flare rain provides a critical reservoir which we can use to
measure how much of the flare’s energy is actually deposited in the plasma
still trapped in the corona. Energy which is radiated away as the
condensations form is energy that is no longer contained in the magnetic
field, powering a CME or an SEP event (Reames (2013), Drake & Swisdak (2012)).
We review all of the X-class flares that occurred between 2011 and 2019, as
well as a large, representative sample of M- and C-class flares from the same
period (we detail the collection and analysis of these in Section 2). This
survey allows us to build a strong basis for general statements about energy
budgeting and magnetic field dynamics in more energetic flares.
## 2 Data Collection and Analysis
### 2.1 Data Selection
We queried the Heliophysics Events Knowledgebase (HEK; accessible at
https://www.lmsal.com/hek/index.html) for flares detected by the GOES mission
between 2011 January 1 and 2019 December 31. We selected this date range to
cover most of Solar Cycle 24 for which we have routine daily SDO coverage.
Solar Cycle 24 began in December 2008, but SDO finished commissioning and
began its main science mission in May 2010. During this time period, there
were 49 X-class flares, 732 M-class flares, and 7681 C-class flares. Since
this study was conducted manually, a subset of the M and C-class flares were
selected; all of the X-class flares were analyzed. We chose flares to create a
pool that covered roughly equal numbers of flares in all relevant
characteristics: class (e.g., M), intra-class magnitude (e.g., M5), and
distribution across the solar cycle (equal numbers from each year covered by
the study, when possible). All the flares in the data set were selected for
their adherence to these characteristics, and prior to analysis; a flare was
not viewed until it had been selected for the study. If enough flares in a
category were unable to be analyzed due to data dropouts, eclipsing of SDO, or
similar reasons such that it would skew the distribution of the final data
set, more flares were chosen from the list.
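GOES classes map logarithmically onto peak 1-8 Å soft X-ray flux (A, B, C, M, and X correspond to $10^{-8}$ through $10^{-4}$ W m$^{-2}$), so class and intra-class magnitude can be compared numerically; a small helper along these lines (hypothetical function name, not part of the study's pipeline):

```python
# Convert a GOES class string (e.g. "M5.2") to peak 1-8 A flux in W/m^2.
PEAK_FLUX = {"A": 1e-8, "B": 1e-7, "C": 1e-6, "M": 1e-5, "X": 1e-4}

def goes_flux(goes_class: str) -> float:
    return PEAK_FLUX[goes_class[0].upper()] * float(goes_class[1:])

print(f"{goes_flux('C5.0'):.0e}")             # 5e-06
print(goes_flux("X1.0") / goes_flux("C5.0"))  # an X1 flare is 20x a C5
```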
The final selection included all 49 X-class flares, as well as 78 M-class and
118 C-class flares. Five of the X-class flares had data dropouts which precluded
analysis, so the statistics here cover the 44 with available AIA data. A
few other events had truncated rain durations because of data dropouts or
subsequent flares in the same location; these are all noted as such in the
full dataset, available on Zenodo at https://doi.org/10.5281/zenodo.7028211.
From the GOES flare list, we obtained the flare start, peak, and end times;
the location on the Sun in heliographic coordinates; and the source NOAA
active region designation.
### 2.2 Observational Analysis and Challenges
The analysis itself was primarily conducted via visual inspection of SDO AIA
data. Movies were generated with JHelioviewer (Müller et al. (2017)), showing
eight hours of data at a time beginning with the start time of the flare, and
171 and 304 Å data were evaluated for signs of cool condensations (see Figure
1 for a still-frame of one such movie) within flare arcades. Observationally,
arcade loops appear as relatively short, bright loops with footpoints rooted
at (indeed, defining) the flare ribbons. They generally form perpendicular to
the solar surface, and are almost always symmetric about their apex. These
were the observational hallmarks we used, which include characteristics that
make them readily identifiable both on-disk and off-limb.
When post-flare rain was present up until the end of the 8-hour movie, the
inspection time period was extended (this occurred primarily for X-class
flares, as can be seen from Figure 2). 304 Å is the coolest main SDO EUV
channel, and the one most commonly used to observe coronal rain in emission;
we chose 171 Å for its cool, well-defined loop structures in active regions,
where condensations can often appear clearly outlined as both emission and
absorption features. Hotter lines also sometimes show rain in absorption, but
the attendant diffuse glow from the active region obfuscates individual loops,
and so these were not widely used in our analysis. Once post-flare arcade rain
was confirmed in 304, the duration of the event was measured and recorded, in
accordance with the definition we outlined in Section 1.
There are, of course, pitfalls to attempting to observe small, cool structures
in 304, particularly when the flare occurs on disk. One of these challenges is
that 304 frequently saturates early in a flare, and some detail is lost. As
can be seen from Figure 1, we adjusted the maximum and minimum values for the
color table from the default scaling that is commonly applied to AIA data;
while this did not entirely eliminate the early saturation during the
impulsive phase, it did frequently resolve structure significantly earlier
than would normally be observed with the standard scaling rules. The early
saturation is a well-known problem which cannot be overcome without
introducing another instrument, which would have significantly restricted our
pool of events.
Another common observational challenge relates to identifying cool
condensations when the flare occurred on-disk. Since 304 Å observations are
dominated on-disk by chromospheric and transition region emission, rain can be
difficult to pick out. However, this problem affects quiescent rain more than
post-flare arcade rain; when observing the flares, the activated source region
was bright enough that the cool blobs were easily viewed the majority of the
time. For cases where the observation was somewhat ambiguous, we searched for
transient dark structures (e.g., blobs, streaks, or lines) running
perpendicular to the flare ribbons in both 171 and 304. Since 171 is the next
coolest channel to 304, it was often possible to see a brightening form and
then dim in 171, followed by a cospatial signal in 304. For the handful of
highly ambiguous cases, we used running difference movies to capture
precipitating condensation motions that were otherwise masked by more random
activity in the region.
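A running-difference sequence of the sort described here is simply frame-to-frame subtraction; a minimal sketch with a synthetic image stack standing in for the AIA data:

```python
import numpy as np

# Synthetic stand-in for an AIA image sequence: (time, ny, nx)
frames = np.random.default_rng(1).random((10, 64, 64))

# Running difference: frame[t+1] - frame[t]. Moving condensations appear as
# paired bright/dark signatures, while static background largely cancels.
run_diff = np.diff(frames, axis=0)
assert run_diff.shape == (9, 64, 64)
```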
Figure 1: Left: Screen capture of animation (Movie 1, 5 seconds in duration,
available online showing 108 minutes of flare arcade evolution) showing the
flare arcades from an M6.5 flare in SDO AIA 171 Å. An example cool
condensation is indicated in absorption with the blue arrow, and in emission
with a black arrow. Right: The same flare as seen in Movie 1 in 304 Å, with
adjusted scaling to resolve the condensations. An example post-flare arcade
rain blob is indicated in emission with a white arrow, which is not seen in
the left-hand panel, while the black arrow corresponds to an analogous
condensation signature as that indicated by the black arrow in 171.
The last but most pervasive challenge when observing medium-to-large flares
did not involve problems with observing the condensations themselves, but
rather how to define when flare-related rain transitioned to the typical
active region quiescent rain. For many of the flares analyzed here, the arcade
loops did not disappear after the flare; they either began raining again, or
continued raining without pause for tens of hours after all other
observational signatures of the flare had ended. An example of this can be
seen in the movie associated with Figure 1. As we discussed in Section 1, the
most logical, physically-motivated dividing line was the point at which the
arcade reached its apparent maximum height or the point when the flare ribbons
ceased evolving (these being physically equivalent), which both imply that the
flare reconnection forming the arcade has ended. Within a few minutes, the
last arcade-coupled rain should fall, and subsequent rain should be motivated
by more quiescent loop conditions.
Due to the limitations inherent in remote EUV observations with a resolution
lower than that of the smallest condensations, there are likely to be smaller
rain events that are unresolved or lost in the background signal. There are
also line-of-sight constraints which make it difficult to determine when the
coronal rain has reached the chromosphere. None of the uncertainties presented
in this section are unique to this study, but are indeed shared by all remote
solar observations. While it is possible to wait for a prime alignment to
observe when studying long-duration structures, the impulsive nature of solar
flares often means taking measurements at imperfect angles. This study,
therefore, represents a uniformly-analyzed body of flares which were chosen
for the best possible statistical coverage, whose condensations were
identified by eye and were categorized with a particularly restrictive
definition in order to provide the most physically meaningful results.
We also determined the eruptivity of each solar flare event (whether or not a
CME was released) through HEK. HEK was queried for 1-hour-long spans beginning
at the flare start time, and we then assessed the returned CME or CMEs to
determine whether their locations correlated with the flare within the
respective time frame. CME candidates for these
with the flare within the respective time frame. CME candidates for these
flares were visually identified and confirmed through the SOHO LASCO CME
Catalog.111This CME catalog is generated and maintained at the CDAW Data
Center by NASA and The Catholic University of America in cooperation with the
Naval Research Laboratory. SOHO is a project of international cooperation
between ESA and NASA.
## 3 Statistical Results
Table 1 summarizes the proportions of flares which did and did not host post-
flare arcade rain. Only two X-class flares out of the 44 did not produce
coronal post-flare arcade rain; one of these, on 2012 March 5 at 02:30 UTC,
was a large and unusual flare near the limb that simply did not form an
arcade, but which was followed several hours later by a second flare with a
classic arcade and several hours of post-flare rain (the second flare was not
part of our analysis). This pair of flares is worthy of further study, to
determine how they were related, and what magnetic changes would lead to their
unique characteristics. The other (2013 November 8, 04:20 UTC) was in a small
confined jet on the border of NOAA active region 11890, which did not exhibit
an arcade large enough to be able to detect individual condensations. The
preponderance of M-class flare arcades also exhibited post-flare arcade rain,
although there is a slight drop of rain frequency between the X and M classes.
C-class flares break this trend by showing a much lower propensity for
condensation formation; only $26\%$ of these flares in the data set hosted
post-flare arcade rain. Furthermore, this tendency showed an uneven
distribution across the C flare magnitudes: the bottom-right panel of Figure 3
shows that there is significantly more rain observed in flares of intensity C5
or higher. As we discuss in the next section, this implies a minimum energy
deposition required for condensations to form.
Table 1: Total Flares Analyzed by Post-Flare Arcade Rain Occurrence

Class | Post-Flare Arcade Rain | No Post-Flare Arcade Rain
---|---|---
X | 42 (95%) | 2 (5%)
M | 56 (72%) | 22 (28%)
C | 31 (26%) | 87 (74%)
Totals | 129 (54%) | 111 (46%)
Figure 2 shows the overall correlation between GOES class and the duration of
post-flare rain events. X-class flares have a broad distribution between the
<1 to 6 hour bins; the average rain duration was 2 hours 51 minutes. M-class
flares show a strong peak of 58 in the <1-3 hour range (average value 1 hour
14 minutes), which tails off precipitously for longer post-flare arcade rain
events, while C-class flares have 87 events with no post-flare arcade rain and
30 with only short post-flare arcade rain “storms” (average value 15 minutes);
there was only one C-class flare with post-flare arcade rain slightly over 3
hours in length, and none longer. Figure 3 shows these correlations in more
detail by graphing the time spans of post-flare arcade rain events for each
class separately. The time bins there are uniform between X and M-classes, but
are broken down into smaller time spans for the C-class flares. The
correlation between flare class and post-flare arcade rain duration appears
clearly even with a data set that spans a large representative sample of the M
and C-class catalogs, so we consider the relationship strong and significant.
Figure 2: Graph summarizing the duration of post-flare rain events by GOES
class. X-class flares have a wide distribution of flare rain durations, while
M-class flares’ post-flare arcade rain most often lies in the <1-3 hour range.
C-class post-flare arcade rain never exceeded 3 hours in duration in the
sample studied here (the bins are not end-inclusive; for a more detailed
breakdown of C-class rain times, see Figure 3d).
(a) (b)
(c) (d)
Figure 3: Top left: histogram presenting the percentage frequency of X-class
flares hosting post-flare arcade rain of duration ranging from under an hour
to over 6
hours. Top right: analogous histogram for M-class flares. Bottom left:
analogous histogram for C-class flares; please note that these are subdivided
into shorter time bins only ranging between <1 hour and 4 hours, due to the
absence of any longer post-flare arcade rain events detected in the survey.
Bottom right: the percentages of flares which host post-flare arcade rain,
broken down by strength within the C-class designation, showing the strong
change in post-flare arcade rain occurrence around C5.
Given the very wide range of post-flare arcade rain durations which we found
in the sample, we investigated whether there was a relationship between the
length of the flare itself, as measured by the GOES X-ray intensity, and the
duration of the subsequent post-flare arcade rain. Figure 4 shows the results
of this investigation; we applied a linear regression to the full set of each
class of raining flare and assessed the fit via the $r^{2}$ measure. None of
the data shows a statistically significant relationship, although the M-class
flares show the least-poor correlation, with an $r^{2}$ value of 0.2486.
X-class flares tended to have long post-flare arcade rain events, regardless
of the length of the flare, while C-class flares had a wide flare duration
spread but very short rain durations. There has been some debate about the
actual duration of C-class flares, and several background-subtraction methods
have been put forward (discussed recently in Reep & Knizhnik, 2019). Given the
very low rates of flare rain across all C-class flares, however, it is
unlikely that background subtracting these low-energy flares would
significantly improve the correlation.
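The fitting procedure behind Figure 4 amounts to an ordinary least-squares line plus an $r^{2}$ computation; the routine below is a generic sketch of that assessment, and the sample numbers are purely illustrative, not the survey data.

```python
import numpy as np

def linear_fit_r2(x, y):
    """Least-squares line y = a*x + b plus the coefficient of
    determination r^2 used to judge the fit quality."""
    a, b = np.polyfit(x, y, 1)
    residuals = y - (a * x + b)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# Illustrative flare vs. rain durations (minutes); not the survey data
flare_dur = np.array([20.0, 35.0, 50.0, 80.0, 120.0])
rain_dur = np.array([40.0, 55.0, 90.0, 100.0, 200.0])
slope, intercept, r2 = linear_fit_r2(flare_dur, rain_dur)
```

An $r^{2}$ near 0.25, as found for the M-class flares, would indicate that only about a quarter of the variance in rain duration is explained by flare duration.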
Figure 4: Graph of flare duration, as determined by the GOES X-ray data, vs.
post-flare arcade rain duration. Linear fits for each class show generally
positive relationships, but none are statistically significant.
M-class flares have the highest correlation, with an r-squared value of
approximately 0.25.
Figure 5: This graph presents the proportions of X, M, and C class flares
which are associated with CMEs and post-flare rain. There are few statistical
conclusions that can be drawn between the raining and eruptive categories, due
to the low rates of non-raining flares in X- and M-class flares, and the low
rates of eruptive flares in C-class flares.
Figure 6: Top: a histogram summarizing the relationship between post-flare
arcade rain occurrence and the Hale class of the source active region (the
bins along the x-axis), color-coded by GOES flare class. Bottom: an analogous
histogram showing eruptivity of flares and the source region’s Hale class.
Both show strong preference for $\beta$ type active regions.
Figure 7: Top: This graph shows the relationship between the proportion of
post-flare arcade rain occurrences (by GOES class) and the solar cycle as
plotted by the annual sunspot number. Bottom left: Sunspot number versus the
percentage of flares in each class that produce rain. Linear fit lines have
been plotted and their respective $r^{2}$ values are shown on the left side of
the graph. All are reasonably good fits, ranging from about 0.55 (C-class)
to 0.74 (M-class). These may prove to be useful tracers for the solar cycle.
Bottom right: This graph shows the residuals for all three classes of flare;
the distributions of all three are random, showing that a linear fit is a
good choice for these relationships.
Next, we explored a potential connection between the eruptivity of a flare and
whether it produced post-flare arcade rain, since both are well-known to
correlate separately with GOES class. Figure 5 shows a nested pie chart for
all three classes of flares, indicating the proportions of the flares which
fell into the classifications CME/post-flare arcade rain, CME/no post-flare
arcade rain, no CME/post-flare arcade rain, and no CME/no post-flare arcade
rain. As has already been well-established by Kawabata et al. (2018) and
references therein, most X (here $80\%$) and M-class flares ($71\%$) are
eruptive; however, only $13\%$ of the C-class flares were associated with
CMEs.
To try to establish whether there was a correlation between CMEs and post-
flare arcade rain events, we compared the proportions of post-flare arcade
rain/CME and no post-flare arcade rain/CME with no post-flare arcade rain/no
CME and post-flare arcade rain/no CME. The results of this are found in Table
2. There is a statistically insignificant correlation between eruptivity and
post-flare arcade rain occurrence for C-class flares (Fisher’s exact test
gives a value of 0.204). M-class flares show a nearly statistically-
significant relationship between rain presence and eruptivity, with a Fisher
test value of 0.084; X-class flares have the poorest relationship, with a
Fisher test value of 0.704. We chose the Fisher exact test due to the
proportionately small number of non-eruptive flares in X- and M-classes in
general, but would still caution against drawing definitive conclusions given
the low numbers in the non-eruptive category. While intriguing, further study
would be required to show that these relationships are more than simple
correlations.
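The contingency comparison above is a standard Fisher exact test on a 2×2 table; a stdlib-only sketch (a textbook implementation, not the authors' code) applied to the M-class counts from Table 2:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def hyper(k):
        # probability of k counts in the top-left cell, margins fixed
        return comb(row1, k) * comb(row2, col1 - k) / comb(n, col1)

    p_obs = hyper(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    eps = 1e-12  # tolerate floating-point ties
    return sum(hyper(k) for k in range(lo, hi + 1) if hyper(k) <= p_obs + eps)

# M-class flares (Table 2): rain (no CME, CME) = (14, 42); no rain = (9, 13)
p_m = fisher_exact_p(14, 42, 9, 13)
```

The same routine applied to the X- and C-class rows of Table 2 reproduces the pattern described above: none of the classes reach significance at the usual 5% level.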
We also compared the Mt. Wilson Hale class of each source active region across
the flare classes and against the flares’ eruptivity. The results of this
investigation are shown in Figure 6; there are a few flares of each class in
$\alpha$ type regions, but the overwhelming number of flares of all types
occur in the more complex $\beta$ designation – particularly
$\beta\gamma\delta$ spots, which are commonly known to be the most active
(Benz (2017) and references therein). We saw very similar results when the
Hale class was graphed against flare eruptivity, seen in the bottom graph of
Figure 6.
Finally, to determine whether the presence of post-flare rain followed the
solar cycle, we plotted the post-flare arcade rain-positive flares in each
class by year as a percent of all flares of that class, against the SILSO
annual sunspot number, obtained from the Royal Observatory of Belgium,
Brussels (https://wwwbis.sidc.be/silso/datafiles). The results can be seen in
Figure 7. All three classes of flares track the solar cycle in rough terms,
with reasonably good fits for positive linear relationships to the solar
cycle, as shown in the bottom left graph of Figure 7. X-class flares have an $r^{2}$
value of 0.67, while M- and C-class flares exhibit values of 0.74 and 0.55,
respectively. The bottom right graph shows that the residuals for each class
are fairly randomly distributed, which means that the linear fit is a
reasonable one for this relationship. Therefore, we conclude that X- and
M-class flares show promise as indicators of solar cycle strength. While the
span covered here did not quite capture a full solar cycle, these data show
moderate levels of post-flare arcade rain before the peak, increasing peaks
that match the sunspot number peak, and then dying away almost entirely across
the years of solar minimum. We discuss this finding in more detail in the next
section.
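The residual check described for the bottom-right panel of Figure 7 rests on two defining properties of a least-squares fit (residuals sum to zero and are uncorrelated with the predictor), so what is actually inspected is whether any pattern remains beyond that. A sketch with invented numbers (not the SILSO data):

```python
import numpy as np

def fit_residuals(x, y):
    """Residuals of a least-squares linear fit; a patternless mix of
    signs supports the adequacy of the linear model."""
    a, b = np.polyfit(x, y, 1)
    return y - (a * x + b)

# Invented yearly values: sunspot number vs. % of flares with rain
sunspots = np.array([30.0, 55.0, 80.0, 110.0, 95.0, 60.0, 25.0])
rain_pct = np.array([40.0, 55.0, 70.0, 85.0, 75.0, 58.0, 35.0])
res = fit_residuals(sunspots, rain_pct)
```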
Table 2: Post-Flare Arcade Rain and CME Occurrence

Class | Rain, No CME | Rain, CME | No Rain, No CME | No Rain, CME
---|---|---|---|---
X-class | 7 | 35 | 0 | 2
M-class | 14 | 42 | 9 | 13
C-class | 26 | 5 | 76 | 11
## 4 Implications
The purpose of this paper is to establish a benchmark set of statistics for a
well-known phenomenon which has nevertheless been under-reported. The main
findings of this statistical study can be summarized as follows:
1.
Post-flare arcade rain is prevalent to the point of ubiquity in X-class flares
(as previously reported, see Benz, 2017), and very prevalent across the
M-class flare population. However, flare duration and post-flare arcade rain
duration are _not_ well-correlated. One notable subset we identified, confined
jets, did not have identifiable post-flare arcade rain despite relatively
strong flares. We suspect that this is a simple matter of insufficient spatial
resolution, since there is plenty of work showing condensations in larger jets
with the same dynamics (Raouafi et al. (2016), Kumar et al. (2019), Mason et
al. (2019)). There is also a distinct, direct relationship between GOES class
and post-flare arcade rain duration, suggesting that more sets of loops
reconnect across the neutral line to form arcades in stronger flares. Post-
flare arcade rain also occurs, though with markedly reduced frequency, in
C-class flares, which make up approximately half of the flares analyzed.
2.
X- and M-class post-flare arcade rain may be a useful tracker of the solar
cycle. While rain in all flares tracked the solar cycle to a moderate extent,
the highest correlation occurred for M-class flares, followed by X-class
flares. If this correlation is shown to extend across more than one solar
cycle, the rate of raining X- and M-class flares may be useful at the early
stages of a solar cycle to predict the strength of solar maximum, and later in
a cycle to tell how quickly solar minimum is approaching (with the caveat that
weaker flares are more common, particularly as active regions weaken near the
tail of the cycle). Further research is planned to corroborate this finding.
3.
The presence or absence of post-flare rain is not consistently correlated with
whether the flare was eruptive or not. As pointed out in the discussion
concerning Table 2, X- and C-class flares do not show any relationship between
eruptivity and rain presence; M-class flares do, although it is not quite
statistically significant. More work is required to settle whether the weak
correlation is truly due to the post-flare arcade rain or simply the fact that
the majority of X and M-class flares are eruptive and the majority of C-class
flares are not, producing significantly smaller values in one category in each
case. Regardless, this lack of strong, consistent correlation is perhaps
unsurprising; while the two are connected by the process of reconnection,
which influences both the creation of the CME and the amount of energy
injected into the arcade loops, the dynamics which drive the arcade after its
formation (radiative cooling and conduction) are physically isolated from and
occur at different time scales from those driving CME creation (magnetic
reconnection). By all accounts, flare reconnection occurs in a very short time
period. Once the reconnection has happened, the loop cools in relative
isolation, disconnected from the presence or absence of a CME evolving above
it. Certainly, if one considers the pool of all flares, the strongest are more
likely to host CMEs and more likely to host post-flare arcade rain, and our
findings support this. Within a class, though, the reconnection should be
roughly comparable across all of the flares, and there we do not see a strong
correlation between CME and post-flare arcade rain occurrence. It is entirely
possible for, as an example, an M-class arcade loop to simply cool non-
catastrophically and drain (with a CME or without), and it is also possible
for an M-class arcade loop to cool catastrophically and form post-flare arcade
rain (with a CME or without).
4.
Post-flare rain can act as a proxy for flare energy release. The near-
universal prevalence of post-flare arcade rain in X and M-class flares,
coupled with its distinct drop in occurrence around C5, point to a direct
energy correlation for condensation formation in arcades. Further research is
required to better define the low-end energy cutoff and other factors which
may affect it. However, as individual post-flare arcade rain condensation
geometry estimates have been carried out before (Antolin et al. (2015)), and
radiative cooling rates are well-established, these observations of post-flare
arcade rain presence and duration can be utilized as a proxy to aid in
calculating well-constrained estimates of the energy released during
impulsive-phase flare reconnection.
5.
Post-flare arcades can persist for days after the flare, eventually
transitioning from post-flare rain to quiescent active region condensations.
While most C-class flares did not produce arcades, and most X-class flares
produced arcades that faded after a few hours, many M-class and some X-class
flares produced arcades that continued to be visible in 171 Å for several days
after the flare, well after the ribbons had ceased evolving. At this point,
the pronounced region of higher emission at the loops’ tops dissipated, and
the loops proceeded to produce more erratic condensations that were no longer
coordinated with neighboring loops. This shift was subtle but common, and
points to a shift from post-reconnection condensation formation to the
ubiquitous quiescent active region thermal nonequilibrium condensations that
have been widely discussed. To the authors’ knowledge, this has not been
previously reported; the life span of such loops, which do not appear to
reconnect again readily after the flare, may be useful for studying active
region aging.
## 5 Acknowledgements
The authors would like to thank the reviewer for constructive suggestions that
greatly improved the manuscript, and Jeff Reep for suggesting this paper topic
several years ago. EIM was supported during the paper’s early development by
an appointment to the NASA Postdoctoral Program at the Goddard Space Flight
Center, administered by Universities Space Research Association under contract
with NASA, and later by NSF’s Solar-Terrestrial Program (Award No.
AGS-1923377). KK contributed to the research described in this paper through
an internship arranged at the Goddard Space Flight Center through the Naval
Academy’s Summer Internship Program. Both authors gratefully acknowledge
collaboration with the NASA GSFC Internal Scientist Funding Model program
“Magnetic Energy Buildup and Explosive Release in the Solar Atmosphere”.
## References
* Andrews (2003) Andrews, M. D. 2003, Sol. Phys., 218, 261, doi: 10.1023/B:SOLA.0000013039.69550.bf
* Antiochos (1980) Antiochos, S. K. 1980, The Astrophysical Journal, 241, 385, doi: 10.1086/158351
* Antolin et al. (2015) Antolin, P., Vissers, G., Pereira, T. M. D., Rouppe van der Voort, L., & Scullion, E. 2015, Astrophys. J., 806, 81, doi: 10.1088/0004-637X/806/1/81
* Aschwanden et al. (1985) Aschwanden, M. J., Wiehl, H. J., Benz, A. O., & Kane, S. R. 1985, Sol. Phys., 97, 159, doi: 10.1007/BF00152985
* Benz (2017) Benz, A. O. 2017, Living Reviews in Solar Physics, 14, 2, doi: 10.1007/s41116-016-0004-3
* Cane et al. (2002) Cane, H. V., Erickson, W. C., & Prestage, N. P. 2002, Journal of Geophysical Research (Space Physics), 107, 1315, doi: 10.1029/2001JA000320
* Cane & Lario (2006) Cane, H. V., & Lario, D. 2006, Space Sci. Rev., 123, 45, doi: 10.1007/s11214-006-9011-3
* Carrington (1859) Carrington, R. C. 1859, MNRAS, 20, 13, doi: 10.1093/mnras/20.1.13
* Cliver et al. (2012) Cliver, E. W., Ling, A. G., Belov, A., & Yashiro, S. 2012, Astrophys. J.l, 756, L29, doi: 10.1088/2041-8205/756/2/L29
* Drake & Swisdak (2012) Drake, J. F., & Swisdak, M. 2012, Space Sci. Rev., 172, 227, doi: 10.1007/s11214-012-9903-3
* Feynman & Hundhausen (1994) Feynman, J., & Hundhausen, A. J. 1994, J. Geophys. Res., 99, 8451, doi: 10.1029/94JA00202
* Gosling (1993) Gosling, J. T. 1993, J. Geophys. Res., 98, 18937, doi: 10.1029/93JA01896
* Hodgson (1859) Hodgson, R. 1859, MNRAS, 20, 15, doi: 10.1093/mnras/20.1.15
* Holman (2016) Holman, G. D. 2016, Journal of Geophysical Research (Space Physics), 121, 11,667, doi: 10.1002/2016JA022651
* Kawabata et al. (2018) Kawabata, Y., Iida, Y., Doi, T., et al. 2018, Astrophys. J., 869, 99, doi: 10.3847/1538-4357/aaebfc
* Klimchuk (2019) Klimchuk, J. A. 2019, Sol. Phys., 294, 173, doi: 10.1007/s11207-019-1562-z
* Kumar et al. (2019) Kumar, P., Karpen, J. T., Antiochos, S. K., et al. 2019, ApJ, 873, 93, doi: 10.3847/1538-4357/ab04af
* Lemen et al. (2011) Lemen, J. R., Title, A. M., Akin, D. J., et al. 2011, in The Solar Dynamics Observatory (New York, NY: Springer US), 17–40, doi: 10.1007/978-1-4614-3673-7_3
* Lin & Forbes (2000) Lin, J., & Forbes, T. G. 2000, J. Geophys. Res., 105, 2375, doi: 10.1029/1999JA900477
* Mason et al. (2019) Mason, E. I., Antiochos, S. K., & Viall, N. M. 2019, Astrophys. J.l, 874, L33, doi: 10.3847/2041-8213/ab0c5d
* Mason et al. (2021) Mason, E. I., Antiochos, S. K., & Vourlidas, A. 2021, ApJ, 914, L8, doi: 10.3847/2041-8213/ac0259
* Mittal & Narain (2010) Mittal, N., & Narain, U. 2010, Journal of Atmospheric and Solar-Terrestrial Physics, 72, 643, doi: 10.1016/j.jastp.2010.03.011
* Müller et al. (2017) Müller, D., Nicula, B., Felix, S., et al. 2017, A&A, 606, A10, doi: 10.1051/0004-6361/201730893
* Nelson & Melrose (1985) Nelson, G., & Melrose, D. 1985, Solar Radiophysics: Studies of emission from the sun at metre wavelengths, 333
* Parker (1953) Parker, E. N. 1953, ApJ, 117, 431, doi: 10.1086/145707
* Pesnell et al. (2011) Pesnell, W. D., Thompson, B. J., & Chamberlin, P. C. 2011, in The Solar Dynamics Observatory (New York, NY: Springer US), 3–15, doi: 10.1007/978-1-4614-3673-7_2
* Priest (1978) Priest, E. R. 1978, Sol. Phys., 58, 57, doi: 10.1007/BF00152555
* Raouafi et al. (2016) Raouafi, N. E., Patsourakos, S., Pariat, E., et al. 2016, Space Science Reviews, 201, 1, doi: 10.1007/s11214-016-0260-5
* Reames (1990) Reames, D. V. 1990, Astrophys. J.s, 73, 235, doi: 10.1086/191456
* Reames (2013) —. 2013, Space Sci. Rev., 175, 53, doi: 10.1007/s11214-013-9958-9
* Reames (2017) —. 2017, Solar Energetic Particles, Vol. 932, doi: 10.1007/978-3-319-50871-9
* Reep et al. (2020) Reep, J. W., Antolin, P., & Bradshaw, S. J. 2020, The Astrophysical Journal, 890, 100, doi: 10.3847/1538-4357/ab6bdc
* Reep & Knizhnik (2019) Reep, J. W., & Knizhnik, K. J. 2019, ApJ, 874, 157, doi: 10.3847/1538-4357/ab0ae7
* Ruan et al. (2021) Ruan, W., Zhou, Y., & Keppens, R. 2021, ApJ, 920, L15, doi: 10.3847/2041-8213/ac27b0
* Sheeley et al. (1983) Sheeley, N. R., J., Howard, R. A., Koomen, M. J., & Michels, D. J. 1983, Astrophys. J., 272, 349, doi: 10.1086/161298
* Sun et al. (2015) Sun, X., Bobra, M. G., Hoeksema, J. T., et al. 2015, ApJ, 804, L28, doi: 10.1088/2041-8205/804/2/L28
* Švestka (2001) Švestka, Z. 2001, Space Sci. Rev., 95, 135, doi: 10.1023/A:1005225208925
* Wild & Smerd (1972) Wild, J. P., & Smerd, S. F. 1972, Annual Review of Astron. and Astrophys., 10, 159, doi: 10.1146/annurev.aa.10.090172.001111
* Yashiro et al. (2005) Yashiro, S., Gopalswamy, N., Akiyama, S., Michalek, G., & Howard, R. A. 2005, Journal of Geophysical Research (Space Physics), 110, A12S05, doi: 10.1029/2005JA011151
* Zuccarello et al. (2017) Zuccarello, F. P., Chandra, R., Schmieder, B., Aulanier, G., & Joshi, R. 2017, A&A, 601, A26, doi: 10.1051/0004-6361/201629836
# Metal Borohydrides as high-$T_{\text{c}}$ ambient pressure superconductors
Simone Di Cataldo <EMAIL_ADDRESS> Dipartimento di Fisica,
Sapienza Università di Roma, 00185 Roma, Italy; Institute of Theoretical and
Computational Physics, Graz University of Technology, NAWI Graz, 8010 Graz,
Austria. Lilia Boeri <EMAIL_ADDRESS> Dipartimento di Fisica, Sapienza
Università di Roma, 00185 Roma, Italy; Centro Ricerche Enrico Fermi, Via
Panisperna 89 A, 00184 Rome, Italy.
###### Abstract
The extreme pressures required to stabilize the recently discovered
superhydrides represent a major obstacle to their practical application. In
this paper, we propose a novel route to attain high-temperature
superconductivity in hydrides at ambient pressure, by doping commercial metal
borohydrides. Using first-principles calculations based on Density Functional
Theory and Migdal-Éliashberg theory, we demonstrate that in Ca(BH4)2 a
moderate hole doping of 0.03 holes per formula unit, obtained through a
partial replacement of Ca with monovalent K, is sufficient to achieve
$T_{\text{c}}$’s as high as 110 K. The high-$T_{\text{c}}$ arises because of
the strong electron-phonon coupling between the B-H $\sigma$ molecular
orbitals and bond-stretching phonons. Using a random sampling of large
supercells to estimate the local effects of doping, we show that the required
doping can be achieved without significant disruption of the electronic
structure and at moderate energetic cost. Given the wide commercial
availability of metal borohydrides, the ideas presented here can find prompt
experimental confirmation. If successful, the synthesis of high-$T_{\text{c}}$
doped borohydrides will represent a formidable advancement towards
technological exploitation of conventional superconductors.
## I Introduction
Since the discovery of superconductivity with a critical temperature
($T_{\text{c}}$) of 203 K at 150 GPa in H3S [1, 2], hydrogen-rich
superconductors have revolutionized the landscape of superconductivity
research; After H3S, many other superhydrides with $T_{\text{c}}$’s close to,
or even above, room temperature have been found [3, 4, 5, 6, 7, 8, 9, 10, 11,
12], but the extreme synthesis pressures represent an insurmountable obstacle
to any practical application.
On the other hand, an increasing demand exists for new materials enabling
superconductor-based technologies: for most large-scale applications,
synthesizability at ambient pressure is a strict requirement, whereas the
threshold for $T_{\text{c}}$ is considerably lower than ambient temperature,
being dictated instead by the need to maintain a robust superconducting state
under liquid nitrogen cooling [13].
The spectacular success of computational methods for superhydrides raises the
hope that these techniques may accelerate the identification of suitable
materials [14, 15]. The first predictions which have started to appear in
literature follow essentially two different routes to boost $T_{\text{c}}$
within the conventional ($ep$) scenario: 1) optimization of the effective
chemical pressure in ternary hydrides [16, 17, 18, 19, 20], through a
careful combination of guest elements in host-guest metallic superhydrides;
and 2) doping of non-hydride covalent structures [21, 22, 23, 24, 25, 26, 27],
which exploits the large intrinsic $ep$ coupling of covalent insulators,
turned metallic via either external or self doping, as in B-doped diamond or
MgB2 [28, 29, 21, 30].
Both approaches present potential drawbacks: while even the most optimized
ternary hydrides seem to require synthesis pressure of at least a few GPa
[17], the $T_{\text{c}}$’s of systems containing exclusively boron, carbon and
other heavier elements are unlikely to exceed the 80 K threshold found in
best-case scenarios [26, 27], due to the intrinsic phonon energy scales
determined by the relatively large boron and carbon atomic masses.
In this paper we propose a hybrid strategy which combines the best of both:
doping covalent bonds (ambient pressure) in a hydrogen-rich structure (higher
$T_{\text{c}}$). In particular, we show that metal borohydrides (MBH) can be
turned into high-temperature conventional superconductors at ambient pressure,
via small substitutional doping at the metal site, which effectively
transforms MBH into highly-tunable hole-doped hydrocarbons.
MBH form a broad class of materials widely used in commercial hydrogen storage
applications, due to the high hydrogen content, and the ease of hydrogen
uptake and dehydrogenation [31, 32]. In these compounds boron and hydrogen
form quasi-molecular units arranged on open structures, with mono-, di- or
trivalent metals ($M$) on the interstitial sites. Our strategy to turn MBH
into high-$T_{\text{c}}$ superconductors is quite general, and consists in
replacing a small fraction of $M$ atoms with a lower-valence atom, realising
hole doping; in this work, we study the specific case of K-doping of the
$\alpha$ phase of Ca(BH4)2 [33].
Our calculations demonstrate that substitutional K doping in Ca(BH4)2 is
energetically feasible up to at least 0.10 holes per formula unit (h+/f.u.);
concentrations as low as 0.03 h+/f.u. are sufficient to induce superconductivity
with a $T_{\text{c}}$ as high as 110 K. As MBH are commercially available
materials, we expect our work to find an immediate response from experimental
researchers.
Figure 1: Crystal structure of $\alpha$-Ca(BH4)2. The Ca, B, and H atoms are
shown as green, orange, and blue spheres, respectively. BH${}_{4}^{-}$ anions
are shown as tetrahedra.
$\alpha$-Ca(BH4)2 (Fig. 1) is a molecular crystal, in which boron and hydrogen
form BH4 tetrahedra, and Ca occupies interstitial sites. Ca is almost
completely ionized (Ca++), and donates charge to the BH${}_{4}^{-}$
tetrahedra, which are thus not only isostructural, but also isoelectronic to
methane (CH4). The spacing between BH${}_{4}^{-}$ molecular units is
quite large, about 3.5 $\AA$, indicating extremely weak intermolecular
interactions.
Fig. 2 shows the electronic band structure and the atom-projected Density of
States (DOS) of the $\alpha$ phase of Ca(BH4)2. Undoped $\alpha$-Ca(BH4)2 is
an insulator, with a calculated direct band gap of 5 eV. The bands show
reduced dispersion, as is typical of molecular crystals; the electronic DOS
exhibits extremely sharp and narrow peaks, particularly near the valence band
maximum (VBM). Electronic states in this region have a mixed (50/50) B/H
character and derive from the three-fold degenerate $1t_{2}$ (0 to -2 eV) and
the single $2a_{1}$ (-6 to -8 eV) molecular $\sigma$ orbitals of BH4, which
are expected to couple strongly to B-H bond-stretching and bond-bending
phonons.
Due to the extremely sharp profile of the DOS, even very small hole
dopings are sufficient to shift the Fermi energy into the large-DOS,
large-$ep$ region below the VBM, which should induce high-$T_{\text{c}}$
conventional SC.
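The rigid-band argument can be made quantitative: given a DOS on an energy grid, integrating downward from the VBM until the target number of holes is accommodated yields the doped Fermi level. The sketch below uses a toy exponential DOS peak (our own illustrative choice, not the computed Ca(BH4)2 DOS):

```python
import numpy as np

def fermi_shift_for_holes(energies, dos, delta):
    """Rigid-band estimate: integrate the DOS downward from the valence
    band maximum (energy zero) until `delta` holes per formula unit are
    accommodated, returning the resulting Fermi level."""
    mask = energies <= 0.0
    e = energies[mask][::-1]          # VBM first, then downward
    d = dos[mask][::-1]
    holes = np.cumsum(d[:-1] * -np.diff(e))  # left-endpoint Riemann sum
    idx = int(np.searchsorted(holes, delta))
    return float(e[min(idx + 1, len(e) - 1)])

# Toy DOS (states/eV): a sharp peak just below the VBM
energies = np.linspace(-2.0, 1.0, 3001)
dos = np.where(energies <= 0.0, 5.0 * np.exp(energies / 0.1), 0.0)
e_f = fermi_shift_for_holes(energies, dos, delta=0.03)
```

The sharper the peak below the VBM, the smaller the Fermi-level shift needed for a given doping, which is the essence of the argument above.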
Figure 2: Electronic band structure and atom-projected DOS of Ca(BH4)2. The
energy zero is set to the valence band maximum. The DOS is in units of states
$eV^{-1}atom^{-1}$.
Hole doping in Ca(BH4)2 can be realized by substituting Ca with a monovalent
atom. In this work, we consider K, which is the neighbour of Ca in the
periodic table, and hence has very similar size and core. Replacing a fraction
$\delta$ of divalent Ca with monovalent K amounts to doping $\alpha$-Ca(BH4)2
with $\delta$ holes/f.u.; in an ideal rigid-band picture, these holes would
form at the top of the valence band, turning K-doped Ca(BH4)2 into a self-
doped version of methane (CH4).
Due to the presence of stiff covalent bonds coupled by symmetry to bond-
stretching phonons, doped hydrocarbons have long been postulated to exhibit
large $ep$ coupling; controversial reports of high-$T_{\text{c}}$
superconductivity in polyacenes doped with alkali and alkaline earths
(electron doping) have appeared in the early 2010’s [34, 35, 36, 37]. However,
doping hydrocarbons and related C-H systems with holes has so far proven
impossible, as intercalation with electronegative elements (I, F, Cl) is
ineffective, and doping the C-H sublattice by substitutional atoms or
vacancies is extremely unfavorable energetically and tends to seriously
disrupt the crystal and electronic structure due to the presence of stiff
covalent bonds [38, 39].
In K-doped Ca(BH4)2, on the other hand, doping only involves the metal site,
which is very weakly bonded to the rest of the structure. This should allow a
convenient fine-tuning of the superconducting properties at a reasonable
energy cost, without major modifications of the structure [40].
To substantiate our hypotheses, we computed the superconducting properties of
K-doped Ca(BH4)2 for various hole concentrations $\delta$: we obtained the
isotropic Éliashberg functions using Density Functional Perturbation Theory
[41], and the $T_{\text{c}}$ by numerically solving the isotropic Éliashberg
equations. (Calculations were performed using DFPT as implemented in Quantum
ESPRESSO. Electron-phonon properties were integrated on a $3\times 3\times 3$
grid for phonons and a $16\times 16\times 16$ grid for electrons, using a
Gaussian smearing of 200 meV. Further computational details are provided in
the Supplementary Material [44, 49, 50, 41, 51, 52, 53].)
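While the $T_{\text{c}}$ values above come from a numerical solution of the isotropic Éliashberg equations, a quick semi-empirical cross-check is the McMillan/Allen-Dynes formula, which needs only $\lambda$ and $\omega_{log}$. The inputs below are illustrative values in the range typical of strongly coupled hydrides, with an assumed $\mu^{*}=0.10$, not the paper's computed parameters.

```python
from math import exp

def allen_dynes_tc(lam, omega_log_K, mu_star=0.10):
    """McMillan/Allen-Dynes estimate of Tc (kelvin) from the
    electron-phonon coupling constant `lam` and the logarithmic-average
    phonon frequency `omega_log_K` expressed in kelvin."""
    return (omega_log_K / 1.2) * exp(
        -1.04 * (1.0 + lam) / (lam - mu_star * (1.0 + 0.62 * lam))
    )

# Illustrative inputs, not the computed Ca(BH4)2 values
tc = allen_dynes_tc(lam=1.3, omega_log_K=1100.0)
```

The exponential sensitivity to $\lambda$ in this expression is why the sharp DOS increase with doping translates into a rapid growth of $T_{\text{c}}$.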
Doping is simulated using the virtual crystal approximation (VCA), which
amounts to replacing each Ca (pseudo)atom in the $\alpha$-Ca(BH4)2 structure
with an average virtual (pseudo)atom, obtained by mixing K and Ca in the
appropriate proportions.
In Fig. 3 we report a summary of the electronic and superconducting properties
of K-doped Ca(BH4)2 as a function of the hole concentration $\delta$. The DOS
at the Fermi level N${}_{E_{F}}$ (panel (a)) rapidly increases with doping, as
does the total $ep$ coupling constant $\lambda$ (panel (c)), while the
logarithmic average phonon frequency $\omega_{log}$ (panel (b)) slightly
decreases. In the Éliashberg
function (shown in Fig. S2 of the Supplementary Material) almost all $ep$
coupling is concentrated in the high-energy B-H stretching and bending modes.
For $\delta$ larger than 0.10 the system develops a dynamical instability,
while for values smaller than 0.03 the Fermi energy is too close to the VBM to
allow a reasonable estimate of $\lambda$ and $T_{\text{c}}$. $T_{\text{c}}$
attains its maximum value of 130 K at $\delta$ = 0.10 and decreases linearly
with decreasing $\delta$ down to 110 K at $\delta$=0.03; extrapolating this
trend, we can reasonably suppose that $T_{\text{c}}$’s higher than 100 K may
be achieved for dopings $\delta\gtrsim 0.01$.
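The extrapolation quoted above can be checked with a two-point linear fit; this is a back-of-the-envelope sketch using only the $(\delta, T_{\text{c}})$ pairs stated in the text, not a substitute for solving the Éliashberg equations at each doping:

```python
# Linear extrapolation of Tc(delta) through the two computed points quoted
# in the text: (delta, Tc) = (0.03, 110 K) and (0.10, 130 K).
def tc_linear(delta, d1=0.03, t1=110.0, d2=0.10, t2=130.0):
    slope = (t2 - t1) / (d2 - d1)  # ~286 K per unit doping
    return t1 + slope * (delta - d1)

print(tc_linear(0.01))  # ~104 K, consistent with Tc > 100 K for delta >~ 0.01
```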
Figure 3: Electronic and superconducting properties as a function of the
doping $\delta$. (a): DOS at the Fermi level, (b): logarithmic average phonon
frequency $\omega_{log}$, (c): $ep$ coupling coefficient $\lambda$, (d):
superconducting critical temperature $T_{\text{c}}$.
The VCA has the advantage of making $T_{\text{c}}$ calculations feasible even
for small dopings, correctly capturing the average effect of substitutional
doping on the electronic structure, in particular the critical role of
electronic screening on phonon spectra and $ep$ matrix elements [37]. However,
effects such as charge localization, deep defect levels, or carrier trapping
[43, 24], which may significantly affect the electronic structure, require more
complex approximations that capture the dependence on the local environment of
the impurities.
To simulate these effects, we developed an ad-hoc scheme, based on averaging
over random supercells. First, we constructed a 2$\times$2$\times$2 supercell
containing 32 formula units of Ca(BH4)2 (352 atoms); then, to simulate hole
concentrations from $\delta=0.03$ to $\delta=0.5$, we substituted Ca atoms
with the appropriate fraction of K, placed at random positions; for each value
of $\delta$, we generated ten supercells. These supercells were then relaxed
to minimize stress and forces, before computing the total energies and DOS’s
(Fig. 4). Computations on the supercells were performed using the Vienna Ab
initio Simulation Package (VASP). Further details are provided in the
Supplementary Material [44]. The average DOS for each doping was then
obtained by performing a weighted average over the relative supercells, with
weights corresponding to the probability of that configuration (See SM for
further details [44]). The average DOS’s for different values of $\delta$ are
shown in Fig. 4. Although doping causes sizable modifications of the DOS for
$\delta>0.12$, especially in the low-lying region below -6 eV, the DOS for
$\delta$ up to 0.09 are essentially unchanged compared to the undoped
compound. In particular, there are no inter-gap states up to $\delta$ = 0.09,
and their relative weight remains negligible up to $\delta=0.25$.
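The weighted average over supercells described above can be sketched as follows; the toy DOS arrays and equal weights are purely illustrative, and the actual configuration probabilities are those specified in the SM:

```python
import numpy as np

# Weighted average of per-supercell DOS curves sampled on a common energy
# grid; the weights are the probabilities of the sampled K/Ca configurations.
def average_dos(dos_curves, weights):
    """dos_curves: (n_cells, n_energies) array; weights: (n_cells,) array."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                    # normalize to probabilities
    return w @ np.asarray(dos_curves)  # weighted sum over supercells

# Two toy 'supercells' with equal weight: the average is the midpoint.
dos = np.array([[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]])
print(average_dos(dos, [1.0, 1.0]))  # [2. 2. 2.]
```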
Hence, for doping of interest the main effect of K/Ca substitution is indeed a
quasi-rigid shift of the Fermi level into the valence band, well reproduced by
VCA; in particular, the states just below the VBM, which participate in the SC
pairing, are only weakly influenced by doping. This is expected, since
$\sigma$ orbitals of the BH${}_{4}^{-}$ molecular ions are only weakly
affected by distortions and rearrangements of atoms in the crystal structure
which do not modify the overall shape of the molecular ion itself.
Figure 4: Density of states (DOS) of K-doped Ca(BH4)2 as a function of the
hole concentration $\delta$. The Fermi energy is taken as the energy zero
except for $\delta=0$, in which the VBM is used.
Using supercells we can also estimate the energetic cost of K substitution
into the Ca site. In Fig. 5 we show the formation energy of KδCa1-δ(BH4)2 with
respect to decomposition into K + Ca(BH4)2 222For Ca and K we assumed a face-
centered and a body-centered cubic structure, respectively, for all
configurations sampled (ten for each doping). The formation energies $\delta
E$ increase linearly with doping, with little dispersion for different
supercells, remaining below 200 meV/atom for $\delta$ up to 0.12.
On purely energetic grounds, these values indicate that K-doping of Ca(BH4)2
should be experimentally feasible. However, experimental synthesis conditions
depend on complex details of the kinetic barrier protecting the doped
structure from decomposition, and on the entropy contribution to the free
energy, whose evaluation goes well beyond the scope of this paper.
Moreover, independently of their energetic cost, not all perturbations induced
by doping will have the same effect on superconductivity. While small
distortions and rearrangements of BH${}_{4}^{-}$ anions within the open
$\alpha$ structure should have only minor effects on the superconducting
properties, dehydrogenation, which implies a weakening of the B-H bonds, has
severe consequences, since it entails a major rearrangement of the whole
electronic structure. We will therefore assume that the dehydrogenation energy
can be used to estimate an effective synthesizability threshold for K-doped
Ca(BH4)2.
A rough estimate can be obtained as follows. Assuming the measured
dehydrogenation temperature of Ca(BH4)2, which is around 700 K [46, 47, 48],
to be a reasonable guess of the kinetic barrier for dehydrogenation, doped
Ca(BH4)2 should be able to withstand perturbations with a positive $\Delta E$
of the order of 60 meV/atom without decomposing. This corresponds to a doping
of $\delta\sim 0.06$ (Fig. 5), which, as Fig. 3 (d) shows, largely exceeds the
doping levels required for high-$T_{\text{c}}$ superconductivity. Hence,
high-$T_{\text{c}}$ superconductivity should be observable before
dehydrogenation sets in.
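The ~60 meV/atom window invoked above is consistent with the thermal energy scale $k_{\rm B}T$ at the ~700 K dehydrogenation temperature; a one-line check (our own script, not part of the original analysis):

```python
# Thermal energy scale corresponding to the ~700 K dehydrogenation
# temperature of Ca(BH4)2, expressed in meV.
K_B_MEV_PER_K = 8.617333e-2  # Boltzmann constant in meV/K

barrier_mev = K_B_MEV_PER_K * 700.0
print(round(barrier_mev, 1))  # 60.3
```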
Figure 5: Formation energy $\Delta E$ as a function of hole-doping $\delta$.
In summary, in this paper we proposed a strategy to attain high-$T_{\text{c}}$
conventional superconductivity in the complex borohydride Ca(BH4)2 using
substitutional doping of monovalent K on the Ca site, and substantiated it
with first-principles calculations.
K-doped Ca(BH4)2 behaves essentially as hole-doped methane (CH4), where the
high-$T_{\text{c}}$ derives from a strong coupling between $\sigma$ bonding
electronic states and bond-stretching phonons of the BH${}_{4}^{-}$ molecular
units. Compared to CH4, however, the big advantage of Ca(BH4)2 is that hole
doping is realized by acting on the weakly-bonded metal site, and not on the
covalent B-H (or C-H) sublattice, and this causes only minor disruptions of
the crystal and electronic structure, implying an affordable energy cost.
According to our calculations, a partial replacement of 3% of the Ca atoms with
K atoms would have an energy cost of around 50 meV/atom, which is below the
dehydrogenation threshold, and would lead to an estimated $T_{\text{c}}$ of
110 K at ambient pressure, almost on par with the best copper-oxide superconductors.
With a figure of merit $S$ between 2.8 and 3.3 [13], doped Ca(BH4)2 is better
than any other superhydride, as well as all other known ambient-pressure
conventional superconductors ($S=1$ in MgB2), and very close to HgBaCaCuO.
Note that the strategy proposed here is very general, and can in principle be
applied to turn any of the many existing metal borohydrides into analogs of
doped hydrocarbons by suitable metal substitutions. We are strongly convinced
that, if synthesized,
doped metal borohydrides will represent a huge leap forward in research on
high-temperature superconductors.
Given the easy commercial availability of metal borohydrides, we hope that our
work will stimulate a positive response from experimentalists.
## II Acknowledgments
The authors warmly thank Antonio Sanna for sharing the code to solve the
isotropic Éliashberg equations. L.B. and S.d.C. acknowledge funding from the
Austrian Science Fund (FWF) P30269-N36 and support from Fondo Ateneo-Sapienza
2017-2020. S.D.C. acknowledges computational resources from CINECA, proj.
IsC90-HTS-TECH and IsC99-ACME-C, and the Vienna Scientific Cluster, proj.
71754 "TEST".
## References
* Drozdov _et al._ [2015] A. P. Drozdov, M. I. Eremets, I. A. Troyan, V. Ksenofontov, and S. I. Shylin, Conventional superconductivity at 203 Kelvin at high pressures in the sulfur hydride system, Nature 525, 73 (2015).
* Einaga _et al._ [2016] M. Einaga, M. Sakata, T. Ishikawa, K. Shimizu, M. Eremets, A. P. Drozdov, I. A. Troyan, N. Hirao, and Y. Ohishi, Crystal structure of the superconducting phase of sulfur hydride, Nature Physics 12, 835 (2016).
* Liu _et al._ [2018] H. Liu, I. I. Naumov, Z. M. Geballe, M. Somayazulu, J. S. Tse, and R. J. Hemley, Dynamics and superconductivity in compressed lanthanum superhydride, Phys. Rev. B 98, 100102 (2018).
* Somayazulu _et al._ [2019] M. Somayazulu, M. Ahart, A. K. Mishra, Z. M. Geballe, M. Baldini, Y. Meng, V. V. Struzhkin, and R. J. Hemley, Evidence for superconductivity above 260 K in lanthanum superhydride at Megabar pressures, Phys. Rev. Lett. 122, 027001 (2019).
* Drozdov _et al._ [2019] A. P. Drozdov, P. P. Kong, S. P. Besedin, M. A. Kuzovnikov, S. Mozaffari, L. Balicas, F. F. Balakirev, D. E. Graf, V. B. Prakapenka, E. Greenberg, D. A. Knyazev, M. Tkacz, and M. I. Eremets, Superconductivity at 250 K in lanthanum hydride under high pressure, Nature 569, 528 (2019).
* Snider _et al._ [2020] E. Snider, N. Dasenbrock-Gammon, R. McBride, M. Debessai, H. Vindana, K. Vencatasamy, K. V. Lawler, A. Salamat, and R. P. Dias, Room-temperature superconductivity in a carbonaceous sulfur hydride, Nature 586, 373 (2020).
* Ma _et al._ [2022] L. Ma, K. Wang, Y. Xie, X. Yang, Y. Wang, M. Zhou, H. Liu, X. Yu, Y. Zhao, H. Wang, G. Liu, and Y. Ma, High-temperature superconducting phase in clathrate calcium hydride CaH6 up to 215 K at a pressure of 172 GPa, Phys. Rev. Lett. 128, 167001 (2022).
* Semenok _et al._ [2020] D. V. Semenok, A. G. Kvashin, A. G. Ivanova, V. Svitlyk, V. Y. Fominski, A. V. Sadakov, O. A. Sobolevskiy, V. M. Pudalov, I. A. Troyan, and A. R. Oganov, Superconductivity at 161 K in thorium hydride ThH10: Synthesis and properties, Materials Today 33, 36 (2020).
* Kong _et al._ [2021] P. Kong, V. S. Minkov, M. A. Kuzovnikov, A. P. Drozdov, S. P. Besedin, S. Mozaffari, L. Balicas, F. F. Balakirev, V. B. Prakapenka, S. Chariton, D. V. Semenok, E. Greenberg, and M. Eremets, Superconductivity up to 243 K in yttrium hydrides under high pressure, Nature Communications 12 (2021).
* Troyan _et al._ [2021] I. A. Troyan, D. V. Semenok, A. G. Kvashin, A. V. Sadakov, O. A. Sobolevskiy, V. M. Pudalov, A. G. Ivanova, V. B. Prakapenka, E. Greenberg, A. G. Gavriliuk, V. V. Struzhkin, A. Bergara, I. Errea, R. Bianco, M. Calandra, F. Mauri, L. Monacelli, R. Akashi, and A. R. Oganov, Anomalous high-temperature superconductivity in YH6, Advanced Materials 33, 2006832 (2021).
* Chen _et al._ [2021] W. Chen, D. V. Semenok, X. Huang, H. Shu, X. Li, D. Duan, T. Cui, and A. R. Oganov, High-temperature superconducting phases in cerium superhydride with a Tc up to 115 K below a pressure of 1 Megabar, Phys. Rev. Lett. 127, 117001 (2021).
* Grockowiak _et al._ [2022] A. D. Grockowiak, M. Ahart, T. Helm, W. A. Coniglio, R. Kumar, K. Glazyrin, G. Garbarino, Y. Meng, M. Oliff, V. Williams, N. W. Ashcroft, R. J. Hemley, M. Somayazulu, and S. W. Tozer, Hot hydride superconductivity above 550 K, Frontiers in Electronic Materials 2 (2022).
* Pickard _et al._ [2020] C. J. Pickard, I. Errea, and M. Eremets, Superconducting hydrides under pressure, Annual Review of Condensed Matter Physics 11, 57 (2020).
* Boeri and Bachelet [2019] L. Boeri and G. B. Bachelet, Viewpoint: the road to room-temperature conventional superconductivity, J. Phys.: Condens. Matter 31, 234002 (2019).
* Flores-Livas _et al._ [2020] J. A. Flores-Livas, L. Boeri, A. Sanna, G. Profeta, R. Arita, and M. Eremets, A perspective on conventional high-temperature superconductors at high pressure: Methods and materials, Physics Reports 856, 1 (2020).
* Cataldo _et al._ [2021] S. D. Cataldo, C. Heil, W. von der Linden, and L. Boeri, LaBH8: towards high-Tc low-pressure superconductivity in ternary superhydrides, Phys. Rev. B 104, L020511 (2021).
* Lucrezi _et al._ [2022] R. Lucrezi, S. Di Cataldo, W. von der Linden, L. Boeri, and C. Heil, In-silico synthesis of lowest-pressure high-Tc ternary superhydrides, NPJ: computational materials 8 (2022).
* Cataldo _et al._ [2020] S. D. Cataldo, W. von der Linden, and L. Boeri, Phase diagram and superconductivity of calcium borohydrides at extreme pressures, Phys. Rev. B 102, 014516 (2020).
* Zhang _et al._ [2022] Z. Zhang, T. Cui, M. J. Hutcheon, A. M. Shipley, H. Song, M. Du, V. Z. Kresin, D. Duan, C. J. Pickard, and Y. Yao, Design principles for high temperature superconductors with hydrogen-based alloy backbone at moderate pressure, Physical Review Letters 128 (2022).
* Hilleke and Zurek [2022] K. P. Hilleke and E. Zurek, Rational design of superconducting metal hydrides via chemical pressure tuning, arXiv preprint, arXiv:2205.11569 (2022).
* Ekimov _et al._ [2004] E. A. Ekimov, V. A. Sidorov, E. D. Bauer, N. N. Mel’nik, N. J. Curro, J. D. Thompson, and S. M. Stishov, Superconductivity in diamond, Nature 428, 542 (2004).
* Sidorov and Ekimov [2010] V. A. Sidorov and E. A. Ekimov, Superconductivity in diamond, Diamond and Related Materials 19, 351 (2010).
* Cui _et al._ [2020] X. Cui, K. P. Hilleke, X. Wang, M. Lu, M. Zhang, E. Zurek, W. Li, D. Zhang, Y. Yan, and T. Bi, RbB3Si3: An alkali metal borosilicide that is metastable and superconducting at 1 atm, Journal of Physical Chemistry C 124 (2020).
* Flores-Livas _et al._ [2017] J. A. Flores-Livas, A. Sanna, M. Grauzinyte, A. Davydov, S. Goedecker, and M. A. L. Marques, Emergence of superconductivity in doped H2O ice at high pressure, Scientific Reports 7, 6825 (2017).
* Rosner _et al._ [2002] H. Rosner, A. Kitaigorodsky, and W. E. Pickett, Prediction of high ${T}_{c}$ superconductivity in hole-doped LiBC, Phys. Rev. Lett. 88, 127001 (2002).
* Saha _et al._ [2020] S. Saha, S. D. Cataldo, M. Amsler, W. von der Linden, and L. Boeri, High-temperature conventional superconductivity in the boron-carbon system: Material trends, Phys. Rev. B 102, 024519 (2020).
* Cataldo _et al._ [2022] S. D. Cataldo, S. Qulaghasi, G. B. Bachelet, and L. Boeri, High-Tc superconductivity in doped boron-carbon clathrates, Physical Review B 105 (2022).
* Kortus _et al._ [2001] J. Kortus, I. I. Mazin, K. D. Belashchenko, V. P. Antropov, and L. L. Boyer, Superconductivity of metallic boron in MgB2, Phys. Rev. Lett. 86, 4656 (2001).
* Boeri _et al._ [2004] L. Boeri, J. Kortus, and O. K. Andersen, Three-dimensional MgB${}_{2}$-type superconductivity in hole-doped diamond, Phys. Rev. Lett. 93, 237002 (2004).
* Strobel [2022] T. A. Strobel, Abstract: S24.00003: Superconductivity in carbon-boron clathrates (2022), APS March Meeting 2022.
* ichi Orimo _et al._ [2007] S. ichi Orimo, Y. Nakamori, J. R. Eliseo, A. Züttel, and C. M. Jensen, Complex hydrides for hydrogen storage, Chem. Rev. 107, 4111 (2007).
* Li _et al._ [2011] S.-W. Li, Y. Yan, S. ichi Orimo, A. Züttel, and C. M. Jensen, Recent progress in metal borohydrides for hydrogen storage, Energies 4, 185 (2011).
* Filinchuk _et al._ [2009] Y. Filinchuk, E. Rönnebro, and D. Chandra, Crystal structures and phase transformations in Ca(BH4)2, Acta Materialia 57, 732 (2009).
* Devos and Lannoo [1998] A. Devos and M. Lannoo, Electron-phonon coupling for aromatic molecular crystals: Possible consequences for their superconductivity, Phys. Rev. B 58, 8236 (1998).
* Mitsuhashi _et al._ [2010] R. Mitsuhashi, Y. Suzuki, Y. Yamanari, H. Mitamura, T. Kambe, N. Ikeda, H. Okamoto, A. Fujiwara, M. Yamaji, N. Kawasaki, Y. Maniwa, and Y. Kubozono, Superconductivity in alkali-metal-doped picene, Nature 464, 76 (2010).
* Subedi and Boeri [2011] A. Subedi and L. Boeri, Vibrational spectrum and electron-phonon coupling of doped solid picene from first principles, Phys. Rev. B 84, 020508 (2011).
* Casula _et al._ [2012] M. Casula, M. Calandra, and F. Mauri, Local and nonlocal electron-phonon couplings in k3 picene and the effect of metallic screening, Phys. Rev. B 86, 075445 (2012).
* Savini _et al._ [2010] G. Savini, A. C. Ferrari, and F. Giustino, First-principles prediction of doped graphane as a high-temperature electron-phonon superconductor, Phys. Rev. Lett. 105, 037002 (2010).
* Flores-Livas _et al._ [2018] J. A. Flores-Livas, M. Grauzinyte, L. Boeri, G. Profeta, and A. Sanna, Superconductivity in doped polyethylene at high pressure, The European Physical Journal B 91, 176 (2018).
* Moussa and Cohen [2008] J. E. Moussa and M. L. Cohen, Using molecular fragments to estimate electron-phonon coupling and possible superconductivity in covalent materials, Phys. Rev. B 78, 064502 (2008).
* Baroni _et al._ [2001] S. Baroni, S. de Gironcoli, A. D. Corso, and P. Giannozzi, Phonons and related crystal properties from density-functional perturbation theory, Rev. Mod. Phys 73, 515 (2001).
* Note [1] Calculations were performed using DFPT as implemented in Quantum Espresso. Integration of electron-phonon properties was performed on a $3\times 3\times 3$ grid for phonons and a $16\times 16\times 16$ grid for electrons, using a Gaussian smearing of 200 meV. Further computational details are provided in the Supplementary Material [44, 49, 50, 41, 51, 52, 53].
* Freysoldt _et al._ [2014] C. Freysoldt, B. Grabowski, T. Hickel, and J. Neugebauer, First-principles calculations for point defects in solids, Reviews of Modern Physics 86, 253 (2014).
* [44] URL_will_be_inserted_by_publisher, the supplementary material is available at..
* Note [2] For Ca and K we assumed a face-centered and a body-centered cubic structure, respectively.
* Riktor _et al._ [2007] M. D. Riktor, M. H. Sorby, K. Chlopek, M. Fichtner, F. Buchter, A. Züttel, and B. C. Hauback, In situ synchrotron diffraction studies of phase transitions and thermal decomposition of Mg(BH4)2 and Ca(BH4)2, Journal of Materials Chemistry 17, 4939 (2007).
* Kim _et al._ [2008] J.-H. Kim, S.-A. Jin, J.-H. Shim, and Y. W. Cho, Thermal decomposition behavior of calcium borohydride Ca(BH4)2, Journal of Alloys and Compounds 461, L20 (2008).
* Sahle _et al._ [2016] C. J. Sahle, C. Sternemann, C. Giacobbe, Y. Yan, C. Weis, M. Harder, Y. Forov, G. Spiekermann, M. Tolan, M. Krisch, and A. Remhof, Formation of CaB6 in the thermal decomposition of the hydrogen storage material Ca(BH4)2, Phys. Chem. Chem. Phys. 18, 19866 (2016).
* Kresse and Furthmüller [1996] G. Kresse and J. Furthmüller, Efficient iterative schemes for ab-initio total-energy calculations using a plane-wave basis set, Phys. Rev. B 54, 11169 (1996).
* Momma and Izumi [2008] K. Momma and F. Izumi, VESTA: a three-dimensional visualization system for electronic and structural analysis, J. Appl. Cryst. 41, 653 (2008).
* Giannozzi _et al._ [2009] P. Giannozzi, S. Baroni, N. Bonini, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, G. L. Chiarotti, M. Cococcioni, and I. Dabo, QUANTUM ESPRESSO: a modular and open-source software project for quantum simulation of materials, J. Phys.: Condens. Matter 21, 395502 (2009).
* Giannozzi _et al._ [2017] P. Giannozzi, O. Andreussi, T. Brumme, O. Bunau, M. B. Nardelli, M. Calandra, R. Car, C. Cavazzoni, D. Ceresoli, M. Cococcioni, N. Colonna, I. Carnimeo, A. D. Corso, S. de Gironcoli, P. Delugas, R. A. DiStasio, A. Ferretti, A. Floris, G. Fratesi, G. Fugallo, R. Gebauer, U. Gerstmann, F. Giustino, T. Gorni, J. Jia, M. Kawamura, H.-Y. Ko, A. Kokalj, E. Kücükbenli, M. Lazzeri, M. Marsili, N. Marzari, F. Mauri, N. L. Nguyen, H.-V. Nguyen, A. O. de-la Roza, L. Paulatto, S. Poncé, D. Rocca, R. Sabatini, B. Santra, M. Schlipf, A. P. Seitsonen, A. Smogunov, I. Timrov, T. Thonhauser, P. Umari, N. Vast, X. Wu, and S. Baroni, Advanced capabilities for materials modelling with quantum espresso, J. Phys.: Condens. Matter 29, 465901 (2017).
* Hamann [2017] D. R. Hamann, Optimized norm-conserving vanderbilt pseudopotentials, Phys. Rev. B 88, 085117 (2017).
# Exact Solution of Bipartite Fluctuations in One-Dimensional Fermions
Kazuya Fujimoto Department of Physics, Tokyo Institute of Technology, 2-12-1
Ookayama, Meguro-ku, Tokyo 152-8551, Japan Tomohiro Sasamoto Department of
Physics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo
152-8551, Japan
###### Abstract
Emergence of hydrodynamics in quantum many-body systems has recently garnered
growing interest. The recent experiment of ultracold atoms [J. F. Wienand et
al., arXiv:2306.11457] studied emergent hydrodynamics in hard-core bosons
using a bipartite fluctuation, which quantifies how the particle number
fluctuates in a subsystem. In this Letter, we theoretically study the variance
of a bipartite fluctuation in one-dimensional noninteracting fermionic
dynamics starting from an alternating state, deriving the exact solution of
the variance and its asymptotic linear growth law for the long-time dynamics.
To compare the theoretical prediction with the experiment, we generalize our
exact solution by incorporating the incompleteness of the initial alternating
state, deriving the general linear growth law analytically. We find that it
shows quantitative agreement with the experimentally observed variance growth
without any fitting parameters.
Introduction.— Relaxation to an equilibrium state is ubiquitous in quantum
many-body dynamics, bringing up fundamental and intriguing questions, e.g.,
how an isolated quantum system relaxes to a thermal equilibrium state
Polkovnikov _et al._ (2011); Eisert _et al._ (2015); Nandkishore and Huse
(2015); Torres-Herrera _et al._ (2015); Luca D’Alessio and Rigol (2016); Mori
_et al._ (2018); Abanin _et al._ (2019); Gong and Hamazaki (2022). Over
decades, such quantum thermalization has been intensively studied from broad
perspectives, such as the eigenstate thermalization hypothesis Deutsch (1991);
Srednicki (1994), integrability Kinoshita _et al._ (2006); Gring _et al._
(2012); Cazalilla (2006); Rigol _et al._ (2006, 2007); Calabrese _et al._
(2011); Fagotti and Essler (2013); Langen _et al._ (2015); Ilievski _et al._
(2015); Tang _et al._ (2018), many-body localization Basko _et al._ (2006);
Žnidarič _et al._ (2008); Pal and Huse (2010); Bardarson _et al._ (2012);
Serbyn _et al._ (2013); Huse _et al._ (2014); Schreiber _et al._ (2015);
Potter _et al._ (2015); Vasseur and Moore (2016); yoon Choi _et al._ (2016);
Lüschen _et al._ (2017); Kohlert _et al._ (2019), and quantum scars Bernien
_et al._ (2017); Shiraishi and Mori (2017); Turner _et al._ (2018a, b); Lin
and Motrunich (2019); Shibata _et al._ (2020). To date, many experiments
involving ultracold atoms and trapped ions have observed various phenomena
related to quantum thermalization Kinoshita _et al._ (2006); Gring _et al._
(2012); Trotzky _et al._ (2012); Langen _et al._ (2013); Schreiber _et al._
(2015); Langen _et al._ (2015); Clos _et al._ (2016); Kaufman _et al._
(2016); Neill _et al._ (2016); Lüschen _et al._ (2017); Bernien _et al._
(2017); Tang _et al._ (2018); Kohlert _et al._ (2019). For example, S.
Trotzky et al. realized an isolated bosonic system on a one-dimensional
lattice, observing the relaxation dynamics starting from an alternating state
where the bosons occupy every other site Trotzky _et al._ (2012).
Recently, beyond the conventional quantum thermalization, hydrodynamic
description based on local equilibrium has rapidly developed in quantum many-
body systems. This situation is exemplified by the recent observation of
electron fluids in strongly interacting systems Moll _et al._ (2016);
Bandurin _et al._ (2016); Gooth _et al._ (2018); Krishna Kumar _et al._
(2017); Sulpizio _et al._ (2019); Aharon-Steinberg _et al._ (2022) and the
success of generalized hydrodynamics being valid for integrable quantum models
Castro-Alvaredo _et al._ (2016); Bertini _et al._ (2016); Bulchandani _et
al._ (2017, 2018); Doyon and Yoshimura (2017); Doyon _et al._ (2017, 2018);
Collura _et al._ (2018); De Nardis _et al._ (2018); Schemmer _et al._
(2019); Gopalakrishnan and Vasseur (2019); Doyon (2020); Alba _et al._
(2021); Malvania _et al._ (2021); Bouchoule and Dubail (2022); Essler (2023).
When considering such a hydrodynamic description, it is essential to scrutinize
how a system approaches a local equilibrium state. One of the most useful
quantities for diagnosing local equilibrium is a bipartite fluctuation that
quantifies how a physical quantity in a subsystem fluctuates. In a recent
experiment of ultracold atoms for hard-core bosons Wienand _et al._ (2023),
J. F. Wienand et al. utilized the bipartite fluctuation of the particle number
in quench dynamics starting from the alternating state, studying the local
equilibrium and the emergence of fluctuating hydrodynamics in the quantum
many-body system.
Despite the recent strong interest in emergent hydrodynamics in a quantum
regime, exact and analytical results of the bipartite fluctuation with the
initial alternating state have been elusive even for the noninteracting
fermions Groha _et al._ (2018), although there are several numerical
works Fujimoto _et al._ (2020); Jin _et al._ (2020); Fujimoto _et al._
(2021, 2022); Cecile _et al._ (2024); Aditya and Roy (2024); Bhakuni and Lev
(2023). Hence, a fundamental analytical understanding is highly desirable, and
it is of great importance to explain the recently observed bipartite
fluctuation Wienand _et al._ (2023) quantitatively via analytical solutions.
Figure 1: Schematic illustration of the main result. (a) One-dimensional
fermions addressed in this work. The blue circles represent fermions occupying
every other site at the initial time. This initial state is referred to as an
alternating state. Under unitary time evolution, the fermions hop to
neighboring sites. The quantity of our interest is the variance of the
particle number in a subsystem $A$ for a time-evolved state. (b) Schematic for
the variance growth. The abscissa and ordinate, respectively, denote time $t$
and a subtracted variance, which is defined by the variance from which one
subtracts its initial value. The two solid lines represent our analytical
results, namely $2t/\pi$ and $2(n_{\rm even}-n_{\rm odd})^{2}t/\pi$, obtained
by considering the variances growing from the complete and incomplete
alternating states. Here, $n_{\rm even}$ and $n_{\rm odd}$ are, respectively,
averaged initial particle numbers at even and odd sites, quantifying the
incompleteness of the initial alternating state.
In this Letter, we consider one-dimensional fermions starting from the
alternating state, theoretically studying the time evolution of the variance of
the particle number in a subsystem, as schematically shown in Fig. 1 (a), and
comparing our theoretical prediction with the recent experimental data of Ref.
Wienand _et al._ (2023). First, for the noninteracting fermions, we exactly
compute infinite series involving the $n$th-order Bessel function $J_{n}(x)$
of the first kind, obtaining the exact and simple expression of the variance.
We apply asymptotic analysis to the exact result, analytically deriving the
linear growth law of the variance and its sub-leading correction for the long-
time dynamics. Second, we compare our analytical result with the recent
experiment of Ref. Wienand _et al._ (2023). For this purpose, we focus on the
incompleteness of the initial alternating state realized in the experiment. We
incorporate the incompleteness into our exact solution, deriving the general
linear growth law of the variance. This general law can quantitatively explain
the variance growth observed in Ref. Wienand _et al._ (2023) without any
fitting parameters. The impact of the incompleteness is schematically
displayed in Fig. 1 (b). Finally, we numerically investigate interaction
effects on the variance growth using the time-evolving decimation (TEBD)
method Vidal (2003, 2004); Schollwöck (2011); Paeckel _et al._ (2019),
finding that the presence of the interaction substantially alters the behavior
of the variance.
Before ending the introduction, it is worth noting that a bipartite
fluctuation has been studied in terms of an integrated current and full
counting statistics. In the former case, most of previous works consider
dynamics starting from a domain-wall initial state, investigating a bipartite
fluctuation for a conserved quantity by connecting it to an integrated current
via a continuity equation Antal _et al._ (2008); Eisler and Rácz (2013);
Ljubotina _et al._ ; Moriya _et al._ (2019); Gamayun _et al._ (2020); Jin
_et al._ (2021); Gopalakrishnan and Vasseur (2023); Wei _et al._ (2022);
Krajnik _et al._ (2022a, b, 2024a, 2024b). One of the notable results is the
logarithmic growth of the variance in noninteracting fermions Antal _et al._
(2008). In the latter case, a bipartite fluctuation has been employed to study
static and dynamic properties of entanglement entropy Klich and Levitov
(2009); Song _et al._ (2010, 2012); Rachel _et al._ (2012); Parez _et al._
(2021); Oshima and Fuji (2023); Bertini _et al._ (2023a) and fluctuations of
physical quantities Schönhammer (2007); Collura _et al._ (2017); Stéphan and
Pollmann (2017); Najafi and Rajabpour (2017); Arzamasovs and Gangardt (2019);
Calabrese _et al._ (2020); Ares _et al._ (2021); Smith _et al._ (2021);
McCulloch _et al._ (2023); Hercé _et al._ (2023); Bertini _et al._ (2023b).
Setup.— Let us consider fermions on a one-dimensional lattice labeled by
$\Lambda\coloneqq\{-L,-L+1,\dots,L\}$ with a positive even number $L$. We
denote the fermionic creation and annihilation operators at a site
$j\in\Lambda$ by $\hat{a}_{j}^{\dagger}$ and $\hat{a}_{j}$. Then, the
Hamiltonian is given by
$\displaystyle\hat{H}\coloneqq-\sum_{j=-L}^{L-1}\left(\hat{a}_{j+1}^{\dagger}\hat{a}_{j}+\hat{a}_{j}^{\dagger}\hat{a}_{j+1}\right)+U\sum_{j=-L}^{L-1}\hat{n}_{j+1}\hat{n}_{j}$
(1)
with the particle-number operator
$\hat{n}_{j}\coloneqq\hat{a}_{j}^{\dagger}\hat{a}_{j}$ and the interaction
parameter $U$. The boundary condition is periodic ($\hat{a}_{L}=\hat{a}_{-L}$)
unless stated otherwise; in several cases we use an open boundary condition
($\hat{a}_{L}=0$), and we will explicitly specify when we do so. The initial
state is the alternating state
defined by $\ket{\psi_{\rm
alt}}\coloneqq\prod_{j=-L/2}^{L/2-1}\hat{a}_{2j+1}^{\dagger}\ket{0}$. We
denote a quantum state at time $t$ by $\ket{\psi(t)}$ and assume that the
state obeys the Schrödinger equation $\displaystyle{\rm
i}d\ket{\psi(t)}/dt=\hat{H}\ket{\psi(t)}$. Here, the Dirac constant $\hbar$ is
set to be unity.
In this Letter, we shall study the fluctuation of the particle number in a
subsystem. The bipartite particle-number operator is defined by
$\hat{N}_{M}\coloneqq\sum_{j=0}^{M-1}\hat{a}_{j}^{\dagger}\hat{a}_{j}$ with a
positive integer $M$. Then, a generating function for the $n$th moment of
$\hat{N}_{M}$ is given by
$G_{M}(\lambda,t)\coloneqq\braket{e^{\lambda\hat{N}_{M}}}_{t}$, where we
introduce the notation $\braket{\bullet}_{t}\coloneqq{\rm
Tr}[\hat{\rho}(t)\bullet]$ with the density matrix
$\hat{\rho}(t)\coloneqq\ket{\psi(t)}\bra{\psi(t)}$. We can compute the moments
by differentiating $G_{M}(\lambda,t)$ with respect to $\lambda$.
Exact solution for the variance of the noninteracting fermions.— We study the
time evolution for the variance of $\hat{N}_{M}$ in the noninteracting
fermions ($U=0$) under the thermodynamic limit ($L\rightarrow\infty$). As
described in Sec. I of the Supplemental Material SM , the generating function
$G_{M}(\lambda,t)$ becomes Eisler and Rácz (2013); Schönhammer (2007); Parez
_et al._ (2021)
$\displaystyle G_{M}(\lambda,t)={\rm
det}\left[\delta_{j,k}+(e^{\lambda}-1)\braket{\hat{a}_{j}^{\dagger}\hat{a}_{k}}_{t}\right]_{j,k=0}^{M-1}.$
(2)
Here, the two-point correlator $\braket{\hat{a}_{j}^{\dagger}\hat{a}_{k}}_{t}$
in the thermodynamic limit ($L\rightarrow\infty$) is given by Flesch _et al._
(2008)
$\displaystyle\braket{\hat{a}_{j}^{\dagger}\hat{a}_{k}}_{t}=\frac{\delta_{j,k}}{2}-\frac{{\rm
i}^{j+k}}{2}J_{k-j}(4t).$ (3)
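Equation (3) is straightforward to transcribe numerically. The sketch below (our own helper, using SciPy's Bessel function of the first kind) checks that at $t=0$ the occupations $n_j = \braket{\hat{a}_{j}^{\dagger}\hat{a}_{j}}$ reproduce the alternating state, and that the correlator matrix is Hermitian:

```python
import numpy as np
from scipy.special import jv  # Bessel function J_n of the first kind

# Two-point correlator <a_j^dag a_k>_t of Eq. (3) on a window of M sites.
def correlator(M, t):
    j = np.arange(M)
    J, K = np.meshgrid(j, j, indexing="ij")
    phase = np.array([1, 1j, -1, -1j])[(J + K) % 4]  # i**(j+k), exactly
    return np.eye(M) / 2 - phase / 2 * jv(K - J, 4 * t)

# At t = 0 only J_0(0) = 1 survives, so the site occupations (the diagonal)
# alternate between 0 (even sites) and 1 (odd sites).
C0 = correlator(6, 0.0)
print(np.real(np.diag(C0)))  # [0. 1. 0. 1. 0. 1.]
```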
We here derive the exact and simple expression for the variance of
$\hat{N}_{M}$ under the limit $M\rightarrow\infty$. The first step is to
express the variance via the Bessel function $J_{n}(x)$ of the first kind.
Differentiating Eq. (2) with respect to $\lambda$, we obtain the variance
$\sigma_{M}(t)^{2}\coloneqq\braket{\hat{N}_{M}^{2}}_{t}-\braket{\hat{N}_{M}}_{t}^{2}=\partial^{2}G_{M}(\lambda,t)/\partial\lambda^{2}|_{\lambda=0}-(\partial
G_{M}(\lambda,t)/\partial\lambda|_{\lambda=0})^{2}$ as
$\displaystyle\sigma_{M}(t)^{2}=\frac{M}{4}\left(1-J_{0}(4t)^{2}-2\sum_{k=1}^{M-1}J_{k}(4t)^{2}\right)+\frac{1}{2}\sum_{k=1}^{M-1}kJ_{k}(4t)^{2}.$
(4)
The detailed derivation of Eq. (4) is given in Sec. II of the Supplemental
Material SM . The next task is to take the limit $M\rightarrow\infty$ in Eq.
(4). As proved in Sec. III of the Supplemental Material SM , we have
$\lim_{M\rightarrow\infty}M\left(1-J_{0}(4t)^{2}-2\sum_{k=1}^{M-1}J_{k}(4t)^{2}\right)/4=0$
for $t>0$ and
$\lim_{M\rightarrow\infty}\sum_{k=1}^{M-1}kJ_{k}(4t)^{2}/2=4t^{2}\left(J_{0}(4t)^{2}+J_{1}(4t)^{2}\right)-tJ_{0}(4t)J_{1}(4t)$.
Combining these formulae with Eq. (4), we arrive at the following exact and simple
expression of the variance
$\sigma(t)^{2}\coloneqq\lim_{M\rightarrow\infty}\sigma_{M}(t)^{2}$,
$\displaystyle\sigma(t)^{2}=4t^{2}\left(J_{0}(4t)^{2}+J_{1}(4t)^{2}\right)-tJ_{0}(4t)J_{1}(4t)$
(5)
for $t>0$. Note that Eq. (5) is still valid at $t=0$ since we can show
$\sigma_{M}(0)^{2}=0$ and $\sigma(0)^{2}=0$ from Eqs. (4) and (5),
respectively. Thus, we can relax the constraint $t>0$ for Eq. (5) to $t\geq
0$.
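As a practical aside, one does not need to differentiate the determinant of Eq. (2) symbolically: for a Gaussian state, Wick's theorem gives $\sigma_{M}(t)^{2}={\rm Tr}\,C-\sum_{j,k=0}^{M-1}|C_{jk}|^{2}$ for the restricted correlation matrix $C$ of Eq. (3). The following minimal numerical sketch (assuming NumPy and SciPy; the function names are illustrative and not from this Letter) cross-checks this against Eqs. (4) and (5) for even $M$:

```python
import numpy as np
from scipy.special import jv

def correlator(M, t):
    """Two-point correlator <a_j^dag a_k>_t of Eq. (3), as an M x M matrix."""
    J, K = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
    return 0.5 * np.eye(M) - 0.5 * (1j ** (J + K)) * jv(K - J, 4 * t)

def variance_wick(M, t):
    """sigma_M(t)^2 from Wick's theorem: Tr C - sum_{j,k} |C_jk|^2."""
    C = correlator(M, t)
    return np.real(np.trace(C)) - np.sum(np.abs(C) ** 2)

def variance_eq4(M, t):
    """Bessel-sum form of Eq. (4) (exact for even M)."""
    k = np.arange(1, M)
    Jk2 = jv(k, 4 * t) ** 2
    J0 = jv(0, 4 * t)
    return M / 4 * (1 - J0**2 - 2 * Jk2.sum()) + 0.5 * (k * Jk2).sum()

def variance_eq5(t):
    """Exact M -> infinity limit, Eq. (5)."""
    J0, J1 = jv(0, 4 * t), jv(1, 4 * t)
    return 4 * t**2 * (J0**2 + J1**2) - t * J0 * J1
```

At $t=0.5$ and $M=200$, the light cone $4t$ is far smaller than $M$, and all three routes agree to within numerical precision.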
Figure 2: Numerical verification of Eq. (6). The time evolution of the
variance denoted by the blue square symbols is numerically obtained using Eqs.
(2) and (3) with $M=200$. In the upper and lower panels, we compare Eq. (6)
with the numerical result.
We apply asymptotic analysis to Eq. (5), deriving the asymptotic forms of
$\sigma(t)^{2}$ both for the short-time ($t\ll 1$) and long-time ($t\gg 1$)
dynamics. Using the asymptotic forms of the Bessel functions of the first kind
mat (2015), we obtain
$\displaystyle\sigma(t)^{2}\simeq\begin{dcases}2t^{2}&(t\ll 1)\\ \frac{2}{\pi}t-\frac{1}{64\pi t}\left(2\sin(8t)+1\right)&(t\gg 1).\end{dcases}$ (6)
The detailed derivation of Eq. (6) is given in Sec. IV of the Supplemental
Material SM . The essential consequence of Eq. (6) is that the variance is
proportional to time $t$ for $t\gg 1$. This linear growth law is entirely
different from the variance growth in noninteracting fermionic dynamics
starting from the domain-wall state $\ket{\psi_{\rm
DW}}\coloneqq\prod_{j\in\mathbb{Z}_{<0}}\hat{a}_{j}^{\dagger}\ket{0}$, where
the variance was shown to obey the logarithmic growth Antal _et al._ (2008);
Moriya _et al._ (2019). Note that the linear growth law was numerically
confirmed in Ref. Fujimoto _et al._ (2022) and Eq. (6) gives its analytical
explanation.
Finally, we numerically verify Eq. (6). Figure 2 shows the time evolution of
the variance obtained by Eqs. (2) and (3) numerically. One can see the
excellent agreement with Eq. (6). Note that the linear growth of the variance
already appears for $t\gtrsim 0.4$, even though Eq. (6) is derived under the
assumption $t\gg 1$. This behavior is favorable for observing the linear growth
experimentally.
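Both branches of Eq. (6) can be checked directly against the exact expression of Eq. (5). The sketch below (assuming SciPy's Bessel routine `jv`; illustrative, not part of the Letter's derivation) evaluates the exact and asymptotic forms:

```python
import numpy as np
from scipy.special import jv

def variance_exact(t):
    """Exact variance, Eq. (5)."""
    J0, J1 = jv(0, 4 * t), jv(1, 4 * t)
    return 4 * t**2 * (J0**2 + J1**2) - t * J0 * J1

def variance_short_time(t):
    """Short-time branch of Eq. (6)."""
    return 2 * t**2

def variance_long_time(t):
    """Long-time branch of Eq. (6)."""
    return 2 * t / np.pi - (2 * np.sin(8 * t) + 1) / (64 * np.pi * t)
```

At $t=10$ the long-time form agrees with Eq. (5) to well below a tenth of a percent, and at $t=0.01$ the short-time form is accurate up to the $O(t^{4})$ correction.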
Comparison with the experimental result of Ref. Wienand _et al._ (2023).— We
consider whether our theoretical prediction of the variance for the
noninteracting fermions can explain the experimental result reported in Ref.
Wienand _et al._ (2023). For this purpose, we must note that the experiment
does not realize the complete alternating initial state $\ket{\psi_{\rm
alt}}$. To make this point clear, let us denote averaged particle numbers at
the even and odd sites by $n_{\rm even}$ and $n_{\rm odd}$. The observed
imbalance parameter $I\coloneqq(n_{\rm odd}-n_{\rm even})/(n_{\rm even}+n_{\rm
odd})$ is about $0.91$ Wienand _et al._ (2023), which indicates a deviation
from the complete alternating state, since $\ket{\psi_{\rm alt}}$ has $I=1$.
This observation strongly suggests that we need to incorporate the
incompleteness of the initial alternating state into the analytical results of
Eqs. (5) and (6). In what follows, we describe our analytical solution with
the incomplete alternating state, comparing it with the experimental data.
Before calculating the variance, we first specify an initial state for the
incomplete alternating state by considering how the alternating state is
experimentally prepared. The experiment prepares the initial state by ramping
up a strongly tilted double-well potential in one direction (see Sec. I. B. of
the supplementary material of Ref. Wienand _et al._ (2023)). Thus, we expect
that the experimental initial state is well described by a product state,
assuming that the initial density matrix is approximated by
$\displaystyle\hat{\rho}_{\rm
alt}\coloneqq\dfrac{1}{\Xi}\exp\left(-\beta\hat{H}_{\rm
alt}+\beta\mu\hat{N}_{\rm tot}\right)$ (7)
with the inverse temperature $\beta$, the chemical potential $\mu$, and the
partition function $\Xi\coloneqq{\rm Tr}\exp(-\beta\hat{H}_{\rm
alt}+\beta\mu\hat{N}_{\rm tot})$. The operators $\hat{H}_{\rm alt}$ and
$\hat{N}_{\rm tot}$ are defined by $\hat{H}_{\rm
alt}\coloneqq\sum_{j=-L/2}^{L/2-1}(\hat{a}_{2j}^{\dagger}\hat{a}_{2j}-\hat{a}_{2j+1}^{\dagger}\hat{a}_{2j+1})$
and $\hat{N}_{\rm
tot}\coloneqq\sum_{j=-L}^{L-1}\hat{a}_{j}^{\dagger}\hat{a}_{j}$. The averaged
particle numbers at the even and odd sites become $n_{\rm
even}=1/[e^{\beta(1-\mu)}+1]$ and $n_{\rm odd}=1/[e^{\beta(-1-\mu)}+1]$. The
parameters $\beta$ and $\mu$ are determined by the filling factor
$\nu\coloneqq(n_{\rm even}+n_{\rm odd})/2$ and the imbalance parameter $I$,
both of which are observable in the experiment.
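Since $n_{\rm even}=\nu(1-I)$ and $n_{\rm odd}=\nu(1+I)$, the Fermi factors below Eq. (7) can be inverted for $(\beta,\mu)$ in closed form by taking logarithms. A small sketch of this inversion (the function name is illustrative, not from the Letter):

```python
import numpy as np

def thermal_parameters(nu, imbalance):
    """Invert n_even = 1/(e^{beta(1-mu)}+1) and n_odd = 1/(e^{beta(-1-mu)}+1)
    for (beta, mu), given the filling nu and the imbalance I of the Letter."""
    n_even = nu * (1 - imbalance)
    n_odd = nu * (1 + imbalance)
    A = np.log(1 / n_even - 1)   # A = beta * (1 - mu)
    B = np.log(1 / n_odd - 1)    # B = -beta * (1 + mu)
    beta = (A - B) / 2
    mu = -(A + B) / (A - B)
    return beta, mu, n_even, n_odd
```

For the experimental values $\nu=0.44$ and $I=0.91$, this gives $\beta\approx 2.42$ and $\mu\approx-0.31$.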
We analytically derive the exact and asymptotic forms of the variance for the
dynamics starting from the incomplete alternating state of Eq. (7). As
detailed in Sec. V of the Supplemental Material SM , the two-point correlator
becomes
$\displaystyle\braket{\hat{a}_{j}^{\dagger}\hat{a}_{k}}_{t}=\frac{1}{2}\left(n_{\rm
even}+n_{\rm odd}\right)\delta_{j,k}+\frac{1}{2}\left(n_{\rm even}-n_{\rm
odd}\right){\rm i}^{j+k}J_{k-j}(4t)$ (8)
in the thermodynamic limit ($L\rightarrow\infty$). Here the density matrix is
given by $\hat{\rho}(t)=e^{-{\rm i}\hat{H}t}\hat{\rho}_{\rm alt}e^{{\rm
i}\hat{H}t}$ with Eq. (1) and $U=0$. Following the derivation given in Sec. VI
of the Supplemental Material SM , we obtain
$\displaystyle\sigma_{\rm
sub}(t)^{2}\coloneqq\lim_{M\rightarrow\infty}\left(\sigma_{M}(t)^{2}-\delta\sigma_{M}(t)^{2}\right)$
$\displaystyle=\left(n_{\rm even}-n_{\rm odd}\right)^{2}\Bigl[4t^{2}\left(J_{0}(4t)^{2}+J_{1}(4t)^{2}\right)-tJ_{0}(4t)J_{1}(4t)\Bigr]$
(9)
with the function $\delta\sigma_{M}(t)^{2}\coloneqq M[n_{\rm even}(1-n_{\rm
even})+n_{\rm odd}(1-n_{\rm odd})]/2+J_{0}(4t)[n_{\rm even}(1-n_{\rm
even})-n_{\rm odd}(1-n_{\rm odd})][\sum_{m=0}^{M-1}(-1)^{m}]/2$. The time-
dependent term of $\delta\sigma_{M}(t)^{2}$ is much smaller than the time-
independent term when $M$ is large. Hence, we have
$\delta\sigma_{M}(t)^{2}\simeq\sigma_{M}(0)^{2}$ for $M\gg 1$, and thus
$\sigma_{\rm sub}(t)^{2}$ can be interpreted as the variance from which one
subtracts its initial value. Applying the asymptotic analysis used in Eq. (6)
to Eq. (9), we derive
$\displaystyle\sigma_{\rm sub}(t)^{2}\simeq\begin{dcases}2\left(n_{\rm even}-n_{\rm odd}\right)^{2}t^{2}&(t\ll 1)\\ \dfrac{2}{\pi}\left(n_{\rm even}-n_{\rm odd}\right)^{2}t&(t\gg 1).\end{dcases}$ (10)
This result elucidates that the incompleteness of the alternating state
substantially affects the coefficient of the variance, but the exponents of
time are robust against it.
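Eqs. (8) and (9) admit the same numerical cross-check as the pure case: for even $M$, the $\sum_{m}(-1)^{m}$ term of $\delta\sigma_{M}(t)^{2}$ vanishes, so only its time-independent part needs to be subtracted. A sketch (assuming NumPy/SciPy; names illustrative):

```python
import numpy as np
from scipy.special import jv

def correlator_thermal(M, t, n_even, n_odd):
    """Two-point correlator of Eq. (8), as an M x M matrix."""
    J, K = np.meshgrid(np.arange(M), np.arange(M), indexing="ij")
    return (0.5 * (n_even + n_odd) * np.eye(M)
            + 0.5 * (n_even - n_odd) * (1j ** (J + K)) * jv(K - J, 4 * t))

def variance_sub(M, t, n_even, n_odd):
    """sigma_M(t)^2 - delta sigma_M(t)^2 for even M (Wick's theorem)."""
    C = correlator_thermal(M, t, n_even, n_odd)
    var = np.real(np.trace(C)) - np.sum(np.abs(C) ** 2)
    return var - M * (n_even * (1 - n_even) + n_odd * (1 - n_odd)) / 2

def variance_sub_exact(t, n_even, n_odd):
    """Exact expression of Eq. (9)."""
    J0, J1 = jv(0, 4 * t), jv(1, 4 * t)
    return (n_even - n_odd) ** 2 * (4 * t**2 * (J0**2 + J1**2) - t * J0 * J1)
```

With the experimental values $n_{\rm even}\approx 0.0396$ and $n_{\rm odd}\approx 0.8404$, the matrix computation at $M=200$ reproduces Eq. (9), and the long-time slope approaches $2(n_{\rm even}-n_{\rm odd})^{2}/\pi$ as in Eq. (10).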
In addition to Eq. (10), we shall compare the numerical calculations in a
finite system with the experiment. Our numerical calculation is implemented
for $2L=40$, $\nu=0.44$, and $I=0.91$ with the initial state of Eq. (7). These
concrete values are taken from Ref. Wienand _et al._ (2023) (see Sec. II. C.
of the supplementary material of Ref. Wienand _et al._ (2023)). We adopt the
Hamiltonian defined by
$\displaystyle\hat{H}^{\prime}\coloneqq-\sum_{j=-L}^{L-1}\left(\hat{a}_{j+1}^{\dagger}\hat{a}_{j}+\hat{a}_{j}^{\dagger}\hat{a}_{j+1}\right)+\sum_{j=-L}^{L-1}V_{j}\hat{n}_{j}$
(11)
with the open boundary condition ($\hat{a}_{L}=0$). Following the numerical
simulation in Ref. Wienand _et al._ (2023), we here add a random potential
$V_{j}$, which is sampled from a uniform distribution with the range
$[-\Delta,\Delta]$. Here, $\Delta\geq 0$ denotes the strength of the
randomness. Under this setup, we investigate the variance
$\sigma^{\prime}_{M}(t)^{2}\coloneqq\overline{\braket{\hat{N}_{M}^{\prime 2}}}-\left(\overline{\braket{\hat{N}_{M}^{\prime}}}\right)^{2}$ with
$\hat{N}_{M}^{\prime}\coloneqq\sum_{m=-M/2}^{M/2-1}\hat{a}^{\dagger}_{m}\hat{a}_{m}$.
Here, the overline $\overline{\bullet}$ denotes the ensemble average over the
random potentials, and we use 10000 samples for this.
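Because Eq. (11) is quadratic, the procedure above reduces to evolving the one-body correlation matrix, $C(t)=e^{-{\rm i}ht}C(0)e^{{\rm i}ht}$ with the single-particle Hamiltonian $h$, and applying Wick's theorem per disorder sample. A minimal sketch (assuming NumPy; the function name, the fixed seed, and the small sample number are illustrative, unlike the 10000 samples of the actual calculation):

```python
import numpy as np

def evolve_variance(L, M, t, n_even, n_odd, delta, n_samples, seed=1):
    """Disorder-averaged mean and variance of the particle number on the
    middle M sites for the chain of Eq. (11) (2L sites, open ends),
    starting from the uncorrelated thermal state of Eq. (7)."""
    rng = np.random.default_rng(seed)
    sites = np.arange(2 * L)
    # initial correlation matrix: n_even / n_odd on alternating sites
    C0 = np.diag(np.where((sites - L) % 2 == 0, n_even, n_odd)).astype(complex)
    sub = slice(L - M // 2, L + M // 2)
    means, seconds = [], []
    for _ in range(n_samples):
        V = rng.uniform(-delta, delta, size=2 * L)
        h = -np.eye(2 * L, k=1) - np.eye(2 * L, k=-1) + np.diag(V)
        eps, W = np.linalg.eigh(h)
        U = (W * np.exp(-1j * eps * t)) @ W.conj().T   # U = exp(-i h t)
        C = U @ C0 @ U.conj().T                        # C(t) = U C(0) U^dag
        Cs = C[sub, sub]
        mean = np.real(np.trace(Cs))
        var = mean - np.sum(np.abs(Cs) ** 2)           # Wick: Tr C - Tr C^2
        means.append(mean)
        seconds.append(var + mean**2)                  # sample <N'^2>
    return np.mean(means), np.mean(seconds) - np.mean(means) ** 2
```

For $\Delta=0$ and short times, where the boundary and finite-$M$ effects are negligible, the growth $\sigma^{\prime}_{M}(t)^{2}-\sigma^{\prime}_{M}(0)^{2}$ reproduces Eq. (9).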
Figure 3: Comparison of our theoretical prediction with the experimental
data. The parameters are $2L=40$, $\nu=0.44$, $I=0.91$, and $M=8$, which are
taken from Ref. Wienand _et al._ (2023). The time evolution of the variance,
denoted by circle and pentagon markers, is numerically obtained using Eq. (11)
with $\Delta=0$ and $1$. For the comparison with the experiment, we plot the
subtracted variance $2\sigma^{\prime}_{M}(t)^{2}-2\sigma^{\prime}_{M}(0)^{2}$
for the theoretical results and $\sigma_{\rm exp}(t)^{2}-\sigma_{\rm
exp}(t_{0})^{2}$ for the experimental result because the experiment studies
two-ladder lattice systems and thus the observed variance is twice as large as
ours. Here, the experimental data $\sigma_{\rm exp}(t)^{2}$ is taken from Fig.
S8 of Ref. Wienand _et al._ (2023), and $t_{0}$ is about $0.0006$. The dotted
and dashed lines correspond to our analytical expressions of Eqs. (6) and (10)
for $t\gg 1$, respectively. The plus marker denotes the numerical data of the
6th-order perturbative calculation.
Figure 3 compares our theoretical results with the experimental data of Ref.
Wienand _et al._ (2023). Our numerical result with $\Delta=0$ can capture the
variance growth in the early stage of the dynamics, but the deviation from the
experimental data grows in time. On the other hand, when we turn on the random
potential with $\Delta=1$, the numerical result well reproduces the
experimental data. The saturation of the variance for $t\gtrsim 2$ is caused
by the finite size effect and the disorder potential. These findings were
reported in the numerical simulations of Ref. Wienand _et al._ (2023). We
also implement the numerical perturbative calculation with $\Delta$, finding
that the 6th-order perturbative result describes the time evolution before the
saturation (see Sec. VII of the Supplemental Material for the detailed
numerical method). These results show that the disorder effect is irrelevant
for $t\lesssim 1$. For an arbitrary strength $\Delta$, one can identify the
time scale at which the disorder begins to affect the dynamics by comparing
the variances for $\Delta=0$ and $\Delta\neq 0$.
The dotted and dashed lines in Fig. 3 show the analytical results for the
linear growth of Eqs. (6) and (10). We find that the latter, which includes
the incompleteness of the initial alternating state, exhibits reasonable
agreement with the experimental linear growth, while the former, which does
not, systematically deviates from the experimental data. We emphasize that our
analytical result (10) quantitatively describes the experimental data without
any fitting parameters under the assumption of Eq. (7). Note that the disorder
effect emerges before the finite-$M$ effect, as seen by comparing the
numerical data for $\Delta=0$ and $1$ with Eq. (10) obtained in the limit
$L,M\rightarrow\infty$, even though the localization length is larger than
$M=8$ Wienand _et al._ (2023). In general, one can identify which of the
finite-$M$ effect and the disorder effect emerges earlier in the same manner
with the help of Eq. (10).
Figure 4: Numerical results for the variance
$\sigma_{M}^{\prime\prime}(t)^{2}$ with $M=40$. The number of lattice sites is
$2L=120$. We numerically solve the Schrödinger equation using the TEBD method
Vidal (2003, 2004); Schollwöck (2011); Paeckel _et al._ (2019), calculating
the variances for $U=0,-1,-2$, and $-3$. The dashed line denotes the leading
term of Eq. (6) for $t\gg 1$, and the dotted and dot-dashed lines are curves
for $t^{2/3}$ and $t^{1/5}$, respectively. We insert these curves with the
exponents $2/3$ and $1/5$ for reference.
Numerical study for the variance growth of the interacting fermions.— We
numerically study the interaction effect on the time evolution of the
variance. Our numerical method is the time-evolving block decimation (TEBD) method Vidal
(2003, 2004); Schollwöck (2011); Paeckel _et al._ (2019) with Eq. (1) and the
open boundary condition ($\hat{a}_{L}=0$). We set $L$ to be $60$, and compute
the variance for $U=0,-1,-2$, and $-3$. In the language of the XXZ chain,
$U=-2$ corresponds to the critical point at which the model is identical to
the XXX chain Franchini (2017). To weaken the boundary effect, we use the
variance
$\sigma^{\prime\prime}_{M}(t)^{2}\coloneqq\langle(\hat{N}_{M}^{\prime})^{2}\rangle_{t}-\langle\hat{N}_{M}^{\prime}\rangle_{t}^{2}$
with $M=40$.
Figure 4 displays the time evolution of the variance
$\sigma^{\prime\prime}_{M}(t)^{2}$. In the noninteracting fermions ($U=0$),
our numerical result for the finite system well reproduces the leading term of
Eq. (6) for the infinite system. This demonstrates that the boundary effect is
negligible. In the interacting cases ($U\neq 0$), the time evolution exhibits
different growth behaviors, and we do not find a clear ballistic property,
particularly for the $U=-2$ and $-3$ cases.
Note that Cecile et al. recently reported similar numerical results for the
variance in the XXZ chain Cecile _et al._ (2024). For example, they confirmed
the signature of the anomalous power law growth
($\sigma^{\prime\prime}_{M}(t)^{2}\propto t^{2/3}$) in the dynamics starting
from the Néel state, which is identical to the alternating state. A similar
signature was also discussed in Ref. Fujimoto _et al._ (2020). We display
Fig. 4 to show that our analytical result for the noninteracting fermions
does not apply to the interacting cases.
Conclusions and prospects.— We theoretically studied the time evolution for
the variance of the particle numbers in the subsystem of the fermions on the
one-dimensional lattice, quantitatively explaining the recent experimental
result of Ref. Wienand _et al._ (2023). The initial state used in this work
was the alternating state. In the noninteracting fermions, we analytically
derived the exact and simple expression of the variance, elucidating that the
variance grows linearly in the long-time dynamics. Extending the exact
analysis, we incorporated the incompleteness of the initial alternating state
into our analytical expression of the variance, finding that the improved
expression shows reasonable agreement with the experimental result of Ref.
Wienand _et al._ (2023) without any fitting parameters. Finally, our
numerical results based on the TEBD method Vidal (2003, 2004); Schollwöck
(2011); Paeckel _et al._ (2019) find that the presence of the interaction
qualitatively alters the variance growth.
As a prospect, it is intriguing to study the variance growth in other models
exhibiting quantum phase transitions, such as the transverse Ising model and
the Su-Schrieffer-Heeger model. These models can be mapped onto noninteracting
fermions, and one may analytically study the variance growth from the
perspective of the universal nature of the phase transitions. Furthermore,
studying the variance of other physical quantities, such as energy and spin,
is fascinating. In another direction, it is worth studying the variance growth
in open quantum systems. Depending on the kind of coupling to the environment,
an open quantum system approaches a nonequilibrium stationary state that is
completely different from the equilibrium state addressed in this work. Thus,
uncovering features of the variance growth in such a case is fundamentally
interesting.
###### Acknowledgements.
KF is grateful to Ryusuke Hamazaki and Yuki Kawaguchi for fruitful discussions
and comments on the manuscript, Masaya Kunimi for helpful discussions on TEBD
with a conserved quantity, and Hiroki Moriya for helpful discussions. KF and
ST are grateful to Monika Aidelsburger, Immanuel Bloch, Alexander Impertro,
Simon Karch, Christian Schweizer, and Julian F. Wienand for sharing the
experimental data of Ref. Wienand _et al._ (2023) used in Fig. 3. The work of
KF has been supported by JSPS KAKENHI Grant No. JP23K13029. The work of TS has
been supported by JSPS KAKENHI Grants No. JP21H04432, No. JP22H01143.
## References
* Polkovnikov _et al._ (2011) A. Polkovnikov, K. Sengupta, A. Silva, and M. Vengalattore, Colloquium: Nonequilibrium dynamics of closed interacting quantum systems, Rev. Mod. Phys. 83, 863 (2011).
* Eisert _et al._ (2015) J. Eisert, M. Friesdorf, and C. Gogolin, Quantum many-body systems out of equilibrium, Nature Physics 11, 124 (2015).
* Nandkishore and Huse (2015) R. Nandkishore and D. A. Huse, Many-body localization and thermalization in quantum statistical mechanics, Annual Review of Condensed Matter Physics 6, 15 (2015).
* Torres-Herrera _et al._ (2015) E. J. Torres-Herrera, D. Kollmar, and L. F. Santos, Relaxation and thermalization of isolated many-body quantum systems, Physica Scripta 2015, 014018 (2015).
* D’Alessio _et al._ (2016) L. D’Alessio, Y. Kafri, A. Polkovnikov, and M. Rigol, From quantum chaos and eigenstate thermalization to statistical mechanics and thermodynamics, Advances in Physics 65, 239 (2016).
* Mori _et al._ (2018) T. Mori, T. N. Ikeda, E. Kaminishi, and M. Ueda, Thermalization and prethermalization in isolated quantum systems: a theoretical overview, Journal of Physics B: Atomic, Molecular and Optical Physics 51, 112001 (2018).
* Abanin _et al._ (2019) D. A. Abanin, E. Altman, I. Bloch, and M. Serbyn, Colloquium: Many-body localization, thermalization, and entanglement, Rev. Mod. Phys. 91, 021001 (2019).
* Gong and Hamazaki (2022) Z. Gong and R. Hamazaki, Bounds in nonequilibrium quantum dynamics, International Journal of Modern Physics B 36, 2230007 (2022).
* Deutsch (1991) J. M. Deutsch, Quantum statistical mechanics in a closed system, Phys. Rev. A 43, 2046 (1991).
* Srednicki (1994) M. Srednicki, Chaos and quantum thermalization, Phys. Rev. E 50, 888 (1994).
* Kinoshita _et al._ (2006) T. Kinoshita, T. Wenger, and D. S. Weiss, A quantum newton’s cradle, Nature 440, 900 (2006).
* Gring _et al._ (2012) M. Gring, M. Kuhnert, T. Langen, T. Kitagawa, B. Rauer, M. Schreitl, I. Mazets, D. A. Smith, E. Demler, and J. Schmiedmayer, Relaxation and prethermalization in an isolated quantum system, Science 337, 1318 (2012).
* Cazalilla (2006) M. A. Cazalilla, Effect of suddenly turning on interactions in the luttinger model, Phys. Rev. Lett. 97, 156403 (2006).
* Rigol _et al._ (2006) M. Rigol, A. Muramatsu, and M. Olshanii, Hard-core bosons on optical superlattices: Dynamics and relaxation in the superfluid and insulating regimes, Phys. Rev. A 74, 053616 (2006).
* Rigol _et al._ (2007) M. Rigol, V. Dunjko, V. Yurovsky, and M. Olshanii, Relaxation in a completely integrable many-body quantum system: An ab initio study of the dynamics of the highly excited states of 1d lattice hard-core bosons, Phys. Rev. Lett. 98, 050405 (2007).
* Calabrese _et al._ (2011) P. Calabrese, F. H. L. Essler, and M. Fagotti, Quantum quench in the transverse-field ising chain, Phys. Rev. Lett. 106, 227203 (2011).
* Fagotti and Essler (2013) M. Fagotti and F. H. L. Essler, Reduced density matrix after a quantum quench, Phys. Rev. B 87, 245107 (2013).
* Langen _et al._ (2015) T. Langen, S. Erne, R. Geiger, B. Rauer, T. Schweigler, M. Kuhnert, W. Rohringer, I. E. Mazets, T. Gasenzer, and J. Schmiedmayer, Experimental observation of a generalized gibbs ensemble, Science 348, 207 (2015).
* Ilievski _et al._ (2015) E. Ilievski, J. De Nardis, B. Wouters, J.-S. Caux, F. H. L. Essler, and T. Prosen, Complete generalized gibbs ensembles in an interacting theory, Phys. Rev. Lett. 115, 157201 (2015).
* Tang _et al._ (2018) Y. Tang, W. Kao, K.-Y. Li, S. Seo, K. Mallayya, M. Rigol, S. Gopalakrishnan, and B. L. Lev, Thermalization near integrability in a dipolar quantum newton’s cradle, Phys. Rev. X 8, 021030 (2018).
* Basko _et al._ (2006) D. Basko, I. Aleiner, and B. Altshuler, Metal–insulator transition in a weakly interacting many-electron system with localized single-particle states, Annals of Physics 321, 1126 (2006).
* Žnidarič _et al._ (2008) M. Žnidarič, T. Prosen, and P. Prelovšek, Many-body localization in the heisenberg XXZ magnet in a random field, Phys. Rev. B 77, 064426 (2008).
* Pal and Huse (2010) A. Pal and D. A. Huse, Many-body localization phase transition, Phys. Rev. B 82, 174411 (2010).
* Bardarson _et al._ (2012) J. H. Bardarson, F. Pollmann, and J. E. Moore, Unbounded growth of entanglement in models of many-body localization, Phys. Rev. Lett. 109, 017202 (2012).
* Serbyn _et al._ (2013) M. Serbyn, Z. Papić, and D. A. Abanin, Local conservation laws and the structure of the many-body localized states, Phys. Rev. Lett. 111, 127201 (2013).
* Huse _et al._ (2014) D. A. Huse, R. Nandkishore, and V. Oganesyan, Phenomenology of fully many-body-localized systems, Phys. Rev. B 90, 174202 (2014).
* Schreiber _et al._ (2015) M. Schreiber, S. S. Hodgman, P. Bordia, H. P. Lüschen, M. H. Fischer, R. Vosk, E. Altman, U. Schneider, and I. Bloch, Observation of many-body localization of interacting fermions in a quasirandom optical lattice, Science 349, 842 (2015).
* Potter _et al._ (2015) A. C. Potter, R. Vasseur, and S. A. Parameswaran, Universal properties of many-body delocalization transitions, Phys. Rev. X 5, 031033 (2015).
* Vasseur and Moore (2016) R. Vasseur and J. E. Moore, Nonequilibrium quantum dynamics and transport: from integrability to many-body localization, Journal of Statistical Mechanics: Theory and Experiment 2016, 064010 (2016).
* Choi _et al._ (2016) J.-y. Choi, S. Hild, J. Zeiher, P. Schauß, A. Rubio-Abadal, T. Yefsah, V. Khemani, D. A. Huse, I. Bloch, and C. Gross, Exploring the many-body localization transition in two dimensions, Science 352, 1547 (2016).
* Lüschen _et al._ (2017) H. P. Lüschen, P. Bordia, S. Scherg, F. Alet, E. Altman, U. Schneider, and I. Bloch, Observation of slow dynamics near the many-body localization transition in one-dimensional quasiperiodic systems, Phys. Rev. Lett. 119, 260401 (2017).
* Kohlert _et al._ (2019) T. Kohlert, S. Scherg, X. Li, H. P. Lüschen, S. Das Sarma, I. Bloch, and M. Aidelsburger, Observation of many-body localization in a one-dimensional system with a single-particle mobility edge, Phys. Rev. Lett. 122, 170403 (2019).
* Bernien _et al._ (2017) H. Bernien, S. Schwartz, A. Keesling, H. Levine, A. Omran, H. Pichler, S. Choi, A. S. Zibrov, M. Endres, M. Greiner, V. Vuletić, and M. D. Lukin, Probing many-body dynamics on a 51-atom quantum simulator, Nature 551, 579 (2017).
* Shiraishi and Mori (2017) N. Shiraishi and T. Mori, Systematic construction of counterexamples to the eigenstate thermalization hypothesis, Phys. Rev. Lett. 119, 030601 (2017).
* Turner _et al._ (2018a) C. J. Turner, A. A. Michailidis, D. A. Abanin, M. Serbyn, and Z. Papić, Weak ergodicity breaking from quantum many-body scars, Nature Physics 14, 745 (2018a).
* Turner _et al._ (2018b) C. J. Turner, A. A. Michailidis, D. A. Abanin, M. Serbyn, and Z. Papić, Quantum scarred eigenstates in a rydberg atom chain: Entanglement, breakdown of thermalization, and stability to perturbations, Phys. Rev. B 98, 155134 (2018b).
* Lin and Motrunich (2019) C.-J. Lin and O. I. Motrunich, Exact quantum many-body scar states in the rydberg-blockaded atom chain, Phys. Rev. Lett. 122, 173401 (2019).
* Shibata _et al._ (2020) N. Shibata, N. Yoshioka, and H. Katsura, Onsager’s scars in disordered spin chains, Phys. Rev. Lett. 124, 180604 (2020).
* Trotzky _et al._ (2012) S. Trotzky, Y.-A. Chen, A. Flesch, I. P. McCulloch, U. Schollwöck, J. Eisert, and I. Bloch, Probing the relaxation towards equilibrium in an isolated strongly correlated one-dimensional bose gas, Nature Physics 8, 325 (2012).
* Langen _et al._ (2013) T. Langen, R. Geiger, M. Kuhnert, B. Rauer, and J. Schmiedmayer, Local emergence of thermal correlations in an isolated quantum many-body system, Nature Physics 9, 640 (2013).
* Clos _et al._ (2016) G. Clos, D. Porras, U. Warring, and T. Schaetz, Time-resolved observation of thermalization in an isolated quantum system, Phys. Rev. Lett. 117, 170401 (2016).
* Kaufman _et al._ (2016) A. M. Kaufman, M. E. Tai, A. Lukin, M. Rispoli, R. Schittko, P. M. Preiss, and M. Greiner, Quantum thermalization through entanglement in an isolated many-body system, Science 353, 794 (2016).
* Neill _et al._ (2016) C. Neill, P. Roushan, M. Fang, Y. Chen, M. Kolodrubetz, Z. Chen, A. Megrant, R. Barends, B. Campbell, B. Chiaro, A. Dunsworth, E. Jeffrey, J. Kelly, J. Mutus, P. J. J. O’Malley, C. Quintana, D. Sank, A. Vainsencher, J. Wenner, T. C. White, A. Polkovnikov, and J. M. Martinis, Ergodic dynamics and thermalization in an isolated quantum system, Nature Physics 12, 1037 (2016).
* Moll _et al._ (2016) P. J. W. Moll, P. Kushwaha, N. Nandi, B. Schmidt, and A. P. Mackenzie, Evidence for hydrodynamic electron flow in PdCoO$_{2}$, Science 351, 1061 (2016).
* Bandurin _et al._ (2016) D. A. Bandurin, I. Torre, R. K. Kumar, M. B. Shalom, A. Tomadin, A. Principi, G. H. Auton, E. Khestanova, K. S. Novoselov, I. V. Grigorieva, L. A. Ponomarenko, A. K. Geim, and M. Polini, Negative local resistance caused by viscous electron backflow in graphene, Science 351, 1055 (2016).
* Gooth _et al._ (2018) J. Gooth, F. Menges, N. Kumar, V. Süß, C. Shekhar, Y. Sun, U. Drechsler, R. Zierold, C. Felser, and B. Gotsmann, Thermal and electrical signatures of a hydrodynamic electron fluid in tungsten diphosphide, Nature Communications 9, 4093 (2018).
* Krishna Kumar _et al._ (2017) R. Krishna Kumar, D. A. Bandurin, F. M. D. Pellegrino, Y. Cao, A. Principi, H. Guo, G. H. Auton, M. Ben Shalom, L. A. Ponomarenko, G. Falkovich, K. Watanabe, T. Taniguchi, I. V. Grigorieva, L. S. Levitov, M. Polini, and A. K. Geim, Superballistic flow of viscous electron fluid through graphene constrictions, Nature Physics 13, 1182 (2017).
* Sulpizio _et al._ (2019) J. A. Sulpizio, L. Ella, A. Rozen, J. Birkbeck, D. J. Perello, D. Dutta, M. Ben-Shalom, T. Taniguchi, K. Watanabe, T. Holder, R. Queiroz, A. Principi, A. Stern, T. Scaffidi, A. K. Geim, and S. Ilani, Visualizing poiseuille flow of hydrodynamic electrons, Nature 576, 75 (2019).
* Aharon-Steinberg _et al._ (2022) A. Aharon-Steinberg, T. Völkl, A. Kaplan, A. K. Pariari, I. Roy, T. Holder, Y. Wolf, A. Y. Meltzer, Y. Myasoedov, M. E. Huber, B. Yan, G. Falkovich, L. S. Levitov, M. Hücker, and E. Zeldov, Direct observation of vortices in an electron fluid, Nature 607, 74 (2022).
* Castro-Alvaredo _et al._ (2016) O. A. Castro-Alvaredo, B. Doyon, and T. Yoshimura, Emergent hydrodynamics in integrable quantum systems out of equilibrium, Phys. Rev. X 6, 041065 (2016).
* Bertini _et al._ (2016) B. Bertini, M. Collura, J. De Nardis, and M. Fagotti, Transport in out-of-equilibrium XXZ chains: Exact profiles of charges and currents, Phys. Rev. Lett. 117, 207201 (2016).
* Bulchandani _et al._ (2017) V. B. Bulchandani, R. Vasseur, C. Karrasch, and J. E. Moore, Solvable hydrodynamics of quantum integrable systems, Phys. Rev. Lett. 119, 220604 (2017).
* Bulchandani _et al._ (2018) V. B. Bulchandani, R. Vasseur, C. Karrasch, and J. E. Moore, Bethe-boltzmann hydrodynamics and spin transport in the XXZ chain, Phys. Rev. B 97, 045407 (2018).
* Doyon and Yoshimura (2017) B. Doyon and T. Yoshimura, A note on generalized hydrodynamics: inhomogeneous fields and other concepts, SciPost Phys. 2, 014 (2017).
* Doyon _et al._ (2017) B. Doyon, J. Dubail, R. Konik, and T. Yoshimura, Large-scale description of interacting one-dimensional bose gases: Generalized hydrodynamics supersedes conventional hydrodynamics, Phys. Rev. Lett. 119, 195301 (2017).
* Doyon _et al._ (2018) B. Doyon, T. Yoshimura, and J.-S. Caux, Soliton gases and generalized hydrodynamics, Phys. Rev. Lett. 120, 045301 (2018).
* Collura _et al._ (2018) M. Collura, A. De Luca, and J. Viti, Analytic solution of the domain-wall nonequilibrium stationary state, Phys. Rev. B 97, 081111 (2018).
* De Nardis _et al._ (2018) J. De Nardis, D. Bernard, and B. Doyon, Hydrodynamic diffusion in integrable systems, Phys. Rev. Lett. 121, 160603 (2018).
* Schemmer _et al._ (2019) M. Schemmer, I. Bouchoule, B. Doyon, and J. Dubail, Generalized hydrodynamics on an atom chip, Phys. Rev. Lett. 122, 090601 (2019).
* Gopalakrishnan and Vasseur (2019) S. Gopalakrishnan and R. Vasseur, Kinetic theory of spin diffusion and superdiffusion in XXZ spin chains, Phys. Rev. Lett. 122, 127202 (2019).
* Doyon (2020) B. Doyon, Lecture notes on Generalised Hydrodynamics, SciPost Phys. Lect. Notes , 18 (2020).
* Alba _et al._ (2021) V. Alba, B. Bertini, M. Fagotti, L. Piroli, and P. Ruggiero, Generalized-hydrodynamic approach to inhomogeneous quenches: correlations, entanglement and quantum effects, Journal of Statistical Mechanics: Theory and Experiment 2021, 114004 (2021).
* Malvania _et al._ (2021) N. Malvania, Y. Zhang, Y. Le, J. Dubail, M. Rigol, and D. S. Weiss, Generalized hydrodynamics in strongly interacting 1d bose gases, Science 373, 1129 (2021).
* Bouchoule and Dubail (2022) I. Bouchoule and J. Dubail, Generalized hydrodynamics in the one-dimensional bose gas: theory and experiments, Journal of Statistical Mechanics: Theory and Experiment 2022, 014003 (2022).
* Essler (2023) F. H. Essler, A short introduction to generalized hydrodynamics, Physica A: Statistical Mechanics and its Applications 631, 127572 (2023), lecture Notes of the 15th International Summer School of Fundamental Problems in Statistical Physics.
* Wienand _et al._ (2023) J. F. Wienand, S. Karch, A. Impertro, C. Schweizer, E. McCulloch, R. Vasseur, S. Gopalakrishnan, M. Aidelsburger, and I. Bloch, Emergence of fluctuating hydrodynamics in chaotic quantum systems (2023), arXiv:2306.11457 [cond-mat.quant-gas] .
* Groha _et al._ (2018) S. Groha, F. H. L. Essler, and P. Calabrese, Full counting statistics in the transverse field Ising chain, SciPost Phys. 4, 043 (2018).
* (68) In Ref. Groha _et al._ (2018), a probability function for the bipartite fluctuation of a transverse Ising chain was analytically studied, but this model is different from the experimental situation.
* Fujimoto _et al._ (2020) K. Fujimoto, R. Hamazaki, and Y. Kawaguchi, Family-vicsek scaling of roughness growth in a strongly interacting bose gas, Phys. Rev. Lett. 124, 210604 (2020).
* Jin _et al._ (2020) T. Jin, A. Krajenbrink, and D. Bernard, From stochastic spin chains to quantum Kardar-Parisi-Zhang dynamics, Phys. Rev. Lett. 125, 040603 (2020).
* Fujimoto _et al._ (2021) K. Fujimoto, R. Hamazaki, and Y. Kawaguchi, Dynamical scaling of surface roughness and entanglement entropy in disordered fermion models, Phys. Rev. Lett. 127, 090601 (2021).
* Fujimoto _et al._ (2022) K. Fujimoto, R. Hamazaki, and Y. Kawaguchi, Impact of dissipation on universal fluctuation dynamics in open quantum systems, Phys. Rev. Lett. 129, 110403 (2022).
* Cecile _et al._ (2024) G. Cecile, J. De Nardis, and E. Ilievski, Squeezed ensembles and anomalous dynamic roughening in interacting integrable chains, Phys. Rev. Lett. 132, 130401 (2024).
* Aditya and Roy (2024) S. Aditya and N. Roy, Family-vicsek dynamical scaling and kardar-parisi-zhang-like superdiffusive growth of surface roughness in a driven one-dimensional quasiperiodic model, Phys. Rev. B 109, 035164 (2024).
* Bhakuni and Lev (2023) D. S. Bhakuni and Y. B. Lev, Dynamic scaling relation in quantum many-body systems (2023), arXiv:2309.03273 [cond-mat.dis-nn] .
* Vidal (2003) G. Vidal, Efficient classical simulation of slightly entangled quantum computations, Phys. Rev. Lett. 91, 147902 (2003).
* Vidal (2004) G. Vidal, Efficient simulation of one-dimensional quantum many-body systems, Phys. Rev. Lett. 93, 040502 (2004).
* Schollwöck (2011) U. Schollwöck, The density-matrix renormalization group in the age of matrix product states, Annals of Physics 326, 96 (2011).
* Paeckel _et al._ (2019) S. Paeckel, T. Köhler, A. Swoboda, S. R. Manmana, U. Schollwöck, and C. Hubig, Time-evolution methods for matrix-product states, Annals of Physics 411, 167998 (2019).
* Antal _et al._ (2008) T. Antal, P. L. Krapivsky, and A. Rákos, Logarithmic current fluctuations in nonequilibrium quantum spin chains, Phys. Rev. E 78, 061115 (2008).
* Eisler and Rácz (2013) V. Eisler and Z. Rácz, Full counting statistics in a propagating quantum front and random matrix spectra, Phys. Rev. Lett. 110, 060602 (2013).
* (82) M. Ljubotina, M. Žnidarič, and T. Prosen, .
* Moriya _et al._ (2019) H. Moriya, R. Nagao, and T. Sasamoto, Exact large deviation function of spin current for the one dimensional XX spin chain with domain wall initial condition, Journal of Statistical Mechanics: Theory and Experiment 2019, 063105 (2019).
* Gamayun _et al._ (2020) O. Gamayun, O. Lychkovskiy, and J.-S. Caux, Fredholm determinants, full counting statistics and Loschmidt echo for domain wall profiles in one-dimensional free fermionic chains, SciPost Phys. 8, 036 (2020).
* Jin _et al._ (2021) T. Jin, T. Gautié, A. Krajenbrink, P. Ruggiero, and T. Yoshimura, Interplay between transport and quantum coherences in free fermionic systems, Journal of Physics A: Mathematical and Theoretical 54, 404001 (2021).
* Gopalakrishnan and Vasseur (2023) S. Gopalakrishnan and R. Vasseur, Anomalous transport from hot quasiparticles in interacting spin chains, Reports on Progress in Physics 86, 036502 (2023).
* Wei _et al._ (2022) D. Wei, A. Rubio-Abadal, B. Ye, F. Machado, J. Kemp, K. Srakaew, S. Hollerith, J. Rui, S. Gopalakrishnan, N. Y. Yao, I. Bloch, and J. Zeiher, Quantum gas microscopy of Kardar-Parisi-Zhang superdiffusion, Science 376, 716 (2022).
* Krajnik _et al._ (2022a) Ž. Krajnik, E. Ilievski, and T. Prosen, Absence of normal fluctuations in an integrable magnet, Phys. Rev. Lett. 128, 090604 (2022a).
* Krajnik _et al._ (2022b) Ž. Krajnik, J. Schmidt, V. Pasquier, E. Ilievski, and T. Prosen, Exact anomalous current fluctuations in a deterministic interacting model, Phys. Rev. Lett. 128, 160601 (2022b).
* Krajnik _et al._ (2024a) Ž. Krajnik, J. Schmidt, E. Ilievski, and T. Prosen, Dynamical criticality of magnetization transfer in integrable spin chains, Phys. Rev. Lett. 132, 017101 (2024a).
* Krajnik _et al._ (2024b) Ž. Krajnik, J. Schmidt, V. Pasquier, T. Prosen, and E. Ilievski, Universal anomalous fluctuations in charged single-file systems, Phys. Rev. Res. 6, 013260 (2024b).
* Klich and Levitov (2009) I. Klich and L. Levitov, Quantum noise as an entanglement meter, Phys. Rev. Lett. 102, 100502 (2009).
* Song _et al._ (2010) H. F. Song, S. Rachel, and K. Le Hur, General relation between entanglement and fluctuations in one dimension, Phys. Rev. B 82, 012405 (2010).
* Song _et al._ (2012) H. F. Song, S. Rachel, C. Flindt, I. Klich, N. Laflorencie, and K. Le Hur, Bipartite fluctuations as a probe of many-body entanglement, Phys. Rev. B 85, 035409 (2012).
* Rachel _et al._ (2012) S. Rachel, N. Laflorencie, H. F. Song, and K. Le Hur, Detecting quantum critical points using bipartite fluctuations, Phys. Rev. Lett. 108, 116401 (2012).
* Parez _et al._ (2021) G. Parez, R. Bonsignori, and P. Calabrese, Quasiparticle dynamics of symmetry-resolved entanglement after a quench: Examples of conformal field theories and free fermions, Phys. Rev. B 103, L041104 (2021).
* Oshima and Fuji (2023) H. Oshima and Y. Fuji, Charge fluctuation and charge-resolved entanglement in a monitored quantum circuit with U(1) symmetry, Phys. Rev. B 107, 014308 (2023).
* Bertini _et al._ (2023a) B. Bertini, P. Calabrese, M. Collura, K. Klobas, and C. Rylands, Nonequilibrium full counting statistics and symmetry-resolved entanglement from space-time duality, Phys. Rev. Lett. 131, 140401 (2023a).
* Schönhammer (2007) K. Schönhammer, Full counting statistics for noninteracting fermions: Exact results and the Levitov-Lesovik formula, Phys. Rev. B 75, 205329 (2007).
* Collura _et al._ (2017) M. Collura, F. H. L. Essler, and S. Groha, Full counting statistics in the spin-1/2 Heisenberg XXZ chain, Journal of Physics A: Mathematical and Theoretical 50, 414002 (2017).
* Stéphan and Pollmann (2017) J.-M. Stéphan and F. Pollmann, Full counting statistics in the Haldane-Shastry chain, Phys. Rev. B 95, 035119 (2017).
* Najafi and Rajabpour (2017) K. Najafi and M. A. Rajabpour, Full counting statistics of the subsystem energy for free fermions and quantum spin chains, Phys. Rev. B 96, 235109 (2017).
* Arzamasovs and Gangardt (2019) M. Arzamasovs and D. M. Gangardt, Full counting statistics and large deviations in a thermal 1D Bose gas, Phys. Rev. Lett. 122, 120401 (2019).
* Calabrese _et al._ (2020) P. Calabrese, M. Collura, G. D. Giulio, and S. Murciano, Full counting statistics in the gapped XXZ spin chain, Europhysics Letters 129, 60007 (2020).
* Ares _et al._ (2021) F. Ares, M. A. Rajabpour, and J. Viti, Exact full counting statistics for the staggered magnetization and the domain walls in the XY spin chain, Phys. Rev. E 103, 042107 (2021).
* Smith _et al._ (2021) N. R. Smith, P. L. Doussal, S. N. Majumdar, and G. Schehr, Full counting statistics for interacting trapped fermions, SciPost Phys. 11, 110 (2021).
* McCulloch _et al._ (2023) E. McCulloch, J. De Nardis, S. Gopalakrishnan, and R. Vasseur, Full counting statistics of charge in chaotic many-body quantum systems, Phys. Rev. Lett. 131, 210402 (2023).
* Hercé _et al._ (2023) G. Hercé, J.-P. Bureik, A. Ténart, A. Aspect, A. Dareau, and D. Clément, Full counting statistics of interacting lattice gases after an expansion: The role of condensate depletion in many-body coherence, Phys. Rev. Res. 5, L012037 (2023).
* Bertini _et al._ (2023b) B. Bertini, K. Klobas, M. Collura, P. Calabrese, and C. Rylands, Dynamics of charge fluctuations from asymmetric initial states (2023b), arXiv:2306.12404 [cond-mat.stat-mech] .
* (110) See Supplemental Material for (I) Derivation of Eq. (2), (II) Derivation of Eq. (4), (III) Derivation of the two limiting formulae, (IV) Derivation of Eq. (6), (V) Derivation of Eq. (8), (VI) Derivation of Eq. (9), and (VII) Numerical perturbative calculation for the disorder potential.
* Flesch _et al._ (2008) A. Flesch, M. Cramer, I. P. McCulloch, U. Schollwöck, and J. Eisert, Probing local relaxation of cold atoms in optical superlattices, Phys. Rev. A 78, 033608 (2008).
* mat (2015) Chapter 8: Special functions, in _Table of Integrals, Series, and Products_, 8th ed., edited by D. Zwillinger, V. Moll, I. Gradshteyn, and I. Ryzhik (Academic Press, Boston, 2015), pp. 867–1013.
* Franchini (2017) F. Franchini, _An introduction to integrable techniques for one-dimensional quantum systems_ , Vol. 940 (Springer, 2017).
* Watson (1922) G. N. Watson, _A treatise on the theory of Bessel functions_ , Vol. 2 (The University Press, 1922).
* Ifantis and Siafarikas (1990) E. Ifantis and P. Siafarikas, Inequalities involving Bessel and modified Bessel functions, Journal of Mathematical Analysis and Applications 147, 214 (1990).
* Anderson and Qiu (1997) G. D. Anderson and S.-L. Qiu, A monotoneity property of the gamma function, Proc. Amer. Math. Soc. 125, 3355 (1997).
## Supplemental Material for “Exact solution of bipartite fluctuations in one-
dimensional fermions”
Kazuya Fujimoto and Tomohiro Sasamoto
Department of Physics, Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-
ku, Tokyo 152-8551, Japan
This Supplemental Material describes the following:
* (I) Derivation of Eq. (2),
* (II) Derivation of Eq. (4),
* (III) Derivation of the two limiting formulae,
* (IV) Derivation of Eq. (6),
* (V) Derivation of Eq. (8),
* (VI) Derivation of Eq. (9),
* (VII) Numerical perturbative calculation for the disorder potential.
## I Derivation of Eq. (2)
We derive the determinantal formula for the generating function, namely Eq.
(2) of the main text. The essential ingredient of the derivation is the fact
that the Wick theorem is applicable since the quantum state in our setup
is Gaussian. The detailed calculation is given by
$\displaystyle G_{M}(\lambda,t)$
$\displaystyle=\left\langle\prod_{j=0}^{M-1}e^{\lambda\hat{a}_{j}^{\dagger}\hat{a}_{j}}\right\rangle_{t}$
$\displaystyle=\left\langle\prod_{j=0}^{M-1}\biggl{[}1+(e^{\lambda}-1)\hat{a}_{j}^{\dagger}\hat{a}_{j}\biggl{]}\right\rangle_{t}$
$\displaystyle=1+\sum_{n=1}^{M}(e^{\lambda}-1)^{n}\sum_{\begin{subarray}{c}j_{1}<j_{2}<...<j_{n}\\\
j_{k}\in\\{0,1,2,...,M-1\\}\end{subarray}}\left\langle\prod_{k=1}^{n}\hat{a}_{j_{k}}^{\dagger}\hat{a}_{j_{k}}\right\rangle_{t}$
$\displaystyle=1+\sum_{n=1}^{M}(e^{\lambda}-1)^{n}\sum_{\begin{subarray}{c}j_{1}<j_{2}<...<j_{n}\\\
j_{k}\in\\{0,1,2,...,M-1\\}\end{subarray}}{\rm
det}\left[\braket{\hat{a}_{j_{k}}^{\dagger}\hat{a}_{j_{l}}}_{t}\right]_{k,l=1}^{n}$
$\displaystyle={\rm
det}\left[\delta_{j,k}+(e^{\lambda}-1)\braket{\hat{a}_{j}^{\dagger}\hat{a}_{k}}_{t}\right]_{j,k=0}^{M-1}.$
(S-1)
In the fourth line, we use the Wick theorem. This kind of determinantal form
was derived in several previous works Schönhammer (2007); Eisler and Rácz
(2013); Parez _et al._ (2021).
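The last equality of Eq. (S-1) is the standard expansion of a determinant in principal minors, $\det(I+zA)=\sum_{S}z^{|S|}\det A[S,S]$ with $z=e^{\lambda}-1$ and $A_{jk}=\braket{\hat{a}_{j}^{\dagger}\hat{a}_{k}}_{t}$. The following is a minimal numerical check of that identity (our own illustrative sketch, assuming NumPy; the matrix and the value of $z$ are arbitrary):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
M, z = 4, 0.7 - 0.2j
A = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))

# final line of Eq. (S-1): a single determinant over the full block
lhs = np.linalg.det(np.eye(M) + z * A)

# third/fourth lines of Eq. (S-1): expansion in principal minors
rhs = 1.0
for n in range(1, M + 1):
    for S in combinations(range(M), n):
        rhs += z ** n * np.linalg.det(A[np.ix_(S, S)])
```

The two quantities agree to machine precision, which is the step that turns the Wick-theorem expansion into the determinantal formula.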
## II Derivation of Eq. (4)
We express the variance $\sigma_{M}(t)^{2}$ in terms of the Bessel function $J_{n}(x)$
of the first kind. Using the properties of the generating function, we
obtain
$\displaystyle\sigma_{M}(t)^{2}$
$\displaystyle=\left.\dfrac{\partial^{2}}{\partial\lambda^{2}}G_{M}(\lambda,t)\right|_{\lambda=0}-\left(\left.\dfrac{\partial}{\partial\lambda}G_{M}(\lambda,t)\right|_{\lambda=0}\right)^{2}$
(S-2)
$\displaystyle=\sum_{k=0}^{M-1}\braket{\hat{a}_{k}^{\dagger}\hat{a}_{k}}_{t}-\sum_{j=0}^{M-1}\sum_{k=0}^{M-1}|\braket{\hat{a}_{j}^{\dagger}\hat{a}_{k}}_{t}|^{2}.$
(S-3)
The first term on the right-hand side of Eq. (S-3) becomes
$\displaystyle\sum_{k=0}^{M-1}\braket{\hat{a}_{k}^{\dagger}\hat{a}_{k}}_{t}=\frac{M}{2}-\frac{J_{0}(4t)}{2}\sum_{k=0}^{M-1}(-1)^{k},$
(S-4)
where we use the following expression of the two-point correlator,
$\displaystyle\braket{\hat{a}_{j}^{\dagger}\hat{a}_{k}}_{t}=\frac{\delta_{j,k}}{2}-\frac{{\rm
i}^{j+k}}{2}J_{k-j}(4t).$ (S-5)
Similarly, the second term on the right-hand side of Eq. (S-3) becomes
$\displaystyle\sum_{j=0}^{M-1}\sum_{k=0}^{M-1}|\braket{\hat{a}_{j}^{\dagger}\hat{a}_{k}}_{t}|^{2}$
$\displaystyle=\frac{M}{4}-\frac{J_{0}(4t)}{2}\sum_{k=0}^{M-1}(-1)^{k}+\frac{1}{4}\sum_{j=0}^{M-1}\sum_{k=0}^{M-1}J_{j-k}(4t)^{2}.$
(S-6)
The double summation on the right-hand side of Eq. (S-6) becomes
$\displaystyle\sum_{j=0}^{M-1}\sum_{k=0}^{M-1}J_{j-k}(4t)^{2}$
$\displaystyle=\sum_{l=0}^{M-1}\sum_{m=0}^{M-l-1}J_{l}(4t)^{2}+\sum_{l=-M+1}^{-1}\sum_{m=-l}^{M-1}J_{l}(4t)^{2}$
(S-7)
$\displaystyle=MJ_{0}(4t)^{2}+2\sum_{l=1}^{M-1}\left(M-l\right)J_{l}(4t)^{2}$
(S-8)
$\displaystyle=M\left(J_{0}(4t)^{2}+2\sum_{l=1}^{M-1}J_{l}(4t)^{2}\right)-2\sum_{l=1}^{M-1}lJ_{l}(4t)^{2}.$
(S-9)
In Eq. (S-7), we introduce $l=j-k$ and split the double summation into
two double summations. Figure S-1 schematically describes this procedure
in detail. We also use the identity $J_{-l}(4t)^{2}=J_{l}(4t)^{2}$ in Eq.
(S-8). Finally, putting Eq. (S-9) into Eq. (S-6), we get
$\displaystyle\sum_{j=0}^{M-1}\sum_{k=0}^{M-1}|\braket{\hat{a}_{j}^{\dagger}\hat{a}_{k}}_{t}|^{2}$
$\displaystyle=\frac{M}{4}-\frac{J_{0}(4t)}{2}\sum_{k=0}^{M-1}(-1)^{k}+\frac{M}{4}\left(J_{0}(4t)^{2}+2\sum_{l=1}^{M-1}J_{l}(4t)^{2}\right)-\dfrac{1}{2}\sum_{l=1}^{M-1}lJ_{l}(4t)^{2}.$
(S-10)
Substituting Eqs. (S-4) and (S-10) into (S-3), we derive Eq. (4) of the main
text, namely
$\displaystyle\sigma_{M}(t)^{2}=\frac{M}{4}\left(1-J_{0}(4t)^{2}-2\sum_{k=1}^{M-1}J_{k}(4t)^{2}\right)+\frac{1}{2}\sum_{k=1}^{M-1}kJ_{k}(4t)^{2}.$
(S-11)
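As a numerical sanity check (a sketch, assuming SciPy's `scipy.special.jv`, and using the correlator of Eq. (S-5)), one can compare the variance evaluated directly from Eq. (S-3) with the closed Bessel-sum form of Eq. (S-11):

```python
import numpy as np
from scipy.special import jv

def variance_direct(M, t):
    # Eq. (S-3) with the correlator of Eq. (S-5):
    # <a_j^dag a_k>_t = delta_{jk}/2 - i^{j+k}/2 * J_{k-j}(4t)
    j = np.arange(M)
    phase = 1j ** (j[:, None] + j[None, :])
    C = np.eye(M) / 2 - phase * jv(j[None, :] - j[:, None], 4 * t) / 2
    return np.trace(C).real - np.sum(np.abs(C) ** 2)

def variance_bessel(M, t):
    # Closed form of Eq. (S-11)
    k = np.arange(1, M)
    Jk2 = jv(k, 4 * t) ** 2
    return (M / 4) * (1 - jv(0, 4 * t) ** 2 - 2 * Jk2.sum()) + 0.5 * (k * Jk2).sum()

M, t = 40, 1.3
err = abs(variance_direct(M, t) - variance_bessel(M, t))
```

The two expressions agree identically for every finite $M$ and $t$, since Eq. (S-11) is an exact resummation, not an approximation.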
Figure S-1: Schematic illustration of the derivation of Eq. (S-7). The filled circles
denote the points of the double summation
$\sum_{j=0}^{M-1}\sum_{k=0}^{M-1}\bullet$. In the blue region specified by the
set $\\{(k,j)\in\\{0,1,...,M-1\\}^{2}\mid j\geq k\\}$, the $j$-intercept
$l=j-k$ can take the values from $\\{0,1,...,M-1\\}$. On the other hand, the
$j$-intercept takes the values from $\\{-M+1,-M+2,...,-1\\}$ in the red region
specified by the set $\\{(k,j)\in\\{0,1,...,M-1\\}^{2}\mid j<k\\}$. Using the
$j$-intercept $l$, we can transform the double summation of the left-hand side
of Eq. (S-7) into the right-hand side of Eq. (S-7).
## III Derivation of the two limiting formulae
In the derivation of Eq. (5), we use the following formulae,
$\displaystyle\lim_{M\rightarrow\infty}\frac{M}{4}\left(1-J_{0}(4t)^{2}-2\sum_{k=1}^{M-1}J_{k}(4t)^{2}\right)=0~{}~{}~{}(t>0),$
(S-12)
$\displaystyle\lim_{M\rightarrow\infty}\frac{1}{2}\sum_{k=1}^{M-1}kJ_{k}(4t)^{2}=4t^{2}\left(J_{0}(4t)^{2}+J_{1}(4t)^{2}\right)-tJ_{0}(4t)J_{1}(4t).$
(S-13)
In this section, we shall prove them.
### III.1 Proof of Eq. (S-12)
For the proof of Eq. (S-12), we first derive the following inequality,
$\displaystyle\frac{M}{4}\left|\left(1-J_{0}(4t)^{2}-2\sum_{k=1}^{M-1}J_{k}(4t)^{2}\right)\right|<\dfrac{1}{2}M^{(\gamma-1)M+1}(M+1)^{(\gamma-1)(M+1)+1}(2t)^{2M}$
(S-14)
for $t>0$. Here $\gamma\simeq 0.577$ is the Euler-Mascheroni constant. The
proof of Eq. (S-14) is described in the following. We employ the recursion
relation $2dJ_{n}(x)/dx=J_{n-1}(x)-J_{n+1}(x)$ for the Bessel function of the
first kind, deriving
$\displaystyle\frac{d}{dx}\sum_{n=1}^{M-1}J_{n}(x)^{2}$
$\displaystyle=\sum_{n=1}^{M-1}\left(J_{n}(x)J_{n-1}(x)-J_{n}(x)J_{n+1}(x)\right)$
(S-15) $\displaystyle=J_{1}(x)J_{0}(x)-J_{M-1}(x)J_{M}(x)$ (S-16)
$\displaystyle=-\frac{1}{2}\frac{d}{dx}J_{0}(x)^{2}-J_{M-1}(x)J_{M}(x),$
(S-17)
where we use the formula $J_{1}(x)=-dJ_{0}(x)/dx$ in the last equality.
Integrating Eq. (S-17) from $0$ to $x$ leads to
$\displaystyle
J_{0}(x)^{2}+2\sum_{k=1}^{M-1}J_{k}(x)^{2}-1=-2\int_{0}^{x}J_{M-1}(y)J_{M}(y)dy,$
(S-18)
where we use $J_{0}(0)=1$ and $J_{l}(0)=0~{}(l>0)$. We next consider the
absolute value of Eq. (S-18) and then obtain
$\displaystyle\left|\left(1-J_{0}(x)^{2}-2\sum_{k=1}^{M-1}J_{k}(x)^{2}\right)\right|$
(S-19) $\displaystyle\leq
2\int_{0}^{x}\left|J_{M-1}(y)\right|\left|J_{M}(y)\right|dy$ (S-20)
$\displaystyle<2M^{(\gamma-1)M+1}(M+1)^{(\gamma-1)(M+1)+1}\int_{0}^{x}\left(\frac{y}{2}\right)^{2M-1}\exp\left(-\frac{y^{2}}{4M}-\frac{y^{2}}{4M+4}\right)dy$
(S-21)
$\displaystyle<2M^{(\gamma-1)M+1}(M+1)^{(\gamma-1)(M+1)+1}\int_{0}^{x}\left(\frac{y}{2}\right)^{2M-1}dy$
(S-22)
$\displaystyle=2M^{(\gamma-1)M}(M+1)^{(\gamma-1)(M+1)+1}\left(\frac{x}{2}\right)^{2M}.$
(S-23)
In Eq. (S-21), we use the following inequality,
$\displaystyle
J_{M}(x)<\left(\frac{x}{2}\right)^{M}(M+1)^{(\gamma-1)(M+1)+1}\exp\left(-\frac{x^{2}}{4M+4}\right)$
(S-24)
for $x>0$. This inequality can be derived using the two inequalities given by
$\displaystyle
J_{\nu}(x)<\left(\frac{x}{2}\right)^{\nu}\frac{1}{\Gamma(\nu+1)}\exp\left(-\frac{x^{2}}{4\nu+4}\right)$
(S-25)
for $\nu\geq 0$ and $x>0$, and
$\displaystyle x^{(1-\gamma)x-1}<\Gamma(x)<x^{x-1}$ (S-26)
for $x\in(1,\infty)$. Here, $\Gamma(x)$ denotes the Gamma function. The
inequalities of Eqs. (S-25) and (S-26) were given in Refs. Watson (1922);
Ifantis and Siafarikas (1990) and Anderson and Qiu (1997), respectively. Thus,
employing the inequality of Eq. (S-23) with $x=4t$, we can readily derive the
inequality of Eq. (S-14).
Finally, we take the limit $M\rightarrow\infty$ in Eq. (S-14). For this
purpose, we consider the logarithm of the right-hand side of Eq. (S-14),
which reads
$\displaystyle-\log 2+\left[(\gamma-1)M+1\right]\log
M+\left[(\gamma-1)(M+1)+1\right]\log(M+1)+2M\log(2t).$ (S-27)
Thus, when $M$ is much larger than unity, the above expression for a fixed nonzero time
$t$ is approximated by $2(\gamma-1)M\log M$. This tends to $-\infty$ as
$M\rightarrow\infty$ because the Euler-Mascheroni constant $\gamma\simeq
0.577$ is smaller than unity. Hence, the left-hand side of Eq. (S-14)
vanishes as $M\rightarrow\infty$. This proves Eq. (S-12).
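The bound of Eq. (S-14) can also be inspected numerically (an illustrative sketch assuming SciPy; the values of $t$ and $M$ are arbitrary). Both sides shrink with $M$, and the right-hand side decays superexponentially:

```python
import numpy as np
from scipy.special import jv

gamma = np.euler_gamma          # Euler-Mascheroni constant, ~0.577
t = 1.0
lhs_vals, rhs_vals = [], []
for M in (5, 10, 20):
    k = np.arange(1, M)
    # left-hand side of Eq. (S-14)
    lhs = (M / 4) * abs(1 - jv(0, 4 * t) ** 2 - 2 * np.sum(jv(k, 4 * t) ** 2))
    # right-hand side of Eq. (S-14)
    rhs = 0.5 * M ** ((gamma - 1) * M + 1) \
              * (M + 1) ** ((gamma - 1) * (M + 1) + 1) * (2 * t) ** (2 * M)
    lhs_vals.append(lhs)
    rhs_vals.append(rhs)
```

For $t=1$ the right-hand side already drops by many orders of magnitude between $M=5$ and $M=20$, consistent with the $2(\gamma-1)M\log M$ behavior of its logarithm.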
### III.2 Proof of Eq. (S-13)
We shall prove Eq. (S-13) by noting the recursion relation
$2kJ_{k}(x)/x=J_{k-1}(x)+J_{k+1}(x)$ for the Bessel function of the first
kind. This relation leads to
$\displaystyle\frac{2k}{x}J_{k}(x)J_{k}(y)=J_{k-1}(x)J_{k}(y)+J_{k+1}(x)J_{k}(y),$
(S-28)
$\displaystyle\frac{2k}{y}J_{k}(x)J_{k}(y)=J_{k-1}(y)J_{k}(x)+J_{k+1}(y)J_{k}(x).$
(S-29)
Subtracting them and taking the summation for the index $k$ from zero to a
positive integer $N$, we obtain
$\displaystyle\left(\frac{2}{x}-\frac{2}{y}\right)\sum_{k=0}^{N}kJ_{k}(x)J_{k}(y)$
$\displaystyle=\sum_{k=1}^{N}(J_{k+1}(x)J_{k}(y)-J_{k+1}(y)J_{k}(x)+J_{k-1}(x)J_{k}(y)-J_{k-1}(y)J_{k}(x))$
(S-30)
$\displaystyle=J_{N+1}(x)J_{N}(y)-J_{N+1}(y)J_{N}(x)+J_{0}(x)J_{1}(y)-J_{0}(y)J_{1}(x),$
(S-31)
where we use the property of the telescoping series to derive the last
expression. Taking the limit $N\rightarrow\infty$, we get
$\displaystyle\sum_{k=0}^{\infty}kJ_{k}(x)J_{k}(y)=\frac{xy}{2}\frac{J_{0}(x)J_{1}(y)-J_{0}(y)J_{1}(x)}{y-x}.$
(S-32)
Finally, we take the limit $y\rightarrow x$ in Eq. (S-32) via L'Hôpital's
rule, obtaining
$\displaystyle\sum_{k=0}^{\infty}kJ_{k}(x)^{2}=\frac{x^{2}}{2}J_{0}(x)^{2}+\frac{x^{2}}{2}J_{1}(x)^{2}-\frac{x}{2}J_{0}(x)J_{1}(x).$
(S-33)
Here, we use the following formulae,
$\displaystyle\frac{d}{dx}J_{k+1}(x)=J_{k}(x)-\frac{k+1}{x}J_{k+1}(x),$ (S-34)
$\displaystyle\frac{d}{dx}J_{k-1}(x)=-J_{k}(x)+\frac{k-1}{x}J_{k-1}(x).$
(S-35)
Finally, we put $x=4t$ into Eq. (S-33), completing the proof of Eq. (S-13).
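The closed-form sum of Eq. (S-33) is easy to verify numerically (a sketch assuming SciPy; the sum is truncated at $k=199$, which is safe because $J_{k}(x)$ decays faster than exponentially once $k\gg x$):

```python
import numpy as np
from scipy.special import jv

errs = []
for x in (1.0, 4.0, 8.0):
    k = np.arange(0, 200)                 # J_k(x) is negligible for k >> x
    lhs = np.sum(k * jv(k, x) ** 2)       # left-hand side of Eq. (S-33)
    rhs = (x ** 2 / 2) * (jv(0, x) ** 2 + jv(1, x) ** 2) \
        - (x / 2) * jv(0, x) * jv(1, x)   # right-hand side of Eq. (S-33)
    errs.append(abs(lhs - rhs))
max_err = max(errs)
```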
## IV Derivation of Eq. (6)
We derive the asymptotic expressions of the variance given by Eq. (6) of the
main text for $t\ll 1$ and $t\gg 1$. For the short-time dynamics ($t\ll 1$),
the Bessel functions $J_{0}(4t)$ and $J_{1}(4t)$ of the first kind are
approximated by
$\displaystyle J_{0}(4t)\simeq 1,$ (S-36) $\displaystyle J_{1}(4t)\simeq 2t,$
(S-37)
which can be readily derived using the infinite series of $J_{n}(x)$.
Substituting Eqs. (S-36) and (S-37) into Eq. (5) of the main text, we obtain,
up to the leading order,
$\displaystyle\sigma(t)^{2}\simeq 2t^{2}.$ (S-38)
For the long-time dynamics ($t\gg 1$), we use the asymptotic formula mat
(2015) given by
$\displaystyle
J_{n}(4t)=\frac{\displaystyle\cos\left(4t-\frac{\pi}{2}n-\frac{\pi}{4}\right)}{\displaystyle\sqrt{2\pi
t}}\left(1-\frac{\Gamma(n+5/2)}{128\Gamma(n-3/2)}t^{-2}\right)-\frac{\displaystyle\sin\left(4t-\frac{\pi}{2}n-\frac{\pi}{4}\right)}{\displaystyle\sqrt{2\pi}t^{3/2}}\frac{\Gamma(n+3/2)}{8\Gamma(n-1/2)}+\mathcal{O}(t^{-7/2}).$
(S-39)
This expression leads to
$\displaystyle J_{0}(4t)^{2}$ $\displaystyle\simeq\dfrac{1}{2\pi
t}\cos\left(4t-\dfrac{\pi}{4}\right)^{2}+\dfrac{1}{64\pi
t^{2}}\cos\left(8t\right)+\dfrac{1}{2048\pi
t^{3}}\sin\left(4t-\dfrac{\pi}{4}\right)^{2}-\dfrac{9}{2048\pi
t^{3}}\cos\left(4t-\dfrac{\pi}{4}\right)^{2},$ (S-40) $\displaystyle
J_{1}(4t)^{2}$ $\displaystyle\simeq\dfrac{1}{2\pi
t}\sin\left(4t-\dfrac{\pi}{4}\right)^{2}-\dfrac{3}{64\pi
t^{2}}\cos\left(8t\right)+\dfrac{9}{2048\pi
t^{3}}\cos\left(4t-\dfrac{\pi}{4}\right)^{2}+\dfrac{15}{2048\pi
t^{3}}\sin\left(4t-\dfrac{\pi}{4}\right)^{2},$ (S-41) $\displaystyle
J_{0}(4t)J_{1}(4t)$ $\displaystyle\simeq-\dfrac{1}{8\pi
t}\cos\left(8t\right)+\dfrac{3}{64\pi
t^{2}}\cos\left(4t-\dfrac{\pi}{4}\right)^{2}+\dfrac{1}{64\pi
t^{2}}\sin\left(4t-\dfrac{\pi}{4}\right)^{2}.$ (S-42)
Employing these expressions and Eq. (5), we derive
$\displaystyle\sigma(t)^{2}\simeq\dfrac{2}{\pi}t-\dfrac{1}{64\pi
t}\left(2\sin(8t)+1\right).$ (S-43)
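Both asymptotic regimes can be checked against the exact infinite-$M$ variance obtained from Eqs. (S-12) and (S-13) (an illustrative sketch assuming SciPy; the sample times and tolerances are our choices):

```python
import numpy as np
from scipy.special import jv

def sigma2_exact(t):
    # infinite-M variance following from Eqs. (S-12) and (S-13)
    J0, J1 = jv(0, 4 * t), jv(1, 4 * t)
    return 4 * t ** 2 * (J0 ** 2 + J1 ** 2) - t * J0 * J1

t_small = 1e-3
short_err = abs(sigma2_exact(t_small) / (2 * t_small ** 2) - 1)   # vs Eq. (S-38)

t_large = 50.0
approx = 2 * t_large / np.pi \
       - (2 * np.sin(8 * t_large) + 1) / (64 * np.pi * t_large)   # Eq. (S-43)
long_err = abs(sigma2_exact(t_large) - approx)
```

The relative deviation from $2t^{2}$ vanishes as $t\to 0$, and the absolute deviation from Eq. (S-43) is of the order of the neglected $\mathcal{O}(t^{-2})$ corrections at large $t$.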
## V Derivation of Eq. (8)
We shall derive the explicit expression of the two-point correlator with the
incomplete alternating initial state $\hat{\rho}_{\rm alt}$ defined by Eq. (7)
of the main text. We first define the two-point correlator $C_{m,n}(t)$ as
$\displaystyle C_{m,n}(t)\coloneqq{\rm Tr}\left[e^{-{\rm
i}\hat{H}t}\hat{\rho}_{\rm alt}e^{{\rm
i}\hat{H}t}\hat{a}^{\dagger}_{m}\hat{a}_{n}\right].$ (S-44)
A straightforward calculation leads to the equation of motion for
$C_{m,n}(t)$,
$\displaystyle{\rm
i}\dfrac{d}{dt}C_{m,n}(t)=C_{m+1,n}(t)+C_{m-1,n}(t)-C_{m,n+1}(t)-C_{m,n-1}(t).$
(S-45)
To solve the differential equation, we expand the correlator via the discrete
Fourier transformation, getting
$\displaystyle
C_{m,n}(t)=\dfrac{1}{4L^{2}}\sum_{\alpha=-L}^{L-1}\sum_{\beta=-L}^{L-1}D_{\alpha,\beta}(t)e^{{\rm
i}\pi(n\beta-m\alpha)/L},$ (S-46)
where $D_{\alpha,\beta}(t)$ is the coefficient of the expansion. Substituting
Eq. (S-46) into Eq. (S-45), we readily derive
$\displaystyle{\rm
i}\dfrac{d}{dt}D_{\alpha,\beta}(t)=\mathcal{E}_{\alpha,\beta}D_{\alpha,\beta}(t)$
(S-47)
with $\mathcal{E}_{\alpha,\beta}\coloneqq
2\cos(\pi\alpha/L)-2\cos(\pi\beta/L)$. The initial condition
$D_{\alpha,\beta}(0)$ is calculated as
$\displaystyle D_{\alpha,\beta}(0)$
$\displaystyle=\sum_{m=-L}^{L-1}\sum_{n=-L}^{L-1}C_{m,n}(0)e^{{\rm
i}\pi(-n\beta+m\alpha)/L}$ (S-48) $\displaystyle=n_{\rm
even}\sum_{m=-L/2}^{L/2-1}e^{{\rm i}2\pi m(\alpha-\beta)/L}+n_{\rm
odd}\sum_{m=-L/2}^{L/2-1}e^{{\rm i}\pi(2m+1)(\alpha-\beta)/L}$ (S-49)
$\displaystyle=Ln_{\rm
even}(\delta_{\alpha,\beta}+\delta_{\alpha,\beta+L}+\delta_{\alpha,\beta-L})+Ln_{\rm
odd}(\delta_{\alpha,\beta}-\delta_{\alpha,\beta+L}-\delta_{\alpha,\beta-L}),$
(S-50)
where we use $-2L+1\leq\alpha\pm\beta\leq 2L-1$ since the values of $\alpha$
and $\beta$ are restricted to the first Brillouin zone. As for the initial
condition $C_{m,n}(0)$, by the definition of $\hat{\rho}_{\rm alt}$, we
obtain
$\displaystyle C_{m,n}(0)=\begin{dcases}n_{\rm even}&(n=m\wedge n~{}{\rm
is~{}even})\\\ n_{\rm odd}&(n=m\wedge n~{}{\rm is~{}odd})\\\ 0&({\rm
otherwise}),\end{dcases}$ (S-51)
where $n_{\rm even}$ and $n_{\rm odd}$ are averaged particle numbers at the
even and odd sites for the initial density matrix $\hat{\rho}_{\rm alt}$,
respectively. Solving the differential equation of Eq. (S-47), we get
$\displaystyle D_{\alpha,\beta}(t)$ $\displaystyle=D_{\alpha,\beta}(0)e^{-{\rm
i}\mathcal{E}_{\alpha,\beta}t}$ (S-52) $\displaystyle=Ln_{\rm
T}\delta_{\alpha,\beta}+Ln_{\rm
D}\left(\delta_{\alpha,\beta+L}+\delta_{\alpha,\beta-L}\right)e^{{\rm
i}4t\cos(\pi\beta/L)},$ (S-53)
where we define $n_{\rm T}\coloneqq n_{\rm even}+n_{\rm odd}$ and $n_{\rm
D}\coloneqq n_{\rm even}-n_{\rm odd}$. We substitute Eq. (S-53) into Eq.
(S-46), getting
$\displaystyle C_{m,n}(t)$
$\displaystyle=\dfrac{1}{4L^{2}}\sum_{\alpha=-L}^{L-1}\sum_{\beta=-L}^{L-1}D_{\alpha,\beta}(t)e^{{\rm
i}\pi(n\beta-m\alpha)/L}$ (S-54)
$\displaystyle=\dfrac{1}{4L}\sum_{\alpha=-L}^{L-1}\sum_{\beta=-L}^{L-1}\left(n_{\rm
T}\delta_{\alpha,\beta}+n_{\rm
D}\left(\delta_{\alpha,\beta+L}+\delta_{\alpha,\beta-L}\right)e^{{\rm
i}4t\cos(\pi\beta/L)}\right)e^{{\rm i}\pi(n\beta-m\alpha)/L}$ (S-55)
$\displaystyle=\dfrac{n_{\rm T}}{4L}\sum_{\alpha=-L}^{L-1}e^{{\rm
i}\pi(n-m)\alpha/L}+\dfrac{n_{\rm D}(-1)^{n}}{4L}\sum_{\alpha=0}^{L-1}e^{-{\rm
i}4t\cos(\pi\alpha/L)}e^{{\rm i}\pi(n-m)\alpha/L}+\dfrac{n_{\rm
D}(-1)^{n}}{4L}\sum_{\alpha=-L}^{-1}e^{-{\rm i}4t\cos(\pi\alpha/L)}e^{{\rm
i}\pi(n-m)\alpha/L}$ (S-56) $\displaystyle=\dfrac{n_{\rm
T}}{4L}\sum_{\alpha=-L}^{L-1}e^{{\rm i}\pi(n-m)\alpha/L}+\dfrac{n_{\rm
D}(-1)^{n}}{4L}\sum_{\alpha=-L}^{L-1}e^{-{\rm i}4t\cos(\pi\alpha/L)}e^{{\rm
i}\pi(n-m)\alpha/L}$ (S-57) $\displaystyle=\dfrac{n_{\rm
T}}{2}\delta_{m,n}+\dfrac{n_{\rm D}(-1)^{n}}{4L}\sum_{\alpha=-L}^{L-1}e^{-{\rm
i}4t\cos(\pi\alpha/L)}e^{{\rm i}\pi(n-m)\alpha/L},$ (S-58)
where we again use $-2L+1\leq\alpha\pm\beta\leq 2L-1$. Finally, we take the
thermodynamic limit ($L\rightarrow\infty$), obtaining
$\displaystyle C_{m,n}(t)$ $\displaystyle=\dfrac{n_{\rm
T}}{2}\delta_{m,n}+\dfrac{n_{\rm
D}(-1)^{n}}{4\pi}\int_{-\pi}^{\pi}d\theta~{}e^{{\rm i}(n-m)\theta-{\rm
i}4t\cos\theta}$ (S-59) $\displaystyle=\dfrac{n_{\rm
T}}{2}\delta_{m,n}+\dfrac{n_{\rm
D}(-1)^{m}}{4\pi}\int_{0}^{2\pi}d\theta~{}e^{{\rm i}(n-m)\theta+{\rm
i}4t\cos\theta}$ (S-60) $\displaystyle=\dfrac{n_{\rm
T}}{2}\delta_{m,n}+\dfrac{n_{\rm D}}{2}{\rm i}^{n+m}J_{n-m}(4t).$ (S-61)
In the last line, we use the integral formula for the Bessel function of the
first kind given by
$\displaystyle J_{n}(x)=\dfrac{1}{2\pi{\rm
i}^{n}}\int_{0}^{2\pi}d\theta~{}e^{{\rm i}n\theta+{\rm i}x\cos\theta}.$ (S-62)
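The integral representation of Eq. (S-62) can be verified by direct quadrature (a sketch assuming SciPy's `scipy.integrate.quad` and `scipy.special.jv`; the test orders and argument are arbitrary):

```python
import numpy as np
from scipy.special import jv
from scipy.integrate import quad

def bessel_via_integral(n, x):
    # Eq. (S-62): J_n(x) = (2*pi*i^n)^(-1) * int_0^{2pi} e^{i n th + i x cos th} dth
    re = quad(lambda th: np.cos(n * th + x * np.cos(th)), 0, 2 * np.pi)[0]
    im = quad(lambda th: np.sin(n * th + x * np.cos(th)), 0, 2 * np.pi)[0]
    return ((re + 1j * im) / (2 * np.pi * 1j ** n)).real

x = 3.7
max_err = max(abs(bessel_via_integral(n, x) - jv(n, x)) for n in range(5))
```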
## VI Derivation of Eq. (9)
We shall derive Eq. (9) of the main text using Eq. (8) of the main text. Note
that the initial state $\hat{\rho}_{\rm alt}$ is the Gaussian state and thus
we can use the Wick theorem. As a result, the variance is given by
$\displaystyle\sigma_{M}(t)^{2}=\sum_{m=0}^{M-1}C_{m,m}(t)-\sum_{m=0}^{M-1}\sum_{n=0}^{M-1}|C_{m,n}(t)|^{2}.$
(S-63)
Following almost the same procedure as in Sec. II, we obtain
$\displaystyle\sum_{m=0}^{M-1}C_{m,m}(t)$ $\displaystyle=\dfrac{M}{2}(n_{\rm
even}+n_{\rm odd})+\dfrac{1}{2}(n_{\rm even}-n_{\rm
odd})J_{0}(4t)\sum_{m=0}^{M-1}(-1)^{m},$ (S-64)
$\displaystyle\sum_{m=0}^{M-1}\sum_{n=0}^{M-1}|C_{m,n}(t)|^{2}$
$\displaystyle=\dfrac{M}{4}(n_{\rm even}+n_{\rm odd})^{2}+\dfrac{1}{2}(n_{\rm
even}^{2}-n_{\rm odd}^{2})J_{0}(4t)\sum_{m=0}^{M-1}(-1)^{m}$ (S-65)
$\displaystyle+\dfrac{1}{4}(n_{\rm even}-n_{\rm
odd})^{2}\sum_{m=0}^{M-1}\sum_{n=0}^{M-1}J_{n-m}^{2}(4t)$ (S-66)
$\displaystyle=\dfrac{M}{4}(n_{\rm even}+n_{\rm odd})^{2}+\dfrac{1}{2}(n_{\rm
even}^{2}-n_{\rm odd}^{2})J_{0}(4t)\sum_{m=0}^{M-1}(-1)^{m}$ (S-67)
$\displaystyle+\dfrac{M}{4}(n_{\rm even}-n_{\rm
odd})^{2}\left(J_{0}(4t)^{2}+2\sum_{m=1}^{M-1}J_{m}(4t)^{2}\right)-\dfrac{1}{2}(n_{\rm
even}-n_{\rm odd})^{2}\sum_{m=1}^{M-1}mJ_{m}(4t)^{2}.$ (S-68)
Thus, the variance becomes
$\displaystyle\sigma_{M}(t)^{2}$ $\displaystyle=\dfrac{M}{4}(n_{\rm
even}+n_{\rm odd})(2-n_{\rm even}-n_{\rm odd})+\dfrac{1}{2}(n_{\rm
even}-n_{\rm odd}-n_{\rm even}^{2}+n_{\rm
odd}^{2})J_{0}(4t)\sum_{m=0}^{M-1}(-1)^{m}$ (S-69)
$\displaystyle-\dfrac{M}{4}(n_{\rm even}-n_{\rm
odd})^{2}\left(J_{0}(4t)^{2}+2\sum_{m=1}^{M-1}J_{m}(4t)^{2}\right)+\dfrac{1}{2}(n_{\rm
even}-n_{\rm odd})^{2}\sum_{m=1}^{M-1}mJ_{m}(4t)^{2}.$ (S-70)
Next, we define $\delta\sigma_{M}(t)^{2}$ as
$\displaystyle\delta\sigma_{M}(t)^{2}\coloneqq\dfrac{M}{2}\bigl{(}n_{\rm
even}(1-n_{\rm even})+n_{\rm odd}(1-n_{\rm
odd})\bigl{)}+\dfrac{1}{2}J_{0}(4t)\bigl{(}n_{\rm even}(1-n_{\rm even})-n_{\rm
odd}(1-n_{\rm odd})\bigl{)}\sum_{m=0}^{M-1}(-1)^{m}.$ (S-71)
As a result, we can obtain
$\displaystyle\sigma_{M}(t)^{2}-\delta\sigma_{M}(t)^{2}=(n_{\rm even}-n_{\rm
odd})^{2}\left[\dfrac{M}{4}\left(1-J_{0}(4t)^{2}-2\sum_{m=1}^{M-1}J_{m}(4t)^{2}\right)+\dfrac{1}{2}\sum_{m=1}^{M-1}mJ_{m}(4t)^{2}\right].$
(S-72)
Finally, applying Eqs. (S-12) and (S-13) into the above, we derive Eq. (9) of
the main text, namely
$\displaystyle\lim_{M\rightarrow\infty}\bigl{(}\sigma_{M}(t)^{2}-\delta\sigma_{M}(t)^{2}\bigl{)}=(n_{\rm
even}-n_{\rm
odd})^{2}\bigl{[}4t^{2}\left(J_{0}(4t)^{2}+J_{1}(4t)^{2}\right)-tJ_{0}(4t)J_{1}(4t)\bigl{]}.$
(S-73)
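The convergence stated in Eq. (S-73) can be observed numerically by evaluating the finite-$M$ variance directly from the correlator of Eq. (S-61) and subtracting $\delta\sigma_{M}(t)^{2}$ (an illustrative sketch assuming SciPy; the fillings $n_{\rm even}$, $n_{\rm odd}$ and the values of $t$ and $M$ are our choices):

```python
import numpy as np
from scipy.special import jv

ne, no = 0.9, 0.2                 # chosen n_even, n_odd of the initial state
nT, nD = ne + no, ne - no
t, M = 1.5, 400                   # M large enough that the limit is reached

m = np.arange(M)
phase = 1j ** (m[:, None] + m[None, :])
C = (nT / 2) * np.eye(M) + (nD / 2) * phase * jv(m[None, :] - m[:, None], 4 * t)  # Eq. (S-61)

sigma2_M = np.trace(C).real - np.sum(np.abs(C) ** 2)         # Eq. (S-63)
S = np.sum((-1.0) ** m)
dsigma2_M = (M / 2) * (ne * (1 - ne) + no * (1 - no)) \
          + (jv(0, 4 * t) / 2) * (ne * (1 - ne) - no * (1 - no)) * S   # Eq. (S-71)

J0, J1 = jv(0, 4 * t), jv(1, 4 * t)
limit = nD ** 2 * (4 * t ** 2 * (J0 ** 2 + J1 ** 2) - t * J0 * J1)     # Eq. (S-73)
gap = abs((sigma2_M - dsigma2_M) - limit)
```

For $M=400$ the Bessel tails in Eq. (S-72) are negligible, so the finite-$M$ difference already agrees with the limit to numerical precision.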
## VII Numerical perturbative calculation for the disorder potential
We shall explain how to implement the numerical perturbative calculation for
the disorder potential. In the main text, we define the random potential
$V_{j}$, which is independently sampled from the uniform distribution on
$[-\Delta,\Delta]$. Here, $\Delta\geq 0$ denotes the strength of the
randomness. This definition is inconvenient for the perturbative
calculation in the disorder strength, so we introduce a new random
variable $v_{n}$, which is independently sampled from the uniform distribution
on $[-1,1]$. Then, the Hamiltonian $\hat{H}^{\prime}$ of Eq. (11)
in the main text is written as
$\displaystyle\hat{H}^{\prime}=-\sum_{j=-L}^{L-1}\left(\hat{a}_{j+1}^{\dagger}\hat{a}_{j}+\hat{a}_{j}^{\dagger}\hat{a}_{j+1}\right)+\Delta\sum_{j=-L}^{L-1}v_{j}\hat{n}_{j}.$
(S-74)
Using this notation, we can derive the equation of motion for the two-point
correlator $C_{m,n}(t)$
$\displaystyle{\rm
i}\dfrac{d}{dt}C_{m,n}(t)=C_{m+1,n}(t)+C_{m-1,n}(t)-C_{m,n+1}(t)-C_{m,n-1}(t)+\Delta(v_{n}-v_{m})C_{m,n}(t).$
(S-75)
In what follows, we regard $\Delta$ as a small parameter and construct the
perturbative equation of motion for the two-point correlator.
We expand the two-point correlator with the parameter $\Delta$ as
$\displaystyle C_{m,n}(t)=C_{m,n}^{(0)}(t)+\Delta
C_{m,n}^{(1)}(t)+\Delta^{2}C_{m,n}^{(2)}(t)+...~{}~{}.$ (S-76)
Substituting Eq. (S-76) into Eq. (S-75), we obtain the $\alpha$th-order
equation of motion as
$\displaystyle{\rm
i}\dfrac{d}{dt}C_{m,n}^{(\alpha)}(t)=C_{m+1,n}^{(\alpha)}(t)+C_{m-1,n}^{(\alpha)}(t)-C_{m,n+1}^{(\alpha)}(t)-C_{m,n-1}^{(\alpha)}(t)+(v_{n}-v_{m})C_{m,n}^{(\alpha-1)}(t),$
(S-77)
where we define $C_{m,n}^{(-1)}(t)\coloneqq 0$. To calculate the variance
up to the $\alpha$th order, we numerically solve Eq. (S-77) for
$C_{m,n}^{(0)}(t),C_{m,n}^{(1)}(t),...,C_{m,n}^{(\alpha)}(t)$ by the 4th-
order Runge-Kutta method, obtaining the $\alpha$th-order two-point correlator
as
$\displaystyle\bar{C}_{m,n}(t)\coloneqq\sum_{\beta=0}^{\alpha}\Delta^{\beta}C_{m,n}^{(\beta)}(t).$
(S-78)
Then, the variance $\sigma_{M}^{\prime}(t)^{2}$ defined in the main text can
be computed using
$\displaystyle\braket{\hat{N}_{M}^{\prime}}=\sum_{m=-M/2}^{M/2-1}\bar{C}_{m,m}(t),$
(S-79) $\displaystyle\braket{\hat{N}_{M}^{\prime
2}}=\braket{\hat{N}_{M}^{\prime}}+\braket{\hat{N}_{M}^{\prime}}^{2}-\sum_{m=-M/2}^{M/2-1}\sum_{n=-M/2}^{M/2-1}|\bar{C}_{m,n}(t)|^{2}.$
(S-80)
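The scheme above can be sketched in a few lines of Python (our own illustrative implementation, not the authors' code; the lattice size, disorder seed, $\Delta$, and 4th-order truncation are arbitrary choices). In matrix form, Eq. (S-77) reads ${\rm i}\,dC^{(\alpha)}/dt=[T,C^{(\alpha)}]+(C^{(\alpha-1)}V-VC^{(\alpha-1)})$ with $T$ the hopping matrix and $V={\rm diag}(v)$, and Eq. (S-75) itself is ${\rm i}\,dC/dt=[T-\Delta V,C]$, which provides an exact reference via a matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
L = 20                                                         # small illustrative lattice
T = np.diag(np.ones(L - 1), 1) + np.diag(np.ones(L - 1), -1)   # hopping matrix
v = rng.uniform(-1, 1, L)                                      # unit-strength disorder v_j
V = np.diag(v)
C0 = np.diag((np.arange(L) % 2 == 0).astype(complex))          # alternating occupation

def rhs(Cs):
    # Hierarchy of Eq. (S-77) in matrix form, order by order
    out, prev = [], np.zeros((L, L), dtype=complex)
    for C in Cs:
        out.append(-1j * (T @ C - C @ T + prev @ V - V @ prev))
        prev = C
    return out

def rk4_step(Cs, dt):
    k1 = rhs(Cs)
    k2 = rhs([C + dt / 2 * k for C, k in zip(Cs, k1)])
    k3 = rhs([C + dt / 2 * k for C, k in zip(Cs, k2)])
    k4 = rhs([C + dt * k for C, k in zip(Cs, k3)])
    return [C + dt / 6 * (a + 2 * b + 2 * c + d)
            for C, a, b, c, d in zip(Cs, k1, k2, k3, k4)]

order, delta, t, dt = 4, 0.3, 1.0, 0.01
Cs = [C0] + [np.zeros((L, L), dtype=complex) for _ in range(order)]
for _ in range(int(round(t / dt))):
    Cs = rk4_step(Cs, dt)
Cbar = sum(delta ** b * C for b, C in enumerate(Cs))           # Eq. (S-78)

# Exact reference: Eq. (S-75) is i dC/dt = [h, C] with h = T - delta*V,
# so C(t) = e^{-i h t} C(0) e^{i h t}
U = expm(-1j * (T - delta * V) * t)
Cexact = U @ C0 @ U.conj().T
err = np.max(np.abs(Cbar - Cexact))
```

For small $\Delta$ the truncated series tracks the exact correlator to within $\mathcal{O}(\Delta^{\alpha+1})$, and the variance then follows from Eqs. (S-79) and (S-80) applied to `Cbar`.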
Figure S-2 displays the numerical results of the variance for the 2nd-, 4th-,
and 6th-order perturbative calculations with $\Delta=1$. One can see that the
agreement of the perturbative calculations with the exact one becomes better
as the order $\alpha$ increases.
Figure S-2: Numerical results of the variance for the perturbative
calculations with $\Delta=1$. The parameters are the same as those used in
Fig. 3 of the main text. The pentagon, square, and triangle markers denote the
data of the variance $\sigma^{\prime}_{M}(t)^{2}$ for the $\alpha=2,4,$ and
$6$th-order perturbative calculations, respectively, and the circle marker
denotes the exact numerical data. Here, the exact result is numerically obtained
without using the perturbative calculation.
# $SU(2)$-bundles over highly connected $8$-manifolds
Samik Basu Stat Math Unit, Indian Statistical Institute Kolkata 700108, India
<EMAIL_ADDRESS>, Aloke Kr Ghosh Stat Math Unit, Indian Statistical
Institute Kolkata 700108, India<EMAIL_ADDRESS>and Subhankar Sau
Stat Math Unit, Indian Statistical Institute Kolkata 700108, India
<EMAIL_ADDRESS>
###### Abstract.
In this paper, we analyze the possible homotopy types of the total space of a
principal $SU(2)$-bundle over a $3$-connected $8$-dimensional Poincaré duality
complex. Along the way, we also classify the $3$-connected $11$-dimensional
complexes $E$ formed from a wedge of $S^{4}$ and $S^{7}$ by attaching an
$11$-cell.
###### Key words and phrases:
Poincaré duality complexes, principal bundles, sphere fibrations, loop spaces.
###### 2020 Mathematics Subject Classification:
Primary: 55R25, 57P10; Secondary: 57R19, 55P35.
## 1\. Introduction
This paper explores $SU(2)$-bundles over $8$-manifolds, aiming for results
akin to those about circle bundles over $4$-manifolds [8, 4]. In the case of
simply connected $4$-manifolds, the results are established by leveraging the
classification of simply connected $5$-manifolds achieved by Smale [12] and
Barden [3].
A circle bundle $S^{1}\to X\to M$ over a simply connected $4$-manifold $M$ is
classified by $\alpha\in H^{2}(M)$, the total space $X(\alpha)$ is simply
connected if $\alpha$ is primitive, and there are only two possibilities of
$X(\alpha)$ via the classification of simply connected $5$-manifolds.
Explicitly, we have [8, Theorem 2]
1. (1)
For every simply connected $4$-manifold $M$, there is a circle bundle
$\alpha$, such that $X(\alpha)$ is homotopy equivalent to a connected sum of
$S^{2}\times S^{3}$. If $M$ is spin, among primitive $\alpha$, this is the
only possibility.
2. (2)
For a simply connected $4$-manifold $M$ which is not spin and a circle bundle
$\alpha$ over it, $X(\alpha)$ is either homotopy equivalent to a connected sum
of $S^{2}\times S^{3}$, or to a connected sum of $S^{2}\times S^{3}$ and
another manifold $B$. The manifold $B$ is (up to diffeomorphism unique) a non-
spin simply connected $5$-manifold whose homology is torsion-free, and
$\mbox{Rank}(H_{2}(B))=1$.
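A familiar instance of case (2), with the trivial connected sum, is the Hopf fibration over the non-spin manifold $\mathbb{CP}^{2}$ (an example not taken from [8], added here for illustration):

```latex
S^{1}\longrightarrow S^{5}\longrightarrow \mathbb{CP}^{2},
```

classified by a generator $\alpha\in H^{2}(\mathbb{CP}^{2})\cong{\mathbb{Z}}$; here $\mbox{Rank}(H_{2}(\mathbb{CP}^{2}))=1$, and the total space $S^{5}$ is the empty connected sum of copies of $S^{2}\times S^{3}$.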
The results of Smale and Barden are geometric in nature, and do not generalize
easily to higher dimensions. Using homotopy theoretic methods, it was possible
to construct sphere fibrations [7] over highly connected Poincaré-duality
complexes possibly by inverting a few primes or in high enough rank. Among
these sphere fibrations, the only case where they could be principal bundles
was in dimension $8$, and the question whether they may be realized as such
was left unresolved.
In this paper, we consider principal $SU(2)$-bundles, noting that
$SU(2)=S^{3}$ is the only case apart from the circle where the sphere is a Lie
group. The base space of the $SU(2)$-bundle which is appropriate for making a
similar analysis is a highly connected $8$-manifold. More precisely, we
consider Poincaré duality complexes $M$ ($8$-dimensional) that are
$3$-connected. These are obtained by attaching a single $8$-cell to a bouquet
of $4$-spheres. We denote
${\mathcal{P}}{\mathcal{D}}_{3}^{8}=\mbox{ the collection of
}3\mbox{-connected }8\mbox{-dimensional Poincar\'{e} duality complexes. }$
The notation $M_{k}\in{\mathcal{P}}{\mathcal{D}}_{3}^{8}$ assumes that
$\mbox{Rank}(H_{4}(M_{k}))=k$. The attaching map of the $8$-cell is denoted by
$L(M_{k})$, and is of the form (once we have chosen a basis
$\\{\alpha_{1},\dots,\alpha_{k}\\}$ of $\pi_{4}(M_{k})\cong{\mathbb{Z}}^{k}$)
(1.1) ${}L(M_{k})=\sum_{1\leq i<j\leq
k}g_{i,j}[\alpha_{i},\alpha_{j}]+\sum_{i=1}^{k}g_{i,i}\nu_{i}+\sum_{i=1}^{k}l_{i}\nu^{\prime}_{i}.$
The matrix $\big{(}(g_{i,j})\big{)}$ is the matrix of the intersection form,
and hence, is invertible. The notation $\nu_{i}$ stands for
$\alpha_{i}\circ\nu$ and $\nu^{\prime}_{i}$ for $\alpha_{i}\circ\nu^{\prime}$.
Here $\nu$ is the Hopf map, and $\nu^{\prime}\in\pi_{7}(S^{4})$ is the
generator for the ${\mathbb{Z}}/(12)$ factor satisfying
$[\iota_{4},\iota_{4}]=2\nu+\nu^{\prime}$. For such complexes, we consider
${\mathcal{P}}(M_{k})=\mbox{ the set of principal }SU(2)\mbox{-bundles
}E(\psi)\stackrel{{\scriptstyle\psi}}{{\to}}M_{k}\mbox{ such that
}E(\psi)\mbox{ is }3\mbox{-connected}.$
The bundle $\psi$ is classified by a primitive element $\psi\in H^{4}(M_{k})$,
which satisfies a criterion (see Proposition 4.9). In this context, we first
encounter the question of whether ${\mathcal{P}}(M_{k})$ is non-empty. We prove
(see Proposition 4.3 and Proposition 5.2)
###### Theorem A.
For $k\geq 3$, the set ${\mathcal{P}}(M_{k})$ is non-empty.
For $k=2$, there are examples where ${\mathcal{P}}(M_{k})$ is empty. This
means that for every principal $SU(2)$-bundle over such complexes, the total
space has non-trivial $\pi_{3}$. The idea here is that the existence of $\psi$
is given by a certain equation in $k$ variables, and solutions exist once $k$
is large enough.
In the case of simply connected $4$-manifolds, the first kind of
classification of circle bundles is the result of Giblin [9], which states
$\mbox{ If }X=S^{2}\times S^{2},\mbox{ then }X(\alpha)\simeq S^{2}\times
S^{3}\mbox{ for any primitive }\alpha.$
We also have an analogous result in the $8$-dimensional case
$\mbox{ If }\psi\in{\mathcal{P}}(S^{4}\times S^{4}),\mbox{ then }E(\psi)\simeq
S^{4}\times S^{7}.$
In fact, this fits into a more general framework. We call a manifold
$M_{k}\in{\mathcal{P}}{\mathcal{D}}_{3}^{8}$ stably trivial if $L(M_{k})$ is
stably null-homotopic (that is, the stable homotopy class of
$L(M_{k}):S^{7}\to\big{(}S^{4}\big{)}^{\vee k}$ is $0$). In terms of (1.1),
this means for every $i$, $g_{i,i}-2l_{i}\equiv 0\pmod{24}$. We have the
following theorem (see Proposition 4.7)
###### Theorem B.
Suppose $M_{k}$ is stably trivial. Then, for every
$\psi\in{\mathcal{P}}(M_{k})$, $E(\psi)\simeq\\#^{k-1}S^{4}\times S^{7}$, a
connected sum of $k-1$ copies of $S^{4}\times S^{7}$.
This directly generalizes the result for circle bundles over simply connected
$4$-manifolds that are spin (identifying the spin manifolds as those whose
attaching map is stably null).
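The stable triviality condition above can be read off directly from (1.1): Whitehead products suspend to zero, and suspending the relation $[\iota_{4},\iota_{4}]=2\nu+\nu^{\prime}$ gives $\nu^{\prime}=-2\nu$ stably, so

```latex
\Sigma^{\infty}L(M_{k})
=\sum_{i=1}^{k}g_{i,i}\,\alpha_{i}\circ\nu+\sum_{i=1}^{k}l_{i}\,\alpha_{i}\circ\nu^{\prime}
=\sum_{i=1}^{k}(g_{i,i}-2l_{i})\,\alpha_{i}\circ\nu
\in\big({\mathbb{Z}}/(24)\big)^{\oplus k},
```

which vanishes exactly when $g_{i,i}-2l_{i}\equiv 0\pmod{24}$ for every $i$.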
We proceed towards a more general classification of the homotopy type of the
space $E(\psi)$ for $\psi\in{\mathcal{P}}(M_{k})$. Let
${\mathcal{P}}{\mathcal{D}}^{11}_{4,7}$ be the class of $3$-connected
$11$-dimensional Poincaré duality complexes $E$ such that
$E\setminus\\{pt\\}\simeq$ a wedge of $S^{4}$ and $S^{7}$. We first observe
that $E(\psi)\in{\mathcal{P}}{\mathcal{D}}_{4,7}^{11}$ (see Proposition 3.2),
and we try to address the question of the classification of complexes in
${\mathcal{P}}{\mathcal{D}}_{4,7}^{11}$ up to homotopy equivalence. The
homology of such complexes $E$ is given by
$H_{m}(E)\cong\begin{cases}{\mathbb{Z}}&m=0,11\\\ {\mathbb{Z}}^{r}&m=4,7\\\
0&\mbox{otherwise}.\end{cases}$
We denote the number $r$ by $\mbox{Rank}(E)$. The classification works
differently for $r=1$, and for $r\geq 2$. Table 1 lists the various
possibilities for $r=1$. For $r\geq 2$, $E$ is a connected sum of copies of
$S^{4}\times S^{7}$, and the complexes $E_{\lambda,\epsilon,\delta}$ defined
below. Note that
$\displaystyle\pi_{10}(S^{4}\vee S^{7})$
$\displaystyle\cong\pi_{10}(S^{4})\oplus\pi_{10}(S^{7})\oplus\pi_{10}(S^{10})$
$\displaystyle\cong{\mathbb{Z}}/(24)\\{x\\}\oplus{\mathbb{Z}}/(3)\\{y\\}\oplus{\mathbb{Z}}/(24)\\{\nu_{7}\\}\oplus{\mathbb{Z}}\\{[\iota_{4},\iota_{7}]\\}.$
Here, $x=\nu\circ\nu_{7}$ and $y=\nu^{\prime}\circ\nu_{7}$. Let
$\phi_{\lambda,\epsilon,\delta}=[\iota_{4},\iota_{7}]+\lambda(\iota_{7}\circ\nu_{7})+\epsilon(\iota_{4}\circ
x)+\delta(\iota_{4}\circ y),$
and define,
$E_{\lambda,\epsilon,\delta}=(S^{4}\vee
S^{7})\cup_{\phi_{\lambda,\epsilon,\delta}}D^{11}.$
The attaching map of the top cell of $E$ takes the form
$L(E):S^{10}\to\big{(}S^{4}\vee S^{7}\big{)}^{\vee r}.$
The stable homotopy class of $L(E)$ lies in
$\pi_{10}^{s}\Big{(}\big{(}S^{4}\vee S^{7}\big{)}^{\vee
r}\Big{)}\cong\big{(}{\mathbb{Z}}/(24)\\{\nu\\}\oplus{\mathbb{Z}}/(2)\\{\nu^{2}\\}\big{)}^{\oplus
r}.$
This takes the form $\lambda_{s}\beta\circ\nu+\epsilon_{s}\alpha\circ\nu^{2}$
for some $\beta\in\pi_{7}\Big{(}\big{(}S^{4}\vee S^{7}\big{)}^{\vee r}\Big{)}$
and $\alpha\in\pi_{4}\Big{(}\big{(}S^{4}\vee S^{7}\big{)}^{\vee r}\Big{)}$. Up
to a change of basis we may assume that $\lambda_{s}\mid 24$, and if
$\lambda_{s}$ is even, $\epsilon_{s}\in{\mathbb{Z}}/(2)$. These numbers are
invariant over the homotopy equivalence class of $E$, and are denoted by
$\lambda_{s}(E)$, and $\epsilon_{s}(E)$ (defined only if $\lambda_{s}(E)$ is
even). We use these invariants to classify the homotopy types of elements in
${\mathcal{P}}{\mathcal{D}}_{4,7}^{11}$ (see Theorem 2.17)
###### Theorem C.
Let $E\in{\mathcal{P}}{\mathcal{D}}^{11}_{4,7}$. Then the homotopy type of $E$
is determined by the following.
1. (1)
If $\lambda_{s}(E)$ is even and $\epsilon_{s}(E)=0$, then
$E\simeq\\#^{r-1}E_{0,0,0}\\#E_{\lambda_{s},\epsilon,\delta}\quad\text{where
}\epsilon\equiv 0\pmod{2}.$
2. (2)
If $\lambda_{s}(E)$ is even and $\epsilon_{s}(E)=1$, then
$\displaystyle
E\simeq\\#^{r-1}E_{0,0,0}\\#E_{\lambda_{s},\epsilon,\delta}\quad$
$\displaystyle\text{where }\epsilon\equiv 1\pmod{2}$ or $\displaystyle
E\simeq\\#^{r-2}E_{0,0,0}\\#E_{0,1,0}\\#E_{\lambda_{s},\epsilon,\delta}\quad$
$\displaystyle\text{where }\epsilon\equiv 0\pmod{2}.$
3. (3)
If $\lambda_{s}(E)$ is odd, then
$E\simeq\\#^{r-1}E_{0,0,0}\\#E_{\lambda_{s},\epsilon,\delta}\quad\text{or}\quad
E\simeq\\#^{r-2}E_{0,0,0}\\#E_{0,1,0}\\#E_{\lambda_{s},\epsilon,\delta}.$
Further given $\lambda_{s}$, the choices of $\epsilon$ and $\delta$ are those
which are mentioned in Table 1.
We see that in the list given in Table 1, for certain cases the homotopy type
of $E$ is determined by $\lambda_{s}(E)$ and $\epsilon_{s}(E)$. This happens
if $\lambda_{s}(E)=0$ or $12$. We also observe that the homotopy type of
$\Omega E$ depends only on the rank $r$. Now, we look at
$M_{k}\in{\mathcal{P}}{\mathcal{D}}_{3}^{8}$, and try to determine the set of
homotopy equivalence classes of $E(\psi)$ for $\psi\in{\mathcal{P}}(M_{k})$.
In this process, we determine a formula for
$\lambda(\psi):=\lambda_{s}(E(\psi))$ (Proposition 3.13), and using this we
determine the set of possible values of $\lambda(\psi)$ for
$\psi\in{\mathcal{P}}(M_{k})$. The stable homotopy class of $L(M_{k})$ lies in
$\pi_{7}^{s}\Big{(}(S^{4})^{\vee
k}\Big{)}\cong\big{(}{\mathbb{Z}}/(24)\\{\nu\\}\big{)}^{\oplus k}.$
This takes the form $\sigma_{s}\alpha\circ\nu$ for some
$\alpha\in\pi_{4}\big{(}(S^{4})^{\vee k}\big{)}$, and up to a change of basis
for $k\geq 2$, $\sigma(M_{k}):=\gcd(\sigma_{s},24)$ is an invariant of the
stable homotopy type of $M_{k}$. Other than $k$ and $\sigma(M_{k})$, the
explicit stable homotopy class of $\alpha$ above yields a linear map
$\tau:H^{4}(M_{k})\to{\mathbb{Z}}/(24)$ given by
$\tau(\psi)=\psi(\sigma_{s}\alpha)$. We use the invariants $k$,
$\sigma(M_{k})$, $\tau$, and the intersection form to completely determine the
possibilities of $\lambda(\psi)$ for $\psi\in{\mathcal{P}}(M_{k})$. (see
Theorem 3.14, Proposition 4.10, Theorem 4.11, Theorem 4.14, and Theorem 5.5)
###### Theorem D.
For any $\psi\in{\mathcal{P}}(M_{k})$, $\lambda(\psi)$ is a multiple of
$\sigma(M_{k})$ ($\pmod{24}$). Conversely, the multiples of $\sigma(M_{k})$
that may be achieved are described as follows
1. (1)
If the intersection form of $M_{k}$ is odd and $k\geq 7$, then
$\\{\lambda(\psi)\mid\psi\in{\mathcal{P}}(M_{k})\\}$ equals the set of
multiples of $\sigma(M_{k})\pmod{24}$.
2. (2)
If the intersection form of $M_{k}$ is even, each
$\psi\in{\mathcal{P}}(M_{k})$ satisfies $\epsilon_{s}(\psi)\equiv 0\pmod{2}$.
3. (3)
If $k\geq 7$, there are $\psi\in{\mathcal{P}}(M_{k})$ such that
$\lambda(\psi)=\sigma(M_{k})$, and also there are
$\psi\in{\mathcal{P}}(M_{k})$ such that $\lambda(\psi)=3\sigma(M_{k})$.
4. (4)
If $\sigma(M_{k})\equiv 2$ or $4\pmod{8}$ and $k\geq 5$, there is a
$\psi\in{\mathcal{P}}(M_{k})$ such that $\lambda(\psi)\equiv 0\pmod{8}$ if and
only if the complex satisfies hypothesis $(H_{8})$.
5. (5)
If $\sigma(M_{k})\equiv 2\pmod{8}$ for $k\geq 5$, there is a
$\psi\in{\mathcal{P}}(M_{k})$ such that $\lambda(\psi)\equiv 4\pmod{8}$ if and
only if the complex satisfies hypothesis $(H_{4})$.
For lower values of $k$, we do not get systematic results like the above. That
is, the set $\\{\lambda(\psi)\mid\psi\in{\mathcal{P}}(M_{k})\\}$ is not
completely determined by $\sigma(M_{k})$, $k$, $\tau$, and the intersection
form. Theorem D implies that there are certain $M_{k}$ with even intersection
form for which there is no $\psi\in{\mathcal{P}}(M_{k})$ such that
$E(\psi)\simeq\\#^{k-1}S^{4}\times S^{7}$. However, if the intersection form is
odd, then for $k\geq 7$ there is a principal bundle
$SU(2)\to\\#^{k-1}(S^{4}\times S^{7})\to M_{k}$.
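To illustrate the invariants in Theorem D in the simplest case, take $M_{2}=S^{4}\times S^{4}$, whose top cell attaches by $L(M_{2})=[\alpha_{1},\alpha_{2}]$:

```latex
\Sigma^{\infty}L(M_{2})=\Sigma^{\infty}[\alpha_{1},\alpha_{2}]=0
\;\Longrightarrow\;\sigma_{s}=0,\qquad
\sigma(M_{2})=\gcd(0,24)=24\equiv 0\pmod{24},
```

so $\lambda(\psi)\equiv 0\pmod{24}$ for every $\psi\in{\mathcal{P}}(S^{4}\times S^{4})$, consistent with $E(\psi)\simeq S^{4}\times S^{7}$, whose top cell attaches by $[\iota_{4},\iota_{7}]$ and which therefore has $\lambda_{s}=0$.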
###### 1.2.
Organization. In §2, we prove a classification result for certain
$3$-connected $11$-dimensional complexes, which yields Theorem C. In §3, we
prove formulas relating the stable homotopy type of the total space to that of
the base using the Chern character. The results for manifolds with even
intersection form are proved in §4, and those with odd intersection form in
§5.
## 2\. Homotopy classification of certain $3$-connected $11$-complexes
We study $3$-connected $11$-dimensional Poincaré duality complexes $E$ such
that $E\setminus\\{pt\\}$ is homotopy equivalent to a wedge of copies of $S^{4}$ and
$S^{7}$. We write ${\mathcal{P}}{\mathcal{D}}^{11}_{4,7}$ for the collection
of such complexes. Our target in this section is to analyze them up to
homotopy equivalence. We show that these are classified by numbers $\lambda$,
$\epsilon$ and $\delta$ which are explained in detail below.
###### 2.1.
The rank one case. Let $E\in{\mathcal{P}}{\mathcal{D}}^{11}_{4,7}$ be such
that $E\setminus\\{pt\\}\simeq S^{4}\vee S^{7}$, that is,
$\mbox{Rank}(H_{4}(E))=1$. The homotopy type of $E$ is determined by the
attaching map of the top cell, which is an element of
$\pi_{10}(S^{4}\vee
S^{7})\cong\pi_{10}(S^{4})\oplus\pi_{10}(S^{7})\oplus\pi_{10}(S^{10}).$
This must be of the form
(2.2)
${}\phi_{\lambda,\epsilon,\delta}=[\iota_{4},\iota_{7}]+\lambda(\iota_{7}\circ\nu_{7})+\epsilon(\iota_{4}\circ
x)+\delta(\iota_{4}\circ y).$
The total space associated with $\phi_{\lambda,\epsilon,\delta}$ is denoted by
$E_{\lambda,\epsilon,\delta}=(S^{4}\vee
S^{7})\cup_{\phi_{\lambda,\epsilon,\delta}}D^{11}.$
Note that as $(-\iota_{4})\circ\nu=\nu+\nu^{\prime}$, we have
$(-\iota_{4})\circ x=x+y.$ For any given $\lambda$, $\epsilon$, and $\delta$, we
observe the effect of the self homotopy equivalences on
$E_{\lambda,\epsilon,\delta}$ as follows:
$\displaystyle\iota_{4}\mapsto-\iota_{4},\quad\iota_{7}\mapsto-\iota_{7}$
$\displaystyle\implies
E_{\lambda,\epsilon,\delta}\xrightarrow[]{\simeq}E_{-\lambda,\epsilon,\epsilon-\delta},$
$\displaystyle\iota_{4}\mapsto\iota_{4},\quad\iota_{7}\mapsto\iota_{7}+a\iota_{4}\circ\nu$
$\displaystyle\implies
E_{\lambda,\epsilon,\delta}\xrightarrow[]{\simeq}E_{\lambda,\epsilon+(\lambda+2)a,\delta},$
$\displaystyle\iota_{4}\mapsto\iota_{4},\quad\iota_{7}\mapsto\iota_{7}+b\iota_{4}\circ\nu^{\prime}$
$\displaystyle\implies
E_{\lambda,\epsilon,\delta}\xrightarrow[]{\simeq}E_{\lambda,\epsilon-4b,\delta+(\lambda+1)b}.$
This leads us to homotopy equivalences between $E_{\lambda,\epsilon,\delta}$’s
depending on the choice of $\lambda\in\pi_{10}(S^{7})\cong{\mathbb{Z}}/{24}$.
Table 1 lists the different homotopy types in
${\mathcal{P}}{\mathcal{D}}^{11}_{4,7}$ of rank $1$.
$\lambda$ | $\\#E_{\lambda,\epsilon,\delta}$’s | $E_{\lambda,\epsilon,\delta}$’s
---|---|---
$0$ | $2$ | $E_{0,0,0}$, $E_{0,1,0}$
$1$ | $3$ | $E_{1,0,0}$, $E_{1,1,0}$, $E_{1,2,0}$
$2$ | $12$ | $E_{2,0,0}$, $E_{2,1,0}$, $E_{2,2,0}$, $E_{2,3,0}$, $E_{2,0,1}$, $E_{2,1,1}$,
| | $E_{2,2,1}$, $E_{2,3,1}$, $E_{2,0,2}$, $E_{2,1,2}$, $E_{2,2,2}$, $E_{2,3,2}$
$3$ | $1$ | $E_{3,0,0}$
$4$ | $6$ | $E_{4,0,0}$, $E_{4,1,0}$, $E_{4,2,0}$, $E_{4,3,0}$, $E_{4,4,0}$, $E_{4,5,0}$
$5$ | $3$ | $E_{5,0,0}$, $E_{5,0,1}$, $E_{5,0,2}$
$6$ | $4$ | $E_{6,0,0}$, $E_{6,1,0}$, $E_{6,2,0}$, $E_{6,3,0}$
$7$ | $3$ | $E_{7,0,0}$, $E_{7,1,0}$, $E_{7,2,0}$
$8$ | $6$ | $E_{8,0,0}$, $E_{8,1,0}$, $E_{8,0,1}$, $E_{8,1,1}$, $E_{8,0,2}$, $E_{8,1,2}$
$9$ | $1$ | $E_{9,0,0}$
$10$ | $12$ | $E_{10,0,0}$, $E_{10,1,0}$, $E_{10,2,0}$, $E_{10,3,0}$, $E_{10,4,0}$, $E_{10,5,0}$,
| | $E_{10,6,0}$, $E_{10,7,0}$, $E_{10,8,0}$, $E_{10,9,0}$, $E_{10,10,0}$, $E_{10,11,0}$
$11$ | $3$ | $E_{11,0,0}$, $E_{11,0,1}$, $E_{11,0,2}$
$12$ | $2$ | $E_{12,0,0}$, $E_{12,1,0}$
Table 1. Homotopy equivalence classes of $E_{\lambda,\epsilon,\delta}.$
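The counts in Table 1 can be double-checked mechanically: regard the three families of self homotopy equivalences above as generators acting on triples $(\lambda,\epsilon,\delta)\in{\mathbb{Z}}/24\times{\mathbb{Z}}/24\times{\mathbb{Z}}/3$ and count orbits. The short Python sketch below does this, assuming (as in the text) that these three families generate all identifications; it suffices to take $a=b=1$, since iterating each generator recovers all $a,b$.

```python
from itertools import product

# Self homotopy equivalences acting on (lambda, epsilon, delta)
# in Z/24 x Z/24 x Z/3, per the three families listed above (a = b = 1):
#   g1: (l, e, d) -> (-l, e, e - d)         (iota_4, iota_7 both negated)
#   g2: (l, e, d) -> (l, e + (l+2), d)      (iota_7 -> iota_7 + iota_4 o nu)
#   g3: (l, e, d) -> (l, e - 4, d + (l+1))  (iota_7 -> iota_7 + iota_4 o nu')
def g1(l, e, d): return ((-l) % 24, e, (e - d) % 3)
def g2(l, e, d): return (l, (e + l + 2) % 24, d)
def g3(l, e, d): return (l, (e - 4) % 24, (d + l + 1) % 3)

orbits = []
seen = set()
for s in product(range(24), range(24), range(3)):
    if s in seen:
        continue
    # BFS over the group generated by g1, g2, g3 (each generator has
    # finite order, so forward applications exhaust the orbit)
    orbit, stack = {s}, [s]
    while stack:
        t = stack.pop()
        for g in (g1, g2, g3):
            u = g(*t)
            if u not in orbit:
                orbit.add(u)
                stack.append(u)
    seen |= orbit
    orbits.append(orbit)

# Number of classes with a representative at each lambda in {0, ..., 12}
counts = {l: sum(1 for o in orbits if any(x[0] == l for x in o))
          for l in range(13)}
```

The resulting `counts` dictionary reproduces the second column of Table 1, namely $2,3,12,1,6,3,4,3,6,1,12,3,2$ for $\lambda=0,\dots,12$.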
###### 2.4.
A simplification of the attaching map. We simplify and reduce the attaching
map of the top cell of $E$.
###### Proposition 2.5.
Let $E\in{\mathcal{P}}{\mathcal{D}}_{4,7}^{11}$ with $\mbox{Rank}(E)=k-1$. The
attaching map $\phi$ of the top cell of $E$ as in (2.2) can be reduced, up to
homotopy, to the following form
$\phi=\sum_{i=1}^{k-1}[\iota_{4}^{i},\iota_{7}^{i}]+\sum_{i=1}^{k-1}\lambda_{i}\iota_{7}^{i}\circ\nu_{(7)}+\sum_{i=1}^{k-1}s_{i}\nu_{i}\circ\nu_{(7)}+\sum_{i=1}^{k-1}r_{i}\nu_{i}^{\prime}\circ\nu_{(7)}.$
###### Proof.
By the Hilton–Milnor decomposition, we have the following isomorphism
$\pi_{10}((S^{4}\vee S^{7})^{\vee(k-1)})\cong\pi_{10}(S^{4})^{\oplus(k-1)}\oplus\pi_{10}(S^{7})^{\oplus(k-1)}\oplus\pi_{10}(S^{7})^{\oplus{k-1\choose
2}}\oplus\pi_{10}(S^{10})^{\oplus{(k-1)\times(k-1)}}\oplus\pi_{10}(S^{10})^{\oplus{k-1\choose
3}}.$
We choose $\eta_{1},\dots,\eta_{k-1}\in\pi_{4}(E)$ and
$\gamma_{1},\dots,\gamma_{k-1}\in\pi_{7}(E)$ such that they correspond to the
homology generators, say $\tilde{\eta}_{1},\dots,\tilde{\eta}_{k-1}\in
H_{4}(E)$ and $\tilde{\gamma}_{1},\dots,\tilde{\gamma}_{k-1}\in H_{7}(E)$ such
that
(2.6)
${}\tilde{\eta}_{i}^{*}\cup\tilde{\gamma}_{j}^{*}=\begin{cases}1\quad\text{if
}i=j,\\\ 0\quad\text{if }i\neq j.\end{cases}$
Let $\tilde{f}\colon(S^{4}\vee S^{7})^{\vee k-1}\to E$ be the inclusion which
sends $\iota_{4}^{i}\mapsto\eta_{i}$ and $\iota_{7}^{i}\mapsto\gamma_{i}$ for
$1\leq i\leq k-1$. Then $\tilde{f}\circ\phi\in\pi_{10}(E)$, and its image under
the map $\rho\colon\pi_{10}(E)\rightarrow\pi_{9}(\Omega E)\rightarrow
H_{9}(\Omega E)$ is $0$ in the tensor algebra
$T(\tilde{\eta}_{1},\dots,\tilde{\eta}_{k-1},\tilde{\gamma}_{1},\dots,\tilde{\gamma}_{k-1})/(\sum_{i=1}^{k-1}[\tilde{\eta}_{i},\tilde{\gamma}_{i}])$.
The attaching map $\phi$ may a priori contain triple Whitehead products such as
$[\iota^{4}_{i},[\iota^{4}_{j},\iota^{4}_{\ell}]]$, Whitehead products of the
form $[\iota_{i}^{4},\iota_{j}^{7}]$ for $i\neq j$, and terms involving
$[\iota_{i}^{4},\iota_{j}^{4}]\circ\nu_{7}$ and
$[\iota_{i}^{4},\iota_{j}^{4}]\circ\nu_{7}^{\prime}$. The triple Whitehead
products map injectively to the loop homology, and hence cannot occur in the
attaching map. The cup product formula in (2.6) implies that there is no
Whitehead product of the form $[\iota_{i}^{4},\iota_{j}^{7}]$ for $i\neq j$.
If $[\iota_{i}^{4},\iota_{j}^{4}]\circ\nu_{7}$ and
$[\iota_{i}^{4},\iota_{j}^{4}]\circ\nu_{7}^{\prime}$ appear in the attaching
map, we update the map $\tilde{f}$ by appropriately sending
$\iota_{i}^{7}\mapsto\gamma_{i}-\eta_{j}\circ\nu^{\prime}$,
$\iota_{i}^{7}\mapsto\gamma_{i}-\eta_{j}\circ\nu$ and
$\iota_{j}^{7}\mapsto\gamma_{j}-[\eta_{i},\eta_{j}]$ to get the desired form
of the attaching map. ∎
We note that the composition of $\phi$ with the projection onto the wedge of
$7$-spheres is given by
$S^{10}\to(S^{4}\vee S^{7})^{\vee k-1}\to(S^{7})^{\vee k-1}$
which is an element of $(\pi_{10}S^{7})^{\oplus
k-1}\cong({\mathbb{Z}}/{24}\\{\nu\\})^{\oplus k-1}$. This can be calculated
using the real $e$-invariant, see [2]. We use this to reduce Proposition 2.5
to the case $\lambda_{i}=0$ for $i\leq k-2$.
###### Proposition 2.7.
Let $E\in{\mathcal{P}}{\mathcal{D}}^{11}_{4,7}$ and $\mbox{Rank}(E)=k-1$. Then
the attaching map $\phi$ of the top cell of $E$ can be reduced to the
following form
(2.8)
${}\phi=\sum_{i=1}^{k-1}[\iota_{4}^{i},\iota_{7}^{i}]+\lambda\iota_{7}^{k-1}\circ\nu_{(7)}+\sum_{i=1}^{k-1}\epsilon_{i}\nu_{i}\circ\nu_{(7)}+\sum_{i=1}^{k-1}\delta_{i}\nu_{i}^{\prime}\circ\nu_{(7)}.$
As a consequence
$E\simeq\\#_{i=1}^{k-2}E_{0,\epsilon_{i},\delta_{i}}\\#E_{\lambda,\epsilon_{k-1},\delta_{k-1}}$.
###### Proof.
Let $\tau\colon
H^{7}(E)\cong\mathbb{Z}\\{\tilde{\gamma}_{1}^{*},\tilde{\gamma}_{2}^{*},\dots,\tilde{\gamma}_{k-1}^{*}\\}\rightarrow\mathbb{Z}/24$
be the linear map defined by
$\tau(\tilde{\gamma}_{i}^{*})=e(r_{i}\circ\phi),\quad\quad\text{for }1\leq
i\leq k-1,$
where $e$ denotes the real $e$-invariant and $r_{i}:(S^{4}\vee
S^{7})^{\vee k-1}\to S^{7}$ is the retraction onto the $i$-th factor. Let
$\tilde{\tau}\colon H^{7}(E)\rightarrow\mathbb{Z}$ be the lift of $\tau$ and
$\lambda=\gcd(\tilde{\tau}(\tilde{\gamma}_{1}^{*}),\dots,\tilde{\tau}(\tilde{\gamma}_{k-1}^{*}))$.
Then we can change the generators
$\tilde{\gamma}_{1},\dots,\tilde{\gamma}_{k-1}$ such that
$\tilde{\tau}(\tilde{\gamma}_{i})=0$ for $1\leq i<k-1$ and
$\tilde{\tau}(\tilde{\gamma}_{k-1})=\lambda.$ So, for a suitable choice of
dual bases we can have $\phi$ as in (2.8). ∎
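As a concrete (hypothetical) instance of this change of generators: if $k-1=2$ and $(\tilde{\tau}(\tilde{\gamma}_{1}),\tilde{\tau}(\tilde{\gamma}_{2}))=(6,4)$, then $\lambda=\gcd(6,4)=2$, and the unimodular substitution

```latex
\tilde{\gamma}_{1}^{\prime}=2\tilde{\gamma}_{1}-3\tilde{\gamma}_{2},\qquad
\tilde{\gamma}_{2}^{\prime}=\tilde{\gamma}_{1}-\tilde{\gamma}_{2},\qquad
\det\begin{pmatrix}2&-3\\ 1&-1\end{pmatrix}=1,
```

gives $\tilde{\tau}(\tilde{\gamma}_{1}^{\prime})=2\cdot 6-3\cdot 4=0$ and $\tilde{\tau}(\tilde{\gamma}_{2}^{\prime})=6-4=2$, as required.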
###### 2.9.
A general classification up to homotopy. We now proceed to the classification
of elements in ${\mathcal{P}}{\mathcal{D}}^{11}_{4,7}$. In the connected sum
$E_{\lambda_{1},\epsilon_{1},\delta_{1}}\\#E_{\lambda_{2},\epsilon_{2},\delta_{2}}$,
we transform $\alpha_{1}=\alpha_{1}^{\prime}+\alpha_{2}^{\prime}$ and
$\alpha_{2}=\alpha_{2}^{\prime}$ by $A=\begin{pmatrix}1&1\\\
0&1\end{pmatrix}$. If $2|\lambda_{2}$, then we transform
$\beta_{1}=\beta_{1}^{\prime}-\epsilon_{1}\nu_{2}-\frac{\lambda_{2}\epsilon_{1}}{2}\nu_{2}^{\prime},\quad\beta_{2}=-\beta_{1}^{\prime}+\beta_{2}^{\prime}-\epsilon_{1}[\alpha_{1}^{\prime},\alpha_{2}^{\prime}].$
Hence by the following expression
$\displaystyle[\alpha_{1},\beta_{1}]+[\alpha_{2},\beta_{2}]+\lambda_{1}\beta_{1}\circ\nu_{(7)}+\lambda_{2}\beta_{2}\circ\nu_{7}+\epsilon_{1}x_{1}+\epsilon_{2}x_{2}+\delta_{1}y_{1}+\delta_{2}y_{2}$
$\displaystyle=$
$\displaystyle[\alpha_{1}^{\prime},\beta_{1}^{\prime}]+[\alpha_{2}^{\prime},\beta_{2}^{\prime}]+(\lambda_{1}-\lambda_{2})\beta_{1}^{\prime}\circ\nu_{(7)}+\lambda_{2}\beta_{2}^{\prime}\circ\nu_{7}+\epsilon_{1}x_{1}+(-\epsilon_{1}+\epsilon_{2}+2\lambda_{2}\epsilon_{1}-\lambda_{1}\epsilon_{1})x_{2}$
$\displaystyle\qquad+\delta_{1}y_{1}+(\delta_{1}+\delta_{2}+\lambda_{2}\epsilon_{1}(1+\lambda_{1}))y_{2},$
we conclude
(2.10)
${}E_{\lambda_{1},\epsilon_{1},\delta_{1}}\\#E_{\lambda_{2},\epsilon_{2},\delta_{2}}\simeq
E_{\lambda_{1}-\lambda_{2},\epsilon_{1},\delta_{1}}\\#E_{\lambda_{2},\epsilon_{2}+\epsilon_{1}(2\lambda_{2}-\lambda_{1}-1),\delta_{1}+\delta_{2}+(1+\lambda_{1})\epsilon_{1}\lambda_{2}}\quad\quad\text{when
}2|\lambda_{2}.$
###### Proposition 2.11.
For any unit $a\in{\mathbb{Z}}/24$, we have a homotopy equivalence
$E_{\lambda,\epsilon,\delta}\\#E_{0,0,0}\simeq\begin{cases}E_{a\lambda,\epsilon,\delta}\\#E_{0,0,0}\quad&\text{if
}a\equiv 1\pmod{3}\\\ E_{-a\lambda,\epsilon,\delta}\\#E_{0,0,0}\quad&\text{if
}a\equiv 2\pmod{3}.\end{cases}$
###### Proof.
We transform
$\displaystyle\alpha_{1}=a\alpha_{1}^{\prime}+b\alpha_{2}^{\prime}$
$\displaystyle\beta_{1}=a\beta_{1}^{\prime}-24\beta_{2}^{\prime}-b\epsilon\nu_{2}-24b\epsilon\nu_{1}$
$\displaystyle\alpha_{2}=24\alpha_{1}^{\prime}+a\alpha_{2}^{\prime}$
$\displaystyle\beta_{2}=-b\beta_{1}^{\prime}+a\beta_{2}^{\prime}-\epsilon
b[\alpha_{1}^{\prime},\alpha_{2}^{\prime}]$
with $a^{2}-24b=1$ and calculate
$[\alpha_{1},\beta_{1}]+[\alpha_{2},\beta_{2}]+\lambda\beta_{1}\circ\nu_{(7)}+\epsilon
x_{1}+\delta y_{1}$. This gives the homotopy equivalence
(2.12) ${}E_{\lambda,\epsilon,\delta}\\#E_{0,0,0}\\\ \simeq
E_{a\lambda,a^{2}\epsilon,a\delta+{{a}\choose
2}\epsilon}\\#E_{0,-b^{2}\epsilon-b\lambda\epsilon,b\delta+{{b}\choose
2}\epsilon}.$
This proves the proposition except for when $\lambda\equiv 0\pmod{2}$ and
$\epsilon,b\equiv 1\pmod{2}$. In that case, we have $E_{0,1,0}$ instead of
$E_{0,0,0}$ in the second component of the connected sum. From (2.10), we get
(2.13) ${}E_{\lambda,\epsilon,\delta}\\#E_{0,1,0}\simeq
E_{\lambda,\epsilon,\delta}\\#E_{0,1+\epsilon(-\lambda-1),\delta}$
which we apply following the equivalence in (2.12) for $\lambda$ even and
$\epsilon$, $b$ odd. This concludes the proof. ∎
The transformations above further simplify the possibilities of
$E\in{\mathcal{P}}{\mathcal{D}}^{11}_{4,7}$ listed in Proposition 2.7.
###### Proposition 2.14.
Let $E\in{\mathcal{P}}{\mathcal{D}}^{11}_{4,7}$ and $\mbox{Rank}(E)=k-1$. Then
$E\simeq\\#^{k-3}E_{0,0,0}\\#E_{0,\hat{\epsilon},0}\\#E_{\lambda,\epsilon,\delta}$
for some $\lambda,\epsilon\in\mathbb{Z}/24$, $\delta\in{\mathbb{Z}}/3$,
$\hat{\epsilon}\in{\mathbb{Z}}/2$.
###### Proof.
From Proposition 2.7, we have the attaching map $\phi$ as in (2.8). Repeated
use of the homotopy equivalences in (2.1) and (2.13) gives the reduced form
(2.8) as follows
$\phi=\sum_{i=1}^{k-1}[\iota_{4}^{i},\iota_{7}^{i}]+\lambda\iota_{7}^{k-1}\circ\nu_{(7)}+\hat{\epsilon}x_{k-2}+\epsilon
x_{k-1}+\delta y_{k-1},$
where $\lambda,\epsilon\in\mathbb{Z}/24$, $\delta\in{\mathbb{Z}}/3$,
$\hat{\epsilon}\in{\mathbb{Z}}/2$. ∎
###### Corollary 2.15.
Let $E\in{\mathcal{P}}{\mathcal{D}}^{11}_{4,7}$ and $\mbox{Rank}(E)=k-1$. Then
stably we have
(2.16) ${}\Sigma^{\infty}E\simeq\Sigma^{\infty}(S^{4}\vee
S^{7})^{\vee{k-2}}\vee\Sigma^{\infty}Cone(\lambda_{s}(E)\nu_{(11)}+\epsilon_{s}(E)x),$
where $x$ is the generator of the stable homotopy group
$\pi_{14}(S^{8})\cong\mathbb{Z}/2$ and $\lambda_{s}(E)\in\mathbb{Z}/24$,
$\epsilon_{s}(E)\in\mathbb{Z}/2.$ Moreover, for $k-1\geq 2$ if
$\lambda_{s}(E)\equiv 1\pmod{2}$ in (2.16), then
$\epsilon_{s}(E)=0\in\mathbb{Z}/2$.
###### Proof.
From (2.8) and Proposition 2.14,
$\Sigma^{4}\phi=\lambda\iota_{11}^{k-1}\circ\nu_{(11)}+\hat{\epsilon}\iota_{8}^{k-2}\circ\nu_{(11)}+\epsilon\iota_{8}^{k-1}\circ\nu_{(11)}.$
Note that $\nu_{(11)}\in\pi^{s}_{3}(S^{0})\simeq\mathbb{Z}/24$ and
$\Sigma^{4}(\nu\circ\nu_{(7)})\in\pi^{s}_{6}(S^{0})\simeq\mathbb{Z}/2$ are the
generators. If $\hat{\epsilon}=0\in\mathbb{Z}/2$, the result readily follows.
Otherwise, if $\epsilon=1,\hat{\epsilon}=1\in\mathbb{Z}/2$, we apply the
transformation $\iota_{8}^{k-2}+\iota^{k-1}_{8}\mapsto\iota^{k-1}_{8}.$ If
$\epsilon=0,\hat{\epsilon}=1\in\mathbb{Z}/2$, we interchange $\iota_{8}^{k-2}$
and $\iota_{8}^{k-1}$ to deduce the result. ∎
We see that the stable homotopy type of
$E\in{\mathcal{P}}{\mathcal{D}}^{11}_{4,7}$ is determined by $\lambda_{s}(E)$
which is a divisor of $24$ and $\epsilon_{s}(E)\in{\mathbb{Z}}/2$. The
following theorem classifies the different homotopy types of $E$ given the
values of $\lambda_{s}$ and $\epsilon_{s}$.
###### Theorem 2.17.
Let $E\in{\mathcal{P}}{\mathcal{D}}^{11}_{4,7}$ and $\mbox{Rank}(E)=k-1$. Then
depending on $\lambda_{s}=\lambda_{s}(E)$ and $\epsilon_{s}=\epsilon_{s}(E)$,
the homotopy type of $E$ is determined by the following.
1. (1)
If $\lambda_{s}$ is even and $\epsilon_{s}=0$, then
$E\simeq\\#^{k-2}E_{0,0,0}\\#E_{\lambda_{s},\epsilon,\delta}\quad\text{where
}\epsilon\equiv\epsilon_{s}\pmod{2}.$
2. (2)
If $\lambda_{s}$ is even and $\epsilon_{s}=1$, then
$\displaystyle
E\simeq\\#^{k-2}E_{0,0,0}\\#E_{\lambda_{s},\epsilon,\delta}\quad$
$\displaystyle\text{where }\epsilon\equiv 1\pmod{2}$ or $\displaystyle
E\simeq\\#^{k-3}E_{0,0,0}\\#E_{0,1,0}\\#E_{\lambda_{s},\epsilon,\delta}\quad$
$\displaystyle\text{where }\epsilon\equiv 0\pmod{2}.$
3. (3)
If $\lambda_{s}$ is odd, then
$E\simeq\\#^{k-2}E_{0,0,0}\\#E_{\lambda_{s},\epsilon,\delta}\quad\text{or}\quad
E\simeq\\#^{k-3}E_{0,0,0}\\#E_{0,1,0}\\#E_{\lambda_{s},\epsilon,\delta}.$
Further given $\lambda_{s}$, the choices of $\epsilon$ and $\delta$ are those
which are mentioned in Table 1.
###### Proof.
We write $Y_{1}=\\#^{k-2}E_{0,0,0}\\#E_{\lambda,\epsilon,\delta}$ and
$Y_{2}=\\#^{k-2}E_{0,0,0}\\#E_{\lambda,\epsilon^{\prime},\delta^{\prime}}$,
and let $Y_{1}\stackrel{{\scriptstyle f}}{{\to}}Y_{2}$ be a homotopy
equivalence with homotopy inverse $g$. We show that in this case the pair
$(\epsilon,\delta)$ is related to $(\epsilon^{\prime},\delta^{\prime})$ by the
transformations (2.1). By cellular approximation, the composite
$(S^{7})^{\vee k-1}\hookrightarrow Y_{1}\stackrel{{f}}{{\to}}Y_{2}$ factors,
uniquely up to homotopy, through a map
$\tilde{f}\colon(S^{7})^{\vee k-1}\to(S^{7}\vee S^{4})^{\vee k-1}$ into the
$7$-skeleton of $Y_{2}$, as in the following diagram.
$(S^{7})^{\vee k-1}\xrightarrow{\;\tilde{f}\;}(S^{7}\vee S^{4})^{\vee
k-1}\hookrightarrow
Y_{2}=\\#^{k-2}E_{0,0,0}\\#E_{\lambda,\epsilon^{\prime},\delta^{\prime}},\qquad\rho\colon(S^{7}\vee
S^{4})^{\vee k-1}\to(S^{7})^{\vee k-1},$
where the composite $(S^{7})^{\vee k-1}\hookrightarrow
Y_{1}=\\#^{k-2}E_{0,0,0}\\#E_{\lambda,\epsilon,\delta}\xrightarrow{\;f\;}Y_{2}$
agrees with $\tilde{f}$ followed by the skeleton inclusion.
Now we consider the composition $f_{7}\colon=\rho\circ\tilde{f}$ where $\rho$
is the projection map. From the stable homotopy type, we get an isomorphism
$f_{7}\colon\pi_{7}((S^{7})^{\vee
k-1})\cong{\mathbb{Z}}\\{\beta_{1},\dots,\beta_{k-1}\\}\xrightarrow{\;\cong\;}\pi_{7}((S^{7})^{\vee
k-1})\cong{\mathbb{Z}}\\{\gamma_{1},\dots,\gamma_{k-1}\\}$
where $f_{7}(\beta_{k-1})\equiv\gamma_{k-1}\pmod{24}$. Hence, the
corresponding matrix of $f_{7}$ is
$\equiv\begin{pmatrix}&&&0\\\ &A&&\vdots\\\ &&&0\\\ *&\dots&*&1\\\
\end{pmatrix}\pmod{24}$
for some $A\in GL_{k-2}({\mathbb{Z}})$. From the inverse homotopy equivalence
$g\colon Y_{2}\to Y_{1}$, we can construct a corresponding block matrix of
$g_{7}$ similar to that of $f_{7}$ for some $B\in GL_{k-2}({\mathbb{Z}})$
$\pmod{24}$. Through suitable pre-composition of $f_{7}$ and post-composition
of $g_{7}$ we may assume that $A=B=I$ where $I$ is the identity matrix in
$GL_{k-2}({\mathbb{Z}})$. Thus both $f_{7}$ and $g_{7}$ are compositions of
shearing maps with $\beta_{k-1}\mapsto\gamma_{k-1}$, that is, they are
compositions of maps associated to
$\beta_{i}\mapsto\gamma_{i}+c_{i}\gamma_{k-1}$ for some
$c_{i}\in{\mathbb{Z}}$.
We now consider the map $f$ on the $7$-skeleton
$f^{(7)}\colon(S^{4}\vee S^{7})^{\vee k-1}\to(S^{4}\vee S^{7})^{\vee k-1},$
where the source and target wedges include into $Y_{1}$ and $Y_{2}$ via
$\vee_{i=1}^{k-1}(\alpha_{i}\vee\beta_{i})$ and
$\vee_{i=1}^{k-1}(\eta_{i}\vee\gamma_{i})$ respectively, compatibly with
$f\colon Y_{1}\to Y_{2}$,
which takes the form
$\displaystyle\beta_{i}\mapsto\gamma_{i}+c_{i}\gamma_{k-1}+\sum_{j=1}^{k-1}a_{i,j}\nu_{j}+\sum_{j=1}^{k-1}a_{i,j}^{\prime}\nu_{j}^{\prime}+\sum_{\begin{subarray}{c}j=1\\\
\ell=1\\\ j\neq\ell\end{subarray}}^{\begin{subarray}{c}j=k-1\\\
\ell=k-1\end{subarray}}a_{i,j,\ell}[\eta_{j},\eta_{\ell}],$
$\displaystyle\alpha_{i}\mapsto\eta_{i},\quad\text{for }1\leq i\leq k-2,$
$\displaystyle\beta_{k-1}\mapsto\gamma_{k-1}+\sum_{j=1}^{k-1}b_{j}\nu_{j}+\sum_{j=1}^{k-1}b_{j}^{\prime}\nu_{j}^{\prime}+\sum_{\begin{subarray}{c}j=1\\\
\ell=1\\\ j\neq\ell\end{subarray}}^{\begin{subarray}{c}j=k-1\\\
\ell=k-1\end{subarray}}b_{j,\ell}[\eta_{j},\eta_{\ell}],$
$\displaystyle\alpha_{k-1}\mapsto\eta_{k-1}-\sum_{j=1}^{k-2}c_{j}\eta_{j}.$
As $f$ is a homotopy equivalence, we must have that the attaching map of the
$11$-cell of $Y_{1}$ must be carried by $f^{(7)}$ to the attaching map of
$Y_{2}$, that is, $f^{(7)}\circ L(Y_{1})\simeq L(Y_{2})$. We now look at the
coefficients of $\eta_{k-1}\circ x$ and $\eta_{k-1}\circ y$ that arise in
$f^{(7)}\circ L(Y_{1})$ and note that the only terms which contribute to these
coefficients are
$f^{(7)}([\alpha_{k-1},\beta_{k-1}]+\lambda_{s}\beta_{k-1}\circ\nu_{(7)}+\epsilon\alpha_{k-1}\circ
x+\delta\alpha_{k-1}\circ y).$
We now deduce
$\epsilon^{\prime}=(\lambda_{s}+2)b_{k-1}-4b^{\prime}_{k-1}\quad\quad\text{and}\quad\quad\delta^{\prime}=(\lambda_{s}+1)b_{k-1}^{\prime},$
which verifies the result for complexes of the type
$\\#^{k-2}E_{0,0,0}\\#E_{\lambda,\epsilon,\delta}$.
For the remaining cases, we follow the same argument with
$Y_{1}=\\#^{k-3}E_{0,0,0}\\#E_{0,1,0}\\#E_{\lambda_{s},\epsilon,\delta}$ with
$\epsilon$ even if $\lambda_{s}$ is, and
$Y_{2}=\\#^{k-3}E_{0,0,0}\\#E_{0,\hat{\epsilon},0}\\#E_{\lambda_{s},\epsilon^{\prime},\delta^{\prime}}$,
where $\hat{\epsilon}=0$ or $1$. Note that the only terms which contribute to
$\eta_{k-2}\circ x$ are
$f^{(7)}([\alpha_{k-1},\beta_{k-1}]+[\alpha_{k-2},\beta_{k-2}]+\alpha_{k-2}\circ
x+\lambda_{s}\beta_{k-1}\circ\nu_{(7)}+\epsilon\alpha_{k-1}\circ
x+\delta\alpha_{k-1}\circ y).$
A direct computation implies
(2.18) ${}\lambda_{s}b_{k-2}+\epsilon
c_{k-2}^{2}+1\equiv\hat{\epsilon}\pmod{2}.$
First let $\lambda_{s}$ be even and $\epsilon\equiv 0\pmod{2}$. This implies
$\hat{\epsilon}=1$. Finally, let $\lambda_{s}$ be odd. We look at the
coefficients of $[\eta_{k-1},[\eta_{k-2},\eta_{k-1}]]$ and
$[\eta_{k-2},\eta_{k-1}]\circ\nu_{(7)}$ in $f^{(7)}\circ L(Y_{1})$ $\pmod{2}$,
which are $b_{k-2,k-1}+c_{k-2}b_{k-1}-a_{k-2,k-1}$ and
$\lambda_{s}b_{k-2,k-1}+a_{k-2,k-1}+b_{k-2}-c_{k-2}b_{k-1}-\epsilon c_{k-2}$.
Since both these coefficients are zero, we have $b_{k-2}\equiv\epsilon
c_{k-2}\pmod{2}$. Using the relation (2.18), we observe that
$\hat{\epsilon}=1$. The conditions on $\epsilon^{\prime}$ and
$\delta^{\prime}$ are verified analogously as in the previous case. This
completes the proof of the various implications in the theorem. ∎
###### 2.19.
The loop space homotopy type. We study the loop space homotopy type of
$E\in{\mathcal{P}}{\mathcal{D}}^{11}_{4,7}$ with $E^{(7)}\simeq(S^{4}\vee
S^{7})^{\vee k-1}$ and show that it is independent of the parameters
$\lambda,\epsilon$, and $\delta$ occurring in the attaching map $\phi$ of $E$.
###### Theorem 2.20.
The homotopy type of the loop space of $E$ is a weak product of loop spaces on
spheres, and depends only on $k-1=\mbox{Rank}(H_{4}(E))$. In particular,
$\Omega E\simeq\Omega(\\#^{k-1}(S^{4}\times S^{7})).$
###### Proof.
This follows from the arguments in [5] and [6]. More explicitly, we first
compute the homology of $\Omega E$ via the cobar construction; see [1]. In this
case, $H_{*}(E)$ is a coalgebra which is quasi-isomorphic to $C_{*}(E)$, and
hence we may compute the cobar construction of $H_{*}(E)$ and deduce as in [6,
Proposition 2.2]
$H_{*}(\Omega E)\cong
T(a_{1},b_{1},\dots,a_{k-1},b_{k-1})/(\sum[a_{i},b_{i}])$
where $\rho(\alpha_{i})=a_{i}$ and $\rho(\beta_{i})=b_{i}$ with $\rho$ defined
as
$\rho\colon\pi_{r}(E)\cong\pi_{r-1}(\Omega E)\xrightarrow{Hur}H_{r-1}(\Omega
E).$
We then note that $H_{*}(\Omega E)$ is the universal enveloping algebra of the
graded Lie algebra $L(a_{1},b_{1},\dots,a_{k-1},b_{k-1})/(\sum[a_{i},b_{i}])$
where $L$ is the free Lie algebra functor. Now we apply the Poincaré-Birkhoff-
Witt theorem as in [6, Proposition 3.6] to deduce the result. ∎
## 3\. Stable homotopy type of the total space.
In this section, we examine the possible stable homotopy types of the total
space $E$ for a principal $SU(2)$-fibration over
$M_{k}\in{\mathcal{P}}{\mathcal{D}}^{8}_{3}$. We relate this to the stable
homotopy type of $M_{k}$.
Let $f\colon M_{k}\to{\mathbb{H}}P^{\infty}$ be a map such that
$\pi_{4}(f)\colon\pi_{4}(M_{k})\rightarrow\pi_{4}(\mathbb{H}P^{\infty})\cong\mathbb{Z}$
is surjective. This ensures that the homotopy fibre $E(f)$ is $3$-connected
and is a Poincaré duality complex of dimension $11$. One easily deduces
$H_{i}(E(f))=\begin{cases}\mathbb{Z}&\quad i=0,11\\\
\mathbb{Z}^{\oplus{k-1}}&\quad i=4,7\\\
0&\quad\text{otherwise}\end{cases}$
from the Serre spectral sequence associated to the fibration $S^{3}\to E(f)\to
M_{k}$. We may now consider a minimal CW-complex structure on $E:=E(f)$ with
$(k-1)$ $4$-cells, $(k-1)$ $7$-cells, and one $11$-cell, see [10, Section
2.2]. The $7$-th skeleton $E^{(7)}$ is, therefore, a pushout
(3.1)
$({D^{7}})^{\vee{(k-1)}}\longleftarrow(S^{6})^{\vee{(k-1)}}\xrightarrow{\vee_{i=1}^{k-1}\phi_{i}}(S^{4})^{\vee{(k-1)}},$
with $E^{(7)}$ the resulting pushout.
We now observe that the $\phi_{i}$ are all $0$.
###### Proposition 3.2.
The maps $\phi_{i}\simeq 0$ for $1\leq i\leq k-1$.
###### Proof.
Note that the homotopy class of each of the attaching maps is in
$\pi_{6}(S^{4})^{\oplus{k-1}}$ which lies in the stable range. Applying the
$\Sigma^{\infty}$ functor on the diagram (3.1), we get the cofibre sequence
$\Sigma^{\infty}(S^{6})^{\vee{(k-1)}}\xrightarrow{\Sigma^{\infty}(\vee_{i=1}^{k-1}\phi_{i})}\Sigma^{\infty}(S^{4})^{\vee{(k-1)}}\rightarrow\Sigma^{\infty}E^{(7)}$
which in turn induces a long exact sequence (on the stable homotopy groups)
(3.3)
${}\dots\rightarrow\pi_{7}^{s}(E^{(7)})\rightarrow\pi_{6}^{s}(S^{6})^{\oplus
k-1}\xrightarrow{\Phi}\pi_{6}^{s}(S^{4})^{\oplus
k-1}\rightarrow\pi_{6}^{s}(E^{(7)})\rightarrow\dots$
where $\Phi$ is the induced map of $\vee_{i=1}^{k-1}\phi_{i}$. We have the
following commutative diagram
$\begin{array}{ccccc}
&&\pi_{6}^{s}(S^{4})^{\oplus(k-1)}&\longrightarrow&\pi^{s}_{6}(E)\cong\pi^{s}_{6}(E^{(7)})\\\
&&\downarrow&&\downarrow\\\
0=\pi^{s}_{6}(S^{7})&\longrightarrow&\pi_{6}^{s}(S^{4})^{\oplus k}&\longrightarrow&\pi_{6}^{s}(M_{k}),
\end{array}$
where the vertical maps are induced by the projection $E\to M_{k}$.
The second row is part of a long exact sequence, so the map
$\pi_{6}^{s}(S^{4})^{\oplus k}\to\pi^{s}_{6}(M_{k})$ is injective. Hence the
map $\pi_{6}^{s}(S^{4})^{\oplus{(k-1)}}\rightarrow\pi_{6}^{s}(E)$ is
injective, and in (3.3) $\Phi$ is forced to be $0$. The result follows. ∎
Proposition 3.2 implies that $E$ is the pushout of the diagram
${D^{11}}\longleftarrow S^{10}\xrightarrow{L(E)}(S^{4}\vee S^{7})^{\vee{(k-1)}}$
for some $[L(E)]\in\pi_{10}((S^{4}\vee S^{7})^{\vee(k-1)})$. Hence, $E$
belongs to ${\mathcal{P}}{\mathcal{D}}^{11}_{4,7}$. We consider
$S^{10}\xrightarrow{L(E)}(S^{4}\vee S^{7})^{\vee{(k-1)}}\longrightarrow(S^{7})^{\vee{(k-1)}}$
which is of the form
$\sum_{i=1}^{k-1}\lambda_{i}\iota^{i}_{7}\circ\nu_{7}\in\pi_{10}(S^{7})^{\oplus{k-1}}\cong\bigoplus_{i=1}^{k-1}\mathbb{Z}/24\\{\nu_{7}\\}.$
The coefficients $\lambda_{i}$ can be computed via the $e$-invariant, see [2].
Recall that the $e$-invariant of a map $g\colon S^{11}\to S^{8}$ can be
computed using the Chern character. The complex $K$-theoretic $e$-invariant
$e_{{\mathbb{C}}}$ is computed via the diagram
$\begin{array}{ccccccccc}
0&\to&\tilde{K}(S^{12})&\to&\tilde{K}(\Sigma C_{g})&\to&\tilde{K}(S^{8})&\to&0\\\
&&\downarrow{\scriptstyle ch}&&\downarrow{\scriptstyle ch}&&\downarrow{\scriptstyle ch}&&\\\
0&\to&\tilde{H}^{ev}(S^{12};\mathbb{Q})&\to&\tilde{H}^{ev}(\Sigma C_{g};\mathbb{Q})&\to&\tilde{H}^{ev}(S^{8};\mathbb{Q})&\to&0,
\end{array}$
where $\tilde{K}(S^{12})\cong{\mathbb{Z}}\\{b_{12}\\}$,
$\tilde{K}(S^{8})\cong{\mathbb{Z}}\\{b_{8}\\}$,
$\tilde{H}^{ev}(S^{12};\mathbb{Q})\cong{\mathbb{Q}}\\{a_{12}\\}$, and
$\tilde{H}^{ev}(S^{8};\mathbb{Q})\cong{\mathbb{Q}}\\{a_{8}\\}$.
We obtain
$ch(b_{12})=a_{12},\quad ch(b_{8})=a_{8}+r{a}_{12}.$
If $g=\lambda\Sigma\nu_{(7)}$,
$e_{{\mathbb{C}}}(g)=r=\frac{\lambda}{12}\in{\mathbb{Q}}/{\mathbb{Z}}$, see
[2]. We also have $e_{{\mathbb{C}}}=2e$, where $e$ is computed using the Chern
character of the complexification $c\colon KO\to K$. Therefore, from [2,
Proposition 7.14]
(3.4) ${}b_{8}\in Im(c)\implies e(g)=\frac{r}{2}\in{\mathbb{Q}}/{\mathbb{Z}}.$
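As a consistency check (our arithmetic, not a statement from the source), combining (3.4) with the value $e_{\mathbb{C}}(g)=\lambda/12$ computed above for $g$ a multiple of the suspended $\nu_{(7)}$:

```latex
% If e_C(g) = r = lambda/12 and b_8 lies in Im(c), then by (3.4)
e(g) \;=\; \frac{r}{2} \;=\; \frac{\lambda}{24} \in \mathbb{Q}/\mathbb{Z}.
```

This denominator $24$ is the one appearing in the congruences in the proof of Theorem 3.14.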
###### 3.5.
$K$-theory of $M_{k}$. Consider the Atiyah-Hirzebruch spectral sequence
$E_{2}^{**}=H^{*}(M_{k};\pi_{*}K)\implies K^{*}(M_{k}).$
As $M_{k}$ has only even-dimensional cells, the spectral sequence has no
non-trivial differentials for degree reasons. This gives the additive
structure of $K^{0}(M_{k})$. Let
$H^{4}(M_{k})\cong{\mathbb{Z}}\\{\psi_{1},\dots,\psi_{k}\\}$ and
$H^{8}(M_{k})\cong{\mathbb{Z}}\\{z\\}$. Note that if
$\alpha_{1},\dots,\alpha_{k}\in\pi_{4}(M_{k})\cong
H_{4}(M_{k})\cong{\mathbb{Z}}^{k}$ are dual to the basis
$\psi_{1},\dots,\psi_{k}$, then in the expression (1.1), the matrix
$\big{(}(g_{i,j})\big{)}$ of the intersection form is related to the cup
product via the equation $\psi_{i}\cup\psi_{j}=g_{i,j}z$. Let
$\tilde{\psi_{1}},\dots,\tilde{\psi_{k}},\tilde{z}$ be classes in $K(M_{k})$
corresponding to $\psi_{1},\dots,\psi_{k},z$ respectively, in the
$E^{\infty}$-page. We have the diagram
$\begin{array}{ccccccccc}
0&\to&\tilde{K}^{0}(S^{8})&\xrightarrow{q^{*}}&\tilde{K}^{0}(M_{k})&\xrightarrow{i^{*}}&\tilde{K}^{0}((S^{4})^{\vee{k}})&\to&0\\\
&&\downarrow{\scriptstyle ch}&&\downarrow{\scriptstyle ch}&&\downarrow{\scriptstyle ch}&&\\\
0&\to&\tilde{H}^{ev}(S^{8};\mathbb{Q})&\to&\tilde{H}^{ev}(M_{k};\mathbb{Q})&\to&\tilde{H}^{ev}((S^{4})^{\vee{k}};\mathbb{Q})&\to&0
\end{array}$
where
(3.6) ${}ch(\tilde{z})=z,\quad ch(i^{*}(\tilde{\psi_{j}}))={\psi_{j}},\
\text{and hence }ch(\tilde{\psi_{j}})={\psi_{j}}+\tau_{j}z\text{ for some
}\tau_{j}\in\mathbb{Q},\quad 1\leq j\leq k.$
Note that in terms of the formula (1.1), $\psi_{i}\psi_{j}=g_{i,j}z$. We now
use the fact that $ch$ is a ring map to get
$ch(\tilde{\psi_{i}}\tilde{\psi_{j}})=(\psi_{i}+\tau_{i}z)\cup(\psi_{j}+\tau_{j}z)=\psi_{i}\cup\psi_{j}=g_{i,j}z.$
As $ch\colon K(M_{k})\to H^{ev}(M_{k};{\mathbb{Q}})$ is injective, we deduce
$\tilde{\psi_{i}}\tilde{\psi_{j}}=g_{i,j}\tilde{z}$.
Further let $q_{i}\colon(S^{4})^{\vee k}\to S^{4}$ be the retraction onto the
$i$-th factor, and note that $q_{i}\circ L(M)$ is stably equivalent to
$(g_{i,i}-2l_{i})\alpha_{i}\circ\nu_{(7)}$. Thus the $e$-invariant
$e_{{\mathbb{C}}}(q_{i}\circ
L(M))=(g_{i,i}-2l_{i})e_{{\mathbb{C}}}(\Sigma\nu_{(7)})=\frac{g_{i,i}-2l_{i}}{12}\in\mathbb{Q}/\mathbb{Z}.$
We summarize these observations in the following proposition.
###### Proposition 3.7.
Let $M_{k}\in{\mathcal{P}}{\mathcal{D}}^{8}_{3}$ with $L(M_{k})$ as in (1.1).
Then,
${K}^{0}(M_{k})\cong{\mathbb{Z}}\\{1,\tilde{\psi}_{1},\tilde{\psi}_{2},\dots,\tilde{\psi_{k}},\tilde{z}\\}$.
The ring structure is given by
$\tilde{\psi}_{i}\tilde{\psi}_{j}=g_{i,j}\tilde{z}\quad\text{ for }1\leq
i,j\leq k,$
and
$e_{{\mathbb{C}}}(q_{i}\circ
L(M))=\frac{g_{i,i}-2l_{i}}{12}\in\mathbb{Q}/\mathbb{Z}\quad\text{ for }1\leq
i\leq k.$
###### 3.8.
$K$-theory of $E(f)$. The space $E:=E(f)$ is the total space of the sphere
bundle associated to the quaternionic line bundle classified by $f$. We note
that the quaternionic line bundle has a complex structure and therefore a
$K$-orientation.
Let $\gamma_{\mathbb{H}}$ be the canonical $\mathbb{H}$-bundle over
$\mathbb{H}P^{\infty}$. The $K$-theoretic Thom class of $\gamma_{\mathbb{H}}$
is given by $\gamma_{\mathbb{H}}-2\in\tilde{K}^{0}(\mathbb{H}P^{\infty})$. As
the total space of the sphere bundle is contractible, the Thom space
$Th(\gamma_{{\mathbb{H}}})\simeq{\mathbb{H}}P^{\infty}$. Consider the map
$\pi\colon\mathbb{C}P^{\infty}\to\mathbb{H}P^{\infty}$. The pullback bundle
$\pi^{*}(\gamma_{\mathbb{H}})=\gamma_{{\mathbb{C}}}\oplus\bar{\gamma}_{{\mathbb{C}}}$,
where $\gamma_{{\mathbb{C}}}$ is the canonical line bundle over
${{\mathbb{C}}}P^{\infty}$. Therefore,
$\pi^{*}ch(\gamma_{\mathbb{H}}-2)=ch(\pi^{*}(\gamma_{\mathbb{H}}-2))=ch(\gamma_{\mathbb{C}})+ch(\bar{\gamma}_{\mathbb{C}})-2=e^{x}+e^{-x}-2,$
where $H^{\ast}({\mathbb{C}}P^{\infty};{\mathbb{Z}})\cong{\mathbb{Z}}[x]$ and
$x=c_{1}(\gamma_{\mathbb{C}})$. Since $\pi^{*}$ is injective on $H^{*}$, we
may use this formula to deduce
(3.9)
${}ch(\Phi_{K}(\gamma_{\mathbb{H}}))=\Phi_{H}(\gamma_{\mathbb{H}})(1+\frac{y}{12}+\frac{y^{2}}{360}+\dots)$
where $H^{*}({\mathbb{H}}P^{\infty})\cong{\mathbb{Z}}[y]$ and
$\pi^{*}(y)=x^{2}$. We use the Thom isomorphism associated to
$f^{*}(\gamma_{{\mathbb{H}}})$ to deduce the following.
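The coefficients in (3.9) can be recovered from the Taylor series of $e^{x}+e^{-x}-2$ computed above; the following expansion (a routine check, not spelled out in the source) explains the constants $12$ and $360$:

```latex
e^{x}+e^{-x}-2 \;=\; 2\cosh x-2
 \;=\; x^{2}+\frac{x^{4}}{12}+\frac{x^{6}}{360}+\cdots
 \;=\; x^{2}\Bigl(1+\frac{x^{2}}{12}+\frac{x^{4}}{360}+\cdots\Bigr).
```

Since $\pi^{*}(y)=x^{2}$ and $\pi^{*}$ is injective on $H^{*}$, dividing by the Thom class contribution $x^{2}$ yields the series $1+\frac{y}{12}+\frac{y^{2}}{360}+\cdots$ of (3.9).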
###### Proposition 3.10.
Assume that $f^{*}(y)=\psi_{k}$. Then,
$\tilde{K}^{0}(Th(f^{*}(\gamma_{\mathbb{H}})))\cong\tilde{K}^{0}(M)\\{\Phi_{K}(f^{*}\gamma_{\mathbb{H}})\\}$
as a $\tilde{K}^{0}(M)$-module, and
$ch(\Phi_{K}(f^{*}\gamma_{\mathbb{H}}))=\Phi_{H}(f^{*}\gamma_{\mathbb{H}})(1+\frac{\psi_{k}}{12}+\frac{\psi_{k}^{2}}{360}+\dots).$
###### Proof.
From the naturality of the Chern character as well as the Thom class, we have
$ch(\Phi_{K}(f^{*}\gamma_{\mathbb{H}}))=ch(f^{*}\Phi_{K}(\gamma_{\mathbb{H}}))=f^{*}ch(\Phi_{K}(\gamma_{\mathbb{H}})).$
The result follows from (3.9) and the fact $f^{*}(y)=\psi_{k}$. ∎
###### Notation 3.11.
Suppose we are in the situation of Proposition 3.10, that is,
${f^{*}(y)=\psi_{k}}$. We now assume that the basis
$\\{\psi_{1},\dots,\psi_{k}\\}$ is such that one of the following cases occurs:
* Case 1
If $\psi_{k}\cup\psi_{k}=\pm z$, then $\psi_{j}\psi_{k}=0$ for $1\leq j\leq
k-1$.
* Case 2
If $\psi_{k}\cup\psi_{k}=g_{k,k}z$ for some integer $g_{k,k}\neq\pm 1,$ then
assume $\psi_{k-1}\cup\psi_{k}=z$ and $\psi_{j}\cup\psi_{k}=0$ for $1\leq j\leq k-2$.
In terms of this notation, we prove the following calculation. Here we
consider the cofibre sequence
$E\rightarrow M_{k}\rightarrow Th(f^{*}(\gamma_{\mathbb{H}}))\rightarrow\Sigma
E\rightarrow\Sigma M\rightarrow\dots$
which exhibits $K(\Sigma E)$ as a submodule of
$K(Th(f^{*}\gamma_{{\mathbb{H}}}))$ because $K(\Sigma M_{k})=0$. The following
proposition identifies this submodule.
###### Proposition 3.12.
(1) Suppose we are in Case 1 of Notation 3.11, then we have
$\tilde{K}^{0}(\Sigma
E)\cong\mathbb{Z}\\{\Phi_{K}(f^{*}(\gamma_{\mathbb{H}}))\tilde{\psi}_{1},\dots,\Phi_{K}(f^{*}(\gamma_{\mathbb{H}}))\tilde{\psi}_{k-1},\Phi_{K}(f^{*}(\gamma_{\mathbb{H}}))\tilde{z}\\}.$
(2) Suppose we are in Case 2 of Notation 3.11, then
$\tilde{K}^{0}(\Sigma
E)\cong\mathbb{Z}\\{\Phi_{K}(f^{*}(\gamma_{\mathbb{H}}))\tilde{\psi}_{1},\dots,\Phi_{K}(f^{*}(\gamma_{\mathbb{H}}))\tilde{\psi}_{k-2},\Phi_{K}(f^{*}(\gamma_{\mathbb{H}}))(\tilde{\psi}_{k}-g_{k,k}\tilde{\psi}_{k-1}),\Phi_{K}(f^{*}(\gamma_{\mathbb{H}}))\tilde{z}\\}.$
###### Proof.
We have the short exact sequence
$0\longrightarrow\tilde{K}^{0}(\Sigma E)\longrightarrow\tilde{K}^{0}(Th(f^{*}\gamma_{\mathbb{H}}))\xrightarrow{s_{0}}\tilde{K}^{0}(M)\longrightarrow 0,$
which implies that
$\tilde{K}^{0}(\Sigma
E)=\mbox{Ker}(s_{0}\colon\tilde{K}^{0}(Th(f^{*}\gamma_{\mathbb{H}}))\rightarrow\tilde{K}^{0}(M))$
where $s_{0}$ is the restriction along the zero section. Note that $s_{0}$ is
a $\tilde{K}^{0}(M)$-module map. Hence,
$\displaystyle
s_{0}(\Phi_{K}(f^{*}\gamma_{\mathbb{H}})\tilde{z})=e_{K}(f^{*}\gamma_{\mathbb{H}})\tilde{z}=0\quad\text{and}\quad
s_{0}(\Phi_{K}(f^{*}\gamma_{\mathbb{H}})\tilde{\psi}_{i})=e_{K}(f^{*}\gamma_{\mathbb{H}})\tilde{\psi}_{i}=g_{i,k}\tilde{z},\quad
1\leq i\leq k,$
since $e_{K}(f^{*}\gamma_{\mathbb{H}})=\tilde{\psi}_{k}+m\tilde{z}$ for some
$m$. The result follows from a direct calculation of the kernel and the
assumptions in the respective cases. ∎
We now choose the various maps $\chi_{j}\colon S^{7}\to E$ for $1\leq j\leq
k-1$ such that on $K$-theory they represent precisely the first $(k-1)$
elements of the basis in Proposition 3.12. In these terms we calculate
the $e_{{\mathbb{C}}}$-value of the composite
$r_{j}\circ L(E)\colon S^{10}\stackrel{{\scriptstyle L(E)}}{{\to}}(S^{4}\vee
S^{7})^{\vee k-1}\stackrel{{\scriptstyle r_{j}}}{{\to}}S^{7}$
where $r_{j}$ is the restriction onto the $j$-th factor, and $L(E)$ is the
attaching map of the top cell of $E$.
###### Proposition 3.13.
(1) If we are in Case 1, then
$e_{{\mathbb{C}}}(r_{j}\circ
L(E))=\tau_{j}=\frac{g_{j,j}-2l_{j}}{12}\in\mathbb{Q}/\mathbb{Z},1\leq j\leq
k-1.$
(2) If we are in Case 2, then
$e_{{\mathbb{C}}}(r_{j}\circ
L(E))=\tau_{j}=\frac{g_{j,j}-2l_{j}}{12}\in\mathbb{Q}/\mathbb{Z},1\leq j\leq
k-2;\quad e_{{\mathbb{C}}}(r_{k-1}\circ
L(E))=\tau_{k}-g_{k,k}\tau_{k-1}\in\mathbb{Q}/\mathbb{Z}.$
###### Proof.
For the $e$-invariant, we calculate the Chern character
$ch\colon\tilde{K}^{0}(Th(f^{*}\gamma_{\mathbb{H}}))\rightarrow
H^{ev}(Th(f^{*}\gamma_{\mathbb{H}});\mathbb{Q})$ in terms of (3.6) as follows
$\displaystyle ch(\Phi_{K}(f^{*}\gamma_{\mathbb{H}})\tilde{z})$
$\displaystyle=\Phi_{H}(f^{*}\gamma_{\mathbb{H}})(1+\frac{\psi_{k}}{12}+\dots)z=\Phi_{H}(f^{*}\gamma_{\mathbb{H}})z,$
$\displaystyle ch(\Phi_{K}(f^{*}\gamma_{\mathbb{H}})\tilde{\psi}_{j})$
$\displaystyle=\Phi_{H}(f^{*}\gamma_{\mathbb{H}})(1+\frac{\psi_{k}}{12}+\dots)(\psi_{j}+\tau_{j}z)$
$\displaystyle=\Phi_{H}(f^{*}\gamma_{\mathbb{H}})\psi_{j}+\tau_{j}\Phi_{H}(f^{*}\gamma_{\mathbb{H}})z,$
$\displaystyle
ch(\Phi_{K}(f^{*}\gamma_{\mathbb{H}})(\tilde{\psi}_{k}-g_{k,k}\tilde{\psi}_{k-1}))$
$\displaystyle=\Phi_{H}(f^{*}\gamma_{\mathbb{H}})(1+\frac{\psi_{k}}{12}+\dots)(\psi_{k}+\tau_{k}z-g_{k,k}\psi_{k-1}-g_{k,k}\tau_{k-1}z)$
$\displaystyle=\Phi_{H}(f^{*}\gamma_{\mathbb{H}})(\psi_{k}-g_{k,k}\psi_{k-1})+\Phi_{H}(f^{*}\gamma_{\mathbb{H}})(\tau_{k}-g_{k,k}\tau_{k-1})z.$
Reading off the coefficient of $\Phi_{H}(f^{*}\gamma_{\mathbb{H}})z$ in each
case gives the stated $e_{{\mathbb{C}}}$-values. ∎
We now turn our attention to the attaching map
$S^{10}\stackrel{{\scriptstyle L(E)}}{{\to}}(S^{4}\vee S^{7})^{\vee
k-1}\to{(S^{7})}^{\vee k-1}.$
In order to identify the composite we are required to compute the
$KO$-theoretic $e$-invariant. We know that
$\displaystyle KO^{*}=\mathbb{Z}[\eta,u][\mu^{\pm 1}]/(2\eta,\eta^{3},\eta
u,u^{2}-4\mu)\quad\text{and}\quad K^{*}=\mathbb{Z}[\beta^{\pm 1}],$
with $|\eta|=-1,~{}|u|=-4,~{}|\mu|=-8$ and $|\beta|=-2$. The complexification
map $c\colon KO\rightarrow K$ induces a graded ring homomorphism $c\colon
KO^{*}(X)\rightarrow K^{*}(X)$ with
$c(\eta)=0,\quad c(u)=2\beta^{2},\quad c(\mu)=\beta^{4}.$
###### Theorem 3.14.
Let $S^{3}\rightarrow E\rightarrow M_{k}$ be the principal $SU(2)$-fibration
classified by a given map $f\colon M_{k}\rightarrow\mathbb{H}P^{\infty}$.
Suppose that $\Sigma L(M_{k})\equiv 0\pmod{\lambda}$. Then $\Sigma(r_{j}\circ
L(E))\equiv 0\pmod{\lambda},1\leq j\leq k-1$ where $r_{j}:(S^{4}\vee
S^{7})^{\vee k-1}\to S^{7}$ is the retraction onto the $j$-th factor.
###### Proof.
We consider the Atiyah-Hirzebruch spectral sequences for
${\mathbb{H}}P^{\infty}$ and $M_{k}$
$\displaystyle E^{*,*}_{2}$
$\displaystyle=H^{*}(\mathbb{H}P^{\infty};\mathbb{Z})\otimes
KO^{*}(pt)\implies KO^{*}({\mathbb{H}}P^{\infty}),$ $\displaystyle E^{*,*}_{2}$
$\displaystyle=H^{*}(M_{k};\mathbb{Z})\otimes KO^{*}(pt)\implies
KO^{*}(M_{k}).$
The spectral sequences have no non-trivial differentials for degree reasons.
Thus, $KO^{*}({\mathbb{H}}P^{\infty})\cong KO^{*}[\hat{y}],$ where $\hat{y}\in
KO^{4}({\mathbb{H}}P^{\infty})$. The class $\hat{y}$ serves as the $KO$-theoretic
Thom class for $\gamma_{{\mathbb{H}}}$. We have
$c(\Phi_{KO}(\gamma_{\mathbb{H}}))=c(\hat{y})=\beta^{-2}\Phi_{K}(\gamma_{\mathbb{H}}).$
Let $\hat{\psi}_{j}\in\tilde{KO}^{4}(M_{k})$,
$\hat{z}\in\tilde{KO}^{8}(M_{k})$ be the classes in the $E^{\infty}$-page
represented by $\psi_{j}\in H^{4}(M_{k})$ and $z\in H^{8}(M_{k})$
respectively. Then we get
$c(\hat{\psi}_{i})=\beta^{-2}\tilde{\psi}_{i},\quad
c(\hat{z})=\beta^{-4}\tilde{z}\quad\text{and}\quad\hat{\psi}_{i}\hat{\psi}_{j}=g_{i,j}\hat{z}.$
It follows that the $K$-theoretic generators
$\Phi_{K}(f^{*}\gamma_{{\mathbb{H}}})\tilde{\psi}_{j}$ and
$\Phi_{K}(f^{*}\gamma_{{\mathbb{H}}})(\tilde{\psi}_{k}-g_{k,k}\tilde{\psi}_{k-1})$
lie in the image of the map $c$. Therefore by (3.4), we get in Case 2 that
$e(r_{j}\circ
L(E))\equiv\begin{cases}\frac{g_{j,j}-2l_{j}}{24}&\pmod{\mathbb{Z}},\quad\text{for
}j<k-1,\\\
\frac{(g_{kk}-2l_{k})-g_{k,k}(g_{k-1,k-1}-2l_{k-1})}{24}&\pmod{\mathbb{Z}},\quad\text{for
}j=k-1;\end{cases}$
and in Case 1 that
$e(r_{j}\circ
L(E))\equiv\frac{g_{j,j}-2l_{j}}{24}\pmod{\mathbb{Z}},\quad\text{for }j\leq
k-1.$
The result now follows from Proposition 3.7. ∎
## 4\. $SU(2)$-bundles over even complexes
We study the homotopy type of $E\in{\mathcal{P}}{\mathcal{D}}_{4,7}^{11}$, the
total space of a stable principal $SU(2)$-bundle over
$M_{k}\in{\mathcal{P}}{\mathcal{D}}^{8}_{3}$. In this section, we consider the
case where the intersection pairing $\langle-,-\rangle\colon
H_{4}(M_{k})\times H_{4}(M_{k})\rightarrow\mathbb{Z}$ is even, i.e., $\langle
x,x\rangle\in 2\mathbb{Z}$ for all $x\in H_{4}(M_{k})$. Note that in this
case, $k$ must be even. We observe the following.
1. (1)
If $k\geq 4$, $M_{k}$ supports a principal $SU(2)$-bundle whose total space
$E$ is $3$-connected.
2. (2)
The possible stable homotopy types of $E$ can be determined directly from the
stable homotopy type of $M_{k}$ and the intersection form. In this regard, the
formulas in Theorem 3.14 are used to demonstrate this connection.
###### 4.1.
Existence of $SU(2)$-bundles. We discuss the existence of principal
$SU(2)$-bundles over $M_{k}\in{\mathcal{P}}{\mathcal{D}}^{8}_{3}$ with an
attaching map as in (1.1) whose intersection form is even. If
$\mbox{Rank}(H_{4}(M))=2$, then up to isomorphism $\begin{pmatrix}0&1\\\
1&0\end{pmatrix}$ is the only possible matrix for the intersection form. The
attaching map $L(M)\in\pi_{7}(S^{4}\vee S^{4})$ of $M$ is of the form
(4.2)
${}L(M)=[\alpha_{1},\alpha_{2}]+l_{1}\nu^{\prime}_{1}+l_{2}\nu^{\prime}_{2},$
where $l_{1},l_{2}\in\mathbb{Z}/12$. For a principal bundle $SU(2)\to E\to
M_{k}$ where $E$ is $3$-connected, the classifying map $f_{E}\colon
M_{k}\rightarrow\mathbb{H}P^{\infty}$ is such that $\pi_{4}(f)$ is surjective.
Suppose $\alpha_{i}\mapsto n_{i}\iota_{4},i=1,2$ such that
$\gcd(n_{1},n_{2})=1$ and $f\circ L(M)\simeq*$. If both $l_{1}$ and $l_{2}$
are odd, no such $n_{1}$ and $n_{2}$ exist. However, for $k\geq 4$, one may
always construct a suitable map.
###### Proposition 4.3.
Suppose $M_{k}\in\mathcal{PD}_{3}^{8}$ such that
$\mbox{Rank}(H_{4}(M_{k}))=k\geq 4$ and the intersection form is even. Then
there exists a map $\psi\colon M_{k}\rightarrow\mathbb{H}P^{\infty}$ such that
$\mbox{hofib}(\psi)$ is $3$-connected.
###### Proof.
From [7, Theorem 4.20], if $k\geq 6$, the attaching map of $M_{k}$ can be
expressed as
$L(M_{k})=\sum_{1\leq i<j\leq
k}g_{i,j}[\alpha_{i},\alpha_{j}]+\sum_{i=1}^{k}\frac{g_{i,i}}{2}[\alpha_{i},\alpha_{i}]+\sum_{i=1}^{k}s_{i}\nu_{i}^{\prime}$
such that $s_{i}=0$ for $i\geq 2$ for a choice of basis
$\\{\alpha_{1},\dots,\alpha_{k}\\}$ of $H_{4}(M_{k})$. Then the map
$\tilde{\psi}\colon(S^{4})^{\vee
k}\xrightarrow{(0,0,\dots,0,1)}\mathbb{H}P^{\infty}$ extends to a map
$\psi\colon M_{k}\rightarrow\mathbb{H}P^{\infty}$ such that $\pi_{4}(\psi)$ is
surjective.
Now if $k=4,$ by [11], the attaching map of $M_{k}$ can be expressed as
$L(M_{k})=[\alpha_{1},\alpha_{2}]+[\alpha_{3},\alpha_{4}]+\sum_{i=1}^{4}l_{i}\nu_{i}^{\prime}$
for a choice of basis of $H_{4}(M_{k}).$ Choose two positive integers $m,n$
such that $\gcd(m,n)=1$ and $ml_{1}+nl_{3}\equiv 0\pmod{12}$. Then the map
$\tilde{\psi}\colon(S^{4})^{\vee
4}\xrightarrow{(m,0,n,0)}\mathbb{H}P^{\infty}$ extends to a map $\psi\colon
M_{k}\rightarrow\mathbb{H}P^{\infty}$ such that $\pi_{4}(\psi)$ is surjective.
∎
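The $k=4$ step above reduces to an elementary search: find positive coprime $m,n$ with $ml_{1}+nl_{3}\equiv 0\pmod{12}$. A brute-force sketch of that search (a hypothetical helper of ours, not from the source; it returns `None` if no pair exists within the bound):

```python
from math import gcd

def find_mn(l1, l3, bound=25):
    """Search for positive coprime (m, n) with m*l1 + n*l3 ≡ 0 (mod 12).

    l1, l3 are the residues appearing in the attaching map of M_4;
    returns the first pair found, or None within the search bound.
    """
    for m in range(1, bound):
        for n in range(1, bound):
            if gcd(m, n) == 1 and (m * l1 + n * l3) % 12 == 0:
                return m, n
    return None

# Example: l1 = 5, l3 = 7 admits m = n = 1, since 5 + 7 = 12.
```

For instance, `find_mn(5, 7)` returns a valid pair immediately, matching the choice made in the proof.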
We now focus on the stable homotopy type of the total space $E(f)$ for
$f\colon M_{k}\to{\mathbb{H}}P^{\infty}$ such that $\pi_{4}(f)$ is surjective.
From the attaching map of $M_{k}$ as in (1.1) and even intersection form, we
have
(4.4) ${}\Sigma^{\infty}M_{k}\simeq\Sigma^{\infty}(S^{4})^{\vee
k-1}\vee\Sigma^{\infty}Cone({\sigma(M_{k})}\nu_{(7)})$
for some even $\sigma(M_{k})$. Hence the stable homotopy type of $M_{k}$ is
determined by $\sigma(M_{k})$.
###### Proposition 4.5.
Let $E(f_{\psi})$ be the total space of a principal $SU(2)$-bundle over
$M_{k}\in\mathcal{PD}^{8}_{3}$, classified by a map $f_{\psi}\colon
M_{k}\rightarrow\mathbb{H}P^{\infty}$ for $k\geq 4.$ Then
$\Sigma^{\infty}E(f_{\psi})\simeq\Sigma^{\infty}(S^{4}\vee
S^{7})^{\vee{k-2}}\vee\Sigma^{\infty}Cone({{\lambda(\psi)}}\Sigma^{4}\nu_{(7)}),$
where $\lambda(\psi):=\lambda_{s}(E(f_{\psi}))$ is even and a multiple of
$\sigma(M_{k})$.
###### Proof.
Note that $E(f_{\psi})\in\mathcal{PD}_{4,7}^{11}$ and its stable homotopy type
is given in Corollary 2.15, where
$\epsilon=\epsilon_{s}(E(f_{\psi}))\in\mathbb{Z}/2$ and
$\lambda(\psi)\in\mathbb{Z}/24$. So, it suffices to show that
$2|\lambda(\psi)$ and $\epsilon=0$. The fact $2|\lambda(\psi)$ follows from
(4.4) and Theorem 3.14.
Now, the cofibre sequence obtained from cell structure of $E$ and $M_{k}$
induces the following commutative diagram
$\begin{array}{ccccc}
\pi_{10}^{s}(S^{10})&\xrightarrow{\phi_{*}}&\pi_{10}^{s}(S^{4}\vee S^{7})^{\oplus{(k-1)}}&\longrightarrow&\pi_{10}^{s}(E)\\\
\downarrow&&\downarrow&&\downarrow\\\
\pi_{10}^{s}(S^{7})&\xrightarrow{\pi_{10}^{s}(\Sigma^{\infty}L(M))=0}&\pi^{s}_{10}(S^{4})^{\oplus k}&\longrightarrow&\pi_{10}^{s}(M_{k})
\end{array}$
of stable homotopy groups, where $\phi$ is the attaching map of the top cell
of $E$.
Since $\lambda(\psi)\beta_{k-1}\circ\nu_{(7)}+\epsilon\alpha_{k-1}\circ\nu^{2}=0$ in
$\pi^{s}_{10}(E)$, its image in $\pi_{10}^{s}(M)$ is $0$. Note that the bottom
left map $\pi_{10}^{s}(\Sigma^{\infty}L(M))=0$ because $2|\sigma(M_{k})$ in
(4.4), and hence by exactness the bottom right map is injective, where
$\pi_{10}^{s}(S^{4})^{\oplus
k}\cong\mathbb{Z}/2\\{\alpha_{1}\circ\nu^{2},\dots,\alpha_{k}\circ\nu^{2}\\}$.
Since $2|\lambda(\psi)$, the middle vertical arrow sends
$\Sigma^{\infty}\phi=\lambda(\psi)\beta_{k-1}\circ\nu_{(7)}+\epsilon\alpha_{k-1}\circ\nu^{2}$
to $\epsilon\alpha_{k-1}\circ\nu^{2}$ which is in turn mapped to $0$ via the
bottom right map as $\Sigma^{\infty}\phi=0\in\pi_{10}^{s}(E).$ Hence
$\epsilon=0\in\mathbb{Z}/2$. ∎
###### 4.6.
Stably trivial manifolds. The following result states that the total space of
a principal $SU(2)$-bundle over a stably trivial $M_{k}\in\mathcal{PD}^{8}_{3}$
(i.e., $\sigma(M_{k})\equiv 0\pmod{24}$) is itself a connected sum of copies
of $S^{4}\times S^{7}$.
###### Proposition 4.7.
Let $E\in{\mathcal{P}}{\mathcal{D}}^{11}_{4,7}$ be the total space of a
principal $SU(2)$-bundle over stably trivial $M_{k}\in\mathcal{PD}^{8}_{3}$.
Then $E\simeq\\#^{k-1}(S^{4}\times S^{7})$.
###### Proof.
From Theorem 2.17, we have
$E\simeq\\#^{k-2}E_{0,0,0}\\#E_{0,\epsilon,\delta}$. Then the result follows
from Proposition 4.5 for $k\geq 4$. For $k=2$, the attaching map of $M_{2}$ is
of the form (4.2). The stable triviality condition implies
$M_{2}=S^{4}\times S^{4}$, which further implies $E=S^{4}\times S^{7}$. ∎
###### 4.8.
Possible stable homotopy types of the total space. Let $\psi\in
H^{4}(M_{k};\mathbb{Z})$ be a cohomology class represented by a map
$\overline{\psi}\colon M_{k}\rightarrow K(\mathbb{Z},4)$ which has a unique
lift $\tilde{\psi}\colon M_{k}\rightarrow S^{4}$ up to homotopy if
${\overline{\psi}}\big{|}_{{(S^{4})}^{\vee{k-1}}}\circ
L(M_{k})\in\pi_{7}(S^{4})$ is $0$. As the inclusion
$(S^{4})^{\vee{k}}\rightarrow M_{k}$ induces an isomorphism on $H_{4}$ and
$H^{4}$, a cohomology class $\psi\in H^{4}(M_{k})$ always induces a map
$\tilde{\psi}\colon(S^{4})^{\vee{k}}\rightarrow S^{4}.$ We consider the
following diagram
$\textstyle{(S^{4})^{\vee{k}}\ignorespaces\ignorespaces\ignorespaces\ignorespaces{}\ignorespaces\ignorespaces\ignorespaces\ignorespaces}$$\scriptstyle{\tilde{\psi}}$$\textstyle{M_{k}\ignorespaces\ignorespaces\ignorespaces\ignorespaces{}}$$\textstyle{S^{4}\ignorespaces\ignorespaces\ignorespaces\ignorespaces{}}$$\textstyle{\mathbb{H}P^{2}}$
and formulate when the map $M_{k}\rightarrow\mathbb{H}P^{2}$ exists. Note that
if the map exists then its homotopy fibre will be $3$-connected.
For $\psi\in H^{4}(M_{k}),$ consider the composite
$S^{7}\xrightarrow{L(M_{k})}(S^{4})^{\vee{k}}\xrightarrow{\tilde{\psi}}S^{4}$
and define $\tau(\psi)=[\tilde{\psi}\circ L(M)]\in\pi_{7}^{s}(S^{4})$. Thus
$\tau\colon H^{4}(M_{k})\rightarrow\mathbb{Z}/24$
and one can check that it is a linear map.
###### Proposition 4.9.
Suppose $\psi\in H^{4}(M_{k})$ is primitive. The map
$\tilde{\psi}\colon(S^{4})^{\vee{k}}\rightarrow S^{4}$ extends to a map
$f_{\psi}:M_{k}\rightarrow\mathbb{H}P^{2}$ if and only if
$\psi\cup\psi\equiv\tau(\psi)z\pmod{24}$ for some $z\in H^{8}(M_{k})$.
###### Proof.
Consider a primitive element $\psi\in H^{4}(M_{k})$. We extend $\psi$ to a
basis of $H^{4}(M_{k})$, and use the dual basis of $\pi_{4}(M_{k})$ to write
down the attaching map of $M_{k}$ as in (1.1).
In this notation, we have $\psi^{2}=g_{k,k}z$ where $z\in H^{8}(M_{k})$ is the
chosen generator and $\tau(\psi)=g_{k,k}-2l_{k}$. Thus $\tilde{\psi}\circ
L(M)$ maps to $0\in\pi_{7}(\mathbb{H}P^{2})$ if and only if $l_{k}=0$, that is
$\psi\cup\psi\equiv\tau{(\psi)}z\pmod{24}$. Hence, the result follows. ∎
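The final equivalence in the proof is elementary modular arithmetic: with $\tau(\psi)=g_{k,k}-2l_{k}$ and $\psi\cup\psi=g_{k,k}z$, the congruence $\psi\cup\psi\equiv\tau(\psi)z\pmod{24}$ says $2l_{k}\equiv 0\pmod{24}$, i.e. $l_{k}=0$ in $\mathbb{Z}/12$. A quick exhaustive check (our illustration, not the authors' computation):

```python
# tau(psi) = g_kk - 2*l_k, and psi∪psi = g_kk * z, so the extension
# criterion g_kk ≡ g_kk - 2*l_k (mod 24) reduces to 2*l_k ≡ 0 (mod 24).
for lk in range(12):                      # l_k ranges over Z/12
    criterion = (2 * lk) % 24 == 0        # psi∪psi ≡ tau(psi)·z (mod 24)
    assert criterion == (lk == 0)         # holds exactly when l_k = 0
print("criterion holds exactly when l_k = 0 in Z/12")
```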
Proposition 4.9 gives us a criterion for constructing the maps $f_{\psi}$ out of
cohomology classes $\psi$. Using this we determine which multiples of
$\sigma(M_{k})$ may occur as $\lambda_{s}(E)$ for $E\to M_{k}$ a principal
$SU(2)$-bundle where $E$ is $3$-connected. We first show that there exists an
$f_{\psi}$ such that $\lambda(\psi)=\sigma(M_{k})$ if $k$ is large enough.
###### Proposition 4.10.
Suppose the stable homotopy type of $M_{k}$ is determined by $\sigma(M_{k}).$
Then there exists $f_{\psi}\colon M_{k}\rightarrow\mathbb{H}P^{\infty}$ such
that
1. (1)
$\lambda(\psi)\equiv\sigma(M_{k})\pmod{3}$ for $k\geq 5$,
2. (2)
$\lambda(\psi)\equiv\sigma(M_{k})\pmod{8}$ for $k\geq 7$.
###### Proof.
We begin the proof with the first case. If $\tau\equiv 0\pmod{3}$, the proof
follows from Theorem 3.14. Let $\tau\not\equiv 0\pmod{3}$ and let $\psi_{0}$
be a primitive cohomology class such that
$\tau(\psi_{0})\equiv\sigma(M_{k})\not\equiv 0\pmod{3}$. We need to choose a
$\psi\in\ker(-\cup\psi_{0})\cap\ker(\tau)$ such that
$\psi^{2}\equiv\tau(\psi)\equiv 0\pmod{3}$. This we can do for $k-2\geq 3$,
see [11, Chapter II, (3.2)-(3.4)]. By Poincaré duality, we get $\psi^{\prime}$
such that $\psi\cup\psi^{\prime}=z$. We write
$\psi^{\prime}=\psi^{\prime}_{\ker(\tau)}+t\psi_{0}$ for some $t$ where
$\psi^{\prime}_{\ker(\tau)}\in\ker(\tau)$. Since
$z=\psi^{\prime}\cup\psi=\psi^{\prime}_{\ker(\tau)}\cup\psi$, we may assume
$\psi^{\prime}=\psi^{\prime}_{\ker\tau}$. Thus by assigning
$\tau_{1}=\tau(\psi_{0})\equiv\sigma(M_{k})\pmod{3}$,
$\tau_{k-1}=\tau(\psi^{\prime})\equiv 0\pmod{3}$ and
$\tau_{k}=\tau(\psi)\equiv 0\pmod{3}$; and choosing other $\psi_{i}$’s such
that $\tau_{i}=0$ for $i=2,\dots,k-2$ we have
$\lambda(\psi)\equiv\sigma(M_{k})\pmod{3}\quad\quad\text{for }k\geq 5.$
Now we look into the second case. The proof goes similarly to that of the
above, except when we choose $\psi\in\ker(-\cup\psi_{0})\cap\ker(\tau)$ such
that $\psi^{2}\equiv\tau(\psi)\equiv 0\pmod{8}$. We can choose such $\psi$ for
$k-2\geq 5$, see [11, Chapter II, (3.2)-(3.4)]. Then following similar
arguments one can deduce
$\lambda(\psi)\equiv\sigma(M_{k})\pmod{8}\quad\quad\text{for }k\geq 7.$
∎
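Since $24=3\cdot 8$ and $\gcd(3,8)=1$, the two congruences in Proposition 4.10 combine by the Chinese remainder theorem, provided a single class realizes both simultaneously (our observation, anticipating the discussion before Definition 4.13):

```latex
\lambda(\psi)\equiv\sigma(M_{k})\pmod{3}
\ \text{ and }\
\lambda(\psi)\equiv\sigma(M_{k})\pmod{8}
\;\Longrightarrow\;
\lambda(\psi)\equiv\sigma(M_{k})\pmod{24}.
```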
The following theorem constructs $f_{\psi}$ with
$\lambda(\psi)=3\sigma(M_{k})$ if $\sigma(M_{k})$ is not divisible by $3$.
###### Theorem 4.11.
Suppose $3\nmid\sigma(M_{k})$. Then for $k\geq 7$, there exists
$f_{\psi}\colon M_{k}\rightarrow\mathbb{H}P^{2}$ such that
$\lambda(\psi)=3\sigma(M_{k})$.
###### Proof.
Let
$\tau^{(3)}\colon H^{4}(M_{k},\mathbb{F}_{3})\rightarrow\mathbb{F}_{3}$
be the reduction of $\tau$ modulo $3$. As $\tau^{(3)}$ is surjective, there
exists $\psi_{0}\in H^{4}(M_{k},\mathbb{F}_{3})$ such that for all cohomology
classes $\psi$, $\tau^{(3)}(\psi)z\equiv\psi\cup\psi_{0}\pmod{3}$. In
particular, $\tau^{(3)}(\psi_{0})z\equiv\psi_{0}\cup\psi_{0}\pmod{3}$. We
consider two cases, $\psi_{0}^{2}=0$ or $\psi_{0}^{2}$ is unit.
First suppose $\psi_{0}^{2}$ is a unit. Then we can choose
$\psi_{1},\dots,\psi_{k-1}$ such that $\psi_{0}\cup\psi_{i}=0$. We take the
dual basis $\alpha_{i}$ corresponding to $\psi_{i}$ and $\alpha_{k}$
corresponding to $\psi_{0}$. Thus $\psi_{0}(\alpha_{i})=0$ for $i=1,\dots,k-1$
and $\psi_{0}(\alpha_{k})=1$. Hence
$\tau_{1}\equiv\dots\equiv\tau_{k-1}\equiv
0\pmod{3},\quad\text{and}\quad\tau_{k}\equiv g_{k,k}\equiv 1\pmod{3},$
which implies $\lambda(\psi)=\gcd(\tau_{1},\dots,\tau_{k-2},\tau_{k-1})\equiv
0\pmod{3}$.
Now let $\psi_{0}^{2}=0$. Then $\tau^{(3)}(\psi_{0})z=0$. We choose
$\psi_{1},\dots,\psi_{k-1}$ such that $\psi_{k-1}\cup\psi_{0}=1$ and
$\psi_{i}\cup\psi_{0}=0$ for $i=1,\dots,k-2$. After taking the dual basis,
by a similar argument we have
$\tau_{1}\equiv\dots\equiv\tau_{k-2}\equiv
0\pmod{3},\quad\tau_{k-1}\equiv\sigma(M_{k})\pmod{3},\quad\text{and}\quad\tau_{k}\equiv
g_{k,k}\equiv 0\pmod{3}.$
Hence
$\lambda(\psi)=\gcd(\tau_{1},\dots,\tau_{k-2},\tau_{k}-g_{k,k}\tau_{k-1})\equiv
0\pmod{3}$. ∎
###### Remark 4.12.
We note that Proposition 4.9, Proposition 4.10, and Theorem 4.11 do not use
the fact that the intersection form is even; they also hold when the
intersection form is odd.
Now we look to prove similar results modulo $8$, which in turn provide the
desired construction of $\psi$ as in Proposition 4.9 via the Chinese remainder
theorem. However, in this case certain conditions are required for obtaining
an analogous $f_{\psi}$.
###### Definition 4.13.
* •
A complex $M_{k}$ with $\sigma(M_{k})=2$ or $4$, is said to satisfy hypothesis
$(H_{8})$ if $(\ker\tau)^{\perp}=(\sigma(M_{k})\psi)\pmod{8}$ where $\psi\in
H^{4}(M_{k})$ (which is unique $\pmod{\frac{8}{\sigma(M_{k})}}$) satisfies
$\begin{cases}\psi^{2}\equiv 0\pmod{8}&\text{ if }\sigma(M_{k})=2\\\
\psi^{2}\equiv 0\pmod{4}&\text{ if }\sigma(M_{k})=4.\end{cases}$
* •
A complex $M_{k}$ with $\sigma(M_{k})=2$ is said to satisfy hypothesis
$(H_{4})$ if $(\ker\tau)^{\perp}=(2\psi)\pmod{4}$ where $\psi\in H^{4}(M_{k})$
(which is unique $\pmod{2}$) satisfies
$\psi^{2}\equiv\tau(\psi)\equiv 0\mbox{ or }4\pmod{8}.$
Note that the hypotheses $(H_{8})$ and $(H_{4})$ depend only on the
intersection form and $\tau$, and not on the choice of $\psi$. We now prove
the existence of $f_{\psi}$ under the hypotheses defined above.
###### Theorem 4.14.
1. (1)
Suppose $8\nmid\sigma(M_{k})$. For $k\geq 5$, there exists $f_{\psi}\colon
M_{k}\rightarrow\mathbb{H}P^{2}$ such that $\lambda(\psi)\equiv 0\pmod{8}$ if
and only if the complex satisfies hypothesis $(H_{8})$.
2. (2)
Suppose $\sigma(M_{k})=2$. Then for $k\geq 5$, there exists $f_{\psi}\colon
M_{k}\rightarrow\mathbb{H}P^{2}$ such that $\lambda(\psi)\equiv 4\pmod{8}$ if
and only if the complex satisfies hypothesis $(H_{4})$.
###### Proof.
The condition $k\geq 5$ comes from the fact that we are required to make
certain choices modulo $3$ using Proposition 4.10. First suppose, in case (1),
that an $f_{\psi}$ exists with $\lambda(\psi)\equiv 0\pmod{8}$. Then there
exists a basis $\\{\psi_{1},\dots,\psi_{k-2},\psi^{\prime},\psi\\}$ satisfying
(working $\pmod{8}$)
(4.15) $\displaystyle\psi\cup\psi_{i}=0\text{ for }1\leq i\leq k-2,\quad$
$\displaystyle\psi\cup\psi^{\prime}=1,$ $\displaystyle\tau(\psi_{i})=0\text{
for }1\leq i\leq k-2,\quad$
$\displaystyle\tau(\psi^{\prime})=\sigma(M_{k})\quad\text{and}\quad\tau(\psi)=\psi^{2}=0.$
Note that
$\langle\psi\rangle^{\perp}=\langle\psi_{1},\dots,\psi_{k-2},\psi\rangle\subset\ker(\tau)=\langle\psi_{1},\dots,\psi_{k-2},\psi,\frac{8}{\sigma(M_{k})}\psi^{\prime}\rangle$.
This implies $(\ker(\tau))^{\perp}=(\sigma(M_{k})\psi)$, and thus the
hypothesis $(H_{8})$ is satisfied.
For the converse part if the complex satisfies hypothesis $(H_{8})$, one can
check that there is a choice of $\psi$ such that (4.15) is satisfied. We look
into the cases $\sigma(M_{k})=2,4$.
First, let $\sigma(M_{k})=2$. Then $(\ker(\tau))^{\perp}=(2\psi)$ and $\psi$
is well defined modulo $4$. We note that
(4.16) ${}\chi\cup(2\psi)\equiv\tau(\chi)z\pmod{8}~{}\forall\chi\in
H^{4}(M_{k}).$
This implies
$\tau(\psi)\equiv 2\psi^{2}\equiv 0\pmod{8}.$
For any $\chi\in\ker(\tau)$, we have $(2\psi)\cup\chi=\tau(\chi)z\equiv 0\pmod{8}$.
In particular $2\psi^{2}=\tau(\psi)\equiv 0\pmod{8}$. Together these
imply $\psi^{2}\equiv 0\pmod{8}$. Equation (4.16) implies that if
$\chi\cup\psi=0$, then $\tau(\chi)=0$. Now choosing a basis as in Case (2) of
Proposition 3.13, we obtain the conditions in (4.15).
Now let $\sigma(M_{k})=4$. Then $(\ker(\tau))^{\perp}=(4\psi)$ and $\psi$ is
determined modulo $2$. We proceed analogously observing that
$\chi\cup(4\psi)\equiv\tau(\chi)z\pmod{8}~{}\forall\chi\in H^{4}(M_{k}),$
which implies
$\tau(\psi)\equiv 4\psi^{2}\equiv 0\pmod{8}.$
Let $\psi^{\prime}$ be such that $\tau(\psi^{\prime})=4$, and so we have that
$\psi\cup\psi^{\prime}$ is an odd multiple of $z$. If $\psi^{2}$ is
$4\pmod{8}$ we change $\psi$ to $\psi+2\psi^{\prime}$ to ensure
$\psi^{2}=\tau(\psi)\equiv 0\pmod{8}$. Now choosing a basis as in Case (2) of
Proposition 3.13, we obtain the conditions in (4.15).
Case (2) also proceeds along analogous lines. For the existence (of
$f_{\psi}$ for some $\psi$) question, we need a basis satisfying
(4.17) $\displaystyle\psi\cup\psi_{i}=0\text{ for }1\leq i\leq
k-2,\quad\psi\cup\psi^{\prime}=1,$ $\displaystyle\tau(\psi_{i})\equiv
0\pmod{4}\text{ for }1\leq i\leq
k-2,\quad\tau(\psi^{\prime})=\sigma(M_{k}),\quad\tau(\psi)=\psi^{2}\equiv
0\mbox{ or }4\pmod{8},$ $\displaystyle\mbox{ such that at least one of
}\tau(\psi_{i})\mbox{ for }1\leq i\leq k-2,\mbox{ or }\tau(\psi)\equiv
4\pmod{8}.$
Note that
$\langle\psi\rangle^{\perp}=\langle\psi_{1},\dots,\psi_{k-2},\psi\rangle\subset\ker(\tau)=\langle\psi_{1},\dots,\psi_{k-2},\psi,2\psi^{\prime}\rangle\pmod{4}$
and $(\ker(\tau))^{\perp}=(2\psi)\pmod{4}$. Hence, the hypothesis $(H_{4})$ is
satisfied.
Conversely, if $(H_{4})$ is satisfied, we obtain a $\psi$ such that
$\psi^{2}\equiv\tau(\psi)\equiv 0\mbox{ or }4\pmod{8}$. This $\psi$ also
satisfies
$\chi\cup(2\psi)\equiv\tau(\chi)z\pmod{4}~{}\forall\chi\in H^{4}(M_{k}).$
Let $\psi^{\prime}$ be such that $\tau(\psi^{\prime})=2$, which implies that
$\psi\cup\psi^{\prime}$ is an odd multiple of $z$. Now replacing $\psi$ by
$\psi+2\psi^{\prime}$ if required we may assume that
$\psi^{2}\equiv\tau(\psi)\equiv 4\pmod{8}$. Now choosing a basis as in Case
(2) of Proposition 3.13, we obtain the conditions in (4.17). ∎
If $\mbox{Rank}(H_{4}(M_{k}))=k\geq 5$, the above results indicate a
systematic computation of possible stable homotopy types of the total space
depending on $k$, $\sigma(M_{k})$ and the intersection form. In lower rank
cases, the results depend on the explicit formula for the attachment
$L(M_{k})$, and not just on these variables. Hence the systematic description
turns out to be cumbersome. We demonstrate some observations on the
$\mbox{Rank}(H_{4}(M))=2$ case.
###### Example 4.18.
Recall that the attaching map $L(M)\in\pi_{7}(S^{4}\vee S^{4})$ of $M_{2}$ is
of the form
$L(M)=[\alpha_{1},\alpha_{2}]+l_{1}\nu^{\prime}_{1}+l_{2}\nu^{\prime}_{2}$
where $l_{1},l_{2}\in\mathbb{Z}/12$. We already have
* •
If both $l_{1}$ and $l_{2}$ are odd, there does not exist $f\colon
M\rightarrow\mathbb{H}P^{\infty}$ such that $E=\mbox{hofib}(f)$ is
$3$-connected.
* •
If one of $l_{1}$ and $l_{2}$ is even, or both are even, there exists $f\colon
M\rightarrow\mathbb{H}P^{\infty}$ such that $E=\mbox{hofib}(f)$ is
$3$-connected. Via an explicit calculation using Proposition 3.13, we observe
the following.
1. (1)
If none of $l_{1}$ and $l_{2}$ are divisible by $3$, then we obtain
$\lambda(E)\equiv 0\pmod{3}$.
2. (2)
If exactly one of $l_{1}$ and $l_{2}$ is divisible by $3$, then
$\lambda(E)\not\equiv 0\pmod{3}$.
3. (3)
If $\sigma(M)\equiv 4\pmod{8}$ and $l_{1}l_{2}\equiv 0\pmod{8}$ where none of
$l_{1}$ and $l_{2}$ are divisible by $3$, then $\lambda(E)\equiv 0\pmod{8}$.
4. (4)
If $\sigma(M)\equiv 2\pmod{8}$, we can never obtain $\lambda(E)\equiv
0,4\pmod{8}$.
## 5\. $SU(2)$-bundles over odd complexes
We now work out the case of $M_{k}\in{\mathcal{P}}{\mathcal{D}}_{3}^{8}$ for
which the intersection form is odd. Recall that the notation $M_{k}$ means
that $\mbox{Rank}(H_{4}(M_{k}))=k$. The intersection form being odd implies
that there are two possibilities of $\sigma(M_{k})$, namely, $1$ and $3$ among
the divisors of $24$. Here, we prove
1) For $k\geq 3$, it is possible to obtain an $SU(2)$-bundle whose total space
is $3$-connected.
2) Further if $k\geq 7$, it is possible to obtain maps
$\psi(j):M_{k}\to{\mathbb{H}}P^{\infty}$ with $\lambda(\psi(j))=j$ for every
multiple $j\pmod{24}$ of $\sigma(M_{k})$ which is also a divisor of $24$.
###### 5.1.
Existence of $SU(2)$-bundles. Through an explicit computation, we demonstrate
the existence of principal $SU(2)$-bundles over
$M_{k}\in{\mathcal{P}}{\mathcal{D}}^{8}_{3}$ if $k\geq 3$ when the
intersection form is odd.
###### Proposition 5.2.
Suppose $M_{k}\in\mathcal{PD}_{3}^{8}$ such that
$\mbox{Rank}(H_{4}(M_{k}))=k\geq 3$ and the intersection form is odd. Then
there exists a map $\psi\colon M_{k}\rightarrow\mathbb{H}P^{\infty}$ such that
$\mbox{hofib}(\psi)$ is $3$-connected.
###### Proof.
Recall that the attaching map of $M_{k}$ can be expressed as (1.1). Using
Proposition 4.9, we are required to find a primitive element $\psi$ such that
$\tau(\psi)z\equiv\psi^{2}\pmod{24}$, which is equivalent to checking that the
coefficient of $\nu^{\prime}$ in $\tilde{\psi}\circ L(M_{k})$ is $0\pmod{12}$.
It suffices to find $\psi\pmod{8}$ and $\psi\pmod{3}$ separately.
We first work out the $\pmod{3}$ case, where the base ring is a field of
characteristic $\neq 2$, so that the form is diagonalizable. Consider the
map $(S^{4})^{\vee
k}\xrightarrow[]{(0,\dots,0,n_{1},n_{2},n_{3})}{\mathbb{H}}P^{\infty}$ which
sends $L(M_{k})$ to
$\Big{(}\pm{n_{1}\choose 2}\pm{n_{2}\choose 2}\pm{n_{3}\choose
2}+n_{1}l_{k-2}+n_{2}l_{k-1}+n_{3}l_{k}\Big{)}\cdot\nu^{\prime}+\mbox{
multiple of }\nu,$
where the $\pm$ correspond to the diagonal entries $\pmod{3}$. We observe
through a direct calculation that for every fixed choice
$\epsilon_{1},\epsilon_{2},\epsilon_{3}$ of $\pm 1$s, there are
$n_{1},n_{2},n_{3}$ with $\gcd(n_{1},n_{2},n_{3})=1$ such that
$\Big{(}\epsilon_{1}{n_{1}\choose 2}+\epsilon_{2}{n_{2}\choose
2}+\epsilon_{3}{n_{3}\choose
2}+n_{1}l_{k-2}+n_{2}l_{k-1}+n_{3}l_{k}\Big{)}\equiv 0\pmod{3}.$
This completes the argument $\pmod{3}$.
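The direct calculation behind this step can be checked exhaustively. The sketch below is our own verification, not part of the original argument: since $\binom{n}{2}$ is periodic modulo $3$ with period $3$, searching integer representatives $0\leq n_{i}\leq 8$ (so that $\gcd(n_{1},n_{2},n_{3})=1$ still makes sense) suffices.

```python
from itertools import product
from math import comb, gcd

# Check: for every sign pattern (e1,e2,e3) in {+-1}^3 and every choice of
# l_{k-2}, l_{k-1}, l_k mod 3, some (n1,n2,n3) with gcd 1 satisfies
#   e1*C(n1,2) + e2*C(n2,2) + e3*C(n3,2) + n1*l1 + n2*l2 + n3*l3 = 0 (mod 3).
def solvable_mod3(eps, ls):
    return any(
        sum(e * comb(n, 2) + n * l for e, n, l in zip(eps, ns, ls)) % 3 == 0
        for ns in product(range(9), repeat=3)
        if gcd(gcd(ns[0], ns[1]), ns[2]) == 1
    )

assert all(solvable_mod3(eps, ls)
           for eps in product((1, -1), repeat=3)
           for ls in product(range(3), repeat=3))
```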
Working $\pmod{8}$, the fact that the intersection form is odd implies that we
may choose a basis such that $g_{k,k}\equiv\pm 1\pmod{8}$ and $g_{k,k-1}\equiv
0\pmod{8}$, see [11, Chapter II, (4.3)]. Also the intersection form can be
written as the block matrix $\begin{pmatrix}A&0\\\ 0&\pm 1\end{pmatrix}.$
If $A$ is an even intersection form, then the result follows from Proposition
4.3 for $k\geq 5$. Now suppose $A$ is not even; then the intersection form is
$\begin{pmatrix}A^{\prime}&0\\\ 0&B^{\prime}\end{pmatrix}$ where $B^{\prime}$
is a diagonal matrix of order $2$ with diagonal entries $\pm 1$. If
$A^{\prime}$ is an even intersection form, then for $k\geq 6$ the result
follows from Proposition 4.3. If $A^{\prime}$ is not even, the intersection
form updates to
$\begin{pmatrix}A^{\prime\prime}&0\\\ 0&B^{\prime\prime}\end{pmatrix}$
where $B^{\prime\prime}$ is a diagonal matrix of order $3$ with diagonal
entries $\pm 1$. For $B^{\prime\prime}=I_{3}$, the map $(S^{4})^{\vee
k}\xrightarrow[]{(0,\dots,0,n_{1},n_{2},n_{3})}{\mathbb{H}}P^{\infty}$ sends
$L(M_{k})$ to
$\Big{(}{n_{1}\choose 2}+{n_{2}\choose 2}+{n_{3}\choose
2}+n_{1}l_{k-2}+n_{2}l_{k-1}+n_{3}l_{k}\Big{)}\cdot\nu^{\prime}+\mbox{
multiple of }\nu,$
where $\gcd(n_{1},n_{2},n_{3})=1$ ensures that the corresponding $\psi$ is
primitive. We may directly compute and observe that the equations
$\Big{(}{n_{1}\choose 2}+{n_{2}\choose 2}+{n_{3}\choose
2}+n_{1}l_{k-2}+n_{2}l_{k-1}+n_{3}l_{k}\Big{)}\equiv
0\pmod{8},\gcd(n_{1},n_{2},n_{3})=1,$
have a common solution. A similar argument works for the other diagonal $\pm
1$ matrix choices for $B^{\prime\prime}$. This proves the result for
$k\geq 6$.
If $3\leq k\leq 5$, we know from [11, Chapter II, (3.2)-(3.4)] that the form
is a direct sum of $\pm 1$ and the hyperbolic form. The argument in the even
case in Proposition 4.3 implies the result for a sum of two hyperbolic forms,
and the above argument implies the result for a sum of three $\pm 1$'s. The
remaining cases are taken care of if we show the result for the intersection
form
$\begin{pmatrix}H&0\\\ 0&\pm 1\end{pmatrix}$
where $H=\begin{pmatrix}0&1\\\ 1&0\end{pmatrix}$ is the hyperbolic matrix. We
consider the map $(S^{4})^{\vee
3}\xrightarrow[]{(n_{1},n_{2},n_{3})}{\mathbb{H}}P^{\infty}$, and as above we
need to find a common solution of
${n_{3}\choose 2}+n_{1}n_{2}+\sum_{i=1}^{3}n_{i}l_{i}\equiv
0\pmod{8},\gcd(n_{1},n_{2},n_{3})=1.$
Through a direct calculation, we check that such solutions always exist. This
completes the proof. ∎
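Both "direct calculation" steps in this proof can likewise be verified by brute force. The following sketch is our own check, not from the paper; it runs over residues modulo $16$ (the period of $\binom{n}{2}$ modulo $8$), covers all diagonal sign patterns, and checks both signs of the $\pm 1$ block in the hyperbolic case for safety.

```python
from itertools import product
from math import comb, gcd

def gcd3(a, b, c):
    return gcd(gcd(a, b), c)

# Diagonal case: e1*C(n1,2)+e2*C(n2,2)+e3*C(n3,2)+sum(ni*li) = 0 (mod 8)
# with gcd(n1,n2,n3) = 1, for every sign pattern and every (l1,l2,l3) mod 8.
def diag_solvable(eps, ls):
    return any(
        sum(e * comb(n, 2) + n * l for e, n, l in zip(eps, ns, ls)) % 8 == 0
        for ns in product(range(16), repeat=3) if gcd3(*ns) == 1)

# Hyperbolic (+/- 1) case: +-C(n3,2) + n1*n2 + sum(ni*li) = 0 (mod 8).
def hyp_solvable(e, ls):
    return any(
        (e * comb(ns[2], 2) + ns[0] * ns[1]
         + sum(n * l for n, l in zip(ns, ls))) % 8 == 0
        for ns in product(range(16), repeat=3) if gcd3(*ns) == 1)

for ls in product(range(8), repeat=3):
    assert all(diag_solvable(eps, ls) for eps in product((1, -1), repeat=3))
    assert hyp_solvable(1, ls) and hyp_solvable(-1, ls)
```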
The following example shows that Proposition 5.2 does not extend to the $k=2$
case.
###### Example 5.3.
Consider $M_{2}$ such that
$L(M_{2})=\nu_{1}+\nu_{2}+l_{1}\nu_{1}^{\prime}+l_{2}\nu_{2}^{\prime},\quad
l_{1}\equiv l_{2}\equiv 2\pmod{3}.$
A map $M_{2}\to{\mathbb{H}}P^{\infty}$ which restricts to $(n_{1},n_{2})$ on
the $4$-skeleton sends $L(M_{2})$ to
$\Big{(}{n_{1}\choose 2}+{n_{2}\choose
2}+n_{1}l_{1}+n_{2}l_{2}\Big{)}\cdot\nu^{\prime}+\mbox{ multiple of }\nu.$
We may check directly that
$\Big{(}{n_{1}\choose 2}+{n_{2}\choose 2}+n_{1}l_{1}+n_{2}l_{2}\Big{)}\equiv
0\pmod{3}\implies n_{1}\equiv n_{2}\equiv 0\pmod{3}.$
Therefore, there is no map $M_{2}\to{\mathbb{H}}P^{\infty}$ whose homotopy
fibre is $3$-connected.
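The implication claimed in Example 5.3 can be confirmed directly. A minimal brute-force sketch of our own, using the representative values $l_{1}=l_{2}=2$ (any values congruent to $2$ modulo $3$ behave the same):

```python
from math import comb

# With l1 = l2 = 2, the condition
#   C(n1,2) + C(n2,2) + n1*l1 + n2*l2 = 0 (mod 3)
# forces n1 = n2 = 0 (mod 3), so no primitive pair (n1, n2) works and the
# homotopy fibre is never 3-connected.
l1 = l2 = 2
for n1 in range(9):
    for n2 in range(9):
        if (comb(n1, 2) + comb(n2, 2) + n1 * l1 + n2 * l2) % 3 == 0:
            assert n1 % 3 == 0 and n2 % 3 == 0
```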
###### 5.4.
Possible stable homotopy type of the total space. We note from §4 that
Propositions 4.9, 4.10, and Theorem 4.11 are also valid when the intersection
form is odd. We check that all stable homotopy types are achievable in the odd
case if the rank $k$ of $H_{4}(M)$ is $\geq 7$. Applying the results from §4,
it only remains to check that the different possibilities $\pmod{8}$ are
achievable. We do this in the theorem below.
###### Theorem 5.5.
Suppose $\sigma(M_{k})$ is odd. Then for $k\geq 5$ and $j\in\\{0,2,4\\}$,
there exists $\psi\colon M_{k}\to{\mathbb{H}}P^{2}$ such that
$\lambda(\psi)\equiv j\pmod{8}$.
###### Proof.
We work $\pmod{8}$, knowing that if $k\geq 5$, Proposition 4.10 allows us to
make a choice of $\psi$ so that $\lambda(\psi)$ is as required $\pmod{3}$. The
proof is very similar to the proof of Theorem 4.11. If $\sigma(M_{k})$ is odd,
the linear map $\tau\colon({\mathbb{Z}}/8)^{k}\to{\mathbb{Z}}/8$ is
represented by some primitive class $\psi$ (that is,
$\tau(\chi)=\langle\chi,\psi\rangle\pmod{8}$, and ${\mathbb{Z}}\\{\psi\\}$ is
a summand of $H^{4}(M_{k})$). In particular, $\psi^{2}=\tau(\psi)$. Now, we
have two cases.
First let $\tau(\psi)$ be odd, i.e., $\psi^{2}$ is a unit modulo $8$. Then we
can extend $\psi$ to a basis $\psi,\psi_{1},\dots,\psi_{k-1}$ such that
$\psi\cup\psi_{i}=0$ for $1\leq i\leq k-1$. We take the dual basis
$\alpha_{i}$ corresponding to $\psi_{i}$ and $\alpha_{k}$ corresponding to
$\psi$. Thus $\psi(\alpha_{i})=0$ for $i=1,\dots,k-1$ and
$\psi(\alpha_{k})=1$. Hence
$\tau_{1}\equiv\dots\equiv\tau_{k-1}\equiv
0\pmod{8},\quad\text{and}\quad\tau_{k}\equiv g_{k,k}\pmod{8},$
which implies $\lambda(\psi)=\gcd(\tau_{1},\dots,\tau_{k-2},\tau_{k-1})\equiv
0\pmod{8}$ by Case (1) of Proposition 3.13.
Now let $\psi^{2}$ be even. Extend $\psi$ to a basis
$\psi,\psi_{1},\dots,\psi_{k-1}$ such that $\psi_{k-1}\cup\psi=1$ and
$\psi_{i}\cup\psi=0$ for $i=1,\dots,k-2$. After taking the dual basis, by a
similar argument we have
$\tau_{1}\equiv\dots\equiv\tau_{k-2}\equiv 0\pmod{8},\quad\tau_{k-1}\equiv
1\pmod{8},\quad\text{and}\quad\tau_{k}\equiv g_{k,k}\pmod{8}.$
Hence
$\lambda(\psi)=\gcd(\tau_{1},\dots,\tau_{k-2},\tau_{k}-g_{k,k}\tau_{k-1})\equiv
0\pmod{8}$ by Case (2) of Proposition 3.13. This proves the result for $j=0$.
For $j=2$ or $4$, we can use $\psi(j)=\psi+j\psi_{1}$, and note that
$\tau(\psi(j))=\tau(\psi),\quad\psi(j)^{2}=\psi^{2}+2j\,\psi\cup\psi_{1}+j^{2}\psi_{1}^{2}\equiv\psi^{2}\pmod{8}.$
The last equivalence comes from the fact that
$\tau(\psi_{1})z=\psi\cup\psi_{1}\equiv 0\pmod{8}$, and so $\psi_{1}^{2}$ is
forced to be an even multiple of $z$. We may now compute using the formulas of
Proposition 3.13 to conclude that $\lambda(\psi(j))\equiv j\pmod{8}$. ∎
As in the even case, when the rank is high enough we have a systematic idea of
the possibilities of the total space. However, in the low rank cases ($k\leq
6$) the results are not systematic, and may depend on individual cases rather
than only on $\sigma(M_{k}),k$, and the intersection form.
## References
* [1] J. F. Adams, On the cobar construction, Proc. Nat. Acad. Sci. U.S.A., 42 (1956), pp. 409–412.
* [2] , On the groups $J(X)$. IV, Topology, 5 (1966), pp. 21–71.
* [3] D. Barden, Simply connected five-manifolds, Ann. of Math. (2), 82 (1965), pp. 365–385.
* [4] S. Basu and S. Basu, Homotopy groups and periodic geodesics of closed 4-manifolds, Internat. J. Math., 26 (2015), pp. 1550059, 34.
* [5] , Homotopy groups of highly connected manifolds, Adv. Math., 337 (2018), pp. 363–416.
* [6] , Homotopy groups of certain highly connected manifolds via loop space homology, Osaka J. Math., 56 (2019), pp. 417–430.
* [7] S. Basu and A. K. Ghosh, Sphere fibrations over highly connected manifolds, (2023).
* [8] H. Duan and C. Liang, Circle bundles over 4-manifolds, Arch. Math. (Basel), 85 (2005), pp. 278–282.
* [9] P. J. Giblin, Circle bundles over a complex quadric, J. London Math. Soc., 43 (1968), pp. 323–324.
* [10] A. Hatcher, Algebraic topology, Cambridge University Press, Cambridge, 2002.
* [11] J. Milnor and D. Husemoller, Symmetric bilinear forms, vol. Band 73 of Ergebnisse der Mathematik und ihrer Grenzgebiete [Results in Mathematics and Related Areas], Springer-Verlag, New York-Heidelberg, 1973.
* [12] S. Smale, On the structure of $5$-manifolds, Ann. of Math. (2), 75 (1962), pp. 38–46.
###### Abstract
We prove that in every $2$-colouring of the edges of $K_{\mathbb{N}}$ there
exists a monochromatic infinite path $P$ such that $V(P)$ has upper density at
least ${(12+\sqrt{8})}/{17}\approx 0.87226$ and further show that this is best
possible. This settles a problem of Erdős and Galvin.
Title: Upper Density of Monochromatic Infinite Paths
Authors: Jan Corsten, Louis DeBiasio, Ander Lamaison, and Richard Lang
Keywords: infinite graph, Ramsey, upper density, regularity lemma
Year: 2019, Number: 4. Received 10 August 2018; revised 24 December 2018; published 30 October 2019. doi: 10.19086/aic.10810
## 1 Introduction
Given a complete graph $K_{n}$, whose edges are coloured in red and blue, what
is the longest monochromatic path one can find? Gerencsér and Gyárfás [4]
proved that there is always a monochromatic path on
$\left\lceil(2n+1)/3\right\rceil$ vertices, which is best possible. It is
natural to consider a density analogue of this result for 2-colourings of
$K_{\mathbb{N}}$. The _upper density_ of a graph $G$ with
$V(G)\subseteq\mathbb{N}$ is defined as
$\bar{d}(G)=\limsup_{t\rightarrow\infty}\frac{|V(G)\cap\\{1,2,\ldots,t\\}|}{t}.$
The _lower density_ is defined similarly in terms of the infimum, and we speak
of the _density_ whenever lower and upper density coincide.
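As a toy illustration of how upper and lower density can differ (our own example, not from the paper), take the set of integers lying in the blocks $[4^{k},2\cdot 4^{k})$: sampling $d(G,t)$ at the two kinds of block boundary shows the covered fraction oscillating between $2/3$ and $1/3$.

```python
# V = union of the blocks [4^k, 2*4^k). The fraction of [t] covered by V
# oscillates, so the limsup (upper density, 2/3) and the liminf (lower
# density, 1/3) genuinely differ.
V = {n for k in range(10) for n in range(4**k, 2 * 4**k)}

def d(vertices, t):
    return sum(1 for v in vertices if v <= t) / t

assert abs(d(V, 2 * 4**8 - 1) - 2 / 3) < 0.01  # just after a block ends
assert abs(d(V, 4**9 - 1) - 1 / 3) < 0.01      # just before a block starts
```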
Erdős and Galvin [3] described a $2$-colouring of $K_{\mathbb{N}}$ in which
every monochromatic infinite path has lower density $0$ and thus we restrict
our attention to upper densities. Rado [9] proved that every $r$-edge-coloured
$K_{\mathbb{N}}$ contains $r$ vertex-disjoint monochromatic paths which
together cover all vertices. In particular, one of them must have upper
density at least $1/r$. Erdős and Galvin [3] proved that in every
$2$-colouring of $K_{\mathbb{N}}$ there is a monochromatic path $P$ with
$\bar{d}(P)\geq 2/3$. Moreover, they constructed a $2$-colouring of
$K_{\mathbb{N}}$ in which every monochromatic path $P$ has upper density at
most $8/9$. DeBiasio and McKenney [2] recently improved the lower bound to
$3/4$ and conjectured the correct value to be $8/9$. Progress towards this
conjecture was made by Lo, Sanhueza-Matamala and Wang [8], who raised the
lower bound to $(9+\sqrt{17})/16\approx 0.82019$.
We prove that the correct value is in fact ${(12+\sqrt{8})}/{17}\approx
0.87226$.
###### Theorem 1.1.
There exists a 2-colouring of the edges of $K_{\mathbb{N}}$ such that every
monochromatic path has upper density at most ${(12+\sqrt{8})}/{17}$.
###### Theorem 1.2.
In every $2$-colouring of the edges of $K_{\mathbb{N}}$, there exists a
monochromatic path of upper density at least ${(12+\sqrt{8})}/{17}$.
Now that we have solved the problem for two colours, it would be very
interesting to make any improvement on Rado’s lower bound of $1/r$ for $r\geq
3$ colours (see [2, Corollary 3.5] for the best known upper bound). In
particular for three colours, the correct value is between $1/3$ and $1/2$.
## 2 Notation
We write $\mathbb{N}$ for the set of positive integers with the standard ordering.
Throughout the paper when referring to a finite graph on $n$ vertices, it is
always assumed that the vertex set is $[n]=\\{1,2,\dots,n\\}$ and that it is
ordered in the natural way. An _infinite path_ $P$ is a graph with vertex set
$V(P)=\\{v_{i}:i\in\mathbb{N}\\}$ and edge set
$E(P)=\\{v_{i}v_{i+1}:i\in\mathbb{N}\\}$. While paths are defined to be one-
way infinite, all of the results mentioned above on upper density of
monochromatic infinite paths apply equally well to two-way infinite paths. For
a graph $G$ with $V(G)\subseteq\mathbb{N}$ and $t\in\mathbb{N}$, we define
$d(G,t)=\frac{|V(G)\cap[t]|}{t}.$
Thus we can express the upper density of $G$ as
$\bar{d}(G)=\limsup_{t\rightarrow\infty}d(G,t).$
## 3 Upper bound
In this section, we will prove Theorem 1.1. Let $q>1$ be a real number, whose
exact value will be chosen later on. We start by defining a colouring of the
edges of the infinite complete graph. Let $A_{0},A_{1},\dots$ be a partition
of $\mathbb{N}$, such that every element of $A_{i}$ precedes every element of
$A_{i+1}$ and $|A_{i}|=\lfloor q^{i}\rfloor$. We colour the edges of
$G=K_{\mathbb{N}}$ such that every edge $uv$ with $u\in A_{i}$ and $v\in
A_{j}$ is red if $\min\\{i,j\\}$ is odd, and blue if it is even. A
straightforward calculation shows that for $q=2$, every monochromatic path $P$
in $G$ satisfies $\bar{d}(P)\leq 8/9$ (see Theorem 1.5 in [3]). We will
improve this bound by reordering the vertices of $G$ and then optimizing the
value of $q$.
For convenience, we will say that the vertex $v\in A_{i}$ is red if $i$ is odd
and blue if $i$ is even. We also denote by $B$ the set of blue vertices and by
$R$ the set of red vertices. Let $b_{i}$ and $r_{i}$ denote the $i$-th blue
vertex and the $i$-th red vertex, respectively. We define a monochromatic red
matching $M_{r}$ by forming a matching between $A_{2i-1}$ and the first
$|A_{2i-1}|$ vertices of $A_{2i}$ for each $i\geq 1$. Similarly, we define a
monochromatic blue matching $M_{b}$ by forming a matching between $A_{2i}$ and
the first $|A_{2i}|$ vertices of $A_{2i+1}$ for each $i\geq 0$.
Figure 1: The colouring for $q=2$ and the reordering by $f$.
Next, let us define a bijection $f\colon\mathbb{N}\to V(G)$, which will serve
as a reordering of $G$. Let $r_{t}^{*}$ denote the $t$-th red vertex not in
$M_{b}$, and $b_{t}^{*}$ denote the $t$-th blue vertex not in $M_{r}$. The
function $f$ is defined as follows. We start enumerating blue vertices, in
their order, until we reach $b_{1}^{*}$. Then we enumerate red vertices, in
their order, until we reach $r_{1}^{*}$. Then we enumerate blue vertices again
until we reach $b_{2}^{*}$. We continue enumerating vertices in this way,
changing colours whenever we find an $r_{t}^{*}$ or a $b_{t}^{*}$. (See Figure
1.) Finally, for every $H\subseteq G$, we define
$\bar{d}(H;f)=\limsup_{t\rightarrow\infty}\frac{|V(H)\cap f([t])|}{t}.$
Note that $\bar{d}(H;f)$ is the upper density of $H$ in the reordered graph
$f^{-1}(G)$.
###### Claim 3.1.
Let $P_{r}$ and $P_{b}$ be infinite monochromatic red and blue paths in $G$,
respectively. Then $\bar{d}(P_{r};f)\leq\bar{d}(M_{r};f)$ and
$\bar{d}(P_{b};f)\leq\bar{d}(M_{b};f)$.
###### Claim 3.2.
We have
$\bar{d}(M_{r};f),\ \bar{d}(M_{b};f)\leq\frac{q^{2}+2q-1}{q^{2}+3q-2}.$
We can easily derive Theorem 1.1 from these two claims. Note that the rational
function in Claim 3.2 evaluates to ${(12+\sqrt{8})}/{17}$ at
$q\coloneqq\sqrt{2}+1$. It then follows from Claims 3.1 and 3.2 that every
monochromatic path $P$ in $G$ satisfies
$\bar{d}(P;f)\leq{(12+\sqrt{8})}/{17}$. Thus we can define the desired
colouring of $K_{\mathbb{N}}$, by colouring each edge $ij$ with the colour of
the edge $f(i)f(j)$ in $G$.
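The evaluation at $q=\sqrt{2}+1$, and the fact that this choice of $q$ minimises the bound of Claim 3.2, are easy to confirm numerically (a sketch of our own, not from the paper):

```python
from math import sqrt, isclose

# g(q) is the upper bound of Claim 3.2 on the density of either matching.
def g(q):
    return (q * q + 2 * q - 1) / (q * q + 3 * q - 2)

q0 = sqrt(2) + 1
assert isclose(g(q0), (12 + sqrt(8)) / 17)   # ~0.87226
# Coarse sampling over q > 1 confirms q0 minimises g; the critical point
# solves q^2 - 2q - 1 = 0, i.e. q = 1 + sqrt(2).
assert all(g(q0) <= g(1 + k / 100) for k in range(1, 500))
```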
It remains to prove Claims 3.1 and 3.2. The intuition behind Claim 3.1 is that
in every monochromatic red path $P_{r}$ there is a red matching with the same
vertex set, and that $M_{r}$ has the largest upper density among all red
matchings, as it contains every red vertex and has the largest possible upper
density of blue vertices. Note that the proof of Claim 3.1 only uses the
property that $f$ preserves the order of the vertices inside $R$ and inside
$B$.
###### Proof of Claim 3.1.
We will show $\bar{d}(P_{r};f)\leq\bar{d}(M_{r};f)$. (The other case is
analogous.) We prove that, for every positive integer $k$, we have
$|V(P_{r})\cap f([k])|\leq|V(M_{r})\cap f([k])|$. Assume, for contradiction,
that this is not the case and let $k$ be the minimum positive integer for
which the inequality does not hold. Every red vertex is saturated by $M_{r}$,
so $|V(P_{r})\cap f([k])\cap B|>|V(M_{r})\cap f([k])\cap B|$. By the
minimality of $k$, $f(k)$ must be in $P_{r}$ but not in $M_{r}$, and in
particular it must be blue.
Let $f(k)\in A_{2i}$. Since $f(k)\not\in M_{r}$, we know that $f(k)$ is not
among the first $|A_{2i-1}|$ vertices of $A_{2i}$. Therefore, since $f$
preserves the order of the vertices inside $B$, $f([k])$ contains the first
$|A_{2i-1}|$ blue vertices in $A_{2i}$, and hence
$|V(P_{r})\cap f([k])\cap B|>|V(M_{r})\cap f([k])\cap
B|=\sum_{j=1}^{i}|A_{2j-1}|.$ (1)
On the other hand, every edge between two blue vertices is blue, so the
successor of every blue vertex in $P_{r}$ is red, and in particular there is a
red matching between $V(P_{r})\cap B$ and $R$ saturating $V(P_{r})\cap B$. So
by (1), the number of red neighbours of $V(P_{r})\cap f([k])\cap B$ is at
least $|V(P_{r})\cap f([k])\cap B|>\sum_{j=1}^{i}|A_{2j-1}|$. Observe that by
the definition of $f$, we have $V(P_{r})\cap f([k])\cap
B\subseteq\bigcup_{j=0}^{i}A_{2j}$. Hence the red neighbourhood of
$V(P_{r})\cap f([k])\cap B$ is contained in $\bigcup_{j=1}^{i}A_{2j-1}$, a
contradiction. ∎
###### Proof of Claim 3.2.
Let $\ell_{r}(t)$ and $\ell_{b}(t)$ denote the position of $r_{t}^{*}$ among
the red vertices and of $b_{t}^{*}$ among the blue vertices, respectively. In
other words, let $\ell_{r}(t)=i$ where $r_{t}^{*}=r_{i}$ and $\ell_{b}(t)=j$
where $b_{t}^{*}=b_{j}$ (so for example in Figure 1, $\ell_{r}(4)=9$ and
$\ell_{b}(4)=14$). Note that $f(\ell_{b}(t)+\ell_{r}(t))=r_{t}^{*}$, so for
$\ell_{b}(t-1)+\ell_{r}(t-1)\leq k\leq\ell_{b}(t)+\ell_{r}(t)-1$, $f([k])$ has
exactly $t-1$ vertices outside of $M_{b}$ and at least $t-1$ vertices outside
of $M_{r}$. As a consequence, we obtain
$\bar{d}(M_{r};f),\ \bar{d}(M_{b};f)\leq\limsup_{k\to\infty}(1-h(k))=\limsup\limits_{t\rightarrow\infty}\left(1-\frac{t-1}{\ell_{r}(t)+\ell_{b}(t)-1}\right),$
(2)
where $h(k)=(t-1)/k$ if $\ell_{b}(t-1)+\ell_{r}(t-1)\leq
k\leq\ell_{b}(t)+\ell_{r}(t)-1$. It is easy to see that
$\displaystyle\ell_{r}(t)=t+\sum\limits_{j=0}^{i}|A_{2j}|\quad$
$\displaystyle\text{for}\quad\sum\limits_{j=0}^{i-1}(|A_{2j+1}|-|A_{2j}|)<t\leq\sum\limits_{j=0}^{i}(|A_{2j+1}|-|A_{2j}|),\text{
and}$ $\displaystyle\ell_{b}(t)=t+\sum\limits_{j=1}^{i}|A_{2j-1}|\quad$
$\displaystyle\text{for}\quad\sum\limits_{j=1}^{i-1}(|A_{2j}|-|A_{2j-1}|)<t-|A_{0}|\leq\sum\limits_{j=1}^{i}(|A_{2j}|-|A_{2j-1}|).$
Note that $\ell_{r}(t)-t$ and $\ell_{b}(t)-t$ are piecewise constant and non-
decreasing. We claim that, in order to compute the right hand side of (2), it
suffices to consider values of $t$ for which
$\ell_{r}(t)-t>\ell_{r}(t-1)-(t-1)$ or $\ell_{b}(t)-t>\ell_{b}(t-1)-(t-1)$.
This is because we can write
$1-\frac{t-1}{\ell_{r}(t)+\ell_{b}(t)-1}=\frac{1}{2}+\frac{(\ell_{r}(t)-t)+(\ell_{b}(t)-t)+1}{2(\ell_{r}(t)+\ell_{b}(t)-1)}.$
In this expression, the second fraction has a positive, piecewise constant
numerator and a positive increasing denominator. Therefore, the local maxima
are attained precisely at the values for which the numerator increases. We
will do the calculations for the case when $\ell_{r}(t)-t>\ell_{r}(t-1)-(t-1)$
(the other case is similar), in which we have
$\displaystyle t$
$\displaystyle=1+\sum\limits_{j=0}^{i-1}(|A_{2j+1}|-|A_{2j}|)=1+\sum\limits_{j=0}^{i-1}(1+o(1))q^{2j}(q-1)=(1+o(1))\frac{q^{2i}}{q+1},$
$\displaystyle\ell_{r}(t)$
$\displaystyle=t+\sum\limits_{j=0}^{i}|A_{2j}|=(1+o(1))\left(\frac{q^{2i}}{q+1}+\sum\limits_{j=0}^{i}q^{2j}\right)=(1+o(1))\frac{(q^{2}+q-1)q^{2i}}{q^{2}-1},\text{
and}$ $\displaystyle\ell_{b}(t)$
$\displaystyle=t+\sum\limits_{j=1}^{i}|A_{2j-1}|=(1+o(1))\left(\frac{q^{2i}}{q+1}+\sum\limits_{j=1}^{i}q^{2j-1}\right)=(1+o(1))\frac{(2q-1)q^{2i}}{q^{2}-1}.$
Plugging this into (2) gives the desired result. ∎
## 4 Lower bound
This section is dedicated to the proof of Theorem 1.2. A _total colouring_ of
a graph $G$ is a colouring of the vertices and edges of $G$. Due to an
argument of Erdős and Galvin, the problem of bounding the upper density of
monochromatic paths in edge coloured graphs can be reduced to the problem of
bounding the upper density of monochromatic path forests in totally coloured
graphs.
###### Definition 4.1 (Monochromatic path forest).
Given a totally coloured graph $G$, a forest $F\subseteq G$ is said to be a
_monochromatic path forest_ if $\Delta(F)\leq 2$ and there is a colour $c$
such that all leaves, isolated vertices, and edges of $F$ receive colour $c$.
###### Lemma 4.2.
For every $\gamma>0$ and $k\in\mathbb{N}$, there is some
$n_{0}=n_{0}(k,\gamma)$ so that the following is true for every $n\geq n_{0}$.
For every total $2$-colouring of $K_{n}$, there is an integer $t\in[k,n]$ and
a monochromatic path forest $F$ with $d(F,t)\geq{(12+\sqrt{8})}/{17}-\gamma$.
Some standard machinery related to Szemerédi’s regularity lemma, adapted to
the ordered setting, will allow us to reduce the problem of bounding the upper
density of monochromatic path forests to the problem of bounding the upper
density of monochromatic simple forests.
###### Definition 4.3 (Monochromatic simple forest).
Given a totally coloured graph $G$, a forest $F\subseteq G$ is said to be a
_monochromatic simple forest_ if $\Delta(F)\leq 1$ and there is a colour $c$
such that all edges and isolated vertices of $F$ receive colour $c$ and at
least one endpoint of each edge of $F$ receives colour $c$.
###### Lemma 4.4.
For every $\gamma>0$, there exists $k_{0},N\in\mathbb{N}$ and $\alpha>0$ such
that the following holds for every integer $k\geq k_{0}$. Let $G$ be a totally
$2$-coloured graph on $kN$ vertices with minimum degree at least
$(1-\alpha)kN$. Then there exists an integer $t\in[k/8,kN]$ and a
monochromatic simple forest $F$ such that
$d(F,t)\geq{(12+\sqrt{8})}/{17}-\gamma$.
The heart of the proof is Lemma 4.4, which we shall prove in Section 4.3. But
first, in the next two sections, we show how to deduce Theorem 1.2 from Lemmas
4.2 and 4.4.
### 4.1 From path forests to paths
In this section we use Lemma 4.2 to prove Theorem 1.2. Our exposition follows
that of Theorem 1.6 in [2].
###### Proof of Theorem 1.2.
Fix a $2$-colouring of the edges of $K_{\mathbb{N}}$ in red and blue. We
define a $2$-colouring of the vertices by colouring $n\in\mathbb{N}$ red if
there are infinitely many $m\in\mathbb{N}$ such that the edge $nm$ is red and
blue otherwise.
Case 1. Suppose there are vertices $x$ and $y$ of the same colour, say red,
and a finite set $S\subseteq\mathbb{N}$ such that there is no red path
disjoint from $S$ which connects $x$ to $y$.
We partition $\mathbb{N}\setminus S$ into sets $X,Y,Z$, where $x^{\prime}\in
X$ if and only if there is a red path, disjoint from $S$, which connects
$x^{\prime}$ to $x$ and $y^{\prime}\in Y$ if and only if there is a red path
disjoint from $S$ which connects $y$ to $y^{\prime}$. Note that every edge
from $X\cup Y$ to $Z$ is blue. Since $x$ and $y$ are coloured red, both $X$
and $Y$ are infinite, and by choice of $x$ and $y$ all edges in the bipartite
graph between $X$ and $Y\cup Z$ are blue. Hence there is a blue path with
vertex set $X\cup Y\cup Z=\mathbb{N}\setminus S$.
Case 2. Suppose that for every pair of vertices $x$ and $y$ of the same colour
$c$, and every finite set $S\subseteq\mathbb{N}$, there is a path from $x$ to
$y$ of colour $c$ which is disjoint from $S$.
Let $\gamma_{n}$ be a sequence of positive reals tending to zero, and let
$a_{n}$ and $k_{n}$ be increasing sequences of integers such that
$a_{n}\geq n_{0}(k_{n},\gamma_{n})$ and
$k_{n}/(a_{1}+\dots+a_{n-1}+k_{n})\rightarrow 1$,
where $n_{0}(k,\gamma)$ is as in Lemma 4.2. Let $(A_{i})_{i\in\mathbb{N}}$ be a
partition of $\mathbb{N}$ into consecutive intervals with $|A_{n}|=a_{n}$. By
Lemma 4.2 there are monochromatic path forests $F_{n}$ with $V(F_{n})\subseteq
A_{n}$ and initial segments $I_{n}\subseteq A_{n}$ of length at least $k_{n}$
such that
$|V(F_{n})\cap
I_{n}|\geq\left(\frac{12+\sqrt{8}}{17}-\gamma_{n}\right)|I_{n}|.$
It follows that for any $G\subseteq K_{\mathbb{N}}$ containing infinitely many
$F_{n}$’s we have
$\bar{d}(G)\geq\limsup_{n\rightarrow\infty}\frac{|V(F_{n})\cap
I_{n}|}{a_{1}+\dots+a_{n-1}+|I_{n}|}\geq\limsup_{n\rightarrow\infty}\frac{12+\sqrt{8}}{17}-\gamma_{n}=\frac{12+\sqrt{8}}{17}.$
By the pigeonhole principle, there are infinitely many $F_{n}$’s of the same
colour, say blue. We will recursively construct a blue path $P$ which contains
infinitely many of these $F_{n}$’s. To see how this is done, suppose we have
constructed a finite initial segment $p$ of $P$. We will assume as an
inductive hypothesis that $p$ ends at a blue vertex $v$. Let $n$ be large
enough that $\min(A_{n})$ is greater than every vertex in $p$, and $F_{n}$ is
blue. Let $F_{n}=\\{P_{1},\dots,P_{s}\\}$ for some $s\in\mathbb{N}$ and let
$w_{i},w_{i}^{\prime}$ be the endpoints of the path $P_{i}$ (note that $w_{i}$
and $w_{i}^{\prime}$ could be equal) for every $i\in[s]$. By the case
assumption, there is a blue path $q_{1}$ connecting $v$ to $w_{1}$, such that
$q_{1}$ is disjoint from $A_{1}\cup\dots\cup A_{n}$. Similarly, there is a
blue path $q_{2}$ connecting $w_{1}^{\prime}$ to $w_{2}$, such that $q_{2}$ is
disjoint from $A_{1}\cup\dots\cup A_{n}\cup V(q_{1})$. Continuing in this
fashion, we find disjoint blue paths $q_{3},\dots,q_{s}$ such that $q_{i}$
connects $w_{i-1}^{\prime}$ to $w_{i}$. Hence, we can extend $p$ to a path
$p^{\prime}$ which contains all of the vertices of $F_{n}$ and ends at a blue
vertex. ∎
### 4.2 From simple forests to path forests
In this section we use Lemma 4.4 to prove Lemma 4.2. The proof is based on
Szemerédi’s Regularity Lemma, which we introduce below. The main difference
from standard applications of the Regularity Lemma is that we have to define
an ordering of the reduced graph which approximately preserves densities. This
is done by choosing a suitable initial partition.
Let $G=(V,E)$ be a graph and $A$ and $B$ be non-empty, disjoint subsets of
$V$. We write $e_{G}(A,B)$ for the number of edges in $G$ with one vertex in
$A$ and one in $B$ and define the _density_ of the pair $(A,B)$ to be
$d_{G}(A,B)=e_{G}(A,B)/(|A||B|)$. The pair $(A,B)$ is _$\varepsilon$ -regular_
(in $G$) if we have $|d_{G}(A^{\prime},B^{\prime})-d_{G}(A,B)|\leq\varepsilon$
for all $A^{\prime}\subseteq A$ with $|A^{\prime}|\geq\varepsilon|A|$ and
$B^{\prime}\subseteq B$ with $|B^{\prime}|\geq\varepsilon|B|$. It is well-
known (see for instance [5]) that dense regular pairs contain almost spanning
paths. We include a proof of this fact for completeness.
###### Lemma 4.5.
For $0<\varepsilon<1/4$ and $d\geq 2\sqrt{\varepsilon}+\varepsilon$, every
$\varepsilon$-regular pair $(A,B)$ with density at least $d$ contains a path
with both endpoints in $A$ and covering all but at most
$2\sqrt{\varepsilon}|A|$ vertices of $A\cup B$.
###### Proof.
We will construct a path $P_{k}=(a_{1}b_{1}\dots a_{k})$ for every
$k=1,\ldots,\lceil(1-\sqrt{\varepsilon})|A|\rceil$ such that $B_{k}\coloneqq
N(a_{k})\setminus V(P_{k})$ has size at least $\varepsilon|B|$. As
$d\geq\varepsilon$, this is easy for $k=1$. Assume now that we have
constructed $P_{k}$ for some $1\leq k<(1-\sqrt{\varepsilon})|A|$. We will show
how to extend $P_{k}$ to $P_{k+1}$. By $\varepsilon$-regularity of $(A,B)$,
the set $\bigcup_{b\in B_{k}}N(b)$ has size at least $(1-\varepsilon)|A|$. So
$A^{\prime}\coloneqq\bigcup_{b\in B_{k}}N(b)\setminus V(P_{k})$ has size at
least $(\sqrt{\varepsilon}-\varepsilon)|A|\geq\varepsilon|A|$. Let
$B^{\prime}=B\setminus V(P_{k})$ and note that
$|B^{\prime}|\geq\sqrt{\varepsilon}|B|$ as $k<(1-\sqrt{\varepsilon})|A|$ and
$|A|=|B|$. By $\varepsilon$-regularity of $(A,B)$, there exists $a_{k+1}\in
A^{\prime}$ with at least $(d-\varepsilon)|B^{\prime}|\geq 2\varepsilon|B|$
neighbours in $B^{\prime}$. Thus we can define $P_{k+1}=(a_{1}b_{1}\dots
a_{k}b_{k}a_{k+1})$, where $b_{k}\in B_{k}\cap N(a_{k+1})$. ∎
A family of disjoint subsets $\\{V_{i}\\}_{i\in[m]}$ of a set $V$ is said to
_refine_ a partition $\\{W_{j}\\}_{j\in[\ell]}$ of $V$ if, for all $i\in[m]$,
there is some $j\in[\ell]$ with $V_{i}\subseteq W_{j}$.
###### Lemma 4.6 (Regularity Lemma [6, 10]).
For every $\varepsilon>0$ and $m_{0},\ell\geq 1$ there exists
$M=M(\varepsilon,m_{0},\ell)$ such that the following holds. Let $G$ be a
graph on $n\geq M$ vertices whose edges are coloured in red and blue and let
$d>0$. Let $\\{W_{i}\\}_{i\in[\ell]}$ be a partition of $V(G)$. Then there
exists a partition $\\{V_{0},\dots,V_{m}\\}$ of $V(G)$ and a subgraph $H$ of
$G$ with vertex set $V(G)\setminus V_{0}$ such that the following holds:
1. _(i)_
$m_{0}\leq m\leq M$;
2. _(ii)_
$\\{V_{i}\\}_{i\in[m]}$ refines $\\{W_{i}\cap V(H)\\}_{i\in[\ell]}$;
3. _(iii)_
$|V_{0}|\leq\varepsilon n$ and $|V_{1}|=\dots=|V_{m}|\leq\lceil\varepsilon
n\rceil$;
4. _(iv)_
$\deg_{H}(v)\geq\deg_{G}(v)-(d+\varepsilon)n$ for each $v\in V(G)\setminus
V_{0}$;
5. _(v)_
$H[V_{i}]$ has no edges for $i\in[m]$;
6. _(vi)_
all pairs $(V_{i},V_{j})$ are $\varepsilon$-regular and with density either 0
or at least $d$ in each colour in $H$.
Before we start with the proof, we will briefly describe the setup and proof
strategy of Lemma 4.2. Consider a totally $2$-coloured complete graph
$G=K_{n}$. Denote the sets of red and blue vertices by $R$ and $B$,
respectively. For $\ell\geq 4$, let $\\{W_{j}\\}_{j\in[\ell]}$ be a partition
of $[n]$ such that each $W_{j}$ consists of at most $\lceil n/{\ell}\rceil$
consecutive vertices. The partition $\\{W^{\prime}_{j}\\}_{j\in[2\ell]}$, with
parts of the form $W_{i}\cap R$ and $W_{i}\cap B$, refines both
$\\{W_{j}\\}_{j\in[\ell]}$ and $\\{R,B\\}$. Suppose that $V_{0}\cup\dots\cup
V_{m}$ is a partition obtained from Lemma 4.6 applied to $G$ and
$\\{W^{\prime}_{j}\\}_{j\in[2\ell]}$ with parameters $\varepsilon$, $m_{0}$,
$2\ell$ and $d$. We define the $(\varepsilon,d)$-_reduced graph_
${G^{\prime}}$ to be the graph with vertex set $V({G^{\prime}})=[m]$ where
$ij$ is an edge of ${G^{\prime}}$ if and only if $(V_{i},V_{j})$ is an
$\varepsilon$-regular pair of density at least $d$ in the red subgraph of $H$
or in the blue subgraph of $H$. Furthermore, we colour $ij$ red if
$(V_{i},V_{j})$ is an $\varepsilon$-regular pair of density at least $d$ in
the red subgraph of $H$, otherwise we colour $ij$ blue. As
$\\{V_{i}\\}_{i\in[m]}$ refines $\\{R,B\\}$, we can extend this to a total
2-colouring of $G^{\prime}$ by colouring each vertex $i$ red if
$V_{i}\subseteq R$, and blue otherwise. By relabelling the clusters, we can
furthermore assume that $i<j$ if and only if
$\max\\{V_{i}\\}<\max\\{V_{j}\\}$. Note that, by choice of
$\\{W_{j}\\}_{j\in[\ell]}$, any two vertices in $V_{i}$ differ by at most
$n/\ell$. Moreover, a simple calculation (see [7, Proposition 42]) shows that
${G^{\prime}}$ has minimum degree at least $(1-d-3\varepsilon)m$.
Given this setup, our strategy to prove Lemma 4.2 goes as follows. First, we
apply Lemma 4.4 to obtain $t^{\prime}\in[m]$ and a, red say, simple forest
$F^{\prime}\subseteq G^{\prime}$ with
$d(F^{\prime},t^{\prime})\approx{(12+\sqrt{8})}/{17}$. Next, we turn
$F^{\prime}$ into a red path forest $F\subseteq G$. For every isolated vertex
$i\in V(F^{\prime})$, this is straightforward as $V_{i}\subseteq R$ by the
refinement property. For every edge $ij\in E(F^{\prime})$ with $i\in R$, we
apply Lemma 4.5 to obtain a red path that almost spans $(V_{i},V_{j})$ and has
both ends in $V_{i}$. So the union $F$ of these paths and vertices is
indeed a red path forest. Since the vertices in each $V_{i}$ do not differ too
much, it will follow that $d(F,t)\approx{(12+\sqrt{8})}/{17}$ for
$t=\max\\{V_{t^{\prime}}\\}$.
###### Proof of Lemma 4.2.
Suppose we are given $\gamma>0$ and $k\in\mathbb{N}$ as input. Let
$k_{0},N\in\mathbb{N}$ and $\alpha>0$ be as in Lemma 4.4 with input
$\gamma/4$. We choose constants $d,\varepsilon>0$ and
$\ell,m_{0}\in\mathbb{N}$ satisfying
$2\sqrt{\varepsilon}+\varepsilon\leq 1/\ell,d\leq\alpha/8$ and $m_{0}\geq
4N/d,2k_{0}N$.
We obtain $M$ from Lemma 4.6 with input $\varepsilon,m_{0}$ and $2\ell$.
Finally, set $n_{0}=16k\ell MN$.
Now let $n\geq n_{0}$ and suppose that $K_{n}$ is an ordered complete graph on
vertex set $[n]$ and with a total $2$-colouring in red and blue. We have to
show that there is an integer $t\in[k,n]$ and a monochromatic path forest $F$
such that $|V(F)\cap[t]|\geq({(12+\sqrt{8})}/{17}-\gamma)t$.
Denote the red and blue vertices by $R$ and $B$, respectively. Let
$\\{W^{\prime}_{j}\\}_{j\in[2\ell]}$ refine $\\{R,B\\}$ as explained in the
above setting. Let $\\{V_{0},\dots,V_{m}\\}$ be a partition of $[n]$ with
respect to $G=K_{n}$ and $\\{W^{\prime}_{j}\\}_{j\in[2\ell]}$ as detailed in
Lemma 4.6 with totally 2-coloured $(\varepsilon,d)$-reduced graph
$G^{\prime\prime}$ of minimum degree $\delta(G^{\prime\prime})\geq(1-4d)m$.
Set $k^{\prime}=\lfloor m/N\rfloor\geq k_{0}$ and observe that the subgraph
${G^{\prime}}$ induced by $G^{\prime\prime}$ in $[k^{\prime}N]$ satisfies
$\delta({G^{\prime}})\geq(1-8d)m\geq(1-\alpha)m$ as $m\geq 4N/d$. Thus we can
apply Lemma 4.4 with input ${G^{\prime}}$, $k^{\prime}$, $\gamma/4$ to obtain
an integer $t^{\prime}\in[k^{\prime}/8,k^{\prime}N]$ and a monochromatic (say
red) simple forest $F^{\prime}\subseteq{G^{\prime}}$ such that
$d(F^{\prime},t^{\prime})\geq{(12+\sqrt{8})}/{17}-\gamma/4$.
Set $t=\max V_{t^{\prime}}$. We have that $V_{t^{\prime}}\subseteq W_{j}$ for
some $j\in[\ell]$. Recall that $i<j$ if and only if
$\max\\{V_{i}\\}<\max\\{V_{j}\\}$ for any $i,j\in[m]$. It follows that
$V_{i}\subseteq[t]$ for all $i\leq t^{\prime}$. Hence
$t\geq
t^{\prime}|V_{1}|\geq\frac{k^{\prime}}{8}|V_{1}|\geq\left\lfloor\frac{m}{N}\right\rfloor\frac{(1-\varepsilon)n}{8m}\geq\frac{n}{16N}.$
(3)
This implies $t\geq k$ by choice of $n_{0}$. Since $[t]$ is covered by
$V_{0}\cup W_{j}\cup\bigcup_{i\in[t^{\prime}]}V_{i}$, it follows that
$\displaystyle t^{\prime}|V_{1}|$ $\displaystyle\geq t-|V_{0}|-|W_{j}|$
$\displaystyle\geq\left(1-\varepsilon\frac{n}{t}-\frac{4}{\ell}\frac{n}{t}\right)t$
$\displaystyle\geq\left(1-16\varepsilon N-\frac{64N}{\ell}\right)t\quad\text{(by (3))}$
$\displaystyle\geq\left(1-\frac{\gamma}{2}\right)t.$ (4)
For every edge $ij\in E(F^{\prime})$ with $V_{i}\subseteq R$, we apply Lemma
4.5 to choose a path $P_{ij}$ which starts and ends in $V_{i}$ and covers all
but at most $2\sqrt{\varepsilon}|V_{1}|$ vertices of each $V_{i}$ and $V_{j}$.
We denote the isolated vertices of $F^{\prime}$ by $I^{\prime}$. For each
$i\in I^{\prime}$ we have $V_{i}\subseteq R$. Hence the red path forest
$F\coloneqq\bigcup_{i\in I^{\prime}}V_{i}\cup\bigcup_{ij\in
E(F^{\prime})}P_{ij}\subseteq K_{n}$ satisfies
$\displaystyle|V(F)\cap[t]|$ $\displaystyle=\sum_{i\in
I^{\prime}}|V_{i}\cap[t]|+\sum_{ij\in E(F^{\prime})}|V(P_{ij})\cap[t]|$
$\displaystyle\geq\sum_{i\in I^{\prime}\cap[t^{\prime}]}|V_{i}|+\sum_{i\in
V(F^{\prime}-I^{\prime})\cap[t^{\prime}],}(|V_{i}|-2\sqrt{\varepsilon}|V_{1}|)$
$\displaystyle\geq(1-2\sqrt{\varepsilon})|V_{1}||V(F^{\prime})\cap[t^{\prime}]|$
$\displaystyle\geq(1-2\sqrt{\varepsilon})\left(\frac{12+\sqrt{8}}{17}-\frac{\gamma}{4}\right)t^{\prime}|V_{1}|$
$\displaystyle\overset{(4)}{\geq}\left(\frac{12+\sqrt{8}}{17}-\gamma\right)t$
as desired. ∎
### 4.3 Upper density of simple forests
In this section we prove Lemma 4.4. For a better overview, we shall define all
necessary constants here. Suppose we are given $\gamma^{\prime}>0$ as input
and set $\gamma=\gamma^{\prime}/4$. Fix a positive integer $N=N(\gamma)$ and
let $0<\alpha\leq\gamma/(8N)$. The exact value of $N$ will be determined later
on. Let $k_{0}=\lceil 8/\gamma\rceil$ and fix a positive integer $k\geq
k_{0}$. Consider a totally 2-coloured graph $G^{\prime}$ on $n=kN$ vertices
with minimum degree at least $(1-\alpha)n$.
Denote the sets of red and blue vertices by $R$ and $B$, respectively. As it
turns out, we will not need the edges inside $R$ and $B$. So let $G$ be the
spanning bipartite subgraph, obtained from $G^{\prime}$ by deleting all edges
within $R$ and $B$. For each red vertex $v$, let $d_{b}(v)$ be the number of
blue edges incident to $v$ in $G$. Let $a_{1}\leq\dots\leq a_{|R|}$ denote the
values $d_{b}(v)$ for $v\in R$, arranged in non-decreasing order. The whole
proof of Lemma 4.4 revolves
around analysing this sequence.
Fix an integer $t=t(\gamma,N,k)$ and subsets $R^{\prime}\subseteq R$ and
$B^{\prime}\subseteq B$; the value of $t$ and the choice of $R^{\prime}$ and
$B^{\prime}$ will be determined later. The following two observations explain
our interest in the sequence $a_{1}\leq\dots\leq a_{|R|}$.
###### Claim 4.7.
If $a_{j}>j-t$ for all $1\leq j\leq|R^{\prime}|-1$, then there is a blue
simple forest covering all but at most $t$ vertices of $R^{\prime}\cup B$.
###### Proof.
We write $R^{\prime}=\\{v_{1},\dots,v_{|R^{\prime}|}\\}$ such that
$d_{b}(v_{i})\leq d_{b}(v_{j})$ for every ${1\leq i\leq j\leq|R^{\prime}|}$.
By assumption, we have $d_{b}(v_{j})\geq a_{j}>j-t$ for all $1\leq
j\leq|R^{\prime}|-1$. Thus we can greedily select a blue matching containing
$\\{v_{t},v_{t+1},\dots,v_{|R^{\prime}|-1}\\}$, which covers all but $t$
vertices of $R^{\prime}$. Together with the rest of $B$, this forms the
desired blue simple forest. ∎
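The greedy selection above can be illustrated on a toy "threshold" bipartite graph, where $v_{j}$ is joined to the first $d_{b}(v_{j})$ vertices of $B$. This sketch is not part of the paper; the function name and example degrees are our own illustrative choices.

```python
def greedy_blue_matching(deg, t):
    """Greedy matching from the proof of Claim 4.7, on a toy "threshold"
    bipartite graph: vertex v_j (1-indexed, deg sorted non-decreasingly)
    is joined to the first deg[j-1] vertices of B.

    When deg[j-1] > j - t for all j <= len(deg) - 1, scanning
    j = t, ..., len(deg) - 1 always finds a free neighbour, because at
    most j - t vertices of B have been used before step j."""
    used = set()
    matching = {}
    for j in range(t, len(deg)):      # the paper's v_t, ..., v_{|R'|-1}
        d = deg[j - 1]                # blue degree of v_j
        b = next(b for b in range(d) if b not in used)  # a free neighbour
        used.add(b)
        matching[j] = b
    return matching
```

With `deg = [1, 2, 3, 4, 5]` and `t = 2`, the hypothesis `deg[j-1] > j - t` holds for all relevant `j`, and the scan matches $v_{2},v_{3},v_{4}$, leaving exactly $t = 2$ vertices of $R'$ uncovered.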
###### Claim 4.8.
If $a_{i}<i+t$ for all $1\leq i\leq|B^{\prime}|-t$, then there is a red simple
forest covering all but at most $t+\alpha n$ vertices of $R\cup B^{\prime}$.
###### Proof.
Let $X^{\prime}$ be a minimum vertex cover of the red edges in the subgraph of
$G$ induced by $R\cup B^{\prime}$. If $|X^{\prime}|\geq|B^{\prime}|-t-\alpha
n$, then by König’s theorem there exists a red matching covering at least
$|B^{\prime}|-t-\alpha n$ vertices of $B^{\prime}$. This together with the
vertices in $R$ yields the desired red simple forest.
Suppose now that $|X^{\prime}|<|B^{\prime}|-t-\alpha n$. Since every edge
between $R\setminus(X^{\prime}\cap R)$ and $B^{\prime}\setminus(X^{\prime}\cap
B^{\prime})$ is blue, we have for every vertex $v$ in
$R\setminus(X^{\prime}\cap R)$,
$d_{b}(v)\geq|B^{\prime}|-|X^{\prime}\cap B^{\prime}|-\alpha n=|X^{\prime}\cap
R|+|B^{\prime}|-|X^{\prime}|-\alpha n>|X^{\prime}\cap R|+t.$
In particular, this implies $a_{i}\geq i+t$ for $i=|X^{\prime}\cap R|+1$. So
$|B^{\prime}|-t+1\leq|X^{\prime}\cap R|+1$ by the assumption in the statement.
Together with
$|X^{\prime}\cap R|+1\leq|X^{\prime}|+1<|B^{\prime}|-t-\alpha
n+1<|B^{\prime}|-t+1,$
we reach a contradiction. ∎
Motivated by this, we introduce the following definitions.
###### Definition 4.9 (Oscillation, $\ell^{+}(t)$, $\ell^{-}(t)$).
Let $a_{1},\dots,a_{n}$ be a non-decreasing sequence of non-negative real
numbers. We define its _oscillation_ as the maximum value $T$, for which there
exist indices $i,j\in[n]$ with $a_{i}-i\geq T$ and $j-a_{j}\geq T$. For all
$0<t\leq T$, set
$\displaystyle\ell^{+}(t)$ $\displaystyle=\min\\{i\in[n]\colon\ a_{i}\geq
i+t\\},$ $\displaystyle\ell^{-}(t)$ $\displaystyle=\min\\{j\in[n]\colon\
a_{j}\leq j-t\\}.$
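Definition 4.9 can be stated operationally; the following sketch (our own, with 0-based lists standing in for the paper's 1-indexed sequences) computes the oscillation and the thresholds $\ell^{+}(t)$, $\ell^{-}(t)$.

```python
def oscillation(a):
    """Oscillation of a non-decreasing sequence a_1,...,a_n (given as a
    0-based list): the largest T with a_i - i >= T for some i and
    j - a_j >= T for some j."""
    above = max(a[i] - (i + 1) for i in range(len(a)))   # max_i (a_i - i)
    below = max((j + 1) - a[j] for j in range(len(a)))   # max_j (j - a_j)
    return min(above, below)

def ell_plus(a, t):
    """Smallest 1-based index i with a_i >= i + t."""
    return min(i + 1 for i in range(len(a)) if a[i] >= (i + 1) + t)

def ell_minus(a, t):
    """Smallest 1-based index j with a_j <= j - t."""
    return min(j + 1 for j in range(len(a)) if a[j] <= (j + 1) - t)
```

For the constant sequence $a_{1}=\dots=a_{10}=5$ the oscillation is $4$, and with $t=2$ one gets $\ell^{+}(2)=1$ and $\ell^{-}(2)=7$, so $\ell=8=4t$ for this particular example.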
Suppose that the degree sequence $a_{1},\dots,a_{|R|}$ has oscillation $T$ and
fix some positive integer $t\leq T$. We define $\ell$ and $\lambda$ by
$\ell=\ell^{+}(t)+\ell^{-}(t)=\lambda t.$ (5)
The next claim combines Claims 4.7 and 4.8 into a density bound for a
monochromatic simple forest in terms of the ratio $\ell/t=\lambda$. (Note
that, in practice, the term $\alpha n$ will be of negligible size.)
###### Claim 4.10.
There is a monochromatic simple forest $F\subseteq G$ with
$d(F,\ell+t)\geq\frac{\ell-\alpha n}{\ell+t}=\frac{\lambda t-\alpha
n}{(1+\lambda)t}.$
###### Proof.
Let $R^{\prime}=R\cap[\ell+t]$ and $B^{\prime}=B\cap[\ell+t]$ so that
$\ell^{+}(t)+\ell^{-}(t)=\ell=|R^{\prime}|+|B^{\prime}|-t$. Thus we have
either $\ell^{-}(t)\geq|R^{\prime}|$ or $\ell^{+}(t)>|B^{\prime}|-t$. If
$\ell^{-}(t)\geq|R^{\prime}|$, then $a_{j}>j-t$ for every $1\leq
j\leq|R^{\prime}|-1$. Thus Claim 4.7 provides a blue simple forest $F$
covering all but at most $t$ vertices of $[\ell+t]$. On the other hand, if
$\ell^{+}(t)>|B^{\prime}|-t$, then $a_{i}<i+t$ for every $1\leq
i\leq|B^{\prime}|-t$. In this case Claim 4.8 yields a red simple forest $F$
covering all but at most $t+\alpha n$ vertices of $[\ell+t]$. ∎
Claim 4.10 essentially reduces the problem of finding a dense simple forest to
a problem about bounding the ratio $\ell/t$ in integer sequences. It is, for
instance, not hard to see that we always have $\ell\geq 2t$ (which, together
with the methods of the previous two subsections, would imply the bound
$\bar{d}(P)\geq 2/3$ of Erdős and Galvin). The following lemma provides an
essentially optimal lower bound on $\ell/t=\lambda$. Note that for
$\lambda=4+\sqrt{8}$, we have
$\frac{\lambda}{\lambda+1}={(12+\sqrt{8})}/{17}$.
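The identity $\lambda/(\lambda+1)=(12+\sqrt{8})/17$ for $\lambda=4+\sqrt{8}$ follows by rationalising the denominator; a quick numerical sanity check (not in the paper):

```python
import math

lam = 4 + math.sqrt(8)
# (4+sqrt8)/(5+sqrt8) * (5-sqrt8)/(5-sqrt8)
#   = (20 - 4*sqrt8 + 5*sqrt8 - 8) / (25 - 8) = (12 + sqrt8)/17
assert abs(lam / (lam + 1) - (12 + math.sqrt(8)) / 17) < 1e-12
```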
###### Lemma 4.11.
For all $\gamma\in\mathbb{R}^{+}$, there exists $N\in\mathbb{N}$ such that,
for all $k\in\mathbb{R}^{+}$ and all sequences with oscillation at least $kN$,
there exists a real number $t\in[k,kN]$ with
$\ell:=\ell^{+}(t)+\ell^{-}(t)\geq\left(4+\sqrt{8}-\gamma\right)t.$
The proof of Lemma 4.11 is deferred to the last section. We now finish the
proof of Lemma 4.4. Set $N=N(\gamma)$ to be the integer returned by Lemma 4.11
with input $\gamma=\gamma^{\prime}/4$. In order to use Lemma 4.11, we have to
bound the oscillation of $a_{1},\dots,a_{|R|}$:
###### Claim 4.12.
The degree sequence $a_{1},\dots,a_{|R|}$ has oscillation $T\geq kN/8$ or
there is a monochromatic simple forest $F\subseteq G$ with
$d(F,n)\geq{(12+\sqrt{8})}/{17}-\gamma$.
Before we prove Claim 4.12, let us see how this implies Lemma 4.4.
###### Proof of Lemma 4.4.
By Claim 4.12, we may assume that the sequence $a_{1},\dots,a_{|R|}$ has
oscillation at least $kN/8$. By Lemma 4.11, there is a real number
$t^{\prime}\in\left[k/8,{kN}/8\right]$ with
$\ell=\ell^{+}(t^{\prime})+\ell^{-}(t^{\prime})\geq(4+\sqrt{8}-\gamma)t^{\prime}.$
Let $t=t(\gamma,N,k)=\left\lceil t^{\prime}\right\rceil$. Since the $a_{i}$’s
are all integers, we have $\ell^{+}(t)=\ell^{+}(t^{\prime})$ and
$\ell^{-}(t)=\ell^{-}(t^{\prime})$. Let $F\subseteq G$ be the monochromatic
simple forest obtained from Claim 4.10. As $n=kN$, $\ell\geq
t^{\prime}\geq{k/8\geq 1/\gamma}$, $\alpha\leq\gamma/(8N)$, and by (5), it
follows that
$\displaystyle d(F,\ell+t)$ $\displaystyle\geq\frac{\ell-\alpha
n}{\ell+t}=\frac{1-\alpha n/\ell}{1+\frac{t}{\ell}}\geq\frac{1-8\alpha
N}{1+\frac{t^{\prime}}{\ell}{+\frac{1}{\ell}}}\geq\frac{1}{1+\frac{t^{\prime}}{\ell}}{-2\gamma}$
$\displaystyle\geq\frac{1}{1+\frac{1}{4+\sqrt{8}-\gamma}}-2\gamma=\frac{4+\sqrt{8}-\gamma}{5+\sqrt{8}-\gamma}-2\gamma$
$\displaystyle\geq\frac{4+\sqrt{8}}{5+\sqrt{8}}-4\gamma=\frac{12+\sqrt{8}}{17}-\gamma^{\prime},$
as desired. ∎
To finish, it remains to show Claim 4.12. The proof uses König’s theorem and
is similar to the proof of Claim 4.8.
###### Proof of Claim 4.12.
Let $X$ be a minimum vertex cover of the red edges. If
$|X|\geq{|B|}-(1/8+\alpha)n$, then König’s theorem implies that there is a red
matching covering all but at most $(1/8+\alpha)n$ blue vertices. Thus adding
the red vertices, we obtain a red simple forest $F$ with $d(F,kN)\geq
7/8-\alpha\geq{(12+\sqrt{8})}/{17}-\gamma$. Therefore, we may assume that
$|X|<{|B|}-(1/8+\alpha)n$. Every edge between $R\setminus(X\cap R)$ and
$B\setminus(X\cap B)$ is blue. So there are at least ${|R|}-|X\cap R|$ red
vertices $v$ with
$d_{b}(v)\geq{|B|}-|X\cap B|-\alpha n=|X\cap R|+{|B|}-|X|-\alpha n>|X\cap
R|+n/8.$
This implies that $a_{i}\geq i+n/8$ for $i=|X\cap R|+1$. (See Figure 2.)
Figure 2: The sequence $a_{1},\dots,a_{|R|}$ has oscillation at least $kN/8$.
Let $Y$ be a minimum vertex cover of the blue edges. Using König’s theorem as
above, we can assume that $|Y|\leq{|R|}-{n}/{8}$. Every edge between
$R\setminus(Y\cap R)$ and $B\setminus(Y\cap B)$ is red. It follows that there
are at least ${|R|}-|Y\cap R|$ red vertices $v$ with
$d_{b}(v)\leq|Y\cap B|=|Y|-|Y\cap R|\leq{|R|}-|Y\cap R|-\frac{n}{8}.$
This implies that $a_{j}\leq j-{n}/8$ for $j={|R|}-|Y\cap R|$. Thus
$a_{1},\dots,a_{|R|}$ has oscillation at least $n/8=kN/8$. ∎
### 4.4 Sequences and oscillation
We now present the quite technical proof of Lemma 4.11. We will use the
following definition and related lemma in order to describe the oscillation
from the diagonal.
###### Definition 4.13 ($k$-good, $u_{\text{o}}(k)$, $u_{\text{e}}(k)$).
Let $a_{1},\dots,a_{n}$ be a sequence of non-negative real numbers and let $k$
be a positive real number. We say that the sequence is $k$-_good_ if there
exists an odd $i$ and an even $j$ such that $a_{i}\geq k$ and $a_{j}\geq k$.
If the sequence is $k$-good, we define for all $0<t\leq k$
$\displaystyle u_{\text{o}}(t)$
$\displaystyle=a_{1}+\dots+a_{i_{o}-1}\quad\text{where }i_{o}=\min\\{i\colon\
a_{i}\geq t,i\text{ odd}\\},$ $\displaystyle u_{\text{e}}(t)$
$\displaystyle=a_{1}+\dots+a_{i_{e}-1}\quad\text{where }i_{e}=\min\\{i\colon\
a_{i}\geq t,i\text{ even}\\}.$
###### Lemma 4.14.
For all $\gamma\in\mathbb{R}^{+}$ there exists $N\in\mathbb{N}$ such that for
all $k\in\mathbb{R}^{+}$ and all $(kN)$-good sequences, there exists a real
number $t\in[k,kN]$ with
$u_{\text{o}}(t)+u_{\text{e}}(t)\geq\left(3+\sqrt{8}-\gamma\right)t.$
First we use Lemma 4.14 to prove Lemma 4.11.
###### Proof of Lemma 4.11.
Given $\gamma>0$, let $N$ be obtained from Lemma 4.14. Let
$k\in\mathbb{R}^{+}$ and $a_{1},\dots,a_{n}$ be a sequence with oscillation at
least $kN$. Suppose first that $a_{1}\geq 1$. Partition $[n]$ into a family of
non-empty intervals $I_{1},\dots,I_{r}$ with the following properties:
* •
For every odd $i$ and every $j\in I_{i}$, we have $a_{j}\geq j$.
* •
For every even $i$ and every $j\in I_{i}$, we have $a_{j}<j$.
Define $s_{i}=\max\left\\{|a_{j}-j|\colon\ j\in I_{i}\right\\}$. Intuitively,
this says that the values in the odd-indexed intervals lie “above the
diagonal”, the values in the even-indexed intervals lie “below the diagonal”,
and $s_{i}$ is the largest gap between the sequence values and the diagonal
within each interval.
Since $a_{1},\dots,a_{n}$ has oscillation at least $kN$, the sequence
$s_{1},\dots,s_{r}$ is $(kN)$-good and thus by Lemma 4.14, there exists
$t\in[k,kN]$ such that
$u_{\text{o}}(t)+u_{\text{e}}(t)\geq\left(3+\sqrt{8}-\gamma\right)t.$ (6)
Since the sequence $a_{1},a_{2},\dots,a_{n}$ is non-decreasing, $a_{j}-j$ can
decrease by at most one in each step and thus we have $|I_{i}|\geq s_{i}$ for
every $i\in[r-1]$. Moreover, we can find bounds on $\ell^{+}(t)$ and
$\ell^{-}(t)$ in terms of the $s_{i}$:
* •
$\ell^{+}(t)$ must lie in the interval $I_{i_{o}}$, where $i_{o}$ is the
smallest odd index such that $s_{i_{o}}\geq t$; therefore $\ell^{+}(t)\geq
s_{1}+\dots+s_{i_{o}-1}=u_{\text{o}}(t)$.
* •
$\ell^{-}(t)$ must lie in the interval $I_{i_{e}}$, where $i_{e}$ is the
smallest even index such that $s_{i_{e}}\geq t$. Moreover, it must be at least
the $t$-th element in this interval; therefore $\ell^{-}(t)\geq
s_{1}+\dots+s_{i_{e}-1}+t=u_{\text{e}}(t)+t$.
Combining the previous two observations with (6) gives
$\ell^{+}(t)+\ell^{-}(t)\geq
u_{\text{o}}(t)+u_{\text{e}}(t)+t\geq\left(4+\sqrt{8}-\gamma\right)t,$
as desired.
If $0\leq a_{1}<1$, we start by partitioning $[n]$ into a family of non-empty
intervals $I_{1},\dots,I_{r}$ with the following properties:
* •
For every even $i$ and every $j\in I_{i}$, we have $a_{j}\geq j$.
* •
For every odd $i$ and every $j\in I_{i}$, we have $a_{j}<j$.
From this point, the proof is analogous. ∎
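The interval partition used in the proof above can be computed mechanically. The sketch below (our own; 0-based lists stand in for the paper's 1-indexed sequences) splits $[n]$ into the maximal "above/below the diagonal" runs and records the gaps $s_{i}$.

```python
def diagonal_blocks(a):
    """Split indices 1..n of a non-decreasing sequence into maximal runs
    with a_j >= j ("above the diagonal") or a_j < j ("below"), and record
    s_i = max |a_j - j| over each run, as in the proof of Lemma 4.11."""
    n = len(a)
    side = [a[j] >= j + 1 for j in range(n)]   # 1-indexed test a_j >= j
    intervals, gaps = [], []
    start = 0
    for j in range(1, n + 1):
        if j == n or side[j] != side[start]:
            intervals.append((start + 1, j))   # 1-indexed, inclusive
            gaps.append(max(abs(a[i] - (i + 1)) for i in range(start, j)))
            start = j
    return intervals, gaps
```

For instance, $a=(3,3,3,3,3,6,6,9)$ splits into $I_{1}=[1,3]$, $I_{2}=[4,5]$, $I_{3}=\{6\}$, $I_{4}=\{7\}$, $I_{5}=\{8\}$ with gaps $s=(2,2,0,1,1)$; note $|I_{i}|\geq s_{i}$ for $i<r$, as the proof claims.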
Finally, it remains to prove Lemma 4.14. The proof is by contradiction and the
main strategy is to find a subsequence with certain properties which force the
sequence to become negative eventually.
###### Proof of Lemma 4.14.
Let $\rho=3+\sqrt{8}-\gamma$ and let $m:=m(\rho)$ be a positive integer which
will be specified later. Suppose that the statement of the lemma is false for
$N=6\cdot 4^{m}$ and let $a_{1},\dots,a_{n}$ be a $(kN)$-good sequence
without $t$ as in the statement. We first show that $a_{i}$ has a long
strictly increasing subsequence. Set
$I=\\{i\colon\ a_{i}\geq k,a_{i}>a_{j}\text{ for all }j<i\\},$
denote the elements of $I$ by $i_{1}<i_{2}<\dots<i_{r}$ and let
$a^{\prime}_{j}=a_{i_{j}}$. Consider any $j\in[r-1]$ and suppose without loss
of generality that $i_{j+1}$ is odd. For $\delta$ small enough, this implies
$u_{\text{o}}(a^{\prime}_{j}+\delta)=a_{1}+\dots+a_{i_{j+1}-1}\geq
a^{\prime}_{1}+\dots+a^{\prime}_{j}$, and
$u_{\text{e}}(a^{\prime}_{j}+\delta)\geq a_{1}+\dots+a_{i_{j+1}}\geq
a^{\prime}_{1}+\dots+a^{\prime}_{j+1}$. By assumption we have
$u_{\text{o}}(a^{\prime}_{j}+\delta)+u_{\text{e}}(a^{\prime}_{j}+\delta)<\rho(a^{\prime}_{j}+\delta)$.
Hence, letting $\delta\rightarrow 0$ we obtain
$2\left(a^{\prime}_{1}+\dots+a^{\prime}_{j}\right)+a^{\prime}_{j+1}\leq\rho
a^{\prime}_{j}$, which rearranges to
$a^{\prime}_{j+1}\leq(\rho-2)a^{\prime}_{j}-2\left(a^{\prime}_{1}+\dots+a^{\prime}_{j-1}\right).$
(7)
In particular, this implies
$a^{\prime}_{j+1}\leq(\rho-2)a^{\prime}_{j}<4a^{\prime}_{j}$. Moreover, we
have $a_{1}^{\prime}\leq u_{\text{o}}(k)$ if $i_{1}$ is even and
$a_{1}^{\prime}\leq u_{\text{e}}(k)$ if $i_{1}$ is odd. Therefore,
$6k\cdot 4^{m}=kN\leq a_{r}^{\prime}<4^{r}\cdot a_{1}^{\prime}\leq
4^{r}\max\\{u_{\text{o}}(k),u_{\text{e}}(k)\\}\leq
4^{r}(u_{\text{o}}(k)+u_{\text{e}}(k))<4^{r}\cdot\rho k<6k\cdot 4^{r}$
and thus $r\geq m$.
Finally, we show that any sequence of reals satisfying (7) will eventually
become negative; since the $a_{i}^{\prime}$ are non-negative, this yields a
contradiction.
We start by defining the sequence $b_{1},b_{2},\dots$ recursively by $b_{1}=1$
and $b_{i+1}=(\rho-2)b_{i}-2(b_{1}+\dots+b_{i-1})$. Note that
$\displaystyle b_{i+1}$ $\displaystyle=(\rho-2)b_{i}-2(b_{1}+\dots+b_{i-1})$
$\displaystyle=(\rho-1)b_{i}-b_{i}-2(b_{1}+\dots+b_{i-1})$
$\displaystyle=(\rho-1)b_{i}-((\rho-2)b_{i-1}-2(b_{1}+\dots+b_{i-2}))-2(b_{1}+\dots+b_{i-1})$
$\displaystyle=(\rho-1)b_{i}-\rho b_{i-1}.$
So, equivalently, the sequence is defined by
$b_{1}=1,\leavevmode\nobreak\ b_{2}=\rho-2,\text{ and
}b_{i+1}=(\rho-1)b_{i}-\rho b_{i-1}\text{ for }i\geq 2.$
It is known that every nonzero real sequence satisfying a second-order linear
recurrence whose characteristic polynomial has non-real roots eventually
becomes negative (see [1]). Indeed, the characteristic polynomial
$x^{2}-(\rho-1)x+\rho$ has
discriminant $\rho^{2}-6\rho+1<0$ and so its roots $\alpha,\bar{\alpha}$ are
non-real. Hence the above recursively defined sequence has the closed form of
$b_{i}=z\alpha^{i}+\bar{z}\bar{\alpha}^{i}=2\text{Re}\left(z\alpha^{i}\right)$
for some complex number $z$. By expressing $z\alpha^{i}$ in polar form we can
see that $b_{m}<0$ for some positive integer $m$. Note that the calculation of
$m$ only depends on $\rho$.
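The eventual negativity of the $b_{i}$ can also be observed numerically. The helper below (our own; the sample value $\gamma=1/2$ is an illustrative choice, not from the paper) iterates the recurrence until a negative term appears.

```python
import math

def first_negative_index(rho, max_steps=10_000):
    """Iterate b_1 = 1, b_2 = rho - 2, b_{i+1} = (rho - 1) b_i - rho b_{i-1}
    and return the first index i with b_i < 0 (None if none is found)."""
    b_prev, b_cur = 1.0, rho - 2.0
    for i in range(3, max_steps + 1):
        b_prev, b_cur = b_cur, (rho - 1) * b_cur - rho * b_prev
        if b_cur < 0:
            return i
    return None

# For rho = 3 + sqrt(8) - gamma with gamma > 0, the discriminant
# rho^2 - 6*rho + 1 is negative, so the roots are non-real and the
# sequence must eventually turn negative.
m = first_negative_index(3 + math.sqrt(8) - 0.5)
```

For $\gamma=1/2$ the first negative term appears after only a handful of steps, consistent with $m$ depending only on $\rho$.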
Now let $a^{\prime}_{1},\dots,a^{\prime}_{m}$ be a sequence of non-negative
reals satisfying (7). We will be done if we can show that $a_{j}^{\prime}\leq
a_{1}^{\prime}b_{j}$ for all $1\leq j\leq m$; so suppose
$a^{\prime}_{s}>a^{\prime}_{1}b_{s}$ for some $s$, and such that
$\\{a^{\prime}_{j}\\}_{j=1}^{m}$ and $\\{a^{\prime}_{1}b_{j}\\}_{j=1}^{m}$
coincide on the longest initial subsequence. Let $p$ be the minimum value such
that $a^{\prime}_{p}\neq a^{\prime}_{1}b_{p}$. Clearly $p>1$. Applying (7) to
$j=p-1$ we see that
$\displaystyle
a^{\prime}_{p}\leq(\rho-2)a^{\prime}_{p-1}-2(a^{\prime}_{1}+\dots+a^{\prime}_{p-2})$
$\displaystyle=(\rho-2)a^{\prime}_{1}b_{p-1}-2(a^{\prime}_{1}b_{1}+\dots+a^{\prime}_{1}b_{p-2})$
$\displaystyle=a_{1}^{\prime}((\rho-2)b_{p-1}-2(b_{1}+\dots+b_{p-2}))=a^{\prime}_{1}b_{p}$
and thus, since $a^{\prime}_{p}\neq a^{\prime}_{1}b_{p}$ by the choice of $p$, we have $a^{\prime}_{p}<a^{\prime}_{1}b_{p}$.
Let $\beta={(a^{\prime}_{1}b_{p}-a^{\prime}_{p})}/{a^{\prime}_{1}}>0$. Now
consider the sequence $a^{\prime\prime}_{j}$ where
$a^{\prime\prime}_{j}=a^{\prime}_{j}$ for $j<p$ and
$a^{\prime\prime}_{j}=a^{\prime}_{j}+\beta a^{\prime}_{j-p+1}$ for $j\geq p$.
Then $a^{\prime\prime}_{p}=a^{\prime}_{1}b_{p}=a^{\prime\prime}_{1}b_{p}$.
Clearly, this new sequence satisfies (7) for every $j<p$. Furthermore, we have
$\displaystyle a^{\prime\prime}_{p+j}$ $\displaystyle=a^{\prime}_{p+j}+\beta
a^{\prime}_{j+1}$
$\displaystyle\leq(\rho-2)a^{\prime}_{p+j-1}-2\left(a^{\prime}_{1}+\dots+a^{\prime}_{p+j-2}\right)+\beta(\rho-2)a^{\prime}_{j}-2\beta\left(a^{\prime}_{1}+\dots+a^{\prime}_{j-1}\right)$
$\displaystyle=(\rho-2)a^{\prime\prime}_{p+j-1}-2\left(a^{\prime\prime}_{1}+\dots+a^{\prime\prime}_{p+j-2}\right)$
for every $j\geq 0$. Hence, the whole sequence satisfies (7). We also have
$a^{\prime\prime}_{s}\geq
a^{\prime}_{s}>a^{\prime}_{1}b_{s}=a^{\prime\prime}_{1}b_{s}$. This
contradicts the choice of $a^{\prime}_{1},\dots,a^{\prime}_{m}$ as a sequence
that coincides with $(a^{\prime}_{1}b_{j})$ on the longest initial segment. ∎
## Acknowledgments
This project began as part of the problem session of the “Extremal Graph
Theory and Ramsey Theory” focus week of the “Rio Workshop on Extremal and
Structural Combinatorics” held at IMPA, Rio de Janeiro, Brazil in January
2018. We thank the organisers of the workshop and IMPA for the stimulating
working environment.
We also thank the referees for their careful reading of the paper and their
helpful suggestions.
## References
* [1] J. R. Burke and W. A. Webb, _Asymptotic behavior of linear recurrences_ , Fibonacci Quarterly 19 (1981), no. 4, 318–321.
* [2] L. DeBiasio and P. McKenney, _Density of monochromatic infinite subgraphs_ , Combinatorica (2019). https://doi.org/10.1007/s00493-018-3724-2.
* [3] P. Erdős and F. Galvin, _Monochromatic infinite paths_ , Discrete Mathematics 113 (1993), no. 1, 59–70.
* [4] L. Gerencsér and A. Gyárfás, _On Ramsey-type problems_ , Ann. Sci. Budapest. Eötvös Sect. Math 10 (1967), 167–170.
* [5] P. Haxell, _Partitioning complete bipartite graphs by monochromatic cycles_ , Journal of Combinatorial Theory, Series B 69 (1997), no. 2, 210–218.
* [6] J. Komlós and M. Simonovits, _Szemerédi’s Regularity Lemma and its applications in graph theory_ , Combinatorics, Paul Erdős is Eighty (D. Miklós, V. T. Sós, and T. Szőnyi, eds.), vol. 2, Bolyai Society Mathematical Studies, 1996, pp. 295–352.
* [7] D. Kühn and D. Osthus, _Embedding large subgraphs into dense graphs_ , Surveys in Combinatorics (2009), 137–168.
* [8] A. Lo, N. Sanhueza-Matamala, and G. Wang, _Density of monochromatic infinite paths_ , Electronic Journal of Combinatorics 25 (2018), no. 4, P4.29.
* [9] R. Rado, _Monochromatic paths in graphs_ , Ann. Discrete Math 3 (1978), 191–194.
* [10] E. Szemerédi, _Regular partitions of graphs_ , Colloq. Internat. CNRS 260 (1976), 399–401.
Jan Corsten, London School of Economics, Department of Mathematics, London WC2A 2AE.
Louis DeBiasio, Miami University, Department of Mathematics, Oxford, OH 45056, United States.
Ander Lamaison, Institut für Mathematik, Freie Universität Berlin and Berlin Mathematical School, Berlin, Germany.
Richard Lang, University of Waterloo, Combinatorics & Optimization, Waterloo, ON, N2L 3G1, Canada.